eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
0
0
10.4230/LIPIcs.ITCS.2019
article
LIPIcs, Volume 124, ITCS'19, Complete Volume
Blum, Avrim
LIPIcs, Volume 124, ITCS'19, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019/LIPIcs.ITCS.2019.pdf
Theory of computation, Mathematics of computing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
0:i
0:xii
10.4230/LIPIcs.ITCS.2019.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Blum, Avrim
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.0/LIPIcs.ITCS.2019.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
1:1
1:19
10.4230/LIPIcs.ITCS.2019.1
article
Submodular Secretary Problem with Shortlists
Agrawal, Shipra
1
Shadravan, Mohammad
1
Stein, Cliff
1
Columbia University, New York, NY, USA, 10027
In the submodular k-secretary problem, the goal is to select k items in a randomly ordered input so as to maximize the expected value of a given monotone submodular function on the set of selected items. In this paper, we introduce a relaxation of this problem, which we refer to as the submodular k-secretary problem with shortlists. In the proposed problem setting, the algorithm is allowed to choose more than k items as part of a shortlist. Then, after seeing the entire input, the algorithm can choose a subset of size k from the bigger set of items in the shortlist. We are interested in understanding to what extent this relaxation can improve the achievable competitive ratio for the submodular k-secretary problem. In particular, using an O(k)-sized shortlist, can an online algorithm achieve a competitive ratio close to the best achievable offline approximation factor for this problem? We answer this question affirmatively by giving a polynomial-time algorithm that achieves a 1-1/e-epsilon-O(k^{-1}) competitive ratio for any constant epsilon>0, using a shortlist of size eta_epsilon(k)=O(k). This is especially surprising considering that the best known competitive ratio (in polynomial time) for the submodular k-secretary problem is (1/e-O(k^{-1/2}))(1-1/e) [Thomas Kesselheim and Andreas Tönnis, 2017].
The proposed algorithm also has significant implications for another important problem: submodular function maximization in the random-order streaming model under a k-cardinality constraint. We show that our algorithm can be implemented in the streaming setting using a memory buffer of size eta_epsilon(k)=O(k) to achieve a 1-1/e-epsilon-O(k^{-1}) approximation. This result substantially improves upon [Norouzi-Fard et al., 2018], which achieved the previously best known approximation factor of 1/2 + 8 x 10^{-14} using O(k log k) memory, and closely matches the known upper bound for this problem [McGregor and Vu, 2017].
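To make the shortlist interface concrete, here is a minimal Python sketch of the two-phase template the abstract describes: keep a bounded shortlist online, then run the classical offline greedy on it. The eviction rule and the coverage function below are illustrative placeholders, not the paper's algorithm, and this toy carries none of its competitive-ratio guarantees.

```python
import random

def greedy(f, ground, k):
    """Classical offline greedy: (1 - 1/e)-approximation for monotone submodular f."""
    S = []
    for _ in range(k):
        best = max((u for u in ground if u not in S),
                   key=lambda u: f(S + [u]) - f(S), default=None)
        if best is None or f(S + [best]) - f(S) <= 0:
            break
        S.append(best)
    return S

def shortlist_then_select(stream, f, k, eta):
    """Toy two-phase template: keep at most eta items online (any item with
    positive marginal gain, with crude FIFO eviction), then choose k of them
    offline with greedy. Placeholder rule; NOT the paper's algorithm."""
    shortlist = []
    for item in stream:
        if f(shortlist + [item]) - f(shortlist) > 0:
            shortlist.append(item)
            if len(shortlist) > eta:
                shortlist.pop(0)
    return greedy(f, shortlist, k)

# Demo: f is a coverage function (monotone submodular) over random sets.
universe_sets = {i: set(random.sample(range(50), 8)) for i in range(200)}
f = lambda S: len(set().union(*(universe_sets[i] for i in S))) if S else 0
order = list(universe_sets)
random.shuffle(order)  # random arrival order, as in the secretary setting
print(f(shortlist_then_select(order, f, k=10, eta=40)))
```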
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.1/LIPIcs.ITCS.2019.1.pdf
Submodular Optimization
Secretary Problem
Streaming Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
2:1
2:21
10.4230/LIPIcs.ITCS.2019.2
article
Hamiltonian Sparsification and Gap-Simulation
Aharonov, Dorit
1
Zhou, Leo
2
https://orcid.org/0000-0001-7598-8621
School of Computer Science and Engineering, The Hebrew University, Jerusalem 91904, Israel
Department of Physics, Harvard University, Cambridge, MA 02138, USA
Analog quantum simulation - simulation of one Hamiltonian by another - is one of the major goals in the noisy intermediate-scale quantum computation (NISQ) era, and has many applications in quantum complexity. We initiate the rigorous study of the physical resources required for such simulations, where we focus on the task of Hamiltonian sparsification. The goal is to find a simulating Hamiltonian H~ whose underlying interaction graph has bounded degree (this is called degree-reduction) or far fewer edges than that of the original Hamiltonian H (this is called dilution). We set this study in a relaxed framework for analog simulations that we call gap-simulation, where H~ is only required to simulate the ground state(s) and spectral gap of H instead of its full spectrum, and we believe this framework is of independent interest.
Our main result is a proof that, in stark contrast to the classical setting, general degree-reduction is impossible in the quantum world, even under our relaxed notion of gap-simulation. The impossibility proof relies on devising counterexample Hamiltonians and applying a strengthened variant of the Hastings-Koma decay-of-correlations theorem. We also show a complementary result where degree-reduction is possible when the strength of interactions is allowed to grow polynomially. Furthermore, we prove the impossibility of the related sparsification task of generic Hamiltonian dilution, under a computational hardness assumption. We also clarify the (currently weak) implications of our results to the question of quantum PCP. Our work provides basic answers to many of the "first questions" one would ask about Hamiltonian sparsification and gap-simulation; we hope this serves as a good starting point for future research of these topics.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.2/LIPIcs.ITCS.2019.2.pdf
quantum simulation
quantum Hamiltonian complexity
sparsification
quantum PCP
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
3:1
3:19
10.4230/LIPIcs.ITCS.2019.3
article
On Solving Linear Systems in Sublinear Time
Andoni, Alexandr
1
Krauthgamer, Robert
2
Pogrow, Yosef
2
Columbia University, New York, NY, USA
Weizmann Institute of Science, Rehovot, Israel
We study sublinear algorithms that solve linear systems locally. In the classical version of this problem the input is a matrix S in R^{n x n} and a vector b in R^n in the range of S, and the goal is to output x in R^n satisfying Sx=b. For the case when the matrix S is symmetric diagonally dominant (SDD), the breakthrough algorithm of Spielman and Teng [STOC 2004] approximately solves this problem in near-linear time (in the input size, which is the number of non-zeros in S), and subsequent papers have further simplified, improved, and generalized the algorithms for this setting.
Here we focus on computing one (or a few) coordinates of x, which potentially allows for sublinear algorithms. Formally, given an index u in [n] together with S and b as above, the goal is to output an approximation x̂_u for x^*_u, where x^* is a fixed solution to Sx=b.
Our results show that there is a qualitative gap between SDD matrices and the more general class of positive semidefinite (PSD) matrices. For SDD matrices, we develop an algorithm that approximates a single coordinate x_u in time that is polylogarithmic in n, provided that S is sparse and has a small condition number (e.g., the Laplacian of an expander graph). The approximation guarantee is additive: |x̂_u - x^*_u| <= epsilon ||x^*||_infty for accuracy parameter epsilon > 0. We further prove that the condition-number assumption is necessary and tight.
In contrast to the SDD matrices, we prove that for certain PSD matrices S, the running time must be at least polynomial in n (for the same additive approximation), even if S has bounded sparsity and condition number.
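The following sketch illustrates the classical idea behind such local solvers: for a strictly diagonally dominant S = D - A with A >= 0, the Neumann series x = sum_{t>=0} (D^{-1}A)^t D^{-1} b expresses x_u as the expected payoff of a killed random walk started at u (the von Neumann-Ulam estimator). This is only the textbook starting point, not the paper's algorithm or its guarantees; the graph and numbers below are made up for illustration.

```python
import random

def local_solve(u, b, nbrs, weight, diag, T=60, trials=20000):
    """Monte-Carlo estimate of x_u for a strictly diagonally dominant system
    S x = b with S = D - A (A >= 0): x_u equals the expected sum of b_v / d_v
    over a killed random walk started at u (classical von Neumann-Ulam
    estimator), truncated here at T steps."""
    total = 0.0
    for _ in range(trials):
        v, t = u, 0
        while t < T:
            total += b[v] / diag[v]
            # move to neighbor w with probability A_vw / d_v; halt otherwise
            r = random.random() * diag[v]
            acc, nxt = 0.0, None
            for w in nbrs[v]:
                acc += weight[(v, w)]
                if r < acc:
                    nxt = w
                    break
            if nxt is None:
                break
            v, t = nxt, t + 1
    return total / trials

# Tiny example: S = 2.5*I - A on a triangle graph (strictly DD, so the walk dies).
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
weight = {(i, j): 1.0 for i in nbrs for j in nbrs[i]}
diag = {0: 2.5, 1: 2.5, 2: 2.5}
b = {0: 1.0, 1: 0.0, 2: 0.0}
print(local_solve(0, b, nbrs, weight, diag))  # approx (S^{-1} b)_0 = 0.857
```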
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.3/LIPIcs.ITCS.2019.3.pdf
Linear systems
Laplacian solver
Sublinear time
Randomized linear algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
4:1
4:14
10.4230/LIPIcs.ITCS.2019.4
article
Placing Conditional Disclosure of Secrets in the Communication Complexity Universe
Applebaum, Benny
1
Vasudevan, Prashant Nalini
2
Tel Aviv University, Tel Aviv, Israel, https://www.eng.tau.ac.il/~bennyap/
UC Berkeley, Berkeley, USA, http://people.eecs.berkeley.edu/~prashvas
In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold n-bit inputs x and y respectively, wish to release a common secret z to Carol (who knows both x and y) if and only if the input (x,y) satisfies some predefined predicate f. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security.
Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate f to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of Omega(n) or Omega(n^{1-epsilon}), providing an exponential improvement over previous logarithmic lower bounds.
We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication - a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the class AM, or even AM cap coAM - a well-known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower bounds are known.
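As a concrete illustration of the model, here is the folklore CDS protocol for the equality predicate over a prime field, written in Python: each party sends one field element, Carol recovers z exactly when x = y, and otherwise sees a uniformly random value. The modulus and the toy inputs are arbitrary choices for the demo.

```python
import random

P = 2**61 - 1  # prime modulus; the secret z and inputs live in Z_P

def cds_equality(x, y, z):
    """One-shot CDS for the predicate f(x, y) = [x == y] (folklore protocol).
    Shared randomness: uniform r and pad w. Alice and Bob each send one field
    element; Carol computes their difference r*(y - x) + z, which equals z
    when x == y and is uniformly random (perfectly hiding z) otherwise."""
    r = random.randrange(P)       # shared randomness
    w = random.randrange(P)       # shared pad so Alice's message leaks nothing
    m_alice = (r * x + w) % P     # Alice's single message
    m_bob = (r * y + w + z) % P   # Bob's single message
    return (m_bob - m_alice) % P  # Carol's reconstruction

z = 42
assert cds_equality(7, 7, z) == z  # f(x,y) = 1: Carol learns z
print(cds_equality(7, 8, z))       # f(x,y) = 0: uniform value, hides z
```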
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.4/LIPIcs.ITCS.2019.4.pdf
Conditional Disclosure of Secrets
Information-Theoretic Security
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
5:1
5:1
10.4230/LIPIcs.ITCS.2019.5
article
Bitcoin: A Natural Oligopoly
Arnosti, Nick
1
Weinberg, S. Matthew
2
Columbia University, New York City, NY, USA
Princeton University, Princeton, NJ, USA
Although Bitcoin was intended to be a decentralized digital currency, in practice, mining power is quite concentrated. This fact is a persistent source of concern for the Bitcoin community.
We provide an explanation using a simple model to capture miners' incentives to invest in equipment. In our model, n miners compete for a prize of fixed size. Each miner chooses an investment q_i, incurring cost c_i q_i, and then receives reward (q_i^alpha)/(sum_j q_j^alpha), for some alpha >= 1. When c_i = c_j for all i,j, and alpha = 1, there is a unique equilibrium where all miners invest equally. However, we prove that under seemingly mild deviations from this model, equilibrium outcomes become drastically more centralized. In particular,
- When costs are asymmetric, if miner i chooses to invest, then miner j has market share at least 1-c_j/c_i. That is, if miner j has costs that are (e.g.) 20% lower than those of miner i, then miner j must control at least 20% of the total mining power.
- In the presence of economies of scale (alpha > 1), every market participant has a market share of at least 1 - 1/alpha, implying that the market features at most alpha/(alpha - 1) miners in total.
We discuss the implications of our results for the future design of cryptocurrencies. In particular, our work further motivates the study of protocols that minimize "orphaned" blocks, proof-of-stake protocols, and incentive compatible protocols.
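A quick way to see the first claim numerically is to compute an equilibrium of the investment game by best-response iteration. The sketch below is a generic Tullock-contest solver written for this purpose (the search ranges and iteration counts are ad hoc, not from the paper); for alpha = 1 and costs (0.8, 1.0) it reproduces the predicted lower bound on the cheaper miner's share.

```python
def best_response(i, q, c, alpha, lo=0.0, hi=10.0, iters=200):
    """Ternary search for miner i's best investment, others' choices fixed."""
    def payoff(qi):
        others = sum(qj ** alpha for j, qj in enumerate(q) if j != i)
        share = (qi ** alpha) / (qi ** alpha + others) if qi > 0 else 0.0
        return share - c[i] * qi
    for _ in range(iters):
        a, b = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if payoff(a) < payoff(b):
            lo = a
        else:
            hi = b
    return (lo + hi) / 2

def equilibrium(c, alpha, rounds=500):
    q = [1.0] * len(c)
    for _ in range(rounds):
        for i in range(len(c)):
            q[i] = best_response(i, q, c, alpha)
    return q

# Asymmetric costs, alpha = 1: miner 0 is 20% cheaper (c_0 = 0.8 * c_1),
# so miner 0's market share should be at least 1 - c_0/c_1 = 0.2.
q = equilibrium([0.8, 1.0], alpha=1.0)
print([qi / sum(q) for qi in q])  # approx [0.556, 0.444]: well above 0.2
```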
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.5/LIPIcs.ITCS.2019.5.pdf
Bitcoin
Cryptocurrencies
Rent-Seeking Competition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
6:1
6:20
10.4230/LIPIcs.ITCS.2019.6
article
A Simple Sublinear-Time Algorithm for Counting Arbitrary Subgraphs via Edge Sampling
Assadi, Sepehr
1
Kapralov, Michael
2
Khanna, Sanjeev
1
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland
In the subgraph counting problem, we are given a (large) input graph G(V, E) and a (small) target graph H (e.g., a triangle); the goal is to estimate the number of occurrences of H in G. Our focus here is on designing sublinear-time algorithms for approximately computing the number of occurrences of H in G in the setting where the algorithm is given query access to G. This problem has been studied in several recent papers which primarily focused on specific families of graphs H such as triangles, cliques, and stars. However, not much is known about approximate counting of arbitrary graphs H in the literature. This is in sharp contrast to the closely related subgraph enumeration problem that has received significant attention in the database community as the database join problem. The AGM bound shows that the maximum number of occurrences of any arbitrary subgraph H in a graph G with m edges is O(m^{rho(H)}), where rho(H) is the fractional edge-cover number of H, and enumeration algorithms with matching runtime are known for any H.
We bridge this gap between subgraph counting and subgraph enumeration by designing a simple sublinear-time algorithm that can estimate the number of occurrences of any arbitrary graph H in G, denoted by #H, to within a (1 +/- epsilon)-approximation with high probability in O(m^{rho(H)}/#H) * poly(log(n),1/epsilon) time. Our algorithm is allowed the standard set of queries for general graphs, namely degree queries, pair queries and neighbor queries, plus an additional edge-sample query that returns an edge chosen uniformly at random. The performance of our algorithm matches that of Eden et al. [FOCS 2015, STOC 2018] for counting triangles and cliques and extends it to all choices of subgraph H under the additional assumption of edge-sample queries.
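For intuition about why an edge-sample query helps, here is a toy unbiased estimator for the simplest case H = K_3 (a triangle), using only edge samples, a uniform vertex, and pair queries. It is not the paper's general-H algorithm and makes no claim about its sublinear running-time analysis.

```python
import random

def estimate_triangles(V, E, adj, trials=200000):
    """Toy unbiased estimator: sample a uniform edge (u, v) via the edge-sample
    query and a uniform vertex w, then test with pair queries whether u, v, w
    form a triangle. Each triangle is hit through 3 of the m*n (edge, vertex)
    pairs, so (m*n/3) times the hit rate is an unbiased estimate of #K_3."""
    m, n = len(E), len(V)
    hits = 0
    for _ in range(trials):
        u, v = random.choice(E)  # edge-sample query
        w = random.choice(V)     # uniform vertex
        if w != u and w != v and w in adj[u] and w in adj[v]:  # pair queries
            hits += 1
    return (m * n / 3) * hits / trials

# K_4 has exactly 4 triangles.
V = [0, 1, 2, 3]
E = [(i, j) for i in V for j in V if i < j]
adj = {v: {u for u in V if u != v} for v in V}
print(estimate_triangles(V, E, adj))  # approx 4
```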
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.6/LIPIcs.ITCS.2019.6.pdf
Sublinear-time algorithms
Subgraph counting
AGM bound
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
7:1
7:21
10.4230/LIPIcs.ITCS.2019.7
article
Tensor Network Complexity of Multilinear Maps
Austrin, Per
1
Kaski, Petteri
2
Kubjas, Kaie
3
School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
Department of Computer Science, Aalto University, Helsinki, Finland
Department of Mathematics and Systems Analysis, Aalto University, Helsinki, Finland, and, Laboratoire d'Informatique de Paris 6, Sorbonne Université, Paris, France
We study tensor networks as a model of arithmetic computation for evaluating multilinear maps. These capture any algorithm based on low-border-rank tensor decompositions, such as O(n^{omega+epsilon})-time matrix multiplication, as well as many other algorithms, such as the O(n log n)-time discrete Fourier transform and the O^*(2^n)-time algorithm for computing the permanent of a matrix. However, tensor networks sometimes yield faster algorithms than those that follow from low-rank decompositions. For instance, the fastest known O(n^{(omega+epsilon)t})-time algorithms for counting 3t-cliques can be implemented with tensor networks, even though the underlying tensor has border rank n^{3t} for all t >= 2. For counting homomorphisms of a general pattern graph P into a host graph on n vertices we obtain an upper bound of O(n^{(omega+epsilon)bw(P)/2}), where bw(P) is the branchwidth of P. This essentially matches the bound for counting cliques, and yields small improvements over previous algorithms for many choices of P.
While powerful, the model still has limitations, and we are able to show a number of unconditional lower bounds for various multilinear maps, including:
a) an Omega(n^{bw(P)}) time lower bound for counting homomorphisms from P to an n-vertex graph, matching the upper bound if omega = 2. In particular, for P a v-clique this yields an Omega(n^{ceil[2v/3]}) time lower bound for counting v-cliques, and for P a k-uniform v-hyperclique we obtain an Omega(n^v) time lower bound for k >= 3, ruling out tensor networks as an approach to obtaining non-trivial algorithms for hyperclique counting and the Max-3-CSP problem.
b) an Omega(2^{0.918n}) time lower bound for the permanent of an n x n matrix.
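To make the model concrete: numpy's einsum is precisely a tensor-network contraction engine, and contracting one copy of the adjacency matrix per edge of a pattern P counts homomorphisms from P into the host graph. A minimal sketch (the host graph is an arbitrary demo choice):

```python
import numpy as np

# Adjacency matrix of a 5-cycle as the host graph.
n = 5
A = np.zeros((n, n), dtype=np.int64)
for i in range(n):
    A[i][(i + 1) % n] = A[(i + 1) % n][i] = 1

# A tensor network with one index per vertex of the pattern P and one copy of
# A per edge of P computes the number of homomorphisms from P into the host.
hom_triangle = np.einsum('ij,jk,ki->', A, A, A)       # = trace(A^3)
hom_4cycle = np.einsum('ij,jk,kl,li->', A, A, A, A)   # = trace(A^4)
print(hom_triangle, hom_4cycle)  # 0 triangle homs into C_5; 30 closed 4-walks

# The contraction order corresponds to a branch decomposition of P; good
# orders give the n^{O(bw(P))} running times discussed above.
```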
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.7/LIPIcs.ITCS.2019.7.pdf
arithmetic complexity
lower bound
multilinear map
tensor network
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
8:1
8:20
10.4230/LIPIcs.ITCS.2019.8
article
A #SAT Algorithm for Small Constant-Depth Circuits with PTF Gates
Bajpai, Swapnam
1
Krishan, Vaibhav
1
Kush, Deepanshu
1
Limaye, Nutan
1
Srinivasan, Srikanth
2
Indian Institute of Technology, Bombay, Mumbai, India
Department of Mathematics, Indian Institute of Technology Bombay, Mumbai, India
We show that there is a zero-error randomized algorithm that, when given a small constant-depth Boolean circuit C made up of gates that compute constant-degree Polynomial Threshold functions or PTFs (i.e., Boolean functions that compute signs of constant-degree polynomials), counts the number of satisfying assignments to C in significantly better than brute-force time.
Formally, for any constants d,k, there is an epsilon > 0 such that the zero-error randomized algorithm counts the number of satisfying assignments to a given depth-d circuit C made up of k-PTF gates, provided that C has size at most n^{1+epsilon}. The algorithm runs in time 2^{n-n^{Omega(epsilon)}}.
Before our result, no algorithm for beating brute-force search was known for counting the number of satisfying assignments even for a single degree-k PTF (which is a depth-1 circuit of linear size).
The main new tool is the use of a learning algorithm for learning degree-1 PTFs (or Linear Threshold Functions) using comparison queries due to Kane, Lovett, Moran and Zhang (FOCS 2017). We show that their ideas fit nicely into a memoization approach that yields the #SAT algorithms.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.8/LIPIcs.ITCS.2019.8.pdf
SAT
Polynomial Threshold Functions
Constant-depth Boolean Circuits
Linear Decision Trees
Zero-error randomized algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
9:1
9:12
10.4230/LIPIcs.ITCS.2019.9
article
Small-Set Expansion in Shortcode Graph and the 2-to-2 Conjecture
Barak, Boaz
1
Kothari, Pravesh K.
2
Steurer, David
3
Harvard University School of Engineering and Applied Sciences, Cambridge, USA
Princeton University and IAS Princeton, USA
ETH Zurich, Zurich, Switzerland
Dinur, Khot, Kindler, Minzer and Safra (2016) recently showed that the (imperfect completeness variant of) Khot's 2-to-2 games conjecture follows from a combinatorial hypothesis about the soundness of a certain "Grassmannian agreement tester". In this work, we show that the soundness of the Grassmannian agreement tester follows from a conjecture we call the "Shortcode Expansion Hypothesis", characterizing the non-expanding sets of the degree-two Shortcode graph. We also show that the latter conjecture is equivalent to a characterization of the non-expanding sets in the Grassmann graph, as hypothesized by a follow-up paper of Dinur et al. (2017).
Following our work, Khot, Minzer and Safra (2018) proved the "Shortcode Expansion Hypothesis". Combining their proof with our result and the reduction of Dinur et al. (2016) completes the proof of the 2-to-2 conjecture with imperfect completeness. We believe that the Shortcode graph provides a useful view of both the hypothesis and the reduction, and might be suitable for obtaining new hardness reductions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.9/LIPIcs.ITCS.2019.9.pdf
Unique Games Conjecture
Small-Set Expansion
Grassmann Graph
Shortcode
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
10:1
10:18
10.4230/LIPIcs.ITCS.2019.10
article
Algorithms, Bounds, and Strategies for Entangled XOR Games
Bene Watts, Adam
1
Harrow, Aram W.
1
Kanwar, Gurtej
1
Natarajan, Anand
2
MIT Center for Theoretical Physics, 77 Massachusetts Ave, 6-304, Cambridge, MA, USA
California Institute of Technology, 1200 E. California Blvd, Pasadena, CA, USA
Entangled games are a quantum analog of constraint satisfaction problems and have had important applications to quantum complexity theory, quantum cryptography, and the foundations of quantum mechanics. Given a game, the basic computational problem is to compute its entangled value: the supremum success probability attainable by a quantum strategy. We study the complexity of computing the (commuting-operator) entangled value omega^* of entangled XOR games with any number of players. Based on a duality theory for systems of operator equations, we introduce necessary and sufficient criteria for an XOR game to have omega^* = 1, and use these criteria to derive the following results:
1) An algorithm for symmetric games that decides in polynomial time whether omega^* = 1 or omega^* < 1, a task that was not previously known to be decidable, together with a simple tensor-product strategy that achieves value 1 in the former case. The only previous candidate algorithm for this problem was the Navascués-Pironio-Acín (also known as noncommutative Sum of Squares or ncSoS) hierarchy, but no convergence bounds were known.
2) A family of games with three players and with omega^* < 1, where it takes doubly exponential time for the ncSoS algorithm to witness this. By contrast, our algorithm runs in polynomial time.
3) Existence of an unsatisfiable phase for random (non-symmetric) XOR games. We show that there exists a constant C_k^{unsat} depending only on the number k of players, such that a random k-XOR game over an alphabet of size n has omega^* < 1 with high probability when the number of clauses is above C_k^{unsat} n.
4) A lower bound of Omega(n log(n)/log log(n)) on the number of levels in the ncSoS hierarchy required to detect unsatisfiability for most random 3-XOR games. This is in contrast with the classical case where the (3n)^{th} level of the sum-of-squares hierarchy is equivalent to brute-force enumeration of all possible solutions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.10/LIPIcs.ITCS.2019.10.pdf
Nonlocal games
XOR Games
Pseudotelepathy games
Multipartite entanglement
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
11:1
11:20
10.4230/LIPIcs.ITCS.2019.11
article
Testing Local Properties of Arrays
Ben-Eliezer, Omri
1
Tel Aviv University, Tel Aviv 69978, Israel
We study testing of local properties in one-dimensional and multi-dimensional arrays. A property of d-dimensional arrays f:[n]^d -> Sigma is k-local if it can be defined by a family of k x ... x k forbidden consecutive patterns. This definition captures numerous interesting properties. For example, monotonicity, Lipschitz continuity and submodularity are 2-local; convexity is (usually) 3-local; and many typical problems in computational biology and computer vision involve o(n)-local properties.
In this work, we present a generic approach to test all local properties of arrays over any finite (and not necessarily bounded size) alphabet. We show that any k-local property of d-dimensional arrays is testable by a simple canonical one-sided error non-adaptive epsilon-test, whose query complexity is O(epsilon^{-1}k log{(epsilon n)/k}) for d = 1 and O(c_d epsilon^{-1/d} k * n^{d-1}) for d > 1. The queries made by the canonical test constitute sphere-like structures of varying sizes, and are completely independent of the property and the alphabet Sigma. The query complexity is optimal for a wide range of parameters: For d=1, this matches the query complexity of many previously investigated local properties, while for d > 1 we design and analyze new constructions of k-local properties whose one-sided non-adaptive query complexity matches our upper bounds. For some previously studied properties, our method provides the first known sublinear upper bound on the query complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.11/LIPIcs.ITCS.2019.11.pdf
Property Testing
Local Properties
Monotonicity Testing
Hypergrid
Pattern Matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
12:1
12:30
10.4230/LIPIcs.ITCS.2019.12
article
The Complexity of User Retention
Ben-Sasson, Eli
1
https://orcid.org/0000-0002-0708-0483
Saig, Eden
1
https://orcid.org/0000-0002-0810-2218
Department of Computer Science, Technion, Haifa, Israel
This paper studies families of distributions T that are amenable to retentive learning, meaning that an expert can retain users that seek to predict their future, assuming user attributes are sampled from T and exposed gradually over time. Limited attention span is the main problem experts face in our model. We make two contributions.
First, we formally define the notions of retentively learnable distributions and properties. Along the way, we define a retention complexity measure of distributions and a natural class of retentive scoring rules that model the way users evaluate experts they interact with. These rules are shown to be tightly connected to the truth-eliciting "proper scoring rules" studied in Decision Theory since the 1950s [McCarthy, PNAS 1956].
Second, we take a first step towards relating retention complexity to other measures of significance in computational complexity. In particular, we show that linear properties (over the binary field) are retentively learnable, whereas random Low Density Parity Check (LDPC) codes have, with high probability, maximal retention complexity. Intriguingly, these results resemble known results from the field of property testing and suggest that deeper connections between retentive distributions and locally testable properties may exist.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.12/LIPIcs.ITCS.2019.12.pdf
retentive learning
retention complexity
information elicitation
proper scoring rules
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
13:1
13:16
10.4230/LIPIcs.ITCS.2019.13
article
Torus Polynomials: An Algebraic Approach to ACC Lower Bounds
Bhrushundi, Abhishek
1
Hosseini, Kaave
2
Lovett, Shachar
2
Rao, Sankeerth
2
Rutgers University, New Brunswick, USA
University of California, San Diego, USA
We propose an algebraic approach to proving circuit lower bounds for ACC^0 by defining and studying the notion of torus polynomials. We show how currently known polynomial-based approximation results for AC^0 and ACC^0 can be reformulated in this framework, implying that ACC^0 can be approximated by low-degree torus polynomials. Furthermore, as a step towards proving ACC^0 lower bounds for the majority function via our approach, we show that MAJORITY cannot be approximated by low-degree symmetric torus polynomials. We also pose several open problems related to our framework.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.13/LIPIcs.ITCS.2019.13.pdf
Circuit complexity
ACC
lower bounds
polynomials
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
14:1
14:21
10.4230/LIPIcs.ITCS.2019.14
article
Almost Envy-Free Allocations with Connected Bundles
Bilò, Vittorio
1
Caragiannis, Ioannis
2
Flammini, Michele
3
Igarashi, Ayumi
4
Monaco, Gianpiero
5
Peters, Dominik
6
Vinci, Cosimo
5
Zwicker, William S.
7
University of Salento, Lecce, Italy
University of Patras, Rion-Patras, Greece
Gran Sasso Science Institute and University of L'Aquila, L'Aquila, Italy
Kyushu University, Fukuoka, Japan
University of L'Aquila, L'Aquila, Italy
University of Oxford, Oxford, U.K.
Union College, Schenectady, USA
We study the existence of allocations of indivisible goods that are envy-free up to one good (EF1), under the additional constraint that each bundle needs to be connected in an underlying item graph G. When the items are arranged in a path, we show that EF1 allocations are guaranteed to exist for arbitrary monotonic utility functions over bundles, provided that either there are at most four agents, or there are any number of agents but they all have identical utility functions. Our existence proofs are based on classical arguments from the divisible cake-cutting setting, and involve discrete analogues of cut-and-choose, of Stromquist's moving-knife protocol, and of the Su-Simmons argument based on Sperner's lemma. Sperner's lemma can also be used to show that on a path, an EF2 allocation exists for any number of agents. Except for the results using Sperner's lemma, all of our procedures can be implemented by efficient algorithms. Our positive results for paths imply the existence of connected EF1 or EF2 allocations whenever G is traceable, i.e., contains a Hamiltonian path. For the case of two agents, we completely characterize the class of graphs G that guarantee the existence of EF1 allocations as the class of graphs whose biconnected components are arranged in a path. This class is strictly larger than the class of traceable graphs; one can check in linear time whether a graph belongs to this class, and if so return an EF1 allocation.
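The two defining constraints - connectivity on the path and envy-freeness up to one good - are easy to state in code. The checker below assumes additive utilities for simplicity (the paper's results hold for general monotone utilities); the valuations in the demo are invented.

```python
def is_connected_on_path(bundle):
    """On a path, a bundle is connected iff it is a contiguous interval."""
    s = sorted(bundle)
    return not s or s[-1] - s[0] == len(s) - 1

def is_ef1(allocation, utils):
    """EF1 check for additive utilities: agent i must not envy agent j once
    some single good is removed from j's bundle (for additive utilities, the
    best good to remove is the one i values most)."""
    for i, u in enumerate(utils):
        mine = sum(u[g] for g in allocation[i])
        for j, bundle in enumerate(allocation):
            if i == j:
                continue
            envy = sum(u[g] for g in bundle) - mine
            if bundle and envy > max(u[g] for g in bundle):
                return False
            if not bundle and envy > 0:
                return False
    return True

# Path with goods 0..4 and two agents with different additive valuations.
utils = [{0: 5, 1: 1, 2: 1, 3: 1, 4: 5}, {0: 1, 1: 3, 2: 3, 3: 3, 4: 1}]
alloc = [[0, 1], [2, 3, 4]]
print(all(is_connected_on_path(b) for b in alloc), is_ef1(alloc, utils))
```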
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.14/LIPIcs.ITCS.2019.14.pdf
Envy-free Division
Cake-cutting
Resource Allocation
Algorithmic Game Theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
15:1
15:2
10.4230/LIPIcs.ITCS.2019.15
article
"Quantum Supremacy" and the Complexity of Random Circuit Sampling
Bouland, Adam
1
https://orcid.org/0000-0002-8556-8337
Fefferman, Bill
2
https://orcid.org/0000-0002-9627-0210
Nirkhe, Chinmay
1
https://orcid.org/0000-0002-5808-4994
Vazirani, Umesh
1
Electrical Engineering and Computer Sciences, University of California, Berkeley, 387 Soda Hall Berkeley, CA 94720, U.S.A.
Electrical Engineering and Computer Sciences, University of California, Berkeley, 387 Soda Hall Berkeley, CA 94720, U.S.A.; Joint Center for Quantum Information and Computer Science (QuICS), University of Maryland/NIST, Bldg 224 Stadium Dr Room 3100, College Park, MD 20742, U.S.A.
A critical goal for the field of quantum computation is quantum supremacy - a demonstration of any quantum computation that is prohibitively hard for classical computers. It is both a necessary milestone on the path to useful quantum computers and a test of quantum theory in the realm of high complexity. A leading near-term candidate, put forth by the Google/UCSB team, is sampling from the probability distributions of randomly chosen quantum circuits, called Random Circuit Sampling (RCS).
While RCS was defined with experimental realization in mind, we give strong complexity-theoretic evidence for the classical hardness of RCS, placing it on par with the best theoretical proposals for supremacy. Specifically, we show that RCS satisfies an average-case hardness condition - computing output probabilities of typical quantum circuits is as hard as computing them in the worst-case, and therefore #P-hard. Our reduction exploits the polynomial structure in the output amplitudes of random quantum circuits, enabled by the Feynman path integral. In addition, it follows from known results that RCS also satisfies an anti-concentration property, namely that errors in estimating output probabilities are small with respect to the probabilities themselves. This makes RCS the first proposal for quantum supremacy with both of these properties. We also give a natural condition under which an existing statistical measure, cross-entropy, verifies RCS, as well as describe a new verification measure which in some formal sense maximizes the information gained from experimental samples.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.15/LIPIcs.ITCS.2019.15.pdf
quantum supremacy
average-case hardness
verification
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
16:1
16:20
10.4230/LIPIcs.ITCS.2019.16
article
Adversarially Robust Property-Preserving Hash Functions
Boyle, Elette
1
LaVigne, Rio
2
Vaikuntanathan, Vinod
2
IDC Herzliya, Kanfei Nesharim Herzliya, Israel
MIT CSAIL, 32 Vassar Street, Cambridge MA, 02139 USA
Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x, y) of the original inputs. The idea of property-preserving hash functions underlies sketching, compressed sensing and locality-sensitive hashing.
Property-preserving hash functions are usually probabilistic: they use the random choice of a hash function from a family to achieve compression, and as a consequence, err on some inputs. Traditionally, the notion of correctness for these hash functions requires that for every two inputs x and y, the probability that h(x) and h(y) mislead us into a wrong prediction of P(x, y) is negligible. As observed in many recent works (incl. Mironov, Naor and Segev, STOC 2008; Hardt and Woodruff, STOC 2013; Naor and Yogev, CRYPTO 2015), such a correctness guarantee assumes that the adversary (who produces the offending inputs) has no information about the hash function, and is too weak in many scenarios.
We initiate the study of adversarial robustness for property-preserving hash functions, provide definitions, derive broad lower bounds due to a simple connection with communication complexity, and show the necessity of computational assumptions to construct such functions. Our main positive results are two candidate constructions of property-preserving hash functions (achieving different parameters) for the (promise) gap-Hamming property which checks if x and y are "too far" or "too close". Our first construction relies on generic collision-resistant hash functions, and our second on a variant of the syndrome decoding assumption on low-density parity check codes.
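For context, here is the standard non-robust property-preserving sketch for gap-Hamming via coordinate subsampling, which is exactly the kind of construction whose correctness breaks once the adversary learns the hash. All parameters below are arbitrary demo choices.

```python
import random

def sample_hash(n, sketch_len, seed):
    """The hash keeps the coordinates indexed by a random multiset S."""
    rng = random.Random(seed)
    S = [rng.randrange(n) for _ in range(sketch_len)]
    return lambda x: [x[i] for i in S]

def predict_far(hx, hy, n, threshold):
    """Estimate the Hamming distance from the sketches, compare to threshold."""
    disagreements = sum(a != b for a, b in zip(hx, hy))
    return (n / len(hx)) * disagreements > threshold

n, t = 1000, 100
h = sample_hash(n, sketch_len=200, seed=1234)
x = [random.randrange(2) for _ in range(n)]
y = list(x)
for i in random.sample(range(n), 3 * t):  # make y far from x
    y[i] ^= 1
print(predict_far(h(x), h(y), n, t))      # True w.h.p. for oblivious inputs
# Non-robustness: an adversary who learns the sampled coordinate set S can
# craft x, y that differ only outside S, fooling the sketch every time.
```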
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.16/LIPIcs.ITCS.2019.16.pdf
Hash function
compression
property-preserving
one-way communication
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
17:1
17:16
10.4230/LIPIcs.ITCS.2019.17
article
On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic
C. S., Karthik
1
https://orcid.org/0000-0001-9105-364X
Manurangsi, Pasin
2
Weizmann Institute of Science, Rehovot, Israel
University of California, Berkeley, USA
Given a set of n points in R^d, the (monochromatic) Closest Pair problem asks to find a pair of distinct points in the set that are closest in the l_p-metric. Closest Pair is a fundamental problem in Computational Geometry and understanding its fine-grained complexity in the Euclidean metric when d=omega(log n) was raised as an open question in recent works (Abboud-Rubinstein-Williams [FOCS'17], Williams [SODA'18], David-Karthik-Laekhanukit [SoCG'18]).
In this paper, we show that for every p in R_{>= 1} cup {0}, under the Strong Exponential Time Hypothesis (SETH), for every epsilon>0, the following holds:
- No algorithm running in time O(n^{2-epsilon}) can solve the Closest Pair problem in d=(log n)^{Omega_{epsilon}(1)} dimensions in the l_p-metric.
- There exists delta = delta(epsilon)>0 and c = c(epsilon)>= 1 such that no algorithm running in time O(n^{1.5-epsilon}) can approximate Closest Pair problem to a factor of (1+delta) in d >= c log n dimensions in the l_p-metric.
In particular, our first result is shown by establishing the computational equivalence of the bichromatic Closest Pair problem and the (monochromatic) Closest Pair problem (up to n^{epsilon} factor in the running time) for d=(log n)^{Omega_epsilon(1)} dimensions.
Additionally, under SETH, we rule out nearly-polynomial factor approximation algorithms running in subquadratic time for the (monochromatic) Maximum Inner Product problem where we are given a set of n points in n^{o(1)}-dimensional Euclidean space and are required to find a pair of distinct points in the set that maximize the inner product.
At the heart of all our proofs is the construction of a dense bipartite graph with low contact dimension, i.e., we construct a balanced bipartite graph on n vertices with n^{2-epsilon} edges whose vertices can be realized as points in a (log n)^{Omega_epsilon(1)}-dimensional Euclidean space such that every pair of vertices which have an edge in the graph are at distance exactly 1 and every other pair of vertices are at distance greater than 1. This graph construction is inspired by the construction of locally dense codes introduced by Dumer-Micciancio-Sudan [IEEE Trans. Inf. Theory'03].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.17/LIPIcs.ITCS.2019.17.pdf
Closest Pair
Bichromatic Closest Pair
Contact Dimension
Fine-Grained Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
18:1
18:14
10.4230/LIPIcs.ITCS.2019.18
article
Expander-Based Cryptography Meets Natural Proofs
Carboni Oliveira, Igor
1
Santhanam, Rahul
1
Tell, Roei
2
Department of Computer Science, University of Oxford, UK
Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel
We introduce new forms of attack on expander-based cryptography, and in particular on Goldreich's pseudorandom generator and one-way function. Our attacks exploit low circuit complexity of the underlying expander's neighbor function and/or of the local predicate. Our two key conceptual contributions are:
1) We put forward the possibility that the choice of expander matters in expander-based cryptography. In particular, using expanders whose neighbor function has low circuit complexity might compromise the security of Goldreich's PRG and OWF in certain settings.
2) We show that the security of Goldreich's PRG and OWF is closely related to two other long-standing problems: Specifically, to the existence of unbalanced lossless expanders with low-complexity neighbor function, and to limitations on circuit lower bounds (i.e., natural proofs). In particular, our results further motivate the investigation of affine/local unbalanced lossless expanders and of average-case lower bounds against DNF-XOR circuits.
We prove two types of technical results that support the above conceptual messages. First, we unconditionally break Goldreich's PRG when instantiated with a specific expander (whose existence we prove), for a class of predicates that match the parameters of the currently best "hard" candidates, in the regime of quasi-polynomial stretch. Second, conditioned on the existence of expanders whose neighbor functions have extremely low circuit complexity, we present attacks on Goldreich's generator in the regime of polynomial stretch. As one corollary, conditioned on the existence of the foregoing expanders, we show that either the parameters of natural properties for several constant-depth circuit classes cannot be improved, even mildly; or Goldreich's generator is insecure in the regime of a large polynomial stretch, regardless of the predicate used.
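For readers unfamiliar with the object under attack, the following is a minimal sketch of Goldreich's generator: output bit i applies a fixed local predicate to the seed bits selected by the i-th hyperedge of a bounded-degree hypergraph (the neighbor function). The random hypergraph and the particular 5-ary predicate below are placeholder choices; the paper's whole point is that the circuit complexity of these choices can matter.

```python
import random

def goldreich_prg(seed_bits, m, d=5, graph_seed=0):
    """Goldreich's generator: output_i = P(seed restricted to the i-th
    hyperedge). Here the hypergraph is sampled at random and P is the 5-ary
    predicate x1 ^ x2 ^ x3 ^ (x4 & x5); both are illustrative placeholders,
    not choices endorsed by the paper."""
    n = len(seed_bits)
    rng = random.Random(graph_seed)
    edges = [rng.sample(range(n), d) for _ in range(m)]  # neighbor function
    P = lambda b: b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4])     # local predicate
    return [P([seed_bits[v] for v in e]) for e in edges]

seed = [random.randrange(2) for _ in range(64)]
print(goldreich_prg(seed, m=128))  # modest polynomial stretch for the demo
```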
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.18/LIPIcs.ITCS.2019.18.pdf
Pseudorandom Generators
One-Way Functions
Expanders
Circuit Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
19:1
19:7
10.4230/LIPIcs.ITCS.2019.19
article
A Note on the Quantum Query Complexity of Permutation Symmetric Functions
Chailloux, André
1
Inria de Paris, EPI SECRET, Paris, France
It is known since the work of [Aaronson and Ambainis, 2014] that for any permutation symmetric function f, the quantum query complexity is at most polynomially smaller than the classical randomized query complexity, more precisely that R(f) = O~(Q^7(f)). In this paper, we improve this result and show that R(f) = O(Q^3(f)) for a more general class of symmetric functions. Our proof is constructive and relies largely on the quantum hardness of distinguishing a random permutation from a random function with small range from Zhandry [Zhandry, 2015].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.19/LIPIcs.ITCS.2019.19.pdf
quantum query complexity
permutation symmetric functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
20:1
20:7
10.4230/LIPIcs.ITCS.2019.20
article
Adaptive Boolean Monotonicity Testing in Total Influence Time
Chakrabarty, Deeparnab
1
Seshadhri, C.
2
Dartmouth College, Hanover, NH 03755, USA
University of California, Santa Cruz, CA 95064, USA
Testing monotonicity of a Boolean function f:{0,1}^n -> {0,1} is an important problem in the field of property testing. It has led to connections with many interesting combinatorial questions on the directed hypercube: routing, random walks, and new isoperimetric theorems. Denoting the proximity parameter by epsilon, the best tester is the non-adaptive O~(epsilon^{-2}sqrt{n}) tester of Khot-Minzer-Safra (FOCS 2015). A series of recent results by Belovs-Blais (STOC 2016) and Chen-Waingarten-Xie (STOC 2017) has led to Omega~(n^{1/3}) lower bounds for adaptive testers. Reducing this gap is a significant question that touches on the role of adaptivity in monotonicity testing of Boolean functions.
We approach this question from the perspective of parametrized property testing, a concept recently introduced by Pallavoor-Raskhodnikova-Varma (ACM TOCT 2017), where one seeks to understand the performance of testers with respect to parameters other than just the size. Our result is an adaptive monotonicity tester with one-sided error whose query complexity is O(epsilon^{-2}I(f)log^5 n), where I(f) is the total influence of the function. Therefore, adaptivity provably helps monotonicity testing for low-influence functions.
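For contrast with the adaptive, influence-parametrized tester described above, here is the classical non-adaptive edge tester in a few lines of Python; it samples random hypercube edges and rejects on any violated pair. The example functions are invented for the demo.

```python
import random

def edge_tester(f, n, queries):
    """Classic one-sided tester: sample a random hypercube edge (x, x with
    bit i set) and reject if the lower endpoint has the larger value. This
    is a sketch of the query model, not the paper's adaptive tester."""
    for _ in range(queries):
        x = random.getrandbits(n)
        i = random.randrange(n)
        y = x | (1 << i)
        if x != y and f(x) > f(y):  # f(lower) > f(upper): violation
            return "reject"
    return "accept"

# Monotone example: threshold on popcount; its negation rejects quickly.
f_mon = lambda x: int(bin(x).count("1") >= 4)
f_bad = lambda x: 1 - f_mon(x)
n = 8
print(edge_tester(f_mon, n, 1000), edge_tester(f_bad, n, 1000))
```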
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.20/LIPIcs.ITCS.2019.20.pdf
Property Testing
Monotonicity Testing
Influence of Boolean Functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
21:1
21:17
10.4230/LIPIcs.ITCS.2019.21
article
On Locality-Sensitive Orderings and Their Applications
Chan, Timothy M.
1
Har-Peled, Sariel
1
Jones, Mitchell
1
Department of Computer Science, University of Illinois at Urbana-Champaign, USA
For any constant d and parameter epsilon > 0, we show the existence of (roughly) 1/epsilon^d orderings on the unit cube [0,1)^d, such that any two points p, q in [0,1)^d that are close together under the Euclidean metric are "close together" in one of these linear orderings in the following sense: the only points that could lie between p and q in the ordering are points with Euclidean distance at most epsilon | p - q | from p or q. These orderings are extensions of the Z-order, and they can be efficiently computed.
Functionally, the orderings can be thought of as a replacement to quadtrees and related structures (like well-separated pair decompositions). We use such orderings to obtain surprisingly simple algorithms for a number of basic problems in low-dimensional computational geometry, including (i) dynamic approximate bichromatic closest pair, (ii) dynamic spanners, (iii) dynamic approximate minimum spanning trees, (iv) static and dynamic fault-tolerant spanners, and (v) approximate nearest neighbor search.
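A rough sketch of the flavor of these orderings: the classical Z-order can be compared without building Morton keys via the XOR trick (the order is decided by the coordinate whose bits first differ), and sorting the same point set under several shifted copies already gives a crude family of orderings. Locality-sensitive orderings refine this with roughly 1/epsilon^d carefully chosen shifts; the shifts and scale below are illustrative only.

```python
from functools import cmp_to_key

def z_cmp(p, q):
    """Z-order (Morton) comparison of integer points via the XOR trick:
    the winning dimension is the one whose coordinates differ at the
    highest bit position."""
    best_dim, best_xor = 0, 0
    for d in range(len(p)):
        x = p[d] ^ q[d]
        if best_xor < x and best_xor < (x ^ best_xor):  # msb(x) > msb(best)
            best_dim, best_xor = d, x
    a, b = p[best_dim], q[best_dim]
    return -1 if a < b else (1 if a > b else 0)

def shifted_z_orders(points, shifts, scale=2 ** 20):
    """Sort points in [0,1)^d under several shifted copies of the Z-order;
    a crude stand-in for the 1/eps^d locality-sensitive orderings."""
    orders = []
    for s in shifts:
        grid = [tuple(int(((x + s) % 1.0) * scale) for x in p) for p in points]
        orders.append(sorted(range(len(points)),
                             key=cmp_to_key(lambda i, j: z_cmp(grid[i], grid[j]))))
    return orders

pts = [(0.12, 0.70), (0.13, 0.71), (0.80, 0.20)]
print(shifted_z_orders(pts, shifts=[0.0, 1 / 3, 2 / 3]))
```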
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.21/LIPIcs.ITCS.2019.21.pdf
Approximation algorithms
Data structures
Computational geometry
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
22:1
22:15
10.4230/LIPIcs.ITCS.2019.22
article
Pseudorandom Generators from the Second Fourier Level and Applications to AC0 with Parity Gates
Chattopadhyay, Eshan
1
Hatami, Pooya
2
Lovett, Shachar
3
Tal, Avishay
4
Department of Computer Science, Cornell University, 107 Hoy Rd, Ithaca, NY, USA
Department of Computer Science, University of Texas at Austin, 2317 Speedway, Austin, TX, USA
Department of Computer Science, University of California San Diego, La Jolla, CA, USA
Department of Computer Science, Stanford University, 353 Serra Mall, Stanford, CA, USA
A recent work of Chattopadhyay et al. (CCC 2018) introduced a new framework for the design of pseudorandom generators for Boolean functions. It works under the assumption that the Fourier tails of the Boolean functions are uniformly bounded for all levels by an exponential function. In this work, we design an alternative pseudorandom generator that only requires bounds on the second level of the Fourier tails. It is based on a derandomization of the work of Raz and Tal (ECCC 2018), who used the above framework to obtain an oracle separation between BQP and PH.
As an application, we give a concrete conjecture for bounds on the second level of the Fourier tails for low degree polynomials over the finite field F_2. If true, it would imply an efficient pseudorandom generator for AC^0[oplus], a well-known open problem in complexity theory. As a stepping stone towards resolving this conjecture, we prove such bounds for the first level of the Fourier tails.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.22/LIPIcs.ITCS.2019.22.pdf
Derandomization
Pseudorandom generator
Explicit construction
Random walk
Small-depth circuits with parity gates
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
23:1
23:20
10.4230/LIPIcs.ITCS.2019.23
article
Classical Algorithms from Quantum and Arthur-Merlin Communication Protocols
Chen, Lijie
1
Wang, Ruosong
2
Massachusetts Institute of Technology, Cambridge, MA, USA
Carnegie Mellon University, Pittsburgh, PA, USA
In recent years, the polynomial method from circuit complexity has been applied to several fundamental problems, yielding state-of-the-art running times (e.g., R. Williams's n^3 / 2^{Omega(sqrt{log n})} time algorithm for APSP). As observed in [Alman and Williams, STOC 2017], almost all applications of the polynomial method in algorithm design ultimately rely on certain (probabilistic) low-rank decompositions of the computation matrices corresponding to key subroutines. They suggest that making use of low-rank decompositions directly could lead to more powerful algorithms, as the polynomial method is just one way to derive such a decomposition.
Inspired by their observation, in this paper, we study another way of systematically constructing low-rank decompositions of matrices which could be used by algorithms - communication protocols. Since their introduction, it is known that various types of communication protocols lead to certain low-rank decompositions (e.g., P protocols/rank, BQP protocols/approximate rank). These are usually interpreted as approaches for proving communication lower bounds, while in this work we explore the other direction.
We have the following two generic algorithmic applications of communication protocols:
- Quantum Communication Protocols and Deterministic Approximate Counting. Our first connection is that a fast BQP communication protocol for a function f implies a fast deterministic additive approximate counting algorithm for a related pair counting problem. Applying known BQP communication protocols, we get fast deterministic additive approximate counting algorithms for Count-OV (#OV), Sparse Count-OV and Formula of SYM circuits. In particular, our approximate counting algorithm for #OV runs in near-linear time for all dimensions d = o(log^2 n). Previously, no truly subquadratic-time algorithm was known even for d = omega(log n).
- Arthur-Merlin Communication Protocols and Faster Satisfying-Pair Algorithms. Our second connection is that a fast AM^{cc} protocol for a function f implies a faster-than-brute-force algorithm for f-Satisfying-Pair. Using the classical Goldwasser-Sipser AM protocols for approximating set size, we obtain a new algorithm for approximate Max-IP_{n,c log n} in time n^{2 - 1/O(log c)}, matching the state-of-the-art algorithms in [Chen, CCC 2018].
We also apply our second connection to shed some light on long-standing open problems in communication complexity. We show that if the Longest Common Subsequence (LCS) problem admits a fast (computationally efficient) AM^{cc} protocol (polylog(n) complexity), then polynomial-size Formula-SAT admits a 2^{n - n^{1-delta}} time algorithm for any constant delta > 0, which is conjectured to be unlikely by a recent work [Abboud and Bringmann, ICALP 2018]. The same holds even for a fast (computationally efficient) PH^{cc} protocol.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.23/LIPIcs.ITCS.2019.23.pdf
Quantum communication protocols
Arthur-Merlin communication protocols
approximate counting
approximate rank
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
24:1
24:20
10.4230/LIPIcs.ITCS.2019.24
article
Capturing Complementarity in Set Functions by Going Beyond Submodularity/Subadditivity
Chen, Wei
1
Teng, Shang-Hua
2
Zhang, Hanrui
3
Microsoft Research, Beijing, China
USC, Los Angeles, CA, USA
Duke University, Durham, NC, USA
We introduce two new "degree of complementarity" measures: supermodular width and superadditive width. Both are formulated based on natural witnesses of complementarity. We show that both measures are robust by proving that they, respectively, characterize the gap of monotone set functions from being submodular and subadditive. Thus, they define two new hierarchies over monotone set functions, which we will refer to as the Supermodular Width (SMW) hierarchy and the Superadditive Width (SAW) hierarchy, with foundations - i.e., level 0 of the hierarchies - resting exactly on submodular and subadditive functions, respectively.
We present a comprehensive comparative analysis of the SMW hierarchy and the Supermodular Degree (SD) hierarchy, defined by Feige and Izsak. We prove that the SMW hierarchy is strictly more expressive than the SD hierarchy: Every monotone set function of supermodular degree d has supermodular width at most d, and there exists a supermodular-width-1 function over a ground set of m elements whose supermodular degree is m-1. We show that previous results regarding approximation guarantees for welfare and constrained maximization as well as regarding the Price of Anarchy (PoA) of simple auctions can be extended without any loss from the supermodular degree to the supermodular width. We also establish almost matching information-theoretical lower bounds for these two well-studied fundamental maximization problems over set functions. The combination of these approximation and hardness results illustrates that the SMW hierarchy provides not only a natural notion of complementarity, but also an accurate characterization of "near submodularity" needed for maximization approximation. While the SD and SMW hierarchies support nontrivial bounds on the PoA of simple auctions, we show that our SAW hierarchy seems to capture more intrinsic properties needed to realize the efficiency of simple auctions. So far, the SAW hierarchy provides the best dependency for the PoA of the Single-bid Auction, and is nearly as competitive as the Maximum over Positive Hypergraphs (MPH) hierarchy for Simultaneous Item First Price Auction (SIA). We also provide almost tight lower bounds for the PoA of both auctions with respect to the SAW hierarchy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.24/LIPIcs.ITCS.2019.24.pdf
set functions
measure of complementarity
submodularity
subadditivity
cardinality constrained maximization
welfare maximization
simple auctions
price of anarchy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
25:1
25:17
10.4230/LIPIcs.ITCS.2019.25
article
Probabilistic Checking Against Non-Signaling Strategies from Linearity Testing
Chiesa, Alessandro
1
Manohar, Peter
1
Shinkar, Igor
2
UC Berkeley, Berkeley, CA, USA
Simon Fraser University, Vancouver, Canada
Non-signaling strategies are a generalization of quantum strategies that have been studied in physics over the past three decades. Recently, they have found applications in theoretical computer science, including proving inapproximability results for linear programming and constructing protocols for delegating computation. A central tool for these applications is probabilistically checkable proofs (PCPs) that are sound against non-signaling strategies.
In this paper we prove that the exponential-length constant-query PCP construction due to Arora et al. (JACM 1998) is sound against non-signaling strategies.
Our result offers a new length-vs-query tradeoff when compared to the non-signaling PCP of Kalai, Raz, and Rothblum (STOC 2013 and 2014) and, moreover, may serve as an intermediate step to a proof of a non-signaling analogue of the PCP Theorem.
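The test at the core of the construction is the Blum-Luby-Rubinfeld linearity test, which is simple enough to state as code; the sketch below runs it against an honest linear table and a randomly corrupted one. Of course, the paper's contribution is the soundness analysis against non-signaling provers, which no classical simulation captures.

```python
import random

def blr_test(f, n, trials=200):
    """Blum-Luby-Rubinfeld linearity test over F_2^n: accept iff
    f(x) + f(y) = f(x + y) on random x, y. The exponential-length PCP of
    Arora et al. asks such queries to a purported linear-function table."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return "reject"
    return "accept"

n = 16
a = random.getrandbits(n)
linear = lambda x: bin(a & x).count("1") % 2            # f(x) = <a, x> mod 2
noisy = lambda x: linear(x) ^ (random.random() < 0.2)   # 20%-corrupted answers
print(blr_test(linear, n), blr_test(noisy, n))          # accept, reject (w.h.p.)
```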
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.25/LIPIcs.ITCS.2019.25.pdf
probabilistically checkable proofs
linearity testing
non-signaling strategies
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
26:1
26:20
10.4230/LIPIcs.ITCS.2019.26
article
On the Algorithmic Power of Spiking Neural Networks
Chou, Chi-Ning
1
Chung, Kai-Min
2
Lu, Chi-Jen
2
School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
Institute of Information Science, Academia Sinica, Taipei, Taiwan
Spiking Neural Networks (SNN) are mathematical models in neuroscience to describe the dynamics among a set of neurons that interact with each other by firing instantaneous signals, a.k.a., spikes. Interestingly, a recent advance in neuroscience [Barrett-Denève-Machens, NIPS 2013] showed that the neurons' firing rate, i.e., the average number of spikes fired per unit of time, can be characterized by the optimal solution of a quadratic program defined by the parameters of the dynamics. This indicated that SNN potentially has the computational power to solve non-trivial quadratic programs. However, the results were justified empirically without rigorous analysis.
We put this into the context of natural algorithms and aim to investigate the algorithmic power of SNN. Especially, we emphasize on giving rigorous asymptotic analysis on the performance of SNN in solving optimization problems. To enforce a theoretical study, we first identify a simplified SNN model that is tractable for analysis. Next, we confirm the empirical observation in the work of Barrett et al. by giving an upper bound on the convergence rate of SNN in solving the quadratic program. Further, we observe that in the case where there are infinitely many optimal solutions, SNN tends to converge to the one with smaller l_1 norm. We give an affirmative answer to our finding by showing that SNN can solve the l_1 minimization problem under some regular conditions.
Our main technical insight is a dual view of the SNN dynamics, under which SNN can be viewed as a new natural primal-dual algorithm for the l_1 minimization problem. We believe that the dual view is of independent interest and may potentially find interesting interpretation in neuroscience.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.26/LIPIcs.ITCS.2019.26.pdf
Spiking Neural Networks
Natural Algorithms
l_1 Minimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
27:1
27:18
10.4230/LIPIcs.ITCS.2019.27
article
Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization
Daskalakis, Constantinos
1
Panageas, Ioannis
2
CSAIL, MIT, Cambridge MA, USA
ISTD, SUTD, Singapore
Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent work of Daskalakis et al. [Daskalakis et al., ICLR, 2018] and follow-up work of Liang and Stokes [Liang and Stokes, 2018] have established that a variant of the widely used Gradient Descent/Ascent procedure, called "Optimistic Gradient Descent/Ascent (OGDA)", exhibits last-iterate convergence to saddle points in unconstrained convex-concave min-max optimization problems. We show that the same holds true in the more general problem of constrained min-max optimization under a variant of the no-regret Multiplicative-Weights-Update method called "Optimistic Multiplicative-Weights Update (OMWU)". This answers an open question of Syrgkanis et al. [Syrgkanis et al., NIPS, 2015].
The proof of our result requires fundamentally different techniques from those that exist in no-regret learning literature and the aforementioned papers. We show that OMWU monotonically improves the Kullback-Leibler divergence of the current iterate to the (appropriately normalized) min-max solution until it enters a neighborhood of the solution. Inside that neighborhood we show that OMWU becomes a contracting map converging to the exact solution. We believe that our techniques will be useful in the analysis of the last iterate of other learning algorithms.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.27/LIPIcs.ITCS.2019.27.pdf
No regret learning
Zero-sum games
Convergence
Dynamical Systems
KL divergence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
28:1
28:20
10.4230/LIPIcs.ITCS.2019.28
article
Density Estimation for Shift-Invariant Multidimensional Distributions
De, Anindya
1
Long, Philip M.
2
Servedio, Rocco A.
3
EECS Department, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA
Department of Computer Science, Columbia University, 500 W. 120th Street, Room 450, New York, NY 10027, USA
We study density estimation for classes of shift-invariant distributions over R^d. A multidimensional distribution is "shift-invariant" if, roughly speaking, it is close in total variation distance to a small shift of it in any direction. Shift-invariance relaxes smoothness assumptions commonly used in non-parametric density estimation to allow jump discontinuities. The different classes of distributions that we consider correspond to different rates of tail decay.
For each such class we give an efficient algorithm that learns any distribution in the class from independent samples with respect to total variation distance. As a special case of our general result, we show that d-dimensional shift-invariant distributions which satisfy an exponential tail bound can be learned to total variation distance error epsilon using O~_d(1/ epsilon^{d+2}) examples and O~_d(1/ epsilon^{2d+2}) time. This implies that, for constant d, multivariate log-concave distributions can be learned in O~_d(1/epsilon^{2d+2}) time using O~_d(1/epsilon^{d+2}) samples, answering a question of [Diakonikolas et al., 2016]. All of our results extend to a model of noise-tolerant density estimation using Huber's contamination model, in which the target distribution to be learned is a (1-epsilon,epsilon) mixture of some unknown distribution in the class with some other arbitrary and unknown distribution, and the learning algorithm must output a hypothesis distribution with total variation distance error O(epsilon) from the target distribution. We show that our general results are close to best possible by proving a simple Omega (1/epsilon^d) information-theoretic lower bound on sample complexity even for learning bounded distributions that are shift-invariant.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.28/LIPIcs.ITCS.2019.28.pdf
Density estimation
unsupervised learning
log-concave distributions
non-parametrics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
29:1
29:18
10.4230/LIPIcs.ITCS.2019.29
article
From Local to Robust Testing via Agreement Testing
Dinur, Irit
1
Harsha, Prahladh
2
Kaufman, Tali
3
Ron-Zewi, Noga
4
Department of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, Israel
Tata Institute of Fundamental Research, India
Department of Computer Science, Bar-Ilan University, Ramat Gan, Israel
Department of Computer Science, University of Haifa, Haifa, Israel
A local tester for an error-correcting code is a probabilistic procedure that queries a small subset of coordinates, accepts codewords with probability one, and rejects non-codewords with probability proportional to their distance from the code. The local tester is robust if for non-codewords it satisfies the stronger property that the average distance of local views from accepting views is proportional to the distance from the code. Robust testing is an important component in constructions of locally testable codes and probabilistically checkable proofs as it allows for composition of local tests.
In this work we show that for certain codes, any (natural) local tester can be converted to a robust tester with roughly the same number of queries. Our result holds for the class of affine-invariant lifted codes, a broad class of codes that includes Reed-Muller codes, as well as recent constructions of high-rate locally testable codes (Guo, Kopparty, and Sudan, ITCS 2013). Instantiating this with known local testing results for lifted codes gives a more direct proof that improves some of the parameters of the main result of Guo, Haramaty, and Sudan (FOCS 2015), showing robustness of lifted codes.
To obtain the above transformation we relate the notions of local testing and robust testing to the notion of agreement testing that attempts to find out whether valid partial assignments can be stitched together to a global codeword. We first show that agreement testing implies robust testing, and then show that local testing implies agreement testing. Our proof is combinatorial, and is based on expansion / sampling properties of the collection of local views of local testers. Thus, it immediately applies to local testers of lifted codes that query random affine subspaces in F_q^m, and moreover seems amenable to extension to other families of locally testable codes with expanding families of local views.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.29/LIPIcs.ITCS.2019.29.pdf
Local testing
Robust testing
Agreement testing
Affine-invariant codes
Lifted codes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
30:1
30:17
10.4230/LIPIcs.ITCS.2019.30
article
Every Set in P Is Strongly Testable Under a Suitable Encoding
Dinur, Irit
1
Goldreich, Oded
1
Gur, Tom
2
Weizmann Institute, Rehovot, Israel
University of Warwick, UK
We show that every set in P is strongly testable under a suitable encoding. By "strongly testable" we mean having a (proximity oblivious) tester that makes a constant number of queries and rejects with probability that is proportional to the distance of the tested object from the property. By a "suitable encoding" we mean one that is polynomial-time computable and invertible. This result stands in contrast to the known fact that some sets in P are extremely hard to test, providing another demonstration of the crucial role of representation in the context of property testing.
The testing result is proved by showing that any set in P has a strong canonical PCP, where canonical means that (for yes-instances) there exists a single proof that is accepted with probability 1 by the system, whereas all other potential proofs are rejected with probability proportional to their distance from this proof. In fact, we show that UP equals the class of sets having strong canonical PCPs (of logarithmic randomness), whereas the class of sets having strong canonical PCPs with polynomial proof length equals "unambiguous-MA". Actually, for the testing result, we use a PCP-of-Proximity version of the foregoing notion and an analogous positive result (i.e., strong canonical PCPPs of logarithmic randomness for any set in UP).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.30/LIPIcs.ITCS.2019.30.pdf
Probabilistically checkable proofs
property testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
31:1
31:20
10.4230/LIPIcs.ITCS.2019.31
article
Alea Iacta Est: Auctions, Persuasion, Interim Rules, and Dice
Dughmi, Shaddin
1
Kempe, David
1
Qiang, Ruixin
1
University of Southern California, Los Angeles, CA, USA
To select a subset of samples or "winners" from a population of candidates, order sampling [Rosén, 1997] and the k-unit Myerson auction [Myerson, 1981] share a common scheme: assign a (random) score to each candidate, then select the k candidates with the highest scores. We study a generalization of both order sampling and Myerson's allocation rule, called winner-selecting dice. The setting for winner-selecting dice is similar to auctions with feasibility constraints: candidates have random types drawn from independent prior distributions, and the winner set must be feasible subject to certain constraints. Dice (distributions over scores) are assigned to each type, and winners are selected to maximize the sum of the dice rolls, subject to the feasibility constraints. We examine the existence of winner-selecting dice that implement prescribed probabilities of winning (i.e., an interim rule) for all types.
Our first result shows that when the feasibility constraint is a matroid, then for any feasible interim rule, there always exist winner-selecting dice that implement it. Unfortunately, our proof does not yield an efficient algorithm for constructing the dice. In the special case of a 1-uniform matroid, i.e., only one winner can be selected, we give an efficient algorithm that constructs winner-selecting dice for any feasible interim rule. Furthermore, when the types of the candidates are drawn in an i.i.d. manner and the interim rule is symmetric across candidates, unsurprisingly, an algorithm can efficiently construct symmetric dice that only depend on the type but not the identity of the candidate.
One may ask whether we can extend our result to "second-order" interim rules, which not only specify the winning probability of a type, but also the winning probability conditioning on each other candidate's type. We show that our result does not extend, by exhibiting an instance of Bayesian persuasion whose optimal scheme is equivalent to a second-order interim rule, but which does not admit any dice-based implementation.
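A toy sketch of the single-winner (1-uniform matroid) setting: each realized type rolls its assigned die and the highest score wins. The specific dice below are hypothetical; constructing dice that implement a prescribed interim rule is the subject of the paper.

    import random

    dice = {                                  # type -> list of (score, prob)
        "high": [(3, 0.5), (1, 0.5)],
        "low":  [(2, 0.5), (0, 0.5)],
    }

    def roll(die):
        r, acc = random.random(), 0.0
        for score, p in die:
            acc += p
            if r < acc:
                return score
        return die[-1][0]

    types = [random.choice(["high", "low"]) for _ in range(5)]  # i.i.d. priors
    # Highest roll wins; ties broken uniformly at random.
    winner = max(range(5), key=lambda i: (roll(dice[types[i]]), random.random()))
    print(types, "-> winner:", winner)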
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.31/LIPIcs.ITCS.2019.31.pdf
Interim rule
order sampling
virtual value function
Border's theorem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
32:1
32:20
10.4230/LIPIcs.ITCS.2019.32
article
Spanoids - An Abstraction of Spanning Structures, and a Barrier for LCCs
Dvir, Zeev
1
Gopi, Sivakanth
2
Gu, Yuzhou
3
Wigderson, Avi
4
Dept of Computer Science and Dept of Mathematics, Princeton University, Princeton, NJ, USA
Microsoft Research, Redmond, WA, USA
MIT, Cambridge, MA, USA
Institute of Advanced Study, Princeton, NJ, USA
We introduce a simple logical inference structure we call a spanoid (generalizing the notion of a matroid), which captures well-studied problems in several areas. These include combinatorial geometry (point-line incidences), algebra (arrangements of hypersurfaces and ideals), statistical physics (bootstrap percolation), network theory (gossip / infection processes) and coding theory. We initiate a thorough investigation of spanoids, from computational and structural viewpoints, focusing on parameters relevant to the application areas above and, in particular, to questions regarding Locally Correctable Codes (LCCs).
One central parameter we study is the rank of a spanoid, extending the rank of a matroid and related to the dimension of codes. This leads to one main application of our work, establishing the first known barrier to improving the nearly 20-year-old bound of Katz-Trevisan (KT) on the dimension of LCCs. On the one hand, we prove that the KT bound (and its more recent refinements) holds for the much more general setting of spanoid rank. On the other hand, we show that there exist (random) spanoids whose rank matches these bounds. Thus, to significantly improve the known bounds one must step out of the spanoid framework.
Another parameter we explore is the functional rank of a spanoid, which captures the possibility of turning a given spanoid into an actual code. The question of the relationship between rank and functional rank is one of the main questions we raise as it may reveal new avenues for constructing new LCCs (perhaps even matching the KT bound). As a first step, we develop an entropy relaxation of functional rank to create a small constant gap and amplify it by tensoring to construct a spanoid whose functional rank is smaller than rank by a polynomial factor. This is evidence that the entropy method we develop can prove polynomially better bounds than KT-type methods on the dimension of LCCs.
To facilitate the above results we also develop some basic structural results on spanoids including an equivalent formulation of spanoids as set systems and properties of spanoid products. We feel that given these initial findings and their motivations, the abstract study of spanoids merits further investigation. We leave plenty of concrete open problems and directions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.32/LIPIcs.ITCS.2019.32.pdf
Locally correctable codes
spanoids
entropy
bootstrap percolation
gossip spreading
matroid
union-closed family
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
33:1
33:20
10.4230/LIPIcs.ITCS.2019.33
article
Fairness Under Composition
Dwork, Cynthia
1
Ilvento, Christina
2
Harvard John A Paulson School of Engineering and Applied Science, Radcliffe Institute for Advanced Study, Cambridge, MA, USA
Harvard John A Paulson School of Engineering and Applied Science, Cambridge, MA, USA
Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law. Much of the literature considers the case of a single classifier (or scoring function) used once, in isolation. In this work, we initiate the study of the fairness properties of systems composed of algorithms that are fair in isolation; that is, we study fairness under composition. We identify pitfalls of naïve composition and give general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and also that seemingly unfair components may be carefully combined to construct fair systems. We focus primarily on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], but also extend our results to a large class of group fairness definitions popular in the recent literature, exhibiting several cases in which group fairness definitions give misleading signals under composition.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.33/LIPIcs.ITCS.2019.33.pdf
algorithmic fairness
fairness
fairness under composition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
34:1
34:12
10.4230/LIPIcs.ITCS.2019.34
article
A Log-Sobolev Inequality for the Multislice, with Applications
Filmus, Yuval
1
O'Donnell, Ryan
2
Wu, Xinyu
2
Department of Computer Science, Technion - Israel Institute of Technology, Haifa, Israel
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
Let kappa in N_+^l satisfy kappa_1 + ... + kappa_l = n, and let U_kappa denote the multislice of all strings u in [l]^n having exactly kappa_i coordinates equal to i, for all i in [l]. Consider the Markov chain on U_kappa where a step is a random transposition of two coordinates of u. We show that the log-Sobolev constant rho_kappa for the chain satisfies rho_kappa^{-1} <= n * sum_{i=1}^l 1/2 log_2(4n/kappa_i), which is sharp up to constants whenever l is constant. From this, we derive some consequences for small-set expansion and isoperimetry in the multislice, including a KKL Theorem, a Kruskal - Katona Theorem for the multislice, a Friedgut Junta Theorem, and a Nisan - Szegedy Theorem.
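As a quick sanity check of the stated bound, instantiating it on the balanced Boolean slice (l = 2, kappa = (n/2, n/2)) gives rho_kappa^{-1} <= 3n, i.e., a log-Sobolev constant of order 1/n. A small evaluation script (illustrative only):

    from math import log2

    def lsi_inverse_upper_bound(kappa):
        # Evaluates n * sum_i (1/2) * log2(4n / kappa_i) from the abstract.
        n = sum(kappa)
        return n * sum(0.5 * log2(4 * n / k) for k in kappa)

    n = 100
    print(lsi_inverse_upper_bound([n // 2, n // 2]))  # 3n = 300.0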
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.34/LIPIcs.ITCS.2019.34.pdf
log-Sobolev inequality
small-set expansion
conductance
hypercontractivity
Fourier analysis
representation theory
Markov chains
combinatorics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
35:1
35:13
10.4230/LIPIcs.ITCS.2019.35
article
Cubic Formula Size Lower Bounds Based on Compositions with Majority
Gál, Anna
1
Tal, Avishay
2
Trejo Nuñez, Adrian
1
https://orcid.org/0000-0002-5658-9956
The University of Texas at Austin, Austin, TX, USA
Stanford University, Palo Alto, CA, USA
We define new functions based on the Andreev function and prove that they require n^{3}/polylog(n) formula size to compute. The functions we consider are generalizations of the Andreev function using compositions with the majority function. Our arguments apply to composing a hard function with any function that agrees with the majority function (or its negation) on the middle slices of the Boolean cube, as well as iterated compositions of such functions. As a consequence, we obtain n^{3}/polylog(n) lower bounds on the (non-monotone) formula size of an explicit monotone function by combining the monotone address function with the majority function.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.35/LIPIcs.ITCS.2019.35.pdf
formula lower bounds
random restrictions
KRW conjecture
composition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
36:1
36:14
10.4230/LIPIcs.ITCS.2019.36
article
The Space Complexity of Mirror Games
Garg, Sumegha
1
Schneider, Jon
2
Princeton University, Princeton, USA
Google Research, New York, USA
We consider the following game between two players Alice and Bob, which we call the mirror game. Alice and Bob take turns saying numbers belonging to the set {1, 2, ..., N}. A player loses if they repeat a number that has already been said. Otherwise, after N turns, when all the numbers have been spoken, both players win. When N is even, Bob, who goes second, has a very simple (and memoryless) strategy to avoid losing: whenever Alice says x, respond with N+1-x. The question is: does Alice have a similarly simple strategy to win that avoids remembering all the numbers said by Bob?
The answer is no. We prove a linear lower bound on the space complexity of any deterministic winning strategy of Alice. Interestingly, this follows as a consequence of the Eventown-Oddtown theorem from extremal combinatorics. We additionally demonstrate a randomized strategy for Alice that wins with high probability that requires only O~(sqrt N) space (provided that Alice has access to a random matching on K_N).
We also investigate lower bounds for a generalized mirror game where, in each turn, Alice says 1 number and Bob says b numbers. When 1+b is a prime, our linear lower bounds continue to hold, but when 1+b is composite, we show that the existence of a o(N) space strategy for Bob (when N != 0 mod (1+b)) implies the existence of exponential-sized matching vector families over Z^N_{1+b}.
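Bob's memoryless mirror strategy from the abstract, simulated for even N against an arbitrary legal sequence of moves by Alice (Alice's move choice below is just illustrative):

    import random

    N = 10
    said = set()
    alice_pool = list(range(1, N + 1))
    random.shuffle(alice_pool)
    for _ in range(N // 2):
        x = next(a for a in alice_pool if a not in said)  # a legal Alice move
        said.add(x)
        y = N + 1 - x                                     # Bob mirrors
        assert y not in said                              # Bob never repeats
        said.add(y)
    print(sorted(said))  # all of 1..N spoken: both players win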
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.36/LIPIcs.ITCS.2019.36.pdf
Mirror Games
Space Complexity
Eventown-Oddtown
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
37:1
37:19
10.4230/LIPIcs.ITCS.2019.37
article
The Subgraph Testing Model
Goldreich, Oded
1
Ron, Dana
2
Department of Computer Science, Weizmann Institute of Science, Rehovot, Israel
School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
We initiate a study of testing properties of graphs that are presented as subgraphs of a fixed (or an explicitly given) graph. The tester is given free access to a base graph G=([n],E), and oracle access to a function f:E -> {0,1} that represents a subgraph of G. The tester is required to distinguish between subgraphs that possess a predetermined property and subgraphs that are far from possessing this property.
We focus on bounded-degree base graphs and on the relation between testing graph properties in the subgraph model and testing the same properties in the bounded-degree graph model. We identify cases in which testing is significantly easier in one model than in the other as well as cases in which testing has approximately the same complexity in both models. Our proofs are based on the design and analysis of efficient testers and on the establishment of query-complexity lower bounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.37/LIPIcs.ITCS.2019.37.pdf
Property Testing
Graph Properties
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
38:1
38:19
10.4230/LIPIcs.ITCS.2019.38
article
Adventures in Monotone Complexity and TFNP
Göös, Mika
1
Kamath, Pritish
2
Robere, Robert
3
Sokolov, Dmitry
4
Institute for Advanced Study, Princeton, NJ, USA
Massachusetts Institute of Technology, Cambridge, MA, USA
Simons Institute, Berkeley, CA, USA
KTH Royal Institute of Technology, Stockholm, Sweden
Separations: We introduce a monotone variant of Xor-Sat and show it has exponential monotone circuit complexity. Since Xor-Sat is in NC^2, this improves qualitatively on the monotone vs. non-monotone separation of Tardos (1988). We also show that monotone span programs over R can be exponentially more powerful than over finite fields. These results can be interpreted as separating subclasses of TFNP in communication complexity.
Characterizations: We show that the communication (resp. query) analogue of PPA (subclass of TFNP) captures span programs over F_2 (resp. Nullstellensatz degree over F_2). Previously, it was known that communication FP captures formulas (Karchmer - Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.38/LIPIcs.ITCS.2019.38.pdf
TFNP
Monotone Complexity
Communication Complexity
Proof Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
39:1
39:19
10.4230/LIPIcs.ITCS.2019.39
article
Algorithmic Polarization for Hidden Markov Models
Guruswami, Venkatesan
1
Nakkiran, Preetum
2
Sudan, Madhu
2
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge, MA 02138, USA
Using a mild variant of polar codes, we design linear compression schemes for compressing hidden Markov sources (where the source is a Markov chain, but whose state is not necessarily observable from its output) and decoding schemes for hidden Markov channels (where the channel has a state and the error introduced depends on the state). We give the first polynomial time algorithms that manage to compress and decompress (or encode and decode) at input lengths that are polynomial both in the gap to capacity and the mixing time of the Markov chain. Prior work achieved capacity only asymptotically in the limit of large lengths, and polynomial bounds were not available with respect to either the gap to capacity or mixing time. Our results operate in the setting where the source (or the channel) is known. If the source is unknown then compression at such short lengths would lead to effective algorithms for learning parity with noise - thus our results are the first to suggest a separation between the complexity of the problem when the source is known versus when it is unknown.
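For reference, one standard recursive formulation of the basic polar (Arikan) transform x = u F^{(x) n} over GF(2), where F = [[1,0],[1,1]] and the bit-reversal permutation is omitted; the paper's coding schemes build on a mild variant of this primitive:

    def polar_transform(u):
        # Recursively applies the kernel (a, b) -> (a XOR b, b) over halves.
        n = len(u)
        if n == 1:
            return list(u)
        half = n // 2
        top = polar_transform([a ^ b for a, b in zip(u[:half], u[half:])])
        bot = polar_transform(u[half:])
        return top + bot

    print(polar_transform([1, 0, 1, 1]))  # [1, 1, 0, 1]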
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.39/LIPIcs.ITCS.2019.39.pdf
polar codes
error-correcting codes
compression
hidden Markov model
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
40:1
40:16
10.4230/LIPIcs.ITCS.2019.40
article
On the Communication Complexity of Key-Agreement Protocols
Haitner, Iftach
1
Mazor, Noam
1
Oshman, Rotem
1
Reingold, Omer
2
Yehudayoff, Amir
3
The Blavatnik school of computer science, Tel Aviv University, Israel
Computer Science Department, Stanford University, USA
Department of Mathematics, Technion-Israel Institute of Technology, Israel
Key-agreement protocols whose security is proven in the random oracle model are an important alternative to protocols based on public-key cryptography. In the random oracle model, the parties and the eavesdropper have access to a shared random function (an "oracle"), but the parties are limited in the number of queries they can make to the oracle. The random oracle serves as an abstraction for black-box access to a symmetric cryptographic primitive, such as a collision-resistant hash. Unfortunately, as shown by Impagliazzo and Rudich [STOC '89] and Barak and Mahmoody [Crypto '09], such protocols can only guarantee limited secrecy: the key of any l-query protocol can be revealed by an O(l^2)-query adversary. This quadratic gap between the query complexity of the honest parties and the eavesdropper matches the gap obtained by Merkle's Puzzles protocol [Merkle, CACM '78].
In this work we tackle a new aspect of key-agreement protocols in the random oracle model: their communication complexity. In Merkle's Puzzles, to obtain secrecy against an eavesdropper that makes roughly l^2 queries, the honest parties need to exchange Omega(l) bits. We show that for protocols with certain natural properties, ones that Merkle's Puzzle has, such high communication is unavoidable. Specifically, this is the case if the honest parties' queries are uniformly random, or alternatively if the protocol uses non-adaptive queries and has only two rounds. Our proof for the first setting uses a novel reduction from the set-disjointness problem in two-party communication complexity. For the second setting we prove the lower bound directly, using information-theoretic arguments.
Understanding the communication complexity of protocols whose security is proven (in the random-oracle model) is an important question in the study of practical protocols. Our results and proof techniques are a first step in this direction.
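A toy version of the Merkle's Puzzles trade-off discussed above, with SHA-256 standing in for the random oracle: each party makes l oracle queries into a domain of size l^2, Alice's message is Omega(l) hashes long, and the parties agree on a key whenever a birthday collision occurs (which happens with constant probability here; illustrative only):

    import hashlib
    import random

    def oracle(x):  # stand-in for the shared random oracle
        return hashlib.sha256(str(x).encode()).hexdigest()

    l = 100
    domain = l * l
    alice = random.sample(range(domain), l)
    bob = random.sample(range(domain), l)
    # Alice's message: the oracle values of her queries (Omega(l) communication).
    transcript = {oracle(x): x for x in alice}
    common = [x for x in bob if oracle(x) in transcript]
    if common:
        key = oracle(common[0] + domain)  # derive the key from the shared point
        print("agreed on key", key[:16])
    else:
        print("no collision this run; retry")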
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.40/LIPIcs.ITCS.2019.40.pdf
key agreement
random oracle
communication complexity
Merkle's puzzles
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
41:1
41:6
10.4230/LIPIcs.ITCS.2019.41
article
The Paulsen Problem Made Simple
Hamilton, Linus
1
Moitra, Ankur
1
Massachusetts Institute of Technology, 77 Massachusetts Ave, USA
The Paulsen problem is a basic problem in operator theory that was resolved in a recent tour-de-force work of Kwok, Lau, Lee and Ramachandran. In particular, they showed that every epsilon-nearly equal norm Parseval frame in d dimensions is within squared distance O(epsilon d^{13/2}) of an equal norm Parseval frame. We give a dramatically simpler proof based on the notion of radial isotropic position, and along the way show an improved bound of O(epsilon d^2).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.41/LIPIcs.ITCS.2019.41.pdf
radial isotropic position
operator scaling
Paulsen problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
42:1
42:20
10.4230/LIPIcs.ITCS.2019.42
article
How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts
Horel, Thibaut
1
Park, Sunoo
2
Richelson, Silas
3
Vaikuntanathan, Vinod
2
Harvard University, Cambridge, MA, USA
MIT, Cambridge, MA, USA
University of California, Riverside, CA, USA
In this work, we examine the feasibility of secure and undetectable point-to-point communication when an adversary (e.g., a government) can read all encrypted communications of surveillance targets. We consider a model where the only permitted method of communication is via a government-mandated encryption scheme, instantiated with government-mandated keys. Parties cannot simply encrypt ciphertexts of some other encryption scheme, because citizens caught trying to communicate outside the government's knowledge (e.g., by encrypting strings which do not appear to be natural language plaintexts) will be arrested. The one guarantee we suppose is that the government mandates an encryption scheme which is semantically secure against outsiders: a perhaps reasonable supposition when a government might consider it advantageous to secure its people's communication against foreign entities. But then, what good is semantic security against an adversary that holds all the keys and has the power to decrypt?
We show that even in the pessimistic scenario described, citizens can communicate securely and undetectably. In our terminology, this translates to a positive statement: all semantically secure encryption schemes support subliminal communication. Informally, this means that there is a two-party protocol between Alice and Bob where the parties exchange ciphertexts of what appears to be a normal conversation even to someone who knows the secret keys and thus can read the corresponding plaintexts. And yet, at the end of the protocol, Alice will have transmitted her secret message to Bob. Our security definition requires that the adversary not be able to tell whether Alice and Bob are just having a normal conversation using the mandated encryption scheme, or they are using the mandated encryption scheme for subliminal communication.
Our topics may be thought to fall broadly within the realm of steganography. However, we deal with the non-standard setting of an adversarially chosen distribution of cover objects (i.e., a stronger-than-usual adversary), and we take advantage of the fact that our cover objects are ciphertexts of a semantically secure encryption scheme to bypass impossibility results which we show for broader classes of steganographic schemes. We give several constructions of subliminal communication schemes under the assumption that key exchange protocols with pseudorandom messages exist (such as Diffie-Hellman, which in fact has truly random messages).
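One classical rejection-sampling trick in this spirit (a generic steganographic technique, not necessarily the paper's construction): to embed a bit, re-sample your next legitimately random ciphertext until a public hash of it equals that bit. Against a key-holding observer this can only help because honest ciphertexts retain sampleable randomness; os.urandom below is a stand-in for real encryption:

    import hashlib
    import os

    def embed_bit(b, sample_ciphertext):
        while True:  # expected 2 samples per embedded bit
            c = sample_ciphertext()
            if hashlib.sha256(c).digest()[0] & 1 == b:
                return c

    def extract_bit(c):
        return hashlib.sha256(c).digest()[0] & 1

    ct = embed_bit(1, lambda: os.urandom(32))  # hypothetical ciphertext sampler
    print(extract_bit(ct))  # 1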
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.42/LIPIcs.ITCS.2019.42.pdf
Backdoored Encryption
Steganography
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
43:1
43:17
10.4230/LIPIcs.ITCS.2019.43
article
On Integer Programming and Convolution
Jansen, Klaus
1
Rohwedder, Lars
1
Department of Computer Science, Kiel University, Kiel, Germany
Integer programs with a constant number of constraints are solvable in pseudo-polynomial time. We give a new algorithm with a better pseudo-polynomial running time than previous results. Moreover, we establish a strong connection to the (min, +)-convolution problem. (min, +)-convolution has a trivial quadratic time algorithm and it has been conjectured that this cannot be improved significantly. We show that further improvements to our pseudo-polynomial algorithm for any fixed number of constraints are equivalent to improvements for (min, +)-convolution. This is strong evidence that our algorithm's running time is the best possible. We also present a faster specialized algorithm for testing feasibility of an integer program with few constraints and for this we also give a tight lower bound, which is based on the SETH.
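The trivial quadratic-time (min, +)-convolution referred to above, which the conjecture asserts is essentially optimal:

    def min_plus_convolution(a, b):
        # c[k] = min over i + j = k of a[i] + b[j]
        c = [float("inf")] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] = min(c[i + j], ai + bj)
        return c

    print(min_plus_convolution([0, 2, 5], [0, 3, 4]))  # [0, 2, 4, 6, 9]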
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.43/LIPIcs.ITCS.2019.43.pdf
Integer programming
convolution
dynamic programming
SETH
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
44:1
44:19
10.4230/LIPIcs.ITCS.2019.44
article
Empowering the Configuration-IP - New PTAS Results for Scheduling with Setup Times
Jansen, Klaus
1
Klein, Kim-Manuel
1
Maack, Marten
1
Rau, Malin
1
Department of Computer Science, Kiel University, Kiel, Germany
Integer linear programs of configurations, or configuration IPs, are a classical tool in the design of algorithms for scheduling and packing problems, where a set of items has to be placed in multiple target locations. Herein, a configuration describes a possible placement on one of the target locations, and the IP is used to choose suitable configurations covering the items. We give an augmented IP formulation, which we call the module configuration IP. It can be described within the framework of n-fold integer programming and therefore be solved efficiently. As an application, we consider scheduling problems with setup times, in which a set of jobs has to be scheduled on a set of identical machines, with the objective of minimizing the makespan. For instance, we investigate the case in which jobs can be split and scheduled on multiple machines. However, before a part of a job can be processed, an uninterrupted job-dependent setup has to be paid. For both variants - that job parts can be executed in parallel or not - we obtain an efficient polynomial time approximation scheme (EPTAS) with running time f(1/epsilon) x poly(|I|), with a single exponential term in f for the first case and a double exponential one for the second. Previously, only constant factor approximations of 5/3 and 4/3 + epsilon, respectively, were known. Furthermore, we present an EPTAS for a problem variant where classes of (non-splittable) jobs are given, and a setup has to be paid for each class of jobs being executed on one machine.
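For orientation, the classical (unaugmented) configuration IP for such placement problems can be written as follows, where a_{C,s} counts how often item size s occurs in configuration C and n_s is the number of items of size s; the paper's module configuration IP augments this formulation:

    \min \sum_{C \in \mathcal{C}} x_C
    \quad \text{subject to} \quad
    \sum_{C \in \mathcal{C}} a_{C,s}\, x_C \ge n_s \ \ \text{for each size } s,
    \qquad x_C \in \mathbb{Z}_{\ge 0}.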
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.44/LIPIcs.ITCS.2019.44.pdf
Parallel Machines
Setup Time
EPTAS
n-fold integer programming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
45:1
45:14
10.4230/LIPIcs.ITCS.2019.45
article
Being Corrupt Requires Being Clever, But Detecting Corruption Doesn't
Jin, Yan
1
Mossel, Elchanan
1
Ramnarayan, Govind
1
MIT, 77 Massachusetts Ave, MA, USA
We consider a variation of the problem of corruption detection on networks posed by Alon, Mossel, and Pemantle '15. In this model, each vertex of a graph can be either truthful or corrupt. Each vertex reports about the types (truthful or corrupt) of all its neighbors to a central agency, where truthful nodes report the true types they see and corrupt nodes report adversarially. The central agency aggregates these reports and attempts to find a single truthful node. Inspired by real auditing networks, we pose our problem for arbitrary graphs and consider corruption through a computational lens. We identify a key combinatorial parameter of the graph m(G), which is the minimal number of corrupted agents needed to prevent the central agency from identifying a single corrupt node. We give an efficient (in fact, linear time) algorithm for the central agency to identify a truthful node that is successful whenever the number of corrupt nodes is less than m(G)/2. On the other hand, we prove that for any constant alpha > 1, it is NP-hard to find a subset of nodes S in G such that corrupting S prevents the central agency from finding one truthful node and |S| <= alpha m(G), assuming the Small Set Expansion Hypothesis (Raghavendra and Steurer, STOC '10). We conclude that being corrupt requires being clever, while detecting corruption does not.
Our main technical insight is a relation between the minimum number of corrupt nodes required to hide all truthful nodes and a certain notion of vertex separability for the underlying graph. Additionally, this insight lets us design an efficient algorithm for a corrupt party to decide which graphs require the fewest corrupted nodes, up to a multiplicative factor of O(log n).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.45/LIPIcs.ITCS.2019.45.pdf
Corruption detection
PMC Model
Small Set Expansion
Hardness of Approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
46:1
46:15
10.4230/LIPIcs.ITCS.2019.46
article
Simulating Random Walks on Graphs in the Streaming Model
Jin, Ce
1
Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
We study the problem of approximately simulating a t-step random walk on a graph where the input edges come from a single-pass stream. The straightforward algorithm using reservoir sampling needs O(nt) words of memory. We show that this space complexity is near-optimal for directed graphs. For undirected graphs, we prove an Omega(n sqrt{t})-bit space lower bound, and give a near-optimal algorithm using O(n sqrt{t}) words of space with 2^{-Omega(sqrt{t})} simulation error (defined as the l_1-distance between the output distribution of the simulation algorithm and the distribution of perfect random walks). We also discuss extending the algorithms to the turnstile model, where both insertion and deletion of edges can appear in the input stream.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.46/LIPIcs.ITCS.2019.46.pdf
streaming models
random walks
sampling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
47:1
47:14
10.4230/LIPIcs.ITCS.2019.47
article
On the Complexity of Symmetric Polynomials
Bläser, Markus
1
Jindal, Gorav
2
Department of Computer Science, Saarland University, Saarland Informatics Campus, Saarbrücken, Germany
Department of Computer Science, Aalto University, Espoo, Finland
The fundamental theorem of symmetric polynomials states that for a symmetric polynomial f_{Sym} in C[x_1,x_2,...,x_n], there exists a unique "witness" f in C[y_1,y_2,...,y_n] such that f_{Sym}=f(e_1,e_2,...,e_n), where the e_i's are the elementary symmetric polynomials.
In this paper, we study the arithmetic complexity L(f) of the witness f as a function of the arithmetic complexity L(f_{Sym}) of f_{Sym}. We show that the arithmetic complexity L(f) of f is bounded by poly(L(f_{Sym}),deg(f),n). To the best of our knowledge, prior to this work only exponential upper bounds were known for L(f). The main ingredient in our result is an algebraic analogue of Newton's iteration on power series. As a corollary of this result, we show that if VP != VNP then there exist symmetric polynomial families which have super-polynomial arithmetic complexity.
Furthermore, we study the complexity of testing whether a function is symmetric. For polynomials, this question is equivalent to arithmetic circuit identity testing. In contrast to this, we show that it is hard for Boolean functions.
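A worked instance of the fundamental theorem stated above, for n = 2: the witness of f_Sym = x1^2 + x2^2 is f(y1, y2) = y1^2 - 2 y2, since e1 = x1 + x2 and e2 = x1 x2. A quick check in sympy:

    import sympy as sp

    x1, x2 = sp.symbols("x1 x2")
    e1, e2 = x1 + x2, x1 * x2
    f_sym = x1**2 + x2**2
    witness_applied = e1**2 - 2 * e2   # f(e1, e2) with f(y1, y2) = y1^2 - 2*y2
    print(sp.expand(witness_applied - f_sym))  # 0: confirms f_Sym = f(e1, e2)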
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.47/LIPIcs.ITCS.2019.47.pdf
Symmetric Polynomials
Arithmetic Circuits
Arithmetic Complexity
Power Series
Elementary Symmetric Polynomials
Newton's Iteration
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
48:1
48:15
10.4230/LIPIcs.ITCS.2019.48
article
The Orthogonal Vectors Conjecture for Branching Programs and Formulas
Kane, Daniel M.
1
Williams, Richard Ryan
2
https://orcid.org/0000-0003-2326-2233
CSE and Mathematics, UC San Diego, La Jolla CA, USA
EECS and CSAIL, MIT, 32 Vassar St., Cambridge MA, USA
In the Orthogonal Vectors (OV) problem, we wish to determine if there is an orthogonal pair of vectors among n Boolean vectors in d dimensions. The OV Conjecture (OVC) posits that OV requires n^{2-o(1)} time to solve, for all d=omega(log n). Assuming the OVC, optimal time lower bounds have been proved for many prominent problems in P, such as Edit Distance, Frechet Distance, Longest Common Subsequence, and approximating the diameter of a graph.
We prove that OVC is true in several computational models of interest:
- For all sufficiently large n and d, OV for n vectors in {0,1}^d has branching program complexity Theta~(n * min(n,2^d)). In particular, the lower and upper bounds match up to polylog factors.
- OV has Boolean formula complexity Theta~(n * min(n,2^d)), over all complete bases of O(1) fan-in.
- OV requires Theta~(n * min(n,2^d)) wires, in formulas comprised of gates computing arbitrary symmetric functions of unbounded fan-in.
Our lower bounds basically match the best known (quadratic) lower bounds for any explicit function in those models. Analogous lower bounds hold for many related problems shown to be hard under OVC, such as Batch Partial Match, Batch Subset Queries, and Batch Hamming Nearest Neighbors, all of which have very succinct reductions to OV.
The proofs use a certain kind of input restriction that is different from typical random restrictions where variables are assigned independently. We give a sense in which independent random restrictions cannot be used to show hardness, in that OVC is false in the "average case" even for AC^0 formulas:
For all p in (0,1) there is a delta_p > 0 such that for every n and d, OV instances with input bits independently set to 1 with probability p (and 0 otherwise) can be solved with AC^0 formulas of O(n^{2-delta_p}) size, on all but a o_n(1) fraction of instances. Moreover, lim_{p -> 1} delta_p = 1.
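For concreteness, the naive quadratic-time algorithm for OV, whose essential optimality for d = omega(log n) is exactly what OVC asserts:

    def has_orthogonal_pair(vectors):
        n = len(vectors)
        for i in range(n):
            for j in range(i + 1, n):
                if all(a & b == 0 for a, b in zip(vectors[i], vectors[j])):
                    return True  # found an orthogonal pair
        return False

    print(has_orthogonal_pair([(1, 0, 1), (0, 1, 0), (1, 1, 0)]))  # True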
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.48/LIPIcs.ITCS.2019.48.pdf
fine-grained complexity
orthogonal vectors
branching programs
symmetric functions
Boolean formulas
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
49:1
49:21
10.4230/LIPIcs.ITCS.2019.49
article
SOS Lower Bounds with Hard Constraints: Think Global, Act Local
Kothari, Pravesh K.
1
O'Donnell, Ryan
2
Schramm, Tselil
3
Department of Computer Science, Princeton University and Institute for Advanced Study, Princeton, USA
Department of Computer Science, Carnegie Mellon University, Pittsburgh, USA
Department of Computer Science, Harvard and MIT, Cambridge, USA
Many previous Sum-of-Squares (SOS) lower bounds for CSPs had two deficiencies related to global constraints. First, they were not able to support a "cardinality constraint", as in, say, the Min-Bisection problem. Second, while the pseudoexpectation of the objective function was shown to have some value beta, it did not necessarily actually "satisfy" the constraint "objective = beta". In this paper we show how to remedy both deficiencies in the case of random CSPs, by translating global constraints into local constraints. Using these ideas, we also show that degree-Omega(sqrt{n}) SOS does not provide a (4/3 - epsilon)-approximation for Min-Bisection, and degree-Omega(n) SOS does not provide a (11/12 + epsilon)-approximation for Max-Bisection or a (5/4 - epsilon)-approximation for Min-Bisection. No prior SOS lower bounds for these problems were known.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.49/LIPIcs.ITCS.2019.49.pdf
sum-of-squares hierarchy
random constraint satisfaction problems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
50:1
50:20
10.4230/LIPIcs.ITCS.2019.50
article
Semi-Online Bipartite Matching
Kumar, Ravi
1
Purohit, Manish
1
Schild, Aaron
2
Svitkina, Zoya
1
Vee, Erik
1
Google, Mountain View, CA, USA
University of California, Berkeley, CA, USA
In this paper we introduce the semi-online model that generalizes the classical online computational model. The semi-online model postulates that the unknown future has a predictable part and an adversarial part; these parts can be arbitrarily interleaved. An algorithm in this model operates as in the standard online model, i.e., makes an irrevocable decision at each step.
We consider bipartite matching in the semi-online model. Our main contributions are competitive algorithms for this problem and a near-matching hardness bound. The competitive ratio of the algorithms nicely interpolates between the truly offline setting (i.e., no adversarial part) and the truly online setting (i.e., no predictable part).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.50/LIPIcs.ITCS.2019.50.pdf
Semi-Online Algorithms
Bipartite Matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
51:1
51:21
10.4230/LIPIcs.ITCS.2019.51
article
Strategies for Quantum Races
Lee, Troy
1
Ray, Maharshi
2
Santha, Miklos
3
4
Centre for Quantum Software and Information, School of Software, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
Centre for Quantum Technologies, National University of Singapore, Singapore
IRIF, Univ. Paris Diderot, CNRS, 75205 Paris, France
and, Centre for Quantum Technologies and MajuLab, National University of Singapore, Singapore 117543
We initiate the study of quantum races, games where two or more quantum computers compete to solve a computational problem. While the problem of dueling algorithms has been studied for classical deterministic algorithms [Immorlica et al., 2011], the quantum case presents additional sources of uncertainty for the players. The foremost among these is that players do not know if they have solved the problem until they measure their quantum state. This question of "when to measure?" presents a very interesting strategic problem. We develop a game-theoretic model of a multiplayer quantum race, and find an approximate Nash equilibrium where all players play the same strategy. In the two-party case, we further show that this strategy is nearly optimal in terms of payoff among all symmetric Nash equilibria. A key role in our analysis of quantum races is played by a more tractable version of the game where there is no payout on a tie; for such races we completely characterize the Nash equilibria in the two-party case.
One application of our results is to the stability of the Bitcoin protocol when mining is done by quantum computers. Bitcoin mining is a race to solve a computational search problem, with the winner gaining the right to create a new block. Our results inform the strategies that eventual quantum miners should use, and also indicate that the collision probability - the probability that two miners find a new block at the same time - would not be too high in the case of quantum miners. Such collisions are undesirable as they lead to forking of the Bitcoin blockchain.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.51/LIPIcs.ITCS.2019.51.pdf
Game theory
Bitcoin mining
Quantum computing
Convex optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
52:1
52:20
10.4230/LIPIcs.ITCS.2019.52
article
Lower Bounds for Tolerant Junta and Unateness Testing via Rejection Sampling of Graphs
Levi, Amit
1
Waingarten, Erik
2
University of Waterloo, Canada
Columbia University, USA
We introduce a new model for testing graph properties which we call the rejection sampling model. We show that testing bipartiteness of n-node graphs using rejection sampling queries requires complexity Omega~(n^2). Via reductions from the rejection sampling model, we give three new lower bounds for tolerant testing of Boolean functions of the form f : {0,1}^n -> {0,1}:
- Tolerant k-junta testing with non-adaptive queries requires Omega~(k^2) queries.
- Tolerant unateness testing requires Omega~(n) queries.
- Tolerant unateness testing with non-adaptive queries requires Omega~(n^{3/2}) queries.
Given the O~(k^{3/2})-query non-adaptive junta tester of Blais [Eric Blais, 2008], we conclude that non-adaptive tolerant junta testing requires more queries than non-tolerant junta testing. In addition, given the O~(n^{3/4})-query unateness tester of Chen, Waingarten, and Xie [Xi Chen et al., 2017] and the O~(n)-query non-adaptive unateness tester of Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri [Roksana Baleshzar et al., 2017], we conclude that tolerant unateness testing requires more queries than non-tolerant unateness testing, in both adaptive and non-adaptive settings. These lower bounds provide the first separation between tolerant and non-tolerant testing for a natural property of Boolean functions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.52/LIPIcs.ITCS.2019.52.pdf
Property Testing
Juntas
Tolerant Testing
Boolean functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
53:1
53:20
10.4230/LIPIcs.ITCS.2019.53
article
Secret Sharing with Binary Shares
Lin, Fuchun
1
Cheraghchi, Mahdi
2
Guruswami, Venkatesan
3
Safavi-Naini, Reihaneh
4
Wang, Huaxiong
1
Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, SG
Department of Computing, Imperial College London, UK
Computer Science Department, Carnegie Mellon University, USA
Department of Computer Science, University of Calgary, CA
Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length l among any N <= 2^l players such that for a threshold parameter t, (i) the knowledge of any t shares does not reveal any information about the secret and, (ii) any choice of t+1 shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length l, and in this sense Shamir's scheme is optimal. The more general notion of ramp schemes requires reconstruction of the secret from any t+g shares, for a positive integer gap parameter g. Ramp secret sharing schemes necessarily require shares of length l/g. Other than the bound related to the secret length l, the share lengths of ramp schemes cannot go below a quantity that depends only on the gap ratio g/N.
In this work, we study secret sharing in the extremal case of bit-long shares and arbitrarily small gap ratio g/N, where standard ramp secret sharing becomes impossible. We show, however, that a slightly relaxed but equally effective notion of semantic security for the secret, and negligible reconstruction error probability, eliminate the impossibility. Moreover, we provide explicit constructions of such schemes. One of the consequences of our relaxation is that, unlike standard ramp schemes with perfect secrecy, adaptive and non-adaptive adversaries need different analysis and construction. For non-adaptive adversaries, we explicitly construct secret sharing schemes that provide secrecy against any tau fraction of observed shares, and reconstruction from any rho fraction of shares, for any choices of 0 <= tau < rho <= 1. Our construction achieves secret length N(rho-tau-o(1)), which we show to be optimal. For adaptive adversaries, we construct explicit schemes attaining a secret length Omega(N(rho-tau)). We discuss our results and open questions.
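For reference, a minimal sketch of the baseline Shamir scheme over a prime field (the prime and parameters are illustrative choices; the paper's subject is the very different regime of one-bit shares):

    import random

    P = 2**61 - 1  # a Mersenne prime modulus (illustrative choice)

    def share(secret, t, N):
        # Degree-t polynomial with constant term = secret; share i is its value at i.
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                for i in range(1, N + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at 0 from any t+1 shares.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shs = share(123456789, t=2, N=5)
    print(reconstruct(shs[:3]))  # any t+1 = 3 shares recover 123456789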
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.53/LIPIcs.ITCS.2019.53.pdf
Secret sharing scheme
Wiretap channel
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
54:1
54:20
10.4230/LIPIcs.ITCS.2019.54
article
On the Communication Complexity of High-Dimensional Permutations
Linial, Nati
1
Pitassi, Toniann
2
Shraibman, Adi
3
Hebrew University of Jerusalem, Jerusalem, Israel
University of Toronto, Toronto, Canada and IAS, Princeton, U.S.A.
The Academic College of Tel-Aviv-Yaffo, Tel-Aviv, Israel
We study the multiparty communication complexity of high dimensional permutations in the Number On the Forehead (NOF) model. This model is due to Chandra, Furst and Lipton (CFL), who also gave a nontrivial protocol for the Exactly-n problem, in which three players receive integer inputs and need to decide if their inputs sum to a given integer n. There is a considerable body of literature dealing with the same problem, where (N,+) is replaced by some other abelian group. Our work can be viewed as a far-reaching extension of this line of research. We show that the known lower bounds for that group-theoretic problem apply to all high dimensional permutations. We introduce new proof techniques that reveal new and unexpected connections between NOF communication complexity of permutations and a variety of well-known problems in combinatorics. We also give a direct algorithmic protocol for Exactly-n. In contrast, all previous constructions relied on large sets of integers without a 3-term arithmetic progression.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.54/LIPIcs.ITCS.2019.54.pdf
High dimensional permutations
Number On the Forehead model
Additive combinatorics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
55:1
55:8
10.4230/LIPIcs.ITCS.2019.55
article
Fisher Zeros and Correlation Decay in the Ising Model
Liu, Jingcheng
1
Sinclair, Alistair
1
Srivastava, Piyush
2
Computer Science Division, UC Berkeley, USA
Tata Institute of Fundamental Research, Mumbai, India
The Ising model originated in statistical physics as a means of studying phase transitions in magnets, and has been the object of intensive study for almost a century. Combinatorially, it can be viewed as a natural distribution over cuts in a graph, and it has also been widely studied in computer science, especially in the context of approximate counting and sampling. In this paper, we study the complex zeros of the partition function of the Ising model, viewed as a polynomial in the "interaction parameter"; these are known as Fisher zeros in light of their introduction by Fisher in 1965. While the zeros of the partition function as a polynomial in the "field" parameter have been extensively studied since the classical work of Lee and Yang, comparatively little is known about Fisher zeros. Our main result shows that the zero-field Ising model has no Fisher zeros in a complex neighborhood of the entire region of parameters where the model exhibits correlation decay. In addition to shedding light on Fisher zeros themselves, this result also establishes a formal connection between two distinct notions of phase transition for the Ising model: the absence of complex zeros (analyticity of the free energy, or the logarithm of the partition function) and decay of correlations with distance. We also discuss the consequences of our result for efficient deterministic approximation of the partition function. Our proof relies heavily on algorithmic techniques, notably Weitz's self-avoiding walk tree, and as such belongs to a growing body of work that uses algorithmic methods to resolve classical questions in statistical physics.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.55/LIPIcs.ITCS.2019.55.pdf
Ising model
zeros of polynomials
partition functions
approximate counting
phase transitions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
56:1
56:20
10.4230/LIPIcs.ITCS.2019.56
article
Quadratic Time-Space Lower Bounds for Computing Natural Functions with a Random Oracle
McKay, Dylan M.
1
Williams, Richard Ryan
1
https://orcid.org/0000-0003-2326-2233
EECS and CSAIL, MIT, 32 Vassar St., Cambridge MA, USA
We define a model of size-S R-way branching programs with oracles that can make up to S distinct oracle queries over all of their possible inputs, and generalize a lower bound proof strategy of Beame [SICOMP 1991] to apply in the case of random oracles. Through a series of succinct reductions, we prove that for the following problems, any randomized algorithm that outputs correct answers with constant nonzero probability must have a time-space product of Omega(n^2/poly(log n)), even given constant-time access to a uniform random oracle (i.e., a uniform random hash function):
- Given an unordered list L of n elements from [n] (possibly with repeated elements), output [n]-L.
- Counting satisfying assignments to a given 2CNF, and printing any satisfying assignment to a given 3CNF. Note that it is a major open problem to prove a time-space product lower bound of n^{2-o(1)} for the decision version of SAT, or even for the decision problem Majority-SAT.
- Printing the truth table of a given CNF formula F with k inputs and n=O(2^k) clauses, with values printed in lexicographical order (i.e., F(0^k), F(0^{k-1}1), ..., F(1^k)). Thus we have a 4^k/poly(k) lower bound in this case.
- Evaluating a circuit with n inputs and O(n) outputs.
As our lower bounds are based on R-way branching programs, they hold for any reasonable model of computation (e.g. log-word RAMs and multitape Turing machines).
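To make the first listed problem concrete, here is its obvious linear-space solution (added for illustration; the paper's contribution is the lower bound on the time-space product, not this algorithm).

```python
# First listed problem: given a length-n list L of values from {1, ..., n},
# possibly with repeats, output the set [n] - L. This solution runs in
# linear time and linear space; the paper shows time * space must be
# n^2 / polylog(n), even with access to a uniform random oracle.

def complement(L):
    n = len(L)
    present = [False] * (n + 1)
    for x in L:
        present[x] = True
    return [v for v in range(1, n + 1) if not present[v]]

assert complement([2, 2, 4, 4]) == [1, 3]
```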
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.56/LIPIcs.ITCS.2019.56.pdf
branching programs
random oracles
time-space tradeoffs
lower bounds
SAT
counting complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
57:1
57:19
10.4230/LIPIcs.ITCS.2019.57
article
Random Projection in the Brain and Computation with Assemblies of Neurons
Papadimitriou, Christos H.
1
Vempala, Santosh S.
2
Columbia University, USA
Georgia Tech, USA
It has been recently shown via simulations [Dasgupta et al., 2017] that random projection followed by a cap operation (setting to one the k largest elements of a vector and everything else to zero), a map believed to be an important part of the insect olfactory system, has strong locality sensitivity properties. We calculate the asymptotic law whereby the overlap in the input vectors is conserved, mathematically verifying this empirical finding. We then focus on the far more complex homologous operation in the mammalian brain, the creation through successive projections and caps of an assembly (roughly, a set of excitatory neurons representing a memory or concept) in the presence of recurrent synapses and plasticity. After providing a careful definition of assemblies, we prove that the operation of assembly projection converges with high probability, over the randomness of synaptic connectivity, even if plasticity is relatively small (previous proofs relied on high plasticity). We also show that assembly projection has itself some locality preservation properties. Finally, we propose a large repertoire of assembly operations, including associate, merge, reciprocal project, and append, each of them both biologically plausible and consistent with what we know from experiments, and show that this computational system is capable of simulating, again with high probability, arbitrary computation in a quite natural way. We hope that this novel way of looking at brain computation, open-ended and based on reasonably mainstream ideas in neuroscience, may prove an attractive entry point for computer scientists to work on understanding the brain.
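The projection-and-cap map itself is short to state in code. The numpy sketch below (an added illustration with arbitrary parameter choices, not the paper's analysis or experiments) pushes two overlapping sparse inputs through the same random projection and top-k cap and reports the resulting output overlap.

```python
# Random projection followed by a cap: project through a fixed random
# matrix, set the k largest coordinates to 1 and the rest to 0. Feeding two
# inputs with ~80% overlapping support through the same map illustrates the
# locality sensitivity discussed above. All sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, k = 1000, 2000, 50
W = rng.standard_normal((n_out, n_in))        # shared random projection

def project_and_cap(x):
    y = W @ x
    out = np.zeros(n_out)
    out[np.argsort(y)[-k:]] = 1.0             # cap: top-k coordinates -> 1
    return out

def indicator(support):
    x = np.zeros(n_in)
    x[support] = 1.0
    return x

base = rng.choice(n_in, size=100, replace=False)
fresh = rng.choice(n_in, size=20, replace=False)
a = project_and_cap(indicator(base))
b = project_and_cap(indicator(np.concatenate([base[:80], fresh])))
print("output overlap:", (a * b).sum() / k)   # tracks the input overlap
```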
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.57/LIPIcs.ITCS.2019.57.pdf
Brain computation
random projection
assemblies
plasticity
memory
association
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
58:1
58:21
10.4230/LIPIcs.ITCS.2019.58
article
Local Computation Algorithms for Spanners
Parter, Merav
1
Rubinfeld, Ronitt
2
Vakilian, Ali
3
Yodpinyanee, Anak
3
Weizmann IS, Rehovot, Israel
CSAIL, MIT, Cambridge, MA, USA and TAU, Tel Aviv, Israel
CSAIL, MIT, Cambridge, MA, USA
A graph spanner is a fundamental graph structure that faithfully preserves the pairwise distances in the input graph up to a small multiplicative stretch. The common objective in the computation of spanners is to achieve the best-known existential size-stretch trade-off efficiently.
Classical models and algorithmic analysis of graph spanners essentially assume that the algorithm can read the input graph, construct the desired spanner, and write the answer to the output tape. However, when considering massive graphs containing millions or even billions of nodes, not only the input graph but also the output spanner might be too large for a single processor to store.
To tackle this challenge, we initiate the study of local computation algorithms (LCAs) for graph spanners in general graphs, where the algorithm should locally decide whether a given edge (u,v) in E belongs to the output (sparse) spanner or not. Such LCAs give the user the "illusion" that a specific sparse spanner for the graph is maintained, without ever fully computing it. We present several results for this setting, including:
- For general n-vertex graphs and for parameter r in {2,3}, there exists an LCA for (2r-1)-spanners with O~(n^{1+1/r}) edges and sublinear probe complexity of O~(n^{1-1/2r}). These size/stretch trade-offs are best possible (up to polylogarithmic factors).
- For every k >= 1 and n-vertex graph with maximum degree Delta, there exists an LCA for O(k^2)-spanners with O~(n^{1+1/k}) edges, probe complexity of O~(Delta^4 n^{2/3}), and random seed of size polylog(n). This improves upon and extends the work of [Lenzen-Levi, ICALP'18].
We also complement these constructions by providing a polynomial lower bound on the probe complexity of LCAs for graph spanners that holds even for the simpler task of computing a sparse connected subgraph with o(m) edges.
To the best of our knowledge, our results on 3- and 5-spanners are the first LCAs with sublinear (in Delta) probe complexity for Delta = n^{Omega(1)}.
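To illustrate the LCA access model only (the toy rule below is emphatically not the paper's construction and carries no stretch guarantee), an edge-membership oracle can answer queries from local probes plus a shared random seed, so that answers are mutually consistent although the subgraph is never materialized.

```python
# Toy LCA-style oracle: "is edge (u, v) in the sparse subgraph?" Keep (u, v)
# iff it is among the c lowest-ranked edges at u or at v, where ranks are
# pseudorandom but fixed by a shared seed. Each query probes only the
# adjacency lists of u and v; repeated queries are consistent.
import hashlib

def rank(seed, u, v):
    key = f"{seed}:{min(u, v)}:{max(u, v)}".encode()
    return hashlib.sha256(key).hexdigest()

def make_edge_oracle(adj, seed, c=2):
    def in_subgraph(u, v):
        def kept_at(w):
            return set(sorted(adj[w], key=lambda x: rank(seed, w, x))[:c])
        return v in kept_at(u) or u in kept_at(v)
    return in_subgraph

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
oracle = make_edge_oracle(adj, seed="s0")
print([(u, v) for u in adj for v in adj[u] if u < v and oracle(u, v)])
```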
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.58/LIPIcs.ITCS.2019.58.pdf
Local Computation Algorithms
Sub-linear Algorithms
Graph Spanners
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
59:1
59:25
10.4230/LIPIcs.ITCS.2019.59
article
Proofs of Catalytic Space
Pietrzak, Krzysztof
1
Institute of Science and Technology Austria, Austria
Proofs of space (PoS) [Dziembowski et al., CRYPTO'15] are proof systems where a prover can convince a verifier that he "wastes" disk space. PoS were introduced as a more ecological and economical replacement for proofs of work which are currently used to secure blockchains like Bitcoin. In this work we investigate extensions of PoS which allow the prover to embed useful data into the dedicated space, which later can be recovered.
Our first contribution is a security proof for the original PoS from CRYPTO'15 in the random oracle model (the original proof only applied to a restricted class of adversaries which can store a subset of the data an honest prover would store). When this PoS is instantiated with recent constructions of maximally depth robust graphs, our proof implies basically optimal security.
As a second contribution we show three different extensions of this PoS where useful data can be embedded into the space required by the prover. Our security proof for the PoS extends (non-trivially) to these constructions. We discuss how some of these variants can be used as proofs of catalytic space (PoCS), a notion we put forward in this work, and which basically is a PoS where most of the space required by the prover can be used to backup useful data. Finally we discuss how one of the extensions is a candidate construction for a proof of replication (PoR), a proof system recently suggested in the Filecoin whitepaper.
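The PoS under discussion is built on hash labelings of depth-robust graphs. The minimal sketch below (added for illustration; the DAG here is a toy and not depth robust, and the Merkle commitment to the labels is omitted) shows the labeling step that fills the prover's space.

```python
# Graph-labeling step of a PoS: the prover computes, in topological order,
#     label(v) = H(id, v, label(p_1), ..., label(p_d))
# for every vertex v with parents p_1, ..., p_d, and stores all labels.
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode() + b"|")
    return h.hexdigest()

def label_graph(identity, parents):
    labels = []
    for v in range(len(parents)):      # vertices 0..N-1 in topological order
        labels.append(H(identity, v, *(labels[p] for p in parents[v])))
    return labels

# Toy DAG (not depth robust): v points to v - 1 and to v // 2.
N = 8
parents = [sorted({v - 1, v // 2}) if v else [] for v in range(N)]
labels = label_graph("prover-id", parents)
print(labels[-1][:16])
```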
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.59/LIPIcs.ITCS.2019.59.pdf
Proofs of Space
Proofs of Replication
Blockchains
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
60:1
60:15
10.4230/LIPIcs.ITCS.2019.60
article
Simple Verifiable Delay Functions
Pietrzak, Krzysztof
1
Institute of Science and Technology Austria, Austria
We construct a verifiable delay function (VDF) by showing how the Rivest-Shamir-Wagner time-lock puzzle can be made publicly verifiable.
Concretely, we give a statistically sound public-coin protocol to prove that a tuple (N,x,T,y) satisfies y=x^{2^T} mod N where the prover doesn't know the factorization of N and its running time is dominated by solving the puzzle, that is, compute x^{2^T}, which is conjectured to require T sequential squarings. To get a VDF we make this protocol non-interactive using the Fiat-Shamir heuristic.
The motivation for this work comes from the Chia blockchain design, which uses a VDF as a key ingredient. For typical parameters (T <= 2^{40} and a 2048-bit modulus N), our proofs are around 10KB in size, verification costs around three RSA exponentiations, and computing the proof is 8000 times faster than solving the puzzle, even without any parallelism.
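The halving idea is compact enough to sketch. The simplified implementation below (added for illustration: T is a power of two, the Fiat-Shamir challenge is a toy hash, and the modulus is tiny rather than a 2048-bit RSA modulus) shows how each round replaces a claim about T squarings by a claim about T/2 squarings, until a single squaring can be checked directly.

```python
# Sketch of the halving protocol for proving y = x^(2^T) mod N.
import hashlib

def solve(x, T, N):
    for _ in range(T):                 # the sequential part: T squarings
        x = x * x % N
    return x

def challenge(N, x, T, y, mu):         # toy Fiat-Shamir challenge
    data = f"{N}:{x}:{T}:{y}:{mu}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def halve(x, y, T, mu, r, N):          # common to prover and verifier
    return pow(x, r, N) * mu % N, pow(mu, r, N) * y % N, T // 2

def prove(x, T, y, N):
    proof = []
    while T > 1:
        mu = solve(x, T // 2, N)       # midpoint value x^(2^(T/2))
        proof.append(mu)
        x, y, T = halve(x, y, T, mu, challenge(N, x, T, y, mu), N)
    return proof

def verify(x, T, y, N, proof):
    for mu in proof:
        x, y, T = halve(x, y, T, mu, challenge(N, x, T, y, mu), N)
    return T == 1 and y == x * x % N   # base case: one squaring

N, x, T = 1007 * 1009, 5, 2 ** 10      # toy modulus (real: 2048-bit RSA)
y = solve(x, T, N)
assert verify(x, T, y, N, prove(x, T, y, N))
```

Correctness of each round follows from (x^r mu)^(2^(T/2)) = mu^r x^(2^T); soundness rests on the unpredictability of the challenge r.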
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.60/LIPIcs.ITCS.2019.60.pdf
Verifiable delay functions
Time-lock puzzles
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
61:1
61:20
10.4230/LIPIcs.ITCS.2019.61
article
Sum of Squares Lower Bounds from Symmetry and a Good Story
Potechin, Aaron
1
University of Chicago Department of Computer Science, 5730 S. Ellis Avenue, John Crerar Library, Chicago, IL 60637, United States
In this paper, we develop machinery which makes it much easier to prove sum of squares lower bounds when the problem is symmetric under permutations of [1,n] and the unsatisfiability of our problem comes from integrality arguments, i.e. arguments that an expression must be an integer. Roughly speaking, to prove SOS lower bounds with our machinery it is sufficient to verify that the answer to the following three questions is yes:
1) Are there natural pseudo-expectation values for the problem?
2) Are these pseudo-expectation values rational functions of the problem parameters?
3) Are there sufficiently many values of the parameters for which these pseudo-expectation values correspond to the actual expected values over a distribution of solutions which is the uniform distribution over permutations of a single solution?
We demonstrate our machinery on three problems: the knapsack problem analyzed by Grigoriev, the MOD 2 principle (which says that the complete graph K_n has no perfect matching when n is odd), and the following Turán-type problem: minimize the number of triangles in a graph G with a given edge density. For knapsack, we recover Grigoriev's lower bound exactly. For the MOD 2 principle, we tighten Grigoriev's linear degree sum of squares lower bound, making it exact. Finally, for the triangle problem, we prove a sum of squares lower bound for finding the minimum triangle density. This lower bound is completely new and gives a simple example where constant degree sum of squares methods have a constant factor error in estimating graph densities.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.61/LIPIcs.ITCS.2019.61.pdf
Sum of squares hierarchy
proof complexity
graph theory
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
62:1
62:19
10.4230/LIPIcs.ITCS.2019.62
article
Learning Time Dependent Choice
Chase, Zachary
1
Prasad, Siddharth
2
Department of Mathematics, California Institute of Technology, Pasadena, USA
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, USA
We explore questions dealing with the learnability of models of choice over time. We present a large class of preference models defined by a structural criterion for which we are able to obtain an exponential improvement over previously known learning bounds for more general preference models. This in particular implies that the three most important discounted utility models of intertemporal choice - exponential, hyperbolic, and quasi-hyperbolic discounting - are learnable in the PAC setting with VC dimension that grows logarithmically in the number of time periods. We also examine these models in the framework of active learning. We find that the commonly studied stream-based setting is in general difficult to analyze for preference models, but we identify a redeeming situation in which the learner can indeed improve upon the guarantees provided by PAC learning. In contrast to the stream-based setting, we show that if the learner is given full power over the data he learns from - in the form of learning via membership queries - even very naive algorithms significantly outperform the guarantees provided by higher-level active learning algorithms.
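For concreteness, the three discount models named above have one-line definitions; the parameter names and values in this sketch are conventional choices added for illustration.

```python
# Discount factor D(t) applied to a reward t periods in the future, for the
# three discounted utility models named in the abstract.

def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, kappa=0.5):
    return 1.0 / (1.0 + kappa * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.9):
    # "beta-delta" discounting: a one-off present bias, then exponential.
    return 1.0 if t == 0 else beta * delta ** t

for t in range(4):
    print(t, exponential(t), hyperbolic(t), quasi_hyperbolic(t))
```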
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.62/LIPIcs.ITCS.2019.62.pdf
Intertemporal Choice
Discounted Utility
Preference Recovery
PAC Learning
Active Learning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
63:1
63:21
10.4230/LIPIcs.ITCS.2019.63
article
Erasures vs. Errors in Local Decoding and Property Testing
Raskhodnikova, Sofya
1
Ron-Zewi, Noga
2
Varma, Nithin
1
https://orcid.org/0000-0002-1211-2566
Department of Computer Science, Boston University, USA
Department of Computer Science, University of Haifa, Israel
We initiate the study of the role of erasures in local decoding and use our understanding to prove a separation between erasure-resilient and tolerant property testing. Local decoding in the presence of errors has been extensively studied, but has not been considered explicitly in the presence of erasures.
Motivated by applications in property testing, we begin our investigation with local list decoding in the presence of erasures. We prove an analog of a famous result of Goldreich and Levin on local list decodability of the Hadamard code. Specifically, we show that the Hadamard code is locally list decodable in the presence of a constant fraction of erasures, arbitrarily close to 1, with list sizes and query complexity better than in the Goldreich-Levin theorem. We use this result to exhibit a property which is testable with a number of queries independent of the length of the input in the presence of erasures, but requires a number of queries that depends on the input length, n, for tolerant testing. We further study approximate locally list decodable codes that work against erasures and use them to strengthen our separation by constructing a property which is testable with a constant number of queries in the presence of erasures, but requires n^{Omega(1)} queries for tolerant testing.
Next, we study the general relationship between local decoding in the presence of errors and in the presence of erasures. We observe that every locally (uniquely or list) decodable code that works in the presence of errors also works in the presence of twice as many erasures (with the same parameters up to constant factors). We show that there is also an implication in the other direction for locally decodable codes (with unique decoding): specifically, that the existence of a locally decodable code that works in the presence of erasures implies the existence of a locally decodable code that works in the presence of errors and has related parameters. However, it remains open whether there is an implication in the other direction for locally list decodable codes. We relate this question to other open questions in local decoding.
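A simple special case shows why erasures are more benign than errors: the classical 2-query local decoder for the Hadamard code can simply resample until its queries avoid erased positions, whereas an error can silently corrupt its answer. The sketch below (added for illustration; the paper's local list decoding results are far stronger) implements that erasure-resilient unique decoder.

```python
# Hadamard code: the codeword lists <a, x> mod 2 for every a in {0,1}^n.
# Decoding bit x_i uses two queries, f(a) and f(a xor e_i), whose sum is
# x_i; under erasures (None entries) the decoder just resamples a.
import random

def bits(a, n):
    return [(a >> j) & 1 for j in range(n)]

def encode(x):
    n = len(x)
    return [sum(b * xi for b, xi in zip(bits(a, n), x)) % 2
            for a in range(2 ** n)]

def erase(codeword, m, rng):
    gone = set(rng.sample(range(len(codeword)), m))
    return [None if j in gone else c for j, c in enumerate(codeword)]

def decode_bit(received, n, i, rng):
    while True:                          # m < 2^(n-1) pairs, so this halts
        a = rng.randrange(2 ** n)
        fa, fb = received[a], received[a ^ (1 << i)]
        if fa is not None and fb is not None:
            return (fa + fb) % 2         # <a,x> + <a xor e_i, x> = x_i

rng = random.Random(1)
x = [1, 0, 1, 1]
received = erase(encode(x), 6, rng)      # erase 6 of the 16 positions
assert [decode_bit(received, 4, i, rng) for i in range(4)] == x
```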
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.63/LIPIcs.ITCS.2019.63.pdf
Error-correcting codes
probabilistically checkable proofs (PCPs) of proximity
Hadamard code
local list decoding
tolerant testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
64:1
64:19
10.4230/LIPIcs.ITCS.2019.64
article
A New Approach to Multi-Party Peer-to-Peer Communication Complexity
Rosén, Adi
1
Urrutia, Florent
2
CNRS and Université Paris Diderot
Université Paris Diderot
We introduce new models and new information theoretic measures for the study of communication complexity in the natural peer-to-peer, multi-party, number-in-hand setting. We prove a number of properties of our new models and measures, and then, in order to exemplify their effectiveness, we use them to prove two lower bounds. The more elaborate one is a tight lower bound of Omega(kn) on the multi-party peer-to-peer randomized communication complexity of the k-player, n-bit function Disjointness, Disj_k^n. The other one is a tight lower bound of Omega(kn) on the multi-party peer-to-peer randomized communication complexity of the k-player, n-bit bitwise parity function, Par_k^n. Both lower bounds hold when n=Omega(k). The lower bound for Disj_k^n improves over the lower bound that can be inferred from the result of Braverman et al. (FOCS 2013), which was proved in the coordinator model and can yield a lower bound of Omega(kn/log k) in the peer-to-peer model.
To the best of our knowledge, our lower bounds are the first tight (non-trivial) lower bounds on communication complexity in the natural peer-to-peer multi-party setting.
In addition to the above results for communication complexity, we also prove, using the same tools, an Omega(n) lower bound on the number of random bits necessary for the (information theoretic) private computation of the function Disj_k^n.
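For reference, the two functions in these lower bounds have short definitions as code (reading the bitwise parity function as outputting one parity bit per coordinate is our assumption).

```python
# k players each hold an n-bit string. Disj is 1 iff no coordinate is 1 in
# all k strings; Par is the coordinatewise parity of the k strings.

def disj(inputs):                        # inputs: list of k n-bit tuples
    return int(not any(all(col) for col in zip(*inputs)))

def par(inputs):
    return tuple(sum(col) % 2 for col in zip(*inputs))

assert disj([(1, 0, 1), (0, 1, 1), (1, 1, 0)]) == 1   # no common 1
assert disj([(1, 0, 1), (0, 1, 1), (1, 1, 1)]) == 0   # last coord shared
assert par([(1, 0, 1), (0, 1, 1), (1, 1, 0)]) == (0, 0, 0)
```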
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.64/LIPIcs.ITCS.2019.64.pdf
communication complexity
multi-party communication complexity
peer-to-peer communication complexity
information complexity
private computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
65:1
65:15
10.4230/LIPIcs.ITCS.2019.65
article
A Schur Complement Cheeger Inequality
Schild, Aaron
1
University of California, Berkeley, CA, USA
Cheeger's inequality shows that any undirected graph G with minimum nonzero normalized Laplacian eigenvalue lambda_G has a cut with conductance at most O(sqrt{lambda_G}). Qualitatively, Cheeger's inequality says that if the mixing time of a graph is high, there is a cut that certifies this. However, this relationship is not tight, as some graphs (like cycles) do not have cuts with conductance o(sqrt{lambda_G}).
To better approximate the mixing time of a graph, we consider a more general object. Specifically, instead of bounding the mixing time with cuts, we bound it with cuts in graphs obtained by Schur complementing out vertices from the graph G. Combinatorially, these Schur complements describe random walks in G restricted to a subset of its vertices. As a result, all Schur complement cuts have conductance at least Omega(lambda_G). We show that unlike with cuts, this inequality is tight up to a constant factor. Specifically, there is a Schur complement cut with conductance at most O(lambda_G).
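Both objects in this abstract are easy to compute for small examples. The numpy sketch below (added for illustration) computes lambda_G for a cycle and the Laplacian obtained by Schur-complementing out all but four equally spaced vertices; the result is again a graph Laplacian, here of a weighted 4-cycle.

```python
# lambda_G (the smallest nonzero normalized Laplacian eigenvalue) and a
# Schur complement of the Laplacian of a 16-cycle onto 4 kept vertices.
import numpy as np

def cycle_laplacian(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return np.diag(A.sum(1)) - A

def lambda_G(L):
    Dinv = np.diag(1 / np.sqrt(np.diag(L)))
    return np.sort(np.linalg.eigvalsh(Dinv @ L @ Dinv))[1]

def schur_complement(L, keep):
    keep = np.asarray(keep)
    out = np.setdiff1d(np.arange(L.shape[0]), keep)
    return (L[np.ix_(keep, keep)]
            - L[np.ix_(keep, out)]
            @ np.linalg.inv(L[np.ix_(out, out)])
            @ L[np.ix_(out, keep)])

L = cycle_laplacian(16)
print("lambda_G:", lambda_G(L))
print(np.round(schur_complement(L, [0, 4, 8, 12]), 3))  # weighted 4-cycle
```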
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.65/LIPIcs.ITCS.2019.65.pdf
electrical networks
Cheeger's inequality
mixing time
conductance
Schur complements
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-01-08
124
66:1
66:20
10.4230/LIPIcs.ITCS.2019.66
article
Game Efficiency Through Linear Programming Duality
Kim Thang, Nguyen
1
https://orcid.org/0000-0002-6085-9453
IBISC, Univ Evry, University Paris Saclay, Evry, France
The efficiency of a game is typically quantified by the price of anarchy (PoA), defined as the worst-case ratio between the value of an equilibrium (a solution of the game) and that of an optimal outcome. Given the tremendous impact of tools from mathematical programming in the design of algorithms, and the similarity of the price of anarchy to measures such as the approximation and competitive ratios, it is intriguing to develop a duality-based method to characterize the efficiency of games.
In this paper, we present an approach based on linear programming duality to study the efficiency of games. We show that the approach provides a general recipe to analyze the efficiency of games and also to derive concepts leading to improvements. The approach is particularly appropriate for bounding the PoA. Specifically, in our approach the dual programs naturally lead to competitive PoA bounds that are (almost) optimal for several classes of games. The approach captures the smoothness framework as well as some current non-smooth techniques and concepts. We show its applicability to a wide variety of games and environments, from congestion games to Bayesian welfare, and from full-information to incomplete-information settings.
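As a reminder of what a PoA bound measures (the classical example below is added for illustration and is not taken from the paper), Pigou's two-link routing network has equilibrium cost 1 and optimal cost 3/4, giving a PoA of 4/3.

```python
# Pigou's example: route one unit of traffic over two parallel links, one
# with constant latency 1 and one with latency equal to its load f. Selfish
# traffic all takes the variable link; the optimum splits the traffic.

def total_cost(f):                  # f = flow on the variable-latency link
    return f * f + (1 - f) * 1.0    # f * latency(f) + (1 - f) * 1

equilibrium_cost = total_cost(1.0)  # all traffic on the variable link
optimal_cost = min(total_cost(i / 1000) for i in range(1001))
print("PoA ~", equilibrium_cost / optimal_cost)   # -> 4/3
```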
https://drops.dagstuhl.de/storage/00lipics/lipics-vol124-itcs2019/LIPIcs.ITCS.2019.66/LIPIcs.ITCS.2019.66.pdf
Price of Anarchy
Primal-Dual