eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
1
2410
10.4230/LIPIcs.ITCS.2022
article
LIPIcs, Volume 215, ITCS 2022, Complete Volume
Braverman, Mark
1
Princeton University, USA
LIPIcs, Volume 215, ITCS 2022, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022/LIPIcs.ITCS.2022.pdf
LIPIcs, Volume 215, ITCS 2022, Complete Volume
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
0:i
0:xxiv
10.4230/LIPIcs.ITCS.2022.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Braverman, Mark
1
Princeton University, USA
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.0/LIPIcs.ITCS.2022.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
1:1
1:22
10.4230/LIPIcs.ITCS.2022.1
article
Maximizing Revenue in the Presence of Intermediaries
Aggarwal, Gagan
1
Bhawalkar, Kshipra
1
Guruganesh, Guru
1
Perlroth, Andres
1
Google Research, Mountain View, CA, USA
We study the mechanism design problem of selling k items to unit-demand buyers with private valuations for the items. A buyer either participates directly in the auction or is represented by an intermediary, who represents a subset of buyers. Our goal is to design robust mechanisms that are independent of the demand structure (i.e. how the buyers are partitioned across intermediaries), and perform well under a wide variety of possible contracts between intermediaries and buyers.
We first consider the case of k identical items where each buyer draws its private valuation for an item i.i.d. from a known λ-regular distribution. We construct a robust mechanism that, independent of the demand structure and under certain conditions on the contracts between intermediaries and buyers, obtains a constant factor of the revenue that the mechanism designer could obtain had she known the buyers' valuations. In other words, our mechanism’s expected revenue achieves a constant factor of the optimal welfare, regardless of the demand structure. Our mechanism is a simple posted-price mechanism that sets a take-it-or-leave-it per-item price that depends on k and the total number of buyers, but does not depend on the demand structure or the downstream contracts.
Next we generalize our result to the case when the items are not identical. We assume that the item valuations are separable, i.e. v_{i j} = η_j v_i for buyer i and item j, with each private v_i drawn i.i.d. from a known λ-regular distribution. For this case, we design a mechanism that obtains at least a constant fraction of the optimal welfare, by using a menu of posted prices. This mechanism is also independent of the demand structure, but makes a relatively stronger assumption on the contracts between intermediaries and buyers, namely that each intermediary prefers outcomes with a higher sum of utilities of the subset of buyers represented by it.
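As a concrete illustration of the mechanism class described above, the following sketch simulates a take-it-or-leave-it posted price for k identical items. The buyer values, the price, and the arrival order here are made-up assumptions for the example; in the paper the price is derived from k, the total number of buyers, and the λ-regular value distribution.

```python
def run_posted_price(values, k, price):
    """Take-it-or-leave-it posted price for k identical items: each
    buyer, in arrival order, buys one item iff her private value is at
    least the price and an item remains.  Returns (winner indices,
    revenue)."""
    winners = []
    for i, v in enumerate(values):
        if len(winners) < k and v >= price:
            winners.append(i)
    return winners, price * len(winners)

# Illustrative run: five buyers, two items, posted price 0.7
# (the values and price are invented for this example).
buyers = [0.9, 0.3, 0.8, 0.75, 0.2]
winners, revenue = run_posted_price(buyers, k=2, price=0.7)
```

The robustness claim of the abstract is that a single such price, chosen without knowledge of how buyers are partitioned across intermediaries, already recovers a constant fraction of the optimal welfare in expectation.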
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.1/LIPIcs.ITCS.2022.1.pdf
Mechanism Design
Revenue Maximization
Posted Price Mechanisms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
2:1
2:15
10.4230/LIPIcs.ITCS.2022.2
article
Algebraic Restriction Codes and Their Applications
Aggarwal, Divesh
1
Döttling, Nico
2
https://orcid.org/0000-0002-5914-7635
Dujmovic, Jesko
2
3
Hajiabadi, Mohammad
4
Malavolta, Giulio
5
Obremski, Maciej
1
National University of Singapore, Singapore
Helmholtz Center for Information Security (CISPA), Saarbrücken, Germany
Saarland University, Saarbrücken, Germany
University of Waterloo, ON, Canada
Max Planck Institute for Security and Privacy, Bochum, Germany
Consider the following problem: You have a device that is supposed to compute a linear combination of its inputs, which are taken from some finite field. However, the device may be faulty and compute arbitrary functions of its inputs. Is it possible to encode the inputs in such a way that only linear functions can be evaluated over the encodings, i.e., so that learning an arbitrary function of the encodings reveals no more information about the inputs than a linear combination does?
In this work, we introduce the notion of algebraic restriction codes (AR codes), which constrain adversaries who might compute any function to computing a linear function. Our main result is an information-theoretic construction of AR codes that restricts any class of functions with a bounded number of output bits to linear functions. Our construction relies on a seed which is not provided to the adversary.
While interesting and natural on its own, we show an application of this notion in cryptography. In particular, we show that AR codes lead to the first construction of rate-1 oblivious transfer with statistical sender security from the Decisional Diffie-Hellman assumption, and the first-ever construction that makes black-box use of cryptography. Previously, such protocols were known only from the LWE assumption, using non-black-box cryptographic techniques. We expect our new notion of AR codes to find further applications, e.g., in the context of non-malleability, in the future.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.2/LIPIcs.ITCS.2022.2.pdf
Algebraic Restriction Codes
Oblivious Transfer
Rate 1
Statistically Sender Private
OT
Diffie-Hellman
DDH
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
3:1
3:25
10.4230/LIPIcs.ITCS.2022.3
article
Improved Merlin-Arthur Protocols for Central Problems in Fine-Grained Complexity
Akmal, Shyan
1
https://orcid.org/0000-0002-7266-2041
Chen, Lijie
1
Jin, Ce
1
Raj, Malvika
2
Williams, Ryan
1
MIT, EECS and CSAIL, Cambridge, MA, USA
University of California Berkeley, CA, USA
In a Merlin-Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin-Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:
- Certifying that a list of n integers has no 3-SUM solution can be done in Merlin-Arthur time Õ(n). Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in Õ(n^{1.5}) time (that is, there is a proof system with proofs of length Õ(n^{1.5}) and a deterministic verifier running in Õ(n^{1.5}) time).
- Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin-Arthur time Õ(n^{⌈ k/2⌉}) (where k ≥ 3). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin-Arthur time Õ(m). Previous Merlin-Arthur protocols by Williams [CCC'16] and Björklund and Kaski [PODC'16] could only count k-cliques in unweighted graphs, and had worse running times for small k.
- Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin-Arthur time Õ(n²). Note this is optimal, as the matrix can have Ω(n²) nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an Õ(n^{2.94}) nondeterministic time algorithm.
- Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin-Arthur time 2^{n/2 - n/O(k)}. We also observe an algebrization barrier for the previous 2^{n/2}⋅ poly(n)-time Merlin-Arthur protocol of R. Williams [CCC'16] for #SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in 2^{n/2}/n^{ω(1)} time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.
- Certifying a Quantified Boolean Formula is true can be done in Merlin-Arthur time 2^{4n/5}⋅ poly(n). Previously, the only nontrivial result known along these lines was an Arthur-Merlin-Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in 2^{2n/3}⋅poly(n) time. Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin-Arthur time 2^{n/3}⋅poly(n), improving on the previous best protocol by Nederlof [IPL 2017] which took 2^{0.49991n}⋅poly(n) time.
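The APSP item above has the typical shape of these protocols: Merlin sends a claimed answer matrix and Arthur verifies it in roughly input-size time using randomness. For a flavour of such one-sided-error verification, the classical Freivalds product check is sketched below; it is a standard textbook procedure, not the paper's protocol.

```python
import random

def freivalds_check(A, B, C, trials=20, seed=1):
    """Randomized check that C = A x B for n x n matrices in O(n^2)
    time per trial: draw a random 0/1 vector r and compare A(Br) with
    Cr.  A wrong C is rejected with probability >= 1/2 per trial;
    a correct C is always accepted (one-sided error)."""
    rng = random.Random(seed)
    n = len(A)
    for _ in range(trials):
        r = [rng.randrange(2) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False
    return True

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
ok = freivalds_check(A, B, [[19, 22], [43, 50]])   # correct product, so True
```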
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.3/LIPIcs.ITCS.2022.3.pdf
Fine-grained complexity
Merlin-Arthur proofs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
4:1
4:20
10.4230/LIPIcs.ITCS.2022.4
article
Pre-Constrained Encryption
Ananth, Prabhanjan
1
Jain, Abhishek
2
Jin, Zhengzhong
2
Malavolta, Giulio
3
University of California Santa Barbara, CA, USA
Johns Hopkins University, Baltimore, MD, USA
Max Planck Institute for Security and Privacy, Bochum, Germany
In all existing encryption systems, the owner of the master secret key has the ability to decrypt all ciphertexts. In this work, we propose a new notion of pre-constrained encryption (PCE) where the owner of the master secret key does not have "full" decryption power. Instead, its decryption power is constrained in a pre-specified manner during the system setup.
We present formal definitions and constructions of PCE, and discuss societal applications and implications to some well-studied cryptographic primitives.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.4/LIPIcs.ITCS.2022.4.pdf
Advanced encryption systems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
5:1
5:23
10.4230/LIPIcs.ITCS.2022.5
article
Domain Sparsification of Discrete Distributions Using Entropic Independence
Anari, Nima
1
Dereziński, Michał
2
Vuong, Thuy-Duong
1
Yang, Elizabeth
3
Stanford University, CA, USA
University of Michigan, Ann Arbor, MI, USA
UC Berkeley, CA, USA
We present a framework for reducing the time it takes to sample from discrete distributions μ defined over subsets of size k of a ground set of n elements, in the regime where k is much smaller than n. We show that if one has access to estimates of marginals P_{S∼μ}[i ∈ S], then the task of sampling from μ can be reduced to sampling from related distributions ν supported on size k subsets of a ground set of only n^{1-α}⋅ poly(k) elements. Here, 1/α ∈ [1, k] is the parameter of entropic independence for μ. Further, our algorithm only requires sparsified distributions ν that are obtained by applying a sparse (mostly 0) external field to μ, an operation that, for many distributions μ of interest, retains algorithmic tractability of sampling from ν. This phenomenon, which we dub domain sparsification, allows us to pay a one-time cost of estimating the marginals of μ, and in return reduce the amortized cost needed to produce many samples from the distribution μ, as is often needed in upstream tasks such as counting and inference.
For a wide range of distributions where α = Ω(1), our result reduces the domain size, and as a corollary, the cost-per-sample, by a poly(n) factor. Examples include monomers in a monomer-dimer system, non-symmetric determinantal point processes, and partition-constrained Strongly Rayleigh measures. Our work significantly extends the reach of prior work of Anari and Dereziński who obtained domain sparsification for distributions with a log-concave generating polynomial (corresponding to α = 1). As a corollary of our new analysis techniques, we also obtain a less stringent requirement on the accuracy of marginal estimates even for the case of log-concave polynomials; roughly speaking, we show that constant-factor approximation is enough for domain sparsification, improving over O(1/k) relative error established in prior work.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.5/LIPIcs.ITCS.2022.5.pdf
Domain Sparsification
Markov Chains
Sampling
Entropic Independence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
6:1
6:22
10.4230/LIPIcs.ITCS.2022.6
article
Circuit Lower Bounds for Low-Energy States of Quantum Code Hamiltonians
Anshu, Anurag
1
2
3
https://orcid.org/0000-0002-3859-9309
Nirkhe, Chinmay
2
3
https://orcid.org/0000-0002-5808-4994
Simons Institute for the Theory of Computing, Berkeley, California, USA
Challenge Institute for Quantum Computation, University of California, Berkeley, CA, USA
Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA
The No Low-energy Trivial States (NLTS) conjecture of Freedman and Hastings [Freedman and Hastings, 2014] - which posits the existence of a local Hamiltonian with a super-constant quantum circuit lower bound on the complexity of all low-energy states - identifies a fundamental obstacle to the resolution of the quantum PCP conjecture. In this work, we provide new techniques, based on entropic and local indistinguishability arguments, that prove circuit lower bounds for all the low-energy states of local Hamiltonians arising from quantum error-correcting codes.
For local Hamiltonians arising from nearly linear-rate or nearly linear-distance LDPC stabilizer codes, we prove super-constant circuit lower bounds for the complexity of all states of energy o(n). Such codes are known to exist and are not necessarily locally-testable, a property previously suspected to be essential for the NLTS conjecture. Curiously, such codes can also be constructed on a two-dimensional lattice, showing that low-depth states cannot accurately approximate the ground-energy even in physically relevant systems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.6/LIPIcs.ITCS.2022.6.pdf
quantum pcps
local hamiltonians
error-correcting codes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
7:1
7:25
10.4230/LIPIcs.ITCS.2022.7
article
Near-Optimal Distributed Implementations of Dynamic Algorithms for Symmetry Breaking Problems
Antaki, Shiri
1
Liu, Quanquan C.
2
Solomon, Shay
1
Tel Aviv University, Tel Aviv, Israel
Massachusetts Institute of Technology, Cambridge, MA, USA
The field of dynamic graph algorithms aims at achieving a thorough understanding of real-world networks whose topology evolves with time. Traditionally, the focus has been on the classic sequential, centralized setting where the main quality measure of an algorithm is its update time, i.e. the time needed to restore the solution after each update. While real-life networks are very often distributed across multiple machines, the fundamental question of finding efficient dynamic, distributed graph algorithms received little attention to date. The goal in this setting is to optimize both the round and message complexities incurred per update step, ideally achieving a message complexity that matches the centralized update time in O(1) (perhaps amortized) rounds.
Toward initiating a systematic study of dynamic, distributed algorithms, we study some of the most central symmetry-breaking problems: maximal independent set (MIS), maximal matching/(approx-) maximum cardinality matching (MM/MCM), and (Δ + 1)-vertex coloring. This paper focuses on dynamic, distributed algorithms that are deterministic, and in particular - robust against an adaptive adversary. Most of our focus is on our MIS algorithm, which achieves O(m^{2/3} log² n) amortized messages in O(log² n) amortized rounds in the Congest model. Notably, the amortized message complexity of our algorithm matches the amortized update time of the best-known deterministic centralized MIS algorithm by Gupta and Khan [SOSA'21] up to a polylog n factor. The previous best deterministic distributed MIS algorithm, by Assadi et al. [STOC'18], uses O(m^{3/4}) amortized messages in O(1) amortized rounds, i.e., we achieve a polynomial improvement in the message complexity by a polylog n increase to the round complexity; moreover, the algorithm of Assadi et al. makes an implicit assumption that the network is connected at all times, which seems excessively strong when it comes to dynamic networks. Using techniques similar to the ones we developed for our MIS algorithm, we also provide deterministic algorithms for MM, approximate MCM and (Δ + 1)-vertex coloring whose message complexities match or nearly match the update times of the best centralized algorithms, while having either constant or polylog(n) round complexities.
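For reference, the combinatorial object being maintained is easy to state statically. The sketch below computes a maximal independent set by a sequential greedy scan; this is the centralized baseline object, not the dynamic distributed algorithm of the paper.

```python
def greedy_mis(adj):
    """Maximal independent set by a greedy scan in fixed vertex order:
    take a vertex iff none of its neighbours was taken before it.
    adj maps each vertex to its set of neighbours."""
    mis, blocked = set(), set()
    for v in sorted(adj):
        if v not in blocked:
            mis.add(v)
            blocked |= adj[v] | {v}
    return mis

# Path 0-1-2-3: the scan takes 0, blocks 1, takes 2, blocks 3.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
mis = greedy_mis(path)
```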
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.7/LIPIcs.ITCS.2022.7.pdf
dynamic graph algorithms
distributed algorithms
symmetry breaking problems
maximal independent set
matching
coloring
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
8:1
8:23
10.4230/LIPIcs.ITCS.2022.8
article
Secret Sharing, Slice Formulas, and Monotone Real Circuits
Applebaum, Benny
1
Beimel, Amos
2
Nir, Oded
1
Peter, Naty
1
Pitassi, Toniann
3
4
Tel-Aviv University, Tel-Aviv, Israel
Ben-Gurion University, Be'er-Sheva, Israel
University of Toronto, Toronto, Canada
Columbia University, New York, NY, USA
A secret-sharing scheme allows one to distribute a secret s among n parties such that only some predefined "authorized" sets of parties can reconstruct the secret s, and all other "unauthorized" sets learn nothing about s. For over 30 years, it has been known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size 2^{n-o(n)}, and until recently no better scheme was known. In a recent breakthrough, Liu and Vaikuntanathan (STOC 2018) reduced the share size to 2^{0.994n+o(n)}, and this was further improved by several follow-ups, culminating in an upper bound of 1.5^{n+o(n)} (Applebaum and Nir, CRYPTO 2021). Following these advances, it is natural to ask whether these new approaches can lead to a truly sub-exponential upper bound of 2^{n^{1-ε}} for some constant ε > 0, or even all the way down to polynomial upper bounds.
In this paper, we relate this question to the complexity of computing monotone Boolean functions by monotone real circuits (MRCs) - a computational model that was introduced by Pudlák (J. Symb. Log., 1997) in the context of proof complexity. We introduce a new notion of "separable" MRCs that lies between monotone real circuits and monotone real formulas (MRFs). As our main results, we show that recent constructions of general secret-sharing schemes implicitly give rise to separable MRCs for general monotone functions of similar complexity, and that some monotone functions (in monotone NP) cannot be computed by sub-exponential size separable MRCs. Interestingly, it seems that proving similar lower-bounds for general MRCs is beyond the reach of current techniques.
We use this connection to obtain lower-bounds against a natural family of secret-sharing schemes, as well as new non-trivial upper-bounds for MRCs. Specifically, we conclude that recent approaches for secret-sharing schemes cannot achieve sub-exponential share size and that every monotone function can be realized by an MRC (or even MRF) of complexity 1.5^{n+o(n)}. To the best of our knowledge, this is the first improvement over the trivial 2^{n-o(n)} upper-bound. Along the way, we show that the recent constructions of general secret-sharing schemes implicitly give rise to Boolean formulas over slice functions and prove that such formulas can be simulated by separable MRCs of similar size. On a conceptual level, our paper continues the rich line of study that relates the share size of secret-sharing schemes to monotone complexity measures.
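For readers unfamiliar with the primitive, the sketch below realizes the simplest access structure, the n-party AND, by standard XOR sharing. It is a textbook construction included only to fix the definition; the paper's bounds concern arbitrary monotone access structures, where share size is the open question.

```python
import secrets
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret, n):
    """n-out-of-n XOR sharing: n-1 uniformly random shares plus their
    XOR with the secret.  All n shares together reconstruct the
    secret; any n-1 of them are jointly uniform and reveal nothing."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares):
    return reduce(xor_bytes, shares)

parts = share(b"itcs2022", 5)
```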
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.8/LIPIcs.ITCS.2022.8.pdf
Secret Sharing Schemes
Monotone Real Circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
9:1
9:23
10.4230/LIPIcs.ITCS.2022.9
article
An Asymptotically Optimal Algorithm for Maximum Matching in Dynamic Streams
Assadi, Sepehr
1
Shah, Vihan
1
Department of Computer Science, Rutgers University, Piscataway, NJ, USA
We present an algorithm for the maximum matching problem in dynamic (insertion-deletion) streams with asymptotically optimal space: for any n-vertex graph, our algorithm with high probability outputs an α-approximate matching in a single pass using O(n²/α³) bits of space.
A long line of work on the dynamic streaming matching problem has reduced the gap between space upper and lower bounds first to n^{o(1)} factors [Assadi-Khanna-Li-Yaroslavtsev; SODA 2016] and subsequently to polylog factors [Dark-Konrad; CCC 2020]. Our upper bound now matches the Dark-Konrad lower bound up to O(1) factors, thus completing this research direction.
Our approach consists of two main steps: we first (provably) identify a family of graphs, similar to the instances used in prior work to establish the lower bounds for this problem, as the only "hard" instances to focus on. These graphs include an induced subgraph that is sparse yet contains a large matching. We then design a dynamic streaming algorithm for this family of graphs which is more efficient than prior work. The key to this efficiency is a novel sketching method, which bypasses the typical loss of polylog(n)-factors in space compared to standard L₀-sampling primitives, and can be of independent interest in designing optimal algorithms for other streaming problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.9/LIPIcs.ITCS.2022.9.pdf
Graph streaming algorithms
Sketching
Maximum matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
10:1
10:20
10.4230/LIPIcs.ITCS.2022.10
article
Sublinear Time and Space Algorithms for Correlation Clustering via Sparse-Dense Decompositions
Assadi, Sepehr
1
Wang, Chen
1
https://orcid.org/0000-0003-4044-9438
Department of Computer Science, Rutgers University, Piscataway, NJ, USA
We present a new approach for solving (minimum disagreement) correlation clustering that results in sublinear algorithms with highly efficient time and space complexity for this problem. In particular, we obtain the following algorithms for n-vertex (+/-)-labeled graphs G:
- A sublinear-time algorithm that with high probability returns a constant approximation clustering of G in O(nlog²n) time assuming access to the adjacency list of the (+)-labeled edges of G (this is almost quadratically faster than even reading the input once). Previously, no sublinear-time algorithm was known for this problem with any multiplicative approximation guarantee.
- A semi-streaming algorithm that with high probability returns a constant approximation clustering of G in O(n log n) space and a single pass over the edges of the graph G (this memory is almost quadratically smaller than the input size). Previously, no single-pass algorithm with o(n²) space was known for this problem with any approximation guarantee.
The main ingredient of our approach is a novel connection to sparse-dense graph decompositions that are used extensively in the graph coloring literature. To our knowledge, this connection is the first application of these decompositions beyond graph coloring, and in particular for the correlation clustering problem, and can be of independent interest.
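For contrast with the sublinear algorithms above, the classical randomized Pivot algorithm of Ailon, Charikar, and Newman gives a simple expected constant-factor approximation for minimizing disagreements on complete (+/-)-labeled graphs. It is sketched here only to make the objective concrete; it is not the sparse-dense approach of the paper.

```python
import random

def pivot_clustering(n, plus_edges, seed=0):
    """Randomized Pivot: repeatedly pick a random unclustered vertex
    and cluster it together with its unclustered (+)-neighbours."""
    rng = random.Random(seed)
    unclustered = set(range(n))
    clusters = []
    while unclustered:
        p = rng.choice(sorted(unclustered))
        cluster = {p} | {u for u in unclustered
                         if frozenset((p, u)) in plus_edges}
        clusters.append(cluster)
        unclustered -= cluster
    return clusters

def disagreements(n, plus_edges, clusters):
    """(+) edges cut between clusters plus (-) pairs kept inside."""
    label = {v: i for i, c in enumerate(clusters) for v in c}
    bad = 0
    for u in range(n):
        for v in range(u + 1, n):
            same = label[u] == label[v]
            is_plus = frozenset((u, v)) in plus_edges
            bad += 1 if (is_plus != same) else 0
    return bad

# Two disjoint (+)-cliques {0,1,2} and {3,4}; Pivot recovers them
# exactly, with zero disagreements, whatever pivots are chosen.
plus = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (3, 4)]}
clusters = pivot_clustering(5, plus)
```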
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.10/LIPIcs.ITCS.2022.10.pdf
Correlation Clustering
Sublinear Algorithms
Semi-streaming Algorithms
Sublinear time Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
11:1
11:2
10.4230/LIPIcs.ITCS.2022.11
article
Multi-Channel Bayesian Persuasion
Babichenko, Yakov
1
Talgam-Cohen, Inbal
1
Xu, Haifeng
2
Zabarnyi, Konstantin
1
Technion - Israel Institute of Technology, Haifa, Israel
University of Virginia, Charlottesville, VA, USA
The celebrated Bayesian persuasion model considers strategic communication between an informed agent (the sender) and uninformed decision makers (the receivers). The current rapidly-growing literature assumes a dichotomy: either the sender is powerful enough to communicate separately with each receiver (a.k.a. private persuasion), or she cannot communicate separately at all (a.k.a. public persuasion). We propose a model that smoothly interpolates between the two, by introducing a natural multi-channel communication structure in which each receiver observes a subset of the sender’s communication channels. This captures, e.g., receivers on a network, where information spillover is almost inevitable.
Our main result is a complete characterization specifying when one communication structure is better for the sender than another, in the sense of yielding higher optimal expected utility universally over all prior distributions and utility functions. The characterization is based on a simple pairwise relation among receivers - one receiver information-dominates another if he observes at least the same channels. We prove that a communication structure M₁ is (weakly) better than M₂ if and only if every information-dominating pair of receivers in M₁ is also such in M₂. This result holds in the most general model of Bayesian persuasion in which receivers may have externalities - that is, the receivers' actions affect each other. The proof is cryptographic-inspired and it has a close conceptual connection to secret sharing protocols.
As a surprising consequence of the main result, the sender can implement private Bayesian persuasion (which is the best communication structure for the sender) for k receivers using only O(log k) communication channels, rather than the k channels of the naive implementation. We provide an implementation that matches the information-theoretic lower bound on the number of channels - not only asymptotically, but exactly. Moreover, the main result immediately implies some results of [Kerman and Tenev, 2021] on persuading receivers arranged in a network such that each receiver observes both the signals sent to him and those sent to his neighbours in the network.
We further provide an additive FPTAS for an optimal sender’s signaling scheme when the number of states of nature is constant, the sender has an additive utility function and the graph of the information-dominating pairs of receivers is a directed forest. We focus on a constant number of states, as even for the special case of public persuasion and additive sender’s utility, it was shown by [Shaddin Dughmi and Haifeng Xu, 2017] that one can achieve neither an additive PTAS nor a polynomial-time constant-factor optimal sender’s utility approximation (unless P=NP). We leave for future research studying exact tractability of forest communication structures, as well as generalizing our result to more families of sender’s utility functions and communication structures.
Finally, we prove that finding an optimal signaling scheme under multi-channel persuasion is computationally hard for a general family of sender’s utility functions - separable supermajority functions, which are specified by choosing a partition of the set of receivers and summing supermajority functions corresponding to different elements of the partition, multiplied by some non-negative constants. Note that one can easily deduce from [Emir Kamenica and Matthew Gentzkow, 2011] and [Itai Arieli and Yakov Babichenko, 2019] that finding an optimal signaling scheme for such utility functions is computationally tractable for both public and private persuasion. This difference illustrates both the conceptual and the computational hardness of general multi-channel persuasion.
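One plausible way to see why O(log k) channels can suffice for the private-persuasion implementation above (an illustrative assumption, not necessarily the paper's exact scheme) is to give the k receivers pairwise incomparable channel sets, so that no receiver information-dominates another: distinct (m//2)-subsets of m channels form such an antichain, and since C(m, m//2) grows like 2^m/√m, m = O(log k) channels already provide k of them.

```python
from itertools import combinations
from math import comb

def channel_assignment(k):
    """Assign each of k receivers a distinct (m//2)-subset of m
    channels, with m minimal such that C(m, m//2) >= k.  Same-size
    distinct sets are pairwise incomparable, so no receiver
    information-dominates another.  (Hypothetical illustration; the
    paper's construction matching the lower bound may differ.)"""
    m = 1
    while comb(m, m // 2) < k:
        m += 1
    return m, list(combinations(range(m), m // 2))[:k]

m, subsets = channel_assignment(6)   # C(4, 2) = 6, so m = 4 channels
```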
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.11/LIPIcs.ITCS.2022.11.pdf
Algorithmic game theory
Bayesian persuasion
Private Bayesian persuasion
Public Bayesian persuasion
Secret sharing
Networks
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
12:1
12:14
10.4230/LIPIcs.ITCS.2022.12
article
Randomness Extraction from Somewhat Dependent Sources
Ball, Marshall
1
https://orcid.org/0000-0002-4236-3710
Goldreich, Oded
2
https://orcid.org/0000-0002-4329-135X
Malkin, Tal
1
Computer Science Department, Columbia University, New York, NY, USA
Faculty of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, Israel
We initiate a comprehensive study of randomness extraction from two somewhat dependent sources of defective randomness. Specifically, we present three natural models, which are based on different natural perspectives on the notion of bounded dependency between a pair of distributions. Going from the most restricted model to the least restricted one, our models and main results are as follows.
1) Bounded dependence as bounded coordination: Here we consider pairs of distributions that arise from independent random processes that are applied to the outcome of a single global random source, which may be viewed as a mechanism of coordination (which is adversarial from our perspective).
We show that if the min-entropy of each of the two outcomes is larger than the length of the global source, then extraction is possible (and is, in fact, feasible). We stress that the extractor has no access to the global random source nor to the internal randomness that the two processes use, but rather gets only the two dependent outcomes.
This model is equivalent to a setting in which the two outcomes are generated by two independent sources, but then each outcome is modified based on limited leakage (equiv., communication) between the two sources.
(Here this leakage is measured in terms of the number of bits that were communicated, but in the next model we consider the actual influence of this leakage.)
2) Bounded dependence as bounded cross influence: Here we consider pairs of outcomes that are produced by a pair of sources such that each source has bounded (worst-case) influence on the outcome of the other source. We stress that the extractor has no access to the randomness that the two processes use, but rather gets only the two dependent outcomes.
We show that, while (proper) randomness extraction is impossible in this case, randomness condensing is possible and feasible; specifically, the randomness deficiency of condensing is linear in our measure of cross influence, and this upper bound is tight. We also discuss various applications of such condensers, including for cryptography, standard randomized algorithms, and sublinear-time algorithms, while pointing out their benefit over using a seeded (single-source) extractor.
3) Bounded dependence as bounded mutual information: Due to the average-case nature of mutual information, here there is a trade-off between the error (or deviation) probability of the extracted output and its randomness deficiency. Loosely speaking, for joint distributions of mutual information t, we can condense with randomness deficiency O(t/ε) and error ε, and this trade-off is optimal. All positive results are obtained by using a standard two-source extractor (or condenser) as a black-box.
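The black-box two-source extractor mentioned above can be instantiated, for example, by the classical Chor-Goldreich inner-product extractor, which outputs the GF(2) inner product of the two source strings:

```python
def inner_product_extract(x, y):
    """One-bit Chor-Goldreich two-source extractor: the inner product
    of the two source strings over GF(2).  If x and y come from
    independent sources with sufficient min-entropy, the output bit is
    close to uniform; the paper uses such extractors (and condensers)
    as black boxes in its dependent-source models."""
    assert len(x) == len(y)
    return sum(a & b for a, b in zip(x, y)) & 1

bit = inner_product_extract([1, 0, 1, 1], [1, 1, 0, 1])   # 1+0+0+1 = 2, so bit = 0
```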
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.12/LIPIcs.ITCS.2022.12.pdf
Randomness Extraction
min-entropy
mutual information
two-source extractors
two-source condenser
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
13:1
13:22
10.4230/LIPIcs.ITCS.2022.13
article
Prefix Discrepancy, Smoothed Analysis, and Combinatorial Vector Balancing
Bansal, Nikhil
1
Jiang, Haotian
2
Meka, Raghu
3
Singla, Sahil
4
Sinha, Makrand
5
6
University of Michigan, Ann Arbor, MI, USA
University of Washington, Seattle, WA, USA
University of California, Los Angeles, CA, USA
Georgia Institute of Technology, Atlanta, GA, USA
Simons Institute, Berkeley, CA, USA
University of California, Berkeley, CA, USA
A well-known result of Banaszczyk in discrepancy theory concerns the prefix discrepancy problem (also known as the signed series problem): given a sequence of T unit vectors in ℝ^d, can one choose ± signs for each of them so that the signed sum vector along every prefix has a small 𝓁_∞-norm? This problem is central to proving upper bounds for the Steinitz problem, and the popular Komlós problem is the special case where one is only concerned with the final signed sum vector instead of all prefixes.
Banaszczyk gave an O(√{log d+ log T}) bound for the prefix discrepancy problem. We investigate the tightness of Banaszczyk’s bound and consider natural generalizations of prefix discrepancy:
- We first consider a smoothed analysis setting, where a small amount of additive noise perturbs the input vectors. We show an exponential improvement in T compared to Banaszczyk’s bound. Using a primal-dual approach and a careful chaining argument, we show that one can achieve a bound of O(√{log d+ log log T}) with high probability in the smoothed setting. Moreover, this smoothed analysis bound is the best possible without further improvement on Banaszczyk’s bound in the worst case.
- We also introduce a generalization of the prefix discrepancy problem to arbitrary DAGs. Here, vertices correspond to unit vectors, and the discrepancy constraints correspond to paths on a DAG on T vertices - prefix discrepancy is precisely captured when the DAG is a simple path. We show that an analog of Banaszczyk’s O(√{log d+ log T}) bound continues to hold in this setting for adversarially given unit vectors and that the √{log T} factor is unavoidable for DAGs. We also show that unlike for prefix discrepancy, the dependence on T cannot be improved significantly in the smoothed case for DAGs.
- We conclude by exploring a more general notion of vector balancing, which we call combinatorial vector balancing. In this problem, the discrepancy constraints are generalized from paths of a DAG to an arbitrary set system. We obtain near-optimal bounds in this setting, up to poly-logarithmic factors.
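To make the prefix discrepancy objective from this abstract concrete, here is a minimal brute-force sketch (not any algorithm from the paper — exhaustive search is exponential in T, and the function names are illustrative): it evaluates the maximum 𝓁_∞-norm over all prefix signed sums and searches every ±1 signing.

```python
from itertools import product

def prefix_discrepancy(vectors, signs):
    """Max l_inf norm of the signed sum over all prefixes."""
    d = len(vectors[0])
    prefix = [0.0] * d
    worst = 0.0
    for v, s in zip(vectors, signs):
        prefix = [p + s * x for p, x in zip(prefix, v)]
        worst = max(worst, max(abs(p) for p in prefix))
    return worst

def best_prefix_discrepancy(vectors):
    """Exhaustively try all 2^T signings (toy sizes only)."""
    T = len(vectors)
    return min(prefix_discrepancy(vectors, signs)
               for signs in product((1, -1), repeat=T))
```

For example, for the standard basis vectors (1,0), (1,0), (0,1), (0,1), alternating signs keep every prefix in the unit cube, so the optimum is 1.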
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.13/LIPIcs.ITCS.2022.13.pdf
Prefix discrepancy
smoothed analysis
combinatorial vector balancing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
14:1
14:21
10.4230/LIPIcs.ITCS.2022.14
article
Classical Algorithms and Quantum Limitations for Maximum Cut on High-Girth Graphs
Barak, Boaz
1
https://orcid.org/0000-0002-4053-8927
Marwaha, Kunal
2
https://orcid.org/0000-0001-9084-6971
Harvard University, Cambridge, MA, USA
Berkeley Center for Quantum Information and Computation, Berkeley, CA, USA
We study the performance of local quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) for the maximum cut problem, and their relationship to that of randomized classical algorithms.
1) We prove that every (quantum or classical) one-local algorithm (where the value of a vertex only depends on its own and its neighbors' states) achieves on D-regular graphs of girth > 5 a maximum cut of at most 1/2 + C/√D for C = 1/√2 ≈ 0.7071. This is the first such result showing that one-local algorithms achieve a value that is bounded away from the true optimum for random graphs, which is 1/2 + P_*/√D + o(1/√D) for P_* ≈ 0.7632 [Dembo et al., 2017].
2) We show that there is a classical k-local algorithm that achieves a value of 1/2 + C/√D - O(1/√k) for D-regular graphs of girth > 2k+1, where C = 2/π ≈ 0.6366. This is an algorithmic version of the existential bound of [Lyons, 2017] and is related to the algorithm of [Aizenman et al., 1987] (ALR) for the Sherrington-Kirkpatrick model. This bound is better than that achieved by the one-local and two-local versions of QAOA on high-girth graphs [M. B. Hastings, 2019; Marwaha, 2021].
3) Through computational experiments, we give evidence that the ALR algorithm achieves better performance than constant-locality QAOA for random D-regular graphs, as well as other natural instances, including graphs that do have short cycles.
While our theoretical bounds require the locality and girth assumptions, our experimental work suggests that it could be possible to extend them beyond these constraints. This points at the tantalizing possibility that O(1)-local quantum maximum-cut algorithms might be pointwise dominated by polynomial-time classical algorithms, in the sense that there is a classical algorithm outputting cuts of equal or better quality on every possible instance. This is in contrast to the evidence that polynomial-time algorithms cannot simulate the probability distributions induced by local quantum algorithms.
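The cut fractions compared throughout this abstract can be pinned down on toy instances with a brute-force sketch (purely illustrative — this is exhaustive search, not the local algorithms studied in the paper):

```python
from itertools import product

def max_cut_fraction(n, edges):
    """Exact maximum cut fraction by enumerating all 2^n bipartitions
    (feasible only for small n)."""
    best = 0
    for assign in product((0, 1), repeat=n):
        cut = sum(1 for u, v in edges if assign[u] != assign[v])
        best = max(best, cut)
    return best / len(edges)
```

On a 5-cycle (a 2-regular graph with an odd cycle), at most 4 of the 5 edges can be cut, giving fraction 0.8.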
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.14/LIPIcs.ITCS.2022.14.pdf
approximation algorithms
QAOA
maximum cut
local distributions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
15:1
15:13
10.4230/LIPIcs.ITCS.2022.15
article
Indistinguishability Obfuscation of Null Quantum Circuits and Applications
Bartusek, James
1
Malavolta, Giulio
2
University of California, Berkeley, CA, USA
Max Planck Institute for Security and Privacy, Bochum, Germany
We study the notion of indistinguishability obfuscation for null quantum circuits (quantum null-iO). We present a construction assuming:
- The quantum hardness of learning with errors (LWE).
- Post-quantum indistinguishability obfuscation for classical circuits.
- A notion of "dual-mode" classical verification of quantum computation (CVQC). We give evidence that our notion of dual-mode CVQC exists by proposing a scheme that is secure assuming LWE in the quantum random oracle model (QROM).
Then we show how quantum null-iO enables a series of new cryptographic primitives that, prior to our work, were unknown to exist even making heuristic assumptions. Among others, we obtain the first witness encryption scheme for QMA, the first publicly verifiable non-interactive zero-knowledge (NIZK) scheme for QMA, and the first attribute-based encryption (ABE) scheme for BQP.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.15/LIPIcs.ITCS.2022.15.pdf
Obfuscation
Witness Encryption
Classical Verification of Quantum Computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
16:1
16:23
10.4230/LIPIcs.ITCS.2022.16
article
An Efficient Semi-Streaming PTAS for Tournament Feedback Arc Set with Few Passes
Baweja, Anubhav
1
Jia, Justin
1
Woodruff, David P.
1
Carnegie Mellon University, Pittsburgh, PA, USA
We present the first semi-streaming polynomial-time approximation scheme (PTAS) for the minimum feedback arc set problem on directed tournaments in a small number of passes. Namely, we obtain a (1 + ε)-approximation in time O(poly(n) ⋅ 2^{poly(1/ε)}), with p passes, in n^{1+1/p} ⋅ poly((log n)/ε) space. The only previous algorithm with this pass/space trade-off gave a 3-approximation (SODA, 2020), and other polynomial-time algorithms which achieved a (1+ε)-approximation did so with quadratic memory or with a linear number of passes. We also present a new time/space trade-off for 1-pass algorithms that solve the tournament feedback arc set problem. This problem has several applications in machine learning such as creating linear classifiers and doing Bayesian inference. We also provide several additional algorithms and lower bounds for related streaming problems on directed graphs, which is a largely unexplored territory.
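On a tournament, the minimum feedback arc set equals the minimum number of arcs pointing backwards under the best linear order of the vertices. A tiny exact sketch of that equivalent formulation (exhaustive over permutations — a toy baseline, not the streaming PTAS above):

```python
from itertools import permutations

def min_feedback_arcs(n, arcs):
    """Exact minimum feedback arc set size on a tournament:
    try every linear order and count arcs going backwards."""
    best = len(arcs)
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        back = sum(1 for u, v in arcs if pos[u] > pos[v])
        best = min(best, back)
    return best
```

A cyclic triangle needs one arc removed; a transitive tournament needs none.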
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.16/LIPIcs.ITCS.2022.16.pdf
Minimum Feedback Arc Set
Tournament Graphs
Approximation Algorithms
Semi-streaming Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
17:1
17:24
10.4230/LIPIcs.ITCS.2022.17
article
FPT Algorithms for Finding Near-Cliques in c-Closed Graphs
Behera, Balaram
1
Husić, Edin
2
https://orcid.org/0000-0002-6708-5112
Jain, Shweta
3
Roughgarden, Tim
4
Seshadhri, C.
5
Georgia Institute of Technology, Atlanta, GA, USA
London School of Economics and Political Science, UK
University of Illinois, Urbana-Champaign, IL, USA
Columbia University, New York, NY, USA
University of California, Santa Cruz, CA, USA
Finding large cliques or cliques missing a few edges is a fundamental algorithmic task in the study of real-world graphs, with applications in community detection, pattern recognition, and clustering. A number of effective backtracking-based heuristics for these problems have emerged from recent empirical work in social network analysis. Given the NP-hardness of variants of clique counting, these results raise a challenge for beyond worst-case analysis of these problems. Inspired by the triadic closure of real-world graphs, Fox et al. (SICOMP 2020) introduced the notion of c-closed graphs and proved that maximal clique enumeration is fixed-parameter tractable with respect to c.
In practice, due to noise in data, one wishes to actually discover "near-cliques", which can be characterized as cliques with a sparse subgraph removed. In this work, we prove that many different kinds of maximal near-cliques can be enumerated in polynomial time (and FPT in c) for c-closed graphs. We study various established notions of such substructures, including k-plexes, complements of bounded-degeneracy and bounded-treewidth graphs. Interestingly, our algorithms follow relatively simple backtracking procedures, analogous to what is done in practice. Our results underscore the significance of the c-closed graph class for theoretical understanding of social network analysis.
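For reference, the c-closure parameter from Fox et al. can be computed directly: a graph is c-closed if every non-adjacent pair of vertices has fewer than c common neighbors. A minimal sketch (illustrative names, adjacency given as a list of neighbor sets):

```python
from itertools import combinations

def closure_number(adj):
    """Smallest c such that the graph is c-closed, i.e. every
    non-adjacent pair has fewer than c common neighbors."""
    n = len(adj)
    worst = 0
    for u, v in combinations(range(n), 2):
        if v not in adj[u]:
            worst = max(worst, len(adj[u] & adj[v]))
    return worst + 1
```

A 4-cycle has non-adjacent pairs sharing two common neighbors, so it is 3-closed but not 2-closed; a triangle, having no non-adjacent pairs, is 1-closed.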
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.17/LIPIcs.ITCS.2022.17.pdf
c-closed graph
dense subgraphs
FPT algorithm
enumeration algorithm
k-plex
Moon-Moser theorem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
18:1
18:23
10.4230/LIPIcs.ITCS.2022.18
article
What Does Dynamic Optimality Mean in External Memory?
Bender, Michael A.
1
Farach-Colton, Martín
2
Kuszmaul, William
3
Stony Brook University, Stony Brook, NY, USA
Rutgers University, New Brunswick, NJ, USA
MIT, Cambridge, MA, USA
A data structure A is said to be dynamically optimal over a class of data structures 𝒞 if A is constant-competitive with every data structure C ∈ 𝒞. Much of the research on binary search trees in the past forty years has focused on studying dynamic optimality over the class of binary search trees that are modified via rotations (and indeed, the question of whether splay trees are dynamically optimal has gained notoriety as the so-called dynamic-optimality conjecture). Recently, researchers have extended this to consider dynamic optimality over certain classes of external-memory search trees. In particular, Demaine, Iacono, Koumoutsos, and Langerman propose a class of external-memory trees that support a notion of tree rotations, and then give an elegant data structure, called the Belga B-tree, that is within an O(log log N)-factor of being dynamically optimal over this class.
In this paper, we revisit the question of how dynamic optimality should be defined in external memory. A defining characteristic of external-memory data structures is that there is a stark asymmetry between queries and inserts/updates/deletes: by making the former slightly asymptotically slower, one can make the latter significantly asymptotically faster (even allowing for operations with sub-constant amortized I/Os). This asymmetry makes it so that rotation-based search trees are not optimal (or even close to optimal) in insert/update/delete-heavy external-memory workloads. To study dynamic optimality for such workloads, one must consider a different class of data structures.
The natural class of data structures to consider are what we call buffered-propagation trees. Such trees can adapt dynamically to the locality properties of an input sequence in order to optimize the interactions between different inserts/updates/deletes and queries. We also present a new form of beyond-worst-case analysis that allows us to formally study a continuum between static and dynamic optimality. Finally, we give a novel data structure, called the Jεllo Tree, that is statically optimal and that achieves dynamic optimality for a large natural class of inputs defined by our beyond-worst-case analysis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.18/LIPIcs.ITCS.2022.18.pdf
Dynamic optimality
external memory
buffer propagation
search trees
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
19:1
19:12
10.4230/LIPIcs.ITCS.2022.19
article
Improved Hardness of BDD and SVP Under Gap-(S)ETH
Bennett, Huck
1
Peikert, Chris
2
Tang, Yi
2
Oregon State University, Corvallis, OR, USA
University of Michigan, Ann Arbor, MI, USA
We show improved fine-grained hardness of two key lattice problems in the 𝓁_p norm: Bounded Distance Decoding to within an α factor of the minimum distance (BDD_{p, α}) and the (decisional) γ-approximate Shortest Vector Problem (GapSVP_{p,γ}), assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH). Specifically, we show:
1) For all p ∈ [1, ∞), there is no 2^{o(n)}-time algorithm for BDD_{p, α} for any constant α > α_kn, where α_kn = 2^{-c_kn} < 0.98491 and c_kn is the 𝓁₂ kissing-number constant, unless non-uniform Gap-ETH is false.
2) For all p ∈ [1, ∞), there is no 2^{o(n)}-time algorithm for BDD_{p, α} for any constant α > α^‡_p, where α^‡_p is explicit and satisfies α^‡_p = 1 for 1 ≤ p ≤ 2, α^‡_p < 1 for all p > 2, and α^‡_p → 1/2 as p → ∞, unless randomized Gap-ETH is false.
3) For all p ∈ [1, ∞) ⧵ 2 ℤ and all C > 1, there is no 2^{n/C}-time algorithm for BDD_{p, α} for any constant α > α^†_{p, C}, where α^†_{p, C} is explicit and satisfies α^†_{p, C} → 1 as C → ∞ for any fixed p ∈ [1, ∞), unless non-uniform Gap-SETH is false.
4) For all p > p₀ ≈ 2.1397, p ∉ 2ℤ, and all C > C_p, there is no 2^{n/C}-time algorithm for GapSVP_{p, γ} for some constant γ > 1, where C_p > 1 is explicit and satisfies C_p → 1 as p → ∞, unless randomized Gap-SETH is false.
Our results for BDD_{p, α} improve and extend work by Aggarwal and Stephens-Davidowitz (STOC, 2018) and Bennett and Peikert (CCC, 2020). Specifically, the quantities α_kn and α^‡_p (respectively, α^†_{p,C}) significantly improve upon the corresponding quantity α_p^* (respectively, α_{p,C}^*) of Bennett and Peikert for small p (but arise from somewhat stronger assumptions). In particular, Item 1 improves the smallest value of α for which BDD_{p, α} is known to be exponentially hard in the Euclidean norm (p = 2) to an explicit constant α < 1 for the first time under a general-purpose complexity assumption. Items 1 and 3 crucially use the recent breakthrough result of Vlăduţ (Moscow Journal of Combinatorics and Number Theory, 2019), which showed an explicit exponential lower bound on the lattice kissing number. Finally, Item 4 answers a natural question left open by Aggarwal, Bennett, Golovnev, and Stephens-Davidowitz (SODA, 2021), which showed an analogous result for the Closest Vector Problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.19/LIPIcs.ITCS.2022.19.pdf
lattices
lattice-based cryptography
fine-grained complexity
Bounded Distance Decoding
Shortest Vector Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
20:1
20:9
10.4230/LIPIcs.ITCS.2022.20
article
Mixing of 3-Term Progressions in Quasirandom Groups
Bhangale, Amey
1
Harsha, Prahladh
2
https://orcid.org/0000-0002-2739-5642
Roy, Sourya
1
University of California, Riverside, CA, USA
Tata Institute of Fundamental Research, Mumbai, India
In this paper, we show the mixing of three-term progressions (x, xg, xg²) in every finite quasirandom group, fully answering a question of Gowers. More precisely, we show that for any D-quasirandom group G and any three sets A₁, A₂, A₃ ⊂ G, we have
|Pr_{x,y∼ G}[x ∈ A₁, xy ∈ A₂, xy² ∈ A₃] - ∏_{i=1}³ Pr_{x∼ G}[x ∈ A_i]| ≤ (2/√D)^{1/4}.
Prior to this, Tao answered this question when the underlying quasirandom group is SL_{d}(𝔽_q). Subsequently, Peluse extended the result to all non-abelian finite simple groups. In this work, we show that a slight modification of Peluse’s argument is sufficient to fully resolve Gowers' quasirandom conjecture for 3-term progressions. Surprisingly, unlike the proofs of Tao and Peluse, our proof is elementary and only uses basic facts from non-abelian Fourier analysis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.20/LIPIcs.ITCS.2022.20.pdf
Quasirandom groups
3-term arithmetic progressions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
21:1
21:21
10.4230/LIPIcs.ITCS.2022.21
article
Max-3-Lin over Non-Abelian Groups with Universal Factor Graphs
Bhangale, Amey
1
Stanković, Aleksa
2
https://orcid.org/0000-0002-8416-8665
University of California, Riverside, CA, USA
Department of Mathematics, KTH Royal Institute of Technology, Sweden
The factor graph of an instance of a constraint satisfaction problem (CSP) with n variables and m constraints is the bipartite graph between [m] and [n] describing which variable appears in which constraint. Thus, an instance of a CSP is completely defined by its factor graph and the list of predicates. We show inapproximability of Max-3-LIN over non-abelian groups (both in the perfect completeness case and in the imperfect completeness case), with the same inapproximability factor as in the general case, even when the factor graph is fixed.
Along the way, we also show that these optimal hardness results hold even when we restrict the linear equations in the Max-3-LIN instances to the form x⋅ y⋅ z = g, where x,y,z are the variables and g is a group element. We use representation theory and Fourier analysis over non-abelian groups to analyze the reductions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.21/LIPIcs.ITCS.2022.21.pdf
Universal factor graphs
linear equations
non-abelian groups
hardness of approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
22:1
22:17
10.4230/LIPIcs.ITCS.2022.22
article
Separating the NP-Hardness of the Grothendieck Problem from the Little-Grothendieck Problem
Bhattiprolu, Vijay
1
2
Lee, Euiwoong
3
Tulsiani, Madhur
4
Institute for Advanced Study, Princeton, NJ, USA
Princeton University, NJ, USA
University of Michigan, Ann-Arbor, USA
Toyota Technological Institute Chicago, IL, USA
Grothendieck’s inequality [Grothendieck, 1953] states that there is an absolute constant K > 1 such that for any n× n matrix A,
‖A‖_{∞→1} := max_{s,t ∈ {± 1}ⁿ}∑_{i,j} A[i,j]⋅s(i)⋅t(j) ≥ 1/K ⋅ max_{u_i,v_j ∈ S^{n-1}}∑_{i,j} A[i,j]⋅⟨u_i,v_j⟩.
In addition to having a tremendous impact on Banach space theory, this inequality has found applications in several unrelated fields like quantum information, regularity partitioning, communication complexity, etc. Let K_G (known as Grothendieck’s constant) denote the smallest constant K above. Grothendieck’s inequality implies that a natural semidefinite programming relaxation obtains a constant factor approximation to ‖A‖_{∞ → 1}. The exact value of K_G is still unknown: the best lower bound (1.67…) is due to Reeds, and the best upper bound (1.78…) is due to Braverman, Makarychev, Makarychev and Naor [Braverman et al., 2013]. In contrast, the little Grothendieck inequality states that under the assumption that A is PSD, the constant K above can be improved to π/2, and moreover this is tight.
The inapproximability of ‖A‖_{∞ → 1} has been studied in several papers, culminating in a tight UGC-based hardness result due to Raghavendra and Steurer (remarkably, they achieve this without knowing the value of K_G). Briët, Regev and Saket [Briët et al., 2015] proved tight NP-hardness of approximating the little Grothendieck problem within π/2, based on a framework by Guruswami, Raghavendra, Saket and Wu [Guruswami et al., 2016] for bypassing UGC for geometric problems. This also remained the best known NP-hardness for the general Grothendieck problem due to the nature of the Guruswami et al. framework, which utilized a projection operator onto the degree-1 Fourier coefficients of long code encodings; this projection naturally yielded a PSD matrix A.
We show how to extend the above framework to go beyond the degree-1 Fourier coefficients, using the global structure of optimal solutions to the Grothendieck problem. As a result, we obtain a separation between the NP-hardness results for the two problems, obtaining an inapproximability result for the Grothendieck problem, of a factor π/2 + ε₀ for a fixed constant ε₀ > 0.
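The quantity ‖A‖_{∞→1} defined above can be evaluated exactly on tiny matrices by enumerating sign vectors — a purely illustrative sketch (exponential in n; the SDP relaxation and hardness results in the abstract are about avoiding exactly this enumeration):

```python
from itertools import product

def inf_to_one_norm(A):
    """||A||_{inf->1} = max over s, t in {+-1}^n of sum A[i][j]*s_i*t_j."""
    n = len(A)
    best = float('-inf')
    for s in product((1, -1), repeat=n):
        for t in product((1, -1), repeat=n):
            val = sum(A[i][j] * s[i] * t[j]
                      for i in range(n) for j in range(n))
            best = max(best, val)
    return best
```

For the 2×2 identity the optimum is 2 (match s to t coordinate-wise); for [[1,-1],[-1,1]] the sum factors as (s₁-s₂)(t₁-t₂), giving 4.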
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.22/LIPIcs.ITCS.2022.22.pdf
Grothendieck’s Inequality
Hardness of Approximation
Semidefinite Programming
Optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
23:1
23:18
10.4230/LIPIcs.ITCS.2022.23
article
Fixed-Parameter Sensitivity Oracles
Bilò, Davide
1
https://orcid.org/0000-0003-3169-4300
Casel, Katrin
2
https://orcid.org/0000-0001-6146-8684
Choudhary, Keerti
3
https://orcid.org/0000-0002-8289-5930
Cohen, Sarel
4
https://orcid.org/0000-0003-4578-1245
Friedrich, Tobias
2
https://orcid.org/0000-0003-0076-6308
Lagodzinski, J.A. Gregor
2
https://orcid.org/0000-0002-8771-1870
Schirneck, Martin
2
Wietheger, Simon
2
https://orcid.org/0000-0002-0734-0708
Department of Humanities and Social Sciences, University of Sassari, Italy
Hasso Plattner Institute, University of Potsdam, Germany
Department of Computer Science and Engineering, Indian Institute of Technology Delhi, India
School of Computer Science, Tel-Aviv-Yaffo Academic College, Israel
We combine ideas from distance sensitivity oracles (DSOs) and fixed-parameter tractability (FPT) to design sensitivity oracles for FPT graph problems. An oracle with sensitivity f for an FPT problem Π on a graph G with parameter k preprocesses G in time O(g(f,k) ⋅ poly(n)). When queried with a set F of at most f edges of G, the oracle reports the answer to Π - with the same parameter k - on the graph G-F, i.e., G deprived of F. The oracle should answer queries significantly faster than merely running the best-known FPT algorithm on G-F from scratch.
We design sensitivity oracles for the k-Path and the k-Vertex Cover problem. Our first oracle for k-Path has size O(k^{f+1}) and query time O(f min{f, log(f) + k}). We use a technique inspired by the work of Weimann and Yuster [FOCS 2010, TALG 2013] on distance sensitivity problems to reduce the space to O(({f+k}/f)^f ({f+k}/k)^k fk⋅log(n)) at the expense of increasing the query time to O(({f+k}/f)^f ({f+k}/k)^k f min{f,k}⋅log(n)). Both oracles can be modified to handle vertex-failures, but we need to replace k with 2k in all the claimed bounds.
Regarding k-Vertex Cover, we design three oracles offering different trade-offs between the size and the query time. The first oracle takes O(3^{f+k}) space and has O(2^f) query time, the second one has a size of O(2^{f+k²+k}) and a query time of O(f+k²); finally, the third one takes O(fk+k²) space and can be queried in time O(1.2738^k + f). All our oracles are computable in time (at most) proportional to their size and the time needed to detect a k-path or k-vertex cover, respectively. We also provide an interesting connection between k-Vertex Cover and the fault-tolerant shortest path problem, by giving a DSO of size O(poly(f,k) ⋅ n) with query time in O(poly(f,k)), where k is the size of a vertex cover.
Following our line of research connecting fault-tolerant FPT and shortest paths problems, we introduce parameterization to the computation of distance preservers. We study the problem of constructing, given a directed unweighted graph with a fixed source s and parameters f and k, a polynomial-sized oracle that efficiently reports, for any target vertex v and set F of at most f edges, whether the distance from s to v increases by at most an additive term of k in G-F. The oracle size is O(2^k k²⋅n), while the time needed to answer a query is O(2^k f^ω k^ω), where ω < 2.373 is the matrix multiplication exponent. The second problem we study is about the construction of bounded-stretch fault-tolerant preservers. We construct a subgraph with O(2^{fk+f+k} k ⋅ n) edges that preserves those s-v-distances that do not increase by more than k upon failure of F. This improves significantly over the Õ(f n^{2-1/(2^f)}) bound in the unparameterized case by Bodwin et al. [ICALP 2017].
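As background for the k-Vertex Cover oracles above, the textbook 2^k branching algorithm (note: this is the classic bounded search tree, not the optimized O(1.2738^k)-time routine the abstract's third oracle uses) decides k-Vertex Cover by picking any uncovered edge and branching on which endpoint joins the cover:

```python
def has_vertex_cover(edges, k):
    """Decide whether the edge set admits a vertex cover of size <= k,
    via classic 2^k branching on an uncovered edge."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    # Branch 1: put u in the cover; Branch 2: put v in the cover.
    rest_u = [e for e in edges if u not in e]
    rest_v = [e for e in edges if v not in e]
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)
```

A triangle needs two vertices to cover all edges, while a 2-edge path is covered by its middle vertex alone.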
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.23/LIPIcs.ITCS.2022.23.pdf
data structures
distance preservers
distance sensitivity oracles
fault tolerance
fixed-parameter tractability
k-path
vertex cover
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
24:1
24:22
10.4230/LIPIcs.ITCS.2022.24
article
Local Access to Random Walks
Biswas, Amartya Shankha
1
Pyne, Edward
2
Rubinfeld, Ronitt
1
CSAIL, MIT, Cambridge, MA, USA
Harvard University, Cambridge, MA, USA
For a graph G on n vertices, naively sampling the position of a random walk at time t requires work Ω(t). We desire local access algorithms supporting position_G(t) queries, which return the position of a random walk from some fixed start vertex s at time t, where the joint distribution of returned positions is 1/poly(n)-close in 𝓁₁ distance to that of a uniformly random walk.
We first give an algorithm for local access to random walks on a given undirected d-regular graph with Õ(√n/(1-λ)) runtime per query, where λ is the second-largest eigenvalue of the random walk matrix of the graph in absolute value. Since random d-regular graphs G(n,d) are expanders with high probability, this gives an Õ(√n) algorithm for a graph drawn from G(n,d) whp, which improves on the naive method for small numbers of queries.
We then prove that no algorithm with subconstant error given probe access to an input d-regular graph can have runtime better than Ω(√n/log(n)) per query in expectation when the input graph is drawn from G(n,d), obtaining a nearly matching lower bound. We further show an Ω(n^{1/4}) runtime per query lower bound even with an oblivious adversary (i.e. when the query sequence is fixed in advance).
We then show that for families of graphs with additional group theoretic structure, dramatically better results can be achieved. We give local access to walks on small-degree abelian Cayley graphs, including cycles and hypercubes, with runtime polylog(n) per query. This also allows for efficient local access to walks on polylog degree expanders. We show that our techniques apply to graphs with high degree by extending our results to graphs constructed using the tensor product (giving fast local access to walks on degree n^ε graphs for any ε ∈ (0,1]) and the Cartesian product.
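The Ω(t)-work baseline that the abstract's local-access algorithms improve upon is simply step-by-step simulation. A minimal sketch of that naive position_G(t) query (illustrative names; not the paper's sublinear algorithm):

```python
import random

def position_naive(neighbors, s, t, rng):
    """Naive position_G(t): simulate t uniform steps from s.
    Each query costs Theta(t) work, independent of prior queries."""
    v = s
    for _ in range(t):
        v = rng.choice(neighbors[v])
    return v
```

On a two-vertex graph where each vertex's only neighbor is the other, the walk is deterministic and alternates between the endpoints, which makes the simulation easy to sanity-check.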
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.24/LIPIcs.ITCS.2022.24.pdf
sublinear time algorithms
random generation
local computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
25:1
25:22
10.4230/LIPIcs.ITCS.2022.25
article
Vertex Fault-Tolerant Emulators
Bodwin, Greg
1
Dinitz, Michael
2
Nazari, Yasamin
3
University of Michigan, Ann Arbor, MI, USA
Johns Hopkins University, Baltimore, MD, United States
University of Salzburg, Austria
A k-spanner of a graph G is a sparse subgraph that preserves its shortest path distances up to a multiplicative stretch factor of k, and a k-emulator is similar but not required to be a subgraph of G. A classic theorem by Althöfer et al. [Disc. Comp. Geom. '93] and Thorup and Zwick [JACM '05] shows that, despite the extra flexibility available to emulators, the size/stretch tradeoffs for spanners and emulators are equivalent. Our main result is that this equivalence in tradeoffs no longer holds in the commonly-studied setting of graphs with vertex failures. That is: we introduce a natural definition of vertex fault-tolerant emulators, and then we show a three-way tradeoff between size, stretch, and fault-tolerance for these emulators that polynomially surpasses the tradeoff known to be optimal for spanners.
We complement our emulator upper bound with a lower bound construction that is essentially tight (within log n factors of the upper bound) when the stretch is 2k-1 and k is either a fixed odd integer or 2. We also show constructions of fault-tolerant emulators with additive error, demonstrating that these also enjoy significantly improved tradeoffs over those available for fault-tolerant additive spanners.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.25/LIPIcs.ITCS.2022.25.pdf
Emulators
Fault Tolerance
Girth Conjecture
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
26:1
26:18
10.4230/LIPIcs.ITCS.2022.26
article
Bounded Indistinguishability for Simple Sources
Bogdanov, Andrej
1
Dinesh, Krishnamoorthy
2
Filmus, Yuval
3
Ishai, Yuval
3
Kaplan, Avi
3
Srinivasan, Akshayaram
4
Department of Computer Science and Engineering and Institute of Theoretical Computer Science and Communications, The Chinese University of Hong Kong, Hong Kong
Institute of Theoretical Computer Science and Communications, The Chinese University of Hong Kong, Hong Kong
The Henry and Marylin Taub Faculty of Computer Science, Technion, Haifa, Israel
School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India
A pair of sources X, Y over {0,1}ⁿ are k-indistinguishable if their projections to any k coordinates are identically distributed. Can some AC^0 function distinguish between two such sources when k is big, say k = n^{0.1}? Braverman’s theorem (Commun. ACM 2011) implies a negative answer when X is uniform, whereas Bogdanov et al. (Crypto 2016) observe that this is not the case in general.
We initiate a systematic study of this question for natural classes of low-complexity sources, including ones that arise in cryptographic applications, obtaining positive results, negative results, and barriers. In particular:
- There exist Ω(√n)-indistinguishable X, Y, samplable by degree-O(log n) polynomial maps (over F₂) and by poly(n)-size decision trees, that are Ω(1)-distinguishable by OR.
- There exists a function f such that all f(d, ε)-indistinguishable X, Y that are samplable by degree-d polynomial maps are ε-indistinguishable by OR for all sufficiently large n. Moreover, f(1, ε) = ⌈log(1/ε)⌉ + 1 and f(2, ε) = O(log^{10}(1/ε)).
- Extending (weaker versions of) the above negative results to AC^0 distinguishers would require settling a conjecture of Servedio and Viola (ECCC 2012). Concretely, if every pair of n^{0.9}-indistinguishable X, Y that are samplable by linear maps is ε-indistinguishable by AC^0 circuits, then the binary inner product function can have at most an ε-correlation with AC^0 ◦ ⊕ circuits.
Finally, we motivate the question and our results by presenting applications of positive results to low-complexity secret sharing and applications of negative results to leakage-resilient cryptography.
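As a toy illustration of the central definition (our example, not from the paper): the pair X = {all-zeros w.p. 1/2, all-ones w.p. 1/2} and Y = uniform over {0,1}ⁿ is 1-indistinguishable, since every single-coordinate marginal is a fair coin, yet OR distinguishes the pair with constant advantage. The helper names below are ours.

```python
from itertools import product

n = 8

# X: all-zeros with prob 1/2, all-ones with prob 1/2 (a very simple source).
# Y: uniform over {0,1}^n.
def pX(x):
    return 0.5 if all(b == 0 for b in x) or all(b == 1 for b in x) else 0.0

def pY(x):
    return 2.0 ** (-n)

def projection_dist(p, coords):
    """Marginal distribution of the given coordinates under density p."""
    dist = {}
    for x in product((0, 1), repeat=n):
        key = tuple(x[i] for i in coords)
        dist[key] = dist.get(key, 0.0) + p(x)
    return dist

# 1-indistinguishability: every single-coordinate marginal agrees exactly.
for i in range(n):
    dX = projection_dist(pX, [i])
    dY = projection_dist(pY, [i])
    assert all(abs(dX[k] - dY[k]) < 1e-12 for k in dX)

# Yet the OR distinguisher has constant advantage:
pr_or_X = sum(pX(x) for x in product((0, 1), repeat=n) if any(x))
pr_or_Y = sum(pY(x) for x in product((0, 1), repeat=n) if any(x))
adv = abs(pr_or_X - pr_or_Y)
print(round(adv, 4))  # prints 0.4961
```

This mirrors the qualitative phenomenon the abstract studies: projections can agree while a trivially simple function tells the sources apart.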
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.26/LIPIcs.ITCS.2022.26.pdf
Pseudorandomness
bounded indistinguishability
complexity of sampling
constant-depth circuits
secret sharing
leakage-resilient cryptography
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
27:1
27:24
10.4230/LIPIcs.ITCS.2022.27
article
Locality-Preserving Hashing for Shifts with Connections to Cryptography
Boyle, Elette
1
2
Dinur, Itai
3
Gilboa, Niv
3
Ishai, Yuval
4
Keller, Nathan
5
Klein, Ohad
5
IDC Herzliya, Israel
NTT Research, Sunnyvale, USA
Ben-Gurion University, Be'er Sheva, Israel
Technion, Haifa, Israel
Bar-Ilan University, Ramat Gan, Israel
Can we sense our location in an unfamiliar environment by taking a sublinear-size sample of our surroundings? Can we efficiently encrypt a message that only someone physically close to us can decrypt? To solve problems of this kind, we introduce and study a new type of hash functions for finding shifts in sublinear time. A function h:{0,1}ⁿ → ℤ_n is a (d,δ) locality-preserving hash function for shifts (LPHS) if: (1) h can be computed by (adaptively) querying d bits of its input, and (2) Pr[h(x) ≠ h(x ≪ 1) + 1] ≤ δ, where x is random and ≪ 1 denotes a cyclic shift by one bit to the left. We make the following contributions.
- Near-optimal LPHS via Distributed Discrete Log. We establish a general two-way connection between LPHS and algorithms for distributed discrete logarithm in the generic group model. Using such an algorithm of Dinur et al. (Crypto 2018), we get LPHS with near-optimal error of δ = Õ(1/d²). This gives an unusual example of the usefulness of group-based cryptography in a post-quantum world. We extend the positive result to non-cyclic and worst-case variants of LPHS.
- Multidimensional LPHS. We obtain positive and negative results for a multidimensional extension of LPHS, making progress towards an optimal 2-dimensional LPHS.
- Applications. We demonstrate the usefulness of LPHS by presenting cryptographic and algorithmic applications. In particular, we apply multidimensional LPHS to obtain an efficient "packed" implementation of homomorphic secret sharing and a sublinear-time implementation of location-sensitive encryption whose decryption requires a significantly overlapping view.
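To make the defining property concrete, here is a naive shift hash (our toy construction, not from the paper): h(x) is the position of the first occurrence of a fixed marker pattern, scanned cyclically. It reads all n bits, so d = n here, far from the sublinear-query regime the paper achieves, but enumerating all inputs shows empirically that Pr[h(x) ≠ h(x ≪ 1) + 1] is bounded away from 1.

```python
from itertools import product

n = 16
PATTERN = (1, 0, 1)  # arbitrary fixed marker pattern

def h(x):
    """Toy shift hash: index (mod n) of the first cyclic occurrence of
    PATTERN; 0 if the pattern never occurs. Reads all n bits (d = n),
    so this is only an illustration of the LPHS property, not an LPHS."""
    w = len(PATTERN)
    ext = x + x[:w - 1]              # unroll the cyclic string
    for i in range(n):
        if ext[i:i + w] == PATTERN:
            return i
    return 0

def shl(x):
    return x[1:] + x[:1]             # cyclic shift by one bit to the left

# Exhaustively estimate delta = Pr[h(x) != h(x << 1) + 1 (mod n)].
bad = sum(1 for x in product((0, 1), repeat=n)
          if h(x) != (h(shl(x)) + 1) % n)
delta = bad / 2 ** n
print(delta)
```

Failures occur only when the pattern is absent or when the first occurrence wraps past position 0, so δ is noticeably below 1/2 even for this crude hash; the paper's constructions drive δ down to Õ(1/d²) with sublinear d.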
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.27/LIPIcs.ITCS.2022.27.pdf
Sublinear algorithms
metric embeddings
shift finding
discrete logarithm
homomorphic secret sharing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
28:1
28:20
10.4230/LIPIcs.ITCS.2022.28
article
Lattice-Inspired Broadcast Encryption and Succinct Ciphertext-Policy ABE
Brakerski, Zvika
1
Vaikuntanathan, Vinod
2
Weizmann Institute of Science, Rehovot, Israel
MIT, Cambridge, MA, USA
Broadcast encryption is one of the few remaining central cryptographic primitives that are not yet known to be achievable under a standard cryptographic assumption (excluding obfuscation-based constructions, see below). Furthermore, prior to this work, there were no known direct candidates for post-quantum-secure broadcast encryption.
We propose a candidate ciphertext-policy attribute-based encryption (CP-ABE) scheme for circuits, where the ciphertext size depends only on the depth of the policy circuit (and not its size). This, in particular, gives us a Broadcast Encryption (BE) scheme where the sizes of the keys and ciphertexts have a poly-logarithmic dependence on the number of users. This goal was previously only known to be achievable assuming ideal multilinear maps (Boneh, Waters and Zhandry, Crypto 2014) or indistinguishability obfuscation (Boneh and Zhandry, Crypto 2014) and in a concurrent work from generic bilinear groups and the learning with errors (LWE) assumption (Agrawal and Yamada, Eurocrypt 2020).
Our construction relies on techniques from lattice-based (and in particular LWE-based) cryptography. We analyze some attempts at cryptanalysis, but we are unable to provide a security proof.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.28/LIPIcs.ITCS.2022.28.pdf
Theoretical Cryptography
Broadcast Encryption
Attribute-Based Encryption
Lattice-Based Cryptography
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
29:1
29:26
10.4230/LIPIcs.ITCS.2022.29
article
Local Problems on Trees from the Perspectives of Distributed Algorithms, Finitary Factors, and Descriptive Combinatorics
Brandt, Sebastian
1
https://orcid.org/0000-0001-5393-6636
Chang, Yi-Jun
2
https://orcid.org/0000-0002-0109-2432
Grebík, Jan
3
Grunau, Christoph
4
Rozhoň, Václav
4
https://orcid.org/0000-0002-9646-8446
Vidnyánszky, Zoltán
5
https://orcid.org/0000-0001-8168-9353
CISPA Helmholtz Center for Information Security, Saarbrücken, Germany
National University of Singapore, Singapore
University of Warwick, Coventry, UK
ETH Zürich, Switzerland
California Institute of Technology, Pasadena, CA, USA
We study connections between three different fields: distributed local algorithms, finitary factors of iid processes, and descriptive combinatorics. We focus on two central questions: Can we apply techniques from one of the areas to obtain results in another? Can we show that complexity classes coming from different areas contain precisely the same problems? We give an affirmative answer to both questions in the context of local problems on regular trees:
1) We extend the Borel determinacy technique of Marks [Marks - J. Am. Math. Soc. 2016] coming from descriptive combinatorics and adapt it to the area of distributed computing, thereby obtaining a more generally applicable lower bound technique in descriptive combinatorics and an entirely new lower bound technique for distributed algorithms. Using our new technique, we prove deterministic distributed Ω(log n)-round lower bounds for problems from a natural class of homomorphism problems. Interestingly, these lower bounds seem beyond the current reach of the powerful round elimination technique [Brandt - PODC 2019] responsible for all substantial locality lower bounds of recent years. Our key technical ingredient is a novel ID graph technique that we expect to be of independent interest; in fact, it has already played an important role in a new lower bound for the Lovász local lemma in the Local Computation Algorithms model from sequential computing [Brandt, Grunau, Rozhoň - PODC 2021].
2) We prove that a local problem admits a Baire measurable coloring if and only if it admits a local algorithm with local complexity O(log n), extending the classification of Baire measurable colorings of Bernshteyn [Bernshteyn - personal communication]. A key ingredient of the proof is a new and simple characterization of local problems that can be solved in O(log n) rounds. We complement this result by showing separations between complexity classes from distributed computing, finitary factors, and descriptive combinatorics. Most notably, the class of problems that allow a distributed algorithm with sublogarithmic randomized local complexity is incomparable with the class of problems with a Borel solution. We hope that our treatment will help to view all three perspectives as part of a common theory of locality, in which we follow the insightful paper of [Bernshteyn - arXiv 2004.04905].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.29/LIPIcs.ITCS.2022.29.pdf
Distributed Algorithms
Descriptive Combinatorics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
30:1
30:19
10.4230/LIPIcs.ITCS.2022.30
article
PCPs and Instance Compression from a Cryptographic Lens
Bronfman, Liron
1
Rothblum, Ron D.
1
Technion, Haifa, Israel
Modern cryptography fundamentally relies on the assumption that the adversary trying to break the scheme is computationally bounded. This assumption lets us construct cryptographic protocols and primitives that are known to be impossible otherwise. In this work we explore the effect of bounding the adversary’s power in other information-theoretic proof systems and show how to use this assumption to bypass impossibility results.
We first consider the question of constructing succinct PCPs. These are PCPs whose length is polynomial only in the length of the original NP witness (in contrast to standard PCPs whose length is proportional to the non-deterministic verification time). Unfortunately, succinct PCPs are known to be impossible to construct under standard complexity assumptions. Assuming the sub-exponential hardness of the learning with errors (LWE) problem, we construct succinct probabilistically checkable arguments or PCAs (Kalai and Raz 2009), which are PCPs in which soundness is guaranteed against efficiently generated false proofs. Our PCA construction is for every NP relation that can be verified by a small-depth circuit (e.g., SAT, clique, TSP, etc.) and in contrast to prior work is publicly verifiable and has constant query complexity. Curiously, we also show, as a proof-of-concept, that such publicly-verifiable PCAs can be used to derive hardness of approximation results.
Second, we consider the notion of Instance Compression (Harnik and Naor, 2006). An instance compression scheme lets one compress, for example, a CNF formula φ on m variables and n ≫ m clauses to a new formula φ' with only poly(m) clauses, so that φ is satisfiable if and only if φ' is satisfiable. Instance compression has been shown to be closely related to succinct PCPs and is similarly highly unlikely to exist. We introduce a computational analog of instance compression in which we require that if φ is unsatisfiable then φ' is effectively unsatisfiable, in the sense that it is computationally infeasible to find a satisfying assignment for φ' (although such an assignment may exist). Assuming the same sub-exponential LWE assumption, we construct such computational instance compression schemes for every bounded-depth NP relation. As an application, this lets one compress k formulas ϕ₁,… ,ϕ_k into a single short formula ϕ that is effectively satisfiable if and only if at least one of the original formulas was satisfiable.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.30/LIPIcs.ITCS.2022.30.pdf
PCP
Succinct Arguments
Instance Compression
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
31:1
31:12
10.4230/LIPIcs.ITCS.2022.31
article
Limits of Quantum Speed-Ups for Computational Geometry and Other Problems: Fine-Grained Complexity via Quantum Walks
Buhrman, Harry
1
2
Loff, Bruno
3
4
Patro, Subhasree
1
2
Speelman, Florian
1
2
QuSoft, CWI Amsterdam, The Netherlands
University of Amsterdam, The Netherlands
University of Porto, Portugal
INESC-Tec, Porto, Portugal
Many computational problems are subject to a quantum speed-up: one might find that a problem having an O(n³)-time or O(n²)-time classical algorithm can be solved by a known O(n^{1.5})-time or O(n)-time quantum algorithm. The question naturally arises: how much quantum speed-up is possible?
The area of fine-grained complexity allows us to prove optimal lower-bounds on the complexity of various computational problems, based on the conjectured hardness of certain natural, well-studied problems. This theory has recently been extended to the quantum setting, in two independent papers by Buhrman, Patro and Speelman [Buhrman et al., 2021], and by Aaronson, Chia, Lin, Wang, and Zhang [Aaronson et al., 2020].
In this paper, we further extend the theory of fine-grained complexity to the quantum setting. A fundamental conjecture in the classical setting states that the 3SUM problem cannot be solved by (classical) algorithms in time O(n^{2-ε}), for any ε > 0. We formulate an analogous conjecture, the Quantum-3SUM-Conjecture, which states that there exist no sublinear O(n^{1-ε})-time quantum algorithms for the 3SUM problem.
Based on the Quantum-3SUM-Conjecture, we show new lower-bounds on the time complexity of quantum algorithms for several computational problems. Most of our lower-bounds are optimal, in that they match known upper-bounds, and hence they imply tight limits on the quantum speedup that is possible for these problems.
These results are proven by adapting to the quantum setting known classical fine-grained reductions from the 3SUM problem. This adaptation is not trivial, however, since the original classical reductions require pre-processing the input in various ways, e.g. by sorting it according to some order, and this pre-processing (provably) cannot be done in sublinear quantum time.
We overcome this bottleneck by combining a quantum walk with a classical dynamic data-structure having a certain "history-independence" property. This type of construction has been used in the past to prove upper bounds, and here we use it for the first time as part of a reduction. This general proof strategy allows us to prove tight lower bounds on several computational-geometry problems, on Convolution-3SUM and on the 0-Edge-Weight-Triangle problem, conditional on the Quantum-3SUM-Conjecture.
We believe this proof strategy will be useful in proving tight (conditional) lower-bounds, and limits on quantum speed-ups, for many other problems.
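For context on the problem underlying the Quantum-3SUM-Conjecture, here is the standard classical sort-and-two-pointer algorithm (a textbook baseline, not from the paper), which runs in O(n²) time after an O(n log n) sort, matching the classical 3SUM conjecture's barrier:

```python
def has_3sum(nums):
    """Classical O(n^2) two-pointer 3SUM: do there exist entries at three
    distinct positions summing to zero? Sort once, then for each anchor
    element scan the remaining suffix with two converging pointers."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1      # total too small: grow the middle element
            else:
                hi -= 1      # total too large: shrink the largest element
    return False

print(has_3sum([-5, 1, 4, 7, 10]))  # True: -5 + 1 + 4 == 0
print(has_3sum([1, 2, 3, 4]))       # False: all sums are positive
```

The classical conjecture rules out O(n^{2-ε}) algorithms for this task; the quantum conjecture formulated above plays the analogous role at the O(n^{1-ε}) threshold.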
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.31/LIPIcs.ITCS.2022.31.pdf
complexity theory
fine-grained complexity
3SUM
computational geometry problems
data structures
quantum walk
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
32:1
32:24
10.4230/LIPIcs.ITCS.2022.32
article
Small Hazard-Free Transducers
Bund, Johannes
1
https://orcid.org/0000-0002-1108-1091
Lenzen, Christoph
1
Medina, Moti
2
https://orcid.org/0000-0002-5572-3754
CISPA Helmholtz Center for Information Security, Saarbrücken, Germany
Faculty of Engineering, Bar-Ilan University, Ramat Gan, Israel
Ikenmeyer et al. (JACM'19) proved an unconditional exponential separation between the hazard-free complexity and (standard) circuit complexity of explicit functions. This raises the question: which classes of functions permit efficient hazard-free circuits?
In this work, we prove that circuit implementations of transducers with small state space are such a class. A transducer is a finite state machine that transcribes, symbol by symbol, an input string of length n into an output string of length n. We present a construction that transforms any function arising from a transducer into an efficient circuit of size 𝒪(n) computing the hazard-free extension of the function. More precisely, given a transducer with s states, receiving n input symbols encoded by 𝓁 bits, and computing n output symbols encoded by m bits, the transducer has a hazard-free circuit of size n⋅m⋅2^{𝒪(s+𝓁)} and depth 𝒪(s⋅log(n) + 𝓁); in particular, if s, 𝓁, m ∈ 𝒪(1), size and depth are asymptotically optimal. In light of the strong hardness results by Ikenmeyer et al. (JACM'19), we consider this a surprising result.
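The hazard-free extension evaluated by such circuits can be stated concretely over the ternary alphabet {0, 1, u}, where u is an unstable value: the output is a stable bit exactly when every resolution of the unstable inputs agrees. The brute-force evaluator below (our illustration, exponential in the number of u's; helper names are ours) demonstrates the semantics on a 2:1 multiplexer, the classic example where a stable output must be produced despite an unstable select bit.

```python
from itertools import product

U = "u"  # the unstable value

def hazard_free_extension(f, word):
    """Evaluate the hazard-free extension of a Boolean function f on a
    ternary word over {0, 1, 'u'}: output b in {0, 1} iff every resolution
    of the unstable positions yields b, else 'u'. Brute force: 2^(#u's)."""
    unstable = [i for i, v in enumerate(word) if v == U]
    outs = set()
    for bits in product((0, 1), repeat=len(unstable)):
        w = list(word)
        for i, b in zip(unstable, bits):
            w[i] = b
        outs.add(f(tuple(w)))
    return outs.pop() if len(outs) == 1 else U

mux = lambda w: w[w[0] + 1]  # 2:1 multiplexer: w[0] selects w[1] or w[2]

# Equal data inputs: the select bit is unstable, yet the output is stable.
print(hazard_free_extension(mux, (U, 1, 1)))  # -> 1
# Differing data inputs: the output is genuinely undetermined.
print(hazard_free_extension(mux, (U, 0, 1)))  # -> u
```

A circuit is hazard-free precisely when, gate by gate under the natural ternary gate semantics, it computes this extension; the paper's result bounds the size needed to do so for transducer-induced functions.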
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.32/LIPIcs.ITCS.2022.32.pdf
Hazard-Freeness
Parallel Prefix Computation
Finite State Transducers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
33:1
33:24
10.4230/LIPIcs.ITCS.2022.33
article
Faster Sparse Matrix Inversion and Rank Computation in Finite Fields
Casacuberta, Sílvia
1
Kyng, Rasmus
2
Harvard University, Cambridge, MA, USA
ETH Zürich, Switzerland
We improve the best known running time for inverting sparse matrices over finite fields, lowering it to expected O(n^{2.2131}) time under the current best bounds for fast rectangular matrix multiplication. We achieve the same running time for the computation of the rank and nullspace of a sparse matrix over a finite field. This improvement relies on two key techniques. First, we adopt the decomposition of an arbitrary matrix into block Krylov and Hankel matrices from Eberly et al. (ISSAC 2007). Second, we show how to recover the explicit inverse of a block Hankel matrix using low displacement rank techniques for structured matrices and fast rectangular matrix multiplication algorithms. We generalize our inversion method to block structured matrices with other displacement operators and strengthen the best known upper bounds for explicit inversion of block Toeplitz-like and block Hankel-like matrices, as well as for explicit inversion of block Vandermonde-like matrices with structured blocks. As a further application, we improve the complexity of several algorithms in topological data analysis and in finite group theory.
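As a point of reference for the rank-computation task over a finite field, here is the dense Gaussian-elimination baseline over F₂ (our illustration, not the paper's algorithm, which exploits sparsity and fast rectangular matrix multiplication to beat this by a wide margin):

```python
def rank_gf2(rows):
    """Rank over F_2 of a 0/1 matrix whose rows are given as Python int
    bitmasks. Classic dense elimination with explicit pivot positions --
    an O(n * rank) word-operation baseline for comparison only."""
    pivots = {}                      # pivot bit position -> reduced basis row
    for r in rows:
        while r:
            top = r.bit_length() - 1
            if top not in pivots:
                pivots[top] = r      # r becomes a new pivot row
                break
            r ^= pivots[top]         # eliminate r's leading bit
    return len(pivots)

# 3x3 example over F_2: rows (1,1,0), (0,1,1), (1,0,1); third = first XOR second.
print(rank_gf2([0b110, 0b011, 0b101]))  # -> 2
```

Packing rows into machine words already gives a word-level speedup, but the algorithm remains dense; the abstract's O(n^{2.2131}) bound comes instead from the block Krylov/Hankel decomposition described above.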
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.33/LIPIcs.ITCS.2022.33.pdf
Matrix inversion
rank computation
displacement operators
numerical linear algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
34:1
34:21
10.4230/LIPIcs.ITCS.2022.34
article
Algorithms and Lower Bounds for Comparator Circuits from Shrinkage
Cavalar, Bruno P.
1
https://orcid.org/0000-0002-0458-8767
Lu, Zhenjian
1
University of Warwick, Coventry, UK
Comparator circuits are a natural circuit model for studying bounded fan-out computation whose power sits between nondeterministic branching programs and general circuits. Despite having been studied for nearly three decades, the first superlinear lower bound against comparator circuits was proved only recently by Gál and Robere (ITCS 2020), who established an Ω((n/log n)^{1.5}) lower bound on the size of comparator circuits computing an explicit function of n bits.
In this paper, we initiate the study of average-case complexity and circuit analysis algorithms for comparator circuits. Departing from previous approaches, we exploit the technique of shrinkage under random restrictions to obtain a variety of new results for this model. Among them, we show
- Average-case Lower Bounds. For every k = k(n) with k ≥ log n, there exists a polynomial-time computable function f_k on n bits such that, for every comparator circuit C with at most n^{1.5}/O(k⋅ √{log n}) gates, we have
Pr_{x ∈ {0,1}ⁿ} [C(x) = f_k(x)] ≤ 1/2 + 2^{-Ω(k)}.
This average-case lower bound matches the worst-case lower bound of Gál and Robere by letting k = O(log n).
- #SAT Algorithms. There is an algorithm that counts the number of satisfying assignments of a given comparator circuit with at most n^{1.5}/O(k⋅√{log n}) gates, in time 2^{n-k}⋅poly(n), for any k ≤ n/4. The running time is non-trivial (i.e., 2ⁿ/n^{ω(1)}) when k = ω(log n).
- Pseudorandom Generators and MCSP Lower Bounds. There is a pseudorandom generator of seed length s^{2/3+o(1)} that fools comparator circuits with s gates. Also, using this PRG, we obtain an n^{1.5-o(1)} lower bound for MCSP against comparator circuits.
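For readers unfamiliar with the model: a comparator gate takes two wires (x, y) and outputs (x∧y, x∨y), i.e., it sorts the pair, and every wire has bounded fan-out. The tiny simulator below (our illustration; the helper names are ours) shows the model in action, computing MAJORITY on three bits via a 3-wire sorting network, the standard demonstration that comparator circuits compute symmetric functions.

```python
def run_comparator_circuit(wires, gates):
    """Simulate a comparator circuit on Boolean wire values: each gate
    (i, j) replaces wire i by the AND and wire j by the OR of wires i and
    j, i.e., it sorts the pair so the larger bit ends up on wire j. The
    in-place wire updates mirror the model's bounded fan-out."""
    w = list(wires)
    for i, j in gates:
        w[i], w[j] = w[i] & w[j], w[i] | w[j]
    return w

# A 3-wire sorting network; after sorting ascending, the middle wire
# holds the median of the three input bits, i.e., MAJ(x0, x1, x2).
SORT3 = [(0, 1), (1, 2), (0, 1)]

def maj3(x0, x1, x2):
    return run_comparator_circuit([x0, x1, x2], SORT3)[1]

assert [maj3(a, b, c) for a, b, c in
        [(0, 0, 0), (0, 0, 1), (1, 0, 1), (1, 1, 1)]] == [0, 0, 1, 1]
```

Generalizing this sorting-network idea gives comparator circuits for all symmetric functions; the lower bounds above show that, beyond roughly n^{1.5} gates, the model hits explicit average-case barriers.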
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.34/LIPIcs.ITCS.2022.34.pdf
comparator circuits
average-case complexity
satisfiability algorithms
pseudorandom generators
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
35:1
35:25
10.4230/LIPIcs.ITCS.2022.35
article
Quantum Distributed Algorithms for Detection of Cliques
Censor-Hillel, Keren
1
https://orcid.org/0000-0003-4395-5205
Fischer, Orr
2
Le Gall, François
3
Leitersdorf, Dean
1
Oshman, Rotem
2
Technion, Haifa, Israel
Tel-Aviv University, Israel
Nagoya University, Aichi, Japan
The possibilities offered by quantum computing have drawn attention in the distributed computing community recently, with several breakthrough results showing quantum distributed algorithms that run faster than the fastest known classical counterparts, and even separations between the two models. A prime example is the result by Izumi, Le Gall, and Magniez [STACS 2020], who showed that triangle detection by quantum distributed algorithms is easier than triangle listing, while an analogous result is not known in the classical case.
In this paper we present a framework for fast quantum distributed clique detection. This improves upon the state of the art for the triangle case, and is also more general, applying to larger clique sizes.
Our main technical contribution is a new approach for detecting cliques by encapsulating this as a search task for nodes that can be added to smaller cliques. To extract the best complexities out of our approach, we develop a framework for nested distributed quantum searches, which employ checking procedures that are quantum themselves.
Moreover, we show a circuit-complexity barrier on proving a lower bound of the form Ω(n^{3/5+ε}) for K_p-detection for any p ≥ 4, even in the classical (non-quantum) distributed CONGEST setting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.35/LIPIcs.ITCS.2022.35.pdf
distributed graph algorithms
quantum algorithms
cycles
cliques
Congested Clique
CONGEST
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
36:1
36:23
10.4230/LIPIcs.ITCS.2022.36
article
Distributed Vertex Cover Reconfiguration
Censor-Hillel, Keren
1
https://orcid.org/0000-0003-4395-5205
Maus, Yannic
2
https://orcid.org/0000-0003-4062-6991
Romem-Peled, Shahar
1
Tonoyan, Tigran
1
https://orcid.org/0000-0003-2062-9896
Department of Computer Science, Technion, Haifa, Israel
Institute of Software Technology, TU Graz, Austria
Reconfiguration schedules, i.e., sequences that gradually transform one solution of a problem to another while always maintaining feasibility, have been extensively studied. Most research has dealt with the decision problem of whether a reconfiguration schedule exists, and the complexity of finding one. A prime example is the reconfiguration of vertex covers. We initiate the study of batched vertex cover reconfiguration, which allows multiple vertices to be reconfigured concurrently while requiring that any adversarial reconfiguration order within a batch maintains feasibility. The latter provides robustness, e.g., if the simultaneous reconfiguration of a batch cannot be guaranteed. The quality of a schedule is measured by the number of batches until all nodes are reconfigured, and its cost, i.e., the maximum size of an intermediate vertex cover.
To set a baseline for batch reconfiguration, we show that for graphs belonging to one of the classes {cycles, trees, forests, chordal, cactus, even-hole-free, claw-free}, there are schedules that use O(ε^{-1}) batches and incur only a 1+ε multiplicative increase in cost over the best sequential schedules. Our main contribution is to compute such batch schedules in a distributed setting in O(ε^{-1} log^* n) rounds, which we also show to be tight. Further, we show that once we step out of these graph classes we face a very different situation. There are graph classes on which no efficient distributed algorithm can obtain the best (or almost best) existing schedule. Moreover, there are classes of bounded-degree graphs that do not admit any reconfiguration schedule without incurring a large multiplicative increase in cost.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.36/LIPIcs.ITCS.2022.36.pdf
reconfiguration
vertex cover
network decomposition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
37:1
37:23
10.4230/LIPIcs.ITCS.2022.37
article
Adversarially Robust Coloring for Graph Streams
Chakrabarti, Amit
1
https://orcid.org/0000-0003-3633-9180
Ghosh, Prantar
1
Stoeckl, Manuel
1
https://orcid.org/0000-0001-8189-0516
Department of Computer Science, Dartmouth College, Hanover, NH, USA
A streaming algorithm is considered to be adversarially robust if it provides correct outputs with high probability even when the stream updates are chosen by an adversary who may observe and react to the past outputs of the algorithm. We grow the burgeoning body of work on such algorithms in a new direction by studying robust algorithms for the problem of maintaining a valid vertex coloring of an n-vertex graph given as a stream of edges. Following standard practice, we focus on graphs with maximum degree at most Δ and aim for colorings using a small number f(Δ) of colors.
A recent breakthrough (Assadi, Chen, and Khanna; SODA 2019) shows that in the standard, non-robust, streaming setting, (Δ+1)-colorings can be obtained while using only Õ(n) space. Here, we prove that an adversarially robust algorithm running under a similar space bound must use almost Δ² colors, and that robust O(Δ)-coloring requires a linear amount of space, namely Ω(nΔ). We in fact obtain a more general lower bound, trading off the space usage against the number of colors used. From a complexity-theoretic standpoint, these lower bounds provide (i) the first significant separation between adversarially robust algorithms and ordinary randomized algorithms for a natural problem on insertion-only streams and (ii) the first significant separation between randomized and deterministic coloring algorithms for graph streams, since deterministic streaming algorithms are automatically robust.
We complement our lower bounds with a suite of positive results, giving adversarially robust coloring algorithms using sublinear space. In particular, we can maintain an O(Δ²)-coloring using Õ(n √Δ) space and an O(Δ³)-coloring using Õ(n) space.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.37/LIPIcs.ITCS.2022.37.pdf
Data streaming
graph algorithms
graph coloring
lower bounds
online algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
38:1
38:19
10.4230/LIPIcs.ITCS.2022.38
article
Smaller ACC0 Circuits for Symmetric Functions
Chapman, Brynmor
1
Williams, R. Ryan
1
Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
What is the power of constant-depth circuits with MOD_m gates, which can count modulo m? Can they efficiently compute MAJORITY and other symmetric functions? When m is a constant prime power, the answer is well understood. In this regime, Razborov and Smolensky proved in the 1980s that MAJORITY and MOD_m require super-polynomial-size MOD_q circuits, where q is any prime power not dividing m. However, relatively little is known about the power of MOD_m gates when m is not a prime power. For example, it is still open whether every problem decidable in exponential time can be computed by depth-3 circuits of polynomial-size and only MOD_6 gates.
In this paper, we shed some light on the difficulty of proving lower bounds for MOD_m circuits, by giving new upper bounds. We show how to construct MOD_m circuits computing symmetric functions with non-prime power m, with size-depth tradeoffs that beat the longstanding lower bounds for AC^0[m] circuits when m is a prime power. Furthermore, we observe that our size-depth tradeoff circuits have essentially optimal dependence on m and d in the exponent, under a natural circuit complexity hypothesis.
For example, we show that for every ε > 0, every symmetric function can be computed using MOD_m circuits of depth 3 and 2^{n^ε} size, for a constant m depending only on ε > 0. In other words, depth-3 CC^0 circuits can compute any symmetric function in subexponential size. This demonstrates a significant difference in the power of depth-3 CC^0 circuits, compared to other models: for certain symmetric functions, depth-3 AC^0 circuits require 2^{Ω(√n)} size [Håstad 1986], and depth-3 AC^0[p^k] circuits (for fixed prime power p^k) require 2^{Ω(n^{1/6})} size [Smolensky 1987]. Even for depth-2 MOD_p ∘ MOD_m circuits, 2^{Ω(n)} lower bounds were known [Barrington Straubing Thérien 1990].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.38/LIPIcs.ITCS.2022.38.pdf
ACC
CC
circuit complexity
symmetric functions
Chinese Remainder Theorem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
39:1
39:21
10.4230/LIPIcs.ITCS.2022.39
article
Monotone Complexity of Spanning Tree Polynomial Re-Visited
Chattopadhyay, Arkadev
1
Datta, Rajit
2
Ghosal, Utsab
3
Mukhopadhyay, Partha
3
TIFR, Mumbai, India
Goldman-Sachs, Bangalore, India
Chennai Mathematical Institute, India
We prove two results that shed new light on the monotone complexity of the spanning tree polynomial, a classic polynomial in algebraic complexity and beyond.
First, we show that spanning tree polynomials with n variables, defined over constant-degree expander graphs, have monotone arithmetic complexity 2^{Ω(n)}. This yields the first strongly exponential lower bound on monotone arithmetic circuit complexity for a polynomial in VP. Before this result, strongly exponential size monotone lower bounds were known only for explicit polynomials in VNP [S. B. Gashkov and I. S. Sergeev, 2012; Ran Raz and Amir Yehudayoff, 2011; Srikanth Srinivasan, 2020; Bruno Pasqualotto Cavalar et al., 2020; Pavel Hrubeš and Amir Yehudayoff, 2021].
Recently, Hrubeš [Pavel Hrubeš, 2020] initiated a program to prove lower bounds against general arithmetic circuits by proving ε-sensitive lower bounds for monotone arithmetic circuits for a specific range of values for ε ∈ (0,1). The first ε-sensitive lower bound was just proved for a family of polynomials inside VNP by Chattopadhyay, Datta and Mukhopadhyay [Arkadev Chattopadhyay et al., 2021]. We consider the spanning tree polynomial ST_n defined over the complete graph of n vertices and show that the polynomials F_{n-1,n} - ε⋅ST_n and F_{n-1,n} + ε⋅ST_n, defined over (n-1)n variables, have monotone circuit complexity 2^{Ω(n)} if ε ≥ 2^{-Ω(n)} and F_{n-1,n} := ∏_{i=2}ⁿ (x_{i,1} + ⋯ + x_{i,n}) is the complete set-multilinear polynomial. This provides the first ε-sensitive exponential lower bound for a family of polynomials inside VP. En route, we consider a problem in 2-party, best-partition communication complexity of deciding whether two sets of oriented edges distributed among Alice and Bob form a spanning tree or not. We prove that there exists a fixed distribution under which the problem has low discrepancy with respect to every nearly-balanced partition. This result could be of interest beyond algebraic complexity.
Our two results, thus, are incomparable generalizations of the well-known result by Jerrum and Snir [Mark Jerrum and Marc Snir, 1982] which showed that the spanning tree polynomial, defined over complete graphs with n vertices (so the number of variables is (n-1)n), has monotone complexity 2^{Ω(n)}. In particular, the first result is an optimal lower bound and the second result can be thought of as a robust version of the earlier monotone lower bound for the spanning tree polynomial.
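For reference, the spanning tree polynomial used throughout the abstract admits the following standard definition (our rendering; the rooted, parent-edge indexing is an assumption chosen to match the variable sets x_{i,1}, …, x_{i,n} appearing in F_{n-1,n} above):

```latex
% ST_n over the complete graph K_n, with spanning trees rooted at vertex 1;
% each non-root vertex i contributes the variable of its parent edge:
\[
  \mathrm{ST}_n(x) \;=\; \sum_{T \in \mathcal{T}(K_n)} \; \prod_{i=2}^{n} x_{i,\,p_T(i)},
\]
% where \mathcal{T}(K_n) is the set of spanning trees of K_n and p_T(i)
% denotes the parent of vertex i in T. By Cayley's formula ST_n has
% n^{n-2} monomials, yet it lies in VP via the Matrix-Tree determinant.
```

The monomial count versus VP membership is exactly the tension the lower bounds above exploit: monotone circuits cannot use the cancellations that the determinantal representation provides.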
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.39/LIPIcs.ITCS.2022.39.pdf
Spanning Tree Polynomial
Monotone Computation
Lower Bounds
Communication Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
40:1
40:23
10.4230/LIPIcs.ITCS.2022.40
article
The Space Complexity of Sampling
Chattopadhyay, Eshan
1
Goodman, Jesse
1
Zuckerman, David
2
Cornell University, Ithaca, NY, USA
University of Texas at Austin, TX, USA
Recently, there has been exciting progress in understanding the complexity of distributions. Here, the goal is to quantify the resources required to generate (or sample) a distribution. Proving lower bounds in this new setting is more challenging than in the classical setting, and has yielded interesting new techniques and surprising applications. In this work, we initiate a study of the complexity of sampling with limited memory, and obtain the first nontrivial sampling lower bounds against oblivious read-once branching programs (ROBPs).
In our first main result, we show that any distribution sampled by an ROBP of width 2^{Ω(n)} has statistical distance 1-2^{-Ω(n)} from any distribution that is uniform over a good code. More generally, we obtain sampling lower bounds for any list decodable code, which are nearly tight. Previously, such a result was only known for sampling in AC⁰ (Lovett and Viola, CCC'11; Beck, Impagliazzo and Lovett, FOCS'12). As an application of our result, a known connection implies new data structure lower bounds for storing codewords.
In our second main result, we prove a direct product theorem for sampling with ROBPs. Previously, no direct product theorems were known for the task of sampling, for any computational model. A key ingredient in our proof is a simple new lemma about amplifying statistical distance between sequences of somewhat-dependent random variables. Using this lemma, we also obtain a simple new proof of a known lower bound for sampling disjoint sets using two-party communication protocols (Göös and Watson, RANDOM'19).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.40/LIPIcs.ITCS.2022.40.pdf
Complexity of distributions
complexity of sampling
extractors
list decodable codes
lower bounds
read-once branching programs
small-space computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
41:1
41:13
10.4230/LIPIcs.ITCS.2022.41
article
On the Existence of Competitive Equilibrium with Chores
Chaudhury, Bhaskar Ray
1
Garg, Jugal
1
McGlaughlin, Peter
1
Mehta, Ruta
1
University of Illinois at Urbana-Champaign, IL, USA
We study the chore division problem in the classic Arrow-Debreu exchange setting, where a set of agents want to divide their divisible chores (bads) to minimize their disutilities (costs). We assume that agents have linear disutility functions. Like the setting with goods, a division based on competitive equilibrium is regarded as one of the best mechanisms for bads. Equilibrium existence for goods has been extensively studied, resulting in a simple, polynomial-time verifiable, necessary and sufficient condition. However, dividing bads has not received a similar extensive study even though it is as relevant as dividing goods in day-to-day life.
In this paper, we show that the problem of checking whether an equilibrium exists in chore division is NP-complete, which is in sharp contrast to the case of goods. Further, we derive a simple, polynomial-time verifiable, sufficient condition for existence. Our fixed-point formulation to show existence makes novel use of both Kakutani and Brouwer fixed-point theorems, the latter nested inside the former, to avoid the undefined demand issue specific to bads.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.41/LIPIcs.ITCS.2022.41.pdf
Fair Division
Competitive Equilibrium
Fixed Point Theorems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
42:1
42:21
10.4230/LIPIcs.ITCS.2022.42
article
Individual Fairness in Advertising Auctions Through Inverse Proportionality
Chawla, Shuchi
1
Jagadeesan, Meena
2
The University of Texas at Austin, TX, USA
University of California, Berkeley, CA, USA
Recent empirical work demonstrates that online advertising can exhibit bias in the delivery of ads across users even when all advertisers bid in a non-discriminatory manner. We study the design of ad auctions that, given fair bids, are guaranteed to produce fair outcomes. Following the works of Dwork and Ilvento [2019] and Chawla et al. [2020], our goal is to design a truthful auction that satisfies "individual fairness" in its outcomes: informally speaking, users that are similar to each other should obtain similar allocations of ads. Within this framework we quantify the tradeoff between social welfare maximization and fairness.
This work makes two conceptual contributions. First, we express the fairness constraint as a kind of stability condition: any two users that are assigned multiplicatively similar values by all the advertisers must receive additively similar allocations for each advertiser. This value stability constraint is expressed as a function that maps the multiplicative distance between value vectors to the maximum allowable 𝓁_{∞} distance between the corresponding allocations. Standard auctions do not satisfy this kind of value stability.
Second, we introduce a new class of allocation algorithms called Inverse Proportional Allocation that achieve a near optimal tradeoff between fairness and social welfare for a broad and expressive class of value stability conditions. These allocation algorithms are truthful and prior-free, and achieve a constant factor approximation to the optimal (unconstrained) social welfare. In particular, the approximation ratio is independent of the number of advertisers in the system. In this respect, these allocation algorithms greatly surpass the guarantees achieved in previous work. We also extend our results to broader notions of fairness that we call subset fairness.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.42/LIPIcs.ITCS.2022.42.pdf
Algorithmic fairness
advertising auctions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
43:1
43:3
10.4230/LIPIcs.ITCS.2022.43
article
Improved Decoding of Expander Codes
Chen, Xue
1
Cheng, Kuan
2
Li, Xin
3
Ouyang, Minghui
2
University of Science and Technology of China, Anhui, China
Peking University, China
Johns Hopkins University, Baltimore, MD, USA
We study the classical expander codes, introduced by Sipser and Spielman [M. Sipser and D. A. Spielman, 1996]. Given any constants 0 < α, ε < 1/2, and an arbitrary bipartite graph with N vertices on the left, M < N vertices on the right, and left degree D such that any left subset S of size at most α N has at least (1-ε)|S|D neighbors, we show that the corresponding linear code given by parity checks on the right has distance at least roughly αN/(2ε). This is strictly better than the best known previous result of 2(1-ε)αN [Madhu Sudan, 2000; Viderman, 2013] whenever ε < 1/2, and improves the previous result significantly when ε is small. Furthermore, we show that this distance is tight in general, thus providing a complete characterization of the distance of general expander codes.
Next, we provide several efficient decoding algorithms, which vastly improve previous results in terms of the fraction of errors corrected, whenever ε < 1/4. Finally, we also give a bound on the list-decoding radius of general expander codes, which beats the classical Johnson bound in certain situations (e.g., when the graph is almost regular and the code has a high rate).
Our techniques exploit novel combinatorial properties of bipartite expander graphs. In particular, we establish a new size-expansion tradeoff, which may be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.43/LIPIcs.ITCS.2022.43.pdf
Expander Code
Decoding
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
44:1
44:1
10.4230/LIPIcs.ITCS.2022.44
article
Cursed yet Satisfied Agents
Chen, Yiling
1
Eden, Alon
1
Wang, Juntao
1
School of Engineering and Applied Science, Harvard University, Boston, MA, US
In real-life auctions, a widely observed phenomenon is the winner’s curse: the winner’s high bid implies that the winner often overestimates the value of the good for sale, resulting in negative utility. The seminal work of Eyster and Rabin [Econometrica'05] introduced a behavioral model aimed at explaining this observed anomaly. We term agents who display this bias "cursed agents." We adopt their model in the interdependent value setting, and aim to devise mechanisms that prevent the agents from obtaining negative utility. We design mechanisms that are cursed ex-post incentive compatible, that is, they incentivize agents to bid their true signal even though they are cursed, while ensuring that the outcome is ex-post individually rational (EPIR): the price the agents pay is no more than the agents' true value.
Since the agents might overestimate the value of the allocated good, such mechanisms might require the seller to make positive (monetary) transfers to the agents in order to prevent them from overpaying for the good. While the seller's revenue might increase when agents are cursed if EPIR is not required, under EPIR cursed agents always pay less than fully rational agents (due to the positive transfers the seller makes). We devise revenue- and welfare-maximizing mechanisms for cursed agents. For revenue maximization, we give the optimal deterministic and anonymous mechanism. For welfare maximization, we require ex-post budget balance (EPBB), as positive transfers might cause the seller to have negative revenue. We propose a masking operation that takes any deterministic mechanism and masks the allocation whenever the seller is required to make positive transfers. The masking operation ensures that the mechanism is both EPIR and EPBB. We show that in typical settings, EPBB implies that the mechanism cannot make any positive transfers. Thus, applying the masking operation to the fully efficient mechanism results in a socially optimal EPBB mechanism. This further implies that if the valuation function is the maximum of agents' signals, the optimal EPBB mechanism obtains zero welfare. In contrast, we show that for sum-concave valuations, which include weighted-sum valuations and 𝓁_p-norms, the welfare-optimal EPBB mechanism obtains half of the optimal welfare as the number of agents grows large.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.44/LIPIcs.ITCS.2022.44.pdf
Mechanism Design
Interdependent Valuation Auction
Bounded Rationality
Cursed Equilibrium
Winner’s curse
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
45:1
45:16
10.4230/LIPIcs.ITCS.2022.45
article
Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions
Chen, Lijie
1
Hirahara, Shuichi
2
Vafa, Neekon
1
MIT, Boston, MA, USA
National Institute of Informatics, Tokyo, Japan
What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH:
- NTIME[n] cannot be solved in quasi-linear time on average if UP ⊄ DTIME[2^{Õ(√n)}].
- Σ₂TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known if even average-case hardness of Σ₃SAT implies the average-case hardness of Σ₂TIME[n].
- Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0.
Our results are given by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the settings of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that the nondeterministic linear time is easy on average, which may be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.45/LIPIcs.ITCS.2022.45.pdf
Average-case complexity
worst-case to average-case reduction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
46:1
46:25
10.4230/LIPIcs.ITCS.2022.46
article
Symmetric Sparse Boolean Matrix Factorization and Applications
Chen, Sitan
1
Song, Zhao
2
Tao, Runzhou
3
Zhang, Ruizhe
4
University of California, Berkeley, CA, USA
Adobe Research, Seattle, WA, USA
Columbia University, New York, NY, USA
University of Texas at Austin, TX, USA
In this work, we study a variant of nonnegative matrix factorization where we wish to find a symmetric factorization of a given input matrix into a sparse, Boolean matrix. Formally speaking, given 𝐌 ∈ ℤ^{m×m}, we want to find 𝐖 ∈ {0,1}^{m×r} such that ‖𝐌 - 𝐖𝐖^⊤‖₀ is minimized among all 𝐖 for which each row is k-sparse. This question turns out to be closely related to a number of questions like recovering a hypergraph from its line graph, as well as reconstruction attacks for private neural network training.
As this problem is hard in the worst case, we study a natural average-case variant that arises in the context of these reconstruction attacks: 𝐌 = 𝐖𝐖^⊤ for 𝐖 a random Boolean matrix with k-sparse rows, and the goal is to recover 𝐖 up to column permutation. Equivalently, this can be thought of as recovering a uniformly random k-uniform hypergraph from its line graph.
Our main result is a polynomial-time algorithm for this problem based on bootstrapping higher-order information about 𝐖 and then decomposing an appropriate tensor. The key ingredient in our analysis, which may be of independent interest, is to show that such a matrix 𝐖 has full column rank with high probability as soon as m = Ω̃(r), which we do using tools from Littlewood-Offord theory and estimates for binary Krawtchouk polynomials.
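A minimal sketch of the planted average-case instance described above (the recovery algorithm itself is not reproduced; the dimensions m, r, k and the random seed are arbitrary choices for illustration):

```python
import numpy as np

# Build the planted instance M = W W^T for a random Boolean matrix W whose
# rows are k-sparse, and check basic properties of the l0 objective.
rng = np.random.default_rng(0)
m, r, k = 30, 10, 3

# Each row of W picks k of the r columns uniformly at random.
W = np.zeros((m, r), dtype=int)
for i in range(m):
    W[i, rng.choice(r, size=k, replace=False)] = 1

# Integer Gram matrix: M[i, j] = |support(row i) ∩ support(row j)|,
# so in particular every diagonal entry equals k.
M = W @ W.T

def l0_objective(M, W):
    # Number of entries where M and W W^T disagree.
    return int(np.count_nonzero(M - W @ W.T))

assert l0_objective(M, W) == 0
assert np.all(np.diag(M) == k)

# Any column permutation of W is an equally good factorization, which is
# why recovery is only defined up to column permutation.
perm = rng.permutation(r)
assert l0_objective(M, W[:, perm]) == 0
```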
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.46/LIPIcs.ITCS.2022.46.pdf
Matrix factorization
tensors
random matrices
average-case complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
47:1
47:16
10.4230/LIPIcs.ITCS.2022.47
article
Quantum Meets the Minimum Circuit Size Problem
Chia, Nai-Hui
1
Chou, Chi-Ning
2
Zhang, Jiayu
3
4
Zhang, Ruizhe
5
Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
School of Engineering and Applied Sciences, Harvard University, Boston, MA, USA
Department of Computer Science, Boston University, MA, USA
Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
Department of Computer Science, The University of Texas at Austin, TX, USA
In this work, we initiate the study of the Minimum Circuit Size Problem (MCSP) in the quantum setting. MCSP is the problem of computing the circuit complexity of Boolean functions. It is a fascinating problem in complexity theory: its hardness is mysterious, and a better understanding of its hardness can have surprising implications for many fields in computer science.
We first define and investigate the basic complexity-theoretic properties of minimum quantum circuit size problems for three natural objects: Boolean functions, unitaries, and quantum states. We show that these problems are not trivially in NP but in QCMA (or have QCMA protocols). Next, we explore the relations between the three quantum MCSPs and their variants. We discover that some reductions that are not known for classical MCSP exist for quantum MCSPs for unitaries and states, e.g., search-to-decision reductions and self-reductions. Finally, we systematically generalize results known for classical MCSP to the quantum setting (including quantum cryptography, quantum learning theory, quantum circuit lower bounds, and quantum fine-grained complexity) and also find new connections to tomography and quantum gravity. Due to the fundamental differences between classical and quantum circuits, most of our results require extra care and reveal properties and phenomena unique to the quantum setting. Our findings could be of interest for future studies, and we pose several open problems for further exploration along this direction.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.47/LIPIcs.ITCS.2022.47.pdf
Quantum Computation
Quantum Complexity
Minimum Circuit Size Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
48:1
48:20
10.4230/LIPIcs.ITCS.2022.48
article
Larger Corner-Free Sets from Combinatorial Degenerations
Christandl, Matthias
1
Fawzi, Omar
2
Ta, Hoang
2
Zuiddam, Jeroen
3
Department of Mathematical Sciences, University of Copenhagen, Denmark
Univ. Lyon, ENS Lyon, UCBL, CNRS, Inria, LIP, France
Korteweg-de Vries Institute for Mathematics, University of Amsterdam, The Netherlands
There is a large and important collection of Ramsey-type combinatorial problems, closely related to central problems in complexity theory, that can be formulated in terms of the asymptotic growth of the size of the maximum independent sets in powers of a fixed small hypergraph, also called the Shannon capacity. An important instance of this is the corner problem studied in the context of multiparty communication complexity in the Number On the Forehead (NOF) model. Versions of this problem and the NOF connection have seen much interest (and progress) in recent works of Linial, Pitassi and Shraibman (ITCS 2019) and Linial and Shraibman (CCC 2021).
We introduce and study a general algebraic method for lower bounding the Shannon capacity of directed hypergraphs via combinatorial degenerations, a combinatorial kind of "approximation" of subgraphs that originates from the study of matrix multiplication in algebraic complexity theory (and which play an important role there) but which we use in a novel way.
Using the combinatorial degeneration method, we make progress on the corner problem by explicitly constructing a corner-free subset in F₂ⁿ × F₂ⁿ of size Ω(3.39ⁿ/poly(n)), which improves the previous lower bound Ω(2.82ⁿ) of Linial, Pitassi and Shraibman (ITCS 2019) and which gets us closer to the best upper bound 4^{n - o(n)}. Our new construction of corner-free sets implies an improved NOF protocol for the Eval problem. In the Eval problem over a group G, three players need to determine whether their inputs x₁, x₂, x₃ ∈ G sum to zero. We find that the NOF communication complexity of the Eval problem over F₂ⁿ is at most 0.24n + 𝒪(log n), which improves the previous upper bound 0.5n + 𝒪(log n).
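As a back-of-the-envelope check (an illustration, assuming the standard translation in which a corner-free set of size cⁿ in F₂ⁿ × F₂ⁿ yields an Eval protocol over F₂ⁿ of cost roughly (2 - log₂ c)n + 𝒪(log n)), the stated constants are consistent with each other:

```python
import math

# Leading coefficient of the NOF protocol cost derived from a corner-free
# set of size c^n, under the assumed translation cost ≈ (2 - log2(c)) * n.
def protocol_cost_coefficient(c):
    return 2 - math.log2(c)

# Previous construction (c = 2.82) gives the old 0.5n bound; the new
# construction (c = 3.39) gives the improved 0.24n bound. A set of the
# maximum possible size 4^n would give coefficient 0.
assert round(protocol_cost_coefficient(2.82), 2) == 0.50
assert round(protocol_cost_coefficient(3.39), 2) == 0.24
assert protocol_cost_coefficient(4) == 0.0
```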
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.48/LIPIcs.ITCS.2022.48.pdf
Corner-free sets
communication complexity
number on the forehead
combinatorial degeneration
hypergraphs
Shannon capacity
eval problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
49:1
49:23
10.4230/LIPIcs.ITCS.2022.49
article
Optimal Deterministic Clock Auctions and Beyond
Christodoulou, Giorgos
1
Gkatzelis, Vasilis
2
Schoepflin, Daniel
2
University of Liverpool, UK
Drexel University, Philadelphia, PA, USA
We design and analyze deterministic and randomized clock auctions for single-parameter domains with downward-closed feasibility constraints, aiming to maximize the social welfare. Clock auctions have been shown to satisfy a list of compelling incentive properties making them a very practical solution for real-world applications, partly because they require very little reasoning from the participating bidders. However, the first results regarding the worst-case performance of deterministic clock auctions from a welfare maximization perspective indicated that they face obstacles even for a seemingly very simple family of instances, leading to a logarithmic inapproximability result; this inapproximability result is information-theoretic and holds even if the auction has unbounded computational power. In this paper we propose a deterministic clock auction that achieves a logarithmic approximation for any downward-closed set system, using black box access to a solver for the underlying optimization problem. This proves that our clock auction is optimal and that the aforementioned family of instances exactly captures the information limitations of deterministic clock auctions. We then move beyond deterministic auctions and design randomized clock auctions that achieve improved approximation guarantees for a generalization of this family of instances, suggesting that the earlier indications regarding the performance of clock auctions may have been overly pessimistic.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.49/LIPIcs.ITCS.2022.49.pdf
Auctions
Obvious Strategyproofness
Mechanism Design
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
50:1
50:1
10.4230/LIPIcs.ITCS.2022.50
article
Nonlinear Repair Schemes of Reed-Solomon Codes
Con, Roni
1
Tamo, Itzhak
2
Department of Computer Science, Tel Aviv University, Israel
Department of Electrical Engineering-Systems, Tel Aviv University, Israel
The problem of repairing linear codes and, in particular, Reed-Solomon (RS) codes has attracted a lot of attention in recent years due to their extreme importance to distributed storage systems. In this problem, a failed code symbol (node) needs to be repaired by downloading as little information as possible from a subset of the remaining nodes. By now, there are examples of RS codes that have efficient repair schemes, and some even attain the cut-set bound. However, these schemes fall short in several aspects: they require a considerable field extension degree; they do not provide any nontrivial repair scheme over prime fields; and they are all linear repairs, i.e., the computed functions are linear over the base field. Motivated by these shortcomings and by a question raised in [Guruswami and Wootters, 2017] on the power of nonlinear repair schemes, we study the problem of nonlinear repair schemes of RS codes.
Our main result is the first nonlinear repair scheme of RS codes with asymptotically optimal repair bandwidth (asymptotically matching the cut-set bound). Specifically, we show that almost all 2-dimensional RS codes over prime fields (for large enough primes) are asymptotically MSR codes. This is the first example of a nonlinear repair scheme of any code, and also the first example in which a nonlinear repair scheme outperforms all linear ones. Moreover, we construct several RS codes over prime fields that exhibit efficient repair properties. We also show that, unlike the problem of repairing RS codes over field extensions, over prime fields one cannot achieve the cut-set bound with equality. Concretely, using ideas from additive combinatorics, we improve the cut-set bound by an additive factor, hence showing that every node must transmit more bits than the cut-set bound during a repair. Lastly, we discuss the implications of our results on repairing RS codes for the leakage-resilience of Shamir’s secret sharing scheme over prime fields.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.50/LIPIcs.ITCS.2022.50.pdf
Exact repair problem
Reed-Solomon codes
Cut-set bound
Regenerating codes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
51:1
51:22
10.4230/LIPIcs.ITCS.2022.51
article
A Complete Linear Programming Hierarchy for Linear Codes
Coregliano, Leonardo Nagami
1
Jeronimo, Fernando Granha
1
Jones, Chris
2
Institute for Advanced Study, Princeton, NJ, USA
University of Chicago, IL, USA
A longstanding open problem in coding theory is to determine the best (asymptotic) rate R₂(δ) of binary codes with minimum constant (relative) distance δ. An existential lower bound was given by Gilbert and Varshamov in the 1950s. On the impossibility side, in the 1970s McEliece, Rodemich, Rumsey and Welch (MRRW) proved an upper bound by analyzing Delsarte’s linear programs. To date these results remain the best known lower and upper bounds on R₂(δ) with no improvement even for the important class of linear codes. Asymptotically, these bounds differ by an exponential factor in the blocklength.
In this work, we introduce a new hierarchy of linear programs (LPs) that converges to the true size A^{Lin}₂(n,d) of an optimum linear binary code (in fact, over any finite field) of a given blocklength n and distance d. This hierarchy has several notable features:
1) It is a natural generalization of the Delsarte LPs used in the first MRRW bound.
2) It is a hierarchy of linear programs rather than semi-definite programs potentially making it more amenable to theoretical analysis.
3) It is complete in the sense that the optimum code size can be retrieved from level O(n²).
4) It provides an answer in the form of a hierarchy (in larger dimensional spaces) to the question of how to cut Delsarte’s LP polytopes to approximate the true size of linear codes.
We obtain our hierarchy by generalizing the Krawtchouk polynomials and MacWilliams inequalities to a suitable "higher-order" version taking into account interactions of 𝓁 words. Our method also generalizes to translation schemes under mild assumptions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.51/LIPIcs.ITCS.2022.51.pdf
Coding theory
code bounds
convex programming
linear programming hierarchy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
52:1
52:22
10.4230/LIPIcs.ITCS.2022.52
article
Lower Bounds for Symmetric Circuits for the Determinant
Dawar, Anuj
1
https://orcid.org/0000-0003-4014-8248
Wilsenach, Gregory
1
Department of Computer Science and Technology, University of Cambridge, UK
Dawar and Wilsenach (ICALP 2020) introduce the model of symmetric arithmetic circuits and show an exponential separation between the sizes of symmetric circuits for computing the determinant and the permanent. The symmetry restriction is that the circuits which take a matrix input are unchanged by a permutation applied simultaneously to the rows and columns of the matrix. Under such restrictions we have polynomial-size circuits for computing the determinant but no subexponential size circuits for the permanent. Here, we consider a more stringent symmetry requirement, namely that the circuits are unchanged by arbitrary even permutations applied separately to rows and columns, and prove an exponential lower bound even for circuits computing the determinant. The result requires substantial new machinery. We develop a general framework for proving lower bounds for symmetric circuits with restricted symmetries, based on a new support theorem and new two-player restricted bijection games. These are applied to the determinant problem with a novel construction of matrices that are bi-adjacency matrices of graphs based on the CFI construction. Our general framework opens the way to exploring a variety of symmetry restrictions and studying trade-offs between symmetry and other resources used by arithmetic circuits.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.52/LIPIcs.ITCS.2022.52.pdf
arithmetic circuits
symmetric arithmetic circuits
Boolean circuits
symmetric circuits
permanent
determinant
counting width
Weisfeiler-Leman dimension
Cai-Fürer-Immerman constructions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
53:1
53:21
10.4230/LIPIcs.ITCS.2022.53
article
Convex Influences
De, Anindya
1
Nadimpalli, Shivam
2
Servedio, Rocco A.
2
University of Pennsylvania, Philadelphia, PA, USA
Columbia University, New York, NY, USA
We introduce a new notion of influence for symmetric convex sets over Gaussian space, which we term "convex influence". We show that this new notion of influence shares many of the familiar properties of influences of variables for monotone Boolean functions f: {±1}ⁿ → {±1}.
Our main results for convex influences give Gaussian space analogues of many important results on influences for monotone Boolean functions. These include (robust) characterizations of extremal functions, the Poincaré inequality, the Kahn-Kalai-Linial theorem [J. Kahn et al., 1988], a sharp threshold theorem of Kalai [G. Kalai, 2004], a stability version of the Kruskal-Katona theorem due to O'Donnell and Wimmer [R. O'Donnell and K. Wimmer, 2009], and some partial results towards a Gaussian space analogue of Friedgut’s junta theorem [E. Friedgut, 1998]. The proofs of our results for convex influences use very different techniques than the analogous proofs for Boolean influences over {±1}ⁿ. Taken as a whole, our results extend the emerging analogy between symmetric convex sets in Gaussian space and monotone Boolean functions from {±1}ⁿ to {±1}.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.53/LIPIcs.ITCS.2022.53.pdf
Fourier analysis of Boolean functions
convex geometry
influences
threshold phenomena
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
54:1
54:6
10.4230/LIPIcs.ITCS.2022.54
article
The Importance of the Spectral Gap in Estimating Ground-State Energies
Deshpande, Abhinav
1
2
https://orcid.org/0000-0002-6114-1830
Gorshkov, Alexey V.
1
https://orcid.org/0000-0003-0509-3421
Fefferman, Bill
3
https://orcid.org/0000-0002-9627-0210
Joint Center for Quantum Information and Computer Science and Joint Quantum Institute, NIST/University of Maryland, College Park, MD, USA
Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA, USA
Department of Computer Science, University of Chicago, IL, USA
The field of quantum Hamiltonian complexity lies at the intersection of quantum many-body physics and computational complexity theory, with deep implications for both fields. The main object of study is the Local Hamiltonian problem, which is concerned with estimating the ground-state energy of a local Hamiltonian and is complete for the class QMA, a quantum generalization of the class NP. A major challenge in the field is to understand the complexity of the Local Hamiltonian problem in more physically natural parameter regimes. One crucial parameter in understanding the ground space of any Hamiltonian in many-body physics is the spectral gap, which is the difference between the smallest two eigenvalues. Despite its importance in quantum many-body physics, the role played by the spectral gap in the complexity of the Local Hamiltonian problem is less well-understood. In this work, we make progress on this question by considering the precise regime, in which one estimates the ground-state energy to within inverse-exponential precision. Computing ground-state energies precisely is a task that is important for quantum chemistry and quantum many-body physics.
In the setting of inverse-exponential precision (promise gap), there is a surprising result that the complexity of Local Hamiltonian is magnified from QMA to PSPACE, the class of problems solvable in polynomial space (but possibly exponential time). We clarify the reason behind this boost in complexity. Specifically, we show that the full complexity of the high precision case only comes about when the spectral gap is exponentially small. As a consequence of the proof techniques developed to show our results, we uncover important implications for the representability and circuit complexity of ground states of local Hamiltonians, the theory of uniqueness of quantum witnesses, and techniques for the amplification of quantum witnesses in the presence of postselection.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.54/LIPIcs.ITCS.2022.54.pdf
Local Hamiltonian problem
PSPACE
PP
QMA
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
55:1
55:17
10.4230/LIPIcs.ITCS.2022.55
article
Mechanism Design with Moral Bidders
Dobzinski, Shahar
1
Oren, Sigal
2
https://orcid.org/0000-0002-4271-7291
Weizmann Institute of Science, Rehovot, Israel
Ben-Gurion University of the Negev, Beer-Sheva, Israel
A rapidly growing literature on lying in behavioral economics and psychology shows that individuals often do not lie even when lying maximizes their utility. In this work, we attempt to incorporate these findings into the theory of mechanism design.
We consider players that have a preference for truth-telling and will only lie if their benefit from lying is sufficiently larger than the loss of the others. To accommodate such players, we introduce α-moral mechanisms, in which the gain of a player from misreporting his true value, compared to truth-telling, is at most α times the loss that the others incur due to the misreport. Note that a 0-moral mechanism is a truthful mechanism.
We develop a theory of moral mechanisms in the canonical setting of single-item auctions within the "reasonable" range of α, 0 ≤ α ≤ 1. We identify similarities and disparities to the standard theory of truthful mechanisms. In particular, we show that the allocation function does not uniquely determine the payments and is unlikely to admit a simple characterization. In contrast, recall that monotonicity characterizes the allocation function of truthful mechanisms.
Our main technical effort is invested in determining whether the auctioneer can exploit the players' preference for truth-telling to extract more revenue compared to truthful mechanisms. We show that the auctioneer can indeed extract more revenue when the values of the players are correlated, even when there are only two players. However, we show that truthful mechanisms are revenue-maximizing even among moral ones when the values of the players are independently drawn from certain identical distributions (e.g., the uniform and exponential distributions).
A by-product of our proof that optimal moral mechanisms are truthful is an alternative proof of Myerson’s characterization of optimal truthful mechanisms in the settings that we consider. We flesh out this approach by providing an alternative proof, not involving moral mechanisms, of Myerson’s characterization of optimal truthful mechanisms in all settings in which the values are independently drawn from regular distributions (not necessarily identical).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.55/LIPIcs.ITCS.2022.55.pdf
Mechanism Design
Cognitive Biases
Revenue Maximization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
56:1
56:25
10.4230/LIPIcs.ITCS.2022.56
article
Small-Box Cryptography
Dodis, Yevgeniy
1
Karthikeyan, Harish
1
Wichs, Daniel
2
3
New York University, NY, USA
Northeastern University, Boston, MA, USA
NTT Research, Sunnyvale, CA, USA
One of the ultimate goals of symmetric-key cryptography is to find a rigorous theoretical framework for building block ciphers from small components, such as cryptographic S-boxes, and then argue why iterating such small components for sufficiently many rounds would yield a secure construction. Unfortunately, a fundamental obstacle towards reaching this goal comes from the fact that traditional security proofs cannot get security beyond 2^{-n}, where n is the size of the corresponding component.
As a result, prior provably secure approaches - which we call "big-box cryptography" - always made n larger than the security parameter, which led to several problems: (a) the design was too coarse to really explain practical constructions, as (arguably) the most interesting design choices, which happen when instantiating such "big-boxes", were completely abstracted out; (b) the theoretically predicted number of rounds needed for security was always dramatically smaller than in reality, where the "big-box" building block could not be made as ideal as the proof requires. For example, Even-Mansour (and, more generally, key-alternating) ciphers completely ignored the substitution-permutation network (SPN) paradigm, which is at the heart of most real-world implementations of such ciphers.
In this work, we introduce a novel paradigm for justifying the security of existing block ciphers, which we call small-box cryptography. Unlike the "big-box" paradigm, it allows one to go much deeper inside the existing block cipher constructions, by only idealizing a small (and, hence, realistic!) building block of very small size n, such as an 8-to-32-bit S-box. It then introduces a clean and rigorous mixture of proofs and hardness conjectures which allow one to lift traditional, and seemingly meaningless, "at most 2^{-n}" security proofs for reduced-round idealized variants of the existing block ciphers, into meaningful, full-round security justifications of the actual ciphers used in the real world.
We then apply our framework to the analysis of SPN ciphers (e.g., generalizations of AES), obtaining quite reasonable and plausible concrete hardness estimates for the resulting ciphers. We also apply our framework to the design of stream ciphers. Here, however, we focus on the simplicity of the resulting construction, for which we managed to find a direct "big-box"-style security justification, under the well-studied and widely believed eXact Linear Parity with Noise (XLPN) assumption.
Overall, we hope that our work will initiate many follow-up results in the area of small-box cryptography.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.56/LIPIcs.ITCS.2022.56.pdf
Block Ciphers
S-Box
Cryptography
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
57:1
57:18
10.4230/LIPIcs.ITCS.2022.57
article
Interaction-Preserving Compilers for Secure Computation
Döttling, Nico
1
Goyal, Vipul
2
3
Malavolta, Giulio
4
Raizes, Justin
2
CISPA Helmholtz Center for Information Security, Saarbrücken, Germany
Carnegie Mellon University, Pittsburgh, PA, USA
NTT Research, Sunnyvale, CA, USA
Max Planck Institute for Security and Privacy, Bochum, Germany
In this work we consider the following question: What is the cost of security for multi-party protocols? Specifically, given an insecure protocol where parties exchange (in the worst case) Γ bits in N rounds, is it possible to design a secure protocol with communication complexity close to Γ and N rounds? We systematically study this problem in a variety of settings and we propose solutions based on the intractability of different cryptographic problems.
For the case of two parties we design an interaction-preserving compiler where the number of bits exchanged in the secure protocol approaches Γ and the number of rounds is exactly N, assuming the hardness of standard problems over lattices. For the more general multi-party case, we obtain the same result assuming either (i) an additional round of interaction or (ii) the existence of extractable witness encryption and succinct non-interactive arguments of knowledge. As a contribution of independent interest, we construct the first multi-key fully homomorphic encryption scheme with message-to-ciphertext ratio (i.e., rate) of 1 - o(1), assuming the hardness of the learning with errors (LWE) problem.
We view our work as support for the claim that, as far as interaction and communication are concerned, one does not need to pay a significant price for security in multi-party protocols.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.57/LIPIcs.ITCS.2022.57.pdf
Multiparty Computation
Communication Complexity
Fully Homomorphic Encryption
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
58:1
58:23
10.4230/LIPIcs.ITCS.2022.58
article
Matroid Secretary Is Equivalent to Contention Resolution
Dughmi, Shaddin
1
https://orcid.org/0000-0002-2784-1868
Department of Computer Science, University of Southern California, Los Angeles, CA, USA
We show that the matroid secretary problem is equivalent to correlated contention resolution in the online random-order model. Specifically, the matroid secretary conjecture is true if and only if every matroid admits an online random-order contention resolution scheme which, given an arbitrary (possibly correlated) prior distribution over subsets of the ground set, matches the balance ratio of the best offline scheme for that distribution up to a constant. We refer to such a scheme as universal. Our result indicates that the core challenge of the matroid secretary problem lies in resolving contention for positively correlated inputs, in particular when the positive correlation is benign as far as offline contention resolution is concerned.
Our result builds on our previous work which establishes one direction of this equivalence, namely that the secretary conjecture implies universal random-order contention resolution, as well as a weak converse, which derives a matroid secretary algorithm from a random-order contention resolution scheme with only partial knowledge of the distribution. It is this weak converse that we strengthen in this paper: We show that universal random-order contention resolution for matroids, in the usual setting of a fully known prior distribution, suffices to resolve the matroid secretary conjecture in the affirmative.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.58/LIPIcs.ITCS.2022.58.pdf
Contention Resolution
Secretary Problems
Matroids
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
59:1
59:10
10.4230/LIPIcs.ITCS.2022.59
article
Uniform Brackets, Containers, and Combinatorial Macbeath Regions
Dutta, Kunal
1
https://orcid.org/0000-0003-3055-9326
Ghosh, Arijit
2
Moran, Shay
3
4
https://orcid.org/0000-0002-8662-2737
Department of Informatics, University of Warsaw, Poland
Indian Statistical Institute, Kolkata, India
Technion – Israel Institute of Technology, Haifa, Israel
Google Research, Tel Aviv, Israel
We study the connections between three seemingly different combinatorial structures - uniform brackets in statistics and probability theory, containers in online and distributed learning theory, and combinatorial Macbeath regions, or Mnets, in discrete and computational geometry. We show that these three concepts are manifestations of a single combinatorial property that can be expressed under a unified framework along the lines of Vapnik-Chervonenkis type theory for uniform convergence. These new connections allow us to bring tools from discrete and computational geometry to prove improved bounds for these objects. Our improved bounds yield an optimal algorithm for distributed learning of halfspaces, an improved algorithm for the distributed convex set disjointness problem, and improved regret bounds for online algorithms against a σ-smoothed adversary for a large class of semi-algebraic threshold functions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.59/LIPIcs.ITCS.2022.59.pdf
communication complexity
distributed learning
empirical process theory
online algorithms
discrete geometry
computational geometry
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
60:1
60:21
10.4230/LIPIcs.ITCS.2022.60
article
Multiscale Entropic Regularization for MTS on General Metric Spaces
Ebrahimnejad, Farzam
1
Lee, James R.
1
University of Washington, Seattle, WA, USA
We present an O((log n)²)-competitive algorithm for metrical task systems (MTS) on any n-point metric space that is also 1-competitive for service costs. This matches the competitive ratio achieved by Bubeck, Cohen, Lee, and Lee (2019) and the refined competitive ratios obtained by Coester and Lee (2019). Those algorithms work by first randomly embedding the metric space into an ultrametric and then solving MTS there. In contrast, our algorithm is cast as regularized gradient descent where the regularizer is a multiscale metric entropy defined directly on the metric space. This answers an open question of Bubeck (Highlights of Algorithms, 2019).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.60/LIPIcs.ITCS.2022.60.pdf
Metrical task systems
online algorithms
metric embeddings
convex optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
61:1
61:12
10.4230/LIPIcs.ITCS.2022.61
article
Counting and Sampling Perfect Matchings in Regular Expanding Non-Bipartite Graphs
Ebrahimnejad, Farzam
1
Nagda, Ansh
1
Gharan, Shayan Oveis
1
University of Washington, Seattle, WA, USA
We show that the ratio of the number of near-perfect matchings to the number of perfect matchings in d-regular strong expander (non-bipartite) graphs with 2n vertices is polynomial in n; thus the Jerrum-Sinclair Markov chain [Jerrum and Sinclair, 1989] mixes in polynomial time and generates an (almost) uniformly random perfect matching. Furthermore, we prove that such graphs have at least Ω(d)ⁿ perfect matchings, thus proving the Lovász-Plummer conjecture [L. Lovász and M.D. Plummer, 1986] for this family of graphs.
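To make the counted quantities concrete, the following brute-force sketch (an illustration only, not the paper's sampler or expander setting) counts perfect and near-perfect matchings in a small graph:

```python
from itertools import combinations

def count_matchings(n, edges, size):
    """Count matchings of the given size (sets of pairwise-disjoint edges)
    in a graph on vertices 0..n-1, by exhaustive search."""
    count = 0
    for subset in combinations(edges, size):
        used = [v for e in subset for v in e]
        if len(set(used)) == len(used):  # no vertex repeated => edges disjoint
            count += 1
    return count

# Complete graph K4: a perfect matching has n/2 = 2 edges; a near-perfect
# matching leaves exactly two vertices uncovered (here: a single edge).
n = 4
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
perfect = count_matchings(n, edges, n // 2)
near = count_matchings(n, edges, n // 2 - 1)
print(perfect, near)  # → 3 6
```

The abstract's result bounds the ratio near/perfect polynomially in n for d-regular strong expanders, which is exactly what makes the Jerrum-Sinclair chain mix in polynomial time.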
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.61/LIPIcs.ITCS.2022.61.pdf
perfect matchings
approximate sampling
approximate counting
expanders
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
62:1
62:19
10.4230/LIPIcs.ITCS.2022.62
article
Embeddings and Labeling Schemes for A*
Eden, Talya
1
2
Indyk, Piotr
1
Xu, Haike
3
Massachusetts Institute of Technology, Cambridge, MA, USA
Boston University, MA, USA
Tsinghua University, Beijing, China
A* is a classic and popular method for graph search and pathfinding. It assumes the existence of a heuristic function h(u,t) that estimates the shortest distance from any input node u to the destination t. Traditionally, heuristics have been handcrafted by domain experts. However, over the last few years, there has been growing interest in learning heuristic functions. Such learned heuristics estimate the distance between given nodes based on "features" of those nodes.
In this paper we formalize and initiate the study of such feature-based heuristics. In particular, we consider heuristics induced by norm embeddings and distance labeling schemes, and provide lower bounds for the tradeoffs between the number of dimensions or bits used to represent each graph node, and the running time of the A* algorithm. We also show that, under natural assumptions, our lower bounds are almost optimal.
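For reference, a minimal textbook A* sketch (illustration only; the unit-weight grid and Manhattan heuristic are our choices here, not the paper's learned or embedding-induced heuristics):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Textbook A*: h must never overestimate the true distance to goal
    (admissibility); with a consistent h, the first pop of goal is optimal."""
    dist = {start: 0}
    frontier = [(h(start), start)]
    while frontier:
        _, u = heapq.heappop(frontier)
        if u == goal:
            return dist[u]
        for v, w in neighbors(u):
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(frontier, (nd + h(v), v))
    return None  # goal unreachable

# 5x5 grid with unit edge weights; Manhattan distance is admissible here.
goal = (4, 4)
def neighbors(u):
    x, y = u
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 5 and 0 <= y + dy < 5:
            yield (x + dx, y + dy), 1
h = lambda u: abs(u[0] - goal[0]) + abs(u[1] - goal[1])
print(a_star((0, 0), goal, neighbors, h))  # → 8
```

The paper's lower bounds concern how compactly such an h can be represented (dimensions of an embedding, bits of a labeling) versus how fast A* then runs.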
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.62/LIPIcs.ITCS.2022.62.pdf
A* algorithm
path finding
graph search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
63:1
63:23
10.4230/LIPIcs.ITCS.2022.63
article
A Unifying Framework for Characterizing and Computing Width Measures
Eiben, Eduard
1
https://orcid.org/0000-0003-2628-3435
Ganian, Robert
2
https://orcid.org/0000-0002-7762-8045
Hamm, Thekla
2
https://orcid.org/0000-0002-4595-9982
Jaffke, Lars
3
https://orcid.org/0000-0003-4856-5863
Kwon, O-joung
4
5
https://orcid.org/0000-0003-1820-1962
Department of Computer Science, Royal Holloway, University of London, Egham, UK
Algorithms and Complexity Group, TU Wien, Austria
Department of Informatics, University of Bergen, Norway
Department of Mathematics, Incheon National University, South Korea
Discrete Mathematics Group, Institute for Basic Science, Daejeon, South Korea
Algorithms for computing or approximating optimal decompositions for decompositional parameters such as treewidth or clique-width have traditionally been tailored to specific width parameters. Moreover, for mim-width, no efficient algorithms for computing good decompositions were known, even under highly restrictive parameterizations. In this work we identify ℱ-branchwidth as a class of generic decompositional parameters that can capture mim-width, treewidth, clique-width as well as other measures. We show that while there is an infinite number of ℱ-branchwidth parameters, only a handful of these are asymptotically distinct. We then develop fixed-parameter and kernelization algorithms (under several structural parameterizations) that can approximate every possible ℱ-branchwidth, providing a unifying parameterized framework that can efficiently obtain near-optimal tree-decompositions, k-expressions, as well as optimal mim-width decompositions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.63/LIPIcs.ITCS.2022.63.pdf
branchwidth
parameterized algorithms
mim-width
treewidth
clique-width
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
64:1
64:25
10.4230/LIPIcs.ITCS.2022.64
article
Reduction from Non-Unique Games to Boolean Unique Games
Eldan, Ronen
1
Moshkovitz, Dana
2
Department of Mathematics, Weizmann Institute of Science, Rehovot, Israel
Department of Computer Science, University of Texas at Austin, TX, USA
We reduce the problem of proving a "Boolean Unique Games Conjecture" (with gap 1-δ vs. 1-Cδ, for any C > 1, and sufficiently small δ > 0) to the problem of proving a PCP Theorem for a certain non-unique game. In a previous work, Khot and Moshkovitz suggested an inefficient candidate reduction (i.e., without a proof of soundness). The current work is the first to provide an efficient reduction along with a proof of soundness. The non-unique game we reduce from is similar to non-unique games for which PCP theorems are known.
Our proof relies on a new concentration theorem for functions in Gaussian space that are restricted to a random hyperplane. We bound the typical Euclidean distance between the low degree part of the restriction of the function to the hyperplane and the restriction to the hyperplane of the low degree part of the function.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.64/LIPIcs.ITCS.2022.64.pdf
Unique Games Conjecture
hyperplane encoding
concentration of measure
low degree testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
65:1
65:12
10.4230/LIPIcs.ITCS.2022.65
article
Pseudorandom Self-Reductions for NP-Complete Problems
Elrazik, Reyad Abed
1
Robere, Robert
2
Schuster, Assaf
1
Yehuda, Gal
1
Taub Faculty of Computer Science, Technion, Haifa, Israel
School of Computer Science, McGill University, Montreal, Canada
A language L is random-self-reducible if deciding membership in L can be reduced (in polynomial time) to deciding membership in L for uniformly random instances. It is known that several "number-theoretic" problems (such as computing the permanent of a matrix) admit random self-reductions. Feigenbaum and Fortnow showed that NP-complete languages are not non-adaptively random-self-reducible unless the polynomial-time hierarchy collapses, giving suggestive evidence that NP may not admit random self-reductions. Hirahara and Santhanam introduced a weakening of random self-reductions that they called pseudorandom self-reductions, in which a language L is reduced to a distribution that is computationally indistinguishable from the uniform distribution. They then showed that the Minimum Circuit Size Problem (MCSP) admits a non-adaptive pseudorandom self-reduction, and suggested that this gave further evidence distinguishing MCSP from standard NP-complete problems.
We show that, in fact, the Clique problem admits a non-adaptive pseudorandom self-reduction, assuming the planted clique conjecture. More generally we show the following. Call a property of graphs π hereditary if G ∈ π implies H ∈ π for every induced subgraph H of G. We show that for any infinite hereditary property π, the problem of finding a maximum induced subgraph H ∈ π of a given graph G admits a non-adaptive pseudorandom self-reduction.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.65/LIPIcs.ITCS.2022.65.pdf
computational complexity
pseudorandomness
worst-case to average-case
self reductions
planted clique
hereditary graph family
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
66:1
66:19
10.4230/LIPIcs.ITCS.2022.66
article
Credible, Strategyproof, Optimal, and Bounded Expected-Round Single-Item Auctions for All Distributions
Essaidi, Meryem
1
Ferreira, Matheus V. X.
2
Weinberg, S. Matthew
1
Computer Science, Princeton University, NJ, USA
Computer Science, Harvard University, MA, USA
We consider a revenue-maximizing seller with a single item for sale to multiple buyers with independent and identically distributed valuations. Akbarpour and Li (2020) show that the only optimal, credible, strategyproof auction is the ascending price auction with reserves, which has unbounded communication complexity. Recent work of Ferreira and Weinberg (2020) circumvents their impossibility result assuming the existence of cryptographically secure commitment schemes, and designs a two-round credible, strategyproof, optimal auction. However, their auction is only credible when buyers' valuations are MHR or α-strongly regular: they show their auction might not be credible even when there is a single buyer drawn from a non-MHR distribution. In this work, under the same cryptographic assumptions, we identify a new single-item auction that is credible, strategyproof, revenue-optimal, and terminates in a constant number of rounds in expectation for all distributions with finite monopoly price.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.66/LIPIcs.ITCS.2022.66.pdf
Credible Auctions
Cryptographically Secure
Single-Item
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
67:1
67:16
10.4230/LIPIcs.ITCS.2022.67
article
Small Circuits Imply Efficient Arthur-Merlin Protocols
Ezra, Michael
1
Rothblum, Ron D.
1
Department of Computer Science, Technion, Haifa, Israel
The inner product function ⟨ x,y ⟩ = ∑_i x_i y_i mod 2 can be easily computed by a (linear-size) AC⁰(⊕) circuit: that is, a constant depth circuit with AND, OR and parity (XOR) gates. But what if we impose the restriction that the parity gates can only be in the bottommost layer (closest to the input)? Namely, can the inner product function be computed by an AC⁰ circuit composed with a single layer of parity gates? This seemingly simple question is an important open question at the frontier of circuit lower bound research.
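As a reference point, the function in question is trivial to compute without any circuit restriction; the open problem concerns how small a restricted circuit for it can be, not how to evaluate it:

```python
# Inner product over GF(2): <x, y> = sum_i x_i * y_i (mod 2).
def inner_product_mod2(x, y):
    return sum(xi & yi for xi, yi in zip(x, y)) % 2

print(inner_product_mod2([1, 0, 1, 1], [1, 1, 0, 1]))  # 1+0+0+1 = 2 ≡ 0 (mod 2) → 0
```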
In this work, we focus on a minimalistic version of the above question: namely, showing that the inner product function cannot be approximated by a small DNF augmented with a single layer of parity gates. Our main result shows that the existence of such a circuit would have unexpected implications for interactive proofs, or more specifically, for interactive variants of the Data Streaming and Communication Complexity models. In particular, we show that the existence of such a small (i.e., polynomial-size) circuit yields:
1) An O(d)-message protocol in the Arthur-Merlin Data Streaming model for every n-variate, degree d polynomial (over GF(2)), using only Õ(d) ⋅log(n) communication and space complexity. In particular, this gives an AM[2] Data Streaming protocol for a variant of the well-studied triangle counting problem, with poly-logarithmic communication and space complexities.
2) A 2-message communication complexity protocol for any sparse (or low degree) polynomial, and for any function computable by an AC⁰(⊕) circuit. Specifically, for the latter, we obtain a protocol with communication complexity that is poly-logarithmic in the size of the AC⁰(⊕) circuit.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.67/LIPIcs.ITCS.2022.67.pdf
Circuit Complexity
Circuit Lower Bounds
Communication Complexity
Data Streaming
Arthur-Merlin games
Interactive Proofs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
68:1
68:20
10.4230/LIPIcs.ITCS.2022.68
article
A Lower Bound on the Space Overhead of Fault-Tolerant Quantum Computation
Fawzi, Omar
1
Müller-Hermes, Alexander
2
3
Shayeghi, Ala
1
Univ Lyon, ENS Lyon, UCBL, CNRS, Inria, LIP, F-69342, Lyon Cedex 07, France
Institut Camille Jordan, Université Claude Bernard Lyon 1, 69622 Villeurbanne cedex, France
Department of Mathematics, University of Oslo, Norway
The threshold theorem is a fundamental result in the theory of fault-tolerant quantum computation, stating that arbitrarily long quantum computations can be performed with polylogarithmic overhead provided the noise level is below a constant threshold. A recent work by Fawzi, Grospellier and Leverrier (FOCS 2018), building on a result by Gottesman (QIC 2013), showed that the space overhead can be asymptotically reduced to a constant independent of the circuit, provided we only consider circuits whose length is bounded by a polynomial in the width. In this work, using a minimal model for quantum fault tolerance, we establish a general lower bound on the space overhead required to achieve fault tolerance.
For any non-unitary qubit channel 𝒩 and any quantum fault-tolerance scheme against i.i.d. noise modeled by 𝒩, we prove a lower bound of max{Q(𝒩)^{-1}n, α_𝒩 log T} on the number of physical qubits, for circuits of length T and width n. Here, Q(𝒩) denotes the quantum capacity of 𝒩 and α_𝒩 > 0 is a constant depending only on the channel 𝒩. In our model, we allow qubits to be replaced by fresh ones during the execution of the circuit, and in the case of unital noise, we allow classical computation to be free and perfect. This improves upon results that assumed classical computation to also be affected by noise, and that sometimes did not allow fresh qubits to be added. Along the way, we prove an exponential upper bound on the maximal length of fault-tolerant quantum computation with amplitude damping noise, resolving a conjecture by Ben-Or, Gottesman and Hassidim (2013).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.68/LIPIcs.ITCS.2022.68.pdf
Fault-tolerant quantum computation
quantum error correction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
69:1
69:25
10.4230/LIPIcs.ITCS.2022.69
article
On Semi-Algebraic Proofs and Algorithms
Fleming, Noah
1
2
Göös, Mika
3
Grosser, Stefan
4
Robere, Robert
4
University of California, San Diego, CA, USA
Memorial University, St. John’s, Canada
EPFL, Lausanne, Switzerland
McGill University, Montreal, Canada
We give a new characterization of the Sherali-Adams proof system, showing that there is a degree-d Sherali-Adams refutation of an unsatisfiable CNF formula C if and only if there is an ε > 0 and a degree-d conical junta J such that viol_C(x) - ε = J, where viol_C(x) counts the number of falsified clauses of C on an input x. Using this result we show that the linear separation complexity, a complexity measure recently studied by Hrubeš (and independently by de Oliveira Oliveira and Pudlák under the name of weak monotone linear programming gates), monotone feasibly interpolates Sherali-Adams proofs.
We then investigate separation results for viol_C(x) - ε. In particular, we give a family of unsatisfiable CNF formulas C which have polynomial-size and small-width resolution proofs, but for which any representation of viol_C(x) - 1 by a conical junta requires degree Ω(n); this resolves an open question of Filmus, Mahajan, Sood, and Vinyals. Since Sherali-Adams can simulate resolution, this separates the non-negative degree of viol_C(x) - 1 and viol_C(x) - ε for arbitrarily small ε > 0. Finally, by applying lifting theorems, we translate this lower bound into new separation results between extension complexity and monotone circuit complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.69/LIPIcs.ITCS.2022.69.pdf
Proof Complexity
Extended Formulations
Circuit Complexity
Sherali-Adams
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
70:1
70:23
10.4230/LIPIcs.ITCS.2022.70
article
Extremely Deep Proofs
Fleming, Noah
1
2
Pitassi, Toniann
3
4
5
Robere, Robert
6
University of California, San Diego, CA, USA
Memorial University, Canada
University of Toronto, Canada
Columbia University, New York, NY, USA
IAS, Princeton, NJ, USA
McGill University, Montreal, Canada
We further the study of supercritical tradeoffs in proof and circuit complexity, which is a type of tradeoff between complexity parameters where restricting one complexity parameter forces another to exceed its worst-case upper bound. In particular, we prove a new family of supercritical tradeoffs between depth and size for Resolution, Res(k), and Cutting Planes proofs. For each of these proof systems we construct, for each c ≤ n^{1-ε}, a formula with n^{O(c)} clauses and n variables that has a proof of size n^{O(c)} but in which any proof of size no more than roughly exponential in n^{1-ε}/c must necessarily have depth ≈ n^c. By setting c = o(n^{1-ε}) we therefore obtain exponential lower bounds on proof depth; this far exceeds the trivial worst-case upper bound of n. In doing so we give a simplified proof of a supercritical depth/width tradeoff for tree-like Resolution from [Alexander A. Razborov, 2016]. Finally, we outline several conjectures that would imply similar supercritical tradeoffs between size and depth in circuit complexity via lifting theorems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.70/LIPIcs.ITCS.2022.70.pdf
Proof Complexity
Tradeoffs
Resolution
Cutting Planes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
71:1
71:22
10.4230/LIPIcs.ITCS.2022.71
article
On the Download Rate of Homomorphic Secret Sharing
Fosli, Ingerid
1
Ishai, Yuval
2
Kolobov, Victor I.
2
Wootters, Mary
3
Google, Houston, TX, USA
Technion, Haifa, Israel
Stanford University, CA, USA
A homomorphic secret sharing (HSS) scheme is a secret sharing scheme that supports evaluating functions on shared secrets by means of a local mapping from input shares to output shares. We initiate the study of the download rate of HSS, namely, the achievable ratio between the length of the output shares and the output length when amortized over 𝓁 function evaluations. We obtain the following results.
- In the case of linear information-theoretic HSS schemes for degree-d multivariate polynomials, we characterize the optimal download rate in terms of the optimal minimum distance of a linear code with related parameters. We further show that for sufficiently large 𝓁 (polynomial in all problem parameters), the optimal rate can be realized using Shamir’s scheme, even with secrets over 𝔽₂.
- We present a general rate-amplification technique for HSS that improves the download rate at the cost of requiring more shares. As a corollary, we get high-rate variants of computationally secure HSS schemes and efficient private information retrieval protocols from the literature.
- We show that, in some cases, one can beat the best download rate of linear HSS by allowing nonlinear output reconstruction and 2^{-Ω(𝓁)} error probability.
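To make the local-evaluation model concrete, the following is a minimal sketch (ours, not the paper's construction, and over a large prime field rather than the 𝔽₂ setting above, with none of the rate optimizations studied here) of homomorphic secret sharing for the degree-2 polynomial f(a, b) = ab via Shamir's scheme: each party multiplies its two input shares locally, and the resulting output shares lie on a degree-2t polynomial whose value at 0 is the product.

```python
import random

P = 2**31 - 1  # a prime field large enough to avoid wraparound here

def share(secret, t, n, rng):
    """Degree-t Shamir shares of a secret, one share per party x = 1..n."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 over the points (x, y)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Local evaluation of f(a, b) = a*b: each party multiplies its two shares.
# The product of two degree-t polynomials has degree 2t, so 2t+1 = 3 output
# shares suffice to decode the output.
rng = random.Random(1)
t, n = 1, 3
a_sh, b_sh = share(6, t, n, rng), share(7, t, n, rng)
out = [(x, a_sh[x - 1] * b_sh[x - 1] % P) for x in range(1, n + 1)]
assert reconstruct(out) == 42
```

In this baseline, each evaluation downloads one field element per output share; the download rate questions above ask how much of that can be amortized away over 𝓁 evaluations.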
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.71/LIPIcs.ITCS.2022.71.pdf
Information-theoretic cryptography
homomorphic secret sharing
private information retrieval
secure multiparty computation
regenerating codes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
72:1
72:19
10.4230/LIPIcs.ITCS.2022.72
article
A Variant of the VC-Dimension with Applications to Depth-3 Circuits
Frankl, Peter
1
Gryaznov, Svyatoslav
2
3
https://orcid.org/0000-0002-5648-8194
Talebanfard, Navid
2
https://orcid.org/0000-0002-3524-9282
Rényi Institute, Budapest, Hungary
Institute of Mathematics of the Czech Academy of Sciences, Prague, Czech Republic
St. Petersburg Department of V.A. Steklov Institute of Mathematics of the Russian Academy of Sciences, Russia
We introduce the following variant of the VC-dimension. Given S ⊆ {0,1}ⁿ and a positive integer d, we define 𝕌_d(S) to be the size of the largest subset I ⊆ [n] such that the projection of S on every subset of I of size d is the d-dimensional cube. We show that determining the largest cardinality of a set with a given 𝕌_d dimension is equivalent to a Turán-type problem related to the total number of cliques in a d-uniform hypergraph. This allows us to beat the Sauer-Shelah lemma for this notion of dimension. We use this to obtain several results on Σ₃^k-circuits, i.e., depth-3 circuits with top gate OR and bottom fan-in at most k:
- Tight relationship between the number of satisfying assignments of a 2-CNF and the dimension of the largest projection accepted by it, thus improving Paturi, Saks, and Zane (Comput. Complex. '00).
- Improved Σ₃³-circuit lower bounds for affine dispersers for sublinear dimension. Moreover, we pose a purely hypergraph-theoretic conjecture under which we get further improvement.
- We make progress towards settling the Σ₃² complexity of the inner product function and all degree-2 polynomials over 𝔽₂ in general. The question of determining the Σ₃² complexity of IP was recently posed by Golovnev, Kulikov, and Williams (ITCS'21).
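As a toy illustration of the definition (ours, not from the paper), the dimension 𝕌_d(S) can be computed by brute force for tiny n:

```python
from itertools import combinations, product

def U_dim(S, d, n):
    """Brute-force U_d(S): the largest |I| such that projecting S onto every
    size-d subset of I gives the full d-dimensional cube (tiny n only)."""
    pts = [tuple(x) for x in S]
    cube = set(product((0, 1), repeat=d))
    for r in range(n, -1, -1):
        for I in combinations(range(n), r):
            if all({tuple(x[j] for j in J) for x in pts} == cube
                   for J in combinations(I, d)):
                return r

# The even-weight strings on 3 coordinates cover every 2-coordinate projection,
# so U_2 = 3 even though the set has only 4 elements; U_3 drops to 2.
even = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
assert U_dim(even, 2, 3) == 3 and U_dim(even, 3, 3) == 2
```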
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.72/LIPIcs.ITCS.2022.72.pdf
VC-dimension
Hypergraph
Clique
Affine Disperser
Circuit
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
73:1
73:27
10.4230/LIPIcs.ITCS.2022.73
article
Continuous Tasks and the Asynchronous Computability Theorem
Galeana, Hugo Rincon
1
https://orcid.org/0000-0002-8152-1275
Rajsbaum, Sergio
2
https://orcid.org/0000-0002-0009-5287
Schmid, Ulrich
1
https://orcid.org/0000-0001-9831-8583
Embedded Computing Systems Group, TU Wien, Austria
UNAM, Instituto de Matemáticas, Mexico City, Mexico
The celebrated 1999 Asynchronous Computability Theorem (ACT) of Herlihy and Shavit characterized the distributed tasks that are wait-free solvable and uncovered deep connections with combinatorial topology. We provide an alternative characterization of those tasks by means of the novel concept of continuous tasks, whose input/output specification is a continuous function between the geometric realizations of the input and output complexes. We state and prove a precise characterization theorem (CACT) for wait-free solvable tasks in terms of continuous tasks. Its proof utilizes a novel chromatic version of a foundational result in algebraic topology, the simplicial approximation theorem, which is also proved in this paper. Apart from the alternative proof of the ACT implied by our CACT, we also demonstrate that continuous tasks have an expressive power that goes beyond classic task specifications, and hence open up a promising avenue for future research: for the well-known approximate agreement task, we show that one can easily encode the desired proportion of the occurrence of specific outputs, namely, exact agreement, in the continuous task specification.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.73/LIPIcs.ITCS.2022.73.pdf
Wait-free computability
topology
distributed computing
decision tasks
shared memory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
74:1
74:8
10.4230/LIPIcs.ITCS.2022.74
article
Correlation Detection in Trees for Planted Graph Alignment
Ganassali, Luca
1
Massoulié, Laurent
2
Lelarge, Marc
1
Inria, DI/ENS, PSL Research University, Paris, France
MSR-Inria Joint Centre, Inria, DI/ENS, PSL Research University, Paris, France
Motivated by alignment of correlated sparse random graphs, we study the hypothesis testing problem of deciding whether two random trees are correlated or not. Based on this correlation detection problem, we propose MPAlign, a message-passing algorithm for graph alignment, which we prove to succeed in polynomial time at partial alignment whenever tree detection is feasible. As a result, our analysis of tree detection reveals new ranges of parameters for which partial alignment of sparse random graphs is feasible in polynomial time.
We conjecture that the connection between partial graph alignment and tree detection runs deeper, and that the parameter range where tree detection is impossible, which we partially characterize, corresponds to a region where partial graph alignment is hard (not polytime feasible).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.74/LIPIcs.ITCS.2022.74.pdf
inference on graphs
hypothesis testing
Erdős-Rényi random graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
75:1
75:27
10.4230/LIPIcs.ITCS.2022.75
article
On Polynomially Many Queries to NP or QMA Oracles
Gharibian, Sevag
1
https://orcid.org/0000-0002-9992-3379
Rudolph, Dorian
1
https://orcid.org/0000-0002-2440-7388
Department of Computer Science and Institute for Photonic Quantum Systems (PhoQS), Paderborn University, Germany
We study the complexity of problems solvable in deterministic polynomial time with access to an NP or Quantum Merlin-Arthur (QMA)-oracle, such as P^NP and P^QMA, respectively. The former allows one to classify problems more finely than the Polynomial-Time Hierarchy (PH), whereas the latter characterizes physically motivated problems such as Approximate Simulation (APX-SIM) [Ambainis, CCC 2014]. In this area, a central role has been played by the classes P^NP[log] and P^QMA[log], defined identically to P^NP and P^QMA, except that only logarithmically many oracle queries are allowed. Here, [Gottlob, FOCS 1993] showed that if the adaptive queries made by a P^NP machine have a "query graph" which is a tree, then this computation can be simulated in P^NP[log].
In this work, we first show that for any verification class C ∈ {NP, MA, QCMA, QMA, QMA(2), NEXP, QMA_exp}, any P^C machine with a query graph of "separator number" s can be simulated using deterministic time exp(slog n) and slog n queries to a C-oracle. When s ∈ O(1) (which includes the case of O(1)-treewidth, and thus also of trees), this gives an upper bound of P^C[log], and when s ∈ O(log^k(n)), this yields the bound QP^{C[log^{k+1}]} (QP meaning quasi-polynomial time). We next show how to combine Gottlob’s "admissible-weighting function" framework with the "flag-qubit" framework of [Watson, Bausch, Gharibian, 2020], obtaining a unified approach for embedding P^C computations directly into APX-SIM instances in a black-box fashion. Finally, we formalize a simple no-go statement about polynomials (cf. [Krentel, STOC 1986]): Given a multi-linear polynomial p specified via an arithmetic circuit, if one can "weakly compress" p so that its optimal value requires m bits to represent, then P^NP can be decided with only m queries to an NP-oracle.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.75/LIPIcs.ITCS.2022.75.pdf
admissible weighting function
oracle complexity class
quantum complexity theory
Quantum Merlin Arthur (QMA)
simulation of local measurement
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
76:1
76:18
10.4230/LIPIcs.ITCS.2022.76
article
Eliminating Intermediate Measurements Using Pseudorandom Generators
Girish, Uma
1
Raz, Ran
1
Princeton University, Princeton, NJ, USA
We show that quantum algorithms of time T and space S ≥ log T with unitary operations and intermediate measurements can be simulated by quantum algorithms of time T ⋅ poly(S) and space O(S ⋅ log T) with unitary operations and without intermediate measurements. The best results prior to this work required either Ω(T) space (by the deferred measurement principle) or poly(2^S) time [Bill Fefferman and Zachary Remscrim, 2021; Uma Girish et al., 2021]. Our result is thus a time-efficient and space-efficient simulation of algorithms with unitary operations and intermediate measurements by algorithms with unitary operations and without intermediate measurements.
To prove our result, we study pseudorandom generators for quantum space-bounded algorithms. We show that (an instance of) the INW pseudorandom generator for classical space-bounded algorithms [Russell Impagliazzo et al., 1994] also fools quantum space-bounded algorithms. More precisely, we show that for quantum space-bounded algorithms that have access to a read-once tape consisting of random bits, the final state of the algorithm when the random bits are drawn from the uniform distribution is nearly identical to the final state when the random bits are drawn using the INW pseudorandom generator. This result applies to general quantum algorithms which can apply unitary operations, perform intermediate measurements and reset qubits.
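For intuition about the generator itself, the INW recursion admits a very short sketch (a simplified instance of ours, with blocks taken to be integers modulo a prime purely for illustration): level k outputs the level-(k-1) expansion of the seed, followed by the level-(k-1) expansion of a pairwise-independent hash of the seed, so O(k) seed blocks expand to 2^k output blocks.

```python
P = (1 << 61) - 1  # prime modulus; "blocks" are integers mod P for illustration

def inw(s, hashes):
    """INW expansion: with k hash functions h_i(x) = a_i*x + b_i mod P
    (a pairwise-independent family), a single seed block s expands to
    2^k output blocks via G_k(s) = G_{k-1}(s) || G_{k-1}(h_k(s))."""
    if not hashes:
        return [s % P]
    rest, (a, b) = hashes[:-1], hashes[-1]
    return inw(s, rest) + inw((a * s + b) % P, rest)

out = inw(5, [(2, 3), (7, 11), (13, 17)])
assert len(out) == 8  # 2^3 output blocks from a seed of 1 + 2*3 numbers
```

The paper's result says that, for quantum space-bounded algorithms reading the blocks from a read-once tape, such an expansion is nearly indistinguishable from truly random bits.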
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.76/LIPIcs.ITCS.2022.76.pdf
quantum algorithms
intermediate measurements
deferred measurement
pseudorandom generator
INW generator
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
77:1
77:19
10.4230/LIPIcs.ITCS.2022.77
article
Sample-Based Proofs of Proximity
Goldberg, Guy
1
https://orcid.org/0000-0003-3772-4494
Rothblum, Guy N.
1
https://orcid.org/0000-0001-5273-6472
Weizmann Institute of Science, Rehovot, Israel
Suppose we have random sampling access to a huge object, such as a graph or a database. Namely, we can observe the values of random locations in the object, say random records in the database or random edges in the graph. We cannot, however, query locations of our choice. Can we verify complex properties of the object using only this restricted sampling access?
In this work, we initiate the study of sample-based proof systems, where the verifier is extremely constrained: given an input, the verifier can only obtain samples of uniformly random and i.i.d. locations in the input string, together with the values at those locations. The goal is to verify complex properties in sublinear time, using only this restricted access. Following the literature on Property Testing and on Interactive Proofs of Proximity (IPPs), we seek proof systems where the verifier accepts every input that has the property, and with high probability rejects every input that is far from the property.
We study both interactive and non-interactive sample-based proof systems, showing:
- On the positive side, our main result is that rich families of properties/languages have sub-linear sample-based interactive proofs of proximity (SIPPs). We show that every language in NC has a SIPP, where the sample and communication complexities, as well as the verifier’s running time, are Õ(√n), with polylog(n) communication rounds. We also show that every language that can be computed in polynomial time and bounded-polynomial space has a SIPP, where the sample and communication complexities, as well as the verifier’s running time, are roughly √n, with a constant number of rounds.
This is achieved by constructing a reduction protocol from SIPPs to IPPs. With the aid of an untrusted prover, this reduction enables a restricted, sample-based verifier to simulate an execution of a (query-based) IPP, even though it cannot query the input. Applying the reduction to known query-based IPPs yields SIPPs for the families described above.
- We show that every language with an adequate (query-based) property tester has a 1-round SIPP with constant sample complexity and logarithmic communication complexity. One such language is equality testing, for which we give an explicit and simple SIPP.
- On the negative side, we show that interaction can be essential: we prove that there is no non-interactive sample-based proof of proximity for equality testing.
- Finally, we prove that private coins can dramatically increase the power of SIPPs. We show a strong separation between the power of public-coin SIPPs and private-coin SIPPs for equality testing.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.77/LIPIcs.ITCS.2022.77.pdf
Interactive Proof Systems
Sample-Based Access
Proofs of Proximity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
78:1
78:19
10.4230/LIPIcs.ITCS.2022.78
article
Testing Distributions of Huge Objects
Goldreich, Oded
1
https://orcid.org/0000-0002-4329-135X
Ron, Dana
2
https://orcid.org/0000-0001-6576-7200
Department of Computer Science, Weizmann Institute of Science, Israel
School of Electrical Engineering, Tel Aviv University, Israel
We initiate a study of a new model of property testing that is a hybrid of testing properties of distributions and testing properties of strings. Specifically, the new model refers to testing properties of distributions, but these are distributions over huge objects (i.e., very long strings). Accordingly, the model accounts for the total number of local probes into these objects (resp., queries to the strings) as well as for the distance between objects (resp., strings). In particular, the distance between distributions is defined as the earth mover’s distance with respect to the relative Hamming distance between strings.
We study the query complexity of testing in this new model, focusing on three directions. First, we try to relate the query complexity of testing properties in the new model to the sample complexity of testing these properties in the standard distribution testing model. Second, we consider the complexity of testing properties that arise naturally in the new model (e.g., distributions that capture random variations of fixed strings). Third, we consider the complexity of testing properties that were extensively studied in the standard distribution testing model: Two such cases are uniform distributions and pairs of identical distributions, where we obtain the following results.
- Testing whether a distribution over n-bit long strings is uniform on some set of size m can be done with query complexity Õ(m/ε³), where ε > (log₂m)/n is the proximity parameter.
- Testing whether two distributions over n-bit long strings that have support size at most m are identical can be done with query complexity Õ(m^{2/3}/ε³). Both upper bounds are quite tight; that is, for ε = Ω(1), the first task requires Ω(m^c) queries for any c < 1 and n = ω(log m), whereas the second task requires Ω(m^{2/3}) queries. Note that the query complexity of the first task is higher than the sample complexity of the corresponding task in the standard distribution testing model, whereas in the case of the second task the bounds almost match.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.78/LIPIcs.ITCS.2022.78.pdf
Property Testing
Distributions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
79:1
79:21
10.4230/LIPIcs.ITCS.2022.79
article
Omnipredictors
Gopalan, Parikshit
1
https://orcid.org/0000-0003-3069-9054
Kalai, Adam Tauman
2
Reingold, Omer
3
Sharan, Vatsal
4
Wieder, Udi
1
VMware Research, Palo Alto, California, USA
Microsoft Research, Boston, MA, USA
Stanford University, CA, USA
University of Southern California, Los Angeles, CA, USA
Loss minimization is a dominant paradigm in machine learning, where a predictor is trained to minimize some loss function that depends on an uncertain event (e.g., "will it rain tomorrow?"). Different loss functions imply different learning algorithms and, at times, very different predictors. While widespread and appealing, a clear drawback of this approach is that the loss function may not be known at the time of learning, requiring the algorithm to use a best-guess loss function. Alternatively, the same classifier may be used to inform multiple decisions, which correspond to multiple loss functions, requiring multiple learning algorithms to be run on the same data. We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action.
We introduce the notion of an (L,𝒞)-omnipredictor, which can be used to optimize any loss in a family L. Once the loss function is set, the outputs of the predictor can be post-processed (a simple univariate data-independent transformation of individual predictions) to do well compared with any hypothesis from the class 𝒞. The post-processing is essentially what one would perform if the outputs of the predictor were true probabilities of the uncertain events. In a sense, omnipredictors extract all the predictive power from the class 𝒞, irrespective of the loss function in L.
We show that such "loss-oblivious" learning is feasible through a connection to multicalibration, a notion introduced in the context of algorithmic fairness. A multicalibrated predictor doesn’t aim to minimize some loss function, but rather to make calibrated predictions, even when conditioned on inputs lying in certain sets c belonging to a family 𝒞 which is weakly learnable. We show that a 𝒞-multicalibrated predictor is also an (L,𝒞)-omnipredictor, where L contains all convex loss functions with some mild Lipschitz conditions. The predictors are even omnipredictors with respect to sparse linear combinations of functions in 𝒞. As a corollary, we deduce that distribution-specific weak agnostic learning is complete for a large class of loss minimization tasks.
In addition, we show how multicalibration can be viewed as a solution concept for agnostic boosting, shedding new light on past results. Finally, we transfer our insights back to the context of algorithmic fairness by providing omnipredictors for multi-group loss minimization.
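The post-processing step described above is simple enough to state as code; the following is a hypothetical sketch (names and interfaces are ours, not the paper's): given a calibrated probability p for a binary outcome, act as if p were the true probability and pick the action with the smallest expected loss.

```python
def post_process(p, loss, actions):
    """Loss-specific post-processing of an omnipredictor's output: treat the
    predicted probability p as the true chance of outcome y = 1 and choose
    the action minimizing expected loss under that belief."""
    return min(actions, key=lambda a: p * loss(a, 1) + (1 - p) * loss(a, 0))

# Squared loss over a grid of real-valued actions: the best action is p itself.
grid = [i / 100 for i in range(101)]
assert post_process(0.3, lambda a, y: (a - y) ** 2, grid) == 0.3

# 0-1 loss over {0, 1}: thresholding at 1/2 falls out of the same recipe.
assert post_process(0.3, lambda a, y: float(a != y), [0, 1]) == 0
```

The same predictor serves both losses; only this univariate, data-independent transformation changes.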
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.79/LIPIcs.ITCS.2022.79.pdf
Loss minimization
multi-group fairness
agnostic learning
boosting
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
80:1
80:9
10.4230/LIPIcs.ITCS.2022.80
article
Mixing in Non-Quasirandom Groups
Gowers, W. T.
1
Viola, Emanuele
2
Collège de France, Paris, France
Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA
We initiate a systematic study of mixing in non-quasirandom groups. Let A and B be two independent, high-entropy distributions over a group G. We show that the product distribution AB is statistically close to the distribution F(AB) for several choices of G and F, including:
1) G is the affine group of 2×2 matrices, and F sets the top-right matrix entry to a uniform value,
2) G is the lamplighter group, that is, the wreath product of ℤ₂ and ℤ_n, and F is multiplication by a certain subgroup,
3) G is Hⁿ where H is non-abelian, and F selects a uniform coordinate and takes a uniform conjugate of it.
The obtained bounds for (1) and (2) are tight.
This work is motivated by and applied to problems in communication complexity. We consider the 3-party communication problem of deciding if the product of three group elements multiplies to the identity. We prove lower bounds for the groups above, which are tight for the affine and the lamplighter groups.
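As a toy numerical check of statement (1) (our illustration, not the paper's method), the statistical distance between AB and F(AB) can be computed exactly for the affine group modulo a small prime: high-entropy inputs mix well, while point masses do not.

```python
import random

P = 5  # affine group over Z_5: elements (a, b) with a != 0, acting as x -> a*x + b

G = [(a, b) for a in range(1, P) for b in range(P)]

def mul(g, h):
    # in matrix form [[a, b], [0, 1]]: (a1, b1) * (a2, b2) = (a1*a2, a1*b2 + b1)
    return ((g[0] * h[0]) % P, (g[0] * h[1] + g[1]) % P)

def product_dist(A, B):
    """Exact distribution of AB when A and B are uniform over the given supports."""
    d = {}
    for g in A:
        for h in B:
            gh = mul(g, h)
            d[gh] = d.get(gh, 0.0) + 1.0 / (len(A) * len(B))
    return d

def dist_to_F(d):
    """Statistical distance between AB and F(AB), where F re-randomizes the
    top-right entry b to a uniform value."""
    marg = {}
    for (a, b), p in d.items():
        marg[a] = marg.get(a, 0.0) + p
    return 0.5 * sum(abs(d.get((a, b), 0.0) - marg.get(a, 0.0) / P)
                     for a in range(1, P) for b in range(P))

rng = random.Random(0)
A = rng.sample(G, 10)                             # high entropy: half of G
B = rng.sample(G, 10)
high = dist_to_F(product_dist(A, B))
low = dist_to_F(product_dist([(1, 0)], [(1, 0)])) # two point masses: no mixing
assert high < low
```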
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.80/LIPIcs.ITCS.2022.80.pdf
Groups
representation theory
mixing
communication complexity
quasi-random
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
81:1
81:19
10.4230/LIPIcs.ITCS.2022.81
article
Time-Traveling Simulators Using Blockchains and Their Applications
Goyal, Vipul
1
2
Raizes, Justin
1
Soni, Pratik
1
Carnegie Mellon University, Pittsburgh, PA, USA
NTT Research, Sunnyvale, CA, USA
Blockchain technology has the potential of transforming cryptography. We study the round complexity of zero-knowledge and, more broadly, of secure computation in the blockchain-hybrid model, where all parties can access the blockchain as an oracle.
We study zero-knowledge and secure computation through the lens of a new security notion where the simulator is given the ability to "time-travel", or more accurately, to look into the future states of the blockchain and use this information to perform simulation. Such a time-traveling simulator gives a novel security guarantee of the following form: whatever the adversary could have learnt from an interaction, it could have computed on its own shortly into the future (e.g., a few hours from now).
We exhibit the power of time-traveling simulators by constructing round-efficient protocols in the blockchain-hybrid model. In particular, we construct:
1) Three-round zero-knowledge (ZK) argument for NP with a polynomial-time black-box time-traveling simulator.
2) Three-round secure two-party computation (2PC) for any functionality with a polynomial-time black-box time-traveling simulator for both parties.
In addition to standard cryptographic assumptions, we rely on natural hardness assumptions for Proof-of-Work based blockchains. In comparison, in the plain model, three-round protocols with black-box simulation are impossible, and constructions with non-black-box simulation for ZK require novel cryptographic assumptions while no construction for three-round 2PC is known. Our three-round 2PC result relies on a new, two-round extractable commitment that admits a time-traveling extractor.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.81/LIPIcs.ITCS.2022.81.pdf
Cryptography
Zero Knowledge
Secure Two-Party Computation
Blockchain
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
82:1
82:24
10.4230/LIPIcs.ITCS.2022.82
article
Online Multivalid Learning: Means, Moments, and Prediction Intervals
Gupta, Varun
1
Jung, Christopher
1
Noarov, Georgy
1
Pai, Mallesh M.
2
Roth, Aaron
1
University of Pennsylvania, Philadelphia, PA, USA
Rice University, Houston, TX, USA
We present a general, efficient technique for providing contextual predictions that are "multivalid" in various senses, against an online sequence of adversarially chosen examples (x,y). This means that the resulting estimates correctly predict various statistics of the labels y not just marginally (as averaged over the sequence of examples), but also conditionally on x ∈ G for any G belonging to an arbitrary intersecting collection of groups 𝒢.
We provide three instantiations of this framework. The first is mean prediction, which corresponds to an online algorithm satisfying the notion of multicalibration from [Hébert-Johnson et al., 2018]. The second is variance and higher moment prediction, which corresponds to an online algorithm satisfying the notion of mean-conditioned moment multicalibration from [Jung et al., 2021]. Finally, we define a new notion of prediction interval multivalidity, and give an algorithm for finding prediction intervals which satisfy it. Because our algorithms handle adversarially chosen examples, they can equally well be used to predict statistics of the residuals of arbitrary point prediction methods, giving rise to very general techniques for quantifying the uncertainty of predictions of black box algorithms, even in an online adversarial setting. When instantiated for prediction intervals, this solves a problem similar to that addressed by conformal prediction, but in an adversarial environment and with multivalidity guarantees stronger than simple marginal coverage guarantees.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.82/LIPIcs.ITCS.2022.82.pdf
Uncertainty Estimation
Calibration
Online Learning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
83:1
83:23
10.4230/LIPIcs.ITCS.2022.83
article
Adaptive Massively Parallel Constant-Round Tree Contraction
Hajiaghayi, MohammadTaghi
1
Knittel, Marina
1
Saleh, Hamed
1
Su, Hsin-Hao
2
University of Maryland, College Park, MD, USA
Boston College, MA, USA
Miller and Reif’s classic and fundamental FOCS'85 tree contraction algorithm [Gary L. Miller and John H. Reif, 1989] is a broadly applicable technique for the parallel solution of a large number of tree problems. It is also used as an algorithmic design technique for a large number of parallel graph algorithms. In all previously explored models of computation, however, tree contraction has only been achieved in Ω(log n) rounds of parallel run time. In this work, we not only introduce a generalized tree contraction method but also show it can be computed highly efficiently, in O(1/ε³) rounds, in the Adaptive Massively Parallel Computing (AMPC) setting, where each machine has O(n^ε) local memory for some 0 < ε < 1. AMPC is a practical extension of Massively Parallel Computing (MPC) that utilizes distributed hash tables [MohammadHossein Bateni et al., 2017; Behnezhad et al., 2019; Raimondas Kiveris et al., 2014]. MPC is an abstract model for MapReduce, Hadoop, Spark, and Flume, which are widely used across industry, and it has been studied extensively in the theory community in recent years. Last but not least, we show that our results extend to multiple problems on trees, including maximum and maximal matching, maximum and maximal independent set, tree isomorphism testing, and more.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.83/LIPIcs.ITCS.2022.83.pdf
Adaptive Massively Parallel Computation
Tree Contraction
Matching
Independent Set
Tree Isomorphism
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
84:1
84:23
10.4230/LIPIcs.ITCS.2022.84
article
Errorless Versus Error-Prone Average-Case Complexity
Hirahara, Shuichi
1
Santhanam, Rahul
2
Principles of Informatics Research Division, National Institute of Informatics, Tokyo, Japan
Department of Computer Science, University of Oxford, UK
We consider the question of whether errorless and error-prone notions of average-case hardness are equivalent, and make several contributions.
First, we study this question in the context of hardness for NP, and connect it to the long-standing open question of whether there are instance checkers for NP. We show that there is an efficient non-uniform non-adaptive reduction from errorless to error-prone heuristics for NP if and only if there is an efficient non-uniform average-case non-adaptive instance-checker for NP. We also suggest an approach to proving equivalence of the two notions of average-case hardness for PH.
Second, we show unconditionally that error-prone average-case hardness is equivalent to errorless average-case hardness for P against NC¹ and for UP ∩ coUP against P.
Third, we apply our results about errorless and error-prone average-case hardness to get new equivalences between hitting set generators and pseudorandom generators.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.84/LIPIcs.ITCS.2022.84.pdf
average-case complexity
instance checker
pseudorandomness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
85:1
85:25
10.4230/LIPIcs.ITCS.2022.85
article
Excluding PH Pessiland
Hirahara, Shuichi
1
Santhanam, Rahul
2
Principles of Informatics Research Division, National Institute of Informatics, Tokyo, Japan
Department of Computer Science, University of Oxford, UK
Heuristica and Pessiland are "worlds" of average-case complexity [Impagliazzo95] that are considered unlikely but that current techniques are unable to rule out. Recently, [Hirahara20] considered a PH (Polynomial Hierarchy) analogue of Heuristica, and showed that to rule it out, it would be sufficient to prove the NP-completeness of the problem GapMINKT^PH of estimating the PH-oracle time-bounded Kolmogorov complexity of a string.
In this work, we analogously define "PH Pessiland" to be a world where PH is hard on average but PH-computable pseudo-random generators do not exist. We unconditionally rule out PH-Pessiland in both non-uniform and uniform settings, by showing that the distributional problem of computing PH-oracle time-bounded Kolmogorov complexity of a string over the uniform distribution is complete for an (error-prone) average-case analogue of PH. Moreover, we show the equivalence between error-prone average-case hardness of PH and the existence of PH-computable pseudorandom generators.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.85/LIPIcs.ITCS.2022.85.pdf
average-case complexity
pseudorandomness
meta-complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
86:1
86:20
10.4230/LIPIcs.ITCS.2022.86
article
Nash-Bargaining-Based Models for Matching Markets: One-Sided and Two-Sided; Fisher and Arrow-Debreu
Hosseini, Mojtaba
1
https://orcid.org/0000-0003-0272-016X
Vazirani, Vijay V.
2
https://orcid.org/0000-0002-4106-9077
The Paul Merage School of Business, University of California, Irvine, CA, USA
Computer Science Department, University of California, Irvine, CA, USA
This paper addresses two deficiencies of models in the area of matching-based market design. The first arises from the recent realization that the most prominent solution that uses cardinal utilities, namely the Hylland-Zeckhauser (HZ) mechanism [Hylland and Zeckhauser, 1979], is intractable; computation of even an approximate equilibrium is PPAD-complete [Vazirani and Yannakakis, 2021; Chen et al., 2021]. The second is the extreme paucity of models that use cardinal utilities, in sharp contrast with general equilibrium theory.
Our paper addresses both these issues by proposing Nash-bargaining-based matching market models. Since the Nash bargaining solution is captured by a convex program, efficiency follows; in addition, it possesses a number of desirable game-theoretic properties. Our approach yields a rich collection of models: for one-sided as well as two-sided markets, for Fisher as well as Arrow-Debreu settings, and for a wide range of utility functions, all the way from linear to Leontief.
We also give very fast implementations for these models which solve large instances, with n = 2000, in one hour on a PC, even for a two-sided matching market. A number of new ideas were needed, beyond the standard methods, to obtain these implementations.
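To illustrate the convex-programming viewpoint, here is a minimal sketch (a toy instance of ours, far from the paper's large-scale solvers): the one-sided Fisher model with linear utilities maximizes ∑_i log u_i(x) over doubly stochastic allocations x, and Frank-Wolfe needs only a linear oracle over the Birkhoff polytope, which for tiny n can be a brute-force search over permutations.

```python
import math
from itertools import permutations

def nash_bargaining_matching(U, iters=100):
    """Frank-Wolfe sketch for a one-sided Fisher matching market with linear
    utilities: maximize sum_i log(sum_j x[i][j]*U[i][j]) over doubly
    stochastic x. The linear oracle over the Birkhoff polytope is a
    brute-force search over permutation matrices (tiny n only)."""
    n = len(U)
    x = [[1.0 / n] * n for _ in range(n)]  # start at the uniform allocation
    for t in range(iters):
        utils = [sum(x[i][j] * U[i][j] for j in range(n)) for i in range(n)]
        grad = [[U[i][j] / utils[i] for j in range(n)] for i in range(n)]
        # linear maximization over doubly stochastic matrices is attained
        # at a permutation matrix
        best = max(permutations(range(n)),
                   key=lambda p: sum(grad[i][p[i]] for i in range(n)))
        step = 2.0 / (t + 2)
        for i in range(n):
            for j in range(n):
                vertex = 1.0 if best[i] == j else 0.0
                x[i][j] = (1 - step) * x[i][j] + step * vertex
    utils = [sum(x[i][j] * U[i][j] for j in range(n)) for i in range(n)]
    return x, sum(math.log(u) for u in utils)

# Two agents, two items; each agent values its own item twice as much.
x, obj = nash_bargaining_matching([[2, 1], [1, 2]])
assert abs(obj - 2 * math.log(2)) < 1e-9  # each agent gets its favorite item
```

The paper's implementations replace the brute-force oracle with combinatorial matching routines and add cutting planes, which is what makes n = 2000 feasible.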
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.86/LIPIcs.ITCS.2022.86.pdf
Matching-based market design
Nash bargaining
convex optimization
Frank-Wolfe algorithm
cutting planes
general equilibrium theory
one-sided markets
two-sided markets
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
87:1
87:21
10.4230/LIPIcs.ITCS.2022.87
article
Symbolic Determinant Identity Testing and Non-Commutative Ranks of Matrix Lie Algebras
Ivanyos, Gábor
1
https://orcid.org/0000-0003-3826-1735
Mittal, Tushant
2
https://orcid.org/0000-0002-4017-2662
Qiao, Youming
3
https://orcid.org/0000-0003-4334-1449
Institute for Computer Science and Control, Eötvös Loránd Research Network (ELKH), Budapest, Hungary
Department of Computer Science, University of Chicago, IL, USA
Centre for Quantum Software and Information, University of Technology Sydney, Australia
One approach to make progress on the symbolic determinant identity testing (SDIT) problem is to study the structure of singular matrix spaces. After settling the non-commutative rank problem (Garg-Gurvits-Oliveira-Wigderson, Found. Comput. Math. 2020; Ivanyos-Qiao-Subrahmanyam, Comput. Complex. 2018), a natural next step is to understand singular matrix spaces whose non-commutative rank is full. At present, examples of such matrix spaces are mostly sporadic, so it is desirable to discover them in a more systematic way.
In this paper, we take a step in this direction by studying the family of matrix spaces that are closed under the commutator operation, that is, matrix Lie algebras. On the one hand, we demonstrate that matrix Lie algebras over the complex number field give rise to singular matrix spaces with full non-commutative ranks. On the other hand, we show that SDIT for such spaces can be decided in deterministic polynomial time. Moreover, we characterize the matrix Lie algebras that yield matrix spaces possessing singularity certificates as studied by Lovász (B. Braz. Math. Soc., 1989) and Raz and Wigderson (Building Bridges II, 2019).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.87/LIPIcs.ITCS.2022.87.pdf
derandomization
polynomial identity testing
symbolic determinant
non-commutative rank
Lie algebras
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
88:1
88:21
10.4230/LIPIcs.ITCS.2022.88
article
Explicit Abelian Lifts and Quantum LDPC Codes
Jeronimo, Fernando Granha
1
Mittal, Tushant
2
https://orcid.org/0000-0002-4017-2662
O'Donnell, Ryan
3
Paredes, Pedro
3
Tulsiani, Madhur
4
Institute for Advanced Study, Princeton, NJ, USA
Department of Computer Science, University of Chicago, IL, USA
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
Toyota Technological Institute at Chicago, IL, USA
For an abelian group H acting on the set [𝓁], an (H,𝓁)-lift of a graph G₀ is a graph obtained by replacing each vertex by 𝓁 copies, and each edge by a matching corresponding to the action of an element of H.
Expander graphs obtained via abelian lifts form a key ingredient in the recent breakthrough constructions of quantum LDPC codes: (implicitly) in the fiber bundle codes by Hastings, Haah and O'Donnell [STOC 2021] achieving distance Ω̃(N^{3/5}), and in those by Panteleev and Kalachev [IEEE Trans. Inf. Theory 2021] of distance Ω(N/log(N)). However, both of these constructions are non-explicit. In particular, the latter relies on a randomized construction of expander graphs via abelian lifts by Agarwal et al. [SIAM J. Discrete Math 2019].
In this work, we show the following explicit constructions of expanders obtained via abelian lifts. For every (transitive) abelian group H ⩽ Sym(𝓁), constant degree d ≥ 3 and ε > 0, we construct explicit d-regular expander graphs G obtained from an (H,𝓁)-lift of a (suitable) base n-vertex expander G₀ with the following parameters:
i) λ(G) ≤ 2√{d-1} + ε, for any lift size 𝓁 ≤ 2^{n^{δ}} where δ = δ(d,ε),
ii) λ(G) ≤ ε ⋅ d, for any lift size 𝓁 ≤ 2^{n^{δ₀}} for a fixed δ₀ > 0, when d ≥ d₀(ε), or
iii) λ(G) ≤ Õ(√d), for lift size "exactly" 𝓁 = 2^{Θ(n)}. As corollaries, we obtain explicit quantum lifted product codes of Panteleev and Kalachev of almost linear distance (and also in a wide range of parameters) and explicit classical quasi-cyclic LDPC codes with a wide range of circulant sizes.
Items (i) and (ii) above are obtained by extending the techniques of Mohanty, O'Donnell and Paredes [STOC 2020] for 2-lifts to much larger abelian lift sizes (as a byproduct simplifying their construction). This is done by providing a new encoding of special walks arising in the trace power method, carefully "compressing" depth-first search traversals. Result (iii) is via a simpler proof of Agarwal et al. [SIAM J. Discrete Math 2019] at the expense of polylog factors in the expansion.
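The (H,𝓁)-lift defined above can be sketched concretely for the cyclic group H = ℤ_𝓁; the paper handles general transitive abelian groups, and the edge labelling below is arbitrary, chosen only for illustration:

```python
def cyclic_lift(base_edges, labels, ell):
    """(Z_ell, ell)-lift of a base graph: each vertex u becomes ell
    copies (u, 0), ..., (u, ell-1), and an oriented base edge (u, v)
    labelled h in Z_ell becomes the perfect matching
    {((u, i), (v, (i + h) mod ell)) : i = 0, ..., ell-1}."""
    lifted = []
    for (u, v) in base_edges:
        h = labels[(u, v)]
        for i in range(ell):
            lifted.append(((u, i), (v, (i + h) % ell)))
    return lifted

# Example: lift a labelled triangle by Z_4; every base edge
# contributes one matching of size 4, so the lift has 12 edges.
edges = [(0, 1), (1, 2), (2, 0)]
lift = cyclic_lift(edges, {e: 1 for e in edges}, 4)
```

Each base edge expands into a matching determined by one group element, which is exactly the structure the expansion bounds (i)-(iii) are proved for.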
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.88/LIPIcs.ITCS.2022.88.pdf
Graph lifts
expander graphs
quasi-cyclic LDPC codes
quantum LDPC codes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
89:1
89:21
10.4230/LIPIcs.ITCS.2022.89
article
Almost-Orthogonal Bases for Inner Product Polynomials
Jones, Chris
1
Potechin, Aaron
1
University of Chicago, IL, USA
In this paper, we consider low-degree polynomials of inner products between a collection of random vectors. We give an almost orthogonal basis for this vector space of polynomials when the random vectors are Gaussian, spherical, or Boolean. In all three cases, our basis admits an interesting combinatorial description based on the topology of the underlying graph of inner products.
We also analyze the expected value of the product of two polynomials in our basis. In all three cases, we show that this expected value can be expressed in terms of collections of matchings on the underlying graph of inner products. In the Gaussian and Boolean cases, we show that this expected value is always non-negative. In the spherical case, we show that this expected value can be negative but we conjecture that if the underlying graph of inner products is planar then this expected value will always be non-negative.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.89/LIPIcs.ITCS.2022.89.pdf
Orthogonal polynomials
Fourier analysis
combinatorics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
90:1
90:25
10.4230/LIPIcs.ITCS.2022.90
article
Sublinear-Time Computation in the Presence of Online Erasures
Kalemaj, Iden
1
https://orcid.org/0000-0002-0995-6346
Raskhodnikova, Sofya
1
https://orcid.org/0000-0002-4902-050X
Varma, Nithin
2
https://orcid.org/0000-0002-1211-2566
Department of Computer Science, Boston University, MA, USA
Chennai Mathematical Institute, India
We initiate the study of sublinear-time algorithms that access their input via an online adversarial erasure oracle. After answering each query to the input object, such an oracle can erase t input values. Our goal is to understand the complexity of basic computational tasks in extremely adversarial situations, where the algorithm’s access to data is blocked during the execution of the algorithm in response to its actions. Specifically, we focus on property testing in the model with online erasures. We show that two fundamental properties of functions, linearity and quadraticity, can be tested for constant t with asymptotically the same complexity as in the standard property testing model. For linearity testing, we prove tight bounds in terms of t, showing that the query complexity is Θ(log t). In contrast to linearity and quadraticity, some other properties, including sortedness and the Lipschitz property of sequences, cannot be tested at all, even for t = 1. Our investigation leads to a deeper understanding of the structure of violations of linearity and other widely studied properties.
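For background, the classical linearity test of Blum, Luby and Rubinfeld underlies this setting; a minimal sketch — without modelling the online erasure oracle, which is the paper's contribution — is:

```python
import random

def blr_test(f, n, trials=100, rng=random.Random(0)):
    """Classical Blum-Luby-Rubinfeld linearity test over F_2^n:
    sample x, y uniformly and check f(x) XOR f(y) == f(x XOR y).
    (The paper's model additionally lets an adversary erase t input
    values after each query; that oracle is not modelled here.)"""
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False   # found a witness of non-linearity
    return True

# A parity function x -> <a, x> mod 2 is linear and always accepted.
a = 0b1011
parity = lambda x: bin(a & x).count("1") % 2
```

In the online-erasure model, the adversary may blank the very points f(x), f(y), f(x ⊕ y) the tester plans to combine, which is why the erasure-resilient tests in the paper need a different query structure.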
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.90/LIPIcs.ITCS.2022.90.pdf
Randomized algorithms
property testing
Fourier analysis
linear functions
quadratic functions
Lipschitz and monotone functions
sorted sequences
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
91:1
91:19
10.4230/LIPIcs.ITCS.2022.91
article
Noisy Boolean Hidden Matching with Applications
Kapralov, Michael
1
Musipatla, Amulya
2
Tardos, Jakab
1
Woodruff, David P.
2
Zhou, Samson
2
https://orcid.org/0000-0001-8288-5698
EPFL, Lausanne, Switzerland
Carnegie Mellon University, Pittsburgh, PA, USA
The Boolean Hidden Matching (BHM) problem, introduced in a seminal paper of Gavinsky et al. [STOC'07], has played an important role in lower bounds for graph problems in the streaming model (e.g., subgraph counting, maximum matching, MAX-CUT, Schatten p-norm approximation). The BHM problem typically leads to Ω(√n) space lower bounds for constant factor approximations, with the reductions generating graphs that consist of connected components of constant size. The related Boolean Hidden Hypermatching (BHH) problem provides Ω(n^{1-1/t}) lower bounds for 1+O(1/t) approximation, for integers t ≥ 2. The corresponding reductions produce graphs with connected components of diameter about t, and essentially show that long range exploration is hard in the streaming model with an adversarial order of updates.
In this paper we introduce a natural variant of the BHM problem, called noisy BHM (and its natural noisy BHH variant), that we use to obtain stronger than Ω(√n) lower bounds for approximating a number of the aforementioned problems in graph streams when the input graphs consist only of components of diameter bounded by a fixed constant.
We next introduce and study the graph classification problem, where the task is to test whether the input graph is isomorphic to a given graph. As a first step, we use the noisy BHM problem to show that the problem of classifying whether an underlying graph is isomorphic to a complete binary tree in insertion-only streams requires Ω(n) space, which seems challenging to show using either BHM or BHH.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.91/LIPIcs.ITCS.2022.91.pdf
Boolean Hidden Matching
Lower Bounds
Communication Complexity
Streaming Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
92:1
92:17
10.4230/LIPIcs.ITCS.2022.92
article
On Fairness and Stability in Two-Sided Matchings
Karni, Gili
1
Rothblum, Guy N.
1
Yona, Gal
1
Weizmann Institute of Science, Rehovot, Israel
There are growing concerns that algorithms, which increasingly make or influence important decisions pertaining to individuals, might produce outcomes that discriminate against protected groups. We study such fairness concerns in the context of a two-sided market, where there are two sets of agents, and each agent has preferences over the other set. The goal is to produce a matching between the sets. Throughout this work, we use the example of matching medical residents (whom we call "doctors") to hospitals. This setting has been the focus of a rich body of work. The seminal work of Gale and Shapley formulated a stability desideratum, and showed that a stable matching always exists and can be found in polynomial time.
With fairness concerns in mind, it is natural to ask: might a stable matching be discriminatory towards some of the doctors? How can we obtain a fair matching? The question is interesting both when hospital preferences might be discriminatory, and also when each hospital’s preferences are fair.
We study this question through the lens of metric-based fairness notions (Dwork et al. [ITCS 2012] and Kim et al. [ITCS 2020]). We formulate appropriate definitions of fairness and stability in the presence of a similarity metric, and ask: does a fair and stable matching always exist? Can such a matching be found in polynomial time? Can classical Gale-Shapley algorithms find such a matching? Our contributions are as follows:
- Composition failures for classical algorithms. We show that composing the Gale-Shapley algorithm with fair hospital preferences can produce blatantly unfair outcomes.
- New algorithms for finding fair and stable matchings. Our main technical contributions are efficient new algorithms for finding fair and stable matchings when: (i) the hospitals' preferences are fair, and (ii) the fairness metric satisfies a strong "proto-metric" condition: the distance between every two doctors is either zero or one. In particular, these algorithms also show that, in this setting, fairness and stability are compatible.
- Barriers for finding fair and stable matchings in the general case. We show that if the hospital preferences can be unfair, or if the metric fails to satisfy the proto-metric condition, then no algorithm in a natural class can find a fair and stable matching. The natural class includes the classical Gale-Shapley algorithms and our new algorithms.
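For reference, the classical doctor-proposing Gale-Shapley (deferred-acceptance) algorithm against which the composition failures above are exhibited can be sketched as follows; this is the textbook algorithm with one position per hospital, not the paper's fairness-aware algorithms, and the names are illustrative:

```python
def gale_shapley(doctor_prefs, hospital_prefs):
    """Doctor-proposing deferred acceptance with one position per
    hospital. Assumes complete preference lists and at least as many
    hospitals as doctors. Returns a stable matching doctor -> hospital."""
    rank = {h: {d: i for i, d in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    next_choice = {d: 0 for d in doctor_prefs}   # next hospital to try
    held = {}                                    # hospital -> doctor
    free = list(doctor_prefs)
    while free:
        d = free.pop()
        h = doctor_prefs[d][next_choice[d]]
        next_choice[d] += 1
        if h not in held:
            held[h] = d
        elif rank[h][d] < rank[h][held[h]]:      # h prefers d; bump holder
            free.append(held[h])
            held[h] = d
        else:
            free.append(d)                       # d rejected, tries again
    return {d: h for h, d in held.items()}
```

With two doctors a, b both preferring hospital X, and X preferring b, the algorithm outputs the stable matching {a: Y, b: X}.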
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.92/LIPIcs.ITCS.2022.92.pdf
algorithmic fairness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
93:1
93:23
10.4230/LIPIcs.ITCS.2022.93
article
Optimal Bounds for Dominating Set in Graph Streams
Khanna, Sanjeev
1
Konrad, Christian
2
https://orcid.org/0000-0003-1802-4011
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, US
Department of Computer Science, University of Bristol, UK
We resolve the space complexity of one-pass streaming algorithms for Minimum Dominating Set (MDS) in both insertion-only and insertion-deletion streams (up to poly-logarithmic factors), where an input graph is revealed by a sequence of edge updates. Recently, streaming algorithms for the related Set Cover problem have received significant attention. Even though MDS can be viewed as a special case of Set Cover, it is harder to solve in the streaming setting, since the input stream consists of individual edges rather than entire vertex neighborhoods, as is the case in Set Cover.
We prove the following results (n is the number of vertices of the input graph):
1) In insertion-only streams, we give a one-pass semi-streaming algorithm (meaning Õ(n) space) with approximation factor Õ(√n). We also prove that every one-pass streaming algorithm with space o(n) has an approximation factor of Ω(n/log n).
Combined with a result by [Assadi et al., STOC'16] for Set Cover which, translated to MDS, shows that space Θ̃(n² / α) is necessary and sufficient for computing an α-approximation for every α = o(√n), this completely settles the space requirements for MDS in the insertion-only setting.
2) In insertion-deletion streams, we prove that space Ω(n² / (α log n)) is necessary for every approximation factor α ≤ Θ(n / log³ n). Combined with the Set Cover algorithm of [Assadi et al., STOC'16], which can be adapted to MDS even in the insertion-deletion setting to give an α-approximation in Õ(n² / α) space, this completely settles the space requirements for MDS in the insertion-deletion setting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.93/LIPIcs.ITCS.2022.93.pdf
Streaming algorithms
communication complexity
information complexity
dominating set
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
94:1
94:21
10.4230/LIPIcs.ITCS.2022.94
article
Deterministic Dynamic Matching in Worst-Case Update Time
Kiss, Peter
1
Department of Computer Science, University of Warwick, Coventry, UK
We present deterministic algorithms for maintaining a (3/2 + ε)-approximate and a (2 + ε)-approximate maximum matching in a fully dynamic graph with worst-case update times Ô(√n) and Õ(1), respectively. The fastest known deterministic worst-case update time algorithms for achieving approximation ratios (2 - δ) (for any δ > 0) and (2 + ε) were both shown by Roghani et al. [arXiv'2021], with update times O(n^{3/4}) and O_ε(√n) respectively. We close the gap between worst-case and amortized algorithms for the two approximation ratios, as the best deterministic amortized update times for the problem are O_ε(√n) and Õ(1), shown by Bernstein and Stein [SODA'2021] and Bhattacharya and Kiss [ICALP'2021] respectively.
The algorithm achieving (3/2 + ε)-approximation builds on the EDCS concept introduced in the influential paper of Bernstein and Stein [ICALP'2015]. Say that H is an (α, δ)-approximate matching sparsifier if at all times H satisfies μ(H) ⋅ α + δ ⋅ n ≥ μ(G) (define (α, δ)-approximation similarly for matchings). We show how to maintain a locally damaged version of the EDCS which is a (3/2 + ε, δ)-approximate matching sparsifier. We further show how to reduce the maintenance of an α-approximate maximum matching to the maintenance of an (α, δ)-approximate maximum matching, based on an observation of Assadi et al. [EC'2016]. Our reduction incurs an update time blow-up of Ô(1) or Õ(1) and is deterministic or randomized against an adaptive adversary, respectively.
To achieve (2 + ε)-approximation we improve on the update time guarantee of an algorithm of Bhattacharya and Kiss [ICALP'2021]. In order to achieve both results, we explicitly state a method implicitly used in Nanongkai and Saranurak [STOC'2017] and Bernstein et al. [arXiv'2020] which allows one to transform dynamic algorithms capable of processing the input in batches into dynamic algorithms with worst-case update time.
Independent Work: Independently and concurrently with our work, Grandoni et al. [arXiv'2021] have presented a fully dynamic algorithm for maintaining a (3/2 + ε)-approximate maximum matching with deterministic worst-case update time O_ε(√n).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.94/LIPIcs.ITCS.2022.94.pdf
Dynamic Algorithms
Matching
Approximate Matching
EDCS
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
95:1
95:20
10.4230/LIPIcs.ITCS.2022.95
article
More Dominantly Truthful Multi-Task Peer Prediction with a Finite Number of Tasks
Kong, Yuqing
1
https://orcid.org/0000-0002-5901-3004
The Center on Frontiers of Computing Studies, Peking University, Beijing, China
In the setting where we ask participants multiple similar, possibly subjective multi-choice questions (e.g. Do you like Bulbasaur? Y/N; do you like Squirtle? Y/N), peer prediction aims to design mechanisms that encourage honest feedback without verification. A series of works have successfully designed multi-task peer prediction mechanisms where reporting truthfully is better than any other strategy (dominantly truthful), but they require an infinite number of tasks. A recent work proposes the first multi-task peer prediction mechanism, the Determinant Mutual Information (DMI)-Mechanism, which is not only dominantly truthful but also works for a finite number of tasks (practical).
However, the existence of other practical dominantly-truthful multi-task peer prediction mechanisms remains an open question. This work answers the above question by providing
- a new family of information-monotone information measures: volume mutual information (VMI), where DMI is a special case;
- a new family of practical dominantly-truthful multi-task peer prediction mechanisms, VMI-Mechanisms.
To illustrate the importance of VMI-Mechanisms, we also provide a tractable effort incentive optimization goal. We show that DMI-Mechanism may not be optimal, but we can construct a sequence of VMI-Mechanisms that are approximately optimal.
The main technical highlight in this paper is a novel geometric information measure, Volume Mutual Information, based on a simple idea: we can measure an object A’s information amount by the number of objects that are less informative than A. Different densities over the objects lead to different information measures. This also gives Determinant Mutual Information a simple geometric interpretation.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.95/LIPIcs.ITCS.2022.95.pdf
Information elicitation
information theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
96:1
96:24
10.4230/LIPIcs.ITCS.2022.96
article
Dynamic Matching Algorithms Under Vertex Updates
Le, Hung
1
Milenković, Lazar
2
Solomon, Shay
2
Vassilevska Williams, Virginia
3
University of Massachusetts, Amherst, MA, USA
Tel Aviv University, Israel
MIT, Cambridge, MA, USA
Dynamic graph matching algorithms have been extensively studied, but mostly under edge updates. This paper concerns dynamic matching algorithms under vertex updates, where in each update step a single vertex is either inserted or deleted along with its incident edges.
A basic setting arising in online algorithms and studied by Bosek et al. [FOCS'14] and Bernstein et al. [SODA'18] is that of dynamic approximate maximum cardinality matching (MCM) in bipartite graphs in which one side is fixed and vertices on the other side either arrive or depart via vertex updates. In the BASIC-incremental setting, vertices only arrive, while in the BASIC-decremental setting vertices only depart. When vertices can both arrive and depart, we have the BASIC-dynamic setting. In this paper we also consider the setting in which both sides of the bipartite graph are dynamic. We call this the MEDIUM-dynamic setting, and MEDIUM-decremental is the restriction when vertices can only depart. The GENERAL-dynamic setting is when the graph is not necessarily bipartite and the vertices can both depart and arrive.
Denote by K the total number of edges inserted into and deleted from the graph throughout the entire update sequence. A well-studied measure, the recourse of a dynamic matching algorithm, is the number of changes made to the matching per update step. We largely focus on Maximal Matching (MM), which is a 2-approximation to the MCM. Our main results are as follows.
- In the BASIC-dynamic setting, there is a straightforward algorithm for maintaining an MM, with a total runtime of O(K) and constant worst-case recourse. In fact, this algorithm never removes an edge from the matching; we refer to such an algorithm as irrevocable.
- For the MEDIUM-dynamic setting we give a strong conditional lower bound that even holds in the MEDIUM-decremental setting: if for any fixed η > 0, there is an irrevocable decremental MM algorithm with a total runtime of O(K ⋅ n^{1-η}), this would refute the OMv conjecture; a similar (but weaker) hardness result can be achieved via a reduction from the Triangle Detection conjecture.
- Next, we consider the GENERAL-dynamic setting, and design an MM algorithm with a total runtime of O(K) and constant worst-case recourse. We achieve this result via a 1-revocable algorithm, which may remove just one edge per update step. As argued above, an irrevocable algorithm with such a runtime is not likely to exist.
- Finally, back to the BASIC-dynamic setting, we present an algorithm with a total runtime of O(K), which provides an (e/(e-1))-approximation to the MCM.
To this end, we build on the classic "ranking" online algorithm by Karp et al. [STOC'90]. Beyond these results, our work draws connections between the areas of dynamic graph algorithms and online algorithms, and it proposes several open questions that seem to have been overlooked thus far.
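The irrevocable algorithm of the first result is, restricted to arrivals only, essentially greedy maximal matching; a minimal sketch — the full BASIC-dynamic setting also handles departures on the dynamic side — is:

```python
def incremental_maximal_matching(arrivals):
    """Irrevocable maximal matching under vertex arrivals only:
    arrivals is a list of (vertex, neighbors already present). An
    arriving vertex is matched to the first currently unmatched
    neighbor, and matched edges are never removed, so the recourse
    per update step is constant."""
    matched = {}  # vertex -> current partner
    for v, neighbors in arrivals:
        for u in neighbors:
            if u not in matched:
                matched[u] = v
                matched[v] = u
                break
    return matched
```

Every edge has at least one matched endpoint by the end of the sequence (when its later endpoint arrived, either the edge was taken or the earlier endpoint was already matched), so the output is maximal.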
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.96/LIPIcs.ITCS.2022.96.pdf
maximal matching
approximate matching
dynamic algorithm
vertex updates
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
97:1
97:23
10.4230/LIPIcs.ITCS.2022.97
article
Quantum Meets Fine-Grained Complexity: Sublinear Time Quantum Algorithms for String Problems
Le Gall, François
1
Seddighin, Saeed
2
Nagoya University, Japan
Toyota Technological Institute at Chicago, IL, USA
Longest common substring (LCS), longest palindrome substring (LPS), and Ulam distance (UL) are three fundamental string problems that can be classically solved in near linear time. In this work, we present sublinear time quantum algorithms for these problems along with quantum lower bounds. Our results shed light on a very surprising fact: Although the classic solutions for LCS and LPS are almost identical (via suffix trees), their quantum computational complexities are different. While we give an exact Õ(√n) time algorithm for LPS, we prove that LCS needs at least Ω̃(n^{2/3}) time, even for 0/1 strings.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.97/LIPIcs.ITCS.2022.97.pdf
Longest common substring
Longest palindrome substring
Quantum algorithms
Sublinear algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
98:1
98:21
10.4230/LIPIcs.ITCS.2022.98
article
Optimal Sub-Gaussian Mean Estimation in Very High Dimensions
Lee, Jasper C.H.
1
Valiant, Paul
2
University of Wisconsin-Madison, WI, USA
Purdue University, West Lafayette, IN, USA
We address the problem of mean estimation in very high dimensions, in the high probability regime parameterized by failure probability δ. For a distribution with covariance Σ, let its "effective dimension" be d_eff = Tr(Σ)/λ_{max}(Σ). For the regime where d_eff = ω(log^2(1/δ)), we give the first algorithm whose sample complexity is optimal to within a 1+o(1) factor. The algorithm has a surprisingly simple structure: 1) re-center the samples using a known sub-Gaussian estimator, 2) carefully choose an easy-to-compute positive integer t and remove the t samples farthest from the origin, and 3) return the sample mean of the remaining samples. The core of the analysis relies on a novel vector Bernstein-type tail bound, showing that under general conditions, the sample mean of a bounded high-dimensional distribution is highly concentrated around a spherical shell.
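Steps 2 and 3 of the algorithm amount to a norm-based trimmed mean; a minimal sketch, with the re-centering of step 1 assumed done and t left as a free parameter rather than the paper's δ-dependent choice, is:

```python
def trimmed_mean(samples, t):
    """Steps 2-3 of the estimator: drop the t samples farthest from
    the origin (the re-centering of step 1 is assumed already applied)
    and return the coordinate-wise mean of the rest. The paper picks t
    from delta; here it is a free parameter."""
    kept = sorted(samples, key=lambda x: sum(c * c for c in x))
    kept = kept[:len(samples) - t]           # discard t largest norms
    dim = len(samples[0])
    return [sum(x[i] for x in kept) / len(kept) for i in range(dim)]
```

On a cluster of points near (1, 1) with one far outlier, choosing t = 1 removes the outlier and recovers the cluster mean.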
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.98/LIPIcs.ITCS.2022.98.pdf
High-dimensional mean estimation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
99:1
99:18
10.4230/LIPIcs.ITCS.2022.99
article
Double Coverage with Machine-Learned Advice
Lindermayr, Alexander
1
https://orcid.org/0000-0001-6714-5034
Megow, Nicole
1
https://orcid.org/0000-0002-3531-7644
Simon, Bertrand
2
https://orcid.org/0000-0002-2565-1163
Faculty of Mathematics and Computer Science, University of Bremen, Germany
IN2P3 Computing Center, CNRS, Villeurbanne, France
We study the fundamental online k-server problem in a learning-augmented setting. While in the traditional online model an algorithm has no information about the request sequence, we assume that the algorithm is given some advice (e.g., machine-learned predictions) on its decisions. There is, however, no guarantee on the quality of the prediction, and it might be far from correct.
Our main result is a learning-augmented variation of the well-known Double Coverage algorithm for k-server on the line (Chrobak et al., SIDMA 1991), in which we integrate predictions as well as our trust in their quality. We give an error-dependent competitive ratio, which is a function of a user-defined confidence parameter and which interpolates smoothly between optimal consistency (the performance in case all predictions are correct) and the best-possible robustness regardless of the prediction quality. When given good predictions, we improve upon known lower bounds for online algorithms without advice. We further show that, for any k, our algorithm achieves an almost optimal consistency-robustness tradeoff within a class of deterministic algorithms respecting local and memoryless properties.
Our algorithm outperforms a previously proposed (more general) learning-augmented algorithm. It is remarkable that the previous algorithm crucially exploits memory, whereas our algorithm is memoryless. Finally, we demonstrate in experiments the practicability and the superior performance of our algorithm on real-world data.
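For reference, the classical prediction-free Double Coverage algorithm on the line, which the paper augments with advice, can be sketched as follows (a request outside the servers' hull is served by the nearest server; a request between two adjacent servers moves both toward it at equal speed until one arrives):

```python
def double_coverage(servers, requests):
    """Classical Double Coverage for k-server on the line.
    servers: initial positions; requests: points to serve in order.
    Returns the total movement cost incurred by the algorithm."""
    pos = sorted(servers)
    cost = 0.0
    for r in requests:
        if r <= pos[0]:                # left of all servers
            cost += pos[0] - r
            pos[0] = r
        elif r >= pos[-1]:             # right of all servers
            cost += r - pos[-1]
            pos[-1] = r
        else:                          # between two adjacent servers
            i = max(j for j in range(len(pos) - 1) if pos[j] <= r)
            d = min(r - pos[i], pos[i + 1] - r)
            pos[i] += d                # the nearer server ends at r
            pos[i + 1] -= d
            cost += 2 * d
    return cost
```

With servers at 0 and 10 and a request at 4, both servers move 4 units (total cost 8) and the left server serves the request; this double movement is exactly what the learning-augmented variant modulates using predictions.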
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.99/LIPIcs.ITCS.2022.99.pdf
online k-server problem
competitive analysis
learning-augmented algorithms
untrusted predictions
consistency
robustness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
100:1
100:11
10.4230/LIPIcs.ITCS.2022.100
article
Beating Classical Impossibility of Position Verification
Liu, Jiahui
1
Liu, Qipeng
2
Qian, Luowen
3
Department of Computer Science, University of Texas at Austin, TX, USA
Simons Institute for the Theory of Computing, Berkeley, CA, USA
Department of Computer Science, Boston University, MA, USA
Chandran et al. (SIAM J. Comput. '14) formally introduced the cryptographic task of position verification, where they also showed that it cannot be achieved by classical protocols. In this work, we initiate the study of position verification protocols with classical verifiers. We identify that proofs of quantumness (and thus computational assumptions) are necessary for such position verification protocols. For the other direction, we adapt the proof of quantumness protocol by Brakerski et al. (FOCS '18) to instantiate such a position verification protocol. As a result, we achieve classically verifiable position verification assuming the quantum hardness of Learning with Errors.
Along the way, we develop the notion of 1-of-2 non-local soundness for a natural non-local game for 1-of-2 puzzles, first introduced by Radian and Sattath (AFT '19), which can be viewed as a computational unclonability property. We show that 1-of-2 non-local soundness follows from the standard 2-of-2 soundness (and therefore the adaptive hardcore bit property), which could be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.100/LIPIcs.ITCS.2022.100.pdf
cryptographic protocol
position verification
quantum cryptography
proof of quantumness
non-locality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
101:1
101:10
10.4230/LIPIcs.ITCS.2022.101
article
A Gaussian Fixed Point Random Walk
Liu, Yang P.
1
Sah, Ashwin
2
Sawhney, Mehtaab
2
Department of Mathematics, Stanford University, Stanford, CA, USA
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA
In this note, we design a discrete random walk on the real line which takes steps 0,±1 (and one with steps in {±1,2}) where at least 96% of the signs are ±1 in expectation, and which has 𝒩(0,1) as a stationary distribution. As an immediate corollary, we obtain an online version of Banaszczyk’s discrepancy result for partial colorings and ±1,2 signings. Additionally, we recover linear time algorithms for logarithmic bounds for the Komlós conjecture in an oblivious online setting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.101/LIPIcs.ITCS.2022.101.pdf
Discrepancy
Partial Coloring
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
102:1
102:16
10.4230/LIPIcs.ITCS.2022.102
article
Correlation-Intractable Hash Functions via Shift-Hiding
Lombardi, Alex
1
Vaikuntanathan, Vinod
1
Massachusetts Institute of Technology, Cambridge, MA, USA
A hash function family ℋ is correlation intractable for a t-input relation ℛ if, given a random function h chosen from ℋ, it is hard to find x_1,…,x_t such that ℛ(x_1,…,x_t,h(x_1),…,h(x_t)) is true. Among other applications, such hash functions are a crucial tool for instantiating the Fiat-Shamir heuristic in the plain model, including the only known NIZK for NP based on the learning with errors (LWE) problem (Peikert and Shiehian, CRYPTO 2019).
We give a conceptually simple and generic construction of single-input CI hash functions from shift-hiding shiftable functions (Peikert and Shiehian, PKC 2018) satisfying an additional one-wayness property. This results in a clean abstract framework for instantiating CI, and also shows that a previously existing function family (PKC 2018) was already CI under the LWE assumption.
In addition, our framework transparently generalizes to other settings, yielding new results:
- We show how to instantiate certain forms of multi-input CI under the LWE assumption. Prior constructions either relied on a very strong "brute-force-is-best" type of hardness assumption (Holmgren and Lombardi, FOCS 2018) or were restricted to "output-only" relations (Zhandry, CRYPTO 2016).
- We construct single-input CI hash functions from indistinguishability obfuscation (iO) and one-way permutations. Prior constructions relied essentially on variants of fully homomorphic encryption that are impossible to construct from such primitives. This result also generalizes to more expressive variants of multi-input CI under iO and additional standard assumptions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.102/LIPIcs.ITCS.2022.102.pdf
Cryptographic hash functions
correlation intractability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
103:1
103:23
10.4230/LIPIcs.ITCS.2022.103
article
Balanced Allocations with Incomplete Information: The Power of Two Queries
Los, Dimitrios
1
Sauerwald, Thomas
1
https://orcid.org/0000-0002-0882-283X
Department of Computer Science & Technology, University of Cambridge, UK
We consider the allocation of m balls into n bins with incomplete information. In the classical Two-Choice process a ball first queries the load of two randomly chosen bins and is then placed in the least loaded bin. In our setting, each ball also samples two random bins but can only estimate a bin’s load by sending binary queries of the form "Is the load at least the median?" or "Is the load at least 100?".
For the lightly loaded case m = 𝒪(n), Feldheim and Gurel-Gurevich (2021) showed that with one query it is possible to achieve a maximum load of 𝒪(√{log n/log log n}), and they also posed the question of whether a maximum load of m/n+𝒪(√{log n/log log n}) is achievable for any m = Ω(n). In this work, we resolve this open problem by proving a lower bound of m/n+Ω(√{log n}) for a fixed m = Θ(n √{log n}), and a lower bound of m/n+Ω(log n/log log n) for some m that depends on the strategy used.
We complement this negative result by proving a positive result for multiple queries. In particular, we show that with only two binary queries per chosen bin, there is an oblivious strategy which ensures a maximum load of m/n+𝒪(√{log n}) for any m ≥ 1. Further, for any number of k = 𝒪(log log n) binary queries, the upper bound on the maximum load improves to m/n + 𝒪(k(log n)^{1/k}) for any m ≥ 1.
This result for k queries has several interesting consequences: (i) it implies new bounds for the (1+β)-process introduced by Peres, Talwar and Wieder (2015), (ii) it leads to new bounds for the graphical balanced allocation process on dense expander graphs, and (iii) it recovers and generalizes the bound of m/n+𝒪(log log n) on the maximum load achieved by the Two-Choice process, including the heavily loaded case m = Ω(n) which was derived in previous works by Berenbrink et al. (2006) as well as Talwar and Wieder (2014).
One novel aspect of our proofs is the use of multiple super-exponential potential functions, which might be of use in future work.
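As a toy illustration of the query model (not the paper's strategy; the median threshold below is a fixed heuristic and the parameters are arbitrary), one can simulate the classical Two-Choice process alongside a one-binary-query variant that only asks each sampled bin whether its load is at least the current median:

```python
import random
from statistics import median

def two_choice(m, n):
    """Classical Two-Choice: place each ball in the lighter of two sampled bins."""
    loads = [0] * n
    for _ in range(m):
        i, j = random.randrange(n), random.randrange(n)
        loads[i if loads[i] <= loads[j] else j] += 1
    return loads

def one_query(m, n):
    """Each ball asks both sampled bins 'is your load at least the median?'
    and prefers a bin that answers no (an illustrative threshold choice)."""
    loads = [0] * n
    for _ in range(m):
        thresh = median(loads)
        i, j = random.randrange(n), random.randrange(n)
        below = [b for b in (i, j) if loads[b] < thresh]
        b = random.choice(below) if below else random.choice([i, j])
        loads[b] += 1
    return loads
```

Comparing `max(loads) - m/n` across the two processes gives a feel for how much a single binary query recovers of the full-information gap.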
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.103/LIPIcs.ITCS.2022.103.pdf
power-of-two-choices
balanced allocations
potential functions
thinning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
104:1
104:24
10.4230/LIPIcs.ITCS.2022.104
article
Lifting with Sunflowers
Lovett, Shachar
1
Meka, Raghu
2
Mertz, Ian
3
Pitassi, Toniann
3
4
Zhang, Jiapeng
5
Department of Computer Science, University of California San Diego, CA, USA
Department of Computer Science, University of California Los Angeles, CA, USA
Department of Computer Science, University of Toronto, Canada
Department of Mathematics, Institute for Advanced Study, Princeton, NJ, USA
Department of Computer Science, University of Southern California, Los Angeles, CA, USA
Query-to-communication lifting theorems translate lower bounds on query complexity to lower bounds for the corresponding communication model. In this paper, we give a simplified proof of deterministic lifting (in both the tree-like and dag-like settings). Our proof uses elementary counting together with a novel connection to the sunflower lemma.
In addition to a simplified proof, our approach opens up a new avenue of attack towards proving lifting theorems with improved gadget size - one of the main challenges in the area. For one of the most widely used gadgets, the index gadget, existing lifting techniques are known to require at least a quadratic gadget size. Our new approach combined with robust sunflower lemmas allows us to reduce the gadget size to near linear. We conjecture that it can be further improved to polylogarithmic, similar to the known bounds for the corresponding robust sunflower lemmas.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.104/LIPIcs.ITCS.2022.104.pdf
Lifting theorems
communication complexity
combinatorics
sunflowers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
105:1
105:21
10.4230/LIPIcs.ITCS.2022.105
article
Interactive Communication in Bilateral Trade
Mao, Jieming
1
Paes Leme, Renato
1
Wang, Kangning
2
Google Research, New York, NY, USA
Duke University, Durham, NC, USA
We define a model of interactive communication where two agents with private types can exchange information before a game is played. The model contains Bayesian persuasion as a special case of a one-round communication protocol. We define message complexity as the minimum number of interactive rounds necessary to achieve the best possible outcome. Our main result is that for bilateral trade, agents don't stop talking until they reach an efficient outcome: either the agents achieve an efficient allocation in finitely many rounds of communication, or the optimal communication protocol has an infinite number of rounds. We also show an important class of bilateral trade settings in which an efficient allocation is achievable with a small number of rounds of communication.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.105/LIPIcs.ITCS.2022.105.pdf
Bayesian persuasion
bilateral trade
information design
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
106:1
106:20
10.4230/LIPIcs.ITCS.2022.106
article
Support Recovery in Universal One-Bit Compressed Sensing
Mazumdar, Arya
1
Pal, Soumyabrata
2
https://orcid.org/0000-0003-2949-3761
Halıcıoğlu Data Science Institute, University of California, San Diego, CA, USA
College of Information and Computer Sciences, University of Massachusetts Amherst, MA, USA
One-bit compressed sensing (1bCS) is an extremely quantized signal acquisition method that has been intermittently studied over the past decade. In 1bCS, linear samples of a high-dimensional signal are quantized to only one bit per sample (the sign of the measurement). The extreme quantization makes it an interesting case study of the more general single-index or generalized linear models. At the same time, it can also be thought of as a "design" version of learning a binary linear classifier or halfspace learning.
Assuming the original signal vector to be sparse, existing results in 1bCS either aim to find the support of the vector, or to approximate the signal within an ε-ball. The focus of this paper is support recovery, which often also facilitates approximate signal recovery computationally. A universal measurement matrix for 1bCS refers to a single set of measurements that works for all sparse signals. With universality, it is known that Θ̃(k²) 1bCS measurements are necessary and sufficient for support recovery (where k denotes the sparsity). In this work, we show that it is possible to universally recover the support with a small number of false positives with Õ(k^{3/2}) measurements. If the dynamic range of the signal vector is known, then with a different technique, this result can be improved to only Õ(k) measurements. Other results on universal but approximate support recovery are also provided in this paper. All of our main recovery algorithms are simple and polynomial-time.
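The paper's algorithms rely on specially designed universal measurement matrices; as a hedged illustration of the 1bCS measurement model itself, the sketch below takes sign measurements under a random Gaussian matrix and ranks coordinates by correlation with the sign vector (a generic heuristic for intuition, not the paper's support recovery algorithm):

```python
import random

def one_bit_measure(x, m, rng):
    """m one-bit (sign) measurements of x under a random Gaussian matrix."""
    A = [[rng.gauss(0, 1) for _ in x] for _ in range(m)]
    y = [1 if sum(a * v for a, v in zip(row, x)) >= 0 else -1 for row in A]
    return A, y

def support_estimate(A, y, k):
    """Rank coordinates by |correlation| of each column with the sign vector
    and return the top k as the estimated support."""
    n = len(A[0])
    score = [abs(sum(y[i] * A[i][j] for i in range(len(A)))) for j in range(n)]
    return set(sorted(range(n), key=lambda j: -score[j])[:k])
```

With enough measurements relative to the sparsity, the on-support columns correlate strongly with the sign vector while off-support columns average out.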
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.106/LIPIcs.ITCS.2022.106.pdf
Superset Recovery
Approximate Support Recovery
List union-free family
Descartes’ rule of signs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
107:1
107:28
10.4230/LIPIcs.ITCS.2022.107
article
Keep That Card in Mind: Card Guessing with Limited Memory
Menuhin, Boaz
1
Naor, Moni
1
Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel
A card guessing game is played between two players, Guesser and Dealer. At the beginning of the game, the Dealer holds a deck of n cards (labeled 1, ..., n). For n turns, the Dealer draws a card from the deck, the Guesser guesses which card was drawn, and then the card is discarded from the deck. The Guesser receives a point for each correctly guessed card.
With perfect memory, a Guesser can keep track of all cards that were played so far and pick at random a card that has not yet appeared, yielding ln n correct guesses in expectation, regardless of how the Dealer arranges the deck. With no memory, the best a Guesser can do is a single correct guess in expectation.
We consider the case of a memory bounded Guesser that has m < n memory bits. We show that the performance of such a memory bounded Guesser depends much on the behavior of the Dealer. In more detail, we show that there is a gap between the static case, where the Dealer draws cards from a properly shuffled deck or a prearranged one, and the adaptive case, where the Dealer draws cards thoughtfully, in an adversarial manner. Specifically:
1) We show a Guesser with O(log² n) memory bits that scores a near optimal result against any static Dealer.
2) We show that no Guesser with m bits of memory can score better than O(√m) correct guesses against a random Dealer; thus, no Guesser can score better than min{√m, ln n}, i.e., the above Guesser is optimal.
3) We show an efficient adaptive Dealer against which no Guesser with m memory bits can make more than ln m + 2 ln log n + O(1) correct guesses in expectation.
These results are (almost) tight, and we prove them using compression arguments that harness the guessing strategy for encoding.
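The perfect-memory baseline is easy to check numerically: guessing uniformly among unseen cards earns H_n = Σ_{k=1}^n 1/k ≈ ln n points in expectation against any fixed deck. A small simulation sketch (the parameters are arbitrary):

```python
import random

def play(n, rng):
    """One game of card guessing for a perfect-memory Guesser:
    guess uniformly among the cards not yet seen."""
    deck = list(range(1, n + 1))
    rng.shuffle(deck)
    unseen = set(deck)
    score = 0
    for card in deck:
        if rng.choice(tuple(unseen)) == card:
            score += 1
        unseen.discard(card)
    return score

def mean_score(n, trials, seed=0):
    """Monte Carlo estimate of the expected number of correct guesses."""
    rng = random.Random(seed)
    return sum(play(n, rng) for _ in range(trials)) / trials
```

On turn k from the end there are k unseen cards, so the guess is correct with probability 1/k; summing gives the harmonic number H_n.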
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.107/LIPIcs.ITCS.2022.107.pdf
Adaptivity vs Non-adaptivity
Adversarial Robustness
Card Guessing
Compression Argument
Information Theory
Streaming Algorithms
Two Player Game
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
108:1
108:22
10.4230/LIPIcs.ITCS.2022.108
article
A Spectral Approach to Polytope Diameter
Narayanan, Hariharan
1
Shah, Rikhav
2
Srivastava, Nikhil
2
Tata Institute of Fundamental Research, Mumbai, India
University of California Berkeley, CA, USA
We prove upper bounds on the graph diameters of polytopes in two settings. The first is a worst-case bound for integer polytopes in terms of the length of the description of the polytope (in bits) and the minimum angle between facets of its polar. The second is a smoothed analysis bound: given an appropriately normalized polytope, we add small Gaussian noise to each constraint. We consider a natural geometric measure on the vertices of the perturbed polytope (corresponding to the mean curvature measure of its polar) and show that with high probability there exists a "giant component" of vertices, with measure 1-o(1) and polynomial diameter. Both bounds rely on spectral gaps - of a certain Schrödinger operator in the first case, and a certain continuous time Markov chain in the second - which arise from the log-concavity of the volume of a simple polytope in terms of its slack variables.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.108/LIPIcs.ITCS.2022.108.pdf
Polytope diameter
Markov Chain
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
109:1
109:1
10.4230/LIPIcs.ITCS.2022.109
article
Geometric Bounds on the Fastest Mixing Markov Chain
Olesker-Taylor, Sam
1
Zanetti, Luca
1
University of Bath, UK
In the Fastest Mixing Markov Chain problem, we are given a graph G = (V, E) and desire the discrete-time Markov chain with smallest mixing time τ subject to having equilibrium distribution uniform on V and non-zero transition probabilities only across edges of the graph [Boyd et al., 2004].
It is well-known that the mixing time τ_RW of the lazy random walk on G is characterised by the edge conductance Φ of G via Cheeger’s inequality: Φ^{-1} ≲ τ_RW ≲ Φ^{-2} log|V|. Edge conductance, however, fails to characterise the fastest mixing time τ^⋆ of G. Take, for example, a graph consisting of two n-vertex cliques connected by a perfect matching: its edge conductance is Θ(1/n), while τ^⋆ can be shown to be O(log n). We show, instead, that it is possible to characterise the fastest mixing time τ^⋆ via a Cheeger-type inequality for a different geometric quantity, namely the vertex conductance Ψ of G: Ψ^{-1} ≲ τ^⋆ ≲ Ψ^{-2}(log|V|)². We prove this result by first relating vertex conductance to a new expansion measure, which we call matching conductance. We then relate matching conductance to a variational characterisation of τ^⋆ (or, more precisely, of the fastest relaxation time) due to Roch [Roch, 2005]. This is done by interpreting Roch’s characterisation as a particular instance of fractional vertex cover, which is dual to fractional matching. We believe matching conductance to be of independent interest, and it might have further applications in the study of connectivity properties of graphs.
This characterisation forbids fast mixing for graphs with small vertex conductance. To bypass this fundamental barrier, we consider Markov chains on G with equilibrium distribution which need not be uniform, but rather only ε-close to uniform in total variation. We call such chains almost mixing. We show that it is always possible to construct an almost mixing chain with mixing time τ ≲ ε^{-1} (diam G)² log |V|. Our proof is based on carefully constructing a reweighted spanning tree of G with good expansion properties and superimposing it over a simple "base" chain.
In summary, our work together with known results shows that three fundamental geometric quantities characterise the mixing time on a graph according to three different notions of mixing: edge conductance characterises the mixing time of the lazy random walk, vertex conductance the fastest mixing time, while the diameter characterises the almost mixing time.
Finally, we also discuss analogous questions for continuous-time and time-inhomogeneous chains.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.109/LIPIcs.ITCS.2022.109.pdf
mixing time
random walks
conductance
fastest mixing Markov chain
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
110:1
110:4
10.4230/LIPIcs.ITCS.2022.110
article
Lower Bounds on Stabilizer Rank
Peleg, Shir
1
https://orcid.org/0000-0002-7836-7780
Volk, Ben Lee
2
https://orcid.org/0000-0002-7143-7280
Shpilka, Amir
1
https://orcid.org/0000-0003-2384-425X
Tel Aviv University, Israel
Reichman University, Herzliya, Israel
The stabilizer rank of a quantum state ψ is the minimal r such that |ψ⟩ = ∑_{j = 1}^r c_j |φ_j⟩ for c_j ∈ ℂ and stabilizer states φ_j. The running time of several classical simulation methods for quantum circuits is determined by the stabilizer rank of the n-th tensor power of single-qubit magic states.
We prove a lower bound of Ω(n) on the stabilizer rank of such states, improving a previous lower bound of Ω(√n) of Bravyi, Smith and Smolin [Bravyi et al., 2016]. Further, we prove that for a sufficiently small constant δ, the stabilizer rank of any state which is δ-close to those states is Ω(√n/log n). This is the first non-trivial lower bound for approximate stabilizer rank.
Our techniques rely on the representation of stabilizer states as quadratic functions over affine subspaces of 𝔽₂ⁿ, and we use tools from analysis of boolean functions and complexity theory. The proof of the first result involves a careful analysis of directional derivatives of quadratic polynomials, whereas the proof of the second result uses Razborov-Smolensky low degree polynomial approximations and correlation bounds against the majority function.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.110/LIPIcs.ITCS.2022.110.pdf
Quantum Computation
Lower Bounds
Stabilizer rank
Simulation of Quantum computers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
111:1
111:23
10.4230/LIPIcs.ITCS.2022.111
article
Beating the Folklore Algorithm for Dynamic Matching
Roghani, Mohammad
1
2
Saberi, Amin
1
3
Wajc, David
1
4
Stanford University, CA, USA
roghani@stanford.edu
saberi@stanford.edu
wajc@stanford.edu
The maximum matching problem in dynamic graphs subject to edge updates (insertions and deletions) has received much attention over the last few years; a multitude of approximation/time tradeoffs were obtained, improving upon the folklore algorithm, which maintains a maximal (and hence 2-approximate) matching in O(n) worst-case update time in n-node graphs.
We present the first deterministic algorithm which outperforms the folklore algorithm in terms of both approximation ratio and worst-case update time. Specifically, we give a (2-Ω(1))-approximate algorithm with O(m^{3/8}) = O(n^{3/4}) worst-case update time in n-node, m-edge graphs. For sufficiently small constant ε > 0, no deterministic (2+ε)-approximate algorithm with worst-case update time O(n^{0.99}) was known. Our second result is the first deterministic (2+ε)-approximate weighted matching algorithm with O_ε(1)⋅ O(∜{m}) = O_ε(1)⋅ O(√n) worst-case update time. Neither of our results was previously known to be achievable by a randomized algorithm against an adaptive adversary.
Our main technical contributions are threefold: first, we characterize the tight cases for kernels, which are the well-studied matching sparsifiers underlying much of the (2+ε)-approximate dynamic matching literature. This characterization, together with multiple ideas - old and new - underlies our result for breaking the approximation barrier of 2. Our second technical contribution is the first example of a dynamic matching algorithm whose running time is improved due to improving the recourse of other dynamic matching algorithms. Finally, we show how to use dynamic bipartite matching algorithms as black-box subroutines for dynamic matching in general graphs without incurring the 3/2 factor in the approximation ratio which such approaches naturally incur (reminiscent of the integrality gap of the fractional matching polytope in general graphs).
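For concreteness, the folklore algorithm referenced above can be sketched as follows: maintain a maximal matching, match a newly inserted edge when both endpoints are free, and, when a matched edge is deleted, scan the neighbourhoods of its two endpoints (O(n) work) for free partners. A minimal sketch, with hypothetical class and method names:

```python
class FolkloreMatching:
    """Maintains a maximal (hence 2-approximate) matching under edge
    insertions and deletions in O(n) worst-case update time."""

    def __init__(self):
        self.adj = {}    # vertex -> set of neighbours
        self.mate = {}   # vertex -> matched partner (absent if free)

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        # Matching a free-free edge preserves maximality.
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def _rematch(self, u):
        # O(deg(u)) = O(n) scan for a free neighbour.
        for w in self.adj[u]:
            if w not in self.mate:
                self.mate[u] = w
                self.mate[w] = u
                return

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:
            # Only u and v became free; rematching each restores maximality.
            del self.mate[u]
            del self.mate[v]
            self._rematch(u)
            if v not in self.mate:
                self._rematch(v)
```

Only the deletion of a matched edge triggers the O(n) scans; all other updates take O(1) time.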
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.111/LIPIcs.ITCS.2022.111.pdf
dynamic matching
dynamic graph algorithms
sublinear algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
112:1
112:4
10.4230/LIPIcs.ITCS.2022.112
article
Interactive Proofs for Synthesizing Quantum States and Unitaries
Rosenthal, Gregory
1
https://orcid.org/0000-0002-5099-9882
Yuen, Henry
2
https://orcid.org/0000-0002-2684-1129
Department of Computer Science, University of Toronto, Canada
Department of Computer Science, Columbia University, New York, NY, USA
Whereas quantum complexity theory has traditionally been concerned with problems arising from classical complexity theory (such as computing boolean functions), it also makes sense to study the complexity of inherently quantum operations such as constructing quantum states or performing unitary transformations. With this motivation, we define models of interactive proofs for synthesizing quantum states and unitaries, where a polynomial-time quantum verifier interacts with an untrusted quantum prover, and a verifier who accepts also outputs an approximation of the target state (for the state synthesis problem) or the result of the target unitary applied to the input state (for the unitary synthesis problem); furthermore there should exist an "honest" prover which the verifier accepts with probability 1.
Our main result is a "state synthesis" analogue of the inclusion PSPACE ⊆ IP: any sequence of states computable by a polynomial-space quantum algorithm (which may run for exponential time) admits an interactive protocol of the form described above. Leveraging this state synthesis protocol, we also give a unitary synthesis protocol for polynomial space-computable unitaries that act nontrivially on only a polynomial-dimensional subspace. We obtain analogous results in the setting with multiple entangled provers as well.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.112/LIPIcs.ITCS.2022.112.pdf
interactive proofs
quantum state complexity
quantum unitary complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
113:1
113:23
10.4230/LIPIcs.ITCS.2022.113
article
Budget-Smoothed Analysis for Submodular Maximization
Rubinstein, Aviad
1
Zhao, Junyao
1
Computer Science Department, Stanford University, CA, USA
The greedy algorithm for monotone submodular function maximization subject to a cardinality constraint is guaranteed to approximate the optimal solution to within a 1-1/e factor. Although this guarantee is well known to be essentially tight in the worst case (for greedy and, in fact, for any efficient algorithm), experiments show that greedy performs better in practice. We observe that for many applications in practice, the empirical distribution of the budgets (i.e., cardinality constraints) is supported on a wide range, and moreover, all the existing hardness results in theory break under a large perturbation of the budget.
To understand the effect of the budget from both algorithmic and hardness perspectives, we introduce a new notion of budget-smoothed analysis. We prove that greedy is optimal for every budget distribution, and we give a characterization of the worst-case submodular functions. Based on these results, we show that, on the algorithmic side, under realistic budget distributions, greedy and related algorithms enjoy provably better approximation guarantees that hold even for worst-case functions, and, on the hardness side, there exist hard functions that are fairly robust to all budget distributions.
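The setting can be made concrete with the greedy algorithm on max-k-coverage, a canonical monotone submodular objective; the instance below is an illustrative toy, not from the paper:

```python
def greedy_max_coverage(sets, k):
    """Greedy for max-k-coverage: repeatedly pick the set with the largest
    marginal coverage, up to the budget (cardinality constraint) k."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

On the instance `[{1,2,3,4}, {1,2,5,6}, {3,4,7,8}]` with budget k = 2, greedy grabs the 4-element set first and can then only add 2 new elements, covering 6, while the optimum covers all 8; 6/8 = 0.75 still respects the 1-1/e ≈ 0.632 guarantee.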
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.113/LIPIcs.ITCS.2022.113.pdf
Submodular optimization
Beyond worst-case analysis
Greedy algorithms
Hardness of approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
114:1
114:30
10.4230/LIPIcs.ITCS.2022.114
article
Uniform Bounds for Scheduling with Job Size Estimates
Scully, Ziv
1
https://orcid.org/0000-0002-8547-1068
Grosof, Isaac
1
https://orcid.org/0000-0001-6205-8652
Mitzenmacher, Michael
2
https://orcid.org/0000-0001-5430-5457
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
We consider the problem of scheduling to minimize mean response time in M/G/1 queues where only estimated job sizes (processing times) are known to the scheduler; a job of true size s has estimated size in the interval [β s, α s] for some α ≥ β > 0. We evaluate each scheduling policy by its approximation ratio, which we define to be the ratio between its mean response time and that of Shortest Remaining Processing Time (SRPT), the optimal policy when true sizes are known. Our question: is there a scheduling policy that (a) has approximation ratio near 1 when α and β are near 1, (b) has approximation ratio bounded by some function of α and β even when they are far from 1, and (c) can be implemented without knowledge of α and β?
We first show that naively running SRPT using estimated sizes in place of true sizes is not such a policy: its approximation ratio can be arbitrarily large for any fixed β < 1. We then provide a simple variant of SRPT for estimated sizes that satisfies criteria (a), (b), and (c). In particular, we prove its approximation ratio approaches 1 uniformly as α and β approach 1. This is the first result showing this type of convergence for M/G/1 scheduling.
We also study the Preemptive Shortest Job First (PSJF) policy, a cousin of SRPT. We show that, unlike SRPT, naively running PSJF using estimated sizes in place of true sizes satisfies criteria (b) and (c), as well as a weaker version of (a).
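For reference, SRPT with true sizes (the benchmark in the approximation ratio above) can be sketched as a unit-step, single-server simulation; integer arrivals and sizes are assumed for simplicity, and this is not the paper's estimate-robust variant:

```python
import heapq

def srpt_mean_response(jobs):
    """jobs: list of (arrival_time, size) with integer values. Simulates
    preemptive SRPT on one server in unit time steps and returns the mean
    response time (completion minus arrival)."""
    jobs = sorted(jobs)
    n = len(jobs)
    t = i = done = 0
    active = []      # min-heap of [remaining_size, arrival_time]
    total = 0.0
    while done < n:
        while i < n and jobs[i][0] <= t:
            heapq.heappush(active, [jobs[i][1], jobs[i][0]])
            i += 1
        if active:
            # Serve the job with the shortest remaining processing time.
            # Decreasing the root key keeps the heap invariant.
            active[0][0] -= 1
            t += 1
            if active[0][0] == 0:
                _, arrival = heapq.heappop(active)
                total += t - arrival
                done += 1
        else:
            t = jobs[i][0]  # idle until the next arrival
    return total / n
```

For jobs (0,3), (0,1), (2,1): the size-1 job finishes at t=1, the job arriving at t=2 preempts and finishes at t=3, and the size-3 job finishes at t=5, giving mean response time (1+1+5)/3 = 7/3.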
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.114/LIPIcs.ITCS.2022.114.pdf
Scheduling
queueing systems
algorithms with predictions
shortest remaining processing time (SRPT)
preemptive shortest job first (PSJF)
M/G/1 queue
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
115:1
115:22
10.4230/LIPIcs.ITCS.2022.115
article
3+ε Approximation of Tree Edit Distance in Truly Subquadratic Time
Seddighin, Masoud
1
Seddighin, Saeed
2
Institute for Research in Fundamental Sciences (IPM), School of Computer Science, Tehran, Iran
Toyota Technological Institute at Chicago, IL, USA
Tree edit distance is a well-known generalization of the edit distance problem to rooted trees. In this problem, the goal is to transform a rooted tree into another rooted tree via (i) node addition, (ii) node deletion, and (iii) node relabel. In this work, we give a truly subquadratic time algorithm that approximates tree edit distance within a factor 3+ε.
Our result is obtained through a novel extension of a 3-step framework that approximates edit distance in truly subquadratic time. This framework has also been previously used to approximate longest common subsequence in subquadratic time.
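The string edit distance that tree edit distance generalizes admits the classical quadratic-time DP, sketched here for context (this is the base problem, not the paper's subquadratic tree algorithm):

```python
def edit_distance(a, b):
    """Classical O(|a|*|b|) DP for string edit distance with unit-cost
    insert, delete, and relabel (substitute), using one rolling row."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))       # distances from a[:0] to each prefix of b
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i    # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # delete a[i-1]
                        dp[j - 1] + 1,                   # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))   # relabel
            prev = cur
    return dp[n]
```

Tree edit distance replaces prefixes with subforests, which is exactly what makes subquadratic approximation nontrivial.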
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.115/LIPIcs.ITCS.2022.115.pdf
tree edit distance
approximation
subquadratic
edit distance
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
116:1
116:17
10.4230/LIPIcs.ITCS.2022.116
article
On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization
Shaltiel, Ronen
1
Viola, Emanuele
2
Department of Computer Science, University of Haifa, Israel
Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA
The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G:{0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP=P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B ⋅ n} that cannot be computed by circuits of size 2^{β ⋅ n}.
Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021), construct "extreme high-end PRGs" with seed length r = (1+o(1))⋅ log m, under qualitatively stronger assumptions.
We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
- The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor).
To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
- The construction of Chen and Tell composes two PRGs: G₁:{0,1}^{(1+o(1)) ⋅ log m} → {0,1}^{r₂ = m^{Ω(1)}} and G₂:{0,1}^{r₂} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)}, and is constructed assuming one way functions. We show that in black-box proofs of hardness amplification to 1/2+1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G₂ from the extreme high-end hardness assumption.
The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.116/LIPIcs.ITCS.2022.116.pdf
Complexity Theory
Derandomization
Pseudorandom generators
Black-box proofs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
117:1
117:19
10.4230/LIPIcs.ITCS.2022.117
article
Low-Bandwidth Recovery of Linear Functions of Reed-Solomon-Encoded Data
Shutty, Noah
1
Wootters, Mary
1
Stanford University, CA, USA
We study the problem of efficiently computing on encoded data. More specifically, we study the question of low-bandwidth computation of functions F:F^k → F of some data 𝐱 ∈ F^k, given access to an encoding 𝐜 ∈ Fⁿ of 𝐱 under an error correcting code. In our model - relevant in distributed storage, distributed computation and secret sharing - each symbol of 𝐜 is held by a different party, and we aim to minimize the total amount of information downloaded from each party in order to compute F(𝐱). Special cases of this problem have arisen in several domains, and we believe that it is fruitful to study this problem in generality.
Our main result is a low-bandwidth scheme to compute linear functions for Reed-Solomon codes, even in the presence of erasures. More precisely, let ε > 0 and let 𝒞: F^k → Fⁿ be a full-length Reed-Solomon code of rate 1 - ε over a field F with constant characteristic. For any γ ∈ [0, ε), our scheme can compute any linear function F(𝐱) given access to any (1 - γ)-fraction of the symbols of 𝒞(𝐱), with download bandwidth O(n/(ε - γ)) bits. In contrast, the naive scheme that involves reconstructing the data 𝐱 and then computing F(𝐱) uses Θ(n log n) bits. Our scheme has applications in distributed storage, coded computation, and homomorphic secret sharing.
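The naive Θ(n log n)-bit baseline mentioned above (download k symbols, reconstruct 𝐱, then evaluate the linear function) can be sketched over a small illustrative prime field; the field size and evaluation points below are arbitrary choices, not the paper's:

```python
P = 97  # small prime; GF(97) is illustrative only

def lagrange_at(S, t):
    """Values L_i(t) of the Lagrange basis for points S, over GF(P)."""
    out = []
    for i, ai in enumerate(S):
        num, den = 1, 1
        for j, aj in enumerate(S):
            if j != i:
                num = num * (t - aj) % P
                den = den * (ai - aj) % P
        out.append(num * pow(den, P - 2, P) % P)  # Fermat inverse
    return out

def rs_encode(msg, n):
    """Systematic RS code: msg = (p(1), ..., p(k)), codeword = (p(1), ..., p(n))
    for the unique polynomial p of degree < k through the message."""
    k = len(msg)
    pts = list(range(1, k + 1))
    return [sum(m * l for m, l in zip(msg, lagrange_at(pts, t))) % P
            for t in range(1, n + 1)]

def linear_fn_from_symbols(w, points, symbols):
    """Naive baseline: from any k downloaded symbols {(a, p(a))}, reconstruct
    the message by interpolation, then evaluate <w, msg> over GF(P)."""
    k = len(w)
    msg = [sum(c * l for c, l in zip(symbols, lagrange_at(points, j))) % P
           for j in range(1, k + 1)]
    return sum(a * b for a, b in zip(w, msg)) % P
```

The paper's contribution is doing the same job with O(n/(ε - γ)) downloaded bits rather than full symbols; this sketch only shows the baseline being improved upon.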
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.117/LIPIcs.ITCS.2022.117.pdf
Reed-Solomon Codes
Regenerating Codes
Coded Computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
118:1
118:33
10.4230/LIPIcs.ITCS.2022.118
article
Efficient Reconstruction of Depth Three Arithmetic Circuits with Top Fan-In Two
Sinha, Gaurav
1
Adobe Research, Bangalore, India
In this paper we develop efficient randomized algorithms to solve the black-box reconstruction problem for polynomials over finite fields, computable by depth three arithmetic circuits with alternating addition/multiplication gates, such that the output gate is an addition gate with in-degree two. Such circuits naturally compute polynomials of the form G×(T₁ + T₂), where G,T₁,T₂ are products of affine forms computed at the first layer of the circuit, and the polynomials T₁,T₂ have no common factors. The rank of such a circuit is defined to be the dimension of the vector space spanned by all affine factors of T₁ and T₂. For any polynomial f computable by such a circuit, rank(f) is defined to be the minimum rank of any such circuit computing it. Our work develops randomized reconstruction algorithms which take as input black-box access to a polynomial f (over a finite field 𝔽), computable by such a circuit. Here are the results.
- [Low rank]: When 5 ≤ rank(f) = O(log³ d), it runs in time (nd^{log³d}log |𝔽|)^{O(1)}, and, with high probability, outputs a depth three circuit computing f, with top addition gate having in-degree ≤ d^{rank(f)}.
- [High rank]: When rank(f) = Ω(log³ d), it runs in time (ndlog |𝔽|)^{O(1)}, and, with high probability, outputs a depth three circuit computing f, with top addition gate having in-degree two.
Prior to our work, black-box reconstruction for this circuit class was addressed in [Amir Shpilka, 2007; Karnin and Shpilka, 2009; Sinha, 2016]. The reconstruction algorithm in [Amir Shpilka, 2007] runs in time quasi-polynomial in n, d, |𝔽|, and that in [Karnin and Shpilka, 2009] is quasi-polynomial in d, |𝔽|. The algorithm in [Sinha, 2016] works only for polynomials over characteristic zero fields. Thus, ours is the first black-box reconstruction algorithm for this class of circuits that runs in time polynomial in log |𝔽|. This problem was posed as an open problem in [Ankit Gupta et al., 2012] (STOC 2012). In the high rank case, our algorithm runs in (nd log |𝔽|)^{O(1)} time, thereby significantly improving on the existing algorithms in [Amir Shpilka, 2007; Karnin and Shpilka, 2009].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.118/LIPIcs.ITCS.2022.118.pdf
Arithmetic Circuits
Circuit Reconstruction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
119:1
119:24
10.4230/LIPIcs.ITCS.2022.119
article
Polynomial Identity Testing via Evaluation of Rational Functions
van Melkebeek, Dieter
1
Morgan, Andrew
1
University of Wisconsin-Madison, Madison, WI, USA
We introduce a hitting set generator for Polynomial Identity Testing based on evaluations of low-degree univariate rational functions at abscissas associated with the variables. In spite of the univariate nature, we establish an equivalence up to rescaling with a generator introduced by Shpilka and Volkovich, which has a similar structure but uses multivariate polynomials in the abscissas.
We study the power of the generator by characterizing its vanishing ideal, i.e., the set of polynomials that it fails to hit. Capitalizing on the univariate nature, we develop a small collection of polynomials that jointly generate the vanishing ideal. As corollaries, we obtain tight bounds on the minimum degree, sparseness, and partition size of set-multilinearity in the vanishing ideal. Inspired by an alternating algebra representation, we develop a structured deterministic membership test for the vanishing ideal. As a proof of concept, we rederive known derandomization results based on the generator by Shpilka and Volkovich, and present a new application for read-once oblivious arithmetic branching programs that provably transcends the usual combinatorial techniques.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.119/LIPIcs.ITCS.2022.119.pdf
Derandomization
Gröbner Basis
Lower Bounds
Polynomial Identity Testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2022-01-25
215
120:1
120:23
10.4230/LIPIcs.ITCS.2022.120
article
Probing to Minimize
Wang, Weina
1
Gupta, Anupam
1
Williams, Jalani K.
1
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
We develop approximation algorithms for set-selection problems with deterministic constraints, but random objective values, i.e., stochastic probing problems. When the goal is to maximize the objective, approximation algorithms for probing problems are well-studied. On the other hand, few techniques are known for minimizing the objective, especially in the adaptive setting, where information about the random objective is revealed during the set-selection process and allowed to influence it. For minimization problems in particular, incorporating adaptivity can have a considerable effect on performance. In this work, we seek approximation algorithms that compare well to the optimal adaptive policy.
We develop new techniques for adaptive minimization, applying them to a few problems of interest. The core technique we develop here is an approximate reduction from an adaptive expectation minimization problem to a set of adaptive probability minimization problems which we call threshold problems. By providing near-optimal solutions to these threshold problems, we obtain bicriteria adaptive policies.
We apply this method to obtain an adaptive approximation algorithm for the Min-Element problem, where the goal is to adaptively pick random variables to minimize the expected minimum value seen among them, subject to a knapsack constraint. This partially resolves an open problem raised in [Goel et al., 2010]. We further consider three extensions of the Min-Element problem, where our objective is the sum of the smallest k element-weights, or the weight of the min-weight basis of a given matroid, or where the constraint is given not by a knapsack but by a matroid constraint. For all three of these variations, we develop adaptive approximation algorithms for their corresponding threshold problems, and prove their near-optimality via coupling arguments.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol215-itcs2022/LIPIcs.ITCS.2022.120/LIPIcs.ITCS.2022.120.pdf
approximation algorithms
stochastic probing
minimization