eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
1
2102
10.4230/LIPIcs.ITCS.2023
article
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Tauman Kalai, Yael
1
Microsoft Research New England, Cambridge, USA
LIPIcs, Volume 251, ITCS 2023, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023/LIPIcs.ITCS.2023.pdf
LIPIcs, Volume 251, ITCS 2023, Complete Volume
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
0:i
0:xxii
10.4230/LIPIcs.ITCS.2023.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Tauman Kalai, Yael
1
Microsoft Research New England, Cambridge, USA
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.0/LIPIcs.ITCS.2023.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
1:1
1:23
10.4230/LIPIcs.ITCS.2023.1
article
Worst-Case to Expander-Case Reductions
Abboud, Amir
1
https://orcid.org/0000-0002-0502-4517
Wallheimer, Nathan
1
https://orcid.org/0000-0001-7147-2855
Weizmann Institute of Science, Rehovot, Israel
In recent years, the expander decomposition method was used to develop many graph algorithms, resulting in major improvements to longstanding complexity barriers. This powerful hammer has led the community to (1) believe that most problems are as easy on worst-case graphs as they are on expanders, and (2) suspect that expander decompositions are the key to breaking the remaining longstanding barriers in fine-grained complexity.
We set out to investigate the extent to which these two things are true (and for which problems). Towards this end, we put forth the concept of worst-case to expander-case self-reductions. We design a collection of such reductions for fundamental graph problems, verifying belief (1) for them. The list includes k-Clique, 4-Cycle, Maximum Cardinality Matching, Vertex-Cover, and Minimum Dominating Set. Interestingly, for most (but not all) of these problems the proof is via a simple gadget reduction, not via expander decompositions, showing that this hammer is effectively useless against the problem and contradicting (2).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.1/LIPIcs.ITCS.2023.1.pdf
Fine-Grained Complexity
Expander Decomposition
Reductions
Exact and Parameterized Complexity
Expander Graphs
Triangle
Maximum Matching
Clique
4-Cycle
Vertex Cover
Dominating Set
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
2:1
2:9
10.4230/LIPIcs.ITCS.2023.2
article
Matroid Partition Property and the Secretary Problem
Abdolazimi, Dorna
1
Karlin, Anna R.
1
Klein, Nathan
1
Oveis Gharan, Shayan
1
University of Washington, Seattle, WA, USA
A matroid M on a set E of elements has the α-partition property, for some α > 0, if it is possible to (randomly) construct a partition matroid 𝒫 on (a subset of) elements of M such that every independent set of 𝒫 is independent in M and for any weight function w:E → ℝ_{≥0}, the expected value of the optimum of the matroid secretary problem on 𝒫 is at least an α-fraction of the optimum on M. We show that the complete binary matroid B_d on 𝔽₂^d does not satisfy the α-partition property for any constant α > 0 (independent of d).
Furthermore, we refute a recent conjecture of [Kristóf Bérczi et al., 2021] by showing the same matroid is 2^d/d-colorable but cannot be reduced to an α 2^d/d-colorable partition matroid for any α that is sublinear in d.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.2/LIPIcs.ITCS.2023.2.pdf
Online algorithms
Matroids
Matroid secretary problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
3:1
3:19
10.4230/LIPIcs.ITCS.2023.3
article
Kolmogorov Complexity Characterizes Statistical Zero Knowledge
Allender, Eric
1
https://orcid.org/0000-0002-0650-028X
Hirahara, Shuichi
2
https://orcid.org/0000-0002-3101-446X
Tirumala, Harsha
1
https://orcid.org/0000-0002-4600-3675
Rutgers University, Piscataway, NJ, USA
National Institute of Informatics, Tokyo, Japan
We show that a decidable promise problem has a non-interactive statistical zero-knowledge proof system if and only if it is randomly reducible via an honest polynomial-time reduction to a promise problem for Kolmogorov-random strings, with a superlogarithmic additive approximation term. This extends recent work by Saks and Santhanam (CCC 2022). We build on this to give new characterizations of Statistical Zero Knowledge (SZK), as well as the related classes NISZK_L and SZK_L.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.3/LIPIcs.ITCS.2023.3.pdf
Kolmogorov Complexity
Interactive Proofs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
4:1
4:22
10.4230/LIPIcs.ITCS.2023.4
article
Communication Complexity of Inner Product in Symmetric Normed Spaces
Andoni, Alexandr
1
Błasiok, Jarosław
1
Filtser, Arnold
2
Columbia University, New York, NY, USA
Bar-Ilan University, Ramat-Gan, Israel
We introduce and study the communication complexity of computing the inner product of two vectors, where the input is restricted w.r.t. a norm N on the space ℝⁿ. Here, Alice and Bob hold two vectors v,u such that ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1, where N^* is the dual norm. The goal is to compute their inner product ⟨v,u⟩ up to an ε additive term. The problem is denoted by IP_N, and generalizes important previously studied problems, such as: (1) Computing the expectation 𝔼_{x∼𝒟}[f(x)] when Alice holds 𝒟 and Bob holds f is equivalent to IP_{𝓁₁}. (2) Computing v^TAv where Alice has a symmetric matrix with bounded operator norm (denoted S_∞) and Bob has a vector v where ‖v‖₂ = 1. This problem is complete for quantum communication complexity and is equivalent to IP_{S_∞}.
We systematically study IP_N, showing the following results, near tight in most cases:
1) For any symmetric norm N, given ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1 there is a randomized protocol using 𝒪̃(ε^{-6} log n) bits of communication that returns a value in ⟨u,v⟩±ε with probability 2/3; we denote this by ℛ_{ε,1/3}(IP_N) ≤ 𝒪̃(ε^{-6} log n). In the special case where N = 𝓁_p and N^* = 𝓁_q for p^{-1} + q^{-1} = 1, we obtain an improved bound ℛ_{ε,1/3}(IP_{𝓁_p}) ≤ 𝒪(ε^{-2} log n), nearly matching the lower bound ℛ_{ε, 1/3}(IP_{𝓁_p}) ≥ Ω(min(n, ε^{-2})).
2) One-way communication complexity ℛ^{→}_{ε,δ}(IP_{𝓁_p}) ≤ 𝒪(ε^{-max(2,p)}⋅ log n/ε), and a nearly matching lower bound ℛ^{→}_{ε, 1/3}(IP_{𝓁_p}) ≥ Ω(ε^{-max(2,p)}) for ε^{-max(2,p)} ≪ n.
3) One-way communication complexity ℛ^{→}_{ε,δ}(N) for a symmetric norm N is governed by the distortion of the embedding of 𝓁_∞^k into N. Specifically, while a small-distortion embedding easily implies a lower bound Ω(k), we show that, conversely, non-existence of such an embedding implies a protocol with communication k^𝒪(log log k) log² n.
4) For an arbitrary origin-symmetric convex polytope P, we show ℛ_{ε,1/3}(IP_{N}) ≤ 𝒪(ε^{-2} log xc(P)), where N is the unique norm for which P is the unit ball, and xc(P) is the extension complexity of P (i.e., the smallest number of inequalities describing some polytope P' such that P is a projection of P').
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.4/LIPIcs.ITCS.2023.4.pdf
communication complexity
symmetric norms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
5:1
5:8
10.4230/LIPIcs.ITCS.2023.5
article
Concentration Bounds for Quantum States and Limitations on the QAOA from Polynomial Approximations
Anshu, Anurag
1
Metger, Tony
2
School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
Institute for Theoretical Studies, ETH Zürich, Switzerland
We prove concentration bounds for the following classes of quantum states: (i) output states of shallow quantum circuits, answering an open question from [De Palma et al., 2022]; (ii) injective matrix product states; (iii) output states of dense Hamiltonian evolution, i.e. states of the form e^{iH^{(p)}} ⋯ e^{iH^{(1)}} |ψ₀⟩ for any n-qubit product state |ψ₀⟩, where each H^{(i)} can be any local commuting Hamiltonian satisfying a norm constraint, including dense Hamiltonians with interactions between any qubits. Our proofs use polynomial approximations to show that these states are close to local operators. This implies that the distribution of the Hamming weight of a computational basis measurement (and of other related observables) concentrates. An example of (iii) are the states produced by the quantum approximate optimisation algorithm (QAOA). Using our concentration results for these states, we show that for a random spin model, the QAOA can only succeed with negligible probability even at super-constant level p = o(log log n), assuming a strengthened version of the so-called overlap gap property. This gives the first limitations on the QAOA on dense instances at super-constant level, improving upon the recent result [Basso et al., 2022].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.5/LIPIcs.ITCS.2023.5.pdf
quantum computing
polynomial approximation
quantum optimization algorithm
QAOA
overlap gap property
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
6:1
6:23
10.4230/LIPIcs.ITCS.2023.6
article
On Identity Testing and Noncommutative Rank Computation over the Free Skew Field
Arvind, V.
1
Chatterjee, Abhranil
2
Ghosal, Utsab
3
Mukhopadhyay, Partha
3
Ramya, C.
1
Institute of Mathematical Sciences (HBNI), Chennai, India
National Institute of Science Education and Research (HBNI), Bhubaneswar, India
Chennai Mathematical Institute, India
The identity testing of rational formulas (RIT) in the free skew field efficiently reduces to computing the rank of a matrix whose entries are linear polynomials in noncommuting variables [Hrubeš and Wigderson, 2015]. This rank computation problem has deterministic polynomial-time white-box algorithms [Ankit Garg et al., 2016; Ivanyos et al., 2018] and a randomized polynomial-time algorithm in the black-box setting [Harm Derksen and Visu Makam, 2017]. In this paper, we propose a new approach for efficient derandomization of black-box RIT. Additionally, we obtain results for matrix rank computation over the free skew field and construct efficient linear pencil representations for a new class of rational expressions. More precisely, we show:
- Under the hardness assumption that the ABP (algebraic branching program) complexity of every polynomial identity for the k×k matrix algebra is 2^Ω(k) [Andrej Bogdanov and Hoeteck Wee, 2005], we obtain a subexponential-time black-box RIT algorithm for rational formulas of inversion height almost logarithmic in the size of the formula. This can be seen as the first "hardness implies derandomization" type theorem for rational formulas.
- We show that the noncommutative rank of any matrix over the free skew field whose entries have small linear pencil representations can be computed in deterministic polynomial time. While an efficient rank computation was known for matrices with noncommutative formulas as entries [Ankit Garg et al., 2020], we obtain the first deterministic polynomial-time algorithms for rank computation of matrices whose entries are noncommutative ABPs or rational formulas.
- Motivated by the definition given by Bergman [George M Bergman, 1976], we define a new class of rational functions, where a rational function of inversion height at most h is defined as a composition of a noncommutative r-skewed circuit (equivalently, an ABP) with inverses of rational functions of this class of inversion height at most h-1, which are also disjoint. We obtain a polynomial-size linear pencil representation for this class, which gives a white-box deterministic polynomial-time identity testing algorithm for the class.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.6/LIPIcs.ITCS.2023.6.pdf
Algebraic Complexity
Identity Testing
Non-commutative rank
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
7:1
7:24
10.4230/LIPIcs.ITCS.2023.7
article
All-Norm Load Balancing in Graph Streams via the Multiplicative Weights Update Method
Assadi, Sepehr
1
Bernstein, Aaron
1
Langley, Zachary
1
https://orcid.org/0000-0003-1369-4948
Rutgers University, New Brunswick, NJ, USA
In the weighted load balancing problem, the input is an n-vertex bipartite graph between a set of clients and a set of servers, and each client comes with some nonnegative real weight. The output is an assignment that maps each client to one of its adjacent servers, and the load of a server is then the sum of the weights of the clients assigned to it. The goal is to find an assignment that is well-balanced, typically captured by (approximately) minimizing either the 𝓁_∞- or 𝓁₂-norm of the server loads. Generalizing both of these objectives, the all-norm load balancing problem asks for an assignment that approximately minimizes all 𝓁_p-norm objectives for p ≥ 1, including p = ∞, simultaneously.
Our main result is a deterministic O(log n)-pass O(1)-approximation semi-streaming algorithm for the all-norm load balancing problem. Prior to our work, only an O(log n)-pass O(log n)-approximation algorithm for the 𝓁_∞-norm objective was known in the semi-streaming setting.
Our algorithm uses a novel application of the multiplicative weights update method to a mixed covering/packing convex program for the all-norm load balancing problem involving an infinite number of constraints.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.7/LIPIcs.ITCS.2023.7.pdf
Load Balancing
Semi-Streaming Algorithms
Semi-Matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
8:1
8:19
10.4230/LIPIcs.ITCS.2023.8
article
A Framework for Adversarial Streaming via Differential Privacy and Difference Estimators
Attias, Idan
1
Cohen, Edith
2
3
Shechner, Moshe
3
Stemmer, Uri
3
4
Ben-Gurion University, Beer Sheva, Israel
Google Research, Mountain View, CA, USA
Tel Aviv University, Israel
Google Research, Herzliya, Israel
Classical streaming algorithms operate under the (not always reasonable) assumption that the input stream is fixed in advance. Recently, there has been growing interest in designing robust streaming algorithms that provide provable guarantees even when the input stream is chosen adaptively as the execution progresses. We propose a new framework for robust streaming that combines techniques from two recently suggested frameworks by Hassidim et al. [NeurIPS 2020] and by Woodruff and Zhou [FOCS 2021]. These recently suggested frameworks rely on very different ideas, each with its own strengths and weaknesses. We combine these two frameworks into a single hybrid framework that obtains the "best of both worlds", thereby solving a question left open by Woodruff and Zhou.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.8/LIPIcs.ITCS.2023.8.pdf
Streaming
adversarial robustness
differential privacy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
9:1
9:23
10.4230/LIPIcs.ITCS.2023.9
article
Making Auctions Robust to Aftermarkets
Babaioff, Moshe
1
Immorlica, Nicole
2
Li, Yingkai
3
Lucier, Brendan
4
Microsoft Research, Herzliya, Israel
Microsoft Research, New York, NY, USA
Cowles Foundation for Research in Economics, Yale University, New Haven, CT, USA
Microsoft Research, Cambridge, MA, USA
A prevalent assumption in auction theory is that the auctioneer has full control over the market and that the allocation she dictates is final. In practice, however, agents might be able to resell acquired items in an aftermarket. A prominent example is the market for carbon emission allowances. These allowances are commonly allocated by the government using uniform-price auctions, and firms can typically trade these allowances among themselves in an aftermarket that may not be fully under the auctioneer’s control. While the uniform-price auction is approximately efficient in isolation, we show that speculation and resale in aftermarkets might result in a significant welfare loss. Motivated by this issue, we consider three approaches, each ensuring high equilibrium welfare in the combined market. The first approach is to adopt smooth auctions such as discriminatory auctions. This approach is robust to correlated valuations and to participants acquiring information about others' types. However, discriminatory auctions have several downsides, notably that of charging bidders different prices for identical items, resulting in fairness concerns that make the format unpopular. Two other approaches we suggest are either using posted-pricing mechanisms, or using uniform-price auctions with anonymous reserves. We show that when using balanced prices, both these approaches ensure high equilibrium welfare in the combined market. The latter also inherits many of the benefits from uniform-price auctions such as price discovery, and can be introduced with a minor modification to auctions currently in use to sell carbon emission allowances.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.9/LIPIcs.ITCS.2023.9.pdf
carbon markets
aftermarkets
price of anarchy
multi-unit auctions
posted prices
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
10:1
10:23
10.4230/LIPIcs.ITCS.2023.10
article
Efficiently Testable Circuits
Baig, Mirza Ahad
1
Chakraborty, Suvradip
2
Dziembowski, Stefan
3
4
Gałązka, Małgorzata
3
Lizurej, Tomasz
3
4
Pietrzak, Krzysztof
1
ISTA, Klosterneuburg, Austria
ETH Zürich, Switzerland
University of Warsaw, Poland
IDEAS NCBR, Warsaw, Poland
In this work, we put forward the notion of "efficiently testable circuits" and provide circuit compilers that transform any circuit into an efficiently testable one. Informally, a circuit is testable if one can detect tampering with the circuit by evaluating it on a small number of inputs from some test set.
Our technical contribution is a compiler that transforms any circuit C into a testable circuit (Ĉ,𝕋̂) for which we can detect arbitrary tampering with all wires in Ĉ. The notion of a testable circuit is weaker than or incomparable to existing notions of tamper-resilience, which aim to detect or even correct errors introduced by tampering during every query, but our new notion is interesting in several settings, and we achieve security against much more general tampering classes - like tampering with all wires - with very modest overhead.
Concretely, starting from a circuit C of size n and depth d, for any L (think of L as a small constant, say L = 4), we get a testable (Ĉ,𝕋̂) where Ĉ is of size ≈ 12n and depth d+log(n)+L⋅ n^{1/L}. The test set 𝕋̂ is of size 4⋅ 2^L. The number of extra input and output wires (i.e., pins) we need to add for the testing is 3+L and 2^L, respectively.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.10/LIPIcs.ITCS.2023.10.pdf
circuit compilers
circuit integrity
circuit testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
11:1
11:22
10.4230/LIPIcs.ITCS.2023.11
article
Strategyproof Scheduling with Predictions
Balkanski, Eric
1
https://orcid.org/0000-0001-6876-7919
Gkatzelis, Vasilis
2
https://orcid.org/0000-0001-7203-438X
Tan, Xizhi
2
https://orcid.org/0000-0002-3928-523X
Columbia University, New York, NY, USA
Drexel University, Philadelphia, PA, USA
In their seminal paper that initiated the field of algorithmic mechanism design, Nisan and Ronen [Noam Nisan and Amir Ronen, 1999] studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines aiming to minimize the makespan. They provided a strategyproof mechanism that achieves an n-approximation and they made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and several efforts, n remains the best known approximation and very recent work by Christodoulou et al. [George Christodoulou et al., 2021] has been able to prove an Ω(√n) approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the "learning-augmented framework", whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness).
In this work, we study the classic strategic scheduling problem of Nisan and Ronen [Noam Nisan and Amir Ronen, 1999] using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is 6-consistent and 2n-robust. We thus achieve the "best of both worlds": an O(1) consistency and an O(n) robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any 1-consistent deterministic strategyproof mechanism has unbounded robustness.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.11/LIPIcs.ITCS.2023.11.pdf
Mechanism Design with Predictions
Strategyproof Scheduling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
12:1
12:24
10.4230/LIPIcs.ITCS.2023.12
article
Graph Searching with Predictions
Banerjee, Siddhartha
1
https://orcid.org/0000-0002-8954-4578
Cohen-Addad, Vincent
2
Gupta, Anupam
3
Li, Zhouzi
4
Operations Research and Information Engineering, Cornell University, Ithaca, NY, USA
Google Research, Zürich, Switzerland
Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
IIIS, Tsinghua University, Beijing, China
Consider an agent exploring an unknown graph in search of some goal state. As it walks around the graph, it learns the nodes and their neighbors. The agent only knows where the goal state is when it reaches it. How do we reach this goal while moving only a small distance? This problem seems hopeless, even on trees of bounded degree, unless we give the agent some help. This setting with "help" often arises in exploring large search spaces (e.g., huge game trees) where we assume access to some score/quality function for each node, which we use to guide us towards the goal. In our case, we assume the help comes in the form of distance predictions: each node v provides a prediction f(v) of its distance to the goal vertex. Naturally, if these predictions are correct, we can reach the goal along a shortest path. What if the predictions are unreliable and some of them are erroneous? Can we get an algorithm whose performance relates to the error of the predictions?
In this work, we consider the problem on trees and give deterministic algorithms whose total movement cost is only O(OPT + Δ ⋅ ERR), where OPT is the distance from the start to the goal vertex, Δ the maximum degree, and ERR the total number of vertices whose predictions are erroneous. We show this guarantee is optimal. We then consider a "planning" version of the problem where the graph and predictions are known at the beginning, so the agent can use this global information to devise a search strategy of low cost. For this planning version, we go beyond trees and give an algorithm that achieves good performance on (weighted) graphs with bounded doubling dimension.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.12/LIPIcs.ITCS.2023.12.pdf
Algorithms with predictions
network algorithms
graph search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
13:1
13:21
10.4230/LIPIcs.ITCS.2023.13
article
On Computing Homological Hitting Sets
Bauer, Ulrich
1
https://orcid.org/0000-0002-9683-0724
Rathod, Abhishek
2
https://orcid.org/0000-0003-2533-3699
Zehavi, Meirav
3
https://orcid.org/0000-0002-3636-5322
Department of Mathematics, TUM School of CIT, Technische Universität München, Germany
Department of Computer Science, Purdue University, West Lafayette, IN, USA
Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel
Cut problems form one of the most fundamental classes of problems in algorithmic graph theory. In this paper, we initiate the algorithmic study of a high-dimensional cut problem. The problem we study, namely, Homological Hitting Set (HHS), is defined as follows: Given a nontrivial r-cycle z in a simplicial complex, find a set 𝒮 of r-dimensional simplices of minimum cardinality so that 𝒮 meets every cycle homologous to z. Our first result is that HHS admits a polynomial-time solution on triangulations of closed surfaces. Interestingly, the minimal solution is given in terms of the cocycles of the surface. Next, we provide an example of a 2-complex for which the (unique) minimal hitting set is not a cocycle. Furthermore, for general complexes, we show that HHS is W[1]-hard with respect to the solution size p. In contrast, on the positive side, we show that HHS admits an FPT algorithm with respect to p+Δ, where Δ is the maximum degree of the Hasse graph of the complex 𝖪.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.13/LIPIcs.ITCS.2023.13.pdf
Algorithmic topology
Cut problems
Surfaces
Parameterized complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
14:1
14:17
10.4230/LIPIcs.ITCS.2023.14
article
On Disperser/Lifting Properties of the Index and Inner-Product Functions
Beame, Paul
1
Koroth, Sajin
2
https://orcid.org/0000-0002-7989-1963
University of Washington, Seattle, WA, USA
University of Victoria, Victoria, BC, Canada
Query-to-communication lifting theorems, which connect the query complexity of a Boolean function to the communication complexity of an associated "lifted" function obtained by composing the function with many copies of another function known as a gadget, have been instrumental in resolving many open questions in computational complexity. A number of important complexity questions could be resolved if we could make substantial improvements in the input size required for lifting with the Index function, which is a universal gadget for lifting, from its current near-linear size down to polylogarithmic in the number of inputs N of the original function or, ideally, constant. The near-linear size bound was recently shown by Lovett, Meka, Mertz, Pitassi and Zhang [Shachar Lovett et al., 2022] using a recent breakthrough improvement on the Sunflower Lemma to show that a certain graph associated with an Index function of that size is a disperser. They also stated a conjecture about the Index function that is essential for further improvements in the size required for lifting with Index using current techniques. In this paper we prove the following:
- The conjecture of Lovett et al. is false when the size of the Index gadget is less than logarithmic in N.
- The same limitation applies to the Inner-Product function. More precisely, the Inner-Product function, which is known to satisfy the disperser property at size O(log N), also does not have this property when its size is less than log N.
- Notwithstanding the above, we prove a lifting theorem that applies to Index gadgets of any size at least 4 and yields lower bounds for a restricted class of communication protocols in which one of the players is limited to sending parities of its inputs.
- Using a modification of the same idea with improved lifting parameters we derive a strong lifting theorem from decision tree size to parity decision tree size. We use this, in turn, to derive a general lifting theorem in proof complexity from tree-resolution size to tree-like Res(⊕) refutation size, which yields many new exponential lower bounds on such proofs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.14/LIPIcs.ITCS.2023.14.pdf
Decision trees
communication complexity
lifting theorems
proof complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
15:1
15:11
10.4230/LIPIcs.ITCS.2023.15
article
Is This Correct? Let’s Check!
Ben-Eliezer, Omri
1
https://orcid.org/0000-0001-6366-5964
Mikulincer, Dan
1
https://orcid.org/0000-0003-3597-3550
Mossel, Elchanan
1
https://orcid.org/0000-0001-7812-7886
Sudan, Madhu
2
https://orcid.org/0000-0003-3718-6489
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA
School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
Societal accumulation of knowledge is a complex process. The correctness of new units of knowledge depends not only on the correctness of new reasoning, but also on the correctness of old units that the new one builds on. The errors in such accumulation processes are often remedied by error correction and detection heuristics. Motivating examples include the scientific process based on scientific publications, and software development based on libraries of code.
Natural processes that aim to keep errors under control, such as peer review in scientific publications, and testing and debugging in software development, would typically check existing pieces of knowledge - both for the reasoning that generated them and the previous facts they rely on. In this work, we present a simple process that models such accumulation of knowledge and study the persistence (or lack thereof) of errors. We consider a simple probabilistic model for the generation of new units of knowledge based on the preferential attachment growth model, which additionally allows for errors. Furthermore, the process includes checks aimed at catching these errors. We investigate when effects of errors persist forever in the system (with positive probability) and when they get rooted out completely by the checking process. The two basic parameters associated with the checking process are the probability of conducting a check and the depth of the check. We show that errors are rooted out if checks are sufficiently frequent and sufficiently deep. In contrast, shallow or infrequent checks are insufficient to root out errors.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.15/LIPIcs.ITCS.2023.15.pdf
Error Propagation
Preferential Attachment
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
16:1
16:24
10.4230/LIPIcs.ITCS.2023.16
article
Online Learning and Bandits with Queried Hints
Bhaskara, Aditya
1
Gollapudi, Sreenivas
2
Im, Sungjin
3
Kollias, Kostas
2
Munagala, Kamesh
4
School of Computing, University of Utah, Salt Lake City, UT, USA
Google Research, Mountain View, CA, USA
University of California, Merced, CA, USA
Computer Science Department, Duke University, Durham, NC, USA
We consider the classic online learning and stochastic multi-armed bandit (MAB) problems, when at each step, the online policy can probe and find out which of a small number (k) of choices has better reward (or loss) before making its choice. In this model, we derive algorithms whose regret bounds have exponentially better dependence on the time horizon compared to the classic regret bounds. In particular, we show that probing with k = 2 suffices to achieve time-independent regret bounds for online linear and convex optimization. The same number of probes improves the regret bound of stochastic MAB with independent arms from O(√{nT}) to O(n² log T), where n is the number of arms and T is the horizon length. For stochastic MAB, we also consider a stronger model where a probe reveals the reward values of the probed arms, and show that in this case, k = 3 probes suffice to achieve parameter-independent constant regret, O(n²). Such regret bounds cannot be achieved even with full feedback after the play, showcasing the power of limited "advice" via probing before making the play. We also present extensions to the setting where the hints can be imperfect, and to the case of stochastic MAB where the rewards of the arms can be correlated.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.16/LIPIcs.ITCS.2023.16.pdf
Online learning
multi-armed bandits
regret
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
17:1
17:23
10.4230/LIPIcs.ITCS.2023.17
article
Bootstrapping Homomorphic Encryption via Functional Encryption
Bitansky, Nir
1
Solomon, Tomer
1
Tel Aviv University, Israel
Homomorphic encryption is a central object in modern cryptography, with far-reaching applications. Constructions supporting homomorphic evaluation of arbitrary Boolean circuits have been known for over a decade, based on standard lattice assumptions. However, these constructions are leveled, meaning that they only support circuits up to some a priori bounded depth. These leveled constructions can be bootstrapped into fully homomorphic ones, but this requires additional circular security assumptions, which are construction-dependent, and where reductions to standard lattice assumptions are no longer known. Alternative constructions are known based on indistinguishability obfuscation, which has been recently constructed under standard assumptions. However, this alternative requires subexponential hardness of the underlying primitives.
We prove a new bootstrapping theorem based on functional encryption, which is known based on standard polynomial hardness assumptions. As a result we obtain the first fully homomorphic encryption scheme that avoids both circular security assumptions and super-polynomial hardness assumptions. The construction is secure against uniform adversaries, and can be made non-uniformly secure assuming a generalization of the time-hierarchy theorem, which follows for example from non-uniform ETH.
At the heart of the construction is a new proof technique based on cryptographic puzzles and decomposable obfuscation. Unlike most cryptographic reductions, our security reduction does not fully treat the adversary as a black box, but rather makes explicit use of its running time (or circuit size).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.17/LIPIcs.ITCS.2023.17.pdf
Fully Homomorphic Encryption
Polynomial Assumptions
Cryptographic Puzzles
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
18:1
18:22
10.4230/LIPIcs.ITCS.2023.18
article
Certification with an NP Oracle
Blanc, Guy
1
Koch, Caleb
1
Lange, Jane
2
Strassle, Carmen
1
Tan, Li-Yang
1
Department of Computer Science, Stanford University, CA, USA
Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
In the certification problem, the algorithm is given a function f with certificate complexity k and an input x^⋆, and the goal is to find a certificate of size ≤ poly(k) for f’s value at x^⋆. This problem is in NP^NP, and assuming 𝖯 ≠ NP, is not in 𝖯. Prior works, dating back to Valiant in 1984, have therefore sought to design efficient algorithms by imposing assumptions on f such as monotonicity.
Our first result is a BPP^NP algorithm for the general problem. The key ingredient is a new notion of the balanced influence of variables, a natural variant of influence that corrects for the bias of the function. Balanced influences can be accurately estimated via uniform generation, and classic BPP^NP algorithms are known for the latter task.
We then consider certification with stricter instance-wise guarantees: for each x^⋆, find a certificate whose size scales with that of the smallest certificate for x^⋆. In sharp contrast with our first result, we show that this problem is NP^NP-hard even to approximate. We obtain an optimal inapproximability ratio, adding to a small handful of problems in the higher levels of the polynomial hierarchy for which optimal inapproximability is known. Our proof involves the novel use of bit-fixing dispersers for gap amplification.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.18/LIPIcs.ITCS.2023.18.pdf
Certificate complexity
Boolean functions
polynomial hierarchy
hardness of approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
19:1
19:16
10.4230/LIPIcs.ITCS.2023.19
article
Matrix Multiplication via Matrix Groups
Blasiak, Jonah
1
https://orcid.org/0000-0003-0793-9232
Cohn, Henry
2
https://orcid.org/0000-0001-9261-4656
Grochow, Joshua A.
3
https://orcid.org/0000-0002-6466-0476
Pratt, Kevin
4
https://orcid.org/0000-0002-2923-0905
Umans, Chris
5
https://orcid.org/0000-0002-6390-9401
Department of Mathematics, Drexel University, Philadelphia, PA, USA
Microsoft Research New England, One Memorial Drive, Cambridge, MA, USA
Departments of Computer Science and Mathematics, University of Colorado Boulder, CO, USA
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
In 2003, Cohn and Umans proposed a group-theoretic approach to bounding the exponent of matrix multiplication. Previous work within this approach ruled out certain families of groups as a route to obtaining ω = 2, while other families of groups remain potentially viable. In this paper we turn our attention to matrix groups, whose usefulness within this framework was relatively unexplored.
We first show that groups of Lie type cannot prove ω = 2 within the group-theoretic approach. This is based on a representation-theoretic argument that identifies the second-smallest dimension of an irreducible representation of a group as a key parameter that determines its viability in this framework. Our proof builds on Gowers' result concerning product-free sets in quasirandom groups. We then give another barrier that rules out certain natural matrix group constructions that make use of subgroups that are far from being self-normalizing.
Our barrier results leave open several natural paths to obtain ω = 2 via matrix groups. To explore these routes we propose working in the continuous setting of Lie groups, in which we develop an analogous theory. Obtaining the analogue of ω = 2 in this potentially easier setting is a key challenge that represents an intermediate goal short of actually proving ω = 2. We give two constructions in the continuous setting, each of which evades one of our two barriers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.19/LIPIcs.ITCS.2023.19.pdf
Fast matrix multiplication
representation theory
matrix groups
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
20:1
20:22
10.4230/LIPIcs.ITCS.2023.20
article
Epic Fail: Emulators Can Tolerate Polynomially Many Edge Faults for Free
Bodwin, Greg
1
Dinitz, Michael
2
Nazari, Yasamin
3
University of Michigan, Ann Arbor, MI, USA
Johns Hopkins University, Baltimore, MD, USA
Universität Salzburg, Austria
A t-emulator of a graph G is a graph H that approximates its pairwise shortest path distances up to multiplicative t error. We study fault tolerant t-emulators, under the model recently introduced by Bodwin, Dinitz, and Nazari [ITCS 2022] for vertex failures. In this paper we consider the version for edge failures, and show that they exhibit surprisingly different behavior.
In particular, our main result is that, for (2k-1)-emulators with k odd, we can tolerate a polynomial number of edge faults for free. For example: for any n-node input graph, we construct a 5-emulator (k = 3) on O(n^{4/3}) edges that is robust to f = O(n^{2/9}) edge faults. It is well known that Ω(n^{4/3}) edges are necessary even if the 5-emulator does not need to tolerate any faults. Thus we pay no extra cost in the size to gain this fault tolerance. We leave open the precise range of free fault tolerance for odd k, and whether a similar phenomenon can be proved for even k.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.20/LIPIcs.ITCS.2023.20.pdf
Emulators
Fault Tolerance
Girth Conjecture
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
21:1
21:21
10.4230/LIPIcs.ITCS.2023.21
article
Opponent Indifference in Rating Systems: A Theoretical Case for Sonas
Bodwin, Greg
1
Zhang, Forest
1
University of Michigan, Ann Arbor, MI, USA
In competitive games, it is common to assign each player a real number rating signifying their skill level. A rating system is a procedure by which player ratings are adjusted upwards each time they win, or downwards each time they lose.
Many matchmaking systems give players some control over their opponent’s rating; for example, a player might be able to selectively initiate games against opponents whose ratings are publicly visible, or abort a game without penalty before it begins but after glimpsing their opponent’s rating. It is natural to ask whether one can design a rating system that does not incentivize a rating-maximizing player to act strategically, seeking games against opponents of one rating over another. We show the following:
- The full version of this "opponent indifference" property is unfortunately too strong to be feasible. Although it is satisfied by some rating systems, these systems lack certain desirable expressiveness properties, suggesting that they are not suitable to capture most games of interest.
- However, there is a natural relaxation, roughly requiring indifference between any two opponents who are both "reasonably evenly matched" with the choosing player. We prove that this relaxed variant of opponent indifference, which we call P opponent indifference, is viable. In fact, a certain strong version of P opponent indifference precisely characterizes the rating system Sonas, which was originally proposed for its empirical predictive accuracy on the outcomes of high-level chess games.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.21/LIPIcs.ITCS.2023.21.pdf
Rating systems
opponent indifference
incentive compatibility
mechanism design
game theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
22:1
22:20
10.4230/LIPIcs.ITCS.2023.22
article
PPP-Completeness and Extremal Combinatorics
Bourneuf, Romain
1
Folwarczný, Lukáš
2
3
https://orcid.org/0000-0002-3020-6443
Hubáček, Pavel
2
https://orcid.org/0000-0002-6850-6222
Rosen, Alon
4
5
https://orcid.org/0000-0002-3021-7150
Schwartzbach, Nikolaj I.
6
https://orcid.org/0000-0002-0610-4455
ENS de Lyon, France
Charles University, Faculty of Mathematics and Physics, Prague, Czech Republic
Institute of Mathematics of the Czech Academy of Sciences, Prague, Czech Republic
Bocconi University, Milano, Italy
Reichman University, Herzliya, Israel
Department of Computer Science, Aarhus University, Denmark
Many classical theorems in combinatorics establish the emergence of substructures within sufficiently large collections of objects. Well-known examples are Ramsey’s theorem on monochromatic subgraphs and the Erdős-Rado sunflower lemma. Implicit versions of the corresponding total search problems are known to be PWPP-hard under randomized reductions in the case of Ramsey’s theorem and PWPP-hard in the case of the sunflower lemma; here "implicit" means that the collection is represented by a poly-sized circuit inducing an exponentially large number of objects.
We show that several other well-known theorems from extremal combinatorics - including Erdős-Ko-Rado, Sperner, and Cayley’s formula - give rise to complete problems for PWPP and PPP. This is in contrast to the Ramsey and Erdős-Rado problems, for which establishing inclusion in PWPP has remained elusive. Besides significantly expanding the set of problems that are complete for PWPP and PPP, our work identifies some key properties of combinatorial proofs of existence that can give rise to completeness for these classes.
Our completeness results rely on efficient encodings for which finding collisions allows extracting the desired substructure. These encodings are made possible by the tightness of the bounds for the problems at hand (tighter than what is known for Ramsey’s theorem and the sunflower lemma). Previous techniques for proving bounds in TFNP invariably made use of structured algorithms. Such algorithms are not known to exist for the theorems considered in this work, as their proofs "from the book" are non-constructive.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.22/LIPIcs.ITCS.2023.22.pdf
total search problems
extremal combinatorics
PPP-completeness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
23:1
23:28
10.4230/LIPIcs.ITCS.2023.23
article
On Low-End Obfuscation and Learning
Boyle, Elette
1
2
Ishai, Yuval
3
Meyer, Pierre
1
4
Robere, Robert
5
Yehuda, Gal
3
Reichman University, Herzliya, Israel
NTT Research, Sunnyvale, CA, USA
Technion, Haifa, Israel
IRIF, Université Paris Cité, CNRS, France
McGill University, Montreal, Canada
Most recent works on cryptographic obfuscation focus on the high-end regime of obfuscating general circuits while guaranteeing computational indistinguishability between functionally equivalent circuits. Motivated by the goals of simplicity and efficiency, we initiate a systematic study of "low-end" obfuscation, focusing on simpler representation models and information-theoretic notions of security. We obtain the following results.
- Positive results via "white-box" learning. We present a general technique for obtaining perfect indistinguishability obfuscation from exact learning algorithms that are given restricted access to the representation of the input function. We demonstrate the usefulness of this approach by obtaining simple obfuscation for decision trees and multilinear read-k arithmetic formulas.
- Negative results via PAC learning. A proper obfuscation scheme obfuscates programs from a class C by programs from the same class. Assuming the existence of one-way functions, we show that there is no proper indistinguishability obfuscation scheme for k-CNF formulas for any constant k ≥ 3; in fact, even obfuscating 3-CNF by k-CNF is impossible. This result applies even to computationally secure obfuscation, and makes an unexpected use of PAC learning in the context of negative results for obfuscation.
- Separations. We study the relations between different information-theoretic notions of indistinguishability obfuscation, giving cryptographic evidence for separations between them.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.23/LIPIcs.ITCS.2023.23.pdf
Indistinguishability obfuscation
cryptography
learning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
24:1
24:21
10.4230/LIPIcs.ITCS.2023.24
article
On the Computational Hardness Needed for Quantum Cryptography
Brakerski, Zvika
1
https://orcid.org/0000-0002-4867-7999
Canetti, Ran
2
https://orcid.org/0000-0002-5479-7540
Qian, Luowen
2
https://orcid.org/0000-0002-1112-8822
Weizmann Institute of Science, Rehovot, Israel
Boston University, MA, USA
In the classical model of computation, it is well established that one-way functions (OWF) are minimal for computational cryptography: They are essential for almost any cryptographic application that cannot be realized with respect to computationally unbounded adversaries. In the quantum setting, however, OWFs appear not to be essential (Kretschmer 2021; Ananth et al., Morimae and Yamakawa 2022), and the question of whether such a minimal primitive exists remains open.
We consider EFI pairs - efficiently samplable, statistically far but computationally indistinguishable pairs of (mixed) quantum states. Building on the work of Yan (2022), which shows equivalence between EFI pairs and statistical commitment schemes, we show that EFI pairs are necessary for a large class of quantum-cryptographic applications. Specifically, we construct EFI pairs from minimalistic versions of commitment schemes, oblivious transfer, and general secure multiparty computation, as well as from QCZK proofs from essentially any non-trivial language. We also construct quantum computational zero knowledge (QCZK) proofs for all of QIP from any EFI pair.
This suggests that, for much of quantum cryptography, EFI pairs play a similar role to that played by OWFs in the classical setting: they are simple to describe, essential, and also serve as a linchpin for demonstrating equivalence between primitives.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.24/LIPIcs.ITCS.2023.24.pdf
quantum cryptography
EFI
commitment scheme
oblivious transfer
zero knowledge
secure multiparty computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
25:1
25:24
10.4230/LIPIcs.ITCS.2023.25
article
Improved Monotonicity Testers via Hypercube Embeddings
Braverman, Mark
1
Khot, Subhash
2
Kindler, Guy
3
Minzer, Dor
4
Department of Computer Science, Princeton University, NJ, USA
Courant Institute of Mathematical Sciences, New York University, NY, USA
Engineering and Computer Science Department, The Hebrew University, Jerusalem, Israel
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA
We show improved monotonicity testers for the Boolean hypercube under the p-biased measure, as well as over the hypergrid [m]ⁿ. Our results are:
1) For any p ∈ (0,1), for the p-biased hypercube we show a non-adaptive tester that makes Õ(√n/ε²) queries, accepts monotone functions with probability 1 and rejects functions that are ε-far from monotone with probability at least 2/3.
2) For all m ∈ ℕ, we show an Õ(√nm³/ε²) query monotonicity tester over [m]ⁿ. We also establish corresponding directed isoperimetric inequalities in these domains, analogous to the isoperimetric inequality in [Subhash Khot et al., 2018]. Previously, the best known tester due to Black, Chakrabarty and Seshadhri [Hadley Black et al., 2018] had Õ(n^{5/6}) query complexity. Our results are optimal up to poly-logarithmic factors and the dependency on m.
Our proof uses a notion of monotone embeddings of measures into the Boolean hypercube that can be used to reduce the problem of monotonicity testing over arbitrary product domains to the Boolean cube. The embedding maps a function over a product domain of dimension n into a function over a Boolean cube of a larger dimension n', while preserving its distance from being monotone; an embedding is considered efficient if n' is not much larger than n, and we show how to construct efficient embeddings in the above-mentioned settings.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.25/LIPIcs.ITCS.2023.25.pdf
Property Testing
Monotonicity Testing
Isoperimetric Inequalities
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
26:1
26:30
10.4230/LIPIcs.ITCS.2023.26
article
Rounding via Low Dimensional Embeddings
Braverman, Mark
1
Minzer, Dor
2
Department of Computer Science, Princeton University, NJ, USA
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA
A regular graph G = (V,E) is an (ε,γ) small-set expander if for any set of vertices of fractional size at most ε, at least γ of the edges that are adjacent to it go outside. In this paper, we give a unified approach to several known complexity-theoretic results on small-set expanders. In particular, we show:
1) Max-Cut: we show that if a regular graph G = (V,E) is an (ε,γ) small-set expander that contains a cut of fractional size at least 1-δ, then one can find in G a cut of fractional size at least 1-O(δ/(εγ⁶)) in polynomial time.
2) Improved spectral partitioning, Cheeger’s inequality and the parallel repetition theorem over small-set expanders. The general form of each one of these results involves a square-root loss that comes from a certain rounding procedure, and we show how this can be avoided over small-set expanders. Our main idea is to project a high dimensional vector solution into a low-dimensional space while roughly maintaining 𝓁₂² distances, and then perform a pre-processing step using low-dimensional geometry and the properties of 𝓁₂² distances over it. This pre-processing leverages the small-set expansion property of the graph to transform a vector valued solution to a different vector valued solution with additional structural properties, which give rise to more efficient integral-solution rounding schemes.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.26/LIPIcs.ITCS.2023.26.pdf
Parallel Repetition
Small Set Expanders
Semi-Definite Programs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
27:1
27:14
10.4230/LIPIcs.ITCS.2023.27
article
Counting Subgraphs in Somewhere Dense Graphs
Bressan, Marco
1
Goldberg, Leslie Ann
2
Meeks, Kitty
3
Roth, Marc
2
Department of Computer Science, University of Milan, Italy
Department of Computer Science, University of Oxford, UK
School of Computing Science, University of Glasgow, UK
We study the problems of counting copies and induced copies of a small pattern graph H in a large host graph G. Recent work fully classified the complexity of those problems according to structural restrictions on the patterns H. In this work, we address the more challenging task of analysing the complexity for restricted patterns and restricted hosts. Specifically we ask which families of allowed patterns and hosts imply fixed-parameter tractability, i.e., the existence of an algorithm running in time f(H)⋅|G|^O(1) for some computable function f. Our main results present exhaustive and explicit complexity classifications for families that satisfy natural closure properties. Among others, we identify the problems of counting small matchings and independent sets in subgraph-closed graph classes 𝒢 as our central objects of study and establish the following crisp dichotomies as consequences of the Exponential Time Hypothesis:
- Counting k-matchings in a graph G ∈ 𝒢 is fixed-parameter tractable if and only if 𝒢 is nowhere dense.
- Counting k-independent sets in a graph G ∈ 𝒢 is fixed-parameter tractable if and only if 𝒢 is nowhere dense. Moreover, we obtain almost tight conditional lower bounds if 𝒢 is somewhere dense, i.e., not nowhere dense. These base cases of our classifications subsume a wide variety of previous results on the matching and independent set problem, such as counting k-matchings in bipartite graphs (Curticapean, Marx; FOCS 14), in F-colourable graphs (Roth, Wellnitz; SODA 20), and in degenerate graphs (Bressan, Roth; FOCS 21), as well as counting k-independent sets in bipartite graphs (Curticapean et al.; Algorithmica 19).
At the same time our proofs are much simpler: using structural characterisations of somewhere dense graphs, we show that a colourful version of a recent breakthrough technique for analysing pattern counting problems (Curticapean, Dell, Marx; STOC 17) applies to any subgraph-closed somewhere dense class of graphs, yielding a unified view of our current understanding of the complexity of subgraph counting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.27/LIPIcs.ITCS.2023.27.pdf
counting problems
somewhere dense graphs
parameterised complexity theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
28:1
28:29
10.4230/LIPIcs.ITCS.2023.28
article
Rigidity for Monogamy-Of-Entanglement Games
Broadbent, Anne
1
Culf, Eric
1
Department of Mathematics and Statistics, University of Ottawa, Canada
In a monogamy-of-entanglement (MoE) game, two players who do not communicate try to simultaneously guess a referee’s measurement outcome on a shared quantum state they prepared. We study the prototypical example of a game where the referee measures in either the computational or Hadamard basis and informs the players of her choice.
We show that this game satisfies a rigidity property similar to what is known for some nonlocal games. That is, in order to win optimally, the players' strategy must be of a specific form, namely a convex combination of four unentangled optimal strategies generated by the Breidbart state. We extend this to show that strategies that win near-optimally must also be near an optimal state of this form. We also show rigidity for multiple copies of the game played in parallel.
We give three applications: (1) We construct for the first time a weak string erasure (WSE) scheme where the security does not rely on limitations on the parties' hardware. Instead, we add a prover, which enables security via the rigidity of this MoE game. (2) We show that the WSE scheme can be used to achieve bit commitment in a model where it is impossible classically. (3) We achieve everlasting-secure randomness expansion in the model of trusted but leaky measurement and untrusted preparation and measurements by two isolated devices, while relying only on the temporary assumption of pseudorandom functions. This achieves randomness expansion without the need for shared entanglement.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.28/LIPIcs.ITCS.2023.28.pdf
Rigidity
Self-Testing
Monogamy-of-Entanglement Games
Bit Commitment
Randomness Expansion
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
29:1
29:1
10.4230/LIPIcs.ITCS.2023.29
article
Quantum Majority Vote
Buhrman, Harry
1
2
Linden, Noah
3
Mančinska, Laura
4
https://orcid.org/0000-0001-9727-4961
Montanaro, Ashley
5
3
https://orcid.org/0000-0001-5640-0343
Ozols, Maris
6
2
https://orcid.org/0000-0002-3238-8594
QuSoft, CWI, Amsterdam, The Netherlands
University of Amsterdam, The Netherlands
University of Bristol, UK
University of Copenhagen, Denmark
Phasecraft Ltd., Bristol, UK
QuSoft, Amsterdam, The Netherlands
Majority vote is a basic method for amplifying correct outcomes that is widely used in computer science and beyond. While it can amplify the correctness of a quantum device with classical output, the analogous procedure for quantum output is not known. We introduce quantum majority vote as the following task: given a product state |ψ_1⟩ ⊗ … ⊗ |ψ_n⟩ where each qubit is in one of two orthogonal states |ψ⟩ or |ψ^⟂⟩, output the majority state. We show that an optimal algorithm for this problem achieves worst-case fidelity of 1/2 + Θ(1/√n). Under the promise that at least 2/3 of the input qubits are in the majority state, the fidelity increases to 1 - Θ(1/n) and approaches 1 as n increases.
We also consider the more general problem of computing any symmetric and equivariant Boolean function f: {0,1}ⁿ → {0,1} in an unknown quantum basis, and show that a generalization of our quantum majority vote algorithm is optimal for this task. The optimal parameters for the generalized algorithm and its worst-case fidelity can be determined by a simple linear program of size O(n). The time complexity of the algorithm is O(n⁴ log n) where n is the number of input qubits.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.29/LIPIcs.ITCS.2023.29.pdf
quantum algorithms
quantum majority vote
Schur-Weyl duality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
30:1
30:40
10.4230/LIPIcs.ITCS.2023.30
article
TFNP Characterizations of Proof Systems and Monotone Circuits
Buss, Sam
1
Fleming, Noah
2
https://orcid.org/0000-0002-8636-1290
Impagliazzo, Russell
1
University of California, San Diego, CA, USA
Memorial University, St. John’s, Canada
Connections between proof complexity and circuit complexity have become major tools for obtaining lower bounds in both areas. These connections - which take the form of interpolation theorems and query-to-communication lifting theorems - translate efficient proofs into small circuits, and vice versa, allowing tools from one area to be applied to the other. Recently, the theory of TFNP has emerged as a unifying framework underlying these connections. For many of the proof systems which admit such a connection there is a TFNP problem which characterizes it: the class of problems which are reducible to this TFNP problem via query-efficient reductions is equivalent to the tautologies that can be efficiently proven in the system. Through this, proof complexity has become a major tool for proving separations in black-box TFNP. Similarly, for certain monotone circuit models, the class of functions that they can compute efficiently is equivalent to what can be reduced to a certain TFNP problem in a communication-efficient manner. When a TFNP problem has both a proof and circuit characterization, one can prove an interpolation theorem. Conversely, many lifting theorems can be viewed as relating the communication and query reductions to TFNP problems. This is exciting, as it suggests that TFNP provides a roadmap for the development of further interpolation theorems and lifting theorems.
In this paper we begin to develop a more systematic understanding of when these connections to TFNP occur. We give exact conditions under which a proof system or circuit model admits a characterization by a TFNP problem. We show:
- Every well-behaved proof system which can prove its own soundness (a reflection principle) is characterized by a TFNP problem. Conversely, every TFNP problem gives rise to a well-behaved proof system which proves its own soundness.
- Every well-behaved monotone circuit model which admits a universal family of functions is characterized by a TFNP problem. Conversely, every TFNP problem gives rise to a well-behaved monotone circuit model with a universal problem. As an example, we provide a TFNP characterization of the Polynomial Calculus, answering a question from [Mika Göös et al., 2022], and show that it can prove its own soundness.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.30/LIPIcs.ITCS.2023.30.pdf
Proof Complexity
Circuit Complexity
TFNP
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
31:1
31:24
10.4230/LIPIcs.ITCS.2023.31
article
Clustering Permutations: New Techniques with Streaming Applications
Chakraborty, Diptarka
1
Das, Debarati
2
Krauthgamer, Robert
3
National University of Singapore, Singapore
Pennsylvania State University, University Park, PA, USA
Weizmann Institute of Science, Rehovot, Israel
We study the classical metric k-median clustering problem over a set of input rankings (i.e., permutations), which has myriad applications, from social-choice theory to web search and databases. A folklore algorithm provides a 2-approximate solution in polynomial time for all k = O(1), and works irrespective of the underlying distance measure, so long as it is a metric; however, going below the 2-factor is a notorious challenge. We consider the Ulam distance, a variant of the well-known edit-distance metric, where strings are restricted to be permutations. For this metric, Chakraborty, Das, and Krauthgamer [SODA, 2021] provided a (2-δ)-approximation algorithm for k = 1, where δ ≈ 2^{-40}.
Our primary contribution is a new algorithmic framework for clustering a set of permutations. Our first result is a 1.999-approximation algorithm for the metric k-median problem under the Ulam metric, that runs in time (k log (nd))^{O(k)} nd³ for an input consisting of n permutations over [d]. In fact, our framework is powerful enough to extend this result to the streaming model (where the n input permutations arrive one by one) using only polylogarithmic (in n) space. Additionally, we show that similar results can be obtained even in the presence of outliers, which is presumably a more difficult problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.31/LIPIcs.ITCS.2023.31.pdf
Clustering
Approximation Algorithms
Ulam Distance
Rank Aggregation
Streaming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
32:1
32:24
10.4230/LIPIcs.ITCS.2023.32
article
Certificate Games
Chakraborty, Sourav
1
Gál, Anna
2
Laplante, Sophie
3
Mittal, Rajat
4
Sunny, Anupa
3
Indian Statistical Institute, Kolkata, India
University of Texas at Austin, TX, USA
Université Paris Cité, IRIF, France
IIT Kanpur, India
We introduce and study Certificate Game complexity, a measure of complexity based on the probability of winning a game where two players are given inputs with different function values and are asked to output some index i such that x_i≠ y_i, in a zero-communication setting.
We give upper and lower bounds for private coin, public coin, shared entanglement and non-signaling strategies, and give some separations. We show that complexity in the public coin model is upper bounded by randomized query and certificate complexity. On the other hand, it is lower bounded by fractional and randomized certificate complexity, making it a good candidate to prove strong lower bounds on randomized query complexity. Complexity in the private coin model is bounded from below by zero-error randomized query complexity. The quantum measure highlights an interesting and surprising difference between classical and quantum query models. Whereas the public coin certificate game complexity is bounded from above by randomized query complexity, the quantum certificate game complexity can be quadratically larger than quantum query complexity. We use non-signaling, a notion from quantum information, to give a lower bound of n on the quantum certificate game complexity of the OR function, whose quantum query complexity is Θ(√n), then go on to show that this "non-signaling bottleneck" applies to all functions with high sensitivity, block sensitivity or fractional block sensitivity.
We also consider the single-bit version of certificate games, where the inputs of the two players are restricted to having Hamming distance 1. We prove that the single-bit version of certificate game complexity with shared randomness is equal to sensitivity up to constant factors, thus giving a new characterization of sensitivity. On the other hand, the single-bit version of certificate game complexity with private randomness is equal to λ², where λ is the spectral sensitivity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.32/LIPIcs.ITCS.2023.32.pdf
block sensitivity
boolean function complexity
certificate complexity
query complexity
sensitivity
zero-communication two-player games
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
33:1
33:20
10.4230/LIPIcs.ITCS.2023.33
article
Lifting to Parity Decision Trees via Stifling
Chattopadhyay, Arkadev
1
Mande, Nikhil S.
2
Sanyal, Swagato
3
Sherif, Suhail
4
Tata Institute of Fundamental Research, Mumbai, India
QuSoft and CWI, Amsterdam, The Netherlands
Indian Institute of Technology, Kharagpur, India
Vector Institute, Toronto, Canada
We show that the deterministic decision tree complexity of a (partial) function or relation f lifts to the deterministic parity decision tree (PDT) size complexity of the composed function/relation f∘g as long as the gadget g satisfies a property that we call stifling. We observe that several simple gadgets of constant size, like Indexing on 3 input bits, Inner Product on 4 input bits, Majority on 3 input bits and random functions, satisfy this property. It can be shown that existing randomized communication lifting theorems ([Göös, Pitassi, Watson. SICOMP'20], [Chattopadhyay et al. SICOMP'21]) imply PDT-size lifting. However, there are two shortcomings of this approach: first, they lift the randomized decision tree complexity of f, which could be exponentially smaller than its deterministic counterpart when f is a partial function or even a total search problem. Second, the size of the gadgets in such lifting theorems is as large as logarithmic in the size of the input to f. Reducing the gadget size to a constant is an important open problem at the frontier of current research.
Our result shows that even a random constant-size gadget does enable lifting to PDT size. Further, it also yields the first systematic way of turning lower bounds on the width of tree-like resolution proofs of the unsatisfiability of constant-width CNF formulas to lower bounds on the size of tree-like proofs in the resolution with parity system, i.e., Res(⊕), of the unsatisfiability of closely related constant-width CNF formulas.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.33/LIPIcs.ITCS.2023.33.pdf
Decision trees
parity decision trees
lifting theorems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
34:1
34:15
10.4230/LIPIcs.ITCS.2023.34
article
New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method
Chen, Lijie
1
Miller Institute for Basic Research in Science at University of California, Berkeley, CA, USA
In this paper, we obtain several new results on lower bounds and derandomization for ACC⁰ circuits (constant-depth circuits consisting of AND/OR/MOD_m gates for a fixed constant m, a frontier class in circuit complexity):
1) We prove that any polynomial-time Merlin-Arthur proof system with an ACC⁰ verifier (denoted by MA_{ACC⁰}) can be simulated by a nondeterministic proof system with quasi-polynomial running time and polynomial proof length, on infinitely many input lengths. This improves the previous simulation by [Chen, Lyu, and Williams, FOCS 2020], which requires both quasi-polynomial running time and proof length.
2) We show that MA_{ACC⁰} cannot be computed by fixed-polynomial-size ACC⁰ circuits, and our hard languages are hard on a sufficiently dense set of input lengths.
3) We show that NEXP (nondeterministic exponential-time) does not have ACC⁰ circuits of sub-half-exponential size, improving the previous sub-third-exponential size lower bound for NEXP against ACC⁰ by [Williams, J. ACM 2014].
Combining our first and second results gives a conceptually simpler and derandomization-centric proof of the recent breakthrough result NQP := NTIME[2^polylog(n)] ⊄ ACC⁰ by [Murray and Williams, SICOMP 2020]: Instead of going through an easy witness lemma as they did, we first prove an ACC⁰ lower bound for a subclass of MA, and then derandomize that subclass into NQP, while retaining its hardness against ACC⁰.
Moreover, since our derandomization of MA_{ACC⁰} achieves a polynomial proof length, we indeed prove that nondeterministic quasi-polynomial-time with n^ω(1) nondeterminism bits (denoted as NTIMEGUESS[2^polylog(n), n^ω(1)]) has no poly(n)-size ACC⁰ circuits, giving a new proof of a result by Vyas. Combining with a win-win argument based on randomized encodings from [Chen and Ren, STOC 2020], we also prove that NTIMEGUESS[2^polylog(n), n^ω(1)] cannot be 1/2+1/poly(n)-approximated by poly(n)-size ACC⁰ circuits, improving the recent strongly average-case lower bounds for NQP against ACC⁰ by [Chen and Ren, STOC 2020].
One interesting technical ingredient behind our second result is the construction of a PSPACE-complete language that is paddable, downward self-reducible, same-length checkable, and weakly error correctable. Moreover, all its reducibility properties have corresponding AC⁰[2] non-adaptive oracle circuits. Our construction builds and improves upon similar constructions from [Trevisan and Vadhan, Complexity 2007] and [Chen, FOCS 2019], which all require at least TC⁰ oracle circuits for implementing these properties.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.34/LIPIcs.ITCS.2023.34.pdf
Circuit Lower Bounds
Derandomization
Algorithmic Method
ACC
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
35:1
35:24
10.4230/LIPIcs.ITCS.2023.35
article
Black-Box Constructive Proofs Are Unavoidable
Chen, Lijie
1
https://orcid.org/0000-0002-6084-4729
Williams, Ryan
2
https://orcid.org/0000-0003-2326-2233
Yang, Tianqi
3
https://orcid.org/0000-0001-9476-6880
Miller Institute for Basic Research in Science, UC Berkeley, CA, USA
CSAIL, MIT, Cambridge, MA, USA
IIIS, Tsinghua University, Beijing, China
Following Razborov and Rudich, a "natural property" for proving a circuit lower bound satisfies three axioms: constructivity, largeness, and usefulness. In 2013, Williams proved that for any reasonable circuit class C, NEXP ⊂ C is equivalent to the existence of a constructive property useful against C. Here, a property is constructive if it can be decided in poly(N) time, where N = 2ⁿ is the length of the truth-table of the given n-input function.
Recently, Fan, Li, and Yang initiated the study of black-box natural properties, which require a much stronger notion of constructivity, called black-box constructivity: the property should be decidable in randomized polylog(N) time, given oracle access to the n-input function. They showed that most proofs based on random restrictions yield black-box natural properties, and demonstrated limitations on what black-box natural properties can prove.
In this paper, perhaps surprisingly, we prove that the equivalence of Williams holds even with this stronger notion of black-box constructivity: for any reasonable circuit class C, NEXP ⊂ C is equivalent to the existence of a black-box constructive property useful against C. The main technical ingredient in proving this equivalence is a smooth, strong, and locally-decodable probabilistically checkable proof (PCP), which we construct based on a recent work by Paradise. As a by-product, we show that average-case witness lower bounds for PCP verifiers follow from NEXP lower bounds.
We also show that randomness is essential in the definition of black-box constructivity: we unconditionally prove that there is no deterministic polylog(N)-time constructive property that is useful against even polynomial-size AC⁰ circuits.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.35/LIPIcs.ITCS.2023.35.pdf
Circuit lower bounds
natural proofs
probabilistically checkable proofs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
36:1
36:21
10.4230/LIPIcs.ITCS.2023.36
article
Necessary Conditions in Multi-Server Differential Privacy
Cheu, Albert
1
https://orcid.org/0000-0002-4812-7081
Yan, Chao
1
https://orcid.org/0000-0001-6482-6643
Department of Computer Science, Georgetown University, Washington D. C., USA
We consider protocols where users communicate with multiple servers to perform a computation on the users' data. An adversary exerts semi-honest control over many of the parties but its view is differentially private with respect to honest users. Prior work described protocols that required multiple rounds of interaction or offered privacy against a computationally bounded adversary. Our work presents limitations of non-interactive protocols that offer privacy against unbounded adversaries. We prove that these protocols require exponentially more samples than centrally private counterparts to solve some learning, testing, and estimation tasks. This means that sample efficiency demands interactivity or computational differential privacy, or both.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.36/LIPIcs.ITCS.2023.36.pdf
Differential Privacy
Parity Learning
Multi-server
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
37:1
37:22
10.4230/LIPIcs.ITCS.2023.37
article
Quantum Algorithms and the Power of Forgetting
Childs, Andrew M.
1
2
3
https://orcid.org/0000-0002-9903-837X
Coudron, Matthew
1
4
2
https://orcid.org/0000-0002-3296-4723
Gilani, Amin Shiraz
1
2
https://orcid.org/0000-0003-0859-7254
Joint Center for Quantum Information and Computer Science, College Park, MD, USA
Department of Computer Science, University of Maryland, College Park, MD, USA
Institute for Advanced Computer Studies, University of Maryland, MD, USA
National Institute of Standards and Technology, Gaithersburg, MD, USA
The so-called welded tree problem provides an example of a black-box problem that can be solved exponentially faster by a quantum walk than by any classical algorithm [Andrew M. Childs et al., 2003]. Given the name of a special entrance vertex, a quantum walk can find another distinguished exit vertex using polynomially many queries, though without finding any particular path from entrance to exit. It has been an open problem for twenty years whether there is an efficient quantum algorithm for finding such a path, or if the path-finding problem is hard even for quantum computers. We show that a natural class of efficient quantum algorithms provably cannot find a path from entrance to exit. Specifically, we consider algorithms that, within each branch of their superposition, always store a set of vertex labels that form a connected subgraph including the entrance, and that only provide these vertex labels as inputs to the oracle. While this does not rule out the possibility of a quantum algorithm that efficiently finds a path, it is unclear how an algorithm could benefit by deviating from this behavior. Our no-go result suggests that, for some problems, quantum algorithms must necessarily forget the path they take to reach a solution in order to outperform classical computation.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.37/LIPIcs.ITCS.2023.37.pdf
Quantum algorithms
quantum query complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
38:1
38:23
10.4230/LIPIcs.ITCS.2023.38
article
A New Conjecture on Hardness of 2-CSP’s with Implications to Hardness of Densest k-Subgraph and Other Problems
Chuzhoy, Julia
1
Dalirrooyfard, Mina
2
Grinberg, Vadim
3
Tan, Zihan
4
Toyota Technological Institute at Chicago, IL, USA
Massachusetts Institute of Technology, Cambridge, MA, USA
Weizmann Institute of Science, Rehovot, Israel
DIMACS, Rutgers University, New Brunswick, NJ, USA
We propose a new conjecture on hardness of 2-CSP’s, and show that new hardness of approximation results for Densest k-Subgraph and several other problems, including a graph partitioning problem, and a variation of the Graph Crossing Number problem, follow from this conjecture. The conjecture can be viewed as occupying a middle ground between the d-to-1 conjecture, and hardness results for 2-CSP’s that can be obtained via standard techniques, such as Parallel Repetition combined with standard 2-prover protocols for the 3SAT problem. We hope that this work will motivate further exploration of hardness of 2-CSP’s in the regimes arising from the conjecture. We believe that a positive resolution of the conjecture will provide a good starting point for other hardness of approximation proofs.
Another contribution of our work is proving that the problems that we consider are roughly equivalent from the approximation perspective. Some of these problems arose in previous work, from which it appeared that they may be related to each other. We formalize this relationship in this work.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.38/LIPIcs.ITCS.2023.38.pdf
Hardness of Approximation
Densest k-Subgraph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
39:1
39:23
10.4230/LIPIcs.ITCS.2023.39
article
Generalized Private Selection and Testing with High Confidence
Cohen, Edith
1
2
Lyu, Xin
3
1
Nelson, Jelani
3
1
Sarlós, Tamás
1
Stemmer, Uri
2
4
Google Research, Mountain View, CA, USA
Tel Aviv University, Israel
UC Berkeley, CA, USA
Google Research, Herzliya, Israel
Composition theorems are general and powerful tools that facilitate privacy accounting across multiple data accesses from per-access privacy bounds. However, they often result in weaker bounds compared with end-to-end analysis. Two popular tools that mitigate this are the exponential mechanism (or report noisy max) and the sparse vector technique, generalized in a recent private selection framework by Liu and Talwar (STOC 2019). In this work, we propose a flexible framework of private selection and testing that generalizes the one proposed by Liu and Talwar, supporting a wide range of applications. We apply our framework to solve several fundamental tasks, including query releasing, top-k selection, and stable selection, with improved confidence-accuracy tradeoffs. Additionally, for online settings, we apply our private testing to design a mechanism for adaptive query releasing, which improves the sample complexity dependence on the confidence parameter for the celebrated private multiplicative weights algorithm of Hardt and Rothblum (FOCS 2010).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.39/LIPIcs.ITCS.2023.39.pdf
differential privacy
sparse vector technique
adaptive data analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
40:1
40:18
10.4230/LIPIcs.ITCS.2023.40
article
Exact Completeness of LP Hierarchies for Linear Codes
Coregliano, Leonardo Nagami
1
https://orcid.org/0000-0001-7189-8565
Jeronimo, Fernando Granha
1
Jones, Chris
2
Institute for Advanced Study, Princeton, NJ, USA
University of Chicago, IL, USA
Determining the maximum size A₂(n,d) of a binary code of blocklength n and distance d remains an elusive open question even when restricted to the important class of linear codes. Recently, two linear programming hierarchies extending Delsarte’s LP were independently proposed to upper bound A₂^{Lin}(n,d) (the analogue of A₂(n,d) for linear codes). One of these hierarchies, by the authors, was shown to be approximately complete in the sense that the hierarchy converges to A₂^{Lin}(n,d) as the level grows beyond n². Despite some structural similarities, not even approximate completeness was known for the other hierarchy by Loyfer and Linial.
In this work, we prove that both hierarchies recover the exact value of A₂^{Lin}(n,d) at level n. We also prove that at this level the polytope of Loyfer and Linial is integral. Even though these hierarchies seem less powerful than general hierarchies such as Sum-of-Squares, we show that they have enough structure to yield exact completeness via pseudoprobabilities.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.40/LIPIcs.ITCS.2023.40.pdf
LP bound
linear codes
Delsarte’s LP
combinatorial polytopes
pseudoexpectation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
41:1
41:23
10.4230/LIPIcs.ITCS.2023.41
article
HappyMap : A Generalized Multicalibration Method
Deng, Zhun
1
Dwork, Cynthia
2
Zhang, Linjun
3
https://orcid.org/0000-0002-8309-7164
Department of Computer Science, Columbia University, New York, NY, USA
Department of Computer Science, Harvard University, Cambridge, MA, USA
Department of Statistics, Rutgers University, Piscataway, NJ, USA
Multicalibration is a powerful and evolving concept originating in the field of algorithmic fairness. For a predictor f that estimates the outcome y given covariates x, and for a function class C, multi-calibration requires that the predictor f(x) and outcome y are indistinguishable under the class of auditors in C. Fairness is captured by incorporating demographic subgroups into the class of functions C. Recent work has shown that, by enriching the class C to incorporate appropriate propensity re-weighting functions, multi-calibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen, future, target domains (approximately) captured by the re-weightings.
Formally, multicalibration with respect to C bounds |𝔼_{(x,y)∼D}[c(f(x),x)⋅(f(x)-y)]| for all c ∈ C. In this work, we view the term (f(x)-y) as just one specific mapping, and explore the power of an enriched class of mappings. We propose s-Happy Multicalibration, a generalization of multi-calibration, which yields a wide range of new applications, including a new fairness notion for uncertainty quantification, a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data, while also yielding a unified understanding of several existing seemingly disparate algorithmic fairness notions and target-independent learning approaches.
We give a single HappyMap meta-algorithm that captures all these results, together with a sufficiency condition for its success.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.41/LIPIcs.ITCS.2023.41.pdf
algorithmic fairness
target-independent learning
transfer learning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
42:1
42:18
10.4230/LIPIcs.ITCS.2023.42
article
Bit Complexity of Jordan Normal Form and Polynomial Spectral Factorization
Dey, Papri
1
Kannan, Ravi
2
Ryder, Nick
3
Srivastava, Nikhil
4
Georgia Tech, Atlanta, GA, USA
Microsoft Research, Bangalore, India
OpenAI, San Francisco, CA, USA
UC Berkeley, CA, USA
We study the bit complexity of two related fundamental computational problems in linear algebra and control theory. Our results are: (1) An Õ(n^{ω+3}a+n⁴a²+n^ωlog(1/ε)) time algorithm for finding an ε-approximation to the Jordan Normal form of an integer matrix with a-bit entries, where ω is the exponent of matrix multiplication. (2) An Õ(n⁶d⁶a+n⁴d⁴a²+n³d³log(1/ε)) time algorithm for ε-approximately computing the spectral factorization P(x) = Q^*(x)Q(x) of a given monic n× n rational matrix polynomial of degree 2d with rational a-bit coefficients having a-bit common denominators, which satisfies P(x)⪰0 for all real x. The first algorithm is used as a subroutine in the second one.
Despite the central importance of these problems, polynomial complexity bounds were not previously known for spectral factorization, and for Jordan form the previous best running time was an unspecified polynomial in n of degree at least twelve [Cai, 1994]. Our algorithms are simple and judiciously combine techniques from numerical and symbolic computation, yielding significant advantages over either approach by itself.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.42/LIPIcs.ITCS.2023.42.pdf
Symbolic algorithms
numerical algorithms
linear algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
43:1
43:19
10.4230/LIPIcs.ITCS.2023.43
article
Constant-Depth Sorting Networks
Dobrokhotova-Maikova, Natalia
1
Kozachinskiy, Alexander
2
3
https://orcid.org/0000-0002-9956-9023
Podolskii, Vladimir
4
5
https://orcid.org/0000-0001-7154-138X
Yandex, Moscow, Russia
Institute for Mathematical and Computational Engineering, Universidad Católica de Chile, Santiago, Chile
IMFD & CENIA Chile, Santiago, Chile
Courant Institute of Mathematical Sciences, New York University, NY, USA
Steklov Mathematical Institute of Russian Academy of Sciences, Moscow, Russia
In this paper, we address sorting networks that are constructed from comparators of arity k > 2. That is, in our setting the arity of the comparators - or, in other words, the number of inputs that can be sorted at unit cost - is a parameter. We study its relationship with two other parameters - n, the number of inputs, and d, the depth.
This model has received considerable attention. Partly, its motivation is to better understand the structure of sorting networks. In particular, sorting networks with large arity are related to recursive constructions of ordinary sorting networks. Additionally, studies of this model have a natural correspondence with a recent line of work on constructing circuits for majority functions from majority gates of lower fan-in.
Motivated by these questions, we initiate the study of lower bounds for constant-depth sorting networks. More precisely, we consider sorting networks of constant depth d and estimate the minimal k for which there is such a network with comparators of arity k. We prove tight lower bounds for d ≤ 4. More precisely, for depths d = 1,2 we observe that k = n. For d = 3 we show that k = ⌈n/2⌉. As our main result, we show that for d = 4 the minimal arity becomes sublinear: k = Θ(n^{2/3}). This contrasts with the case of majority circuits, in which k = O(n^{2/3}) is achievable already for depth d = 3. To prove these results, we develop a new combinatorial technique based on the notion of access to cells of a sorting network.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.43/LIPIcs.ITCS.2023.43.pdf
Sorting networks
constant depth
lower bounds
threshold circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
44:1
44:21
10.4230/LIPIcs.ITCS.2023.44
article
Rigidity in Mechanism Design and Its Applications
Dobzinski, Shahar
1
Shaulker, Ariel
1
Weizmann Institute of Science, Rehovot, Israel
We introduce the notion of rigidity in auction design and use it to analyze some fundamental aspects of mechanism design. We focus on the setting of a single-item auction where the values of the bidders are drawn from some (possibly correlated) distribution F. Let f be the allocation function of an optimal mechanism for F. Informally, S is (linearly) rigid in F if for every mechanism M' with an allocation function f' where f and f' agree on the allocation of at most x-fraction of the instances of S, it holds that the expected revenue of M' is at most an x fraction of the optimal revenue.
We start with using rigidity to explain the singular success of Cremer and McLean’s auction assuming interim individual rationality. Recall that the revenue of Cremer and McLean’s auction is the optimal welfare if the distribution obeys a certain "full rank" condition, but no analogous constructions are known if this condition does not hold. We show that the allocation function of the Cremer and McLean auction has logarithmic (in the size of the support) Kolmogorov complexity, whereas we use rigidity to show that there exist distributions that do not obey the full rank condition for which the allocation function of every mechanism that provides a constant approximation is almost linear.
We further investigate rigidity assuming different notions of individual rationality. Assuming ex-post individual rationality, if there exists a rigid set then the structure of the optimal mechanism is relatively simple: the player with the highest value "usually" wins the item and contributes most of the revenue. In contrast, assuming interim individual rationality, there are distributions with a rigid set S where the optimal mechanism has no obvious allocation pattern (in the sense that its Kolmogorov complexity is high). Since the existence of rigid sets essentially implies that the hands of the designer are tied, our results help explain why we have little hope of developing good, simple and generic approximation mechanisms in the interim individual rationality world.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.44/LIPIcs.ITCS.2023.44.pdf
Revenue Maximization
Auctions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
45:1
45:24
10.4230/LIPIcs.ITCS.2023.45
article
Beeping Shortest Paths via Hypergraph Bipartite Decomposition
Dufoulon, Fabien
1
https://orcid.org/0000-0003-2977-4109
Emek, Yuval
2
https://orcid.org/0000-0002-3123-3451
Gelles, Ran
3
https://orcid.org/0000-0003-3615-3239
Department of Computer Science, University of Houston, TX, USA
Technion, Haifa, Israel
Bar-Ilan University, Ramat-Gan, Israel
Constructing a shortest path between two network nodes is a fundamental task in distributed computing. This work develops schemes for the construction of shortest paths in randomized beeping networks between a predetermined source node and an arbitrary set of destination nodes. Our first scheme constructs a (single) shortest path to an arbitrary destination in O(D log log n + log³ n) rounds with high probability. Our second scheme constructs multiple shortest paths, one per each destination, in O(D log² n + log³ n) rounds with high probability.
Our schemes are based on a reduction of the above shortest path construction tasks to a decomposition of hypergraphs into bipartite hypergraphs: We develop a beeping procedure that partitions the hyperedge set of a hypergraph H = (V_H, E_H) into k = Θ (log² n) disjoint subsets F₁ ∪ ⋯ ∪ F_k = E_H such that the (sub-)hypergraph (V_H, F_i) is bipartite in the sense that there exists a vertex subset U ⊆ V_H such that |U ∩ e| = 1 for every e ∈ F_i. This procedure turns out to be instrumental in speeding up shortest path constructions under the beeping model.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.45/LIPIcs.ITCS.2023.45.pdf
Beeping Networks
Shortest Paths
Hypergraph Bipartite Decomposition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
46:1
46:20
10.4230/LIPIcs.ITCS.2023.46
article
Noisy Radio Network Lower Bounds via Noiseless Beeping Lower Bounds
Efremenko, Klim
1
Kol, Gillat
2
Paramonov, Dmitry
2
Saxena, Raghuvansh R.
3
Ben-Gurion University, Beer Sheva, Israel
Princeton University, NJ, USA
Microsoft, Cambridge, MA, USA
Much of today’s communication is carried out over large wireless systems with different input-output behaviors. In this work, we compare the power of central abstractions of wireless communication through the general notion of boolean symmetric f-channels: In every round of the f-channel, each of its n parties decides to either broadcast or not, and the channel outputs f(m), where m is the number of broadcasting parties.
Our first result is that the well-studied beeping channel, where f is the threshold-1 function, is not stronger than any other f-channel. To this end, we design a protocol over the f-channel and prove that any protocol that simulates it over the beeping channel blows up the round complexity by a factor of Ω(log n). Our lower bound technique may be of independent interest, as it essentially generalizes the popular fooling set technique by exploiting a "local" relaxation of combinatorial rectangles.
Curiously, while this result shows the limitations of a noiseless channel, namely, the beeping channel, we are able to use it to show the limitations of the noisy version of many other channels. This includes the extensively studied single-hop radio network model with collisions-as-silence (CAS), which is equivalent to the f-channel with f(m) = 1 iff m = 1.
In particular, our second and main result, obtained from the first, shows that converting CAS protocols to noise-resilient ones may incur a large performance overhead, i.e., no constant-rate interactive code exists. To this end, we design a CAS protocol and prove that any protocol that simulates it over the noisy CAS model with correlated stochastic noise blows up the round complexity by a factor of Ω(log n). We mention that the Ω(log n) overhead in both our results is tight.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.46/LIPIcs.ITCS.2023.46.pdf
Beeping Model
Radio Networks
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
47:1
47:21
10.4230/LIPIcs.ITCS.2023.47
article
Asymptotically Tight Bounds on the Time Complexity of Broadcast and Its Variants in Dynamic Networks
El-Hayek, Antoine
1
https://orcid.org/0000-0003-4268-7368
Henzinger, Monika
1
https://orcid.org/0000-0002-5008-6530
Schmid, Stefan
2
3
https://orcid.org/0000-0002-7798-1711
Faculty of Computer Science, Universität Wien, Austria
TU Berlin, Germany
Fraunhofer SIT, Darmstadt, Germany
Data dissemination is a fundamental task in distributed computing. This paper studies broadcast problems in various innovative models where the communication network connecting n processes is dynamic (e.g., due to mobility or failures) and controlled by an adversary.
In the first model, the processes transitively communicate their ids in synchronous rounds along a rooted tree given in each round by the adversary, whose goal is to maximize the number of rounds until at least one id is known by all processes. Previous research has shown a ⌈(3n-1)/2⌉-2 lower bound and an O(n log log n) upper bound. We show the first linear upper bound for this problem, namely ⌈(1+√2) n-1⌉ ≈ 2.4n.
We extend these results to the setting where the adversary gives in each round k disjoint forests, and its goal is to maximize the number of rounds until there is a set of k ids such that each process knows of at least one of them. We give a ⌈3(n-k)/2⌉-1 lower bound and a (π²+6)/6 n+1 ≈ 2.6n upper bound for this problem.
Finally, we study the setting where the adversary gives in each round a directed graph with k roots, and its goal is to maximize the number of rounds until there exist k ids that are known by all processes. We give a ⌈3(n-3k)/2⌉+2 lower bound and a ⌈(1+√2)n⌉+k-1 ≈ 2.4n+k upper bound for this problem.
For the two latter problems no upper or lower bounds were previously known.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.47/LIPIcs.ITCS.2023.47.pdf
broadcast
cover
k-broadcast
dynamic radius
dynamic graphs
oblivious message adversary
time complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
48:1
48:24
10.4230/LIPIcs.ITCS.2023.48
article
Differentially Private Continual Releases of Streaming Frequency Moment Estimations
Epasto, Alessandro
1
Mao, Jieming
1
Medina, Andres Munoz
1
Mirrokni, Vahab
1
Vassilvitskii, Sergei
1
Zhong, Peilin
1
Google, New York, NY, USA
The streaming model of computation is a popular approach for working with large-scale data. In this setting, there is a stream of items and the goal is to compute the desired quantities (usually data statistics) while making a single pass through the stream and using as little space as possible.
Motivated by the importance of data privacy, we develop differentially private streaming algorithms under the continual release setting, where the union of outputs of the algorithm at every timestamp must be differentially private. Specifically, we study the fundamental 𝓁_p (p ∈ [0,+∞)) frequency moment estimation problem under this setting, and give an ε-DP algorithm that achieves (1+η)-relative approximation (∀ η ∈ (0,1)) with polylog(Tn) additive error and uses polylog(Tn)⋅ max(1, n^{1-2/p}) space, where T is the length of the stream and n is the size of the universe of elements. Our space is near-optimal up to poly-logarithmic factors even in the non-private setting.
To obtain our results, we first reduce several primitives under the differentially private continual release model, such as counting distinct elements, heavy hitters, and counting low-frequency elements, to simpler counting/summing problems in the same setting. Based on these primitives, we develop a differentially private continual release level set estimation approach to address the 𝓁_p frequency moment estimation problem.
We also provide a simple extension of our results to the harder sliding window model, where the statistics must be maintained over the past W data items.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.48/LIPIcs.ITCS.2023.48.pdf
Differential Privacy
Continual Release
Sliding Window
Streaming Algorithms
Distinct Elements
Frequency Moment Estimation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
49:1
49:6
10.4230/LIPIcs.ITCS.2023.49
article
A Subpolynomial-Time Algorithm for the Free Energy of One-Dimensional Quantum Systems in the Thermodynamic Limit
Fawzi, Hamza
1
Fawzi, Omar
2
Scalet, Samuel O.
1
Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK
Univ Lyon, Inria, ENS Lyon, UCBL, LIP, Lyon, France
We introduce a classical algorithm to approximate the free energy of local, translation-invariant, one-dimensional quantum systems in the thermodynamic limit of infinite chain size. While the ground state problem (i.e., the free energy at temperature T = 0) for these systems is expected to be computationally hard even for quantum computers, our algorithm runs for any fixed temperature T > 0 in subpolynomial time, i.e., in time O((1/ε)^c) for any constant c > 0 where ε is the additive approximation error. Previously, the best known algorithm had a runtime that is polynomial in 1/ε where the degree of the polynomial is exponential in the inverse temperature 1/T. Our algorithm is also particularly simple as it reduces to the computation of the spectral radius of a linear map. This linear map has an interpretation as a noncommutative transfer matrix and has been studied previously to prove results on the analyticity of the free energy and the decay of correlations. We also show that the corresponding eigenvector of this map gives an approximation of the marginal of the Gibbs state and thereby allows for the computation of various thermodynamic properties of the quantum system.
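The computational core that the abstract reduces to, computing the spectral radius of a linear map, can be illustrated with generic power iteration. The plain 2×2 matrix below is a stand-in of this sketch (the paper's noncommutative transfer matrix acts on operators), but the iteration is the same idea.

```python
import math

def spectral_radius(apply_map, dim, iters=500):
    """Estimate the spectral radius of a linear map by power iteration."""
    # Fixed generic start vector (assumed not orthogonal to the top eigenvector).
    v = [1.0] + [0.0] * (dim - 1)
    lam = 0.0
    for _ in range(iters):
        w = apply_map(v)
        lam = math.sqrt(sum(x * x for x in w))  # ||A v|| for (nearly) unit v
        v = [x / lam for x in w]                # renormalize
    return lam

# Symmetric example with eigenvalues 3 and 1, so the spectral radius is 3.
A = [[2.0, 1.0], [1.0, 2.0]]
matvec = lambda v: [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
print(round(spectral_radius(matvec, 2), 6))  # 3.0
```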
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.49/LIPIcs.ITCS.2023.49.pdf
One-dimensional quantum systems
Free energy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
50:1
50:13
10.4230/LIPIcs.ITCS.2023.50
article
Expander Decomposition in Dynamic Streams
Filtser, Arnold
1
Kapralov, Michael
2
Makarov, Mikhail
2
Bar-Ilan University, Ramat-Gan, Israel
EPFL, Lausanne, Switzerland
In this paper we initiate the study of expander decompositions of a graph G = (V, E) in the streaming model of computation. The goal is to find a partitioning 𝒞 of vertices V such that the subgraphs of G induced by the clusters C ∈ 𝒞 are good expanders, while the number of intercluster edges is small. Expander decompositions are classically constructed by recursively applying balanced sparse cuts to the input graph. In this paper we give the first implementation of such a recursive sparsest cut process using small space in the dynamic streaming model.
Our main algorithmic tool is a new type of cut sparsifier that we refer to as a power cut sparsifier - it preserves cuts in any given vertex induced subgraph (or, any cluster in a fixed partition of V) to within a (δ, ε)-multiplicative/additive error with high probability. The power cut sparsifier uses Õ(n/εδ) space and edges, which we show is asymptotically tight up to polylogarithmic factors in n for constant δ.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.50/LIPIcs.ITCS.2023.50.pdf
Streaming
expander decomposition
graph sparsifiers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
51:1
51:22
10.4230/LIPIcs.ITCS.2023.51
article
On Flipping the Fréchet Distance
Filtser, Omrit
1
https://orcid.org/0000-0002-3978-1428
Goswami, Mayank
2
Mitchell, Joseph S. B.
3
Polishchuk, Valentin
4
The Open University of Israel, Ra'anana, Israel
Queens College CUNY, Flushing, NY, USA
Stony Brook University, NY, USA
Linköping University, Sweden
The classical and extensively-studied Fréchet distance between two curves is defined as an inf max, where the infimum is over all traversals of the curves, and the maximum is over all concurrent positions of the two agents. In this article we investigate a "flipped" Fréchet measure defined by a sup min - the supremum is over all traversals of the curves, and the minimum is over all concurrent positions of the two agents. This measure produces a notion of "social distance" between two curves (or general domains), where agents traverse curves while trying to stay as far apart as possible.
We first study the flipped Fréchet measure between two polygonal curves in one and two dimensions, providing conditional lower bounds and matching algorithms. We then consider this measure on polygons, where it denotes the minimum distance that two agents can maintain while restricted to travel in or on the boundary of the same polygon. We investigate several variants of the problem in this setting, for some of which we provide linear time algorithms. Finally, we consider this measure on graphs.
We draw connections between our proposed flipped Fréchet measure and existing related work in computational geometry, hoping that our new measure may spawn investigations akin to those performed for the Fréchet distance, and into further interesting problems that arise.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.51/LIPIcs.ITCS.2023.51.pdf
curves
polygons
distancing measure
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
52:1
52:1
10.4230/LIPIcs.ITCS.2023.52
article
Budget Pacing in Repeated Auctions: Regret and Efficiency Without Convergence
Gaitonde, Jason
1
Li, Yingkai
2
Light, Bar
3
Lucier, Brendan
4
Slivkins, Aleksandrs
3
Department of Computer Science, Cornell University, Ithaca, NY, USA
Cowles Foundation, Yale University, New Haven, CT, USA
Microsoft Research New York City, NY, USA
Microsoft Research New England, Cambridge, MA, USA
Online advertising via auctions increasingly dominates the marketing landscape. A typical advertiser may participate in thousands of auctions each day with bids tailored to a variety of signals about user demographics and intent. These auctions are strategically linked through a global budget constraint. To help address the difficulty of bidding, many major online platforms now provide automated budget management via a flexible approach called budget pacing: rather than bidding directly, an advertiser specifies a global budget target and a maximum willingness-to-pay for different types of advertising opportunities. The specified maximums are then scaled down (or "paced") by a multiplier so that the realized total spend matches the target budget.
These automated bidders are now near-universally adopted across all mature advertising platforms, raising pressing questions about market outcomes that arise when advertisers use budget pacing simultaneously. In this paper we study the aggregate welfare and individual regret guarantees of dynamic pacing algorithms in repeated auctions with budgets. We show that when agents simultaneously use a natural form of gradient-based pacing, the liquid welfare obtained over the course of the dynamics is at least half the optimal liquid welfare obtainable by any allocation rule, matching the best possible bound for static auctions even in pure Nash equilibria [Aggarwal et al., WINE 2019; Babaioff et al., ITCS 2021]. In contrast to prior work, these results hold without requiring convergence of the dynamics, circumventing known computational obstacles of finding equilibria [Chen et al., EC 2021]. Our result is robust to the correlation structure among agents' valuations and holds for any core auction, a broad class that includes first-price, second-price, and GSP auctions. We complement the aggregate guarantees by showing that an agent using such pacing algorithms achieves an O(T^{3/4}) regret relative to the value obtained by the best fixed pacing multiplier in hindsight in stochastic bidding environments. Compared to past work, this result applies to more general auctions and extends to adversarial settings with respect to dynamic regret.
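A toy sketch of the gradient-based pacing dynamic studied here may help fix ideas; the function name, the dual-gradient update, and the first-price "always win" stand-in for the auction outcome are assumptions of this illustration, not the paper's exact algorithm.

```python
def paced_bids(values, budget, eta=0.05):
    """Shade bids by a pacing multiplier updated with gradient steps."""
    T = len(values)
    rho = budget / T          # target spend per round
    mu = 0.0                  # pacing multiplier (dual variable)
    bids = []
    for v in values:
        bid = v / (1.0 + mu)  # paced bid: maximum willingness-to-pay, scaled down
        # Stand-in for the auction outcome: assume a first-price win each round;
        # real dynamics depend on the competing (simultaneously paced) bids.
        spend = bid
        # Dual gradient step: raise mu when overspending the per-round target,
        # clipped at 0 so the multiplier never boosts bids above the maximum.
        mu = max(0.0, mu + eta * (spend - rho))
        bids.append(bid)
    return bids
```

Under this stand-in, with constant values and a budget of half the total value, the multiplier climbs until realized spend matches the target, i.e., bids settle near half the stated value.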
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.52/LIPIcs.ITCS.2023.52.pdf
repeated auctions with budgets
pacing
learning in auctions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
53:1
53:23
10.4230/LIPIcs.ITCS.2023.53
article
Quantum Space, Ground Space Traversal, and How to Embed Multi-Prover Interactive Proofs into Unentanglement
Gharibian, Sevag
1
https://orcid.org/0000-0002-9992-3379
Rudolph, Dorian
1
https://orcid.org/0000-0002-2440-7388
Universität Paderborn, Germany
A celebrated result in classical complexity theory is Savitch’s theorem, which states that non-deterministic polynomial-space computations (NPSPACE) can be simulated by deterministic polynomial-space computations (PSPACE). In this work, we initiate the study of a quantum analogue of NPSPACE, denoted Streaming-QCMASPACE (SQCMASPACE), in which an exponentially long classical proof is streamed to a poly-space quantum verifier. We first show that a quantum analogue of Savitch’s theorem is unlikely to hold, in that SQCMASPACE = NEXP. For completeness, we also introduce the companion class Streaming-QMASPACE (SQMASPACE) with an exponentially long streamed quantum proof, and show SQMASPACE = QMAEXP (the quantum analogue of NEXP). Our primary focus, however, is on the study of exponentially long streaming classical proofs, where we next show the following two main results.
The first result shows that, in strong contrast to the classical setting, the solution space of a quantum constraint satisfaction problem (i.e. a local Hamiltonian) is always connected when exponentially long proofs are permitted. For this, we show how to simulate any Lipschitz continuous path on the unit hypersphere via a sequence of local unitary gates, at the expense of blowing up the circuit size. This shows that quantum error-correcting codes can be unable to detect one codeword erroneously evolving to another if the evolution happens sufficiently slowly, and answers an open question of [Gharibian, Sikora, ICALP 2015] regarding the Ground State Connectivity problem.
Our second main result is that any SQCMASPACE computation can be embedded into "unentanglement", i.e. into a quantum constraint satisfaction problem with unentangled provers. Formally, we show how to embed SQCMASPACE into the Sparse Separable Hamiltonian problem of [Chailloux, Sattath, CCC 2012] (QMA(2)-complete for 1/poly promise gap), at the expense of scaling the promise gap with the streamed proof size. As a corollary, we obtain the first systematic construction for obtaining QMA(2)-type upper bounds on arbitrary multi-prover interactive proof systems, where the QMA(2) promise gap scales exponentially with the number of bits of communication in the interactive proof. Our construction uses a new technique for exploiting unentanglement to simulate quadratic Boolean functions, which in some sense allows history states to encode the future.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.53/LIPIcs.ITCS.2023.53.pdf
quantum complexity theory
Quantum Merlin Arthur (QMA)
QMA(2)
ground state connectivity (GSCON)
quantum error correction
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
54:1
54:24
10.4230/LIPIcs.ITCS.2023.54
article
Algorithms with More Granular Differential Privacy Guarantees
Ghazi, Badih
1
Kumar, Ravi
1
Manurangsi, Pasin
1
Steinke, Thomas
1
Google, Mountain View, CA, USA
Differential privacy is often applied with a privacy parameter that is larger than the theory suggests is ideal; various informal justifications for tolerating large privacy parameters have been proposed. In this work, we consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis. We study several basic data analysis and learning tasks in this framework, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person (i.e., all the attributes).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.54/LIPIcs.ITCS.2023.54.pdf
Differential Privacy
Algorithms
Per-Attribute Privacy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
55:1
55:24
10.4230/LIPIcs.ITCS.2023.55
article
Private Counting of Distinct and k-Occurring Items in Time Windows
Ghazi, Badih
1
Kumar, Ravi
1
Nelson, Jelani
2
1
Manurangsi, Pasin
1
Google, Mountain View, CA, USA
UC Berkeley, CA, USA
In this work, we study the task of estimating the numbers of distinct and k-occurring items in a time window under the constraint of differential privacy (DP). We consider several variants depending on whether the queries are on general time windows (between times t₁ and t₂), or are restricted to being cumulative (between times 1 and t₂), and depending on whether the DP neighboring relation is event-level or the more stringent item-level. We obtain nearly tight upper and lower bounds on the errors of DP algorithms for these problems. En route, we obtain an event-level DP algorithm for estimating, at each time step, the number of distinct items seen over the last W updates with error polylogarithmic in W; this answers an open question of Bolot et al. (ICDT 2013).
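For contrast with the polylogarithmic-error continual-release algorithm described above, a naive single-release baseline is easy to state: under event-level neighboring, the distinct count over the last W updates has sensitivity 1, so one draw of Laplace(1/ε) noise yields an ε-DP estimate of a single query. The function below is an illustrative sketch under that assumption, not the paper's algorithm.

```python
import math
import random

def dp_distinct_last_w(stream, W, eps, seed=0):
    """ε-DP estimate of the number of distinct items among the last W updates
    (single release only; continual release requires the machinery above)."""
    exact = len(set(stream[-W:]))
    # Changing one event alters the count by at most 1 (sensitivity 1),
    # so Laplace noise of scale 1/eps suffices for one release.
    u = random.Random(seed).random() - 0.5  # Laplace via inverse-CDF sampling
    noise = -(1.0 / eps) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return exact + noise
```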
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.55/LIPIcs.ITCS.2023.55.pdf
Differential Privacy
Algorithms
Distinct Elements
Time Windows
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
56:1
56:18
10.4230/LIPIcs.ITCS.2023.56
article
Is Untrusted Randomness Helpful?
Girish, Uma
1
Raz, Ran
1
Zhan, Wei
1
Princeton University, NJ, USA
Randomized algorithms and protocols assume the availability of a perfect source of randomness. In real life, however, perfect randomness is rare and is almost never guaranteed. The gap between these two facts motivated much of the work on randomness and derandomization in theoretical computer science.
In this work, we define a new type of randomized algorithms (and protocols), that we call robustly-randomized algorithms (protocols). Such algorithms have access to two separate (read-once) random strings. The first string is trusted to be perfectly random, but its length is bounded by some parameter k = k(n) (where n is the length of the input). We think of k as relatively small, say sub-linear or poly-logarithmic in n. The second string is of unbounded length and is assumed to be random, but its randomness is not trusted.
The output of the algorithm is either an output in the set of possible outputs of the problem, or a special symbol, interpreted as "do not know" and denoted by ⊥. On every input for the algorithm, the output of the algorithm must satisfy the following two requirements:
1) If the second random string is perfectly random then the algorithm must output the correct answer with high probability.
2) If the second random string is an arbitrary string, even adversarially chosen after seeing the input, the algorithm must output with high probability either the correct answer or the special symbol ⊥.
We discuss relations of this new definition to several previously studied notions in randomness and derandomization. For example, when considering polynomial-time algorithms, if k is logarithmic we get the complexity class ZPP, while if k is unbounded we get the complexity class BPP, and for a general k, the algorithm can be viewed as an interactive proof with a probabilistic polynomial-time prover and a probabilistic polynomial-time verifier, where the prover is allowed an unlimited number of random bits and the verifier is limited to at most k random bits.
Every previously-studied class of randomized algorithms or protocols, and more generally, every previous use of randomness in theoretical computer science, can be revisited and redefined in light of our new definition, by replacing each random string with a pair of random strings, the first is trusted to be perfectly random but is relatively short and the second is of unlimited length but its randomness is not trusted. The main question that we ask is: In which settings and for which problems is the untrusted random string helpful?
Our main technical observation is that every problem in the class BPL (of problems solvable by bounded-error randomized logspace algorithms) can be solved by a robustly-randomized logspace algorithm with k = O(log n), that is, with just a logarithmic number of trusted random bits. We also give query complexity separations that show cases where the untrusted random string is provably helpful. Specifically, we show that there are promise problems that can be solved by robustly-randomized protocols with only one query and just a logarithmic number of trusted random bits, whereas any randomized protocol requires either a linear number of random bits or an exponential number of queries, and any zero-error randomized protocol requires a polynomial number of queries.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.56/LIPIcs.ITCS.2023.56.pdf
Untrusted
Randomness
Verifiable
ZPL
BPL
ZPP
BPP
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
57:1
57:18
10.4230/LIPIcs.ITCS.2023.57
article
Consensus Division in an Arbitrary Ratio
Goldberg, Paul
1
https://orcid.org/0000-0002-5436-7890
Li, Jiawei
2
https://orcid.org/0000-0002-1441-1711
University of Oxford, UK
The University of Texas at Austin, TX, USA
We consider the problem of partitioning a line segment into two subsets, so that n finite measures all have the same ratio of values for the subsets. Letting α ∈ [0,1] denote the desired ratio, this generalises the PPA-complete consensus-halving problem, in which α = 1/2. Stromquist and Woodall [Stromquist and Woodall, 1985] showed that for any α, there exists a solution using 2n cuts of the segment. They also showed that if α is irrational, that upper bound is almost optimal. In this work, we refine the bounds for rational values of α. For α = 𝓁/k, we show a lower bound of (k-1)/k ⋅ 2n - O(1) cuts; we also obtain almost matching upper bounds for a large subset of rational α.
On the computational side, we explore its dependence on the number of cuts available. More specifically,
1) when the minimal number of cuts is required for each instance, the problem is NP-hard for any α;
2) for a large subset of rational α = 𝓁/k, when (k-1)/k ⋅ 2n cuts are available, the problem is in PPA-k under Turing reduction;
3) when 2n cuts are allowed, the problem belongs to PPA for any α; more generally, the problem belongs to PPA-p for any prime p if 2(p-1)⋅⌈p/2⌉/⌊p/2⌋ ⋅ n cuts are available.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.57/LIPIcs.ITCS.2023.57.pdf
Consensus Halving
TFNP
PPA-k
Necklace Splitting
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
58:1
58:23
10.4230/LIPIcs.ITCS.2023.58
article
An Algorithmic Bridge Between Hamming and Levenshtein Distances
Goldenberg, Elazar
1
https://orcid.org/0000-0001-7993-3580
Kociumaka, Tomasz
2
https://orcid.org/0000-0002-2477-1702
Krauthgamer, Robert
3
Saha, Barna
4
https://orcid.org/0000-0002-6494-3839
Academic College of Tel Aviv-Yafo, Israel
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Weizmann Institute of Science, Rehovot, Israel
University of California, San Diego, CA, USA
The edit distance between strings classically assigns unit cost to every character insertion, deletion, and substitution, whereas the Hamming distance only allows substitutions. In many real-life scenarios, insertions and deletions (abbreviated indels) appear frequently but significantly less so than substitutions. To model this, we consider substitutions being cheaper than indels, with cost 1/a for a parameter a ≥ 1. This basic variant, denoted ED_a, bridges classical edit distance (a = 1) with Hamming distance (a → ∞), leading to interesting algorithmic challenges: Does the time complexity of computing ED_a interpolate between that of Hamming distance (linear time) and edit distance (quadratic time)? What about approximating ED_a?
We first present a simple deterministic exact algorithm for ED_a and further prove that it is near-optimal assuming the Orthogonal Vectors Conjecture. Our main result is a randomized algorithm computing a (1+ε)-approximation of ED_a(X,Y), given strings X,Y of total length n and a bound k ≥ ED_a(X,Y). For simplicity, let us focus on k ≥ 1 and a constant ε > 0; then, our algorithm takes Õ(n/a + ak³) time. Unless a = Õ(1), in which case ED_a resembles the standard edit distance, this running time is sublinear in n for the most interesting regime of small enough k.
We also consider a very natural version that asks to find a (k_I, k_S)-alignment, i.e., an alignment with at most k_I indels and k_S substitutions. In this setting, we give an exact algorithm and, more importantly, an Õ((nk_I)/k_S + k_S k_I³)-time (1,1+ε)-bicriteria approximation algorithm. The latter solution is based on the techniques we develop for ED_a for a = Θ(k_S/k_I), and its running time is again sublinear in n whenever k_I ≪ k_S and the overall distance is small enough.
These bounds are in stark contrast to unit-cost edit distance, where state-of-the-art algorithms are far from achieving (1+ε)-approximation in sublinear time, even for a favorable choice of k.
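The cost model ED_a itself is easy to pin down with the textbook quadratic-time dynamic program (indels cost 1, substitutions cost 1/a); this baseline is only an illustration of the definition, since the paper's contribution is faster exact and approximation algorithms.

```python
def ed_a(x, y, a):
    """Quadratic-time DP for ED_a: indel cost 1, substitution cost 1/a."""
    n, m = len(x), len(y)
    sub = 1.0 / a
    # dp[i][j] = ED_a between prefixes x[:i] and y[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = float(i)            # delete all of x[:i]
    for j in range(1, m + 1):
        dp[0][j] = float(j)            # insert all of y[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,      # delete x[i-1]
                dp[i][j - 1] + 1,      # insert y[j-1]
                dp[i - 1][j - 1] + (0.0 if x[i - 1] == y[j - 1] else sub),
            )
    return dp[n][m]

print(ed_a("kitten", "sitting", 1))  # 3.0: a = 1 recovers classical edit distance
print(ed_a("kitten", "sitting", 2))  # 2.0: two substitutions at 1/2 each, one indel
```

As a → ∞, substitutions become free, and for equal-length strings ED_a tends to 0 whenever substitutions alone suffice, matching the Hamming-distance end of the bridge.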
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.58/LIPIcs.ITCS.2023.58.pdf
edit distance
Hamming distance
Longest Common Extension queries
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
59:1
59:16
10.4230/LIPIcs.ITCS.2023.59
article
On Interactive Proofs of Proximity with Proof-Oblivious Queries
Goldreich, Oded
1
Rothblum, Guy N.
2
Skverer, Tal
1
Weizmann Institute of Science, Rehovot, Israel
Apple, Cupertino, CA, USA
Interactive proofs of proximity (IPPs) offer ultra-fast approximate verification of assertions regarding their input, where ultra-fast means that only a small portion of the input is read and approximate verification is analogous to the notion of approximate decision that underlies property testing. Specifically, in an IPP, the prover can make the verifier accept each input in the property, but cannot fool the verifier into accepting an input that is far from the property (except with small probability).
The verifier in an IPP system engages in two very different types of activities: interacting with an untrusted prover, and querying its input. The definition allows for arbitrary coordination between these two activities, but keeping them separate is both conceptually interesting and necessary for important applications such as addressing temporal considerations (i.e., at what time is each of the services available) and facilitating the construction of zero-knowledge schemes. In this work we embark on a systematic study of IPPs with proof-oblivious queries, where the queries should not be affected by the interaction with the prover. We assign the query and interaction activities to separate modules, and consider different limitations on their coordination.
The most strict limitation requires these activities to be totally isolated from one another; they just feed their views to a separate deciding module. We show that such systems can be efficiently emulated by standard testers.
Going to the other extreme, we only disallow information to flow from the interacting module to the querying module, but allow free information flow in the other direction. We show that extremely efficient one-round (i.e., two-message) systems of this type can be used to verify properties that are extremely hard to test (without the help of a prover). That is, the complexity of verifying can be polylogarithmic in the complexity of testing. This stands in contrast to MAPs (viewed as 1/2-round systems), in which proof-oblivious queries are as limited as in our isolated model.
Our focus is on an intermediate model that allows shared randomness between the querying and interacting modules but no information flow between them. In this case we show that 1-round systems are efficiently emulated by standard testers but 3/2-round systems of extremely low complexity exist for properties that are extremely hard to test. One additional result about this model is that it can efficiently emulate any IPP for any property of low-degree polynomials.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.59/LIPIcs.ITCS.2023.59.pdf
Complexity Theory
Property Testing
Interactive Proofs
Interactive Proofs of Proximity
Proof-Oblivious Queries
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
60:1
60:20
10.4230/LIPIcs.ITCS.2023.60
article
Loss Minimization Through the Lens Of Outcome Indistinguishability
Gopalan, Parikshit
1
Hu, Lunjia
2
Kim, Michael P.
3
Reingold, Omer
2
Wieder, Udi
4
Apple, Cupertino, CA, USA
Stanford University, CA, USA
Miller Institute, UC Berkeley, CA, USA
VMware Research, Palo Alto, CA, USA
We present a new perspective on loss minimization and the recent notion of Omniprediction through the lens of Outcome Indistinguishability. For a collection of losses and a hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection compared to the best (loss-specific) hypothesis in the class. We present a generic template to learn predictors satisfying a guarantee we call Loss Outcome Indistinguishability. For a set of statistical tests - based on a collection of losses and hypothesis class - a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature’s true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the loss and hypothesis classes. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses.
This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from Generalized Linear Models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.60/LIPIcs.ITCS.2023.60.pdf
Loss Minimization
Indistinguishability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
61:1
61:23
10.4230/LIPIcs.ITCS.2023.61
article
List Agreement Expansion from Coboundary Expansion
Gotlib, Roy
1
Kaufman, Tali
1
Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel
One of the key components in PCP constructions is agreement testing. In an agreement test, the tester is given access to subsets of fixed size of some set, each equipped with an assignment. The tester is then tasked with testing whether these local assignments agree with some global assignment over the entire set. One natural generalization of this concept is the case where, instead of a single assignment to each local view, the tester is given access to l different assignments for every subset. The tester is then tasked with testing whether there exist l global functions that agree with all of the assignments of all of the local views. In this work we present a sufficient condition for a set system to exhibit this generalized definition of list agreement expansion. This is, to our knowledge, the first work to consider this natural generalization of agreement testing.
Despite initially appearing very similar to agreement expansion in definition, proving that a set system exhibits list agreement expansion seems to require a different set of techniques. This is due to the fact that the natural extension of agreement testing (i.e., that there exists a pairing of the lists such that each pair agrees with each other) does not suffice when testing for list agreement, as list agreement crucially relies on a global structure. It follows that if local assignments satisfy list agreement, they must not only agree locally but also exhibit some additional structure. In order to test for the existence of this additional structure, we use the connection between covering spaces of a high-dimensional complex and its coboundaries. Specifically, we use this connection as a form of "decoupling".
Moreover, we show that any set system that exhibits list agreement expansion also supports direct sum testing. This is the first scheme for direct sum testing that works regardless of the parity of the sizes of the local sets. Prior to our work, the schemes for direct sum testing were based on the parity of the sizes of the local sets.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.61/LIPIcs.ITCS.2023.61.pdf
High dimensional Expanders
Property Testing
Agreement Testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
62:1
62:22
10.4230/LIPIcs.ITCS.2023.62
article
Asynchronous Multi-Party Quantum Computation
Goyal, Vipul
1
2
Liu-Zhang, Chen-Da
2
Raizes, Justin
1
Ribeiro, João
1
Carnegie Mellon University, Pittsburgh, PA, USA
NTT Research, Sunnyvale, CA, USA
Multi-party quantum computation (MPQC) allows a set of parties to securely compute a quantum circuit over private quantum data. Current MPQC protocols rely on the fact that the network is synchronous, i.e., messages sent are guaranteed to be delivered within a known fixed delay upper bound, and unfortunately completely break down even when only a single message arrives late.
Motivated by real-world networks, the seminal work of Ben-Or, Canetti and Goldreich (STOC'93) initiated the study of multi-party computation for classical circuits over asynchronous networks, where the network delay can be arbitrary. In this work, we begin the study of asynchronous multi-party quantum computation (AMPQC) protocols, where the circuit to compute is quantum.
Our results completely characterize the optimal achievable corruption threshold: we present an n-party AMPQC protocol secure up to t < n/4 corruptions, and an impossibility result when t ≥ n/4 parties are corrupted. Remarkably, this characterization differs from the analogous classical setting, where the optimal corruption threshold is t < n/3.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.62/LIPIcs.ITCS.2023.62.pdf
Quantum Cryptography
Multiparty Computation
Asynchronous
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
63:1
63:13
10.4230/LIPIcs.ITCS.2023.63
article
Unsplittable Euclidean Capacitated Vehicle Routing: A (2+ε)-Approximation Algorithm
Grandoni, Fabrizio
1
Mathieu, Claire
2
Zhou, Hang
3
IDSIA, USI-SUPSI, Lugano, Switzerland
CNRS, IRIF, Université de Paris, France
École Polytechnique, Institut Polytechnique de Paris, France
In the unsplittable capacitated vehicle routing problem, we are given a metric space with a vertex called depot and a set of vertices called terminals. Each terminal is associated with a positive demand between 0 and 1. The goal is to find a minimum length collection of tours starting and ending at the depot such that the demand of each terminal is covered by a single tour (i.e., the demand cannot be split), and the total demand of the terminals in each tour does not exceed the capacity of 1.
Our main result is a polynomial-time (2+ε)-approximation algorithm for this problem in the two-dimensional Euclidean plane, i.e., for the special case where the terminals and the depot are associated with points in the Euclidean plane and their distances are defined accordingly. This improves on recent work by Blauth, Traub, and Vygen [IPCO'21] and Friggstad, Mousavi, Rahgoshay, and Salavatipour [IPCO'22].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.63/LIPIcs.ITCS.2023.63.pdf
capacitated vehicle routing
approximation algorithms
Euclidean plane
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
64:1
64:20
10.4230/LIPIcs.ITCS.2023.64
article
Low-Stabilizer-Complexity Quantum States Are Not Pseudorandom
Grewal, Sabee
1
https://orcid.org/0000-0002-8241-560X
Iyer, Vishnu
1
https://orcid.org/0000-0001-8072-1390
Kretschmer, William
1
https://orcid.org/0000-0002-7784-9817
Liang, Daniel
1
https://orcid.org/0000-0002-7418-0468
The University of Texas at Austin, TX, USA
We show that quantum states with "low stabilizer complexity" can be efficiently distinguished from Haar-random. Specifically, given an n-qubit pure state |ψ⟩, we give an efficient algorithm that distinguishes whether |ψ⟩ is (i) Haar-random or (ii) a state with stabilizer fidelity at least 1/k (i.e., has fidelity at least 1/k with some stabilizer state), promised that one of these is the case. With black-box access to |ψ⟩, our algorithm uses O(k^{12} log(1/δ)) copies of |ψ⟩ and O(n k^{12} log(1/δ)) time to succeed with probability at least 1-δ, and, with access to a state preparation unitary for |ψ⟩ (and its inverse), O(k³ log(1/δ)) queries and O(n k³ log(1/δ)) time suffice.
As a corollary, we prove that ω(log(n)) T-gates are necessary for any Clifford+T circuit to prepare computationally pseudorandom quantum states, a first-of-its-kind lower bound.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.64/LIPIcs.ITCS.2023.64.pdf
Pseudorandom quantum states
Clifford + T
Haar random
Bell sampling
stabilizer formalism
stabilizer extent
stabilizer fidelity
learning theory
complexity theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
65:1
65:17
10.4230/LIPIcs.ITCS.2023.65
article
Look Before, Before You Leap: Online Vector Load Balancing with Few Reassignments
Gupta, Varun
1
Krishnaswamy, Ravishankar
2
Sandeep, Sai
3
Sundaresan, Janani
4
Booth School of Business, University of Chicago, IL, USA
Microsoft Research, Bengaluru, India
University of California, Berkeley, CA, USA
Department of Computer Science, Rutgers University, Piscataway, NJ, USA
In this paper we study two fully-dynamic multi-dimensional vector load balancing problems with recourse. The adversary presents a stream of n job insertions and deletions, where each job j is a vector in ℝ^d_{≥ 0}. In the vector scheduling problem, the algorithm must maintain an assignment of the active jobs to m identical machines to minimize the makespan (maximum load on any dimension on any machine). In the vector bin packing problem, the algorithm must maintain an assignment of active jobs into a number of bins of unit capacity in all dimensions, to minimize the number of bins currently used. In both problems, the goal is to maintain solutions that are competitive against the optimal solution for the active set of jobs, at every time instant. The algorithm is allowed to change the assignment from time to time, with the secondary objective of minimizing the amortized recourse, which is the average cardinality of the change of the assignment per update to the instance.
For the vector scheduling problem, we present two simple algorithms. The first is a randomized algorithm with an O(1) amortized recourse and an O(log d/log log d) competitive ratio against oblivious adversaries. The second algorithm is a deterministic algorithm that is competitive against adaptive adversaries but with a slightly higher competitive ratio of O(log d) and a per-job recourse guarantee bounded by Õ(log n + log d log OPT). We also prove a sharper instance-dependent recourse guarantee for the deterministic algorithm.
For the vector bin packing problem, we make the so-called small jobs assumption that the size of all jobs in all the coordinates is O(1/log d) and present a simple O(1)-competitive algorithm with O(log n) recourse against oblivious adversaries.
For both problems, the main challenge is to determine when and how to migrate jobs to maintain competitive solutions. Our central idea is that for each job, we make these decisions based only on the active set of jobs that are "earlier" than this job in some ordering ≺ of the jobs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.65/LIPIcs.ITCS.2023.65.pdf
Vector Scheduling
Vector Load Balancing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
66:1
66:18
10.4230/LIPIcs.ITCS.2023.66
article
Incompressibility and Next-Block Pseudoentropy
Haitner, Iftach
1
Mazor, Noam
1
Silbak, Jad
1
The Blavatnik School of Computer Science at Tel-Aviv University, Israel
A distribution is k-incompressible, in the sense of Yao [FOCS '82], if no efficient compression scheme compresses it to fewer than k bits. While it is a natural measure, its relation to other computational analogs of entropy, such as the pseudoentropy of Håstad, Impagliazzo, Levin, and Luby [SICOMP '99], and to other cryptographic hardness assumptions, was unclear.
We advance towards a better understanding of this notion, showing that a k-incompressible distribution has (k-2) bits of next-block pseudoentropy, a refinement of pseudoentropy introduced by Haitner, Reingold, and Vadhan [SICOMP '13]. We deduce that the existence of a samplable distribution X that is (H(X)+2)-incompressible implies the existence of one-way functions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.66/LIPIcs.ITCS.2023.66.pdf
incompressibility
next-block pseudoentropy
sparse languages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
67:1
67:17
10.4230/LIPIcs.ITCS.2023.67
article
Downward Self-Reducibility in TFNP
Harsha, Prahladh
1
https://orcid.org/0000-0002-2739-5642
Mitropolsky, Daniel
2
Rosen, Alon
3
4
https://orcid.org/0000-0002-3021-7150
Tata Institute of Fundamental Research, Mumbai, India
Columbia University, New York, NY, USA
Bocconi University, Milano, Italy
Reichman University, Herzliya, Israel
A problem is downward self-reducible if it can be solved efficiently given an oracle that returns solutions for strictly smaller instances. In the decisional landscape, downward self-reducibility is well studied and it is known that all downward self-reducible problems are in PSPACE. In this paper, we initiate the study of downward self-reducible search problems that are guaranteed to have a solution - that is, the downward self-reducible problems in TFNP. We show that most natural PLS-complete problems are downward self-reducible and that any downward self-reducible problem in TFNP is contained in PLS. Furthermore, if the downward self-reducible problem is in TFUP (i.e., it has a unique solution), then it is actually contained in UEOPL, a subclass of CLS. This implies that if integer factoring is downward self-reducible, then it is in fact in UEOPL, suggesting that no efficient factoring algorithm that uses the factorization of smaller numbers exists.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.67/LIPIcs.ITCS.2023.67.pdf
downward self-reducibility
TFNP
TFUP
factoring
PLS
CLS
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
68:1
68:23
10.4230/LIPIcs.ITCS.2023.68
article
Symmetric Formulas for Products of Permutations
He, William
1
Rossman, Benjamin
1
Duke University, Durham, NC, USA
We study the formula complexity of the word problem Word_{S_n,k} : {0,1}^{kn²} → {0,1}: given n-by-n permutation matrices M₁,… ,M_k, compute the (1,1)-entry of the matrix product M₁⋯ M_k. An important feature of this function is that it is invariant under the action of S_n^{k-1} given by (π₁,… ,π_{k-1})(M₁,… ,M_k) = (M₁π₁^{-1},π₁M₂π₂^{-1},… ,π_{k-2}M_{k-1}π_{k-1}^{-1},π_{k-1}M_k).
This symmetry is also exhibited in the smallest known unbounded fan-in {and,or,not}-formulas for Word_{S_n,k}, which have size n^O(log k).
In this paper we prove a matching n^{Ω(log k)} lower bound for S_n^{k-1}-invariant formulas computing Word_{S_n,k}. This result is motivated by the fact that a similar lower bound for unrestricted (non-invariant) formulas would separate complexity classes NC¹ and Logspace.
Our more general main theorem gives a nearly tight n^d(k^{1/d}-1) lower bound on the G^{k-1}-invariant depth-d {maj,and,or,not}-formula size of Word_{G,k} for any finite simple group G whose minimum permutation representation has degree n. We also give nearly tight lower bounds on the G^{k-1}-invariant depth-d {and,or,not}-formula size in the case where G is an abelian group.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.68/LIPIcs.ITCS.2023.68.pdf
circuit complexity
group-invariant formulas
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
69:1
69:22
10.4230/LIPIcs.ITCS.2023.69
article
A Combinatorial Cut-Toggling Algorithm for Solving Laplacian Linear Systems
Henzinger, Monika
1
https://orcid.org/0000-0002-5008-6530
Jin, Billy
2
https://orcid.org/0000-0002-6362-2048
Peng, Richard
3
https://orcid.org/0000-0002-5407-7965
Williamson, David P.
2
https://orcid.org/0000-0002-2884-0058
Faculty of Computer Science, Universität Wien, Austria
School of Operations Research and Information Engineering, Cornell University, Ithaca, NY, USA
Cheriton School of Computer Science, University of Waterloo, Canada
Over the last two decades, a significant line of work in theoretical algorithms has made progress in solving linear systems of the form 𝐋𝐱 = 𝐛, where 𝐋 is the Laplacian matrix of a weighted graph with weights w(i,j) > 0 on the edges. The solution 𝐱 of the linear system can be interpreted as the potentials of an electrical flow in which the resistance on edge (i,j) is 1/w(i,j). Kelner, Orecchia, Sidford, and Zhu [Kelner et al., 2013] give a combinatorial, near-linear time algorithm that maintains the Kirchhoff Current Law, and gradually enforces the Kirchhoff Potential Law by updating flows around cycles (cycle toggling).
In this paper, we consider a dual version of the algorithm that maintains the Kirchhoff Potential Law, and gradually enforces the Kirchhoff Current Law by cut toggling: each iteration updates all potentials on one side of a fundamental cut of a spanning tree by the same amount. We prove that this dual algorithm also runs in a near-linear number of iterations.
We show, however, that if we abstract cut toggling as a natural data structure problem, this problem can be reduced to the online vector-matrix-vector problem (OMv), which has been conjectured to be difficult for dynamic algorithms [Henzinger et al., 2015]. The conjecture implies that the data structure does not have an O(n^{1-ε}) time algorithm for any ε > 0, and thus a straightforward implementation of the cut-toggling algorithm requires essentially linear time per iteration.
To circumvent the lower bound, we batch update steps, and perform them simultaneously instead of sequentially. An appropriate choice of batching leads to an Õ(m^{1.5}) time cut-toggling algorithm for solving Laplacian systems. Furthermore, we show that if we sparsify the graph and call our algorithm recursively on the Laplacian system implied by batching and sparsifying, we can reduce the running time to O(m^{1 + ε}) for any ε > 0. Thus, the dual cut-toggling algorithm can achieve (almost) the same running time as its primal cycle-toggling counterpart.
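The cut-toggling step described above can be illustrated with a toy sketch. This is not the paper's algorithm (no data structures, batching, or sparsification; the function names and random-cut schedule are illustrative): each iteration picks a fundamental cut of the spanning tree and shifts all potentials on one side so that Kirchhoff's Current Law holds across that cut.

```python
import random

def side_of_cut(tree, n, cut_edge):
    """Vertices reachable from cut_edge[0] once cut_edge is removed from the tree."""
    u, v = cut_edge
    adj = {i: [] for i in range(n)}
    for a, c in tree:
        if (a, c) != (u, v):
            adj[a].append(c)
            adj[c].append(a)
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for z in adj[w]:
            if z not in seen:
                seen.add(z)
                stack.append(z)
    return seen

def cut_toggle_solve(n, edges, b, tree, iters=5000, seed=0):
    """Toy cut-toggling sketch for Lx = b (b must sum to zero).

    Potentials x always satisfy the Kirchhoff Potential Law (flows are defined
    as w*(x_i - x_j)); each iteration restores the Kirchhoff Current Law across
    one randomly chosen fundamental cut of the spanning tree.
    """
    rng = random.Random(seed)
    x = [0.0] * n
    sides = [side_of_cut(tree, n, e) for e in tree]  # precompute all cuts
    for _ in range(iters):
        S = sides[rng.randrange(len(tree))]
        # F: current flowing out of S; C: total weight of cut edges; B: demand of S
        F = sum(w * (x[i] - x[j]) * ((i in S) - (j in S)) for i, j, w in edges)
        C = sum(w for i, j, w in edges if (i in S) != (j in S))
        B = sum(b[i] for i in S)
        delta = (B - F) / C  # uniform potential shift enforcing KCL on this cut
        for i in S:
            x[i] += delta
    return x
```

On a small well-conditioned instance this converges to potentials with Lx = b (up to an additive constant); the contribution of the paper is showing how to make such updates cheap in aggregate.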
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.69/LIPIcs.ITCS.2023.69.pdf
Laplacian solver
electrical flow
data structure
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
70:1
70:18
10.4230/LIPIcs.ITCS.2023.70
article
Learning Versus Pseudorandom Generators in Constant Parallel Time
Hirahara, Shuichi
1
Nanashima, Mikito
2
National Institute of Informatics, Tokyo, Japan
Tokyo Institute of Technology, Japan
A polynomial-stretch pseudorandom generator (PPRG) in NC⁰ (i.e., constant parallel time) is one of the most important cryptographic primitives, especially for constructing highly efficient cryptography and indistinguishability obfuscation. The celebrated work (Applebaum, Ishai, and Kushilevitz, SIAM Journal on Computing, 2006) on randomized encodings yields the characterization of sublinear-stretch pseudorandom generators in NC⁰ by the existence of logspace-computable one-way functions, but characterizing PPRGs in NC⁰ seems out of reach at present. Therefore, it is natural to ask which sort of hardness notion is essential for constructing PPRGs in NC⁰. Particularly, to the best of our knowledge, all the previously known candidates for PPRGs in NC⁰ follow only one framework based on Goldreich’s one-way function.
In this paper, we present a new learning-theoretic characterization for PPRGs in NC⁰ and related classes. Specifically, we consider the average-case hardness of learning for well-studied classes in parameterized settings, where the number of samples is restricted to fixed-parameter tractable (FPT), and show that the following are equivalent:
- The existence of (a collection of) PPRGs in NC⁰.
- The average-case hardness of learning sparse 𝔽₂-polynomials on a sparse example distribution and an NC⁰-samplable target distribution (i.e., a distribution on target functions).
- The average-case hardness of learning Fourier-sparse functions on a sparse example distribution and an NC⁰-samplable target distribution.
- The average-case hardness of learning constant-depth parity decision trees on a sparse example distribution and an NC⁰-samplable target distribution.
Furthermore, we characterize a (single) PPRG in parity-NC⁰ by the average-case hardness of learning constant-degree 𝔽₂-polynomials on a uniform example distribution with FPT samples. Based on our results, we propose new candidates for PPRGs in NC⁰ and related classes under a hardness assumption on a natural learning problem. An important property of PPRGs in NC⁰ constructed in our framework is that the output bits are computed by various predicates; thus, it seems to resist an attack that depends on a specific property of one fixed predicate.
Conceptually, the main contribution of this study is to formalize a theory of FPT dualization of concept classes, which yields a meta-theorem for the first result. For the second result on PPRGs in parity-NC⁰, we use a different technique of pseudorandom 𝔽₂-polynomials.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.70/LIPIcs.ITCS.2023.70.pdf
Parallel cryptography
polynomial-stretch pseudorandom generators in NC⁰
PAC learning
average-case complexity
fixed-parameter tractability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
71:1
71:20
10.4230/LIPIcs.ITCS.2023.71
article
Secure Distributed Network Optimization Against Eavesdroppers
Hitron, Yael
1
Parter, Merav
1
Yogev, Eylon
2
Weizmann Institute of Science, Rehovot, Israel
Bar-Ilan University, Ramat-Gan, Israel
We present a new algorithmic framework for distributed network optimization in the presence of eavesdropper adversaries, also known as passive wiretappers. In this setting, the adversary listens to the traffic exchanged over a fixed set of edges in the graph, trying to extract information on the private inputs and outputs of the vertices. A distributed algorithm is called f-secure if it guarantees that the adversary learns nothing about the inputs and outputs of the vertices, provided that it controls at most f graph edges.
Recent work has presented general simulation results for f-secure algorithms, with a round overhead of D^Θ(f), where D is the diameter of the graph. In this paper, we present a completely different white-box, and yet quite general, approach for obtaining f-secure algorithms for fundamental network optimization tasks. Specifically, for n-vertex D-diameter graphs with (unweighted) edge-connectivity Ω(f), there are f-secure congest algorithms for computing MST, partwise aggregation, and (1+ε) (weighted) minimum cut approximation, within Õ(D+f √n) congest rounds, hence nearly tight for f = Õ(1).
Our algorithms are based on designing a secure algorithmic-toolkit that leverages the special structure of congest algorithms for global optimization graph problems. One of these tools is a general secure compiler that simulates light-weight distributed algorithms in a congestion-sensitive manner. We believe that these tools set the ground for designing additional secure solutions in the congest model and beyond.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.71/LIPIcs.ITCS.2023.71.pdf
congest
secure computation
network optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
72:1
72:30
10.4230/LIPIcs.ITCS.2023.72
article
Comparative Learning: A Sample Complexity Theory for Two Hypothesis Classes
Hu, Lunjia
1
Peale, Charlotte
1
Computer Science Department, Stanford University, CA, USA
In many learning theory problems, a central role is played by a hypothesis class: we might assume that the data is labeled according to a hypothesis in the class (usually referred to as the realizable setting), or we might evaluate the learned model by comparing it with the best hypothesis in the class (the agnostic setting). Taking a step beyond these classic setups that involve only a single hypothesis class, we study a variety of problems that involve two hypothesis classes simultaneously.
We introduce comparative learning as a combination of the realizable and agnostic settings in PAC learning: given two binary hypothesis classes S and B, we assume that the data is labeled according to a hypothesis in the source class S and require the learned model to achieve an accuracy comparable to the best hypothesis in the benchmark class B. Even when both S and B have infinite VC dimensions, comparative learning can still have a small sample complexity. We show that the sample complexity of comparative learning is characterized by the mutual VC dimension VC(S,B) which we define to be the maximum size of a subset shattered by both S and B. We also show a similar result in the online setting, where we give a regret characterization in terms of the analogous mutual Littlestone dimension Ldim(S,B). These results also hold for partial hypotheses.
We additionally show that the insights necessary to characterize the sample complexity of comparative learning can be applied to other tasks involving two hypothesis classes. In particular, we characterize the sample complexity of realizable multiaccuracy and multicalibration using the mutual fat-shattering dimension, an analogue of the mutual VC dimension for real-valued hypotheses. This not only solves an open problem proposed by Hu, Peale, Reingold (2022), but also leads to independently interesting results extending classic ones about regression, boosting, and covering number to our two-hypothesis-class setting.
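The central quantity above can be made concrete with a brute-force sketch of the mutual VC dimension VC(S,B) on a small finite domain. This is purely illustrative (hypothetical helper names; classes are represented extensionally as label vectors, which is only feasible for toy domains):

```python
from itertools import combinations

def shatters(H, subset):
    """Does class H (each hypothesis a tuple of 0/1 labels, one per domain
    point) realize all 2^|subset| labelings on the given subset of indices?"""
    patterns = {tuple(h[i] for i in subset) for h in H}
    return len(patterns) == 2 ** len(subset)

def mutual_vc(S, B, domain_size):
    """VC(S,B): maximum size of a domain subset shattered by both S and B."""
    d = 0
    for k in range(1, domain_size + 1):
        if any(shatters(S, sub) and shatters(B, sub)
               for sub in combinations(range(domain_size), k)):
            d = k
    return d
```

For instance, taking S to be all labelings of a 3-point domain and B the "prefix-ones" labelings, every singleton is shattered by both but no pair is shattered by B, so VC(S,B) = 1 even though VC(S) = 3.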
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.72/LIPIcs.ITCS.2023.72.pdf
Comparative learning
mutual VC dimension
realizable multiaccuracy and multicalibration
sample complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
73:1
73:22
10.4230/LIPIcs.ITCS.2023.73
article
Recovery from Non-Decomposable Distance Oracles
Hu, Zhuangfei
1
Li, Xinda
1
Woodruff, David P.
2
Zhang, Hongyang
1
Zhang, Shufan
1
University of Waterloo, Canada
Carnegie Mellon University, Pittsburgh, PA, USA
A line of work has looked at the problem of recovering an input from distance queries. In this setting, there is an unknown sequence s ∈ {0,1}^{≤ n}, and one chooses a set of queries y ∈ {0,1}^𝒪(n) and receives d(s,y) for a distance function d. The goal is to make as few queries as possible to recover s. Although this problem is well-studied for decomposable distances, i.e., distances of the form d(s,y) = ∑_{i=1}^n f(s_i, y_i) for some function f, which includes the important cases of Hamming distance, 𝓁_p-norms, and M-estimators, to the best of our knowledge this problem has not been studied for non-decomposable distances, for which there are important special cases such as edit distance, dynamic time warping (DTW), Fréchet distance, earth mover’s distance, and so on. We initiate the study and develop a general framework for such distances. Interestingly, for some distances such as DTW or Fréchet, exact recovery of the sequence s is provably impossible, and so we show by allowing the characters in y to be drawn from a slightly larger alphabet this then becomes possible. In a number of cases we obtain optimal or near-optimal query complexity. We also study the role of adaptivity for a number of different distance functions. One motivation for understanding non-adaptivity is that the query sequence can be fixed and the distances of the input to the queries provide a non-linear embedding of the input, which can be used in downstream applications involving, e.g., neural networks for natural language processing.
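For contrast with the non-decomposable distances studied above, the decomposable Hamming case admits a simple non-adaptive scheme. The following sketch (illustrative, not from the paper) recovers s ∈ {0,1}^n from n+1 fixed distance queries:

```python
def recover_from_hamming(n, oracle):
    """Recover an unknown s in {0,1}^n using n+1 Hamming-distance queries:
    one to the all-zeros string, then one per position with that bit set."""
    base = oracle([0] * n)  # equals the Hamming weight of s
    s = []
    for i in range(n):
        y = [0] * n
        y[i] = 1
        # setting bit i decreases the distance by 1 iff s_i = 1
        s.append(1 if oracle(y) == base - 1 else 0)
    return s
```

Since the query set is fixed in advance, this is a non-adaptive scheme; the point of the paper is that such clean arguments break down for distances like edit distance or DTW, where exact recovery can even be impossible.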
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.73/LIPIcs.ITCS.2023.73.pdf
Sequence Recovery
Edit Distance
DTW Distance
Fréchet Distance
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
74:1
74:25
10.4230/LIPIcs.ITCS.2023.74
article
Karchmer-Wigderson Games for Hazard-Free Computation
Ikenmeyer, Christian
1
Komarath, Balagopal
2
Saurabh, Nitin
3
University of Warwick, Coventry, UK
IIT Gandhinagar, India
IIT Hyderabad, India
We present a Karchmer-Wigderson game to study the complexity of hazard-free formulas. This new game is both a generalization of the monotone Karchmer-Wigderson game and an analog of the classical Boolean Karchmer-Wigderson game. Therefore, it acts as a bridge between the existing monotone and general games.
Using this game, we prove hazard-free formula size and depth lower bounds that are provably stronger than those possible by the standard technique of transferring results from monotone complexity in a black-box fashion. For the multiplexer function we give (1) a hazard-free formula of optimal size, (2) an improved low-depth hazard-free formula of almost optimal size, and (3) a hazard-free formula with alternation depth 2 that has optimal depth. We then use our optimal constructions to obtain an improved universal worst-case hazard-free formula size upper bound. We see our results as a step towards establishing hazard-free computation as an independent missing link between Boolean complexity and monotone complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.74/LIPIcs.ITCS.2023.74.pdf
Hazard-free computation
monotone computation
Karchmer-Wigderson games
communication complexity
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
75:1
75:24
10.4230/LIPIcs.ITCS.2023.75
article
Learning Reserve Prices in Second-Price Auctions
Jin, Yaonan
1
https://orcid.org/0000-0001-6256-7625
Lu, Pinyan
2
Xiao, Tao
3
Columbia University, New York, NY, USA
Shanghai University of Finance and Economics, China
Huawei TCS Lab, Shanghai, China
This paper proves the tight sample complexity of the Second-Price Auction with Anonymous Reserve, up to a logarithmic factor, for each of the value distribution families studied in the literature: [0,1]-bounded, [1,H]-bounded, regular, and monotone hazard rate (MHR). Remarkably, the setting-specific tight sample complexity poly(ε^{-1}) depends on the precision ε ∈ (0, 1), but not on the number of bidders n ≥ 1. Further, in the two bounded-support settings, our learning algorithm allows correlated value distributions.
In contrast, the tight sample complexity Θ̃(n) ⋅ poly(ε^{-1}) of Myerson Auction proved by Guo, Huang and Zhang (STOC 2019) has a nearly-linear dependence on n ≥ 1, and holds only for independent value distributions in every setting.
We follow a framework similar to that of the Guo-Huang-Zhang work, but replace their information-theoretic arguments with a direct proof.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.75/LIPIcs.ITCS.2023.75.pdf
Revenue Maximization
Sample Complexity
Anonymous Reserve
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
76:1
76:20
10.4230/LIPIcs.ITCS.2023.76
article
The Complexity of Infinite-Horizon General-Sum Stochastic Games
Jin, Yujia
1
Muthukumar, Vidya
2
Sidford, Aaron
1
Stanford University, CA, USA
Georgia Institute of Technology, Atlanta, GA, USA
We study the complexity of computing stationary Nash equilibrium (NE) in n-player infinite-horizon general-sum stochastic games. We focus on the problem of computing NE in such stochastic games when each player is restricted to choosing a stationary policy and rewards are discounted. First, we prove that computing such NE is in PPAD (in addition to clearly being PPAD-hard). Second, we consider turn-based specializations of such games where at each state there is at most a single player that can take actions and show that these (seemingly-simpler) games remain PPAD-hard. Third, we show that under further structural assumptions on the rewards computing NE in such turn-based games is possible in polynomial time. Towards achieving these results we establish structural facts about stochastic games of broader utility, including monotonicity of utilities under single-state single-action changes and reductions to settings where each player controls a single state.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.76/LIPIcs.ITCS.2023.76.pdf
complexity
stochastic games
general-sum games
Nash equilibrium
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
77:1
77:26
10.4230/LIPIcs.ITCS.2023.77
article
Random Max-CSPs Inherit Algorithmic Hardness from Spin Glasses
Jones, Chris
1
https://orcid.org/0000-0003-2777-6290
Marwaha, Kunal
2
https://orcid.org/0000-0001-9084-6971
Sandhu, Juspreet Singh
3
https://orcid.org/0000-0001-8598-5096
Shi, Jonathan
4
University of Chicago, IL, USA
University of Chicago, USA
Harvard University, Cambridge, MA, USA
Bocconi University, Milano, Italy
We study random constraint satisfaction problems (CSPs) at large clause density. We relate the structure of near-optimal solutions for any Boolean Max-CSP to that for an associated spin glass on the hypercube, using the Guerra-Toninelli interpolation from statistical physics. The noise stability polynomial of the CSP’s predicate is, up to a constant, the mixture polynomial of the associated spin glass. We show two main consequences:
1) We prove that the maximum fraction of constraints that can be satisfied in a random Max-CSP at large clause density is determined by the ground state energy density of the corresponding spin glass. Since the latter value can be computed with the Parisi formula [Parisi, 1980; Talagrand, 2006; Auffinger and Chen, 2017], we provide numerical values for some popular CSPs.
2) We prove that a Max-CSP at large clause density possesses generalized versions of the overlap gap property if and only if the same holds for the corresponding spin glass. We transfer results from [Huang and Sellke, 2021] to obstruct algorithms with overlap concentration on a large class of Max-CSPs. This immediately includes local classical and local quantum algorithms [Chou et al., 2022].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.77/LIPIcs.ITCS.2023.77.pdf
spin glass
overlap gap property
constraint satisfaction problem
Guerra-Toninelli interpolation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
78:1
78:22
10.4230/LIPIcs.ITCS.2023.78
article
Garland’s Technique for Posets and High Dimensional Grassmannian Expanders
Kaufman, Tali
1
Tessler, Ran J.
2
Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel
Department of Mathematics, Weizmann Institute of Science, Rehovot, Israel
Local to global machinery has played an important role in the study of simplicial complexes, from the seminal work of Garland [Garland, 1973] to the present day. In this work we develop a local to global machinery for general posets. We show that the notions of high dimensional expansion and many recent expansion results generalize to posets. Examples include the fast convergence of high dimensional random walks, generalizing [Kaufman et al., 2020] and [Alev and Lau, 2020]; an equivalence with a global random walk definition, generalizing [Dikstein et al., 2018]; and a trickling down theorem, generalizing [Oppenheim, 2018].
In particular, we show that some posets, such as the Grassmannian poset, exhibit qualitatively stronger trickling down effect than simplicial complexes.
Using these methods, and the novel idea of posetification applied to Ramanujan complexes [Lubotzky et al., 2005a], [Lubotzky et al., 2005b], we construct a constant-degree expanding Grassmannian poset and analyze its expansion. This is the first construction of such an object, whose existence was conjectured in [Dikstein et al., 2018].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.78/LIPIcs.ITCS.2023.78.pdf
High dimensional Expanders
Posets
Grassmannian
Garland Method
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
79:1
79:15
10.4230/LIPIcs.ITCS.2023.79
article
Making Decisions Under Outcome Performativity
Kim, Michael P.
1
Perdomo, Juan C.
2
Miller Institute, UC Berkeley, CA, USA
Department of Computer Science, UC Berkeley, CA, USA
Decision-makers often act in response to data-driven predictions, with the goal of achieving favorable outcomes. In such settings, predictions don’t passively forecast the future; instead, predictions actively shape the distribution of outcomes they are meant to predict. This performative prediction setting [Brown et al., 2022] raises new challenges for learning "optimal" decision rules. In particular, existing solution concepts do not address the apparent tension between the goals of forecasting outcomes accurately and steering individuals to achieve desirable outcomes.
To contend with this concern, we introduce a new optimality concept - performative omniprediction - adapted from the supervised (non-performative) learning setting [Gopalan et al., 2022]. A performative omnipredictor is a single predictor that simultaneously encodes the optimal decision rule with respect to many possibly-competing objectives. Our main result demonstrates that efficient performative omnipredictors exist, under a natural restriction of performative prediction, which we call outcome performativity. On a technical level, our results follow by carefully generalizing the notion of outcome indistinguishability [Cynthia Dwork et al., 2021] to the outcome performative setting. From an appropriate notion of Performative OI, we recover many consequences known to hold in the supervised setting, such as omniprediction and universal adaptability [Kim et al., 2022].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.79/LIPIcs.ITCS.2023.79.pdf
performative prediction
outcome indistinguishability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
80:1
80:15
10.4230/LIPIcs.ITCS.2023.80
article
Characterizing the Multi-Pass Streaming Complexity for Solving Boolean CSPs Exactly
Kol, Gillat
1
Paramonov, Dmitry
1
Saxena, Raghuvansh R.
2
Yu, Huacheng
1
Princeton University, NJ, USA
Microsoft, Cambridge, MA, USA
We study boolean constraint satisfaction problems (CSPs) Max-CSP^f_n for all predicates f: {0,1}^k → {0,1}. In these problems, given an integer v and a list of constraints over n boolean variables, each obtained by applying f to a sequence of literals, we wish to decide if there is an assignment to the variables that satisfies at least v constraints. We consider these problems in the streaming model, where the algorithm makes a small number of passes over the list of constraints.
Our first and main result is the following complete characterization: For every predicate f, the streaming space complexity of the Max-CSP^f_n problem is Θ̃(n^deg(f)), where deg(f) is the degree of f when viewed as a multilinear polynomial. While the upper bound is obtained by a (very simple) one-pass streaming algorithm, our lower bound shows that a better space complexity is impossible even with constant-pass streaming algorithms.
Building on our techniques, we are also able to get an optimal Ω(n²) lower bound on the space complexity of constant-pass streaming algorithms for the well studied Max-CUT problem, even though it is not technically a Max-CSP^f_n problem as, e.g., negations of variables and repeated constraints are not allowed.
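The space complexity above is governed by deg(f), the degree of f viewed as a multilinear polynomial over the reals. As an illustrative aside (ours, not from the paper), this degree can be computed by Möbius inversion of the multilinear coefficients:

```python
from itertools import combinations, product

def multilinear_degree(f, k):
    """Degree of f: {0,1}^k -> {0,1} as a multilinear real polynomial.

    The coefficient of the monomial prod_{i in S} x_i is
    sum_{T subseteq S} (-1)^{|S|-|T|} f(1_T); the degree is the largest
    |S| with a nonzero coefficient.
    """
    for size in range(k, 0, -1):
        for S in combinations(range(k), size):
            coef = 0
            for bits in product([0, 1], repeat=size):
                x = [0] * k
                ones = 0
                for i, b in zip(S, bits):
                    x[i] = b
                    ones += b
                coef += (-1) ** (size - ones) * f(tuple(x))
            if coef != 0:
                return size
    return 0  # f is constant

AND = lambda x: x[0] & x[1]              # x1*x2, degree 2
XOR = lambda x: x[0] ^ x[1]              # x1 + x2 - 2*x1*x2, degree 2
MAJ3 = lambda x: 1 if sum(x) >= 2 else 0  # degree 3
```

For example, a dictator predicate x ↦ x₀ has degree 1, so Max-CSP for it is solvable in Θ̃(n) streaming space by the characterization above, while MAJ3 requires Θ̃(n³).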
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.80/LIPIcs.ITCS.2023.80.pdf
Streaming algorithms
Constraint Satisfaction Problems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
81:1
81:23
10.4230/LIPIcs.ITCS.2023.81
article
False Consensus, Information Theory, and Prediction Markets
Kong, Yuqing
1
https://orcid.org/0000-0002-5901-3004
Schoenebeck, Grant
2
https://orcid.org/0000-0001-6878-0670
The Center on Frontiers of Computing Studies, School of Computer Science, Peking University, China
School of Information, University of Michigan, Ann Arbor, MI, USA
We study a setting where Bayesian agents with a common prior have private information related to an event’s outcome and sequentially make public announcements relating to their information. Our main result shows that when agents' private information is independent conditioned on the event’s outcome, then whenever agents have similar beliefs about the outcome, their information is aggregated. That is, there is no false consensus.
Our main result has a short proof based on a natural information-theoretic framework. A key ingredient of the framework is the equivalence between the sign of the "interaction information" and a super/sub-additive property of the value of people’s information. This provides an intuitive interpretation and an interesting application of the interaction information, which measures the amount of information shared by three random variables.
We illustrate the power of this information-theoretic framework by reproving two additional results within it: 1) that agents quickly agree when announcing (summaries of) beliefs in round-robin fashion [Aaronson 2005], and 2) results from [Chen et al 2010] on when prediction market agents should release information to maximize their payment. We also interpret the information-theoretic framework and the above results in prediction markets by proving that the expected reward of revealing information is the conditional mutual information of the information revealed.
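As a hedged illustration (our code, not the paper's), the interaction information mentioned above can be computed directly from a joint distribution; here we use the sign convention I(X;Y;Z) = I(X;Y) − I(X;Y|Z), under which redundantly shared information is positive and synergy (e.g., the XOR relationship) is negative:

```python
from collections import defaultdict
from math import log2

def _marginal(p, idx):
    """Marginalize a joint pmf (dict keyed by outcome tuples) onto indices idx."""
    m = defaultdict(float)
    for outcome, pr in p.items():
        m[tuple(outcome[i] for i in idx)] += pr
    return m

def mutual_info(p):
    """I(X;Y) for a joint pmf p[(x, y)], in bits."""
    px, py = _marginal(p, (0,)), _marginal(p, (1,))
    return sum(pr * log2(pr / (px[(x,)] * py[(y,)]))
               for (x, y), pr in p.items() if pr > 0)

def cond_mutual_info(p):
    """I(X;Y|Z) for a joint pmf p[(x, y, z)], in bits."""
    pz = _marginal(p, (2,))
    pxz, pyz = _marginal(p, (0, 2)), _marginal(p, (1, 2))
    return sum(pr * log2(pr * pz[(z,)] / (pxz[(x, z)] * pyz[(y, z)]))
               for (x, y, z), pr in p.items() if pr > 0)

def interaction_information(p):
    """I(X;Y;Z) = I(X;Y) - I(X;Y|Z); note its sign can be either way."""
    return mutual_info(_marginal(p, (0, 1))) - cond_mutual_info(p)
```

For instance, if Z = X ⊕ Y with X, Y independent fair coins, the interaction information is −1 bit (pure synergy), while X = Y = Z gives +1 bit (pure redundancy).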
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.81/LIPIcs.ITCS.2023.81.pdf
Agreeing to disagree
false consensus
information theory
prediction market
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
82:1
82:18
10.4230/LIPIcs.ITCS.2023.82
article
Depth-Bounded Quantum Cryptography with Applications to One-Time Memory and More
Liu, Qipeng
1
Simons Institute for the Theory of Computing, Berkeley, CA, USA
With the power of quantum information, we can achieve exciting and classically impossible cryptographic primitives. However, almost all quantum cryptography faces extreme difficulties with the near-term intermediate-scale quantum technology (NISQ technology); namely, the short lifespan of quantum states and limited sequential computation. At the same time, considering only limited quantum adversaries may still enable us to achieve never-before-possible tasks.
In this work, we consider quantum cryptographic primitives against limited quantum adversaries - depth-bounded adversaries. We introduce a model for (depth-bounded) NISQ computers, which are classical circuits interleaved with shallow quantum circuits. Then, we show that one-time memory can be achieved against the depth-bounded quantum adversaries introduced in this work, with their depth being any pre-fixed polynomial. We therefore obtain applications such as one-time programs and one-time proofs. Finally, we show that our one-time memory retains correctness even against constant-rate errors.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.82/LIPIcs.ITCS.2023.82.pdf
cryptographic protocol
one-time memory
quantum cryptography
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
83:1
83:15
10.4230/LIPIcs.ITCS.2023.83
article
Vertex Sparsification for Edge Connectivity in Polynomial Time
Liu, Yang P.
1
Department of Mathematics, Stanford University, CA, USA
An important open question in the area of vertex sparsification is whether (1+ε)-approximate cut-preserving vertex sparsifiers with size close to the number of terminals exist. The work [Parinya Chalermsook et al., 2021] (SODA 2021) introduced a relaxation called connectivity-c mimicking networks, which asks to construct a vertex sparsifier which preserves connectivity among k terminals exactly up to the value of c, and showed applications to dynamic connectivity data structures and survivable network design. We show that connectivity-c mimicking networks with Õ(kc³) edges exist and can be constructed in polynomial time in n and c, improving over the results of [Parinya Chalermsook et al., 2021] for any c ≥ log n, whose runtimes depended exponentially on c.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.83/LIPIcs.ITCS.2023.83.pdf
Vertex-sparsification
edge-connectivity
Gammoids
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
84:1
84:13
10.4230/LIPIcs.ITCS.2023.84
article
Fractional Certificates for Bounded Functions
Lovett, Shachar
1
https://orcid.org/0000-0003-4552-1443
Zhang, Jiapeng
2
https://orcid.org/0000-0002-6410-8649
Department of Computer Science and Engineering, University of California San Diego, CA, USA
Department of Computer Science, University of Southern California, Los Angeles, CA, USA
A folklore conjecture in quantum computing is that the acceptance probability of a quantum query algorithm can be approximated by a classical decision tree, with only a polynomial increase in the number of queries. Motivated by this conjecture, Aaronson and Ambainis (Theory of Computing, 2014) conjectured that this should hold more generally for any bounded function computed by a low degree polynomial.
In this work we prove two new results towards establishing this conjecture: first, that any such polynomial has a small fractional certificate complexity; and second, that many inputs have a small sensitive block. We show that these would imply the Aaronson and Ambainis conjecture, assuming a conjectured extension of Talagrand’s concentration inequality.
On the technical side, many classical techniques used in the analysis of Boolean functions seem to fail when applied to bounded functions. Here, we develop a new technique, based on a mix of combinatorics, analysis, and geometry, which in part extends a recent technique of Knop et al. (STOC 2021) to bounded functions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.84/LIPIcs.ITCS.2023.84.pdf
Aaronson-Ambainis conjecture
fractional block sensitivity
Talagrand inequality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
85:1
85:18
10.4230/LIPIcs.ITCS.2023.85
article
Improved Inapproximability of VC Dimension and Littlestone’s Dimension via (Unbalanced) Biclique
Manurangsi, Pasin
1
Google Research, Bangkok, Thailand
We study the complexity of computing (and approximating) VC Dimension and Littlestone’s Dimension when we are given the concept class explicitly. We give a simple reduction from the Maximum (Unbalanced) Biclique problem to approximating VC Dimension and Littlestone’s Dimension. With this connection, we derive a range of hardness of approximation results and running time lower bounds. For example, under the (randomized) Gap-Exponential Time Hypothesis or the Strongish Planted Clique Hypothesis, we show a tight inapproximability result: both dimensions are hard to approximate to within a factor of o(log n) in polynomial time. These improve upon constant-factor inapproximability results from [Pasin Manurangsi and Aviad Rubinstein, 2017].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.85/LIPIcs.ITCS.2023.85.pdf
VC Dimension
Littlestone’s Dimension
Maximum Biclique
Hardness of Approximation
Fine-Grained Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
86:1
86:19
10.4230/LIPIcs.ITCS.2023.86
article
Resilience of 3-Majority Dynamics to Non-Uniform Schedulers
Meir, Uri
1
Oshman, Rotem
1
Shayevitz, Ofer
2
Volkov, Yuval
2
Blavatnik School of Computer Science, Tel Aviv University, Israel
School of Electrical Engineering, Tel Aviv University, Israel
In recent years there has been great interest in networks of passive, computationally-weak nodes, whose interactions are controlled by the outside environment; examples include population protocols, chemical reaction networks (CRNs), DNA computing, and more. Such networks are usually studied under one of two extreme regimes: the schedule of interactions is either assumed to be adversarial, or it is assumed to be chosen uniformly at random. In this paper we study an intermediate regime, where the interaction at each step is chosen from some not-necessarily-uniform distribution: we introduce the definition of a (p,ε)-scheduler, where the distribution that the scheduler chooses at every round can be arbitrary, but it must have 𝓁_p-distance at most ε from the uniform distribution. We ask how far from uniform we can get before the dynamics of the model break down.
For simplicity, we focus on the 3-majority dynamics, a type of chemical reaction network where the nodes of the network interact in triplets. Each node initially has an opinion of either 𝖷 or 𝖸, and when a triplet of nodes interact, all three nodes change their opinion to the majority of their three opinions. It is known that under a uniformly random scheduler, if we have an initial gap of Ω(√{n log n}) in favor of one value, then w.h.p. all nodes converge to the majority value within O(n log n) steps.
For the 3-majority dynamics, we prove that among all non-uniform schedulers with a given 𝓁_1- or 𝓁_∞-distance to the uniform scheduler, the worst case is a scheduler that creates a partition in the network, disconnecting some nodes from the rest: under any (p,ε)-close scheduler, if the scheduler’s distance from uniform only suffices to disconnect a set of at most S nodes and we start from a configuration with a gap of Ω(S+√{n log n}) in favor of one value, then we are guaranteed that all but O(S) nodes will convert to the majority value. We also show that creating a partition is not necessary to cause the system to converge to the wrong value, or to fail to converge at all. We believe that our work can serve as a first step towards understanding the resilience of chemical reaction networks and population protocols under non-uniform schedulers.
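To make the uniform-scheduler baseline concrete, here is a minimal simulation sketch (ours, not from the paper); a (p,ε)-scheduler would replace the uniformly random triplet choice below with a near-uniform distribution over triplets:

```python
import random

def three_majority(n, gap, steps, rng):
    """Simulate 3-majority dynamics under the uniform scheduler.

    Opinions are +1 (X) and -1 (Y); initially (n+gap)/2 nodes hold X.
    At each step a uniformly random triplet interacts and all three
    nodes adopt the majority of their three opinions.  Returns the
    final number of X-nodes.
    """
    x_count = (n + gap) // 2
    opinions = [1] * x_count + [-1] * (n - x_count)
    for _ in range(steps):
        i, j, k = rng.sample(range(n), 3)
        maj = 1 if opinions[i] + opinions[j] + opinions[k] > 0 else -1
        opinions[i] = opinions[j] = opinions[k] = maj
    return sum(1 for o in opinions if o == 1)
```

With an initial gap of Ω(√(n log n)) and O(n log n) steps, runs of this sketch typically converge to the initial majority, matching the known uniform-scheduler behavior quoted above.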
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.86/LIPIcs.ITCS.2023.86.pdf
chemical reaction networks
population protocols
randomized scheduler
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
87:1
87:14
10.4230/LIPIcs.ITCS.2023.87
article
Proofs of Quantumness from Trapdoor Permutations
Morimae, Tomoyuki
1
Yamakawa, Takashi
2
1
Yukawa Institute for Theoretical Physics, Kyoto University, Japan
NTT Social Informatics Laboratories, Tokyo, Japan
Assume that Alice can do only classical probabilistic polynomial-time computing while Bob can do quantum polynomial-time computing. Alice and Bob communicate over only classical channels, and finally Bob gets a state |x₀⟩+|x₁⟩ with some bit strings x₀ and x₁. Is it possible that Alice can know {x₀,x₁} but Bob cannot? Such a task, called remote state preparation, is indeed possible under some complexity assumptions, and is the basis of many quantum cryptographic primitives such as proofs of quantumness, (classical-client) blind quantum computing, (classical) verifications of quantum computing, and quantum money. A typical technique to realize remote state preparation is to use 2-to-1 trapdoor collision resistant hash functions: Alice sends a 2-to-1 trapdoor collision resistant hash function f to Bob, and Bob evaluates it coherently, i.e., Bob generates ∑_x|x⟩|f(x)⟩. Bob measures the second register to get the measurement result y, and sends y to Alice. Bob’s post-measurement state is |x₀⟩+|x₁⟩, where f(x₀) = f(x₁) = y. With the trapdoor, Alice can learn {x₀,x₁} from y, but due to the collision resistance, Bob cannot. This advantage of Alice’s can be leveraged to realize the quantum cryptographic primitives listed above. It seems that the collision resistance is essential here. In this paper, surprisingly, we show that the collision resistance is not necessary for a restricted case: we show that (non-verifiable) remote state preparation of |x₀⟩+|x₁⟩ secure against classical probabilistic polynomial-time Bob can be constructed from classically-secure (full-domain) trapdoor permutations. Trapdoor permutations are not likely to imply the collision resistance, because black-box reductions from collision-resistant hash functions to trapdoor permutations are known to be impossible. As an application of our result, we construct proofs of quantumness from classically-secure (full-domain) trapdoor permutations.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.87/LIPIcs.ITCS.2023.87.pdf
Quantum cryptography
Proofs of quantumness
Trapdoor permutations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
88:1
88:20
10.4230/LIPIcs.ITCS.2023.88
article
Extremal Combinatorics, Iterated Pigeonhole Arguments and Generalizations of PPP
Pasarkar, Amol
1
https://orcid.org/0000-0002-2970-3207
Papadimitriou, Christos
1
Yannakakis, Mihalis
1
https://orcid.org/0000-0003-2857-1860
Columbia University, New York, NY, USA
We study the complexity of computational problems arising from existence theorems in extremal combinatorics. For some of these problems, a solution is guaranteed to exist based on an iterated application of the Pigeonhole Principle. This results in the definition of a new complexity class within TFNP, which we call PLC (for "polynomial long choice"). PLC includes all of PPP, as well as numerous previously unclassified total problems, including search problems related to Ramsey’s theorem, the Sunflower theorem, the Erdős-Ko-Rado lemma, and König’s lemma. Whether the first two of these four problems are PLC-complete is an important open question which we pursue; in contrast, we show that the latter two are PPP-complete. Finally, we reframe PPP as an optimization problem, and define a hierarchy of such problems related to Turán’s theorem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.88/LIPIcs.ITCS.2023.88.pdf
Total Complexity
Extremal Combinatorics
Pigeonhole Principle
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
89:1
89:19
10.4230/LIPIcs.ITCS.2023.89
article
The Strength of Equality Oracles in Communication
Pitassi, Toniann
1
Shirley, Morgan
2
Shraibman, Adi
3
Columbia University, New York, NY, USA
University of Toronto, Canada
The Academic College of Tel Aviv-Yaffo, Israel
It is well-known that randomized communication protocols are more powerful than deterministic protocols. In particular the Equality function requires Ω(n) deterministic communication complexity but has efficient randomized protocols. Previous work of Chattopadhyay, Lovett and Vinyals shows that randomized communication is strictly stronger than what can be solved by deterministic protocols equipped with an Equality oracle. Despite this separation, we are far from understanding the exact strength of Equality oracles in the context of communication complexity.
In this work we focus on nondeterministic communication equipped with an Equality oracle, which is a subclass of Merlin-Arthur communication. We show that this inclusion is strict by proving that the previously-studied Integer Inner Product function, which can be efficiently computed even with bounded-error randomness, cannot be computed using sublinear communication in the nondeterministic Equality model. To prove this we give a new matrix-theoretic characterization of the nondeterministic Equality model: specifically, there is a tight connection between this model and a covering number based on the blocky matrices of Hambardzumyan, Hatami, and Hatami, as well as a natural variant of the Gamma-2 factorization norm. Similar equivalences are shown for the unambiguous nondeterministic model with Equality oracles. A bonus result arises from these proofs: for the studied communication models, a single Equality oracle call suffices without loss of generality.
Our results allow us to prove a separation between deterministic and unambiguous nondeterminism in the presence of Equality oracles. This stands in contrast to the result of Yannakakis which shows that these models are polynomially-related without oracles. We suggest a number of intriguing open questions along this direction of inquiry, as well as others that arise from our work.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.89/LIPIcs.ITCS.2023.89.pdf
Factorization norm
blocky rank
Merlin-Arthur
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
90:1
90:14
10.4230/LIPIcs.ITCS.2023.90
article
Quantum Proofs of Deletion for Learning with Errors
Poremba, Alexander
1
https://orcid.org/0000-0002-7330-1539
California Institute of Technology, Pasadena, CA, USA
Quantum information has the property that measurement is an inherently destructive process. This feature is most apparent in the principle of complementarity, which states that mutually incompatible observables cannot be measured at the same time. Recent work by Broadbent and Islam (TCC 2020) builds on this aspect of quantum mechanics to realize a cryptographic notion called certified deletion. While this remarkable notion enables a classical verifier to be convinced that a (private-key) quantum ciphertext has been deleted by an untrusted party, it offers no additional layer of functionality.
In this work, we augment the proof-of-deletion paradigm with fully homomorphic encryption (FHE). We construct the first fully homomorphic encryption scheme with certified deletion - an interactive protocol which enables an untrusted quantum server to compute on encrypted data and, if requested, to simultaneously prove data deletion to a client. Our scheme has the desirable property that verification of a deletion certificate is public, meaning that anyone can verify that deletion has taken place. Our main technical ingredient is an interactive protocol by which a quantum prover can convince a classical verifier that a sample from the Learning with Errors (LWE) distribution in the form of a quantum state was deleted. As an application of our protocol, we construct a Dual-Regev public-key encryption scheme with certified deletion, which we then extend towards a (leveled) FHE scheme of the same type. We introduce the notion of Gaussian-collapsing hash functions - a special case of collapsing hash functions defined by Unruh (Eurocrypt 2016) - and we prove the security of our schemes under the assumption that the Ajtai hash function satisfies a certain strong Gaussian-collapsing property in the presence of leakage.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.90/LIPIcs.ITCS.2023.90.pdf
Learning with errors
certified deletion
fully homomorphic encryption
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
91:1
91:26
10.4230/LIPIcs.ITCS.2023.91
article
Online Pen Testing
Qiao, Mingda
1
https://orcid.org/0000-0002-9182-6152
Valiant, Gregory
1
https://orcid.org/0000-0002-2211-1073
Stanford University, CA, USA
We study a "pen testing" problem, in which we are given n pens with unknown amounts of ink X₁, X₂, …, X_n, and we want to choose a pen with the maximum amount of remaining ink in it. The challenge is that we cannot access each X_i directly; we only get to write with the i-th pen until either a certain amount of ink is used, or the pen runs out of ink. In both cases, this testing reduces the remaining ink in the pen and thus the utility of selecting it.
Despite this significant lack of information, we show that it is possible to approximately maximize our utility up to an O(log n) factor. Formally, we consider two different setups: the "prophet" setting, in which each X_i is independently drawn from some distribution 𝒟_i, and the "secretary" setting, in which (X_i)_{i=1}^n is a random permutation of arbitrary a₁, a₂, …, a_n. We derive the optimal competitive ratios in both settings up to constant factors. Our algorithms are surprisingly robust: (1) In the prophet setting, we only require one sample from each 𝒟_i, rather than a full description of the distribution; (2) In the secretary setting, the algorithm also succeeds under an arbitrary permutation, if an estimate of the maximum a_i is given.
Our techniques include a non-trivial online sampling scheme from a sequence with an unknown length, as well as the construction of a hard, non-uniform distribution over permutations. Both might be of independent interest. We also highlight some immediate open problems and discuss several directions for future research.
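A minimal sketch of the destructive access model described above (illustrative only; the `Pen` class and the single-threshold strategy are our assumptions, not the paper's algorithms):

```python
class Pen:
    """A pen with a hidden ink level; any test destructively consumes ink."""

    def __init__(self, ink):
        self._ink = ink  # not directly observable by the algorithm

    def test(self, amount):
        """Write until `amount` ink is used or the pen runs dry.

        Returns True iff the pen completed the write (had >= amount ink);
        either way, the ink used is gone.
        """
        survived = self._ink >= amount
        self._ink = max(0.0, self._ink - amount)
        return survived

def pick_with_threshold(pens, t):
    """Toy strategy: test each pen with threshold t and select the first
    survivor (its utility is whatever ink remains after the test)."""
    for i, pen in enumerate(pens):
        if pen.test(t):
            return i
    return None  # every pen failed the test
```

Testing at threshold t certifies at least t ink but also burns t ink from the selected pen, which is exactly the tension the O(log n)-competitive algorithms in the paper must manage.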
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.91/LIPIcs.ITCS.2023.91.pdf
Optimal stopping
online algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
92:1
92:20
10.4230/LIPIcs.ITCS.2023.92
article
Decision-Making Under Miscalibration
Rothblum, Guy N.
1
Yona, Gal
1
Weizmann Institute, Rehovot, Israel
How should we use ML-based predictions (e.g., risk of heart attack) to inform downstream binary classification decisions (e.g., undergoing a medical procedure)? When the risk estimates are perfectly calibrated, the answer is well understood: a classification problem’s cost structure induces an optimal treatment threshold j^⋆. In practice, however, predictors are often miscalibrated, and this can lead to harmful decisions. This raises a fundamental question: how should one use potentially miscalibrated predictions to inform binary decisions?
In this work, we study this question from the perspective of algorithmic fairness. Specifically, we focus on the impact of decisions on protected demographic subgroups, when we are only given a bound on the predictor’s anticipated degree of subgroup-miscalibration. We formalize a natural (distribution-free) solution concept for translating predictions into decisions: given anticipated miscalibration of α, we propose using the threshold j that minimizes the worst-case regret over all α-miscalibrated predictors, where the regret is the difference in clinical utility between using the threshold in question and using the optimal threshold in hindsight. We provide closed form expressions for j when miscalibration is measured using both expected and maximum calibration error which reveal that it indeed differs from j^⋆ (the optimal threshold under perfect calibration).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.92/LIPIcs.ITCS.2023.92.pdf
risk prediction
calibration
algorithmic fairness
multi-group fairness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
93:1
93:22
10.4230/LIPIcs.ITCS.2023.93
article
Beyond Worst-Case Budget-Feasible Mechanism Design
Rubinstein, Aviad
1
Zhao, Junyao
1
Computer Science Department, Stanford University, CA, USA
Motivated by large-market applications such as crowdsourcing, we revisit the problem of budget-feasible mechanism design under a "small-bidder assumption". Anari, Goel, and Nikzad (2018) gave a mechanism that has optimal competitive ratio 1-1/e on worst-case instances. However, we observe that on many realistic instances, their mechanism is significantly outperformed by a simpler open clock auction by Ensthaler and Giebe (2014), although the open clock auction only achieves competitive ratio 1/2 in the worst case. Is there a mechanism that gets the best of both worlds, i.e., a mechanism that is worst-case optimal and performs favorably on realistic instances? To answer this question, we initiate the study of beyond worst-case budget-feasible mechanism design.
Our first main result is the design and the analysis of a natural mechanism that gives an affirmative answer to our question above:
- We prove that on every instance, our mechanism performs at least as well as all uniform mechanisms, including Anari, Goel, and Nikzad’s and Ensthaler and Giebe’s mechanisms.
- Moreover, we empirically evaluate our mechanism on various realistic instances and observe that it beats the worst-case 1-1/e competitive ratio by a large margin and compares favorably to both mechanisms mentioned above.
Our second main result is more interesting in theory: We show that in the semi-adversarial model of budget-smoothed analysis, where the adversary designs a single worst-case market for a distribution of budgets, our mechanism is optimal among all (including non-uniform) mechanisms; furthermore, our mechanism guarantees a strictly better-than-(1-1/e) expected competitive ratio for any non-trivial budget distribution regardless of the market. (In contrast, given any bounded range of budgets, we can construct a single market where Anari, Goel, and Nikzad’s mechanism achieves only a 1-1/e competitive ratio for every budget in this range.) We complement the positive result with a characterization of the worst-case markets for any given budget distribution and prove a fairly robust hardness result that holds against any budget distribution and any mechanism.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.93/LIPIcs.ITCS.2023.93.pdf
Procurement auctions
Mechanism design
Beyond worst-case analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
94:1
94:23
10.4230/LIPIcs.ITCS.2023.94
article
Is It Easier to Count Communities Than Find Them?
Rush, Cynthia
1
https://orcid.org/0000-0001-6857-2855
Skerman, Fiona
2
https://orcid.org/0000-0003-4141-7059
Wein, Alexander S.
3
https://orcid.org/0000-0002-3406-1747
Yang, Dana
4
https://orcid.org/0000-0002-2158-0759
Department of Statistics, Columbia University, New York, NY, USA
Department of Mathematics, Uppsala University, Sweden
Department of Mathematics, University of California, Davis, CA, USA
Department of Statistics and Data Science, Cornell University, Ithaca, NY, USA
Random graph models with community structure have been studied extensively in the literature. For both the problems of detecting and recovering community structure, an interesting landscape of statistical and computational phase transitions has emerged. A natural unanswered question is: might it be possible to infer properties of the community structure (for instance, the number and sizes of communities) even in situations where actually finding those communities is believed to be computationally hard? We show the answer is no. In particular, we consider certain hypothesis testing problems between models with different community structures, and we show (in the low-degree polynomial framework) that testing between two options is as hard as finding the communities.
In addition, our methods give the first computational lower bounds for testing between two different "planted" distributions, whereas previous results have considered testing between a planted distribution and an i.i.d. "null" distribution.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.94/LIPIcs.ITCS.2023.94.pdf
Community detection
Hypothesis testing
Low-degree polynomials
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
95:1
95:20
10.4230/LIPIcs.ITCS.2023.95
article
An Improved Lower Bound for Matroid Intersection Prophet Inequalities
Saxena, Raghuvansh R.
1
Velusamy, Santhoshini
2
Weinberg, S. Matthew
3
Microsoft Research, Cambridge, MA, USA
Harvard University, Cambridge, MA, USA
Princeton University, NJ, USA
We consider prophet inequalities subject to feasibility constraints that are the intersection of q matroids. The best-known algorithms achieve a Θ(q)-approximation, even when restricted to instances that are the intersection of q partition matroids, and with i.i.d. Bernoulli random variables [José R. Correa et al., 2022; Moran Feldman et al., 2016; Marek Adamczyk and Michal Wlodarczyk, 2018]. The previous best-known lower bound is Θ(√q) due to a simple construction of [Robert Kleinberg and S. Matthew Weinberg, 2012] (which uses i.i.d. Bernoulli random variables, and writes the construction as the intersection of partition matroids).
We establish an improved lower bound of q^{1/2+Ω(1/log log q)} by writing the construction of [Robert Kleinberg and S. Matthew Weinberg, 2012] as the intersection of asymptotically fewer partition matroids. We accomplish this via an improved upper bound on the product dimension of a graph with p^p disjoint cliques of size p, using recent techniques developed in [Noga Alon and Ryan Alweiss, 2020].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.95/LIPIcs.ITCS.2023.95.pdf
Prophet Inequalities
Intersection of Matroids
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
96:1
96:17
10.4230/LIPIcs.ITCS.2023.96
article
Unitary Property Testing Lower Bounds by Polynomials
She, Adrian
1
Yuen, Henry
2
University of Toronto, Canada
Columbia University, New York, NY, USA
We study unitary property testing, where a quantum algorithm is given query access to a black-box unitary and has to decide whether it satisfies some property. In addition to containing the standard quantum query complexity model (where the unitary encodes a binary string) as a special case, this model contains "inherently quantum" problems that have no classical analogue. Characterizing the query complexity of these problems requires new algorithmic techniques and lower bound methods.
Our main contribution is a generalized polynomial method for unitary property testing problems. By leveraging connections with invariant theory, we apply this method to obtain lower bounds on problems such as determining recurrence times of unitaries, approximating the dimension of a marked subspace, and approximating the entanglement entropy of a marked state. We also present a unitary property testing-based approach towards an oracle separation between QMA and QMA(2), a longstanding question in quantum complexity theory.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.96/LIPIcs.ITCS.2023.96.pdf
Quantum query complexity
polynomial method
unitary property testing
quantum proofs
invariant theory
quantum recurrence time
entanglement entropy
BQP
QMA
QMA(2)
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
97:1
97:22
10.4230/LIPIcs.ITCS.2023.97
article
What Can Cryptography Do for Decentralized Mechanism Design?
Shi, Elaine
1
Chung, Hao
2
Wu, Ke
3
https://orcid.org/0000-0002-2756-8750
ECE and CSD Department, Carnegie Mellon University, Pittsburgh, PA, USA
ECE Department, Carnegie Mellon University, Pittsburgh, PA, USA
CSD Department, Carnegie Mellon University, Pittsburgh, PA, USA
Recent works of Roughgarden (EC'21) and Chung and Shi (SODA'23) initiate the study of a new decentralized mechanism design problem called transaction fee mechanism design (TFM). Unlike the classical mechanism design literature, in the decentralized environment, even the auctioneer (i.e., the miner) can be a strategic player, and it can even collude with a subset of the users facilitated by binding side contracts. Chung and Shi showed two main impossibility results that rule out the existence of a dream TFM. First, any TFM that provides incentive compatibility for individual users and miner-user coalitions must always have zero miner revenue, no matter whether the block size is finite or infinite. Second, assuming finite block size, no non-trivial TFM can simultaneously provide incentive compatibility for any individual user and for any miner-user coalition.
In this work, we explore what new models and meaningful relaxations can allow us to circumvent the impossibility results of Chung and Shi. Besides today’s model that does not employ cryptography, we introduce a new MPC-assisted model where the TFM is implemented by a joint multi-party computation (MPC) protocol among the miners. We prove several feasibility and infeasibility results for achieving strict and approximate incentive compatibility, respectively, in the plain model as well as the MPC-assisted model. We show that while cryptography is not a panacea, it indeed allows us to overcome some impossibility results pertaining to the plain model, leading to non-trivial mechanisms with useful guarantees that are otherwise impossible in the plain model. Our work is also the first to characterize the mathematical landscape of transaction fee mechanism design under approximate incentive compatibility, as well as in a cryptography-assisted model.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.97/LIPIcs.ITCS.2023.97.pdf
Transaction Fee Mechanism Design
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
98:1
98:12
10.4230/LIPIcs.ITCS.2023.98
article
Efficient Algorithms for Certifying Lower Bounds on the Discrepancy of Random Matrices
Venkat, Prayaag
1
https://orcid.org/0000-0002-1366-8646
School of Engineering and Applied Sciences, Harvard University, Boston, MA, USA
In this paper, we initiate the study of the algorithmic problem of certifying lower bounds on the discrepancy of random matrices: given an input matrix A ∈ ℝ^{m × n}, output a value that is a lower bound on disc(A) = min_{x ∈ {± 1}ⁿ} ‖Ax‖_∞ for every A, but is close to the typical value of disc(A) with high probability over the choice of a random A. This problem is important because of its connections to conjecturally-hard average-case problems such as negatively-spiked PCA [Afonso S. Bandeira et al., 2020], the number-balancing problem [Gamarnik and Kızıldağ, 2021] and refuting random constraint satisfaction problems [Prasad Raghavendra et al., 2017]. We give the first polynomial-time algorithms with non-trivial guarantees for two main settings. First, when the entries of A are i.i.d. standard Gaussians, it is known that disc(A) = Θ (√n2^{-n/m}) with high probability [Karthekeyan Chandrasekaran and Santosh S. Vempala, 2014; Aubin et al., 2019; Paxton Turner et al., 2020] and that super-constant levels of the Sum-of-Squares SDP hierarchy fail to certify anything better than disc(A) ≥ 0 when m < n - o(n) [Mrinalkanti Ghosh et al., 2020]. In contrast, our algorithm certifies that disc(A) ≥ exp(-O(n²/m)) with high probability. As an application, this formally refutes a conjecture of Bandeira, Kunisky, and Wein [Afonso S. Bandeira et al., 2020] on the computational hardness of the detection problem in the negatively-spiked Wishart model. Second, we consider the integer partitioning problem: given n uniformly random b-bit integers a₁, …, a_n, certify the non-existence of a perfect partition, i.e. certify that disc(A) ≥ 1 for A = (a₁, …, a_n). Under the scaling b = α n, it is known that the probability of the existence of a perfect partition undergoes a phase transition from 1 to 0 at α = 1 [Christian Borgs et al., 2001]; our algorithm certifies the non-existence of perfect partitions for some α = O(n). 
We also give efficient non-deterministic algorithms with significantly improved guarantees, raising the possibility that the landscape of these certification problems closely resembles that of e.g. the problem of refuting random 3SAT formulas in the unsatisfiable regime. Our algorithms involve a reduction to the Shortest Vector Problem and employ the Lenstra-Lenstra-Lovász algorithm.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.98/LIPIcs.ITCS.2023.98.pdf
Average-case discrepancy theory
lattices
shortest vector problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
99:1
99:26
10.4230/LIPIcs.ITCS.2023.99
article
On Oracles and Algorithmic Methods for Proving Lower Bounds
Vyas, Nikhil
1
https://orcid.org/0000-0002-4055-7693
Williams, Ryan
2
https://orcid.org/0000-0003-2326-2233
Harvard University, Cambridge, MA, USA
MIT CSAIL and EECS, Cambridge, MA, USA
This paper studies the interaction of oracles with algorithmic approaches to proving circuit complexity lower bounds, establishing new results on two different kinds of questions.
1) We revisit some prominent open questions in circuit lower bounds, and provide a clean way of viewing them as circuit upper bound questions. Let Missing-String be the (total) search problem of producing a string that does not appear in a given list L containing M bit-strings of length N, where M < 2^N. We show in a generic way how algorithms and uniform circuits (from restricted classes) for Missing-String imply complexity lower bounds (and in some cases, the converse holds as well).
We give a local algorithm for Missing-String, which can compute any desired output bit while making very few probes into the input, when the number of strings M is small enough. We apply this to prove a new nearly-optimal (up to oracles) time hierarchy theorem with advice.
We show that the problem of constructing restricted uniform circuits for Missing-String is essentially equivalent to constructing functions without small non-uniform circuits, in a relativizing way. For example, we prove that small uniform depth-3 circuits for Missing-String would imply exponential circuit lower bounds for Σ₂ EXP, and depth-3 lower bounds for Missing-String would imply non-trivial circuits (relative to an oracle) for Σ₂ EXP problems. Both conclusions are longstanding open problems in circuit complexity.
2) It has been known since Impagliazzo, Kabanets, and Wigderson [JCSS 2002] that generic derandomizations improving subexponentially over exhaustive search would imply lower bounds such as NEXP ⊄ 𝖯/poly. Williams [SICOMP 2013] showed that Circuit-SAT algorithms running barely faster than exhaustive search would imply similar lower bounds. The known proofs of such results do not relativize (they use techniques from interactive proofs/PCPs). However, it has remained open whether there is an oracle under which the generic implications from circuit-analysis algorithms to circuit lower bounds fail.
Building on an oracle of Fortnow, we construct an oracle relative to which the circuit approximation probability problem (CAPP) is in 𝖯, yet EXP^{NP} has polynomial-size circuits.
We construct an oracle relative to which SAT can be solved in "half-exponential" time, yet exponential time (EXP) has polynomial-size circuits. Improving EXP to NEXP would give an oracle relative to which Σ₂ 𝖤 has "half-exponential" size circuits, which is open. (Recall it is known that Σ₂ 𝖤 is not in "sub-half-exponential" size, and the proof relativizes.) Moreover, the running time of the SAT algorithm cannot be improved: relative to all oracles, if SAT is in "sub-half-exponential" time then EXP does not have polynomial-size circuits.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.99/LIPIcs.ITCS.2023.99.pdf
oracles
relativization
circuit complexity
missing string
exponential hierarchy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
100:1
100:28
10.4230/LIPIcs.ITCS.2023.100
article
The Time Complexity of Consensus Under Oblivious Message Adversaries
Winkler, Kyrill
1
https://orcid.org/0000-0002-7310-1748
Paz, Ami
2
https://orcid.org/0000-0002-6629-8335
Rincon Galeana, Hugo
3
https://orcid.org/0000-0002-8152-1275
Schmid, Stefan
4
5
https://orcid.org/0000-0002-7798-1711
Schmid, Ulrich
3
https://orcid.org/0000-0001-9831-8583
ITK Engineering, Wien, Austria
LISN - CNRS & Paris-Saclay University, France
TU Wien, Austria
TU Berlin, Germany
Fraunhofer SIT, Darmstadt, Germany
We study the problem of solving consensus in synchronous directed dynamic networks, in which communication is controlled by an oblivious message adversary that arbitrarily picks the communication graph to be used in each round from a fixed set of graphs 𝐃. In this fundamental model, determining consensus solvability and designing efficient consensus algorithms is surprisingly difficult. Enabled by a decision procedure that is derived from a well-established previous consensus solvability characterization for a given set 𝐃, we study, for the first time, the time complexity of solving consensus in this model: We provide both upper and lower bounds for this time complexity, and also relate it to the number of iterations required by the decision procedure. Among other results, we find that reaching consensus under an oblivious message adversary can take exponentially longer than both deciding consensus solvability and broadcasting the input value of some unknown process to all other processes.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.100/LIPIcs.ITCS.2023.100.pdf
dynamic networks
oblivious message adversaries
consensus
time complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2023-02-01
251
101:1
101:22
10.4230/LIPIcs.ITCS.2023.101
article
Exponential Separations Using Guarded Extension Variables
Yolcu, Emre
1
https://orcid.org/0000-0002-4255-9748
Heule, Marijn J. H.
1
https://orcid.org/0000-0002-5587-8801
Carnegie Mellon University, Pittsburgh, PA, USA
We study the complexity of proof systems augmenting resolution with inference rules that allow, given a formula Γ in conjunctive normal form, deriving clauses that are not necessarily logically implied by Γ but whose addition to Γ preserves satisfiability. When the derived clauses are allowed to introduce variables not occurring in Γ, the systems we consider become equivalent to extended resolution. We are concerned with the versions of these systems without new variables. They are called BC⁻, RAT⁻, SBC⁻, and GER⁻, denoting respectively blocked clauses, resolution asymmetric tautologies, set-blocked clauses, and generalized extended resolution. Each of these systems formalizes some restricted version of the ability to make assumptions that hold "without loss of generality," which is commonly used informally to simplify or shorten proofs.
Except for SBC⁻, these systems are known to be exponentially weaker than extended resolution. They are, however, all equivalent to it under a relaxed notion of simulation that allows the translation of the formula along with the proof when moving between proof systems. By taking advantage of this fact, we construct formulas that separate RAT⁻ from GER⁻ and vice versa. With the same strategy, we also separate SBC⁻ from RAT⁻. Additionally, we give polynomial-size SBC⁻ proofs of the pigeonhole principle, which separates SBC⁻ from GER⁻ by a previously known lower bound. These results also separate the three systems from BC⁻ since they all simulate it. We thus give an almost complete picture of their relative strengths.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol251-itcs2023/LIPIcs.ITCS.2023.101/LIPIcs.ITCS.2023.101.pdf
proof complexity
separations
resolution
extended resolution
blocked clauses