eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
1
2170
10.4230/LIPIcs.ITCS.2024
article
LIPIcs, Volume 287, ITCS 2024, Complete Volume
Guruswami, Venkatesan
1
https://orcid.org/0000-0001-7926-3396
University of California, Berkeley, CA, USA
LIPIcs, Volume 287, ITCS 2024, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024/LIPIcs.ITCS.2024.pdf
LIPIcs, Volume 287, ITCS 2024, Complete Volume
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
0:i
0:xxiv
10.4230/LIPIcs.ITCS.2024.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Guruswami, Venkatesan
1
https://orcid.org/0000-0001-7926-3396
University of California, Berkeley, CA, USA
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.0/LIPIcs.ITCS.2024.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
1:1
1:24
10.4230/LIPIcs.ITCS.2024.1
article
A Qubit, a Coin, and an Advice String Walk into a Relational Problem
Aaronson, Scott
1
2
Buhrman, Harry
3
4
5
Kretschmer, William
6
7
https://orcid.org/0000-0002-7784-9817
University of Texas at Austin, TX, USA
OpenAI, San Francisco, CA, USA
QuSoft, Amsterdam, The Netherlands
Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
University of Amsterdam, The Netherlands
Simons Institute for the Theory of Computing, Berkeley, CA, USA
University of California, Berkeley, CA, USA
Relational problems (those with many possible valid outputs) are different from decision problems, but it is easy to forget just how different. This paper initiates the study of FBQP/qpoly, the class of relational problems solvable in quantum polynomial-time with the help of polynomial-sized quantum advice, along with its analogues for deterministic and randomized computation (FP, FBPP) and advice (/poly, /rpoly).
Our first result is that FBQP/qpoly ≠ FBQP/poly, unconditionally, with no oracle - a striking contrast with what we know about the analogous decision classes. The proof repurposes the separation between quantum and classical one-way communication complexities due to Bar-Yossef, Jayram, and Kerenidis. We discuss how this separation raises the prospect of near-term experiments to demonstrate "quantum information supremacy," a form of quantum supremacy that would not depend on unproved complexity assumptions.
Our second result is that FBPP ⊄ FP/poly - that is, Adleman’s Theorem fails for relational problems - unless PSPACE ⊂ NP/poly. Our proof uses IP = PSPACE and time-bounded Kolmogorov complexity. On the other hand, we show that proving FBPP ⊄ FP/poly will be hard, as it implies a superpolynomial circuit lower bound for PromiseBPEXP.
We prove the following further results:
- Unconditionally, FP ≠ FBPP and FP/poly ≠ FBPP/poly (even when these classes are carefully defined).
- FBPP/poly = FBPP/rpoly (and likewise for FBQP). For sampling problems, by contrast, SampBPP/poly ≠ SampBPP/rpoly (and likewise for SampBQP).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.1/LIPIcs.ITCS.2024.1.pdf
Relational problems
quantum advice
randomized advice
FBQP
FBPP
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
2:1
2:21
10.4230/LIPIcs.ITCS.2024.2
article
Quantum Pseudoentanglement
Aaronson, Scott
1
Bouland, Adam
2
Fefferman, Bill
3
Ghosh, Soumik
3
Vazirani, Umesh
4
Zhang, Chenyi
2
Zhou, Zixin
2
Department of Computer Science, University of Texas at Austin, TX, USA
Department of Computer Science, Stanford University, CA, USA
Department of Computer Science, University of Chicago, IL, USA
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA
Entanglement is a quantum resource, in some ways analogous to randomness in classical computation. Inspired by recent work of Gheorghiu and Hoban, we define the notion of "pseudoentanglement", a property exhibited by ensembles of efficiently constructible quantum states which are indistinguishable from quantum states with maximal entanglement. Our construction relies on the notion of quantum pseudorandom states - first defined by Ji, Liu and Song - which are efficiently constructible states indistinguishable from (maximally entangled) Haar-random states. Specifically, we give a construction of pseudoentangled states with entanglement entropy arbitrarily close to log n across every cut, a tight bound providing an exponential separation between computational and information-theoretic quantum pseudorandomness. We discuss applications of this result to Matrix Product State testing, entanglement distillation, and the complexity of the AdS/CFT correspondence. As compared with a previous version of this manuscript (arXiv:2211.00747v1), this version introduces a new pseudorandom state construction, has a simpler proof of correctness, and achieves a technically stronger result of low entanglement across all cuts simultaneously.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.2/LIPIcs.ITCS.2024.2.pdf
Quantum computing
Quantum complexity theory
entanglement
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
3:1
3:21
10.4230/LIPIcs.ITCS.2024.3
article
Differentially Private Medians and Interior Points for Non-Pathological Data
Aliakbarpour, Maryam
1
Silver, Rose
2
Steinke, Thomas
3
Ullman, Jonathan
2
Department of Computer Science, Rice University, Houston, TX, USA
Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA
Google DeepMind, Mountain View, CA, USA
We construct sample-efficient differentially private estimators for the approximate-median and interior-point problems, that can be applied to arbitrary input distributions over ℝ satisfying very mild statistical assumptions. Our results stand in contrast to the surprising negative result of Bun et al. (FOCS 2015), which showed that private estimators with finite sample complexity cannot produce interior points on arbitrary distributions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.3/LIPIcs.ITCS.2024.3.pdf
Differential Privacy
Statistical Estimation
Approximate Medians
Interior Point Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
4:1
4:23
10.4230/LIPIcs.ITCS.2024.4
article
Tensor Ranks and the Fine-Grained Complexity of Dynamic Programming
Alman, Josh
1
Turok, Ethan
1
Yu, Hantao
1
Zhang, Hengzhi
1
Columbia University, New York, NY, USA
Generalizing work of Künnemann, Paturi, and Schneider [ICALP 2017], we study a wide class of high-dimensional dynamic programming (DP) problems in which one must find the shortest path between two points in a high-dimensional grid given a tensor of transition costs between nodes in the grid. This captures many classical problems which are solved using DP such as the knapsack problem, the airplane refueling problem, and the minimal-weight polygon triangulation problem. We observe that for many of these problems, the tensor naturally has low tensor rank or low slice rank.
We then give new algorithms and a web of fine-grained reductions to tightly determine the complexity of these problems. For instance, we show that a polynomial speedup over the DP algorithm is possible when the tensor rank is a constant or the slice rank is 1, but that such a speedup is impossible if the tensor rank is slightly super-constant (assuming SETH) or the slice rank is at least 3 (assuming the APSP conjecture).
We find that this characterizes the known complexities for many of these problems, and in some cases leads to new faster algorithms.
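The knapsack problem mentioned in this abstract is a representative member of the DP class studied; a minimal textbook dynamic program for 0/1 knapsack (an illustrative sketch only, not the paper's tensor-rank formulation) looks like:

```python
def knapsack(values, weights, capacity):
    # dp[c] = best total value achievable with total weight at most c.
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # → 220
```

The quadratic table structure here is exactly the kind of transition-cost object that, viewed as a tensor, may have low tensor rank or slice rank.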
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.4/LIPIcs.ITCS.2024.4.pdf
Fine-grained complexity
Dynamic programming
Least-weight subsequence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
5:1
5:24
10.4230/LIPIcs.ITCS.2024.5
article
On the Complexity of Computing Sparse Equilibria and Lower Bounds for No-Regret Learning in Games
Anagnostides, Ioannis
1
Kalavasis, Alkis
2
Sandholm, Tuomas
1
Zampetakis, Manolis
2
Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
Department of Computer Science, Yale University, New Haven, CT, USA
Characterizing the performance of no-regret dynamics in multi-player games is a foundational problem at the interface of online learning and game theory. Recent results have revealed that when all players adopt specific learning algorithms, it is possible to improve exponentially over what is predicted by the overly pessimistic no-regret framework in the traditional adversarial regime, thereby leading to faster convergence to the set of coarse correlated equilibria (CCE) - a standard game-theoretic equilibrium concept. Yet, despite considerable recent progress, the fundamental complexity barriers for learning in normal- and extensive-form games are poorly understood. In this paper, we take a step toward closing this gap by first showing that - barring major complexity breakthroughs - any polynomial-time learning algorithm in extensive-form games needs at least 2^{log^{1/2 - o(1)} |𝒯|} iterations for the average regret to drop below even an absolute constant, where |𝒯| is the number of nodes in the game. This establishes a superpolynomial separation between no-regret learning in normal- and extensive-form games, as in the former class a logarithmic number of iterations suffices to achieve constant average regret. Furthermore, our results imply that algorithms such as multiplicative weights update, as well as its optimistic counterpart, require at least 2^{(log log m)^{1/2 - o(1)}} iterations to attain an O(1)-CCE in m-action normal-form games under any parameterization. These are the first non-trivial - and dimension-dependent - lower bounds in that setting for the most well-studied algorithms in the literature. From a technical standpoint, we follow a beautiful connection recently made by Foster, Golowich, and Kakade (ICML '23) between sparse CCE and Nash equilibria in the context of Markov games.
Consequently, our lower bounds rule out polynomial-time algorithms well beyond the traditional online learning framework, capturing techniques commonly used for accelerating centralized equilibrium computation.
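The multiplicative weights update rule targeted by the lower bounds above has a simple generic form; the following is a standard textbook sketch of one MWU step (the function name, learning rate, and toy loss sequence are illustrative, not the paper's construction):

```python
import math

def mwu_step(weights, losses, eta):
    # One multiplicative weights update: scale each action's weight by
    # exp(-eta * loss), then renormalize to a probability distribution.
    new = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(new)
    return [w / total for w in new]

# Two actions; action 0 repeatedly incurs loss 1, so its probability decays
# geometrically while action 1 absorbs almost all the mass.
p = [0.5, 0.5]
for _ in range(100):
    p = mwu_step(p, [1.0, 0.0], eta=0.1)
print(p[1] > 0.99)  # → True
```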
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.5/LIPIcs.ITCS.2024.5.pdf
No-regret learning
extensive-form games
multiplicative weights update
optimism
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
6:1
6:22
10.4230/LIPIcs.ITCS.2024.6
article
Pseudorandom Strings from Pseudorandom Quantum States
Ananth, Prabhanjan
1
https://orcid.org/0000-0001-5387-5730
Lin, Yao-Ting
1
Yuen, Henry
2
https://orcid.org/0000-0002-2684-1129
Department of Computer Science, University of California Santa Barbara, CA, USA
Department of Computer Science, Columbia University, New York, NY, USA
We study the relationship between notions of pseudorandomness in the quantum and classical worlds. A pseudorandom quantum state generator (PRSG), a pseudorandomness notion in the quantum world, is an efficient circuit that produces states that are computationally indistinguishable from Haar random states. PRSGs have found applications in quantum gravity, quantum machine learning, quantum complexity theory, and quantum cryptography. Pseudorandom generators (PRGs), on the other hand, a pseudorandomness notion in the classical world, are ubiquitous in theoretical computer science. While some separation results were known between PRSGs, for some parameter regimes, and PRGs, their relationship has not been completely understood.
In this work, we show that a natural variant of pseudorandom generators called quantum pseudorandom generators (QPRGs) can be based on the existence of logarithmic output length PRSGs. Our result along with the previous separations gives a better picture regarding the relationship between the two notions. We also study the relationship between other notions, namely, pseudorandom function-like state generators and pseudorandom functions. We provide evidence that QPRGs can be as useful as PRGs by providing cryptographic applications of QPRGs such as commitments and encryption schemes.
Our primary technical contribution is a method for pseudodeterministically extracting uniformly random strings from Haar-random states.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.6/LIPIcs.ITCS.2024.6.pdf
Quantum Cryptography
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
7:1
7:20
10.4230/LIPIcs.ITCS.2024.7
article
Geometric Covering via Extraction Theorem
Bandyapadhyay, Sayan
1
https://orcid.org/0000-0001-8875-0102
Maheshwari, Anil
2
https://orcid.org/0000-0002-1274-4598
Roy, Sasanka
3
Smid, Michiel
2
Varadarajan, Kasturi
4
Department of Computer Science, Portland State University, OR, USA
School of Computer Science, Carleton University, Ottawa, Canada
ACMU, Indian Statistical Institute, Kolkata, India
Department of Computer Science, University of Iowa, IA, USA
In this work, we address the following question. Suppose we are given a set D of positive-weighted disks and a set T of n points in the plane, such that each point of T is contained in at least two disks of D. Then is there always a subset S of D such that the union of the disks in S contains all the points of T and the total weight of the disks of D that are not in S is at least a constant fraction of the total weight of the disks in D?
In our work, we prove the Extraction Theorem that answers this question in the affirmative. Our constructive proof heavily exploits the geometry of disks, and in the process, we make interesting connections between our work and the literature on local search for geometric optimization problems.
The Extraction Theorem helps to design the first polynomial-time O(1)-approximations for two important geometric covering problems involving disks.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.7/LIPIcs.ITCS.2024.7.pdf
Covering
Extraction theorem
Double-disks
Submodularity
Local search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
8:1
8:23
10.4230/LIPIcs.ITCS.2024.8
article
Sublinear Approximation Algorithm for Nash Social Welfare with XOS Valuations
Barman, Siddharth
1
https://orcid.org/0000-0001-9276-2181
Krishna, Anand
1
https://orcid.org/0009-0003-1017-0361
Kulkarni, Pooja
2
https://orcid.org/0000-0003-1983-1317
Narang, Shivika
3
https://orcid.org/0000-0002-9220-5200
Indian Institute of Science, Bangalore, India
University of Illinois at Urbana-Champaign, IL, USA
Simons Laufer Mathematical Sciences Institute, Berkeley, CA, USA
We study the problem of allocating indivisible goods among n agents with the objective of maximizing Nash social welfare (NSW). This welfare function is defined as the geometric mean of the agents' valuations and, hence, it strikes a balance between the extremes of social welfare (arithmetic mean) and egalitarian welfare (max-min value). Nash social welfare has been extensively studied in recent years for various valuation classes. In particular, a notable negative result is known when the agents' valuations are complement-free and are specified via value queries: for XOS valuations, one necessarily requires exponentially many value queries to find any sublinear (in n) approximation for NSW. Indeed, this lower bound implies that stronger query models are needed for finding better approximations. Towards this, we utilize demand oracles and XOS oracles; both of these query models are standard and have been used in prior work on social welfare maximization with XOS valuations.
We develop the first sublinear approximation algorithm for maximizing Nash social welfare under XOS valuations, specified via demand and XOS oracles. Hence, this work breaks the O(n)-approximation barrier for NSW maximization under XOS valuations. We obtain this result by developing a novel connection between NSW and social welfare under a capped version of the agents' valuations. In addition to this insight, which might be of independent interest, this work relies on an intricate combination of multiple technical ideas, including the use of repeated matchings and the discrete moving knife method. In addition, we partially complement the algorithmic result by showing that, under XOS valuations, an exponential number of demand and XOS queries are necessarily required to approximate NSW within a factor of (1 - 1/e).
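As a quick illustration of the objective, Nash social welfare is the geometric mean of the agents' valuations; the helper below (a hypothetical name, for illustration only) shows how it interpolates between egalitarian welfare (the minimum) and social welfare (the arithmetic mean):

```python
def nash_social_welfare(valuations):
    # Geometric mean of the agents' valuations for their bundles.
    prod = 1.0
    for v in valuations:
        prod *= v
    return prod ** (1.0 / len(valuations))

vals = [1.0, 4.0, 16.0]
# The geometric mean (about 4.0 here) sits between egalitarian welfare
# (min = 1.0) and utilitarian social welfare (arithmetic mean = 7.0).
print(min(vals), nash_social_welfare(vals), sum(vals) / len(vals))
```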
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.8/LIPIcs.ITCS.2024.8.pdf
Discrete Fair Division
Nash Social Welfare
XOS Valuations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
9:1
9:19
10.4230/LIPIcs.ITCS.2024.9
article
Quantum Merlin-Arthur and Proofs Without Relative Phase
Bassirian, Roozbeh
1
Fefferman, Bill
1
Marwaha, Kunal
1
https://orcid.org/0000-0001-9084-6971
University of Chicago, IL, USA
We study a variant of QMA where quantum proofs have no relative phase (i.e. non-negative amplitudes, up to a global phase). If only completeness is modified, this class is equal to QMA [Grilo et al., 2014]; but if both completeness and soundness are modified, the class (named QMA+ by Jeronimo and Wu [Jeronimo and Wu, 2023]) can be much more powerful. We show that QMA+ with some constant gap is equal to NEXP, yet QMA+ with some other constant gap is equal to QMA. One interpretation is that Merlin’s ability to "deceive" originates from relative phase at least as much as from entanglement, since QMA(2) ⊆ NEXP.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.9/LIPIcs.ITCS.2024.9.pdf
quantum complexity
QMA(2)
PCPs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
10:1
10:24
10.4230/LIPIcs.ITCS.2024.10
article
Towards Stronger Depth Lower Bounds
Bathie, Gabriel
1
2
https://orcid.org/0000-0003-2400-4914
Williams, R. Ryan
3
https://orcid.org/0000-0003-2326-2233
LaBRI, Université de Bordeaux, France
DIENS, PSL Research University, Paris, France
CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
A fundamental problem in circuit complexity is to find explicit functions that require large depth to compute. When considering the natural DeMorgan basis of {OR,AND}, where negations incur no cost, the best known depth lower bounds for an explicit function in NP have the form (3-o(1))log₂ n, established by Håstad (building on others) in the early 1990s. We make progress on the problem of improving this factor of 3, in two different ways:
- We consider an "algorithmic method" approach to proving stronger depth lower bounds for non-uniform circuits in the DeMorgan basis. We show that slightly faster algorithms (than what is known) for counting the number of satisfying assignments on subcubic-size DeMorgan formulas would imply supercubic-size DeMorgan formula lower bounds, implying that the depth must be at least (3+ε)log₂ n for some ε > 0. For example, if #SAT on formulas of size n^{2+2ε} can be solved in 2^{n - n^{1-ε}log^k n} time for some ε > 0 and a sufficiently large constant k, then there is a function computable in 2^{O(n)} time with a SAT oracle which does not have n^{3+ε}-size formulas. In fact, the #SAT algorithm only has to work on formulas that are a conjunction of n^{1-ε} subformulas, each of which is n^{1+3ε} size, in order to obtain the supercubic lower bound. As a proof of concept, we show that our new algorithms-to-lower-bounds connection can be applied to prove new lower bounds for "hybrid" DeMorgan formula models which compute interesting functions at their leaves.
- Turning to the {NAND} basis, we establish a greater-than-(3 log₂ n) depth lower bound against uniform circuits solving the SAT problem, using an extension of the "indirect diagonalization" method for NAND formulas. Note that circuits over the NAND basis are a special case of circuits over the DeMorgan basis; however, hard functions such as Andreev’s function (known to require depth (3-o(1))log₂ n in the DeMorgan basis) can still be computed with NAND circuits of depth (3+o(1))log₂ n. Our results imply that SAT requires polylogtime-uniform NAND circuits of depth at least 3.603 log₂ n.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.10/LIPIcs.ITCS.2024.10.pdf
DeMorgan formulas
depth complexity
circuit complexity
lower bounds
#SAT
NAND gates
SAT
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
11:1
11:25
10.4230/LIPIcs.ITCS.2024.11
article
Property Testing with Online Adversaries
Ben-Eliezer, Omri
1
https://orcid.org/0000-0001-6366-5964
Kelman, Esty
2
3
https://orcid.org/0009-0007-4962-848X
Meir, Uri
4
https://orcid.org/0009-0003-4274-346X
Raskhodnikova, Sofya
5
https://orcid.org/0000-0002-4902-050X
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA
Department of Computer Science and Faculty of Computing & Data Sciences, Boston University, MA, USA
CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
Blavatnik School of Computer Science, Tel Aviv University, Israel
Department of Computer Science, Boston University, MA, USA
The online manipulation-resilient testing model, proposed by Kalemaj, Raskhodnikova and Varma (ITCS 2022 and Theory of Computing 2023), studies property testing in situations where access to the input degrades continuously and adversarially. Specifically, after each query made by the tester is answered, the adversary can intervene and either erase or corrupt t data points. In this work, we investigate a more nuanced version of the online model in order to overcome old and new impossibility results for the original model. We start by presenting an optimal tester for linearity and a lower bound for low-degree testing of Boolean functions in the original model. We overcome the lower bound by allowing batch queries, where the tester gets a group of queries answered between manipulations of the data. Our batch size is small enough so that function values for a single batch on their own give no information about whether the function is of low degree. Finally, to overcome the impossibility results of Kalemaj et al. for sortedness and the Lipschitz property of sequences, we extend the model to include t < 1, i.e., adversaries that make less than one erasure per query. For sortedness, we characterize the rate of erasures for which online testing can be performed, exhibiting a sharp transition from optimal query complexity to impossibility of testability (with any number of queries). Our online tester works for a general class of local properties of sequences. One feature of our results is that we get new (and in some cases, simpler) optimal algorithms for several properties in the standard property testing model.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.11/LIPIcs.ITCS.2024.11.pdf
Linearity testing
low-degree testing
Reed-Muller codes
testing properties of sequences
erasure-resilience
corruption-resilience
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
12:1
12:22
10.4230/LIPIcs.ITCS.2024.12
article
Are There Graphs Whose Shortest Path Structure Requires Large Edge Weights?
Bernstein, Aaron
1
Bodwin, Greg
2
Wein, Nicole
3
Rutgers University, New Brunswick, NJ, USA
University of Michigan, Ann Arbor, MI, USA
Simons Institute, Berkeley, CA, USA
The aspect ratio of a (positively) weighted graph G is the ratio of its maximum edge weight to its minimum edge weight. Aspect ratio commonly arises as a complexity measure in graph algorithms, especially related to the computation of shortest paths. Popular paradigms are to interpolate between the settings of weighted and unweighted input graphs by incurring a dependence on aspect ratio, or by simply restricting attention to input graphs of low aspect ratio.
This paper studies the effects of these paradigms, investigating whether graphs of low aspect ratio have more structured shortest paths than graphs in general. In particular, we raise the question of whether one can generally take a graph G of large aspect ratio and reweight its edges, to obtain a graph H with bounded aspect ratio while preserving the structure of its shortest paths. Our findings are:
- Every weighted DAG on n nodes has a shortest-paths preserving graph of aspect ratio O(n). A simple lower bound shows that this is tight.
- The previous result does not extend to general directed or undirected graphs; in fact, the answer turns out to be exponential in these settings. In particular, we construct directed and undirected n-node graphs for which any shortest-paths preserving graph has aspect ratio 2^{Ω(n)}.
We also consider the approximate version of this problem, where the goal is for shortest paths in H to correspond to approximate shortest paths in G. We show that our exponential lower bounds extend even to this setting. We also show that in a closely related model, where approximate shortest paths in H must also correspond to approximate shortest paths in G, even DAGs require exponential aspect ratio.
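The aspect ratio defined in this abstract is straightforward to compute; a minimal sketch (illustrative helper, not from the paper):

```python
def aspect_ratio(weights):
    # Ratio of the maximum edge weight to the minimum edge weight of a
    # positively weighted graph (edge weights given as a flat list here).
    ws = list(weights)
    assert ws and all(w > 0 for w in ws), "edge weights must be positive"
    return max(ws) / min(ws)

print(aspect_ratio([1.0, 2.0, 8.0]))  # → 8.0
```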
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.12/LIPIcs.ITCS.2024.12.pdf
shortest paths
graph theory
weighted graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
13:1
13:24
10.4230/LIPIcs.ITCS.2024.13
article
Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra
Bhattacharjee, Rajarshi
1
Dexter, Gregory
2
Musco, Cameron
1
Ray, Archan
1
Sachdeva, Sushant
3
Woodruff, David P.
4
University of Massachusetts Amherst, MA, USA
Purdue University, West Lafayette, IN, USA
University of Toronto, Canada
Carnegie Mellon University, Pittsburgh, PA, USA
Let S ∈ ℝ^{n × n} be any matrix satisfying ‖1-S‖₂ ≤ εn, where 1 is the all ones matrix and ‖⋅‖₂ is the spectral norm. It is well-known that there exists S with just O(n/ε²) non-zero entries achieving this guarantee: we can let S be the scaled adjacency matrix of a Ramanujan expander graph. We show that, beyond giving a sparse approximation to the all ones matrix, S yields a universal sparsifier for any positive semidefinite (PSD) matrix. In particular, for any PSD A ∈ ℝ^{n×n} which is normalized so that its entries are bounded in magnitude by 1, we show that ‖A-A∘S‖₂ ≤ ε n, where ∘ denotes the entrywise (Hadamard) product. Our techniques also yield universal sparsifiers for non-PSD matrices. In this case, we show that if S satisfies ‖1-S‖₂ ≤ (ε²n)/(c log²(1/ε)) for some sufficiently large constant c, then ‖A-A∘S‖₂ ≤ ε⋅max(n,‖A‖₁), where ‖A‖₁ is the nuclear norm. Again letting S be a scaled Ramanujan graph adjacency matrix, this yields a sparsifier with Õ(n/ε⁴) entries. We prove that the above universal sparsification bounds for both PSD and non-PSD matrices are tight up to logarithmic factors.
Since A∘S can be constructed deterministically without reading all of A, our result for PSD matrices derandomizes and improves upon established results for randomized matrix sparsification, which require sampling a random subset of O((n log n)/ε²) entries and only give an approximation to any fixed A with high probability. We further show that any randomized algorithm must read at least Ω(n/ε²) entries to spectrally approximate general A to error εn, thus proving that these existing randomized algorithms are optimal up to logarithmic factors. We leverage our deterministic sparsification results to give the first deterministic algorithms for several problems, including singular value and singular vector approximation and positive semidefiniteness testing, that run in faster than matrix multiplication time. This partially addresses a significant gap between randomized and deterministic algorithms for fast linear algebraic computation.
Finally, if A ∈ {-1,0,1}^{n × n} is PSD, we show that a spectral approximation Ã with ‖A-Ã‖₂ ≤ ε n can be obtained by deterministically reading Õ(n/ε) entries of A. This improves the 1/ε dependence of our result for general PSD matrices by a quadratic factor and is information-theoretically optimal up to a logarithmic factor.
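To make the guarantee ‖1-S‖₂ ≤ εn concrete, the sketch below checks it empirically for a random-sampling stand-in for S. Note the paper's S is a deterministic Ramanujan expander; independent sampling only achieves the bound with high probability, and the sampling probability and oversampling constant used here are illustrative assumptions:

```python
import random

def sampled_sparsifier(n, eps, seed=0):
    # Random-sampling stand-in for the expander-based S in the text: keep each
    # entry independently with probability p and rescale it by 1/p, so that
    # the expectation of S is the all-ones matrix.
    rnd = random.Random(seed)
    p = min(1.0, 8.0 / (eps * eps * n))  # oversampling constant 8 for slack
    return [[(1.0 / p if rnd.random() < p else 0.0) for _ in range(n)]
            for _ in range(n)]

def spectral_norm(M, iters=100):
    # Power iteration on M^T M estimates the largest singular value of M.
    n = len(M)
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]  # M v
        w = [sum(M[i][j] * u[i] for i in range(n)) for j in range(n)]  # M^T u
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    u = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(x * x for x in u) ** 0.5

n, eps = 100, 0.5
S = sampled_sparsifier(n, eps)
B = [[1.0 - S[i][j] for j in range(n)] for i in range(n)]
print(spectral_norm(B) <= eps * n)
```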
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.13/LIPIcs.ITCS.2024.13.pdf
sublinear algorithms
randomized linear algebra
spectral sparsification
expanders
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
14:1
14:21
10.4230/LIPIcs.ITCS.2024.14
article
Homomorphic Indistinguishability Obfuscation and Its Applications
Bhushan, Kaartik
1
Koppula, Venkata
2
Prabhakaran, Manoj
1
IIT Bombay, India
IIT Delhi, India
In this work, we propose the notion of homomorphic indistinguishability obfuscation (HiO) and present a construction based on subexponentially-secure iO and one-way functions. An HiO scheme allows us to convert an obfuscation of circuit C to an obfuscation of C'∘C, and this can be performed obliviously (that is, without knowing the circuit C). A naïve solution would be to obfuscate C'∘iO(C). However, if we do this for k hops, then the size of the final obfuscation is exponential in k. HiO ensures that the size of the final obfuscation remains polynomial after repeated compositions. As an application, we show how to build function-hiding hierarchical multi-input functional encryption and homomorphic witness encryption using HiO.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.14/LIPIcs.ITCS.2024.14.pdf
Program Obfuscation
Homomorphisms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
15:1
15:21
10.4230/LIPIcs.ITCS.2024.15
article
Testing and Learning Convex Sets in the Ternary Hypercube
Black, Hadley
1
https://orcid.org/0009-0008-9662-2870
Blais, Eric
2
https://orcid.org/0009-0000-5824-9034
Harms, Nathaniel
3
https://orcid.org/0000-0003-0259-9355
University of California, Los Angeles, CA, USA
University of Waterloo, Canada
EPFL, Lausanne, Switzerland
We study the problems of testing and learning high-dimensional discrete convex sets. The simplest high-dimensional discrete domain where convexity is a non-trivial property is the ternary hypercube, {-1,0,1}ⁿ. The goal of this work is to understand structural combinatorial properties of convex sets in this domain and to determine the complexity of the testing and learning problems. We obtain the following results.
Structural: We prove nearly tight bounds on the edge boundary of convex sets in {0,±1}ⁿ, showing that the maximum edge boundary of a convex set is Õ(n^{3/4})⋅3ⁿ, or equivalently that every convex set has influence Õ(n^{3/4}) and a convex set exists with influence Ω(n^{3/4}).
Learning and sample-based testing: We prove upper and lower bounds of 3^{Õ(n^{3/4})} and 3^{Ω(√n)} for the task of learning convex sets under the uniform distribution from random examples. The analysis of the learning algorithm relies on our upper bound on the influence. Both the upper and lower bound also hold for the problem of sample-based testing with two-sided error. For sample-based testing with one-sided error, we show that the sample complexity is 3^{Θ(n)}.
Testing with queries: We prove nearly matching upper and lower bounds of 3^{Θ̃(√n)} for one-sided error testing of convex sets with non-adaptive queries.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.15/LIPIcs.ITCS.2024.15.pdf
Property testing
learning theory
convex sets
testing convexity
fluctuation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
16:1
16:20
10.4230/LIPIcs.ITCS.2024.16
article
A Characterization of Optimal-Rate Linear Homomorphic Secret Sharing Schemes, and Applications
Blackwell, Keller
1
https://orcid.org/0000-0003-3588-9199
Wootters, Mary
1
https://orcid.org/0000-0002-2345-2531
Department of Computer Science, Stanford University, CA, USA
A Homomorphic Secret Sharing (HSS) scheme is a secret-sharing scheme that shares a secret x among s servers, and additionally allows an output client to reconstruct some function f(x), using information that can be locally computed by each server. A key parameter in HSS schemes is download rate, which quantifies how much information the output client needs to download from each server. Recent work (Fosli, Ishai, Kolobov, and Wootters, ITCS 2022) established a fundamental limitation on the download rate of linear HSS schemes for computing low-degree polynomials, and gave an example of HSS schemes that meet this limit.
In this paper, we further explore optimal-rate linear HSS schemes for polynomials. Our main result is a complete characterization of such schemes, in terms of a coding-theoretic notion that we introduce, termed optimal labelweight codes. We use this characterization to answer open questions about the amortization required by HSS schemes that achieve optimal download rate. In more detail, the construction of Fosli et al. required amortization over 𝓁 instances of the problem, and only worked for particular values of 𝓁. We show that - perhaps surprisingly - the set of 𝓁’s for which their construction works is in fact nearly optimal, possibly leaving out only one additional value of 𝓁. We show this by using our coding-theoretic characterization to prove a necessary condition on the 𝓁’s admitting optimal-rate linear HSS schemes. We then provide a slightly improved construction of optimal-rate linear HSS schemes, where the set of allowable 𝓁’s is optimal in even more parameter settings. Moreover, based on a connection to the MDS conjecture, we conjecture that our construction is optimal for all parameter regimes.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.16/LIPIcs.ITCS.2024.16.pdf
Error Correcting Codes
Homomorphic Secret Sharing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
17:1
17:21
10.4230/LIPIcs.ITCS.2024.17
article
Loss Minimization Yields Multicalibration for Large Neural Networks
Błasiok, Jarosław
1
Gopalan, Parikshit
2
Hu, Lunjia
3
Kalai, Adam Tauman
4
Nakkiran, Preetum
2
ETH Zürich, Switzerland
Apple, Palo Alto, CA, USA
Stanford University, CA, USA
Microsoft Research, Cambridge, MA, USA
Multicalibration is a notion of fairness for predictors that requires them to provide calibrated predictions across a large set of protected groups. Multicalibration is known to be a distinct goal from loss minimization, even for simple predictors such as linear functions.
In this work, we consider the setting where the protected groups can be represented by neural networks of size k, and the predictors are neural networks of size n > k. We show that minimizing the squared loss over all neural nets of size n implies multicalibration for all but a bounded number of unlucky values of n. We also give evidence that our bound on the number of unlucky values is tight, given our proof technique. Previously, results of the flavor that loss minimization yields multicalibration were known only for predictors that were near the ground truth, hence were rather limited in applicability. Unlike these, our results rely on the expressivity of neural nets and utilize the representation of the predictor.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.17/LIPIcs.ITCS.2024.17.pdf
Multi-group fairness
loss minimization
neural networks
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
18:1
18:18
10.4230/LIPIcs.ITCS.2024.18
article
Winning Without Observing Payoffs: Exploiting Behavioral Biases to Win Nearly Every Round
Blum, Avrim
1
https://orcid.org/0000-0003-2450-5102
Dutz, Melissa
1
https://orcid.org/0009-0008-0971-8285
Toyota Technological Institute at Chicago, IL, USA
Gameplay under various forms of uncertainty has been widely studied. Feldman et al. [Michal Feldman et al., 2010] studied a particularly low-information setting in which one observes the opponent’s actions but no payoffs, not even one’s own, and introduced an algorithm which guarantees one’s payoff nonetheless approaches the minimax optimal value (i.e., zero) in a symmetric zero-sum game. Against an opponent playing a minimax-optimal strategy, approaching the value of the game is the best one can hope to guarantee. However, a wealth of research in behavioral economics shows that people often do not make perfectly rational, optimal decisions. Here we consider whether it is possible to actually win in this setting if the opponent is behaviorally biased. We model several deterministic, biased opponents and show that even without knowing the game matrix in advance or observing any payoffs, it is possible to take advantage of each bias in order to win nearly every round (so long as the game has the property that each action beats and is beaten by at least one other action). We also provide a partial characterization of the kinds of biased strategies that can be exploited to win nearly every round, and provide algorithms for beating some kinds of biased strategies even when we don't know which strategy the opponent uses.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.18/LIPIcs.ITCS.2024.18.pdf
Game theory
Behavioral bias
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
19:1
19:21
10.4230/LIPIcs.ITCS.2024.19
article
Spanning Adjacency Oracles in Sublinear Time
Bodwin, Greg
1
Fleischmann, Henry
2
https://orcid.org/0000-0002-6093-3393
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, UK
Suppose we are given an n-node, m-edge input graph G, and the goal is to compute a spanning subgraph H on O(n) edges. This can be achieved in linear O(m + n) time via breadth-first search. But can we hope for sublinear runtime in some range of parameters - for example, perhaps O(n^{1.9}) worst-case runtime, even when the input graph has n² edges?
If the goal is to return H as an adjacency list, there are simple lower bounds showing that Ω(m + n) runtime is necessary. If the goal is to return H as an adjacency matrix, then we need Ω(n²) time just to write down the entries of the output matrix. However, we show that neither of these lower bounds applies if the goal is instead to return H as an implicit adjacency matrix, which we call an adjacency oracle. An adjacency oracle is a data structure that gives a user the illusion that an adjacency matrix has been computed: it accepts edge queries (u, v), and it returns in near-constant time a bit indicating whether or not (u, v) ∈ E(H).
Our main result is that, for any 0 < ε < 1, one can construct an adjacency oracle for a spanning subgraph on at most (1+ε)n edges, in Õ(n ε^{-1}) time (hence sublinear time on input graphs with m ≫ n edges), and that this construction time is near-optimal. Additional results include constructions of adjacency oracles for k-connectivity certificates and spanners, which are similarly sublinear on dense-enough input graphs.
Our adjacency oracles are closely related to Local Computation Algorithms (LCAs) for graph sparsifiers; they can be viewed as LCAs with some computation moved to a preprocessing step, in order to speed up queries. Our oracles imply the first LCAs for computing sparse spanning subgraphs of general input graphs in Õ(n) query time, which works by constructing our adjacency oracle, querying it once, and then throwing the rest of the oracle away. This addresses an open problem of Rubinfeld [CSR '17].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.19/LIPIcs.ITCS.2024.19.pdf
Graph algorithms
Sublinear algorithms
Data structures
Graph theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
20:1
20:14
10.4230/LIPIcs.ITCS.2024.20
article
Discreteness of Asymptotic Tensor Ranks (Extended Abstract)
Briët, Jop
1
https://orcid.org/0000-0002-9909-3635
Christandl, Matthias
2
https://orcid.org/0000-0003-2281-3355
Leigh, Itai
3
https://orcid.org/0009-0007-4298-6284
Shpilka, Amir
3
https://orcid.org/0000-0003-2384-425X
Zuiddam, Jeroen
4
https://orcid.org/0000-0003-0651-6238
Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
University of Copenhagen, Denmark
Tel Aviv University, Israel
University of Amsterdam, The Netherlands
Tensor parameters that are amortized or regularized over large tensor powers, often called "asymptotic" tensor parameters, play a central role in several areas including algebraic complexity theory (constructing fast matrix multiplication algorithms), quantum information (entanglement cost and distillable entanglement), and additive combinatorics (bounds on cap sets, sunflower-free sets, etc.). Examples are the asymptotic tensor rank, asymptotic slice rank and asymptotic subrank. Recent works (Costa-Dalai, Blatter-Draisma-Rupniewski, Christandl-Gesmundo-Zuiddam) have investigated notions of discreteness (no accumulation points) or "gaps" in the values of such tensor parameters.
We prove a general discreteness theorem for asymptotic tensor parameters of order-three tensors and use this to prove that (1) over any finite field (and in fact any finite set of coefficients in any field), the asymptotic subrank and the asymptotic slice rank have no accumulation points, and (2) over the complex numbers, the asymptotic slice rank has no accumulation points.
Central to our approach are two new general lower bounds on the asymptotic subrank of tensors, which measures how much a tensor can be diagonalized. The first lower bound says that the asymptotic subrank of any concise three-tensor is at least the cube-root of the smallest dimension. The second lower bound says that any concise three-tensor that is "narrow enough" (has one dimension much smaller than the other two) has maximal asymptotic subrank.
Our proofs rely on new lower bounds on the maximum rank in matrix subspaces that are obtained by slicing a three-tensor in the three different directions. We prove that for any concise tensor, the product of any two such maximum ranks must be large, and as a consequence there are always two distinct directions with large max-rank.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.20/LIPIcs.ITCS.2024.20.pdf
Tensors
Asymptotic rank
Subrank
Slice rank
Restriction
Degeneration
Diagonalization
SLOCC
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
21:1
21:11
10.4230/LIPIcs.ITCS.2024.21
article
Noisy Decoding by Shallow Circuits with Parities: Classical and Quantum (Extended Abstract)
Briët, Jop
1
https://orcid.org/0000-0002-9909-3635
Buhrman, Harry
1
2
Castro-Silva, Davi
1
https://orcid.org/0000-0002-7101-5758
Neumann, Niels M. P.
1
3
https://orcid.org/0000-0003-2159-8251
Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
University of Amsterdam, The Netherlands
The Netherlands Organisation for Applied Scientific Research (TNO), Den Haag, The Netherlands
We consider the problem of decoding corrupted error correcting codes with NC⁰[⊕] circuits in the classical and quantum settings. We show that any such classical circuit can correctly recover only a vanishingly small fraction of messages, if the codewords are sent over a noisy channel with positive error rate. Previously this was known only for linear codes with large dual distance, whereas our result applies to any code. By contrast, we give a simple quantum circuit that correctly decodes the Hadamard code with probability Ω(ε²) even if a (1/2 - ε)-fraction of a codeword is adversarially corrupted.
Our classical hardness result is based on an equidistribution phenomenon for multivariate polynomials over a finite field under biased input-distributions. This is proved using a structure-versus-randomness strategy based on a new notion of rank for high-dimensional polynomial maps that may be of independent interest.
Our quantum circuit is inspired by a non-local version of the Bernstein-Vazirani problem, a technique to generate "poor man’s cat states" by Watts et al., and a constant-depth quantum circuit for the OR function by Takahashi and Tani.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.21/LIPIcs.ITCS.2024.21.pdf
Coding theory
circuit complexity
quantum complexity theory
higher-order Fourier analysis
non-local games
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
22:1
22:25
10.4230/LIPIcs.ITCS.2024.22
article
The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds
Bringmann, Karl
1
2
Grønlund, Allan
3
4
Künnemann, Marvin
5
Larsen, Kasper Green
3
Saarland University, Saarland Informatics Campus, Saarbrücken, Germany
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Aarhus University, Denmark
Kvantify, Aarhus, Denmark
Karlsruhe Institute of Technology, Germany
We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets.
We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning detection of an s-t-walk with a prescribed color sequence in a given edge- or node-colored graph. For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications:
- It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis.
- It gives a tight (n+nm^{1/3})^{1-o(1)} lower bound for the Word Break problem on strings of length n and dictionaries of total size m.
- It implies the popular OMv hypothesis. Since the NFA acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems. Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a refutation of the NFA Acceptance hypothesis may lead the way to attacking the current barriers observed for Context-Free Language Reachability, the Word Break problem and the growing list of dynamic problems proven hard under the OMv hypothesis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.22/LIPIcs.ITCS.2024.22.pdf
Fine-grained complexity theory
non-deterministic finite automata
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
23:1
23:24
10.4230/LIPIcs.ITCS.2024.23
article
Private Distribution Testing with Heterogeneous Constraints: Your Epsilon Might Not Be Mine
Canonne, Clément L.
1
https://orcid.org/0000-0001-7153-5211
Sun, Yucheng
2
University of Sydney, School of Computer Science, Australia
ETH Zürich, Switzerland
Private closeness testing asks to decide whether the underlying probability distributions of two sensitive datasets are identical or differ significantly in statistical distance, while guaranteeing (differential) privacy of the data. As in most (if not all) distribution testing questions studied under privacy constraints, however, previous work assumes that the two datasets are equally sensitive, i.e., must be provided the same privacy guarantees. This is often an unrealistic assumption, as different sources of data come with different privacy requirements; as a result, known closeness testing algorithms might be unnecessarily conservative, "paying" too high a privacy budget for half of the data. In this work, we initiate the study of the closeness testing problem under heterogeneous privacy constraints, where the two datasets come with distinct privacy requirements.
We formalize the question and provide algorithms under the three most widely used differential privacy settings, with a particular focus on the local and shuffle models of privacy; and show that one can indeed achieve better sample efficiency when taking into account the two different "epsilon" requirements.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.23/LIPIcs.ITCS.2024.23.pdf
differential privacy
distribution testing
local privacy
shuffle privacy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
24:1
24:23
10.4230/LIPIcs.ITCS.2024.24
article
Classical Verification of Quantum Learning
Caro, Matthias C.
1
2
https://orcid.org/0000-0001-9009-2372
Hinsche, Marcel
1
https://orcid.org/0000-0003-4174-5706
Ioannou, Marios
1
Nietner, Alexander
1
https://orcid.org/0000-0002-6685-8400
Sweke, Ryan
3
1
https://orcid.org/0000-0002-6202-8864
Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, Germany
Institute for Quantum Information and Matter, Caltech, Pasadena, CA, USA
IBM Quantum, Almaden Research Center, San Jose, CA, USA
Quantum data access and quantum processing can make certain classically intractable learning tasks feasible. However, quantum capabilities will only be available to a select few in the near future. Thus, reliable schemes that allow classical clients to delegate learning to untrusted quantum servers are required to facilitate widespread access to quantum learning advantages. Building on a recently introduced framework of interactive proof systems for classical machine learning, we develop a framework for classical verification of quantum learning. We exhibit learning problems that a classical learner cannot efficiently solve on their own, but that they can efficiently and reliably solve when interacting with an untrusted quantum prover. Concretely, we consider the problems of agnostically learning parities and Fourier-sparse functions with respect to distributions with uniform input marginal. We propose a new quantum data access model that we call "mixture-of-superpositions" quantum examples, based on which we give efficient quantum learning algorithms for these tasks. Moreover, we prove that agnostic quantum parity and Fourier-sparse learning can be efficiently verified by a classical verifier with only random example or statistical query access. Finally, we showcase two general scenarios in learning and verification in which quantum mixture-of-superpositions examples do not lead to sample complexity improvements over classical data. Our results demonstrate that the potential power of quantum data for learning tasks, while not unlimited, can be utilized by classical agents through interaction with untrusted quantum entities.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.24/LIPIcs.ITCS.2024.24.pdf
computational learning theory
quantum learning theory
interactive proofs
quantum oracles
agnostic learning
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
25:1
25:19
10.4230/LIPIcs.ITCS.2024.25
article
Learning Arithmetic Formulas in the Presence of Noise: A General Framework and Applications to Unsupervised Learning
Chandra, Pritam
1
Garg, Ankit
1
Kayal, Neeraj
1
Mittal, Kunal
2
Sinha, Tanmay
1
Microsoft Research, Bangalore, India
Princeton University, NJ, USA
We present a general framework for designing efficient algorithms for unsupervised learning problems, such as mixtures of Gaussians and subspace clustering. Our framework is based on a meta algorithm that learns arithmetic formulas in the presence of noise, using lower bounds. This builds upon the recent work of Garg, Kayal and Saha (FOCS '20), who designed such a framework for learning arithmetic formulas without any noise. A key ingredient of our meta algorithm is an efficient algorithm for a novel problem called Robust Vector Space Decomposition. We show that our meta algorithm works well when certain matrices have sufficiently large smallest non-zero singular values. We conjecture that this condition holds for smoothed instances of our problems, and thus our framework would yield efficient algorithms for these problems in the smoothed setting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.25/LIPIcs.ITCS.2024.25.pdf
Arithmetic Circuits
Robust Vector Space Decomposition
Subspace Clustering
Mixtures of Gaussians
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
26:1
26:25
10.4230/LIPIcs.ITCS.2024.26
article
The Distributed Complexity of Locally Checkable Labeling Problems Beyond Paths and Trees
Chang, Yi-Jun
1
https://orcid.org/0000-0002-0109-2432
National University of Singapore, Singapore
We consider locally checkable labeling (LCL) problems in the LOCAL model of distributed computing. Since 2016, there has been a substantial body of work examining the possible complexities of LCL problems. For example, it has been established that there are no LCL problems exhibiting deterministic complexities falling between ω(log^∗ n) and o(log n). This line of inquiry has yielded a wealth of algorithmic techniques and insights that are useful for algorithm designers.
While the complexity landscape of LCL problems on general graphs, trees, and paths is now well understood, graph classes beyond these three cases remain largely unexplored. Indeed, recent research trends have shifted towards a fine-grained study of special instances within the domains of paths and trees.
In this paper, we generalize the line of research on characterizing the complexity landscape of LCL problems to a much broader range of graph classes. We propose a conjecture that characterizes the complexity landscape of LCL problems for an arbitrary class of graphs that is closed under minors, and we prove a part of the conjecture.
Some highlights of our findings are as follows.
- We establish a simple characterization of the minor-closed graph classes sharing the same deterministic complexity landscape as paths, where O(1), Θ(log^∗ n), and Θ(n) are the only possible complexity classes.
- It is natural to conjecture that any minor-closed graph class shares the same complexity landscape as trees if and only if the graph class has bounded treewidth and unbounded pathwidth. We prove the "only if" part of the conjecture.
- For the class of graphs with pathwidth at most k, we show the existence of LCL problems with randomized and deterministic complexities Θ(n), Θ(n^{1/2}), Θ(n^{1/3}), …, Θ(n^{1/k}) and the non-existence of LCL problems whose deterministic complexity is between ω(log^∗ n) and o(n^{1/k}). Consequently, in addition to the well-known complexity landscapes for paths, trees, and general graphs, there are infinitely many different complexity landscapes among minor-closed graph classes.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.26/LIPIcs.ITCS.2024.26.pdf
Distributed graph algorithms
locality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
27:1
27:13
10.4230/LIPIcs.ITCS.2024.27
article
Determinants vs. Algebraic Branching Programs
Chatterjee, Abhranil
1
Kumar, Mrinal
2
Volk, Ben Lee
3
Indian Statistical Institute, Kolkata, India
Tata Institute of Fundamental Research, Mumbai, India
Efi Arazi School of Computer Science, Reichman University, Herzliya, Israel
We show that for every homogeneous polynomial of degree d, if it has determinantal complexity at most s, then it can be computed by a homogeneous algebraic branching program (ABP) of size at most O(d⁵s). Moreover, we show that for most homogeneous polynomials, the width of the resulting homogeneous ABP is just s-1 and the size is at most O(ds).
Thus, for constant degree homogeneous polynomials, their determinantal complexity and ABP complexity are within a constant factor of each other and hence, a super-linear lower bound for ABPs for any constant degree polynomial implies a super-linear lower bound on determinantal complexity; this relates two open problems of great interest in algebraic complexity. As of now, super-linear lower bounds for ABPs are known only for polynomials of growing degree [Mrinal Kumar, 2019; Prerona Chatterjee et al., 2022], and for determinantal complexity the best lower bounds are larger than the number of variables only by a constant factor [Mrinal Kumar and Ben Lee Volk, 2022].
While determinantal complexity and ABP complexity are classically known to be polynomially equivalent [Meena Mahajan and V. Vinay, 1997], the standard transformation from the former to the latter incurs a polynomial blow up in size in the process, and thus, it was unclear if a super-linear lower bound for ABPs implies a super-linear lower bound on determinantal complexity. In particular, a size preserving transformation from determinantal complexity to ABPs does not appear to have been known prior to this work, even for constant degree polynomials.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.27/LIPIcs.ITCS.2024.27.pdf
Determinant
Algebraic Branching Program
Lower Bounds
Singular Variety
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
28:1
28:24
10.4230/LIPIcs.ITCS.2024.28
article
Extractors for Polynomial Sources over 𝔽₂
Chattopadhyay, Eshan
1
https://orcid.org/0000-0001-9140-3160
Goodman, Jesse
1
https://orcid.org/0000-0003-2552-1921
Gurumukhani, Mohit
1
https://orcid.org/0009-0007-8808-2846
Cornell University, Ithaca, NY, USA
We explicitly construct the first nontrivial extractors for degree d ≥ 2 polynomial sources over 𝔽₂. Our extractor requires min-entropy k ≥ n - (√{log n})/((log log n / d)^{d/2}). Previously, no constructions were known, even for min-entropy k ≥ n-1. A key ingredient in our construction is an input reduction lemma, which allows us to assume that any polynomial source with min-entropy k can be generated by O(k) uniformly random bits.
We also provide strong formal evidence that polynomial sources are unusually challenging to extract from, by showing that even our most powerful general-purpose extractors cannot handle polynomial sources with min-entropy below k ≥ n-o(n). In more detail, we show that sumset extractors cannot even disperse from degree 2 polynomial sources with min-entropy k ≥ n-O(n/log log n). In fact, this impossibility result even holds for a more specialized family of sources that we introduce, called polynomial non-oblivious bit-fixing (NOBF) sources. Polynomial NOBF sources are a natural new family of algebraic sources that lie at the intersection of polynomial and variety sources, and thus our impossibility result applies to both of these classical settings. This is especially surprising, since we do have variety extractors that slightly beat this barrier - implying that sumset extractors are not a panacea in the world of seedless extraction.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.28/LIPIcs.ITCS.2024.28.pdf
Extractors
low-degree polynomials
varieties
sumset extractors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
29:1
29:20
10.4230/LIPIcs.ITCS.2024.29
article
Recursive Error Reduction for Regular Branching Programs
Chattopadhyay, Eshan
1
https://orcid.org/0000-0001-9140-3160
Liao, Jyun-Jie
1
https://orcid.org/0000-0003-3332-1460
Cornell University, Ithaca, NY, USA
In a recent work, Chen, Hoza, Lyu, Tal and Wu (FOCS 2023) showed an improved error reduction framework for the derandomization of regular read-once branching programs (ROBPs). Their result is based on a clever modification to the inverse Laplacian perspective of space-bounded derandomization, which was originally introduced by Ahmadinejad, Kelner, Murtagh, Peebles, Sidford and Vadhan (FOCS 2020).
In this work, we give an alternative error reduction framework for regular ROBPs. Our new framework is based on a binary recursive formula from the work of Chattopadhyay and Liao (CCC 2020), that they used to construct weighted pseudorandom generators (WPRGs) for general ROBPs.
Based on our new error reduction framework, we give alternative proofs to the following results for regular ROBPs of length n and width w, both of which were proved in the work of Chen et al. using their error reduction:
- There is a WPRG with error ε that has seed length Õ(log(n)(√{log(1/ε)}+log(w))+log(1/ε)).
- There is a (non-black-box) deterministic algorithm which estimates the expectation of any such program within error ±ε with space complexity Õ(log(nw)⋅log log(1/ε)). This was first proved in the work of Ahmadinejad et al., but the proof by Chen et al. is simpler. Because of the binary recursive nature of our new framework, both of our proofs are based on a straightforward induction that is arguably simpler than the Laplacian-based proof in the work of Chen et al.
In fact, because of its simplicity, our proof of the second result directly gives a slightly stronger claim: our algorithm computes an ε-singular value approximation (a notion of approximation introduced in a recent work by Ahmadinejad, Peebles, Pyne, Sidford and Vadhan (FOCS 2023)) of the random walk matrix of the given ROBP in space Õ(log(nw)⋅log log(1/ε)). It is not clear how to get this stronger result from the previous proofs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.29/LIPIcs.ITCS.2024.29.pdf
read-once branching program
regular branching program
weighted pseudorandom generator
derandomization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
30:1
30:14
10.4230/LIPIcs.ITCS.2024.30
article
Influence Maximization in Ising Models
Chen, Zongchen
1
Mossel, Elchanan
2
Department of Computer Science and Engineering, University at Buffalo, NY, USA
Department of Mathematics, MIT, Cambridge, MA, USA
Given a complex high-dimensional distribution over {± 1}ⁿ, what is the best way to increase the expected number of +1’s by controlling the values of only a small number of variables? Such a problem is known as influence maximization and has been widely studied in social networks, biology, and computer science. In this paper, we consider influence maximization on the Ising model, which is a prototypical example of undirected graphical models and has wide applications in many real-world problems. We establish a sharp computational phase transition for influence maximization on sparse Ising models under a bounded budget: in the high-temperature regime, we give a linear-time algorithm for finding a small subset of variables and their values which achieves nearly optimal influence; in the low-temperature regime, we show that the influence maximization problem cannot be solved in polynomial time under commonly believed complexity assumptions. The critical temperature coincides with the tree uniqueness/non-uniqueness threshold for Ising models, which is also a critical point for other computational problems including approximate sampling and counting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.30/LIPIcs.ITCS.2024.30.pdf
Influence maximization
Ising model
phase transition
correlation decay
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
31:1
31:23
10.4230/LIPIcs.ITCS.2024.31
article
On the Complexity of Isomorphism Problems for Tensors, Groups, and Polynomials III: Actions by Classical Groups
Chen, Zhili
1
https://orcid.org/0009-0002-2155-5817
Grochow, Joshua A.
2
https://orcid.org/0000-0002-6466-0476
Qiao, Youming
1
https://orcid.org/0000-0003-4334-1449
Tang, Gang
1
https://orcid.org/0000-0002-1135-466X
Zhang, Chuanqi
1
https://orcid.org/0009-0004-4857-3454
Center for Quantum Software and Information, University of Technology Sydney, Australia
Departments of Computer Science and Mathematics, University of Colorado Boulder, CO, USA
We study the complexity of isomorphism problems for d-way arrays, or tensors, under natural actions by classical groups such as orthogonal, unitary, and symplectic groups. These problems arise naturally in statistical data analysis and quantum information. We study two types of complexity-theoretic questions. First, for a fixed action type (isomorphism, conjugacy, etc.), we relate the complexity of the isomorphism problem over a classical group to that over the general linear group. Second, for a fixed group type (orthogonal, unitary, or symplectic), we compare the complexity of the isomorphism problems for different actions.
Our main results are as follows. First, for orthogonal and symplectic groups acting on 3-way arrays, the isomorphism problems reduce to the corresponding problems over the general linear group. Second, for orthogonal and unitary groups, the isomorphism problems of five natural actions on 3-way arrays are polynomial-time equivalent, and the d-tensor isomorphism problem reduces to the 3-tensor isomorphism problem for any fixed d > 3. For unitary groups, the preceding result implies that LOCC classification of tripartite quantum states is at least as difficult as LOCC classification of d-partite quantum states for any d. Lastly, we also show that the graph isomorphism problem reduces to the tensor isomorphism problem over orthogonal and unitary groups.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.31/LIPIcs.ITCS.2024.31.pdf
complexity class
tensor isomorphism
polynomial isomorphism
group isomorphism
local operations and classical communication
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
32:1
32:22
10.4230/LIPIcs.ITCS.2024.32
article
Space-Optimal Profile Estimation in Data Streams with Applications to Symmetric Functions
Chen, Justin Y.
1
Indyk, Piotr
1
Woodruff, David P.
2
Massachusetts Institute of Technology, Cambridge, MA, USA
Carnegie Mellon University, Pittsburgh, PA, USA
We revisit the problem of estimating the profile (also known as the rarity) in the data stream model. Given a sequence of m elements from a universe of size n, its profile is a vector ϕ whose i-th entry ϕ_i represents the number of distinct elements that appear in the stream exactly i times. A classic paper by Datar and Muthukrishnan from 2002 gave an algorithm which estimates any entry ϕ_i up to an additive error of ± ε D using O(1/ε² (log n + log m)) bits of space, where D is the number of distinct elements in the stream.
In this paper, we considerably improve on this result by designing an algorithm which simultaneously estimates many coordinates of the profile vector ϕ up to small overall error. We give an algorithm which, with constant probability, produces an estimated profile ϕˆ with the following guarantees in terms of space and estimation error:
a) For any constant τ, with O(1 / ε² + log n) bits of space, ∑_{i = 1}^τ |ϕ_i - ϕˆ_i| ≤ ε D.
b) With O(1/ ε² log(1/ε) + log n + log log m) bits of space, ∑_{i = 1}^m |ϕ_i - ϕˆ_i| ≤ ε m. In addition to bounding the error across multiple coordinates, our space bounds separate the terms that depend on 1/ε and those that depend on n and m. We prove matching lower bounds on space in both regimes.
Applying our profile estimation algorithm gives estimates within error ± ε D of several symmetric functions of frequencies in O(1/ε² + log n) bits. This generalizes space-optimal algorithms for the distinct elements problem to other problems including estimating the Huber and Tukey losses as well as frequency cap statistics.
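For concreteness, the profile vector itself is easy to compute exactly in Θ(D) space; the paper's contribution is estimating it in far less. A minimal sketch of the exact object being estimated (function name is ours):

```python
from collections import Counter

def profile(stream):
    """phi[i] = number of distinct elements appearing exactly i times."""
    freq = Counter(stream)              # element -> frequency
    return dict(Counter(freq.values()))  # frequency -> count of elements

phi = profile(list("aaabbcd"))  # 'a' appears 3x, 'b' 2x, 'c' and 'd' once
assert phi == {3: 1, 2: 1, 1: 2}
# Any symmetric function of the frequencies is a function of phi alone,
# e.g. the number of distinct elements D = Σ_i phi_i:
assert sum(phi.values()) == 4
```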
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.32/LIPIcs.ITCS.2024.32.pdf
Streaming and Sketching Algorithms
Sublinear Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
33:1
33:23
10.4230/LIPIcs.ITCS.2024.33
article
Testing Intersecting and Union-Closed Families
Chen, Xi
1
De, Anindya
2
Li, Yuhao
1
Nadimpalli, Shivam
1
Servedio, Rocco A.
1
Columbia University, New York, NY, USA
University of Pennsylvania, Philadelphia, PA, USA
Inspired by the classic problem of Boolean function monotonicity testing, we investigate the testability of other well-studied properties of combinatorial finite set systems, specifically intersecting families and union-closed families. A function f: {0,1}ⁿ → {0,1} is intersecting (respectively, union-closed) if its set of satisfying assignments corresponds to an intersecting family (respectively, a union-closed family) of subsets of [n].
Our main results are that - in sharp contrast with the property of being a monotone set system - the property of being an intersecting set system, and the property of being a union-closed set system, both turn out to be information-theoretically difficult to test. We show that:
- For ε ≥ Ω(1/√n), any non-adaptive two-sided ε-tester for intersectingness must make 2^{Ω(n^{1/4}/√{ε})} queries. We also give a 2^{Ω(√{n log(1/ε)})}-query lower bound for non-adaptive one-sided ε-testers for intersectingness.
- For ε ≥ 1/2^{Ω(n^{0.49})}, any non-adaptive two-sided ε-tester for union-closedness must make n^{Ω(log(1/ε))} queries.
Thus, neither intersectingness nor union-closedness shares the poly(n,1/ε)-query non-adaptive testability that is enjoyed by monotonicity.
To complement our lower bounds, we also give a simple poly(n^{√{n log(1/ε)}}, 1/ε)-query, one-sided, non-adaptive algorithm for ε-testing each of these properties (intersectingness and union-closedness). We thus achieve nearly tight upper and lower bounds for two-sided testing of intersectingness when ε = Θ(1/√n), and for one-sided testing of intersectingness when ε = Θ(1).
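To make the property concrete, here is a brute-force check of intersectingness (exponential-time, unlike the testers above; the helper names are ours), with an example and a counterexample:

```python
from itertools import product

def is_intersecting(f, n):
    """Check whether f: {0,1}^n -> {0,1} is intersecting, i.e. every two
    satisfying assignments share a coordinate set to 1 (the corresponding
    subsets of [n] pairwise intersect). Brute force over all pairs."""
    sats = [x for x in product([0, 1], repeat=n) if f(x)]
    return all(any(a and b for a, b in zip(x, y))
               for x in sats for y in sats)

# Majority on 3 bits is intersecting: any two assignments with >= 2 ones
# must overlap in some coordinate.
maj3 = lambda x: int(sum(x) >= 2)
assert is_intersecting(maj3, 3)

# OR is not: (1,0,0) and (0,1,0) are disjoint satisfying assignments.
or3 = lambda x: int(any(x))
assert not is_intersecting(or3, 3)
```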
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.33/LIPIcs.ITCS.2024.33.pdf
Sublinear algorithms
property testing
computational complexity
monotonicity
intersecting families
union-closed families
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
34:1
34:14
10.4230/LIPIcs.ITCS.2024.34
article
On Parallel Repetition of PCPs
Chiesa, Alessandro
1
Guan, Ziyi
1
Yıldız, Burcu
1
EPFL, Lausanne, Switzerland
Parallel repetition refers to a set of valuable techniques used to reduce soundness error of probabilistic proofs while saving on certain efficiency measures. Parallel repetition has been studied for interactive proofs (IPs) and multi-prover interactive proofs (MIPs). In this paper we initiate the study of parallel repetition for probabilistically checkable proofs (PCPs).
We show that, perhaps surprisingly, parallel repetition of a PCP can increase soundness error, in fact bringing the soundness error to one as the number of repetitions tends to infinity. This "failure" of parallel repetition is common: we find that it occurs for a wide class of natural PCPs for NP-complete languages. We explain this unexpected phenomenon by providing a characterization result: the parallel repetition of a PCP brings the soundness error to zero if and only if a certain "MIP projection" of the PCP has soundness error strictly less than one. We show that our characterization is tight via a suitable example. Moreover, for those cases where parallel repetition of a PCP does bring the soundness error to zero, the aforementioned connection to MIPs offers preliminary results on the rate of decay of the soundness error.
Finally, we propose a simple variant of parallel repetition, called consistent parallel repetition (CPR), which has the same randomness complexity and query complexity as the plain variant of parallel repetition. We show that CPR brings the soundness error to zero for every PCP (with non-trivial soundness error). In fact, we show that CPR decreases the soundness error at an exponential rate in the repetition parameter.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.34/LIPIcs.ITCS.2024.34.pdf
probabilistically checkable proofs
parallel repetition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
35:1
35:22
10.4230/LIPIcs.ITCS.2024.35
article
Collective Tree Exploration via Potential Function Method
Cosson, Romain
1
https://orcid.org/0009-0004-8784-7112
Massoulié, Laurent
1
https://orcid.org/0000-0001-7263-0069
Inria, Paris, France
We study the problem of collective tree exploration (CTE) in which a team of k agents is tasked to traverse all the edges of an unknown tree as fast as possible, assuming complete communication between the agents [FGKP06]. In this paper, we present an algorithm performing collective tree exploration in 2n/k+𝒪(kD) rounds, where n is the number of nodes in the tree, and D is the tree depth. This leads to a competitive ratio of 𝒪(√k), the first polynomial improvement over the 𝒪(k) ratio of depth-first search. Our analysis holds for an asynchronous generalization of collective tree exploration. It relies on a game with robots at the leaves of a continuously growing tree extending the "tree-mining game" of [C23] and resembling the "evolving tree game" of [BCR22]. Another surprising consequence of our results is the existence of algorithms {𝒜_k}_{k ∈ ℕ} for layered tree traversal (LTT) with cost at most 2L/k+𝒪(kD), where L is the sum of all edge lengths. For the case of layered trees of width w and unit edge lengths, our guarantee is thus in 𝒪(√wD).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.35/LIPIcs.ITCS.2024.35.pdf
collective exploration
online algorithms
evolving tree
competitive analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
36:1
36:22
10.4230/LIPIcs.ITCS.2024.36
article
Fraud Detection for Random Walks
Dani, Varsha
1
Hayes, Thomas P.
2
Pettie, Seth
3
Saia, Jared
4
Rochester Institute of Technology, Rochester, NY, USA
University at Buffalo, Buffalo, NY, USA
University of Michigan, Ann Arbor, MI, USA
University of New Mexico, Albuquerque, NM, USA
Traditional fraud detection is often based on finding statistical anomalies in data sets and transaction histories. A sophisticated fraudster, aware of the exact kinds of tests being deployed, might be difficult or impossible to catch. We are interested in paradigms for fraud detection that are provably robust against any adversary, no matter how sophisticated. In other words, the detection strategy should rely on signals in the data that are inherent in the goals the adversary is trying to achieve.
Specifically, we consider a fraud detection game centered on a random walk on a graph. We assume this random walk is implemented by having a player at each vertex, who can be honest or not. In particular, when the random walk reaches a vertex owned by an honest player, it proceeds to a uniformly random neighbor at the next timestep. However, when the random walk reaches a dishonest player, it instead proceeds to an arbitrary neighbor chosen by an omniscient Adversary.
The game is played between the Adversary and a Referee who sees the trajectory of the random walk. At any point during the random walk, if the Referee determines that a specific vertex is controlled by a dishonest player, the Referee accuses that player, and therefore wins the game. The Referee is allowed to make the occasional incorrect accusation, but must follow a policy under which such mistakes occur with small probability. The goal of the Adversary is to make the cover time large, ideally infinite, i.e., the walk should never reach at least one vertex. We consider the following basic question: how much can the omniscient Adversary delay the cover time without getting caught? Our main result is a tight upper bound on this delay factor.
We also discuss possible applications of our results to settings such as Rotor Walks, Leader Election, and Sybil Defense.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.36/LIPIcs.ITCS.2024.36.pdf
Fraud detection
random processes
Markov chains
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
37:1
37:22
10.4230/LIPIcs.ITCS.2024.37
article
Smooth Nash Equilibria: Algorithms and Complexity
Daskalakis, Constantinos
1
Golowich, Noah
1
Haghtalab, Nika
2
Shetty, Abhishek
2
MIT, Cambridge, MA, USA
University of California at Berkeley, CA, USA
A fundamental shortcoming of the concept of Nash equilibrium is its computational intractability: approximating Nash equilibria in normal-form games is PPAD-hard. In this paper, inspired by the ideas of smoothed analysis, we introduce a relaxed variant of Nash equilibrium called σ-smooth Nash equilibrium, for a smoothness parameter σ. In a σ-smooth Nash equilibrium, players only need to achieve utility at least as high as their best deviation to a σ-smooth strategy, which is a distribution that does not put too much mass (as parametrized by σ) on any fixed action. We distinguish two variants of σ-smooth Nash equilibria: strong σ-smooth Nash equilibria, in which players are required to play σ-smooth strategies under equilibrium play, and weak σ-smooth Nash equilibria, where there is no such requirement.
We show that both weak and strong σ-smooth Nash equilibria have superior computational properties to Nash equilibria: when σ as well as an approximation parameter ϵ and the number of players are all constants, there is a constant-time randomized algorithm to find a weak ϵ-approximate σ-smooth Nash equilibrium in normal-form games. In the same parameter regime, there is a polynomial-time deterministic algorithm to find a strong ϵ-approximate σ-smooth Nash equilibrium in a normal-form game. These results stand in contrast to the optimal algorithm for computing ϵ-approximate Nash equilibria, which cannot run faster than quasipolynomial time, subject to complexity-theoretic assumptions. We complement our upper bounds by showing that when either σ or ϵ is an inverse polynomial, finding a weak ϵ-approximate σ-smooth Nash equilibrium becomes computationally intractable.
Our results are the first to propose a variant of Nash equilibrium which is computationally tractable, allows players to act independently, and which, as we discuss, is justified by an extensive line of work on individual choice behavior in the economics literature.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.37/LIPIcs.ITCS.2024.37.pdf
Nash equilibrium
smoothed analysis
PPAD
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
38:1
38:18
10.4230/LIPIcs.ITCS.2024.38
article
Graph Threading
Demaine, Erik D.
1
https://orcid.org/0000-0003-3803-5703
Kirkpatrick, Yael
2
https://orcid.org/0009-0007-6718-7390
Lin, Rebecca
1
https://orcid.org/0000-0003-4747-4978
Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA
Inspired by artistic practices such as beadwork and himmeli, we study the problem of threading a single string through a set of tubes, so that pulling the string forms a desired graph. More precisely, given a connected graph (where edges represent tubes and vertices represent junctions where they meet), we give a polynomial-time algorithm to find a minimum-length closed walk (representing a threading of string) that induces a connected graph of string at every junction. The algorithm is based on a surprising reduction to minimum-weight perfect matching. Along the way, we give tight worst-case bounds on the length of the optimal threading and on the maximum number of times this threading can visit a single edge. We also give more efficient solutions to two special cases: cubic graphs and the case when each edge can be visited at most twice.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.38/LIPIcs.ITCS.2024.38.pdf
Shortest walk
Eulerian cycle
perfect matching
beading
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
39:1
39:23
10.4230/LIPIcs.ITCS.2024.39
article
Simple and Optimal Online Contention Resolution Schemes for k-Uniform Matroids
Dinev, Atanas
1
Weinberg, S. Matthew
2
Massachusetts Institute of Technology, Cambridge, MA, USA
Princeton University, NJ, USA
We provide a simple (1-O(1/√k))-selectable Online Contention Resolution Scheme for k-uniform matroids against a fixed-order adversary. If A_i and G_i denote the set of selected elements and the set of realized active elements among the first i elements (respectively), our algorithm selects with probability 1-1/√k any active element i such that |A_{i-1}| + 1 ≤ (1-1/√k)⋅𝔼[|G_i|]+√k. This implies a (1-O(1/√k)) prophet inequality against fixed-order adversaries for k-uniform matroids that is considerably simpler than previous algorithms [Alaei, 2014; Azar et al., 2014; Jiang et al., 2022].
We also prove that no OCRS can be (1-Ω(√{(log k)/k}))-selectable for k-uniform matroids against an almighty adversary. This guarantee is matched by the (known) simple greedy algorithm that selects every active element with probability 1-Θ(√{(log k)/k}) [Hajiaghayi et al., 2007].
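The selection rule above is simple enough to simulate directly. The following sketch is our own rendering, under the assumption that element i is independently active with probability p[i] and Σ p[i] ≤ k (it omits the paper's selectability analysis). Note that it never selects more than k elements, since 𝔼[|G_i|] ≤ k gives (1-1/√k)⋅𝔼[|G_i|] + √k ≤ k:

```python
import math
import random

def ocrs_k_uniform(p, k, rng=random):
    """Online selection for a k-uniform matroid: take an active element i
    with probability 1 - 1/sqrt(k) whenever
    |A_{i-1}| + 1 <= (1 - 1/sqrt(k)) * E[|G_i|] + sqrt(k)."""
    c = 1.0 - 1.0 / math.sqrt(k)
    selected = []
    expected_active = 0.0          # running E[|G_i|] = sum_{j <= i} p[j]
    for i, pi in enumerate(p):
        expected_active += pi
        active = rng.random() < pi
        feasible = len(selected) + 1 <= c * expected_active + math.sqrt(k)
        if active and feasible and rng.random() < c:
            selected.append(i)
    return selected

rng = random.Random(0)
sel = ocrs_k_uniform([0.5] * 32, k=16, rng=rng)
assert len(sel) <= 16  # the threshold enforces the matroid constraint
```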
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.39/LIPIcs.ITCS.2024.39.pdf
online contention resolution schemes
prophet inequalities
online algorithms
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
40:1
40:24
10.4230/LIPIcs.ITCS.2024.40
article
On the Black-Box Complexity of Correlation Intractability
Döttling, Nico
1
Mour, Tamer
2
CISPA Helmholtz Center for Information Security, Saarbrücken, Germany
Bocconi University, Milan, Italy
Correlation intractability is an emerging cryptographic paradigm that enabled several recent breakthroughs in establishing soundness of the Fiat-Shamir transform and, consequently, basing non-interactive zero-knowledge proofs and succinct arguments on standard cryptographic assumptions. In a nutshell, a hash family is said to be correlation intractable for a class of relations ℛ if, for any relation R ∈ ℛ, it is hard given a random hash function h ← H to find an input z s.t. (z,h(z)) ∈ R, namely a correlation.
Despite substantial progress in constructing correlation intractable hash functions, all constructions known to date are based on highly-structured hardness assumptions and, further, are of complexity scaling with the circuit complexity of the target relation class.
In this work, we initiate the study of the barriers for building correlation intractability. Our main result is a lower bound on the complexity of any black-box construction of correlation-intractable hash functions (CIH) from collision-resistant hash functions (CRH) or one-way permutations (OWP), for any sufficiently expressive relation class. In particular, any such construction for a class of relations with circuit complexity t must make at least Ω(t) invocations of the underlying building block.
We see this as a first step in developing a methodology towards broader lower bounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.40/LIPIcs.ITCS.2024.40.pdf
Correlation Intractability
Fiat-Shamir
Black-box Complexity
Black-box Separations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
41:1
41:26
10.4230/LIPIcs.ITCS.2024.41
article
The Message Complexity of Distributed Graph Optimization
Dufoulon, Fabien
1
https://orcid.org/0000-0003-2977-4109
Pai, Shreyas
2
https://orcid.org/0000-0003-2409-7807
Pandurangan, Gopal
3
https://orcid.org/0000-0001-5833-6592
Pemmaraju, Sriram V.
4
https://orcid.org/0000-0002-0834-3476
Robinson, Peter
5
https://orcid.org/0000-0002-7442-7002
Lancaster University, UK
Aalto University, Finland
University of Houston, TX, USA
University of Iowa, IA, USA
Augusta University, GA, USA
The message complexity of a distributed algorithm is the total number of messages sent by all nodes over the course of the algorithm. This paper studies the message complexity of distributed algorithms for fundamental graph optimization problems. We focus on four classical graph optimization problems: Maximum Matching (MaxM), Minimum Vertex Cover (MVC), Minimum Dominating Set (MDS), and Maximum Independent Set (MaxIS). In the sequential setting, these problems are representative of a wide spectrum of hardness of approximation. While there has been some progress in understanding the round complexity of distributed algorithms (for both exact and approximate versions) for these problems, much less is known about their message complexity and its relation with the quality of approximation. We almost fully quantify the message complexity of distributed graph optimization by showing the following results:
1) Cubic regime: Our first main contribution is showing essentially cubic, i.e., Ω̃(n³), lower bounds (where n is the number of nodes in the graph) on the message complexity of distributed exact computation of Minimum Vertex Cover (MVC), Minimum Dominating Set (MDS), and Maximum Independent Set (MaxIS). Our lower bounds apply to any distributed algorithm that runs in a polynomial number of rounds (a mild and necessary restriction). Our result is significant since, to the best of our knowledge, these are the first ω(m) (where m is the number of edges in the graph) message lower bounds known for distributed computation of such classical graph optimization problems. Our bounds are essentially tight, as all these problems can be solved trivially using O(n³) messages in polynomial rounds. All these bounds hold in the standard CONGEST model of distributed computation, in which messages are of O(log n) size.
2) Quadratic regime: In contrast, we show that if we allow approximate computation then Θ̃(n²) messages are both necessary and sufficient. Specifically, we show that Ω̃(n²) messages are required for constant-factor approximation algorithms for all four problems. For MaxM and MVC, these bounds hold for any constant-factor approximation, whereas for MDS and MaxIS they hold for any approximation factor better than some specific constants. These lower bounds hold even in the LOCAL model (in which messages can be arbitrarily large) and they even apply to algorithms that take arbitrarily many rounds. We show that our lower bounds are essentially tight, by showing that if we allow approximation to within an arbitrarily small constant factor, then all these problems can be solved using Õ(n²) messages even in the CONGEST model.
3) Linear regime: We complement the above lower bounds by showing distributed algorithms with Õ(n) message complexity that run in polylogarithmic rounds and give constant-factor approximations for all four problems on random graphs. These results imply that almost linear (in n) message complexity is achievable on almost all (connected) graphs of every edge density.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.41/LIPIcs.ITCS.2024.41.pdf
Distributed graph algorithm
message complexity
distributed approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
42:1
42:23
10.4230/LIPIcs.ITCS.2024.42
article
Time- and Communication-Efficient Overlay Network Construction via Gossip
Dufoulon, Fabien
1
https://orcid.org/0000-0003-2977-4109
Moorman, Michael
2
https://orcid.org/0009-0007-6448-6555
Moses Jr., William K.
3
https://orcid.org/0000-0002-4533-7593
Pandurangan, Gopal
2
https://orcid.org/0000-0001-5833-6592
School of Computing and Communications, Lancaster University, UK
Department of Computer Science, University of Houston, TX, USA
Department of Computer Science, Durham University, UK
We focus on the well-studied problem of distributed overlay network construction. We consider a synchronous gossip-based communication model where in each round a node can send a message of small size to another node whose identifier it knows. The network is assumed to be reconfigurable, i.e., a node can add new connections (edges) to other nodes whose identifier it knows or drop existing connections. Each node initially has only knowledge of its own identifier and the identifiers of its neighbors. The overlay construction problem is, given an arbitrary (connected) graph, to reconfigure it to obtain a bounded-degree expander graph as efficiently as possible. The overlay construction problem is relevant to building real-world peer-to-peer network topologies that have desirable properties such as low diameter, high conductance, robustness to adversarial deletions, etc.
Our main result is that, starting from any arbitrary (connected) graph G on n nodes and m edges, we can construct an overlay network that is a constant-degree expander in polylog rounds using only Õ(n) messages. Our time and message bounds are both essentially optimal (up to polylogarithmic factors). Our distributed overlay construction protocol is very lightweight, as it uses gossip (each node communicates with only one neighbor in each round), and scalable, as it uses only Õ(n) messages, which is sublinear in m (even for moderately dense graphs). To the best of our knowledge, this is the first result that achieves overlay network construction in polylog rounds and o(m) messages. Our protocol uses graph sketches in a novel way to construct an expander overlay that is both time and communication efficient.
A consequence of our overlay construction protocol is that distributed computation can be performed very efficiently in this model. In particular, a wide range of fundamental tasks such as broadcast, leader election, and minimum spanning tree (MST) construction can be accomplished in polylog rounds and Õ(n) message complexity in any graph.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.42/LIPIcs.ITCS.2024.42.pdf
Peer-to-Peer Networks
Overlay Construction Protocol
Gossip
Expanders
Sublinear Bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
43:1
43:23
10.4230/LIPIcs.ITCS.2024.43
article
Homogeneous Algebraic Complexity Theory and Algebraic Formulas
Dutta, Pranjal
1
https://orcid.org/0000-0001-9137-9025
Gesmundo, Fulvio
2
https://orcid.org/0000-0001-6402-021X
Ikenmeyer, Christian
3
https://orcid.org/0000-0003-4654-177X
Jindal, Gorav
4
https://orcid.org/0000-0002-9749-5032
Lysikov, Vladimir
5
https://orcid.org/0000-0002-7816-6524
School of Computing, National University of Singapore (NUS), Singapore
Institut de Mathématiques de Toulouse, Université Paul Sabatier, Toulouse, France
University of Warwick, UK
Max Planck Institute for Software Systems, Saarbrücken, Germany
Ruhr-Universität Bochum, Germany
We study algebraic complexity classes and their complete polynomials under homogeneous linear projections, not just under the usual affine linear projections that were originally introduced by Valiant in 1979. These reductions are weaker yet more natural from a geometric complexity theory (GCT) standpoint, because the corresponding orbit closure formulations do not require the padding of polynomials. We give the first complete polynomials for VF, the class of sequences of polynomials that admit small algebraic formulas, under homogeneous linear projections: The sum of the entries of the non-commutative elementary symmetric polynomial in 3 by 3 matrices of homogeneous linear forms.
Even simpler variants of the elementary symmetric polynomial are hard for the topological closure of a large subclass of VF: the sum of the entries of the non-commutative elementary symmetric polynomial in 2 by 2 matrices of homogeneous linear forms, and homogeneous variants of the continuant polynomial (Bringmann, Ikenmeyer, Zuiddam, JACM '18). This requires a careful study of circuits with arity-3 product gates.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.43/LIPIcs.ITCS.2024.43.pdf
Homogeneous polynomials
Waring rank
Arithmetic formulas
Border complexity
Geometric Complexity theory
Symmetric polynomials
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
44:1
44:22
10.4230/LIPIcs.ITCS.2024.44
article
On the (In)approximability of Combinatorial Contracts
Ezra, Tomer
1
https://orcid.org/0000-0003-0626-4851
Feldman, Michal
2
https://orcid.org/0000-0002-2915-8405
Schlesinger, Maya
2
https://orcid.org/0009-0006-6848-746X
Simons Laufer Mathematical Sciences Institute, Berkeley, CA, USA
Tel Aviv University, Israel
We study two recent combinatorial contract design models, which highlight different sources of complexity that may arise in contract design, where a principal delegates the execution of a costly project to others. In both settings, the principal cannot observe the choices of the agent(s), only the project’s outcome (success or failure), and incentivizes the agent(s) using a contract, a payment scheme that specifies the payment to the agent(s) upon a project’s success. We present results that resolve open problems and advance our understanding of the computational complexity of both settings.
In the multi-agent setting, the project is delegated to a team of agents, where each agent chooses whether or not to exert effort. A success probability function maps any subset of agents who exert effort to a probability of the project’s success. For the family of submodular success probability functions, Dütting et al. [2023] established a poly-time constant factor approximation to the optimal contract, and left open whether this problem admits a PTAS. We answer this question in the negative, by showing that no poly-time algorithm guarantees a better than 0.7-approximation to the optimal contract. For XOS functions, they give a poly-time constant approximation with value and demand queries. We show that with value queries only, one cannot get any constant approximation.
In the multi-action setting, the project is delegated to a single agent, who can take any subset of a given set of actions. Here, a success probability function maps any subset of actions to a probability of the project’s success. Dütting et al. [2021a] showed a poly-time algorithm for computing an optimal contract for gross substitutes success probability functions, and showed that the problem is NP-hard for submodular functions. We further strengthen this hardness result by showing that this problem does not admit any constant factor approximation. Furthermore, for the broader class of XOS functions, we establish the hardness of obtaining a n^{-1/2+ε}-approximation for any ε > 0.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.44/LIPIcs.ITCS.2024.44.pdf
algorithmic contract design
combinatorial contracts
moral hazard
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
45:1
45:13
10.4230/LIPIcs.ITCS.2024.45
article
Two-State Spin Systems with Negative Interactions
Fei, Yumou
1
https://orcid.org/0000-0003-3093-8975
Goldberg, Leslie Ann
2
Lu, Pinyan
3
School of Mathematical Sciences, Peking University, China
Department of Computer Science, University of Oxford, UK
Laboratory of Interdisciplinary Research of Computation and Economics (SUFE), Ministry of Education, Shanghai University of Finance and Economics, China
We study the approximability of computing the partition functions of two-state spin systems. The problem is parameterized by a 2×2 symmetric matrix. Previous results on this problem were restricted either to the case where the matrix has non-negative entries, or to the case where the diagonal entries are equal, i.e. Ising models. In this paper, we study the generalization to arbitrary 2×2 interaction matrices with real entries. We show that in some regions of the parameter space, it is #P-hard to even determine the sign of the partition function, while in other regions there are fully polynomial approximation schemes for the partition function. Our results reveal several new computational phase transitions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.45/LIPIcs.ITCS.2024.45.pdf
Approximate Counting
Spin Systems
#P-Hardness
Randomized Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
46:1
46:23
10.4230/LIPIcs.ITCS.2024.46
article
Scalable Distributed Agreement from LWE: Byzantine Agreement, Broadcast, and Leader Election
Fernando, Rex
1
https://orcid.org/0000-0002-6546-5939
Gelles, Yuval
2
https://orcid.org/0000-0003-0405-9651
Komargodski, Ilan
2
3
https://orcid.org/0000-0002-1647-2112
Aptos Labs, Palo Alto, CA, USA
The Hebrew University of Jerusalem, Israel
NTT Research, Sunnyvale, CA, USA
Distributed agreement is a general name for the task of ensuring consensus among non-faulty nodes in the presence of faulty or malicious behavior. Well-known instances of agreement tasks are Byzantine Agreement, Broadcast, and Committee or Leader Election. Since agreement tasks lie at the heart of many modern distributed applications, there has been an increased interest in designing scalable protocols for these tasks. Specifically, we want protocols where the per-party communication complexity scales sublinearly with the number of parties.
With unconditional security, the state of the art protocols have Õ(√n) per-party communication and Õ(1) rounds, where n stands for the number of parties, tolerating a 1/3-ε fraction of corruptions for any ε > 0. There are matching lower bounds showing that these protocols are essentially optimal among a large class of protocols. Recently, Boyle, Cohen, and Goel (PODC 2021) relaxed the attacker to be computationally bounded and, using strong cryptographic assumptions, showed a protocol with Õ(1) per-party communication and rounds (similarly tolerating a 1/3-ε fraction of corruptions). The security of their protocol relies on SNARKs for NP with linear-time extraction, a somewhat strong and non-standard assumption. Their protocol further relies on a public-key infrastructure (PKI) and a common reference string (CRS).
In this work, we present a new protocol with Õ(1) per-party communication and rounds but relying only on the standard Learning With Errors (LWE) assumption. Our protocol also relies on a PKI and a CRS, and tolerates a 1/3-ε fraction of corruptions, similarly to Boyle et al. Technically, we leverage (multi-hop) BARGs for NP directly and in a generic manner which significantly deviates from the framework of Boyle et al.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.46/LIPIcs.ITCS.2024.46.pdf
Byzantine agreement
scalable
learning with errors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
47:1
47:14
10.4230/LIPIcs.ITCS.2024.47
article
Distribution Testing with a Confused Collector
Ferreira Pinto Jr., Renato
1
Harms, Nathaniel
2
https://orcid.org/0000-0003-0259-9355
University of Waterloo, Canada
EPFL, Lausanne, Switzerland
We are interested in testing properties of distributions with systematically mislabeled samples. Our goal is to make decisions about unknown probability distributions, using a sample that has been collected by a confused collector, such as a machine-learning classifier that has not learned to distinguish all elements of the domain. The confused collector holds an unknown clustering of the domain and an input distribution μ, and provides two oracles: a sample oracle which produces a sample from μ that has been labeled according to the clustering; and a label-query oracle which returns the label of a query point x according to the clustering.
Our first set of results shows that identity, uniformity, and equivalence of distributions can be tested efficiently, under the earth-mover distance, with remarkably weak conditions on the confused collector, even when the unknown clustering is adversarial. This requires defining a variant of the distribution testing task (inspired by the recent testable learning framework of Rubinfeld & Vasilyan), where the algorithm should test a joint property of the distribution and its clustering. As an example, we get efficient testers when the distribution tester is allowed to reject if it detects that the confused collector’s clustering is "far" from being a decision tree.
The second set of results shows that we can sometimes do significantly better when the clustering is random instead of adversarial. For certain one-dimensional random clusterings, we show that uniformity can be tested under the TV distance using Õ((√n)/(ρ^{3/2} ε²)) samples and zero queries, where ρ ∈ (0,1] controls the "resolution" of the clustering. We improve this to O((√n)/(ρ ε²)) when queries are allowed.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.47/LIPIcs.ITCS.2024.47.pdf
Distribution testing
property testing
uniformity testing
identity testing
earth-mover distance
sublinear algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
48:1
48:20
10.4230/LIPIcs.ITCS.2024.48
article
Proving Unsatisfiability with Hitting Formulas
Filmus, Yuval
1
https://orcid.org/0000-0002-1739-0872
Hirsch, Edward A.
2
https://orcid.org/0009-0003-2779-5536
Riazanov, Artur
3
https://orcid.org/0000-0001-7892-1502
Smal, Alexander
1
https://orcid.org/0000-0002-8241-5503
Vinyals, Marc
4
https://orcid.org/0000-0002-1487-445X
Technion - Israel Institute of Technology, Haifa, Israel
Department of Computer Science, Ariel University, Israel
EPFL, Lausanne, Switzerland
University of Auckland, New Zealand
A hitting formula is a set of Boolean clauses such that any two of the clauses cannot be simultaneously falsified. Hitting formulas have been studied in many different contexts at least since [Iwama, 1989] and, based on experimental evidence, Peitl and Szeider [Tomás Peitl and Stefan Szeider, 2022] conjectured that unsatisfiable hitting formulas are among the hardest for resolution. Using the fact that hitting formulas are easy to check for satisfiability, we make them the foundation of a new static proof system Hitting: a refutation of a CNF in Hitting is an unsatisfiable hitting formula such that each of its clauses is a weakening of a clause of the refuted CNF. Comparing this system to resolution and other proof systems is equivalent to studying the hardness of hitting formulas.
Our first result is that Hitting is quasi-polynomially simulated by tree-like resolution, which means that hitting formulas cannot be exponentially hard for resolution and partially refutes the conjecture of Peitl and Szeider. We show that tree-like resolution and Hitting are quasi-polynomially separated, while for resolution, this question remains open. For a system that is only quasi-polynomially stronger than tree-like resolution, Hitting is surprisingly difficult to polynomially simulate in another proof system. Using the ideas of Raz and Shpilka’s polynomial identity testing for noncommutative circuits [Raz and Shpilka, 2005], we show that Hitting is p-simulated by Extended Frege, but we conjecture that much more efficient simulations exist. As a byproduct, we show that a number of static (semi)algebraic systems are verifiable in deterministic polynomial time.
We consider multiple extensions of Hitting, and in particular a proof system Hitting(⊕) related to the Res(⊕) proof system for which no superpolynomial-size lower bounds are known. Hitting(⊕) p-simulates the tree-like version of Res(⊕) and is at least quasi-polynomially stronger. We show that formulas expressing the non-existence of perfect matchings in the graphs K_{n,n+2} are exponentially hard for Hitting(⊕) via a reduction to the partition bound for communication complexity.
Proofs are omitted in this extended abstract; see the full version of the paper.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.48/LIPIcs.ITCS.2024.48.pdf
hitting formulas
polynomial identity testing
query complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
49:1
49:24
10.4230/LIPIcs.ITCS.2024.49
article
Deterministic 3SUM-Hardness
Fischer, Nick
1
Kaliciak, Piotr
2
Polak, Adam
3
2
https://orcid.org/0000-0003-4925-774X
Weizmann Institute of Science, Rehovot, Israel
Jagiellonian University in Kraków, Poland
Max Planck Institute for Informatics, Saarbrücken, Germany
As one of the three main pillars of fine-grained complexity theory, the 3SUM problem explains the hardness of many diverse polynomial-time problems via fine-grained reductions. Many of these reductions are either directly based on or heavily inspired by Pătraşcu’s framework involving additive hashing and are thus randomized. Some selected reductions were derandomized in previous work [Chan, He; SOSA'20], but the current techniques are limited and a major fraction of the reductions remains randomized.
In this work we gather a toolkit aimed to derandomize reductions based on additive hashing. Using this toolkit, we manage to derandomize almost all known 3SUM-hardness reductions. As technical highlights we derandomize the hardness reductions to (offline) Set Disjointness, (offline) Set Intersection and Triangle Listing - these questions were explicitly left open in previous work [Kopelowitz, Pettie, Porat; SODA'16]. The few exceptions to our work fall into a special category of recent reductions based on structure-versus-randomness dichotomies.
We expect that our toolkit can be readily applied to derandomize future reductions as well. As a conceptual innovation, our work thereby promotes the theory of deterministic 3SUM-hardness.
As our second contribution, we prove that there is a deterministic universe reduction for 3SUM. Specifically, using additive hashing it is a standard trick to assume that the numbers in 3SUM have size at most n³. We prove that this assumption is similarly valid for deterministic algorithms.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.49/LIPIcs.ITCS.2024.49.pdf
3SUM
derandomization
fine-grained complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
50:1
50:14
10.4230/LIPIcs.ITCS.2024.50
article
One-Way Functions vs. TFNP: Simpler and Improved
Folwarczný, Lukáš
1
2
https://orcid.org/0000-0002-3020-6443
Göös, Mika
3
Hubáček, Pavel
1
4
https://orcid.org/0000-0002-6850-6222
Maystre, Gilbert
3
https://orcid.org/0009-0002-4408-3330
Yuan, Weiqiang
3
https://orcid.org/0000-0001-9149-1842
Institute of Mathematics, Czech Academy of Sciences, Prague, Czech Republic
Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
EPFL, Lausanne, Switzerland
Charles University, Faculty of Mathematics and Physics, Czech Republic
Simon (1998) proved that it is impossible to construct collision-resistant hash functions from one-way functions using a black-box reduction. It is conjectured more generally that one-way functions do not imply, via a black-box reduction, the hardness of any total NP search problem (collision-resistant hash functions being just one such example). We make progress towards this conjecture by ruling out a large class of "single-query" reductions. In particular, we improve over the prior work of Hubáček et al. (2020) in two ways: our result is established via a novel simpler combinatorial technique and applies to a broader class of semi black-box reductions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.50/LIPIcs.ITCS.2024.50.pdf
TFNP
One-Way Functions
Oracle
Separation
Black-Box
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
51:1
51:21
10.4230/LIPIcs.ITCS.2024.51
article
An Axiomatic Characterization of CFMMs and Equivalence to Prediction Markets
Frongillo, Rafael
1
https://orcid.org/0000-0002-0170-7572
Papireddygari, Maneesha
1
https://orcid.org/0000-0002-4810-4568
Waggoner, Bo
1
https://orcid.org/0000-0002-1366-1065
University of Colorado, Boulder, CO, USA
Constant-function market makers (CFMMs), such as Uniswap, are automated exchanges offering trades among a set of assets. We study their technical relationship to another class of automated market makers, cost-function prediction markets. We first introduce axioms for market makers and show that CFMMs with concave potential functions characterize "good" market makers according to these axioms. We then show that every such CFMM on n assets is equivalent to a cost-function prediction market for events with n outcomes. Our construction directly converts a CFMM into a prediction market, and vice versa. Using this equivalence, we give another construction which can produce any 1-homogeneous, increasing, and concave CFMM, as are typically used in practice, from a cost function.
Conceptually, our results show that desirable market-making axioms are equivalent to desirable information-elicitation axioms, i.e., markets are good at facilitating trade if and only if they are good at revealing beliefs. For example, we show that every CFMM implicitly defines a proper scoring rule for eliciting beliefs; the scoring rule for Uniswap is unusual, but known. From a technical standpoint, our results show how tools for prediction markets and CFMMs can interoperate. We illustrate this interoperability by showing how liquidity strategies from both literatures transfer to the other, yielding new market designs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.51/LIPIcs.ITCS.2024.51.pdf
Convex analysis
Equivalence result
Axiomatic characterization
Market Makers
Prediction markets
Scoring rules
Cost-functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
52:1
52:1
10.4230/LIPIcs.ITCS.2024.52
article
Rethinking Fairness for Human-AI Collaboration (Extended Abstract)
Ge, Haosen
1
https://orcid.org/0000-0002-2443-3949
Bastani, Hamsa
2
https://orcid.org/0000-0002-8793-4732
Bastani, Osbert
3
https://orcid.org/0000-0001-9990-7566
Wharton School, University of Pennsylvania, Philadelphia, PA, USA
Department of Operations, Information and Decisions, Wharton School, University of Pennsylvania, Philadelphia, PA, USA
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
Most work on algorithmic fairness focuses on whether the algorithm makes fair decisions in isolation. Yet, these algorithms are rarely used in high-stakes settings without human oversight, since there are still considerable legal and regulatory challenges to full automation. Moreover, many believe that human-AI collaboration is superior to full automation because human experts may have auxiliary information that can help correct the mistakes of algorithms, producing better decisions than the human or algorithm alone. However, human-AI collaboration introduces new complexities - the overall outcomes now depend not only on the algorithmic recommendations, but also on the subset of individuals for whom the human decision-maker complies with the algorithmic recommendation. Recent studies have shown that selective compliance with algorithms can amplify discrimination relative to the prior human policy, even if the algorithmic policy is fair in the traditional sense. As a consequence, ensuring equitable outcomes requires fundamentally different algorithmic design principles that ensure robustness to the decision-maker’s (a priori unknown) compliance pattern.
To resolve this state of affairs, we introduce the notion of compliance-robust algorithms - i.e., algorithmic decision policies that are guaranteed to (weakly) improve fairness in final outcomes, regardless of the human’s (unknown) compliance pattern with algorithmic recommendations. In particular, given a human decision-maker and her policy (without access to AI assistance), we characterize the class of algorithmic recommendations that never result in collaborative final outcomes that are less fair than the pre-existing human policy, even if the decision-maker’s compliance pattern is adversarial. Next, we prove that there exists considerable tension between traditional algorithmic fairness and compliance-robust fairness. Unless the true data-generating process is itself perfectly fair, it can be infeasible to design an algorithmic policy that simultaneously satisfies traditional algorithmic fairness, is compliance-robustly fair, and is more accurate than the human-only policy; this raises the question of whether traditional fairness is even a desirable constraint to enforce for human-AI collaboration. If the goal is to improve fairness and accuracy in human-AI collaborative outcomes, it may be preferable to design an algorithmic policy that is accurate and compliance-robustly fair, but not traditionally fair. Our last result shows that the tension between traditional fairness and compliance-robust fairness is prevalent. Specifically, we prove that for a broad class of fairness definitions, fair policies are not necessarily compliance-robustly fair, implying that compliance-robust fairness imposes fundamentally different constraints compared to traditional fairness.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.52/LIPIcs.ITCS.2024.52.pdf
fairness
human-AI collaboration
selective compliance
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
53:1
53:22
10.4230/LIPIcs.ITCS.2024.53
article
New Lower Bounds in Merlin-Arthur Communication and Graph Streaming Verification
Ghosh, Prantar
1
https://orcid.org/0009-0006-9172-6553
Shah, Vihan
2
https://orcid.org/0009-0004-3024-9226
Department of Computer Science, Georgetown University, Washington, D.C., USA
Department of Computer Science, University of Waterloo, Canada
We present novel lower bounds in the Merlin-Arthur (MA) communication model and the related annotated streaming or stream verification model. The MA communication model extends the classical communication model by introducing an all-powerful but untrusted player, Merlin, who knows the inputs of the usual players, Alice and Bob, and attempts to convince them about the output. We focus on the online MA (OMA) model where Alice and Merlin each send a single message to Bob, who needs to catch Merlin if he is dishonest and announce the correct output otherwise. Most known functions have OMA protocols with total communication significantly smaller than what would be needed without Merlin. In this work, we introduce the notion of non-trivial-OMA complexity of a function. This is the minimum total communication required when we restrict ourselves to only non-trivial protocols where Alice sends Bob fewer bits than what she would have sent without Merlin. We exhibit the first explicit functions that have this complexity superlinear - even exponential - in their classical one-way complexity: this means the trivial protocol, where Merlin communicates nothing and Alice and Bob compute the function on their own, is exponentially better than any non-trivial protocol in terms of total communication. These OMA lower bounds also translate to the annotated streaming model, the MA analogue of single-pass data streaming. We show large separations between the classical streaming complexity and the non-trivial annotated streaming complexity (for the analogous notion in this setting) of fundamental problems such as counting distinct items, as well as of graph problems such as connectivity and k-connectivity in a certain edge update model called the support graph turnstile model that we introduce here.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.53/LIPIcs.ITCS.2024.53.pdf
Graph Algorithms
Streaming
Communication Complexity
Stream Verification
Merlin-Arthur Communication
Lower Bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
54:1
54:23
10.4230/LIPIcs.ITCS.2024.54
article
NLTS Hamiltonians and Strongly-Explicit SoS Lower Bounds from Low-Rate Quantum LDPC Codes
Golowich, Louis
1
https://orcid.org/0000-0002-5169-0596
Kaufman, Tali
2
Department of Computer Science, University of California at Berkeley, CA, USA
Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel
Recent constructions of the first asymptotically good quantum LDPC (qLDPC) codes led to two breakthroughs in complexity theory: the NLTS (No Low-Energy Trivial States) theorem (Anshu, Breuckmann, and Nirkhe, STOC'23), and explicit lower bounds against a linear number of levels of the Sum-of-Squares (SoS) hierarchy (Hopkins and Lin, FOCS'22).
In this work, we obtain improvements to both of these results using qLDPC codes of low rate:
- Whereas Anshu et al. only obtained NLTS Hamiltonians from qLDPC codes of linear dimension, we show the stronger result that qLDPC codes of arbitrarily small positive dimension yield NLTS Hamiltonians.
- The SoS lower bounds of Hopkins and Lin are only weakly explicit because they require running Gaussian elimination to find a nontrivial codeword, which takes polynomial time. We resolve this shortcoming by introducing a new method of planting a strongly explicit nontrivial codeword in linear-distance qLDPC codes, which in turn yields strongly explicit SoS lower bounds. Our "planted" qLDPC codes may be of independent interest, as they provide a new way of ensuring a qLDPC code has positive dimension without resorting to parity check counting, and therefore provide more flexibility in the code construction.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.54/LIPIcs.ITCS.2024.54.pdf
NLTS Hamiltonian
Quantum PCP
Sum-of-squares lower bound
Quantum LDPC code
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
55:1
55:22
10.4230/LIPIcs.ITCS.2024.55
article
Electrical Flows for Polylogarithmic Competitive Oblivious Routing
Goranci, Gramoz
1
https://orcid.org/0000-0002-9603-2255
Henzinger, Monika
2
https://orcid.org/0000-0002-5008-6530
Räcke, Harald
3
https://orcid.org/0000-0001-8797-717X
Sachdeva, Sushant
4
https://orcid.org/0000-0002-5393-9324
Sricharan, A. R.
5
Faculty of Computer Science, University of Vienna, Austria
Institute of Science and Technology Austria (ISTA), Klosterneuburg, Austria
Technical University Munich, Germany
University of Toronto, Canada
Faculty of Computer Science, UniVie Doctoral School Computer Science DoCS, University of Vienna, Austria
Oblivious routing is a well-studied paradigm that uses static precomputed routing tables for selecting routing paths within a network. Existing oblivious routing schemes with polylogarithmic competitive ratio for general networks are tree-based, in the sense that routing is performed according to a convex combination of trees. However, this restriction to trees leads to a construction that takes time quadratic in the size of the network and does not parallelize well.
In this paper we study oblivious routing schemes based on electrical routing. In particular, we show that general networks with n vertices and m edges admit a routing scheme that has competitive ratio O(log² n) and consists of a convex combination of only O(√m) electrical routings. This immediately leads to an improved construction algorithm with time Õ(m^{3/2}) that can also be implemented in parallel with Õ(√m) depth.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.55/LIPIcs.ITCS.2024.55.pdf
oblivious routing
electrical flows
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
56:1
56:17
10.4230/LIPIcs.ITCS.2024.56
article
An Algorithm for Bichromatic Sorting with Polylog Competitive Ratio
Goswami, Mayank
1
https://orcid.org/0000-0002-2111-3210
Jacob, Riko
2
https://orcid.org/0000-0001-9470-1809
Queens College CUNY, Flushing, New York, NY, USA
IT University of Copenhagen, Denmark
The problem of sorting with priced information was introduced by [Charikar, Fagin, Guruswami, Kleinberg, Raghavan, Sahai (CFGKRS), STOC 2000]. In this setting, different comparisons have different (potentially infinite) costs. The goal is to find a sorting algorithm with small competitive ratio, defined as the (worst-case) ratio of the algorithm’s cost to the cost of the cheapest proof of the sorted order.
The simple case of bichromatic sorting posed by [CFGKRS] remains open: We are given two sets A and B of total size N, and the cost of an A-A comparison or a B-B comparison is higher than an A-B comparison. The goal is to sort A ∪ B. An Ω(log N) lower bound on competitive ratio follows from unit-cost sorting. Note that this is a generalization of the famous nuts and bolts problem, where A-A and B-B comparisons have infinite cost, and elements of A and B are guaranteed to alternate in the final sorted order.
In this paper we give a randomized algorithm InversionSort with an almost-optimal w.h.p. competitive ratio of O(log³ N). This is the first algorithm for bichromatic sorting with a o(N) competitive ratio.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.56/LIPIcs.ITCS.2024.56.pdf
Sorting
Priced Information
Nuts and Bolts
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
57:1
57:2
10.4230/LIPIcs.ITCS.2024.57
article
Communicating with Anecdotes (Extended Abstract)
Haghtalab, Nika
1
Immorlica, Nicole
2
Lucier, Brendan
2
Mobius, Markus
2
Mohan, Divyarthi
3
University of California, Berkeley, CA, USA
Microsoft Research, Cambridge, MA, USA
Tel Aviv University, Israel
We study a communication game between a sender and receiver. The sender chooses one of her signals about the state of the world (i.e., an anecdote) and communicates it to the receiver who takes an action affecting both players. The sender and receiver both care about the state of the world but are also influenced by personal preferences, so their ideal actions can differ. We characterize perfect Bayesian equilibria. The sender faces a temptation to persuade: she wants to select a biased anecdote to influence the receiver’s action. Anecdotes are still informative to the receiver (who will debias at equilibrium) but the attempt to persuade comes at the cost of precision. This gives rise to informational homophily where the receiver prefers to listen to like-minded senders because they provide higher-precision signals. Communication becomes polarized when the sender is an expert with access to many signals, with the sender choosing extreme outlier anecdotes at equilibrium (unless preferences are perfectly aligned). This polarization dissipates all the gains from communication with an increasingly well-informed sender when the anecdote distribution is heavy-tailed. Experts therefore face a curse of informedness: receivers will prefer to listen to less-informed senders who cannot pick biased signals as easily.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.57/LIPIcs.ITCS.2024.57.pdf
Communication game
Equilibrium
Polarization
Signalling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
58:1
58:23
10.4230/LIPIcs.ITCS.2024.58
article
An Improved Protocol for ExactlyN with More Than 3 Players
Hambardzumyan, Lianna
1
Pitassi, Toniann
2
Sherif, Suhail
3
Shirley, Morgan
4
Shraibman, Adi
5
The Hebrew University of Jerusalem, Israel
Columbia University, New York, NY, USA
LASIGE, Faculdade de Ciências, Universidade de Lisboa, Portugal
University of Toronto, Canada
The Academic College of Tel Aviv-Yaffo, Israel
The ExactlyN problem in the number-on-forehead (NOF) communication setting asks k players, each of whom can see every input but their own, if the k input numbers add up to N. Introduced by Chandra, Furst and Lipton in 1983, ExactlyN is important for its role in understanding the strength of randomness in communication complexity with many players. It is also tightly connected to the field of combinatorics: its k-party NOF communication complexity is related to the size of the largest corner-free subset in [N]^{k-1}.
In 2021, Linial and Shraibman gave more efficient protocols for ExactlyN for 3 players. As an immediate consequence, this also gave a new construction of larger corner-free subsets in [N]². Later that year Green gave a further refinement to their argument. These results represent the first improvements to the highest-order term for k = 3 since the famous work of Behrend in 1946. In this paper we give a corresponding improvement to the highest-order term for k > 3, the first since Rankin in 1961. That is, we give a more efficient protocol for ExactlyN as well as larger corner-free sets in higher dimensions.
Nearly all previous results in this line of research approached the problem from the combinatorics perspective, implicitly resulting in non-constructive protocols for ExactlyN. Approaching the problem from the communication complexity point of view and constructing explicit protocols for ExactlyN was key to the improvements in the k = 3 setting. As a further contribution we provide explicit protocols for ExactlyN for any number of players which serves as a base for our improvement.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.58/LIPIcs.ITCS.2024.58.pdf
Corner-free sets
number-on-forehead communication
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
59:1
59:21
10.4230/LIPIcs.ITCS.2024.59
article
Equivocal Blends: Prior Independent Lower Bounds
Hartline, Jason
1
https://orcid.org/0000-0001-5505-6819
Johnsen, Aleck
1
https://orcid.org/0000-0001-7764-9842
Northwestern University, Evanston, IL, USA
The prior independent framework for algorithm design considers how well an algorithm that does not know the distribution of its inputs approximates the expected performance of the optimal algorithm for this distribution. This paper gives a method that is agnostic to problem setting for proving lower bounds on the prior independent approximation factor of any algorithm. The method constructs a correlated distribution over inputs that can be described both as a distribution over i.i.d. good-for-algorithms distributions and as a distribution over i.i.d. bad-for-algorithms distributions. We call these two descriptions equivocal blends. Prior independent algorithms are upper-bounded by the optimal algorithm for the latter distribution even when the true distribution is the former. Thus, the ratio of the expected performances of the Bayesian optimal algorithms for these two decompositions is a lower bound on the prior independent approximation ratio.
We apply this framework to give new lower bounds on canonical prior independent mechanism design problems. For one of these problems, we also exhibit a near-tight upper bound. Towards solutions for general problems, we give distinct descriptions of two large classes of correlated-distribution "solutions" for the technique, depending respectively on an order-statistic separability property and a paired inverse-distribution property. We exhibit that equivocal blends do not generally have a Blackwell ordering, which puts this paper outside of standard information design.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.59/LIPIcs.ITCS.2024.59.pdf
prior independent algorithms
lower bounds
correlated decompositions
minimax
equivocal blends
mechanism design
Blackwell ordering
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
60:1
60:17
10.4230/LIPIcs.ITCS.2024.60
article
The Chromatic Number of Kneser Hypergraphs via Consensus Division
Haviv, Ishay
1
School of Computer Science, The Academic College of Tel Aviv-Yaffo, Israel
We show that the Consensus Division theorem implies lower bounds on the chromatic number of Kneser hypergraphs, offering a novel proof for a result of Alon, Frankl, and Lovász (Trans. Amer. Math. Soc., 1986) and for its generalization by Kriz (Trans. Amer. Math. Soc., 1992). Our approach is applied to study the computational complexity of the total search problem Kneser^p, which, given a succinct representation of a coloring of a p-uniform Kneser hypergraph with fewer colors than its chromatic number, asks to find a monochromatic hyperedge. We prove that for every prime p, the Kneser^p problem with an extended access to the input coloring is efficiently reducible to a quite weak approximation of the Consensus Division problem with p shares. In particular, for p = 2, the problem is efficiently reducible to any non-trivial approximation of the Consensus Halving problem on normalized monotone functions. We further show that for every prime p, the Kneser^p problem lies in the complexity class PPA-p. As an application, we establish limitations on the complexity of the Kneser^p problem, restricted to colorings with a bounded number of colors.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.60/LIPIcs.ITCS.2024.60.pdf
Kneser hypergraphs
consensus division
the complexity classes PPA-p
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
61:1
61:14
10.4230/LIPIcs.ITCS.2024.61
article
Quickly Determining Who Won an Election
Hellerstein, Lisa
1
https://orcid.org/0000-0002-3743-7965
Liu, Naifeng
2
3
https://orcid.org/0009-0008-9602-2760
Schewior, Kevin
4
https://orcid.org/0000-0003-2236-0210
Department of Computer Science and Engineering, New York University Tandon School of Engineering, NY, USA
Department of Computer Science, CUNY Graduate Center, New York, NY, USA
Department of Economics, University of Mannheim, Germany
Department of Computer Science and Mathematics, University of Southern Denmark, Odense, Denmark
This paper considers elections in which voters choose one candidate each, independently according to known probability distributions. A candidate receiving a strict majority (absolute or relative, depending on the version) wins. After the voters have made their choices, each vote can be inspected to determine which candidate received that vote. The time (or cost) to inspect each of the votes is known in advance. The task is to (possibly adaptively) determine the order in which to inspect the votes, so as to minimize the expected time to determine which candidate has won the election. We design polynomial-time constant-factor approximation algorithms for both the absolute-majority and the relative-majority version. Both algorithms are based on a two-phase approach. In the first phase, the algorithms reduce the number of relevant candidates to O(1), and in the second phase they utilize techniques from the literature on stochastic function evaluation to handle the remaining candidates. In the case of absolute majority, we show that the same can be achieved with only two rounds of adaptivity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.61/LIPIcs.ITCS.2024.61.pdf
stochastic function evaluation
voting
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
62:1
62:25
10.4230/LIPIcs.ITCS.2024.62
article
On the Complexity of Algorithms with Predictions for Dynamic Graph Problems
Henzinger, Monika
1
https://orcid.org/0000-0002-5008-6530
Saha, Barna
2
https://orcid.org/0000-0002-6494-3839
Seybold, Martin P.
3
https://orcid.org/0000-0001-6901-3035
Ye, Christopher
2
https://orcid.org/0009-0004-0528-5639
Institute of Science and Technology Austria (ISTA), Klosterneuburg, Austria
University of California San Diego, La Jolla, CA, USA
University of Vienna, Austria
Algorithms with predictions is a new research direction that leverages machine-learned predictions for algorithm design. A plethora of recent works has incorporated predictions to improve on worst-case bounds for online problems. In this paper, we initiate the study of the complexity of dynamic data structures with predictions, including dynamic graph algorithms. Unlike online algorithms, the goal in dynamic data structures is to maintain the solution efficiently with every update.
We investigate three natural models of prediction: (1) δ-accurate predictions where each predicted request matches the true request with probability δ, (2) list-accurate predictions where a true request comes from a list of possible requests, and (3) bounded delay predictions where the true requests are a permutation of the predicted requests. We give general reductions among the prediction models, showing that bounded delay is the strongest prediction model, followed by list-accurate, and δ-accurate.
Further, we identify two broad problem classes based on lower bounds due to the Online Matrix Vector (OMv) conjecture. Specifically, we show that locally correctable dynamic problems have strong conditional lower bounds for list-accurate predictions that are equivalent to the non-prediction setting, unless list-accurate predictions are perfect. Moreover, we show that locally reducible dynamic problems have time complexity that degrades gracefully with the quality of bounded delay predictions. We categorize problems with known OMv lower bounds accordingly and give several upper bounds in the delay model that show that our lower bounds are almost tight.
We note that concurrent work by van den Brand et al. [SODA '24] and Liu and Srinivas [arXiv:2307.08890] independently studies dynamic graph algorithms with predictions, but their work is mostly focused on showing upper bounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.62/LIPIcs.ITCS.2024.62.pdf
Dynamic Graph Algorithms
Algorithms with Predictions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
63:1
63:24
10.4230/LIPIcs.ITCS.2024.63
article
TFNP Intersections Through the Lens of Feasible Disjunction
Hubáček, Pavel
1
2
https://orcid.org/0000-0002-6850-6222
Khaniki, Erfan
1
2
Thapen, Neil
1
Institute of Mathematics, Czech Academy of Sciences, Prague, Czech Republic
Charles University, Faculty of Mathematics and Physics, Prague, Czech Republic
The complexity class CLS was introduced by Daskalakis and Papadimitriou (SODA 2010) to capture the computational complexity of important TFNP problems solvable by local search over continuous domains and, thus, lying in both PLS and PPAD. It was later shown that, e.g., the problem of computing fixed points guaranteed by Banach’s fixed point theorem is CLS-complete by Daskalakis et al. (STOC 2018). Recently, Fearnley et al. (J. ACM 2023) disproved the plausible conjecture of Daskalakis and Papadimitriou that CLS is a proper subclass of PLS∩PPAD by proving that CLS = PLS∩PPAD.
To study the possibility of other collapses in TFNP, we connect classes formed as the intersection of existing subclasses of TFNP with the phenomenon of feasible disjunction in propositional proof complexity: a proof system has the feasible disjunction property if, whenever a disjunction F ∨ G has a small proof and F and G have no variables in common, then either F or G has a small proof. Based on some known and some new results about feasible disjunction, we separate the classes formed by intersecting the classical subclasses PLS, PPA, PPAD, PPADS, PPP and CLS. We also give the first examples of proof systems which have the feasible interpolation property, but not the feasible disjunction property.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.63/LIPIcs.ITCS.2024.63.pdf
TFNP
feasible disjunction
proof complexity
TFNP intersection classes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
64:1
64:22
10.4230/LIPIcs.ITCS.2024.64
article
Exponential-Time Approximation Schemes via Compression
Inamdar, Tanmay
1
https://orcid.org/0000-0002-0184-5932
Kundu, Madhumita
2
https://orcid.org/0000-0002-8562-946X
Parviainen, Pekka
2
Ramanujan, M. S.
3
https://orcid.org/0000-0002-2116-6048
Saurabh, Saket
4
2
Indian Institute of Technology, Jodhpur, India
University of Bergen, Norway
University of Warwick, UK
Institute of Mathematical Sciences, Chennai, India
In this paper, we give a framework for designing exponential-time approximation schemes for basic graph partitioning problems such as k-way cut, Multiway Cut, Steiner k-cut, and Multicut, where the goal is to minimize the number of edges going across the parts. Our motivation to focus on approximation schemes for these problems comes from the fact that while it is possible to solve them exactly in 2^n n^{𝒪(1)} time (note that this is already faster than brute-forcing over all partitions or edge sets), it is not known whether one can do better. Using our framework, we design the first (1+ε)-approximation algorithms for all of the above problems that run in time 2^{f(ε)n} for some f(ε) < 1.
As part of our framework, we present two compression procedures. The first of these is a "lossless" procedure, which is inspired by the seminal randomized contraction algorithm for Global Min-cut of Karger [SODA '93]. Here, we reduce the graph to an equivalent instance where the total number of edges is linearly bounded in the number of edges in an optimal solution of the original instance. Following this, we show how a careful combination of greedy choices and the best exact algorithm for the respective problems can exploit this structure and lead to our approximation schemes.
Our first compression procedure bounds the number of edges linearly in the optimal solution, but this could still leave a dense graph as the solution size could be superlinear in the number of vertices. However, for several problems, it is known that they admit significantly faster algorithms on instances where solution size is linear in the number of vertices, in contrast to general instances. Hence, a natural question arises here. Could one reduce the solution size to linear in the number of vertices, at least in the case where we are willing to settle for a near-optimal solution, so that the aforementioned faster algorithms could be exploited?
In the second compression procedure, using cut sparsifiers (this time, inspired by Benczúr and Karger [STOC '96]), we introduce "solution linearization" as a methodology to give an approximation-preserving reduction to the regime where solution size is linear in the number of vertices for certain cut problems. Using this, we obtain the first polynomial-space approximation schemes faster than 2^n n^{𝒪(1)} for Minimum Bisection and Edge Bipartization. Along the way, we also design the first polynomial-space exact algorithms for these problems that run in time faster than 2^n n^{𝒪(1)}, in the regime where solution size is linear in the number of vertices. The use of randomized contraction and cut sparsifiers in the exponential-time setting is novel to the best of our knowledge and forms our conceptual contribution.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.64/LIPIcs.ITCS.2024.64.pdf
Exponential-Time Algorithms
Approximation Algorithms
Graph Algorithms
Cut Problems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
65:1
65:21
10.4230/LIPIcs.ITCS.2024.65
article
FPT Approximation for Capacitated Sum of Radii
Jaiswal, Ragesh
1
https://orcid.org/0009-0002-4475-0922
Kumar, Amit
1
https://orcid.org/0000-0002-3965-6627
Yadav, Jatin
1
https://orcid.org/0009-0003-5022-3878
CSE, IIT Delhi, India
We consider the capacitated clustering problem in general metric spaces where the goal is to identify k clusters and minimize the sum of the radii of the clusters (we call this the Capacitated k-sumRadii problem). We are interested in fixed-parameter tractable (FPT) approximation algorithms where the running time is of the form f(k) ⋅ poly(n), where f(k) can be an exponential function of k and n is the number of points in the input. In the uniform capacity case, Bandyapadhyay et al. recently gave a 4-approximation algorithm for this problem. Our first result improves this to an FPT 3-approximation and extends to a constant factor approximation for any L_p norm of the cluster radii. In the general capacities version, Bandyapadhyay et al. gave an FPT 15-approximation algorithm. We extend their framework to give an FPT (4 + √13)-approximation algorithm for this problem. Our framework relies on a novel idea of identifying approximations to optimal clusters by carefully pruning points from an initial candidate set of points. This is in contrast to prior results that rely on guessing suitable points and building balls of appropriate radii around them.
On the hardness front, we show that assuming the Exponential Time Hypothesis, there is a constant c > 1 such that any c-approximation algorithm for the non-uniform capacity version of this problem requires running time 2^Ω(k/polylog(k)).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.65/LIPIcs.ITCS.2024.65.pdf
Approximation algorithm
parameterized algorithm
clustering
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
66:1
66:22
10.4230/LIPIcs.ITCS.2024.66
article
A VLSI Circuit Model Accounting for Wire Delay
Jin, Ce
1
Williams, R. Ryan
1
https://orcid.org/0000-0003-2326-2233
Young, Nathaniel
2
https://orcid.org/0009-0000-8648-9609
MIT, Cambridge, MA, USA
Unaffiliated, San Jose, CA, USA
Given the need for ever higher performance, and the failure of CPUs to keep providing single-threaded performance gains, engineers are increasingly turning to highly-parallel custom VLSI chips to implement expensive computations. In VLSI design, the gates and wires of a logical circuit are placed on a 2-dimensional chip with a small number of layers. Traditional VLSI models use gate delay to measure the time complexity of the chip, ignoring the lengths of wires. However, as technology has advanced, wire delay is no longer negligible; it has become an important measure in the design of VLSI chips [Markov, Nature (2014)].
Motivated by this situation, we define and study a model for VLSI chips, called wire-delay VLSI, which takes wire delay into account, going beyond an earlier model of Chazelle and Monier [JACM 1985].
- We prove nearly tight upper bounds and lower bounds (up to logarithmic factors) on the time delay of this chip model for several basic problems. For example, And, Or and Parity require Θ(n^{1/3}) delay, while Addition and Multiplication require Θ̃(n^{1/2}) delay, and Triangle Detection on (dense) n-node graphs requires Θ̃(n) delay. Interestingly, when we allow input bits to be read twice, the delay for Addition can be improved to Θ(n^{1/3}).
- We also show that proving significantly higher lower bounds in our wire-delay VLSI model would imply breakthrough results in circuit lower bounds. Motivated by this barrier, we also study conditional lower bounds on the delay of chips based on the Orthogonal Vectors Hypothesis from fine-grained complexity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.66/LIPIcs.ITCS.2024.66.pdf
circuit complexity
systolic arrays
VLSI
wire delay
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
67:1
67:22
10.4230/LIPIcs.ITCS.2024.67
article
Small Sunflowers and the Structure of Slice Rank Decompositions
Karam, Thomas
1
https://orcid.org/0009-0000-9983-7756
Mathematical Institute, University of Oxford, UK
Let d ≥ 3 be an integer. We show that whenever an order-d tensor admits d+1 decompositions according to Tao’s slice rank, if the linear subspaces spanned by their one-variable functions constitute a sunflower for each choice of special coordinate, then the tensor admits a decomposition where these linear subspaces are contained in the centers of these respective sunflowers. As an application, we deduce that for every nonnegative integer k and every finite field 𝔽 there exists an integer C(d,k,|𝔽|) such that every order-d tensor with slice rank k over 𝔽 admits at most C(d,k,|𝔽|) decompositions with length k, up to a class of transformations that can be easily described.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.67/LIPIcs.ITCS.2024.67.pdf
Slice rank
tensors
sunflowers
decompositions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
68:1
68:23
10.4230/LIPIcs.ITCS.2024.68
article
Distributional PAC-Learning from Nisan’s Natural Proofs
Karchmer, Ari
1
https://orcid.org/0000-0002-4692-4124
Boston University, MA, USA
Do natural proofs imply efficient learning algorithms? Carmosino et al. (2016) demonstrated that natural proofs of circuit lower bounds for Λ imply efficient algorithms for learning Λ-circuits, but only over the uniform distribution, with membership queries, and provided AC⁰[p] ⊆ Λ. We consider whether this implication can be generalized to Λ ⊉ AC⁰[p], and to learning algorithms which use only random examples and learn over arbitrary example distributions (Valiant’s PAC-learning model).
We first observe that, if, for any circuit class Λ, there is an implication from natural proofs for Λ to PAC-learning for Λ, then standard assumptions from lattice-based cryptography do not hold. In particular, we observe that depth-2 majority circuits are a (conditional) counterexample to this fully general implication, since Nisan (1993) gave a natural proof, but Klivans and Sherstov (2009) showed hardness of PAC-learning under lattice-based assumptions. We thus ask: what learning algorithms can we reasonably expect to follow from Nisan’s natural proofs?
Our main result is that all natural proofs arising from a type of communication complexity argument, including Nisan’s, imply PAC-learning algorithms in a new distributional variant (i.e., an "average-case" relaxation) of Valiant’s PAC model. Our distributional PAC model is stronger than the average-case prediction model of Blum et al. (1993) and the heuristic PAC model of Nanashima (2021), and has several important properties which make it of independent interest, such as being boosting-friendly. The main applications of our result are new distributional PAC-learning algorithms for depth-2 majority circuits, polytopes and DNFs over natural target distributions, as well as the nonexistence of encoded-input weak PRFs that can be evaluated by depth-2 majority circuits.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.68/LIPIcs.ITCS.2024.68.pdf
PAC-learning
average-case complexity
communication complexity
natural proofs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
69:1
69:22
10.4230/LIPIcs.ITCS.2024.69
article
Quantum and Classical Low-Degree Learning via a Dimension-Free Remez Inequality
Klein, Ohad
1
https://orcid.org/0000-0002-9485-890X
Slote, Joseph
2
https://orcid.org/0000-0002-6363-7821
Volberg, Alexander
3
4
https://orcid.org/0000-0002-8127-6505
Zhang, Haonan
5
https://orcid.org/0000-0001-9537-9663
School of Engineering and Computer Science, Hebrew University, Jerusalem, Israel
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
Department of Mathematics, Michigan State University, Ann Arbor, MI, USA
Hausdorff Center for Mathematics, University of Bonn, Germany
Department of Mathematics, University of South Carolina, Columbia, SC, USA
Recent efforts in Analysis of Boolean Functions aim to extend core results to new spaces, including to the slice binom([n],k), the hypergrid [K]ⁿ, and noncommutative spaces (matrix algebras). We present here a new way to relate functions on the hypergrid (or products of cyclic groups) to their harmonic extensions over the polytorus. We show the supremum of a function f over products of the cyclic group {exp(2π i k/K)}_{k = 1}^K controls the supremum of f over the entire polytorus ({z ∈ ℂ:|z| = 1}ⁿ), with multiplicative constant C depending on K and deg(f) only. This Remez-type inequality appears to be the first such estimate that is dimension-free (i.e., C does not depend on n).
This dimension-free Remez-type inequality removes the main technical barrier to giving 𝒪(log n) sample complexity, polynomial-time algorithms for learning low-degree polynomials on the hypergrid and low-degree observables on level-K qudit systems. In particular, our dimension-free Remez inequality implies new Bohnenblust-Hille-type estimates which are central to the learning algorithms and appear unobtainable via standard techniques. Thus we extend to new spaces a recent line of work [Eskenazis and Ivanisvili, 2022; Huang et al., 2022; Volberg and Zhang, 2023] that gave similarly efficient methods for learning low-degree polynomials on the hypercube and observables on qubits.
An additional product of these efforts is a new class of distributions over which arbitrary quantum observables are well-approximated by their low-degree truncations - a phenomenon that greatly extends the reach of low-degree learning in quantum science [Huang et al., 2022].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.69/LIPIcs.ITCS.2024.69.pdf
Analysis of Boolean Functions
Remez Inequality
Bohnenblust-Hille Inequality
Statistical Learning Theory
Qudits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
70:1
70:22
10.4230/LIPIcs.ITCS.2024.70
article
A Combinatorial Approach to Robust PCA
Kong, Weihao
1
Qiao, Mingda
2
https://orcid.org/0000-0002-9182-6152
Sen, Rajat
3
https://orcid.org/0000-0003-4677-643X
Google Research, Mountain View, CA, USA
University of California, Berkeley, CA, USA
Google Research, Mountain View, CA, USA
We study the problem of recovering Gaussian data under adversarial corruptions when the noises are low-rank and the corruptions are on the coordinate level. Concretely, we assume that the Gaussian noises lie in an unknown k-dimensional subspace U ⊆ ℝ^d, and s randomly chosen coordinates of each data point fall into the control of an adversary. This setting models the scenario of learning from high-dimensional yet structured data that are transmitted through a highly-noisy channel, so that the data points are unlikely to be entirely clean.
Our main result is an efficient algorithm that, when ks² = O(d), recovers every single data point up to a nearly-optimal 𝓁₁ error of Õ(ks/d) in expectation. At the core of our proof is a new analysis of the well-known Basis Pursuit (BP) method for recovering a sparse signal, which is known to succeed under additional assumptions (e.g., incoherence or the restricted isometry property) on the underlying subspace U. In contrast, we present a novel approach via studying a natural combinatorial problem and show that, over the randomness in the support of the sparse signal, a high-probability error bound is possible even if the subspace U is arbitrary.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.70/LIPIcs.ITCS.2024.70.pdf
Robust PCA
Sparse Recovery
Robust Statistics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
71:1
71:17
10.4230/LIPIcs.ITCS.2024.71
article
Hardness of Approximating Bounded-Degree Max 2-CSP and Independent Set on k-Claw-Free Graphs
Lee, Euiwoong
1
Manurangsi, Pasin
2
https://orcid.org/0000-0002-1052-2801
University of Michigan, Ann Arbor, MI, USA
Google Research, Bangkok, Thailand
We consider the question of approximating Max 2-CSP where each variable appears in at most d constraints (but with possibly arbitrarily large alphabet). There is a simple ((d+1)/2)-approximation algorithm for the problem. We prove the following results for any sufficiently large d:
- Assuming the Unique Games Conjecture (UGC), it is NP-hard (under randomized reduction) to approximate this problem to within a factor of (d/2 - o(d)).
- It is NP-hard (under randomized reduction) to approximate the problem to within a factor of (d/3 - o(d)).
Thanks to a known connection [Pavel Dvorák et al., 2023], we establish the following hardness results for approximating Maximum Independent Set on k-claw-free graphs:
- Assuming the Unique Games Conjecture (UGC), it is NP-hard (under randomized reduction) to approximate this problem to within a factor of (k/4 - o(k)).
- It is NP-hard (under randomized reduction) to approximate the problem to within a factor of (k/(3 + 2√2) - o(k)) ≥ (k/(5.829) - o(k)).
In comparison, known approximation algorithms achieve (k/2 - o(k))-approximation in polynomial time [Meike Neuwohner, 2021; Theophile Thiery and Justin Ward, 2023] and (k/3 + o(k))-approximation in quasi-polynomial time [Marek Cygan et al., 2013].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.71/LIPIcs.ITCS.2024.71.pdf
Hardness of Approximation
Bounded Degree
Constraint Satisfaction Problems
Independent Set
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
72:1
72:19
10.4230/LIPIcs.ITCS.2024.72
article
Classical vs Quantum Advice and Proofs Under Classically-Accessible Oracle
Li, Xingjian
1
https://orcid.org/0000-0002-8058-7491
Liu, Qipeng
2
https://orcid.org/0000-0002-3994-7061
Pelecanos, Angelos
3
https://orcid.org/0009-0005-6329-1786
Yamakawa, Takashi
4
https://orcid.org/0000-0003-1712-3026
Tsinghua University, Beijing, China
University of California at San Diego, La Jolla, CA, USA
University of California at Berkeley, CA, USA
NTT Social Informatics Laboratories, Tokyo, Japan
It is a long-standing open question to construct a classical oracle relative to which BQP/qpoly ≠ BQP/poly or QMA ≠ QCMA. In this paper, we construct classically-accessible classical oracles relative to which BQP/qpoly ≠ BQP/poly and QMA ≠ QCMA. Here, classically-accessible classical oracles are oracles that can be accessed only classically even for quantum algorithms. Based on a similar technique, we also show an alternative proof for the separation of QMA and QCMA relative to a distributional quantumly-accessible classical oracle, which was recently shown by Natarajan and Nirkhe.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.72/LIPIcs.ITCS.2024.72.pdf
quantum computation
computational complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
73:1
73:21
10.4230/LIPIcs.ITCS.2024.73
article
Dynamic Maximal Matching in Clique Networks
Li, Minming
1
Robinson, Peter
2
Zhu, Xianbin
1
Department of Computer Science, City University of Hong Kong, Hong Kong
School of Computer & Cyber Sciences, Augusta University, GA, USA
We consider the problem of computing a maximal matching with a distributed algorithm in the presence of batch-dynamic changes to the graph topology. We assume that a graph of n nodes is vertex-partitioned among k players that communicate via message passing. Our goal is to provide an efficient algorithm that quickly updates the matching even if an adversary determines batches of 𝓁 edge insertions or deletions. We first show a lower bound of Ω((𝓁 log k)/(k² log n)) rounds for recomputing a matching, assuming an oblivious adversary who is unaware of the initial (random) vertex partition as well as the current state of the players, and a stronger lower bound of Ω(𝓁/(k log n)) rounds against an adaptive adversary, who may choose any balanced (but not necessarily random) vertex partition initially and who knows the current state of the players. We also present a randomized algorithm that has an initialization time of O(n/(k log n)) rounds, while achieving an update time that is independent of n: in more detail, the update time is O(⌈𝓁/k⌉ log k) against an oblivious adversary, who must fix all updates in advance. If we consider the stronger adaptive adversary, the update time becomes O(⌈𝓁/√k⌉ log k) rounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.73/LIPIcs.ITCS.2024.73.pdf
distributed graph algorithm
dynamic network
maximal matching
randomized algorithm
lower bound
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
74:1
74:22
10.4230/LIPIcs.ITCS.2024.74
article
Intersection Classes in TFNP and Proof Complexity
Li, Yuhao
1
Pires, William
1
Robere, Robert
2
Columbia University, New York, NY, USA
McGill University, Montreal, Canada
A recent breakthrough in the theory of total NP search problems (TFNP) by Fearnley, Goldberg, Hollender, and Savani has shown that CLS = PLS ∩ PPAD, or, in other words, that the class of problems reducible to gradient descent is exactly the class of problems in the intersection of the complexity classes PLS and PPAD. Since this result, two more intersection theorems have been discovered in this theory: EOPL = PLS ∩ PPAD and SOPL = PLS ∩ PPADS. It is natural to wonder if this exhausts the list of intersection classes in TFNP, or if other intersections exist.
In this work, we completely classify all intersection classes involved among the classical TFNP classes PLS, PPAD, and PPA, giving new complete problems for the newly-introduced intersections. Following the close links between the theory of TFNP and propositional proof complexity, we develop new proof systems - each of which is a generalization of the classical Resolution proof system - that characterize all of the classes, in the sense that a query total search problem is in the intersection class if and only if a tautology associated with the search problem has a short proof in the proof system. We complement these new characterizations with black-box separations between all of the newly introduced classes and prior classes, thus giving strong evidence that no further collapse occurs. Finally, we characterize arbitrary intersections and joins of the PPA_q classes for q ≥ 2 in terms of the Nullstellensatz proof systems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.74/LIPIcs.ITCS.2024.74.pdf
TFNP
Proof Complexity
Intersection Classes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
75:1
75:23
10.4230/LIPIcs.ITCS.2024.75
article
Total NP Search Problems with Abundant Solutions
Li, Jiawei
1
https://orcid.org/0000-0002-1441-1711
The University of Texas at Austin, TX, USA
We define a new complexity class TFAP to capture TFNP problems that possess abundant solutions for each input. We identify several problems across diverse fields that belong to TFAP, including WeakPigeon (finding a collision in a mapping from [2n] pigeons to [n] holes), Yamakawa-Zhandry’s problem [Takashi Yamakawa and Mark Zhandry, 2022], and all problems in TFZPP.
Conversely, we introduce the notion of "semi-gluability" to characterize TFNP problems that could have a unique or a very limited number of solutions for certain inputs. We prove that there is no black-box reduction from any "semi-gluable" problem to any TFAP problem. Furthermore, this can be extended to rule out randomized black-box reductions in most cases. We identify that the majority of common TFNP subclasses, including PPA, PPAD, PPADS, PPP, PLS, CLS, SOPL, and UEOPL, are "semi-gluable". This leads to a broad array of oracle separation results within the TFNP regime. As a corollary, UEOPL^O ⊈ PWPP^O relative to an oracle O.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.75/LIPIcs.ITCS.2024.75.pdf
TFNP
Pigeonhole Principle
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
76:1
76:18
10.4230/LIPIcs.ITCS.2024.76
article
Making Progress Based on False Discoveries
Livni, Roi
1
Department of Electrical Engineering, Tel Aviv University, Israel
We consider Stochastic Convex Optimization as a case-study for Adaptive Data Analysis. A basic question is how many samples are needed in order to compute ε-accurate estimates of O(1/ε²) gradients queried by gradient descent. We provide two intermediate answers to this question.
First, we show that for a general analyst (not necessarily gradient descent) Ω(1/ε³) samples are required, which is more than the number of samples required to simply optimize the population loss. Our construction builds upon a new lower bound (which may be of interest in its own right) for an analyst that may ask several non-adaptive queries in each of a fixed and known number T of rounds of adaptivity and requires a fraction of true discoveries. We show that for such an analyst Ω(√T/ε²) samples are necessary.
Second, we show that, under certain assumptions on the oracle, in an interaction with gradient descent Ω̃(1/ε^{2.5}) samples are necessary, which is again suboptimal in terms of optimization. Our assumptions are that the oracle has only first-order access and is post-hoc generalizing. First-order access means that it can only compute the gradients of the sampled function at points queried by the algorithm. Our assumption of post-hoc generalization follows from existing lower bounds for statistical queries. More generally, we provide a generic reduction from the standard setting of statistical queries to the problem of estimating gradients queried by gradient descent.
Overall these results are in contrast with classical bounds that show that with O(1/ε²) samples one can optimize the population risk to accuracy of O(ε) but, as it turns out, with spurious gradients.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.76/LIPIcs.ITCS.2024.76.pdf
Adaptive Data Analysis
Stochastic Convex Optimization
Learning Theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
77:1
77:23
10.4230/LIPIcs.ITCS.2024.77
article
Kernelization of Counting Problems
Lokshtanov, Daniel
1
https://orcid.org/0000-0002-3166-9212
Misra, Pranabendu
2
https://orcid.org/0000-0002-7086-5590
Saurabh, Saket
3
4
https://orcid.org/0000-0001-7847-6402
Zehavi, Meirav
5
https://orcid.org/0000-0002-3636-5322
University of California Santa Barbara, CA, USA
Chennai Mathematical Institute, Chennai, India
The Institute of Mathematical Sciences, HBNI, Chennai, India
University of Bergen, Norway
Ben-Gurion University of the Negev, Beersheba, Israel
We introduce a new framework for the analysis of preprocessing routines for parameterized counting problems. Existing frameworks that encapsulate parameterized counting problems permit the usage of exponential (rather than polynomial) time either explicitly or by implicitly reducing the counting problems to enumeration problems. Thus, our framework is the only one in the spirit of classic kernelization (as well as lossy kernelization). Specifically, we define a compression of a counting problem P into a counting problem Q as a pair of polynomial-time procedures: reduce and lift. Given an instance of P, reduce outputs an instance of Q whose size is bounded by a function f of the parameter, and given the number of solutions to the instance of Q, lift outputs the number of solutions to the instance of P. When P = Q, compression is termed kernelization, and when f is polynomial, compression is termed polynomial compression. Our technical (and other conceptual) contributions can be classified into two categories:
Upper Bounds. We prove two theorems: (i) The #Vertex Cover problem parameterized by solution size admits a polynomial kernel; (ii) Every problem in the class of #Planar F-Deletion problems parameterized by solution size admits a polynomial compression.
Lower Bounds. We introduce two new concepts of cross-compositions: EXACT-cross-composition and SUM-cross-composition. We prove that if a #P-hard counting problem P EXACT-cross-composes into a parameterized counting problem Q, then Q does not admit a polynomial compression unless the polynomial hierarchy collapses. We conjecture that the same statement holds for SUM-cross-compositions. Then, we prove that: (i) #Min (s,t)-Cut parameterized by treewidth does not admit a polynomial compression unless the polynomial hierarchy collapses; (ii) #Min (s,t)-Cut parameterized by minimum cut size, #Odd Cycle Transversal parameterized by solution size, and #Vertex Cover parameterized by solution size minus maximum matching size, do not admit polynomial compressions unless our conjecture is false.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.77/LIPIcs.ITCS.2024.77.pdf
Kernelization
Counting Problems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
78:1
78:21
10.4230/LIPIcs.ITCS.2024.78
article
Modularity and Graph Expansion
Louf, Baptiste
1
McDiarmid, Colin
2
Skerman, Fiona
3
https://orcid.org/0000-0003-4141-7059
CNRS and Institut de Mathématiques de Bordeaux, France
Department of Statistics, University of Oxford, UK
Department of Mathematics, Uppsala University, Sweden
We relate two important notions in graph theory: expanders, which are highly connected graphs, and modularity, a parameter of a graph that is primarily used in community detection. More precisely, we show that a graph having modularity bounded below 1 is equivalent to it having a large subgraph which is an expander.
We further show that a connected component H will be split in an optimal partition of the host graph G if and only if the relative size of H in G is greater than an expansion constant of H. This is a further exploration of the resolution limit known for modularity, and indeed recovers the bound that a connected component H in the host graph G will not be split if e(H) < √{2e(G)}.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.78/LIPIcs.ITCS.2024.78.pdf
edge expansion
modularity
community detection
resolution limit
conductance
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
79:1
79:23
10.4230/LIPIcs.ITCS.2024.79
article
Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions
Mahankali, Arvind V.
1
Woodruff, David P.
2
Zhang, Ziyu
3
https://orcid.org/0000-0002-8605-0169
Stanford University, CA, USA
Carnegie Mellon University, Pittsburgh, PA, USA
MIT CSAIL, Cambridge, MA, USA
We study low-rank approximation of tensors, focusing on the Tensor Train and Tucker decompositions, as well as approximations with tree tensor networks and general tensor networks. As suggested by hardness results also shown in this work, obtaining (1+ε)-approximation algorithms for rank-k tensor train and Tucker decompositions efficiently may be computationally hard. Therefore, we propose bicriteria algorithms: algorithms that satisfy some of the objectives above exactly while violating the others within a bounded factor. On the one hand, for rank-k tensor train decomposition for tensors with q modes, we give a (1 + ε)-approximation algorithm with a small bicriteria rank (O(qk/ε) up to logarithmic factors) and O(q ⋅ nnz(A)) running time, up to lower order terms. Here nnz(A) denotes the number of non-zero entries in the input tensor A. We also show how to convert the algorithm of [Huber et al., 2017] into a relative error approximation algorithm, but their algorithm necessarily has a running time of O(qr² ⋅ nnz(A)) + n ⋅ poly(qk/ε) when converted to a (1 + ε)-approximation algorithm with bicriteria rank r. Thus, the running time of our algorithm is better by at least a k² factor. To the best of our knowledge, our work is the first to achieve a near-input-sparsity time relative error approximation algorithm for tensor train decomposition. Our key technique is a method for efficiently obtaining subspace embeddings for a matrix which is the flattening of a Tensor Train of q tensors - the number of rows in the subspace embeddings is polynomial in q, thus avoiding the curse of dimensionality. We extend our algorithm to tree tensor networks and tensor networks on arbitrary graphs. Another way of coping with intractability is by looking at fixed-parameter tractable (FPT) algorithms.
We give FPT algorithms for the tensor train, Tucker, and Canonical Polyadic (CP) decompositions, which are simpler than the FPT algorithms of [Song et al., 2019], since our algorithms do not make use of polynomial system solvers. Our technique of using an exponential number of Gaussian subspace embeddings with exactly k rows (and thus exponentially small success probability) may be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.79/LIPIcs.ITCS.2024.79.pdf
Low rank approximation
Sketching algorithms
Tensor decomposition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
80:1
80:20
10.4230/LIPIcs.ITCS.2024.80
article
The Non-Uniform Perebor Conjecture for Time-Bounded Kolmogorov Complexity Is False
Mazor, Noam
1
Pass, Rafael
2
1
Cornell Tech, New York, NY, USA
Tel Aviv University, Israel
The Perebor (Russian for "brute-force search") conjectures, which date back to the 1950s and 1960s, are some of the oldest conjectures in complexity theory. The conjectures are a stronger form of the NP ≠ P conjecture (which they predate) and state that for "meta-complexity" problems, such as the Time-Bounded Kolmogorov Complexity Problem and the Minimum Circuit Size Problem, there are no better algorithms than brute-force search.
In this paper, we disprove the non-uniform version of the Perebor conjecture for the Time-Bounded Kolmogorov complexity problem. We demonstrate that for every polynomial t(⋅), there exists a circuit of size 2^{4n/5+o(n)} that solves the t(⋅)-bounded Kolmogorov complexity problem on every instance.
Our algorithm is black-box in the description of the Universal Turing Machine U employed in the definition of Kolmogorov Complexity and leverages the characterization of one-way functions through the hardness of the time-bounded Kolmogorov complexity problem of Liu and Pass (FOCS'20), and the time-space trade-off for one-way functions of Fiat and Naor (STOC'91). We additionally demonstrate that no such black-box algorithm can have circuit size smaller than 2^{n/2-o(n)}.
Along the way (and of independent interest), we extend the result of Fiat and Naor and demonstrate that any efficiently computable function can be inverted (with probability 1) by a circuit of size 2^{4n/5+o(n)}; as far as we know, this yields the first formal proof that a non-trivial circuit can invert any efficient function.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.80/LIPIcs.ITCS.2024.80.pdf
Kolmogorov complexity
perebor conjecture
function inversion
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
81:1
81:19
10.4230/LIPIcs.ITCS.2024.81
article
A Myersonian Framework for Optimal Liquidity Provision in Automated Market Makers
Milionis, Jason
1
https://orcid.org/0000-0002-9460-9559
Moallemi, Ciamac C.
2
https://orcid.org/0000-0002-4489-9260
Roughgarden, Tim
1
3
https://orcid.org/0000-0002-7163-8306
Department of Computer Science, Columbia University, New York, NY, USA
Graduate School of Business, Columbia University, New York, NY, USA
a16z Crypto, New York NY 10010, USA
In decentralized finance ("DeFi"), automated market makers (AMMs) enable traders to programmatically exchange one asset for another. Such trades are enabled by the assets deposited by liquidity providers (LPs). The goal of this paper is to characterize and interpret the optimal (i.e., profit-maximizing) strategy of a monopolist liquidity provider, as a function of that LP’s beliefs about asset prices and trader behavior. We introduce a general framework for reasoning about AMMs based on a Bayesian-like belief inference framework, where LPs maintain an asset price estimate, which is updated by incorporating traders' price estimates. In this model, the market maker (i.e., LP) chooses a demand curve that specifies the quantity of a risky asset to be held at each dollar price. Traders arrive sequentially and submit a price bid that can be interpreted as their estimate of the risky asset price; the AMM responds to this submitted bid with an allocation of the risky asset to the trader, a payment that the trader must pay, and a revised internal estimate for the true asset price. We define an incentive-compatible (IC) AMM as one in which a trader’s optimal strategy is to submit its true estimate of the asset price, and characterize the IC AMMs as those with downward-sloping demand curves and payments defined by a formula familiar from Myerson’s optimal auction theory. We generalize Myerson’s virtual values, and characterize the profit-maximizing IC AMM. The optimal demand curve generally has a jump that can be interpreted as a "bid-ask spread," which we show is caused by a combination of adverse selection risk (dominant when the degree of information asymmetry is large) and monopoly pricing (dominant when asymmetry is small). This work opens up new research directions into the study of automated exchange mechanisms from the lens of optimal auction theory and iterative belief inference, using tools of theoretical computer science in a novel way.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.81/LIPIcs.ITCS.2024.81.pdf
Posted-Price Mechanisms
Asset Exchange
Market Making
Automated Market Makers (AMMs)
Blockchains
Decentralized Finance
Incentive Compatibility
Optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
82:1
82:23
10.4230/LIPIcs.ITCS.2024.82
article
A Computational Separation Between Quantum No-Cloning and No-Telegraphing
Nehoran, Barak
1
https://orcid.org/0000-0001-7371-0829
Zhandry, Mark
2
https://orcid.org/0000-0001-7071-6272
Princeton University, NJ, USA
NTT Research, Sunnyvale, CA, USA
Two of the fundamental no-go theorems of quantum information are the no-cloning theorem (that it is impossible to make copies of general quantum states) and the no-teleportation theorem (the prohibition on telegraphing, or sending quantum states over classical channels without pre-shared entanglement). They are known to be equivalent, in the sense that a collection of quantum states is telegraphable if and only if it is clonable.
Our main result suggests that this is not the case when computational efficiency is considered. We give a collection of quantum states and quantum oracles relative to which these states are efficiently clonable but not efficiently telegraphable. Given that the opposite scenario is impossible (states that can be telegraphed can always trivially be cloned), this gives the most complete quantum oracle separation possible between these two important no-go properties.
We additionally study the complexity class clonableQMA, a subset of QMA whose witnesses are efficiently clonable. As a consequence of our main result, we give a quantum oracle separation between clonableQMA and the class QCMA, whose witnesses are restricted to classical strings. We also propose a candidate oracle-free promise problem separating these classes. We finally demonstrate an application of clonable-but-not-telegraphable states to cryptography, by showing how such states can be used to protect against key exfiltration.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.82/LIPIcs.ITCS.2024.82.pdf
Cloning
telegraphing
no-cloning theorem
oracle separations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
83:1
83:22
10.4230/LIPIcs.ITCS.2024.83
article
On the Size Overhead of Pairwise Spanners
Neiman, Ofer
1
Shabat, Idan
1
Ben-Gurion University of the Negev, Beer-Sheva, Israel
Given an undirected, possibly weighted n-vertex graph G = (V,E) and a set 𝒫 ⊆ V² of pairs, a subgraph S = (V,E') is called a 𝒫-pairwise α-spanner of G, if for every pair (u,v) ∈ 𝒫 we have d_S(u,v) ≤ α⋅ d_G(u,v). The parameter α is called the stretch of the spanner, and its size overhead is defined as |E'|/|𝒫|.
A surprising connection was recently shown between the additive stretch of (1+ε,β)-spanners and the hopbound of (1+ε,β)-hopsets. A long sequence of works showed that if the spanner/hopset has size ≈ n^{1+1/k} for some parameter k ≥ 1, then β≈(1/ε)^{log k}. In this paper we establish a new connection to the size overhead of pairwise spanners. In particular, we show that if |P|≈ n^{1+1/k}, then a P-pairwise (1+ε)-spanner must have size at least β⋅ |P| with β≈(1/ε)^{log k} (a near matching upper bound was recently shown in [Michael Elkin and Idan Shabat, 2023]). That is, the size overhead of pairwise spanners has similar bounds to the hopbound of hopsets, and to the additive stretch of spanners.
We also extend the connection between pairwise spanners and hopsets to the large stretch regime, by showing nearly matching upper and lower bounds for P-pairwise α-spanners. In particular, we show that if |P|≈ n^{1+1/k}, then the size overhead is β≈k/α.
A source-wise spanner is a special type of pairwise spanner, for which P = A×V for some A ⊆ V. A prioritized spanner is given also a ranking of the vertices V = (v₁,… ,v_n), and is required to provide improved stretch for pairs containing higher ranked vertices. By using a sequence of reductions: from pairwise spanners to source-wise spanners to prioritized spanners, we improve on the state-of-the-art results for source-wise and prioritized spanners. Since our spanners can be equipped with a path-reporting mechanism, we also substantially improve the known bounds for path-reporting prioritized distance oracles. Specifically, we provide a path-reporting distance oracle, with size O(n⋅(log log n)²), that has a constant stretch for any query that contains a vertex ranked among the first n^{1-δ} vertices (for any constant δ > 0). Such a result was known before only for non-path-reporting distance oracles.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.83/LIPIcs.ITCS.2024.83.pdf
Graph Algorithms
Shortest Paths
Spanners
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
84:1
84:22
10.4230/LIPIcs.ITCS.2024.84
article
Budget-Feasible Mechanism Design: Simpler, Better Mechanisms and General Payment Constraints
Neogi, Rian
1
Pashkovich, Kanstantsin
1
Swamy, Chaitanya
1
https://orcid.org/0000-0003-1108-7941
Dept. of Combinatorics and Optimization, University of Waterloo, Canada
In budget-feasible mechanism design, a buyer wishes to procure a set of items of maximum value from self-interested rational players. We are given an item-set U and a nonnegative valuation function v: 2^U ↦ ℝ_+. Each item e is held by a player who incurs a private cost c_e for supplying item e. The goal is to devise a truthful mechanism such that the total payment made to the players is at most some given budget B, and the value of the set returned is a good approximation to OPT: = max {v(S): c(S) ≤ B, S ⊆ U}. We call such a mechanism a budget-feasible mechanism. More generally, there may be additional side constraints requiring that the set returned lies in some downwards-monotone family ℐ ⊆ 2^U. Budget-feasible mechanisms have been widely studied, but there are still significant gaps in our understanding of these mechanisms, both in terms of what kind of oracle access to the valuation is required to obtain good approximation ratios, and the best approximation ratio that can be achieved.
We substantially advance the state of the art of budget-feasible mechanisms by devising mechanisms that are simpler, and also better, both in terms of requiring weaker oracle access and the approximation factors they obtain. For XOS valuations, we devise the first polytime O(1)-approximation budget-feasible mechanism using only demand oracles, and also significantly improve the approximation factor. For subadditive valuations, we give the first explicit construction of an O(1)-approximation mechanism, where previously only an existential result was known.
We also introduce a fairly rich class of mechanism-design problems that we dub using the umbrella term generalized budget-feasible mechanism design, which allow one to capture payment constraints that are much-more nuanced than a single constraint on the total payment doled out. We demonstrate the versatility of our ideas by showing that our constructions can be adapted to yield approximation guarantees in such general settings as well.
A prominent insight to emerge from our work is the usefulness of a property called nobossiness, which allows us to cleanly decouple the truthfulness-and-approximation requirements from budget-feasibility. Some of our constructions can be viewed as reductions showing that an O(1)-approximation budget-feasible mechanism can be obtained provided we have a (randomized) truthful mechanism satisfying nobossiness that returns a (random) feasible set having (expected) value Ω(OPT).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.84/LIPIcs.ITCS.2024.84.pdf
Algorithmic mechanism design
Approximation algorithms
Budget-feasible mechanisms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
85:1
85:23
10.4230/LIPIcs.ITCS.2024.85
article
General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation
Nikolov, Aleksandar
1
Tang, Haohua
1
University of Toronto, Canada
We investigate unbiased high-dimensional mean estimators in differential privacy. We consider differentially private mechanisms whose expected output equals the mean of the input dataset, for every dataset drawn from a fixed bounded domain K in ℝ^d. A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it. In the first part of this paper, we study the optimal error achievable by a Gaussian noise mechanism for a given domain K, when the error is measured in the 𝓁_p norm for some p ≥ 2. We give algorithms that compute the optimal covariance for the Gaussian noise for a given K under suitable assumptions, and prove a number of nice geometric properties of the optimal error. These results generalize the theory of factorization mechanisms from domains K that are symmetric and finite (or, equivalently, symmetric polytopes) to arbitrary bounded domains.
In the second part of the paper we show that Gaussian noise mechanisms achieve nearly optimal error among all private unbiased mean estimation mechanisms in a very strong sense. In particular, for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error as the best Gaussian noise mechanism. We extend this result to local differential privacy, and to approximate differential privacy, but for the latter the error lower bound holds either for a dataset or for a neighboring dataset, and this relaxation is necessary.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.85/LIPIcs.ITCS.2024.85.pdf
differential privacy
mean estimation
unbiased estimator
instance optimality
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
86:1
86:23
10.4230/LIPIcs.ITCS.2024.86
article
Rumors with Changing Credibility
Out, Charlotte
1
https://orcid.org/0000-0003-1316-6336
Rivera, Nicolás
2
https://orcid.org/0000-0003-3368-9708
Sauerwald, Thomas
1
https://orcid.org/0000-0002-0882-283X
Sylvester, John
3
https://orcid.org/0000-0002-6543-2934
Department of Computer Science & Technology, University of Cambridge, UK
Facultad de Ciencias, Universidad de Valparaíso, Chile
Department of Computer Science, University of Liverpool, UK
Randomized rumor spreading processes diffuse information on an undirected graph and have been widely studied. In this work, we present a generic framework for analyzing a broad class of such processes on regular graphs. Our analysis is protocol-agnostic, as it only requires the expected proportion of newly informed vertices in each round to be bounded, and a natural negative correlation property.
This framework allows us to analyze various protocols, including PUSH, PULL, and PUSH-PULL, thereby extending prior research. Unlike previous work, our framework accommodates message failures at any time t ≥ 0 with a probability of 1-q(t), where the credibility q(t) is any function of time. This enables us to model real-world scenarios in which the transmissibility of rumors may fluctuate, as seen in the spread of "fake news" and viruses. Additionally, our framework is sufficiently broad to cover dynamic graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.86/LIPIcs.ITCS.2024.86.pdf
Rumor spreading
epidemic algorithms
"fake news"
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
87:1
87:20
10.4230/LIPIcs.ITCS.2024.87
article
Tensor Reconstruction Beyond Constant Rank
Peleg, Shir
1
https://orcid.org/0000-0002-7836-7780
Shpilka, Amir
1
https://orcid.org/0000-0003-2384-425X
Volk, Ben Lee
2
https://orcid.org/0000-0002-7143-7280
Blavatnik School of Computer Science, Tel Aviv University, Israel
Efi Arazi School of Computer Science, Reichman University, Herzliya, Israel
We give reconstruction algorithms for subclasses of depth-3 arithmetic circuits. In particular, we obtain the first efficient algorithm for finding tensor rank, and an optimal tensor decomposition as a sum of rank-one tensors, when given black-box access to a tensor of super-constant rank. Specifically, we obtain the following results:
1) A deterministic algorithm that reconstructs polynomials computed by Σ^{[k]}⋀^{[d]}Σ circuits in time poly(n,d,c) ⋅ poly(k)^{k^{k^{10}}},
2) A randomized algorithm that reconstructs polynomials computed by multilinear Σ^{[k]}∏^{[d]}Σ circuits in time poly(n,d,c) ⋅ k^{k^{k^{k^{O(k)}}}},
3) A randomized algorithm that reconstructs polynomials computed by set-multilinear Σ^{[k]}∏^{[d]}Σ circuits in time poly(n,d,c) ⋅ k^{k^{k^{k^{O(k)}}}},
where c = log q if 𝔽 = 𝔽_q is a finite field, and c equals the maximum bit complexity of any coefficient of f if 𝔽 is infinite.
Prior to our work, polynomial time algorithms for the case when the rank, k, is constant, were given by Bhargava, Saraf and Volkovich [Vishwas Bhargava et al., 2021].
Another contribution of this work is correcting an error from a paper of Karnin and Shpilka [Zohar Shay Karnin and Amir Shpilka, 2009] (with some loss in parameters) that also affected Theorem 1.6 of [Vishwas Bhargava et al., 2021]. Consequently, the results of [Zohar Shay Karnin and Amir Shpilka, 2009; Vishwas Bhargava et al., 2021] continue to hold, with a slightly worse setting of parameters. For fixing the error we systematically study the relation between syntactic and semantic notions of rank of Σ Π Σ circuits, and the corresponding partitions of such circuits.
We obtain our improved running time by introducing a technique for learning rank-preserving coordinate subspaces. Both [Zohar Shay Karnin and Amir Shpilka, 2009] and [Vishwas Bhargava et al., 2021] tried all choices of the "correct" coordinates, which, due to the size of this set, led to a fast-growing function of k in the exponent of n. We manage to find these subspaces in time that still grows quickly with k, yet is only a fixed polynomial in n.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.87/LIPIcs.ITCS.2024.87.pdf
Algebraic circuits
reconstruction
tensor decomposition
tensor rank
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
88:1
88:17
10.4230/LIPIcs.ITCS.2024.88
article
Color Fault-Tolerant Spanners
Petruschka, Asaf
1
Sapir, Shay
1
Tzalik, Elad
1
Weizmann Institute of Science, Rehovot, Israel
We initiate the study of spanners in arbitrarily vertex- or edge-colored graphs (with no "legality" restrictions), that are resilient to failures of entire color classes. When a color fails, all vertices/edges of that color crash. An f-color fault-tolerant (f-CFT) t-spanner of an n-vertex colored graph G is a subgraph H that preserves distances up to factor t, even in the presence of at most f color faults. This notion generalizes the well-studied f-vertex/edge fault-tolerant (f-V/EFT) spanners. The size (number of edges) of an f-V/EFT spanner crucially depends on the number f of vertex/edge faults to be tolerated. In the colored variants, even a single color fault can correspond to an unbounded number of vertex/edge faults.
The key conceptual contribution of this work is in showing that the size required by an f-CFT spanner is in fact comparable to its uncolored counterpart, with no dependency on the size of color classes. We provide optimal bounds on the size required by f-CFT (2k-1)-spanners, as follows:
- When vertices have colors, we show an upper bound of O(f^{1-1/k} n^{1+1/k}) edges. This precisely matches the (tight) bounds for (2k-1)-spanners resilient to f individual vertex faults [Bodwin et al., SODA 2018; Bodwin and Patel, PODC 2019].
- For colored edges, we show that O(f n^{1+1/k}) edges are always sufficient. Further, we prove this is tight, i.e., we provide an Ω(f n^{1+1/k}) (worst-case) lower bound. The state-of-the-art bounds known for the corresponding uncolored setting of edge faults are (roughly) Θ(f^{1/2} n^{1+1/k}) [Bodwin et al., SODA 2018; Bodwin, Dinitz and Robelle, SODA 2022].
- We also consider a mixed model where both vertices and edges are colored. In this case, we show tight Θ(f^{2-1/k} n^{1+1/k}) bounds. Thus, CFT spanners exhibit an interesting phenomenon: while (individual) edge faults are "easier" than vertex faults, edge-color faults are "harder" than vertex-color faults.
Our upper bounds are based on a generalization of the blocking set technique of [Bodwin and Patel, PODC 2019] for analyzing the (exponential-time) greedy algorithm for FT spanners. We complement them by providing efficient constructions of CFT spanners with similar size guarantees, based on the algorithm of [Dinitz and Robelle, PODC 2020].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.88/LIPIcs.ITCS.2024.88.pdf
Fault tolerance
Graph spanners
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
89:1
89:17
10.4230/LIPIcs.ITCS.2024.89
article
On Generalized Corners and Matrix Multiplication
Pratt, Kevin
1
https://orcid.org/0000-0002-2923-0905
Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, NY, USA
Suppose that S ⊆ [n]² contains no three points of the form (x,y), (x,y+δ), (x+δ,y'), where δ ≠ 0. How big can S be? Trivially, n ≤ |S| ≤ n². Slight improvements on these bounds are obtained from Shkredov’s upper bound for the corners problem [Shkredov, 2006], which shows that |S| ≤ O(n²/(log log n)^c) for some small c > 0, and a construction due to Petrov [Fedor Petrov, 2023], which shows that |S| ≥ Ω(n log n/√{log log n}).
Could it be that for all ε > 0, |S| ≤ O(n^{1+ε})? We show that if so, this would rule out obtaining ω = 2 using a large family of abelian groups in the group-theoretic framework of [Cohn and Umans, 2003; Cohn et al., 2005] (which is known to capture the best bounds on ω to date), for which no barriers are currently known. Furthermore, an upper bound of O(n^{4/3 - ε}) for any fixed ε > 0 would rule out a conjectured approach to obtain ω = 2 of [Cohn et al., 2005]. Along the way, we encounter several problems that have much stronger constraints and that would already have these implications.
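A brute-force checker (a hypothetical illustration; the function name is ours) makes the forbidden pattern concrete. It also makes the trivial bound n ≤ |S| tangible: any set with at most one point per row, such as the diagonal, contains no such triple.

```python
def has_generalized_corner(S):
    """Brute-force check: does S (a set of integer pairs) contain three
    points (x, y), (x, y + d), (x + d, y2) with d != 0?"""
    rows = {}  # x -> set of y-values with (x, y) in S
    for x, y in S:
        rows.setdefault(x, set()).add(y)
    for x, ys in rows.items():
        for y in ys:
            for y2 in ys:
                d = y2 - y
                # the third point may have ANY y-coordinate in column x + d
                if d != 0 and rows.get(x + d):
                    return True
    return False
```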
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.89/LIPIcs.ITCS.2024.89.pdf
Algebraic computation
fast matrix multiplication
additive combinatorics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
90:1
90:21
10.4230/LIPIcs.ITCS.2024.90
article
Pseudorandom Linear Codes Are List-Decodable to Capacity
Putterman, Aaron (Louie)
1
https://orcid.org/0000-0001-9737-2406
Pyne, Edward
2
https://orcid.org/0000-0002-3454-2057
Harvard University, Cambridge, MA, USA
Massachusetts Institute of Technology, Cambridge, MA, USA
We introduce a novel family of expander-based error correcting codes. These codes can be sampled with randomness linear in the block-length, and achieve list decoding capacity (among other local properties). Our expander-based codes can be made starting from any family of sufficiently low-bias codes, and as a consequence, we give the first construction of a family of algebraic codes that can be sampled with linear randomness and achieve list-decoding capacity. We achieve this by introducing the notion of a pseudorandom puncturing of a code, where we select n indices of a base code C ⊂ 𝔽_q^m in a correlated fashion. Concretely, whereas a random linear code (i.e., a truly random puncturing of the Hadamard code) requires O(n log(m)) random bits to sample, we sample a pseudorandom linear code with O(n + log(m)) random bits by instantiating our pseudorandom puncturing as a length-n random walk on an expander graph on [m]. In particular, we extend a result of Guruswami and Mosheiff (FOCS 2022) and show that a pseudorandom puncturing of a small-bias code satisfies the same local properties as a random linear code with high probability. As a further application of our techniques, we also show that pseudorandom puncturings of Reed-Solomon codes are list-recoverable beyond the Johnson bound, extending a result of Lund and Potukuchi (RANDOM 2020). We do this by instead analyzing properties of codes with large distance, and show that pseudorandom puncturings still work well in this regime.
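The randomness accounting of the puncturing step can be illustrated with a toy sketch (ours, not the paper's instantiation): as a stand-in expander on [m] with m = p prime, take the 3-regular graph on ℤ_p with neighbors x ± 1 and x^{-1}, a standard expander family. The start vertex costs about log p random bits and each further index costs O(1) bits, for O(n + log p) bits in total rather than O(n log p).

```python
import random

def expander_neighbors(v, p):
    """Neighbors of v in a 3-regular graph on Z_p (p prime):
    v - 1, v + 1, and v^{-1} (with 0 mapped to itself)."""
    inv = pow(v, -1, p) if v != 0 else 0
    return [(v - 1) % p, (v + 1) % p, inv]

def pseudorandom_puncturing(p, n, seed=None):
    """Choose n indices of a length-p base code via an expander walk:
    ~log2(p) random bits for the start vertex, then O(1) bits per step,
    i.e. O(n + log p) bits total instead of O(n log p)."""
    rng = random.Random(seed)
    v = rng.randrange(p)
    walk = [v]
    for _ in range(n - 1):
        v = rng.choice(expander_neighbors(v, p))
        walk.append(v)
    return walk
```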
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.90/LIPIcs.ITCS.2024.90.pdf
Derandomization
error-correcting codes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
91:1
91:22
10.4230/LIPIcs.ITCS.2024.91
article
Lower Bounds for Planar Arithmetic Circuits
Ramya, C.
1
https://orcid.org/0000-0003-1328-8229
Shastri, Pratik
1
https://orcid.org/0009-0004-2672-3036
The Institute of Mathematical Sciences (a CI of Homi Bhabha National Institute), Chennai, India
Arithmetic circuits are a natural, well-studied model for computing multivariate polynomials over a field. In this paper, we study planar arithmetic circuits. These are circuits whose underlying graph is planar. In particular, we prove an Ω(n log n) lower bound on the size of planar arithmetic circuits computing explicit bilinear forms on 2n variables. As a consequence, we get an Ω(n log n) lower bound on the size of arithmetic formulas and planar algebraic branching programs computing explicit bilinear forms. This is the first such lower bound on the formula complexity of an explicit bilinear form. In the case of read-once planar circuits, we show Ω(n²) size lower bounds for computing explicit bilinear forms. Furthermore, we prove fine separations between the various planar models of computation mentioned above.
In addition to this, we look at multi-output planar circuits and show an Ω(n^{4/3}) size lower bound for computing an explicit linear transformation on n variables. For a suitable definition of multi-output formulas, we extend the above result to get an Ω(n²/log n) size lower bound. As a consequence, we demonstrate that there exists an n-variate polynomial computable by n^{1 + o(1)}-sized formulas such that any multi-output planar circuit (resp., multi-output formula) simultaneously computing all its first-order partial derivatives requires size Ω(n^{4/3}) (resp., Ω(n²/log n)). This shows that a statement analogous to that of Baur and Strassen [Walter Baur and Volker Strassen, 1983] does not hold in the case of planar circuits and formulas.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.91/LIPIcs.ITCS.2024.91.pdf
Arithmetic circuit complexity
Planar circuits
Bilinear forms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
92:1
92:21
10.4230/LIPIcs.ITCS.2024.92
article
Parity vs. AC0 with Simple Quantum Preprocessing
Slote, Joseph
1
https://orcid.org/0000-0002-6363-7821
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
A recent line of work [Bravyi et al., 2018; Watts et al., 2019; Grier and Schaeffer, 2020; Bravyi et al., 2020; Watts and Parham, 2023] has shown the unconditional advantage of constant-depth quantum computation, or QNC⁰, over NC⁰, AC⁰, and related models of classical computation. Problems exhibiting this advantage include search and sampling tasks related to the parity function, and it is natural to ask whether QNC⁰ can be used to help compute parity itself. Namely, we study AC⁰∘QNC⁰ - a hybrid circuit model where AC⁰ operates on measurement outcomes of a QNC⁰ circuit - and we ask whether Par ∈ AC⁰∘QNC⁰.
We believe the answer is negative. In fact, we conjecture AC⁰∘QNC⁰ cannot even achieve Ω(1) correlation with parity. As evidence for this conjecture, we prove:
- When the QNC⁰ circuit is ancilla-free, this model can achieve only negligible correlation with parity, even when AC⁰ is replaced with any function having LMN-like decay in its Fourier spectrum.
- For the general (non-ancilla-free) case, we show via a connection to nonlocal games that the conjecture holds for any class of postprocessing functions that has approximate degree o(n) and is closed under restrictions. Moreover, this is true even when the QNC⁰ circuit is given arbitrary quantum advice. By known results [Bun et al., 2019], this confirms the conjecture for linear-size AC⁰ circuits.
- Another approach to proving the conjecture is to show a switching lemma for AC⁰∘QNC⁰. Towards this goal, we study the effect of quantum preprocessing on the decision tree complexity of Boolean functions. We find that from the point of view of decision tree complexity, nonlocal channels are no better than randomness: a Boolean function f precomposed with an n-party nonlocal channel is equivalent to a randomized decision tree with worst-case depth at most DT_depth[f].
Taken together, our results suggest that while QNC⁰ is surprisingly powerful for search and sampling tasks, that power is "locked away" in the global correlations of its output, inaccessible to simple classical computation for solving decision problems.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.92/LIPIcs.ITCS.2024.92.pdf
QNC0
AC0
Nonlocal games
k-wise indistinguishability
approximate degree
switching lemma
Fourier concentration
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
93:1
93:15
10.4230/LIPIcs.ITCS.2024.93
article
Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time
Song, Zhao
1
Zhang, Lichen
2
Zhang, Ruizhe
3
Adobe Research, San Jose, CA, USA
Massachusetts Institute of Technology, Cambridge, MA, USA
Simons Institute for the Theory of Computing, Berkeley, CA, USA
We consider the problem of training a multi-layer over-parametrized neural network to minimize the empirical risk induced by a loss function. In the typical setting of over-parametrization, the network width m is much larger than the data dimension d and the number of training samples n (m = poly(n,d)), which induces a prohibitively large weight matrix W ∈ ℝ^{m × m} per layer. Naively, one has to pay O(m²) time to read the weight matrix and evaluate the neural network function in both forward and backward computation. In this work, we show how to reduce the training cost per iteration. Specifically, we propose a framework that uses m² cost only in the initialization phase and achieves a truly subquadratic cost per iteration in terms of m, i.e., m^{2-Ω(1)} per iteration. Our result has implications beyond standard over-parametrization theory, as it can be viewed as designing an efficient data structure on top of a pre-trained large model to further speed up the fine-tuning process, a core procedure in the deployment of large language models (LLMs).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.93/LIPIcs.ITCS.2024.93.pdf
Deep learning theory
Nonconvex optimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
94:1
94:18
10.4230/LIPIcs.ITCS.2024.94
article
Differentially Private Approximate Pattern Matching
Steiner, Teresa Anna
1
https://orcid.org/0000-0003-1078-4075
DTU Compute, Technical University of Denmark, Kongens Lyngby, Denmark
Differential privacy is the de facto privacy standard in data analysis and widely researched in various application areas. On the other hand, analyzing sequences, or strings, is essential to many modern data analysis tasks, and those data often include highly sensitive personal data. While the problem of sanitizing sequential data to protect privacy has received growing attention, there is a surprising lack of theoretical studies of algorithms analyzing sequential data that preserve differential privacy while giving provable guarantees on the accuracy of such an algorithm. The goal of this paper is to initiate such a study.
Specifically, in this paper, we consider the k-approximate pattern matching problem under differential privacy, where the goal is to report or count all substrings of a given string S which have a Hamming distance at most k to a pattern P, or decide whether such a substring exists. In our definition of privacy, individual positions of the string S are protected. To be able to answer queries under differential privacy, we allow some slack on k, i.e. we allow reporting or counting substrings of S with a distance at most (1+γ)k+α to P, for a multiplicative error γ and an additive error α. We analyze which values of α and γ are necessary or sufficient to solve the k-approximate pattern matching problem while satisfying ε-differential privacy. Let n denote the length of S. We give
- an ε-differentially private algorithm with an additive error of O(ε^{-1}log n) and no multiplicative error for the existence variant;
- an ε-differentially private algorithm with an additive error O(ε^{-1}max(k,log n)⋅log n) for the counting variant;
- an ε-differentially private algorithm with an additive error of O(ε^{-1}log n) and multiplicative error O(1) for the reporting variant for a special class of patterns.
The error bounds hold with high probability. All of these algorithms return a witness, that is, if there exists a substring of S with distance at most k to P, then the algorithm returns a substring of S with distance at most (1+γ)k+α to P.
Further, we complement these results by a lower bound, showing that any algorithm for the existence variant which also returns a witness must have an additive error of Ω(ε^{-1}log n) with constant probability.
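For contrast with the bounds above, a naive baseline (our illustration, not one of the paper's algorithms) adds Laplace noise calibrated to the global sensitivity of the exact count: changing a single position of S can flip at most |P| of the distance-at-most-k indicators, so noise of scale |P|/ε suffices for ε-DP, at the price of an additive error of order |P|/ε that the paper's algorithms avoid.

```python
import random

def match_count(S, P, k):
    """Exact count of substrings of S within Hamming distance k of P."""
    m = len(P)
    return sum(
        1
        for i in range(len(S) - m + 1)
        if sum(a != b for a, b in zip(S[i:i + m], P)) <= k
    )

def noisy_match_count(S, P, k, eps, rng=None):
    """eps-DP count via the Laplace mechanism. Changing one position of
    S flips at most |P| indicator values, so the global sensitivity is
    |P| and Laplace noise of scale |P|/eps suffices. A Laplace variable
    is sampled as the difference of two exponentials."""
    rng = rng or random.Random()
    rate = eps / len(P)  # Laplace scale b = |P|/eps corresponds to rate 1/b
    noise = rng.expovariate(rate) - rng.expovariate(rate)
    return match_count(S, P, k) + noise
```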
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.94/LIPIcs.ITCS.2024.94.pdf
Differential privacy
pattern matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
95:1
95:22
10.4230/LIPIcs.ITCS.2024.95
article
Stretching Demi-Bits and Nondeterministic-Secure Pseudorandomness
Tzameret, Iddo
1
https://orcid.org/0000-0002-5558-9911
Zhang, Lu-Ming
2
Department of Computing, Imperial College London, UK
Department of Mathematics, London School of Economics and Political Science, UK
We develop the theory of cryptographic nondeterministic-secure pseudorandomness beyond the point reached by Rudich’s original work [S. Rudich, 1997], and apply it to draw new consequences in average-case complexity and proof complexity. Specifically, we show the following:
Demi-bit stretch: Super-bits and demi-bits are variants of cryptographic pseudorandom generators which are secure against nondeterministic statistical tests [S. Rudich, 1997]. They were introduced to rule out certain approaches to proving strong complexity lower bounds beyond the limitations set out by the Natural Proofs barrier of Razborov and Rudich [A. A. Razborov and S. Rudich, 1997]. Whether demi-bits are stretchable at all had been an open problem since their introduction. We answer this question affirmatively by showing that every demi-bit b:{0,1}ⁿ → {0,1}^{n+1} can be stretched into sublinearly many demi-bits b':{0,1}ⁿ → {0,1}^{n+n^{c}}, for every constant 0 < c < 1.
Average-case hardness: Using work by Santhanam [Rahul Santhanam, 2020], we apply our results to obtain new average-case Kolmogorov complexity results: we show that K^{poly}[n-O(1)] is zero-error average-case hard against NP/poly machines iff K^{poly}[n-o(n)] is, where for a function s(n):ℕ → ℕ, K^{poly}[s(n)] denotes the language of all strings x ∈ {0,1}ⁿ for which there are (fixed) polytime Turing machines of description-length at most s(n) that output x.
Characterising super-bits by nondeterministic unpredictability: In the deterministic setting, Yao [Yao, 1982] proved that super-polynomial hardness of pseudorandom generators is equivalent to ("next-bit") unpredictability. Unpredictability roughly means that given any strict prefix of a random string, it is infeasible to predict the next bit. We initiate the study of unpredictability beyond the deterministic setting (in the cryptographic regime), and characterise the nondeterministic hardness of generators from an unpredictability perspective. Specifically, we propose four stronger notions of unpredictability: NP/poly-unpredictability, coNP/poly-unpredictability, ∩-unpredictability and ∪-unpredictability, and show that super-polynomial nondeterministic hardness of generators lies between ∩-unpredictability and ∪-unpredictability.
Characterising super-bits by nondeterministic hard-core predicates: We introduce a nondeterministic variant of hard-core predicates, called super-core predicates. We show that the existence of a super-bit is equivalent to the existence of a super-core of some non-shrinking function. This serves as an analogue of the equivalence between the existence of a strong pseudorandom generator and the existence of a hard-core of some one-way function [Goldreich and Levin, 1989; Håstad et al., 1999], and provides a first alternative characterisation of super-bits. We also prove that a certain class of functions, which may have hard-cores, cannot possess any super-core.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.95/LIPIcs.ITCS.2024.95.pdf
Pseudorandomness
Cryptography
Natural Proofs
Nondeterminism
Lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
96:1
96:13
10.4230/LIPIcs.ITCS.2024.96
article
Matrix Multiplication in Quadratic Time and Energy? Towards a Fine-Grained Energy-Centric Church-Turing Thesis
Valiant, Gregory
1
https://orcid.org/0000-0002-2211-1073
Department of Computer Science, Stanford University, CA, USA
We describe two algorithms for multiplying n × n matrices using time and energy Õ(n²) under basic models of classical physics. The first algorithm is for multiplying integer-valued matrices, and the second, quite different algorithm, is for Boolean matrix multiplication. We hope this work inspires a deeper consideration of physically plausible/realizable models of computing that might allow for algorithms which improve upon the runtimes and energy usages suggested by the parallel RAM model in which each operation requires one unit of time and one unit of energy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.96/LIPIcs.ITCS.2024.96.pdf
Physics based computing
matrix multiplication
low-energy computing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
97:1
97:22
10.4230/LIPIcs.ITCS.2024.97
article
Quantum Event Learning and Gentle Random Measurements
Watts, Adam Bene
1
https://orcid.org/0000-0002-3289-3339
Bostanci, John
2
https://orcid.org/0000-0001-9666-7114
Institute for Quantum Computing, University of Waterloo, Canada
Computer Science Department, Columbia University, New York, NY, USA
We prove the expected disturbance caused to a quantum system by a sequence of randomly ordered two-outcome projective measurements is upper bounded by the square root of the probability that at least one measurement in the sequence accepts. We call this bound the Gentle Random Measurement Lemma.
We then extend the techniques used to prove this lemma to develop protocols for problems in which we are given sample access to an unknown state ρ and asked to estimate properties of the accepting probabilities Tr[M_i ρ] of a set of measurements {M₁, M₂, … , M_m}. We call these types of problems Quantum Event Learning Problems. In particular, we show randomly ordering projective measurements solves the Quantum OR problem, answering an open question of Aaronson. We also give a Quantum OR protocol which works on non-projective measurements and which outperforms both the random measurement protocol analyzed in this paper and the protocol of Harrow, Lin, and Montanaro. However, this protocol requires a more complicated type of measurement, which we call a Blended Measurement. Given additional guarantees on the set of measurements {M₁, …, M_m}, we show the random and blended measurement Quantum OR protocols developed in this paper can also be used to find a measurement M_i such that Tr[M_i ρ] is large. We call the problem of finding such a measurement Quantum Event Finding. We also show Blended Measurements give a sample-efficient protocol for Quantum Mean Estimation: a problem in which the goal is to estimate the average accepting probability of a set of measurements on an unknown state.
Finally, we consider the Threshold Search Problem described by O'Donnell and Bădescu where, given a set of measurements {M₁, …, M_m} along with sample access to an unknown state ρ satisfying Tr[M_i ρ] ≥ 1/2 for some M_i, the goal is to find a measurement M_j such that Tr[M_j ρ] ≥ 1/2 - ε. By building on our Quantum Event Finding result we show that randomly ordered (or blended) measurements can be used to solve this problem using O(log²(m) / ε²) copies of ρ. This matches the performance of the algorithm given by O'Donnell and Bădescu, but does not require injected noise in the measurements. Consequently, we obtain an algorithm for Shadow Tomography which matches the current best known sample complexity (i.e. requires Õ(log²(m)log(d)/ε⁴) samples). This algorithm does not require injected noise in the quantum measurements, but does require measurements to be made in a random order, and so is no longer online.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.97/LIPIcs.ITCS.2024.97.pdf
Event learning
gentle measurements
random measurements
quantum OR
threshold search
shadow tomography
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
98:1
98:23
10.4230/LIPIcs.ITCS.2024.98
article
Maximizing Miner Revenue in Transaction Fee Mechanism Design
Wu, Ke
1
https://orcid.org/0000-0002-2756-8750
Shi, Elaine
2
Chung, Hao
3
CSD Department, Carnegie Mellon University, Pittsburgh, PA, USA
ECE and CSD Department, Carnegie Mellon University, Pittsburgh, PA, USA
ECE Department, Carnegie Mellon University, Pittsburgh, PA, USA
Transaction fee mechanism design is a new decentralized mechanism design problem where users bid for space on the blockchain. Several recent works showed that transaction fee mechanism design fundamentally departs from classical mechanism design. They then systematically explored the mathematical landscape of this new decentralized mechanism design problem in two settings: in the plain setting where no cryptography is employed, and in a cryptography-assisted setting where the rules of the mechanism are enforced by a multi-party computation protocol. Unfortunately, in both settings, prior works showed that if we want the mechanism to incentivize honest behavior for both users and miners (possibly colluding with users), then the miner revenue has to be zero. Although adopting a relaxed, approximate notion of incentive compatibility gets around this zero miner-revenue limitation, the scaling of the miner revenue is nonetheless poor.
In this paper, we show that if we make a mild reasonable-world assumption that there are sufficiently many honest users, we can circumvent the known limitations on miner revenue, and design auctions that generate asymptotically optimal miner revenue. We also systematically explore the mathematical landscape of transaction fee mechanism design under the new reasonable-world assumptions, and demonstrate how such assumptions can alter the feasibility and infeasibility landscape.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.98/LIPIcs.ITCS.2024.98.pdf
Blockchain
Mechanism Design
Transaction Fee
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
99:1
99:15
10.4230/LIPIcs.ITCS.2024.99
article
Randomized vs. Deterministic Separation in Time-Space Tradeoffs of Multi-Output Functions
Yu, Huacheng
1
https://orcid.org/0000-0003-1450-1896
Zhan, Wei
2
https://orcid.org/0000-0003-0297-205X
Princeton University, NJ, USA
University of Chicago, IL, USA
We prove the first polynomial separation between randomized and deterministic time-space tradeoffs of multi-output functions. In particular, we present a total function that, on an input of n elements in [n], outputs O(n) elements, such that:
- There exists a randomized oblivious algorithm with space O(log n), time O(n log n) and one-way access to randomness, that computes the function with probability 1-O(1/n);
- Any deterministic oblivious branching program with space S and time T that computes the function must satisfy T²S ≥ Ω(n^{2.5}/log n). This implies that logspace randomized algorithms for multi-output functions cannot be black-box derandomized without an Ω̃(n^{1/4}) overhead in time.
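The Ω̃(n^{1/4}) overhead claimed in the second item follows from a short calculation (our arithmetic, using the two bounds as stated):

```latex
T^2 S \;\ge\; \Omega\!\left(\frac{n^{2.5}}{\log n}\right)
\quad\text{with}\quad S = \operatorname{polylog}(n)
\;\Longrightarrow\;
T \;\ge\; \frac{n^{1.25}}{\operatorname{polylog}(n)}.
```

Since the randomized algorithm runs in time O(n log n), the multiplicative slowdown of any black-box derandomization is at least n^{1.25}/(n · polylog(n)) = Ω̃(n^{1/4}).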
Since all previous polynomial time-space tradeoffs for multi-output functions were proved via the Borodin-Cook method, a probabilistic method that inherently gives the same lower bound for randomized and deterministic branching programs, our lower bound proof is intrinsically different from prior works.
We also examine other natural candidates for proving such separations, and show that any polynomial separation for these problems would resolve the long-standing open problem of proving n^{1+Ω(1)} time lower bound for decision problems with polylog(n) space.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.99/LIPIcs.ITCS.2024.99.pdf
Time-space tradeoffs
Randomness
Borodin-Cook method
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
100:1
100:11
10.4230/LIPIcs.ITCS.2024.100
article
Sampling, Flowers and Communication
Yu, Huacheng
1
https://orcid.org/0000-0003-1450-1896
Zhan, Wei
2
https://orcid.org/0000-0003-0297-205X
Princeton University, NJ, USA
University of Chicago, IL, USA
Given a distribution over [n]ⁿ such that any k coordinates need k/log^{O(1)} n bits of communication to sample, we prove that any map that samples this distribution from uniform cells requires locality Ω(log(n/k)/log log(n/k)). In particular, we show that for any constant δ > 0, there exists ε = 2^{-Ω(n^{1-δ})} such that Ω(log n/log log n) non-adaptive cell probes on uniform cells are required to:
- Sample a uniformly random permutation on n elements with error 1-ε. This provides an exponential improvement on the Ω(log log n) cell probe lower bound by Viola.
- Sample an n-vector with each element independently drawn from a random n^{1-δ}-vector, with error 1-ε. This provides the first adaptive vs non-adaptive cell probe separation for sampling.
The major technical component in our proof is a new combinatorial theorem about flowers with small kernels, i.e., collections of sets in which few elements appear more than once. We show that in a family of n sets, each of size O(log n/log log n), there must be k = poly(n) sets in which at most k/log^{O(1)} n elements appear more than once.
To show the lower bound on sampling permutation, we also prove a new Ω(k) communication lower bound on sampling uniformly distributed disjoint subsets of [n] of size k, with error 1-2^{-Ω(k²/n)}. This result unifies and subsumes the lower bound for k = Θ(√n) by Ambainis et al., and the lower bound for k = Θ(n) by Göös and Watson.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.100/LIPIcs.ITCS.2024.100.pdf
Flower
Sampling
Cell probe
Communication complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
101:1
101:23
10.4230/LIPIcs.ITCS.2024.101
article
Quantum Money from Abelian Group Actions
Zhandry, Mark
1
https://orcid.org/0000-0001-7071-6272
NTT Research, Sunnyvale, CA, USA
We give a construction of public key quantum money, and even a strengthened version called quantum lightning, from abelian group actions, which can in turn be constructed from suitable isogenies over elliptic curves. We prove security in the generic group model for group actions under a plausible computational assumption, and develop a general toolkit for proving quantum security in this model. Along the way, we explore knowledge assumptions and algebraic group actions in the quantum setting, finding significant limitations of these assumptions/models compared to generic group actions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.101/LIPIcs.ITCS.2024.101.pdf
Quantum Money
Cryptographic Group Actions
Isogenies
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
102:1
102:22
10.4230/LIPIcs.ITCS.2024.102
article
The Space-Time Cost of Purifying Quantum Computations
Zhandry, Mark
1
https://orcid.org/0000-0001-7071-6272
NTT Research, Sunnyvale, CA, USA
General quantum computation consists of unitary operations and also measurements. It is well known that intermediate quantum measurements can be deferred to the end of the computation, resulting in an equivalent purely unitary computation. While time efficient, this transformation blows up the space to linear in the running time, which could be super-polynomial for low-space algorithms. Fefferman and Remscrim (STOC'21) and Girish, Raz and Zhan (ICALP'21) show different transformations which are space efficient, but blow up the running time by a factor that is exponential in the space. This leaves the case of algorithms with small-but-super-logarithmic space as incurring a large blowup in either time or space complexity. We show that such a blowup is likely inherent, demonstrating that any "black-box" transformation which removes intermediate measurements must significantly blow up either space or time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.102/LIPIcs.ITCS.2024.102.pdf
Quantum computation
intermediate measurements
time-space trade-offs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2024-01-24
287
103:1
103:24
10.4230/LIPIcs.ITCS.2024.103
article
Advanced Composition Theorems for Differential Obliviousness
Zhou, Mingxun
1
Zhao, Mengshi
2
Chan, T-H. Hubert
2
Shi, Elaine
1
Carnegie Mellon University, Pittsburgh, PA, USA
The University of Hong Kong, Hong Kong SAR, China
Differential obliviousness (DO) is a privacy notion which mandates that the access patterns of a program satisfy differential privacy. Earlier works have shown that in numerous applications, differential obliviousness allows us to circumvent fundamental barriers pertaining to fully oblivious algorithms, resulting in asymptotic (and sometimes even polynomial) performance improvements. Although DO has been applied to various contexts, including the design of algorithms, data structures, and protocols, its compositional properties were not explored until the recent work of Zhou et al. (Eurocrypt'23). Specifically, Zhou et al. showed that the original DO notion is not composable. They then proposed a refinement of DO called neighbor-preserving differential obliviousness (NPDO), and proved a basic composition theorem for NPDO.
In Zhou et al.’s basic composition theorem for NPDO, the privacy loss is linear in k for k-fold composition. In comparison, for standard differential privacy, we can enjoy roughly √k loss for k-fold composition by applying the well-known advanced composition theorem given an appropriate parameter range. Therefore, a natural question left open by their work is whether we can also prove an analogous advanced composition for NPDO.
In this paper, we answer this question affirmatively. As a key step in proving an advanced composition theorem for NPDO, we define a more operational notion called symmetric NPDO, which we prove to be equivalent to NPDO. Using symmetric NPDO as a stepping stone, we then show how to generalize NPDO to more general notions of divergence, resulting in Rényi-NPDO, zero-concentrated-NPDO, Gaussian-NPDO, and g-NPDO notions. We also prove composition theorems for these generalized notions of NPDO.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol287-itcs2024/LIPIcs.ITCS.2024.103/LIPIcs.ITCS.2024.103.pdf
Differential Privacy
Oblivious Algorithms