LIPIcs, Volume 325

16th Innovations in Theoretical Computer Science Conference (ITCS 2025)




Event

ITCS 2025, January 7-10, 2025, Columbia University, New York, NY, USA

Editor

Raghu Meka
  • University of California, Los Angeles, CA, USA

Documents
Document
Complete Volume
LIPIcs, Volume 325, ITCS 2025, Complete Volume

Authors: Raghu Meka


Abstract
LIPIcs, Volume 325, ITCS 2025, Complete Volume

Cite as

16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 1-1922, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@Proceedings{meka:LIPIcs.ITCS.2025,
  title =	{{LIPIcs, Volume 325, ITCS 2025, Complete Volume}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{1--1922},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025},
  URN =		{urn:nbn:de:0030-drops-229037},
  doi =		{10.4230/LIPIcs.ITCS.2025},
  annote =	{Keywords: LIPIcs, Volume 325, ITCS 2025, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Raghu Meka


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 0:i-0:xxii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{meka:LIPIcs.ITCS.2025.0,
  author =	{Meka, Raghu},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{0:i--0:xxii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.0},
  URN =		{urn:nbn:de:0030-drops-229028},
  doi =		{10.4230/LIPIcs.ITCS.2025.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Simple Is COOL: Graded Dispersal and Its Applications for Byzantine Fault Tolerance

Authors: Ittai Abraham, Gilad Asharov, and Anirudh Chandramouli


Abstract
The COOL protocol of Chen (DISC'21) is a major advance that enables perfect security for various tasks (in particular, Byzantine Agreement in Synchrony and Reliable Broadcast in Asynchrony). For an input of size L bits, its communication complexity is O(nL+n² log n), which is optimal up to a log n factor. Unfortunately, Chen’s analysis is rather intricate and complex. Our main contribution is a simple analysis of a new variant of COOL based on elementary counting arguments. Our main consistency proof takes less than two pages (instead of over 20 pages), making the COOL protocol much more accessible. In addition, the simple analysis allows us to improve the protocol by reducing one round of communication and reducing the communication complexity by 40%. Furthermore, we suggest a new way of extracting the core properties of COOL as a new primitive, which we call Graded Dispersal. We show how Graded Dispersal can then be used to obtain efficient solutions for Byzantine Agreement, Verifiable Information Dispersal, Gradecast, and Reliable Broadcast (in both Synchrony and Asynchrony, where appropriate). Our improvement of COOL directly applies here, and we improve the state-of-the-art in all those primitives by saving at least one round and 40% of the communication.

Cite as

Ittai Abraham, Gilad Asharov, and Anirudh Chandramouli. Simple Is COOL: Graded Dispersal and Its Applications for Byzantine Fault Tolerance. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{abraham_et_al:LIPIcs.ITCS.2025.1,
  author =	{Abraham, Ittai and Asharov, Gilad and Chandramouli, Anirudh},
  title =	{{Simple Is COOL: Graded Dispersal and Its Applications for Byzantine Fault Tolerance}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.1},
  URN =		{urn:nbn:de:0030-drops-226295},
  doi =		{10.4230/LIPIcs.ITCS.2025.1},
  annote =	{Keywords: Byzantine Agreement, Broadcast}
}
Document
Improved Lower Bounds for 3-Query Matching Vector Codes

Authors: Divesh Aggarwal, Pranjal Dutta, Zeyong Li, Maciej Obremski, and Sidhant Saraogi


Abstract
A Matching Vector (MV) family modulo a positive integer m ≥ 2 is a pair of ordered lists U = (u_1, ⋯, u_K) and V = (v_1, ⋯, v_K) where u_i, v_j ∈ ℤ_m^n with the following property: for any i ∈ [K], the inner product ⟨u_i, v_i⟩ = 0 mod m, and for any i ≠ j, ⟨u_i, v_j⟩ ≠ 0 mod m. An MV family is called r-restricted if the inner products ⟨u_i, v_j⟩, over all i,j, take at most r different values. The r-restricted MV families are extremely important since the only known constructions of constant-query subexponential locally decodable codes (LDCs) are based on them. Such LDCs constructed via matching vector families are called matching vector codes. Let MV(m,n) (respectively MV(m, n, r)) denote the largest K such that there exists an MV family (respectively r-restricted MV family) of size K in ℤ_m^n. Such an MV family can be transformed in a black-box manner to a good r-query locally decodable code taking messages of length K to codewords of length N = m^n. For small prime m, an almost tight bound MV(m,n) ≤ O(m^{n/2}) was first shown by Dvir, Gopalan, Yekhanin (FOCS'10, SICOMP'11), while for general m, the same paper established an upper bound of O(m^{n-1+o_m(1)}), with o_m(1) denoting a function that goes to zero as m grows. For any arbitrary constant r ≥ 3 and composite m, the best upper bound to date on MV(m,n,r) is O(m^{n/2}), due to Bhowmick, Dvir and Lovett (STOC'13, SICOMP'14). In a breakthrough work, Alrabiah, Guruswami, Kothari and Manohar (STOC'23) implicitly improve this bound for 3-restricted families to MV(m, n, 3) ≤ O(m^{n/3}). In this work, we present an upper bound for r = 3 where MV(m,n,3) ≤ m^{n/6 +O(log n)}, and as a result, any 3-query matching vector code must have codeword length of N ≥ K^{6-o(1)}.
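To make the MV definition concrete, the following short checker verifies the diagonal-zero / off-diagonal-nonzero property for a tiny toy family in ℤ_2² (an illustrative example, not a construction from the paper):

```python
def is_mv_family(U, V, m):
    """Check the Matching Vector property over Z_m:
    <u_i, v_i> = 0 (mod m) and <u_i, v_j> != 0 (mod m) for all i != j."""
    K = len(U)
    for i in range(K):
        for j in range(K):
            ip = sum(a * b for a, b in zip(U[i], V[j])) % m
            if i == j and ip != 0:
                return False
            if i != j and ip == 0:
                return False
    return True

# A toy MV family of size K = 2 in Z_2^2.
U = [(1, 0), (0, 1)]
V = [(0, 1), (1, 0)]
print(is_mv_family(U, V, 2))  # True
```

Here ⟨u_1, v_1⟩ = ⟨u_2, v_2⟩ = 0 mod 2 while both cross inner products equal 1, so the property holds.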

Cite as

Divesh Aggarwal, Pranjal Dutta, Zeyong Li, Maciej Obremski, and Sidhant Saraogi. Improved Lower Bounds for 3-Query Matching Vector Codes. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 2:1-2:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{aggarwal_et_al:LIPIcs.ITCS.2025.2,
  author =	{Aggarwal, Divesh and Dutta, Pranjal and Li, Zeyong and Obremski, Maciej and Saraogi, Sidhant},
  title =	{{Improved Lower Bounds for 3-Query Matching Vector Codes}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{2:1--2:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.2},
  URN =		{urn:nbn:de:0030-drops-226308},
  doi =		{10.4230/LIPIcs.ITCS.2025.2},
  annote =	{Keywords: Locally Decodable Codes, Matching Vector Families}
}
Document
Sparsity Lower Bounds for Probabilistic Polynomials

Authors: Josh Alman, Arkadev Chattopadhyay, and Ryan Williams


Abstract
Probabilistic polynomials over commutative rings offer a powerful way of representing Boolean functions. Although many degree lower bounds for such representations have been proved, sparsity lower bounds (counting the number of monomials in the polynomials) have not been so common. Sparsity upper bounds are of great interest for potential algorithmic applications, since sparse probabilistic polynomials are the key technical tool behind the best known algorithms for many core problems, including dense All-Pairs Shortest Paths, and the existence of sparser polynomials would lead to breakthrough algorithms for these problems. In this paper, we prove several strong lower bounds on the sparsity of probabilistic and approximate polynomials computing Boolean functions when 0 means "false". Our main result is that the AND of n ORs of c log n variables requires probabilistic polynomials (over any commutative ring which isn't too large) of sparsity n^Ω(log c) to achieve even 1/4 error. The lower bound is tight, and it rules out a large class of polynomial-method approaches for refuting the APSP and SETH conjectures via matrix multiplication. Our other results include: - Every probabilistic polynomial (over a commutative ring) for the disjointness function on two n-bit vectors requires exponential sparsity in order to achieve exponentially low error. - A generic lower bound that any function requiring probabilistic polynomials of degree d must require probabilistic polynomials of sparsity Ω(2^d). - Building on earlier work, we consider the probabilistic rank of Boolean functions which generalizes the notion of sparsity for probabilistic polynomials, and prove separations of probabilistic rank and probabilistic sparsity. Some of our results and lemmas are basis independent. 
For example, over any basis {a,b} for true and false where a ≠ b, and any commutative ring R, the AND function on n variables has no probabilistic R-polynomial with 2^o(n) sparsity, o(n) degree, and 1/2^o(n) error simultaneously. This AND lower bound is our main technical lemma used in the above lower bounds.
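As background on what a probabilistic polynomial is (a standard textbook construction, not one of the paper's results): over F_2, a random parity ⟨r, x⟩ computes OR(x) with one-sided error 1/2, and combining t independent parities as 1 - Π_k (1 - ⟨r_k, x⟩) drives the error to 2^{-t} at the cost of higher degree and sparsity. A small empirical check:

```python
import itertools
import random

random.seed(0)

def or_poly_eval(x, rs):
    # Evaluate 1 - prod_k (1 - <r_k, x> mod 2): the OR of t random parities.
    prod = 1
    for r in rs:
        parity = sum(ri * xi for ri, xi in zip(r, x)) % 2
        prod *= (1 - parity)
    return 1 - prod

n, t, trials = 4, 8, 200
worst_err = 0.0
for x in itertools.product([0, 1], repeat=n):
    target = 1 if any(x) else 0
    errs = 0
    for _ in range(trials):
        rs = [[random.randrange(2) for _ in range(n)] for _ in range(t)]
        if or_poly_eval(x, rs) != target:
            errs += 1
    worst_err = max(worst_err, errs / trials)
print(worst_err)  # small: each nonzero x errs with probability 2^-t = 1/256
```

On the all-zero input every parity vanishes, so the output is always correct; on a nonzero input all t parities must vanish simultaneously for an error, which happens with probability 2^{-t}.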

Cite as

Josh Alman, Arkadev Chattopadhyay, and Ryan Williams. Sparsity Lower Bounds for Probabilistic Polynomials. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 3:1-3:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{alman_et_al:LIPIcs.ITCS.2025.3,
  author =	{Alman, Josh and Chattopadhyay, Arkadev and Williams, Ryan},
  title =	{{Sparsity Lower Bounds for Probabilistic Polynomials}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{3:1--3:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.3},
  URN =		{urn:nbn:de:0030-drops-226316},
  doi =		{10.4230/LIPIcs.ITCS.2025.3},
  annote =	{Keywords: Probabilistic Polynomials, Sparsity, Orthogonal Vectors, Probabilistic Rank}
}
Document
A Bicriterion Concentration Inequality and Prophet Inequalities for k-Fold Matroid Unions

Authors: Noga Alon, Nick Gravin, Tristan Pollner, Aviad Rubinstein, Hongao Wang, S. Matthew Weinberg, and Qianfan Zhang


Abstract
We investigate prophet inequalities with competitive ratios approaching 1, seeking to generalize k-uniform matroids. We first show that large girth does not suffice: for all k, there exists a matroid of girth ≥ k and a prophet inequality instance on that matroid whose optimal competitive ratio is 1/2. Next, we show k-fold matroid unions do suffice: we provide a prophet inequality with competitive ratio 1-O(√{(log k)/k}) for any k-fold matroid union. Our prophet inequality follows from an online contention resolution scheme. The key technical ingredient in our online contention resolution scheme is a novel bicriterion concentration inequality for arbitrary monotone 1-Lipschitz functions over independent items which may be of independent interest. Applied to our particular setting, our bicriterion concentration inequality yields "Chernoff-strength" concentration for a 1-Lipschitz function that is not (approximately) self-bounding.
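For readers new to the area, the classical single-item baseline (background only, not this paper's k-fold matroid union result) is the threshold rule: with independent values from known distributions, accepting the first value exceeding T = E[max]/2 earns at least half the prophet's expected maximum. A Monte Carlo sketch with hypothetical uniform distributions:

```python
import random

random.seed(1)

scales = [1.0, 2.0, 0.5, 3.0, 1.5]  # X_i ~ Uniform[0, scale_i], independent

def sample():
    return [random.uniform(0, s) for s in scales]

# Estimate E[max] by Monte Carlo, then run the threshold rule T = E[max] / 2.
N = 50000
e_max = sum(max(sample()) for _ in range(N)) / N
T = e_max / 2

gambler = 0.0
for _ in range(N):
    xs = sample()
    gambler += next((x for x in xs if x >= T), 0.0)  # take first value >= T
gambler /= N

print(gambler / e_max)  # competitive ratio, at least 0.5 by the classical bound
```

The paper's question is how to push this 1/2-style guarantee toward ratio 1 for richer feasibility constraints than "pick one item".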

Cite as

Noga Alon, Nick Gravin, Tristan Pollner, Aviad Rubinstein, Hongao Wang, S. Matthew Weinberg, and Qianfan Zhang. A Bicriterion Concentration Inequality and Prophet Inequalities for k-Fold Matroid Unions. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 4:1-4:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{alon_et_al:LIPIcs.ITCS.2025.4,
  author =	{Alon, Noga and Gravin, Nick and Pollner, Tristan and Rubinstein, Aviad and Wang, Hongao and Weinberg, S. Matthew and Zhang, Qianfan},
  title =	{{A Bicriterion Concentration Inequality and Prophet Inequalities for k-Fold Matroid Unions}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{4:1--4:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.4},
  URN =		{urn:nbn:de:0030-drops-226329},
  doi =		{10.4230/LIPIcs.ITCS.2025.4},
  annote =	{Keywords: Prophet Inequalities, Online Contention Resolution Schemes, Concentration Inequalities}
}
Document
Edge-Minimum Walk of Modular Length in Polynomial Time

Authors: Antoine Amarilli, Benoît Groz, and Nicole Wein


Abstract
We study the problem of finding, in a directed graph, an st-walk of length r mod q which is edge-minimum, i.e., uses the smallest number of distinct edges. Despite the vast literature on paths and cycles with modularity constraints, to the best of our knowledge we are the first to study this problem. Our main result is a polynomial-time algorithm that solves this task when r and q are constants. We also show how our proof technique gives an algorithm to solve a generalization of the well-known Directed Steiner Network problem, in which connections between endpoint pairs are required to satisfy modularity constraints on their length. Our algorithm is polynomial when the number of endpoint pairs and the modularity constraints on the pairs are constants. In this version of the article, proofs and examples are omitted because of space constraints. Detailed proofs are available in the full version [Antoine Amarilli et al., 2024].
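The modularity constraint alone (ignoring edge-minimality, which is the paper's hard part) is handled by a textbook product construction: breadth-first search over states (vertex, length mod q) decides whether an st-walk of length ≡ r (mod q) exists. A sketch on a toy graph:

```python
from collections import deque

def has_walk_mod(adj, s, t, r, q):
    """Decide whether a directed s-t walk of length ≡ r (mod q) exists,
    via BFS on the product graph with states (vertex, length mod q)."""
    seen = {(s, 0)}
    queue = deque([(s, 0)])
    while queue:
        v, k = queue.popleft()
        if v == t and k == r % q:
            return True
        for w in adj.get(v, []):
            state = (w, (k + 1) % q)
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return False

# Toy graph: a triangle 0->1->2->0 plus an edge 2->3.
adj = {0: [1], 1: [2], 2: [0, 3]}
print(has_walk_mod(adj, 0, 3, 0, 3))  # walks 0->3 have lengths 3, 6, ... -> True
print(has_walk_mod(adj, 0, 3, 1, 3))  # no walk of length = 1 (mod 3) -> False
```

This decision version runs in O(q·(|V|+|E|)) time; the paper's contribution is minimizing the number of distinct edges used, which this sketch does not address.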

Cite as

Antoine Amarilli, Benoît Groz, and Nicole Wein. Edge-Minimum Walk of Modular Length in Polynomial Time. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 5:1-5:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{amarilli_et_al:LIPIcs.ITCS.2025.5,
  author =	{Amarilli, Antoine and Groz, Beno\^{i}t and Wein, Nicole},
  title =	{{Edge-Minimum Walk of Modular Length in Polynomial Time}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{5:1--5:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.5},
  URN =		{urn:nbn:de:0030-drops-226330},
  doi =		{10.4230/LIPIcs.ITCS.2025.5},
  annote =	{Keywords: Directed Steiner Network, Modularity}
}
Document
Doubly Sub-Linear Interactive Proofs of Proximity

Authors: Noga Amir, Oded Goldreich, and Guy N. Rothblum


Abstract
We initiate a study of doubly-efficient interactive proofs of proximity, focusing on properties that can be tested with query-complexity that is significantly sub-linear, and seeking interactive proofs of proximity in which (1) the query-complexity of verification is significantly smaller than the query-complexity of testing, and (2) the query-complexity of the honest prover strategy is not much larger than the query-complexity of testing. We call such proof systems doubly-sublinear IPPs (dsIPPs). We present a few doubly-sublinear IPPs. A salient feature of these IPPs is that the honest prover does not employ an optimal strategy (i.e., a strategy that maximizes the verifier’s acceptance probability). In particular, the honest prover in our IPP for sets recognizable by constant-width read-once oblivious branching programs uses a distance-approximator for such sets.

Cite as

Noga Amir, Oded Goldreich, and Guy N. Rothblum. Doubly Sub-Linear Interactive Proofs of Proximity. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 6:1-6:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{amir_et_al:LIPIcs.ITCS.2025.6,
  author =	{Amir, Noga and Goldreich, Oded and Rothblum, Guy N.},
  title =	{{Doubly Sub-Linear Interactive Proofs of Proximity}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{6:1--6:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.6},
  URN =		{urn:nbn:de:0030-drops-226345},
  doi =		{10.4230/LIPIcs.ITCS.2025.6},
  annote =	{Keywords: Interactive Proof Systems, Interactive Proofs of Proximity, Query Complexity, Read Once Branching Programs, Sub-linear}
}
Document
Simultaneous Haar Indistinguishability with Applications to Unclonable Cryptography

Authors: Prabhanjan Ananth, Fatih Kaleoglu, and Henry Yuen


Abstract
We study a novel question about nonlocal quantum state discrimination: how well can non-communicating (but entangled) players distinguish between different distributions over quantum states? We call this task simultaneous state indistinguishability. Our main technical result is to show that the players cannot distinguish between each player receiving independently-chosen Haar random states versus all players receiving the same Haar random state. We show that this question has implications for unclonable cryptography, which leverages the no-cloning principle to build cryptographic primitives that are classically impossible to achieve. Understanding the feasibility of unclonable encryption, one of the key unclonable primitives, satisfying indistinguishability security in the plain model has been a major open question in the area. So far, the existing constructions of unclonable encryption are either in the quantum random oracle model or are based on new conjectures. We leverage our main result to present the first construction of unclonable encryption satisfying indistinguishability security, with quantum decryption keys, in the plain model. We also show other implications for single-decryptor encryption and leakage-resilient secret sharing. These applications present evidence that simultaneous Haar indistinguishability could be useful in quantum cryptography.

Cite as

Prabhanjan Ananth, Fatih Kaleoglu, and Henry Yuen. Simultaneous Haar Indistinguishability with Applications to Unclonable Cryptography. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 7:1-7:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{ananth_et_al:LIPIcs.ITCS.2025.7,
  author =	{Ananth, Prabhanjan and Kaleoglu, Fatih and Yuen, Henry},
  title =	{{Simultaneous Haar Indistinguishability with Applications to Unclonable Cryptography}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{7:1--7:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.7},
  URN =		{urn:nbn:de:0030-drops-226352},
  doi =		{10.4230/LIPIcs.ITCS.2025.7},
  annote =	{Keywords: Quantum, Haar, unclonable encryption}
}
Document
Single-Round Proofs of Quantumness from Knowledge Assumptions

Authors: Petia Arabadjieva, Alexandru Gheorghiu, Victor Gitton, and Tony Metger


Abstract
A proof of quantumness is an efficiently verifiable interactive test that an efficient quantum computer can pass, but all efficient classical computers cannot (under some cryptographic assumption). Such protocols play a crucial role in the certification of quantum devices. Existing single-round protocols based solely on a cryptographic hardness assumption (like asking the quantum computer to factor a large number) require large quantum circuits, whereas multi-round ones use smaller circuits but require experimentally challenging mid-circuit measurements. In this work, we construct efficient single-round proofs of quantumness based on existing knowledge assumptions. While knowledge assumptions have not been previously considered in this context, we show that they provide a natural basis for separating classical and quantum computation. Our work also helps in understanding the interplay between black-box/white-box reductions and cryptographic assumptions in the design of proofs of quantumness. Specifically, we show that multi-round protocols based on Decisional Diffie-Hellman (DDH) or Learning With Errors (LWE) can be "compiled" into single-round protocols using a knowledge-of-exponent assumption [Bitansky et al., 2012] or knowledge-of-lattice-point assumption [Loftus et al., 2012], respectively. We also prove an adaptive hardcore-bit statement for a family of claw-free functions based on DDH, which might be of independent interest.

Cite as

Petia Arabadjieva, Alexandru Gheorghiu, Victor Gitton, and Tony Metger. Single-Round Proofs of Quantumness from Knowledge Assumptions. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 8:1-8:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{arabadjieva_et_al:LIPIcs.ITCS.2025.8,
  author =	{Arabadjieva, Petia and Gheorghiu, Alexandru and Gitton, Victor and Metger, Tony},
  title =	{{Single-Round Proofs of Quantumness from Knowledge Assumptions}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.8},
  URN =		{urn:nbn:de:0030-drops-226364},
  doi =		{10.4230/LIPIcs.ITCS.2025.8},
  annote =	{Keywords: Proofs of quantumness, Knowledge assumptions, Learning with errors, Decisional Diffie-Hellman}
}
Document
Extended Abstract
The Local Hamiltonian Problem for Quasi-Quantum States: A Toy Model for the Quantum PCP Conjecture (Extended Abstract)

Authors: Itai Arad and Miklos Santha


Abstract
In this work we define a new classical constraint satisfaction problem that shares many of the properties of the quantum local Hamiltonian problem, distinguishing it from the usual classical k-SAT problem. The problem consists of minimizing the number of violated local constraints over a restricted set of distributions of assignments. We show that these distributions can be mapped 1-to-1 to a superset of the quantum states, which we call k-local quasi-quantum states. Nevertheless, we claim that our optimization problem is essentially classical, by proving that it is NP-complete. Interestingly, the optimal distribution shares many of the properties of quantum states. In particular, it is not determined straightforwardly by its local marginals, and consequently, it can be used as a classical toy model to study several aspects of Hamiltonian complexity that differ from their classical counterparts. These include the complexity of 1D systems (which is in P for classical CSPs, but is QMA-hard for quantum systems), and the lack of an easy search-to-decision reduction. Finally, we believe that our model can be used to gain insights into the quantum PCP conjecture. Indeed, while we have shown that approximating the minimal number of unsatisfiable constraints to within a Θ(1) factor is NP-hard, it is not clear if the problem remains hard if we want to approximate the minimal fraction of unsatisfiable constraints to within a Θ(1) factor; as in the quantum PCP conjecture, naive quantization of the classical proofs does not seem to work.

Cite as

Itai Arad and Miklos Santha. The Local Hamiltonian Problem for Quasi-Quantum States: A Toy Model for the Quantum PCP Conjecture (Extended Abstract). In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, p. 9:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{arad_et_al:LIPIcs.ITCS.2025.9,
  author =	{Arad, Itai and Santha, Miklos},
  title =	{{The Local Hamiltonian Problem for Quasi-Quantum States: A Toy Model for the Quantum PCP Conjecture}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{9:1--9:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.9},
  URN =		{urn:nbn:de:0030-drops-226377},
  doi =		{10.4230/LIPIcs.ITCS.2025.9},
  annote =	{Keywords: Quantum Complexity, Probability Checkable Proofs, Local Hamiltonians}
}
Document
Algorithmic Collusion Without Threats

Authors: Eshwar Ram Arunachaleswaran, Natalie Collina, Sampath Kannan, Aaron Roth, and Juba Ziani


Abstract
There has been substantial recent concern that automated pricing algorithms might learn to "collude." Supra-competitive prices can emerge as a Nash equilibrium of repeated pricing games, in which sellers play strategies which threaten to punish their competitors if they ever "defect" from a set of supra-competitive prices, and these strategies can be automatically learned. But threats are anti-competitive on their face. In fact, a standard economic intuition is that supra-competitive prices emerge from either the use of threats, or a failure of one party to correctly optimize their payoff. Is this intuition correct? Would explicitly preventing threats in algorithmic decision-making prevent supra-competitive prices when sellers are optimizing for their own revenue? No. We show that supra-competitive prices can robustly emerge even when both players are using algorithms which do not explicitly encode threats, and which optimize for their own revenue. Since deploying an algorithm is a form of commitment, we study sequential Bertrand pricing games (and a continuous variant) in which a first mover deploys an algorithm and then a second mover optimizes within the resulting environment. We show that if the first mover deploys any algorithm with a no-regret guarantee, and the second mover then even approximately optimizes within this now static environment, monopoly-like prices arise. The result holds for any no-regret learning algorithm deployed by the first mover and for any pricing policy of the second mover that obtains profit at least as high as random pricing would, and hence the result applies even when the second mover is optimizing only within a space of non-responsive pricing distributions which are incapable of encoding threats. In fact, there exists a pair of strategies, neither of which explicitly encodes threats, that forms a Nash equilibrium of the simultaneous pricing game in algorithm space and leads to near-monopoly prices.
This suggests that the definition of "algorithmic collusion" may need to be expanded to include strategies without explicitly encoded threats.
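The abstract's key hypothesis is only that the first mover's algorithm satisfies a no-regret guarantee. As background, a minimal Hedge (multiplicative weights) implementation with an empirical regret check; this is the classic full-information algorithm, not the paper's pricing model, and the payoff sequence is an arbitrary illustrative one:

```python
import math
import random

random.seed(0)

K, T, eta = 5, 2000, 0.05
weights = [1.0] * K

payoff_hist = []
action_hist = []
for t in range(T):
    total = sum(weights)
    probs = [w / total for w in weights]
    # Sample an action from the current Hedge distribution.
    action = random.choices(range(K), weights=probs)[0]
    # An arbitrary slowly-varying payoff in [0, 1] for every arm this round.
    payoffs = [0.5 + 0.5 * math.sin(0.01 * t + k) for k in range(K)]
    payoff_hist.append(payoffs)
    action_hist.append(action)
    # Full-information Hedge update: exponentially reweight by payoff.
    weights = [w * math.exp(eta * g) for w, g in zip(weights, payoffs)]

earned = sum(payoff_hist[t][action_hist[t]] for t in range(T))
best_fixed = max(sum(p[k] for p in payoff_hist) for k in range(K))
print((best_fixed - earned) / T)  # average regret, vanishing as T grows
```

Hedge guarantees regret at most (ln K)/η + ηT/8 against any payoff sequence, which is what "any algorithm with a no-regret guarantee" quantifies over in the result above.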

Cite as

Eshwar Ram Arunachaleswaran, Natalie Collina, Sampath Kannan, Aaron Roth, and Juba Ziani. Algorithmic Collusion Without Threats. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 10:1-10:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{arunachaleswaran_et_al:LIPIcs.ITCS.2025.10,
  author =	{Arunachaleswaran, Eshwar Ram and Collina, Natalie and Kannan, Sampath and Roth, Aaron and Ziani, Juba},
  title =	{{Algorithmic Collusion Without Threats}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{10:1--10:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.10},
  URN =		{urn:nbn:de:0030-drops-226386},
  doi =		{10.4230/LIPIcs.ITCS.2025.10},
  annote =	{Keywords: Algorithmic Game Theory, Algorithmic Collusion, No-Regret Dynamics}
}
Document
Rank Lower Bounds on Non-Local Quantum Computation

Authors: Vahid R. Asadi, Eric Culf, and Alex May


Abstract
A non-local quantum computation (NLQC) replaces an interaction between two quantum systems with a single simultaneous round of communication and shared entanglement. We study two classes of NLQC, f-routing and f-BB84, which are of relevance to classical information-theoretic cryptography and quantum position-verification. We give the first non-trivial lower bounds on entanglement in both settings, though our bounds are restricted to protocols with perfect correctness. Within this setting, we give a lower bound on the Schmidt rank of any entangled state that completes these tasks for a given function f(x,y) in terms of the rank of a matrix g(x,y) whose entries are zero when f(x,y) = 0, and strictly positive otherwise. This also leads to a lower bound on the Schmidt rank in terms of the non-deterministic quantum communication complexity of f(x,y). Because of a relationship between f-routing and the conditional disclosure of secrets (CDS) primitive studied in information-theoretic cryptography, we obtain a new technique for lower bounding the randomness complexity of CDS.

Cite as

Vahid R. Asadi, Eric Culf, and Alex May. Rank Lower Bounds on Non-Local Quantum Computation. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 11:1-11:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{asadi_et_al:LIPIcs.ITCS.2025.11,
  author =	{Asadi, Vahid R. and Culf, Eric and May, Alex},
  title =	{{Rank Lower Bounds on Non-Local Quantum Computation}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{11:1--11:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.11},
  URN =		{urn:nbn:de:0030-drops-226399},
  doi =		{10.4230/LIPIcs.ITCS.2025.11},
  annote =	{Keywords: Non-local quantum computation, quantum position-verification, conditional disclosure of secrets}
}
Document
Stable Matching with Interviews

Authors: Itai Ashlagi, Jiale Chen, Mohammad Roghani, and Amin Saberi


Abstract
In several two-sided markets, including labor and dating, agents typically have limited information about their preferences prior to mutual interactions. This issue can result in matching frictions, as arising in the labor market for medical residencies, where high application rates are followed by a large number of interviews. Yet, the extensive literature on two-sided matching primarily focuses on models where agents know their preferences, leaving the interactions necessary for preference discovery largely overlooked. This paper studies this problem using an algorithmic approach, extending Gale-Shapley’s deferred acceptance to this context. Two algorithms are proposed. The first is an adaptive algorithm that expands upon Gale-Shapley’s deferred acceptance by incorporating interviews between applicants and positions. Similar to deferred acceptance, one side sequentially proposes to the other. However, the order of proposals is carefully chosen to ensure an interim stable matching is found. Furthermore, with high probability, the number of interviews conducted by each applicant or position is limited to O(log² n). In many seasonal markets, interactions occur more simultaneously, consisting of an initial interview phase followed by a clearing stage. We present a non-adaptive algorithm for generating a single-stage set of interviews in tiered random markets. The algorithm finds an interim stable matching in such markets while assigning no more than O(log³ n) interviews to each applicant or position.

Cite as

Itai Ashlagi, Jiale Chen, Mohammad Roghani, and Amin Saberi. Stable Matching with Interviews. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 12:1-12:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{ashlagi_et_al:LIPIcs.ITCS.2025.12,
  author =	{Ashlagi, Itai and Chen, Jiale and Roghani, Mohammad and Saberi, Amin},
  title =	{{Stable Matching with Interviews}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{12:1--12:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.12},
  URN =		{urn:nbn:de:0030-drops-226402},
  doi =		{10.4230/LIPIcs.ITCS.2025.12},
  annote =	{Keywords: Stable Matching, Gale–Shapley Algorithm, Algorithmic Game Theory}
}
Document
Low Sensitivity Hopsets

Authors: Vikrant Ashvinkumar, Aaron Bernstein, Chengyuan Deng, Jie Gao, and Nicole Wein


Abstract
Given a weighted graph G = (V,E,w), a (β, ε)-hopset H is an edge set such that for any s,t ∈ V, where s can reach t in G, there is a path from s to t in G ∪ H which uses at most β hops and whose length is in the range [dist_G(s,t), (1+ε)dist_G(s,t)]. We break away from the traditional question that asks for a hopset H that achieves small |H| and small diameter β and instead study the sensitivity of H, a new quality measure. The sensitivity of a vertex (or edge) given a hopset H is, informally, the number of times a single hop in G ∪ H bypasses it; a bit more formally, assuming shortest paths in G are unique, it is the number of hopset edges (s,t) ∈ H such that the vertex (or edge) is contained in the unique st-path in G having length exactly dist_G(s,t). The sensitivity associated with H is then the maximum sensitivity over all vertices (or edges). The highlights of our results are: - A construction for (Õ(√n), 0)-hopsets on undirected graphs with O(log n) sensitivity, complemented with a lower bound showing that Õ(√n) is tight up to polylogarithmic factors for any construction with polylogarithmic sensitivity. - A construction for (n^o(1), ε)-hopsets on undirected graphs with n^o(1) sensitivity for any ε > 0 that is at least inverse polylogarithmic, complemented with a lower bound on the tradeoff between β, ε, and the sensitivity. - We define a notion of sensitivity for β-shortcut sets (which are the reachability analogues of hopsets) and give a construction for Õ(√n)-shortcut sets on directed graphs with O(log n) sensitivity, complemented with a lower bound showing that β = Ω̃(n^{1/3}) for any construction with polylogarithmic sensitivity. We believe hopset sensitivity is a natural measure in and of itself, and could potentially find use in a diverse range of contexts. More concretely, the notion of hopset sensitivity is also directly motivated by the Differentially Private All Sets Range Queries problem [Deng et al. WADS 23]. 
Our result for O(log n) sensitivity (Õ(√n), 0)-hopsets on undirected graphs immediately improves the current best-known upper bound on utility from Õ(n^{1/3}) to Õ(n^{1/4}) in the pure-DP setting, which is tight up to polylogarithmic factors.

Cite as

Vikrant Ashvinkumar, Aaron Bernstein, Chengyuan Deng, Jie Gao, and Nicole Wein. Low Sensitivity Hopsets. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 13:1-13:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{ashvinkumar_et_al:LIPIcs.ITCS.2025.13,
  author =	{Ashvinkumar, Vikrant and Bernstein, Aaron and Deng, Chengyuan and Gao, Jie and Wein, Nicole},
  title =	{{Low Sensitivity Hopsets}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{13:1--13:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.13},
  URN =		{urn:nbn:de:0030-drops-226418},
  doi =		{10.4230/LIPIcs.ITCS.2025.13},
  annote =	{Keywords: Hopsets, Shortcuts, Sensitivity, Differential Privacy}
}
Document
The More the Merrier! On Total Coding and Lattice Problems and the Complexity of Finding Multicollisions

Authors: Huck Bennett, Surendra Ghentiyala, and Noah Stephens-Davidowitz


Abstract
We show a number of connections between two types of search problems: (1) the problem of finding an L-wise multicollision in the output of a function; and (2) the problem of finding two codewords in a code (or two vectors in a lattice) that are within distance d of each other. Specifically, we study these problems in the total regime, in which L and d are chosen so that such a solution is guaranteed to exist, though it might be hard to find. In more detail, we study the total search problem in which the input is a function 𝒞 : [A] → [B] (represented as a circuit) and the goal is to find L ≤ ⌈A/B⌉ distinct elements x_1,…, x_L ∈ [A] such that 𝒞(x_1) = ⋯ = 𝒞(x_L). The associated complexity classes Polynomial Multi-Pigeonhole Principle ((A,B)-PMPP^L) consist of all problems that reduce to this problem. We show close connections between (A,B)-PMPP^L and many celebrated upper bounds on the minimum distance of a code or lattice (and on the list-decoding radius). In particular, we show that the associated computational problems (i.e., the problem of finding two distinct codewords or lattice points that are close to each other) are in (A,B)-PMPP^L, with a more-or-less smooth tradeoff between the distance d and the parameters A, B, and L. These connections are particularly rich in the case of codes, in which case we show that multiple incomparable bounds on the minimum distance lie in seemingly incomparable complexity classes. Surprisingly, we also show that the computational problems associated with some bounds on the minimum distance of codes are actually hard for these classes (for codes represented by arbitrary circuits). In fact, we show that finding two vectors within a certain distance d is actually hard for the important (and well-studied) class PWPP = (B²,B)-PMPP² in essentially all parameter regimes for which an efficient algorithm is not known, so that our hardness results are essentially tight. 
In fact, for some d (depending on the block length, message length, and alphabet size), we obtain both hardness and containment. We therefore completely settle the complexity of this problem for such parameters and add coding problems to the short list of problems known to be complete for PWPP. We also study (A,B)-PMPP^L as an interesting family of complexity classes in its own right, and we uncover a rich structure. Specifically, we use recent techniques from the cryptographic literature on multicollision-resistant hash functions to (1) show inclusions of the form (A,B)-PMPP^L ⊆ (A',B')-PMPP^L' for certain non-trivial parameters; (2) black-box separations between such classes in different parameter regimes; and (3) a non-black-box proof that (A,B)-PMPP^L ∈ FP if (A',B')-PMPP^L' ∈ FP for yet another parameter regime. We also show that (A,B)-PMPP^L lies in the recently introduced complexity class Polynomial Long Choice for some parameters.

Cite as

Huck Bennett, Surendra Ghentiyala, and Noah Stephens-Davidowitz. The More the Merrier! On Total Coding and Lattice Problems and the Complexity of Finding Multicollisions. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 14:1-14:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bennett_et_al:LIPIcs.ITCS.2025.14,
  author =	{Bennett, Huck and Ghentiyala, Surendra and Stephens-Davidowitz, Noah},
  title =	{{The More the Merrier! On Total Coding and Lattice Problems and the Complexity of Finding Multicollisions}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{14:1--14:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.14},
  URN =		{urn:nbn:de:0030-drops-226424},
  doi =		{10.4230/LIPIcs.ITCS.2025.14},
  annote =	{Keywords: Multicollisions, Error-correcting codes, Lattices}
}
Document
Exponential-Time Approximation (Schemes) for Vertex-Ordering Problems

Authors: Matthias Bentert, Fedor V. Fomin, Tanmay Inamdar, and Saket Saurabh


Abstract
In this paper, we begin the exploration of vertex-ordering problems through the lens of exponential-time approximation algorithms. In particular, we ask the following question: Can we simultaneously beat the running times of the fastest known (exponential-time) exact algorithms and the best known approximation factors that can be achieved in polynomial time? Following the recent research initiated by Esmer et al. (ESA 2022, IPEC 2023, SODA 2024) on vertex-subset problems, and by Inamdar et al. (ITCS 2024) on graph-partitioning problems, we focus on vertex-ordering problems. In particular, we give positive results for Feedback Arc Set, Optimal Linear Arrangement, Cutwidth, and Pathwidth. Most of our algorithms build upon a novel "balanced-cut" approach - which is our main conceptual contribution. This allows us to solve various problems in very general settings allowing for directed and arc-weighted input graphs. Our main technical contribution is a (1+ε)-approximation for any ε > 0 for (weighted) Feedback Arc Set in O^*((2-δ_ε)^n) time, where δ_ε > 0 is a constant only depending on ε.

Cite as

Matthias Bentert, Fedor V. Fomin, Tanmay Inamdar, and Saket Saurabh. Exponential-Time Approximation (Schemes) for Vertex-Ordering Problems. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 15:1-15:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bentert_et_al:LIPIcs.ITCS.2025.15,
  author =	{Bentert, Matthias and Fomin, Fedor V. and Inamdar, Tanmay and Saurabh, Saket},
  title =	{{Exponential-Time Approximation (Schemes) for Vertex-Ordering Problems}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{15:1--15:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.15},
  URN =		{urn:nbn:de:0030-drops-226431},
  doi =		{10.4230/LIPIcs.ITCS.2025.15},
  annote =	{Keywords: Feedback Arc Set, Cutwidth, Optimal Linear Arrangement, Pathwidth}
}
Document
On Fault Tolerant Single-Shot Logical State Preparation and Robust Long-Range Entanglement

Authors: Thiago Bergamaschi and Yunchao Liu


Abstract
Preparing encoded logical states is the first step in a fault-tolerant quantum computation. Standard approaches based on concatenation or repeated measurement incur a significant time overhead. The Raussendorf-Bravyi-Harrington cluster state [Raussendorf et al., 2005] offers an alternative: a single-shot preparation of encoded states of the surface code, by means of a constant depth quantum circuit, followed by a single round of measurement and classical feedforward [Bravyi et al., 2020]. In this work we generalize this approach and prove that single-shot logical state preparation can be achieved for arbitrary quantum LDPC codes. Our proof relies on a minimum-weight decoder and is based on a generalization of Gottesman’s clustering-of-errors argument [Gottesman, 2014]. As an application, we also prove single-shot preparation of the encoded GHZ state in arbitrary quantum LDPC codes. This shows that adaptive noisy constant depth quantum circuits are capable of generating generic robust long-range entanglement.

Cite as

Thiago Bergamaschi and Yunchao Liu. On Fault Tolerant Single-Shot Logical State Preparation and Robust Long-Range Entanglement. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 16:1-16:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bergamaschi_et_al:LIPIcs.ITCS.2025.16,
  author =	{Bergamaschi, Thiago and Liu, Yunchao},
  title =	{{On Fault Tolerant Single-Shot Logical State Preparation and Robust Long-Range Entanglement}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{16:1--16:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.16},
  URN =		{urn:nbn:de:0030-drops-226444},
  doi =		{10.4230/LIPIcs.ITCS.2025.16},
  annote =	{Keywords: Quantum error correction, fault tolerance, single-shot error correction, logical state preparation}
}
Document
Random Restrictions of Bounded Low Degree Polynomials Are Juntas

Authors: Sreejata Kishor Bhattacharya


Abstract
We study the effects of random restrictions on low degree functions that are bounded on every point of the Boolean cube. Our main result shows that, with high probability, the restricted function can be approximated by a junta of arity that is just polynomial in the original degree. More precisely, let f: {± 1}ⁿ → [0,1] be a degree d polynomial (d ≥ 2) and let ρ denote a random restriction with survival probability O(log(d)/d). Then, with probability at least 1-d^{-Ω(1)}, there exists a function g: {± 1}ⁿ → [0,1] depending on at most d^O(1) coordinates such that ||f_{ρ}-g||_2^2 ≤ d^{-1-Ω(1)}. Our result has the following consequence for the well known, outstanding conjecture of Aaronson and Ambainis. The Aaronson-Ambainis conjecture was formulated to show that the acceptance probability of a quantum query algorithm can be well approximated almost everywhere by a classical query algorithm with a polynomial blow-up: it speculates that a polynomial f: {± 1}ⁿ → [0,1] with degree d has a coordinate with influence ≥ poly(1/d, Var[f]). Our result shows that this is true for a non-negligible fraction of random restrictions of f assuming Var[f] is not too low. Our work combines the ideas of Dinur, Friedgut, Kindler and O'Donnell [Dinur et al., 2006] with an approximation theoretic result, first reported in the recent work of Filmus, Hatami, Keller and Lifshitz [Yuval Filmus and Hamed Hatami, 2014].

Cite as

Sreejata Kishor Bhattacharya. Random Restrictions of Bounded Low Degree Polynomials Are Juntas. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 17:1-17:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bhattacharya:LIPIcs.ITCS.2025.17,
  author =	{Bhattacharya, Sreejata Kishor},
  title =	{{Random Restrictions of Bounded Low Degree Polynomials Are Juntas}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{17:1--17:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.17},
  URN =		{urn:nbn:de:0030-drops-226459},
  doi =		{10.4230/LIPIcs.ITCS.2025.17},
  annote =	{Keywords: Analysis of Boolean Functions, Quantum Query Algorithms}
}
Document
Finite Matrix Multiplication Algorithms from Infinite Groups

Authors: Jonah Blasiak, Henry Cohn, Joshua A. Grochow, Kevin Pratt, and Chris Umans


Abstract
The Cohn-Umans (FOCS '03) group-theoretic framework for matrix multiplication produces fast matrix multiplication algorithms from three subsets of a finite group G satisfying a simple combinatorial condition (the Triple Product Property). The complexity of such an algorithm then depends on the representation theory of G. In this paper we extend the group-theoretic framework to the setting of infinite groups. In particular, this allows us to obtain constructions in Lie groups, with favorable parameters, that are provably impossible in finite groups of Lie type (Blasiak, Cohn, Grochow, Pratt, and Umans, ITCS '23). Previously the Lie group setting was investigated purely as an analogue of the finite group case; a key contribution in this paper is a fully developed framework for obtaining bona fide matrix multiplication algorithms directly from Lie group constructions. As part of this framework, we introduce "separating functions" as a necessary new design component, and show that when the underlying group is G = GL_n, these functions are polynomials with their degree being the key parameter. In particular, we show that a construction with "half-dimensional" subgroups and optimal degree would imply ω = 2. We then build up machinery that reduces the problem of constructing optimal-degree separating polynomials to the problem of constructing a single polynomial (and a corresponding set of group elements) in a ring of invariant polynomials determined by two out of the three subgroups that satisfy the Triple Product Property. This machinery combines border rank with the Lie algebras associated with the Lie subgroups in a critical way. We give several constructions illustrating the main components of the new framework, culminating in a construction in a special unitary group that achieves separating polynomials of optimal degree, meeting one of the key challenges. 
The subgroups in this construction have dimension approaching half the ambient dimension, but (just barely) too slowly. We argue that features of the classical Lie groups make it unlikely that constructions in these particular groups could produce nontrivial bounds on ω unless they prove ω = 2. One way to get ω = 2 via our new framework would be to lift our existing construction from the special unitary group to GL_n, and improve the dimension of the subgroups from (dim G)/2 - Θ(n) to (dim G)/2 - o(n).

Cite as

Jonah Blasiak, Henry Cohn, Joshua A. Grochow, Kevin Pratt, and Chris Umans. Finite Matrix Multiplication Algorithms from Infinite Groups. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 18:1-18:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{blasiak_et_al:LIPIcs.ITCS.2025.18,
  author =	{Blasiak, Jonah and Cohn, Henry and Grochow, Joshua A. and Pratt, Kevin and Umans, Chris},
  title =	{{Finite Matrix Multiplication Algorithms from Infinite Groups}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{18:1--18:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.18},
  URN =		{urn:nbn:de:0030-drops-226460},
  doi =		{10.4230/LIPIcs.ITCS.2025.18},
  annote =	{Keywords: Fast matrix multiplication, representation theory, infinite groups}
}
Document
Differential Privacy and Sublinear Time Are Incompatible Sometimes

Authors: Jeremiah Blocki, Hendrik Fichtenberger, Elena Grigorescu, and Tamalika Mukherjee


Abstract
Differential privacy and sublinear algorithms are both rapidly emerging algorithmic themes in times of big data analysis. Although recent works have shown the existence of differentially private sublinear algorithms for many problems including graph parameter estimation and clustering, little is known regarding hardness results on these algorithms. In this paper, we initiate the study of lower bounds for problems that aim for both differentially-private and sublinear-time algorithms. Our main result is the incompatibility of the two desiderata in the general case. In particular, we prove that a simple problem based on one-way marginals yields both a differentially-private algorithm, as well as a sublinear-time algorithm, but does not admit a "strictly" sublinear-time algorithm that is also differentially private.

Cite as

Jeremiah Blocki, Hendrik Fichtenberger, Elena Grigorescu, and Tamalika Mukherjee. Differential Privacy and Sublinear Time Are Incompatible Sometimes. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 19:1-19:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{blocki_et_al:LIPIcs.ITCS.2025.19,
  author =	{Blocki, Jeremiah and Fichtenberger, Hendrik and Grigorescu, Elena and Mukherjee, Tamalika},
  title =	{{Differential Privacy and Sublinear Time Are Incompatible Sometimes}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{19:1--19:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.19},
  URN =		{urn:nbn:de:0030-drops-226473},
  doi =		{10.4230/LIPIcs.ITCS.2025.19},
  annote =	{Keywords: differential privacy, sublinear algorithms, sublinear-time algorithms, one-way marginals, lower bounds}
}
Document
Estimating Euclidean Distance to Linearity

Authors: Andrej Bogdanov and Lorenzo Taschin


Abstract
Given oracle access to a real-valued function on the n-dimensional Boolean cube, how many queries does it take to estimate the squared Euclidean distance to its closest linear function within ε? Our main result is that O(log³(1/ε) ⋅ 1/ε²) queries suffice. Not only is the query complexity independent of n but it is optimal up to the polylogarithmic factor. Our estimator evaluates f on pairs correlated by noise rates chosen to cancel out the low-degree contributions to f while leaving the linear part intact. The query complexity is optimized when the noise rates are multiples of Chebyshev nodes. In contrast, we show that the dependence on n is unavoidable in two closely related settings. For estimation from random samples, Θ(√n/ε + 1/ε²) samples are necessary and sufficient. For agnostically learning a linear approximation with ε mean-square regret under the uniform distribution, Ω(n/√ε) nonadaptively chosen queries are necessary, while O(n/ε) random samples are known to be sufficient (Linial, Mansour, and Nisan). Our upper bounds apply to functions with bounded 4-norm. Our lower bounds apply even to ± 1-valued functions.

Cite as

Andrej Bogdanov and Lorenzo Taschin. Estimating Euclidean Distance to Linearity. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 20:1-20:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bogdanov_et_al:LIPIcs.ITCS.2025.20,
  author =	{Bogdanov, Andrej and Taschin, Lorenzo},
  title =	{{Estimating Euclidean Distance to Linearity}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{20:1--20:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.20},
  URN =		{urn:nbn:de:0030-drops-226481},
  doi =		{10.4230/LIPIcs.ITCS.2025.20},
  annote =	{Keywords: sublinear-time algorithms, statistical estimation, analysis of boolean functions, property testing, regression}
}
Document
Adjacency Labeling Schemes for Small Classes

Authors: Édouard Bonnet, Julien Duron, John Sylvester, and Viktor Zamaraev


Abstract
A graph class admits an implicit representation if, for every positive integer n, its n-vertex graphs have an O(log n)-bit (adjacency) labeling scheme, i.e., their vertices can be labeled by binary strings of length O(log n) such that the presence of an edge between any pair of vertices can be deduced solely from their labels. The famous Implicit Graph Conjecture posited that every hereditary (i.e., closed under taking induced subgraphs) factorial (i.e., containing 2^O(n log n) n-vertex graphs) class admits an implicit representation. The conjecture was recently refuted [Hatami and Hatami, FOCS '22], and does not even hold among monotone (i.e., closed under taking subgraphs) factorial classes [Bonnet et al., ICALP '24]. However, monotone small (i.e., containing at most n! cⁿ many n-vertex graphs for some constant c) classes do admit implicit representations. This motivates the Small Implicit Graph Conjecture: Every hereditary small class admits an O(log n)-bit labeling scheme. We provide evidence supporting the Small Implicit Graph Conjecture. First, we show that every small weakly sparse (i.e., excluding some fixed bipartite complete graph as a subgraph) class has an implicit representation. This is a consequence of the following fact of independent interest proved in the paper: Every weakly sparse small class has bounded expansion (hence, in particular, bounded degeneracy). Second, we show that every hereditary small class admits an O(log³ n)-bit labeling scheme, which provides a substantial improvement of the best-known polynomial upper bound of n^(1-ε) on the size of adjacency labeling schemes for such classes. This is a consequence of another fact of independent interest proved in the paper: Every small class has neighborhood complexity O(n log n).

Cite as

Édouard Bonnet, Julien Duron, John Sylvester, and Viktor Zamaraev. Adjacency Labeling Schemes for Small Classes. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 21:1-21:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bonnet_et_al:LIPIcs.ITCS.2025.21,
  author =	{Bonnet, \'{E}douard and Duron, Julien and Sylvester, John and Zamaraev, Viktor},
  title =	{{Adjacency Labeling Schemes for Small Classes}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{21:1--21:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.21},
  URN =		{urn:nbn:de:0030-drops-226493},
  doi =		{10.4230/LIPIcs.ITCS.2025.21},
  annote =	{Keywords: Adjacency labeling, degeneracy, weakly sparse classes, small classes, implicit graph conjecture}
}
Document
Round-Vs-Resilience Tradeoffs for Binary Feedback Channels

Authors: Mark Braverman, Klim Efremenko, Gillat Kol, Raghuvansh R. Saxena, and Zhijun Zhang


Abstract
In a celebrated result from the 60’s, Berlekamp showed that feedback can be used to increase the maximum fraction of adversarial noise that can be tolerated by binary error correcting codes from 1/4 to 1/3. However, his result relies on the assumption that feedback is "continuous", i.e., after every utilization of the channel, the sender gets the symbol received by the receiver. While this assumption is natural in some settings, in other settings it may be unreasonable or too costly to maintain. In this work, we initiate the study of round-restricted feedback channels, where the number r of feedback rounds is possibly much smaller than the number of utilizations of the channel. Error correcting codes for such channels are protocols where the sender can ask for feedback at most r times, and, upon a feedback request, it obtains all the symbols received since its last feedback request. We design such error correcting protocols for both the adversarial binary erasure channel and for the adversarial binary corruption (bit flip) channel. For the erasure channel, we give an exact characterization of the round-vs-resilience tradeoff by designing a (constant rate) protocol with r feedback rounds, for every r, and proving that its noise resilience is optimal. Designing such error correcting protocols for the corruption channel is substantially more involved. We show that obtaining the optimal resilience, even with one feedback round (r = 1), requires settling (proving or disproving) a new, seemingly unrelated, "clean" combinatorial conjecture, about the maximum cut in weighted graphs versus the "imbalance" of an average cut. Specifically, we prove an upper bound on the optimal resilience (impossibility result), and show that the existence of a matching lower bound (a protocol) is equivalent to the correctness of our conjecture.

Cite as

Mark Braverman, Klim Efremenko, Gillat Kol, Raghuvansh R. Saxena, and Zhijun Zhang. Round-Vs-Resilience Tradeoffs for Binary Feedback Channels. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 22:1-22:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{braverman_et_al:LIPIcs.ITCS.2025.22,
  author =	{Braverman, Mark and Efremenko, Klim and Kol, Gillat and Saxena, Raghuvansh R. and Zhang, Zhijun},
  title =	{{Round-Vs-Resilience Tradeoffs for Binary Feedback Channels}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{22:1--22:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.22},
  URN =		{urn:nbn:de:0030-drops-226506},
  doi =		{10.4230/LIPIcs.ITCS.2025.22},
  annote =	{Keywords: Round-restricted feedback channel, error correcting code, noise resilience}
}
Accumulation Without Homomorphism

Authors: Benedikt Bünz, Pratyush Mishra, Wilson Nguyen, and William Wang


Abstract
Accumulation schemes are a simple yet powerful primitive that enable highly efficient constructions of incrementally verifiable computation (IVC). Unfortunately, all prior accumulation schemes rely on homomorphic vector commitments whose security is based on public-key assumptions. It is an interesting open question to construct efficient accumulation schemes that avoid the need for such assumptions. In this paper, we answer this question affirmatively by constructing an accumulation scheme from non-homomorphic vector commitments which can be realized from solely symmetric-key assumptions (e.g., Merkle trees). We overcome the need for homomorphisms by instead performing spot-checks over error-correcting encodings of the committed vectors. Unlike prior accumulation schemes, our scheme only supports a bounded number of accumulation steps. We show that such bounded-depth accumulation still suffices to construct proof-carrying data (a generalization of IVC). We also demonstrate several optimizations to our PCD construction which greatly improve concrete efficiency.

Cite as

Benedikt Bünz, Pratyush Mishra, Wilson Nguyen, and William Wang. Accumulation Without Homomorphism. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 23:1-23:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{bunz_et_al:LIPIcs.ITCS.2025.23,
  author =	{B\"{u}nz, Benedikt and Mishra, Pratyush and Nguyen, Wilson and Wang, William},
  title =	{{Accumulation Without Homomorphism}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{23:1--23:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.23},
  URN =		{urn:nbn:de:0030-drops-226510},
  doi =		{10.4230/LIPIcs.ITCS.2025.23},
  annote =	{Keywords: Proof-carrying data, incrementally verifiable computation, accumulation schemes}
}
Sampling List Packings

Authors: Evan Camrud, Ewan Davies, Alex Karduna, and Holden Lee


Abstract
We initiate the study of approximately counting the number of list packings of a graph. The analogous problem for usual vertex coloring and list coloring has attracted substantial attention. For list packing the setup is similar, but we seek a full decomposition of the lists of colors into pairwise-disjoint proper list colorings. The existence of a list packing implies the existence of a list coloring, but the converse is false. Recent works on list packing have focused on existence or extremal results on the number of list packings, but here we turn to the algorithmic aspects of counting and sampling. In graphs of maximum degree Δ and when the number of colors is at least Ω(Δ²), we give a fully polynomial-time randomized approximation scheme (FPRAS) based on rapid mixing of a natural Markov chain (the Glauber dynamics) which we analyze with the path coupling technique. Some motivation for our work is the investigation of an atypical spin system, one where the number of spins for each vertex is much larger than the graph degree.
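As a point of reference for the Glauber dynamics mentioned above, here is a minimal sketch of the single-site update rule for the simpler problem of sampling a proper list coloring; the paper's chain acts on full list packings, so this toy version only conveys the flavor of the update. The graph, the lists, and the deterministic warm-up sweep are illustrative choices.

```python
import random

def update_vertex(graph, lists, coloring, v, rng):
    """Resample vertex v uniformly among its list colors not currently
    used by any neighbor (keep the old color if none is available)."""
    allowed = [c for c in lists[v] if all(coloring[u] != c for u in graph[v])]
    if allowed:
        coloring[v] = rng.choice(allowed)

def glauber_step(graph, lists, coloring, rng):
    """One step of Glauber dynamics: update a uniformly random vertex."""
    update_vertex(graph, lists, coloring, rng.choice(sorted(graph)), rng)

# A 4-cycle where every vertex has the same 3-color list.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
lists = {v: [0, 1, 2] for v in graph}
rng = random.Random(0)

coloring = {v: 0 for v in graph}     # improper start
for v in sorted(graph):              # one systematic sweep repairs it:
    update_vertex(graph, lists, coloring, v, rng)
for _ in range(100):                 # then run the random-scan chain,
    glauber_step(graph, lists, coloring, rng)  # which preserves properness

proper = all(coloring[u] != coloring[v] for u in graph for v in graph[u])
```

With 3 colors and maximum degree 2, a vertex always has an allowed color, so one full sweep yields a proper coloring and every subsequent step keeps it proper.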

Cite as

Evan Camrud, Ewan Davies, Alex Karduna, and Holden Lee. Sampling List Packings. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 24:1-24:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{camrud_et_al:LIPIcs.ITCS.2025.24,
  author =	{Camrud, Evan and Davies, Ewan and Karduna, Alex and Lee, Holden},
  title =	{{Sampling List Packings}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{24:1--24:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.24},
  URN =		{urn:nbn:de:0030-drops-226528},
  doi =		{10.4230/LIPIcs.ITCS.2025.24},
  annote =	{Keywords: List packing, Graph colouring, Markov chains, Path coupling}
}
Locally Private Histograms in All Privacy Regimes

Authors: Clément L. Canonne and Abigail Gentle


Abstract
Frequency estimation, a.k.a. histograms, is a workhorse of data analysis, and as such has been thoroughly studied under differential privacy. In particular, computing histograms in the local model of privacy has been the focus of a fruitful recent line of work, and various algorithms have been proposed, achieving the order-optimal 𝓁_∞ error in the high-privacy (small ε) regime while balancing other considerations such as time- and communication-efficiency. However, to the best of our knowledge, the picture is much less clear when it comes to the medium- or low-privacy regime (large ε), despite its increased relevance in practice. In this paper, we investigate locally private histograms, and the closely related distribution learning task, in this medium-to-low privacy regime, and establish near-tight (and somewhat unexpected) bounds on the 𝓁_∞ error achievable. As a direct corollary of our results, we obtain a protocol for histograms in the shuffle model of differential privacy, with accuracy matching previous algorithms but significantly better message and communication complexity. Our theoretical findings emerge from a novel analysis, which appears to improve bounds across the board for the locally private histogram problem. We back our theoretical findings by an empirical comparison of existing algorithms in all privacy regimes, to assess their typical performance and behaviour beyond the worst-case setting.
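For readers new to the local model, k-ary randomized response is one of the standard baseline protocols in this line of work; the sketch below shows the usual report-then-debias pattern for locally private frequency estimation. The dataset and parameter values are illustrative, and the paper's own protocols are not reproduced here.

```python
import random
from math import exp

def k_rr_report(x, k, eps, rng):
    """k-ary randomized response: send the true value with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other value."""
    if rng.random() < exp(eps) / (exp(eps) + k - 1):
        return x
    return rng.choice([v for v in range(k) if v != x])

def debias(counts, n, k, eps):
    """Unbiased frequency estimates from the noisy report counts."""
    p = exp(eps) / (exp(eps) + k - 1)   # P(report = true value)
    q = 1.0 / (exp(eps) + k - 1)        # P(report = any fixed other value)
    return [(c / n - q) / (p - q) for c in counts]

rng = random.Random(0)
k, eps = 4, 1.0
data = [0] * 60 + [1] * 25 + [2] * 10 + [3] * 5   # illustrative inputs
reports = [k_rr_report(x, k, eps, rng) for x in data]
counts = [reports.count(v) for v in range(k)]
est = debias(counts, len(data), k, eps)
```

Since p + (k-1)q = 1, the debiased estimates always sum to 1, though individual entries can be negative for small n.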

Cite as

Clément L. Canonne and Abigail Gentle. Locally Private Histograms in All Privacy Regimes. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 25:1-25:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{canonne_et_al:LIPIcs.ITCS.2025.25,
  author =	{Canonne, Cl\'{e}ment L. and Gentle, Abigail},
  title =	{{Locally Private Histograms in All Privacy Regimes}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{25:1--25:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.25},
  URN =		{urn:nbn:de:0030-drops-226532},
  doi =		{10.4230/LIPIcs.ITCS.2025.25},
  annote =	{Keywords: Differential Privacy, Local Differential Privacy, Histograms, Frequency Estimation, Lower Bounds, Maximum Error}
}
Settling the Complexity of Testing Grainedness of Distributions, and Application to Uniformity Testing in the Huge Object Model

Authors: Clément L. Canonne, Sayantan Sen, and Joy Qiping Yang


Abstract
In this work, we study the problem of testing m-grainedness of probability distributions over an n-element universe 𝒰, or, equivalently, of whether a probability distribution is induced by a multiset S ⊆ 𝒰 of size |S| = m. Recently, Goldreich and Ron (Computational Complexity, 2023) proved that Ω(n^c) samples are necessary for testing this property, for any c < 1 and m = Θ(n). They also conjectured that Ω(m/(log m)) samples are necessary for testing this property when m = Θ(n). In this work, we positively settle this conjecture. Using a known connection to the Distribution over Huge objects (DoHo) model introduced by Goldreich and Ron (TheoretiCS, 2023), we leverage our results to provide improved bounds for uniformity testing in the DoHo model.
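The property being tested admits a one-line exact check when the distribution is given explicitly (the hard part, which the paper addresses, is testing it from samples). A small sketch with exact rationals, on an illustrative distribution:

```python
from fractions import Fraction

def is_m_grained(dist, m):
    """A distribution is m-grained iff every probability is an integer
    multiple of 1/m, i.e. it is induced by some multiset of size m."""
    return all((p * m).denominator == 1 for p in dist)

# Induced by the size-4 multiset {a, a, b, c} over a 3-element universe.
dist = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
```

Note that an m-grained distribution is also (cm)-grained for every integer c ≥ 1, which is why the check passes for m = 8 below.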

Cite as

Clément L. Canonne, Sayantan Sen, and Joy Qiping Yang. Settling the Complexity of Testing Grainedness of Distributions, and Application to Uniformity Testing in the Huge Object Model. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 26:1-26:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{canonne_et_al:LIPIcs.ITCS.2025.26,
  author =	{Canonne, Cl\'{e}ment L. and Sen, Sayantan and Yang, Joy Qiping},
  title =	{{Settling the Complexity of Testing Grainedness of Distributions, and Application to Uniformity Testing in the Huge Object Model}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{26:1--26:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.26},
  URN =		{urn:nbn:de:0030-drops-226543},
  doi =		{10.4230/LIPIcs.ITCS.2025.26},
  annote =	{Keywords: Distribution testing, Uniformity testing, Huge Object Model, Lower bounds}
}
The Randomness Complexity of Differential Privacy

Authors: Clément L. Canonne, Francis E. Su, and Salil P. Vadhan


Abstract
We initiate the study of the randomness complexity of differential privacy, i.e., how many random bits an algorithm needs in order to generate accurate differentially private releases. As a test case, we focus on the task of releasing the results of d counting queries, or equivalently all one-way marginals on a d-dimensional dataset with boolean attributes. While standard differentially private mechanisms for this task have randomness complexity that grows linearly with d, we show that, surprisingly, only log₂ d+O(1) random bits (in expectation) suffice to achieve an error that depends polynomially on d (and is independent of the size n of the dataset), and furthermore this is possible with pure, unbounded differential privacy and privacy-loss parameter ε = 1/poly(d). Conversely, we show that at least log₂ d-O(1) random bits are also necessary for nontrivial accuracy, even with approximate, bounded DP, provided the privacy-loss parameters satisfy ε,δ ≤ 1/poly(d). We obtain our results by establishing a close connection between the randomness complexity of differentially private mechanisms and the geometric notion of "deterministic rounding schemes" recently introduced and studied by Vander Woude et al. (2022, 2023).
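For contrast with the log₂ d + O(1) bound above, the standard baseline spends fresh real-valued randomness on each of the d counts. The sketch below shows that baseline (inverse-CDF Laplace sampling, with the worst-case 𝓁₁ sensitivity d of the count vector); the dataset is illustrative, and the paper's low-randomness mechanism is not reproduced here.

```python
import random
from math import log, copysign

def one_way_marginals(dataset):
    """Exact column sums of a boolean dataset: the d counting queries."""
    return [sum(col) for col in zip(*dataset)]

def laplace_marginals(dataset, eps, rng):
    """eps-DP baseline: add independent Laplace(d/eps) noise to each count.
    Changing one record moves each count by at most 1, so the l1
    sensitivity of the d-dimensional count vector is d.  Note the d
    independent real-valued draws: randomness linear in d."""
    counts = one_way_marginals(dataset)
    scale = len(counts) / eps
    noisy = []
    for c in counts:
        u = rng.random() - 0.5   # inverse-CDF sample of a Laplace variate
        noisy.append(c - scale * copysign(1.0, u) * log(1.0 - 2.0 * abs(u)))
    return noisy

data = [[1, 0, 1], [1, 1, 1], [0, 0, 1]]   # n = 3 records, d = 3 attributes
exact = one_way_marginals(data)
release = laplace_marginals(data, eps=1.0, rng=random.Random(0))
```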

Cite as

Clément L. Canonne, Francis E. Su, and Salil P. Vadhan. The Randomness Complexity of Differential Privacy. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 27:1-27:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{canonne_et_al:LIPIcs.ITCS.2025.27,
  author =	{Canonne, Cl\'{e}ment L. and Su, Francis E. and Vadhan, Salil P.},
  title =	{{The Randomness Complexity of Differential Privacy}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{27:1--27:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.27},
  URN =		{urn:nbn:de:0030-drops-226556},
  doi =		{10.4230/LIPIcs.ITCS.2025.27},
  annote =	{Keywords: differential privacy, randomness, geometry}
}
Coresets for 1-Center in 𝓁₁ Metrics

Authors: Amir Carmel, Chengzhi Guo, Shaofeng H.-C. Jiang, and Robert Krauthgamer


Abstract
We explore the applicability of coresets - a small subset of the input dataset that approximates a predefined set of queries - to the 1-center problem in 𝓁₁ spaces. This approach could potentially extend to solving the 1-center problem in related metric spaces, and has implications for streaming and dynamic algorithms. We show that in 𝓁₁, unlike in Euclidean space, even weak coresets exhibit exponential dependency on the underlying dimension. Moreover, while inputs with a unique optimal center admit better bounds, they are not dimension independent. We then relax the guarantee of the coreset further, to merely approximate the value (optimal cost of 1-center), and obtain a dimension-independent coreset for every desired accuracy ε > 0. Finally, we discuss the broader implications of our findings to related metric spaces, and show explicit implications to Jaccard and Kendall’s tau distances.

Cite as

Amir Carmel, Chengzhi Guo, Shaofeng H.-C. Jiang, and Robert Krauthgamer. Coresets for 1-Center in 𝓁₁ Metrics. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{carmel_et_al:LIPIcs.ITCS.2025.28,
  author =	{Carmel, Amir and Guo, Chengzhi and Jiang, Shaofeng H.-C. and Krauthgamer, Robert},
  title =	{{Coresets for 1-Center in 𝓁₁ Metrics}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.28},
  URN =		{urn:nbn:de:0030-drops-226566},
  doi =		{10.4230/LIPIcs.ITCS.2025.28},
  annote =	{Keywords: clustering, k-center, minimum enclosing balls, coresets, 𝓁₁ norm, Kendall’s tau, Jaccard metric}
}
Extracting Dual Solutions via Primal Optimizers

Authors: Yair Carmon, Arun Jambulapati, Liam O'Carroll, and Aaron Sidford


Abstract
We provide a general method to convert a "primal" black-box algorithm for solving regularized convex-concave minimax optimization problems into an algorithm for solving the associated dual maximin optimization problem. Our method adds recursive regularization over a logarithmic number of rounds where each round consists of an approximate regularized primal optimization followed by the computation of a dual best response. We apply this result to obtain new state-of-the-art runtimes for solving matrix games in specific parameter regimes, obtain improved query complexity for solving the dual of the CVaR distributionally robust optimization (DRO) problem, and recover the optimal query complexity for finding a stationary point of a convex function.
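The "dual best response" step the abstract refers to is simple in the matrix-game case: given a primal strategy x for min_x max_y xᵀAy, any column maximizing (Aᵀx) is a best response. The sketch below implements only that inner step (the recursive regularization wrapper is the paper's contribution and is not reproduced here); the game matrix is illustrative.

```python
def dual_best_response(A, x):
    """Best response for the column player in min_x max_y x^T A y:
    a point mass on a column maximizing (A^T x), plus its value."""
    m, n = len(A), len(A[0])
    payoffs = [sum(A[i][j] * x[i] for i in range(m)) for j in range(n)]
    j_star = max(range(n), key=payoffs.__getitem__)
    y = [1.0 if j == j_star else 0.0 for j in range(n)]
    return y, payoffs[j_star]

# Matching pennies: against the (optimal) uniform x, all columns tie at 1/2.
A = [[1.0, 0.0], [0.0, 1.0]]
y, value = dual_best_response(A, [0.5, 0.5])
```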

Cite as

Yair Carmon, Arun Jambulapati, Liam O'Carroll, and Aaron Sidford. Extracting Dual Solutions via Primal Optimizers. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 29:1-29:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{carmon_et_al:LIPIcs.ITCS.2025.29,
  author =	{Carmon, Yair and Jambulapati, Arun and O'Carroll, Liam and Sidford, Aaron},
  title =	{{Extracting Dual Solutions via Primal Optimizers}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{29:1--29:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.29},
  URN =		{urn:nbn:de:0030-drops-226578},
  doi =		{10.4230/LIPIcs.ITCS.2025.29},
  annote =	{Keywords: Minimax optimization, black-box optimization, matrix games, distributionally robust optimization}
}
Provability of the Circuit Size Hierarchy and Its Consequences

Authors: Marco Carmosino, Valentine Kabanets, Antonina Kolokolova, Igor C. Oliveira, and Dimitrios Tsintsilidas


Abstract
The Circuit Size Hierarchy (CSH^a_b) states that if a > b ≥ 1 then the set of functions on n variables computed by Boolean circuits of size n^a is strictly larger than the set of functions computed by circuits of size n^b. This result, which is a cornerstone of circuit complexity theory, follows from the non-constructive proof of the existence of functions of large circuit complexity obtained by Shannon in 1949. Are there more "constructive" proofs of the Circuit Size Hierarchy? Can we quantify this? Motivated by these questions, we investigate the provability of CSH^a_b in theories of bounded arithmetic. Among other contributions, we establish the following results: i) Given any a > b > 1, CSH^a_b is provable in Buss’s theory 𝖳²₂. ii) In contrast, if there are constants a > b > 1 such that CSH^a_b is provable in the theory 𝖳¹₂, then there is a constant ε > 0 such that 𝖯^NP requires non-uniform circuits of size at least n^{1 + ε}. In other words, an improved upper bound on the proof complexity of CSH^a_b would lead to new lower bounds in complexity theory. We complement these results with a proof of the Formula Size Hierarchy (FSH^a_b) in PV₁ with parameters a > 2 and b = 3/2. This is in contrast with typical formalizations of complexity lower bounds in bounded arithmetic, which require APC₁ or stronger theories and are not known to hold even in 𝖳¹₂.

Cite as

Marco Carmosino, Valentine Kabanets, Antonina Kolokolova, Igor C. Oliveira, and Dimitrios Tsintsilidas. Provability of the Circuit Size Hierarchy and Its Consequences. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 30:1-30:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{carmosino_et_al:LIPIcs.ITCS.2025.30,
  author =	{Carmosino, Marco and Kabanets, Valentine and Kolokolova, Antonina and Oliveira, Igor C. and Tsintsilidas, Dimitrios},
  title =	{{Provability of the Circuit Size Hierarchy and Its Consequences}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{30:1--30:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.30},
  URN =		{urn:nbn:de:0030-drops-226586},
  doi =		{10.4230/LIPIcs.ITCS.2025.30},
  annote =	{Keywords: Bounded Arithmetic, Circuit Complexity, Hierarchy Theorems}
}
Quantum Advantage and Lower Bounds in Parallel Query Complexity

Authors: Joseph Carolan, Amin Shiraz Gilani, and Mahathi Vempati


Abstract
It is well known that quantum, randomized and deterministic (sequential) query complexities are polynomially related for total boolean functions. We find that significantly larger separations between the parallel generalizations of these measures are possible. In particular, 1) We employ the cheatsheet framework to obtain an unbounded parallel quantum query advantage over its randomized analogue for a total function, falsifying a conjecture of [https://arxiv.org/abs/1309.6116]. 2) We strengthen 1 by constructing a total function which exhibits an unbounded parallel quantum query advantage despite having no sequential advantage, suggesting that genuine quantum advantage could occur entirely due to parallelism. 3) We construct a total function that exhibits a polynomial separation between 2-round quantum and randomized query complexities, contrasting a result of [https://arxiv.org/abs/1001.0018] that there is at most a constant separation for 1-round (nonadaptive) algorithms. 4) We develop a new technique for deriving parallel quantum lower bounds from sequential upper bounds. We employ this technique to give lower bounds for Boolean symmetric functions and read-once formulas, ruling out large parallel query advantages for them. We also provide separations between randomized and deterministic parallel query complexities analogous to items 1-3.

Cite as

Joseph Carolan, Amin Shiraz Gilani, and Mahathi Vempati. Quantum Advantage and Lower Bounds in Parallel Query Complexity. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 31:1-31:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{carolan_et_al:LIPIcs.ITCS.2025.31,
  author =	{Carolan, Joseph and Gilani, Amin Shiraz and Vempati, Mahathi},
  title =	{{Quantum Advantage and Lower Bounds in Parallel Query Complexity}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{31:1--31:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.31},
  URN =		{urn:nbn:de:0030-drops-226597},
  doi =		{10.4230/LIPIcs.ITCS.2025.31},
  annote =	{Keywords: Computational complexity theory, quantum, lower bounds, parallel}
}
Succinct Fermion Data Structures

Authors: Joseph Carolan and Luke Schaeffer


Abstract
Simulating fermionic systems on a quantum computer requires representing fermionic states using qubits. The complexity of many simulation algorithms depends on the complexity of implementing rotations generated by fermionic creation-annihilation operators, and the space depends on the number of qubits used. While standard fermion encodings like Jordan-Wigner are space optimal for arbitrary fermionic systems, physical symmetries like particle conservation can reduce the number of physical configurations, allowing improved space complexity. Such space saving is only feasible if the gate overhead is small, suggesting a (quantum) data structures problem, wherein one would like to minimize space used to represent a fermionic state, while still enabling efficient rotations. We define a structure which naturally captures mappings from fermions to systems of qubits. We then instantiate it in two ways, giving rise to two new second-quantized fermion encodings of F fermions in M modes. An information-theoretic minimum of I := ⌈log₂ binom(M,F)⌉ qubits is required for such systems, a bound we nearly match over the entire parameter regime. 1) Our first construction uses I + o(I) qubits when F = o(M), and allows rotations generated by creation-annihilation operators in O(I) gates and O(log M log log M) depth. 2) Our second construction uses I + O(1) qubits when F = Θ(M), and allows rotations generated by creation-annihilation operators in O(I³) gates. In relation to comparable prior work, the first represents a polynomial improvement in both space and gate complexity (against Kirby et al. 2022), and the second represents an exponential improvement in gate complexity at the cost of only a constant number of additional qubits (against Harrison et al. or Shee et al. 2022), in the described parameter regimes.
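The information-theoretic bound is easy to evaluate numerically: I = ⌈log₂ C(M,F)⌉ qubits, versus the M qubits of a Jordan-Wigner encoding. A quick sketch, using exact integer arithmetic to avoid floating-point rounding near powers of two (the example values M = 64, F = 4 are illustrative):

```python
from math import comb

def info_min_qubits(M, F):
    """ceil(log2(C(M, F))), computed exactly: for any integer c >= 1,
    ceil(log2(c)) == (c - 1).bit_length()."""
    return (comb(M, F) - 1).bit_length()

# F = 4 fermions in M = 64 modes: C(64, 4) = 635376, so 20 qubits suffice
# information-theoretically, versus 64 qubits under Jordan-Wigner.
dilute = info_min_qubits(64, 4)
```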

Cite as

Joseph Carolan and Luke Schaeffer. Succinct Fermion Data Structures. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 32:1-32:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{carolan_et_al:LIPIcs.ITCS.2025.32,
  author =	{Carolan, Joseph and Schaeffer, Luke},
  title =	{{Succinct Fermion Data Structures}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{32:1--32:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.32},
  URN =		{urn:nbn:de:0030-drops-226600},
  doi =		{10.4230/LIPIcs.ITCS.2025.32},
  annote =	{Keywords: quantum computing, data structures, fermion encodings}
}
Bosonic Quantum Computational Complexity

Authors: Ulysse Chabaud, Michael Joseph, Saeed Mehraban, and Arsalan Motamedi


Abstract
In recent years, quantum computing involving physical systems with continuous degrees of freedom, such as the bosonic quantum states of light, has attracted significant interest. However, a well-defined quantum complexity theory for these bosonic computations over infinite-dimensional Hilbert spaces is missing. In this work, we lay the foundations for such a research program. We introduce natural complexity classes and problems based on bosonic generalizations of BQP, the local Hamiltonian problem, and QMA. We uncover several relationships and subtle differences between standard Boolean classical and discrete-variable quantum complexity classes, and identify outstanding open problems. Our main contributions include the following: 1) Bosonic computations. We show that the power of Gaussian computations up to logspace reductions is equivalent to bounded-error quantum logspace (BQL, characterized by the problem of inverting well-conditioned matrices). More generally, we define classes of continuous-variable quantum polynomial time computations with a bounded probability of error (CVBQP) based on gates generated by polynomial bosonic Hamiltonians and particle-number measurements. Due to the infinite-dimensional Hilbert space, it is not a priori clear whether a decidable upper bound can be obtained for these classes. We identify complete problems for these classes, and we demonstrate a BQP lower bound and an EXPSPACE upper bound by proving bounds on the average energy throughout the computation. We further show that the problem of computing expectation values of polynomial bosonic observables at the output of bosonic quantum circuits using Gaussian and cubic phase gates is in PSPACE. 2) Bosonic ground energy problems. We prove that the problem of deciding whether the spectrum of a bosonic Hamiltonian is bounded from below is co-NP-hard. Furthermore, we show that the problem of finding the minimum energy of a bosonic Hamiltonian critically depends on the non-Gaussian stellar rank of the family of energy-constrained states one optimizes over: for zero stellar rank, i.e., optimizing over Gaussian states, it is NP-complete; for polynomially-bounded stellar rank, it is in QMA; for unbounded stellar rank, it is RE-hard, i.e., undecidable.

Cite as

Ulysse Chabaud, Michael Joseph, Saeed Mehraban, and Arsalan Motamedi. Bosonic Quantum Computational Complexity. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 33:1-33:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{chabaud_et_al:LIPIcs.ITCS.2025.33,
  author =	{Chabaud, Ulysse and Joseph, Michael and Mehraban, Saeed and Motamedi, Arsalan},
  title =	{{Bosonic Quantum Computational Complexity}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{33:1--33:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.33},
  URN =		{urn:nbn:de:0030-drops-226612},
  doi =		{10.4230/LIPIcs.ITCS.2025.33},
  annote =	{Keywords: continuous-variable quantum computing, infinite-dimensional quantum systems, stellar rank, Hamiltonian complexity}
}
Directed Hypercube Routing, a Generalized Lehman-Ron Theorem, and Monotonicity Testing

Authors: Deeparnab Chakrabarty and C. Seshadhri


Abstract
Motivated by applications to monotonicity testing, Lehman and Ron (JCTA, 2001) proved the existence of a collection of vertex disjoint paths between comparable sub-level sets in the directed hypercube. The main technical contribution of this paper is a new proof method that yields a generalization of their theorem: we prove the existence of two edge-disjoint collections of vertex disjoint paths. Our main conceptual contributions are conjectures on directed hypercube flows with simultaneous vertex and edge capacities of which our generalized Lehman-Ron theorem is a special case. We show that these conjectures imply directed isoperimetric theorems, and in particular, the robust directed Talagrand inequality due to Khot, Minzer, and Safra (SIAM J. on Comp, 2018). These isoperimetric inequalities, that relate the directed surface area (of a set in the hypercube) to its distance to monotonicity, have been crucial in obtaining the best monotonicity testers for Boolean functions. We believe our conjectures pave the way towards combinatorial proofs of these directed isoperimetry theorems.
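The "distance to monotonicity" appearing in these isoperimetric inequalities can be computed by brute force for very small n, which helps fix the definitions. An illustrative sketch (the search is over all 2^(2^n) functions, so only tiny n are feasible):

```python
from itertools import product

def is_monotone(f, n):
    """f is a tuple of 2^n bits indexed by input bitmask; monotone iff
    turning any input bit from 0 to 1 never flips the output 1 -> 0."""
    return all(f[x] <= f[x | (1 << i)]
               for x in range(2 ** n) for i in range(n))

def dist_to_monotone(f, n):
    """Fraction of points on which f must change to become monotone,
    found by exhausting all boolean functions on n variables."""
    N = 2 ** n
    best = N
    for g in product((0, 1), repeat=N):
        if is_monotone(g, n):
            best = min(best, sum(a != b for a, b in zip(f, g)))
    return best / N

# NAND on 2 variables (indexed 00, 01, 10, 11) is anti-monotone "at the
# top": flipping its value at input 11 makes it the constant-1 function.
nand = (1, 1, 1, 0)
```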

Cite as

Deeparnab Chakrabarty and C. Seshadhri. Directed Hypercube Routing, a Generalized Lehman-Ron Theorem, and Monotonicity Testing. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{chakrabarty_et_al:LIPIcs.ITCS.2025.34,
  author =	{Chakrabarty, Deeparnab and Seshadhri, C.},
  title =	{{Directed Hypercube Routing, a Generalized Lehman-Ron Theorem, and Monotonicity Testing}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.34},
  URN =		{urn:nbn:de:0030-drops-226623},
  doi =		{10.4230/LIPIcs.ITCS.2025.34},
  annote =	{Keywords: Monotonicity testing, isoperimetric inequalities, hypercube routing}
}
Document
Unraveling Universally Closest Refinements via Symmetric Density Decomposition and Fisher Market Equilibrium

Authors: T-H. Hubert Chan and Quan Xue


Abstract
We investigate the closest distribution refinements problem, which involves a vertex-weighted bipartite graph as input, where the vertex weights on each side sum to 1 and represent a probability distribution. A refinement of one side’s distribution is an edge distribution that corresponds to distributing the weight of each vertex from that side to its incident edges. The objective is to identify a pair of distribution refinements for both sides of the bipartite graph such that the two edge distributions are as close as possible with respect to a specific divergence notion. This problem generalizes transportation, which corresponds to the special case in which the two closest distributions are identical. The problem has recently emerged in the context of composing differentially oblivious mechanisms. Our main result demonstrates that a universal refinement pair exists, which is simultaneously closest under all divergence notions that satisfy the data processing inequality. Since differential obliviousness can be examined using various divergence notions, such a universally closest refinement pair offers a powerful tool in relation to such applications. We discover that this pair can be achieved via locally verifiable optimality conditions. Specifically, we observe that it is equivalent to the following problems, which have been traditionally studied in distinct research communities: (1) hypergraph density decomposition, and (2) symmetric Fisher market equilibrium. We adopt a symmetric perspective of hypergraph density decomposition, in which hyperedges and nodes play equivalent roles. This symmetric decomposition serves as a tool for deriving precise characterizations of optimal solutions for other problems and enables the application of algorithms from one problem to another. This connection allows existing algorithms for computing or approximating the Fisher market equilibrium to be adapted for all the aforementioned problems.
For example, this approach allows the well-known iterative proportional response process to provide approximations for the corresponding problems with multiplicative error in distributed settings, whereas previously, only absolute error had been achieved in these contexts. Our study contributes to the understanding of various problems within a unified framework, which may serve as a foundation for connecting other problems in the future.

Cite as

T-H. Hubert Chan and Quan Xue. Unraveling Universally Closest Refinements via Symmetric Density Decomposition and Fisher Market Equilibrium. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 35:1-35:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{chan_et_al:LIPIcs.ITCS.2025.35,
  author =	{Chan, T-H. Hubert and Xue, Quan},
  title =	{{Unraveling Universally Closest Refinements via Symmetric Density Decomposition and Fisher Market Equilibrium}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{35:1--35:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.35},
  URN =		{urn:nbn:de:0030-drops-226633},
  doi =		{10.4230/LIPIcs.ITCS.2025.35},
  annote =	{Keywords: closest distribution refinements, density decomposition, Fisher market equilibrium}
}
Document
Extended Abstract
Entry-Specific Matrix Estimation Under Arbitrary Sampling Patterns Through the Lens of Network Flows (Extended Abstract)

Authors: Yudong Chen, Xumei Xi, and Christina Lee Yu


Abstract
Matrix completion tackles the task of predicting missing values in a low-rank matrix based on a sparse set of observed entries. It is often assumed that the observation pattern is generated uniformly at random or has a very specific structure tuned to a given algorithm. There is still a gap in our understanding when it comes to arbitrary sampling patterns. Given an arbitrary sampling pattern, we introduce a matrix completion algorithm based on network flows in the bipartite graph induced by the observation pattern. For additive matrices, we show that the electrical flow is optimal, and we establish error upper bounds customized to each entry as a function of the observation set, along with matching minimax lower bounds. Our results show that the minimax squared error for recovery of a particular entry in the matrix is proportional to the effective resistance of the corresponding edge in the graph. Furthermore, we show that the electrical flow estimator is equivalent to the least squares estimator. We apply our estimator to the two-way fixed effects model and show that it enables us to accurately infer individual causal effects and the unit-specific and time-specific confounders. For rank-1 matrices, we use edge-disjoint paths to form an estimator that achieves minimax optimal estimation when the sampling is sufficiently dense. Our discovery introduces a new family of estimators parametrized by network flows, which provide a fine-grained and intuitive understanding of the impact of the given sampling pattern on the difficulty of estimation at an entry-specific level. This graph-based approach allows us to quantify the inherent complexity of matrix completion for individual entries, rather than relying solely on global measures of performance.
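The entry-wise error characterization above is concrete to compute: the effective resistance of an edge is a quadratic form in the Moore-Penrose pseudoinverse of the graph Laplacian. A small numpy sketch of this standard computation (not the paper's estimator itself):

```python
import numpy as np

def effective_resistance(n, edges):
    # build the Laplacian of an unweighted graph on n nodes
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    Lp = np.linalg.pinv(L)  # pseudoinverse handles the all-ones null space
    def R(u, v):
        # effective resistance = (e_u - e_v)^T L^+ (e_u - e_v)
        e = np.zeros(n); e[u], e[v] = 1.0, -1.0
        return float(e @ Lp @ e)
    return R

# path 0-1-2: resistance between the endpoints is 2 (two unit resistors in series)
R = effective_resistance(3, [(0, 1), (1, 2)])
print(round(R(0, 2), 6))  # → 2.0
```

Per the abstract, the minimax squared error for recovering an entry scales with exactly this quantity for the edge induced by the observation pattern.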

Cite as

Yudong Chen, Xumei Xi, and Christina Lee Yu. Entry-Specific Matrix Estimation Under Arbitrary Sampling Patterns Through the Lens of Network Flows (Extended Abstract). In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 36:1-36:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{chen_et_al:LIPIcs.ITCS.2025.36,
  author =	{Chen, Yudong and Xi, Xumei and Yu, Christina Lee},
  title =	{{Entry-Specific Matrix Estimation Under Arbitrary Sampling Patterns Through the Lens of Network Flows}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{36:1--36:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.36},
  URN =		{urn:nbn:de:0030-drops-226642},
  doi =		{10.4230/LIPIcs.ITCS.2025.36},
  annote =	{Keywords: Matrix estimation, network flows, panel data analysis, non-uniform sampling}
}
Document
A Lower Bound on the Trace Norm of Boolean Matrices and Its Applications

Authors: Tsun-Ming Cheung, Hamed Hatami, Kaave Hosseini, Aleksandar Nikolov, Toniann Pitassi, and Morgan Shirley


Abstract
We present a simple method based on a variant of Hölder’s inequality to lower-bound the trace norm of Boolean matrices. As the main result, we obtain an exponential separation between the randomized decision tree depth and the spectral norm (i.e., the Fourier L₁-norm) of a Boolean function. This answers an open question of Cheung, Hatami, Hosseini and Shirley (CCC 2023). As immediate consequences, we obtain the following results.
- We give an exponential separation between the logarithm of the randomized and the deterministic parity decision tree size. This is in sharp contrast with the standard binary decision tree setting where the logarithms of randomized and deterministic decision tree size are essentially polynomially related, as shown recently by Chattopadhyay, Dahiya, Mande, Radhakrishnan, and Sanyal (STOC 2023).
- We give an exponential separation between the approximate and the exact spectral norm for Boolean functions.
- We give an exponential separation for XOR functions between the deterministic communication complexity with oracle access to the Equality function (D^EQ) and randomized communication complexity. Previously, such a separation was known for general Boolean matrices by Chattopadhyay, Lovett, and Vinyals (CCC 2019) using the Integer Inner Product (IIP) function.
- Finally, our method gives an elementary and short proof of the aforementioned exponential D^EQ lower bound of Chattopadhyay, Lovett, and Vinyals for Integer Inner Product (IIP).
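The central quantity, the trace norm (sum of singular values), is easy to evaluate numerically for small matrices. A quick numpy sketch of the generic computation (not the paper's Hölder-based bound):

```python
import numpy as np

def trace_norm(M):
    # trace norm = sum of singular values (also called the nuclear norm)
    return float(np.linalg.svd(np.asarray(M, float), compute_uv=False).sum())

# a 4x4 Hadamard matrix: all singular values equal sqrt(n), so the trace
# norm is n*sqrt(n) = 8, the maximum possible for an n x n ±1 matrix
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)
print(trace_norm(np.eye(4)), round(trace_norm(H4), 6))  # → 4.0 8.0
```

The contrast between the identity (trace norm n) and a Hadamard matrix (trace norm n√n) illustrates how the trace norm distinguishes matrices of identical size and entry magnitude.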

Cite as

Tsun-Ming Cheung, Hamed Hatami, Kaave Hosseini, Aleksandar Nikolov, Toniann Pitassi, and Morgan Shirley. A Lower Bound on the Trace Norm of Boolean Matrices and Its Applications. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{cheung_et_al:LIPIcs.ITCS.2025.37,
  author =	{Cheung, Tsun-Ming and Hatami, Hamed and Hosseini, Kaave and Nikolov, Aleksandar and Pitassi, Toniann and Shirley, Morgan},
  title =	{{A Lower Bound on the Trace Norm of Boolean Matrices and Its Applications}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{37:1--37:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.37},
  URN =		{urn:nbn:de:0030-drops-226654},
  doi =		{10.4230/LIPIcs.ITCS.2025.37},
  annote =	{Keywords: Boolean function complexity, parity decision trees, randomized communication complexity}
}
Document
Backdoor Defense, Learnability and Obfuscation

Authors: Paul Christiano, Jacob Hilton, Victor Lecomte, and Mark Xu


Abstract
We introduce a formal notion of defendability against backdoors using a game between an attacker and a defender. In this game, the attacker modifies a function to behave differently on a particular input known as the "trigger", while behaving the same almost everywhere else. The defender then attempts to detect the trigger at evaluation time. If the defender succeeds with high enough probability, then the function class is said to be defendable. The key constraint on the attacker that makes defense possible is that the attacker’s strategy must work for a randomly-chosen trigger. Our definition is simple and does not explicitly mention learning, yet we demonstrate that it is closely connected to learnability. In the computationally unbounded setting, we use a voting algorithm of [Hanneke et al., 2022] to show that defendability is essentially determined by the VC dimension of the function class, in much the same way as PAC learnability. In the computationally bounded setting, we use a similar argument to show that efficient PAC learnability implies efficient defendability, but not conversely. On the other hand, we use indistinguishability obfuscation to show that the class of polynomial size circuits is not efficiently defendable. Finally, we present polynomial size decision trees as a natural example for which defense is strictly easier than learning. Thus, we identify efficient defendability as a notable intermediate concept in between efficient learnability and obfuscation.

Cite as

Paul Christiano, Jacob Hilton, Victor Lecomte, and Mark Xu. Backdoor Defense, Learnability and Obfuscation. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 38:1-38:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{christiano_et_al:LIPIcs.ITCS.2025.38,
  author =	{Christiano, Paul and Hilton, Jacob and Lecomte, Victor and Xu, Mark},
  title =	{{Backdoor Defense, Learnability and Obfuscation}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{38:1--38:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.38},
  URN =		{urn:nbn:de:0030-drops-226662},
  doi =		{10.4230/LIPIcs.ITCS.2025.38},
  annote =	{Keywords: backdoors, machine learning, PAC learning, indistinguishability obfuscation}
}
Document
Data Reconstruction: When You See It and When You Don't

Authors: Edith Cohen, Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer, and Eliad Tsfadia


Abstract
We revisit the fundamental question of formally defining what constitutes a reconstruction attack. While often clear from the context, our exploration reveals that a precise definition is much more nuanced than it appears, to the extent that a single all-encompassing definition may not exist. Thus, we employ a different strategy and aim to "sandwich" the concept of reconstruction attacks by addressing two complementary questions: (i) What conditions guarantee that a given system is protected against such attacks? (ii) Under what circumstances does a given attack clearly indicate that a system is not protected? More specifically:
- We introduce a new definitional paradigm, Narcissus Resiliency, to formulate a security definition for protection against reconstruction attacks. This paradigm has a self-referential nature that enables it to circumvent shortcomings of previously studied notions of security. Furthermore, as a side effect, we demonstrate that Narcissus resiliency captures as special cases multiple well-studied concepts, including differential privacy and other security notions of one-way functions and encryption schemes.
- We formulate a link between reconstruction attacks and Kolmogorov complexity. This allows us to put forward a criterion for evaluating when such attacks are convincingly successful.

Cite as

Edith Cohen, Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer, and Eliad Tsfadia. Data Reconstruction: When You See It and When You Don't. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 39:1-39:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{cohen_et_al:LIPIcs.ITCS.2025.39,
  author =	{Cohen, Edith and Kaplan, Haim and Mansour, Yishay and Moran, Shay and Nissim, Kobbi and Stemmer, Uri and Tsfadia, Eliad},
  title =	{{Data Reconstruction: When You See It and When You Don't}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{39:1--39:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.39},
  URN =		{urn:nbn:de:0030-drops-226674},
  doi =		{10.4230/LIPIcs.ITCS.2025.39},
  annote =	{Keywords: differential privacy, reconstruction}
}
Document
Derandomized Squaring: An Analytical Insight into Its True Behavior

Authors: Gil Cohen, Itay Cohen, Gal Maor, and Yuval Peled


Abstract
The notion of the derandomized square of two graphs, denoted as G ⓢ H, was introduced by Rozenman and Vadhan as they rederived Reingold’s Theorem, SL = L. This pseudorandom primitive, closely related to the Zig-Zag product, plays a crucial role in recent advancements on space-bounded derandomization. For this and other reasons, understanding the spectral expansion λ(G ⓢ H) becomes paramount. Rozenman and Vadhan derived an upper bound for λ(G ⓢ H) in terms of the spectral expansions of the individual graphs, λ(G) and λ(H). They also proved their bound is optimal if the only information incorporated into the bound is the spectral expansion of the two graphs. The objective of this work is to gain deeper insights into the behavior of derandomized squaring by taking into account the entire spectrum of H, where we focus on a vertex-transitive c-regular H. Utilizing deep results from analytic combinatorics, we establish a lower bound on λ(G ⓢ H) that applies universally to all graphs G. Our work reveals that the bound is the minimum value of the function d⋅x - d(d-1)χ_x(H)/χ_x'(H) in the domain (c,∞), where χ_x(H) is the characteristic polynomial of the d-vertex graph H. This bound lies far below the known upper bound for λ(G ⓢ H) for most reasonable choices for H. Empirical evidence suggests that our lower bound is optimal. We support the tightness of our lower bound by showing that the bound is tight for a class of graphs which exhibit local behavior similar to a derandomized squaring operation with H. To this end, we make use of finite free probability theory. In our second result, we resolve an open question posed by Cohen and Maor (STOC 2023) and establish a lower bound for the spectral expansion of rotating expanders. These graphs are constructed by taking a random walk with vertex permutations occurring after each step. We prove that Cohen and Maor’s construction is essentially optimal.
Unlike our results on derandomized squaring, the proof in this instance relies solely on combinatorial methods. The key insight lies in establishing a connection between random walks on graph products and the Fuss-Catalan numbers.
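Spectral expansion is concrete to compute for small graphs. A sketch of the standard definition (second-largest absolute eigenvalue of the random-walk normalized adjacency matrix), assuming a d-regular input:

```python
import numpy as np

def spectral_expansion(A):
    # A: adjacency matrix of a d-regular graph;
    # λ = second-largest absolute eigenvalue of the random-walk matrix A/d
    d = A.sum(axis=1)[0]
    lam = np.sort(np.abs(np.linalg.eigvalsh(A / d)))[::-1]
    return float(lam[1])

# K4 is a good expander (λ = 1/3); the 4-cycle is not (λ = 1, it is bipartite)
K4 = np.ones((4, 4)) - np.eye(4)
C4 = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], float)
print(round(spectral_expansion(K4), 6), round(spectral_expansion(C4), 6))  # → 0.333333 1.0
```

The bounds discussed in the abstract constrain this λ for the derandomized square in terms of the spectra of the two constituent graphs.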

Cite as

Gil Cohen, Itay Cohen, Gal Maor, and Yuval Peled. Derandomized Squaring: An Analytical Insight into Its True Behavior. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 40:1-40:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{cohen_et_al:LIPIcs.ITCS.2025.40,
  author =	{Cohen, Gil and Cohen, Itay and Maor, Gal and Peled, Yuval},
  title =	{{Derandomized Squaring: An Analytical Insight into Its True Behavior}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{40:1--40:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.40},
  URN =		{urn:nbn:de:0030-drops-226681},
  doi =		{10.4230/LIPIcs.ITCS.2025.40},
  annote =	{Keywords: Derandomized Squaring, Spectral Graph Theory, Analytic Combinatorics}
}
Document
Query Complexity of Stochastic Minimum Vertex Cover

Authors: Mahsa Derakhshan, Mohammad Saneian, and Zhiyang Xun


Abstract
We study the stochastic minimum vertex cover problem for general graphs. In this problem, we are given a graph G = (V, E) and an existence probability p_e for each edge e ∈ E. Edges of G are realized (or exist) independently with these probabilities, forming the realized subgraph 𝒢. The existence of an edge in 𝒢 can only be verified using edge queries. The goal of this problem is to find a near-optimal vertex cover of 𝒢 using a small number of queries. Previous work by Derakhshan, Durvasula, and Haghtalab [STOC 2023] established the existence of 1.5 + ε approximation algorithms for this problem with O(n/ε) queries. They also show that, under mild correlation among edge realizations, beating this approximation ratio requires querying a subgraph of size Ω(n ⋅ RS(n)). Here, RS(n) refers to Ruzsa-Szemerédi Graphs and represents the largest number of induced edge-disjoint matchings of size Θ(n) in an n-vertex graph. In this work, we design a simple algorithm for finding a (1 + ε) approximate vertex cover by querying a subgraph of size O(n ⋅ RS(n)) for any absolute constant ε > 0. Our algorithm can tolerate up to O(n ⋅ RS(n)) correlated edges, hence effectively completing our understanding of the problem under mild correlation.
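For intuition, here is the trivial baseline that the query-efficient algorithms improve upon: query every edge (m queries rather than a subgraph of size O(n ⋅ RS(n))) and cover the realized subgraph with the endpoints of a greedy maximal matching, a classical 2-approximate vertex cover. Illustrative Python only, not the paper's algorithm:

```python
import random

def query_all_and_cover(edges, p, rng):
    # naive baseline: query every edge once (m queries), then cover the
    # realized subgraph with the endpoints of a greedy maximal matching
    # (a classical 2-approximation for vertex cover)
    realized = [e for e in edges if rng.random() < p[e]]
    cover = set()
    for u, v in realized:
        if u not in cover and v not in cover:
            cover |= {u, v}
    return realized, cover

rng = random.Random(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
p = {e: 0.8 for e in edges}  # hypothetical existence probabilities
realized, cover = query_all_and_cover(edges, p, rng)
assert all(u in cover or v in cover for u, v in realized)  # valid cover of 𝒢
```

The whole difficulty addressed by the paper is matching the (1 + ε) guarantee while querying far fewer edges than this baseline does.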

Cite as

Mahsa Derakhshan, Mohammad Saneian, and Zhiyang Xun. Query Complexity of Stochastic Minimum Vertex Cover. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 41:1-41:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{derakhshan_et_al:LIPIcs.ITCS.2025.41,
  author =	{Derakhshan, Mahsa and Saneian, Mohammad and Xun, Zhiyang},
  title =	{{Query Complexity of Stochastic Minimum Vertex Cover}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{41:1--41:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.41},
  URN =		{urn:nbn:de:0030-drops-226691},
  doi =		{10.4230/LIPIcs.ITCS.2025.41},
  annote =	{Keywords: Sublinear algorithms, Vertex cover, Query complexity}
}
Document
Nearest Neighbor Complexity and Boolean Circuits

Authors: Mason DiCicco, Vladimir Podolskii, and Daniel Reichman


Abstract
A nearest neighbor representation of a Boolean function f is a set of vectors (anchors) labeled by 0 or 1 such that f(x) = 1 if and only if the closest anchor to x is labeled by 1. This model was introduced by Hajnal, Liu and Turán [2022], who studied bounds on the minimum number of anchors required to represent Boolean functions under different choices of anchors (real vs. Boolean vectors) as well as the analogous model of k-nearest neighbors representations. We initiate a systematic study of the representational power of nearest and k-nearest neighbors through Boolean circuit complexity. To this end, we establish a close connection between Boolean functions with polynomial nearest neighbor complexity and those that can be efficiently represented by classes based on linear inequalities - min-plus polynomial threshold functions - previously studied in relation to threshold circuits. This extends an observation of Hajnal et al. [2022]. Next, we further extend the connection between nearest neighbor representations and circuits to the k-nearest neighbors case. As an outcome of these connections we obtain exponential lower bounds on the k-nearest neighbors complexity of explicit n-variate functions, assuming k ≤ n^{1-ε}. Previously, no superlinear lower bound was known for any k > 1. At the same time, we show that proving superpolynomial lower bounds for the k-nearest neighbors complexity of an explicit function for arbitrary k would require a breakthrough in circuit complexity. In addition, we prove an exponential separation between the nearest neighbor and k-nearest neighbors complexity (for unrestricted k) of an explicit function. These results address questions raised by [Hajnal et al., 2022] of proving strong lower bounds for k-nearest neighbors and understanding the role of the parameter k. Finally, we devise new bounds on the nearest neighbor complexity for several families of Boolean functions.
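The model is easy to state in code. As a standard example of a small representation, the majority function on an odd number of bits is represented by just two anchors, all-ones and all-zeros. A short sketch:

```python
from itertools import product

def nn_eval(anchors, x):
    # anchors: list of (vector, label); output the label of the closest
    # anchor under squared Euclidean distance (no ties arise below)
    def sq(a):
        return sum((ai - xi) ** 2 for ai, xi in zip(a, x))
    return min(anchors, key=lambda al: sq(al[0]))[1]

# majority needs only two anchors: the squared distance from x to
# all-ones is n - sum(x) and to all-zeros is sum(x)
n = 5
anchors = [((1,) * n, 1), ((0,) * n, 0)]
for x in product((0, 1), repeat=n):
    assert nn_eval(anchors, x) == (1 if sum(x) > n / 2 else 0)
print("MAJ_5 represented with 2 anchors")
```

The lower bounds in the paper show that, in contrast, explicit functions exist whose k-nearest neighbors complexity is exponential for k ≤ n^{1-ε}.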

Cite as

Mason DiCicco, Vladimir Podolskii, and Daniel Reichman. Nearest Neighbor Complexity and Boolean Circuits. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 42:1-42:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dicicco_et_al:LIPIcs.ITCS.2025.42,
  author =	{DiCicco, Mason and Podolskii, Vladimir and Reichman, Daniel},
  title =	{{Nearest Neighbor Complexity and Boolean Circuits}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{42:1--42:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.42},
  URN =		{urn:nbn:de:0030-drops-226704},
  doi =		{10.4230/LIPIcs.ITCS.2025.42},
  annote =	{Keywords: Complexity, Nearest Neighbors, Circuits}
}
Document
Space Complexity of Minimum Cut Problems in Single-Pass Streams

Authors: Matthew Ding, Alexandro Garces, Jason Li, Honghao Lin, Jelani Nelson, Vihan Shah, and David P. Woodruff


Abstract
We consider the problem of finding a minimum cut of a weighted graph presented as a single-pass stream. While graph sparsification in streams has been intensively studied, the specific application of finding minimum cuts in streams is less well-studied. To this end, we show upper and lower bounds on minimum cut problems in insertion-only streams for a variety of settings, including for both randomized and deterministic algorithms, for both arbitrary and random order streams, and for both approximate and exact algorithms. One of our main results is an Õ(n/ε) space algorithm with fast update time for approximating a spectral cut query with high probability on a stream given in an arbitrary order. Our result breaks the Ω(n/ε²) space lower bound required of a sparsifier that approximates all cuts simultaneously. Using this result, we provide streaming algorithms with near-optimal space of Õ(n/ε) for minimum cut and approximate all-pairs effective resistances, with matching space lower bounds. The amortized update time of our algorithms is Õ(1), provided that the number of edges in the input graph is at least (n/ε²)^{1+o(1)}. We also give a generic way of incorporating sketching into a recursive contraction algorithm to improve the post-processing time of our algorithms. In addition to these results, we give a random-order streaming algorithm that computes the exact minimum cut on a simple, unweighted graph using Õ(n) space. Finally, we give an Ω(n/ε²) space lower bound for deterministic minimum cut algorithms which matches the best-known upper bound up to polylogarithmic factors.
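The recursive contraction idea mentioned above descends from Karger's random contraction algorithm. A self-contained offline sketch on a toy graph (illustrative only, not the paper's streaming algorithm):

```python
import random

def karger_min_cut(n, edges, trials, seed=0):
    # repeatedly contract uniformly random edges until two supernodes remain;
    # return the smallest crossing-edge count seen over all trials
    rng = random.Random(seed)
    best = len(edges)
    for _ in range(trials):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        comps = n
        while comps > 2:
            u, v = edges[rng.randrange(len(edges))]
            ru, rv = find(u), find(v)
            if ru != rv:       # contracting a crossing edge merges two supernodes
                parent[ru] = rv
                comps -= 1
        best = min(best, sum(1 for u, v in edges if find(u) != find(v)))
    return best

# two triangles joined by one bridge edge: the minimum cut is 1
edges = [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (2,3)]
print(karger_min_cut(6, edges, trials=200))
```

Each trial preserves a fixed minimum cut with probability Ω(1/n²), so Õ(n²) independent trials succeed with high probability; the paper's contribution is combining such contractions with sketches to reduce post-processing time in the streaming setting.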

Cite as

Matthew Ding, Alexandro Garces, Jason Li, Honghao Lin, Jelani Nelson, Vihan Shah, and David P. Woodruff. Space Complexity of Minimum Cut Problems in Single-Pass Streams. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 43:1-43:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{ding_et_al:LIPIcs.ITCS.2025.43,
  author =	{Ding, Matthew and Garces, Alexandro and Li, Jason and Lin, Honghao and Nelson, Jelani and Shah, Vihan and Woodruff, David P.},
  title =	{{Space Complexity of Minimum Cut Problems in Single-Pass Streams}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{43:1--43:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.43},
  URN =		{urn:nbn:de:0030-drops-226714},
  doi =		{10.4230/LIPIcs.ITCS.2025.43},
  annote =	{Keywords: minimum cut, approximate, random order, lower bound}
}
Document
Learning-Augmented Streaming Algorithms for Approximating MAX-CUT

Authors: Yinhao Dong, Pan Peng, and Ali Vakilian


Abstract
We study learning-augmented streaming algorithms for estimating the value of MAX-CUT in a graph. In the classical streaming model, while a 1/2-approximation for estimating the value of MAX-CUT can be trivially achieved with O(1) words of space, Kapralov and Krachun [STOC’19] showed that this is essentially the best possible: for any ε > 0, any (randomized) single-pass streaming algorithm that achieves an approximation ratio of at least 1/2 + ε requires Ω(n / 2^poly(1/ε)) space. We show that it is possible to surpass the 1/2-approximation barrier using just O(1) words of space by leveraging a (machine learned) oracle. Specifically, we consider streaming algorithms that are equipped with an ε-accurate oracle that, for each vertex in the graph, returns its correct label in {-1, +1} (corresponding to an optimal MAX-CUT solution in the graph) with probability 1/2 + ε, and the incorrect label otherwise. Within this framework, we present a single-pass algorithm that approximates the value of MAX-CUT to within a factor of 1/2 + Ω(ε²) with probability at least 2/3 for insertion-only streams, using only poly(1/ε) words of space. We also extend our algorithm to fully dynamic streams while maintaining a space complexity of poly(1/ε,log n) words.
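The 1/2 + Ω(ε²) scaling has a simple source: if each endpoint of an optimally cut edge keeps its label independently with probability 1/2 + ε, the edge stays cut with probability (1/2+ε)² + (1/2-ε)² = 1/2 + 2ε². A toy simulation of that phenomenon (a naive one-pass count of the oracle's cut; the paper's actual algorithm is more refined and uses only poly(1/ε) words):

```python
import random

def oracle_cut_fraction(edges, opt_label, eps, rng):
    # corrupt each optimal label independently (kept with probability 1/2 + eps),
    # then count the edges that the noisy labeling cuts in one pass over the stream
    noisy = {v: l if rng.random() < 0.5 + eps else -l for v, l in opt_label.items()}
    cut = sum(1 for u, v in edges if noisy[u] != noisy[v])
    return cut / len(edges)

# complete bipartite K_{50,50}: the optimal labeling cuts every edge
edges = [(u, v) for u in range(50) for v in range(50, 100)]
opt = {v: (1 if v < 50 else -1) for v in range(100)}
print(oracle_cut_fraction(edges, opt, eps=0.3, rng=random.Random(1)))
# concentrates near 1/2 + 2*eps^2 = 0.68
```

The graph, probabilities, and seed here are hypothetical choices for the demonstration; the point is only that the noisy oracle's cut already beats the 1/2 barrier on average.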

Cite as

Yinhao Dong, Pan Peng, and Ali Vakilian. Learning-Augmented Streaming Algorithms for Approximating MAX-CUT. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 44:1-44:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dong_et_al:LIPIcs.ITCS.2025.44,
  author =	{Dong, Yinhao and Peng, Pan and Vakilian, Ali},
  title =	{{Learning-Augmented Streaming Algorithms for Approximating MAX-CUT}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{44:1--44:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.44},
  URN =		{urn:nbn:de:0030-drops-226728},
  doi =		{10.4230/LIPIcs.ITCS.2025.44},
  annote =	{Keywords: Learning-Augmented Algorithms, Graph Streaming Algorithms, MAX-CUT}
}
Document
Distributed and Parallel Low-Diameter Decompositions for Arbitrary and Restricted Graphs

Authors: Jinfeng Dou, Thorsten Götte, Henning Hillebrandt, Christian Scheideler, and Julian Werthmann


Abstract
We consider the distributed and parallel construction of low-diameter decompositions with strong diameter. We present algorithms for arbitrary undirected, weighted graphs and also for undirected, weighted graphs that can be separated through k ∈ Õ(1) shortest paths. This class of graphs includes planar graphs, graphs of bounded treewidth, and graphs that exclude a fixed minor K_r. Our algorithms work in the PRAM, CONGEST, and the novel HYBRID communication models and are competitive in all relevant parameters. Given 𝒟 > 0, our low-diameter decomposition algorithm divides the graph into connected clusters of strong diameter 𝒟. For an arbitrary graph, an edge e ∈ E of length 𝓁_e is cut between two clusters with probability O(𝓁_e⋅log(n)/𝒟). If the graph can be separated by k ∈ Õ(1) paths, the probability improves to O(𝓁_e⋅log(log n)/𝒟). In either case, the decompositions can be computed in Õ(1) depth and Õ(m) work in the PRAM and in Õ(1) time in the HYBRID model. In CONGEST, the runtimes are Õ(HD + √n) and Õ(HD), respectively. All these results hold w.h.p. Broadly speaking, we present distributed and parallel implementations of sequential divide-and-conquer algorithms where we replace exact shortest paths with approximate shortest paths. In contrast to exact paths, these can be efficiently computed in the distributed and parallel setting [STOC '22]. Further, and perhaps more importantly, we show that instead of explicitly computing vertex separators to enable efficient parallelization of these algorithms, it suffices to sample a few random paths of bounded length and the nodes close to them. Thereby, we do not require complex embeddings whose implementation is unknown in the distributed and parallel setting.

Cite as

Jinfeng Dou, Thorsten Götte, Henning Hillebrandt, Christian Scheideler, and Julian Werthmann. Distributed and Parallel Low-Diameter Decompositions for Arbitrary and Restricted Graphs. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 45:1-45:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dou_et_al:LIPIcs.ITCS.2025.45,
  author =	{Dou, Jinfeng and G\"{o}tte, Thorsten and Hillebrandt, Henning and Scheideler, Christian and Werthmann, Julian},
  title =	{{Distributed and Parallel Low-Diameter Decompositions for Arbitrary and Restricted Graphs}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{45:1--45:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.45},
  URN =		{urn:nbn:de:0030-drops-226734},
  doi =		{10.4230/LIPIcs.ITCS.2025.45},
  annote =	{Keywords: Distributed Graph Algorithms, Network Decomposition, Excluded Minor}
}
Document
Data-Driven Solution Portfolios

Authors: Marina Drygala, Silvio Lattanzi, Andreas Maggiori, Miltiadis Stouras, Ola Svensson, and Sergei Vassilvitskii


Abstract
In this paper, we consider a new problem of portfolio optimization using stochastic information. In a setting where there is some uncertainty, we ask how to best select k potential solutions, with the goal of optimizing the value of the best solution. More formally, given a combinatorial problem Π, a set of value functions 𝒱 over the solutions of Π, and a distribution 𝒟 over 𝒱, our goal is to select k solutions of Π that maximize or minimize the expected value of the best of those solutions. For a simple example, consider the classic knapsack problem: given a universe of elements each with unit weight and a positive value, the task is to select r elements maximizing the total value. Now suppose that each element’s weight comes from a (known) distribution. How should we select k different solutions so that one of them is likely to yield a high value? In this work, we tackle this basic problem, and generalize it to the setting where the underlying set system forms a matroid. On the technical side, it is clear that the candidate solutions we select must be diverse and anti-correlated; however, it is not clear how to do so efficiently. Our main result is a polynomial-time algorithm that constructs a portfolio within a constant factor of the optimal.
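The portfolio objective, the expected value of the best of the k selected solutions, can be illustrated on a toy instance; the scenarios, probabilities, and per-scenario solution values below are hypothetical numbers, not from the paper. Note how the anti-correlated pair beats any pair involving the "safe" middle solution, echoing the diversity point above.

```python
# Hypothetical illustration of the portfolio objective E[max over the
# selected solutions]; all numbers are made up for intuition only.
import itertools

def expected_best(portfolio, values, probs):
    """Expectation over scenarios of the best value among selected solutions."""
    return sum(p * max(values[s][i] for s in portfolio)
               for i, p in enumerate(probs))

# Three candidate solutions, two equally likely scenarios.
values = {"A": [10, 0], "B": [0, 10], "C": [6, 6]}
probs = [0.5, 0.5]

# With k = 2, the anti-correlated pair {A, B} beats pairing either with C.
best = max(itertools.combinations(values, 2),
           key=lambda port: expected_best(port, values, probs))
print(sorted(best), expected_best(best, values, probs))  # ['A', 'B'] 10.0
```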

Cite as

Marina Drygala, Silvio Lattanzi, Andreas Maggiori, Miltiadis Stouras, Ola Svensson, and Sergei Vassilvitskii. Data-Driven Solution Portfolios. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 46:1-46:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{drygala_et_al:LIPIcs.ITCS.2025.46,
  author =	{Drygala, Marina and Lattanzi, Silvio and Maggiori, Andreas and Stouras, Miltiadis and Svensson, Ola and Vassilvitskii, Sergei},
  title =	{{Data-Driven Solution Portfolios}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{46:1--46:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.46},
  URN =		{urn:nbn:de:0030-drops-226740},
  doi =		{10.4230/LIPIcs.ITCS.2025.46},
  annote =	{Keywords: solution portfolios, data-driven algorithm design, matroids}
}
Document
Concentration of Submodular Functions and Read-k Families Under Negative Dependence

Authors: Sharmila Duppala, George Z. Li, Juan Luque, Aravind Srinivasan, and Renata Valieva


Abstract
We study the question of whether submodular functions of random variables satisfying various notions of negative dependence satisfy Chernoff-like concentration inequalities. We prove such a concentration inequality for the lower tail when the random variables satisfy negative association or negative regression, partially resolving an open problem raised in [Frederick Qiu and Sahil Singla, 2022]. Previous work showed such concentration results for random variables that come from specific dependent-rounding algorithms [Chandra Chekuri et al., 2010; Nicholas J. A. Harvey and Neil Olver, 2014]. We discuss some applications of our results to combinatorial optimization and beyond. We also show applications to the concentration of read-k families [Dmitry Gavinsky et al., 2015] under certain forms of negative dependence; we further show a simplified proof of the entropy-method approach of [Dmitry Gavinsky et al., 2015].

Cite as

Sharmila Duppala, George Z. Li, Juan Luque, Aravind Srinivasan, and Renata Valieva. Concentration of Submodular Functions and Read-k Families Under Negative Dependence. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 47:1-47:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{duppala_et_al:LIPIcs.ITCS.2025.47,
  author =	{Duppala, Sharmila and Li, George Z. and Luque, Juan and Srinivasan, Aravind and Valieva, Renata},
  title =	{{Concentration of Submodular Functions and Read-k Families Under Negative Dependence}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{47:1--47:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.47},
  URN =		{urn:nbn:de:0030-drops-226751},
  doi =		{10.4230/LIPIcs.ITCS.2025.47},
  annote =	{Keywords: Chernoff bounds, Submodular Functions, Negative Correlation}
}
Document
Robust Restaking Networks

Authors: Naveen Durvasula and Tim Roughgarden


Abstract
We study the risks of validator reuse across multiple services in a restaking protocol. We characterize the robust security of a restaking network as a function of the buffer between the costs and profits from attacks. For example, our results imply that if attack costs always exceed attack profits by 10%, then a sudden loss of 0.1% of the overall stake (e.g., due to a software error) cannot result in the ultimate loss of more than 1.1% of the overall stake. We also provide local analogs of these overcollateralization conditions and robust security guarantees that apply specifically for a target service or coalition of services. All of our bounds on worst-case stake loss are the best possible. Finally, we bound the maximum-possible length of a cascade of attacks. Our results suggest measures of robustness that could be exposed to the participants in a restaking protocol. We also suggest polynomial-time computable sufficient conditions that can proxy for these measures.
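The numbers in the example above are consistent with a worst-case stake-loss bound of the form ψ(1 + 1/α), where α is the fractional buffer of attack costs over attack profits and ψ is the initial stake loss. This closed form is our reading of the quoted example, not a formula taken from the paper:

```python
# Assumed reading of the abstract's example: a bound of the form
# psi * (1 + 1/alpha). This formula is an inference from the quoted numbers
# (10% buffer, 0.1% initial loss, 1.1% ultimate loss), not from the paper.

def worst_case_loss(psi, alpha):
    """Stake-loss bound psi * (1 + 1/alpha) under the assumed reading."""
    return psi * (1 + 1 / alpha)

# 10% buffer, sudden loss of 0.1% of stake -> at most 1.1% ultimate loss.
print(round(worst_case_loss(0.001, 0.10), 6))  # 0.011
```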

Cite as

Naveen Durvasula and Tim Roughgarden. Robust Restaking Networks. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 48:1-48:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{durvasula_et_al:LIPIcs.ITCS.2025.48,
  author =	{Durvasula, Naveen and Roughgarden, Tim},
  title =	{{Robust Restaking Networks}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{48:1--48:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.48},
  URN =		{urn:nbn:de:0030-drops-226769},
  doi =		{10.4230/LIPIcs.ITCS.2025.48},
  annote =	{Keywords: Proof of stake, Restaking, Staking Risks}
}
Document
Extended Abstract
Confusion Matrix Design for Downstream Decision-Making (Extended Abstract)

Authors: Yiding Feng and Wei Tang


Abstract
We initiate the study of confusion matrix design. In this problem, an algorithm designer needs to generate a machine learning model (for a classification task from contexts to labels) which makes predictions for a population of downstream decision makers. The prediction accuracy of the machine learning model is characterized by its confusion matrix, which is a stochastic matrix where each entry encodes the probability that a given true label is predicted as another label. Each downstream decision maker faces a separate optimization task and will decide his binary action based on his own context, the realized prediction given his context, and the confusion matrix selected by the algorithm designer. Decision makers are heterogeneous, as they may hold different contexts. Both the decision makers and the algorithm designer will obtain utilities that are determined by the actions the decision makers take and their true labels. The goal of the algorithm designer is to design a public confusion matrix that is used for all decision makers, subject to some feasibility constraints, in order to maximize her net utility. We consider a general class of net utility functions, which could be a combination of both the decision makers' utilities and the algorithm designer's utility. Classic outcome-independent utility and utilitarian/Nash/egalitarian social welfare are all special cases of our net utility formulation. We study the above problem through an information design framework, where we view training a machine learning model as designing an information structure (signaling scheme) subject to some specific constraints motivated by the machine learning literature. By building the connection to public persuasion with heterogeneous priors, we design convex programming-based algorithms that compute the optimal confusion matrix subject to (i) post-processing constraints and (ii) receiver operating characteristic (ROC) constraints in polynomial time, respectively.
Besides the computational results, we also obtain analytical structural results and numerical results for the special cases of outcome-independent utility and social-aware utility, by utilizing the convex programming-based characterization of the optimal confusion matrix.
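The central object being designed, a row-stochastic confusion matrix, can be sketched numerically. The prior and matrix below are hypothetical, and the snippet only shows how a confusion matrix induces a distribution over predictions, not the paper's convex-programming design algorithm:

```python
# Minimal numeric sketch: a confusion matrix is a row-stochastic matrix C
# where C[y][y_hat] is the probability that true label y is predicted as
# y_hat. The prior and matrix below are hypothetical.

def prediction_distribution(prior, C):
    """Marginal distribution over predicted labels induced by prior and C."""
    k = len(C[0])
    return [sum(prior[y] * C[y][j] for y in range(len(prior)))
            for j in range(k)]

prior = [0.7, 0.3]   # P(true label = 0), P(true label = 1)
C = [[0.9, 0.1],     # each row sums to 1 (row-stochastic)
     [0.2, 0.8]]

print([round(p, 6) for p in prediction_distribution(prior, C)])  # [0.69, 0.31]
```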

Cite as

Yiding Feng and Wei Tang. Confusion Matrix Design for Downstream Decision-Making (Extended Abstract). In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, p. 49:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{feng_et_al:LIPIcs.ITCS.2025.49,
  author =	{Feng, Yiding and Tang, Wei},
  title =	{{Confusion Matrix Design for Downstream Decision-Making}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{49:1--49:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.49},
  URN =		{urn:nbn:de:0030-drops-226779},
  doi =		{10.4230/LIPIcs.ITCS.2025.49},
  annote =	{Keywords: Confusion Matrix, Downstream Decision-making, Public Persuasion, Human-AI Interaction, Information Design}
}
Document
Fully Characterizing Lossy Catalytic Computation

Authors: Marten Folkertsma, Ian Mertz, Florian Speelman, and Quinten Tupker


Abstract
A catalytic machine is a model of computation where a traditional space-bounded machine is augmented with an additional, significantly larger, "catalytic" tape, which, while being available as a work tape, has the caveat of being initialized with an arbitrary string, which must be preserved at the end of the computation. Despite this restriction, catalytic machines have been shown to have surprising additional power; a logspace machine with a polynomial-length catalytic tape, known as catalytic logspace (CL), can compute problems which are believed to be impossible for L. A fundamental question of the model is whether the catalytic condition, of leaving the catalytic tape in its exact original configuration, is robust to minor deviations. This study was initiated by Gupta et al. (2024), who defined lossy catalytic logspace (LCL[e]) as a variant of CL where we allow up to e errors when resetting the catalytic tape. They showed that LCL[e] = CL for any e = O(1), which remains the frontier of our understanding. In this work we completely characterize lossy catalytic space (LCSPACE[s,c,e]) in terms of ordinary catalytic space (CSPACE[s,c]). We show that LCSPACE[s,c,e] = CSPACE[Θ(s + e log c), Θ(c)]. In other words, allowing e errors on a catalytic tape of length c is equivalent, up to a constant stretch, to an errorless catalytic machine with an additional e log c bits of ordinary working memory. As a consequence, we show that for any e, LCL[e] = CL implies SPACE[e log n] ⊆ ZPP, thus giving a barrier to any improvement beyond LCL[O(1)] = CL. We also show equivalent results for non-deterministic and randomized catalytic space.

Cite as

Marten Folkertsma, Ian Mertz, Florian Speelman, and Quinten Tupker. Fully Characterizing Lossy Catalytic Computation. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 50:1-50:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{folkertsma_et_al:LIPIcs.ITCS.2025.50,
  author =	{Folkertsma, Marten and Mertz, Ian and Speelman, Florian and Tupker, Quinten},
  title =	{{Fully Characterizing Lossy Catalytic Computation}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{50:1--50:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.50},
  URN =		{urn:nbn:de:0030-drops-226786},
  doi =		{10.4230/LIPIcs.ITCS.2025.50},
  annote =	{Keywords: Space complexity, catalytic computation, error-correcting codes}
}
Document
Parameterized Geometric Graph Modification with Disk Scaling

Authors: Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Saket Saurabh, and Meirav Zehavi


Abstract
The parameterized analysis of graph modification problems represents the most extensively studied area within Parameterized Complexity. Given a graph G and an integer k ∈ ℕ as input, the goal is to determine whether we can perform at most k operations on G to transform it into a graph belonging to a specified graph class ℱ. Typical operations are combinatorial and include vertex deletions and edge deletions, insertions, and contractions. However, in many real-world scenarios, when the input graph is constrained to be a geometric intersection graph, the modification of the graph is influenced by changes in the geometric properties of the underlying objects themselves, rather than by combinatorial modifications. This raises the question of whether vertex deletions or adjacency modifications are necessarily the most appropriate modification operations for studying modifications of geometric graphs. We propose the study of disk intersection graph modification through the scaling of disks. This operation is typical in the realm of topology control but has not yet been explored in the context of Parameterized Complexity. We design parameterized algorithms and kernels for modifying to the most basic graph classes: edgeless, connected, and acyclic. Our technical contributions encompass a novel combination of linear programming, branching, and kernelization techniques, along with a fresh application of bidimensionality theory to analyze the area covered by disks, which may have broader applicability.

Cite as

Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Saket Saurabh, and Meirav Zehavi. Parameterized Geometric Graph Modification with Disk Scaling. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 51:1-51:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{fomin_et_al:LIPIcs.ITCS.2025.51,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Inamdar, Tanmay and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Parameterized Geometric Graph Modification with Disk Scaling}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{51:1--51:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.51},
  URN =		{urn:nbn:de:0030-drops-226795},
  doi =		{10.4230/LIPIcs.ITCS.2025.51},
  annote =	{Keywords: parameterized algorithms, kernelization, spreading points, distant representatives, unit disk packing}
}
Document
Combinatorial Pen Testing (Or Consumer Surplus of Deferred-Acceptance Auctions)

Authors: Aadityan Ganesh and Jason Hartline


Abstract
Pen testing is the problem of selecting high-capacity resources when the only way to measure the capacity of a resource expends its capacity. We have a set of n pens with unknown amounts of ink, and our goal is to select a feasible subset of pens maximizing the total ink in them. We are allowed to learn about the ink levels by writing with the pens, but this uses up ink that was previously in them. We identify optimal and near-optimal pen testing algorithms by drawing analogies to the auction-theoretic frameworks of deferred-acceptance auctions and virtual values. Our framework allows the conversion of any near-optimal deferred-acceptance mechanism into a near-optimal pen testing algorithm. Moreover, these algorithms guarantee an additional overhead of at most (1+o(1)) ln n in the approximation factor relative to the omniscient algorithm that has access to the ink levels in the pens. We use this framework to give pen testing algorithms for various combinatorial constraints like matroid, knapsack, and general downward-closed constraints, and also for online environments.

Cite as

Aadityan Ganesh and Jason Hartline. Combinatorial Pen Testing (Or Consumer Surplus of Deferred-Acceptance Auctions). In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 52:1-52:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{ganesh_et_al:LIPIcs.ITCS.2025.52,
  author =	{Ganesh, Aadityan and Hartline, Jason},
  title =	{{Combinatorial Pen Testing (Or Consumer Surplus of Deferred-Acceptance Auctions)}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{52:1--52:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.52},
  URN =		{urn:nbn:de:0030-drops-226808},
  doi =		{10.4230/LIPIcs.ITCS.2025.52},
  annote =	{Keywords: Pen testing, consumer surplus, money-burning, deferred-acceptance auctions}
}
Document
Differential Privacy on Trust Graphs

Authors: Badih Ghazi, Ravi Kumar, Pasin Manurangsi, and Serena Wang


Abstract
We study differential privacy (DP) in a multi-party setting where each party only trusts a (known) subset of the other parties with its data. Specifically, given a trust graph where vertices correspond to parties and neighbors are mutually trusting, we give a DP algorithm for aggregation with a much better privacy-utility trade-off than in the well-studied local model of DP (where each party trusts no other party). We further study a robust variant where each party trusts all but an unknown subset of at most t of its neighbors (where t is a given parameter), and give an algorithm for this setting. We complement our algorithms with lower bounds, and discuss implications of our work to other tasks in private learning and analytics.

Cite as

Badih Ghazi, Ravi Kumar, Pasin Manurangsi, and Serena Wang. Differential Privacy on Trust Graphs. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 53:1-53:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{ghazi_et_al:LIPIcs.ITCS.2025.53,
  author =	{Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin and Wang, Serena},
  title =	{{Differential Privacy on Trust Graphs}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{53:1--53:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.53},
  URN =		{urn:nbn:de:0030-drops-226816},
  doi =		{10.4230/LIPIcs.ITCS.2025.53},
  annote =	{Keywords: Differential privacy, trust graphs, minimum dominating set, packing number}
}
Document
Partial Minimum Branching Program Size Problem Is ETH-Hard

Authors: Ludmila Glinskih and Artur Riazanov


Abstract
We show that assuming the Exponential Time Hypothesis, the Partial Minimum Branching Program Size Problem (MBPSP*) requires superpolynomial time. This result also applies to the partial minimization problems for many interesting subclasses of branching programs, such as read-k branching programs and OBDDs. Combining these results with the recent unconditional lower bounds for MCSP [Ludmila Glinskih and Artur Riazanov, 2022], we obtain an unconditional superpolynomial lower bound on the size of Read-Once Nondeterministic Branching Programs (1-NBP) computing the total versions of the minimum BP, read-k-BP, and OBDD size problems. Additionally, we show that it is NP-hard to check whether a given BP computing a partial Boolean function can be compressed to a BP of a given size.

Cite as

Ludmila Glinskih and Artur Riazanov. Partial Minimum Branching Program Size Problem Is ETH-Hard. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 54:1-54:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{glinskih_et_al:LIPIcs.ITCS.2025.54,
  author =	{Glinskih, Ludmila and Riazanov, Artur},
  title =	{{Partial Minimum Branching Program Size Problem Is ETH-Hard}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{54:1--54:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.54},
  URN =		{urn:nbn:de:0030-drops-226822},
  doi =		{10.4230/LIPIcs.ITCS.2025.54},
  annote =	{Keywords: MCSP, branching programs, meta-complexity, lower bounds}
}
Document
Completeness Theorems for k-SUM and Geometric Friends: Deciding Fragments of Linear Integer Arithmetic

Authors: Geri Gokaj and Marvin Künnemann


Abstract
In the last three decades, the k-SUM hypothesis has emerged as a satisfying explanation of long-standing time barriers for a variety of algorithmic problems. Yet to this day, the literature knows of only a few proven consequences of a refutation of this hypothesis. Taking a descriptive complexity viewpoint, we ask: What is the largest logically defined class of problems captured by the k-SUM problem? To this end, we introduce a class FOP_ℤ of problems corresponding to deciding sentences in Presburger arithmetic/linear integer arithmetic over finite subsets of integers. We establish two large fragments for which the k-SUM problem is complete under fine-grained reductions: 1) The k-SUM problem is complete for deciding the sentences with k existential quantifiers. 2) The 3-SUM problem is complete for all 3-quantifier sentences of FOP_ℤ expressible using at most 3 linear inequalities. Specifically, a faster-than-n^{⌈k/2⌉ ± o(1)} algorithm for k-SUM (or a faster-than-n^{2 ± o(1)} algorithm for 3-SUM, respectively) directly translates to polynomial speedups of a general algorithm for all sentences in the respective fragment. Observing a barrier for proving completeness of 3-SUM for the entire class FOP_ℤ, we turn to the question of which other - seemingly more general - problems are complete for FOP_ℤ. In this direction, we establish FOP_ℤ-completeness of the problem pair of Pareto Sum Verification and Hausdorff Distance under n Translations under the L_∞/L₁ norm in ℤ^d. In particular, our results invite the investigation of Pareto Sum Verification as a high-dimensional generalization of 3-SUM.

Cite as

Geri Gokaj and Marvin Künnemann. Completeness Theorems for k-SUM and Geometric Friends: Deciding Fragments of Linear Integer Arithmetic. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 55:1-55:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gokaj_et_al:LIPIcs.ITCS.2025.55,
  author =	{Gokaj, Geri and K\"{u}nnemann, Marvin},
  title =	{{Completeness Theorems for k-SUM and Geometric Friends: Deciding Fragments of Linear Integer Arithmetic}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{55:1--55:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.55},
  URN =		{urn:nbn:de:0030-drops-226835},
  doi =		{10.4230/LIPIcs.ITCS.2025.55},
  annote =	{Keywords: fine-grained complexity theory, descriptive complexity, presburger arithmetic, completeness results, k-SUM}
}
Document
Incompressible Functional Encryption

Authors: Rishab Goyal, Venkata Koppula, Mahesh Sreekumar Rajasree, and Aman Verma


Abstract
Incompressible encryption (Dziembowski, Crypto'06; Guan, Wichs, Zhandry, Eurocrypt'22) protects against attackers that learn the entire decryption key, but cannot store the full ciphertext. In incompressible encryption, the attacker must try to compress a ciphertext to within a pre-specified memory bound S before receiving the secret key. In this work, we generalize the notion of incompressibility to functional encryption. In incompressible functional encryption, the adversary can corrupt non-distinguishing keys at any point, but receives the distinguishing keys only after compressing the ciphertext to within S bits. An important efficiency measure for incompressible encryption is the ciphertext-rate (i.e., rate = |m|/|ct|). We give many new results for incompressible functional encryption for circuits, from the minimal assumption of (non-incompressible) functional encryption, with 1) ct-rate-1/2 and short secret keys, 2) ct-rate-1 and large secret keys. Along the way, we also give a new incompressible attribute-based encryption for circuits from standard assumptions, with ct-rate-1/2 and short secret keys. Our results achieve optimal efficiency, as incompressible attribute-based/functional encryption with ct-rate-1 as well as short secret keys has strong barriers for provable security from standard assumptions. Moreover, our assumptions are minimal, as incompressible attribute-based/functional encryption are strictly stronger than their non-incompressible counterparts.

Cite as

Rishab Goyal, Venkata Koppula, Mahesh Sreekumar Rajasree, and Aman Verma. Incompressible Functional Encryption. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 56:1-56:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{goyal_et_al:LIPIcs.ITCS.2025.56,
  author =	{Goyal, Rishab and Koppula, Venkata and Rajasree, Mahesh Sreekumar and Verma, Aman},
  title =	{{Incompressible Functional Encryption}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{56:1--56:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.56},
  URN =		{urn:nbn:de:0030-drops-226849},
  doi =		{10.4230/LIPIcs.ITCS.2025.56},
  annote =	{Keywords: functional encryption, attribute-based encryption, incompressible encryption}
}
Document
Deterministic Approximation for the Volume of the Truncated Fractional Matching Polytope

Authors: Heng Guo and Vishvajeet N


Abstract
We give a deterministic fully polynomial-time approximation scheme (FPTAS) for the volume of the truncated fractional matching polytope for graphs of maximum degree Δ, where the truncation restricts each variable to the interval [0,(1+δ)/Δ], and δ ≤ C/Δ for some constant C > 0. We also generalise our result to the fractional matching polytope for hypergraphs of maximum degree Δ and maximum hyperedge size k, likewise truncated by [0,(1+δ)/Δ], where δ ≤ CΔ^{-(2k-3)/(k-1)}k^{-1} for some constant C > 0. The latter result generalises both the first result for graphs (when k = 2) and a result by Bencs and Regts (2024) for the truncated independence polytope (when Δ = 2). Our approach is based on the cluster expansion technique.

Cite as

Heng Guo and Vishvajeet N. Deterministic Approximation for the Volume of the Truncated Fractional Matching Polytope. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 57:1-57:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{guo_et_al:LIPIcs.ITCS.2025.57,
  author =	{Guo, Heng and N, Vishvajeet},
  title =	{{Deterministic Approximation for the Volume of the Truncated Fractional Matching Polytope}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{57:1--57:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.57},
  URN =		{urn:nbn:de:0030-drops-226858},
  doi =		{10.4230/LIPIcs.ITCS.2025.57},
  annote =	{Keywords: deterministic volume computation, cluster expansion, explicit polytopes}
}
Document
The Computational Complexity of Factored Graphs

Authors: Shreya Gupta, Boyang Huang, Russell Impagliazzo, Stanley Woo, and Christopher Ye


Abstract
While graphs and abstract data structures can be large and complex, practical instances are often regular or highly structured. If the instance has sufficient structure, we might hope to compress the object into a more succinct representation. An efficient algorithm (with respect to the compressed input size) could then lead to more efficient computations than algorithms taking the explicit, uncompressed object as input. This leads to a natural question: when does knowing that the input instance has a more succinct representation make computation easier? We initiate the study of the computational complexity of problems on factored graphs: graphs that are given as a formula of products and unions on smaller graphs. For any graph problem, we define a parameterized version that takes factored graphs as input, parameterized by the number of (smaller) ordinary graphs used to construct the factored graph. In this setting, we characterize the parameterized complexity of several natural graph problems, exhibiting a variety of complexities. We show that a decision version of lexicographically first maximal independent set is XP-complete, and therefore unconditionally not fixed-parameter tractable (FPT). On the other hand, we show that clique counting is FPT. Finally, we show that reachability is XNL-complete. Moreover, XNL is contained in FPT if and only if NL is contained in deterministic time n^c for some fixed constant c.

Cite as

Shreya Gupta, Boyang Huang, Russell Impagliazzo, Stanley Woo, and Christopher Ye. The Computational Complexity of Factored Graphs. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 58:1-58:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gupta_et_al:LIPIcs.ITCS.2025.58,
  author =	{Gupta, Shreya and Huang, Boyang and Impagliazzo, Russell and Woo, Stanley and Ye, Christopher},
  title =	{{The Computational Complexity of Factored Graphs}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{58:1--58:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.58},
  URN =		{urn:nbn:de:0030-drops-226865},
  doi =		{10.4230/LIPIcs.ITCS.2025.58},
  annote =	{Keywords: Parameterized Complexity, Fine-grained complexity, Fixed-parameter tractability, Graph algorithms}
}
Document
Error Correction for Message Streams

Authors: Meghal Gupta and Rachel Yun Zhang


Abstract
In the setting of error-correcting codes, Alice wants to send a message x ∈ {0,1}ⁿ to Bob via an encoding enc(x) that is resilient to error. In this work, we investigate the scenario where Bob is a low-space decoder. More precisely, he receives Alice’s encoding enc(x) bit-by-bit and desires to compute some function f(x) in low space. A generic error-correcting code does not accomplish this because decoding is a very global process and requires at least linear space. Locally decodable codes partially solve this problem, as they allow Bob to learn a given bit of x in low space, but not to compute a generic function f. Our main result is an encoding and decoding procedure where Bob is still able to compute any such function f in low space when a constant fraction of the stream is corrupted. More precisely, we describe an encoding function enc(x) of length poly(n) such that, for any decoder (streaming algorithm) A that on input x computes f(x) in space s, there is an explicit decoder B that computes f(x) in space s ⋅ polylog(n), as long as at most a 1/4 - ε fraction of the input stream enc(x) was (adversarially) corrupted.

Cite as

Meghal Gupta and Rachel Yun Zhang. Error Correction for Message Streams. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 59:1-59:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gupta_et_al:LIPIcs.ITCS.2025.59,
  author =	{Gupta, Meghal and Zhang, Rachel Yun},
  title =	{{Error Correction for Message Streams}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{59:1--59:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.59},
  URN =		{urn:nbn:de:0030-drops-226875},
  doi =		{10.4230/LIPIcs.ITCS.2025.59},
  annote =	{Keywords: error-correcting codes, streaming algorithms, space-efficient algorithms}
}
Document
List Decoding Bounds for Binary Codes with Noiseless Feedback

Authors: Meghal Gupta and Rachel Yun Zhang


Abstract
In an error-correcting code, a sender encodes a message x ∈ {0, 1}^k such that it is still decodable by a receiver on the other end of a noisy channel. In the setting of error-correcting codes with feedback, after sending each bit, the sender learns what was received at the other end and can tailor future messages accordingly. While the unique decoding radius of feedback codes has long been known to be 1/3, the list decoding capabilities of feedback codes are not well understood. In this paper, we provide the first nontrivial bounds on the list decoding radius of feedback codes for lists of size 𝓁. For 𝓁 = 2, we fully determine the 2-list decoding radius to be 3/7. For larger values of 𝓁, we show an upper bound of 1/2 - 1/(2^{𝓁+2} - 2), and show that the techniques used for the 𝓁 = 2 case cannot match this upper bound in general.
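As a quick sanity check (our own, not from the paper), the general upper bound stated above specializes at 𝓁 = 2 to exactly the 2-list decoding radius 3/7:

```python
from fractions import Fraction

def list_decoding_upper_bound(l: int) -> Fraction:
    """Upper bound 1/2 - 1/(2^(l+2) - 2) on the l-list decoding
    radius of binary feedback codes, as stated in the abstract."""
    return Fraction(1, 2) - Fraction(1, 2 ** (l + 2) - 2)

# At l = 2 the general bound coincides with the exact 2-list radius 3/7.
assert list_decoding_upper_bound(2) == Fraction(3, 7)
```

The bound increases toward 1/2 as 𝓁 grows, consistent with larger lists tolerating more errors.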

Cite as

Meghal Gupta and Rachel Yun Zhang. List Decoding Bounds for Binary Codes with Noiseless Feedback. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 60:1-60:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gupta_et_al:LIPIcs.ITCS.2025.60,
  author =	{Gupta, Meghal and Zhang, Rachel Yun},
  title =	{{List Decoding Bounds for Binary Codes with Noiseless Feedback}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{60:1--60:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.60},
  URN =		{urn:nbn:de:0030-drops-226880},
  doi =		{10.4230/LIPIcs.ITCS.2025.60},
  annote =	{Keywords: error-correcting codes, feedback, list decoding}
}
Document
Hardness of Sampling for the Anti-Ferromagnetic Ising Model on Random Graphs

Authors: Neng Huang, Will Perkins, and Aaron Potechin


Abstract
We prove a hardness of sampling result for the anti-ferromagnetic Ising model on random graphs of average degree d for large constant d, proving that when the normalized inverse temperature satisfies β > 1 (asymptotically corresponding to the condensation threshold), then w.h.p. over the random graph there is no stable sampling algorithm that can output a sample close in W₂ distance to the Gibbs measure. The results also apply to a fixed-magnetization version of the model, showing that there are no stable sampling algorithms for low but positive temperature max and min bisection distributions. These results show a gap in the tractability of search and sampling problems: while there are efficient algorithms to find near optimizers, stable sampling algorithms cannot access the Gibbs distribution concentrated on such solutions. Our techniques involve extensions of the interpolation technique relating behavior of the mean field Sherrington-Kirkpatrick model to behavior of Ising models on random graphs of average degree d for large d. While previous interpolation arguments compared the free energies of the two models, our argument compares the average energies and average overlaps in the two models.

Cite as

Neng Huang, Will Perkins, and Aaron Potechin. Hardness of Sampling for the Anti-Ferromagnetic Ising Model on Random Graphs. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 61:1-61:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{huang_et_al:LIPIcs.ITCS.2025.61,
  author =	{Huang, Neng and Perkins, Will and Potechin, Aaron},
  title =	{{Hardness of Sampling for the Anti-Ferromagnetic Ising Model on Random Graphs}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{61:1--61:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.61},
  URN =		{urn:nbn:de:0030-drops-226899},
  doi =		{10.4230/LIPIcs.ITCS.2025.61},
  annote =	{Keywords: Random graph, spin glass, sampling algorithm}
}
Document
A High Dimensional Cramer’s Rule Connecting Homogeneous Multilinear Equations to Hyperdeterminants

Authors: Antoine Joux and Anand Kumar Narayanan


Abstract
We present a new algorithm for solving homogeneous multilinear equations, a high-dimensional generalisation of solving homogeneous linear equations. First, we present a linear-time reduction from solving generic homogeneous multilinear equations to computing hyperdeterminants, via a high-dimensional Cramer’s rule. Hyperdeterminants are generalisations of determinants, associated with tensors of formats generalising square matrices. Second, we devise arithmetic circuits to compute hyperdeterminants of boundary format tensors. Boundary format tensors are those that generalise square matrices in the strictest sense. Consequently, we obtain arithmetic circuits for solving generic homogeneous boundary format multilinear equations. The complexity as a function of the input dimension varies across boundary format families, ranging from quasi-polynomial to subexponential. Curiously, the quasi-polynomial complexity arises for families of increasing dimension, including the family of multipartite quantum systems made of d qubits and one qudit. Finally, we identify potential directions for resolving the hardness of computing hyperdeterminants, notably modulo prime numbers, through the cryptographically significant tensor isomorphism complexity class.
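For concreteness, here is a small illustration (ours, not from the paper) of the smallest hyperdeterminant beyond matrices: Cayley's degree-4 hyperdeterminant of a 2×2×2 tensor, which vanishes exactly on degenerate tensors, e.g., rank-one tensors:

```python
def cayley_hyperdeterminant(a):
    """Cayley's hyperdeterminant of a 2x2x2 tensor a[i][j][k]."""
    def t(i, j, k):
        return a[i][j][k]
    squares = (t(0,0,0)**2 * t(1,1,1)**2 + t(0,0,1)**2 * t(1,1,0)**2
               + t(0,1,0)**2 * t(1,0,1)**2 + t(0,1,1)**2 * t(1,0,0)**2)
    pairs = (t(0,0,0)*t(0,0,1)*t(1,1,0)*t(1,1,1)
             + t(0,0,0)*t(0,1,0)*t(1,0,1)*t(1,1,1)
             + t(0,0,0)*t(0,1,1)*t(1,0,0)*t(1,1,1)
             + t(0,0,1)*t(0,1,0)*t(1,0,1)*t(1,1,0)
             + t(0,0,1)*t(0,1,1)*t(1,0,0)*t(1,1,0)
             + t(0,1,0)*t(0,1,1)*t(1,0,0)*t(1,0,1))
    cross = (t(0,0,0)*t(0,1,1)*t(1,0,1)*t(1,1,0)
             + t(0,0,1)*t(0,1,0)*t(1,0,0)*t(1,1,1))
    return squares - 2 * pairs + 4 * cross

# A rank-one tensor u (x) v (x) w is degenerate: its hyperdeterminant is 0.
u, v, w = (1, 2), (3, 5), (7, 11)
rank_one = [[[u[i]*v[j]*w[k] for k in range(2)] for j in range(2)] for i in range(2)]
assert cayley_hyperdeterminant(rank_one) == 0
```

Note that the 2×2×2 format is not itself a boundary format; it is shown only as the classical base case of the hyperdeterminant.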

Cite as

Antoine Joux and Anand Kumar Narayanan. A High Dimensional Cramer’s Rule Connecting Homogeneous Multilinear Equations to Hyperdeterminants. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 62:1-62:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{joux_et_al:LIPIcs.ITCS.2025.62,
  author =	{Joux, Antoine and Narayanan, Anand Kumar},
  title =	{{A High Dimensional Cramer’s Rule Connecting Homogeneous Multilinear Equations to Hyperdeterminants}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{62:1--62:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.62},
  URN =		{urn:nbn:de:0030-drops-226904},
  doi =		{10.4230/LIPIcs.ITCS.2025.62},
  annote =	{Keywords: arithmetic circuits, tensors, determinants, hyperdeterminants}
}
Document
Complexity Classification of Product State Problems for Local Hamiltonians

Authors: John Kallaugher, Ojas Parekh, Kevin Thompson, Yipu Wang, and Justin Yirka


Abstract
Product states, unentangled tensor products of single-qubit states, are a ubiquitous ansatz in quantum computation, including for state-of-the-art Hamiltonian approximation algorithms. A natural question is whether we should expect to efficiently solve product state problems on any interesting families of Hamiltonians. We completely classify the complexity of finding minimum-energy product states for Hamiltonians defined by any fixed set of allowed 2-qubit interactions. Our results follow a line of work classifying the complexity of solving Hamiltonian problems and classical constraint satisfaction problems based on the allowed constraints. We prove that estimating the minimum energy of a product state is in 𝖯 if and only if all allowed interactions are 1-local, and NP-complete otherwise. Equivalently, any family of non-trivial two-body interactions generates Hamiltonians with NP-complete product-state problems. Our hardness constructions only require coupling strengths of constant magnitude. A crucial component of our proofs is a collection of hardness results for a new variant of the Vector Max-Cut problem, which should be of independent interest. Our definition involves sums of distances rather than squared distances and allows linear stretches. We similarly give a proof that the original Vector Max-Cut problem is NP-complete in 3 dimensions. This implies that optimizing product states for Quantum Max-Cut (the quantum Heisenberg model) is NP-complete, even when every term is guaranteed to have positive unit weight.

Cite as

John Kallaugher, Ojas Parekh, Kevin Thompson, Yipu Wang, and Justin Yirka. Complexity Classification of Product State Problems for Local Hamiltonians. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 63:1-63:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{kallaugher_et_al:LIPIcs.ITCS.2025.63,
  author =	{Kallaugher, John and Parekh, Ojas and Thompson, Kevin and Wang, Yipu and Yirka, Justin},
  title =	{{Complexity Classification of Product State Problems for Local Hamiltonians}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{63:1--63:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.63},
  URN =		{urn:nbn:de:0030-drops-226910},
  doi =		{10.4230/LIPIcs.ITCS.2025.63},
  annote =	{Keywords: quantum complexity, quantum algorithms, local hamiltonians}
}
Document
A Universal Sequence of Tensors for the Asymptotic Rank Conjecture

Authors: Petteri Kaski and Mateusz Michałek


Abstract
The exponent σ(T) of a tensor T ∈ 𝔽^d⊗𝔽^d⊗𝔽^d over a field 𝔽 captures the base of the exponential growth rate of the tensor rank of T under Kronecker powers. Tensor exponents are fundamental from the standpoint of algorithms and computational complexity theory; for example, the exponent ω of square matrix multiplication can be characterized as ω = 2σ(MM₂), where MM₂ ∈ 𝔽⁴⊗𝔽⁴⊗𝔽⁴ is the tensor that represents 2×2 matrix multiplication. Strassen [FOCS 1986] initiated a duality theory for spaces of tensors that enables one to characterize the exponent of a tensor via objects in a dual space, called the asymptotic spectrum of the primal (tensor) space. While Strassen’s theory has considerable generality beyond the setting of tensors - Wigderson and Zuiddam [Asymptotic Spectra: Theory, Applications, and Extensions, preprint, 2023] give a recent exposition - progress in characterizing the dual space in the tensor setting has been slow, with the first universal points in the dual identified by Christandl, Vrana, and Zuiddam [J. Amer. Math. Soc. 36 (2023)]. In parallel to Strassen’s theory, the algebraic geometry community has developed a geometric theory of tensors aimed at characterizing the structure of the primal space and tensor exponents therein; the latter study was motivated in particular by an observation of Strassen (implicit in [J. Reine Angew. Math. 384 (1988)]) that matrix-multiplication tensors have limited universality in the sense that σ(𝔽^d⊗𝔽^d⊗𝔽^d) ≤ 2ω/3 = (4/3)σ(MM₂) holds for all d ≥ 1. In particular, this limited universality of the tensor MM₂ raises the question whether one could construct explicit universal tensors that exactly characterize the worst-case tensor exponent in the primal space. Such explicit universal objects would, among others, give means towards a proof or a disproof of Strassen’s asymptotic rank conjecture [Progr. Math. 120 (1994)]; the former would immediately imply ω = 2 and, among others, refute the Set Cover Conjecture (cf. Björklund and Kaski [STOC 2024] and Pratt [STOC 2024]). Our main result is an explicit construction of a sequence 𝒰_d of zero-one-valued tensors that is universal for the worst-case tensor exponent; more precisely, we show that σ(𝒰_d) = σ(d), where σ(d) = sup_{T ∈ 𝔽^d⊗𝔽^d⊗𝔽^d} σ(T). We also supply an explicit universal sequence 𝒰_Δ localised to capture the worst-case exponent σ(Δ) of tensors with support contained in Δ ⊆ [d]×[d]×[d]; by combining such sequences, we obtain a universal sequence 𝒯_d such that σ(𝒯_d) = 1 holds if and only if Strassen’s asymptotic rank conjecture holds for d. Finally, we show that the limit lim_{d → ∞} σ(d) exists and can be captured as lim_{d → ∞} σ(D_d) for an explicit sequence (D_d)_{d = 1}^∞ of tensors obtained by diagonalisation of the sequences 𝒰_d. As our second result, we relate the absence of polynomials of fixed degree vanishing on tensors of low rank, or more generally low asymptotic rank, with upper bounds on the exponent σ(d). Using this technique, one may bound the asymptotic rank of all tensors of a given format, knowing enough specific tensors of low asymptotic rank.

Cite as

Petteri Kaski and Mateusz Michałek. A Universal Sequence of Tensors for the Asymptotic Rank Conjecture. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 64:1-64:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{kaski_et_al:LIPIcs.ITCS.2025.64,
  author =	{Kaski, Petteri and Micha{\l}ek, Mateusz},
  title =	{{A Universal Sequence of Tensors for the Asymptotic Rank Conjecture}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{64:1--64:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.64},
  URN =		{urn:nbn:de:0030-drops-226925},
  doi =		{10.4230/LIPIcs.ITCS.2025.64},
  annote =	{Keywords: asymptotic rank conjecture, secant variety, Specht module, tensor rank, tensor exponent}
}
Document
Online Versus Offline Adversaries in Property Testing

Authors: Esty Kelman, Ephraim Linder, and Sofya Raskhodnikova


Abstract
We study property testing with incomplete or noisy inputs. The models we consider allow for adversarial manipulation of the input, but differ in whether the manipulation can be done only offline, i.e., before the execution of the algorithm, or online, i.e., as the algorithm runs. The manipulations by an adversary can come in the form of erasures or corruptions. We compare the query complexity and the randomness complexity of property testing in the offline and online models. Kalemaj, Raskhodnikova, and Varma (Theory Comput. '23) provide properties that can be tested with a small number of queries with offline erasures, but cannot be tested at all with online erasures. We demonstrate that the two models are incomparable in terms of query complexity: we construct properties that can be tested with a constant number of queries in the online corruption model, but require querying a significant fraction of the input in the offline erasure model. We also construct properties that exhibit a strong separation between the randomness complexity of testing in the presence of offline and online adversaries: testing these properties in the online model requires exponentially more random bits than in the offline model, even when they are tested with nearly the same number of queries in both models. Our randomness separation relies on a novel reduction from randomness-efficient testers in the adversarial online model to query-efficient testers in the standard model.

Cite as

Esty Kelman, Ephraim Linder, and Sofya Raskhodnikova. Online Versus Offline Adversaries in Property Testing. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 65:1-65:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{kelman_et_al:LIPIcs.ITCS.2025.65,
  author =	{Kelman, Esty and Linder, Ephraim and Raskhodnikova, Sofya},
  title =	{{Online Versus Offline Adversaries in Property Testing}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{65:1--65:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.65},
  URN =		{urn:nbn:de:0030-drops-226933},
  doi =		{10.4230/LIPIcs.ITCS.2025.65},
  annote =	{Keywords: Property Testing, Online Adversary, Offline Adversary, Query Complexity, Randomness Complexity, Separations}
}
Document
Graph Reconstruction via MIS Queries

Authors: Christian Konrad, Conor O'Sullivan, and Victor Traistaru


Abstract
In the Graph Reconstruction (GR) problem, a player initially only knows the vertex set V of an input graph G = (V, E) and is required to learn its set of edges E. To this end, the player submits queries to an oracle and must deduce E from the oracle’s answers. Angluin and Chen [Journal of Computer and System Sciences, 2008] resolved the number of Independent Set (IS) queries necessary and sufficient for GR on m-edge graphs. In this setting, each query consists of a subset of vertices U ⊆ V, and the oracle responds with a boolean, indicating whether U is an independent set in G. They gave algorithms that use O(m ⋅ log n) IS queries, which is best possible. In this paper, we initiate the study of GR via Maximal Independent Set (MIS) queries, a more powerful variant of IS queries. Given a query U ⊆ V, the oracle responds with any, potentially adversarially chosen, maximal independent set I ⊆ U in the induced subgraph G[U]. We show that, for GR, MIS queries are strictly more powerful than IS queries when parametrized by the maximum degree Δ of the input graph. We give tight (up to poly-logarithmic factors) upper and lower bounds for this problem: 1) We observe that the simple strategy of taking uniform independent random samples of V and submitting those to the oracle yields a non-adaptive randomized algorithm that executes O(Δ² ⋅ log n) queries and succeeds with high probability. This should be contrasted with the fact that Ω(Δ ⋅ n ⋅ log(n/Δ)) IS queries are required for such graphs, which shows that MIS queries are strictly more powerful than IS queries. Interestingly, combining the strategy of taking uniform random samples of V with the probabilistic method, we show the existence of a deterministic non-adaptive algorithm that executes O(Δ³ ⋅ log(n/Δ)) queries. 2) Regarding lower bounds, we prove that the additional Δ factor when going from randomized non-adaptive algorithms to deterministic non-adaptive algorithms is necessary. 
We show that every non-adaptive deterministic algorithm requires Ω(Δ³ / log² Δ) queries. For arbitrary randomized adaptive algorithms, we show that Ω(Δ²) queries are necessary in graphs of maximum degree Δ, and that Ω(log n) queries are necessary, even when the input graph is an n-vertex cycle.
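As a toy illustration of the query model (a naive O(n²)-query baseline of our own, not the paper's algorithm): querying a single pair U = {u, v} already reveals whether uv is an edge, since the only maximal independent set of G[{u, v}] is {u, v} when uv ∉ E, and a singleton otherwise. The sketch below uses an adversary-free greedy oracle; the model in the paper allows the oracle to answer with any maximal independent set.

```python
def mis_oracle(edges, U):
    """A deterministic (adversary-free) MIS oracle: greedily return a
    maximal independent set of the subgraph induced on U."""
    I = []
    for v in sorted(U):
        if all((v, u) not in edges and (u, v) not in edges for u in I):
            I.append(v)
    return set(I)

def reconstruct_by_pairs(n, oracle):
    """Naive reconstruction: one MIS query per vertex pair."""
    return {(u, v) for u in range(n) for v in range(u + 1, n)
            if oracle({u, v}) != {u, v}}

# Hypothetical 4-vertex example: the path 0-1-2-3.
E = {(0, 1), (1, 2), (2, 3)}
assert reconstruct_by_pairs(4, lambda U: mis_oracle(E, U)) == E
```

The point of the paper's results is that random sampling does far better than this pairwise baseline, using only poly(Δ, log n) queries.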

Cite as

Christian Konrad, Conor O'Sullivan, and Victor Traistaru. Graph Reconstruction via MIS Queries. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 66:1-66:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{konrad_et_al:LIPIcs.ITCS.2025.66,
  author =	{Konrad, Christian and O'Sullivan, Conor and Traistaru, Victor},
  title =	{{Graph Reconstruction via MIS Queries}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{66:1--66:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.66},
  URN =		{urn:nbn:de:0030-drops-226945},
  doi =		{10.4230/LIPIcs.ITCS.2025.66},
  annote =	{Keywords: Query Complexity, Graph Reconstruction, Maximal Independent Set Queries}
}
Document
Error-Correcting Graph Codes

Authors: Swastik Kopparty, Aditya Potukuchi, and Harry Sha


Abstract
In this paper, we construct Error-Correcting Graph Codes. An error-correcting graph code of distance δ is a family C of graphs, on a common vertex set of size n, such that if we start with any graph in C, we would have to modify the neighborhoods of at least δ n vertices in order to obtain some other graph in C. This is a natural graph generalization of the standard Hamming distance error-correcting codes for binary strings. Yohananov and Yaakobi were the first to construct codes in this metric. We extend their work by showing 1) Combinatorial results determining the optimal rate vs distance trade-off nonconstructively. 2) Graph code analogues of Reed-Solomon codes and code concatenation, leading to positive distance codes for all rates and positive rate codes for all distances. 3) Graph code analogues of dual-BCH codes, yielding large codes with distance δ = 1-o(1). This gives an explicit "graph code of Ramsey graphs". Several recent works, starting with the paper of Alon, Gujgiczer, Körner, Milojević, and Simonyi, have studied more general graph codes; where the symmetric difference between any two graphs in the code is required to have some desired property. Error-correcting graph codes are a particularly interesting instantiation of this concept.
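The distance notion above can be computed directly; a minimal sketch (hypothetical helper of ours, assuming graphs are given as dicts of adjacency sets on a common vertex set):

```python
def graph_code_distance(adj1, adj2):
    """Number of vertices whose neighborhoods differ between two graphs
    on the same vertex set, given as dicts mapping vertex -> set of
    neighbors. A graph code has distance delta iff this count is at
    least delta*n for every pair of distinct codewords."""
    assert adj1.keys() == adj2.keys()
    return sum(1 for v in adj1 if adj1[v] != adj2[v])

# Toy example on 3 vertices: the single edge {0,1} versus the empty graph.
G1 = {0: {1}, 1: {0}, 2: set()}
G2 = {0: set(), 1: set(), 2: set()}
assert graph_code_distance(G1, G2) == 2  # only vertices 0 and 1 change
```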

Cite as

Swastik Kopparty, Aditya Potukuchi, and Harry Sha. Error-Correcting Graph Codes. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 67:1-67:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{kopparty_et_al:LIPIcs.ITCS.2025.67,
  author =	{Kopparty, Swastik and Potukuchi, Aditya and Sha, Harry},
  title =	{{Error-Correcting Graph Codes}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{67:1--67:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.67},
  URN =		{urn:nbn:de:0030-drops-226950},
  doi =		{10.4230/LIPIcs.ITCS.2025.67},
  annote =	{Keywords: Graph codes, explicit construction, concatenation codes, tensor codes}
}
Document
New Pseudorandom Generators and Correlation Bounds Using Extractors

Authors: Vinayak M. Kumar


Abstract
We establish new correlation bounds and pseudorandom generators for a collection of computation models. These models are all natural generalizations of structured low-degree 𝔽₂-polynomials for which no correlation bounds were previously known. In particular: - We construct a PRG for width-2 poly(n)-length branching programs which read d bits at a time, with seed length 2^O(√{log n}) ⋅ d²log²(1/ε). This comes quadratically close to the optimal dependence on d and log(1/ε). Improving the dependence on n would imply nontrivial PRGs for log n-degree 𝔽₂-polynomials. The previous PRG by Bogdanov, Dvir, Verbin, and Yehudayoff had an exponentially worse dependence on d, with seed length O(d log n + d 2^d log(1/ε)). - We provide the first nontrivial (and nearly optimal) correlation bounds and PRGs against size-n^Ω(log n) AC⁰ circuits with either n^{.99} SYM gates (computing an arbitrary symmetric function) or n^{.49} THR gates (computing an arbitrary linear threshold function). This is a generalization of sparse 𝔽₂-polynomials, which can be simulated by an AC⁰ circuit with one parity gate at the top. Previous work of Servedio and Tan only handled n^{.49} SYM gates or n^{.24} THR gates, and previous work of Lovett and Srinivasan only handled polynomial-size circuits. - We give exponentially small correlation bounds against degree-n^O(1) 𝔽₂-polynomials which are set-multilinear over some arbitrary partition of the input into n^{1-O(1)} parts (noting that at n parts, we recover all low-degree polynomials). This vastly generalizes correlation bounds against degree-d polynomials which are set-multilinear over a fixed partition into d blocks, which were established by Bhrushundi, Harsha, Hatami, Kopparty, and Kumar. The common technique behind all of these results is to fortify a hard function with the right type of extractor to obtain stronger correlation bounds for more general models of computation.
Although this technique has been used in previous work, those applications rely on the model simplifying drastically under random restrictions. We view our results as a proof of concept that such fortification can be done even for classes that do not enjoy such behavior.

Cite as

Vinayak M. Kumar. New Pseudorandom Generators and Correlation Bounds Using Extractors. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 68:1-68:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{kumar:LIPIcs.ITCS.2025.68,
  author =	{Kumar, Vinayak M.},
  title =	{{New Pseudorandom Generators and Correlation Bounds Using Extractors}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{68:1--68:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.68},
  URN =		{urn:nbn:de:0030-drops-226961},
  doi =		{10.4230/LIPIcs.ITCS.2025.68},
  annote =	{Keywords: Pseudorandom Generators, Correlation Bounds, Constant-Depth Circuits}
}
Document
Extended Abstract
Approximate Unitary k-Designs from Shallow, Low-Communication Circuits (Extended Abstract)

Authors: Nicholas LaRacuente and Felix Leditzky


Abstract
Random unitaries are useful in quantum information and related fields but hard to generate with limited resources. An approximate unitary k-design is a measure over an ensemble of unitaries such that the average is close to a Haar (uniformly) random ensemble up to the first k moments. A strong notion of approximation bounds the distance from Haar randomness in relative error: the weighted twirl induced by an approximate design can be written as a convex combination involving that of an exact design, and vice versa. The main focus of our work is on efficient constructions of approximate designs, in particular whether relative-error designs in sublinear depth are possible. We give a positive answer to this question as part of our main results: 1. Twirl-Swap-Twirl: Let A and B be systems of the same size. Consider a protocol that locally applies k-design unitaries to A^k and B^k respectively, then exchanges l qudits between each copy of A and B, then again applies local k-design unitaries. This protocol yields an ε-approximate relative k-design when l = O(k log k + log(1/ε)). In particular, this bound is independent of the size of A and B as long as they are sufficiently large compared to k and 1/ε. 2. Twirl-Crosstwirl: Let A_1, … , A_P be subsystems of a multipartite system A. Consider the following protocol for k copies of A: (1) locally apply a k-design unitary to each A_p for p = 1, … , P; (2) apply a "crosstwirl" k-design unitary across a joint system combining l qudits from each A_p. Assuming each A_p’s dimension is sufficiently large compared to the other parameters, one can choose l of the form 2(Pk + 1) log_q k + log_q P + log_q(1/ε) + O(1) to achieve an ε-approximate relative k-design. As an intermediate step, we show that this protocol achieves a k-tensor-product-expander, in which the approximation error is in 2 → 2 norm, using communication logarithmic in k. 3. Recursive Crosstwirl: Consider an m-qudit system with connectivity given by a lattice in spatial dimension D. For every D = 1, 2, …, we give a construction of an ε-approximate relative k-design using unitaries of spatially local circuit depth O((log m + log(1/ε) + k log k) k polylog(k)). Moreover, across the boundaries of spatially contiguous sub-regions, unitaries used in the design ensemble require only area-law communication up to corrections logarithmic in m. Hence they generate only that much entanglement on any product-state input. These constructions use the alternating projection method to analyze overlapping Haar twirls, giving a bound on the convergence speed to the full twirl with respect to the 2-norm. Using von Neumann subalgebra indices to replace system dimension, the 2-norm distance converts to relative error without introducing system size. The Recursive Crosstwirl construction answers one variant of [Harrow and Mehraban, 2023, Open Problem 1], showing that with a specific, layered architecture, random circuits produce relative-error k-designs in sublinear depth. Moreover, it addresses [Harrow and Mehraban, 2023, Open Problem 7], showing that structured circuits in spatial dimension D of depth << m^{1/D} may achieve approximate k-designs.

Cite as

Nicholas LaRacuente and Felix Leditzky. Approximate Unitary k-Designs from Shallow, Low-Communication Circuits (Extended Abstract). In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 69:1-69:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{laracuente_et_al:LIPIcs.ITCS.2025.69,
  author =	{LaRacuente, Nicholas and Leditzky, Felix},
  title =	{{Approximate Unitary k-Designs from Shallow, Low-Communication Circuits}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{69:1--69:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.69},
  URN =		{urn:nbn:de:0030-drops-226974},
  doi =		{10.4230/LIPIcs.ITCS.2025.69},
  annote =	{Keywords: Approximate unitary designs, Quantum circuits, Representation theory, von Neumann algebras}
}
Document
Facility Location on High-Dimensional Euclidean Spaces

Authors: Euiwoong Lee and Kijun Shin


Abstract
Recent years have seen great progress in the approximability of fundamental clustering and facility location problems on high-dimensional Euclidean spaces, including k-Means and k-Median. While these admit strictly better approximation ratios than their general metric versions, their approximation ratios are still higher than the hardness ratios for general metrics, leaving open the possibility that the ultimate optimal approximation ratios are the same for Euclidean and general metrics. Moreover, such an improved algorithm for Euclidean spaces is not known for Uncapacitated Facility Location (UFL), another fundamental problem in the area. In this paper, we prove that for any γ ≥ 1.6774 there exists ε > 0 such that Euclidean UFL admits a (γ, 1 + 2e^{-γ} - ε)-bifactor approximation algorithm, improving the result of Byrka and Aardal [Byrka and Aardal, 2010]. Together with the (γ, 1 + 2e^{-γ}) NP-hardness in general metrics, this shows the first separation between general and Euclidean metrics for the aforementioned basic problems. We also present an (α_Li - ε)-(unifactor) approximation algorithm for UFL for some ε > 0 in Euclidean spaces, where α_Li ≈ 1.488 is the best-known approximation ratio for UFL, due to Li [Li, 2013].

Cite as

Euiwoong Lee and Kijun Shin. Facility Location on High-Dimensional Euclidean Spaces. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 70:1-70:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{lee_et_al:LIPIcs.ITCS.2025.70,
  author =	{Lee, Euiwoong and Shin, Kijun},
  title =	{{Facility Location on High-Dimensional Euclidean Spaces}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{70:1--70:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.70},
  URN =		{urn:nbn:de:0030-drops-226982},
  doi =		{10.4230/LIPIcs.ITCS.2025.70},
  annote =	{Keywords: Approximation Algorithms, Clustering, Facility Location}
}
Document
Toward the Impossibility of Perfect Complete Quantum PKE from OWFs

Authors: Longcheng Li, Qian Li, Xingjian Li, and Qipeng Liu


Abstract
In this paper, we study the impossibility of constructing perfect complete quantum public key encryption (QPKE) from quantumly secure one-way functions (OWFs) in a black-box manner. We show that this problem is connected to a fundamental conjecture about the roots of low-degree polynomials on the Boolean hypercube. Informally, the conjecture asserts that for every nonconstant low-degree polynomial, there exists a universal (randomized) way to modify a small number of input bits such that, for every input string, the polynomial evaluated on the modified input string avoids 0 with sufficiently large probability (over the choice of how the input string is modified). Assuming this conjecture, we demonstrate the impossibility of constructing QPKE from quantumly secure one-way functions in a black-box manner, by employing the information-theoretical approach recently developed by Li, Li, Li, and Liu (CRYPTO'24). Towards resolving this conjecture, we provide various pieces of evidence supporting it and prove some special cases. In particular, we fully rule out perfect QPKE from OWFs when the key generation algorithm only makes a logarithmic number of quantum queries, improving on previous work, which could only handle classical queries.

Cite as

Longcheng Li, Qian Li, Xingjian Li, and Qipeng Liu. Toward the Impossibility of Perfect Complete Quantum PKE from OWFs. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 71:1-71:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ITCS.2025.71,
  author =	{Li, Longcheng and Li, Qian and Li, Xingjian and Liu, Qipeng},
  title =	{{Toward the Impossibility of Perfect Complete Quantum PKE from OWFs}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{71:1--71:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.71},
  URN =		{urn:nbn:de:0030-drops-226999},
  doi =		{10.4230/LIPIcs.ITCS.2025.71},
  annote =	{Keywords: Quantum public-key encryption, Boolean function analysis}
}
Document
Extended Abstract
Information Design with Unknown Prior (Extended Abstract)

Authors: Tao Lin and Ce Li


Abstract
Classical information design models (e.g., Bayesian persuasion and cheap talk) require players to have perfect knowledge of the prior distribution of the state of the world. Our paper studies repeated persuasion problems in which the information designer does not know the prior. The information designer learns to design signaling schemes from repeated interactions with the receiver. We design learning algorithms for the information designer to achieve no regret compared to using the optimal signaling scheme with known prior, under two models of the receiver’s decision-making: (1) The first model assumes that the receiver knows the prior and can perform posterior update and best respond to signals. In this model, we design a learning algorithm for the information designer to achieve O(log T) regret in the general case, and another algorithm with Θ(log log T) regret in the case where the receiver has only two actions. Our algorithms are based on multi-dimensional and conservative binary search techniques, which circumvent the Ω(√T) limitation of empirical estimation in previous works. (2) The second model assumes that the receiver does not know the prior either and employs a no-regret learning algorithm to take actions. Bayesian persuasion and cheap talk are equivalent under this no-regret learning receiver model. We show that the information designer can achieve regret O(√{rReg(T) T}), where rReg(T) = o(T) is an upper bound on the receiver’s learning regret. The algorithm is based on exploration + robustification. The O(√{rReg(T) T}) regret bound is tight even when the information designer knows the prior [Lin and Chen, 2024]. Our work thus provides a learning foundation for the problem of information design with unknown prior.

Cite as

Tao Lin and Ce Li. Information Design with Unknown Prior (Extended Abstract). In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, p. 72:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{lin_et_al:LIPIcs.ITCS.2025.72,
  author =	{Lin, Tao and Li, Ce},
  title =	{{Information Design with Unknown Prior}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{72:1--72:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.72},
  URN =		{urn:nbn:de:0030-drops-227009},
  doi =		{10.4230/LIPIcs.ITCS.2025.72},
  annote =	{Keywords: information design, Bayesian persuasion, online learning, unknown prior}
}
Document
On White-Box Learning and Public-Key Encryption

Authors: Yanyi Liu, Noam Mazor, and Rafael Pass


Abstract
We consider a generalization of the Learning With Errors problem, referred to as the white-box learning problem: you are given the code of a sampler that with high probability produces samples of the form y, f(y) + ε, where ε is small and f is computable in polynomial size, and the computational task consists of outputting a polynomial-size circuit C that, with probability, say, 1/3 over a new sample y' drawn according to the same distribution, approximates f(y') (i.e., |C(y')-f(y')| is small). This problem can be thought of as a generalization of the Learning with Errors problem (LWE) from linear functions f to polynomial-size computable functions. We demonstrate that worst-case hardness of the white-box learning problem, conditioned on the instances satisfying a notion of computational shallowness (a concept from the study of Kolmogorov complexity), not only suffices to get public-key encryption but is also necessary; as such, this yields the first problem whose worst-case hardness characterizes the existence of public-key encryption. Additionally, our results highlight to what extent LWE "overshoots" the task of public-key encryption. We complement these results by noting that worst-case hardness of the same problem, but restricting the learner to only black-box access to the sampler, characterizes one-way functions.

Cite as

Yanyi Liu, Noam Mazor, and Rafael Pass. On White-Box Learning and Public-Key Encryption. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 73:1-73:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.ITCS.2025.73,
  author =	{Liu, Yanyi and Mazor, Noam and Pass, Rafael},
  title =	{{On White-Box Learning and Public-Key Encryption}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{73:1--73:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.73},
  URN =		{urn:nbn:de:0030-drops-227012},
  doi =		{10.4230/LIPIcs.ITCS.2025.73},
  annote =	{Keywords: Public-Key Encryption, White-Box Learning}
}
Document
Sublinear Metric Steiner Tree via Improved Bounds for Set Cover

Authors: Sepideh Mahabadi, Mohammad Roghani, Jakub Tarnawski, and Ali Vakilian


Abstract
We study the metric Steiner tree problem in the sublinear query model. In this problem, for a set of n points V in a metric space given to us by means of query access to an n × n matrix w, and a set of terminals T ⊆ V, the goal is to find the minimum-weight subset of the edges that connects all the terminal vertices. Recently, Chen, Khanna and Tan [SODA'23] gave an algorithm that uses Õ(n^{13/7}) queries and outputs a (2-η)-estimate of the metric Steiner tree weight, where η > 0 is a universal constant. A key component in their algorithm is a sublinear algorithm for a particular set cover problem where, given a set system (𝒰, ℱ), the goal is to provide a multiplicative-additive estimate for |𝒰|-SC(𝒰, ℱ). Here 𝒰 is the set of elements, ℱ is the collection of sets, and SC(𝒰, ℱ) denotes the optimal set cover size of (𝒰, ℱ). In particular, their algorithm returns a (1/4, ε⋅|𝒰|)-multiplicative-additive estimate for this set cover problem using Õ(|ℱ|^{7/4}) membership oracle queries (querying whether a set S ∈ ℱ contains an element e ∈ 𝒰), where ε is a fixed constant. In this work, we improve the query complexity of (2-η)-estimating the metric Steiner tree weight to Õ(n^{5/3}) by showing a (1/2, ε⋅|𝒰|)-estimate for the above set cover problem using Õ(|ℱ|^{5/3}) membership queries. To design our set cover algorithm, we estimate the size of a random greedy maximal matching for an auxiliary multigraph that the algorithm constructs implicitly, without access to its adjacency list or matrix. Previous analyses of random greedy maximal matching have focused on simple graphs, assuming access to their adjacency list or matrix. To address this, we extend the analysis of Behnezhad [FOCS'21] of random greedy maximal matching on simple graphs to multigraphs, and prove additional properties that may be of independent interest.
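The random greedy maximal matching process at the heart of the analysis can be sketched as follows (a minimal illustration; the function name and multigraph encoding are our own, and the paper's algorithm only estimates the matching size implicitly rather than materializing the edge list):

```python
import random

def random_greedy_maximal_matching(edges, seed=0):
    """Scan the edges in a uniformly random order, adding an edge to the
    matching whenever both of its endpoints are still unmatched. The edge
    list may contain parallel edges, i.e. the graph may be a multigraph."""
    rng = random.Random(seed)
    order = list(edges)
    rng.shuffle(order)
    matched, matching = set(), []
    for u, v in order:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching

# A triangle with one parallel edge: every maximal matching has size 1,
# regardless of the random order.
multigraph = [(0, 1), (0, 1), (1, 2), (0, 2)]
assert len(random_greedy_maximal_matching(multigraph)) == 1
```

The resulting matching is always maximal: no remaining edge has both endpoints unmatched.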

Cite as

Sepideh Mahabadi, Mohammad Roghani, Jakub Tarnawski, and Ali Vakilian. Sublinear Metric Steiner Tree via Improved Bounds for Set Cover. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 74:1-74:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{mahabadi_et_al:LIPIcs.ITCS.2025.74,
  author =	{Mahabadi, Sepideh and Roghani, Mohammad and Tarnawski, Jakub and Vakilian, Ali},
  title =	{{Sublinear Metric Steiner Tree via Improved Bounds for Set Cover}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{74:1--74:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.74},
  URN =		{urn:nbn:de:0030-drops-227029},
  doi =		{10.4230/LIPIcs.ITCS.2025.74},
  annote =	{Keywords: Sublinear Algorithms, Steiner Tree, Set Cover, Maximum Matching, Approximation Algorithm}
}
Document
Gadgetless Lifting Beats Round Elimination: Improved Lower Bounds for Pointer Chasing

Authors: Xinyu Mao, Guangxu Yang, and Jiapeng Zhang


Abstract
We prove an Ω(n/k + k) communication lower bound on the (k-1)-round distributional complexity of the k-step pointer chasing problem under the uniform input distribution, improving the Ω(n/k - k log n) lower bound due to Yehudayoff (Combinatorics, Probability and Computing, 2020). Our lower bound almost matches the upper bound of Õ(n/k + k) communication by Nisan and Wigderson (STOC 91). As part of our approach, we put forth gadgetless lifting, a new framework that lifts lower bounds for a family of restricted protocols into lower bounds for general protocols. A key step in gadgetless lifting is choosing the appropriate definition of restricted protocols. In this paper, our definition of restricted protocols is inspired by the structure-vs-pseudorandomness decomposition of Göös, Pitassi, and Watson (FOCS 17) and Yang and Zhang (STOC 24). Previously, round-communication trade-offs were mainly obtained by round elimination and information complexity. Both methods face barriers in certain situations, and we believe gadgetless lifting could potentially address these barriers.
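For intuition, the underlying k-step pointer chasing function, with the communication aspect stripped away, can be sketched as follows (an illustrative toy of our own, not the paper's formalization; in the communication problem Alice holds one pointer function and Bob the other):

```python
def pointer_chase(f_alice, f_bob, k, start=0):
    """Follow k pointer steps, alternating between Alice's function
    (even steps) and Bob's function (odd steps), from vertex `start`."""
    v = start
    for step in range(k):
        v = f_alice[v] if step % 2 == 0 else f_bob[v]
    return v

f_alice = [1, 2, 3, 0]  # Alice's pointers on a 4-vertex universe
f_bob = [2, 3, 0, 1]    # Bob's pointers on the same universe
# 0 -> f_alice[0] = 1 -> f_bob[1] = 3 -> f_alice[3] = 0
assert pointer_chase(f_alice, f_bob, 3) == 0
```

The difficulty in the communication setting is that with only k-1 rounds, the players cannot simply alternate one step per round, which is what forces the ~n/k communication.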

Cite as

Xinyu Mao, Guangxu Yang, and Jiapeng Zhang. Gadgetless Lifting Beats Round Elimination: Improved Lower Bounds for Pointer Chasing. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 75:1-75:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{mao_et_al:LIPIcs.ITCS.2025.75,
  author =	{Mao, Xinyu and Yang, Guangxu and Zhang, Jiapeng},
  title =	{{Gadgetless Lifting Beats Round Elimination: Improved Lower Bounds for Pointer Chasing}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{75:1--75:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.75},
  URN =		{urn:nbn:de:0030-drops-227038},
  doi =		{10.4230/LIPIcs.ITCS.2025.75},
  annote =	{Keywords: communication complexity, lifting theorems, pointer chasing}
}
Document
A Quantum Unique Games Conjecture

Authors: Hamoon Mousavi and Taro Spirig


Abstract
After the NP-hardness of computational problems such as 3SAT and MaxCut was established, a natural next step was to explore whether these problems remain hard to approximate. While the quantum nonlocal games extensions of some of these problems are known to be hard - indeed undecidable - their inapproximability remains largely unresolved. In this work, we introduce definitions for the quantum extensions of Label-Cover and Unique-Label-Cover. We show that these problems play a similarly crucial role in studying the inapproximability of quantum constraint satisfaction problems as they do in the classical setting.

Cite as

Hamoon Mousavi and Taro Spirig. A Quantum Unique Games Conjecture. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 76:1-76:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{mousavi_et_al:LIPIcs.ITCS.2025.76,
  author =	{Mousavi, Hamoon and Spirig, Taro},
  title =	{{A Quantum Unique Games Conjecture}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{76:1--76:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.76},
  URN =		{urn:nbn:de:0030-drops-227047},
  doi =		{10.4230/LIPIcs.ITCS.2025.76},
  annote =	{Keywords: hardness of approximation, quantum computing, noncommutative constraint satisfaction problems, Fourier analysis}
}
Document
Sketching, Moment Estimation, and the Lévy-Khintchine Representation Theorem

Authors: Seth Pettie and Dingyu Wang


Abstract
In the d-dimensional turnstile streaming model, a frequency vector 𝐱 = (𝐱(1),…,𝐱(n)) ∈ (ℝ^d)ⁿ is updated entry-wise over a stream. We consider the problem of f-moment estimation, for which one wants to estimate f(𝐱)=∑_{v ∈ [n]}f(𝐱(v)) with a small-space sketch. A function f is tractable if the f-moment can be estimated to within a constant factor using polylog(n) space. The f-moment estimation problem has been intensively studied in the d = 1 case. Flajolet and Martin estimate the F₀-moment (f(x) = 1 (x > 0), incremental stream); Alon, Matias, and Szegedy estimate the L₂-moment (f(x) = x²); Indyk estimates the L_α-moment (f(x) = |x|^α), α ∈ (0,2]. For d ≥ 2, Ganguly, Bansal, and Dube estimate the L_{p,q} hybrid moment (f:ℝ^d → ℝ, f(x) = (∑_{j = 1}^d |x_j|^p)^q), p ∈ (0,2], q ∈ (0,1). For tractability, Bar-Yossef, Jayram, Kumar, and Sivakumar show that f(x) = |x|^α is not tractable for α > 2. Braverman, Chestnut, Woodruff, and Yang characterize the class of tractable one-variable functions except for a class of nearly periodic functions. In this work we present a simple and generic scheme to construct sketches with the novel idea of hashing indices to Lévy processes, from which one can estimate the f-moment f(𝐱) where f is the characteristic exponent of the Lévy process. The fundamental Lévy-Khintchine representation theorem completely characterizes the space of all possible characteristic exponents, which in turn characterizes the set of f-moments that can be estimated by this generic scheme. The new scheme has strong explanatory power. It unifies the construction of many existing sketches (F₀, L₀, L₂, L_α, L_{p,q}, etc.) and it implies the tractability of many nearly periodic functions that were previously unclassified.
Furthermore, the scheme can be conveniently generalized to multidimensional cases (d ≥ 2) by considering multidimensional Lévy processes, and can be further generalized to estimate heterogeneous moments by projecting different indices with different Lévy processes. We conjecture that the set of tractable functions can be characterized using the Lévy-Khintchine representation theorem via what we call the Fourier-Hahn-Lévy method.
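As a concrete point of reference for the "hash indices, square the signed sum" style of moment estimation, here is the classical AMS-style L₂ sketch (f(x) = x²) in the turnstile model; this is a hedged illustration of one of the existing sketches the scheme unifies, not the Lévy-process construction itself:

```python
import random

def ams_l2_estimate(stream, n, reps=200, seed=0):
    """AMS-style L2 sketch: hash each index to a random sign; the square
    of the signed running sum is an unbiased estimate of sum_v x(v)^2.
    Averaging independent repetitions reduces the variance."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        sign = [rng.choice((-1, 1)) for _ in range(n)]
        z = sum(sign[i] * delta for i, delta in stream)  # turnstile updates
        total += z * z
    return total / reps

# Turnstile updates building x = (3, -1, 2); the L2-moment is 9 + 1 + 4 = 14.
stream = [(0, 1), (0, 2), (1, -1), (2, 2)]
est = ams_l2_estimate(stream, n=3)
assert abs(est - 14) < 7  # tightly concentrated around 14
```

Only the signed sums need to be stored, not the full frequency vector, which is what makes the sketch small-space.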

Cite as

Seth Pettie and Dingyu Wang. Sketching, Moment Estimation, and the Lévy-Khintchine Representation Theorem. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 77:1-77:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{pettie_et_al:LIPIcs.ITCS.2025.77,
  author =	{Pettie, Seth and Wang, Dingyu},
  title =	{{Sketching, Moment Estimation, and the L\'{e}vy-Khintchine Representation Theorem}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{77:1--77:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.77},
  URN =		{urn:nbn:de:0030-drops-227057},
  doi =		{10.4230/LIPIcs.ITCS.2025.77},
  annote =	{Keywords: Streaming Sketches, L\'{e}vy Processes}
}
Document
Randomized Lifting to Semi-Structured Communication Complexity via Linear Diversity

Authors: Vladimir Podolskii and Alexander Shekhovtsov


Abstract
We study query-to-communication lifting. The major open problem in this area is to prove a lifting theorem for gadgets of constant size. The recent paper [Paul Beame and Sajin Koroth, 2023] introduces semi-structured communication complexity, in which one of the players can only send parities of their input bits. They showed that for any m ≥ 4, the deterministic decision tree complexity of a function f can be lifted to the so-called semi-structured communication complexity of f∘Ind_m, where Ind_m is the Indexing gadget. As our main contribution, we extend these results to the randomized setting. Our results also apply to a substantially larger set of gadgets. More specifically, we introduce a new complexity measure of gadgets, linear diversity. For all gadgets g with non-trivial linear diversity, we show that the randomized decision tree complexity of f lifts to the randomized semi-structured communication complexity of f∘g. In particular, this gives tight lifting results for the Indexing gadget Ind_m, the Inner Product gadget IP_m for all m ≥ 2, and the Majority gadget MAJ_m for all m ≥ 4. We prove the same results for the deterministic case. From our result it immediately follows that deterministic/randomized decision tree complexity lifts to deterministic/randomized parity decision tree complexity. For the randomized case, this is the first result of this type. For the deterministic case, our result improves the bound of [Arkadev Chattopadhyay et al., 2023] for the Inner Product gadget. To obtain our results, we introduce a new secret-sets approach to the simulation of semi-structured communication protocols by decision trees. It allows us to simulate (restricted classes of) communication protocols on a truly uniform distribution of inputs.
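For reference, the Indexing gadget and the composed function f∘g can be sketched as follows (an illustrative toy with names of our own choosing; the lifting theorems concern the communication cost of this composition, not its plain evaluation):

```python
def indexing_gadget(x_bits, i):
    """Ind_m: Alice holds x in {0,1}^m, Bob holds an index i in [m];
    the gadget outputs the bit x_i."""
    return x_bits[i]

def compose(f, gadget, alice_inputs, bob_inputs):
    """The lifted function f∘g: evaluate the gadget block by block,
    then apply the outer function f to the resulting bits."""
    bits = tuple(gadget(x, i) for x, i in zip(alice_inputs, bob_inputs))
    return f(bits)

# Lift 2-bit XOR through Ind_4: two blocks of m = 4 bits, one index each.
xor2 = lambda bits: bits[0] ^ bits[1]
alice = [(0, 1, 1, 0), (1, 0, 0, 1)]
bob = [2, 3]
assert compose(xor2, indexing_gadget, alice, bob) == 0  # 1 XOR 1
```

A lifting theorem transfers a decision tree lower bound for the outer f into a communication lower bound for this composed problem.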

Cite as

Vladimir Podolskii and Alexander Shekhovtsov. Randomized Lifting to Semi-Structured Communication Complexity via Linear Diversity. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 78:1-78:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{podolskii_et_al:LIPIcs.ITCS.2025.78,
  author =	{Podolskii, Vladimir and Shekhovtsov, Alexander},
  title =	{{Randomized Lifting to Semi-Structured Communication Complexity via Linear Diversity}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{78:1--78:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.78},
  URN =		{urn:nbn:de:0030-drops-227061},
  doi =		{10.4230/LIPIcs.ITCS.2025.78},
  annote =	{Keywords: communication complexity, decision trees, lifting}
}
Document
Catalytic Communication

Authors: Edward Pyne, Nathan S. Sheffield, and William Wang


Abstract
The study of space-bounded computation has drawn extensively from ideas and results in the field of communication complexity. Catalytic Computation (Buhrman, Cleve, Koucký, Loff and Speelman, STOC 2013) studies the power of bounded space augmented with a pre-filled hard drive that can be used non-destructively during the computation. Presently, many structural questions in this model remain open. Towards a better understanding of catalytic space, we define a model of catalytic communication complexity and prove new upper and lower bounds. In our model, Alice and Bob share a blackboard with a tiny number of free bits, and a larger section with an arbitrary initial configuration. They must jointly compute a function of their inputs, communicating only via the blackboard, and must always reset the blackboard to its initial configuration. We prove several upper and lower bounds: 1) We characterize the simplest nontrivial model, that of one bit of free space and three rounds, in terms of 𝔽₂ rank. In particular, we give natural problems that are solvable with a minimal-sized blackboard that require near-maximal (randomized) communication complexity, and vice versa. 2) We show that allowing constantly many free bits, as opposed to one, allows an exponential improvement on the size of the blackboard for natural problems. To do so, we connect the problem to existence questions in extremal graph theory. 3) We give tight connections between our model and standard notions of non-uniform catalytic computation. Using this connection, we show that with an arbitrary constant number of rounds and bits of free space, one can compute all functions in TC⁰. We view this model as a step toward understanding the value of filled space in computation.

Cite as

Edward Pyne, Nathan S. Sheffield, and William Wang. Catalytic Communication. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 79:1-79:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{pyne_et_al:LIPIcs.ITCS.2025.79,
  author =	{Pyne, Edward and Sheffield, Nathan S. and Wang, William},
  title =	{{Catalytic Communication}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{79:1--79:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.79},
  URN =		{urn:nbn:de:0030-drops-227076},
  doi =		{10.4230/LIPIcs.ITCS.2025.79},
  annote =	{Keywords: Catalytic computation, Branching programs, Communication complexity}
}
Document
Extended Abstract
Diversity in Evolutionary Dynamics (Extended Abstract)

Authors: Yuval Rabani, Leonard J. Schulman, and Alistair Sinclair


Abstract
Since this paper is under journal submission, we publish only an extended abstract here. A full version can be found at https://arxiv.org/abs/2406.03938.

Cite as

Yuval Rabani, Leonard J. Schulman, and Alistair Sinclair. Diversity in Evolutionary Dynamics (Extended Abstract). In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 80:1-80:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{rabani_et_al:LIPIcs.ITCS.2025.80,
  author =	{Rabani, Yuval and Schulman, Leonard J. and Sinclair, Alistair},
  title =	{{Diversity in Evolutionary Dynamics}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{80:1--80:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.80},
  URN =		{urn:nbn:de:0030-drops-227086},
  doi =		{10.4230/LIPIcs.ITCS.2025.80},
  annote =	{Keywords: Mathematical models of evolution, replicator dynamics, weak selection, genetic diversity, game theory, dynamical systems}
}
Document
Online Balanced Allocation of Dynamic Components

Authors: Rajmohan Rajaraman and Omer Wasim


Abstract
We introduce Online Balanced Allocation of Dynamic Components (OBADC), a problem motivated by the practical challenge of dynamic resource allocation for large-scale distributed applications. In OBADC, we need to allocate a dynamic set of at most k𝓁 vertices (representing processes) in 𝓁 > 0 clusters. We consider an over-provisioned setup in which each cluster can hold at most k(1+ε) vertices, for an arbitrary constant ε > 0. The communication requirements among the vertices are modeled by the notion of a dynamically changing component, which is a subset of vertices that need to be co-located in the same cluster. At each time t, a request r_t of one of the following types arrives: 1) insertion of a vertex v forming a singleton component v at unit cost. 2) merge of (u,v) requiring that the components containing u and v be merged and co-located thereafter. 3) deletion of an existing vertex v at zero cost. Before serving any request, an algorithm can migrate vertices from one cluster to another, at a unit migration cost per vertex. We seek an online algorithm to minimize the total migration cost incurred for an arbitrary request sequence σ = (r_t)_{t > 0}, while simultaneously minimizing the number of clusters utilized. We analyze competitiveness with respect to an optimal clairvoyant offline algorithm with identical (over-provisioned) capacity constraints. We give an O(log k)-competitive algorithm for OBADC, and a matching lower-bound. The number of clusters utilized by our algorithm is always within a (2+ε) factor of the minimum. Furthermore, in a resource augmented setting where the optimal offline algorithm is constrained to capacity k per cluster, our algorithm obtains O(log k) competitiveness and utilizes a number of clusters within (1+ε) factor of the minimum. 
We also consider OBADC in the context of machine-learned predictions, where for each newly inserted vertex v at time t: i) with probability η > 0, the set of vertices (that exist at time t) in the component of v is revealed, and ii) with probability 1-η, no information is revealed. For OBADC with predictions, we give an O(1)-consistent and O(min(log(1/η), log k))-robust algorithm.
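The request model above is concrete enough to simulate. The following toy simulator (our own naive baseline, not the paper's O(log k)-competitive algorithm; the class name is ours, and merges ignore capacity for brevity) places insertions first-fit and co-locates merged components by migrating the smaller one, charging unit cost per inserted or moved vertex:

```python
class OBADCSim:
    """Toy simulator of the OBADC request model (naive baseline,
    NOT the paper's O(log k)-competitive algorithm)."""

    def __init__(self, cap):
        self.cap = cap       # per-cluster capacity, i.e. k(1+eps)
        self.cluster = {}    # vertex -> cluster id
        self.parent = {}     # union-find parent for components
        self.cost = 0        # total insertion + migration cost

    def find(self, v):
        while self.parent[v] != v:
            v = self.parent[v]
        return v

    def insert(self, v):
        # new singleton component, placed first-fit at unit cost
        self.parent[v] = v
        load = {}
        for c in self.cluster.values():
            load[c] = load.get(c, 0) + 1
        c = 0
        while load.get(c, 0) >= self.cap:
            c += 1
        self.cluster[v] = c
        self.cost += 1

    def delete(self, v):
        # deletion is free
        del self.cluster[v]

    def merge(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        cu = [w for w in self.cluster if self.find(w) == ru]
        cv = [w for w in self.cluster if self.find(w) == rv]
        small, big = (cu, cv) if len(cu) <= len(cv) else (cv, cu)
        target = self.cluster[big[0]]
        for w in small:              # co-locate: move the smaller component
            if self.cluster[w] != target:
                self.cluster[w] = target
                self.cost += 1       # unit migration cost per vertex
        # (this toy ignores capacity during merges; a real algorithm
        #  must respect the k(1+eps) bound)
        self.parent[self.find(small[0])] = self.find(big[0])
```

An adversarial merge sequence against such a greedy rule is exactly what drives the Ω(log k) lower bound discussed in the abstract.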

Cite as

Rajmohan Rajaraman and Omer Wasim. Online Balanced Allocation of Dynamic Components. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 81:1-81:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{rajaraman_et_al:LIPIcs.ITCS.2025.81,
  author =	{Rajaraman, Rajmohan and Wasim, Omer},
  title =	{{Online Balanced Allocation of Dynamic Components}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{81:1--81:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.81},
  URN =		{urn:nbn:de:0030-drops-227090},
  doi =		{10.4230/LIPIcs.ITCS.2025.81},
  annote =	{Keywords: online algorithms, competitive ratio, algorithms with predictions}
}
Document
Tight Bounds on List-Decodable and List-Recoverable Zero-Rate Codes

Authors: Nicolas Resch, Chen Yuan, and Yihan Zhang


Abstract
In this work, we consider the list-decodability and list-recoverability of codes in the zero-rate regime. Briefly, a code 𝒞 ⊆ [q]ⁿ is (p,𝓁,L)-list-recoverable if for all tuples of input lists (Y₁,… ,Y_n) with each Y_i ⊆ [q] and |Y_i| = 𝓁, the number of codewords c ∈ 𝒞 such that c_i ∉ Y_i for at most pn choices of i ∈ [n] is less than L; list-decoding is the special case of 𝓁 = 1. In recent work by Resch, Yuan and Zhang (ICALP 2023) the zero-rate threshold for list-recovery was determined for all parameters: that is, the work explicitly computes p_*: = p_*(q,𝓁,L) with the property that for all ε > 0 (a) there exist positive-rate (p_*-ε,𝓁,L)-list-recoverable codes, and (b) any (p_*+ε,𝓁,L)-list-recoverable code has rate 0. In fact, in the latter case the code has constant size, independent of n. However, the constant size in their work is quite large in 1/ε, at least |𝒞| ≥ (1/ε)^{O(q^L)}. Our contribution in this work is to show that for all choices of q, 𝓁 and L with q ≥ 3, any (p_*+ε,𝓁,L)-list-recoverable code must have size O_{q,𝓁,L}(1/ε), and furthermore this upper bound is complemented by a matching lower bound Ω_{q,𝓁,L}(1/ε). This greatly generalizes work by Alon, Bukh and Polyanskiy (IEEE Trans. Inf. Theory 2018), which focused only on the case of a binary alphabet (and thus necessarily only list-decoding). We remark that we can in fact recover the same result for q = 2 and even L, as obtained by Alon, Bukh and Polyanskiy: we thus strictly generalize their work. Our main technical contribution is to (a) properly define a linear programming relaxation of the list-recovery condition over large alphabets; and (b) to demonstrate that a certain function defined on a q-ary probability simplex is maximized by the uniform distribution. This represents the core challenge in generalizing to larger q (as a binary simplex can be naturally identified with a one-dimensional interval).
We can subsequently re-utilize certain Schur convexity and convexity properties established for a related function by Resch, Yuan and Zhang along with ideas of Alon, Bukh and Polyanskiy.
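The list-recoverability definition in the abstract can be checked exhaustively for toy parameters. A minimal brute-force verifier (our own helper, feasible only for tiny q and n), straight from the definition:

```python
from itertools import combinations, product

def is_list_recoverable(code, q, p, ell, L):
    """Check (p, ell, L)-list-recoverability by brute force: for every tuple
    of size-ell input lists, fewer than L codewords may disagree with the
    lists in at most p*n positions."""
    n = len(code[0])
    for lists in product(combinations(range(q), ell), repeat=n):
        # count codewords c with c_i not in Y_i for at most p*n indices i
        bad = sum(1 for c in code
                  if sum(1 for i in range(n) if c[i] not in lists[i]) <= p * n)
        if bad >= L:
            return False
    return True

# Three "constant" codewords over q = 3, n = 2:
code = [(0, 0), (1, 1), (2, 2)]
print(is_list_recoverable(code, 3, 0, 1, 2))  # singleton lists: True
print(is_list_recoverable(code, 3, 0, 2, 2))  # lists of size 2: False
```

With ell = 2 the lists ({0,1},{0,1}) already capture two codewords, which is why the second check fails.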

Cite as

Nicolas Resch, Chen Yuan, and Yihan Zhang. Tight Bounds on List-Decodable and List-Recoverable Zero-Rate Codes. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 82:1-82:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{resch_et_al:LIPIcs.ITCS.2025.82,
  author =	{Resch, Nicolas and Yuan, Chen and Zhang, Yihan},
  title =	{{Tight Bounds on List-Decodable and List-Recoverable Zero-Rate Codes}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{82:1--82:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.82},
  URN =		{urn:nbn:de:0030-drops-227103},
  doi =		{10.4230/LIPIcs.ITCS.2025.82},
  annote =	{Keywords: List Decoding, List Recovery, Zero Rate}
}
Document
Fine-Grained Equivalence for Problems Related to Integer Linear Programming

Authors: Lars Rohwedder and Karol Węgrzycki


Abstract
Integer Linear Programming with n binary variables and m many 0/1-constraints can be solved in time 2^Õ(m²) poly(n) and it is open whether the dependence on m is optimal. Several seemingly unrelated problems, which include variants of Closest String, Discrepancy Minimization, Set Cover, and Set Packing, can be modelled as Integer Linear Programming with 0/1 constraints to obtain algorithms with the same running time for a natural parameter m in each of the problems. Our main result establishes through fine-grained reductions that these problems are equivalent, meaning that a 2^O(m^{2-ε}) poly(n) algorithm with ε > 0 for one of them implies such an algorithm for all of them. In the setting above, one can alternatively obtain an n^O(m) time algorithm for Integer Linear Programming using a straightforward dynamic programming approach, which can be more efficient if n is relatively small (e.g., subexponential in m). We show that this can be improved to {n'}^O(m) + O(nm), where n' is the number of distinct (i.e., non-symmetric) variables. This dominates both of the aforementioned running times.
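The straightforward dynamic program mentioned in the abstract tracks, for each prefix of variables, the set of achievable left-hand-side vectors; with 0/1 constraint entries there are at most (n+1)^m such vectors, which is where the n^O(m) bound comes from. A sketch for the equality form Ax = b (the function name is ours; this is the textbook DP, not the paper's refinement over distinct variables):

```python
def ilp_feasible(A, b):
    """Feasibility of Ax = b with x in {0,1}^n and A in {0,1}^(m x n),
    via DP over reachable partial-sum vectors (at most (n+1)^m states)."""
    m, n = len(A), len(A[0])
    states = {tuple([0] * m)}
    for j in range(n):                    # decide variable x_j in {0, 1}
        col = tuple(A[i][j] for i in range(m))
        states |= {tuple(s[i] + col[i] for i in range(m)) for s in states}
        # prune partial sums that already exceed b: they can only grow
        states = {s for s in states if all(s[i] <= b[i] for i in range(m))}
    return tuple(b) in states

print(ilp_feasible([[1, 1, 0], [0, 1, 1]], [1, 1]))  # x = (1,0,1) works: True
```

Inequality constraints Ax ≤ b fit the same DP by checking membership of any state dominated by b instead of exact equality.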

Cite as

Lars Rohwedder and Karol Węgrzycki. Fine-Grained Equivalence for Problems Related to Integer Linear Programming. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 83:1-83:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{rohwedder_et_al:LIPIcs.ITCS.2025.83,
  author =	{Rohwedder, Lars and W\k{e}grzycki, Karol},
  title =	{{Fine-Grained Equivalence for Problems Related to Integer Linear Programming}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{83:1--83:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.83},
  URN =		{urn:nbn:de:0030-drops-227114},
  doi =		{10.4230/LIPIcs.ITCS.2025.83},
  annote =	{Keywords: Integer Programming, Fine-Grained Complexity, Fixed-Parameter Tractable Algorithms}
}
Document
Quantum Communication Complexity of Classical Auctions

Authors: Aviad Rubinstein and Zixin Zhou


Abstract
We study the fundamental, classical mechanism design problem of single-buyer multi-item Bayesian revenue-maximizing auctions under the lens of communication complexity between the buyer and the seller. Specifically, we ask whether using quantum communication can be more efficient than classical communication. We have two sets of results, revealing a surprisingly rich landscape, one that looks quite different both from quantum communication between non-strategic parties and from classical communication in mechanism design. We first study the expected communication complexity of approximately optimal auctions. We give quantum auction protocols for buyers with unit-demand or combinatorial valuations that obtain an arbitrarily good approximation of the optimal revenue while using exponentially less communication than classical approximately optimal auctions. However, these auctions come with the caveat that they may require the seller to extract exponentially large payments from a deviating buyer. We show that this caveat is necessary: we give an exponential lower bound on the product of the expected quantum communication and the maximum payment. We then study the worst-case communication complexity of exactly optimal auctions in an extremely simple setting: a buyer with additive valuations over two items. We show the following separations: - There exists a prior where the optimal classical auction protocol requires infinitely many bits, but a one-way message of 1 qubit and 2 classical bits suffices. - There exists a prior where no finite one-way quantum auction protocol can obtain the optimal revenue. However, there is a barely-interactive revenue-optimal quantum auction protocol with the following simple structure: the seller prepares a pair of qubits in the EPR state, sends one of them to the buyer, and then the buyer sends 1 qubit and 2 classical bits. 
- There exists a prior where no multi-round quantum auction protocol with a finite bound on communication complexity can obtain the optimal revenue.

Cite as

Aviad Rubinstein and Zixin Zhou. Quantum Communication Complexity of Classical Auctions. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 84:1-84:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{rubinstein_et_al:LIPIcs.ITCS.2025.84,
  author =	{Rubinstein, Aviad and Zhou, Zixin},
  title =	{{Quantum Communication Complexity of Classical Auctions}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{84:1--84:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.84},
  URN =		{urn:nbn:de:0030-drops-227124},
  doi =		{10.4230/LIPIcs.ITCS.2025.84},
  annote =	{Keywords: Mechanism design, Communication complexity, Quantum computing}
}
Document
Quantum 2-SAT on Low Dimensional Systems Is QMA₁-Complete: Direct Embeddings and Black-Box Simulation

Authors: Dorian Rudolph, Sevag Gharibian, and Daniel Nagaj


Abstract
Despite the fundamental role the Quantum Satisfiability (QSAT) problem has played in quantum complexity theory, a central question remains open: At which local dimension does the complexity of QSAT transition from "easy" to "hard"? Here, we study QSAT with each constraint acting on a d_A-dimensional and d_B-dimensional qudit pair, denoted (d_A×d_B)-QSAT. Our first main result shows that, surprisingly, QSAT on qubits can remain QMA_1-hard, in that (2×5)-QSAT is QMA_1-complete. (QMA_1 is a quantum analogue of MA with perfect completeness.) In contrast, (2×2)-QSAT (i.e. Quantum 2-SAT on qubits) is well-known to be poly-time solvable [Bravyi, 2006]. Our second main result proves that (3×d)-QSAT on the 1D line with d ∈ O(1) is also QMA_1-hard. Finally, we initiate the study of (2×d)-QSAT on the 1D line by giving a frustration-free 1D Hamiltonian with a unique, entangled ground state. As implied by our title, our first result uses a direct embedding: We combine a novel clock construction with the 2D circuit-to-Hamiltonian construction of [Gosset and Nagaj, 2013]. Of note is a new simplified and analytic proof for the latter (as opposed to a partially numeric proof in [GN13]). This exploits Unitary Labelled Graphs [Bausch, Cubitt, Ozols, 2017] together with a new "Nullspace Connection Lemma", allowing us to break low energy analyses into small patches of projectors, and to improve the soundness analysis of [GN13] from Ω(1/T⁶) to Ω(1/T²), for T the number of gates. Our second result goes via black-box reduction: Given an arbitrary 1D Hamiltonian H on d'-dimensional qudits, we show how to embed it into an effective 1D (3×d)-QSAT instance, for d ∈ O(1). Our approach may be viewed as a weaker notion of "analog simulation" (à la [Bravyi, Hastings 2017], [Cubitt, Montanaro, Piddock 2018]). As far as we are aware, this gives the first "black-box simulation"-based QMA_1-hardness result.

Cite as

Dorian Rudolph, Sevag Gharibian, and Daniel Nagaj. Quantum 2-SAT on Low Dimensional Systems Is QMA₁-Complete: Direct Embeddings and Black-Box Simulation. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 85:1-85:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{rudolph_et_al:LIPIcs.ITCS.2025.85,
  author =	{Rudolph, Dorian and Gharibian, Sevag and Nagaj, Daniel},
  title =	{{Quantum 2-SAT on Low Dimensional Systems Is QMA\textsubscript{1}-Complete: Direct Embeddings and Black-Box Simulation}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{85:1--85:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.85},
  URN =		{urn:nbn:de:0030-drops-227139},
  doi =		{10.4230/LIPIcs.ITCS.2025.85},
  annote =	{Keywords: quantum complexity theory, Quantum Merlin Arthur (QMA), Quantum Satisfiability Problem (QSAT), Hamiltonian simulation}
}
Document
Eulerian Orientations and Hadamard Codes: A Novel Connection via Counting

Authors: Shuai Shao and Zhuxiao Tang


Abstract
We discover a novel connection between two classical mathematical notions, Eulerian orientations and Hadamard codes by studying the counting problem of Eulerian orientations (#EO) with local constraint functions imposed on vertices. We present two special classes of constraint functions and a chain reaction algorithm, and show that the #EO problem defined by each class alone is polynomial-time solvable by the algorithm. These tractable classes of functions are defined inductively, and quite remarkably the base level of these classes is characterized perfectly by the well-known Hadamard code. Thus, we establish a novel connection between counting Eulerian orientations and coding theory. We also prove a #P-hardness result for the #EO problem when constraint functions from the two tractable classes appear together.
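For intuition on the counting problem itself, here is a brute-force #EO counter for tiny graphs (our own sketch; the paper's tractable classes concern local constraint functions on vertices, which this omits):

```python
from itertools import product

def count_eulerian_orientations(edges, n):
    """Count orientations of an undirected graph on vertices 0..n-1 in which
    every vertex has in-degree equal to out-degree, by brute force over
    all 2^|E| orientations."""
    total = 0
    for dirs in product([0, 1], repeat=len(edges)):
        outdeg = [0] * n
        indeg = [0] * n
        for (u, v), d in zip(edges, dirs):
            a, b = (u, v) if d == 0 else (v, u)
            outdeg[a] += 1
            indeg[b] += 1
        if outdeg == indeg:
            total += 1
    return total

# 4-cycle: exactly the two cyclic orientations are Eulerian
print(count_eulerian_orientations([(0, 1), (1, 2), (2, 3), (3, 0)], 4))  # → 2
```

The #EO problem studied in the paper generalizes this count by weighting each vertex with a local constraint function, which is where the #P-hardness and the tractable classes arise.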

Cite as

Shuai Shao and Zhuxiao Tang. Eulerian Orientations and Hadamard Codes: A Novel Connection via Counting. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 86:1-86:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{shao_et_al:LIPIcs.ITCS.2025.86,
  author =	{Shao, Shuai and Tang, Zhuxiao},
  title =	{{Eulerian Orientations and Hadamard Codes: A Novel Connection via Counting}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{86:1--86:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.86},
  URN =		{urn:nbn:de:0030-drops-227146},
  doi =		{10.4230/LIPIcs.ITCS.2025.86},
  annote =	{Keywords: Eulerian orientations, Hadamard codes, Counting problems, Tractable classes}
}
Document
When to Give up on a Parallel Implementation

Authors: Nathan S. Sheffield and Alek Westover


Abstract
In the Serial Parallel Decision Problem (SPDP), introduced by Kuszmaul and Westover [SPAA'24], an algorithm receives a series of tasks online, and must choose for each between a serial implementation and a parallelizable (but less efficient) implementation. Kuszmaul and Westover describe three decision models: (1) Instantly-committing schedulers must decide on arrival, irrevocably, which implementation of the task to run. (2) Eventually-committing schedulers can delay their decision beyond a task’s arrival time, but cannot revoke their decision once made. (3) Never-committing schedulers are always free to abandon their progress on the task and start over using a different implementation. Kuszmaul and Westover gave a simple instantly-committing scheduler whose total completion time is 3-competitive with the offline optimal schedule, and proved two lower bounds: no eventually-committing scheduler can have competitive ratio better than ϕ ≈ 1.618 in general, and no instantly-committing scheduler can have competitive ratio better than 2 in general. They conjectured that the three decision models should admit different competitive ratios, but left upper bounds below 3 in any model as an open problem. In this paper, we show that the powers of instantly, eventually, and never committing schedulers are distinct, at least in the "massively parallel regime". The massively parallel regime of the SPDP is the special case where the number of available processors is asymptotically larger than the number of tasks to process, meaning that the work associated with running a task in serial is negligible compared to its runtime. In this regime, we show (1) The optimal competitive ratio for instantly-committing schedulers is 2, (2) The optimal competitive ratio for eventually-committing schedulers lies in [1.618, 1.678], (3) The optimal competitive ratio for never-committing schedulers lies in [1.366, 1.500]. 
We additionally show that our instantly-committing scheduler is also 2-competitive outside of the massively parallel regime, giving proof-of-concept that results in the massively parallel regime can be translated to hold with fewer processors.

Cite as

Nathan S. Sheffield and Alek Westover. When to Give up on a Parallel Implementation. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 87:1-87:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{sheffield_et_al:LIPIcs.ITCS.2025.87,
  author =	{Sheffield, Nathan S. and Westover, Alek},
  title =	{{When to Give up on a Parallel Implementation}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{87:1--87:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.87},
  URN =		{urn:nbn:de:0030-drops-227154},
  doi =		{10.4230/LIPIcs.ITCS.2025.87},
  annote =	{Keywords: Scheduling, Multi-Processor, Online-Algorithms}
}
Document
Detecting and Correcting Computationally Bounded Errors: A Simple Construction Under Minimal Assumptions

Authors: Jad Silbak and Daniel Wichs


Abstract
We study error detection and error correction in a computationally bounded world, where errors are introduced by an arbitrary polynomial time adversarial channel. We consider codes where the encoding procedure uses random coins and define two distinct variants: (1) in randomized codes, fresh randomness is chosen during each encoding operation and is unknown a priori, while (2) in self-seeded codes, the randomness of the encoding procedure is fixed once upfront and is known to the adversary. In both cases, the randomness need not be known to the decoding procedure, and there is no trusted common setup between the encoder and decoder. The encoding and decoding algorithms are efficient and run in some fixed polynomial time, independent of the run time of the adversary. The parameters of standard codes for worst-case (inefficient) errors are limited by the Singleton bound: for rate R it is not possible to detect more than a 1-R fraction of errors, or uniquely correct more than a (1-R)/2 fraction of errors, and efficient codes matching this bound exist for sufficiently large alphabets. In the computationally bounded setting, we show that going beyond the Singleton bound implies one-way functions in the case of randomized codes and collision-resistant hash functions in the case of self-seeded codes. We construct randomized and self-seeded codes under these respective minimal assumptions with essentially optimal parameters over a constant-sized alphabet: - Detection: the codes have a rate R ≈ 1 while detecting a ρ ≈ 1 fraction of errors. - Correction: for any ρ < 1/2, the codes uniquely correct a ρ fraction of errors with rate R ≈ 1-ρ. 
Codes for computationally bounded errors were studied in several prior works starting with Lipton (STACS '94), but all such works either: (a) need some trusted common setup (e.g., public-key infrastructure, common reference string) between the encoder and decoder, or (b) only handle channels whose complexity is a priori bounded below that of the code.

Cite as

Jad Silbak and Daniel Wichs. Detecting and Correcting Computationally Bounded Errors: A Simple Construction Under Minimal Assumptions. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 88:1-88:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{silbak_et_al:LIPIcs.ITCS.2025.88,
  author =	{Silbak, Jad and Wichs, Daniel},
  title =	{{Detecting and Correcting Computationally Bounded Errors: A Simple Construction Under Minimal Assumptions}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{88:1--88:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.88},
  URN =		{urn:nbn:de:0030-drops-227167},
  doi =		{10.4230/LIPIcs.ITCS.2025.88},
  annote =	{Keywords: Error Correction, One-Way Functions, Collision Resistant Hashing}
}
Document
Optimal Communication Complexity of Chained Index

Authors: Janani Sundaresan


Abstract
We study the chain communication problem introduced by Cormode et al. [ICALP 2019]. For k ≥ 1, in the chain_{n,k} problem, there are k string and index pairs (X_i, σ_i) for i ∈ [k] such that the value at position σ_i in string X_i is the same bit for all k pairs. The input is shared between k+1 players as follows. Player 1 has the first string X₁ ∈ {0,1}ⁿ, player 2 has the first index σ₁ ∈ [n] and the second string X₂ ∈ {0,1}ⁿ, player 3 has the second index σ₂ ∈ [n] along with the third string X₃ ∈ {0,1}ⁿ, and so on. Player k+1 has the last index σ_k ∈ [n]. The communication is one way from each player to the next, starting from player 1 to player 2, then from player 2 to player 3 and so on. Player k+1, after receiving the message from player k, has to output a single bit which is the value at position σ_i in X_i for any i ∈ [k]. It is a generalization of the well-studied index problem, which is equivalent to chain_{n,2}. Cormode et al. proved that the chain_{n,k} problem requires Ω(n/k²) communication, and they used it to prove streaming lower bounds for the approximation of maximum independent sets. Subsequently, Feldman et al. [STOC 2020] used it to prove lower bounds for streaming submodular maximization. However, it is not known whether the Ω(n/k²) lower bound used in these works is optimal for the problem, and in fact, it was conjectured by Cormode et al. that Ω(n) bits are necessary. We prove the optimal lower bound of Ω(n) for chain_{n,k} when k = o(n/log n) as our main result. This settles the open conjecture of Cormode et al., barring the range of k = Ω(n/log n). The main technique is a reduction to a non-standard index problem where the input to the players is such that the answer is biased away from uniform. This biased version of index is analyzed using tools from information theory. As a corollary, we get an improved lower bound for approximation of maximum independent set in vertex arrival streams via a reduction from chain directly.
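The Ω(n) lower bound is the paper's contribution; the matching trivial upper bound of roughly n + k bits is easy to simulate (the generator and helper names here are ours): player 1 sends X₁ in full, player 2 decodes the common bit using σ₁, and every later player just forwards that one bit.

```python
import random

def make_chain_instance(n, k, bit):
    """Generate a chain_{n,k} instance: strings X_1..X_k and indices s_1..s_k
    with X_i[s_i] equal to the same hidden bit for every i."""
    xs, sigmas = [], []
    for _ in range(k):
        x = [random.randint(0, 1) for _ in range(n)]
        s = random.randrange(n)
        x[s] = bit
        xs.append(x)
        sigmas.append(s)
    return xs, sigmas

def trivial_protocol(xs, sigmas):
    """O(n + k)-bit upper bound: player 1 sends X_1 (n bits); player 2,
    holding s_1, reads the common bit X_1[s_1]; players 2..k each forward
    that single bit to the next player."""
    first_message = xs[0]            # the n-bit message from player 1
    return first_message[sigmas[0]]  # the 1 bit relayed down the chain
```

The paper shows this n-bit first message is essentially unavoidable whenever k = o(n/log n).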

Cite as

Janani Sundaresan. Optimal Communication Complexity of Chained Index. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 89:1-89:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{sundaresan:LIPIcs.ITCS.2025.89,
  author =	{Sundaresan, Janani},
  title =	{{Optimal Communication Complexity of Chained Index}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{89:1--89:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.89},
  URN =		{urn:nbn:de:0030-drops-227172},
  doi =		{10.4230/LIPIcs.ITCS.2025.89},
  annote =	{Keywords: communication complexity, index communication problem}
}
Document
Polynomial Size, Short-Circuit Resilient Circuits for NC

Authors: Yael Tauman Kalai and Raghuvansh R. Saxena


Abstract
We show how to convert any circuit of poly-logarithmic depth and polynomial size into a functionally equivalent circuit of polynomial size (and polynomial depth) that is resilient to adversarial short-circuit errors. Specifically, the resulting circuit computes the same function even if up to εd gates on every root-to-leaf path are short-circuited, i.e., each such gate's output is replaced with the value of one of its inputs, where d is the depth of the circuit and ε > 0 is a fixed constant. Previously, such a result was known for formulas (Kalai-Lewko-Rao, FOCS 2012). It was also known how to convert general circuits into error-resilient ones whose size is quasi-polynomial in the size of the original circuit (Efremenko et al., STOC 2022). Neither of these works extends to our setting because there may be many paths from the root to a given gate, and the resilient circuit needs to "remember" a lot of information about these paths, which causes it to be large. Our main idea is to reduce the amount of this information at the cost of increasing the depth of the resilient circuit.
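The error model can be made concrete with a toy evaluator in which a short-circuited gate outputs the value of one of its inputs instead of applying its own function. This sketch illustrates the error model only, not the paper's resilient construction; the representation and names are ours:

```python
# Circuit as a tuple tree: ('AND', l, r), ('OR', l, r), or ('VAR', i).
def evaluate(circ, inputs, faults, path=()):
    """faults maps a gate's position (the path of 0/1 turns from the
    root) to 0 or 1, meaning that gate is short-circuited to its
    left (0) or right (1) input."""
    kind = circ[0]
    if kind == 'VAR':
        return inputs[circ[1]]
    left = evaluate(circ[1], inputs, faults, path + (0,))
    right = evaluate(circ[2], inputs, faults, path + (1,))
    if path in faults:                    # short-circuited gate
        return left if faults[path] == 0 else right
    return left & right if kind == 'AND' else left | right

# AND of two variables: short-circuiting the root gate to its left
# input makes the circuit ignore x1 entirely and flips the output.
circ = ('AND', ('VAR', 0), ('VAR', 1))
honest = evaluate(circ, {0: 1, 1: 0}, faults={})
faulty = evaluate(circ, {0: 1, 1: 0}, faults={(): 0})
assert honest == 0 and faulty == 1
```

A resilient circuit must compute correctly even when an adversary places such faults on up to an ε fraction of the gates along every root-to-leaf path.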

Cite as

Yael Tauman Kalai and Raghuvansh R. Saxena. Polynomial Size, Short-Circuit Resilient Circuits for NC. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 90:1-90:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{taumankalai_et_al:LIPIcs.ITCS.2025.90,
  author =	{Tauman Kalai, Yael and Saxena, Raghuvansh R.},
  title =	{{Polynomial Size, Short-Circuit Resilient Circuits for NC}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{90:1--90:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.90},
  URN =		{urn:nbn:de:0030-drops-227181},
  doi =		{10.4230/LIPIcs.ITCS.2025.90},
  annote =	{Keywords: Error-resilient computation, short-circuit errors}
}
Document
Simple Norm Bounds for Polynomial Random Matrices via Decoupling

Authors: Madhur Tulsiani and June Wu


Abstract
We present a new method for obtaining norm bounds for random matrices in which each entry is a low-degree polynomial in an underlying set of independent real-valued random variables. Such matrices arise in a variety of settings in the analysis of spectral and optimization algorithms, where one must understand the spectrum of a random matrix that depends on data obtained as independent samples. Using ideas of decoupling and linearization from analysis, we give a simple way of expressing norm bounds for such matrices in terms of matrices of lower-degree polynomials corresponding to derivatives. Iterating this method yields a simple bound with an elementary proof, which recovers many bounds that previously required more involved techniques.
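As a toy instance of the object being studied, one can build a matrix whose entries are degree-2 polynomials in independent Rademacher variables and estimate its spectral norm by power iteration; for M = xxᵀ the norm is exactly ∑_i x_i² = n. This pure-Python sketch (helper names are ours) illustrates the object only, not the paper's proof technique:

```python
import random

def spectral_norm(M, iters=200, seed=1):
    """Estimate ||M|| for a symmetric matrix M by power iteration,
    returning the Rayleigh quotient of the final iterate."""
    rng = random.Random(seed)
    n = len(M)
    v = [rng.gauss(0, 1) for _ in range(n)]
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(t * t for t in w) ** 0.5
        v = [t / norm for t in w]
    return sum(v[i] * sum(M[i][j] * v[j] for j in range(n))
               for i in range(n))

# Degree-2 polynomial random matrix: M[i][j] = x_i * x_j with
# independent Rademacher x_i.  M = x x^T is rank one, so ||M|| = n.
rng = random.Random(0)
n = 8
x = [rng.choice([-1, 1]) for _ in range(n)]
M = [[x[i] * x[j] for j in range(n)] for i in range(n)]
assert abs(spectral_norm(M) - n) < 1e-9
```

Already for degree 2, generic such matrices (e.g. with the diagonal removed, or with several variable groups) have norms that require non-trivial concentration arguments; the rank-one case above is just the simplest sanity check.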

Cite as

Madhur Tulsiani and June Wu. Simple Norm Bounds for Polynomial Random Matrices via Decoupling. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 91:1-91:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{tulsiani_et_al:LIPIcs.ITCS.2025.91,
  author =	{Tulsiani, Madhur and Wu, June},
  title =	{{Simple Norm Bounds for Polynomial Random Matrices via Decoupling}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{91:1--91:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.91},
  URN =		{urn:nbn:de:0030-drops-227194},
  doi =		{10.4230/LIPIcs.ITCS.2025.91},
  annote =	{Keywords: Matrix Concentration, Decoupling, Graph Matrices}
}
Document
Listing 6-Cycles in Sparse Graphs

Authors: Virginia Vassilevska Williams and Alek Westover


Abstract
This work considers the problem of output-sensitive listing of occurrences of 2k-cycles, for fixed constant k ≥ 2, in an undirected host graph with m edges and t 2k-cycles. Recent work of Jin and Xu (and independently Abboud, Khoury, Leibowitz, and Safier) [STOC 2023] gives an O(m^{4/3}+t) time algorithm for listing 4-cycles, and recent work by Jin, Vassilevska Williams and Zhou [SOSA 2024] gives an Õ(n²+t) time algorithm for listing 6-cycles in n-node graphs. We focus on resolving the next natural question: obtaining listing algorithms for 6-cycles in the sparse setting, i.e., with running time parameterized by m rather than n. Previously, the best known result was the better of Jin, Vassilevska Williams and Zhou’s Õ(n²+t) algorithm and Alon, Yuster and Zwick’s O(m^{5/3}+t) algorithm. We give an algorithm for listing 6-cycles with running time Õ(m^{1.6}+t). Our algorithm is a natural extension of Dahlgaard, Knudsen and Stöckel’s [STOC 2017] algorithm for detecting a 2k-cycle. Our main technical contribution is the analysis of the algorithm, which involves a type of "supersaturation" lemma relating the number of 2k-cycles in a bipartite graph to the sizes of the parts in the bipartition and the number of edges. We also give a simplified analysis of Dahlgaard, Knudsen and Stöckel’s 2k-cycle detection algorithm (with a small polylogarithmic increase in the running time), which is helpful in analyzing our listing algorithm.
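For contrast with the Õ(m^{1.6}+t) bound, a brute-force lister makes the problem statement concrete. This naive O(n⁶)-time sketch (our own code, not the paper's algorithm) reports each 6-cycle exactly once by reducing all 12 rotations/reflections to one canonical form:

```python
from itertools import permutations

def list_six_cycles(edges, n):
    """Naively list all 6-cycles of an undirected graph on vertex set
    {0, ..., n-1}, one canonical representative per cycle."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    seen = set()
    for p in permutations(range(n), 6):
        if all(adj[p[i]][p[(i + 1) % 6]] for i in range(6)):
            # canonicalize over the 6 rotations and their reversals
            rots = [tuple(p[(i + j) % 6] for j in range(6))
                    for i in range(6)]
            rots += [tuple(reversed(r)) for r in rots]
            seen.add(min(rots))
    return sorted(seen)

# C_6 contains exactly one 6-cycle.
c6 = [(i, (i + 1) % 6) for i in range(6)]
assert len(list_six_cycles(c6, 6)) == 1
# K_4 has too few vertices to contain a (simple) 6-cycle.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
assert len(list_six_cycles(k4, 4)) == 0
```

Output-sensitive algorithms aim to pay roughly this per-cycle cost t only additively, on top of a term depending on m alone.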

Cite as

Virginia Vassilevska Williams and Alek Westover. Listing 6-Cycles in Sparse Graphs. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 92:1-92:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{vassilevskawilliams_et_al:LIPIcs.ITCS.2025.92,
  author =	{Vassilevska Williams, Virginia and Westover, Alek},
  title =	{{Listing 6-Cycles in Sparse Graphs}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{92:1--92:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.92},
  URN =		{urn:nbn:de:0030-drops-227207},
  doi =		{10.4230/LIPIcs.ITCS.2025.92},
  annote =	{Keywords: Graph algorithms, cycles listing, fine-grained complexity, sparse graphs}
}
Document
Polynomials, Divided Differences, and Codes

Authors: S. Venkitesh


Abstract
Multiplicity codes (Kopparty et al., J. ACM 2014) are multivariate polynomial codes whose codewords are described by evaluations of polynomials (with a degree bound) and their derivatives up to some order (the multiplicity parameter), on a suitably chosen affine set of points. While efficient decoding algorithms were known in some special cases of point sets, via a reduction to univariate multiplicity codes, a general algorithm for list decoding up to the distance of the code, when the point set is an arbitrary finite grid, was obtained only recently (Bhandari et al., IEEE TIT 2023). This required the characteristic of the field to be zero or larger than the degree bound, which is somewhat necessary, since list decoding up to the distance with a small output list size is not possible when the characteristic is significantly smaller than the degree. In this work, we present an alternative construction based on divided differences of polynomials that conceptually resembles classical multiplicity codes but has good list decodability "insensitive to the field characteristic". We obtain a simple algorithm that list decodes this code up to the distance, for arbitrary finite grids over all finite fields. Our construction can also be interpreted as a folded Reed-Muller code, which may be of independent interest.
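The classical notion underlying the construction can be illustrated with Newton's divided-difference table over the rationals; the paper works with finite-field analogues (q-derivatives), so this is only the characteristic-zero counterpart, with names of our choosing:

```python
from fractions import Fraction

def divided_differences(xs, ys):
    """Newton's divided-difference table: row r holds the values
    f[x_i, ..., x_{i+r}] for all valid starting indices i, where
    f[x_i, x_{i+1}] = (f(x_{i+1}) - f(x_i)) / (x_{i+1} - x_i), etc."""
    table = [[Fraction(y) for y in ys]]
    for r in range(1, len(xs)):
        prev = table[-1]
        row = [(prev[i + 1] - prev[i]) / (xs[i + r] - xs[i])
               for i in range(len(prev) - 1)]
        table.append(row)
    return table

# For f(x) = x^2 on distinct points, the 2nd divided differences are
# the constant leading coefficient 1, and the 3rd ones vanish -- the
# divided-difference analogue of derivatives of order > deg being 0.
xs = [0, 1, 2, 3]
table = divided_differences(xs, [x * x for x in xs])
assert table[2] == [1, 1]
assert table[3] == [0]
```

Divided differences behave like derivatives (they detect the degree, satisfy a Leibniz-type calculus) while avoiding division by the characteristic, which is the intuition for characteristic-insensitive list decoding.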

Cite as

S. Venkitesh. Polynomials, Divided Differences, and Codes. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 93:1-93:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{venkitesh:LIPIcs.ITCS.2025.93,
  author =	{Venkitesh, S.},
  title =	{{Polynomials, Divided Differences, and Codes}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{93:1--93:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.93},
  URN =		{urn:nbn:de:0030-drops-227216},
  doi =		{10.4230/LIPIcs.ITCS.2025.93},
  annote =	{Keywords: Error-correcting code, polynomial code, Reed-Solomon code, Reed-Muller code, folded Reed-Solomon code, folded Reed-Muller code, multiplicity code, divided difference, q-derivative, polynomial method, list decoding, list decoding capacity, linear algebraic list decoding}
}
Document
New Direct Sum Tests

Authors: Alek Westover, Edward Yu, and Kai Zhe Zheng


Abstract
A function f:[n]^{d} → 𝔽₂ is a direct sum if there are functions L_i:[n] → 𝔽₂ such that f(x) = ∑_i L_i(x_i). In this work we give multiple results related to the property testing of direct sums. Our first result concerns a test proposed by Dinur and Golubev in [Dinur and Golubev, 2019]. We call their test the Diamond test and show that it is indeed a direct sum tester. More specifically, we show that if a function f is ε-far from being a direct sum, then the Diamond test rejects f with probability at least Ω_{n,ε}(1). Even in the case of n = 2, the Diamond test is, to the best of our knowledge, novel and yields a new tester for the classic property of affinity. Apart from the Diamond test, we also analyze a broad family of direct sum tests which, at a high level, run an arbitrary affinity test on the restriction of f to a random hypercube inside [n]^d. This family includes the direct sum test analyzed in [Dinur and Golubev, 2019], but does not include the Diamond test. As an application of our result, we obtain a direct sum test that works in the online adversary model of [Iden Kalemaj et al., 2022]. Finally, we discuss a Fourier-analytic interpretation of the Diamond tester in the n = 2 case, and prove local correction results for direct sums, as conjectured by [Dinur and Golubev, 2019].
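One hybrid-style 4-query check conveys why direct sums are testable: mixing the coordinates of two points x, y along a random subset S makes each coordinate's contribution cancel in pairs over 𝔽₂. This is an illustrative check of our own, not necessarily any of the tests analyzed in the paper (in particular, the Diamond test queries different points):

```python
from itertools import product

def hybrid_check_passes(f, x, y, S, d):
    """z takes x on S and y elsewhere; w the reverse.  For any direct
    sum, each L_i(x_i) and L_i(y_i) appears exactly twice across the
    four queries, so f(x) + f(y) + f(z) + f(w) = 0 over F_2."""
    z = tuple(x[i] if i in S else y[i] for i in range(d))
    w = tuple(y[i] if i in S else x[i] for i in range(d))
    return (f(x) ^ f(y) ^ f(z) ^ f(w)) == 0

d, n = 2, 2
points = list(product(range(n), repeat=d))
subsets = [set(s) for s in [(), (0,), (1,), (0, 1)]]

# A genuine direct sum f(x) = L_0(x_0) + L_1(x_1) passes every check...
L = [{0: 0, 1: 1}, {0: 1, 1: 1}]
direct_sum = lambda x: L[0][x[0]] ^ L[1][x[1]]
assert all(hybrid_check_passes(direct_sum, x, y, S, d)
           for x in points for y in points for S in subsets)

# ...while AND, which is not a direct sum, fails some check.
f_and = lambda x: x[0] & x[1]
assert not all(hybrid_check_passes(f_and, x, y, S, d)
               for x in points for y in points for S in subsets)
```

The hard part, which the paper addresses, is the converse: showing that functions far from every direct sum are rejected with noticeable probability.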

Cite as

Alek Westover, Edward Yu, and Kai Zhe Zheng. New Direct Sum Tests. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 94:1-94:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{westover_et_al:LIPIcs.ITCS.2025.94,
  author =	{Westover, Alek and Yu, Edward and Zheng, Kai Zhe},
  title =	{{New Direct Sum Tests}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{94:1--94:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.94},
  URN =		{urn:nbn:de:0030-drops-227229},
  doi =		{10.4230/LIPIcs.ITCS.2025.94},
  annote =	{Keywords: Linearity testing, Direct sum, Grids}
}
Document
Toward Separating QMA from QCMA with a Classical Oracle

Authors: Mark Zhandry


Abstract
QMA is the class of languages that can be decided by an efficient quantum verifier given a quantum witness, whereas QCMA is the class of such languages where the efficient quantum verifier is only given a classical witness. A challenging fundamental goal in quantum query complexity is to find a classical oracle separation for these classes. In this work, we offer a new approach toward proving such a separation that is qualitatively different from prior work, and we show that our approach is sound assuming a natural statistical conjecture, which may have other applications to quantum query complexity lower bounds.

Cite as

Mark Zhandry. Toward Separating QMA from QCMA with a Classical Oracle. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 95:1-95:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{zhandry:LIPIcs.ITCS.2025.95,
  author =	{Zhandry, Mark},
  title =	{{Toward Separating QMA from QCMA with a Classical Oracle}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{95:1--95:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.95},
  URN =		{urn:nbn:de:0030-drops-227230},
  doi =		{10.4230/LIPIcs.ITCS.2025.95},
  annote =	{Keywords: Quantum, Oracle Separations, QMA, QCMA}
}
Document
Formulations and Constructions of Remote State Preparation with Verifiability, with Applications

Authors: Jiayu Zhang


Abstract
Remote state preparation with verifiability (RSPV) is an important quantum cryptographic primitive [Alexandru Gheorghiu and Thomas Vidick, 2019; Jiayu Zhang, 2022]. In this primitive, a client would like to prepare a quantum state (sampled or chosen from a state family) on the server side, such that, ideally, the client knows its full description while the server holds, and only holds, the state itself. In this work we make several contributions to its formulations, constructions and applications. In more detail: - We first work on the definitions and abstract properties of the RSPV problem. We select and compare different variants of definitions [Bennett et al., 2001; Alexandru Gheorghiu and Thomas Vidick, 2019; Jiayu Zhang, 2022; Alexandru Gheorghiu et al., 2022], and study their basic properties (like composability and amplification). - We also study the closely related question of how to certify the server’s operations (instead of solely its states). We introduce a new notion named remote operator application with verifiability (ROAV). We compare this notion with related existing definitions [Summers and Werner, 1987; Dominic Mayers and Andrew Chi-Chih Yao, 2004; Zhengfeng Ji et al., 2021; Tony Metger and Thomas Vidick, 2021; Anand Natarajan and Tina Zhang, 2023], study its abstract properties, and leave its concrete constructions to future work. - Building on these abstract properties and existing results [Zvika Brakerski et al., 2023], we construct a series of new RSPV protocols. Our constructions not only simplify existing results [Alexandru Gheorghiu and Thomas Vidick, 2019] but also cover new state families, for example, states of the form (1/√2)(|0⟩|x_0⟩ + |1⟩|x_1⟩). All these constructions rely only on the existence of weak NTCF [Zvika Brakerski et al., 2020; Navid Alamati et al., 2022], without additional requirements like the adaptive hardcore bit property [Zvika Brakerski et al., 2018; Navid Alamati et al., 2022]. 
- As a further application, we show that the classical verification of quantum computations (CVQC) problem [Dorit Aharonov et al., 2010; Urmila Mahadev, 2018] can be constructed from assumptions on group actions [Navid Alamati et al., 2020]. This is achieved by combining our results on RSPV with a group-action-based instantiation of weak NTCF [Navid Alamati et al., 2022], and then with the quantum-gadget-assisted quantum verification protocol [Ferracin et al., 2018].
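The two-branch state family mentioned above can be written down explicitly as an amplitude map. This bookkeeping sketch assumes distinct m-bit strings x_0 ≠ x_1 (as for the two preimages of a claw-free function) and uses no quantum library; the helper name is ours:

```python
from math import sqrt

def two_branch_state(x0, x1):
    """Amplitude map of (|0>|x0> + |1>|x1>)/sqrt(2), where x0 != x1
    are tuples of bits.  Keys are basis states (control bit followed
    by the register bits); values are real amplitudes."""
    amp = 1 / sqrt(2)
    return {(0,) + tuple(x0): amp, (1,) + tuple(x1): amp}

state = two_branch_state((1, 0, 1), (0, 1, 1))
# Normalized, and the first (control) qubit labels which string follows.
assert abs(sum(a * a for a in state.values()) - 1.0) < 1e-12
assert state[(0, 1, 0, 1)] == state[(1, 0, 1, 1)]
```

In the RSPV setting, the point is that a client can come to know the full description (x_0, x_1) of such a state while a potentially malicious server is forced to actually hold it.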

Cite as

Jiayu Zhang. Formulations and Constructions of Remote State Preparation with Verifiability, with Applications. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 96:1-96:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{zhang:LIPIcs.ITCS.2025.96,
  author =	{Zhang, Jiayu},
  title =	{{Formulations and Constructions of Remote State Preparation with Verifiability, with Applications}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{96:1--96:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.96},
  URN =		{urn:nbn:de:0030-drops-227245},
  doi =		{10.4230/LIPIcs.ITCS.2025.96},
  annote =	{Keywords: Quantum Cryptography, Remote State Preparation, Self-testing, Verification of Quantum Computations}
}
