LIPIcs, Volume 362

17th Innovations in Theoretical Computer Science Conference (ITCS 2026)





Editor

Shubhangi Saraf
  • University of Toronto, CA

Publication Details

  • published at: 2026-01-23
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-410-9

Documents

Document
Complete Volume
LIPIcs, Volume 362, ITCS 2026, Complete Volume

Authors: Shubhangi Saraf


Abstract
LIPIcs, Volume 362, ITCS 2026, Complete Volume

Cite as

17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 1-2406, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@Proceedings{saraf:LIPIcs.ITCS.2026,
  title =	{{LIPIcs, Volume 362, ITCS 2026, Complete Volume}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{1--2406},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026},
  URN =		{urn:nbn:de:0030-drops-254076},
  doi =		{10.4230/LIPIcs.ITCS.2026},
  annote =	{Keywords: LIPIcs, Volume 362, ITCS 2026, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Shubhangi Saraf


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 0:i-0:xxiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{saraf:LIPIcs.ITCS.2026.0,
  author =	{Saraf, Shubhangi},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{0:i--0:xxiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.0},
  URN =		{urn:nbn:de:0030-drops-254060},
  doi =		{10.4230/LIPIcs.ITCS.2026.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Triangle Detection in H-Free Graphs

Authors: Amir Abboud, Ron Safier, and Nathan Wallheimer


Abstract
We initiate the study of combinatorial algorithms for Triangle Detection in H-free graphs. The goal is to decide if a graph that forbids a fixed pattern H as a subgraph contains a triangle, using only "combinatorial" methods that notably exclude fast matrix multiplication. Our work aims to classify which patterns admit a subcubic speedup, working towards a dichotomy theorem. On the lower bound side, we show that if H is not 3-colorable or contains more than one triangle, the complexity of the problem remains unchanged, and no combinatorial speedup is likely possible. On the upper bound side, we develop an embedding approach that results in a strongly subcubic, combinatorial algorithm for a rich class of "embeddable" patterns. Specifically, for an embeddable pattern of size k, our algorithm runs in Õ(n^{3-1/(2^{k-3})}) time, where Õ(⋅) hides poly-logarithmic factors. This algorithm also extends to listing all the triangles within the same time bound. We supplement this main result with two generalizations:
- A generalization to patterns that are embeddable up to a single obstacle that arises from a triangle in the pattern. This completes our classification for small patterns, yielding a dichotomy theorem for all patterns of size up to eight.
- An H-sensitive algorithm for embeddable patterns, which runs faster when the number of copies of H is significantly smaller than the maximum possible Ω(n^{k}).
Finally, we focus on the special case of odd cycles. We present specialized Triangle Detection algorithms that are very efficient:
- A combinatorial algorithm for C_{2k+1}-free graphs that runs in Õ(m+n^{1+2/k}) time for every k ≥ 2, where m is the number of edges in the graph.
- A combinatorial C₅-sensitive algorithm that runs in Õ(n² + n^{4/3} t^{1/3}) time, where t is the number of 5-cycles in the graph.
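For reference, the cubic combinatorial baseline that such subcubic algorithms are measured against can be sketched as follows. This is only a generic brute-force check (the function name and setup are illustrative, not from the paper), not the embedding-based algorithm described in the abstract.

```python
from itertools import combinations

def has_triangle(n, edges):
    """Naive O(n^3) combinatorial triangle detection:
    test every vertex triple against the adjacency matrix."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    return any(adj[a][b] and adj[b][c] and adj[a][c]
               for a, b, c in combinations(range(n), 3))

# A 4-cycle is triangle-free; adding a chord creates a triangle.
assert not has_triangle(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert has_triangle(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
```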

Cite as

Amir Abboud, Ron Safier, and Nathan Wallheimer. Triangle Detection in H-Free Graphs. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{abboud_et_al:LIPIcs.ITCS.2026.1,
  author =	{Abboud, Amir and Safier, Ron and Wallheimer, Nathan},
  title =	{{Triangle Detection in H-Free Graphs}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.1},
  URN =		{urn:nbn:de:0030-drops-252885},
  doi =		{10.4230/LIPIcs.ITCS.2026.1},
  annote =	{Keywords: fine-grained complexity, triangle detection, H-free graphs}
}
Oracle Separations for the Quantum-Classical Polynomial Hierarchy

Authors: Avantika Agarwal and Shalev Ben-David


Abstract
We study the quantum-classical polynomial hierarchy, QCPH, which is the class of languages solvable by a constant number of alternating classical quantifiers followed by a quantum verifier. Our main result is that QCPH is infinite relative to a random oracle (previously, this was not even known relative to any oracle). We further prove that higher levels of PH are not contained in lower levels of QCPH relative to a random oracle; this is a strengthening of the somewhat recent result that PH is infinite relative to a random oracle (Rossman, Servedio, and Tan 2016). The oracle separation requires lower bounding a certain type of low-depth alternating circuit with some quantum gates. To establish this, we give a new switching lemma for quantum algorithms which may be of independent interest. Our lemma says that for any d, if we apply a random restriction to a function f with quantum query complexity Q(f) ≤ n^{1/3}, the restricted function becomes exponentially close (in terms of d) to a depth-d decision tree. Our switching lemma works even in a "worst-case" sense, in that only the indices to be restricted are random; the values they are restricted to are chosen adversarially. Moreover, the switching lemma also works for polynomial degree in place of quantum query complexity.
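As a purely classical toy illustration of the restriction operation the lemma refers to (the helper `random_restriction` is hypothetical and sketches only the definition: random positions, adversarially chosen values; none of the quantum content is captured here):

```python
import random

def random_restriction(f, n, keep_prob, adversary):
    """Each index survives independently with probability keep_prob;
    the values of the fixed indices are chosen by `adversary`
    (worst-case, matching the lemma's "worst-case" flavor).
    Returns the restricted function on the surviving indices."""
    free = [i for i in range(n) if random.random() < keep_prob]
    fixed = {i: adversary(i) for i in range(n) if i not in free}
    def g(bits_on_free):
        x = fixed.copy()
        x.update(zip(free, bits_on_free))
        return f([x[i] for i in range(n)])
    return g, free

# Example: parity on 8 bits, adversary fixes every restricted bit to 1.
parity = lambda xs: sum(xs) % 2
g, free = random_restriction(parity, 8, 0.5, lambda i: 1)
# A restriction of parity is a (possibly negated) parity on the free bits.
assert g([0] * len(free)) == (8 - len(free)) % 2
```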

Cite as

Avantika Agarwal and Shalev Ben-David. Oracle Separations for the Quantum-Classical Polynomial Hierarchy. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 2:1-2:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{agarwal_et_al:LIPIcs.ITCS.2026.2,
  author =	{Agarwal, Avantika and Ben-David, Shalev},
  title =	{{Oracle Separations for the Quantum-Classical Polynomial Hierarchy}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{2:1--2:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.2},
  URN =		{urn:nbn:de:0030-drops-252893},
  doi =		{10.4230/LIPIcs.ITCS.2026.2},
  annote =	{Keywords: Switching Lemma, Polynomial Hierarchy, Approximate Degree, Random Oracles, Query Complexity, Quantum Computing}
}
Linear Matroid Intersection Is in Catalytic Logspace

Authors: Aryan Agarwala, Yaroslav Alekseev, and Antoine Vinciguerra


Abstract
Linear matroid intersection is an important problem in combinatorial optimization. Given two linear matroids over the same ground set, the linear matroid intersection problem asks for a common independent set of maximum size. The deep interest in linear matroid intersection is due to the fact that it generalises many classical problems in theoretical computer science, such as bipartite matching, edge-disjoint spanning trees, rainbow spanning tree, and many more. We study this problem in the model of catalytic computation: space-bounded machines are granted access to catalytic space, additional working memory that is full of arbitrary data which must be preserved at the end of the computation. Although linear matroid intersection has had a polynomial-time algorithm for over 50 years, it remains an important open problem to show that linear matroid intersection belongs to any well-studied subclass of P. We address this problem for the class catalytic logspace (CL) with a polynomial time bound (CLP). Recently, Agarwala and Mertz (2025) showed that bipartite maximum matching can be computed in the class CLP ⊆ P. This was the first subclass of P shown to contain bipartite matching, and additionally the first problem outside TC¹ shown to be contained in CL. We significantly improve the result of Agarwala and Mertz by showing that linear matroid intersection can be computed in CLP.

Cite as

Aryan Agarwala, Yaroslav Alekseev, and Antoine Vinciguerra. Linear Matroid Intersection Is in Catalytic Logspace. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 3:1-3:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{agarwala_et_al:LIPIcs.ITCS.2026.3,
  author =	{Agarwala, Aryan and Alekseev, Yaroslav and Vinciguerra, Antoine},
  title =	{{Linear Matroid Intersection Is in Catalytic Logspace}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{3:1--3:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.3},
  URN =		{urn:nbn:de:0030-drops-252908},
  doi =		{10.4230/LIPIcs.ITCS.2026.3},
  annote =	{Keywords: Catalytic Computing, Computational Complexity, Matroid Theory, Algorithms}
}
Pseudodeterministic Algorithms for Minimum Cut Problems

Authors: Aryan Agarwala and Nithin Varma


Abstract
In this paper we present efficient pseudodeterministic algorithms for both the global minimum cut and minimum s-t cut problems. The running time of our algorithm for the global minimum cut problem is asymptotically better than that of the fastest sequential deterministic global minimum cut algorithm (Henzinger, Li, Rao, Wang; SODA 2024). Furthermore, we implement our algorithm in the streaming, PRAM, and cut-query models, where no efficient deterministic global minimum cut algorithms are known.

Cite as

Aryan Agarwala and Nithin Varma. Pseudodeterministic Algorithms for Minimum Cut Problems. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 4:1-4:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{agarwala_et_al:LIPIcs.ITCS.2026.4,
  author =	{Agarwala, Aryan and Varma, Nithin},
  title =	{{Pseudodeterministic Algorithms for Minimum Cut Problems}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{4:1--4:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.4},
  URN =		{urn:nbn:de:0030-drops-252917},
  doi =		{10.4230/LIPIcs.ITCS.2026.4},
  annote =	{Keywords: Minimum Cut, Pseudodeterministic Algorithms}
}
Maximum-Flow and Minimum-Cut Sensitivity Oracles for Directed Graphs

Authors: Mridul Ahi, Keerti Choudhary, Shlok Pande, Pushpraj, and Lakshay Saggi


Abstract
This paper addresses the problem of designing fault-tolerant data structures for the (s,t)-max-flow and (s,t)-min-cut problems in unweighted directed graphs. Given a directed graph G = (V, E) with a designated source s, sink t, and an (s,t)-max-flow of value λ, we present constructions for max-flow and min-cut sensitivity oracles, and introduce the concept of a fault-tolerant flow family, which may be of independent interest. Our main contributions are as follows.
1) Fault-Tolerant Flow Family: We construct a family ℬ of 2λ+1 (s,t)-flows such that for every edge e, ℬ contains an (s,t)-max-flow of G-e. This covering property is tight up to constants for single failures and provably cannot extend to comparably small families for k ≥ 2, where we show an Ω(n) lower bound on the family size, independent of λ.
2) Max-Flow Sensitivity Oracle: Using the fault-tolerant flow family, we construct a single as well as dual-edge sensitivity oracle for (s,t)-max-flow that requires only O(λ n) space. Given any set F of up to two failing edges, the oracle reports the updated max-flow value in G-F in O(n) time. Additionally, for the single-failure case, the oracle can determine in constant time whether the flow through an edge x changes when another edge e fails.
3) Min-Cut Sensitivity Oracle for Dual Failures: Recently, Baswana et al. (ICALP’22) designed an O(n²)-sized oracle for answering (s,t)-min-cut size queries under dual edge failures in constant time, along with a matching lower bound. We extend this by focusing on graphs with small min-cut value λ, and present a more compact oracle of size O(λ n) that answers such min-cut size queries in constant time and reports the corresponding (s,t)-min-cut partition in O(n) time. We also show that the space complexity of our oracle is asymptotically optimal in this setting.
4) Min-Cut Sensitivity Oracle for Multiple Failures: We extend our results to the general case of k edge failures. For any graph with an (s,t)-min-cut of size λ, we construct a k-fault-tolerant min-cut oracle with space complexity O_{λ,k}(n log n) that answers min-cut size queries in O_{λ,k}(log n) time. This also leads to improved fault-tolerant (s,t)-reachability oracles, achieving O(n log n) space and O(log n) query time for up to k = O(1) edge failures.

Cite as

Mridul Ahi, Keerti Choudhary, Shlok Pande, Pushpraj, and Lakshay Saggi. Maximum-Flow and Minimum-Cut Sensitivity Oracles for Directed Graphs. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 5:1-5:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{ahi_et_al:LIPIcs.ITCS.2026.5,
  author =	{Ahi, Mridul and Choudhary, Keerti and Pande, Shlok and Pushpraj and Saggi, Lakshay},
  title =	{{Maximum-Flow and Minimum-Cut Sensitivity Oracles for Directed Graphs}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{5:1--5:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.5},
  URN =		{urn:nbn:de:0030-drops-252920},
  doi =		{10.4230/LIPIcs.ITCS.2026.5},
  annote =	{Keywords: Fault tolerance, Data structures, Minimum cuts, Maximum flows}
}
Model-Generic Incrementally Verifiable Computation from Updatable BARGs

Authors: Eden Aldema Tshuva and Rotem Oshman


Abstract
Incrementally verifiable computation (IVC) is a computationally sound proof system that allows a prover to certify the correctness of a long or ongoing computation in an incremental manner, by repeatedly updating a proof certifying the computation so far. Updating the proof does not require access to the entire trace of the computation, which makes the IVC-prover memory efficient. Recently, such schemes were constructed for deterministic Turing machines from standard cryptographic assumptions (Paneth and Pass, FOCS 2022, and Devadas et al., FOCS 2022). In this work we generalize and extend IVC to support incremental certification and verifiability of a large set of computation models, focusing on distributed and online computation. This allows distributed algorithms to efficiently certify their own execution using low memory and communication overhead. We construct IVC for a variety of computation models by proving one generic lifting theorem from a classical (non-incremental) delegation scheme (also known as SNARG) into full-fledged IVC, while preserving the delegation scheme’s succinctness properties (up to an additive factor which is polynomial in the security parameter and independent of the size of the computation). Using this lifting theorem, we obtain IVC for the following computation models:
- RAM and exclusive-read exclusive-write (EREW) PRAM algorithms, using existing delegation schemes for these models.
- Streaming algorithms, using the natural memory-efficiency properties of the model.
- Massively parallel computation (MPC). Notably, in this model, memory efficiency is a critical bottleneck: the machines participating in an MPC algorithm usually cannot store the entire trace of their computation. Thus, certifying MPC algorithms naturally benefits from IVC. Moreover, since prior to our work no delegation scheme for this model was known, we also construct a delegation scheme for one-round massively parallel computations, and then apply our lifting theorem to it.
- Distributed graph algorithms, using existing distributed delegation schemes (also known as locally verifiable distributed SNARGs). Here, in order to use our lifting theorem we have to first make some observations about the verification procedure of these existing schemes.
At the heart of this work is a new abstraction, updatable batch arguments for NP (UpBARGs), which we define and construct. Standard BARGs allow one to prove a batch of k NP-statements using a proof whose length barely grows with k; however, the statements and their witnesses must all be known in advance. In contrast, UpBARGs support adding statements and witnesses on the fly, making them a flexible tool for constructing IVC across different computational models.

Cite as

Eden Aldema Tshuva and Rotem Oshman. Model-Generic Incrementally Verifiable Computation from Updatable BARGs. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 6:1-6:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{aldematshuva_et_al:LIPIcs.ITCS.2026.6,
  author =	{Aldema Tshuva, Eden and Oshman, Rotem},
  title =	{{Model-Generic Incrementally Verifiable Computation from Updatable BARGs}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{6:1--6:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.6},
  URN =		{urn:nbn:de:0030-drops-252931},
  doi =		{10.4230/LIPIcs.ITCS.2026.6},
  annote =	{Keywords: incrementally verifiable computation, massively parallel computation, streaming, parallel RAM, batch arguments, SNARG}
}
An Unholy Trinity: TFNP, Polynomial Systems, and the Quantum Satisfiability Problem

Authors: Marco Aldi, Sevag Gharibian, and Dorian Rudolph


Abstract
The theory of Total Function NP (TFNP) and its subclasses says that, even if one is promised an efficiently verifiable proof exists for a problem, finding this proof can be intractable. Despite the success of the theory at showing intractability of problems such as computing Brouwer fixed points and Nash equilibria, subclasses of TFNP remain arguably few and far between. In this work, we define two new subclasses of TFNP borne of the study of complex polynomial systems: Multi-homogeneous Systems (MHS) and Sparse Fundamental Theorem of Algebra (SFTA). The first of these is based on Bézout’s theorem from algebraic geometry, marking the first TFNP subclass based on an algebraic geometric principle. At the heart of our study is the computational problem known as Quantum SAT (QSAT) with a System of Distinct Representatives (SDR), first studied by [Laumann, Läuchli, Moessner, Scardicchio, and Sondhi 2010]. Among other results, we show that QSAT with SDR is MHS-complete, thus giving not only the first link between quantum complexity theory and TFNP, but also the first TFNP problem whose classical variant (SAT with SDR) is easy but whose quantum variant is hard. We also show how to embed the roots of a sparse, high-degree, univariate polynomial into QSAT with SDR, obtaining that SFTA is contained in a zero-error version of MHS. We conjecture this construction also works in the low-error setting, which would imply SFTA ⊆ MHS.

Cite as

Marco Aldi, Sevag Gharibian, and Dorian Rudolph. An Unholy Trinity: TFNP, Polynomial Systems, and the Quantum Satisfiability Problem. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{aldi_et_al:LIPIcs.ITCS.2026.7,
  author =	{Aldi, Marco and Gharibian, Sevag and Rudolph, Dorian},
  title =	{{An Unholy Trinity: TFNP, Polynomial Systems, and the Quantum Satisfiability Problem}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.7},
  URN =		{urn:nbn:de:0030-drops-252946},
  doi =		{10.4230/LIPIcs.ITCS.2026.7},
  annote =	{Keywords: quantum complexity theory, Quantum Merlin Arthur (QMA), Quantum Satisfiability Problem (QSAT), total function NP (TFNP)}
}
Intersection Theorems: A Potential Approach to Proof Complexity Lower Bounds

Authors: Yaroslav Alekseev and Nikita Gaevoy


Abstract
Recently, Göös et al. [Göös et al., 2024] showed that Res ⋏ uSA = RevRes in the following sense: if a formula φ has refutations of size at most s and width/degree at most w in both Res and uSA, then there is a refutation for φ of size at most poly(s ⋅ 2^w) in RevRes. Their proof relies on the TFNP characterization of the aforementioned proof systems. In our work, we give a direct and simplified proof of this result, simultaneously achieving better bounds: we show that if for a formula φ there are refutations of size at most s in both Res and uSA, then there is a refutation of φ of size at most poly(s) in RevRes. This potentially allows us to "lift" size lower bounds from RevRes to Res for the formulas for which there are upper bounds in uSA. This kind of lifting was not possible before because of the exponential blow-up in size from the width. Similarly, we improve the bounds in another intersection theorem from [Göös et al., 2024] by giving a direct proof of Res ⋏ uNS = RevResT. Finally, we generalize those intersection theorems to some proof systems for which we currently do not have a TFNP characterization. For example, we show that Res(⊕) ⋏ u-wRes(⊕) = RevRes(⊕), which effectively allows us to reduce the problem of proving Pigeonhole Principle lower bounds in Res(⊕) to proving Pigeonhole Principle lower bounds in RevRes(⊕), a potentially weaker proof system.

Cite as

Yaroslav Alekseev and Nikita Gaevoy. Intersection Theorems: A Potential Approach to Proof Complexity Lower Bounds. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{alekseev_et_al:LIPIcs.ITCS.2026.8,
  author =	{Alekseev, Yaroslav and Gaevoy, Nikita},
  title =	{{Intersection Theorems: A Potential Approach to Proof Complexity Lower Bounds}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.8},
  URN =		{urn:nbn:de:0030-drops-252953},
  doi =		{10.4230/LIPIcs.ITCS.2026.8},
  annote =	{Keywords: proof complexity, intersection theorems}
}
On Closure Properties of Read-Once Oblivious Algebraic Branching Programs

Authors: Robert Andrews, Jules Armand, Prateek Dwivedi, Magnus Rahbek Dalgaard Hansen, Nutan Limaye, Srikanth Srinivasan, and Sébastien Tavenas


Abstract
We investigate the closure properties of read-once oblivious algebraic branching programs (roABPs) under various natural algebraic operations and prove the following.
- Non-closure under factoring: There is a sequence of explicit polynomials (f_n(x₁,…, x_n))_n that have poly(n)-sized roABPs such that some irreducible factor of f_n requires roABPs of superpolynomial size in any order.
- Non-closure under powering: There is a sequence of polynomials (f_n(x₁,…, x_n))_n with poly(n)-sized roABPs such that any super-constant power of f_n does not have roABPs of polynomial size in any order (and f_nⁿ requires exponential size in any order).
- Non-closure under symmetric operations: There are symmetric polynomials (f_n(e₁,…, e_n))_n that have roABPs of polynomial size such that f_n(x₁,…, x_n) do not have roABPs of subexponential size. (Here, e₁,…, e_n denote the elementary symmetric polynomials in n variables.)
These results should be viewed in light of known results on models such as algebraic circuits, (general) algebraic branching programs, formulas, and constant-depth circuits, all of which are known to be closed under these operations. To prove non-closure under factoring, we construct hard polynomials based on expander graphs, using gadgets that lift their hardness from sparse polynomials to roABPs. For symmetric compositions, we show that the circulant polynomial requires roABPs of exponential size in every variable order.

Cite as

Robert Andrews, Jules Armand, Prateek Dwivedi, Magnus Rahbek Dalgaard Hansen, Nutan Limaye, Srikanth Srinivasan, and Sébastien Tavenas. On Closure Properties of Read-Once Oblivious Algebraic Branching Programs. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{andrews_et_al:LIPIcs.ITCS.2026.9,
  author =	{Andrews, Robert and Armand, Jules and Dwivedi, Prateek and Hansen, Magnus Rahbek Dalgaard and Limaye, Nutan and Srinivasan, Srikanth and Tavenas, S\'{e}bastien},
  title =	{{On Closure Properties of Read-Once Oblivious Algebraic Branching Programs}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{9:1--9:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.9},
  URN =		{urn:nbn:de:0030-drops-252964},
  doi =		{10.4230/LIPIcs.ITCS.2026.9},
  annote =	{Keywords: Factoring, Closure Properties, Sparsity Bounds, Symmetric Polynomials, roABP, Expander Graphs}
}
On the Complexity of Unique Quantum Witnesses and Quantum Approximate Counting

Authors: Anurag Anshu, Jonas Haferkamp, Yeongwoo Hwang, and Quynh T. Nguyen


Abstract
We study the long-standing open question on the power of unique witnesses in quantum protocols, which asks if UniqueQMA, a variant of QMA whose accepting witness space is 1-dimensional, contains QMA under quantum reductions. This work rules out any black-box reduction from QMA to UniqueQMA by showing a quantum oracle separation between BQP^UniqueQMA and QMA. This provides a contrast to the classical case, where the Valiant-Vazirani theorem shows a black-box randomized reduction from UniqueNP to NP, and suggests the need for studying the structure of the ground space of local Hamiltonians in distilling a potential unique witness. Via similar techniques, we show, relative to a quantum oracle, that QMA^QMA cannot decide quantum approximate counting, ruling out a quantum analogue of Stockmeyer’s algorithm in the black-box setting. Our results employ a subspace reflection oracle, previously considered in [Scott Aaronson and Greg Kuperberg, 2007; Scott Aaronson et al., 2020; She and Yuen, 2023], but we introduce new tools which allow us to exploit the unique witness constraint. We also show a strong "polarization" behavior of QMA circuits, which could be of independent interest in studying quantum polynomial hierarchies. We then ask a natural question: what structural properties of the local Hamiltonian problem can we exploit? We introduce a physically motivated candidate by showing that the ground energy of local Hamiltonians that satisfy a computational variant of the eigenstate thermalization hypothesis (ETH) can be estimated through a UniqueQMA protocol. Our protocol can be viewed as a quantum expander test in a low-energy subspace of the Hamiltonian and verifies a unique entangled state across two copies of the subspace. This allows us to conclude that if UniqueQMA is not equivalent to QMA, then QMA-hard Hamiltonians must violate ETH under adversarial perturbations (more accurately, further assuming the quantum PCP conjecture if ETH only applies to extensive energy subspaces). Under the same assumption, this also serves as evidence that chaotic local Hamiltonians, such as the SYK model, may be computationally simpler than general local Hamiltonians.

Cite as

Anurag Anshu, Jonas Haferkamp, Yeongwoo Hwang, and Quynh T. Nguyen. On the Complexity of Unique Quantum Witnesses and Quantum Approximate Counting. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{anshu_et_al:LIPIcs.ITCS.2026.10,
  author =	{Anshu, Anurag and Haferkamp, Jonas and Hwang, Yeongwoo and Nguyen, Quynh T.},
  title =	{{On the Complexity of Unique Quantum Witnesses and Quantum Approximate Counting}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.10},
  URN =		{urn:nbn:de:0030-drops-252978},
  doi =		{10.4230/LIPIcs.ITCS.2026.10},
  annote =	{Keywords: Quantum complexity, approximate counting, Valiant-Vazirani, eigenstate thermalization hypothesis}
}
Document
Classical and Quantum Polynomial Freiman-Ruzsa Algorithms

Authors: Srinivasan Arunachalam, Davi Castro-Silva, Arkopal Dutt, and Tom Gur


Abstract
We prove algorithmic versions of the polynomial Freiman-Ruzsa theorem of Gowers, Green, Manners, and Tao (Annals of Mathematics, 2025) in additive combinatorics. In particular, we give classical and quantum polynomial-time algorithms that, for A ⊆ 𝔽₂ⁿ with doubling constant K, learn an explicit description of a subspace V ⊆ 𝔽₂ⁿ of size |V| ≤ |A| such that A can be covered by K^C translates of V, for a universal constant C > 1.
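As a small illustration of the doubling-constant terminology used in the abstract above, the following sketch computes |A+A|/|A| for a subset of 𝔽₂ⁿ (where addition is bitwise XOR); a set A has doubling constant K when |A+A| ≤ K|A|. The function name `doubling_constant` is ours, not from the paper.

```python
from itertools import product

def doubling_constant(A):
    """Ratio |A+A| / |A| for A a subset of F_2^n, elements given as ints.

    In F_2^n, vector addition is bitwise XOR, so the sumset A + A
    is the set of all pairwise XORs.
    """
    sumset = {a ^ b for a, b in product(A, repeat=2)}
    return len(sumset) / len(A)

# A subspace has doubling constant exactly 1, since it is closed
# under addition: here V = span{001, 010} inside F_2^3.
V = {0b000, 0b001, 0b010, 0b011}
print(doubling_constant(V))  # 1.0

# A generic set is typically not closed under XOR, so its doubling
# constant exceeds 1.
A = {0b000, 0b001, 0b100, 0b111}
print(doubling_constant(A))  # 1.75
```

The theorem's regime is small constant K: the algorithms then output a subspace V with |V| ≤ |A| covering A by K^C translates.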

Cite as

Srinivasan Arunachalam, Davi Castro-Silva, Arkopal Dutt, and Tom Gur. Classical and Quantum Polynomial Freiman-Ruzsa Algorithms. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 11:1-11:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{arunachalam_et_al:LIPIcs.ITCS.2026.11,
  author =	{Arunachalam, Srinivasan and Castro-Silva, Davi and Dutt, Arkopal and Gur, Tom},
  title =	{{Classical and Quantum Polynomial Freiman-Ruzsa Algorithms}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{11:1--11:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.11},
  URN =		{urn:nbn:de:0030-drops-252987},
  doi =		{10.4230/LIPIcs.ITCS.2026.11},
  annote =	{Keywords: Additive combinatorics, sublinear algorithms}
}
Document
Semi-Random Graphs, Robust Asymmetry, and Reconstruction

Authors: Julian Asilis, Xi Chen, Dutch Hansen, and Shang-Hua Teng


Abstract
The Graph Reconstruction Conjecture famously posits that any undirected graph on at least three vertices is determined up to isomorphism by its family of (unlabeled) induced subgraphs. At present, the conjecture admits partial resolutions of two types: 1) casework-based demonstrations of reconstructibility for families of graphs satisfying certain structural properties, and 2) probabilistic arguments establishing reconstructibility of random graphs by leveraging average-case phenomena. While results in the first category capture the worst-case nature of the conjecture, they play a limited role in understanding the general case. Results in the second category address much larger graph families, but it remains unclear how heavily the necessary arguments rely on optimistic distributional properties. Drawing on the algorithmic notions of smoothed and semi-random analysis, we study the robustness of what are arguably the two most fundamental properties in this latter line of work: asymmetry and uniqueness of subgraphs. Notably, we find that various natural semi-random graph distributions exhibit these properties asymptotically, much like their Erdős-Rényi counterparts. In particular, Bollobás [Bollobás, 1990] demonstrated that almost all Erdős-Rényi random graphs G = (V, E) ∼ G(n, p) enjoy the property that their induced subgraphs on n - Θ(1) vertices are asymmetric and mutually non-isomorphic, for 1 - p, p = Ω(log(n) / n). As our primary result, we demonstrate that this property is robust against perturbation - even when an adversary is permitted to add/remove each vertex pair in V^{(2)} with (independent) arbitrarily large constant probability. Exploiting this result, we derive asymptotic characterizations of asymmetry in random graphs with large planted structure and bounded adversarial corruptions, along with improved bounds on the probability mass of nonreconstructible graphs in G(n, p).

Cite as

Julian Asilis, Xi Chen, Dutch Hansen, and Shang-Hua Teng. Semi-Random Graphs, Robust Asymmetry, and Reconstruction. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 12:1-12:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{asilis_et_al:LIPIcs.ITCS.2026.12,
  author =	{Asilis, Julian and Chen, Xi and Hansen, Dutch and Teng, Shang-Hua},
  title =	{{Semi-Random Graphs, Robust Asymmetry, and Reconstruction}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{12:1--12:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.12},
  URN =		{urn:nbn:de:0030-drops-252993},
  doi =		{10.4230/LIPIcs.ITCS.2026.12},
  annote =	{Keywords: Graph reconstruction, random graphs}
}
Document
Extended Abstract
Fully Quantum Computational Entropies (Extended Abstract)

Authors: Noam Avidan, Thomas A. Hahn, Joseph M. Renes, and Rotem Arnon


Abstract
Quantum information theory has provided the formal framework for describing how information is stored, transmitted, and transformed in physical quantum systems [Renes, 2022; Tomamichel, 2015; Wilde, 2013]. Its entropic formulations underpin our understanding of quantum computation, communication, and cryptography. Yet this theory traditionally treats all quantum operations as freely available, ignoring computational restrictions. In practice, however, any manipulation of quantum information must be performed by devices of bounded complexity and runtime. Capturing such realistic constraints requires extending quantum information theory to include computational efficiency as a fundamental component. This work takes a first step toward building a computational version of quantum information theory, one that treats efficiency as part of the theory itself. The goal is to understand how the behavior of quantum information changes when the parties involved can only perform computationally efficient operations. This approach bridges the abstract, ideal setting of quantum information theory with the practical limitations of real quantum devices, offering a means to study information processing under realistic resource constraints. At the center of this work are two new quantities: the quantum computational min-entropy and the quantum computational max-entropy. These entropies extend standard quantum entropies by explicitly limiting the computational power of the observer or adversary. The quantum computational min-entropy captures how unpredictable a quantum system A remains to an observer holding system B, when that observer is restricted to quantum circuits of bounded size. 
Formally, for a bipartite state ρ_{AB}, we define {H^c}^s_{min}(A|B)_{ρ} ≔ -log d_A max_{ℰ^s_{B→A'}} F((𝕀_A ⊗ ℰ^s)(ρ_{AB}),|Φ_{AA'}⟩⟨Φ_{AA'}|) , where the maximization is over quantum channels that can be implemented by circuits of size at most s, and F denotes fidelity with a maximally entangled state. In the classical setting, the min-entropy can be expressed through the maximal probability of correctly guessing a random variable given some side-information. In the fully quantum setting, this idea extends to uncertainty about quantum information [König et al., 2009], quantifying how well one system can be inferred from another using local quantum operations. Our definition generalizes this operational viewpoint by restricting the computational power of the observer to efficient quantum circuits. This definition extends the operational meaning of the information-theoretic quantum min-entropy [König et al., 2009] by incorporating computational constraints, and it provides the fully quantum counterpart of the classical unpredictability entropy [Hsiao et al., 2007]. We establish fundamental properties for the computational min-entropy, including monotonicity in the circuit size and smoothing parameters, efficient data-processing inequalities, and fully quantum leakage and purification chain rules, which were left as open questions in earlier definitions of quantum computational entropies [Yi-Hsiu Chen et al., 2017; Munson et al., 2025]. For classical–quantum states, it coincides with the previously defined quantum computational unpredictability entropy [Noam Avidan and Rotem Arnon, 2025], showing that the new definition correctly generalizes known results. We also introduce the quantum computational max-entropy through a duality relation [Tomamichel et al., 2010] with the min-entropy using a fixed purification. 
Finally, we prove unconditional separations between the computational and information-theoretic entropies, demonstrating that computational restrictions can fundamentally alter entropic behavior even for simple states. These results establish the fundamental mathematical framework for studying quantum information within realistic computational constraints. By integrating efficiency directly into entropic quantities, they open the door to a fully developed computational quantum information theory that parallels its information-theoretic counterpart. Such a framework provides the foundation for analyzing cryptographic security against computationally bounded quantum adversaries [Noam Avidan and Rotem Arnon, 2025] and the limits of efficient quantum state manipulation. More broadly, it suggests that many core notions in quantum information theory may have refined computational analogues yet to be explored.

Cite as

Noam Avidan, Thomas A. Hahn, Joseph M. Renes, and Rotem Arnon. Fully Quantum Computational Entropies (Extended Abstract). In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 13:1-13:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{avidan_et_al:LIPIcs.ITCS.2026.13,
  author =	{Avidan, Noam and Hahn, Thomas A. and Renes, Joseph M. and Arnon, Rotem},
  title =	{{Fully Quantum Computational Entropies}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{13:1--13:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.13},
  URN =		{urn:nbn:de:0030-drops-253003},
  doi =		{10.4230/LIPIcs.ITCS.2026.13},
  annote =	{Keywords: quantum information theory, computational entropy, min-entropy, max-entropy}
}
Document
General Computation Using Slidable Tiles with Deterministic Global Forces

Authors: Alberto Avila-Jimenez, David Barreda, Sarah-Laurie Evans, Austin Luchsinger, Aiden Massie, Robert Schweller, Evan Tomai, and Tim Wylie


Abstract
We study the computational power of the Full-Tilt model of motion planning, where slidable polyominos are moved maximally around a board by way of a sequence of directional "tilts." We focus on the deterministic scenario in which the tilts constitute a repeated clockwise rotation. We show that general-purpose computation is possible within this framework by providing a direct and efficient simulation of space-bounded Turing machines in which one computational step of the machine is simulated per O(1) rotations. We further show that the initial tape of the machine can be programmed by an initial tilt-sequence preceding the rotations. This result immediately implies new PSPACE-completeness results for the well-studied problems of occupancy (deciding if a given board location can be occupied by a tile), vacancy (deciding if a location can be emptied), relocation (deciding if a tile can be moved from one location to another), and reconfiguration (deciding if a given board configuration can be reconfigured into a second given configuration) that hold even for deterministically repeating tilt cycles such as rotations. All of our PSPACE-completeness results hold even when there is only a single domino in the system beyond singleton tiles. Following this, we show that these results carry over to the Single-Step tilt model for larger constant cycles. We then investigate computational efficiency by showing a modification to implement a two-tape Turing machine in the Full-Tilt model and Systolic Arrays in the Single-Step model. Finally, we show a cyclic implementation for tilt-efficient Threshold Circuits.

Cite as

Alberto Avila-Jimenez, David Barreda, Sarah-Laurie Evans, Austin Luchsinger, Aiden Massie, Robert Schweller, Evan Tomai, and Tim Wylie. General Computation Using Slidable Tiles with Deterministic Global Forces. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 14:1-14:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{avilajimenez_et_al:LIPIcs.ITCS.2026.14,
  author =	{Avila-Jimenez, Alberto and Barreda, David and Evans, Sarah-Laurie and Luchsinger, Austin and Massie, Aiden and Schweller, Robert and Tomai, Evan and Wylie, Tim},
  title =	{{General Computation Using Slidable Tiles with Deterministic Global Forces}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{14:1--14:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.14},
  URN =		{urn:nbn:de:0030-drops-253019},
  doi =		{10.4230/LIPIcs.ITCS.2026.14},
  annote =	{Keywords: motion planning, global control, external forces, deterministic computation, occupancy, vacancy}
}
Document
How to Use Nondeterminism in Cryptography

Authors: Marshall Ball and Peter Crawford-Kahrl


Abstract
Nondeterministic reductions have yielded powerful results in the theory of computational complexity, yet are effectively useless in a cryptographic context. The reason for this is simple: a nondeterministic polynomial time adversary can trivially break almost any cryptographic primitive by simply guessing the "key." In order to use this powerful nondeterministic toolkit in the cryptographic context, we initiate the study of cryptography against adversaries with limited nondeterminism: polynomial time nondeterministic algorithms that are restricted to just a few bits of nondeterminism. We demonstrate that limited nondeterministic security is sufficient to prove two foundational results that have eluded our grasp for decades: dream hardness amplification, and extracting ω(log n) hardcore bits.

Cite as

Marshall Ball and Peter Crawford-Kahrl. How to Use Nondeterminism in Cryptography. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 15:1-15:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{ball_et_al:LIPIcs.ITCS.2026.15,
  author =	{Ball, Marshall and Crawford-Kahrl, Peter},
  title =	{{How to Use Nondeterminism in Cryptography}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{15:1--15:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.15},
  URN =		{urn:nbn:de:0030-drops-253024},
  doi =		{10.4230/LIPIcs.ITCS.2026.15},
  annote =	{Keywords: limited nondeterminism, cryptography, computational complexity, hardness amplification, pseudorandom generators, hardcore bits}
}
Document
Robust Streaming Against Low-Memory Adversaries

Authors: Omri Ben-Eliezer, Krzysztof Onak, and Sandeep Silwal


Abstract
Robust streaming, the study of streaming algorithms that provably work when the stream is generated by an adaptive adversary, has seen tremendous progress in recent years. However, fundamental barriers remain: the best known algorithm for turnstile F_p-estimation in the robust streaming setting is exponentially worse than in the oblivious setting, and closing this gap seems difficult. Arguably, one possible cause of this barrier is the adversarial model, which may be too strong: unlike the space-bounded streaming algorithm, the adversary can memorize the entire history of the interaction with the algorithm. Can we then close the exponential gap if we insist that the adversary itself is an adaptive but low-memory entity, roughly as powerful as (or even weaker than) the algorithm? In this work we present the first set of models and results aimed towards this question. We design efficient robust streaming algorithms against adversaries that are fully adaptive but have no long-term memory ("memoryless") or very little memory of the history of interaction. Roughly speaking, a memoryless adversary only sees, at any given round, the last output of the algorithm (and does not even know the current time) and can generate an unlimited number of independent coin tosses. A low-memory adversary is similar, but maintains an additional small buffer. While these adversaries may seem quite limited at first glance, we show that this adversarial model is strong enough to produce streams that have high flip number and density in the context of F₂-estimation, which rules out most known robustification techniques. We then design a new simple approach, similar to the computation paths framework, to obtain efficient algorithms against memoryless and low-memory adversaries for a wide class of order-invariant problems. 
We conclude by posing various open questions that propose further exploration of the landscape of robust streaming against fully adaptive but computationally constrained adversaries.

Cite as

Omri Ben-Eliezer, Krzysztof Onak, and Sandeep Silwal. Robust Streaming Against Low-Memory Adversaries. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 16:1-16:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{beneliezer_et_al:LIPIcs.ITCS.2026.16,
  author =	{Ben-Eliezer, Omri and Onak, Krzysztof and Silwal, Sandeep},
  title =	{{Robust Streaming Against Low-Memory Adversaries}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{16:1--16:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.16},
  URN =		{urn:nbn:de:0030-drops-253037},
  doi =		{10.4230/LIPIcs.ITCS.2026.16},
  annote =	{Keywords: robust streaming, adaptive robustness, bounded-space adversaries}
}
Document
Unconditional Quantum Advantage for Sampling with Shallow Circuits

Authors: Adam Bene Watts and Natalie Parham


Abstract
Recent work by Bravyi, Gosset, and Koenig showed that there exists a search problem that a constant-depth quantum circuit can solve, but that any constant-depth classical circuit with bounded fan-in cannot. They also pose the question: Can we achieve a similar proof of separation for an input-independent sampling task? In this paper, we show that the answer to this question is yes when the number of random input bits given to the classical circuit is bounded. We introduce a distribution D_{n} over {0,1}ⁿ and construct a constant-depth uniform quantum circuit family {C_n}_n such that C_n samples from a distribution close to D_{n} in total variation distance. For any δ < 1 we also prove, unconditionally, that any classical circuit with bounded fan-in gates that takes as input kn + n^δ i.i.d. Bernoulli random variables with entropy 1/k and produces output close to D_{n} in total variation distance has depth Ω(log log n). This gives an unconditional proof that constant-depth quantum circuits can sample from distributions that cannot be reproduced by constant-depth bounded fan-in classical circuits, even up to additive error. We also show a similar separation between constant-depth quantum circuits with advice and classical circuits with bounded fan-in and fan-out, but access to an unbounded number of i.i.d. random inputs. The distribution D_n and classical circuit lower bounds are inspired by work of Viola, in which he shows a different (but related) distribution cannot be sampled from approximately by constant-depth bounded fan-in classical circuits.

Cite as

Adam Bene Watts and Natalie Parham. Unconditional Quantum Advantage for Sampling with Shallow Circuits. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 17:1-17:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{benewatts_et_al:LIPIcs.ITCS.2026.17,
  author =	{Bene Watts, Adam and Parham, Natalie},
  title =	{{Unconditional Quantum Advantage for Sampling with Shallow Circuits}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{17:1--17:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.17},
  URN =		{urn:nbn:de:0030-drops-253048},
  doi =		{10.4230/LIPIcs.ITCS.2026.17},
  annote =	{Keywords: Circuit Complexity, Sampling Separation, Shallow Quantum Circuits, Unconditional Separations, Complexity of Distributions}
}
Document
Interactive Proofs for Distribution Testing with Conditional Oracles

Authors: Ari Biswas, Mark Bun, Clément L. Canonne, and Satchit Sivakumar


Abstract
We revisit the framework of interactive proofs for distribution testing, first introduced by Chiesa and Gur (ITCS 2018), which has recently experienced a surge in interest, accompanied by notable progress (e.g., Herman and Rothblum, STOC 2022, FOCS 2023; Herman, RANDOM 2024). In this model, a data-poor verifier determines whether a probability distribution has a property of interest by interacting with an all-powerful, data-rich but untrusted prover bent on convincing them that it has the property. While prior work gave sample-, time-, and communication-efficient protocols for testing and estimating a range of distribution properties, they all suffer from an inherent issue: for most interesting properties of distributions over a domain of size N, the verifier must draw at least Ω(√N) samples of its own. While sublinear in N, this is still prohibitive for large domains encountered in practice. In this work, we circumvent this limitation by augmenting the verifier with the ability to perform an exponentially smaller number of more powerful (but reasonable) pairwise conditional queries, effectively enabling them to perform "local comparison checks" of the prover’s claims. We systematically investigate the landscape of interactive proofs in this new setting, giving poly-logarithmic query and sample protocols for (tolerantly) testing all label-invariant properties, thus demonstrating exponential savings without compromising on communication, for this large and fundamental class of testing tasks.

Cite as

Ari Biswas, Mark Bun, Clément L. Canonne, and Satchit Sivakumar. Interactive Proofs for Distribution Testing with Conditional Oracles. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 18:1-18:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{biswas_et_al:LIPIcs.ITCS.2026.18,
  author =	{Biswas, Ari and Bun, Mark and Canonne, Cl\'{e}ment L. and Sivakumar, Satchit},
  title =	{{Interactive Proofs for Distribution Testing with Conditional Oracles}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{18:1--18:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.18},
  URN =		{urn:nbn:de:0030-drops-253059},
  doi =		{10.4230/LIPIcs.ITCS.2026.18},
  annote =	{Keywords: Distribution Testing, Interactive Proofs}
}
Document
Limitations to Computing Quadratic Functions on Reed-Solomon Encoded Data

Authors: Keller Blackwell and Mary Wootters


Abstract
We study the problem of low-bandwidth non-linear computation on Reed-Solomon encoded data. Given an [n,k] Reed-Solomon encoding of a message vector 𝐟 ∈ 𝔽_q^k, and a polynomial g ∈ 𝔽_q[X₁, X₂, …, X_k], a user wishing to evaluate g(𝐟) is given local query access to each codeword symbol. The query response is allowed to be the output of an arbitrary function evaluated locally on the codeword symbol, and the user’s aim is to minimize the total information downloaded in order to compute g(𝐟). This problem has been studied before for linear functions g; in this work we initiate the study of non-linear functions by starting with quadratic monomials. For q = p^e and distinct i,j ∈ [k], we show that any scheme evaluating the quadratic monomial g_{i,j} := X_i X_j must download at least 2 log₂(q-1) - 3 bits of information when p is an odd prime, and at least 2log₂(q-2) -4 bits when p = 2. When k = 2, our result shows that one cannot do significantly better than the naive bound of k log₂(q) bits, which is enough to recover all of 𝐟. This contrasts sharply with prior work for low-bandwidth evaluation of linear functions g(𝐟) over Reed-Solomon encoded data, for which it is possible to substantially improve upon this bound [Venkatesan Guruswami and Mary Wootters, 2016; Tamo et al., 2018; Shutty and Wootters, 2021; Kiah et al., 2024; Con and Tamo, 2022]. Some proofs have been omitted from this extended abstract; the full version can be found at [Keller Blackwell and Mary Wootters, 2025].

Cite as

Keller Blackwell and Mary Wootters. Limitations to Computing Quadratic Functions on Reed-Solomon Encoded Data. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 19:1-19:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{blackwell_et_al:LIPIcs.ITCS.2026.19,
  author =	{Blackwell, Keller and Wootters, Mary},
  title =	{{Limitations to Computing Quadratic Functions on Reed-Solomon Encoded Data}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{19:1--19:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.19},
  URN =		{urn:nbn:de:0030-drops-253064},
  doi =		{10.4230/LIPIcs.ITCS.2026.19},
  annote =	{Keywords: Distributed computation, Reed-Solomon codes}
}
Document
Samplability Makes Learning Easier

Authors: Guy Blanc, Caleb Koch, Jane Lange, Carmen Strassle, and Li-Yang Tan


Abstract
The standard definition of PAC learning (Valiant 1984) requires learners to succeed under all distributions - even ones that are intractable to sample from. This stands in contrast to samplable PAC learning (Blum, Furst, Kearns, and Lipton 1993), where learners only have to succeed under samplable distributions. We study this distinction and show that samplable PAC substantially expands the power of efficient learners. We first construct a concept class that requires exponential sample complexity in standard PAC but is learnable with polynomial sample complexity in samplable PAC. We then lift this statistical separation to the computational setting and obtain a separation relative to a random oracle. Our proofs center around a new complexity primitive, explicit evasive sets, that we introduce and study. These are sets for which membership is easy to determine but are extremely hard to sample from. Our results extend to the online setting to similarly show that its landscape changes when the adversary is assumed to be efficient instead of computationally unbounded.

Cite as

Guy Blanc, Caleb Koch, Jane Lange, Carmen Strassle, and Li-Yang Tan. Samplability Makes Learning Easier. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 20:1-20:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{blanc_et_al:LIPIcs.ITCS.2026.20,
  author =	{Blanc, Guy and Koch, Caleb and Lange, Jane and Strassle, Carmen and Tan, Li-Yang},
  title =	{{Samplability Makes Learning Easier}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{20:1--20:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.20},
  URN =		{urn:nbn:de:0030-drops-253071},
  doi =		{10.4230/LIPIcs.ITCS.2026.20},
  annote =	{Keywords: PAC learning, Samplable distributions}
}
Document
Differential Privacy from Axioms

Authors: Guy Blanc, William Pires, and Toniann Pitassi


Abstract
Differential privacy (DP) is the de facto notion of privacy both in theory and in practice. However, despite its popularity, DP imposes strict requirements that guard against strong worst-case scenarios. For example, it guards against the seemingly unrealistic scenario in which an attacker has full information about all but one point in the data set, guaranteeing that even then nothing can be learned about the remaining point. While preventing such a strong attack is desirable, many works have explored whether average-case relaxations of DP are easier to satisfy [Hall et al., 2013; Wang et al., 2016; Bassily and Freund, 2016; Liu et al., 2023]. In this work, we are motivated by the question of whether alternate, weaker notions of privacy are possible: can a weakened privacy notion still guarantee some basic level of privacy, and on the other hand, achieve privacy more efficiently and/or for a substantially broader set of tasks? Our main result shows the answer is no: even in the statistical setting, any reasonable measure of privacy satisfying nontrivial composition is equivalent to DP. To prove this, we identify a core set of four axioms or desiderata: pre-processing invariance, prohibition of blatant non-privacy, strong composition, and linear scalability. Our main theorem shows that any privacy measure satisfying our axioms is equivalent to DP, up to polynomial factors in sample complexity. We complement this result by showing our axioms are minimal: removing any one of our axioms enables ill-behaved measures of privacy.
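For context, the notion the axioms are shown to recover is standard (ε, δ)-differential privacy together with its composition property (textbook statements, given here for reference; notation is ours):

```latex
% (eps, delta)-DP: for all neighboring datasets S, S' (differing in one
% record) and every event T,
\Pr\bigl[M(S) \in T\bigr] \;\le\; e^{\varepsilon}\,\Pr\bigl[M(S') \in T\bigr] + \delta .
% Basic composition: if M_1, ..., M_k are each (eps, delta)-DP, their
% adaptive composition is (k*eps, k*delta)-DP, and post-processing the
% output cannot degrade the guarantee.
```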

Cite as

Guy Blanc, William Pires, and Toniann Pitassi. Differential Privacy from Axioms. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 21:1-21:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{blanc_et_al:LIPIcs.ITCS.2026.21,
  author =	{Blanc, Guy and Pires, William and Pitassi, Toniann},
  title =	{{Differential Privacy from Axioms}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{21:1--21:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.21},
  URN =		{urn:nbn:de:0030-drops-253081},
  doi =		{10.4230/LIPIcs.ITCS.2026.21},
  annote =	{Keywords: Differential Privacy, Privacy Amplification, Composition}
}
Document
Simplicial Covering Dimension of Extremal Concept Classes

Authors: Ari Blondal, Hamed Hatami, Pooya Hatami, Chavdar Lalov, and Sivan Tretiak


Abstract
Dimension theory is a branch of topology concerned with defining and analyzing dimensions of geometric and topological spaces in purely topological terms. In this work, we adapt the classical notion of topological dimension (Lebesgue covering) to binary concept classes. The topological space naturally associated with a concept class is its space of realizable distributions. The loss function and the class itself induce a simplicial structure on this space, with respect to which we define a simplicial covering dimension. We prove that for finite concept classes, this simplicial covering dimension exactly characterizes the list replicability number (equivalently, global stability) in PAC learning. This connection allows us to apply tools from classical dimension theory to compute the exact list replicability number of the broad family of extremal concept classes.

Cite as

Ari Blondal, Hamed Hatami, Pooya Hatami, Chavdar Lalov, and Sivan Tretiak. Simplicial Covering Dimension of Extremal Concept Classes. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 22:1-22:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{blondal_et_al:LIPIcs.ITCS.2026.22,
  author =	{Blondal, Ari and Hatami, Hamed and Hatami, Pooya and Lalov, Chavdar and Tretiak, Sivan},
  title =	{{Simplicial Covering Dimension of Extremal Concept Classes}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{22:1--22:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.22},
  URN =		{urn:nbn:de:0030-drops-253094},
  doi =		{10.4230/LIPIcs.ITCS.2026.22},
  annote =	{Keywords: PAC Learning, Extremal Concept Classes, Replicability, List Replicability, Topology, Geometry}
}
Document
Decoding Balanced Linear Codes with Preprocessing

Authors: Andrej Bogdanov, Rohit Chatterjee, Yunqi Li, and Prashant Nalini Vasudevan


Abstract
Prange’s information set algorithm is a well-known decoding algorithm for linear codes. It decodes corrupted codewords of most 𝔽₂-linear codes C of message length n up to relative error rate O(log n / n) in poly(n) time. We show that the error rate can be improved to O((log n)² / n), provided: (1) the decoder has access to a polynomial-length advice string that depends on C only, and (2) C is n^{-Ω(1)}-balanced. As a consequence we improve the error tolerance in decoding random linear codes if inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities with low noise rate. Our main technical result is that the Hamming weight of Hw, where the rows of H are a random sample of short dual codewords, measures the proximity of a received word w to the code in the regime of interest. Given such H as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding Hw for an arbitrary polynomial-size advice matrix H.
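The "correct errors by locally minimizing the proximity measure wt(Hw)" idea can be illustrated with a greedy bit-flipping sketch. Function names and the greedy schedule are ours; the paper's algorithm and its analysis are more refined.

```python
def proximity(H, w):
    """Hamming weight of H*w over GF(2): the abstract's proximity measure,
    where the rows of H are (short) dual codewords serving as advice."""
    return sum(sum(hi * wi for hi, wi in zip(row, w)) % 2 for row in H)

def decode_with_advice(H, w, max_iters=1000):
    """Greedy local search: repeatedly flip the single bit of w that most
    reduces proximity(H, w), stopping at zero or at a local minimum.
    Illustrative sketch only, not the algorithm as analyzed in the paper."""
    w = list(w)
    for _ in range(max_iters):
        current = proximity(H, w)
        if current == 0:
            break
        best_bit, best_val = None, current
        for i in range(len(w)):
            w[i] ^= 1
            val = proximity(H, w)
            w[i] ^= 1
            if val < best_val:
                best_bit, best_val = i, val
        if best_bit is None:  # stuck in a local minimum
            break
        w[best_bit] ^= 1
    return w
```

For the [3,1] repetition code {000, 111}, whose dual is spanned by 110 and 101, taking H to contain all nonzero dual codewords lets the sketch correct any single flipped bit.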

Cite as

Andrej Bogdanov, Rohit Chatterjee, Yunqi Li, and Prashant Nalini Vasudevan. Decoding Balanced Linear Codes with Preprocessing. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 23:1-23:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{bogdanov_et_al:LIPIcs.ITCS.2026.23,
  author =	{Bogdanov, Andrej and Chatterjee, Rohit and Li, Yunqi and Vasudevan, Prashant Nalini},
  title =	{{Decoding Balanced Linear Codes with Preprocessing}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{23:1--23:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.23},
  URN =		{urn:nbn:de:0030-drops-253107},
  doi =		{10.4230/LIPIcs.ITCS.2026.23},
  annote =	{Keywords: Linear codes, nearest codeword problem, learning parity with noise}
}
Document
Unitary Complexity and the Uhlmann Transformation Problem

Authors: John Bostanci, Yuval Efron, Tony Metger, Alexander Poremba, Luowen Qian, and Henry Yuen


Abstract
State transformation problems such as compressing quantum information or breaking quantum commitments are fundamental quantum tasks. However, their computational difficulty cannot easily be characterized using traditional complexity theory, which focuses on tasks with classical inputs and outputs. To study the complexity of such state transformation tasks, we introduce a framework for unitary synthesis problems, including notions of reductions and unitary complexity classes. We use this framework to study the complexity of transforming one entangled state into another via local operations. We formalize this as the Uhlmann Transformation Problem, an algorithmic version of Uhlmann’s theorem. Then, we prove structural results relating the complexity of the Uhlmann Transformation Problem, polynomial space quantum computation, and zero knowledge protocols. The Uhlmann Transformation Problem allows us to characterize the complexity of a variety of tasks in quantum information processing, including decoding noisy quantum channels, breaking falsifiable quantum cryptographic assumptions, implementing optimal prover strategies in quantum interactive proofs, and decoding the Hawking radiation of black holes. Our framework for unitary complexity thus provides new avenues for studying the computational complexity of many natural quantum information processing tasks.

Cite as

John Bostanci, Yuval Efron, Tony Metger, Alexander Poremba, Luowen Qian, and Henry Yuen. Unitary Complexity and the Uhlmann Transformation Problem. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 24:1-24:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{bostanci_et_al:LIPIcs.ITCS.2026.24,
  author =	{Bostanci, John and Efron, Yuval and Metger, Tony and Poremba, Alexander and Qian, Luowen and Yuen, Henry},
  title =	{{Unitary Complexity and the Uhlmann Transformation Problem}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{24:1--24:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.24},
  URN =		{urn:nbn:de:0030-drops-253111},
  doi =		{10.4230/LIPIcs.ITCS.2026.24},
  annote =	{Keywords: Uhlmann’s theorem, unitary complexity theory}
}
Document
Commuting Local Hamiltonians Beyond 2D

Authors: John Bostanci and Yeongwoo Hwang


Abstract
Commuting local Hamiltonians provide a testing ground for studying many of the most interesting open questions in quantum information theory, including the quantum PCP conjecture and the nature of entanglement. However, unlike the general local Hamiltonian problem, the exact complexity of the commuting local Hamiltonian problem (CLH) remains unknown. A number of works have shown that increasingly expressive families of commuting local Hamiltonians admit classical verifiers. Despite intense work, proofs placing CLH in NP rely heavily on an underlying 2D lattice structure, or a very constrained local dimension and locality. In this work, we present a new technique to analyze the complexity of various families of commuting local Hamiltonians: guided reductions. Intuitively, these generalize typical reductions: the prover provides a guide so that the verifier can construct a simpler Hamiltonian. The core of our reduction is a new rounding technique based on a combination of Jordan’s Lemma for pairs of projectors and the Structure Lemma for C^* algebras. Our rounding technique is much more flexible than previous work and allows us to remove constraints on local dimension in exchange for a rank-1 assumption. Using our rounding technique, we prove the following two results: 1) 2D-CLH restricted to rank-1 instances is contained in NP, independent of the qudit dimension. It is notable that this family of commuting local Hamiltonians has no restriction on the local dimension or the locality of the Hamiltonian terms. 2) 3D-CLH restricted to rank-1 instances is in NP. To our knowledge, this is the first time a family of 3D commuting local Hamiltonians has been shown to lie in NP. Our results apply to Hamiltonians with large qudit degree and remain non-trivial despite the quantum Lovász Local Lemma. [Andris Ambainis et al., 2012]

Cite as

John Bostanci and Yeongwoo Hwang. Commuting Local Hamiltonians Beyond 2D. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{bostanci_et_al:LIPIcs.ITCS.2026.25,
  author =	{Bostanci, John and Hwang, Yeongwoo},
  title =	{{Commuting Local Hamiltonians Beyond 2D}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.25},
  URN =		{urn:nbn:de:0030-drops-253129},
  doi =		{10.4230/LIPIcs.ITCS.2026.25},
  annote =	{Keywords: Quantum complexity, commuting Hamiltonians, complexity theory, C* algebras}
}
Document
Local Transformations of Bipartite Entanglement Are Rigid

Authors: John Bostanci, Tony Metger, and Henry Yuen


Abstract
Uhlmann’s theorem is a fundamental result in quantum information theory that quantifies the optimal overlap between two bipartite pure states after applying local unitary operations (called Uhlmann transformations). We show that optimal Uhlmann transformations are rigid - in other words, they must be unique up to some well-characterized degrees of freedom. This rigidity is also robust: Uhlmann transformations achieving near-optimal overlaps must be close to the unique optimal transformation (again, up to well-characterized degrees of freedom). We describe two applications of our robust rigidity theorem: (a) we obtain better interactive proofs for synthesizing Uhlmann transformations and (b) we obtain a simple, alternative proof of the Gowers-Hatami theorem on the stability of approximate representations of finite groups.
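For reference, the form of Uhlmann's theorem alluded to above can be stated as follows (standard statement; notation is ours):

```latex
% For any purifications |psi_rho>, |psi_sigma> in H_A (x) H_B of states
% rho, sigma on system A,
F(\rho, \sigma) \;=\; \max_{U_B} \bigl|\, \langle \psi_\rho | \, (\mathbb{1}_A \otimes U_B) \, | \psi_\sigma \rangle \,\bigr| ,
% where the maximum ranges over unitaries on the purifying system B and
% F denotes the fidelity.  A maximizing U_B is an (optimal) Uhlmann
% transformation; the rigidity theorem above characterizes its uniqueness.
```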

Cite as

John Bostanci, Tony Metger, and Henry Yuen. Local Transformations of Bipartite Entanglement Are Rigid. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 26:1-26:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{bostanci_et_al:LIPIcs.ITCS.2026.26,
  author =	{Bostanci, John and Metger, Tony and Yuen, Henry},
  title =	{{Local Transformations of Bipartite Entanglement Are Rigid}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{26:1--26:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.26},
  URN =		{urn:nbn:de:0030-drops-253138},
  doi =		{10.4230/LIPIcs.ITCS.2026.26},
  annote =	{Keywords: Uhlmann’s theorem, quantum entanglement, stability theorems}
}
Document
Identity Check Problem for Shallow Quantum Circuits

Authors: Sergey Bravyi, Natalie Parham, and Minh Tran


Abstract
Verifying that a quantum circuit correctly implements a desired transformation is essential for validating quantum algorithms. We consider the closely related identity check problem: given a quantum circuit U, estimate the diamond-norm distance between U and the identity channel. Ji and Wu showed that estimating this distance to within an additive 1/poly error is QMA-hard, even when U is constant-depth and 1D local - ruling out efficient algorithms in this regime. We show that this hardness barrier disappears if one seeks a constant-factor multiplicative approximation instead. We present a classical algorithm that, for shallow geometrically local D-dimensional circuits, approximates the distance to the identity within a factor α = D+1, provided that the circuit is sufficiently close to the identity. The runtime of the algorithm scales linearly with the number of qubits for any constant circuit depth and spatial dimension. We also show that the operator-norm distance to the identity ‖U-I‖ can be efficiently approximated within a factor α = 5 for shallow 1D circuits and, under a certain technical condition, within a factor α = 2D+3 for shallow D-dimensional circuits. A numerical implementation of the identity check algorithm is reported for 1D Trotter circuits with up to 100 qubits.

Cite as

Sergey Bravyi, Natalie Parham, and Minh Tran. Identity Check Problem for Shallow Quantum Circuits. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 27:1-27:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{bravyi_et_al:LIPIcs.ITCS.2026.27,
  author =	{Bravyi, Sergey and Parham, Natalie and Tran, Minh},
  title =	{{Identity Check Problem for Shallow Quantum Circuits}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{27:1--27:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.27},
  URN =		{urn:nbn:de:0030-drops-253147},
  doi =		{10.4230/LIPIcs.ITCS.2026.27},
  annote =	{Keywords: Quantum computing, Identity check problem, quantum circuits, classical simulation of quantum computation, shallow circuits}
}
Document
Linear Time Encodable Binary Code Achieving GV Bound with Linear Time Encodable Dual Achieving GV Bound

Authors: Martijn Brehm and Nicolas Resch


Abstract
We initiate the study of what we term "fast good codes" with "fast good duals." Specifically, we consider the task of constructing a binary linear code C ≤ 𝔽₂ⁿ such that both it and its dual C^⟂ : = {x ∈ 𝔽₂ⁿ:∀ c ∈ C, ⟨ x,c⟩ = 0} are asymptotically good (in fact, have rate-distance tradeoff approaching the GV bound), and are encodable in O(n) time. While we believe such codes should find applications more broadly, as motivation we describe how such codes can be used in the secure computation task of encrypted matrix-vector product, as studied by Benhamouda et al. (CCS 2025). Our main contribution is a construction of such a fast good code with fast good dual. Our construction is inspired by the repeat multiple accumulate (RMA) codes of Divsalar, Jin and McEliece (Allerton, 1998). To create the rate 1/2 code, after repeating each message coordinate, we perform accumulation steps - where first a uniform coordinate permutation is applied, and afterwards the prefix-sum modulo 2 is applied - which are alternated with discrete derivative steps - where again a uniform coordinate permutation is applied, and afterwards the previous two coordinates are summed modulo 2. Importantly, these two operations are inverses of each other. In particular, the dual of the code is very similar, with the accumulation and discrete derivative steps reversed. Our analysis is inspired by a prior analysis of RMA codes due to Ravazzi and Fagnani (IEEE Trans. Info. Theory, 2009). The main idea is to bound the input-output weight-enumerator function: the expected number of messages of a given weight that are encoded into a codeword of a given weight. We face new challenges in controlling the behaviour of the discrete derivative matrix (which can significantly drop the weight of a vector), which we overcome by careful case analysis.
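The alternating "permute + accumulate" / "permute + discrete derivative" structure can be sketched as follows. This is a toy illustration with our own parameter choices (round count, seeding), not the construction as analyzed in the paper.

```python
import random

def accumulate(x):
    """Prefix sum modulo 2: the 'accumulator' step of RMA-style codes."""
    out, s = [], 0
    for b in x:
        s ^= b
        out.append(s)
    return out

def discrete_derivative(y):
    """y_i XOR y_{i-1} (with y_{-1} = 0): the inverse map of accumulate."""
    return [b ^ p for b, p in zip(y, [0] + y[:-1])]

def encode(msg, rounds=3, rng=None):
    """Toy sketch of the alternating structure: repeat each message bit
    twice (rate 1/2), then alternate 'uniform permutation + accumulate'
    and 'uniform permutation + discrete derivative' steps."""
    if rng is None:
        rng = random.Random(0)
    x = [b for b in msg for _ in range(2)]   # repetition step
    for step in range(rounds):
        rng.shuffle(x)                       # uniform coordinate permutation
        x = accumulate(x) if step % 2 == 0 else discrete_derivative(x)
    return x
```

Because `accumulate` and `discrete_derivative` are mutually inverse linear maps over GF(2), the dual code's encoder has the same shape with the two step types swapped, as the abstract notes.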

Cite as

Martijn Brehm and Nicolas Resch. Linear Time Encodable Binary Code Achieving GV Bound with Linear Time Encodable Dual Achieving GV Bound. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 28:1-28:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{brehm_et_al:LIPIcs.ITCS.2026.28,
  author =	{Brehm, Martijn and Resch, Nicolas},
  title =	{{Linear Time Encodable Binary Code Achieving GV Bound with Linear Time Encodable Dual Achieving GV Bound}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{28:1--28:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.28},
  URN =		{urn:nbn:de:0030-drops-253157},
  doi =		{10.4230/LIPIcs.ITCS.2026.28},
  annote =	{Keywords: Binary error-correcting codes, dual codes, fast encoding, repeat-multiple-accumulate codes}
}
Document
Extended Abstract
The Mixed Birth-Death/death-Birth Moran Process (Extended Abstract)

Authors: David A. Brewster, Yichen Huang, Michael Mitzenmacher, and Martin A. Nowak


Abstract
We study evolutionary dynamics on graphs in which each step consists of one birth and one death, referred to generally as Moran processes. In standard simplified models, there are two types of individuals: residents, who have a fitness of 1, and mutants, who have a fitness of r. Two standard update rules are used in the literature. In Birth–death (Bd), a vertex is chosen to reproduce proportional to fitness, and one of its neighbors is selected uniformly at random to die and be replaced by the offspring. In death–Birth (dB), a vertex is chosen uniformly to die, and then one of its neighbors is chosen, proportional to fitness, to place an offspring into the vacancy. Two crucial quantities are: the unconditional absorption time, which is the expected time until only residents or only mutants remain, and the fixation probability of the mutant, which is the probability that at some time the mutants occupy the whole graph. Birth-death and death-Birth rules can yield significantly different outcomes for these quantities on the same graph, rendering conclusions dependent on the update rule. We formalize and study a unified model, the λ-mixed Moran process, in which each step is independently a Bd step with probability λ ∈ [0,1] and a dB step otherwise. We analyze this mixed process and establish a few results that form a starting point for its further study. All of our results are for undirected, connected graphs. As an interesting special case, we show that at λ = 1/2, for any graph, the fixation probability of a single initial mutant with r = 1 is exactly 1/n, and that the absorption time at λ = 1/2 for any r is O_r(n⁴) (that is, with an r-dependent constant). We also show results for graphs that are "almost regular," in a manner defined in the paper.
We use this to show that for suitable random graphs from G∼ G(n,p) and fixed r > 1, with high probability over the choice of graph, the absorption time is O_r(n⁴), the fixation probability is Ω_r(n^{-2}), and we can approximate the fixation probability in polynomial time. Another special case is when the graph has only two possible values for the degree {d₁, d₂} with d₁ ≤ d₂. For those graphs, we give exact formulas for fixation probabilities under r = 1 and any λ, and establish O_r(n⁴ α⁴) absorption time regardless of λ, where α = d₂/d₁. We also provide explicit formulas for the star and cycle under any r or λ.
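The λ-mixed process is straightforward to simulate directly; the sketch below (interface and names are ours) estimates fixation probabilities empirically.

```python
import random

def mixed_moran(adj, mutant_start, r=1.0, lam=0.5, rng=None, max_steps=10**6):
    """Simulate the lambda-mixed Moran process on an undirected connected
    graph until absorption: each step is Bd with probability lam, dB
    otherwise.  Returns True if the mutants fixate, False if they go
    extinct.  Illustrative simulator only.

    adj          -- adjacency list: adj[v] is the list of neighbors of v
    mutant_start -- vertex initially occupied by the single mutant
    r            -- mutant fitness (residents have fitness 1)
    """
    if rng is None:
        rng = random.Random(0)
    n = len(adj)
    mutant = [False] * n
    mutant[mutant_start] = True
    count = 1
    for _ in range(max_steps):
        if count == 0:
            return False
        if count == n:
            return True
        fitness = [r if mutant[v] else 1.0 for v in range(n)]
        if rng.random() < lam:
            # Bd: reproduce proportional to fitness; a uniform neighbor dies.
            birth = rng.choices(range(n), weights=fitness)[0]
            death = rng.choice(adj[birth])
        else:
            # dB: a uniform vertex dies; a neighbor fills the vacancy,
            # chosen proportional to fitness.
            death = rng.randrange(n)
            nbrs = adj[death]
            birth = rng.choices(nbrs, weights=[fitness[u] for u in nbrs])[0]
        if mutant[death] != mutant[birth]:
            count += 1 if mutant[birth] else -1
            mutant[death] = mutant[birth]
    raise RuntimeError("no absorption within max_steps")
```

On a 4-cycle with r = 1 and λ = 1/2, the empirical fixation probability of a single mutant concentrates near 1/n = 0.25, matching the exact value stated above.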

Cite as

David A. Brewster, Yichen Huang, Michael Mitzenmacher, and Martin A. Nowak. The Mixed Birth-Death/death-Birth Moran Process (Extended Abstract). In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 29:1-29:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{brewster_et_al:LIPIcs.ITCS.2026.29,
  author =	{Brewster, David A. and Huang, Yichen and Mitzenmacher, Michael and Nowak, Martin A.},
  title =	{{The Mixed Birth-Death/death-Birth Moran Process}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{29:1--29:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.29},
  URN =		{urn:nbn:de:0030-drops-253161},
  doi =		{10.4230/LIPIcs.ITCS.2026.29},
  annote =	{Keywords: Moran process, Approximation algorithms, Random graphs}
}
Document
New Bounds for Circular Trace Reconstruction

Authors: Arnav Burudgunte, Paul Valiant, and Hongao Wang


Abstract
The "trace reconstruction" problem asks, given an unknown binary string x and a channel that repeatedly returns "traces" of x with each bit randomly deleted with some probability p, how many traces are needed to recover x? There is an exponential gap between the best known upper and lower bounds for this problem. Many variants of the model have been introduced in hopes of motivating or revealing new approaches to narrow this gap. We study the variant of circular trace reconstruction introduced by Narayanan and Ren (ITCS 2021), in which traces undergo a random cyclic shift in addition to random deletions. We show an improved lower bound of Ω̃(n⁵) for circular trace reconstruction. This contrasts with the (previously) best known lower bounds of Ω̃(n³) in the circular case and Ω̃(n^{3/2}) in the linear case. Our bound shows the indistinguishability of traces from two sparse strings x,y that each have a constant number of nonzeros. Can this technique be extended significantly? How hard is it to reconstruct a sparse string x under a cyclic deletion channel? We resolve these questions by showing, using Fourier techniques, that Õ(n⁶) traces suffice for reconstructing any constant-sparse string in a circular deletion channel, in contrast to the best known upper bound of exp(Õ(n^{1/3})) for general strings in the circular deletion channel. This shows that new algorithms or new lower bounds must focus on non-constant-sparse strings.

Cite as

Arnav Burudgunte, Paul Valiant, and Hongao Wang. New Bounds for Circular Trace Reconstruction. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 30:1-30:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{burudgunte_et_al:LIPIcs.ITCS.2026.30,
  author =	{Burudgunte, Arnav and Valiant, Paul and Wang, Hongao},
  title =	{{New Bounds for Circular Trace Reconstruction}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{30:1--30:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.30},
  URN =		{urn:nbn:de:0030-drops-253176},
  doi =		{10.4230/LIPIcs.ITCS.2026.30},
  annote =	{Keywords: Trace reconstruction, algorithmic statistics, Fourier analysis}
}
Document
Delaunay Triangulations with Predictions

Authors: Sergio Cabello, Timothy M. Chan, and Panos Giannopoulos


Abstract
We investigate algorithms with predictions in computational geometry, specifically focusing on the basic problem of computing 2D Delaunay triangulations. Given a set P of n points in the plane and a triangulation G that serves as a "prediction" of the Delaunay triangulation, we would like to use G to compute the correct Delaunay triangulation DT(P) more quickly when G is "close" to DT(P). We obtain a variety of results of this type, under different deterministic and probabilistic settings, including the following: 1) Define D to be the number of edges in G that are not in DT(P). We present a deterministic algorithm to compute DT(P) from G in O(n + Dlog³ n) time, and a randomized algorithm in O(n+Dlog n) expected time, the latter of which is optimal in terms of D. 2) Let R be a random subset of the edges of DT(P), where each edge is chosen independently with probability ρ. Suppose G is any triangulation of P that contains R. We present an algorithm to compute DT(P) from G in O(nlog log n + nlog(1/ρ)) time with high probability. 3) Define d_{vio} to be the maximum number of points of P strictly inside the circumcircle of a triangle in G (the number is 0 if G is equal to DT(P)). We present a deterministic algorithm to compute DT(P) from G in O(nlog^*n + nlog d_{vio}) time. We also obtain results in similar settings for related problems such as 2D Euclidean minimum spanning trees, and hope that our work will open up a fruitful line of future research.
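The quality measure d_{vio} from item 3 can be computed by brute force with the classical incircle determinant. A sketch (names are ours; it assumes counterclockwise triangles, and a robust implementation would use exact predicates):

```python
def in_circumcircle(a, b, c, p):
    """Strict incircle test: is p strictly inside the circumcircle of the
    counterclockwise triangle (a, b, c)?  Uses the classical 'lifted'
    3x3 determinant, positive iff p is strictly inside."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
           - (bx * bx + by * by) * (ax * cy - ay * cx)
           + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0

def d_vio(triangles, points):
    """Brute-force d_vio: the maximum, over triangles of the predicted
    triangulation, of the number of input points strictly inside the
    triangle's circumcircle (0 iff the prediction is Delaunay).
    O(|T| * n); illustrative only."""
    return max(sum(1 for p in points
                   if p not in (a, b, c) and in_circumcircle(a, b, c, p))
               for (a, b, c) in triangles)
```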

Cite as

Sergio Cabello, Timothy M. Chan, and Panos Giannopoulos. Delaunay Triangulations with Predictions. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 31:1-31:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{cabello_et_al:LIPIcs.ITCS.2026.31,
  author =	{Cabello, Sergio and Chan, Timothy M. and Giannopoulos, Panos},
  title =	{{Delaunay Triangulations with Predictions}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{31:1--31:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.31},
  URN =		{urn:nbn:de:0030-drops-253186},
  doi =		{10.4230/LIPIcs.ITCS.2026.31},
  annote =	{Keywords: Delaunay Triangulation, Minimum Spanning Tree, Algorithms with Predictions}
}
Document
Vanishing Signatures, Orbit Closure, and the Converse of the Holant Theorem

Authors: Jin-Yi Cai and Ben Young


Abstract
Valiant’s Holant theorem is a powerful tool for algorithms and reductions for counting problems. It states that if two sets ℱ and 𝒢 of tensors (a.k.a. constraint functions or signatures) are related by a holographic transformation, then ℱ and 𝒢 are Holant-indistinguishable, i.e., every tensor network using tensors from ℱ, respectively from 𝒢, contracts to the same value. Xia (ICALP 2010) conjectured the converse of the Holant theorem, but a counterexample was found based on vanishing signatures, i.e., signatures that are Holant-indistinguishable from 0. We prove two near-converses of the Holant theorem using techniques from invariant theory. (I) Holant-indistinguishable ℱ and 𝒢 always admit two sequences of holographic transformations mapping them arbitrarily close to each other, i.e., their GL_q-orbit closures intersect. (II) We show that vanishing signatures are the only true obstacle to a converse of the Holant theorem. As corollaries of the two theorems we obtain the first characterization of homomorphism-indistinguishability over graphs of bounded degree, a long-standing open problem, and show that two graphs with invertible adjacency matrices are isomorphic if and only if they are homomorphism-indistinguishable over graphs with maximum degree at most three. We also show that Holant-indistinguishability is complete for a complexity class TOCI introduced by Lysikov and Walter [Vladimir Lysikov and Michael Walter, 2024], and hence hard for graph isomorphism.

Cite as

Jin-Yi Cai and Ben Young. Vanishing Signatures, Orbit Closure, and the Converse of the Holant Theorem. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{cai_et_al:LIPIcs.ITCS.2026.32,
  author =	{Cai, Jin-Yi and Young, Ben},
  title =	{{Vanishing Signatures, Orbit Closure, and the Converse of the Holant Theorem}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.32},
  URN =		{urn:nbn:de:0030-drops-253198},
  doi =		{10.4230/LIPIcs.ITCS.2026.32},
  annote =	{Keywords: Holant, Orbit Closure Intersection, Homomorphism Indistinguishability, Tensor Network}
}
Uniformity Testing Under User-Level Local Privacy

Authors: Clément L. Canonne, Abigail Gentle, and Vikrant Singhal


Abstract
We initiate the study of distribution testing under user-level local differential privacy, where each of n users contributes m samples from the unknown underlying distribution. This setting, albeit very natural, is significantly more challenging than the usual locally private setting, as for the same parameter ε the privacy guarantee must now apply to a full batch of m data points. While some recent work considers distribution learning in this user-level setting, nothing was known for even the most fundamental testing task, uniformity testing (and its generalization, identity testing). We address this gap by providing (nearly) sample-optimal user-level LDP algorithms for uniformity and identity testing. Motivated by practical considerations, our main focus is on the private-coin, symmetric setting, which does not require users to share a common random seed or to have been assigned a globally unique identifier.

Cite as

Clément L. Canonne, Abigail Gentle, and Vikrant Singhal. Uniformity Testing Under User-Level Local Privacy. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 33:1-33:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{canonne_et_al:LIPIcs.ITCS.2026.33,
  author =	{Canonne, Cl\'{e}ment L. and Gentle, Abigail and Singhal, Vikrant},
  title =	{{Uniformity Testing Under User-Level Local Privacy}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{33:1--33:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.33},
  URN =		{urn:nbn:de:0030-drops-253201},
  doi =		{10.4230/LIPIcs.ITCS.2026.33},
  annote =	{Keywords: Differential Privacy, Local Differential Privacy, Uniformity Testing, Identity Testing, Hypothesis Testing, User-Level Differential Privacy, Person-Level Differential Privacy}
}
Testing Classical Properties from Quantum Data

Authors: Matthias C. Caro, Preksha Naik, and Joseph Slote


Abstract
Many properties of Boolean functions can be tested far more efficiently than the function itself can be learned. However, this dramatic advantage often disappears when testers are limited to random samples of f instead of adaptively chosen queries to f. In this work we investigate the quantum version of this restriction: quantum algorithms that test properties of a Boolean function f solely from copies of either the function state |f⟩ ∝ ∑_x|x,f(x)⟩ or the phase state |(-1)^f⟩ ∝ ∑_x (-1)^{f(x)}|x⟩. Quantum advantage in testing from data. For monotonicity, symmetry, and triangle-freeness, we show passive quantum testers are unboundedly or super-polynomially better than their classical passive testing counterparts. They are competitive with classical query-based testers in each case. Inadequacy of Fourier sampling. Our new testers use techniques beyond quantum Fourier sampling, and it turns out this is necessary: we show a certain class of bent functions can be tested from 𝒪(1) function states but has a sample complexity lower bound of 2^{Ω(n)} for any tester relying exclusively on Fourier and classical samples. Classical queries vs. quantum data. Our passive quantum testers are competitive with classical query-based testers, but this isn't universal: we exhibit a testing problem that can be solved from 𝒪(1) classical queries but requires Ω(2^{n/2}) function state copies. The Forrelation problem provides a separation of the same magnitude in the opposite direction, so we conclude that quantum data and classical queries are "maximally incomparable" resources for testing. Towards lower bounds. We also begin the study of lower bounds for testing from quantum data. For quantum monotonicity testing, we prove that the ensembles of [Goldreich et al., 2000; Black, 2024], which give exponential lower bounds for classical sample-based testing, do not yield any nontrivial lower bounds for testing from quantum data.
New insights specific to quantum data will be required for proving copy complexity lower bounds for testing in this model.

Cite as

Matthias C. Caro, Preksha Naik, and Joseph Slote. Testing Classical Properties from Quantum Data. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 34:1-34:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{caro_et_al:LIPIcs.ITCS.2026.34,
  author =	{Caro, Matthias C. and Naik, Preksha and Slote, Joseph},
  title =	{{Testing Classical Properties from Quantum Data}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{34:1--34:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.34},
  URN =		{urn:nbn:de:0030-drops-253213},
  doi =		{10.4230/LIPIcs.ITCS.2026.34},
  annote =	{Keywords: Quantum Property Testing, Quantum Data, Boolean Functions}
}
Symmetric Quantum Computation

Authors: Davi Castro-Silva, Tom Gur, and Sergii Strelchuk


Abstract
We introduce a systematic study of symmetric quantum circuits, a new restricted model of quantum computation that preserves the symmetries of the problems it solves. This model is well-adapted for studying the role of symmetry in quantum speedups, extending a central notion of symmetric computation studied in the classical setting. Our results establish that symmetric quantum circuits are fundamentally more powerful than their classical counterparts. First, we give efficient symmetric circuits for key quantum techniques such as amplitude amplification, phase estimation and linear combination of unitaries. In addition, we show how the task of symmetric state preparation can be performed efficiently in several natural cases. Finally, we demonstrate an exponential separation in the symmetric setting for the problem XOR-SAT, which requires exponential-size symmetric classical circuits but can be solved by polynomial-size symmetric quantum circuits.

Cite as

Davi Castro-Silva, Tom Gur, and Sergii Strelchuk. Symmetric Quantum Computation. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 35:1-35:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{castrosilva_et_al:LIPIcs.ITCS.2026.35,
  author =	{Castro-Silva, Davi and Gur, Tom and Strelchuk, Sergii},
  title =	{{Symmetric Quantum Computation}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{35:1--35:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.35},
  URN =		{urn:nbn:de:0030-drops-253223},
  doi =		{10.4230/LIPIcs.ITCS.2026.35},
  annote =	{Keywords: Quantum computing, complexity theory, symmetries}
}
Beyond 2-Edge-Connectivity: Algorithms and Impossibility for Content-Oblivious Leader Election

Authors: Yi-Jun Chang, Lyuting Chen, and Haoran Zhou


Abstract
The content-oblivious model, introduced by Censor-Hillel, Cohen, Gelles, and Sela (PODC 2022; Distributed Computing 2023), captures an extremely weak form of communication where nodes can only send asynchronous, content-less pulses. They showed that in 2-edge-connected networks, any distributed algorithm can be simulated in the content-oblivious model, provided that a unique leader is designated a priori. Subsequent works of Frei, Gelles, Ghazy, and Nolin (DISC 2024) and Chalopin et al. (DISC 2025) developed content-oblivious leader election algorithms, first for unoriented rings and then for general 2-edge-connected graphs. These results establish that all graph problems are solvable in content-oblivious, 2-edge-connected networks. Much less is known about networks that are not 2-edge-connected. Censor-Hillel, Cohen, Gelles, and Sela showed that no non-constant function f(x,y) can be computed correctly by two parties using content-oblivious communication over a single edge, where one party holds x and the other holds y. This seemingly ruled out many natural graph problems on non-2-edge-connected graphs. In this work, we show that, with the knowledge of network topology G, leader election is possible in a wide range of graphs. Our main contributions are as follows: Impossibility: Graphs symmetric about an edge admit no randomized terminating leader election algorithm, even when nodes have unique identifiers and full knowledge of G. Leader election algorithms: Trees that are not symmetric about any edge admit a quiescently terminating leader election algorithm with topology knowledge, even in anonymous networks, using O(n²) messages, where n is the number of nodes. Moreover, even-diameter trees admit a terminating leader election given only the knowledge of the network diameter D = 2r, with message complexity O(nr). 
Necessity of topology knowledge: In the family of graphs 𝒢 = {P₃, P₅}, both the 3-path P₃ and the 5-path P₅ admit a quiescently terminating leader election if nodes know the topology exactly. However, if nodes only know that the underlying topology belongs to 𝒢, then terminating leader election is impossible.

Cite as

Yi-Jun Chang, Lyuting Chen, and Haoran Zhou. Beyond 2-Edge-Connectivity: Algorithms and Impossibility for Content-Oblivious Leader Election. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 36:1-36:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{chang_et_al:LIPIcs.ITCS.2026.36,
  author =	{Chang, Yi-Jun and Chen, Lyuting and Zhou, Haoran},
  title =	{{Beyond 2-Edge-Connectivity: Algorithms and Impossibility for Content-Oblivious Leader Election}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{36:1--36:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.36},
  URN =		{urn:nbn:de:0030-drops-253239},
  doi =		{10.4230/LIPIcs.ITCS.2026.36},
  annote =	{Keywords: Asynchronous model, fault tolerance, quiescent termination}
}
New Algebrization Barriers to Circuit Lower Bounds via Communication Complexity of Missing-String

Authors: Lijie Chen, Yang Hu, and Hanlin Ren


Abstract
The algebrization barrier, proposed by Aaronson and Wigderson (STOC '08, ToCT '09), captures the limitations of many complexity-theoretic techniques based on arithmetization. Notably, several circuit lower bounds that overcome the relativization barrier (Buhrman-Fortnow-Thierauf, CCC '98; Vinodchandran, TCS '05; Santhanam, STOC '07, SICOMP '09) remain subject to the algebrization barrier. In this work, we establish several new algebrization barriers to circuit lower bounds by studying the communication complexity of the following problem, called XOR-Missing-String: For m < 2^{n/2}, Alice gets a list of m strings x₁, … , x_m ∈ {0, 1}ⁿ, Bob gets a list of m strings y₁, … , y_m ∈ {0, 1}ⁿ, and the goal is to output a string s ∈ {0, 1}ⁿ that is not equal to x_i ⊕ y_j for any i, j ∈ [m]. 1) We construct an oracle A₁ and its multilinear extension Ã₁ such that PostBPE^{Ã₁} has linear-size A₁-oracle circuits on infinitely many input lengths. That is, proving PostBPE ⊈ i.o.-SIZE[O(n)] requires non-algebrizing techniques. This barrier follows from a PostBPP communication lower bound for XOR-Missing-String. This is in contrast to the well-known algebrizing lower bound MA_E (⊆ PostBPE) ⊈ P/poly. 2) We construct an oracle A₂ and its multilinear extension Ã₂ such that BPE^{Ã₂} has linear-size A₂-oracle circuits on all input lengths. Previously, a similar barrier was demonstrated by Aaronson and Wigderson, but in their result, Ã₂ is only a multiquadratic extension of A₂. Our results show that communication complexity is more useful than previously thought for proving algebrization barriers, as Aaronson and Wigderson wrote that communication-based barriers were "more contrived". This serves as an example of how XOR-Missing-String forms new connections between communication lower bounds and algebrization barriers. 3) Finally, we study algebrization barriers to circuit lower bounds for MA_E.
Buhrman, Fortnow, and Thierauf proved a sub-half-exponential circuit lower bound for MA_E via algebrizing techniques. Toward understanding whether the half-exponential bound can be improved, we define a natural subclass of MA_E that includes their hard MA_E language, and prove the following result: For every super-half-exponential function h(n), we construct an oracle A₃ and its multilinear extension Ã₃ such that this natural subclass of MA_E^{Ã₃} has h(n)-size A₃-oracle circuits on all input lengths. This suggests that half-exponential might be the correct barrier for MA_E circuit lower bounds w.r.t. algebrizing techniques.
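To make the XOR-Missing-String problem above concrete, here is a small brute-force reference solver (illustrative only; it ignores the communication constraints that make the problem interesting, and all names are ours). Since m < 2^{n/2}, at most m² < 2ⁿ strings are forbidden, so a missing string always exists.

```python
from itertools import product

def xor_missing_string(xs, ys, n):
    """Return an n-bit string s (as an int) with s != x ^ y for all x in xs, y in ys.

    Brute-force reference solver: a valid s exists whenever
    len(xs) * len(ys) < 2**n, e.g. when each party holds m < 2**(n/2) strings.
    """
    forbidden = {x ^ y for x, y in product(xs, ys)}
    for s in range(2 ** n):
        if s not in forbidden:
            return s
    raise ValueError("no missing string: too many forbidden XOR values")

# Tiny example with n = 4 and m = 3 strings per party.
xs = [0b0001, 0b0010, 0b0100]
ys = [0b1000, 0b1111, 0b0011]
s = xor_missing_string(xs, ys, 4)
assert all(s != x ^ y for x in xs for y in ys)
```

The communication question studied in the paper is how few bits Alice and Bob must exchange to agree on such an s; the exhaustive search above is only a correctness baseline.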

Cite as

Lijie Chen, Yang Hu, and Hanlin Ren. New Algebrization Barriers to Circuit Lower Bounds via Communication Complexity of Missing-String. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 37:1-37:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{chen_et_al:LIPIcs.ITCS.2026.37,
  author =	{Chen, Lijie and Hu, Yang and Ren, Hanlin},
  title =	{{New Algebrization Barriers to Circuit Lower Bounds via Communication Complexity of Missing-String}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{37:1--37:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.37},
  URN =		{urn:nbn:de:0030-drops-253246},
  doi =		{10.4230/LIPIcs.ITCS.2026.37},
  annote =	{Keywords: circuit lower bound, algebrization barrier, missing string, communication complexity}
}
Lower Bounds on Tree Covers

Authors: Yu Chen, Zihan Tan, and Hangyu Xu


Abstract
Given an n-point metric space (X,d_X), a tree cover 𝒯 is a set of |𝒯| = k trees on X such that every pair of vertices in X has a low-distortion path in one of the trees in 𝒯. Tree covers have been playing a crucial role in graph algorithms for decades, and the research focus is the construction of tree covers with small size k and distortion. When k = 1, the best distortion is known to be Θ(n). For a constant k ≥ 2, the best distortion upper bound is Õ(n^{1/k}) and the strongest lower bound is Ω(log_k n), leaving a gap to be closed. In this paper, we improve the lower bound to Ω(n^{1/(2^{k-1})}). Our proof is a novel analysis on a structurally simple grid-like graph, which utilizes some combinatorial fixed-point theorems. We believe these techniques will prove useful for analyzing other tree-like data structures as well.
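The distortion of a candidate tree cover can be computed directly from the definition above. The sketch below is illustrative (not from the paper): it takes a finite metric and a set of weighted trees on the same points, and reports the worst pair's best tree-path stretch.

```python
from collections import deque

def tree_distances(tree):
    """All-pairs path lengths in a weighted tree given as {u: {v: w}} (symmetric)."""
    dist = {}
    for src in tree:
        d = {src: 0.0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v, w in tree[u].items():
                if v not in d:          # trees have unique paths, so BFS suffices
                    d[v] = d[u] + w
                    queue.append(v)
        dist[src] = d
    return dist

def cover_distortion(metric, trees):
    """Max over pairs (u, v) of min over trees T of d_T(u, v) / d_X(u, v).

    `metric` maps frozenset({u, v}) -> d_X(u, v). A good tree cover keeps this
    ratio small for every pair; domination of d_X is not assumed here.
    """
    tdists = [tree_distances(t) for t in trees]
    worst = 0.0
    for pair, dxy in metric.items():
        u, v = tuple(pair)
        best = min(td[u][v] for td in tdists)
        worst = max(worst, best / dxy)
    return worst

# The 4-cycle shortest-path metric, covered by two spanning paths.
metric = {frozenset(p): 1.0 for p in [("a","b"), ("b","c"), ("c","d"), ("d","a")]}
metric[frozenset(("a","c"))] = 2.0
metric[frozenset(("b","d"))] = 2.0
path1 = {"a": {"b": 1}, "b": {"a": 1, "c": 1}, "c": {"b": 1, "d": 1}, "d": {"c": 1}}
path2 = {"b": {"c": 1}, "c": {"b": 1, "d": 1}, "d": {"c": 1, "a": 1}, "a": {"d": 1}}
print(cover_distortion(metric, [path1, path2]))  # 1.0: two trees cover the cycle exactly
```

With k = 1 (a single spanning path) the same cycle already forces distortion 3 on one pair, which is the flavor of the k versus distortion trade-off the paper studies.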

Cite as

Yu Chen, Zihan Tan, and Hangyu Xu. Lower Bounds on Tree Covers. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{chen_et_al:LIPIcs.ITCS.2026.38,
  author =	{Chen, Yu and Tan, Zihan and Xu, Hangyu},
  title =	{{Lower Bounds on Tree Covers}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.38},
  URN =		{urn:nbn:de:0030-drops-253254},
  doi =		{10.4230/LIPIcs.ITCS.2026.38},
  annote =	{Keywords: Tree Covers, Combinatorial Fixed-Point Theorems}
}
Efficient Algorithms for the Disjoint Shortest Paths Problem and Its Extensions

Authors: Keerti Choudhary, Amit Kumar, and Lakshay Saggi


Abstract
We study the 2-Disjoint Shortest Paths (2-DSP) problem: given a directed weighted graph and two terminal pairs (s₁,t₁) and (s₂,t₂), decide whether there exist vertex-disjoint shortest paths between each pair. Building on recent advances in disjoint shortest paths for DAGs and undirected graphs (Akmal et al. 2024), we present an O(mn log n)-time algorithm for this problem in weighted directed graphs that do not contain negative or zero weight cycles. This algorithm is a significant improvement over the previously known O(m⁵n)-time bound (Berczi et al. 2017). Our approach exploits the algebraic structure of polynomials that enumerate shortest paths between terminal pairs. A key insight is that these polynomials admit a recursive decomposition, enabling efficient evaluation via dynamic programming over fields of characteristic two. Furthermore, we demonstrate how to report the corresponding paths in O(mn² log n) time. In addition, we extend our techniques to a more general setting: given two terminal pairs (s₁, t₁) and (s₂, t₂) in a directed graph, find the minimum possible number of vertex intersections between any shortest path from s₁ to t₁ and any shortest path from s₂ to t₂. We call this the Minimum 2-Disjoint Shortest Paths (Min-2-DSP) problem. We provide the first efficient algorithms for this problem: an O(m² n³)-time algorithm for directed graphs with positive edge weights, and an O(m+n)-time algorithm for DAGs and undirected graphs. Moreover, if the number of intersecting vertices is at least one, we show that it is possible to report the paths in the same O(m+n) time. This is somewhat surprising, as there is no known o(mn)-time algorithm for explicitly reporting the paths if they are vertex-disjoint, which is left as an open problem in (Akmal et al. 2024).

Cite as

Keerti Choudhary, Amit Kumar, and Lakshay Saggi. Efficient Algorithms for the Disjoint Shortest Paths Problem and Its Extensions. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 39:1-39:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{choudhary_et_al:LIPIcs.ITCS.2026.39,
  author =	{Choudhary, Keerti and Kumar, Amit and Saggi, Lakshay},
  title =	{{Efficient Algorithms for the Disjoint Shortest Paths Problem and Its Extensions}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{39:1--39:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.39},
  URN =		{urn:nbn:de:0030-drops-253267},
  doi =		{10.4230/LIPIcs.ITCS.2026.39},
  annote =	{Keywords: Disjoint paths, Disjoint shortest paths, Algebraic graph algorithms}
}
A Simple and Robust Protocol for Distributed Counting

Authors: Edith Cohen, Moshe Shechner, and Uri Stemmer


Abstract
We revisit the distributed counting problem, where a server must continuously approximate the total number of events occurring across k sites while minimizing communication. The communication complexity of this problem is known to be Θ((k/ε) log N) for deterministic protocols. Huang, Yi, and Zhang (2012) showed that randomization can reduce this to Θ((√k/ε) log N), but their analysis is restricted to the oblivious setting, where the stream of events is independent of the protocol’s outputs. Xiong, Zhu, and Huang (2023) presented a robust protocol for distributed counting that removes the oblivious assumption. However, their communication complexity is suboptimal by a polylog(k) factor and their protocol is substantially more complex than the oblivious protocol of Huang et al. (2012). This left open a natural question: could it be that the simple protocol of Huang et al. (2012) is already robust? We resolve this question with two main contributions. First, we show that the protocol of Huang et al. (2012) is itself not robust by constructing an explicit adaptive attack that forces it to lose its accuracy. Second, we present a new, surprisingly simple, robust protocol for distributed counting that achieves the optimal communication complexity of O((√k/ε) log N). Our protocol is simpler than that of Xiong et al. (2023), perhaps even simpler than that of Huang et al. (2012), and is the first to match the optimal oblivious complexity in the adaptive setting.

Cite as

Edith Cohen, Moshe Shechner, and Uri Stemmer. A Simple and Robust Protocol for Distributed Counting. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 40:1-40:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{cohen_et_al:LIPIcs.ITCS.2026.40,
  author =	{Cohen, Edith and Shechner, Moshe and Stemmer, Uri},
  title =	{{A Simple and Robust Protocol for Distributed Counting}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{40:1--40:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.40},
  URN =		{urn:nbn:de:0030-drops-253272},
  doi =		{10.4230/LIPIcs.ITCS.2026.40},
  annote =	{Keywords: Distributed Streaming, Adversarial Streaming}
}
The Curious Case of "XOR Repetition" of Monogamy-Of-Entanglement Games

Authors: Andrea Coladangelo, Qipeng Liu, and Ziyi Xie


Abstract
In this work, we consider "decision" variants of a well-known monogamy-of-entanglement game by Tomamichel, Fehr, Kaniewski, and Wehner [New Journal of Physics '13]. In its original "search" variant, Alice prepares a (possibly entangled) state on registers 𝖠𝖡𝖢; register 𝖠, consisting of n qubits, is sent to a Referee, while 𝖡 and 𝖢 are sent to Bob and Charlie; the Referee then measures each qubit in the standard or Hadamard basis (chosen uniformly at random). The basis choices are sent to Bob and Charlie, whose goal is to simultaneously guess the Referee’s n-bit measurement outcome string x. Tomamichel et al. show that the optimal winning probability is cos^{2n}(π/8), which follows from a perfect parallel repetition theorem. We consider the following "decision" variants of this game: - Variant 1, "XOR repetition": Bob and Charlie’s goal is to guess the XOR of all the bits of x. Ananth et al. [Asiacrypt '24] conjectured that the optimal advantage over random guessing decays exponentially in n. Surprisingly, we show that this conjecture is false, and, in fact, there is no decay at all: there exists a strategy that wins with probability cos²(π/8) ≈ 0.85 for any n. Moreover, this strategy does not involve any entanglement between Alice, Bob, and Charlie! - Variant 2, "Goldreich-Levin": The Referee additionally samples a uniformly random n-bit string r that is sent to Bob and Charlie along with the basis choices. Their goal is to guess the parity of r⋅x. We show that the optimal advantage over random guessing decays exponentially in n for the restricted class of adversaries that do not share entanglement. A similar result was already shown by Champion et al. and Çakan et al.; we give a more direct proof. Showing that Variant 2 is "secure" (i.e., that the optimal winning probability is exponentially close to 1/2) against general adversaries would imply the existence of an information-theoretically "unclonable bit".
We put forward a reasonably concrete conjecture that is equivalent to the general security of Variant 2.
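The constant cos²(π/8) quoted for Variant 1 can be sanity-checked numerically (a trivial arithmetic aside, not part of the paper): by the half-angle identity, cos²(π/8) = (1 + cos(π/4))/2 = (2 + √2)/4 ≈ 0.8536, matching both the ≈ 0.85 figure and the n = 1 case of cos^{2n}(π/8).

```python
import math

# cos^2(pi/8) via the half-angle identity: (1 + cos(pi/4)) / 2 = (2 + sqrt(2)) / 4
direct = math.cos(math.pi / 8) ** 2
closed_form = (2 + math.sqrt(2)) / 4
assert abs(direct - closed_form) < 1e-12
print(round(direct, 4))  # 0.8536
```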

Cite as

Andrea Coladangelo, Qipeng Liu, and Ziyi Xie. The Curious Case of "XOR Repetition" of Monogamy-Of-Entanglement Games. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 41:1-41:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{coladangelo_et_al:LIPIcs.ITCS.2026.41,
  author =	{Coladangelo, Andrea and Liu, Qipeng and Xie, Ziyi},
  title =	{{The Curious Case of "XOR Repetition" of Monogamy-Of-Entanglement Games}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{41:1--41:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.41},
  URN =		{urn:nbn:de:0030-drops-253281},
  doi =		{10.4230/LIPIcs.ITCS.2026.41},
  annote =	{Keywords: quantum information, monogamy of entanglement, unclonable encryption}
}
Time and Space Efficient Deterministic List Decoding

Authors: Joshua Cook and Dana Moshkovitz


Abstract
Error-correcting codes encode messages by codewords in such a way that even if some of the codeword is corrupted, the message can be decoded. Typical decoding algorithms for error-correcting codes either use linear space or quadratic time. A natural question is whether codes can be decoded in near-linear time and sub-linear space simultaneously. A recent result by Cook and Moshkovitz gave efficient decoders that can uniquely decode Reed-Muller and other codes from a constant fraction (less than half) of corruption. In this work, we address the problem of list decoding in near-linear time and sub-linear space. In the list decoding setting, most of the codeword is corrupted, and one wants to output a short list of potential messages that contains the true message. For any constants γ, τ > 0, we give decoders for Reed-Muller codes that can decode from a 1-γ fraction of corruptions in time n^{1+τ} and space n^{τ}. Our decoders work by extending the iterative correction technique of Cook and Moshkovitz. However, that technique, which gradually decreases the number of corruptions in the message, was tailored to the unique decoding setting. We first identify an intermediate problem, codewords list recovery, for which we can make iterative correction work. We then show how to reduce general list decoding to the codewords list recovery problem in efficient time and space. The reduction relies on local correction and testing. In the codewords list recovery problem, the input consists of n unordered lists containing exactly the symbols from L codewords, where a small fraction of the lists is corrupted. The goal is to find the L codewords. In addition, we prove that any linear code with time-space efficient encoding or decoding must be local, in the sense that the codewords satisfy a local linear constraint. This rules out codes like Reed-Solomon from having time-space efficient encoding or decoding.
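The codewords list recovery problem defined in the abstract can be illustrated on a toy instance. The sketch below is ours, not the paper's algorithm (which must be time- and space-efficient): given a small code, per-position unordered symbol lists, and a bound on corrupted positions, it brute-forces the codewords consistent with all but the corrupted columns.

```python
def list_recover(code, columns, max_corrupt):
    """Return codewords from `code` consistent with all but `max_corrupt` columns.

    `columns[i]` holds the unordered set of symbols claimed at position i;
    a codeword c agrees with column i when c[i] is in columns[i].
    Brute force over the code -- illustrative only, not time/space efficient.
    """
    n = len(columns)
    recovered = []
    for c in code:
        disagreements = sum(1 for i in range(n) if c[i] not in columns[i])
        if disagreements <= max_corrupt:
            recovered.append(c)
    return recovered

# Toy code with four length-5 words; columns come from L = 2 true codewords,
# and one column is corrupted.
code = ["00000", "01110", "11111", "10101"]
true_words = ["00000", "01110"]
columns = [set(w[i] for w in true_words) for i in range(5)]
columns[2] = {"?"}  # corrupted column
print(list_recover(code, columns, max_corrupt=1))  # ['00000', '01110']
```

The interesting regime in the paper is when the code is exponentially large, so enumerating it as above is exactly what the efficient reduction must avoid.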

Cite as

Joshua Cook and Dana Moshkovitz. Time and Space Efficient Deterministic List Decoding. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 42:1-42:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{cook_et_al:LIPIcs.ITCS.2026.42,
  author =	{Cook, Joshua and Moshkovitz, Dana},
  title =	{{Time and Space Efficient Deterministic List Decoding}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{42:1--42:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.42},
  URN =		{urn:nbn:de:0030-drops-253292},
  doi =		{10.4230/LIPIcs.ITCS.2026.42},
  annote =	{Keywords: Reed-Muller code, local correction, local testing}
}
Efficient Catalytic Graph Algorithms

Authors: James Cook and Edward Pyne


Abstract
We give fast, simple, and implementable catalytic logspace algorithms for two fundamental graph problems. First, a randomized catalytic algorithm for s → t connectivity running in Õ(nm) time, and a deterministic catalytic algorithm for the same problem running in Õ(n³ m) time. The former algorithm is the first algorithmic use of randomization in CL. The algorithm uses one register per vertex and repeatedly "pushes" values along the edges in the graph. Second, a deterministic catalytic algorithm for simulating random walks, which in Õ(m T² / ε) time estimates the probability that a T-step random walk ends at a given vertex to within ε additive error. The algorithm uses one register for each vertex and increments it at each visit to ensure repeated visits follow different outgoing edges. Prior catalytic algorithms for both problems did not have explicit runtime bounds beyond being polynomial in n.

Cite as

James Cook and Edward Pyne. Efficient Catalytic Graph Algorithms. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 43:1-43:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{cook_et_al:LIPIcs.ITCS.2026.43,
  author =	{Cook, James and Pyne, Edward},
  title =	{{Efficient Catalytic Graph Algorithms}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{43:1--43:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.43},
  URN =		{urn:nbn:de:0030-drops-253305},
  doi =		{10.4230/LIPIcs.ITCS.2026.43},
  annote =	{Keywords: catalytic computing, graph algorithms, catalytic logspace}
}
Document
Higher-Order Delsarte Dual LPs: Lifting, Constructions and Completeness

Authors: Leonardo Nagami Coregliano, Fernando Granha Jeronimo, Chris Jones, Nati Linial, and Elyassaf Loyfer


Abstract
A central and longstanding open problem in coding theory is the rate-versus-distance trade-off for binary error-correcting codes. In a seminal work, Delsarte introduced a family of linear programs establishing relaxations on the size of optimum codes. To date, the state-of-the-art upper bounds for binary codes come from dual feasible solutions to these LPs. Still, these bounds are exponentially far from the best-known existential constructions. Recently, hierarchies of linear programs extending and strengthening Delsarte’s original LPs were introduced for linear codes, which we refer to as higher-order Delsarte LPs. These new hierarchies were shown to provably converge to the actual value of optimum codes, namely, they are complete hierarchies. Therefore, understanding them and their dual formulations becomes a valuable line of investigation. Nonetheless, their higher-order structure poses challenges. In fact, analysis of all known convex programming hierarchies strengthening Delsarte’s original LPs has turned out to be exceedingly difficult and essentially nothing is known, stalling progress in the area since the 1970s. Our main result is an analysis of the higher-order Delsarte LPs via their dual formulation. Although quantitatively, our current analysis only matches the best-known upper bounds, it shows, for the first time, how to tame the complexity of analyzing a hierarchy strengthening Delsarte’s original LPs. In doing so, we reach a better understanding of the structure of the hierarchy, which may serve as the foundation for further quantitative improvements. We provide two additional structural results for this hierarchy. First, we show how to explicitly lift any feasible dual solution from level k to a (suitable) larger level 𝓁 while retaining the objective value. Second, we give a novel proof of completeness using the dual formulation.

Cite as

Leonardo Nagami Coregliano, Fernando Granha Jeronimo, Chris Jones, Nati Linial, and Elyassaf Loyfer. Higher-Order Delsarte Dual LPs: Lifting, Constructions and Completeness. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 44:1-44:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{coregliano_et_al:LIPIcs.ITCS.2026.44,
  author =	{Coregliano, Leonardo Nagami and Jeronimo, Fernando Granha and Jones, Chris and Linial, Nati and Loyfer, Elyassaf},
  title =	{{Higher-Order Delsarte Dual LPs: Lifting, Constructions and Completeness}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{44:1--44:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.44},
  URN =		{urn:nbn:de:0030-drops-253315},
  doi =		{10.4230/LIPIcs.ITCS.2026.44},
  annote =	{Keywords: Coding theory, code bounds, convex optimization, linear programming hierarchy}
}
Document
Fairness in the k-Server Problem

Authors: Mohammadreza Daneshvaramoli, Mohammad Hajiesmaili, Shahin Kamali, Helia Karisani, and Cameron Musco


Abstract
We initiate a formal study of fairness for the k-server problem, where the objective is not only to minimize the total movement cost, but also to distribute the cost equitably among servers. We first define a general notion of (α,β)-fairness, where, for parameters α ≥ 1 and β ≥ 0, no server incurs more than an α/k-fraction of the total cost plus an additive term β. We then show that fairness can be achieved without a loss in competitiveness in both the offline and online settings. In the offline setting, we give a deterministic algorithm that, for any ε > 0, transforms any optimal solution into an (α,β)-fair solution for α = 1 + ε and β = O(diam ⋅ log k / ε), while increasing the cost of the solution by just an additive O(diam ⋅ k log k / ε) term. Here diam is the diameter of the underlying metric space. We give a similar result in the online setting, showing that any competitive algorithm can be transformed into a randomized online algorithm that is fair with high probability against an oblivious adversary and still competitive up to a small loss. The above results leave open a significant question: can fairness be achieved in the online setting, either with a deterministic algorithm or a randomized algorithm, against a fully adaptive adversary? We make progress towards answering this question, showing that the classic deterministic Double Coverage Algorithm (DCA) is fair on line metrics and on tree metrics when k = 2. However, we also show a negative result: DCA fails to be fair for any non-vacuous parameters on general tree metrics. We further show that on uniform metrics (i.e., the paging problem), the deterministic First-In First-Out (FIFO) algorithm is fair. We show that any "marking algorithm", including the Least Recently Used (LRU) algorithm, also satisfies a weaker, but still meaningful notion of fairness.
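The (α,β)-fairness condition stated above is a simple pointwise check on the per-server cost allocation. A minimal sketch of that check, with illustrative names not taken from the paper:

```python
def is_fair(costs, alpha, beta):
    """Return True if the allocation is (alpha, beta)-fair: no server
    incurs more than an alpha/k fraction of the total cost plus beta."""
    k = len(costs)
    total = sum(costs)
    return all(c <= alpha * total / k + beta for c in costs)

# A perfectly balanced allocation is (1, 0)-fair.
print(is_fair([10, 10, 10], alpha=1, beta=0))   # True
# One overloaded server violates (1, 0)-fairness ...
print(is_fair([25, 5, 0], alpha=1, beta=0))     # False
# ... but satisfies the relaxed condition with a larger additive term.
print(is_fair([25, 5, 0], alpha=1, beta=15))    # True
```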

Cite as

Mohammadreza Daneshvaramoli, Mohammad Hajiesmaili, Shahin Kamali, Helia Karisani, and Cameron Musco. Fairness in the k-Server Problem. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 45:1-45:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{daneshvaramoli_et_al:LIPIcs.ITCS.2026.45,
  author =	{Daneshvaramoli, Mohammadreza and Hajiesmaili, Mohammad and Kamali, Shahin and Karisani, Helia and Musco, Cameron},
  title =	{{Fairness in the k-Server Problem}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{45:1--45:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.45},
  URN =		{urn:nbn:de:0030-drops-253328},
  doi =		{10.4230/LIPIcs.ITCS.2026.45},
  annote =	{Keywords: k-server problem, online algorithms, fairness, competitive analysis}
}
Document
Symmetric Algebraic Circuits and Homomorphism Polynomials

Authors: Anuj Dawar, Benedikt Pago, and Tim Seppelt


Abstract
The central open question of algebraic complexity is whether VP ≠ VNP, which asserts that the permanent cannot be represented by families of polynomial-size algebraic circuits. For symmetric algebraic circuits, this has been confirmed by Dawar and Wilsenach (2020), who showed exponential lower bounds on the size of symmetric circuits for the permanent. In this work, we set out to develop a more general symmetric algebraic complexity theory. Our main result is that a family of symmetric polynomials admits small symmetric circuits if and only if they can be written as a linear combination of homomorphism counting polynomials of graphs of bounded treewidth. We also establish a relationship between the symmetric complexity of subgraph counting polynomials and the vertex cover number of the pattern graph. As a concrete example, we examine the symmetric complexity of immanant families (a generalisation of the determinant and permanent) and show that a known conditional dichotomy due to Curticapean (2021) holds unconditionally in the symmetric setting.
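The homomorphism counts underlying these polynomials follow the standard definition: maps between vertex sets that preserve adjacency. A brute-force sketch of that definition (illustrative only; not the paper's circuit construction):

```python
from itertools import product

def count_homomorphisms(pattern_edges, pattern_n, host_edges, host_n):
    """Count graph homomorphisms from a pattern graph to a host graph:
    vertex maps under which every pattern edge lands on a host edge."""
    host_adj = {frozenset(e) for e in host_edges}
    count = 0
    for f in product(range(host_n), repeat=pattern_n):
        # frozenset collapses (x, x) to a singleton, which is never a host
        # edge here, so loops are correctly rejected for simple graphs.
        if all(frozenset((f[u], f[v])) in host_adj for u, v in pattern_edges):
            count += 1
    return count

# Homomorphisms from a single edge (K2) into a triangle (K3): 3 * 2 = 6.
triangle = [(0, 1), (1, 2), (0, 2)]
print(count_homomorphisms([(0, 1)], 2, triangle, 3))   # 6
```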

Cite as

Anuj Dawar, Benedikt Pago, and Tim Seppelt. Symmetric Algebraic Circuits and Homomorphism Polynomials. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 46:1-46:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{dawar_et_al:LIPIcs.ITCS.2026.46,
  author =	{Dawar, Anuj and Pago, Benedikt and Seppelt, Tim},
  title =	{{Symmetric Algebraic Circuits and Homomorphism Polynomials}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{46:1--46:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.46},
  URN =		{urn:nbn:de:0030-drops-253330},
  doi =		{10.4230/LIPIcs.ITCS.2026.46},
  annote =	{Keywords: algebraic complexity, finite model theory, symmetric circuits, homomorphism counting, graph homomorphism, treewidth, counting width, first-order logic with counting quantifiers}
}
Document
Dudeney’s Dissection Is Optimal

Authors: Erik D. Demaine, Tonan Kamata, and Ryuhei Uehara


Abstract
In 1907, Henry Ernest Dudeney posed a puzzle: "cut any equilateral triangle ... into as few pieces as possible that will fit together and form a perfect square" (without overlap, via translation and rotation). Four weeks later, Dudeney demonstrated a beautiful four-piece solution, which today remains perhaps the most famous example of dissection. In this paper (over a century later), we finally solve Dudeney’s puzzle, by proving that the equilateral triangle and square have no common dissection with three or fewer polygonal pieces. We reduce the problem to the analysis of discrete graph structures representing the correspondence between the edges and the vertices of the pieces forming each polygon.

Cite as

Erik D. Demaine, Tonan Kamata, and Ryuhei Uehara. Dudeney’s Dissection Is Optimal. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 47:1-47:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{demaine_et_al:LIPIcs.ITCS.2026.47,
  author =	{Demaine, Erik D. and Kamata, Tonan and Uehara, Ryuhei},
  title =	{{Dudeney’s Dissection Is Optimal}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{47:1--47:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.47},
  URN =		{urn:nbn:de:0030-drops-253345},
  doi =		{10.4230/LIPIcs.ITCS.2026.47},
  annote =	{Keywords: Geometric Dissection, Dudeney Dissection, Dissection with Fewest Pieces}
}
Document
Auditability and the Landscape of Distance to Multicalibration

Authors: Nathan Derhake, Siddartha Devic, Dutch Hansen, Kuan Liu, and Vatsal Sharan


Abstract
Calibration is a critical property for establishing the trustworthiness of predictors that provide uncertainty estimates. Multicalibration is a strengthening of calibration which requires that predictors be calibrated on a potentially overlapping collection of subsets of the domain. As multicalibration grows in popularity with practitioners, an essential question is: how do we measure how multicalibrated a predictor is? Błasiok et al. [Błasiok et al., 2023] considered this question for standard calibration by introducing the distance to calibration framework (dCE) to understand how calibration metrics relate to each other and the ground truth. Building on the dCE framework, we consider the auditability of the distance to multicalibration of a predictor f. We begin by considering what are perhaps the two most natural generalizations of dCE to multiple subgroups: worst group dCE (wdMC), and distance to multicalibration (dMC). Using wdMC and dMC as a guiding path, we argue that there are two essential properties of any multicalibration error metric: 1) the metric should capture how much f would need to be modified in order to be perfectly multicalibrated; and 2) the metric should be auditable in an information theoretic sense (i.e., with some finite sample complexity). We show that wdMC and dMC each fail to satisfy one of these two properties, and that similar barriers arise when considering the auditability of general distance to multigroup fairness notions (e.g. multiaccuracy or low-degree multicalibration). We then propose two (equivalent) multicalibration metrics which do satisfy these requirements: 1) a continuized variant of dMC; and 2) a distance to intersection multicalibration, which leans on intersectional fairness desiderata. Along the way, we shed light on the loss-landscape of distance to multicalibration and the geometry of the set of perfectly multicalibrated predictors. 
We also demonstrate that the loss surface of any metric which captures how much f would need to be modified to be perfectly multicalibrated often satisfies a local minima are global minima property. Our findings may have implications for the development of stronger multicalibration algorithms, as well as multicalibration auditing more generally.

Cite as

Nathan Derhake, Siddartha Devic, Dutch Hansen, Kuan Liu, and Vatsal Sharan. Auditability and the Landscape of Distance to Multicalibration. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 48:1-48:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{derhake_et_al:LIPIcs.ITCS.2026.48,
  author =	{Derhake, Nathan and Devic, Siddartha and Hansen, Dutch and Liu, Kuan and Sharan, Vatsal},
  title =	{{Auditability and the Landscape of Distance to Multicalibration}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{48:1--48:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.48},
  URN =		{urn:nbn:de:0030-drops-253351},
  doi =		{10.4230/LIPIcs.ITCS.2026.48},
  annote =	{Keywords: Multicalibration, Auditability, Fairness, Classification, Calibration}
}
Document
Debordering Closure Results in Determinantal and Pfaffian Ideals

Authors: Anakin Dey and Zeyu Guo


Abstract
One important question in algebraic complexity is understanding the complexity of polynomial ideals (Grochow, Bulletin of EATCS 131, 2020). Andrews and Forbes (STOC 2022) studied the determinantal ideals I^{det}_{n,m,r} generated by the r × r minors of n × m matrices. Over fields of characteristic zero or of sufficiently large characteristic, they showed that for any nonzero f ∈ I^{det}_{n,m,r}, the determinant of a t × t matrix of variables with t = Θ(r^{1/3}) is approximately computed by a constant-depth, polynomial-size f-oracle algebraic circuit, in the sense that the determinant lies in the border of such circuits. An analogous result was also obtained for Pfaffians in the same paper. In this work, we deborder the result of Andrews and Forbes by showing that when f has polynomial degree, the determinant is in fact exactly computed by a constant-depth, polynomial-size f-oracle algebraic circuit. We further establish an analogous result for Pfaffian ideals. Our results are established using the isolation lemma, combined with a careful analysis of straightening-law expansions of polynomials in determinantal and Pfaffian ideals.

Cite as

Anakin Dey and Zeyu Guo. Debordering Closure Results in Determinantal and Pfaffian Ideals. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 49:1-49:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{dey_et_al:LIPIcs.ITCS.2026.49,
  author =	{Dey, Anakin and Guo, Zeyu},
  title =	{{Debordering Closure Results in Determinantal and Pfaffian Ideals}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{49:1--49:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.49},
  URN =		{urn:nbn:de:0030-drops-253363},
  doi =		{10.4230/LIPIcs.ITCS.2026.49},
  annote =	{Keywords: Algebraic circuit complexity, Isolation lemma, Debordering}
}
Document
On the PTAS Complexity of Multidimensional Knapsack

Authors: Ilan Doron-Arad, Ariel Kulik, and Pasin Manurangsi


Abstract
We study the d-dimensional knapsack problem. We are given a set of items, each with a d-dimensional cost vector and a profit, along with a d-dimensional budget vector. The goal is to select a set of items that does not exceed the budget in any dimension and maximizes the total profit. A polynomial-time approximation scheme (PTAS) with running time n^{Θ(d/ε)} has long been known for this problem, where ε is the error parameter and n is the encoding size. Despite decades of active research, the best running time of a PTAS has remained O(n^{⌈d/ε⌉ - d}). Unfortunately, existing lower bounds only cover the special case of two dimensions (d = 2), and do not answer whether there is an n^{o(d/ε)}-time PTAS for larger values of d. In this work, we show that the running time of the best-known PTAS cannot be improved, up to a polylogarithmic factor, assuming the Exponential Time Hypothesis (ETH). Our techniques are based on a robust reduction from 2-CSP, which embeds 2-CSP constraints into a desired number of dimensions. Then, using a recent result of [Bafna, Karthik, and Minzer, STOC'25], we succeed in exhibiting a tight trade-off between d and ε for all regimes of the parameters, assuming d is sufficiently large. Informally, our result also shows that under ETH, for any function f there is no f(d/ε) ⋅ n^{õ(d/ε)}-time (1-ε)-approximation for d-dimensional knapsack, where n is the number of items and õ hides polylogarithmic factors in d/ε.

Cite as

Ilan Doron-Arad, Ariel Kulik, and Pasin Manurangsi. On the PTAS Complexity of Multidimensional Knapsack. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 50:1-50:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{doronarad_et_al:LIPIcs.ITCS.2026.50,
  author =	{Doron-Arad, Ilan and Kulik, Ariel and Manurangsi, Pasin},
  title =	{{On the PTAS Complexity of Multidimensional Knapsack}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{50:1--50:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.50},
  URN =		{urn:nbn:de:0030-drops-253377},
  doi =		{10.4230/LIPIcs.ITCS.2026.50},
  annote =	{Keywords: d-dimensional Knapsack, Multidimensional Knapsack, PTAS, CSP}
}
Document
Near-Optimal Sparsifiers for Stochastic Knapsack and Assignment Problems

Authors: Shaddin Dughmi, Yusuf Hakan Kalayci, and Xinyu Liu


Abstract
When uncertainty meets costly information gathering, a fundamental question emerges: which data points should we probe to unlock near-optimal solutions? Sparsification of stochastic packing problems addresses this trade-off. The existing notions of sparsification measure the level of sparsity, called degree, as the ratio of queried items to the optimal solution size. While effective for matching and matroid-type problems with uniform structures, this cardinality-based approach fails for knapsack-type constraints where feasible sets exhibit dramatic structural variation. We introduce a polyhedral sparsification framework that measures the degree as the smallest scalar needed to embed the query set within a scaled feasibility polytope, naturally capturing redundancy without relying on cardinality. Our main contribution establishes that knapsack, multiple knapsack, and generalized assignment problems admit (1-ε)-approximate sparsifiers with degree polynomial in 1/p and 1/ε, where p denotes the independent activation probability of each element, remarkably independent of problem dimensions. The key insight involves grouping items with similar weights and deploying a charging argument: when our query set misses an optimal item, we either substitute it directly with a queried item from the same group or leverage that group’s excess contribution to compensate for the loss. This reveals an intriguing complexity-theoretic separation: while the multiple knapsack problem lacks an FPTAS and generalized assignment is APX-hard, their sparsification counterparts admit efficient (1-ε)-approximation algorithms that identify polynomial degree query sets. Finally, we raise an open question: can such sparsification extend to general integer linear programs with degree independent of problem dimensions?

Cite as

Shaddin Dughmi, Yusuf Hakan Kalayci, and Xinyu Liu. Near-Optimal Sparsifiers for Stochastic Knapsack and Assignment Problems. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 51:1-51:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{dughmi_et_al:LIPIcs.ITCS.2026.51,
  author =	{Dughmi, Shaddin and Kalayci, Yusuf Hakan and Liu, Xinyu},
  title =	{{Near-Optimal Sparsifiers for Stochastic Knapsack and Assignment Problems}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{51:1--51:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.51},
  URN =		{urn:nbn:de:0030-drops-253386},
  doi =		{10.4230/LIPIcs.ITCS.2026.51},
  annote =	{Keywords: Packing Problems, Assignment Problems, Stochastic Selection, Sparsification}
}
Document
Diffie-Hellman Key Exchange from Commutativity to Group Laws

Authors: Dung Hoang Duong, Youming Qiao, and Chuanqi Zhang


Abstract
In Diffie-Hellman key exchange, the commutativity of power operations is instrumental in the agreement of keys. Viewing commutativity as a law in abelian groups, we propose Diffie-Hellman key exchange in the group action framework (Brassard-Yung, Crypto'90; Ji-Qiao-Song-Yun, TCC'19), for actions of non-abelian groups with laws. The security of this protocol is shown, following Fischlin, Günther, Schmidt, and Warinschi (IEEE S&P'16), based on a pseudorandom group action assumption. A concrete instantiation is proposed based on the monomial code equivalence problem.
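The commutativity the abstract refers to is the textbook Diffie-Hellman identity (g^a)^b = (g^b)^a, shown here over Z_p^* with toy parameters (standard background, not the paper's non-abelian group-action protocol):

```python
# Toy parameters: a small prime modulus and generator, for illustration only.
p, g = 467, 2
a, b = 153, 197            # Alice's and Bob's secret exponents

A = pow(g, a, p)           # Alice publishes g^a mod p
B = pow(g, b, p)           # Bob publishes g^b mod p

key_alice = pow(B, a, p)   # Alice computes (g^b)^a
key_bob = pow(A, b, p)     # Bob computes (g^a)^b

# Commutativity of exponentiation makes the two keys agree.
print(key_alice == key_bob)   # True
```

The paper's point of departure is replacing this commutative law with more general laws in non-abelian groups acting on a set.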

Cite as

Dung Hoang Duong, Youming Qiao, and Chuanqi Zhang. Diffie-Hellman Key Exchange from Commutativity to Group Laws. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 52:1-52:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{duong_et_al:LIPIcs.ITCS.2026.52,
  author =	{Duong, Dung Hoang and Qiao, Youming and Zhang, Chuanqi},
  title =	{{Diffie-Hellman Key Exchange from Commutativity to Group Laws}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{52:1--52:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.52},
  URN =		{urn:nbn:de:0030-drops-253396},
  doi =		{10.4230/LIPIcs.ITCS.2026.52},
  annote =	{Keywords: Diffie-Hellman, Key Exchange, Group Laws, Group Actions, Code Equivalence}
}
Document
Average Sensitivity of Geometric Algorithms

Authors: Matthijs Ebbens and Yuichi Yoshida


Abstract
In modern applications of geometric algorithms, it is often unrealistic to assume that the input representation fully captures all relevant aspects of the problem, because the input data is often large and dynamic. To address this challenge, we consider the notion of average sensitivity, which is defined as the average earth mover’s distance between the output distributions of the algorithm when run on an input and the same input with one point removed, where the average is over removed points and the distance between two outputs is measured using the symmetric difference size. We start by showing that a number of classical problems from computational geometry, in particular the convex hull, Delaunay triangulation, and Voronoi diagram problems, are "simple" from the viewpoint of average sensitivity by proving tight bounds for the average sensitivity of any algorithm for these problems. Then, we continue by constructing an algorithm with low average sensitivity that computes, for any ε > 0, a set of (1/3+ε)n guards for the art gallery problem. This is the main technical contribution of this work, which combines algorithms from computational geometry with results from the theory of local computation algorithms (LCAs) and property testing.

Cite as

Matthijs Ebbens and Yuichi Yoshida. Average Sensitivity of Geometric Algorithms. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 53:1-53:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{ebbens_et_al:LIPIcs.ITCS.2026.53,
  author =	{Ebbens, Matthijs and Yoshida, Yuichi},
  title =	{{Average Sensitivity of Geometric Algorithms}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{53:1--53:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.53},
  URN =		{urn:nbn:de:0030-drops-253409},
  doi =		{10.4230/LIPIcs.ITCS.2026.53},
  annote =	{Keywords: Average Sensitivity, Convex Hull, Delaunay Triangulation, Voronoi Diagram, Art Gallery}
}
Document
Testable Algorithms for Approximately Counting Edges and Triangles in Sublinear Time and Space

Authors: Talya Eden, Ronitt Rubinfeld, and Arsen Vasilyan


Abstract
We consider the fundamental problems of approximately counting the numbers of edges and triangles in a graph in sublinear time. Previous algorithms for these tasks are significantly more efficient under a promise that the arboricity of the graph is bounded by some parameter ᾱ. However, when this promise is violated, the estimates given by these algorithms are no longer guaranteed to be correct. For the triangle counting task, we give an algorithm that requires no promise on the input graph G, and computes a (1±ε)-approximation for the number of triangles t in G in time O^*((m⋅α(G))/t + m/t^{2/3}), where α(G) is the arboricity of the graph. The algorithm can be used on any graph G (no prior knowledge of the arboricity α(G) is required), and the algorithm adapts its run-time on the fly based on the graph G. We accomplish this by trying a sequence of candidate values α̃ for α(G) and using a novel algorithm in the framework of testable algorithms. This ensures that wrong candidates α̃ cannot lead to wrong estimates: if the advice is incorrect, the algorithm either succeeds despite this or detects this and continues with a new candidate. Once the algorithm accepts the candidate, its output is guaranteed to be correct with high probability. We prove that this approach preserves - up to an additive overhead - the dramatic efficiency gains obtainable when good arboricity bounds are known in advance, while ensuring robustness against misleading advice. We further complement this result with a lower bound, showing that such an overhead is unavoidable whenever the advice may be faulty. We further demonstrate implications of our results for triangle counting in the streaming model.

Cite as

Talya Eden, Ronitt Rubinfeld, and Arsen Vasilyan. Testable Algorithms for Approximately Counting Edges and Triangles in Sublinear Time and Space. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 54:1-54:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{eden_et_al:LIPIcs.ITCS.2026.54,
  author =	{Eden, Talya and Rubinfeld, Ronitt and Vasilyan, Arsen},
  title =	{{Testable Algorithms for Approximately Counting Edges and Triangles in Sublinear Time and Space}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{54:1--54:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.54},
  URN =		{urn:nbn:de:0030-drops-253417},
  doi =		{10.4230/LIPIcs.ITCS.2026.54},
  annote =	{Keywords: Sublinear Algorithms, Triangle Counting, Edge Counting, Arboricity}
}
Document
Universally Optimal Streaming Algorithm for Random Walks in Dense Graphs

Authors: Klim Efremenko, Gillat Kol, Raghuvansh R. Saxena, and Zhijun Zhang


Abstract
Sampling a random walk is a fundamental primitive in many graph applications. In the streaming model, it is known that sampling an L-step random walk on an n-vertex directed graph requires Ω(n L) space, implying that no sublinear-space streaming algorithm exists for general graphs. We show that sublinear algorithms are possible for the case of dense graphs, where every vertex has out-degree at least Ω(n). In particular, we give a one-pass turnstile streaming algorithm that uses only 𝒪̃(L) memory for such graphs. More broadly, for graphs with minimum out-degree at least d, our streaming algorithm samples a random walk using 𝒪̃(n/d ⋅ L) memory. We show that our algorithm is optimal in a strong "beyond worst-case" sense. To formalize this, we introduce the notion of universal optimality for graph streaming algorithms. Informally, a streaming algorithm is universally optimal if it performs (almost) as well as possible on every graph, assuming a worst-case choice of the streaming order. This notion of universal optimality is a key conceptual contribution of our work.

Cite as

Klim Efremenko, Gillat Kol, Raghuvansh R. Saxena, and Zhijun Zhang. Universally Optimal Streaming Algorithm for Random Walks in Dense Graphs. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 55:1-55:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{efremenko_et_al:LIPIcs.ITCS.2026.55,
  author =	{Efremenko, Klim and Kol, Gillat and Saxena, Raghuvansh R. and Zhang, Zhijun},
  title =	{{Universally Optimal Streaming Algorithm for Random Walks in Dense Graphs}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{55:1--55:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.55},
  URN =		{urn:nbn:de:0030-drops-253423},
  doi =		{10.4230/LIPIcs.ITCS.2026.55},
  annote =	{Keywords: Random Walk, Streaming Algorithm, Universal Optimality}
}
Document
The Hardness of Learning Quantum Circuits and Its Cryptographic Applications

Authors: Bill Fefferman, Soumik Ghosh, Makrand Sinha, and Henry Yuen


Abstract
We show that concrete hardness assumptions about learning or cloning the output state of a random quantum circuit can be used as the foundation for secure quantum cryptography. In particular, under these assumptions we construct secure one-way state generators (OWSGs), digital signature schemes, quantum bit commitments, and private key encryption schemes. We also discuss evidence for these hardness assumptions by analyzing the best-known quantum learning algorithms, as well as proving black-box lower bounds for cloning and learning given state preparation oracles. Our random circuit-based constructions provide concrete instantiations of quantum cryptographic primitives whose security does not depend on the existence of one-way functions. The use of random circuits in our constructions also opens the door to NISQ-friendly quantum cryptography. We discuss noise-tolerant versions of our OWSG and digital signature constructions which could potentially be implemented on noisy quantum computers connected by a quantum network. On the other hand, they are still secure against noiseless quantum adversaries, raising the intriguing possibility of a useful implementation of an end-to-end cryptographic protocol on near-term quantum computers. Finally, our explorations suggest that the rich interconnections between learning theory and cryptography in classical theoretical computer science also extend to the quantum setting.

Cite as

Bill Fefferman, Soumik Ghosh, Makrand Sinha, and Henry Yuen. The Hardness of Learning Quantum Circuits and Its Cryptographic Applications. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 56:1-56:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{fefferman_et_al:LIPIcs.ITCS.2026.56,
  author =	{Fefferman, Bill and Ghosh, Soumik and Sinha, Makrand and Yuen, Henry},
  title =	{{The Hardness of Learning Quantum Circuits and Its Cryptographic Applications}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{56:1--56:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.56},
  URN =		{urn:nbn:de:0030-drops-253431},
  doi =		{10.4230/LIPIcs.ITCS.2026.56},
  annote =	{Keywords: quantum learning, quantum circuits, cryptographic hardness, one-way state generators}
}
Document
Anti-Concentration for the Unitary Haar Measure and Applications to Random Quantum Circuits

Authors: Bill Fefferman, Soumik Ghosh, and Wei Zhan


Abstract
We prove a Carbery-Wright style anti-concentration inequality for the unitary Haar measure, by showing that the probability of a polynomial in the entries of a random unitary falling into an ε range is at most a polynomial in ε. Using it, we show that the scrambling speed of a random quantum circuit is lower bounded: Namely, every input qubit has an influence that is at least inverse exponential in depth, on any output qubit touched by its lightcone. Our result on scrambling speed works with high probability over the choice of a circuit from an ensemble, as opposed to just working in expectation. As an application, we give the first polynomial-time algorithm for learning log-depth random quantum circuits with Haar random gates up to polynomially small diamond distance, given oracle access to the circuit. Other applications of this new scrambling speed lower bound include: - An optimal Ω(log ε^{-1}) depth lower bound for ε-approximate unitary designs on any circuit architecture; - A polynomial-time quantum algorithm that computes the depth of a bounded-depth circuit, given oracle access to the circuit. Our learning and depth-testing algorithms apply to architectures defined over any geometric dimension, and can be generalized to a wide class of architectures with good lightcone properties.

Cite as

Bill Fefferman, Soumik Ghosh, and Wei Zhan. Anti-Concentration for the Unitary Haar Measure and Applications to Random Quantum Circuits. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 57:1-57:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{fefferman_et_al:LIPIcs.ITCS.2026.57,
  author =	{Fefferman, Bill and Ghosh, Soumik and Zhan, Wei},
  title =	{{Anti-Concentration for the Unitary Haar Measure and Applications to Random Quantum Circuits}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{57:1--57:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.57},
  URN =		{urn:nbn:de:0030-drops-253443},
  doi =		{10.4230/LIPIcs.ITCS.2026.57},
  annote =	{Keywords: Haar measure, anti-concentration, random quantum circuit, learning}
}
Document
One Action Too Many: Inapproximability of Budgeted Combinatorial Contracts

Authors: Michal Feldman, Yoav Gal-Tzur, Tomasz Ponitka, and Maya Schlesinger


Abstract
We study multi-agent contract design with combinatorial actions, under budget constraints, and for a broad class of objective functions, including profit (principal’s utility), reward, and welfare. Our first result is a strong impossibility: For submodular reward functions, no randomized poly-time algorithm can approximate the optimal budget-feasible value within any finite factor, even with demand-oracle access. This result rules out extending known constant-factor guarantees from either (i) unbudgeted settings with combinatorial actions or (ii) budgeted settings with binary actions, to their combination. The hardness is tight: It holds even when all but one agent have binary actions and the remaining agent has just one additional action. On the positive side, we show that gross substitutes rewards (a well-studied strict subclass of submodular functions) admit a deterministic poly-time O(1)-approximation, using only value queries. Our results thus draw the first sharp separation between budgeted and unbudgeted settings in combinatorial contracts, and identify gross substitutes as a tractable frontier for budgeted combinatorial contracts. Finally, we present an FPTAS for additive rewards, demonstrating that arbitrarily accurate approximation is tractable under any budget. This constitutes the first FPTAS for the multi-agent combinatorial-actions setting, even in the absence of budget constraints.

Cite as

Michal Feldman, Yoav Gal-Tzur, Tomasz Ponitka, and Maya Schlesinger. One Action Too Many: Inapproximability of Budgeted Combinatorial Contracts. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 58:1-58:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{feldman_et_al:LIPIcs.ITCS.2026.58,
  author =	{Feldman, Michal and Gal-Tzur, Yoav and Ponitka, Tomasz and Schlesinger, Maya},
  title =	{{One Action Too Many: Inapproximability of Budgeted Combinatorial Contracts}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{58:1--58:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.58},
  URN =		{urn:nbn:de:0030-drops-253459},
  doi =		{10.4230/LIPIcs.ITCS.2026.58},
  annote =	{Keywords: Combinatorial Contracts, Algorithmic Contract Design, Budget-Feasible Contracts}
}
Document
On Approximating the f-Divergence Between Two Ising Models

Authors: Weiming Feng and Yucheng Fu


Abstract
The f-divergence is a fundamental notion that measures the difference between two distributions. In this paper, we study the problem of approximating the f-divergence between two Ising models, which is a generalization of recent work on approximating the TV-distance. Given two Ising models ν and μ, which are specified by their interaction matrices and external fields, the problem is to approximate the f-divergence D_f (ν ‖ μ) within an arbitrary relative error e^{±ε}. For χ^α-divergence with a constant integer α, we establish both algorithmic and hardness results. The algorithm works in a parameter regime that matches the hardness result. Our algorithm can be extended to other f-divergences such as α-divergence, Kullback-Leibler divergence, Rényi divergence, Jensen-Shannon divergence, and squared Hellinger distance.
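For intuition, over a finite domain with full support the f-divergence is D_f(ν ‖ μ) = Σ_x μ(x)·f(ν(x)/μ(x)); taking f(t) = (t-1)² gives the χ²-divergence and f(t) = t·log t gives the Kullback-Leibler divergence. A tiny numeric sketch on toy two-point distributions (not Ising models, which the paper accesses only through their interaction matrices and external fields):

```python
import math

def f_divergence(nu, mu, f):
    """D_f(nu || mu) = sum_x mu(x) * f(nu(x)/mu(x)), for distributions
    given as probability vectors with full support on a finite domain."""
    return sum(m * f(n / m) for n, m in zip(nu, mu))

chi2 = lambda t: (t - 1.0) ** 2      # chi^2-divergence (the alpha = 2 case)
kl = lambda t: t * math.log(t)       # Kullback-Leibler divergence

nu, mu = [0.5, 0.5], [0.25, 0.75]
print(f_divergence(nu, mu, chi2))    # 0.25*(2-1)^2 + 0.75*(2/3-1)^2 = 1/3
print(f_divergence(nu, mu, kl))      # 0.5*ln 2 + 0.5*ln(2/3) = 0.5*ln(4/3)
```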

Cite as

Weiming Feng and Yucheng Fu. On Approximating the f-Divergence Between Two Ising Models. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 59:1-59:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{feng_et_al:LIPIcs.ITCS.2026.59,
  author =	{Feng, Weiming and Fu, Yucheng},
  title =	{{On Approximating the f-Divergence Between Two Ising Models}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{59:1--59:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.59},
  URN =		{urn:nbn:de:0030-drops-253469},
  doi =		{10.4230/LIPIcs.ITCS.2026.59},
  annote =	{Keywords: Ising model, f-divergence, approximation algorithms, randomized algorithms}
}
Document
Total Search Problems in ZPP

Authors: Noah Fleming, Stefan Grosser, Siddhartha Jain, Jiawei Li, Hanlin Ren, Morgan Shirley, and Weiqiang Yuan


Abstract
We initiate a systematic study of TFZPP, the class of total NP search problems solvable by polynomial time randomized algorithms. TFZPP contains a variety of important search problems such as Bertrand-Chebyshev (finding a prime between N and 2N), refuter problems for many circuit lower bounds, and Lossy-Code. The Lossy-Code problem has found prominence due to its fundamental connections to derandomization, catalytic computing, and the metamathematics of complexity theory, among other areas. While TFZPP collapses to FP under standard derandomization assumptions in the white-box setting, we are able to separate TFZPP from the major TFNP subclasses in the black-box setting. In fact, we are able to separate it from every uniform TFNP class assuming that NP is not in quasi-polynomial time. To do so, we extend the connection between proof complexity and black-box TFNP to randomized proof systems and randomized reductions. Next, we turn to developing a taxonomy of TFZPP problems. We highlight a problem called Nephew, originating from an infinity axiom in set theory. We show that Nephew is in PWPP ∩ TFZPP and conjecture that it is not reducible to Lossy-Code. Intriguingly, except for some artificial examples, most other black-box TFZPP problems that we are aware of reduce to Lossy-Code: - We define a problem called Empty-Child capturing finding a leaf in a rooted (binary) tree, and show that this problem is equivalent to Lossy-Code. We also show that a variant of Empty-Child with "heights" is complete for the intersection of SOPL and Lossy-Code. - We strengthen Lossy-Code with several combinatorial inequalities such as the AM-GM inequality. Somewhat surprisingly, we show the resulting new problems are still reducible to Lossy-Code. A technical highlight of this result is that these reductions are proved by formalizations in bounded arithmetic, specifically in Jeřábek’s theory APC₁ (JSL 2007). - Finally, we show that the Dense-Linear-Ordering problem reduces to Lossy-Code.

Cite as

Noah Fleming, Stefan Grosser, Siddhartha Jain, Jiawei Li, Hanlin Ren, Morgan Shirley, and Weiqiang Yuan. Total Search Problems in ZPP. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 60:1-60:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{fleming_et_al:LIPIcs.ITCS.2026.60,
  author =	{Fleming, Noah and Grosser, Stefan and Jain, Siddhartha and Li, Jiawei and Ren, Hanlin and Shirley, Morgan and Yuan, Weiqiang},
  title =	{{Total Search Problems in ZPP}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{60:1--60:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.60},
  URN =		{urn:nbn:de:0030-drops-253473},
  doi =		{10.4230/LIPIcs.ITCS.2026.60},
  annote =	{Keywords: TFNP, lossy code, randomized proof systems, query complexity}
}
Document
Random Unitaries in Constant (Quantum) Time

Authors: Ben Foxman, Natalie Parham, Francisca Vasconcelos, and Henry Yuen


Abstract
Random unitaries are a central object of study in quantum information, with applications to quantum computation, quantum many-body physics, and quantum cryptography. Recent work has constructed unitary designs and pseudorandom unitaries (PRUs) using Θ(log log n)-depth unitary circuits with two-qubit gates. In this work, we show that unitary designs and PRUs can be efficiently constructed in several well-studied models of constant-time quantum computation (i.e., the time complexity on the quantum computer is independent of the system size). These models are constant-depth circuits augmented with certain nonlocal operations, such as (a) many-qubit TOFFOLI gates, (b) many-qubit FANOUT gates, or (c) mid-circuit measurements with classical feedforward control. Recent advances in quantum computing hardware suggest experimental feasibility of these models in the near future. Our results demonstrate that unitary designs and PRUs can be constructed in much weaker circuit models than previously thought. Furthermore, our construction of PRUs in constant-depth with many-qubit TOFFOLI gates shows that, under cryptographic assumptions, there is no polynomial-time learning algorithm for the circuit class QAC⁰. Finally, our results suggest a new approach towards proving that PARITY is not computable in QAC⁰, a long-standing question in quantum complexity theory.

Cite as

Ben Foxman, Natalie Parham, Francisca Vasconcelos, and Henry Yuen. Random Unitaries in Constant (Quantum) Time. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 61:1-61:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{foxman_et_al:LIPIcs.ITCS.2026.61,
  author =	{Foxman, Ben and Parham, Natalie and Vasconcelos, Francisca and Yuen, Henry},
  title =	{{Random Unitaries in Constant (Quantum) Time}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{61:1--61:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.61},
  URN =		{urn:nbn:de:0030-drops-253481},
  doi =		{10.4230/LIPIcs.ITCS.2026.61},
  annote =	{Keywords: Quantum Information, Pseudorandomness, Circuit Complexity}
}
Document
Improved Rate for Non-Malleable Codes and Time-Lock Puzzles

Authors: Cody Freitag, Ilan Komargodski, Manu Kondapaneni, and Jad Silbak


Abstract
Non-malleable codes allow a sender to transmit a message to a receiver, while providing a "best-possible" integrity guarantee to ensure that no attacker - who cannot already decode the message - can meaningfully tamper with the message in transit. If tampered with, the received message should either be invalid or unrelated to the original message. Non-malleable time-lock puzzles (TLPs) are a special case of non-malleable codes for bounded polynomial-depth tampering with very efficient encoding. In this work, we give generic techniques for constructing non-malleable codes and non-malleable TLPs with improved rate, which captures the ratio of a message’s length to its encoding length. A key contribution of our work is identifying a security notion for non-malleability, which we term "CCA-hiding", sufficient for our compilers. CCA-hiding is a relaxation of CCA-security for encryption or commitments to the fine-grained setting of codes, and requires that the encoded message remains hidden, even given a decoding oracle for any other codeword. Intriguingly, CCA-hiding does not imply non-malleability in the fine-grained setting, as is the case for encryption and commitments. Using our new techniques, we give the following constructions: - Rate-1 CCA-hiding TLPs in the plain model. - Rate-1 non-malleable codes for bounded polynomial-depth tampering in the auxiliary-input random oracle model (AI-ROM). - Rate-(1/2) non-malleable TLPs in the AI-ROM.

Cite as

Cody Freitag, Ilan Komargodski, Manu Kondapaneni, and Jad Silbak. Improved Rate for Non-Malleable Codes and Time-Lock Puzzles. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 62:1-62:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{freitag_et_al:LIPIcs.ITCS.2026.62,
  author =	{Freitag, Cody and Komargodski, Ilan and Kondapaneni, Manu and Silbak, Jad},
  title =	{{Improved Rate for Non-Malleable Codes and Time-Lock Puzzles}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{62:1--62:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.62},
  URN =		{urn:nbn:de:0030-drops-253490},
  doi =		{10.4230/LIPIcs.ITCS.2026.62},
  annote =	{Keywords: Non-malleable codes, Time-lock puzzles}
}
Document
Perfect Simulation of Las Vegas Algorithms via Local Computation

Authors: Xinyu Fu, Yonggang Jiang, and Yitong Yin


Abstract
The notion of Las Vegas algorithms was introduced by Babai (1979) and can be defined in two ways: - In Babai’s original definition, a randomized algorithm is called Las Vegas if it has a finitely bounded running time and certifiable random failure. - Another definition widely accepted today is that Las Vegas algorithms refer to zero-error randomized algorithms with random running times. The equivalence between the two definitions is straightforward. Specifically, for randomized algorithms with certifiable failures, repeatedly running the algorithm until no failure is encountered allows for faithful simulation of the correct output when it executes successfully. We show that a similar perfect simulation can also be achieved in distributed local computation. Specifically, in the LOCAL model, with a polylogarithmic overhead in time complexity, any Las Vegas algorithm with finitely bounded running time and locally certifiable failures can be converted to a zero-error Las Vegas algorithm. This transformed algorithm faithfully reproduces the correct output of the original algorithm in successful executions. This is achieved by a reduction to a distributed sampling problem under the Lovász Local Lemma (LLL), where the objective is to sample from the joint distribution of random variables avoiding all bad events. We then design the first efficient algorithm to solve this sampling problem in the LOCAL model.
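The centralized repeat-until-success equivalence described above can be sketched in a few lines. This is an illustrative sketch of the classical (non-distributed) reduction only, with a made-up `las_vegas_step` interface; the paper's contribution is making an analogous simulation work in the LOCAL model.

```python
import random

def simulate_zero_error(las_vegas_step, rng):
    """Convert an algorithm with certifiable failure into a zero-error
    Las Vegas algorithm: rerun with fresh randomness until a run
    certifies success, then faithfully output that run's result.

    las_vegas_step(rng) is assumed to return (ok, output), where ok=True
    certifies that output is correct.
    """
    while True:
        ok, output = las_vegas_step(rng)
        if ok:
            return output

# Toy step: certifiably fails about half the time, otherwise outputs 42.
def step(rng):
    if rng.random() < 0.5:
        return False, None   # certified failure; retry
    return True, 42          # certified correct output

print(simulate_zero_error(step, random.Random(0)))  # prints 42
```

The expected number of repetitions is the inverse of the success probability, which is where the (here, polylogarithmic in the LOCAL setting) overhead comes from.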

Cite as

Xinyu Fu, Yonggang Jiang, and Yitong Yin. Perfect Simulation of Las Vegas Algorithms via Local Computation. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 63:1-63:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{fu_et_al:LIPIcs.ITCS.2026.63,
  author =	{Fu, Xinyu and Jiang, Yonggang and Yin, Yitong},
  title =	{{Perfect Simulation of Las Vegas Algorithms via Local Computation}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{63:1--63:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.63},
  URN =		{urn:nbn:de:0030-drops-253503},
  doi =		{10.4230/LIPIcs.ITCS.2026.63},
  annote =	{Keywords: Las Vegas algorithms, perfect simulation, Lov\'{a}sz Local Lemma, sampling}
}
Document
Optimal White-Box Adversarial Streaming Lower Bounds for Approximating LIS Length

Authors: Anna Gal, Gillat Kol, Raghuvansh R. Saxena, and Huacheng Yu


Abstract
The space complexity of deterministic streaming algorithms for approximating the length of the longest increasing subsequence (LIS) in a string of length n has been known to be Θ̃(√n) for almost two decades. In contrast, the space complexity of this problem for randomized streaming algorithms remains one of the few longstanding open problems in one-pass streaming. In fact, no better than Ω(log n) lower bounds are known, and the best upper bounds are no better than their deterministic counterparts. In this paper, we push the limits of our understanding of the streaming space complexity of the approximate LIS length problem by studying it in the white-box adversarial streaming model. This model is an intermediate model between deterministic and randomized streaming algorithms that has recently attracted attention. In the white-box model, the streaming algorithm can draw fresh randomness when processing each incoming element, but an adversary generating the stream observes all previously used randomness and adaptively chooses the subsequent elements of the stream. We prove a tight (up to logarithmic factors) Ω(√n) space lower bound for any white-box streaming algorithm that approximates the length of the LIS of a stream of length n to within a factor better than 1.1. Thus, for this problem, white-box algorithms offer no improvement over deterministic ones.

Cite as

Anna Gal, Gillat Kol, Raghuvansh R. Saxena, and Huacheng Yu. Optimal White-Box Adversarial Streaming Lower Bounds for Approximating LIS Length. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 64:1-64:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{gal_et_al:LIPIcs.ITCS.2026.64,
  author =	{Gal, Anna and Kol, Gillat and Saxena, Raghuvansh R. and Yu, Huacheng},
  title =	{{Optimal White-Box Adversarial Streaming Lower Bounds for Approximating LIS Length}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{64:1--64:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.64},
  URN =		{urn:nbn:de:0030-drops-253519},
  doi =		{10.4230/LIPIcs.ITCS.2026.64},
  annote =	{Keywords: White-box streaming, Longest increasing subsequence}
}
Document
Characterizing Off-Chain Influence Proof Transaction Fee Mechanisms

Authors: Aadityan Ganesh, Clayton Thomas, and S. Matthew Weinberg


Abstract
Roughgarden [Roughgarden, 2020] initiates the study of Transaction Fee Mechanisms (TFMs), and posits that the on-chain game of a "good" TFM should be on-chain simple (OnC-S), i.e., incentive compatible for both the users and the miner. Recent work of Ganesh, Thomas, and Weinberg [Ganesh et al., 2024] posits that they should additionally be Off-Chain Influence-Proof (OffC-IP), which means that the miner cannot achieve any additional revenue by separately conducting an off-chain auction to determine on-chain inclusion. They observe that a cryptographic second-price auction satisfies both properties, but leave open the question of whether other mechanisms (such as those not dependent on cryptography) satisfy these properties. In this paper, we characterize OffC-IP TFMs: They are those satisfying a burn identity relating the burn rule to the allocation rule. In particular, we show that an auction is OffC-IP if and only if its (induced direct-revelation) allocation rule X̄(⋅) and burn rule B̅(⋅) (both of which take as input users' values v₁, … , v_n) are truthful when viewing (X̄(⋅), B̅(⋅)) as the allocation and pricing rule of a multi-item auction for a single additive buyer with values (φ(v₁),…, φ(v_n)) equal to the users' virtual values. Building on this burn identity, we characterize OffC-IP and OnC-S TFMs that are deterministic and do not use cryptography: They are posted-price mechanisms with specially-tuned burns. As a corollary, we show that such TFMs can only exist with infinite supply and prior-dependence. However, we show that for randomized TFMs, there are additional OnC-S and OffC-IP auctions that do not use cryptography (even when there is finite supply, under prior-dependence with a bounded prior distribution). Holistically, our results show that while OffC-IP is a fairly stringent requirement, families of OffC-IP mechanisms can be found for a variety of settings.

Cite as

Aadityan Ganesh, Clayton Thomas, and S. Matthew Weinberg. Characterizing Off-Chain Influence Proof Transaction Fee Mechanisms. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 65:1-65:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{ganesh_et_al:LIPIcs.ITCS.2026.65,
  author =	{Ganesh, Aadityan and Thomas, Clayton and Weinberg, S. Matthew},
  title =	{{Characterizing Off-Chain Influence Proof Transaction Fee Mechanisms}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{65:1--65:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.65},
  URN =		{urn:nbn:de:0030-drops-253527},
  doi =		{10.4230/LIPIcs.ITCS.2026.65},
  annote =	{Keywords: Transaction Fee Mechanism Design, Off-Chain Influence Proofness, Blockchain, Decentralized Finance, Simple Auctions}
}
Document
A Parameterized-Complexity Framework for Finding Local Optima

Authors: Robert Ganian, Hung P. Hoang, Christian Komusiewicz, and Nils Morawietz


Abstract
Local search is a fundamental optimization technique that is both widely used in practice and deeply studied in theory, yet its computational complexity remains poorly understood. The traditional frameworks, PLS and the standard algorithm problem, introduced by Johnson, Papadimitriou, and Yannakakis (1988), fail to capture the methodology of local search algorithms: PLS is concerned with finding a local optimum and not with using local search, while the standard algorithm problem restricts each improvement step to follow a fixed pivoting rule. In this work, we introduce a novel formulation of local search which provides a middle ground between these models. In particular, the task is to output not only a local optimum but also a chain of local improvements leading to it. With this framework, we aim to capture the challenge in designing a good pivoting rule. In particular, when combined with the parameterized complexity paradigm, it enables both strong lower bounds and meaningful tractability results. Unlike previous works that combined parameterized complexity with local search, our framework targets the whole task of finding a local optimum and not only a single improvement step. Focusing on two representative meta-problems - Subset Weight Optimization Problem with the c-swap neighborhood and Weighted Circuit with the flip neighborhood - we establish fixed-parameter tractability results related to the number of distinct weights, while ruling out an analogous result when parameterizing by the distance to the nearest optimum via a new type of reduction.
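The required output format (a local optimum together with the chain of improvements that reaches it) can be sketched generically. This is an illustrative sketch with a hypothetical `neighbors`/`cost` interface and one concrete pivoting rule (take the first improving neighbor); the framework's point is that the algorithm is free to choose any such rule.

```python
def local_search_with_chain(x0, neighbors, cost):
    """Return (local_optimum, chain), where chain is the sequence of
    solutions visited, each strictly improving on its predecessor.
    The pivoting rule here: move to the first improving neighbor found."""
    chain = [x0]
    x = x0
    improved = True
    while improved:
        improved = False
        for y in neighbors(x):
            if cost(y) < cost(x):
                x = y
                chain.append(y)
                improved = True
                break  # one concrete pivoting rule among many possible
    return x, chain   # x has no improving neighbor: a certified local optimum

# Toy landscape on the integers: cost |v - 7|, neighborhood {v-1, v+1}.
opt, chain = local_search_with_chain(3, lambda v: [v - 1, v + 1],
                                     lambda v: abs(v - 7))
print(opt, chain)  # 7 [3, 4, 5, 6, 7]
```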

Cite as

Robert Ganian, Hung P. Hoang, Christian Komusiewicz, and Nils Morawietz. A Parameterized-Complexity Framework for Finding Local Optima. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 66:1-66:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{ganian_et_al:LIPIcs.ITCS.2026.66,
  author =	{Ganian, Robert and Hoang, Hung P. and Komusiewicz, Christian and Morawietz, Nils},
  title =	{{A Parameterized-Complexity Framework for Finding Local Optima}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{66:1--66:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.66},
  URN =		{urn:nbn:de:0030-drops-253532},
  doi =		{10.4230/LIPIcs.ITCS.2026.66},
  annote =	{Keywords: Local Search, Parameterized Complexity, PLS}
}
Document
Query Lower Bounds for Correlation Clustering Under Memory Constraints

Authors: Sumegha Garg, Songhua He, and Periklis A. Papakonstantinou


Abstract
This work initiates the study of memory–query tradeoffs for graph problems, with a focus on correlation clustering. Correlation clustering asks for a partition of the vertices that minimizes disagreements: non-edges inside clusters plus edges across clusters. Our first result is a tight query lower bound: to output a partition whose cost approximates the optimum up to an additive error of ε n², any algorithm requires Ω(n/ε²) adjacency-matrix queries. Under memory constraints, we show that even for the seemingly easier task of approximating the optimal clustering cost (without producing a partition), any algorithm in the random query model must make ≫ n/ε² adjacency-matrix queries. Finally, we prove the first query lower bound for correlation clustering in the general graph model, where algorithms are allowed adjacency-matrix, neighbor, and degree queries. The latter two bounds are not yet tight, leaving room for sharper results.

Cite as

Sumegha Garg, Songhua He, and Periklis A. Papakonstantinou. Query Lower Bounds for Correlation Clustering Under Memory Constraints. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 67:1-67:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.ITCS.2026.67,
  author =	{Garg, Sumegha and He, Songhua and Papakonstantinou, Periklis A.},
  title =	{{Query Lower Bounds for Correlation Clustering Under Memory Constraints}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{67:1--67:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.67},
  URN =		{urn:nbn:de:0030-drops-253542},
  doi =		{10.4230/LIPIcs.ITCS.2026.67},
  annote =	{Keywords: correlation clustering, query-space complexity, information theory}
}
Document
Fourier Sparsity of Delta Functions and Matching Vector PIRs

Authors: Fatemeh Ghasemi and Swastik Kopparty


Abstract
In this paper, we study a basic and natural question about Fourier analysis of Boolean functions, which has applications to the study of Matching Vector based Private Information Retrieval (PIR) schemes. For integers m,r, define a delta function on {0,1}^r ⊆ ℤ_m^r to be a function f: ℤ_m^r → ℂ such that f(0) = 1 and f(x) = 0 for all nonzero Boolean x. The basic question that we study is how small the Fourier sparsity of a delta function can be; namely, how sparse can such an f be in the Fourier basis? In addition to being intrinsically interesting and natural, such questions arise naturally while studying "S-decoding polynomials" for the known matching vector families. Finding S-decoding polynomials of reduced sparsity - which corresponds to finding delta functions with low Fourier sparsity - would improve the current best PIR schemes. We show nontrivial upper and lower bounds on the Fourier sparsity of delta functions. Our proofs are elementary and clean. These results imply limitations on improvements to the Matching Vector PIR schemes simply by finding better S-decoding polynomials. In particular, there are no S-decoding polynomials which can make Matching Vector PIRs based on the known matching vector families achieve polylogarithmic communication for constantly many servers. Many interesting questions remain open.

Cite as

Fatemeh Ghasemi and Swastik Kopparty. Fourier Sparsity of Delta Functions and Matching Vector PIRs. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 68:1-68:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{ghasemi_et_al:LIPIcs.ITCS.2026.68,
  author =	{Ghasemi, Fatemeh and Kopparty, Swastik},
  title =	{{Fourier Sparsity of Delta Functions and Matching Vector PIRs}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{68:1--68:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.68},
  URN =		{urn:nbn:de:0030-drops-253556},
  doi =		{10.4230/LIPIcs.ITCS.2026.68},
  annote =	{Keywords: Fourier Sparsity, Matching Vectors, Private Information Retrieval}
}
Document
Computing Equilibrium Points of Electrostatic Potentials

Authors: Abheek Ghosh, Paul W. Goldberg, and Alexandros Hollender


Abstract
We study the computation of equilibrium points of electrostatic potentials: locations in space where the electrostatic force arising from a collection of charged particles vanishes. This is a novel scenario of optimization in which solutions are guaranteed to exist due to a nonconstructive argument, but gradient descent is unreliable due to the presence of singularities. We present an algorithm based on piecewise approximation of the potential function by Taylor series. The main insight is to divide the domain into a grid with variable coarseness, where grid cells are exponentially smaller in regions where the function changes rapidly compared to regions where it changes slowly. Our algorithm finds approximate equilibrium points in time poly-logarithmic in the approximation parameter, but these points are not guaranteed to be close to exact solutions. Nevertheless, we show that such points can be computed efficiently under a mild assumption that we call "strong non-degeneracy". We complement these algorithmic results by studying a generalization of this problem and showing that it is CLS-hard and in PPAD, leaving its precise classification as an intriguing open problem.

Cite as

Abheek Ghosh, Paul W. Goldberg, and Alexandros Hollender. Computing Equilibrium Points of Electrostatic Potentials. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 69:1-69:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{ghosh_et_al:LIPIcs.ITCS.2026.69,
  author =	{Ghosh, Abheek and Goldberg, Paul W. and Hollender, Alexandros},
  title =	{{Computing Equilibrium Points of Electrostatic Potentials}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{69:1--69:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.69},
  URN =		{urn:nbn:de:0030-drops-253566},
  doi =		{10.4230/LIPIcs.ITCS.2026.69},
  annote =	{Keywords: Total search problems, TFNP, PPAD, CLS, polynomial equations}
}
Document
Unconditional Pseudorandomness Against Shallow Quantum Circuits

Authors: Soumik Ghosh, Sathyawageeswar Subramanian, and Wei Zhan


Abstract
Quantum computational pseudorandomness has emerged as a fundamental notion that spans connections to complexity theory, cryptography and fundamental physics. However, all known constructions of efficient quantum-secure pseudorandom objects rely on complexity-theoretic assumptions. In this work, we establish the first unconditionally secure efficient pseudorandom constructions against shallow-depth quantum circuit classes. We prove that:
- Any quantum state 2-design yields unconditional pseudorandomness against both QNC⁰ circuits with arbitrarily many ancillae and AC⁰∘QNC⁰ circuits with nearly linear ancillae.
- Random phased subspace states, where the phases are picked using a 4-wise independent function, are unconditionally pseudoentangled against the above circuit classes.
- Any unitary 2-design yields unconditionally secure parallel-query pseudorandom unitaries against geometrically local QNC⁰ adversaries, even with limited AC⁰ postprocessing.
These results stand in stark contrast to the standard guarantee of the 2-design property, which only ensures that such ensembles cannot be distinguished from Haar random ensembles using two copies or queries. Our work demonstrates that quantum computational pseudorandomness can be achieved unconditionally for natural classes of restricted adversaries, opening new directions in quantum complexity theory.

Cite as

Soumik Ghosh, Sathyawageeswar Subramanian, and Wei Zhan. Unconditional Pseudorandomness Against Shallow Quantum Circuits. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 70:1-70:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{ghosh_et_al:LIPIcs.ITCS.2026.70,
  author =	{Ghosh, Soumik and Subramanian, Sathyawageeswar and Zhan, Wei},
  title =	{{Unconditional Pseudorandomness Against Shallow Quantum Circuits}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{70:1--70:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.70},
  URN =		{urn:nbn:de:0030-drops-253578},
  doi =		{10.4230/LIPIcs.ITCS.2026.70},
  annote =	{Keywords: quantum pseudorandomness, shallow quantum circuits, pseudorandomness, t-designs}
}
Document
Lower Bounds on FSS from Dynamic Data Structures

Authors: Niv Gilboa and Daniel Weber


Abstract
In Function Secret Sharing (FSS), a dealer holding a function f: {0,1}ⁿ → 𝔾 from n bits to a commutative group 𝔾, with f in a function class ℱ, shares succinct keys with two properties: evaluating each key separately on a common input x yields additive shares of f(x), and any proper subset of the keys provides no information on f. Two-party FSS schemes that are reducible to One-Way Functions (OWF) have applications in cryptography, complexity, and practical data security systems. We establish a two-way transformation between a two-party FSS scheme for a function class ℱ that is black-box reducible to an OWF, or even black-box reducible to a family of Pseudo-Random Functions (PRF), and a dynamic data structure that supports range queries on ℱ. A data structure of this type enables dynamically adding functions to a multiset of functions F ⊆ ℱ and answering range queries on the output of F, i.e., returning ∑_{f ∈ F} f(x) for a query x. The data structures are defined in one of several models which abstract RAM. The correspondence, together with known lower bounds on the update time and the query time in data structures, leads to the first non-trivial lower bounds on FSS schemes that are black-box reducible to PRF. These lower bounds apply to FSS schemes with polynomial key size and include:
- For ℱ^d_{box}, the class of all functions that assign a constant group element β ∈ 𝔾 to any input in a specified d-dimensional box and 0 to all other inputs, assuming the key-sharing function Gen runs in time polynomial in n and the evaluation function is Eval:
  - If d ≥ 2 and 𝔾 = ℤ₂, then Eval’s running time is Ω(n^{3/2}/log³ n).
  - If d ≥ 2 and 𝔾 is cyclic with log |𝔾| = (1 + ε) n, then Eval’s running time is Ω((n/log n)²).
  - If d > 2 is a constant and, further, Gen and Eval correspond to operations on data structures in the Oblivious Group Model (this includes all known FSS-from-OWF techniques), then the product of Eval’s time and the key size is Ω(n^{d-1}).
- For ℱ_{mono}, the class of all monomials ax^b ∈ 𝔽_{2ⁿ}[X] with b ≤ B, assuming n^{ω(1)} ≤ B ≤ 2^{n/4}: if Gen runs in polynomial time, then Eval’s running time is Ω(n √{log B}/log² n).

Cite as

Niv Gilboa and Daniel Weber. Lower Bounds on FSS from Dynamic Data Structures. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 71:1-71:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{gilboa_et_al:LIPIcs.ITCS.2026.71,
  author =	{Gilboa, Niv and Weber, Daniel},
  title =	{{Lower Bounds on FSS from Dynamic Data Structures}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{71:1--71:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.71},
  URN =		{urn:nbn:de:0030-drops-253585},
  doi =		{10.4230/LIPIcs.ITCS.2026.71},
  annote =	{Keywords: FSS, Data Structures, Lower Bounds, Black-Box Reductions}
}
Document
Forrelation Is Extremally Hard

Authors: Uma Girish and Rocco Servedio


Abstract
The Forrelation problem is a central problem that demonstrates an exponential separation between quantum and classical capabilities. In this problem, given query access to n-bit Boolean functions f and g, the goal is to estimate the Forrelation function forr(f,g), which measures the correlation between g and the Fourier transform of f. In this work, we provide a new linear-algebraic perspective on the Forrelation problem, as opposed to prior analytic approaches. We establish a connection between the Forrelation problem and bent Boolean functions and, through this connection, analyze an extremal version of the Forrelation problem where the goal is to distinguish between extremal instances of Forrelation, namely (f,g) with forr(f,g) = 1 and forr(f,g) = -1. We show that this problem can be solved with one quantum query and success probability one, yet requires Ω̃(2^{n/4}) classical randomized queries, even for algorithms with a one-third failure probability, highlighting the remarkable power of one exact quantum query. We also study a restricted variant of this problem where the inputs f,g are computable by small classical circuits and show classical hardness under cryptographic assumptions.

Cite as

Uma Girish and Rocco Servedio. Forrelation Is Extremally Hard. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 72:1-72:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{girish_et_al:LIPIcs.ITCS.2026.72,
  author =	{Girish, Uma and Servedio, Rocco},
  title =	{{Forrelation Is Extremally Hard}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{72:1--72:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.72},
  URN =		{urn:nbn:de:0030-drops-253594},
  doi =		{10.4230/LIPIcs.ITCS.2026.72},
  annote =	{Keywords: Forrelation, exact quantum, query complexity}
}
Document
Quantum Advantage from Sampling Shallow Circuits: Beyond Hardness of Marginals

Authors: Daniel Grier, Daniel M. Kane, Jackson Morris, Anthony Ostuni, and Kewen Wu


Abstract
We construct a family of distributions {𝒟_n}_n with 𝒟_n over {0, 1}ⁿ and a family of depth-7 quantum circuits {C_n}_n such that 𝒟_n is produced exactly by C_n with the all zeros state as input, yet any constant-depth classical circuit with bounded fan-in gates evaluated on any binary product distribution has total variation distance 1 - e^{-Ω(n)} from 𝒟_n. Moreover, the quantum circuits we construct are geometrically local and use a relatively standard gate set: Hadamard, controlled-phase, CNOT, and Toffoli gates. All previous separations of this type suffer from some undesirable constraint on the classical circuit model or the quantum circuits witnessing the separation. Our family of distributions is inspired by the Parity Halving Problem of Watts, Kothari, Schaeffer, and Tal (STOC, 2019), which built on the work of Bravyi, Gosset, and König (Science, 2018) to separate shallow quantum and classical circuits for relational problems.

Cite as

Daniel Grier, Daniel M. Kane, Jackson Morris, Anthony Ostuni, and Kewen Wu. Quantum Advantage from Sampling Shallow Circuits: Beyond Hardness of Marginals. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 73:1-73:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{grier_et_al:LIPIcs.ITCS.2026.73,
  author =	{Grier, Daniel and Kane, Daniel M. and Morris, Jackson and Ostuni, Anthony and Wu, Kewen},
  title =	{{Quantum Advantage from Sampling Shallow Circuits: Beyond Hardness of Marginals}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{73:1--73:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.73},
  URN =		{urn:nbn:de:0030-drops-253607},
  doi =		{10.4230/LIPIcs.ITCS.2026.73},
  annote =	{Keywords: Shallow circuits, sampling, quantum circuits}
}
Document
Adversarially-Robust Gossip Algorithms for Approximate Quantile and Mean Computations

Authors: Bernhard Haeupler, Marc Kaufmann, Raghu Raman Ravi, and Ulysse Schaller


Abstract
This paper presents gossip algorithms for aggregation tasks that demonstrate both robustness to adversarial corruptions of any order of magnitude and optimality across a substantial range of these corruption levels. Gossip algorithms distribute information in a scalable and efficient way by having random pairs of nodes exchange small messages. Value aggregation problems are of particular interest in this setting, as they occur frequently in practice, and many elegant algorithms have been proposed for computing aggregates and statistics such as averages and quantiles. An important and well-studied advantage of gossip algorithms is their robustness to message delays, network churn, and unreliable message transmissions. However, these crucial robustness guarantees only hold if all nodes follow the protocol and no messages are corrupted. In this paper, we remedy this by providing a framework that models both adversarial participants and message corruptions in gossip-style communication, allowing an adversary to control a small fraction of the nodes or to corrupt messages arbitrarily. Despite this very powerful and general corruption model, we show that robust gossip algorithms can be designed for many important aggregation problems. Our algorithms guarantee that almost all nodes converge to an approximately correct answer with optimal efficiency and essentially as fast as without corruptions. The design of adversarially-robust gossip algorithms poses completely new challenges. Despite this, our algorithms remain very simple variations of known non-robust algorithms with often only subtle changes to avoid non-compliant nodes gaining too much influence over outcomes. While our algorithms remain simple, their analysis is much more complex and often requires a completely different approach than the non-adversarial setting.

Cite as

Bernhard Haeupler, Marc Kaufmann, Raghu Raman Ravi, and Ulysse Schaller. Adversarially-Robust Gossip Algorithms for Approximate Quantile and Mean Computations. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 74:1-74:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{haeupler_et_al:LIPIcs.ITCS.2026.74,
  author =	{Haeupler, Bernhard and Kaufmann, Marc and Ravi, Raghu Raman and Schaller, Ulysse},
  title =	{{Adversarially-Robust Gossip Algorithms for Approximate Quantile and Mean Computations}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{74:1--74:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.74},
  URN =		{urn:nbn:de:0030-drops-253611},
  doi =		{10.4230/LIPIcs.ITCS.2026.74},
  annote =	{Keywords: Gossip Algorithms, Distributed Computing, Adversarial Robustness}
}
Document
Prior-Independent and Subgame Optimal Online Algorithms

Authors: Jason Hartline, Aleck Johnsen, and Anant Shah


Abstract
This paper develops two game-theoretic notions of beyond worst-case analysis that give better than worst-case guarantees on natural inputs. We illustrate them through the finite-horizon ski-rental problem. First, we consider prior-independent design and analysis of online algorithms where, rather than choosing a worst-case input, the adversary chooses a worst-case distribution from which inputs are drawn independently and identically. Prior-independent online algorithms are generally analytically intractable; instead we give a fully polynomial-time approximation scheme to compute them. Second, we consider the worst-case design of algorithms. We define "subgame optimality", which is stronger than worst-case optimality in that it requires the algorithm to take advantage of an adversary not playing a worst-case input. Algorithms that focus only on the worst case can be far from subgame optimal. Highlighting the potential improvement from these paradigms for the finite-horizon ski-rental problem, we empirically compare worst-case, subgame optimal, and prior-independent algorithms in the prior-independent framework. Finally, we analyze the structure of their decisions across input sequences: the prior-independent algorithm exhibits more extreme adaptations to observed data, in contrast with the more conservative behavior of worst-case and subgame optimal algorithms.

Cite as

Jason Hartline, Aleck Johnsen, and Anant Shah. Prior-Independent and Subgame Optimal Online Algorithms. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 75:1-75:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{hartline_et_al:LIPIcs.ITCS.2026.75,
  author =	{Hartline, Jason and Johnsen, Aleck and Shah, Anant},
  title =	{{Prior-Independent and Subgame Optimal Online Algorithms}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{75:1--75:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.75},
  URN =		{urn:nbn:de:0030-drops-253622},
  doi =		{10.4230/LIPIcs.ITCS.2026.75},
  annote =	{Keywords: online algorithms, prior-independent algorithm design, zero-sum games}
}
Document
Ideal Private Simultaneous Messages Schemes and Their Applications

Authors: Keitaro Hiwatashi and Reo Eriguchi


Abstract
Private Simultaneous Messages (PSM) is a minimal model for secure computation, where two parties, Alice and Bob, have private inputs x,y and a shared random string. Each of them sends a single message to an external party, Charlie, who can compute f(x,y) for a public function f but learns nothing else. The problem of narrowing the gap between upper and lower bounds on the communication complexity of PSM has been widely studied, but the gap still remains exponential. In this work, we study the communication complexity of PSM from a different perspective and introduce a special class of PSM, referred to as ideal PSM, in which each party’s message length attains the minimum, that is, their messages are taken from the same domain as inputs. We initiate a systematic study of ideal PSM with a complete characterization, several positive results, and applications. First, we provide a characterization of the class of functions that admit ideal PSM, based on permutation groups acting on the input domain. This characterization allows us to derive asymptotic upper bounds on the total number of such functions and a complete list for small domains. We also present several infinite families of functions of practical interest that admit ideal PSM. Interestingly, by simply restricting the input domains of these ideal PSM schemes, we can recover most of the existing PSM schemes that achieve the best known communication complexity in various computation models. As applications, we show that these ideal PSM schemes yield novel communication-efficient PSM schemes for functions with sparse or dense truth-tables and those with low-rank truth-tables. Furthermore, we obtain a PSM scheme for general functions that improves the constant factor in the dominant term of the best known communication complexity. An additional advantage is that our scheme simplifies the existing construction by avoiding the hierarchical design of internally invoking PSM schemes for smaller functions.

Cite as

Keitaro Hiwatashi and Reo Eriguchi. Ideal Private Simultaneous Messages Schemes and Their Applications. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 76:1-76:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{hiwatashi_et_al:LIPIcs.ITCS.2026.76,
  author =	{Hiwatashi, Keitaro and Eriguchi, Reo},
  title =	{{Ideal Private Simultaneous Messages Schemes and Their Applications}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{76:1--76:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.76},
  URN =		{urn:nbn:de:0030-drops-253633},
  doi =		{10.4230/LIPIcs.ITCS.2026.76},
  annote =	{Keywords: secure computation, private simultaneous messages, communication complexity}
}
Document
Extended Abstract
Discrepancy Beyond Additive Functions with Applications to Fair Division (Extended Abstract)

Authors: Alexandros Hollender, Pasin Manurangsi, Raghu Meka, and Warut Suksompong


Abstract
We consider a setting where we have a ground set ℳ together with real-valued set functions f₁, … , f_n, and the goal is to partition ℳ into two sets S₁,S₂ such that |f_i(S₁) - f_i(S₂)| is small for every i. Many results in discrepancy theory can be stated in this form with the functions f_i being additive. In this work, we initiate the study of the unstructured case where f_i is not assumed to be additive. We show that even without the additivity assumption, the upper bound remains at most O(√{n log n}). Our result has implications for the fair allocation of indivisible goods. In particular, we show that a consensus halving up to O(√{n log n}) goods always exists for n agents with monotone utilities. Previously, only an O(n) bound was known for this setting.

Cite as

Alexandros Hollender, Pasin Manurangsi, Raghu Meka, and Warut Suksompong. Discrepancy Beyond Additive Functions with Applications to Fair Division (Extended Abstract). In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, p. 77:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{hollender_et_al:LIPIcs.ITCS.2026.77,
  author =	{Hollender, Alexandros and Manurangsi, Pasin and Meka, Raghu and Suksompong, Warut},
  title =	{{Discrepancy Beyond Additive Functions with Applications to Fair Division}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{77:1--77:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.77},
  URN =		{urn:nbn:de:0030-drops-253641},
  doi =		{10.4230/LIPIcs.ITCS.2026.77},
  annote =	{Keywords: Discrepancy Theory, Fair Division}
}
Document
Hardness of Dynamic Tree Edit Distance and Friends

Authors: Bingbing Hu, Jakob Nogler, and Barna Saha


Abstract
String Edit Distance is a more-than-classical problem whose behavior in the dynamic setting, where the strings are updated over time, is well studied. A single-character substitution, insertion, or deletion can be processed in time 𝒪̃(n w) when operation costs are positive integers bounded by w [Charalampopoulos, Kociumaka, Mozes, CPM 2020][Gorbachev, Kociumaka, STOC 2025]. If the weights are furthermore uniform (insertions and deletions have equal cost), an 𝒪̃(n √n)-update-time algorithm also exists [Charalampopoulos, Kociumaka, Mozes, CPM 2020]. This is a substantial improvement over the static 𝒪(n²) algorithm when w ≪ n or when we are dealing with uniform weights. In contrast, for inherently related problems such as Tree Edit Distance, Dyck Edit Distance, and RNA Folding, it has remained unknown whether it is possible to devise dynamic algorithms with an advantage over the static algorithm. In this paper, we resolve this question by showing that (weighted) Tree Edit Distance, Dyck Edit Distance, and RNA Folding admit no dynamic speedup: under well-known fine-grained assumptions, we show that the best possible algorithm recomputes the solution from scratch after each update. Furthermore, we prove a quadratic per-update lower bound for unweighted Tree Edit Distance under the k-Clique Conjecture. This provides the first separation between dynamic unweighted String Edit Distance and unweighted Tree Edit Distance, problems whose relative difficulty in the static setting is still open.

Cite as

Bingbing Hu, Jakob Nogler, and Barna Saha. Hardness of Dynamic Tree Edit Distance and Friends. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 78:1-78:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{hu_et_al:LIPIcs.ITCS.2026.78,
  author =	{Hu, Bingbing and Nogler, Jakob and Saha, Barna},
  title =	{{Hardness of Dynamic Tree Edit Distance and Friends}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{78:1--78:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.78},
  URN =		{urn:nbn:de:0030-drops-253653},
  doi =		{10.4230/LIPIcs.ITCS.2026.78},
  annote =	{Keywords: fine-grained complexity, dynamic lower bounds, pattern matching}
}
Document
Range Avoidance and Remote Point: New Algorithms and Hardness

Authors: Shengtang Huang, Xin Li, and Yan Zhong


Abstract
The Range Avoidance (Avoid) problem C-Avoid[n,m(n)] asks, given a circuit in a class C with input length n and output length m(n) > n, to find a string not in the range of the circuit. This problem has been a central piece in several recent frameworks for proving circuit lower bounds and constructing explicit combinatorial objects. Previous work by Korten (FOCS' 21) and by Ren, Santhanam, and Wang (FOCS' 22) showed that algorithms for Avoid are closely related to circuit lower bounds. In particular, Korten’s work reinterpreted an earlier result from bounded arithmetic, originally proved by Jeřábek (Ann. Pure Appl. Log. 2004), as an equivalence in computational complexity between the existence of FP^NP algorithms for the general Avoid problem and 2^{Ω(n)} lower bounds against general Boolean circuits for the class 𝐄^NP. In this work, we significantly complement these works by generalizing the equivalence result to restricted circuit classes and obtain the following:
- For any constant-depth unbounded fan-in circuit class C ⊇ AC⁰, there is an FP^NP algorithm for C-Avoid[n,n^{1+ε}] (for any constant ε > 0) if and only if 𝐄^NP cannot be computed by C circuits of size 2^{o(n)}. This addresses an open problem by Korten (Bulletin of EATCS' 25).
- If 𝐄^NP cannot be computed by o(2ⁿ/n) size formulas, then there is an FP^NP algorithm for NC⁰-Avoid[n,2n]. Note that by an extension of Ren, Santhanam, and Wang (FOCS' 22), an FP^NP algorithm for NC⁰₄-Avoid[n,n+n^δ] for any constant δ ∈ (0,1) implies that 𝐄^NP cannot be computed by o(2ⁿ/n) size formulas.
These results yield the first characterizations of FP^NP C-Avoid algorithms for low-complexity circuit classes such as AC⁰.
We also consider the average-case analog of Avoid, the Remote Point (Remote-Point) problem, and establish:
- For some suitable function c(n) and constant γ > 0, there is an FP^NP algorithm for Remote-Point[n,n^{6+γ},c(O_{γ}(log n))] if and only if 𝐄^NP cannot be (1/2-c(n))-approximated by circuits of size 2^{o(n)}.
Finally, we also present two improved algorithms for NC⁰-Avoid:
- A family of 2^{n^{1 - ε/(k-1) +o(1)}}-time algorithms for NC⁰_k-Avoid[n,n^{1+ε}] for any ε > 0, exhibiting the first subexponential-time algorithm for any super-linear stretch.
- Faster local algorithms for NC⁰_k-Avoid[n,n+1] running in time O(n2^{(k-2)/(k-1) n}), improving the naive 2ⁿ⋅ poly(n) bound.
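
As a toy illustration of why Avoid is a total search problem (the pigeonhole fact underlying everything above), the following exponential-time brute force finds a non-range string for any circuit given as a Python function. It is unrelated to the paper's FP^NP and subexponential algorithms; the function-as-circuit encoding is my own.

```python
from itertools import product

# Brute-force Range Avoidance: a circuit mapping n input bits to m > n
# output bits has at most 2^n range elements, so some m-bit string lies
# outside the range. We enumerate the range, then scan outputs in
# lexicographic order for the first string not in it.
def avoid(circuit, n, m):
    assert m > n, "Avoid requires output length m > input length n"
    image = {circuit(x) for x in product((0, 1), repeat=n)}
    for y in product((0, 1), repeat=m):
        if y not in image:
            return y
    raise AssertionError("unreachable: |image| <= 2**n < 2**m")
```

For example, an NC⁰ circuit copying its first bit, `lambda x: x + (x[0],)`, has range of size 4 inside {0,1}³, and `avoid` returns the first missing string.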

Cite as

Shengtang Huang, Xin Li, and Yan Zhong. Range Avoidance and Remote Point: New Algorithms and Hardness. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 79:1-79:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{huang_et_al:LIPIcs.ITCS.2026.79,
  author =	{Huang, Shengtang and Li, Xin and Zhong, Yan},
  title =	{{Range Avoidance and Remote Point: New Algorithms and Hardness}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{79:1--79:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.79},
  URN =		{urn:nbn:de:0030-drops-253662},
  doi =		{10.4230/LIPIcs.ITCS.2026.79},
  annote =	{Keywords: Circuit Lower Bounds, Range Avoidance Problem, Remote Point Problem}
}
Document
FPT Approximations for Connected Maximum Coverage

Authors: Tanmay Inamdar, Satyabrata Jana, Madhumita Kundu, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi


Abstract
We revisit connectivity-constrained coverage through a unifying model, Partial Connected Red-Blue Dominating Set (PartialConRBDS). Given a bipartite graph G = (R∪ B,E) with red vertices R and blue vertices B, an auxiliary connectivity graph G_{conn} on R, and integers k,t, the task is to find a set S ⊆ R with |S| ≤ k such that G_{conn}[S] is connected and S dominates at least t blue vertices. This formulation captures connected variants of Maximum Coverage [Hochbaum-Rao, Inf. Proc. Lett., 2020; D'Angelo-Delfaraz, AAMAS 2025], Partial Vertex Cover, and Partial Dominating Set [Khuller et al., SODA 2014; Lamprou et al., TCS 2021] via standard encodings.
Limits to parameterized tractability. PartialConRBDS is W[1]-hard parameterized by k even under strong restrictions: it remains hard when G_{conn} is a clique or a star and the incidence graph G is 3-degenerate, or when G is K_{2,2}-free.
Inapproximability. For every ε > 0, there is no polynomial-time (1, 1-1/e+ε)-approximation unless 𝖯 = NP. Moreover, under ETH, no algorithm running in f(k)⋅ n^{o(k)} time achieves a g(k)-approximation for k for any computable function g(⋅), or, for any ε > 0, a (1-1/e+ε)-approximation for t.
Graphical special cases. Partial Connected Dominating Set is W[2]-hard parameterized by k and inherits the same ETH-based f(k)⋅ n^{o(k)} inapproximability bound as above; Partial Connected Vertex Cover is W[1]-hard parameterized by k. These hardness boundaries delineate a natural "sweet spot" for study: within appropriate structural restrictions on the incidence graph, one can still aim for fine-grained (FPT) approximations.
Our algorithms. We solve PartialConRBDS exactly by reducing it to Relaxed Directed Steiner Out-Tree in time (2e)^t ⋅ n^{𝒪(1)}. For biclique-free incidences (i.e., when G excludes K_{d,d} as an induced subgraph), we obtain two complementary parameterized schemes:
- An Efficient Parameterized Approximation Scheme (EPAS) running in time 2^{𝒪(k² d/ε)}⋅ n^{𝒪(1)} that either returns a connected solution of size at most k covering at least (1-ε)t blue vertices, or correctly reports that no connected size-k solution covers t; and
- A Parameterized Approximation Scheme (PAS) running in time 2^{𝒪(kd(k²+log d))}⋅ n^{𝒪(1/ε)} that either returns a connected solution of size at most (1+ε)k covering at least t blue vertices, or correctly reports that no connected size-k solution covers t.
Together, these results chart the boundary between hardness and FPT-approximability for connectivity-constrained coverage.
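
To pin down the problem definition above, here is an exhaustive search for PartialConRBDS on tiny instances. This exponential brute force only illustrates the definition, not the paper's parameterized algorithms; the adjacency-dict encoding is my own.

```python
from itertools import combinations

# Check connectivity of the induced subgraph G_conn[S] by DFS.
def is_connected(vertices, adj):
    vs = set(vertices)
    if not vs:
        return False
    stack, seen = [next(iter(vs))], set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(u for u in adj[v] if u in vs and u not in seen)
    return seen == vs

# Find S subset of red, |S| <= k, with G_conn[S] connected and S
# dominating at least t blue vertices; blue_adj maps each red vertex
# to the set of blue vertices it dominates.
def partial_con_rbds(red, blue_adj, conn_adj, k, t):
    for size in range(1, k + 1):
        for S in combinations(red, size):
            if not is_connected(S, conn_adj):
                continue
            dominated = set().union(*(blue_adj[r] for r in S))
            if len(dominated) >= t:
                return set(S)
    return None
```

On a path a-b-c in G_conn where a dominates {1,2}, b dominates {2,3}, and c dominates {4}, the set {a,b} is a connected solution dominating three blue vertices.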

Cite as

Tanmay Inamdar, Satyabrata Jana, Madhumita Kundu, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. FPT Approximations for Connected Maximum Coverage. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 80:1-80:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{inamdar_et_al:LIPIcs.ITCS.2026.80,
  author =	{Inamdar, Tanmay and Jana, Satyabrata and Kundu, Madhumita and Lokshtanov, Daniel and Saurabh, Saket and Zehavi, Meirav},
  title =	{{FPT Approximations for Connected Maximum Coverage}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{80:1--80:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.80},
  URN =		{urn:nbn:de:0030-drops-253674},
  doi =		{10.4230/LIPIcs.ITCS.2026.80},
  annote =	{Keywords: Partial Dominating Set, Connectivity, Maximum Coverage, FPT Approximation, Fixed-parameter Tractability}
}
Document
Supercritical Tradeoff Between Size and Depth for Resolution over Parities

Authors: Dmitry Itsykson and Alexander Knop


Abstract
Alekseev and Itsykson (STOC 2025) proved the existence of an unsatisfiable CNF formula such that any resolution over parities (Res(⊕)) refutation must either have exponential size (in the formula size) or superlinear depth (in the number of variables). In this paper, we extend this result by constructing a formula with the same hardness properties, but which additionally admits a resolution refutation of quasi-polynomial size. This establishes a supercritical tradeoff between size and depth for resolution over parities. The proof builds on the framework of Alekseev and Itsykson and relies on a lifting argument applied to the supercritical tradeoff between width and depth in resolution, proposed by Buss and Thapen (IPL 2026).

Cite as

Dmitry Itsykson and Alexander Knop. Supercritical Tradeoff Between Size and Depth for Resolution over Parities. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 81:1-81:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{itsykson_et_al:LIPIcs.ITCS.2026.81,
  author =	{Itsykson, Dmitry and Knop, Alexander},
  title =	{{Supercritical Tradeoff Between Size and Depth for Resolution over Parities}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{81:1--81:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.81},
  URN =		{urn:nbn:de:0030-drops-253680},
  doi =		{10.4230/LIPIcs.ITCS.2026.81},
  annote =	{Keywords: lifting theorems, resolution depth, resolution over parities, resolution width, supercritical tradeoff}
}
Document
Dimension Reduction for Clustering: The Curious Case of Discrete Centers

Authors: Shaofeng H.-C. Jiang, Robert Krauthgamer, Shay Sapir, Sandeep Silwal, and Di Yue


Abstract
The Johnson-Lindenstrauss transform is a fundamental method for dimension reduction in Euclidean spaces that can map any dataset of n points into dimension O(log n) with low distortion of their distances. This dimension bound is tight in general, but one can bypass it for specific problems. Indeed, tremendous progress has been made for clustering problems, especially in the continuous setting where centers can be picked from the ambient space ℝ^d. Most notably, for k-median and k-means, the dimension bound was improved to O(log k) [Makarychev, Makarychev and Razenshteyn, STOC 2019]. We explore dimension reduction for clustering in the discrete setting, where centers can only be picked from the dataset, and present two results that are both parameterized by the doubling dimension of the dataset, denoted ddim. The first result shows that dimension O_{ε}(ddim + log k + log log n) suffices to guarantee that the cost is preserved within factor 1±ε for every set of centers, and that this bound is moreover tight. Our second result eliminates the log log n term in the dimension through a relaxation of the guarantee (namely, preserving the cost only for all approximately-optimal sets of centers), which maintains its usefulness for downstream applications. Overall, we achieve strong dimension reduction in the discrete setting, and find that it differs from the continuous setting not only in the dimension bound, which depends on the doubling dimension, but also in the guarantees beyond preserving the optimal value, such as which clusterings are preserved.
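
The Johnson-Lindenstrauss transform named above can be sketched, in its simplest generic form, as a scaled Gaussian projection. This is an illustration of the classical transform, not the paper's construction, and the dataset in the usage note is made up.

```python
import math
import random

# Generic JL-style random projection: map d-dimensional points onto m
# random Gaussian directions, scaled by 1/sqrt(m) so that squared
# distances are preserved in expectation.
def jl_project(points, m, seed=0):
    rng = random.Random(seed)
    d = len(points[0])
    R = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(m)]  # m x d
    scale = 1 / math.sqrt(m)
    return [[scale * sum(row[i] * p[i] for i in range(d)) for row in R]
            for p in points]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```

Taking m = O(log n) preserves all pairwise distances up to 1±ε with high probability; the paper's point is that for discrete clustering, m can instead depend on ddim and log k.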

Cite as

Shaofeng H.-C. Jiang, Robert Krauthgamer, Shay Sapir, Sandeep Silwal, and Di Yue. Dimension Reduction for Clustering: The Curious Case of Discrete Centers. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 82:1-82:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{jiang_et_al:LIPIcs.ITCS.2026.82,
  author =	{Jiang, Shaofeng H.-C. and Krauthgamer, Robert and Sapir, Shay and Silwal, Sandeep and Yue, Di},
  title =	{{Dimension Reduction for Clustering: The Curious Case of Discrete Centers}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{82:1--82:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.82},
  URN =		{urn:nbn:de:0030-drops-253698},
  doi =		{10.4230/LIPIcs.ITCS.2026.82},
  annote =	{Keywords: dimension reduction, clustering, k-median, k-means, doubling dimension}
}
Document
The Pure-State Consistency of Local Density Matrices Problem: In PSPACE and Complete for a Class Between QMA and QMA(2)

Authors: Jonas Kamminga and Dorian Rudolph


Abstract
In this work we investigate the computational complexity of the pure consistency of local density matrices (PureCLDM) and pure N-representability (Pure-N-Representability; the analog of PureCLDM for bosonic or fermionic systems) problems. In these problems, the input is a set of reduced density matrices and the task is to determine whether there exists a global pure state consistent with these reduced density matrices. While mixed CLDM, i.e. where the global state can be mixed, was proven to be QMA-complete by Broadbent and Grilo [JoC 2022], almost nothing was known about the complexity of the pure version. Before our work, the best upper and lower bounds were QMA(2) and QMA, respectively. Our contribution to the understanding of these problems is twofold. Firstly, we define a pure-state analogue of the complexity class QMA^+ of Aharonov and Regev [FOCS 2003], which we call PureSuperQMA. We prove that both Pure-N-Representability and PureCLDM are complete for this new class. Along the way, we supplement Broadbent and Grilo by proving hardness for 2-qubit reduced density matrices and showing that mixed N-Representability is QMA-complete. Secondly, we improve the upper bound on PureCLDM. Using methods from algebraic geometry, we prove that PureSuperQMA ⊆ PSPACE. Our methods, and the PSPACE upper bound, are also valid for PureCLDM with exponential or even perfect precision; hence precisePureCLDM is not preciseQMA(2) = NEXP-complete, unless PSPACE = NEXP. We view this as evidence for a negative answer to the longstanding open question of whether PureCLDM is QMA(2)-complete. The techniques we develop for our PSPACE upper bound are quite general. We are able to use them for various applications: from proving PSPACE upper bounds on other quantum problems to giving an efficient parallel (NC) algorithm for (non-convex) quadratically constrained quadratic programs with few constraints.

Cite as

Jonas Kamminga and Dorian Rudolph. The Pure-State Consistency of Local Density Matrices Problem: In PSPACE and Complete for a Class Between QMA and QMA(2). In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 83:1-83:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{kamminga_et_al:LIPIcs.ITCS.2026.83,
  author =	{Kamminga, Jonas and Rudolph, Dorian},
  title =	{{The Pure-State Consistency of Local Density Matrices Problem: In PSPACE and Complete for a Class Between QMA and QMA(2)}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{83:1--83:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.83},
  URN =		{urn:nbn:de:0030-drops-253701},
  doi =		{10.4230/LIPIcs.ITCS.2026.83},
  annote =	{Keywords: Quantum Complexity Theory, PSPACE, QMA(2), Consistency of Local Density Matrices, Polynomial Optimization}
}
Document
Bayesian Perspective on Memorization and Reconstruction

Authors: Haim Kaplan, Yishay Mansour, Kobbi Nissim, and Uri Stemmer


Abstract
We introduce a new Bayesian perspective on the concept of data reconstruction, and leverage this viewpoint to propose a new security definition that, in certain settings, provably prevents reconstruction attacks. We use our paradigm to shed new light on one of the most notorious attacks in the privacy and memorization literature: fingerprinting code attacks (FPC). We argue that these attacks are really a form of membership inference attacks, rather than reconstruction attacks. Furthermore, we show that if the goal is solely to prevent reconstruction (but not membership inference), then in some cases the impossibility results derived from FPC no longer apply.

Cite as

Haim Kaplan, Yishay Mansour, Kobbi Nissim, and Uri Stemmer. Bayesian Perspective on Memorization and Reconstruction. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 84:1-84:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{kaplan_et_al:LIPIcs.ITCS.2026.84,
  author =	{Kaplan, Haim and Mansour, Yishay and Nissim, Kobbi and Stemmer, Uri},
  title =	{{Bayesian Perspective on Memorization and Reconstruction}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{84:1--84:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.84},
  URN =		{urn:nbn:de:0030-drops-253713},
  doi =		{10.4230/LIPIcs.ITCS.2026.84},
  annote =	{Keywords: Reconstruction, Memorization, Differential privacy}
}
Document
Recovering Communities in Structured Random Graphs

Authors: Michael Kapralov, Luca Trevisan, and Weronika Wrzos-Kaminska


Abstract
The problem of recovering planted community structure in random graphs has received a lot of attention in the literature on the stochastic block model, where the input is a random graph in which edges crossing between different communities appear with smaller probability than edges induced by communities. The communities themselves form a collection of vertex-disjoint sparse cuts in the expected graph, and can be recovered, often exactly, from a sample as long as a separation condition on the intra- and inter-community edge probabilities is satisfied. In this paper, we ask whether the presence of a large number of overlapping sparsest cuts in the expected graph still allows recovery. For example, the d-dimensional hypercube graph admits d distinct (balanced) sparsest cuts, one for every coordinate. Can these cuts be identified given a random sample of the edges of the hypercube where each edge is present independently with some probability p ∈ (0, 1)? We show that this is the case, in a very strong sense: the sparsest balanced cut in a sample of the hypercube at rate p = C log d/d for a sufficiently large constant C is 1/poly(d)-close to a coordinate cut with high probability. This is asymptotically optimal and allows approximate recovery of all d cuts simultaneously. Furthermore, for an appropriate sample of hypercube-like graphs recovery can be made exact. The proof is essentially a strong hypercube cut sparsification bound that combines a theorem of Friedgut, Kalai and Naor on boolean functions whose Fourier transform concentrates on the first level of the Fourier spectrum with Karger’s cut counting argument.
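
The planted structure above is easy to make concrete: in the full d-dimensional hypercube, the d coordinate cuts are balanced and each crosses exactly 2^{d-1} of the d·2^{d-1} edges. A small sketch (my own illustration, not code from the paper):

```python
from itertools import product

# Vertices of the d-dimensional hypercube are d-bit tuples; edges join
# vertices differing in exactly one coordinate. We list each edge once
# by flipping a 0-bit to a 1-bit.
def hypercube_edges(d):
    edges = []
    for v in product((0, 1), repeat=d):
        for i in range(d):
            if v[i] == 0:
                u = v[:i] + (1,) + v[i + 1:]
                edges.append((v, u))
    return edges

# Coordinate cut i separates vertices by bit i; an edge crosses it
# exactly when its endpoints disagree in coordinate i.
def coordinate_cut_size(edges, i):
    return sum(1 for u, v in edges if u[i] != v[i])
```

Sampling each edge independently at rate p = C log d/d and asking whether the sparsest balanced cut of the sample is still (close to) a coordinate cut is precisely the recovery question the paper answers affirmatively.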

Cite as

Michael Kapralov, Luca Trevisan, and Weronika Wrzos-Kaminska. Recovering Communities in Structured Random Graphs. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 85:1-85:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{kapralov_et_al:LIPIcs.ITCS.2026.85,
  author =	{Kapralov, Michael and Trevisan, Luca and Wrzos-Kaminska, Weronika},
  title =	{{Recovering Communities in Structured Random Graphs}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{85:1--85:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.85},
  URN =		{urn:nbn:de:0030-drops-253727},
  doi =		{10.4230/LIPIcs.ITCS.2026.85},
  annote =	{Keywords: Hypercube graphs, Community detection, Fourier analysis of Boolean functions}
}
Document
The Secretary Problem with Predictions and a Chosen Order

Authors: Helia Karisani, Mohammadreza Daneshvaramoli, Hedyeh Beyhaghi, Mohammad Hajiesmaili, and Cameron Musco


Abstract
We study a learning-augmented variant of the secretary problem, recently introduced by Fujii and Yoshida (2023). In this variant, the decision-maker has access to machine-learned predictions of candidate values in advance. The key challenge is to balance consistency and robustness: when the predictions are accurate, the algorithm should hire a near-best secretary; however, if they are inaccurate, the algorithm should still achieve a bounded competitive ratio. We consider both the standard Random Order Secretary Problem (ROSP), where candidates arrive in a uniform random order, and a more natural model in the learning-augmented setting, where the decision-maker can choose the arrival order based on the predicted candidate values. This model, which we call the Chosen Order Secretary Problem (COSP), can capture scenarios such as an interview schedule that is set by the decision-maker. We propose a novel algorithm that applies to both ROSP and COSP. Building on the approach of Fujii and Yoshida, our method switches from fully trusting predictions to a threshold-based rule when a large deviation of a prediction is observed. Importantly, unlike the algorithm of Fujii and Yoshida, our algorithm uses randomization as part of its decision logic. We show that if ε ∈ [0,1] denotes the maximum multiplicative prediction error, then for ROSP our algorithm achieves competitive ratio max {0.221, (1-ε)/(1+ε)}, improving on a previous bound of max {0.215, (1-ε)/(1+ε)} due to Fujii and Yoshida [Fujii and Yoshida, 2023]. For COSP, our algorithm achieves max {0.262, (1-ε)/(1+ε)}. This surpasses a 0.25 upper bound on the worst-case competitive ratio that applies to the approach of Fujii and Yoshida, and gets closer to the classical secretary benchmark of 1/e ≈ 0.368, which is an upper bound for any algorithm. Our result for COSP highlights the benefit of integrating predictions with arrival-order control in online decision-making.
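
The 1/e benchmark mentioned above comes from the classical secretary rule, which can be sketched as follows. This is the standard prediction-free algorithm, not the authors' learning-augmented one; the simulation parameters are my own.

```python
import math
import random

# Classical secretary rule: reject the first ~n/e candidates, then hire
# the first candidate who beats everyone seen so far (taking the last
# candidate if forced).
def secretary(values):
    n = len(values)
    r = max(1, round(n / math.e))
    best_seen = max(values[:r])
    for v in values[r:]:
        if v > best_seen:
            return v
    return values[-1]

# Empirical success probability of hiring the best candidate over random
# arrival orders; the classical analysis gives ~1/e ≈ 0.368.
def success_rate(n=20, trials=2000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        vals = list(range(1, n + 1))
        rng.shuffle(vals)
        wins += secretary(vals) == n
    return wins / trials
```

Predictions (and, in COSP, control over the arrival order) let an algorithm do better than this blind threshold rule when the predictions are accurate, while the 1/e bound caps what any algorithm can guarantee in the worst case.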

Cite as

Helia Karisani, Mohammadreza Daneshvaramoli, Hedyeh Beyhaghi, Mohammad Hajiesmaili, and Cameron Musco. The Secretary Problem with Predictions and a Chosen Order. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 86:1-86:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{karisani_et_al:LIPIcs.ITCS.2026.86,
  author =	{Karisani, Helia and Daneshvaramoli, Mohammadreza and Beyhaghi, Hedyeh and Hajiesmaili, Mohammad and Musco, Cameron},
  title =	{{The Secretary Problem with Predictions and a Chosen Order}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{86:1--86:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.86},
  URN =		{urn:nbn:de:0030-drops-253734},
  doi =		{10.4230/LIPIcs.ITCS.2026.86},
  annote =	{Keywords: Secretary problem, learning-augmented algorithms, online algorithms}
}
Document
Range Longest Increasing Subsequence and Its Relatives

Authors: Karthik C. S. and Saladi Rahul


Abstract
Longest increasing subsequence (LIS) is a classical textbook problem which is still actively studied in various computational models. In this work, we present a few results for the range longest increasing subsequence problem (Range-LIS) and its variants. The input to Range-LIS is a sequence 𝒮 of n real numbers and a collection 𝒬 of m query ranges, and for each query in 𝒬, the goal is to report the LIS of the sequence 𝒮 restricted to that query. Our two main results are for the following generalizations of the Range-LIS problem:
2D Range Queries: In this variant of the Range-LIS problem, each query is a pair of ranges, one of indices and the other of values, and we provide a randomized algorithm with running time Õ(mn^{1/2}+ n^{3/2})+O(k), where k is the cumulative length of the m output subsequences. This improves on the elementary Õ(mn)-runtime algorithm when m = Ω(√n). Previously, the only known result breaking the quadratic barrier was that of Tiskin [SODA'10], which could only handle 1D range queries (i.e., each query was a range of indices) and also only output the length of the LIS (instead of reporting the subsequence achieving that length). Subsequent to our paper, Gawrychowski, Gorbachev, and Kociumaka in a preprint have extended Tiskin’s approach to handle reporting 1D range queries in O(n(log n)³+m+k) time.
Colored Sequences: In this variant of the Range-LIS problem, each element in 𝒮 is colored and for each query in 𝒬, the goal is to report a monochromatic LIS contained in the sequence 𝒮 restricted to that query. For 2D queries, we provide a randomized algorithm for this colored version with running time Õ(mn^{2/3}+ n^{5/3})+O(k). Moreover, for 1D queries, we provide an improved algorithm with running time Õ(mn^{1/2}+ n^{3/2})+O(k). Thus, we again improve on the elementary Õ(mn)-runtime algorithm.
Additionally, we prove that, assuming the well-known Combinatorial Boolean Matrix Multiplication Hypothesis, the runtime for 1D queries is essentially tight for combinatorial algorithms. Our algorithms combine several tools such as dynamic programming (to precompute increasing subsequences with some desirable properties), geometric data structures (to efficiently compute the dynamic programming entries), random sampling (to capture elements which are part of the LIS), classification of query ranges into large LIS and small LIS, and classification of colors into light and heavy. We believe that our techniques will be of interest for tackling other variants of the LIS problem and other range-searching problems.
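
The elementary Õ(mn) baseline mentioned above is simply an O(n log n) LIS computation rerun on each queried slice. A minimal sketch (generic patience-sorting LIS, not the paper's data structures), for 1D index-range queries:

```python
from bisect import bisect_left

# O(n log n) length of the longest strictly increasing subsequence via
# patience sorting: tails[j] holds the smallest possible tail of an
# increasing subsequence of length j+1.
def lis_length(seq):
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

# Naive Range-LIS for 1D queries: rerun the LIS computation on each
# index-range slice [l, r], giving the O(mn log n) elementary baseline.
def range_lis_lengths(seq, queries):
    return [lis_length(seq[l:r + 1]) for l, r in queries]
```

The paper's contribution is answering m such queries (and their 2D and colored generalizations, with reporting) in time roughly Õ(mn^{1/2} + n^{3/2}) rather than Õ(mn).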

Cite as

Karthik C. S. and Saladi Rahul. Range Longest Increasing Subsequence and Its Relatives. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 87:1-87:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{karthikc.s._et_al:LIPIcs.ITCS.2026.87,
  author =	{Karthik C. S. and Rahul, Saladi},
  title =	{{Range Longest Increasing Subsequence and Its Relatives}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{87:1--87:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.87},
  URN =		{urn:nbn:de:0030-drops-253740},
  doi =		{10.4230/LIPIcs.ITCS.2026.87},
  annote =	{Keywords: Longest Increasing Subsequence, Range Query, Fine-Grained Complexity}
}
Document
Lower Bounds and Separations for Torus Polynomials

Authors: Vaibhav Krishan and Sundar Vishwanathan


Abstract
The class ACC⁰ consists of Boolean functions that can be computed by constant-depth circuits of polynomial size with AND, NOT and MOD_m gates, where m is a natural number. At the frontier of our understanding lies a widely believed conjecture asserting that MAJORITY does not belong to ACC⁰. A few years ago, Bhrushundi, Hosseini, Lovett and Rao (ITCS 2019) introduced torus polynomial approximations as an approach towards this conjecture. Torus polynomials approximate Boolean functions when the fractional part of their value on Boolean points is close to half the value of the function. They reduced the conjecture that MAJORITY ∉ ACC⁰ to a conjecture concerning the non-existence of low-degree torus polynomials that approximate MAJORITY. We reduce the non-existence problem further, to a statement about finding feasible solutions for an infinite family of linear programs. The main advantage of this statement is that it allows for incremental progress, which means finding feasible solutions for successively larger collections of these programs. As an immediate first step, we find feasible solutions for a large class of these linear programs, leaving only a finite set for further consideration. Our method is inspired by the method of dual polynomials, which is used to study the approximate degree of Boolean functions. Using our method, we also propose a way to progress further. We prove several additional key results with the same method, which include:
- A lower bound on the degree of symmetric torus polynomials that approximate the AND function. As a consequence, we get a separation showing that symmetric torus polynomials are weaker than their asymmetric counterparts.
- An error-degree trade-off for symmetric torus polynomials approximating the MAJORITY function, strengthening the corresponding result of Bhrushundi, Hosseini, Lovett and Rao (ITCS 2019).
- The first lower bounds against torus polynomials approximating AND, showcasing the power of the machinery we develop. This lower bound nearly matches the corresponding upper bound. Hence, we get an almost complete characterization of the torus polynomial approximation degree of AND.
- Lower bounds against asymmetric torus polynomials approximating MAJORITY, or AND, in the very low error regime. This partially answers a question posed in Bhrushundi, Hosseini, Lovett and Rao (ITCS 2019) about error-reduction for torus polynomials.

Cite as

Vaibhav Krishan and Sundar Vishwanathan. Lower Bounds and Separations for Torus Polynomials. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 88:1-88:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{krishan_et_al:LIPIcs.ITCS.2026.88,
  author =	{Krishan, Vaibhav and Vishwanathan, Sundar},
  title =	{{Lower Bounds and Separations for Torus Polynomials}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{88:1--88:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.88},
  URN =		{urn:nbn:de:0030-drops-253751},
  doi =		{10.4230/LIPIcs.ITCS.2026.88},
  annote =	{Keywords: Circuit complexity, ACC, lower bounds, polynomials}
}
Document
On Solving Asymmetric Diagonally Dominant Linear Systems in Sublinear Time

Authors: Tsz Chiu Kwok, Zhewei Wei, and Mingji Yang


Abstract
We initiate a study of solving a row/column diagonally dominant (RDD/CDD) linear system 𝐌x = b in sublinear time, with the goal of estimating t^{⊤}x^{∗} for a given vector t ∈ ℝⁿ and a specific solution x^{∗}. This setting naturally generalizes the study of sublinear-time solvers for symmetric diagonally dominant (SDD) systems [Andoni-Krauthgamer-Pogrow, ITCS 2019] to the asymmetric case, which has remained underexplored despite extensive work on nearly-linear-time solvers for RDD/CDD systems. Our first contributions are characterizations of the problem’s mathematical structure. We express a solution x^{∗} via a Neumann series, prove its convergence, and upper bound the truncation error of this series through a novel quantity associated with 𝐌, termed the maximum p-norm gap. This quantity generalizes the spectral gap of symmetric matrices and captures how the structure of 𝐌 governs the problem’s computational difficulty. For systems with bounded maximum p-norm gap, we develop a collection of algorithmic results for locally approximating t^{⊤}x^{∗} under various scenarios and error measures. We derive these results by adapting the techniques of random-walk sampling, local push, and their bidirectional combination, which have proved powerful for special cases of solving RDD/CDD systems, particularly estimating PageRank and effective resistance on graphs. Our general framework yields deeper insights, extended results, and improved complexity bounds for these problems. Notably, our perspective provides a unified understanding of Forward Push and Backward Push, two fundamental approaches for estimating random-walk probabilities on graphs. Our framework also inherits the hardness results for sublinear-time SDD solvers and local PageRank computation, establishing lower bounds on the maximum p-norm gap or the accuracy parameter. We hope that our work opens the door to further study of sublinear solvers, local graph algorithms, and directed spectral graph theory.
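As a toy illustration of the Neumann-series characterization described above (a dense, non-sublinear sketch; the matrix, vectors, and truncation length are invented for the example, not taken from the paper):

```python
import numpy as np

# For a strictly row diagonally dominant M, write M = D - A with
# D = diag(M). A solution of M x = b is then given by the Neumann series
#   x* = sum_{k >= 0} (D^{-1} A)^k D^{-1} b,
# which converges because the iteration matrix D^{-1} A has inf-norm < 1.
rng = np.random.default_rng(0)
n = 6
A = rng.random((n, n))
np.fill_diagonal(A, 0.0)
M = np.diag(A.sum(axis=1) + 1.0) - A   # strictly RDD by construction
b = rng.random(n)
t = rng.random(n)

D_inv = 1.0 / np.diag(M)
P = D_inv[:, None] * A                 # iteration matrix D^{-1} A
term = D_inv * b
x = term.copy()
for _ in range(200):                   # truncated Neumann series
    term = P @ term
    x += term

assert np.allclose(M @ x, b, atol=1e-8)
est = t @ x                            # the quantity t^T x* of interest
```

The paper's sublinear algorithms estimate t^{⊤}x^{∗} without ever forming x; this dense loop only makes the underlying series concrete.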

Cite as

Tsz Chiu Kwok, Zhewei Wei, and Mingji Yang. On Solving Asymmetric Diagonally Dominant Linear Systems in Sublinear Time. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 89:1-89:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{kwok_et_al:LIPIcs.ITCS.2026.89,
  author =	{Kwok, Tsz Chiu and Wei, Zhewei and Yang, Mingji},
  title =	{{On Solving Asymmetric Diagonally Dominant Linear Systems in Sublinear Time}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{89:1--89:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.89},
  URN =		{urn:nbn:de:0030-drops-253768},
  doi =		{10.4230/LIPIcs.ITCS.2026.89},
  annote =	{Keywords: Spectral Graph Theory, Linear Systems, Sublinear Algorithms}
}
Document
Slice Rank and Partition Rank of the Determinant

Authors: Amichai Lampert and Guy Moshkovitz


Abstract
The Laplace expansion expresses the n × n determinant det_n as a sum of n products. Do shorter expansions exist? In this paper we:
- Fully determine the slice rank decompositions of det_n (where each product must contain a linear factor): In this case, we show that n summands are necessary, and moreover, the only such expansions with n summands are equivalent (in a precise sense) to the Laplace expansion.
- Prove a logarithmic lower bound for the partition rank of det_n (where each product is of multilinear forms): In this case, we show that at least log₂(n)+1 summands are needed and we explain why existing techniques fail to yield any nontrivial lower bound.
- Separate partition rank from slice rank for det_n: we find a quadratic expansion for det₄, over any field, with fewer summands than the Laplace expansion. This construction is related to a well-known example of Green-Tao and Lovett-Meshulam-Samorodnitsky disproving the naive version of the Gowers Inverse conjecture over small fields.
An important motivation for these questions comes from the challenge of separating structure and randomness for tensors. On the one hand, we show that the random construction fails to separate: for a random tensor of partition rank r, the analytic rank is r-o(1) with high probability. On the other hand, our results imply that the determinant yields the first asymptotic separation between partition rank and analytic rank of d-tensors, with their ratio tending to infinity with d.
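The n-summand Laplace expansion the abstract refers to can be made concrete in a few lines (a standard textbook recursion, shown here only to fix notation; the matrix is an arbitrary example):

```python
# Laplace expansion along the first row: the determinant of an n x n
# matrix as a sum of n products, each pairing a first-row entry with a
# signed (n-1) x (n-1) minor.
def det_laplace(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total

A = [[2, 0, 1],
     [1, 3, 2],
     [4, 1, 5]]
# 2*(3*5 - 2*1) - 0 + 1*(1*1 - 3*4) = 26 - 11 = 15
assert det_laplace(A) == 15
```

The paper asks when the determinant admits such sum-of-products expansions with fewer than n summands, under slice rank and partition rank restrictions on the factors.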

Cite as

Amichai Lampert and Guy Moshkovitz. Slice Rank and Partition Rank of the Determinant. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 90:1-90:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{lampert_et_al:LIPIcs.ITCS.2026.90,
  author =	{Lampert, Amichai and Moshkovitz, Guy},
  title =	{{Slice Rank and Partition Rank of the Determinant}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{90:1--90:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.90},
  URN =		{urn:nbn:de:0030-drops-253779},
  doi =		{10.4230/LIPIcs.ITCS.2026.90},
  annote =	{Keywords: Slice rank, partition rank, determinant}
}
Document
Limitations of Membership Queries in Testable Learning

Authors: Jane Lange and Mingda Qiao


Abstract
Membership queries (MQ) often yield speedups for learning tasks, particularly in the distribution-specific setting. We show that in the testable learning model of Rubinfeld and Vasilyan [Rubinfeld and Vasilyan, 2023], membership queries cannot decrease the time complexity of testable learning algorithms beyond the complexity of sample-only distribution-specific learning. In the testable learning model, the learner must output a hypothesis whenever the data distribution satisfies a desired property, and if it outputs a hypothesis, the hypothesis must be near-optimal. We give a general reduction from sample-based refutation of Boolean concept classes, as presented in [Vadhan, 2017; Kothari and Livni, 2018], to testable learning with queries (TL-Q). This yields lower bounds for TL-Q via the reduction from learning to refutation given in [Kothari and Livni, 2018]. The result is that, relative to a concept class and a distribution family, no m-sample TL-Q algorithm can be super-polynomially more time-efficient than the best m-sample PAC learner. Finally, we define a class of "statistical" MQ algorithms that encompasses many known distribution-specific MQ learners, such as those based on influence estimation or subcube-conditional statistical queries. We show that TL-Q algorithms in this class imply efficient statistical-query refutation and learning algorithms. Thus, combined with known SQ dimension lower bounds, our results imply that these efficient membership query learners cannot be made testable.

Cite as

Jane Lange and Mingda Qiao. Limitations of Membership Queries in Testable Learning. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 91:1-91:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{lange_et_al:LIPIcs.ITCS.2026.91,
  author =	{Lange, Jane and Qiao, Mingda},
  title =	{{Limitations of Membership Queries in Testable Learning}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{91:1--91:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.91},
  URN =		{urn:nbn:de:0030-drops-253785},
  doi =		{10.4230/LIPIcs.ITCS.2026.91},
  annote =	{Keywords: Testable learning, PAC learning}
}
Document
A Combinatorial Characterization of Constant Mixing Time

Authors: Lap Chi Lau and Raymond Liu


Abstract
Classical spectral graph theory characterizes graphs with logarithmic mixing time. In this work, we present a combinatorial characterization of graphs with constant mixing time. The combinatorial characterization is based on the small-set bipartite density condition, which is weaker than having near-optimal spectral radius and is stronger than having near-optimal small-set vertex expansion.

Cite as

Lap Chi Lau and Raymond Liu. A Combinatorial Characterization of Constant Mixing Time. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 92:1-92:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{lau_et_al:LIPIcs.ITCS.2026.92,
  author =	{Lau, Lap Chi and Liu, Raymond},
  title =	{{A Combinatorial Characterization of Constant Mixing Time}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{92:1--92:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.92},
  URN =		{urn:nbn:de:0030-drops-253792},
  doi =		{10.4230/LIPIcs.ITCS.2026.92},
  annote =	{Keywords: Random walks, mixing time, bipartite density, spectral graph theory}
}
Document
Analyzing the Economic Impact of Decentralization on Users

Authors: Amit Levy, S. Matthew Weinberg, and Chenghan Zhou


Abstract
We model the ultimate price paid by users of a decentralized ledger as resulting from a two-stage game where Miners (/Proposers/etc.) first purchase blockspace via a Tullock contest, and then price that space to users. When analyzing our distributed ledger model, we find:
- A characterization of all possible pure equilibria (although pure equilibria are not guaranteed to exist).
- A natural sufficient condition, implied by Regularity (à la [Myerson, 1981]), for the existence of a "market-clearing" pure equilibrium where Miners choose to sell all space allocated by the Distributed Ledger Protocol; moreover, this equilibrium is unique.
- The market share of the largest miner is the relevant "measure of decentralization" determining whether a market-clearing pure equilibrium exists.
- Block rewards do not impact users' prices at equilibrium, when pure equilibria exist. However, higher block rewards can cause pure equilibria to exist.
We also discuss aspects of our model and how they relate to blockchains deployed in practice. For example, only "patient" users (who are happy for their transactions to enter the blockchain under any miner) would enjoy the conclusions highlighted by our model, whereas "impatient" users (who care only about their transaction being included in the very next block) still face monopoly pricing.

Cite as

Amit Levy, S. Matthew Weinberg, and Chenghan Zhou. Analyzing the Economic Impact of Decentralization on Users. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 93:1-93:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{levy_et_al:LIPIcs.ITCS.2026.93,
  author =	{Levy, Amit and Weinberg, S. Matthew and Zhou, Chenghan},
  title =	{{Analyzing the Economic Impact of Decentralization on Users}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{93:1--93:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.93},
  URN =		{urn:nbn:de:0030-drops-253805},
  doi =		{10.4230/LIPIcs.ITCS.2026.93},
  annote =	{Keywords: Blockchain, Cryptocurrency, Blockspace Markets, Decentralization, Distributed Ledgers, Equilibrium Analysis, Tullock Contests}
}
Document
Smoothed Analysis of Online Metric Matching with a Single Sample: Beyond Metric Distortion

Authors: Yingxi Li, Ellen Vitercik, and Mingwei Yang


Abstract
In the online metric matching problem, n servers and n requests lie in a metric space. Servers are available upfront, and requests arrive sequentially. An arriving request must be matched immediately and irrevocably to an available server, incurring a cost equal to their distance. The goal is to minimize the total matching cost. We study this problem in [0, 1]^d with the Euclidean metric, when servers are adversarial and requests are independently drawn from distinct distributions that satisfy a mild smoothness condition. Our main result is an O(1)-competitive algorithm for d ≠ 2 that requires no distributional knowledge, relying only on a single sample from each request distribution. To our knowledge, this is the first algorithm to achieve an o(log n) competitive ratio for non-trivial metrics beyond the i.i.d. setting. Our approach bypasses the Ω(log n) barrier introduced by probabilistic metric embeddings: instead of analyzing the embedding distortion and the algorithm separately, we directly bound the cost of the algorithm on the target metric space of a simple deterministic embedding. We then combine this analysis with lower bounds on the offline optimum for Euclidean metrics, derived via majorization arguments, to obtain our guarantees.

Cite as

Yingxi Li, Ellen Vitercik, and Mingwei Yang. Smoothed Analysis of Online Metric Matching with a Single Sample: Beyond Metric Distortion. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 94:1-94:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ITCS.2026.94,
  author =	{Li, Yingxi and Vitercik, Ellen and Yang, Mingwei},
  title =	{{Smoothed Analysis of Online Metric Matching with a Single Sample: Beyond Metric Distortion}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{94:1--94:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.94},
  URN =		{urn:nbn:de:0030-drops-253815},
  doi =		{10.4230/LIPIcs.ITCS.2026.94},
  annote =	{Keywords: Online algorithm, Metric matching, Competitive analysis, Smoothed analysis}
}
Document
Identity Testing for Circuits with Exponentiation Gates

Authors: Jiatu Li and Mengdi Wu


Abstract
Motivated by practical applications in the design of optimization compilers for neural networks, we initiate the study of identity testing problems for arithmetic circuits augmented with exponentiation gates that compute the real function x ↦ e^x. These circuits compute real functions of the form P(→x)/P'(→x), where both P(→x) and P'(→x) are exponential polynomials ∑_{i = 1}^k f_i(→x)⋅ exp((g_i(→x))/(h_i(→x))), for polynomials f_i(→x), g_i(→x), and h_i(→x). We formalize a black-box query model over finite fields for this class of circuits, which is mathematically simple and reflects constraints faced by real-world neural network compilers. We prove that a simple and efficient randomized identity testing algorithm achieves perfect completeness and non-trivial soundness. Concurrently with our work, the algorithm has been implemented in the optimization compiler Mirage by Wu et al. (OSDI 2025), demonstrating promising empirical performance in both efficiency and soundness error. Finally, we propose a number-theoretic conjecture under which our algorithm is sound with high probability.
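The flavor of randomized identity testing can be sketched numerically (a hypothetical toy over the reals; the paper's actual algorithm works in a black-box query model over finite fields, and the functions below are invented examples):

```python
import math
import random

# Toy sketch: decide whether two circuits with exp gates compute the same
# real function by comparing them at random points. Agreement at every
# sampled point gives probabilistic evidence of identity; a single
# disagreement certifies that the circuits differ.
def f(x, y):
    return math.exp(x + y)

def g(x, y):                      # the same function, written differently
    return math.exp(x) * math.exp(y)

def probably_identical(f, g, trials=50, tol=1e-9):
    for _ in range(trials):
        x, y = random.uniform(-2, 2), random.uniform(-2, 2)
        if not math.isclose(f(x, y), g(x, y), rel_tol=tol):
            return False
    return True

assert probably_identical(f, g)
assert not probably_identical(f, lambda x, y: math.exp(x - y))
```

The interesting part of the paper is making this idea sound in a finite-field model, where "evaluating e^x" must itself be given a meaning.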

Cite as

Jiatu Li and Mengdi Wu. Identity Testing for Circuits with Exponentiation Gates. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 95:1-95:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ITCS.2026.95,
  author =	{Li, Jiatu and Wu, Mengdi},
  title =	{{Identity Testing for Circuits with Exponentiation Gates}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{95:1--95:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.95},
  URN =		{urn:nbn:de:0030-drops-253821},
  doi =		{10.4230/LIPIcs.ITCS.2026.95},
  annote =	{Keywords: Polynomial Identity Testing, Exponential Polynomials}
}
Document
Robust Resource Allocation via Competitive Subsidies

Authors: David X. Lin, Giannis Fikioris, Siddhartha Banerjee, and Éva Tardos


Abstract
A canonical setting for non-monetary online resource allocation is one where agents compete over multiple rounds for a single item per round, with i.i.d. valuations and additive utilities across rounds. With n symmetric agents, a natural benchmark for each agent is the utility realized by her favorite 1/n-fraction of rounds; a line of work has demonstrated one can robustly guarantee each agent a constant fraction of this ideal utility, irrespective of how other agents behave. In particular, several mechanisms have been shown to be 1/2-robust, and recent work established that repeated first-price auctions based on artificial credits have a robustness factor of 0.59, which cannot be improved beyond 0.6 using first-price and simple strategies. In contrast, even without strategic considerations, the best achievable factor is 1-1/e≈ 0.63. In this work, we break the 0.6 first-price barrier to get a new 0.625-robust mechanism, which almost closes the gap to the non-strategic robustness bound. Surprisingly, we do so via a simple auction, where in each round, bidders decide if they ask for the item, and we allocate uniformly at random among those who ask. The main new ingredient is the idea of competitive subsidies, wherein we charge the winning agent an amount in artificial credits that decreases when fewer agents are bidding (specifically, when k agents bid, then the winner pays proportional to k/(k+1), varying the payment by a factor of 2 depending on the competition). Moreover, we show how it can be modified to get an equilibrium strategy with a slightly weaker robust guarantee of 5/(3e) ≈ 0.61 (and the optimal 1-1/e factor at equilibrium). Finally, we show that our mechanism gives the best possible bound under a wide class of auction-based mechanisms.
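The competitive-subsidies payment rule described above can be sketched as follows (an illustrative simplification: the credit amounts, price normalization, and affordability check are invented for the example, not the paper's exact calibration):

```python
import random

# Each round, agents who ask are pooled, a winner is drawn uniformly at
# random, and the winner pays artificial credits proportional to
# k / (k + 1), where k is the number of bidders. The payment thus varies
# by a factor of 2 with competition: base/2 when k = 1, approaching base
# as k grows.
def run_round(asking, credits, base_price=1.0):
    bidders = [a for a in asking if credits[a] >= base_price]
    if not bidders:
        return None
    k = len(bidders)
    winner = random.choice(bidders)
    credits[winner] -= base_price * k / (k + 1)
    return winner

credits = {"alice": 10.0, "bob": 10.0, "carol": 10.0}
w = run_round(["alice", "bob"], credits)
assert w in {"alice", "bob"}
assert credits[w] == 10.0 - 1.0 * 2 / 3   # two bidders: winner pays 2/3
```

Charging less when fewer agents bid is what subsidizes uncontested rounds and drives the improved robustness factor.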

Cite as

David X. Lin, Giannis Fikioris, Siddhartha Banerjee, and Éva Tardos. Robust Resource Allocation via Competitive Subsidies. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 96:1-96:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{lin_et_al:LIPIcs.ITCS.2026.96,
  author =	{Lin, David X. and Fikioris, Giannis and Banerjee, Siddhartha and Tardos, \'{E}va},
  title =	{{Robust Resource Allocation via Competitive Subsidies}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{96:1--96:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.96},
  URN =		{urn:nbn:de:0030-drops-253835},
  doi =		{10.4230/LIPIcs.ITCS.2026.96},
  annote =	{Keywords: Online Resource Allocation, Non-Monetary Mechanisms}
}
Document
One-Way Functions and Boundary Hardness of Randomized Time-Bounded Kolmogorov Complexity

Authors: Yanyi Liu and Rafael Pass


Abstract
We revisit the question of whether worst-case hardness of the time-bounded Kolmogorov complexity problem, MINK^{poly} - that is, determining whether a string is "structured" (i.e., K^t(x) < n-1) or "random" (i.e., K^{poly(t)} ≥ n-1) - suffices to imply the existence of one-way functions (OWF). Liu-Pass (CRYPTO'25) recently showed that worst-case hardness of a boundary version of MINK^{poly} - where, roughly speaking, the goal is to decide, given an instance x, whether (a) x is K^poly-random (i.e., K^{poly(t)}(x) ≥ n-1), or just close to K^poly-random (i.e., K^{t}(x) < n-1 but K^{poly(t)} > n - log n) - characterizes OWF, but with either of the following caveats: (1) considering a non-standard notion of probabilistic K^t, as opposed to the standard notion of K^t, or (2) assuming somewhat strong, and non-standard, derandomization assumptions. In this paper, we present an alternative method for establishing their result which enables significantly weakening the caveats. First, we show that boundary hardness of the more standard randomized K^t problem suffices (where randomized K^t(x) is defined just like K^t(x) except that the program generating the string x may be randomized). As a consequence of this result, we can provide a characterization also in terms of just "plain" K^t under the most standard derandomization assumption (used to derandomize just BPP into P), namely E ⊄ ioSIZE[2^{o(n)}]. Our proof relies on language compression schemes of Goldberg-Sipser (STOC'85); using the same technique, we also present the first worst-case to average-case reduction for the exact MINK^{poly} problem (under the same standard derandomization assumption), improving upon Hirahara’s celebrated results (STOC'18, STOC'21), which only applied to a gap version of the MINK^{poly} problem, referred to as GapMINK^{poly}, where the goal is to decide whether K^t(x) ≤ n-O(log n) or K^{poly(t)}(x) ≥ n-1.

Cite as

Yanyi Liu and Rafael Pass. One-Way Functions and Boundary Hardness of Randomized Time-Bounded Kolmogorov Complexity. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 97:1-97:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.ITCS.2026.97,
  author =	{Liu, Yanyi and Pass, Rafael},
  title =	{{One-Way Functions and Boundary Hardness of Randomized Time-Bounded Kolmogorov Complexity}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{97:1--97:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.97},
  URN =		{urn:nbn:de:0030-drops-253849},
  doi =		{10.4230/LIPIcs.ITCS.2026.97},
  annote =	{Keywords: One-way functions, Time-Bounded Kolmogorov Complexity, Worst-case to Average-case Reductions}
}
Document
Weighted Chairman Assignment and Flow-Time Scheduling

Authors: Siyue Liu and Victor Reis


Abstract
Given positive integers m, n, a fractional assignment x ∈ [0,1]^{m × n} and weights d ∈ ℝⁿ_{> 0}, we show that there exists an assignment y ∈ {0,1}^{m × n} so that for every i ∈ [m] and t ∈ [n], |∑_{j ∈ [t]} d_j (x_{ij} - y_{ij})| < max_{j ∈ [n]} d_j. This generalizes a result of Tijdeman (1973) on the unweighted version, known as the chairman assignment problem. This also confirms a special case of the single-source unsplittable flow conjecture with arc-wise lower and upper bounds due to Morell and Skutella (IPCO 2020). As an application, we consider a scheduling problem where jobs have release times and machines have closing times, and a job can only be scheduled on a machine if it is released before the machine closes. We give a 3-approximation algorithm for maximum flow-time minimization.
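The guarantee in the first sentence can be checked by brute force on a tiny instance (an invented example for illustration; the theorem of course asserts existence for all instances):

```python
from itertools import product

# Weighted prefix discrepancy of an integral assignment y against a
# fractional assignment x: max over machines i and prefixes t of
#   | sum_{j <= t} d_j (x_ij - y_ij) |.
def prefix_disc(x, y, d):
    worst = 0.0
    for i in range(len(x)):
        s = 0.0
        for t in range(len(d)):
            s += d[t] * (x[i][t] - y[i][t])
            worst = max(worst, abs(s))
    return worst

x = [[0.5, 0.3, 0.7],       # fractional assignment, columns sum to 1
     [0.5, 0.7, 0.3]]
d = [1.0, 2.0, 1.5]

best = float("inf")
for cols in product(range(2), repeat=3):   # which machine gets each round
    y = [[1 - c for c in cols], list(cols)]
    best = min(best, prefix_disc(x, y, d))

assert best < max(d)   # the theorem guarantees such a y exists
```

With m = 2 and n = 3 the search is over all 2³ integral assignments; the paper's contribution is that a discrepancy below max_j d_j is always achievable, generalizing Tijdeman's unweighted bound.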

Cite as

Siyue Liu and Victor Reis. Weighted Chairman Assignment and Flow-Time Scheduling. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 98:1-98:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.ITCS.2026.98,
  author =	{Liu, Siyue and Reis, Victor},
  title =	{{Weighted Chairman Assignment and Flow-Time Scheduling}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{98:1--98:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.98},
  URN =		{urn:nbn:de:0030-drops-253858},
  doi =		{10.4230/LIPIcs.ITCS.2026.98},
  annote =	{Keywords: prefix discrepancy, flow-time scheduling, unsplittable flow}
}
Document
AC⁰[p]-Frege Cannot Efficiently Prove That Constant-Depth Algebraic Circuit Lower Bounds Are Hard

Authors: Jiaqi Lu, Rahul Santhanam, and Iddo Tzameret


Abstract
We study whether lower bounds against constant-depth algebraic circuits computing the Permanent over finite fields (Limaye-Srinivasan-Tavenas [J. ACM, 2025] and Forbes [CCC'24]) are hard to prove in certain proof systems. We focus on a DNF formula that expresses that such lower bounds are hard for constant-depth algebraic proofs. Using an adaptation of the diagonalization framework of Santhanam and Tzameret (SIAM J. Comput., 2025), we show unconditionally that this family of DNF formulas does not admit polynomial-size propositional AC⁰[p]-Frege proofs, infinitely often. This rules out the possibility that the DNF family is easy, and establishes that its status is either that of a hard tautology for AC⁰[p]-Frege or else unprovable (i.e., not a tautology). While it remains open whether the DNFs in question are tautologies, we provide evidence in this direction. In particular, under the plausible assumption that certain (weak) properties of multilinear algebra - specifically, those involving tensor rank - do not admit short constant-depth algebraic proofs, the DNFs are tautologies. We also observe that several weaker variants of the DNF formula are provably tautologies, and we show that the question of whether the DNFs are tautologies connects to conjectures of Razborov (ICALP'96) and Krajíček (J. Symb. Log., 2004). Additionally, our result has the following special features:
i) Existential depth amplification: the DNF formula considered is parameterised by a constant depth d bounding the depth of the algebraic proofs. We show that there exists some fixed depth d such that if there are no small depth-d algebraic proofs of certain circuit lower bounds for the Permanent, then there are no such small algebraic proofs in any constant depth.
ii) Necessity: We show that our result is a necessary step towards establishing lower bounds against constant-depth algebraic proofs, and more generally against any sufficiently strong proof system. In particular, showing there are no short proofs for our DNF formulas, obtained by replacing "constant-depth algebraic circuits" with any "reasonable" algebraic circuit class C, is necessary in order to prove any super-polynomial lower bounds against algebraic proofs operating with circuits from C.

Cite as

Jiaqi Lu, Rahul Santhanam, and Iddo Tzameret. AC⁰[p]-Frege Cannot Efficiently Prove That Constant-Depth Algebraic Circuit Lower Bounds Are Hard. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 99:1-99:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{lu_et_al:LIPIcs.ITCS.2026.99,
  author =	{Lu, Jiaqi and Santhanam, Rahul and Tzameret, Iddo},
  title =	{{AC⁰\lbrackp\rbrack-Frege Cannot Efficiently Prove That Constant-Depth Algebraic Circuit Lower Bounds Are Hard}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{99:1--99:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.99},
  URN =		{urn:nbn:de:0030-drops-253865},
  doi =		{10.4230/LIPIcs.ITCS.2026.99},
  annote =	{Keywords: Complexity, Lower bounds, Proof complexity, AC⁰\lbrackp\rbrack-Frege, Diagonalisation, Algebraic complexity}
}
Document
Extended Abstract
Online Contention Resolution Schemes for Network Revenue Management and Combinatorial Auctions (Extended Abstract)

Authors: Will Ma, Calum MacRury, and Jingwei Zhang


Abstract
In the Network Revenue Management (NRM) problem, products composed of up to L resources are sold to stochastically arriving customers. We take a randomized rounding approach to NRM, motivated by the modern tool of Online Contention Resolution Schemes (OCRS). The goal is to take a fractional solution to NRM that satisfies the resource constraints in expectation, and implement it in an online policy that satisfies the resource constraints with probability 1, while (approximately) preserving all of the sales that were prescribed by the fractional solution. In NRM and revenue management problems, customer substitution induces a negative correlation between products being demanded, making it difficult to apply the standard definition of OCRS. We start by deriving a more powerful notion of "random-element" OCRS that achieves a guarantee of 1/(1+L) for NRM with customer substitution, matching a common benchmark in the literature. We show this benchmark is unbeatable for all integers L that are a power of a prime, using a construction based on finite affine planes. We then show how to beat this benchmark under any of three assumptions: 1) no customer substitution (i.e., in the standard OCRS setting); 2) products comprise one item from each of up to L groups; or 3) customers arrive in a uniformly random (instead of fixed adversarial) order. Finally, we show that under both assumptions 1) and 3), it is possible to do better than offline CRS when L ≥ 5. Our results have corresponding implications for Online Combinatorial Auctions, in which buyers bid for bundles of up to L items, and buyers being single-minded is akin to having no substitution. Our result under assumption 1) or 2) implies that 1/(1+L) can be beaten for Prophet Inequality on the intersection of L partition matroids, a problem of interest.
In sum, our paper shows how to apply OCRS to all of these problems and establishes a surprising separation in the achievable guarantees when substitution is involved, under general resource constraints parametrized by L.

Cite as

Will Ma, Calum MacRury, and Jingwei Zhang. Online Contention Resolution Schemes for Network Revenue Management and Combinatorial Auctions (Extended Abstract). In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, p. 100:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{ma_et_al:LIPIcs.ITCS.2026.100,
  author =	{Ma, Will and MacRury, Calum and Zhang, Jingwei},
  title =	{{Online Contention Resolution Schemes for Network Revenue Management and Combinatorial Auctions}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{100:1--100:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.100},
  URN =		{urn:nbn:de:0030-drops-253875},
  doi =		{10.4230/LIPIcs.ITCS.2026.100},
  annote =	{Keywords: Online resource allocation, contention resolution schemes, network revenue management, combinatorial auctions}
}
Document
Two Bases Suffice for QMA₁-Completeness

Authors: Henry Ma and Anand Natarajan


Abstract
We introduce a basis-restricted variant of the Quantum-k-Sat problem, in which each term in the input Hamiltonian is required to be diagonal in either the standard or Hadamard basis. Our main result is that the Quantum-6-Sat problem with this basis restriction is already QMA₁-complete, defined with respect to a natural gateset. Our construction is based on the Feynman-Kitaev circuit-to-Hamiltonian construction, with a modified clock encoding that interleaves two clocks in the standard and Hadamard bases. In light of the central role played by CSS codes and the uncertainty principle in the proof of the NLTS theorem of Anshu, Breuckmann, and Nirkhe (STOC '23), we hope that the CSS-like structure of our Hamiltonians will make them useful for progress towards a quantum PCP theorem.

Cite as

Henry Ma and Anand Natarajan. Two Bases Suffice for QMA₁-Completeness. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 101:1-101:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{ma_et_al:LIPIcs.ITCS.2026.101,
  author =	{Ma, Henry and Natarajan, Anand},
  title =	{{Two Bases Suffice for QMA₁-Completeness}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{101:1--101:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.101},
  URN =		{urn:nbn:de:0030-drops-253880},
  doi =		{10.4230/LIPIcs.ITCS.2026.101},
  annote =	{Keywords: quantum complexity theory, Hamiltonian complexity, Quantum Merlin Arthur (QMA), QMA₁, quantum satisfiability problem}
}
Document
Smoothed Analysis of Dynamic Graph Algorithms

Authors: Uri Meir and Ami Paz


Abstract
Recent years have seen significant progress in the study of dynamic graph algorithms, and most notably, the introduction of strong lower bound techniques for them (e.g., Henzinger, Krinninger, Nanongkai and Saranurak, STOC 2015; Larsen and Yu, FOCS 2023). Since worst-case analysis (adversarial inputs) may force high running times, a natural question arises: in which cases are high running times really necessary, and in which cases do they merely reflect unique pathological inputs? Early attempts to tackle this question were made by Nikoletseas, Reif, Spirakis and Yung (ICALP 1995) and by Alberts and Henzinger (Algorithmica 1998), who considered models with very little adversarial control over the inputs, and showed fast algorithms exist for them. The question was then overlooked for decades, until Henzinger, Lincoln and Saha (SODA 2022) recently addressed uniformly random inputs, and presented algorithms and impossibility results for several subgraph counting problems. To tackle the above question more thoroughly, we employ smoothed analysis, a celebrated framework introduced by Spielman and Teng (J. ACM, 2004). An input is proposed by an adversary, but then a noisy version of it is processed by the algorithm instead. This model of inputs is parameterized by the amount of adversarial control, and fully interpolates between worst-case inputs and a uniformly random input. Doing so, we extend impossibility results for some problems to the smoothed model with only a minor quantitative loss. That is, we show that partially-adversarial inputs suffice to impose high running times for certain problems. In contrast, we show that other problems become easy even with the slightest amount of noise. In addition, we study the interplay between the adversary and the noise, leading to three natural models of smoothed inputs, for which we show a hierarchy of increasing difficulty stretching between the average-case and the worst-case complexities.
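One concrete way to picture the smoothed-input model described in the abstract is perturbing an adversarial graph with independent ε-noise. The sketch below is illustrative only (the function name and the edge-flip noise form are assumptions, not the paper's three models): ε = 0 recovers the worst case, and ε = 1/2 yields a uniformly random graph.

```python
import random

def smooth_graph(adj_matrix, eps, seed=None):
    """Toy smoothed-input model (illustrative assumption, not the paper's
    exact models): the adversary proposes a graph as an adjacency matrix,
    and each potential edge is independently flipped with probability eps."""
    rng = random.Random(seed)
    n = len(adj_matrix)
    out = [row[:] for row in adj_matrix]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < eps:
                out[i][j] = out[j][i] = 1 - out[i][j]  # flip edge {i, j}
    return out
```

Between these extremes, the parameter ε measures how much control the adversary retains over the processed input.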

Cite as

Uri Meir and Ami Paz. Smoothed Analysis of Dynamic Graph Algorithms. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 102:1-102:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{meir_et_al:LIPIcs.ITCS.2026.102,
  author =	{Meir, Uri and Paz, Ami},
  title =	{{Smoothed Analysis of Dynamic Graph Algorithms}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{102:1--102:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.102},
  URN =		{urn:nbn:de:0030-drops-253896},
  doi =		{10.4230/LIPIcs.ITCS.2026.102},
  annote =	{Keywords: Dynamic graph algorithms, Smoothed analysis, Shortest paths}
}
Document
A General Framework for Low Soundness Homomorphism Testing

Authors: Tushant Mittal and Sourya Roy


Abstract
We introduce a general framework to design and analyze algorithms for the problem of testing homomorphisms between finite groups in the low-soundness regime. In this regime, we give the first constant-query tests for various families of groups. These include tests for: (i) homomorphisms between arbitrary cyclic groups, (ii) homomorphisms between any finite group and ℤ_p, (iii) automorphisms of dihedral and symmetric groups, (iv) inner automorphisms of non-abelian finite simple groups and extraspecial groups, and (v) testing linear characters of GL_n(F_q), and finite-dimensional Lie algebras over F_q. We also recover the result of Kiwi [TCS'03] for testing homomorphisms between F_qⁿ and F_q. Prior to this work, such tests were only known for abelian groups with a constant maximal order (such as F_qⁿ). No tests were known for non-abelian groups. As an additional corollary, our framework gives combinatorial list decoding bounds for cyclic groups with list size dependence of O(ε^{-2}) (for agreement parameter ε). This improves upon the currently best-known bound of O(ε^{-105}) due to Dinur, Grigorescu, Kopparty, and Sudan [STOC'08], and Guo and Sudan [RANDOM'14].
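For background, the classical 3-query BLR-style test that this line of work generalizes checks f(x) + f(y) = f(x + y) on random pairs. The sketch below is illustrative (the name `blr_homomorphism_test` and the ℤ_q-linear setting are assumptions, not the paper's framework or its low-soundness tests):

```python
import random

def blr_homomorphism_test(f, n, q, trials=100):
    """Classical 3-query homomorphism test for f: (Z_q)^n -> Z_q:
    sample x, y and check f(x) + f(y) == f(x + y) (mod q).
    Returns the fraction of trials that pass; a true homomorphism
    passes every trial."""
    passed = 0
    for _ in range(trials):
        x = [random.randrange(q) for _ in range(n)]
        y = [random.randrange(q) for _ in range(n)]
        xy = [(a + b) % q for a, b in zip(x, y)]
        if (f(x) + f(y)) % q == f(xy) % q:
            passed += 1
    return passed / trials
```

A genuine homomorphism such as x ↦ Σᵢ xᵢ mod q always passes, while functions far from every homomorphism are rejected with noticeable probability; the low-soundness regime studied in the abstract concerns the much harder setting where the pass probability is only slightly above trivial.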

Cite as

Tushant Mittal and Sourya Roy. A General Framework for Low Soundness Homomorphism Testing. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 103:1-103:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{mittal_et_al:LIPIcs.ITCS.2026.103,
  author =	{Mittal, Tushant and Roy, Sourya},
  title =	{{A General Framework for Low Soundness Homomorphism Testing}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{103:1--103:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.103},
  URN =		{urn:nbn:de:0030-drops-253901},
  doi =		{10.4230/LIPIcs.ITCS.2026.103},
  annote =	{Keywords: Property Testing, Coding Theory}
}
Document
Dimension-Free Correlated Sampling for the Hypersimplex

Authors: Joseph (Seffi) Naor, Nitya Raju, Abhishek Shetty, Aravind Srinivasan, Renata Valieva, and David Wajc


Abstract
Sampling from multiple distributions so as to maximize overlap has been studied by statisticians since the 1950s. Since the 2000s, such correlated sampling from the probability simplex has been a powerful building block in disparate areas of theoretical computer science. We study a generalization of this problem to sampling sets from given vectors in the hypersimplex, i.e., outputting sets of size (at most) k ∈ [n], while maximizing the overlap of the sampled sets. Specifically, the expected difference between two output sets should be at most α times their input vectors' 𝓁₁ distance. A value of α = O(log n) is known to be achievable, due to Chen et al. (ICALP'17). We improve this factor to O(log k), independent of the ambient dimension n. Our algorithm satisfies other desirable properties, including (up to a log^* n factor) input-sparsity sampling time, logarithmic parallel depth and dynamic update time, as well as preservation of submodular objectives. Anticipating broader use of correlated sampling algorithms for the hypersimplex, we present applications of our algorithm to online paging, offline approximation of metric multi-labeling, and swift multi-scenario submodular welfare approximating reallocation.
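The classical simplex version of correlated sampling that the abstract generalizes can be sketched with shared rejection sampling: both parties consume the same random stream, so parties holding close distributions usually accept at the same step and output the same index. This is a background sketch under illustrative assumptions (function name, rejection-sampling instantiation), not the paper's hypersimplex algorithm.

```python
import random

def correlated_sample(p, shared_seed, max_rounds=10_000):
    """Correlated sampling from the probability simplex: outputs index i
    with probability p[i], using only the shared random stream.  Two
    parties with distributions p and q and the same seed disagree with
    probability O(||p - q||_1)."""
    rng = random.Random(shared_seed)
    for _ in range(max_rounds):
        i = rng.randrange(len(p))  # shared proposal: a uniform index...
        h = rng.random()           # ...and a uniform "height" in [0, 1)
        if h < p[i]:               # accept iff the dart lands under p
            return i
    raise RuntimeError("no acceptance; increase max_rounds")
```

Since the accept/reject decisions of the two parties differ only when the shared height falls between p[i] and q[i], the outputs coincide except with probability proportional to the 𝓁₁ distance, which is exactly the guarantee the hypersimplex generalization strengthens to sets of size k.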

Cite as

Joseph (Seffi) Naor, Nitya Raju, Abhishek Shetty, Aravind Srinivasan, Renata Valieva, and David Wajc. Dimension-Free Correlated Sampling for the Hypersimplex. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 104:1-104:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{naor_et_al:LIPIcs.ITCS.2026.104,
  author =	{Naor, Joseph (Seffi) and Raju, Nitya and Shetty, Abhishek and Srinivasan, Aravind and Valieva, Renata and Wajc, David},
  title =	{{Dimension-Free Correlated Sampling for the Hypersimplex}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{104:1--104:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.104},
  URN =		{urn:nbn:de:0030-drops-253918},
  doi =		{10.4230/LIPIcs.ITCS.2026.104},
  annote =	{Keywords: Correlated Rounding, Dependent Rounding}
}
Document
Fixed-Parameter Tractable Submodular Maximization over a Matroid

Authors: Shamisa Nematollahi, Adrian Vladu, and Junyao Zhao


Abstract
In this paper, we design fixed-parameter tractable (FPT) algorithms for (non-monotone) submodular maximization subject to a matroid constraint, where the matroid rank r is treated as a fixed parameter that is independent of the total number of elements n. We provide two FPT algorithms: one for the offline setting and another for the random-order streaming setting. Our streaming algorithm achieves a (1/2-ε)-approximation using Õ(r/poly(ε)) memory, while our offline algorithm obtains a (1-1/e-ε)-approximation with n⋅2^{Õ(r/poly(ε))} runtime and Õ(r/poly(ε)) memory. Both approximation factors are near-optimal in their respective settings, given existing hardness results. In particular, our offline algorithm demonstrates that - unlike in the polynomial-time regime - there is essentially no separation between monotone and non-monotone submodular maximization under a matroid constraint in the FPT framework.
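For context, the polynomial-time baseline in the monotone case is the classical greedy rule: repeatedly add the element of largest marginal gain. The sketch below instantiates it for a coverage function under a rank-k uniform matroid; it is background only (names and the coverage instantiation are illustrative), not the paper's FPT algorithms.

```python
def greedy_coverage(sets, k):
    """Classical greedy for monotone submodular maximization under a
    cardinality (rank-k uniform matroid) constraint, shown for coverage:
    at each step, pick the set with the largest marginal coverage gain.
    For coverage this gives a (1 - 1/e)-approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:   # no remaining marginal gain
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

The abstract's point is that once the rank r is a fixed parameter, runtimes of the form n⋅2^{Õ(r/poly(ε))} allow guarantees near 1-1/e even for non-monotone objectives, where polynomial-time algorithms provably cannot match the monotone bound.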

Cite as

Shamisa Nematollahi, Adrian Vladu, and Junyao Zhao. Fixed-Parameter Tractable Submodular Maximization over a Matroid. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 105:1-105:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{nematollahi_et_al:LIPIcs.ITCS.2026.105,
  author =	{Nematollahi, Shamisa and Vladu, Adrian and Zhao, Junyao},
  title =	{{Fixed-Parameter Tractable Submodular Maximization over a Matroid}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{105:1--105:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.105},
  URN =		{urn:nbn:de:0030-drops-253924},
  doi =		{10.4230/LIPIcs.ITCS.2026.105},
  annote =	{Keywords: Submodular maximization, matroids, parameterized complexity, streaming algorithms}
}
Document
List Decoding Reed-Solomon Codes in the Lee, Euclidean, and Other Metrics

Authors: Chris Peikert and Alexandra Veliche Hostetler


Abstract
Reed-Solomon error-correcting codes are ubiquitous across computer science and information theory, with applications in cryptography, computational complexity, communication and storage systems, and more. Most works on efficient error correction for these codes, like the celebrated Berlekamp-Welch unique decoder and the (Guruswami-)Sudan list decoders, are focused on measuring error in the Hamming metric, which simply counts the number of corrupted codeword symbols. However, for some applications, other metrics that depend on the specific values of the errors may be more appropriate. This work gives a polynomial-time algorithm that list decodes (generalized) Reed-Solomon codes over prime fields in 𝓁_p (semi)metrics, for any 0 < p ≤ 2. Compared to prior algorithms for the Lee (𝓁₁) and Euclidean (𝓁₂) metrics, ours decodes to arbitrarily large distances (for correspondingly small rates), and has better distance-rate tradeoffs for all decoding distances above some moderate thresholds. We also prove lower bounds on the 𝓁₁ and 𝓁₂ minimum distances of a certain natural subclass of GRS codes, which establishes that our list decoder is actually a unique decoder for many parameters of interest. Finally, we analyze our algorithm’s performance under random Laplacian and Gaussian errors, and show that it supports even larger rates than for corresponding amounts of worst-case error in 𝓁₁ and 𝓁₂ (respectively).

Cite as

Chris Peikert and Alexandra Veliche Hostetler. List Decoding Reed-Solomon Codes in the Lee, Euclidean, and Other Metrics. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 106:1-106:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{peikert_et_al:LIPIcs.ITCS.2026.106,
  author =	{Peikert, Chris and Hostetler, Alexandra Veliche},
  title =	{{List Decoding Reed-Solomon Codes in the Lee, Euclidean, and Other Metrics}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{106:1--106:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.106},
  URN =		{urn:nbn:de:0030-drops-253932},
  doi =		{10.4230/LIPIcs.ITCS.2026.106},
  annote =	{Keywords: Reed-Solomon codes, list decoding, unique decoding, Lee metric, Euclidean metric, Guruswami-Sudan algorithm}
}
Document
New Greedy Spanners and Applications

Authors: Elizaveta Popova and Elad Tzalik


Abstract
We present a simple greedy procedure to compute an (α,β)-spanner for a graph G. We then show that this procedure is useful for building fault-tolerant spanners, as well as spanners for weighted graphs. Our first main result is an algorithm that, given a multigraph G, outputs an f-edge fault-tolerant (k,k-1)-spanner H of size O(fn^{1+1/k}), which is tight. To our knowledge, this is the first tight result concerning the price of fault tolerance in spanners which are not multiplicative, in any model of faults. Our second main result is a new construction of a spanner for weighted graphs. We show that any weighted graph G has a subgraph H with O(n^{1+1/k}) edges such that any path P of hop-length 𝓁 in G has a replacement path P' in H of weighted length ≤ w(P)+(2k-2)w^(1/2)(P), where w(P) is the total edge weight of P, and w^(1/2) denotes the sum of the largest ⌈𝓁/2⌉ edge weights along P. Moreover, we show such an approximation is optimal for shortest paths of hop-length 2. To our knowledge, this is the first construction of a "spanner" for weighted graphs that strictly improves upon the stretch of multiplicative (2k-1)-spanners for all non-adjacent vertex pairs, while maintaining the same size bound. Our technique is based on using clustering and ball-growing, which are methods commonly used in designing spanner algorithms, to analyze simple greedy algorithms. This allows us to combine the flexibility of clustering approaches with the unique properties of the greedy algorithm to get improved bounds. In particular, our methods give a very short proof that the parallel greedy spanner adds O(kn^{1+1/k}) edges, improving upon known bounds.
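The classical greedy (2k-1)-spanner that such constructions build on can be sketched as follows: scan edges by nondecreasing weight and keep an edge only if the spanner built so far does not already approximate it. This is a background sketch of the well-known multiplicative greedy algorithm (function names are illustrative), not the paper's (α,β) procedure.

```python
import heapq

def dijkstra(adj, s, t):
    """Shortest s-t distance in the current spanner (inf if disconnected)."""
    dist, pq = {s: 0}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

def greedy_spanner(n, edges, k):
    """Classical greedy multiplicative (2k-1)-spanner: process edges by
    nondecreasing weight; keep (u, v, w) iff the current spanner distance
    between u and v exceeds (2k-1) * w."""
    adj = {u: [] for u in range(n)}
    spanner = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if dijkstra(adj, u, v) > (2 * k - 1) * w:
            adj[u].append((v, w)); adj[v].append((u, w))
            spanner.append((u, v, w))
    return spanner
```

Every skipped edge already has a path of stretch at most 2k-1 in the spanner, and the kept edges form a graph of girth above 2k, which bounds the size by O(n^{1+1/k}).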

Cite as

Elizaveta Popova and Elad Tzalik. New Greedy Spanners and Applications. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 107:1-107:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{popova_et_al:LIPIcs.ITCS.2026.107,
  author =	{Popova, Elizaveta and Tzalik, Elad},
  title =	{{New Greedy Spanners and Applications}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{107:1--107:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.107},
  URN =		{urn:nbn:de:0030-drops-253945},
  doi =		{10.4230/LIPIcs.ITCS.2026.107},
  annote =	{Keywords: Graph Spanners, Greedy Algorithms}
}
Document
The Learning Stabilizers with Noise Problem

Authors: Alexander Poremba, Yihui Quek, and Peter Shor


Abstract
Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory. Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove worst-case to average-case reductions for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.

Cite as

Alexander Poremba, Yihui Quek, and Peter Shor. The Learning Stabilizers with Noise Problem. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 108:1-108:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{poremba_et_al:LIPIcs.ITCS.2026.108,
  author =	{Poremba, Alexander and Quek, Yihui and Shor, Peter},
  title =	{{The Learning Stabilizers with Noise Problem}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{108:1--108:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.108},
  URN =		{urn:nbn:de:0030-drops-253950},
  doi =		{10.4230/LIPIcs.ITCS.2026.108},
  annote =	{Keywords: Random quantum stabilizer codes, average-case hardness}
}
Document
Cloning Games, Black Holes and Cryptography

Authors: Alexander Poremba, Seyoon Ragavan, and Vinod Vaikuntanathan


Abstract
In this work, we introduce a new toolkit for analyzing cloning games, a notion that captures stronger and more quantitative versions of the celebrated quantum no-cloning theorem. This framework allows us to analyze a new cloning game based on binary phase states. Our results provide evidence that these games may be able to overcome important limitations of previous candidates based on BB84 states and subspace coset states: in a model where the adversaries are restricted to making a single oracle query, we show that the binary phase variant is t-copy secure when t = o(n/log n). Moreover, for constant t, we obtain the first optimal bounds of O(2^{-n}), asymptotically matching the value attained by a trivial adversarial strategy. We also show a worst-case to average-case reduction which allows us to show the same quantitative results for the new and natural notion of Haar cloning games. Our analytic toolkit, which we believe will find further applications, is based on binary subtypes and uses novel bounds on the operator norms of block-wise tensor products of matrices. To illustrate the effectiveness of these new techniques, we present two applications: first, in black-hole physics, where our asymptotically optimal bound offers quantitative insights into information scrambling in idealized models of black holes; and second, in unclonable cryptography, where we (a) construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, and (b) propose and provide evidence for the security of multi-copy unclonable encryption schemes.

Cite as

Alexander Poremba, Seyoon Ragavan, and Vinod Vaikuntanathan. Cloning Games, Black Holes and Cryptography. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 109:1-109:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{poremba_et_al:LIPIcs.ITCS.2026.109,
  author =	{Poremba, Alexander and Ragavan, Seyoon and Vaikuntanathan, Vinod},
  title =	{{Cloning Games, Black Holes and Cryptography}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{109:1--109:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.109},
  URN =		{urn:nbn:de:0030-drops-253961},
  doi =		{10.4230/LIPIcs.ITCS.2026.109},
  annote =	{Keywords: Unclonable cryptography, quantum pseudorandomness, black hole physics}
}
Document
Optimal Two-Round Communication Lower Bound for Graph Connectivity via Pointer Chasing

Authors: Jaikumar Radhakrishnan, Chaitanya Reddy, and Rakesh Venkat


Abstract
We consider the communication complexity of the graph connectivity problem, where the edges of an n-vertex undirected graph G are distributed between two parties Alice and Bob, who are then required to communicate to determine if G is connected. We show that in any randomized protocol with two rounds of communication, Alice and Bob must exchange Ω(n log n) bits; such a lower bound for one-round protocols was shown by Sun and Woodruff (APPROX/RANDOM 2015). A one-round deterministic protocol, where Alice sends O(n log n) bits and Bob determines the answer, was observed by Hajnal, Maass and Turán (STOC 1988); they also showed a matching lower bound of Ω(n log n) bits for deterministic protocols with unbounded rounds of communication. For randomized protocols, a reduction from the set disjointness problem due to Babai, Frankl and Simon (FOCS 1986) implies a randomized lower bound of Ω(n) even with unbounded rounds of communication. Whether this lower bound can be improved to Ω(n log n) has been an outstanding open question, whose algorithmic implications were recently emphasized by Apers, Efron, Gawrychowski, Lee, Mukhopadhyay and Nanongkai (FOCS 2022). Our lower bound for randomized two-round protocols is based on a reduction from a restricted version of the two-player pointer chasing problem originally studied by Papadimitriou and Sipser (JCSS 1984). Using this reduction, we show an ω(n) lower bound on graph connectivity for any constant number of rounds by extending deterministic lower bounds shown by Ponzio, Radhakrishnan and Venkatesh (JCSS 2001) to the randomized setting.

Cite as

Jaikumar Radhakrishnan, Chaitanya Reddy, and Rakesh Venkat. Optimal Two-Round Communication Lower Bound for Graph Connectivity via Pointer Chasing. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 110:1-110:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{radhakrishnan_et_al:LIPIcs.ITCS.2026.110,
  author =	{Radhakrishnan, Jaikumar and Reddy, Chaitanya and Venkat, Rakesh},
  title =	{{Optimal Two-Round Communication Lower Bound for Graph Connectivity via Pointer Chasing}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{110:1--110:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.110},
  URN =		{urn:nbn:de:0030-drops-253974},
  doi =		{10.4230/LIPIcs.ITCS.2026.110},
  annote =	{Keywords: Communication complexity}
}
Document
Hardness of Range Avoidance and Proof Complexity Generators from Demi-Bits

Authors: Hanlin Ren, Yichuan Wang, and Yan Zhong


Abstract
Given a circuit G: {0, 1}ⁿ → {0, 1}^m with m > n, the range avoidance problem (Avoid) asks to output a string y ∈ {0, 1}^m that is not in the range of G. Besides its profound connection to circuit complexity and explicit construction problems, this problem is also related to the existence of proof complexity generators - circuits G: {0, 1}ⁿ → {0, 1}^m where m > n but for every y ∈ {0, 1}^m, it is infeasible to prove the statement "y ∉ Range(G)" in a given propositional proof system. This paper connects these two problems with the existence of demi-bits generators, a fundamental cryptographic primitive against nondeterministic adversaries introduced by Rudich (RANDOM '97).
- We show that the existence of demi-bits generators implies Avoid is hard for nondeterministic algorithms. This resolves an open problem raised by Chen and Li (STOC '24). Furthermore, assuming the demi-hardness of certain LPN-style generators or Goldreich’s PRG, we prove the hardness of Avoid even when the instances are constant-degree polynomials over 𝔽₂.
- We show that the dual weak pigeonhole principle is unprovable in Cook’s theory PV₁ under the existence of demi-bits generators secure against AM/_{O(1)}, thereby separating Jeřábek’s theory APC₁ from PV₁. Previously, Ilango, Li, and Williams (STOC '23) obtained the same separation under different (and arguably stronger) cryptographic assumptions.
- We transform demi-bits generators to proof complexity generators that are pseudo-surjective in a certain parameter regime. Pseudo-surjectivity is the strongest form of hardness considered in the literature for proof complexity generators.
Our constructions are inspired by the recent breakthroughs on the hardness of Avoid by Ilango, Li, and Williams (STOC '23) and Chen and Li (STOC '24). We use randomness extractors to significantly simplify the construction and the proof.

Cite as

Hanlin Ren, Yichuan Wang, and Yan Zhong. Hardness of Range Avoidance and Proof Complexity Generators from Demi-Bits. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 111:1-111:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{ren_et_al:LIPIcs.ITCS.2026.111,
  author =	{Ren, Hanlin and Wang, Yichuan and Zhong, Yan},
  title =	{{Hardness of Range Avoidance and Proof Complexity Generators from Demi-Bits}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{111:1--111:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.111},
  URN =		{urn:nbn:de:0030-drops-253982},
  doi =		{10.4230/LIPIcs.ITCS.2026.111},
  annote =	{Keywords: Range Avoidance, Proof Complexity Generators}
}
Document
Lower Bounds Beyond DNF of Parities

Authors: Artur Riazanov, Anastasia Sofronova, and Dmitry Sokolov


Abstract
We consider a subclass of AC⁰[2] circuits that simultaneously captures DNF∘Xor and depth-3 AC⁰ circuits. For this class we show a technique for proving lower bounds inspired by the top-down approach. We give lower bounds for the middle slice function, inner product function, and affine dispersers.

Cite as

Artur Riazanov, Anastasia Sofronova, and Dmitry Sokolov. Lower Bounds Beyond DNF of Parities. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 112:1-112:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{riazanov_et_al:LIPIcs.ITCS.2026.112,
  author =	{Riazanov, Artur and Sofronova, Anastasia and Sokolov, Dmitry},
  title =	{{Lower Bounds Beyond DNF of Parities}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{112:1--112:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.112},
  URN =		{urn:nbn:de:0030-drops-253996},
  doi =		{10.4230/LIPIcs.ITCS.2026.112},
  annote =	{Keywords: boolean circuits, top-down, unpredictability}
}
Document
Multi-Quadratic Sum-Of-Squares Lower Bounds Imply VNC¹ ≠ VNP

Authors: Benjamin Rossman and Davidson Zhu


Abstract
The sum-of-squares (SoS) complexity of a d-multiquadratic polynomial f (quadratic in each of d blocks of n variables) is the minimum s such that f = ∑_{i = 1}^s g_i² with each g_i d-multilinear. In the case d = 2, Hrubeš, Wigderson and Yehudayoff [Hrubeš et al., 2011] showed that an n^{1+Ω(1)} lower bound on the SoS complexity of explicit biquadratic polynomials implies an exponential lower bound for non-commutative arithmetic circuits. In this paper, we establish an analogous connection between general multiquadratic sum-of-squares and commutative arithmetic formulas. Specifically, we show that an n^{d-o(log d)} lower bound on the SoS complexity of explicit d-multiquadratic polynomials, for any d = d(n) with ω(1) ≤ d(n) ≤ O((log n)/(log log n)), would separate the algebraic complexity classes VNC¹ and VNP.

Cite as

Benjamin Rossman and Davidson Zhu. Multi-Quadratic Sum-Of-Squares Lower Bounds Imply VNC¹ ≠ VNP. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 113:1-113:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{rossman_et_al:LIPIcs.ITCS.2026.113,
  author =	{Rossman, Benjamin and Zhu, Davidson},
  title =	{{Multi-Quadratic Sum-Of-Squares Lower Bounds Imply VNC¹ ≠ VNP}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{113:1--113:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.113},
  URN =		{urn:nbn:de:0030-drops-254006},
  doi =		{10.4230/LIPIcs.ITCS.2026.113},
  annote =	{Keywords: sum-of-squares, arithmetic formulas}
}
Document
Zero-Freeness Is All You Need: A Weitz-Type FPTAS for the Entire Lee-Yang Zero-Free Region

Authors: Shuai Shao and Ke Shi


Abstract
We present a Weitz-type FPTAS for the ferromagnetic Ising model across the entire Lee–Yang zero-free region, without relying on the strong spatial mixing (SSM) property. Our algorithm is Weitz-type for two reasons. First, it expresses the partition function as a telescoping product of ratios, with the key being to approximate each ratio. Second, it uses Weitz’s self-avoiding walk tree, and truncates it at logarithmic depth to give a good and efficient approximation. The key difference from the standard Weitz algorithm is that we approximate a carefully designed edge-deletion ratio instead of the marginal probability of a vertex being assigned a particular spin, ensuring our algorithm does not require SSM. Furthermore, by establishing local dependence of coefficients (LDC), we indeed prove a novel form of SSM for these edge-deletion ratios, which, in turn, implies the standard SSM for the random cluster model. This is the first SSM result for the random cluster model on general graphs, beyond lattices. Our proof of LDC is based on a new division relation, and we show such relations hold quite universally. This leads to a broadly applicable framework for proving LDC across a variety of models, including the Potts model, the hypergraph independence polynomial, and Holant problems. Combined with existing zero-freeness results for these models, we derive new SSM results for them.
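The "telescoping product of ratios" idea can be checked exactly on a tiny instance. A minimal sketch, by brute-force enumeration (illustrative only: the paper's FPTAS approximates each edge-deletion ratio via a truncated self-avoiding walk tree rather than enumerating spin assignments):

```python
from itertools import product
from math import exp, isclose

def ising_Z(n, edges, beta):
    """Exact ferromagnetic Ising partition function by brute force:
    Z = sum over sigma in {-1,+1}^n of
        exp(beta * #{edges whose endpoints agree under sigma})."""
    Z = 0.0
    for sigma in product((-1, 1), repeat=n):
        agree = sum(1 for (u, v) in edges if sigma[u] == sigma[v])
        Z += exp(beta * agree)
    return Z

# Telescoping: adding edges one at a time, Z(G) equals Z(empty graph)
# times the product of the edge ratios Z(G_i) / Z(G_{i-1}).
n, beta = 4, 0.5
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
Z = ising_Z(n, [], beta)                  # empty graph: Z = 2^n
for i in range(1, len(edges) + 1):
    Z *= ising_Z(n, edges[:i], beta) / ising_Z(n, edges[:i - 1], beta)
assert isclose(Z, ising_Z(n, edges, beta))
```

The algorithmic content of the paper lies in approximating each ratio efficiently wherever the Lee-Yang zero-freeness holds, with no SSM assumption.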

Cite as

Shuai Shao and Ke Shi. Zero-Freeness Is All You Need: A Weitz-Type FPTAS for the Entire Lee-Yang Zero-Free Region. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 114:1-114:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{shao_et_al:LIPIcs.ITCS.2026.114,
  author =	{Shao, Shuai and Shi, Ke},
  title =	{{Zero-Freeness Is All You Need: A Weitz-Type FPTAS for the Entire Lee-Yang Zero-Free Region}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{114:1--114:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.114},
  URN =		{urn:nbn:de:0030-drops-254010},
  doi =		{10.4230/LIPIcs.ITCS.2026.114},
  annote =	{Keywords: Ferromagnetic Ising Model, Lee–Yang Theorem, Weitz-Type FPTAS, Strong Spatial Mixing, Random Cluster Model}
}
Document
Lower Bounds for Noncommutative Circuits with Low Syntactic Degree

Authors: Pratik Shastri


Abstract
Proving lower bounds on the size of noncommutative arithmetic circuits is an important problem in arithmetic circuit complexity. For explicit n-variate polynomials of degree Θ(n), the best known general bound is Ω(n log n) [Strassen, 1973; Walter Baur and Volker Strassen, 1983]. Recent work of Chatterjee and Hrubeš [Chatterjee and Hrubeš, 2023] has provided stronger Ω(n²) bounds for the restricted class of homogeneous circuits. The present paper extends these results to a broader class of circuits by using syntactic degree as a complexity measure. The syntactic degree of a circuit is a well-known parameter that measures the extent to which high-degree computation is used in the circuit. A homogeneous circuit computing a degree-d polynomial can be assumed, without loss of generality, to have syntactic degree exactly d [Fournier et al., 2024]. We generalize this by considering circuits that are not necessarily homogeneous but have low syntactic degree. Specifically, for an explicit n-variate, degree-n polynomial f, we show that any circuit with syntactic degree O(n) computing f must have size Ω(n^{1+c}) for some constant c > 0. We also show that any circuit with syntactic degree o(n log n) computing the same f must have size ω(n log n). We further analyze the circuit size required to compute f based on the number of distinct syntactic degrees appearing in the circuit. Our analysis yields an ω(n log n) size lower bound for all but a narrow parameter regime where an improved bound is not obtained. Finally, we observe that low-syntactic-degree circuits are more powerful than homogeneous circuits in a fine-grained sense: there exists an n-variate, degree-Θ(n) polynomial that has a circuit of size O(n log² n) and syntactic degree O(n), but any homogeneous circuit computing it requires size Ω(n²).

Cite as

Pratik Shastri. Lower Bounds for Noncommutative Circuits with Low Syntactic Degree. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 115:1-115:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{shastri:LIPIcs.ITCS.2026.115,
  author =	{Shastri, Pratik},
  title =	{{Lower Bounds for Noncommutative Circuits with Low Syntactic Degree}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{115:1--115:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.115},
  URN =		{urn:nbn:de:0030-drops-254028},
  doi =		{10.4230/LIPIcs.ITCS.2026.115},
  annote =	{Keywords: Noncommutative Circuits, Lower Bounds, Circuit Complexity, Algebraic Complexity}
}
Document
Decentralized Data Archival: New Definitions and Constructions

Authors: Elaine Shi, Rose Silver, and Changrui Mu


Abstract
We initiate the study of a new abstraction called incremental decentralized data archival (iDDA). Specifically, imagine that there is an ever-growing, massive database such as a blockchain, a comprehensive human knowledge base like Wikipedia, or the Internet archive. We want to build a decentralized archival system for such datasets to ensure long-term robustness and sustainability. We identify several important properties that an iDDA scheme should satisfy. First, to promote heterogeneity and decentralization, we want to encourage even weak nodes with limited space (e.g., users' home computers) to contribute. The minimum space requirement to contribute should be approximately independent of the data size. Second, if a collection of nodes together receive rewards commensurate with contributing a total of m blocks of space, then we want the following reassurances: 1) if m is at least the database size, we should be able to reconstruct the entire dataset; and 2) these nodes should actually be committing roughly m space in aggregate. Specifically, when m is much larger than the data size, these nodes cannot store just a single copy of the database and yet impersonate arbitrarily many pseudonyms to collect unbounded rewards. We propose new definitions that mathematically formalize the aforementioned requirements of an iDDA scheme. We also devise an efficient construction in the random oracle model which satisfies the desired security requirements. Our scheme incurs only Õ(1) audit cost, as well as Õ(1) update cost for both the publisher and each node, where Õ(⋅) hides polylogarithmic factors. Further, the minimum space provisioning required to contribute is as small as polylogarithmic. Our construction exposes several interesting technical challenges. 
Specifically, we show that a straightforward application of the standard hierarchical data structure fails, since both our security definition and the underlying cryptographic primitives we employ lack the desired compositional guarantees. We devise novel techniques to overcome these compositional issues, resulting in a construction with provable security while still retaining efficiency. Finally, our new definitions also make a conceptual contribution, and lay the theoretical groundwork for the study of iDDA. We raise several interesting open problems along this direction.

Cite as

Elaine Shi, Rose Silver, and Changrui Mu. Decentralized Data Archival: New Definitions and Constructions. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 116:1-116:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{shi_et_al:LIPIcs.ITCS.2026.116,
  author =	{Shi, Elaine and Silver, Rose and Mu, Changrui},
  title =	{{Decentralized Data Archival: New Definitions and Constructions}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{116:1--116:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.116},
  URN =		{urn:nbn:de:0030-drops-254037},
  doi =		{10.4230/LIPIcs.ITCS.2026.116},
  annote =	{Keywords: Decentralized Data Archival}
}
Document
On the Power of Computationally Sound Interactive Proofs of Proximity

Authors: Hadar Strauss


Abstract
Interactive proofs of proximity (IPPs) are a relaxation of interactive proofs, analogous to property testing, in which soundness is required to hold only for inputs that are ε-far from the property being verified, where ε > 0 is a proximity parameter. In such proof systems, the verifier has oracle access to the input, and it engages in two types of activities before making its decision: querying the input oracle and communicating with the prover. The main objective is to achieve protocols where both the query and communication complexities are extremely low. In this work, we focus on computationally sound IPPs (cs-IPPs). We study their power in two aspects:
- Query complexity: We show that, assuming the existence of collision-resistant hash functions (CRHFs), any public-coin cs-IPP with query complexity q can be transformed into a cs-IPP that makes only O(1/ε) queries, while increasing the communication complexity by roughly q. If we further assume the existence of a good computational PIR (private information retrieval) scheme, then a similar transformation holds for general (i.e., possibly private-coin) cs-IPPs.
- Coordination: Aside from the low query complexity, the resulting cs-IPP has only minimal coordination between the verifier’s two activities. The general definition of IPPs allows the verifier to fully coordinate its interaction with the prover and its queries to the input oracle. Goldreich, Rothblum, and Skverer (ITCS 2023) introduced two restricted models of IPPs that are minimally coordinated: the pre-coordinated model, where no information flows between the querying and interacting activities, but they may use a common source of randomness, and the isolated model, where the two activities are fully independent, each operating with a separate source of randomness. Our transformation shows that (under the aforementioned computational assumptions) any cs-IPP can be made to be in the pre-coordinated model, while preserving its efficiency. Hence, pre-coordinated cs-IPPs are essentially as powerful as general cs-IPPs. In contrast, we show that cs-IPPs in the isolated model are extremely limited, offering almost no advantage over property testers. Specifically, extending a result of Goldreich et al. for unconditionally sound IPPs in the isolated model, we show that if a property has a cs-IPP in the isolated model that makes q queries and uses c > 0 bits of communication, then it has a tester with query complexity O(c⋅q).

Cite as

Hadar Strauss. On the Power of Computationally Sound Interactive Proofs of Proximity. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 117:1-117:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{strauss:LIPIcs.ITCS.2026.117,
  author =	{Strauss, Hadar},
  title =	{{On the Power of Computationally Sound Interactive Proofs of Proximity}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{117:1--117:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.117},
  URN =		{urn:nbn:de:0030-drops-254047},
  doi =		{10.4230/LIPIcs.ITCS.2026.117},
  annote =	{Keywords: Interactive Proofs of Proximity, Computational Soundness}
}
Document
Markov Chain Robustness

Authors: David Zuckerman


Abstract
When a Markov chain models nature or social interactions, it is likely not followed exactly, but only approximately. We therefore introduce several notions of robustness for a Markov chain P. Our standard adversary can dynamically change transition probabilities of P by 1 ± ε, and our strong adversary can completely control each transition independently with probability ε, as in a model by Azar, Broder, Karlin, Linial, and Phillips [Y. Azar et al., 1996]. These adversaries are equivalent up to constant factors if the degrees are constant. Our adversarial chains need not converge. We define and prove various robustness properties of a reversible chain P, i.e., a random walk on a connected undirected graph G. Let d be the maximum degree, Δ the diameter, π the stationary distribution, and t_{mix} the mixing time.
1) We define a natural analogue π^+(S) that upper bounds limiting frequencies in a set S in the adversarial chain. We show that if ε = O(1/√{dt_{up}}), where t_{up} is a variant of the mixing time, then π^+(S) = O(π(S)^{1-α}) for any α > 0.
2) We define the mixing time robustness as the largest ε such that the approximate mixing time increases by only a constant factor, and prove that it is Ω(1/√{dt_{mix}}).
3) We define the hitting time robustness as the largest ε such that the maximum hitting time increases by only a constant factor, and show that it is Ω(1/t_{mix}). For trees, we show it is Ω(1/Δ).
4) We define the cover time robustness as the largest ε such that the cover time increases by only a constant factor. We show that in most graphs it is at least the hitting time robustness.
5) We characterize the mixing, hitting, and cover time robustnesses for constant-degree regular expander graphs up to constant factors. They are Θ(1), Θ(1/log n), and Θ(1/log n), respectively.
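One way the standard adversary can be realized is sketched below (an illustrative interpretation of the model, not the paper's analysis: each transition probability is scaled by an adversarially chosen factor in [1-ε, 1+ε] and the rows are renormalized):

```python
import random

def perturb(P, eps, rng):
    """Sketch of a 'standard adversary' perturbation: multiply each
    transition probability by a factor in [1 - eps, 1 + eps] (here
    chosen at random in place of an adversary), then renormalize each
    row so it remains a probability distribution."""
    Q = []
    for row in P:
        scaled = [p * rng.uniform(1 - eps, 1 + eps) for p in row]
        total = sum(scaled)
        Q.append([x / total for x in scaled])
    return Q

# Lazy random walk on the 4-cycle: stay with probability 1/2,
# otherwise move to a uniformly random neighbor.
P = [[0.5 if i == j else (0.25 if (i - j) % 4 in (1, 3) else 0.0)
     for j in range(4)] for i in range(4)]
Q = perturb(P, 0.1, random.Random(0))
for i in range(4):
    assert abs(sum(Q[i]) - 1.0) < 1e-12   # rows are still stochastic
    for j in range(4):
        if P[i][j] == 0.0:
            assert Q[i][j] == 0.0         # support of P is preserved
```

Note that after renormalization every entry still lies within a multiplicative factor of roughly (1+ε)/(1-ε) of the original, so this stays within the 1 ± O(ε) regime the abstract describes.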

Cite as

David Zuckerman. Markov Chain Robustness. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 118:1-118:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{zuckerman:LIPIcs.ITCS.2026.118,
  author =	{Zuckerman, David},
  title =	{{Markov Chain Robustness}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{118:1--118:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.118},
  URN =		{urn:nbn:de:0030-drops-254056},
  doi =		{10.4230/LIPIcs.ITCS.2026.118},
  annote =	{Keywords: Markov chain, random walk, mixing time, hitting time, cover time, robustness, expander graph}
}
