LIPIcs, Volume 339

40th Computational Complexity Conference (CCC 2025)




Event

CCC 2025, August 5-8, 2025, Toronto, Canada

Editor

Srikanth Srinivasan
  • University of Copenhagen, Denmark

Publication Details

  • published at: 2025-07-29
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-379-9
  • DBLP: db/conf/coco/coco2025

Documents
Complete Volume
LIPIcs, Volume 339, CCC 2025, Complete Volume

Authors: Srikanth Srinivasan


Abstract
LIPIcs, Volume 339, CCC 2025, Complete Volume

Cite as

40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 1-990, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@Proceedings{srinivasan:LIPIcs.CCC.2025,
  title =	{{LIPIcs, Volume 339, CCC 2025, Complete Volume}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{1--990},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025},
  URN =		{urn:nbn:de:0030-drops-240786},
  doi =		{10.4230/LIPIcs.CCC.2025},
  annote =	{Keywords: LIPIcs, Volume 339, CCC 2025, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Srikanth Srinivasan


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 0:i-0:xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{srinivasan:LIPIcs.CCC.2025.0,
  author =	{Srinivasan, Srikanth},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{0:i--0:xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.0},
  URN =		{urn:nbn:de:0030-drops-240778},
  doi =		{10.4230/LIPIcs.CCC.2025.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
List Decoding Quotient Reed-Muller Codes

Authors: Omri Gotlib, Tali Kaufman, and Shachar Lovett


Abstract
Reed-Muller codes consist of evaluations of n-variate polynomials over a finite field 𝔽 with degree at most d. Much like every linear code, Reed-Muller codes can be characterized by constraints, where a codeword is valid if and only if it satisfies all degree-d constraints. For a subset X̃ ⊆ 𝔽ⁿ, we introduce the notion of X̃-quotient Reed-Muller code. A function F:X̃ → 𝔽 is a valid codeword in the quotient code if it satisfies all the constraints of degree-d polynomials lying in X̃. This gives rise to a novel phenomenon: a quotient codeword may have many extensions to original codewords. This weakens the connection between original and quotient codewords, which introduces a richer range of behaviors along with substantial new challenges. Our goal is to answer the following question: what properties of X̃ imply that the quotient code inherits its distance and list-decoding radius from the original code? We address this question using techniques developed by Bhowmick and Lovett [Abhishek Bhowmick and Shachar Lovett, 2014], identifying key properties of 𝔽ⁿ used in their proof and extending them to general subsets X̃ ⊆ 𝔽ⁿ. By introducing a new tool, we overcome the novel challenge in analyzing the quotient code that arises from the weak connection between original and quotient codewords. This enables us to apply known results from additive combinatorics and algebraic geometry [David Kazhdan and Tamar Ziegler, 2018; David Kazhdan and Tamar Ziegler, 2019; Amichai Lampert and Tamar Ziegler, 2021] to show that when X̃ is a high rank variety, X̃-quotient Reed-Muller codes inherit the distance and list-decoding parameters from the original Reed-Muller codes.

Cite as

Omri Gotlib, Tali Kaufman, and Shachar Lovett. List Decoding Quotient Reed-Muller Codes. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 1:1-1:44, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gotlib_et_al:LIPIcs.CCC.2025.1,
  author =	{Gotlib, Omri and Kaufman, Tali and Lovett, Shachar},
  title =	{{List Decoding Quotient Reed-Muller Codes}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{1:1--1:44},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.1},
  URN =		{urn:nbn:de:0030-drops-236957},
  doi =		{10.4230/LIPIcs.CCC.2025.1},
  annote =	{Keywords: Reed-Muller Codes, Quotient Code, Quotient Reed-Muller Code, List Decoding, High Rank Variety, High-Order Fourier Analysis, Error-Correcting Codes}
}
Hardness Amplification for Real-Valued Functions

Authors: Yunqi Li and Prashant Nalini Vasudevan


Abstract
Given an integer-valued function f:{0,1}ⁿ → {0,1,… , m-1} that is mildly hard to compute on instances drawn from some distribution D over {0,1}ⁿ, we show that the function g(x_1, … , x_t) = f(x_1) + ⋯ + f(x_t) is strongly hard to compute on instances (x_1,… ,x_t) drawn from the product distribution D^t. We also show the same for the task of approximately computing real-valued functions f:{0,1}ⁿ → [0,m). Our theorems immediately imply hardness self-amplification for several natural problems, including Max-Clique, Max-SAT, Approximate #SAT, and Entropy Estimation.
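The t-fold sum construction stated in the abstract can be sketched directly. This is an illustrative toy, not the paper's proof machinery; the helper names (`amplify`, the example `f`) are invented here.

```python
# Hedged sketch of the amplified function from the abstract: given a
# mildly hard integer-valued f on n-bit inputs, the claim is that
# g(x_1, ..., x_t) = f(x_1) + ... + f(x_t) is strongly hard on the
# product distribution D^t. Names below are illustrative only.

def amplify(f, t):
    """Return g, which takes a tuple of t independent instances."""
    def g(xs):
        assert len(xs) == t
        return sum(f(x) for x in xs)
    return g

# Toy instance: f = Hamming weight mod 4 on bit strings encoded as ints.
f = lambda x: bin(x).count("1") % 4
g = amplify(f, 3)
print(g((0b101, 0b111, 0b001)))  # 2 + 3 + 1 = 6
```

Computing g exactly requires computing f on every coordinate, which is the intuition behind amplification: a per-instance failure probability compounds across the t independent instances.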

Cite as

Yunqi Li and Prashant Nalini Vasudevan. Hardness Amplification for Real-Valued Functions. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 2:1-2:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{li_et_al:LIPIcs.CCC.2025.2,
  author =	{Li, Yunqi and Vasudevan, Prashant Nalini},
  title =	{{Hardness Amplification for Real-Valued Functions}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{2:1--2:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.2},
  URN =		{urn:nbn:de:0030-drops-236967},
  doi =		{10.4230/LIPIcs.CCC.2025.2},
  annote =	{Keywords: Average-case complexity, hardness amplification}
}
Quantum Threshold Is Powerful

Authors: Daniel Grier and Jackson Morris


Abstract
In 2005, Høyer and Špalek showed that constant-depth quantum circuits augmented with multi-qubit Fanout gates are quite powerful, able to compute a wide variety of Boolean functions as well as the quantum Fourier transform. They also asked what other multi-qubit gates could rival Fanout in terms of computational power, and suggested that the quantum Threshold gate might be one such candidate. Threshold is the gate that indicates if the Hamming weight of a classical basis state input is greater than some target value. We prove that Threshold is indeed powerful - there are polynomial-size constant-depth quantum circuits with Threshold gates that compute Fanout to high fidelity. Our proof is a generalization of a proof by Rosenthal that exponential-size constant-depth circuits with generalized Toffoli gates can compute Fanout. Our construction reveals that other quantum gates able to "weakly approximate" Parity can also be used as substitutes for Fanout.

Cite as

Daniel Grier and Jackson Morris. Quantum Threshold Is Powerful. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 3:1-3:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{grier_et_al:LIPIcs.CCC.2025.3,
  author =	{Grier, Daniel and Morris, Jackson},
  title =	{{Quantum Threshold Is Powerful}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{3:1--3:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.3},
  URN =		{urn:nbn:de:0030-drops-236979},
  doi =		{10.4230/LIPIcs.CCC.2025.3},
  annote =	{Keywords: Shallow Quantum Circuits, Circuit Complexity, Threshold Circuits}
}
Hardness of Clique Approximation for Monotone Circuits

Authors: Jarosław Błasiok and Linus Meierhöfer


Abstract
We consider the problem of approximating the size of the largest clique in a graph using a monotone circuit. Concretely, we focus on distinguishing a random Erdős–Rényi graph 𝒢_{n,p}, with p = n^{-2/(α-1)} chosen such that with high probability it does not even contain an α-clique, from a random clique on β vertices (where α ≤ β). Using the approximation method of Razborov, Alon and Boppana showed in their influential work in 1987 that as long as √{α} β < n^{1-δ}/log n, this problem requires a monotone circuit of size n^Ω(δ√α), implying a lower bound of 2^Ω̃(n^{1/3}) for the exact version of the problem Clique_k when k≈ n^{2/3}. Recently, Cavalar, Kumar, and Rossman improved their result by showing a tight lower bound n^Ω(k), in a limited range k ≤ n^{1/3}, implying a comparable 2^Ω̃(n^{1/3}) lower bound after choosing the largest admissible k. We combine the ideas of Cavalar, Kumar and Rossman with recent breakthrough results on the sunflower conjecture by Alweiss, Lovett, Wu, and Zhang to show that as long as α β < n^{1-δ}/log n, any monotone circuit rejecting a 𝒢_{n,p} graph while accepting a β-clique needs to have size at least n^Ω(δ²α); this implies a stronger 2^Ω̃(√n) lower bound for the unrestricted version of the problem. We complement this result with a construction of an explicit monotone circuit of size O(n^{δ² α/2}) which rejects 𝒢_{n,p} and accepts any graph containing a β-clique whenever β > n^{1-δ}. In particular, these two theorems give a precise characterization of the smallest β-clique that can be distinguished from 𝒢_{n, 1/2}: when β > n / 2^{C √{log n}}, there is a polynomial-size circuit that solves it, while for β < n / 2^ω(√{log n}) every circuit needs size n^ω(1).

Cite as

Jarosław Błasiok and Linus Meierhöfer. Hardness of Clique Approximation for Monotone Circuits. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{blasiok_et_al:LIPIcs.CCC.2025.4,
  author =	{B{\l}asiok, Jaros{\l}aw and Meierh\"{o}fer, Linus},
  title =	{{Hardness of Clique Approximation for Monotone Circuits}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.4},
  URN =		{urn:nbn:de:0030-drops-236987},
  doi =		{10.4230/LIPIcs.CCC.2025.4},
  annote =	{Keywords: circuit lower bounds, monotone circuits, sunflower conjecture}
}
Improved Separation Between Quantum and Classical Computers for Sampling and Functional Tasks

Authors: Simon C. Marshal, Scott Aaronson, and Vedran Dunjko


Abstract
This paper furthers existing evidence that quantum computers are capable of computations beyond classical computers. Specifically, we show that the polynomial hierarchy collapses to its second level if: (i) quantum computers with postselection are as powerful as classical computers with postselection (PostBQP = PostBPP), or (ii) any one of several quantum sampling experiments (BosonSampling, IQP, DQC1) can be approximately performed by a classical computer (contingent on existing assumptions). This last result implies that if any of these experiments' hardness conjectures hold, then quantum computers can implement functions classical computers cannot (FBQP ≠ FBPP) unless the polynomial hierarchy collapses to its 2nd level. These results improve on previous work, which either achieved a collapse only to the third level or was concerned with exact sampling, a physically impractical case. The workhorse of these results is a new technical complexity-theoretic result which we believe could have value beyond quantum computation. In particular, we prove that if there exists an equivalence between problems solvable with an exact counting oracle and problems solvable with an approximate counting oracle, then the polynomial hierarchy collapses to its second level, indeed to ZPP^NP.

Cite as

Simon C. Marshal, Scott Aaronson, and Vedran Dunjko. Improved Separation Between Quantum and Classical Computers for Sampling and Functional Tasks. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{marshal_et_al:LIPIcs.CCC.2025.5,
  author =	{Marshal, Simon C. and Aaronson, Scott and Dunjko, Vedran},
  title =	{{Improved Separation Between Quantum and Classical Computers for Sampling and Functional Tasks}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.5},
  URN =		{urn:nbn:de:0030-drops-236991},
  doi =		{10.4230/LIPIcs.CCC.2025.5},
  annote =	{Keywords: Quantum advantage, Approximate counting, Boson sampling}
}
Near-Optimal Averaging Samplers and Matrix Samplers

Authors: Zhiyang Xun and David Zuckerman


Abstract
We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any δ < ε and any constant α > 0, our sampler uses m + O(log (1 / δ)) random bits to output t = O((1/ε² log 1/δ)^{1 + α}) samples Z_1, … , Z_t ∈ {0, 1}^m such that for any function f: {0, 1}^m → [0, 1], Pr[|1/t∑_{i=1}^t f(Z_i) - 𝔼[f]| ≤ ε] ≥ 1 - δ. The randomness complexity is optimal up to a constant factor, and the sample complexity is optimal up to the O((1/(ε²) log 1/(δ))^α) factor. Our technique generalizes to matrix samplers. A matrix sampler is defined similarly, except that f: {0, 1}^m → ℂ^{d×d} and the absolute value is replaced by the spectral norm. Our matrix sampler achieves randomness complexity m + Õ(log(d / δ)) and sample complexity O((1/ε² log d/δ)^{1 + α}) for any constant α > 0, both near-optimal with only a logarithmic factor in randomness complexity and an additional α exponent on the sample complexity. We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor above 1, when the min-entropy k = β n for a large enough constant β < 1. Finally, we generalize the definition of averaging sampler to any normed vector space.
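The averaging-sampler guarantee stated at the start of the abstract can be illustrated with a naive baseline. This sketch is not the paper's construction: i.i.d. sampling meets the accuracy guarantee by a Chernoff bound but spends t·m random bits, whereas the paper's point is achieving it with only m + O(log(1/δ)) bits. All names below are invented for illustration.

```python
import random

# Hedged illustration of the averaging-sampler condition from the
# abstract (NOT the paper's construction): t samples Z_1..Z_t from
# {0,1}^m should make the empirical mean of any f: {0,1}^m -> [0,1]
# eps-close to E[f] with probability >= 1 - delta. A naive i.i.d.
# sampler satisfies this but wastes t*m random bits.

def naive_sampler(m, t, rng):
    """Draw t independent uniform m-bit strings (as ints)."""
    return [rng.getrandbits(m) for _ in range(t)]

def empirical_mean(f, samples):
    return sum(f(z) for z in samples) / len(samples)

rng = random.Random(0)
m, t, eps = 16, 4000, 0.05
f = lambda z: bin(z).count("1") / m      # normalized Hamming weight; E[f] = 1/2
mean = empirical_mean(f, naive_sampler(m, t, rng))
print(abs(mean - 0.5) <= eps)
```

The matrix-sampler variant replaces f's range with d×d complex matrices and the absolute value with the spectral norm; the structure of the guarantee is otherwise the same.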

Cite as

Zhiyang Xun and David Zuckerman. Near-Optimal Averaging Samplers and Matrix Samplers. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 6:1-6:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{xun_et_al:LIPIcs.CCC.2025.6,
  author =	{Xun, Zhiyang and Zuckerman, David},
  title =	{{Near-Optimal Averaging Samplers and Matrix Samplers}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{6:1--6:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.6},
  URN =		{urn:nbn:de:0030-drops-237001},
  doi =		{10.4230/LIPIcs.CCC.2025.6},
  annote =	{Keywords: Pseudorandomness, Averaging Samplers, Randomness Extractors}
}
Sparser Abelian High Dimensional Expanders

Authors: Yotam Dikstein, Siqi Liu, and Avi Wigderson


Abstract
The focus of this paper is the development of new elementary techniques for the construction and analysis of high dimensional expanders. Specifically, we present two new explicit constructions of Cayley high dimensional expanders (HDXs) over the abelian group 𝔽₂ⁿ. Our expansion proofs use only linear algebra and combinatorial arguments. The first construction gives local spectral HDXs of any constant dimension and subpolynomial degree exp(n^ε) for every ε > 0, improving on a construction by Golowich [Golowich, 2023] which achieves ε = 1/2. [Golowich, 2023] derives these HDXs by sparsifying the complete Grassmann poset of subspaces. The novelty in our construction is the ability to sparsify any expanding Grassmann posets, leading to iterated sparsification and much smaller degrees. The sparse Grassmannian (which is of independent interest in the theory of HDXs) serves as the generating set of the Cayley graph. Our second construction gives a 2-dimensional HDX of any polynomial degree exp(ε n) for any constant ε > 0, which is simultaneously a spectral expander and a coboundary expander. To the best of our knowledge, this is the first such non-trivial construction. We name it the Johnson complex, as it is derived from the classical Johnson scheme, whose vertices serve as the generating set of this Cayley graph. This construction may be viewed as a derandomization of the recent random geometric complexes of [Liu et al., 2023]. Establishing coboundary expansion through Gromov’s "cone method" and the associated isoperimetric inequalities is the most intricate aspect of this construction. While these two constructions are quite different, we show that they both share a common structure, resembling the intersection patterns of vectors in the Hadamard code. We propose a general framework of such "Hadamard-like" constructions in the hope that it will yield new HDXs.

Cite as

Yotam Dikstein, Siqi Liu, and Avi Wigderson. Sparser Abelian High Dimensional Expanders. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 7:1-7:98, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dikstein_et_al:LIPIcs.CCC.2025.7,
  author =	{Dikstein, Yotam and Liu, Siqi and Wigderson, Avi},
  title =	{{Sparser Abelian High Dimensional Expanders}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{7:1--7:98},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.7},
  URN =		{urn:nbn:de:0030-drops-237013},
  doi =		{10.4230/LIPIcs.CCC.2025.7},
  annote =	{Keywords: Local spectral expander, coboundary expander, Grassmannian expander}
}
Amortized Closure and Its Applications in Lifting for Resolution over Parities

Authors: Klim Efremenko and Dmitry Itsykson


Abstract
The notion of closure of a set of linear forms, first introduced by Efremenko, Garlik, and Itsykson [Klim Efremenko et al., 2024], has proven instrumental in proving lower bounds on the sizes of regular and bounded-depth Res(⊕) refutations [Klim Efremenko et al., 2024; Yaroslav Alekseev and Dmitry Itsykson, 2025]. In this work, we present amortized closure, an enhancement that retains the properties of the original closure [Klim Efremenko et al., 2024] but offers tighter control on its growth. Specifically, adding a new linear form increases the amortized closure by at most one. We explore two applications that highlight the power of this new concept. Utilizing our newly defined amortized closure, we extend and provide a succinct and elegant proof of the recent lifting theorem by Chattopadhyay and Dvorak [Arkadev Chattopadhyay and Pavel Dvorak, 2025]. Namely, we show that for an unsatisfiable CNF formula φ and a 1-stifling gadget g: {0,1}^𝓁 → {0,1}, if the lifted formula φ∘g has a tree-like Res(⊕) refutation of size 2^d and width w, then φ has a resolution refutation of depth d and width w. The original theorem by Chattopadhyay and Dvorak [Arkadev Chattopadhyay and Pavel Dvorak, 2025] applies only to the more restrictive class of strongly stifling gadgets. As a more significant application of amortized closure, we show improved lower bounds for bounded-depth Res(⊕), extending the depth beyond that of Alekseev and Itsykson [Yaroslav Alekseev and Dmitry Itsykson, 2025]. Our result establishes an exponential lower bound for depth-Ω(n log n) Res(⊕) refutations of lifted Tseitin formulas, a notable improvement over the existing depth-Ω(n log log n) Res(⊕) lower bound.

Cite as

Klim Efremenko and Dmitry Itsykson. Amortized Closure and Its Applications in Lifting for Resolution over Parities. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 8:1-8:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{efremenko_et_al:LIPIcs.CCC.2025.8,
  author =	{Efremenko, Klim and Itsykson, Dmitry},
  title =	{{Amortized Closure and Its Applications in Lifting for Resolution over Parities}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{8:1--8:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.8},
  URN =		{urn:nbn:de:0030-drops-237023},
  doi =		{10.4230/LIPIcs.CCC.2025.8},
  annote =	{Keywords: lifting, resolution over parities, closure of linear forms, lower bounds, width, depth, size vs depth tradeoff}
}
Pseudorandom Bits for Non-Commutative Programs

Authors: Chin Ho Lee and Emanuele Viola


Abstract
We obtain new explicit pseudorandom generators for several computational models involving groups. Our main results are as follows: 1) We consider read-once group-products over a finite group G, i.e., tests of the form ∏_{i=1}^n (g_i)^{x_i} where g_i ∈ G, a special case of read-once permutation branching programs. We give generators with optimal seed length c_G log(n/ε) over any p-group. The proof uses the small-bias plus noise paradigm, but derandomizes the noise to avoid the recursion in previous work. Our generator works when the bits are read in any order. Previously for any non-commutative group the best seed length was ≥ log n log(1/ε), even for a fixed order. 2) We give a reduction that "lifts" suitable generators for group products over G to a generator that fools width-w block products, i.e., tests of the form ∏ (g_i)^{f_i} where the f_i are arbitrary functions on disjoint blocks of w bits. Block products generalize several previously studied classes. The reduction applies to groups that are mixing in a representation-theoretic sense that we identify. 3) Combining (2) with (1) and other works we obtain new generators for block products over the quaternions or over any commutative group, with nearly optimal seed length. In particular, we obtain generators for read-once polynomials modulo any fixed m with nearly optimal seed length. Previously this was known only for m = 2. 4) We give a new generator for products over "mixing groups." The construction departs from previous work and uses representation theory. For constant error, we obtain optimal seed length, improving on previous work (which applied to any group). This paper identifies a challenge in the area that is reminiscent of a roadblock in circuit complexity - handling composite moduli - and points to several classes of groups to be attacked next.

Cite as

Chin Ho Lee and Emanuele Viola. Pseudorandom Bits for Non-Commutative Programs. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 9:1-9:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{lee_et_al:LIPIcs.CCC.2025.9,
  author =	{Lee, Chin Ho and Viola, Emanuele},
  title =	{{Pseudorandom Bits for Non-Commutative Programs}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{9:1--9:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.9},
  URN =		{urn:nbn:de:0030-drops-237039},
  doi =		{10.4230/LIPIcs.CCC.2025.9},
  annote =	{Keywords: Group programs, Space-bounded derandomization, Representation theory}
}
Biased Linearity Testing in the 1% Regime

Authors: Subhash Khot and Kunal Mittal


Abstract
We study linearity testing over the p-biased hypercube ({0,1}ⁿ, μ_p^{⊗n}) in the 1% regime. For a distribution ν supported over {x ∈ {0,1}^k:∑_{i=1}^k x_i = 0 (mod 2)}, with marginal distribution μ_p in each coordinate, the corresponding k-query linearity test Lin(ν) proceeds as follows: Given query access to a function f:{0,1}ⁿ → {-1,1}, sample (x_1,… ,x_k)∼ ν^{⊗n}, query f on x_1,… ,x_k, and accept if and only if ∏_{i ∈ [k]} f(x_i) = 1. Building on the work of Bhangale, Khot, and Minzer (STOC '23), we show, for 0 < p ≤ 1/2, that if k ≥ 1+1/p, then there exists a distribution ν such that the test Lin(ν) works in the 1% regime; that is, any function f:{0,1}ⁿ → {-1,1} passing the test Lin(ν) with probability ≥ 1/2+ε, for some constant ε > 0, satisfies Pr_{x∼μ_p^{⊗n}}[f(x) = g(x)] ≥ 1/2+δ, for some linear function g, and a constant δ = δ(ε) > 0. Conversely, we show that if k < 1+1/p, then no such test Lin(ν) works in the 1% regime. Our key observation is that the linearity test Lin(ν) works if and only if the distribution ν satisfies a certain pairwise independence property.
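The shape of the test Lin(ν) described above can be sketched concretely. The sketch below instantiates it with the classical uniform case (p = 1/2, k = 3, i.e. the BLR test), where ν is uniform over even-parity triples of bits; the paper's p-biased distributions ν are not reproduced here, and all function names are invented for illustration.

```python
import random

# Hedged sketch of the k-query test Lin(nu) from the abstract: sample
# (x_1, ..., x_k) coordinate-wise from a distribution nu supported on
# even-parity k-tuples of bits, query f on each x_i, and accept iff the
# product of the f-values equals 1. Instantiated with the uniform
# p = 1/2, k = 3 case (BLR); the paper's p-biased nu is not given here.

def lin_test(f, n, k, sample_nu_coord, rng, trials=1000):
    accepts = 0
    for _ in range(trials):
        xs = [0] * k
        for i in range(n):                  # one nu-sample per coordinate
            bits = sample_nu_coord(rng)     # even-parity k-tuple of bits
            for j in range(k):
                xs[j] |= bits[j] << i
        prod = 1
        for x in xs:
            prod *= f(x)                    # f maps {0,1}^n -> {-1, 1}
        accepts += (prod == 1)
    return accepts / trials

def blr_coord(rng):
    """Uniform even-parity triple: (a, b, a XOR b)."""
    a, b = rng.getrandbits(1), rng.getrandbits(1)
    return (a, b, a ^ b)

rng = random.Random(1)
n = 8
chi = lambda x: (-1) ** bin(x & 0b1011).count("1")   # a character chi_s: linear
print(lin_test(chi, n, 3, blr_coord, rng))           # linear f always passes: 1.0
```

A character χ_s(x) = (-1)^⟨s,x⟩ passes every trial because x_1 ⊕ x_2 ⊕ x_3 = 0 under the even-parity constraint, so the product of its three values is always 1; the content of the theorem is the converse direction in the 1% regime.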

Cite as

Subhash Khot and Kunal Mittal. Biased Linearity Testing in the 1% Regime. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 10:1-10:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{khot_et_al:LIPIcs.CCC.2025.10,
  author =	{Khot, Subhash and Mittal, Kunal},
  title =	{{Biased Linearity Testing in the 1\% Regime}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{10:1--10:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.10},
  URN =		{urn:nbn:de:0030-drops-237046},
  doi =		{10.4230/LIPIcs.CCC.2025.10},
  annote =	{Keywords: Linearity test, 1\% regime, p-biased}
}
Switching Graph Matrix Norm Bounds: From i.i.d. to Random Regular Graphs

Authors: Jeff Xu


Abstract
In this work, we give novel spectral norm bounds for graph matrices on random regular graph inputs. Graph matrices are a family of random matrices with entries given by polynomial functions of the underlying input. These matrices are known to be the backbone of the analysis of various average-case algorithms and hardness results. Previous investigations of such matrices are largely restricted to the Erdős-Rényi model, and tight matrix norm bounds on regular graphs are known only for specific examples. We unite these two lines of investigation and give the first result departing from the Erdős-Rényi setting in the full generality of graph matrices. We believe our norm bounds will enable a simple transfer of spectral analyses for average-case algorithms and hardness between these two distributions of random graphs. As an application of our spectral norm bounds, we show that higher-degree Sum-of-Squares lower bounds for the independent set problem on Erdős-Rényi random graphs can be switched into lower bounds on random d-regular graphs. Our main conceptual insight is that existing Sum-of-Squares lower-bound analyses based on moment methods are surprisingly robust and amenable to a light-weight translation. Our result is the first to address the general open question of analyzing higher-degree Sum-of-Squares on random regular graphs.

Cite as

Jeff Xu. Switching Graph Matrix Norm Bounds: From i.i.d. to Random Regular Graphs. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 11:1-11:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{xu:LIPIcs.CCC.2025.11,
  author =	{Xu, Jeff},
  title =	{{Switching Graph Matrix Norm Bounds: From i.i.d. to Random Regular Graphs}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{11:1--11:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.11},
  URN =		{urn:nbn:de:0030-drops-237054},
  doi =		{10.4230/LIPIcs.CCC.2025.11},
  annote =	{Keywords: Semidefinite programming, random matrices, average-case complexity}
}
Document
New Lower-Bounds for Quantum Computation with Non-Collapsing Measurements

Authors: David Miloschewsky and Supartha Podder


Abstract
Aaronson, Bouland, Fitzsimons and Lee [Scott Aaronson et al., 2014] introduced the complexity class PDQP (originally labeled naCQP), an alteration of BQP enhanced with the ability to obtain non-collapsing measurements, i.e., samples of quantum states that do not collapse them. Although SZK ⊆ PDQP, the class still requires Ω(N^(1/4)) queries to solve unstructured search. We formulate an alternative, equivalent definition of PDQP, which we use to prove a positive weighted adversary lower-bound method, establishing multiple tighter bounds and a trade-off between queries and non-collapsing measurements. We apply the technique to analyze the query complexity of the well-studied majority and element distinctness problems. Additionally, we prove a tight Θ(N^(1/3)) bound on search. Furthermore, we use the lower bound to explore PDQP under query restrictions, finding that combining non-collapsing measurements with non-adaptive queries limits the speed-up in several cases.

Cite as

David Miloschewsky and Supartha Podder. New Lower-Bounds for Quantum Computation with Non-Collapsing Measurements. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 12:1-12:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{miloschewsky_et_al:LIPIcs.CCC.2025.12,
  author =	{Miloschewsky, David and Podder, Supartha},
  title =	{{New Lower-Bounds for Quantum Computation with Non-Collapsing Measurements}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{12:1--12:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.12},
  URN =		{urn:nbn:de:0030-drops-237067},
  doi =		{10.4230/LIPIcs.CCC.2025.12},
  annote =	{Keywords: Non-collapsing measurements, Quantum lower-bounds, Quantum adversary method}
}
Document
Tight Bounds for Stream Decodable Error-Correcting Codes

Authors: Meghal Gupta, Venkatesan Guruswami, and Mihir Singhal


Abstract
In order to communicate a message over a noisy channel, a sender (Alice) uses an error-correcting code to encode her message, a bitstring x, into a codeword. The receiver (Bob) decodes x correctly whenever there is at most a small constant fraction of adversarial errors in the transmitted codeword. We investigate the setting where Bob is restricted to be a low-space streaming algorithm. Specifically, Bob receives the message as a stream and must process it and write x in order to a write-only tape while using low (say polylogarithmic) space. Note that such a primitive then allows the execution of any downstream streaming computation on x. We show three basic results about this setting, which are informally as follows: 1) There is a stream decodable code of near-quadratic length, resilient to error-fractions approaching the optimal bound of 1/4. 2) There is no stream decodable code of sub-quadratic length, even to correct any small constant fraction of errors. 3) If Bob need only compute a private linear function of the bits of x, instead of writing them all to the output tape, there is a stream decodable code of near-linear length. Our constructions use locally decodable codes with additional functionality in the decoding, and (for the result on linear functions) repeated tensoring. Our lower bound, which rather surprisingly demonstrates a strong information-theoretic limitation originating from a computational restriction, proceeds via careful control of the message indices that may be output during successive blocks of the stream, a task complicated by the arbitrary state of the decoder during the algorithm.

Cite as

Meghal Gupta, Venkatesan Guruswami, and Mihir Singhal. Tight Bounds for Stream Decodable Error-Correcting Codes. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 13:1-13:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gupta_et_al:LIPIcs.CCC.2025.13,
  author =	{Gupta, Meghal and Guruswami, Venkatesan and Singhal, Mihir},
  title =	{{Tight Bounds for Stream Decodable Error-Correcting Codes}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{13:1--13:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.13},
  URN =		{urn:nbn:de:0030-drops-237072},
  doi =		{10.4230/LIPIcs.CCC.2025.13},
  annote =	{Keywords: Coding theory, Streaming computation, Locally decodable code, Lower Bounds}
}
Document
On the Automatability of Tree-Like k-DNF Resolution

Authors: Gaia Carenini and Susanna F. de Rezende


Abstract
A proof system 𝒫 is said to be automatable in time f(N) if there exists an algorithm that, given as input an unsatisfiable formula F, outputs a refutation of F in the proof system 𝒫 in time f(N), where N is the size of the smallest 𝒫-refutation of F plus the size of F. Atserias and Bonet (ECCC 2002) observed that tree-like k-DNF resolution is automatable in time N^{c⋅k⋅log N} for a universal constant c. We show that, under the randomized exponential-time hypothesis (rETH), this is tight up to an O(log k)-factor in the exponent, i.e., we prove that tree-like k-DNF resolution, for k at most logarithmic in the number of variables of F, is not automatable in time N^{o((k/log k)⋅log N)} unless rETH is false. Our proof builds on the non-automatability results for resolution by Atserias and Müller (FOCS 2019), for algebraic proof systems by de Rezende, Göös, Nordström, Pitassi, Robere and Sokolov (STOC 2021), and for tree-like resolution by de Rezende (LAGOS 2021).

Cite as

Gaia Carenini and Susanna F. de Rezende. On the Automatability of Tree-Like k-DNF Resolution. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 14:1-14:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{carenini_et_al:LIPIcs.CCC.2025.14,
  author =	{Carenini, Gaia and de Rezende, Susanna F.},
  title =	{{On the Automatability of Tree-Like k-DNF Resolution}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{14:1--14:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.14},
  URN =		{urn:nbn:de:0030-drops-237081},
  doi =		{10.4230/LIPIcs.CCC.2025.14},
  annote =	{Keywords: Proof Complexity, Tree-like k-DNF Resolution, Automatability}
}
Document
Algebraic Pseudorandomness in VNC⁰

Authors: Robert Andrews


Abstract
We study the arithmetic complexity of hitting set generators, which are pseudorandom objects used for derandomization of the polynomial identity testing problem. We give new explicit constructions of hitting set generators whose outputs are computable in VNC⁰, i.e., can be computed by arithmetic formulas of constant size. Unconditionally, we construct a VNC⁰-computable generator that hits arithmetic circuits of constant depth and polynomial size. We also give conditional constructions, under strong but plausible hardness assumptions, of VNC⁰-computable generators that hit arithmetic formulas and arithmetic branching programs of polynomial size, respectively. As a corollary of our constructions, we derive lower bounds for subsystems of the Geometric Ideal Proof System of Grochow and Pitassi. Constructions of such generators are implicit in prior work of Kayal on lower bounds for the degree of annihilating polynomials. Our main contribution is a construction whose correctness relies on circuit complexity lower bounds rather than degree lower bounds.

Cite as

Robert Andrews. Algebraic Pseudorandomness in VNC⁰. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 15:1-15:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{andrews:LIPIcs.CCC.2025.15,
  author =	{Andrews, Robert},
  title =	{{Algebraic Pseudorandomness in VNC⁰}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{15:1--15:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.15},
  URN =		{urn:nbn:de:0030-drops-237092},
  doi =		{10.4230/LIPIcs.CCC.2025.15},
  annote =	{Keywords: Polynomial identity testing, Algebraic circuits, Ideal Proof System}
}
Document
Direct Sums for Parity Decision Trees

Authors: Tyler Besselman, Mika Göös, Siyao Guo, Gilbert Maystre, and Weiqiang Yuan


Abstract
Direct sum theorems state that the cost of solving k instances of a problem is at least Ω(k) times the cost of solving a single instance. We prove the first such results in the randomised parity decision tree model. We show that a direct sum theorem holds whenever (1) the lower bound for parity decision trees is proved using the discrepancy method; or (2) the lower bound is proved relative to a product distribution.

Cite as

Tyler Besselman, Mika Göös, Siyao Guo, Gilbert Maystre, and Weiqiang Yuan. Direct Sums for Parity Decision Trees. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 16:1-16:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{besselman_et_al:LIPIcs.CCC.2025.16,
  author =	{Besselman, Tyler and G\"{o}\"{o}s, Mika and Guo, Siyao and Maystre, Gilbert and Yuan, Weiqiang},
  title =	{{Direct Sums for Parity Decision Trees}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{16:1--16:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.16},
  URN =		{urn:nbn:de:0030-drops-237105},
  doi =		{10.4230/LIPIcs.CCC.2025.16},
  annote =	{Keywords: direct sum, parity decision trees, query complexity}
}
Document
Space-Bounded Quantum Interactive Proof Systems

Authors: François Le Gall, Yupan Liu, Harumichi Nishimura, and Qisheng Wang


Abstract
We introduce two models of space-bounded quantum interactive proof systems, QIPL and QIP_{U}L. The QIP_{U}L model, a space-bounded variant of quantum interactive proofs (QIP) introduced by Watrous (CC 2003) and Kitaev and Watrous (STOC 2000), restricts verifier actions to unitary circuits. In contrast, QIPL allows logarithmically many pinching intermediate measurements per verifier action, making it the weakest model that encompasses the classical model of Condon and Ladner (JCSS 1995). We characterize the computational power of QIPL and QIP_{U}L. When the message number m is polynomially bounded, QIP_{U}L ⊊ QIPL unless P = NP: - QIPL^HC, a subclass of QIPL defined by a high-concentration condition on yes instances, exactly characterizes NP. - QIP_{U}L is contained in P and contains SAC¹ ∪ BQL, where SAC¹ denotes problems solvable by classical logarithmic-depth, semi-unbounded fan-in circuits. However, this distinction vanishes when m is constant. Our results further indicate that (pinching) intermediate measurements uniquely impact space-bounded quantum interactive proofs, unlike in space-bounded quantum computation, where BQL = BQ_{U}L. We also introduce space-bounded unitary quantum statistical zero-knowledge (QSZK_{U}L), a specific form of QIP_{U}L proof systems with statistical zero-knowledge against any verifier. This class is a space-bounded variant of quantum statistical zero-knowledge (QSZK) defined by Watrous (SICOMP 2009). We prove that QSZK_{U}L = BQL, implying that the statistical zero-knowledge property negates the computational advantage typically gained from the interaction.

Cite as

François Le Gall, Yupan Liu, Harumichi Nishimura, and Qisheng Wang. Space-Bounded Quantum Interactive Proof Systems. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 17:1-17:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{legall_et_al:LIPIcs.CCC.2025.17,
  author =	{Le Gall, Fran\c{c}ois and Liu, Yupan and Nishimura, Harumichi and Wang, Qisheng},
  title =	{{Space-Bounded Quantum Interactive Proof Systems}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{17:1--17:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.17},
  URN =		{urn:nbn:de:0030-drops-237115},
  doi =		{10.4230/LIPIcs.CCC.2025.17},
  annote =	{Keywords: Intermediate measurements, Quantum interactive proofs, Space-bounded quantum computation}
}
Document
Directed st-Connectivity with Few Paths Is in Quantum Logspace

Authors: Simon Apers and Roman Edenhofer


Abstract
We present a BQSPACE(O(log n))-procedure to count st-paths on directed graphs for which we are promised that there are at most polynomially many paths starting in s and polynomially many paths ending in t. For comparison, the best known classical upper bound in this case just to decide st-connectivity is DSPACE(O(log² n/ log log n)). The result establishes a new relationship between BQL and unambiguity and fewness subclasses of NL. Further, we also show how to recognize directed graphs with at most polynomially many paths between any two nodes in BQSPACE(O(log n)). This yields the first natural candidate for a language separating BQL from 𝖫 and BPL. Until now, all candidates potentially separating these classes were inherently promise problems.
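The fewness promise in the abstract can be made concrete with a toy example: counting directed s-t paths in a small DAG by elementary recursion. This is a hypothetical illustration of the promise only (a hand-picked graph and ordinary recursive counting), not the quantum logspace procedure that the paper constructs:

```python
# Hypothetical toy DAG given as adjacency lists; the paper's promise is
# that the number of paths starting in s and ending in t is polynomially
# bounded, which this tiny graph trivially satisfies.
edges = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}

def count_paths(u, t="t"):
    """Count directed u->t paths by recursion (fine for small DAGs;
    the paper's contribution is performing such counting in
    BQSPACE(O(log n)) under the fewness promise)."""
    if u == t:
        return 1
    return sum(count_paths(v, t) for v in edges[u])

num_st_paths = count_paths("s")  # 2 paths: s->a->t and s->b->t
```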

Cite as

Simon Apers and Roman Edenhofer. Directed st-Connectivity with Few Paths Is in Quantum Logspace. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 18:1-18:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{apers_et_al:LIPIcs.CCC.2025.18,
  author =	{Apers, Simon and Edenhofer, Roman},
  title =	{{Directed st-Connectivity with Few Paths Is in Quantum Logspace}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{18:1--18:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.18},
  URN =		{urn:nbn:de:0030-drops-237128},
  doi =		{10.4230/LIPIcs.CCC.2025.18},
  annote =	{Keywords: Quantum computation, Space-bounded complexity classes, Graph connectivity, Unambiguous computation, Random walks}
}
Document
Characterizing the Distinguishability of Product Distributions Through Multicalibration

Authors: Cassandra Marcussen, Aaron Putterman, and Salil Vadhan


Abstract
Given a sequence of samples x_1, … , x_k promised to be drawn from one of two distributions X₀, X₁, a well-studied problem in statistics is to decide which distribution the samples are from. Information theoretically, the maximum advantage in distinguishing the two distributions given k samples is captured by the total variation distance between X₀^{⊗k} and X₁^{⊗k}. However, when we restrict our attention to efficient distinguishers (i.e., small circuits) of these two distributions, exactly characterizing the ability to distinguish X₀^{⊗k} and X₁^{⊗k} is more involved and less understood. In this work, we give a general way to reduce bounds on the computational indistinguishability of X₀ and X₁ to bounds on the information-theoretic indistinguishability of some specific, related variables X̃₀ and X̃₁. As a consequence, we prove a new, tight characterization of the number of samples k needed to efficiently distinguish X₀^{⊗k} and X₁^{⊗k} with constant advantage as k = Θ(d_H^{-2}(X̃₀, X̃₁)), which is the inverse of the squared Hellinger distance d_H between two distributions X̃₀ and X̃₁ that are computationally indistinguishable from X₀ and X₁. Likewise, our framework can be used to re-derive a result of Halevi and Rabin (TCC 2008) and Geier (TCC 2022), proving nearly-tight bounds on how computational indistinguishability scales with the number of samples for arbitrary product distributions. At the heart of our work is the use of the Multicalibration Theorem (Hébert-Johnson, Kim, Reingold, Rothblum 2018) in a way inspired by recent work of Casacuberta, Dwork, and Vadhan (STOC 2024). Multicalibration allows us to relate the computational indistinguishability of X₀, X₁ to the statistical indistinguishability of X̃₀, X̃₁ (for lower bounds on k) and construct explicit circuits to distinguish between X̃₀, X̃₁ and consequently X₀, X₁ (for upper bounds on k).
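The sample-complexity characterization k = Θ(d_H^{-2}(X̃₀, X̃₁)) can be illustrated numerically. A minimal sketch using the standard definition of the Hellinger distance and two hypothetical toy distributions (the constant factors and the distributions are illustrative only, not taken from the paper):

```python
import math

def hellinger(p, q):
    """Hellinger distance between two finite distributions,
    given as probability vectors over the same support:
    d_H(P, Q) = sqrt(0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2)."""
    assert len(p) == len(q)
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

# Two close Bernoulli-type distributions (hypothetical example).
eps = 0.01
p = [0.5, 0.5]
q = [0.5 + eps, 0.5 - eps]

d = hellinger(p, q)
# The abstract's scaling k = Theta(d_H^{-2}): up to constants, the
# number of samples needed to distinguish grows like 1/d^2.
samples_order = 1.0 / d**2
```

As the distributions get closer (eps → 0), d_H shrinks linearly in eps and the implied sample count grows like 1/eps², matching the intuition that distinguishing nearly identical distributions requires many samples.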

Cite as

Cassandra Marcussen, Aaron Putterman, and Salil Vadhan. Characterizing the Distinguishability of Product Distributions Through Multicalibration. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 19:1-19:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{marcussen_et_al:LIPIcs.CCC.2025.19,
  author =	{Marcussen, Cassandra and Putterman, Aaron and Vadhan, Salil},
  title =	{{Characterizing the Distinguishability of Product Distributions Through Multicalibration}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{19:1--19:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.19},
  URN =		{urn:nbn:de:0030-drops-237130},
  doi =		{10.4230/LIPIcs.CCC.2025.19},
  annote =	{Keywords: Multicalibration, computational distinguishability}
}
Document
Counting Martingales for Measure and Dimension in Complexity Classes

Authors: John M. Hitchcock, Adewale Sekoni, and Hadi Shafei


Abstract
This paper makes two primary contributions. First, we introduce the concept of counting martingales and use it to define counting measures and counting dimensions. Second, we apply these new tools to strengthen previous circuit lower bounds. Resource-bounded measure and dimension have traditionally focused on deterministic time and space bounds. We use counting complexity classes to develop resource-bounded counting measures and dimensions. Counting martingales are constructed using functions from the #𝖯, SpanP, and GapP complexity classes. We show that counting martingales capture many martingale constructions in complexity theory. The resulting counting measures and dimensions are intermediate in power between the standard time-bounded and space-bounded notions, enabling finer-grained analysis where space-bounded measures are known, but time-bounded measures remain open. For example, we show that BPP has #𝖯-dimension 0 and BQP has GapP-dimension 0, whereas the 𝖯-dimensions of these classes remain open. As our main application, we improve circuit-size lower bounds. Lutz (1992) strengthened Shannon’s classic (1-ε) 2ⁿ/n lower bound (1949) to PSPACE-measure, showing that almost all problems require circuits of size (2ⁿ/n)(1+(α log n)/n), for any α < 1. We extend this result to SpanP-measure, with a proof that uses a connection through the Minimum Circuit Size Problem (MCSP) to construct a counting martingale. Our results imply that the stronger lower bound holds within the third level of the exponential-time hierarchy, whereas previously, it was only known in ESPACE. Under a derandomization hypothesis, this lower bound holds within the second level of the exponential-time hierarchy, specifically in the class 𝖤^NP. We also study the #𝖯-dimension of classical circuit complexity classes and the GapP-dimension of quantum circuit complexity classes.

Cite as

John M. Hitchcock, Adewale Sekoni, and Hadi Shafei. Counting Martingales for Measure and Dimension in Complexity Classes. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 20:1-20:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{hitchcock_et_al:LIPIcs.CCC.2025.20,
  author =	{Hitchcock, John M. and Sekoni, Adewale and Shafei, Hadi},
  title =	{{Counting Martingales for Measure and Dimension in Complexity Classes}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{20:1--20:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.20},
  URN =		{urn:nbn:de:0030-drops-237145},
  doi =		{10.4230/LIPIcs.CCC.2025.20},
  annote =	{Keywords: resource-bounded measure, resource-bounded dimension, counting martingales, counting complexity, circuit complexity, Kolmogorov complexity, quantum complexity, Minimum Circuit Size Problem}
}
Document
Reconstruction of Depth 3 Arithmetic Circuits with Top Fan-In 3

Authors: Shubhangi Saraf and Devansh Shringi


Abstract
In this paper, we give the first subexponential-time (in fact, quasi-polynomial-time) reconstruction algorithm for depth 3 circuits of top fan-in 3 (ΣΠΣ(3) circuits) over the fields ℝ and ℂ. Concretely, we show that given blackbox access to an n-variate polynomial f computed by a ΣΠΣ(3) circuit of size s, there is a randomized algorithm that runs in time quasi-poly(n,s) and outputs a generalized ΣΠΣ(3) circuit computing f. The size s includes the bit complexity of the coefficients appearing in the circuit. Depth 3 circuits of constant top fan-in (ΣΠΣ(k) circuits) and closely related models have been extensively studied in the context of polynomial identity testing (PIT). The study of PIT for these models led to an understanding of the structure of identically zero ΣΠΣ(3) and ΣΠΣ(k) circuits via some very elegant connections to discrete geometry, specifically the Sylvester-Gallai theorem and its colorful and high-dimensional variants. Despite a lot of progress on PIT for ΣΠΣ(k) circuits, and more recently on PIT for depth 4 circuits of bounded top and bottom fan-in, reconstruction algorithms for ΣΠΣ(k) circuits have proven to be extremely challenging. In this paper, we build upon the structural results for identically zero ΣΠΣ(3) circuits that bound their rank, and prove stronger structural properties of ΣΠΣ(3) circuits (again using connections to discrete geometry). One such result is a bound on the number of codimension 3 subspaces on which a polynomial computed by a ΣΠΣ(3) circuit can vanish. Armed with the new structural results, we provide the first reconstruction algorithms for ΣΠΣ(3) circuits over ℝ and ℂ. 
Our work extends the work of [Sinha, CCC 2016], who provided a reconstruction algorithm for ΣΠΣ(2) circuits over ℝ and ℂ, as well as the works of [Shpilka, STOC 2007], who provided a reconstruction algorithm for ΣΠΣ(2) circuits in the setting of small finite fields, and [Karnin-Shpilka, CCC 2009], who provided reconstruction algorithms for ΣΠΣ(k) circuits in the setting of small finite fields.

Cite as

Shubhangi Saraf and Devansh Shringi. Reconstruction of Depth 3 Arithmetic Circuits with Top Fan-In 3. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 21:1-21:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{saraf_et_al:LIPIcs.CCC.2025.21,
  author =	{Saraf, Shubhangi and Shringi, Devansh},
  title =	{{Reconstruction of Depth 3 Arithmetic Circuits with Top Fan-In 3}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{21:1--21:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.21},
  URN =		{urn:nbn:de:0030-drops-237151},
  doi =		{10.4230/LIPIcs.CCC.2025.21},
  annote =	{Keywords: arithmetic circuits, learning, reconstruction}
}
Document
Multiplicative Extractors for Samplable Distributions

Authors: Ronen Shaltiel


Abstract
Trevisan and Vadhan (FOCS 2000) introduced the notion of (seedless) extractors for samplable distributions as a way to extract random keys for cryptographic protocols from weak sources of randomness. They showed that under a very strong complexity-theoretic assumption, there exists a constant α > 0 such that for every constant c ≥ 1, there is an extractor Ext:{0,1}ⁿ → {0,1}^Ω(n) such that for every distribution X over {0,1}ⁿ with H_∞(X) ≥ (1-α) ⋅ n that is samplable by size n^c circuits, the distribution Ext(X) is ε-close to uniform for ε = 1/(n^c), and furthermore, Ext is computable in time poly(n^c). Recently, Ball, Goldin, Dachman-Soled and Mutreja (FOCS 2023) gave a substantial improvement, achieving the same conclusion under the weaker (and by now standard) assumption that there exists a constant β > 0 and a problem in E = DTIME(2^O(n)) that requires size 2^(βn) nondeterministic circuits. In this paper we give an alternative proof of this result with the following advantages: - Our extractors have "multiplicative error": it is guaranteed that for every event A ⊆ {0,1}^m, Pr[Ext(X) ∈ A] ≤ (1+ε) ⋅ Pr[U_m ∈ A]. (This should be contrasted with the standard notion, which only implies Pr[Ext(X) ∈ A] ≤ ε + Pr[U_m ∈ A].) Consequently, unlike the (additive) extractors of Trevisan and Vadhan, and of Ball et al., our multiplicative extractors guarantee that, in the application of selecting keys for cryptographic protocols, if the probability that an adversary can steal the honest party’s money when the key is chosen uniformly at random is n^{-ω(1)}, then the same holds when the output of the extractor is used as the key. Our multiplicative extractors are a key component in the recent subsequent work of Ball, Shaltiel and Silbak (STOC 2025) that constructs extractors for samplable distributions with low min-entropy. This is another demonstration of the usefulness of multiplicative extractors. 
We remark that a related notion of multiplicative extractors was defined by Applebaum, Artemenko, Shaltiel and Yang (CCC 2015), who showed that black-box techniques cannot yield extractors with additive error ε = n^{-ω(1)} under the assumption used by Ball et al. or Trevisan and Vadhan. This motivated Applebaum et al. to consider multiplicative extractors, and they gave constructions based on the original hardness assumption of Trevisan and Vadhan. - Our proof is significantly simpler and more modular than that of Ball et al. (and arguably also than that of Trevisan and Vadhan). A key observation is that the extractors that we want to construct easily follow from a seed-extending pseudorandom generator against nondeterministic circuits (with the twist that the error is measured multiplicatively, as in computational differential privacy). We then construct such pseudorandom generators under the hardness assumption; this turns out to be easier (utilizing, amongst other things, ideas of Trevisan and Vadhan, and of Ball et al.). Trevisan and Vadhan also asked whether lower bounds against nondeterministic circuits are necessary to achieve extractors for samplable distributions. While we cannot answer this question, we show that the proof techniques used in our paper (as well as those used in previous work) produce extractors which imply seed-extending PRGs against nondeterministic circuits, which in turn imply lower bounds against nondeterministic circuits.

Cite as

Ronen Shaltiel. Multiplicative Extractors for Samplable Distributions. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 22:1-22:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{shaltiel:LIPIcs.CCC.2025.22,
  author =	{Shaltiel, Ronen},
  title =	{{Multiplicative Extractors for Samplable Distributions}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{22:1--22:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.22},
  URN =		{urn:nbn:de:0030-drops-237163},
  doi =		{10.4230/LIPIcs.CCC.2025.22},
  annote =	{Keywords: Randomness Extractors, Samplable Distributions, Hardness vs. Randomness}
}
Document
From an Odd Arity Signature to a Holant Dichotomy

Authors: Boning Meng, Juqiu Wang, Mingji Xia, and Jiayi Zheng


Abstract
Holant is an essential framework in the field of counting complexity. For over fifteen years, researchers have been clarifying the complexity classification for complex-valued Holant on Boolean domain, a challenge that remains unresolved. In this article, we prove a complexity dichotomy for complex-valued Holant on Boolean domain when a non-trivial signature of odd arity exists. This dichotomy is based on the dichotomy for #EO, and consequently is an FP^NP vs. #P dichotomy as well, stating that each problem is either in FP^NP or #P-hard. Furthermore, we establish a generalized version of the decomposition lemma for complex-valued Holant on Boolean domain. It asserts that each signature can be derived from its tensor product with other signatures, or conversely, the problem itself is in FP^NP. We believe that this result is a powerful method for building reductions in complex-valued Holant, as it is also employed as a pivotal technique in the proof of the aforementioned dichotomy in this article.

Cite as

Boning Meng, Juqiu Wang, Mingji Xia, and Jiayi Zheng. From an Odd Arity Signature to a Holant Dichotomy. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 23:1-23:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{meng_et_al:LIPIcs.CCC.2025.23,
  author =	{Meng, Boning and Wang, Juqiu and Xia, Mingji and Zheng, Jiayi},
  title =	{{From an Odd Arity Signature to a Holant Dichotomy}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{23:1--23:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.23},
  URN =		{urn:nbn:de:0030-drops-237177},
  doi =		{10.4230/LIPIcs.CCC.2025.23},
  annote =	{Keywords: Complexity dichotomy, Counting, Holant problem, #P}
}
Document
Super-Critical Trade-Offs in Resolution over Parities via Lifting

Authors: Arkadev Chattopadhyay and Pavel Dvořák


Abstract
Razborov [Alexander A. Razborov, 2016] exhibited the following surprisingly strong trade-off phenomenon in propositional proof complexity: for a parameter k = k(n), there exist k-CNF formulas over n variables, having resolution refutations of O(k) width, but every tree-like refutation of width n^{1-ε}/k needs size exp(n^Ω(k)). We extend this result to tree-like Resolution over parities, commonly denoted by Res(⊕), with parameters essentially unchanged. To obtain our result, we extend the lifting theorem of Chattopadhyay, Mande, Sanyal and Sherif [Arkadev Chattopadhyay et al., 2023] to handle tree-like affine DAGs. We introduce additional ideas from linear algebra to handle forget nodes along long paths.

Cite as

Arkadev Chattopadhyay and Pavel Dvořák. Super-Critical Trade-Offs in Resolution over Parities via Lifting. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 24:1-24:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{chattopadhyay_et_al:LIPIcs.CCC.2025.24,
  author =	{Chattopadhyay, Arkadev and Dvo\v{r}\'{a}k, Pavel},
  title =	{{Super-Critical Trade-Offs in Resolution over Parities via Lifting}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{24:1--24:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.24},
  URN =		{urn:nbn:de:0030-drops-237186},
  doi =		{10.4230/LIPIcs.CCC.2025.24},
  annote =	{Keywords: Proof complexity, Lifting, Resolution over parities}
}
Document
Quantum LDPC Codes of Almost Linear Distance via Iterated Homological Products

Authors: Louis Golowich and Venkatesan Guruswami


Abstract
The first linear-distance quantum LDPC codes were recently constructed by a line of breakthrough works (culminating in the result of Panteleev & Kalachev, 2021). All such constructions, even when allowing for almost-linear distance, are based on an operation called a balanced (or lifted) product, which is used in a one-shot manner to combine a pair of large classical codes possessing a group symmetry. We present a new construction of almost-linear distance quantum LDPC codes that is iterative in nature. Our construction is based on a more basic and widely used product, namely the homological product (i.e. the tensor product of chain complexes). Specifically, for every ε > 0, we obtain a family of [[N,N^{1-ε},N^{1-ε}]] (subsystem) quantum LDPC codes via repeated homological products of a constant-sized quantum locally testable code. Our key idea is to remove certain low-weight codewords using subsystem codes (while still maintaining constant stabilizer weight), in order to circumvent a particular obstruction that limited the distance of many prior homological product code constructions to at most Õ(√N).

Cite as

Louis Golowich and Venkatesan Guruswami. Quantum LDPC Codes of Almost Linear Distance via Iterated Homological Products. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 25:1-25:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{golowich_et_al:LIPIcs.CCC.2025.25,
  author =	{Golowich, Louis and Guruswami, Venkatesan},
  title =	{{Quantum LDPC Codes of Almost Linear Distance via Iterated Homological Products}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{25:1--25:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.25},
  URN =		{urn:nbn:de:0030-drops-237196},
  doi =		{10.4230/LIPIcs.CCC.2025.25},
  annote =	{Keywords: Quantum Error Correction, Quantum LDPC Code, Homological Product, Iterative Construction}
}
Document
Algebraic Metacomplexity and Representation Theory

Authors: Maxim van den Berg, Pranjal Dutta, Fulvio Gesmundo, Christian Ikenmeyer, and Vladimir Lysikov


Abstract
In the algebraic metacomplexity framework we prove that the decomposition of metapolynomials into their isotypic components can be implemented efficiently, namely with only a quasipolynomial blowup in the circuit size. We use this to resolve an open question posed by Grochow, Kumar, Saks & Saraf (2017). Our result means that many existing algebraic complexity lower bound proofs can be efficiently converted into isotypic lower bound proofs via highest weight metapolynomials, a notion studied in geometric complexity theory. In the context of algebraic natural proofs, it means that without loss of generality algebraic natural proofs can be assumed to be isotypic. Our proof is built on the Poincaré-Birkhoff-Witt theorem for Lie algebras and on Gelfand-Tsetlin theory, for which we give the necessary comprehensive background.

Cite as

Maxim van den Berg, Pranjal Dutta, Fulvio Gesmundo, Christian Ikenmeyer, and Vladimir Lysikov. Algebraic Metacomplexity and Representation Theory. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 26:1-26:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{vandenberg_et_al:LIPIcs.CCC.2025.26,
  author =	{van den Berg, Maxim and Dutta, Pranjal and Gesmundo, Fulvio and Ikenmeyer, Christian and Lysikov, Vladimir},
  title =	{{Algebraic Metacomplexity and Representation Theory}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{26:1--26:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.26},
  URN =		{urn:nbn:de:0030-drops-237209},
  doi =		{10.4230/LIPIcs.CCC.2025.26},
  annote =	{Keywords: Algebraic complexity theory, metacomplexity, representation theory, geometric complexity theory}
}
Document
New Codes on High Dimensional Expanders

Authors: Irit Dinur, Siqi Liu, and Rachel Yun Zhang


Abstract
We describe a new parameterized family of symmetric error-correcting codes with low-density parity-check matrices (LDPC). Our codes can be described in two seemingly different ways. First, in relation to Reed-Muller codes: our codes are functions on a subset of the points in 𝔽ⁿ whose restrictions to a prescribed set of affine lines have low degree. Alternatively, they are Tanner codes on high dimensional expanders, where the coordinates of the codeword correspond to triangles of a 2-dimensional expander, such that around every edge the local view forms a Reed-Solomon codeword. For some range of parameters our codes are provably locally testable, and their dimension is some fixed power of the block length. For another range of parameters our codes have distance and dimension that are both linear in the block length, but we do not know if they are locally testable. The codes also have the multiplication property: the coordinate-wise product of two codewords is a codeword in a related code. The definition of the codes relies on the construction of a specific family of simplicial complexes which is a slight variant of the coset complexes of Kaufman and Oppenheim. We show a novel way to embed the triangles of these complexes into 𝔽ⁿ, with the property that links of edges embed as affine lines in 𝔽ⁿ. We rely on this embedding to lower bound the rate of these codes in a way that avoids constraint-counting and thereby achieves non-trivial rate even when the local codes themselves have arbitrarily small rate, and in particular below 1/2.

Cite as

Irit Dinur, Siqi Liu, and Rachel Yun Zhang. New Codes on High Dimensional Expanders. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 27:1-27:42, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{dinur_et_al:LIPIcs.CCC.2025.27,
  author =	{Dinur, Irit and Liu, Siqi and Zhang, Rachel Yun},
  title =	{{New Codes on High Dimensional Expanders}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{27:1--27:42},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.27},
  URN =		{urn:nbn:de:0030-drops-237217},
  doi =		{10.4230/LIPIcs.CCC.2025.27},
  annote =	{Keywords: error correcting codes, high dimensional expanders, multiplication property}
}
Document
Provably Total Functions in the Polynomial Hierarchy

Authors: Noah Fleming, Deniz Imrek, and Christophe Marciot


Abstract
TFNP studies the complexity of total, verifiable search problems, and represents the first layer of the total function polynomial hierarchy (TFPH). Recently, problems in higher levels of the TFPH have gained significant attention, partly due to their close connection to circuit lower bounds. However, very little is known about the relationships between problems in levels of the hierarchy beyond TFNP. Connections to proof complexity have had an outsized impact on our understanding of the relationships between subclasses of TFNP in the black-box model. Subclasses are characterized by provability in certain proof systems, which has allowed for tools from proof complexity to be applied in order to separate TFNP problems. In this work we begin a systematic study of the relationship between subclasses of total search problems in the polynomial hierarchy and proof systems. We show that, akin to TFNP, reductions to a problem in TFΣ_d are equivalent to proofs of the formulas expressing the totality of the problems in some Σ_d-proof system. Having established this general correspondence, we examine important subclasses of TFPH. We show that reductions to the StrongAvoid problem are equivalent to proofs in a Σ₂-variant of the (unary) Sherali-Adams proof system. As well, we explore the TFPH classes which result from well-studied proof systems, introducing a number of new TFΣ₂ classes which characterize variants of DNF resolution, as well as TFΣ_d classes capturing levels of Σ_d-bounded-depth Frege.

Cite as

Noah Fleming, Deniz Imrek, and Christophe Marciot. Provably Total Functions in the Polynomial Hierarchy. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 28:1-28:40, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{fleming_et_al:LIPIcs.CCC.2025.28,
  author =	{Fleming, Noah and Imrek, Deniz and Marciot, Christophe},
  title =	{{Provably Total Functions in the Polynomial Hierarchy}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{28:1--28:40},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.28},
  URN =		{urn:nbn:de:0030-drops-237223},
  doi =		{10.4230/LIPIcs.CCC.2025.28},
  annote =	{Keywords: TFNP, TFPH, Proof Complexity, Characterizations}
}
Document
Generalised Linial-Nisan Conjecture Is False for DNFs

Authors: Yaroslav Alekseev, Mika Göös, Ziyi Guan, Gilbert Maystre, Artur Riazanov, Dmitry Sokolov, and Weiqiang Yuan


Abstract
Aaronson (STOC 2010) conjectured that almost k-wise independence fools constant-depth circuits; he called this the generalised Linial-Nisan conjecture. Aaronson himself later found a counterexample for depth-3 circuits. We give here an improved counterexample for depth-2 circuits (DNFs). This shows, for instance, that Bazzi’s celebrated result (k-wise independence fools DNFs) cannot be generalised in a natural way. We also propose a way to circumvent our counterexample: We define a new notion of pseudorandomness called local couplings and show that it fools DNFs and even decision lists.

Cite as

Yaroslav Alekseev, Mika Göös, Ziyi Guan, Gilbert Maystre, Artur Riazanov, Dmitry Sokolov, and Weiqiang Yuan. Generalised Linial-Nisan Conjecture Is False for DNFs. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 29:1-29:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{alekseev_et_al:LIPIcs.CCC.2025.29,
  author =	{Alekseev, Yaroslav and G\"{o}\"{o}s, Mika and Guan, Ziyi and Maystre, Gilbert and Riazanov, Artur and Sokolov, Dmitry and Yuan, Weiqiang},
  title =	{{Generalised Linial-Nisan Conjecture Is False for DNFs}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{29:1--29:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.29},
  URN =		{urn:nbn:de:0030-drops-237231},
  doi =		{10.4230/LIPIcs.CCC.2025.29},
  annote =	{Keywords: pseudorandomness, DNFs, bounded independence}
}
Document
Online Condensing of Unpredictable Sources via Random Walks

Authors: Dean Doron, Dana Moshkovitz, Justin Oh, and David Zuckerman


Abstract
A natural model of a source of randomness consists of a long stream of symbols X = X_1∘…∘X_t, with some guarantee on the entropy of X_i conditioned on the outcome of the prefix x_1,… ,x_{i-1}. We study unpredictable sources, a generalization of the almost Chor-Goldreich (CG) sources considered in [Doron et al., 2023]. In an unpredictable source X, for a typical draw of x ∼ X, for most indices i, the element x_i has a low probability of occurring given x_1,… ,x_{i-1}. Such a model relaxes the often unrealistic assumption of a CG source that for every i, and every x_1,… ,x_{i-1}, the next symbol X_i has sufficiently large entropy. Unpredictable sources subsume all previously considered notions of almost CG sources, including notions that [Doron et al., 2023] failed to analyze, and including those that are equivalent to general sources with high min-entropy. For a lossless expander G = (V,E) with m = log |V|, we consider a random walk V_0,V_1,…,V_t on G using unpredictable instructions that have sufficient entropy with respect to m. Our main theorem is that for almost all the steps t/2 ≤ i ≤ t in the walk, the vertex V_i is close to a distribution with min-entropy at least m-O(1). As a result, we obtain seeded online condensers with constant entropy gap, and seedless (deterministic) condensers outputting a constant fraction of the entropy. In particular, our condensers run in space comparable to the output entropy, as opposed to the size of the stream, and even when the length t of the stream is not known ahead of time. As another corollary, we obtain a new extractor based on expander random walks handling lower entropy than the classic expander-based construction relying on spectral techniques [Gillman, 1998]. As our main technical tool, we provide a novel analysis covering a key case of adversarial random walks on lossless expanders that [Doron et al., 2023] fails to address. As part of the analysis, we provide a "chain rule for vertex probabilities".
The standard chain rule states that for every x ∼ X and i, Pr(x_1,… ,x_i) = Pr[X_i = x_i|X_[1,i-1] = x_1,… ,x_{i-1}] ⋅ Pr(x_1,… ,x_{i-1}). If W(x₁,… ,x_i) is the vertex reached using x₁,… ,x_i, then the chain rule for vertex probabilities essentially states that the same phenomenon occurs for a typical x: Pr[V_i = W(x_1,… ,x_i)] ≲ Pr[X_i = x_i|X_[1,i-1] = x_1,… ,x_{i-1}] ⋅ Pr[V_{i-1} = W(x_1,… ,x_{i-1})], where V_i is the vertex distribution of the random walk at step i using X.

Cite as

Dean Doron, Dana Moshkovitz, Justin Oh, and David Zuckerman. Online Condensing of Unpredictable Sources via Random Walks. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{doron_et_al:LIPIcs.CCC.2025.30,
  author =	{Doron, Dean and Moshkovitz, Dana and Oh, Justin and Zuckerman, David},
  title =	{{Online Condensing of Unpredictable Sources via Random Walks}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{30:1--30:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.30},
  URN =		{urn:nbn:de:0030-drops-237243},
  doi =		{10.4230/LIPIcs.CCC.2025.30},
  annote =	{Keywords: Randomness Extractors, Expander Graphs}
}
Document
Towards Free Lunch Derandomization from Necessary Assumptions (And OWFs)

Authors: Marshall Ball, Lijie Chen, and Roei Tell


Abstract
The question of optimal derandomization, introduced by Doron et al. (JACM 2022), has garnered significant recent attention. Works in recent years showed conditional superfast derandomization algorithms, as well as conditional impossibility results and barriers for obtaining superfast derandomization using certain black-box techniques. Of particular interest is the extreme high-end, which focuses on "free lunch" derandomization, as suggested by Chen and Tell (FOCS 2021). This is derandomization that incurs essentially no time overhead, and errs only on inputs that are infeasible to find. Constructing such algorithms is challenging, and so far no results have followed the one in their initial work. In their result, the algorithm is essentially the classical Nisan-Wigderson generator, and they relied on an ad-hoc assumption asserting the existence of a function that is non-batch-computable over all polynomial-time samplable distributions. In this work we deduce free lunch derandomization from a variety of natural hardness assumptions. In particular, we do not resort to non-batch-computability, and the common denominator of all our assumptions is hardness over all polynomial-time samplable distributions, which is necessary for the conclusion. The main technical components in our proofs are constructions of new and superfast targeted generators, which completely eliminate the time overheads that are inherent to all previously known constructions. In particular, we present an alternative construction for the targeted generator by Chen and Tell (FOCS 2021), which is faster than the original construction, and also more natural and technically intuitive. These contributions significantly strengthen the evidence for the possibility of free lunch derandomization, distill the assumptions required for such a result, and provide the first set of dedicated technical tools that are useful for studying the question.

Cite as

Marshall Ball, Lijie Chen, and Roei Tell. Towards Free Lunch Derandomization from Necessary Assumptions (And OWFs). In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 31:1-31:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{ball_et_al:LIPIcs.CCC.2025.31,
  author =	{Ball, Marshall and Chen, Lijie and Tell, Roei},
  title =	{{Towards Free Lunch Derandomization from Necessary Assumptions (And OWFs)}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{31:1--31:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.31},
  URN =		{urn:nbn:de:0030-drops-237259},
  doi =		{10.4230/LIPIcs.CCC.2025.31},
  annote =	{Keywords: Pseudorandomness, Derandomization}
}
Document
A Lower Bound for k-DNF Resolution on Random CNF Formulas via Expansion

Authors: Anastasia Sofronova and Dmitry Sokolov


Abstract
Random Δ-CNF formulas are one of the few candidates that are expected to be hard for proof systems and SAT algorithms. Assume we sample m clauses over n variables. Here, the main complexity parameter is the clause density, χ := m/n. For a fixed Δ, there exists a satisfiability threshold c_Δ such that for χ > c_Δ a formula is unsatisfiable with high probability, and for χ < c_Δ it is satisfiable with high probability. Near the satisfiability threshold, there are various lower bounds for algorithms and proof systems [Eli Ben-Sasson, 2001; Eli Ben-Sasson and Russell Impagliazzo, 1999; Michael Alekhnovich and Alexander A. Razborov, 2003; Dima Grigoriev, 2001; Grant Schoenebeck, 2008; Pavel Hrubes and Pavel Pudlák, 2017; Noah Fleming et al., 2017; Dmitry Sokolov, 2024], and for high-density regimes, there exist upper bounds [Uriel Feige et al., 2006; Sebastian Müller and Iddo Tzameret, 2014; Jackson Abascal et al., 2021; Venkatesan Guruswami et al., 2022]. One of the frontiers in the direction of proving lower bounds on these formulas is the k-DNF Resolution proof system (aka Res(k)). There are several known results for k = 𝒪(√{log n/log log n}) [Nathan Segerlind et al., 2004; Michael Alekhnovich, 2011], which are applicable only in the density regime near the threshold. In this paper, we show the first Res(k) lower bound that is applicable in higher-density regimes. Our results work for slightly larger k = 𝒪(√{log n}).

Cite as

Anastasia Sofronova and Dmitry Sokolov. A Lower Bound for k-DNF Resolution on Random CNF Formulas via Expansion. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 32:1-32:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{sofronova_et_al:LIPIcs.CCC.2025.32,
  author =	{Sofronova, Anastasia and Sokolov, Dmitry},
  title =	{{A Lower Bound for k-DNF Resolution on Random CNF Formulas via Expansion}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{32:1--32:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.32},
  URN =		{urn:nbn:de:0030-drops-237269},
  doi =		{10.4230/LIPIcs.CCC.2025.32},
  annote =	{Keywords: proof complexity, random CNFs}
}
Document
A Min-Entropy Approach to Multi-Party Communication Lower Bounds

Authors: Mi-Ying (Miryam) Huang, Xinyu Mao, Shuo Wang, Guangxu Yang, and Jiapeng Zhang


Abstract
Information complexity is one of the most powerful techniques to prove information-theoretic lower bounds, in which Shannon entropy plays a central role. Though Shannon entropy has some convenient properties, such as the chain rule, it still has inherent limitations. One of the most notable barriers is the square-root loss, which refers to the square-root gap between entropy gaps and statistical distances, e.g., in Pinsker’s inequality. To bypass this barrier, we introduce a new method based on min-entropy analysis. Building on this new method, we prove the following results. - An Ω(N^{∑_i α_i - max_i {α_i}}/k) randomized communication lower bound for the k-party set-intersection problem where the i-th party holds a random set of size ≈ N^{1-α_i}. - A tight Ω(n/k) randomized lower bound for the k-party Tree Pointer Jumping problem, improving an Ω(n/k²) lower bound by Chakrabarti, Cormode, and McGregor (STOC 08). - An Ω(n/k+√n) lower bound for the Chained Index problem, improving an Ω(n/k²) lower bound by Cormode, Dark, and Konrad (ICALP 19). Since these problems have served as hard problems for numerous applications in streaming lower bounds and cryptography, our new lower bounds directly improve the corresponding streaming and cryptographic lower bounds. On the technical side, min-entropy does not have nice properties such as the chain rule. To address this issue, we enhance the structure-vs-pseudorandomness decomposition used by Göös, Pitassi, and Watson (FOCS 17) and Yang and Zhang (STOC 24); both papers used this decomposition to prove communication lower bounds. In this paper, we breathe new life into this method in the multi-party setting, presenting a new toolkit for proving multi-party communication lower bounds.

Cite as

Mi-Ying (Miryam) Huang, Xinyu Mao, Shuo Wang, Guangxu Yang, and Jiapeng Zhang. A Min-Entropy Approach to Multi-Party Communication Lower Bounds. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 33:1-33:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.CCC.2025.33,
  author =	{Huang, Mi-Ying (Miryam) and Mao, Xinyu and Wang, Shuo and Yang, Guangxu and Zhang, Jiapeng},
  title =	{{A Min-Entropy Approach to Multi-Party Communication Lower Bounds}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{33:1--33:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.33},
  URN =		{urn:nbn:de:0030-drops-237273},
  doi =		{10.4230/LIPIcs.CCC.2025.33},
  annote =	{Keywords: communication complexity, lifting theorems, set intersection, chained index}
}
Document
Witness Encryption and NP-Hardness of Learning

Authors: Halley Goldberg and Valentine Kabanets


Abstract
We study connections between two fundamental questions from computer science theory. (1) Is witness encryption possible for NP [Sanjam Garg et al., 2013]? That is, given an instance x of an NP-complete language L, can one encrypt a secret message with security contingent on the ability to provide a witness for x ∈ L? (2) Is computational learning (in the sense of [Leslie G. Valiant, 1984; Michael J. Kearns et al., 1994]) hard for NP? That is, is there a polynomial-time reduction from instances of L to instances of learning? Our main contribution is that certain formulations of NP-hardness of learning characterize the existence of witness encryption for NP. More specifically, we show: - witness encryption for a language L ∈ NP is equivalent to a half-Levin reduction from L to the Computational Gap Learning problem (denoted CGL [Benny Applebaum et al., 2008]), where a half-Levin reduction is the same as a Levin reduction but only required to preserve witnesses in one direction, and CGL formalizes agnostic learning as a decision problem. We show versions of the statement above for witness encryption secure against non-uniform and uniform adversaries. We also show that witness encryption for NP with ciphertexts of logarithmic length, along with a circuit lower bound for E, are together equivalent to NP-hardness of a generalized promise version of MCSP. We complement the above with a number of unconditional NP-hardness results for agnostic PAC learning. Extending a result of [Shuichi Hirahara, 2022] to the standard setting of boolean circuits, we show NP-hardness of "semi-proper" learning. Namely: - for some polynomial s, it is NP-hard to agnostically learn circuits of size s(n) by circuits of size s(n)⋅ n^{1/(log log n)^O(1)}. Looking beyond the computational model of standard boolean circuits enables us to prove NP-hardness of improper learning (i.e., without a restriction on the size of the hypothesis returned by the learner).
We obtain such results for: - learning circuits with oracle access to a given randomly sampled string, and - learning RAM programs. In particular, we show that a variant of MINLT [Ker-I Ko, 1991] for RAM programs is NP-hard with parameters corresponding to the setting of improper learning. We view these results as partial progress toward the ultimate goal of showing NP-hardness of learning boolean circuits in an improper setting. Lastly, we give some consequences of NP-hardness of learning for private- and public-key cryptography. Improving a main result of [Benny Applebaum et al., 2008], we show that if improper agnostic PAC learning is NP-hard under a randomized non-adaptive reduction (with some restrictions), then NP ⊈ BPP implies the existence of i.o. one-way functions. In contrast, if CGL is NP-hard under a half-Levin reduction, then NP ⊈ BPP implies the existence of i.o. public-key encryption.

Cite as

Halley Goldberg and Valentine Kabanets. Witness Encryption and NP-Hardness of Learning. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 34:1-34:43, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{goldberg_et_al:LIPIcs.CCC.2025.34,
  author =	{Goldberg, Halley and Kabanets, Valentine},
  title =	{{Witness Encryption and NP-Hardness of Learning}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{34:1--34:43},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.34},
  URN =		{urn:nbn:de:0030-drops-237281},
  doi =		{10.4230/LIPIcs.CCC.2025.34},
  annote =	{Keywords: agnostic PAC learning, witness encryption, NP-hardness}
}
Document
How to Construct Random Strings

Authors: Oliver Korten and Rahul Santhanam


Abstract
We address the following fundamental question: is there an efficient deterministic algorithm that, given 1ⁿ, outputs a string of length n that has polynomial-time bounded Kolmogorov complexity Ω̃(n) or even n - o(n)? Under plausible complexity-theoretic assumptions, stating for example that there is an ε > 0 for which TIME[T(n)] ⊈ TIME^NP[T(n)^ε]/2^(εn) for appropriately chosen time-constructible T, we show that the answer to this question is positive (answering a question of [Hanlin Ren et al., 2022]), and that the Range Avoidance problem [Robert Kleinberg et al., 2021; Oliver Korten, 2021; Hanlin Ren et al., 2022] is efficiently solvable for uniform sequences of circuits with close to minimal stretch (answering a question of [Rahul Ilango et al., 2023]). We obtain our results by giving efficient constructions of pseudo-random generators with almost optimal seed length against algorithms with small advice, under assumptions of the form mentioned above. We also apply our results to give the first complexity-theoretic evidence for explicit constructions of objects such as rigid matrices (in the sense of Valiant) and Ramsey graphs with near-optimal parameters.

Cite as

Oliver Korten and Rahul Santhanam. How to Construct Random Strings. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 35:1-35:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{korten_et_al:LIPIcs.CCC.2025.35,
  author =	{Korten, Oliver and Santhanam, Rahul},
  title =	{{How to Construct Random Strings}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{35:1--35:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.35},
  URN =		{urn:nbn:de:0030-drops-237290},
  doi =		{10.4230/LIPIcs.CCC.2025.35},
  annote =	{Keywords: Explicit Constructions, Kolmogorov Complexity, Derandomization}
}
Document
Lifting with Colourful Sunflowers

Authors: Susanna F. de Rezende and Marc Vinyals


Abstract
We show that a generalization of the DAG-like query-to-communication lifting theorem, when proven using sunflowers over non-binary alphabets, yields lower bounds on the monotone circuit complexity and proof complexity of natural functions and formulas that are better than previously known results obtained using the approximation method. These include an n^Ω(k) lower bound for the clique function up to k ≤ n^{1/2-ε}, and an exp(Ω(n^{1/3-ε})) lower bound for a function in P.

Cite as

Susanna F. de Rezende and Marc Vinyals. Lifting with Colourful Sunflowers. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 36:1-36:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{derezende_et_al:LIPIcs.CCC.2025.36,
  author =	{de Rezende, Susanna F. and Vinyals, Marc},
  title =	{{Lifting with Colourful Sunflowers}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{36:1--36:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.36},
  URN =		{urn:nbn:de:0030-drops-237303},
  doi =		{10.4230/LIPIcs.CCC.2025.36},
  annote =	{Keywords: lifting, sunflower, clique, colouring, monotone circuit, cutting planes}
}
