22 Search Results for "Martin, Russell"


Document
The Recurrence/Transience of Random Walks on a Bounded Grid in an Increasing Dimension

Authors: Shuma Kumamoto, Shuji Kijima, and Tomoyuki Shirai

Published in: LIPIcs, Volume 302, 35th International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2024)


Abstract
It is a celebrated fact that a simple random walk on ℤ and ℤ² returns to the initial vertex v infinitely many times over infinitely many transitions, in which case the walk is said to be recurrent, while on ℤ^d for d ≥ 3 it returns to v only finitely many times, in which case it is said to be transient. It is also known that, for any fixed d, a simple random walk on a growing region of ℤ^d can be recurrent, depending on the growth speed of the region. This paper shows that a simple random walk on {0,1,…,N}ⁿ with increasing n and fixed N can be recurrent, depending on how fast n increases. Precisely, we are concerned with a specific model of a random walk on a growing graph (RWoGG) and show a phase transition between recurrence and transience of the walk with respect to the growth speed of the graph. For the proof, we develop a pausing coupling argument, introducing the notion of weakly less homesick as graph growing (weakly LHaGG).
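
The phenomenon can be glimpsed numerically. Below is a minimal Monte Carlo sketch, not the paper's exact RWoGG model: the walk stays put when it would leave the box, new coordinates start at 0, and dim_at is a hypothetical growth schedule supplied by the caller.

import random, math

def walk_returns(N, dim_at, T, trials=200):
    # Average number of visits to the all-zero vertex within T steps of a
    # walk on {0,...,N}^n whose dimension grows as n = dim_at(t).
    total = 0
    for _ in range(trials):
        pos = [0] * dim_at(0)
        returns = 0
        for t in range(1, T + 1):
            n = dim_at(t)
            while len(pos) < n:              # new coordinates start at 0
                pos.append(0)
            i = random.randrange(n)          # move +-1 in one coordinate,
            step = random.choice((-1, 1))    # staying put at the walls
            pos[i] = min(N, max(0, pos[i] + step))
            if all(c == 0 for c in pos):
                returns += 1
        total += returns
    return total / trials

# slow dimension growth keeps the walk coming home; fast growth does not
print(walk_returns(1, lambda t: 1 + int(math.log2(t + 1)), T=2000))
print(walk_returns(1, lambda t: 1 + t // 50, T=2000))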

Cite as

Shuma Kumamoto, Shuji Kijima, and Tomoyuki Shirai. The Recurrence/Transience of Random Walks on a Bounded Grid in an Increasing Dimension. In 35th International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 302, pp. 22:1-22:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kumamoto_et_al:LIPIcs.AofA.2024.22,
  author =	{Kumamoto, Shuma and Kijima, Shuji and Shirai, Tomoyuki},
  title =	{{The Recurrence/Transience of Random Walks on a Bounded Grid in an Increasing Dimension}},
  booktitle =	{35th International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2024)},
  pages =	{22:1--22:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-329-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{302},
  editor =	{Mailler, C\'{e}cile and Wild, Sebastian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AofA.2024.22},
  URN =		{urn:nbn:de:0030-drops-204577},
  doi =		{10.4230/LIPIcs.AofA.2024.22},
  annote =	{Keywords: Random walk, dynamic graph, recurrence, transience, coupling}
}
Document
Quantum Automating TC⁰-Frege Is LWE-Hard

Authors: Noel Arteche, Gaia Carenini, and Matthew Gray

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We prove the first hardness results against efficient proof search by quantum algorithms. We show that under Learning with Errors (LWE), the standard lattice-based cryptographic assumption, no quantum algorithm can weakly automate TC⁰-Frege. This extends the line of results of Krajíček and Pudlák (Information and Computation, 1998), Bonet, Pitassi, and Raz (FOCS, 1997), and Bonet, Domingo, Gavaldà, Maciel, and Pitassi (Computational Complexity, 2004), who showed that Extended Frege, TC⁰-Frege, and AC⁰-Frege, respectively, cannot be weakly automated by classical algorithms if either the RSA cryptosystem or the Diffie-Hellman key exchange protocol is secure. To the best of our knowledge, this is the first interaction between quantum computation and propositional proof search.

Cite as

Noel Arteche, Gaia Carenini, and Matthew Gray. Quantum Automating TC⁰-Frege Is LWE-Hard. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 15:1-15:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arteche_et_al:LIPIcs.CCC.2024.15,
  author =	{Arteche, Noel and Carenini, Gaia and Gray, Matthew},
  title =	{{Quantum Automating TC^0-Frege Is LWE-Hard}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{15:1--15:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.15},
  URN =		{urn:nbn:de:0030-drops-204117},
  doi =		{10.4230/LIPIcs.CCC.2024.15},
  annote =	{Keywords: automatability, post-quantum cryptography, feasible interpolation}
}
Document
A Strong Direct Sum Theorem for Distributional Query Complexity

Authors: Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Consider the expected query complexity of computing the k-fold direct product f^{⊗ k} of a function f to error ε with respect to a distribution μ^k. One strategy is to sequentially compute each of the k copies to error ε/k with respect to μ and apply the union bound. We prove a strong direct sum theorem showing that this naive strategy is essentially optimal. In particular, computing a direct product necessitates a blowup in both query complexity and error. Strong direct sum theorems contrast with results that only show a blowup in query complexity or error but not both. There has been a long line of such results for distributional query complexity, dating back to (Impagliazzo, Raz, Wigderson 1994) and (Nisan, Rudich, Saks 1994), but a strong direct sum theorem that holds for all functions in the standard query model had been elusive. A key idea in our work is the first use of the Hardcore Theorem (Impagliazzo 1995) in the context of query complexity. We prove a new resilience lemma that accompanies it, showing that the hardcore of f^{⊗k} is likely to remain dense under arbitrary partitions of the input space.
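
The naive strategy and the union bound are easy to check numerically. A small sketch, in which the per-copy solver is abstracted as an independent coin that errs with probability ε/k; this idealizes the actual query model and only illustrates the easy direction:

import random

def direct_product_error(eps, k, trials=100_000):
    # The k-tuple answer is wrong whenever at least one copy errs.
    wrong = 0
    for _ in range(trials):
        if any(random.random() < eps / k for _ in range(k)):
            wrong += 1
    return wrong / trials

for k in (1, 5, 25):
    print(k, direct_product_error(0.1, k))  # stays at or below 0.1 for every k

The theorem's content is the converse: no cleverer strategy can beat this error/query trade-off by more than lower-order factors.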

Cite as

Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan. A Strong Direct Sum Theorem for Distributional Query Complexity. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 16:1-16:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{blanc_et_al:LIPIcs.CCC.2024.16,
  author =	{Blanc, Guy and Koch, Caleb and Strassle, Carmen and Tan, Li-Yang},
  title =	{{A Strong Direct Sum Theorem for Distributional Query Complexity}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{16:1--16:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.16},
  URN =		{urn:nbn:de:0030-drops-204123},
  doi =		{10.4230/LIPIcs.CCC.2024.16},
  annote =	{Keywords: Query complexity, direct product theorem, hardcore theorem}
}
Document
Track A: Algorithms, Complexity and Games
Finer-Grained Reductions in Fine-Grained Hardness of Approximation

Authors: Elie Abboud and Noga Ron-Zewi

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We investigate the relation between δ and ε required for obtaining a (1+δ)-approximation in time N^{2-ε} for closest pair problems under various distance metrics, and for other related problems in fine-grained complexity. Specifically, our main result shows that if it is impossible to (exactly) solve the (bichromatic) inner product (IP) problem for vectors of dimension c log N in time N^{2-ε}, then there is no (1+δ)-approximation algorithm for (bichromatic) Euclidean Closest Pair running in time N^{2-2ε}, where δ ≈ (ε/c)² (here ≈ hides polylog factors). This improves on the prior result of Chen and Williams (SODA 2019), which gave hardness only for a much smaller δ, on the order of δ ≈ (ε/c)⁶. Our result implies in turn that no (1+δ)-approximation algorithm exists for Euclidean closest pair for δ ≈ ε⁴, unless an algorithmic improvement for IP is obtained. This is in turn very close to the approximation guarantee of δ ≈ ε³ for Euclidean closest pair given by the best known algorithm of Alman, Chan, and Williams (FOCS 2016). By known reductions, a similar result follows for a host of other related problems in fine-grained hardness of approximation. Our reduction combines the hardness-of-approximation framework of Chen and Williams with an MA communication protocol for IP over a small alphabet, inspired by the MA protocol of Chen (Theory of Computing, 2020).
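
To see what the improved dependence buys, a small numeric comparison (polylog factors dropped; the choice c = 4 is arbitrary, for illustration only):

def delta_new(eps, c):   # this paper: delta ~ (eps/c)^2
    return (eps / c) ** 2

def delta_old(eps, c):   # Chen-Williams (SODA 2019): delta ~ (eps/c)^6
    return (eps / c) ** 6

for eps in (0.5, 0.1, 0.01):
    print(eps, delta_new(eps, 4), delta_old(eps, 4))
# the new reduction yields hardness for a much larger, hence stronger, delta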

Cite as

Elie Abboud and Noga Ron-Zewi. Finer-Grained Reductions in Fine-Grained Hardness of Approximation. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ICALP.2024.7,
  author =	{Abboud, Elie and Ron-Zewi, Noga},
  title =	{{Finer-Grained Reductions in Fine-Grained Hardness of Approximation}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.7},
  URN =		{urn:nbn:de:0030-drops-201507},
  doi =		{10.4230/LIPIcs.ICALP.2024.7},
  annote =	{Keywords: Fine-grained complexity, conditional lower bound, fine-grained reduction, Approximation algorithms, Analysis of algorithms, Computational geometry, Computational and structural complexity theory}
}
Document
Track A: Algorithms, Complexity and Games
From Proof Complexity to Circuit Complexity via Interactive Protocols

Authors: Noel Arteche, Erfan Khaniki, Ján Pich, and Rahul Santhanam

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Folklore in complexity theory suspects that circuit lower bounds against NC¹ or P/poly, currently out of reach, are a necessary step towards proving strong proof complexity lower bounds for systems like Frege or Extended Frege. Establishing such a connection formally, however, is already daunting, as it would imply the breakthrough separation NEXP ⊈ P/poly, as recently observed by Pich and Santhanam [Pich and Santhanam, 2023]. We show such a connection conditionally for the Implicit Extended Frege proof system (iEF) introduced by Krajíček [Krajíček, 2004], capable of formalizing most of contemporary complexity theory. In particular, we show that if iEF proves efficiently the standard derandomization assumption that a concrete Boolean function is hard on average for subexponential-size circuits, then any superpolynomial lower bound on the length of iEF proofs implies #P ⊈ FP/poly (which would in turn imply, for example, PSPACE ⊈ P/poly). Our proof exploits the formalization inside iEF of the soundness of the sum-check protocol of Lund, Fortnow, Karloff, and Nisan [Lund et al., 1992]. This has consequences for the self-provability of circuit upper bounds in iEF. Interestingly, further improving our result seems to require progress in constructing interactive proof systems with more efficient provers.
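
For readers who have not seen it, here is a toy version of the sum-check protocol whose soundness the paper formalizes inside iEF. This is an honest brute-force prover over a prime field, a reminder of the protocol only, not the paper's formalization:

from random import randrange

P = 2**61 - 1  # prime modulus

def lagrange_eval(vals, x):
    # Evaluate the polynomial with p(i) = vals[i], i = 0..deg, at x (mod P).
    total = 0
    for i, yi in enumerate(vals):
        num, den = 1, 1
        for j in range(len(vals)):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def sumcheck(g, n, deg):
    # Verify H = sum of g over {0,1}^n with n rounds and one final query to g.
    claim = H = sum(g([(m >> j) & 1 for j in range(n)]) for m in range(2 ** n)) % P
    rs = []
    for i in range(n):
        # prover's round-i message: g_i(X) = sum over the remaining boolean
        # variables, sent as deg+1 evaluations
        def gi(X):
            return sum(g(rs + [X] + [(m >> j) & 1 for j in range(n - i - 1)])
                       for m in range(2 ** (n - i - 1))) % P
        vals = [gi(X) for X in range(deg + 1)]
        assert (vals[0] + vals[1]) % P == claim   # verifier's consistency check
        r = randrange(P)
        claim = lagrange_eval(vals, r)
        rs.append(r)
    assert claim == g(rs) % P                      # final check at a random point
    return H

g = lambda x: (x[0] + 2 * x[1] * x[2]) % P         # total degree 2
print(sumcheck(g, 3, deg=2))                       # prints 8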

Cite as

Noel Arteche, Erfan Khaniki, Ján Pich, and Rahul Santhanam. From Proof Complexity to Circuit Complexity via Interactive Protocols. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arteche_et_al:LIPIcs.ICALP.2024.12,
  author =	{Arteche, Noel and Khaniki, Erfan and Pich, J\'{a}n and Santhanam, Rahul},
  title =	{{From Proof Complexity to Circuit Complexity via Interactive Protocols}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{12:1--12:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.12},
  URN =		{urn:nbn:de:0030-drops-201557},
  doi =		{10.4230/LIPIcs.ICALP.2024.12},
  annote =	{Keywords: proof complexity, circuit complexity, interactive protocols}
}
Document
Track A: Algorithms, Complexity and Games
NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials

Authors: Omkar Baraskar, Agrim Dewan, Chandan Saha, and Pulkit Sinha

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
An s-sparse polynomial has at most s monomials with nonzero coefficients. The Equivalence Testing problem for sparse polynomials (ETsparse) asks to decide whether a given polynomial f is equivalent to (i.e., in the orbit of) some s-sparse polynomial. In other words, given f ∈ 𝔽[𝐱] and s ∈ ℕ, ETsparse asks to check whether there exist A ∈ GL(|𝐱|, 𝔽) and 𝐛 ∈ 𝔽^|𝐱| such that f(A𝐱 + 𝐛) is s-sparse. We show that ETsparse is NP-hard over any field 𝔽 if f is given in the sparse representation, i.e., as a list of nonzero coefficients and exponent vectors. This answers a question posed by Gupta, Saha and Thankey (SODA 2023) and also, more explicitly, by Baraskar, Dewan and Saha (STACS 2024). The result implies that the Minimum Circuit Size Problem (MCSP) is NP-hard for a dense subclass of depth-3 arithmetic circuits if the input is given in sparse representation. We also show that approximating the smallest s₀ such that a given s-sparse polynomial f is in the orbit of some s₀-sparse polynomial to within a factor of s^{1/3 - ε} is NP-hard for any ε > 0; observe that an s-factor approximation is trivial since the input is s-sparse. Finally, we show that for any constant σ ≥ 6, checking whether a polynomial (given in sparse representation) is in the orbit of some support-σ polynomial is NP-hard. The support of a polynomial f is the maximum number of variables appearing in any monomial of f. These results are obtained via direct reductions from the 3-SAT problem.
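
A toy instance may help unpack the orbit terminology (sympy; the matrix A below is a hypothetical example: f = (x+y)² has three monomials as written but is 1-sparse after an invertible change of variables):

import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + 2*x*y + y**2                  # three monomials as written
# A = [[1, -1], [0, 1]], b = 0 sends x -> x - y, y -> y (A is invertible)
g = sp.expand(f.subs(x, x - y))          # y is untouched, so one subs suffices
print(g)                                  # x**2: f lies in the orbit of a 1-sparse polynomial

ETsparse asks whether such a pair (A, 𝐛) exists; the paper shows deciding this is NP-hard even for sparsely given inputs.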

Cite as

Omkar Baraskar, Agrim Dewan, Chandan Saha, and Pulkit Sinha. NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 16:1-16:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{baraskar_et_al:LIPIcs.ICALP.2024.16,
  author =	{Baraskar, Omkar and Dewan, Agrim and Saha, Chandan and Sinha, Pulkit},
  title =	{{NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{16:1--16:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.16},
  URN =		{urn:nbn:de:0030-drops-201598},
  doi =		{10.4230/LIPIcs.ICALP.2024.16},
  annote =	{Keywords: Equivalence testing, MCSP, sparse polynomials, 3SAT}
}
Document
Track A: Algorithms, Complexity and Games
Fundamental Problems on Bounded-Treewidth Graphs: The Real Source of Hardness

Authors: Barış Can Esmer, Jacob Focke, Dániel Marx, and Paweł Rzążewski

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
It is known for many algorithmic problems that if a tree decomposition of width t is given in the input, then the problem can be solved with exponential dependence on t. A line of research initiated by Lokshtanov, Marx, and Saurabh [SODA 2011] produced lower bounds showing that in many cases known algorithms already achieve the best possible exponential dependence on t, assuming the Strong Exponential-Time Hypothesis (SETH). The main message of this paper is that the same lower bounds can already be obtained in a much more restricted setting: informally, a graph consisting of a block of t vertices connected to components of constant size already has the same hardness as a general tree decomposition of width t. Formally, a (σ,δ)-hub is a set Q of vertices such that every component of G − Q has size at most σ and is adjacent to at most δ vertices of Q. We explore whether the known tight lower bounds parameterized by the width of the given tree decomposition remain valid if we parameterize by the size of the given hub.
- For every ε > 0, there are σ,δ > 0 such that Independent Set (equivalently Vertex Cover) cannot be solved in time (2-ε)^p⋅n, even if a (σ,δ)-hub of size p is given in the input, assuming the SETH. This matches the earlier tight lower bounds parameterized by the width of the tree decomposition. Similar tight bounds are obtained for Odd Cycle Transversal, Max Cut, q-Coloring, and the edge/vertex-deletion versions of q-Coloring.
- For every ε > 0, there are σ,δ > 0 such that △-Partition cannot be solved in time (2-ε)^p⋅n, even if a (σ,δ)-hub of size p is given in the input, assuming the Set Cover Conjecture (SCC). In fact, we prove that this statement is equivalent to the SCC, so it is unlikely that it could be proved assuming the SETH.
- For Dominating Set, we can prove a non-tight lower bound ruling out (2-ε)^p⋅n^𝒪(1) algorithms, assuming either the SETH or the SCC, but this does not match the 3^p⋅n^{𝒪(1)} upper bound.
Thus our results reveal that, for many problems, the research on lower bounds for the dependence on treewidth was never really about tree decompositions: the real source of hardness is a much simpler structure. Additionally, we study whether the same lower bounds can be obtained when σ and δ are fixed universal constants (not depending on ε). We show that lower bounds of this form are possible for Max Cut and the edge-deletion version of q-Coloring, under the Max 3-Sat Hypothesis (M3SH). However, no such lower bounds are possible for Independent Set, Odd Cycle Transversal, and the vertex-deletion version of q-Coloring: better-than-brute-force algorithms are possible for every fixed (σ,δ).
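
The hub definition is easy to operationalize; a minimal checker (networkx; is_hub and the star example are illustrative, not from the paper):

import networkx as nx

def is_hub(G, Q, sigma, delta):
    # Q is a (sigma, delta)-hub if every component of G - Q has at most
    # sigma vertices and is adjacent to at most delta vertices of Q.
    rest = G.subgraph(set(G) - set(Q))
    for comp in nx.connected_components(rest):
        if len(comp) > sigma:
            return False
        boundary = {q for q in Q if any(G.has_edge(v, q) for v in comp)}
        if len(boundary) > delta:
            return False
    return True

G = nx.star_graph(6)                     # centre 0 with six leaves
print(is_hub(G, {0}, sigma=1, delta=1))  # True: leaves are singleton components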

Cite as

Barış Can Esmer, Jacob Focke, Dániel Marx, and Paweł Rzążewski. Fundamental Problems on Bounded-Treewidth Graphs: The Real Source of Hardness. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 34:1-34:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{canesmer_et_al:LIPIcs.ICALP.2024.34,
  author =	{Can Esmer, Bar{\i}\c{s} and Focke, Jacob and Marx, D\'{a}niel and Rz\k{a}\.{z}ewski, Pawe{\l}},
  title =	{{Fundamental Problems on Bounded-Treewidth Graphs: The Real Source of Hardness}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{34:1--34:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.34},
  URN =		{urn:nbn:de:0030-drops-201772},
  doi =		{10.4230/LIPIcs.ICALP.2024.34},
  annote =	{Keywords: Parameterized Complexity, Tight Bounds, Hub, Treewidth, Strong Exponential Time Hypothesis, Vertex Coloring, Vertex Deletion, Edge Deletion, Triangle Packing, Triangle Partition, Set Cover Hypothesis, Dominating Set}
}
Document
Track A: Algorithms, Complexity and Games
A Tight Subexponential-Time Algorithm for Two-Page Book Embedding

Authors: Robert Ganian, Haiko Müller, Sebastian Ordyniak, Giacomo Paesani, and Mateusz Rychlicki

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
A book embedding of a graph is a drawing that maps vertices onto a line and edges to simple pairwise non-crossing curves drawn into "pages", which are half-planes bounded by that line. Two-page book embeddings, i.e., book embeddings into 2 pages, are of special importance as they are both NP-hard to compute and have specific applications. We obtain a 2^𝒪(√n)-time algorithm for computing a book embedding of an n-vertex graph on two pages, a result which is asymptotically tight under the Exponential Time Hypothesis. As a key tool in our approach, we obtain a single-exponential fixed-parameter algorithm for the same problem when parameterized by the treewidth of the input graph. We conclude by establishing the fixed-parameter tractability of computing minimum-page book embeddings when parameterized by the feedback edge number, settling an open question arising from previous work on the problem.
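
To make the object concrete, here is a brute-force recognizer for two-page embeddability: exponential in n, nothing like the paper's 2^𝒪(√n)-time algorithm, purely a definition check:

from itertools import permutations

def two_page_embeddable(vertices, edges):
    # For a fixed spine order, two edges may share a page iff they do not
    # interleave, so we need the edge conflict graph to be bipartite.
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        spans = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
        def cross(s, t):
            (a, b), (c, d) = sorted((s, t))
            return a < c < b < d                 # strict interleaving
        colour, ok = {}, True
        for i in range(len(spans)):              # 2-colour the conflict graph
            if i in colour:
                continue
            colour[i] = 0
            stack = [i]
            while stack and ok:
                j = stack.pop()
                for k in range(len(spans)):
                    if k != j and cross(spans[j], spans[k]):
                        if k not in colour:
                            colour[k] = 1 - colour[j]
                            stack.append(k)
                        elif colour[k] == colour[j]:
                            ok = False
                            break
            if not ok:
                break
        if ok:
            return True
    return False

K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(two_page_embeddable(range(4), K4))  # True: K4 fits on two pages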

Cite as

Robert Ganian, Haiko Müller, Sebastian Ordyniak, Giacomo Paesani, and Mateusz Rychlicki. A Tight Subexponential-Time Algorithm for Two-Page Book Embedding. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 68:1-68:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ganian_et_al:LIPIcs.ICALP.2024.68,
  author =	{Ganian, Robert and M\"{u}ller, Haiko and Ordyniak, Sebastian and Paesani, Giacomo and Rychlicki, Mateusz},
  title =	{{A Tight Subexponential-Time Algorithm for Two-Page Book Embedding}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{68:1--68:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.68},
  URN =		{urn:nbn:de:0030-drops-202114},
  doi =		{10.4230/LIPIcs.ICALP.2024.68},
  annote =	{Keywords: book embedding, page number, subexponential algorithms, subhamiltonicity, feedback edge number}
}
Document
Track A: Algorithms, Complexity and Games
BQP, Meet NP: Search-To-Decision Reductions and Approximate Counting

Authors: Sevag Gharibian and Jonas Kamminga

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
What is the power of polynomial-time quantum computation with access to an NP oracle? In this work, we focus on two fundamental tasks from the study of Boolean satisfiability (SAT) problems: search-to-decision reductions, and approximate counting. We first show that, in strong contrast to the classical setting where a poly-time Turing machine requires Θ(n) queries to an NP oracle to compute a witness to a given SAT formula, quantumly Θ(log n) queries suffice. We then show this is tight in the black-box model - any quantum algorithm with "NP-like" query access to a formula requires Ω(log n) queries to extract a solution with constant probability. Moving to approximate counting of SAT solutions, by exploiting a quantum link between search-to-decision reductions and approximate counting, we show that existing classical approximate counting algorithms are likely optimal. First, we give a lower bound in the "NP-like" black-box query setting: Approximate counting requires Ω(log n) queries, even on a quantum computer. We then give a "white-box" lower bound (i.e. where the input formula is not hidden in the oracle) - if there exists a randomized poly-time classical or quantum algorithm for approximate counting making o(log n) NP queries, then BPP^NP[o(n)] contains a 𝖯^NP-complete problem if the algorithm is classical and FBQP^NP[o(n)] contains an FP^NP-complete problem if the algorithm is quantum.
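
The classical Θ(n) baseline is the textbook self-reduction; a sketch in which the brute-force sat_oracle stands in for one NP-oracle query, assuming the input formula is satisfiable:

from itertools import product

def sat_oracle(clauses, n, fixed):
    # "Is the formula satisfiable with the variables in `fixed` pinned?"
    # Conditioning a CNF is again a SAT question, i.e. one NP query.
    for bits in product((False, True), repeat=n):
        if any(bits[i] != v for i, v in fixed.items()):
            continue
        if all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

def find_witness(clauses, n):
    # Fix one variable per adaptive query: n queries in total. The paper
    # shows a quantum algorithm needs only Theta(log n) such queries.
    fixed = {}
    for i in range(n):
        fixed[i] = True
        if not sat_oracle(clauses, n, fixed):
            fixed[i] = False
    return [fixed[i] for i in range(n)]

clauses = [[1, 2], [-1, 3]]               # (x1 or x2) and (not x1 or x3)
print(find_witness(clauses, 3))           # [True, True, True] satisfies both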

Cite as

Sevag Gharibian and Jonas Kamminga. BQP, Meet NP: Search-To-Decision Reductions and Approximate Counting. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 70:1-70:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gharibian_et_al:LIPIcs.ICALP.2024.70,
  author =	{Gharibian, Sevag and Kamminga, Jonas},
  title =	{{BQP, Meet NP: Search-To-Decision Reductions and Approximate Counting}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{70:1--70:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.70},
  URN =		{urn:nbn:de:0030-drops-202134},
  doi =		{10.4230/LIPIcs.ICALP.2024.70},
  annote =	{Keywords: Approximate Counting, Search to Decision Reduction, BQP, NP, Oracle Complexity Class}
}
Document
Track A: Algorithms, Complexity and Games
Fully Dynamic Strongly Connected Components in Planar Digraphs

Authors: Adam Karczmarz and Marcin Smulewicz

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this paper we consider maintaining strongly connected components (SCCs) of a directed planar graph subject to edge insertions and deletions. We show a data structure maintaining an implicit representation of the SCCs within Õ(n^{6/7}) worst-case time per update. The data structure supports, in O(log² n) time, reporting vertices of any specified SCC (with constant overhead per reported vertex) and aggregating vertex information (e.g., computing the maximum label) over all the vertices of that SCC. Furthermore, it can maintain global information about the structure of SCCs, such as the number of SCCs or the size of the largest SCC. To the best of our knowledge, no fully dynamic SCC data structures with sublinear update time have been previously known for any major subclass of digraphs. Our result should be contrasted with the n^{1-o(1)} amortized update-time lower bound conditional on SETH, which holds even for dynamically maintaining whether a general digraph has more than two SCCs.

Cite as

Adam Karczmarz and Marcin Smulewicz. Fully Dynamic Strongly Connected Components in Planar Digraphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 95:1-95:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{karczmarz_et_al:LIPIcs.ICALP.2024.95,
  author =	{Karczmarz, Adam and Smulewicz, Marcin},
  title =	{{Fully Dynamic Strongly Connected Components in Planar Digraphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{95:1--95:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.95},
  URN =		{urn:nbn:de:0030-drops-202388},
  doi =		{10.4230/LIPIcs.ICALP.2024.95},
  annote =	{Keywords: dynamic strongly connected components, dynamic strong connectivity, dynamic reachability, planar graphs}
}
Document
Track A: Algorithms, Complexity and Games
Polylogarithmic Approximations for Robust s-t Path

Authors: Shi Li, Chenyang Xu, and Ruilong Zhang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
The paper revisits the Robust s-t Path problem, one of the most fundamental problems in robust optimization. In the problem, we are given a directed graph with n vertices and k distinct cost functions (scenarios) defined over edges, and aim to choose an s-t path whose total cost stays small no matter which scenario is realized. Viewing each cost function as an agent, our goal is to find a fair s-t path that minimizes the maximum cost among all agents. The problem is NP-hard to approximate within a factor of o(log k) unless NP ⊆ DTIME(n^{polylog n}), and the best-known approximation ratio is Õ(√n), based on the natural flow linear program. A longstanding open question is whether a polylogarithmic approximation is achievable; this remains open even if quasi-polynomial running time is allowed. Our main result is an O(log n log k) approximation for the Robust s-t Path problem in quasi-polynomial time, resolving the open question in the quasi-polynomial-time regime. The algorithm is built on a novel linear program formulation for a decision-tree-type structure, which enables us to overcome the Ω(√n) integrality gap of the natural flow LP. Furthermore, we show that for graphs with bounded treewidth, the quasi-polynomial running time can be improved to polynomial. We hope our techniques can offer new insights into this problem and other related problems in robust optimization.
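
The min-max objective on a toy instance (brute force over simple paths; the 4-vertex digraph and its costs are made up for illustration):

def robust_path(arcs, costs, s, t):
    # Return (worst-case cost, arc indices) of the min-max s-t path.
    best = (float('inf'), None)
    def dfs(v, path, used):
        nonlocal best
        if v == t:
            worst = max(sum(c[e] for e in path) for c in costs)
            best = min(best, (worst, list(path)))
            return
        for e, (a, b) in enumerate(arcs):
            if a == v and b not in used:
                used.add(b); path.append(e)
                dfs(b, path, used)
                path.pop(); used.remove(b)
    dfs(s, [], {s})
    return best

arcs = [(0, 1), (1, 3), (0, 2), (2, 3), (0, 3)]
costs = [{0: 1, 1: 1, 2: 5, 3: 5, 4: 4},   # scenario 1
         {0: 5, 1: 5, 2: 1, 3: 1, 4: 4}]   # scenario 2
print(robust_path(arcs, costs, 0, 3))
# (4, [4]): each two-arc route costs 10 in its bad scenario; the direct arc
# costs 4 in both, so the robust optimum picks it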

Cite as

Shi Li, Chenyang Xu, and Ruilong Zhang. Polylogarithmic Approximations for Robust s-t Path. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 106:1-106:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li_et_al:LIPIcs.ICALP.2024.106,
  author =	{Li, Shi and Xu, Chenyang and Zhang, Ruilong},
  title =	{{Polylogarithmic Approximations for Robust s-t Path}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{106:1--106:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.106},
  URN =		{urn:nbn:de:0030-drops-202497},
  doi =		{10.4230/LIPIcs.ICALP.2024.106},
  annote =	{Keywords: Approximation Algorithm, Randomized LP Rounding, Robust s-t Path}
}
Document
Track A: Algorithms, Complexity and Games
Improved Lower Bounds for Approximating Parameterized Nearest Codeword and Related Problems Under ETH

Authors: Shuangle Li, Bingkai Lin, and Yuwei Liu

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this paper we present a new gap-creating randomized self-reduction for the parameterized Maximum Likelihood Decoding problem over 𝔽_p (k-MLD_p). The reduction takes a k-MLD_p instance with k⋅n d-dimensional vectors as input, runs in O(d2^{O(k)}n^{1.01}) time, and outputs a (3/2-ε)-Gap-k'-MLD_p instance for any ε > 0, where k' = O(k²log k). Using this reduction, we show that, assuming the randomized Exponential Time Hypothesis (ETH), no algorithm can approximate k-MLD_p (and therefore its dual problem k-NCP_p) within factor (3/2-ε) in f(k)⋅n^{o(√{k/log k})} time for any ε > 0 and any computable function f. We then use the reduction by Bhattacharyya, Ghoshal, Karthik and Manurangsi (ICALP 2018) to amplify the (3/2-ε)-gap to any constant. As a result, we show that, assuming ETH, no algorithm can approximate k-NCP_p within any constant factor γ in f(k)⋅n^{o(k^{ε_γ})} time, for some constant ε_γ > 0 depending on γ. Combining this with the gap-preserving reduction by Bennett, Cheraghchi, Guruswami and Ribeiro (STOC 2023), we also obtain similar lower bounds for k-MDP_p, k-CVP_p and k-SVP_p. These results improve upon the previous f(k)⋅n^{Ω(poly log k)} lower bounds for these problems under ETH, obtained via the reductions of Bhattacharyya et al. (J. ACM 2021) and Bennett et al. (STOC 2023).

Cite as

Shuangle Li, Bingkai Lin, and Yuwei Liu. Improved Lower Bounds for Approximating Parameterized Nearest Codeword and Related Problems Under ETH. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 107:1-107:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li_et_al:LIPIcs.ICALP.2024.107,
  author =	{Li, Shuangle and Lin, Bingkai and Liu, Yuwei},
  title =	{{Improved Lower Bounds for Approximating Parameterized Nearest Codeword and Related Problems Under ETH}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{107:1--107:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.107},
  URN =		{urn:nbn:de:0030-drops-202500},
  doi =		{10.4230/LIPIcs.ICALP.2024.107},
  annote =	{Keywords: Nearest Codeword Problem, Hardness of Approximations, Fine-grained Complexity, Parameterized Complexity, Minimum Distance Problem, Shortest Vector Problem}
}
Document
Track A: Algorithms, Complexity and Games
No Polynomial Kernels for Knapsack

Authors: Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
This paper focuses on kernelization algorithms for the fundamental Knapsack problem. A kernelization algorithm (or kernel) is a polynomial-time reduction from a problem onto itself, where the output size is bounded by a function of some problem-specific parameter. Such algorithms provide a theoretical model for data reduction and preprocessing and are central in the area of parameterized complexity. Concretely, a kernel for Knapsack with respect to some parameter k reduces any instance of Knapsack to an equivalent instance of size at most f(k) in polynomial time, for some computable function f. When f(k) = k^{O(1)}, we call such a reduction a polynomial kernel. Our study focuses on two natural parameters for Knapsack: the number w_# of different item weights, and the number p_# of different item profits. Our main technical contribution is a proof that Knapsack does not admit a polynomial kernel for either of these two parameters under standard complexity-theoretic assumptions. Our proof gives an elaborate application of the standard kernelization lower-bound framework and develops along the way novel ideas that should be useful for other problems as well. We complement our lower bounds by showing that Knapsack admits a polynomial kernel for the combined parameter w_# ⋅ p_#.

Cite as

Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay. No Polynomial Kernels for Knapsack. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 83:1-83:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{heeger_et_al:LIPIcs.ICALP.2024.83,
  author =	{Heeger, Klaus and Hermelin, Danny and Mnich, Matthias and Shabtay, Dvir},
  title =	{{No Polynomial Kernels for Knapsack}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{83:1--83:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.83},
  URN =		{urn:nbn:de:0030-drops-202261},
  doi =		{10.4230/LIPIcs.ICALP.2024.83},
  annote =	{Keywords: Knapsack, polynomial kernels, compositions, number of different weights, number of different profits}
}
Document
Track A: Algorithms, Complexity and Games
Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity

Authors: Zhenjian Lu and Rahul Santhanam

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We develop new characterizations of Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland by the intractability of conditional Kolmogorov complexity 𝖪 and conditional probabilistic time-bounded Kolmogorov complexity pK^t. In our first set of results, we show that NP ⊆ BPP iff pK^t(x ∣ y) can be computed efficiently in the worst case when t is sublinear in |x| + |y|; DistNP ⊆ HeurBPP iff pK^t(x ∣ y) can be computed efficiently over all polynomial-time samplable distributions when t is sublinear in |x| + |y|; and infinitely-often one-way functions fail to exist iff pK^t(x ∣ y) can be computed efficiently over all polynomial-time samplable distributions for t a sufficiently large polynomial in |x| + |y|. These results characterize Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland purely in terms of the tractability of conditional pK^t. Notably, the results imply that Pessiland fails to exist iff the average-case intractability of conditional pK^t is insensitive to the difference between sublinear and polynomially bounded t. As a corollary, while we prove conditional pK^t to be NP-hard for sublinear t, showing NP-hardness for large enough polynomially bounded t would eliminate Pessiland as a possible world of average-case complexity. In our second set of results, we characterize Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland by the distributional tractability of a natural problem, i.e., approximating the conditional Kolmogorov complexity, that is provably intractable in the worst case. We show that NP ⊆ BPP iff conditional Kolmogorov complexity can be approximated in the semi-worst case; and DistNP ⊆ HeurBPP iff conditional Kolmogorov complexity can be approximated on average over all independent polynomial-time samplable distributions. It follows from a result by Ilango, Ren, and Santhanam (STOC 2022) that infinitely-often one-way functions fail to exist iff conditional Kolmogorov complexity can be approximated on average over all polynomial-time samplable distributions. Together, these results yield the claimed characterizations. Our techniques, combined with previous work, also yield a characterization of auxiliary-input one-way functions and equivalences between different average-case tractability assumptions for conditional Kolmogorov complexity and its variants. Our results suggest that novel average-case tractability assumptions such as tractability in the semi-worst case and over independent polynomial-time samplable distributions might be worthy of further study.

Cite as

Zhenjian Lu and Rahul Santhanam. Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 110:1-110:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{lu_et_al:LIPIcs.ICALP.2024.110,
  author =	{Lu, Zhenjian and Santhanam, Rahul},
  title =	{{Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{110:1--110:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.110},
  URN =		{urn:nbn:de:0030-drops-202538},
  doi =		{10.4230/LIPIcs.ICALP.2024.110},
  annote =	{Keywords: meta-complexity, Kolmogorov complexity, one-way functions, average-case complexity}
}
Document
Track A: Algorithms, Complexity and Games
Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle

Authors: Aaron Potechin and Aaron Zhang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We show that the minimum total coefficient size of a Nullstellensatz proof of the pigeonhole principle on n+1 pigeons and n holes is 2^{Θ(n)}. We also investigate the ordering principle and construct an explicit Nullstellensatz proof for the ordering principle on n elements with total coefficient size 2ⁿ - n.
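
What such a certificate looks like in the smallest case (two pigeons, one hole; one standard encoding, checked with sympy; the paper's exact conventions may differ):

import sympy as sp

x11, x21 = sp.symbols('x11 x21')              # x_ij = "pigeon i sits in hole j"
axioms = [1 - x11, 1 - x21, x11 * x21]        # pigeons placed; hole not shared
coeffs = [x21, sp.Integer(1), sp.Integer(1)]  # Nullstellensatz multipliers P_i
combo = sp.expand(sum(P * A for P, A in zip(coeffs, axioms)))
print(combo)  # 1, certifying that the axioms have no common root
# total coefficient size of this certificate: |1| + |1| + |1| = 3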

Cite as

Aaron Potechin and Aaron Zhang. Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 117:1-117:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{potechin_et_al:LIPIcs.ICALP.2024.117,
  author =	{Potechin, Aaron and Zhang, Aaron},
  title =	{{Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{117:1--117:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.117},
  URN =		{urn:nbn:de:0030-drops-202604},
  doi =		{10.4230/LIPIcs.ICALP.2024.117},
  annote =	{Keywords: Proof complexity, Nullstellensatz, pigeonhole principle, coefficient size}
}
  • Refine by Author
  • 2 Arteche, Noel
  • 2 Martin, Russell
  • 2 Santhanam, Rahul
  • 1 Abboud, Elie
  • 1 Alipanahi, Bahar

  • Refine by Classification
  • 4 Theory of computation → Parameterized complexity and exact algorithms
  • 3 Theory of computation → Problems, reductions and completeness
  • 3 Theory of computation → Proof complexity
  • 2 Theory of computation → Complexity theory and logic
  • 2 Theory of computation → Computational complexity and cryptography

  • Refine by Keyword
  • 2 Parameterized Complexity
  • 1 3SAT
  • 1 Analysis of algorithms
  • 1 Applications of logics
  • 1 Approximate Counting

  • Refine by Type
  • 22 document

  • Refine by Publication Year
  • 18 2024
  • 2 2017
  • 2 2018