LIPIcs, Volume 327

42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)




Event

STACS 2025, March 4-7, 2025, Jena, Germany

Editors

Olaf Beyersdorff
  • Friedrich Schiller University Jena, Germany
Michał Pilipczuk
  • University of Warsaw, Poland
Elaine Pimentel
  • University College London, UK
Nguyễn Kim Thắng
  • Grenoble INP, Université Grenoble-Alpes, France

Publication Details

  • published at: 2025-02-24
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-365-2
  • DBLP: db/conf/stacs/stacs2025


Documents

Document
Complete Volume
LIPIcs, Volume 327, STACS 2025, Complete Volume

Authors: Olaf Beyersdorff, Michał Pilipczuk, Elaine Pimentel, and Nguyễn Kim Thắng


Abstract
LIPIcs, Volume 327, STACS 2025, Complete Volume

Cite as

42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 1-1388, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@Proceedings{beyersdorff_et_al:LIPIcs.STACS.2025,
  title =	{{LIPIcs, Volume 327, STACS 2025, Complete Volume}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{1--1388},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025},
  URN =		{urn:nbn:de:0030-drops-229403},
  doi =		{10.4230/LIPIcs.STACS.2025},
  annote =	{Keywords: LIPIcs, Volume 327, STACS 2025, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Olaf Beyersdorff, Michał Pilipczuk, Elaine Pimentel, and Nguyễn Kim Thắng


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 0:i-0:xxvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{beyersdorff_et_al:LIPIcs.STACS.2025.0,
  author =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{0:i--0:xxvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.0},
  URN =		{urn:nbn:de:0030-drops-229393},
  doi =		{10.4230/LIPIcs.STACS.2025.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Invited Talk
Proof Complexity and Its Relations to SAT Solving (Invited Talk)

Authors: Albert Atserias


Abstract
Propositional proof complexity is the study of algorithms that recognize the set of tautologies in propositional logic. Initially conceived as an approach to address the "P versus NP" problem in computational complexity, the field has gradually expanded its focus to include new objectives. Among these is the goal of providing a theoretical foundation for comparing the effectiveness of heuristics for algorithms that exhaustively explore the solution spaces of combinatorial problems. Dually, and complementarily, the methods of proof complexity can also be used to assess how to certify that a given exploration path of such an algorithm ultimately leads to a dead end. A notable challenge faced by this methodology lies in the fact that, despite the theoretically proved modelling power of propositional logic, as established by the theory of NP-completeness, propositional logic is not always the best specification language for all application domains. Addressing this challenge involves studying the expressive power of various languages and their associated proof systems through the lens of computational complexity. The first part of this talk will be a survey of how the emergence of these new objectives for propositional proof complexity came to be, and what the theory’s methods offer in pursuing them. The second part will review the current state of the art on the computational complexity of automating the proof search problem for various proof systems for propositional logic and other languages. While it is now known and well understood that fully automating propositional Resolution as a proof system for propositional logic is NP-hard, it remains an open question whether it is possible to distinguish satisfiable formulas from unsatisfiable ones with short Resolution proofs of unsatisfiability in polynomial time. As of the time of writing, there is no consensus among experts on whether this problem should be considered computationally intractable.

Cite as

Albert Atserias. Proof Complexity and Its Relations to SAT Solving (Invited Talk). In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{atserias:LIPIcs.STACS.2025.1,
  author =	{Atserias, Albert},
  title =	{{Proof Complexity and Its Relations to SAT Solving}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.1},
  URN =		{urn:nbn:de:0030-drops-228260},
  doi =		{10.4230/LIPIcs.STACS.2025.1},
  annote =	{Keywords: Propositional logic, proof systems, Resolution, cutting planes, integer linear programming, automatability, NP-hardness, satisfiability, heuristics, search}
}
Invited Talk
A Strongly Polynomial Algorithm for Linear Programs with at Most Two Non-Zero Entries per Row or Column (Invited Talk)

Authors: Daniel Dadush, Zhuan Khye Koh, Bento Natura, Neil Olver, and László A. Végh


Abstract
We give a strongly polynomial algorithm for minimum cost generalized flow, and hence for optimizing any linear program with at most two non-zero entries per row, or at most two non-zero entries per column. Primal and dual feasibility were shown by Végh (MOR '17) and Megiddo (SICOMP '83) respectively. Our result can be viewed as progress towards understanding whether all linear programs can be solved in strongly polynomial time, also referred to as Smale’s 9th problem. Our approach is based on the recent primal-dual interior point method (IPM) due to Allamigeon, Dadush, Loho, Natura and Végh (FOCS '22). The number of iterations needed by the IPM is bounded, up to a polynomial factor in the number of inequalities, by the straight line complexity of the central path. Roughly speaking, this is the minimum number of pieces of any piecewise linear curve that multiplicatively approximates the central path. As our main contribution, we show that the straight line complexity of any minimum cost generalized flow instance is polynomial in the number of arcs and vertices. By applying a reduction of Hochbaum (ORL '04), the same bound applies to any linear program with at most two non-zeros per column or per row. To be able to run the IPM, one requires a suitable initial point. For this purpose, we develop a novel multistage approach, where each stage can be solved in strongly polynomial time given the result of the previous stage. Beyond this, substantial work is needed to ensure that the bit complexity of each iterate remains bounded during the execution of the algorithm. For this purpose, we show that one can maintain a representation of the iterates as a low complexity convex combination of vertices and extreme rays. Our approach is black-box and can be applied to any log-barrier path following method.
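As a concrete illustration of the problem class (the check below is ours, not part of the paper), a matrix belongs to it when every row, or every column, has at most two non-zero entries; generalized flow matrices qualify because each arc column touches only its two endpoint vertices:

```python
# Illustrative check (hypothetical helper, not from the paper): does a
# constraint matrix A have at most two non-zero entries per row, or at
# most two non-zero entries per column?
def in_two_nonzero_class(A):
    rows_ok = all(sum(1 for x in row if x != 0) <= 2 for row in A)
    cols_ok = all(sum(1 for row in A if row[j] != 0) <= 2
                  for j in range(len(A[0])))
    return rows_ok or cols_ok

# A generalized-flow style matrix: each column (arc) touches two vertices.
A = [
    [ 1.0,  0.0, -2.0],
    [-0.5,  1.0,  0.0],
    [ 0.0, -1.0,  1.0],
]
print(in_two_nonzero_class(A))  # True: every row and column has <= 2 non-zeros
```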

Cite as

Daniel Dadush, Zhuan Khye Koh, Bento Natura, Neil Olver, and László A. Végh. A Strongly Polynomial Algorithm for Linear Programs with at Most Two Non-Zero Entries per Row or Column (Invited Talk). In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dadush_et_al:LIPIcs.STACS.2025.2,
  author =	{Dadush, Daniel and Koh, Zhuan Khye and Natura, Bento and Olver, Neil and V\'{e}gh, L\'{a}szl\'{o} A.},
  title =	{{A Strongly Polynomial Algorithm for Linear Programs with at Most Two Non-Zero Entries per Row or Column}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.2},
  URN =		{urn:nbn:de:0030-drops-228273},
  doi =		{10.4230/LIPIcs.STACS.2025.2},
  annote =	{Keywords: Linear Programming, Strongly Polynomial Algorithms, Interior Point Methods}
}
Invited Talk
Algebras for Automata: Reasoning with Regularity (Invited Talk)

Authors: Anupam Das


Abstract
In the second half of the 20th century various theories of regular expressions were proposed, eventually leading to the notion of a Kleene Algebra (KA). Kozen and Krob independently proved the completeness of KA for the model of regular languages, a now celebrated result that has been refined and generalised over the years. In recent years proof theoretic approaches to regular languages have been studied, providing alternative routes to metalogical results like completeness and decidability. In this talk I will present a new approach from a different starting point: finite state automata. A notation for non-deterministic finite automata is readily obtained via expressions with least fixed points, leading to a theory of right-linear algebras (RLA). RLA is strictly more general than KA, e.g. admitting ω-regular languages as a model too, and enjoys a simpler proof theory than KA. This allows us to recover (more general) metalogical results in a robust way, combining techniques from automata, games, and cyclic proofs. Finally I will discuss extensions of RLA by greatest fixed points, comprising a notation for parity automata, to reason about ω-regular languages too. This talk is based on joint works with Abhishek De [Anupam Das and Abhishek De, 2024a and 2024b].

Cite as

Anupam Das. Algebras for Automata: Reasoning with Regularity (Invited Talk). In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{das:LIPIcs.STACS.2025.3,
  author =	{Das, Anupam},
  title =	{{Algebras for Automata: Reasoning with Regularity}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.3},
  URN =		{urn:nbn:de:0030-drops-228281},
  doi =		{10.4230/LIPIcs.STACS.2025.3},
  annote =	{Keywords: Regular languages, Linear grammars, Proof theory, Cyclic proofs, Automata theory, Fixed points, Games}
}
Invited Talk
Some Recent Advancements in Monotone Circuit Complexity (Invited Talk)

Authors: Susanna F. de Rezende


Abstract
In 1985, Razborov [Razborov, 1985] proved the first superpolynomial size lower bound for monotone Boolean circuits for the perfect matching and clique functions, and, independently, Andreev [Andreev, 1985] obtained exponential size lower bounds. These breakthroughs were soon followed by further advancements in monotone complexity, including better lower bounds for clique [Alon and Boppana, 1987; Ingo Wegener, 1987], superlogarithmic depth lower bounds for connectivity by Karchmer and Wigderson [Karchmer and Wigderson, 1990], and the separations mon-NC ≠ mon-P and mon-NC^i ≠ mon-NC^{i+1} by Raz and McKenzie [Ran Raz and Pierre McKenzie, 1999]. Karchmer and Wigderson [Karchmer and Wigderson, 1990] proved their result by establishing a relation between communication complexity and (monotone) circuit depth, and Raz and McKenzie [Ran Raz and Pierre McKenzie, 1999] introduced a new technique, now called lifting theorems, for obtaining communication lower bounds from query complexity lower bounds. In this talk, we will survey recent advancements in monotone complexity driven by query-to-communication lifting theorems. A decade ago, Göös, Pitassi, and Watson [Mika Göös et al., 2018] brought to light the generality of the result of Raz and McKenzie [Ran Raz and Pierre McKenzie, 1999] and reignited this line of work. A notable extension is the lifting theorem [Ankit Garg et al., 2020] for a model of DAG-like communication [Alexander A. Razborov, 1995; Dmitry Sokolov, 2017] that corresponds to circuit size. These powerful theorems, in their different flavours, have been instrumental in addressing many open questions in monotone circuit complexity, including: optimal 2^Ω(n) lower bounds on the size of monotone Boolean formulas computing an explicit function in NP [Toniann Pitassi and Robert Robere, 2017]; a complete picture of the relation between the mon-AC and mon-NC hierarchies [Susanna F. de Rezende et al., 2016]; a near optimal separation between monotone circuit and monotone formula size [Susanna F. de Rezende et al., 2020]; exponential separation between NC^2 and mon-P [Ankit Garg et al., 2020; Mika Göös et al., 2019]; and better lower bounds for clique [de Rezende and Vinyals, 2025; Lovett et al., 2022], improving on [Cavalar et al., 2021]. Very recently, lifting theorems were also used to prove supercritical trade-offs for monotone circuits, showing that there are functions computable by small circuits for which any small circuit must have superlinear or even superpolynomial depth [de Rezende et al., 2024; Göös et al., 2024]. We will explore these results and their implications, and conclude by discussing some open problems.

Cite as

Susanna F. de Rezende. Some Recent Advancements in Monotone Circuit Complexity (Invited Talk). In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 4:1-4:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{derezende:LIPIcs.STACS.2025.4,
  author =	{de Rezende, Susanna F.},
  title =	{{Some Recent Advancements in Monotone Circuit Complexity}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{4:1--4:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.4},
  URN =		{urn:nbn:de:0030-drops-228291},
  doi =		{10.4230/LIPIcs.STACS.2025.4},
  annote =	{Keywords: monotone circuit complexity, query complexity, lifting theorems}
}
Parameterized Saga of First-Fit and Last-Fit Coloring

Authors: Akanksha Agrawal, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Shaily Verma


Abstract
The classic greedy coloring algorithm considers the vertices of an input graph G in a given order and assigns the first available color to each vertex v in G. In the Grundy Coloring problem, the task is to find an ordering of the vertices that will force the greedy algorithm to use as many colors as possible. In Partial Grundy Coloring, the task is also to color the graph using as many colors as possible. This time, however, we may select both the ordering in which the vertices are considered and which color to assign the vertex. The only constraint is that the color assigned to a vertex v is a color previously used for another vertex if such a color is available. Whether Grundy Coloring and Partial Grundy Coloring admit fixed-parameter tractable (FPT) algorithms, that is, algorithms with running time f(k)n^O(1), where k is the number of colors, was posed as an open problem by Zaker and by Effantin et al., respectively. Recently, Aboulker et al. (STACS 2020 and Algorithmica 2022) resolved the question for Grundy Coloring in the negative by showing that the problem is W[1]-hard. For Partial Grundy Coloring, they obtain an FPT algorithm on graphs that do not contain K_{i,j} as a subgraph (a.k.a. K_{i,j}-free graphs). Aboulker et al. re-iterate the question of whether there exists an FPT algorithm for Partial Grundy Coloring on general graphs and also ask whether Grundy Coloring admits an FPT algorithm on K_{i,j}-free graphs. We give FPT algorithms for Partial Grundy Coloring on general graphs and for Grundy Coloring on K_{i,j}-free graphs, resolving both questions in the affirmative. We believe that our new structural theorems for partial Grundy coloring and "representative-family" like sets for K_{i,j}-free graphs that we use in obtaining our results may have wider algorithmic applications.
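The first-fit procedure is easy to state in code. The sketch below (an illustration, not the paper's FPT algorithms) colors vertices in a given order; the Grundy number of a graph is the largest number of colors this procedure can be forced to use over all orderings:

```python
# First-fit greedy coloring for a fixed vertex ordering (illustrative sketch).
def first_fit_coloring(adj, order):
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:  # smallest color missing among colored neighbors
            c += 1
        color[v] = c
    return color

# On the path a-b-c-d, the ordering d, c, a, b forces three colors,
# while the natural ordering a, b, c, d uses only two.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(max(first_fit_coloring(adj, ['d', 'c', 'a', 'b']).values()) + 1)  # 3
```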

Cite as

Akanksha Agrawal, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Shaily Verma. Parameterized Saga of First-Fit and Last-Fit Coloring. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 5:1-5:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{agrawal_et_al:LIPIcs.STACS.2025.5,
  author =	{Agrawal, Akanksha and Lokshtanov, Daniel and Panolan, Fahad and Saurabh, Saket and Verma, Shaily},
  title =	{{Parameterized Saga of First-Fit and Last-Fit Coloring}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{5:1--5:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.5},
  URN =		{urn:nbn:de:0030-drops-228304},
  doi =		{10.4230/LIPIcs.STACS.2025.5},
  annote =	{Keywords: Grundy Coloring, Partial Grundy Coloring, FPT Algorithm, K\underline\{i,j\}-free graphs}
}
Twin-Width One

Authors: Jungho Ahn, Hugo Jacob, Noleen Köhler, Christophe Paul, Amadeus Reinald, and Sebastian Wiederrecht


Abstract
We investigate the structure of graphs of twin-width at most 1, and obtain the following results:
  • Graphs of twin-width at most 1 are permutation graphs. In particular they have an intersection model and a linear structure.
  • There is always a 1-contraction sequence closely following a given permutation diagram.
  • Based on a recursive decomposition theorem, we obtain a simple algorithm running in linear time that produces a 1-contraction sequence of a graph, or guarantees that it has twin-width more than 1.
  • We characterise distance-hereditary graphs based on their twin-width and deduce a linear time algorithm to compute optimal sequences on this class of graphs.
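A candidate contraction sequence can be checked directly. The sketch below (our illustration, not the paper's linear-time algorithm) contracts vertices in the stated order, turns an edge red whenever the two merged vertices disagreed on a neighbour, and reports the maximum red degree seen; the sequence witnesses twin-width at most 1 exactly when this value is at most 1:

```python
# Compute the width of a contraction sequence: the maximum red degree
# that arises while contracting v into u for each pair (u, v) in order.
def sequence_width(n, edges, contractions):
    black = {frozenset(e) for e in edges}
    red = set()
    alive = set(range(n))
    width = 0
    for u, v in contractions:  # contract v into u
        alive.discard(v)
        for x in alive:
            if x == u:
                continue
            eu, ev = frozenset((u, x)), frozenset((v, x))
            bu, bv = eu in black, ev in black
            ru, rv = eu in red, ev in red
            black.discard(eu)
            red.discard(eu)
            if bu and bv and not (ru or rv):
                black.add(eu)             # x saw both u and v in black
            elif bu or bv or ru or rv:
                red.add(eu)               # any disagreement turns the edge red
        black = {e for e in black if v not in e}
        red = {e for e in red if v not in e}
        width = max([width] + [sum(1 for e in red if w in e) for w in alive])
    return width

# Path 0-1-2-3: contracting 0 into 1, then 3 into 2, then 2 into 1
# keeps the red degree at most 1, witnessing twin-width 1.
print(sequence_width(4, [(0, 1), (1, 2), (2, 3)], [(1, 0), (2, 3), (1, 2)]))  # 1
```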

Cite as

Jungho Ahn, Hugo Jacob, Noleen Köhler, Christophe Paul, Amadeus Reinald, and Sebastian Wiederrecht. Twin-Width One. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{ahn_et_al:LIPIcs.STACS.2025.6,
  author =	{Ahn, Jungho and Jacob, Hugo and K\"{o}hler, Noleen and Paul, Christophe and Reinald, Amadeus and Wiederrecht, Sebastian},
  title =	{{Twin-Width One}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.6},
  URN =		{urn:nbn:de:0030-drops-228319},
  doi =		{10.4230/LIPIcs.STACS.2025.6},
  annote =	{Keywords: Twin-width, Hereditary graph classes, Intersection model}
}
Faster Edge Coloring by Partition Sieving

Authors: Shyan Akmal and Tomohiro Koana


Abstract
In the Edge Coloring problem, we are given an undirected graph G with n vertices and m edges, and are tasked with finding the smallest positive integer k so that the edges of G can be assigned k colors in such a way that no two edges incident to the same vertex are assigned the same color. Edge Coloring is a classic NP-hard problem, and so significant research has gone into designing fast exponential-time algorithms for solving Edge Coloring and its variants exactly. Prior work showed that Edge Coloring can be solved in 2^m⋅poly(n) time and polynomial space, and in graphs with average degree d in 2^{(1-ε_d)m}⋅poly(n) time and exponential space, where ε_d = (1/d)^Θ(d³). We present an algorithm that solves Edge Coloring in 2^{m-3n/5}⋅poly(n) time and polynomial space. Our result is the first algorithm for this problem which simultaneously runs faster than 2^m⋅poly(n) time and uses only polynomial space. In graphs of average degree d, our algorithm runs in 2^{(1-6/(5d))m}⋅poly(n) time, which has far better dependence on d than previous results. We also consider a generalization of Edge Coloring called List Edge Coloring, where each edge e in the input graph comes with a list L_e ⊆ {1, …, k} of colors, and we must determine whether we can assign each edge a color from its list so that no two edges incident to the same vertex receive the same color. We show that this problem can be solved in 2^{(1-6/(5k))m}⋅poly(n) time and polynomial space. The previous best algorithm for List Edge Coloring took 2^m⋅poly(n) time and space. Our algorithms are algebraic, and work by constructing a special polynomial P based on the input graph that contains a multilinear monomial (i.e., a monomial where every variable has degree at most one) if and only if the answer to the List Edge Coloring problem on the input graph is YES. We then solve the problem by detecting multilinear monomials in P. Previous work also employed such monomial detection techniques to solve Edge Coloring. We obtain faster algorithms both by carefully constructing our polynomial P, and by improving the runtimes for certain structured monomial detection problems using a technique we call partition sieving.
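For intuition about the quantity being computed (this brute force has nothing to do with the 2^{m-3n/5} algorithm), the chromatic index of a tiny graph can be found by trying k-colorings of the edge set directly:

```python
from itertools import product

# Brute-force chromatic index: the least k for which some assignment of
# k colors to the edges leaves no two incident edges sharing a color.
# Exponential in m; only for illustration on tiny graphs.
def chromatic_index(edges):
    m = len(edges)
    k = 1
    while True:
        for assignment in product(range(k), repeat=m):
            ok = all(
                assignment[i] != assignment[j]
                for i in range(m) for j in range(i + 1, m)
                if set(edges[i]) & set(edges[j])  # edges sharing a vertex
            )
            if ok:
                return k
        k += 1

# A triangle needs 3 colors: every pair of its edges is incident.
print(chromatic_index([(0, 1), (1, 2), (2, 0)]))  # 3
```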

Cite as

Shyan Akmal and Tomohiro Koana. Faster Edge Coloring by Partition Sieving. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{akmal_et_al:LIPIcs.STACS.2025.7,
  author =	{Akmal, Shyan and Koana, Tomohiro},
  title =	{{Faster Edge Coloring by Partition Sieving}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.7},
  URN =		{urn:nbn:de:0030-drops-228328},
  doi =		{10.4230/LIPIcs.STACS.2025.7},
  annote =	{Keywords: Coloring, Edge coloring, Chromatic index, Matroid, Pfaffian, Algebraic algorithm}
}
Tropical Proof Systems: Between R(CP) and Resolution

Authors: Yaroslav Alekseev, Dima Grigoriev, and Edward A. Hirsch


Abstract
Propositional proof complexity deals with the lengths of polynomial-time verifiable proofs for Boolean tautologies. An abundance of proof systems is known, including algebraic and semialgebraic systems, which work with polynomial equations and inequalities, respectively. The most basic algebraic proof system is based on Hilbert’s Nullstellensatz [Paul Beame et al., 1996]. Tropical ("min-plus") arithmetic has many applications in various areas of mathematics. The operations are the real addition (as the tropical multiplication) and the minimum (as the tropical addition). Recently, [Bertram and Easton, 2017; Dima Grigoriev and Vladimir V. Podolskii, 2018; Joo and Mincheva, 2018] demonstrated a version of Nullstellensatz in the tropical setting. In this paper we introduce (semi)algebraic proof systems that use min-plus arithmetic. For the dual-variable encoding of Boolean variables (two tropical variables x and x ̅ per one Boolean variable x) and {0,1}-encoding of the truth values, we prove that a static (Nullstellensatz-based) tropical proof system polynomially simulates daglike resolution and also has short proofs for the propositional pigeon-hole principle. Its dynamic version strengthened by an additional derivation rule (a tropical analogue of resolution by linear inequality) is equivalent to the system Res(LP) (aka R(LP)), which derives nonnegative linear combinations of linear inequalities; this latter system is known to polynomially simulate Krajíček’s Res(CP) (aka R(CP)) with unary coefficients. Therefore, tropical proof systems give a finer hierarchy of proof systems below Res(LP) for which we still do not have exponential lower bounds. While the "driving force" in Res(LP) is resolution by linear inequalities, dynamic tropical systems are driven solely by the transitivity of the order, and static tropical proof systems are based on reasoning about differences between the input linear functions. 
For the truth values encoded by {0,∞}, dynamic tropical proofs are equivalent to Res(∞), which is a small-depth Frege system also called DNF resolution. Finally, we provide a lower bound on the size of derivations of a much simplified tropical version of the Binary Value Principle in a static tropical proof system. Also, we establish the non-deducibility of the tropical resolution rule in this system and discuss axioms for Boolean logic that do not use dual variables. In this extended abstract, full proofs are omitted.
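The tropical operations can be sketched concretely (a toy illustration, not one of the proof systems themselves): tropical addition is the minimum, tropical multiplication is ordinary real addition, so a tropical "monomial" is a linear function and a tropical "polynomial" is a minimum of linear functions:

```python
import math

# Min-plus ("tropical") semiring: + is min, * is real addition.
def t_add(a, b):
    return min(a, b)

def t_mul(a, b):
    return a + b

# Tropical evaluation of the "polynomial" min(x + x, y + 3, 0):
def t_poly(x, y):
    return t_add(t_add(t_mul(x, x), t_mul(y, 3)), 0)

print(t_poly(2, -5))  # min(4, -2, 0) = -2
# The tropical zero is +inf (identity for min); the tropical one is 0.
assert t_add(math.inf, 7) == 7 and t_mul(0, 7) == 7
```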

Cite as

Yaroslav Alekseev, Dima Grigoriev, and Edward A. Hirsch. Tropical Proof Systems: Between R(CP) and Resolution. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{alekseev_et_al:LIPIcs.STACS.2025.8,
  author =	{Alekseev, Yaroslav and Grigoriev, Dima and Hirsch, Edward A.},
  title =	{{Tropical Proof Systems: Between R(CP) and Resolution}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguyễn Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.8},
  URN =		{urn:nbn:de:0030-drops-228332},
  doi =		{10.4230/LIPIcs.STACS.2025.8},
  annote =	{Keywords: Cutting Planes, Nullstellensatz refutations, Res(CP), semi-algebraic proofs, tropical proof systems, tropical semiring}
}
Improved Approximation Algorithms for (1,2)-TSP and Max-TSP Using Path Covers in the Semi-Streaming Model

Authors: Sharareh Alipour, Ermiya Farokhnejad, and Tobias Mömke


Abstract
We investigate semi-streaming algorithms for the Traveling Salesman Problem (TSP). Specifically, we focus on a variant known as the (1,2)-TSP, where the distances between any two vertices are either one or two. Our primary emphasis is on the closely related Maximum Path Cover Problem, which aims to find a collection of vertex-disjoint paths that covers the maximum number of edges in a graph. We propose an algorithm that, for any ε > 0, achieves a (2/3-ε)-approximation of the maximum path cover size for an n-vertex graph, using poly(1/ε) passes. This result improves upon the previous 1/2-approximation by Behnezhad et al. [Soheil Behnezhad et al., 2023] in the semi-streaming model. Building on this result, we design a semi-streaming algorithm that constructs a tour for an instance of (1,2)-TSP with an approximation factor of (4/3 + ε), improving upon the previous 3/2-approximation factor algorithm by Behnezhad et al. [Soheil Behnezhad et al., 2023]. Furthermore, we extend our approach to develop an approximation algorithm for the Maximum TSP (Max-TSP), where the goal is to find a Hamiltonian cycle with the maximum possible weight in a given weighted graph G. Our algorithm provides a (7/12 - ε)-approximation for Max-TSP in poly(1/ε) passes, improving on the previously known (1/2-ε)-approximation obtained via maximum weight matching in the semi-streaming model.
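To make the Maximum Path Cover objective concrete, here is a simple single-pass greedy baseline (ours, not the paper's (2/3-ε)-approximation): keep an edge whenever it extends the current collection of vertex-disjoint paths, i.e. both endpoints still have degree below two and the edge closes no cycle:

```python
# Greedy path cover: maintain a linear forest (vertex-disjoint paths) by
# adding each scanned edge if endpoint degrees stay <= 2 and union-find
# confirms the edge does not close a cycle. A valid path cover, though
# not necessarily a maximum one.
def greedy_path_cover(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    deg = [0] * n
    cover = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if deg[u] < 2 and deg[v] < 2 and ru != rv:
            parent[ru] = rv
            deg[u] += 1
            deg[v] += 1
            cover.append((u, v))
    return cover

# On a 4-cycle, greedy keeps 3 of the 4 edges, forming a single path.
print(len(greedy_path_cover(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))  # 3
```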

Cite as

Sharareh Alipour, Ermiya Farokhnejad, and Tobias Mömke. Improved Approximation Algorithms for (1,2)-TSP and Max-TSP Using Path Covers in the Semi-Streaming Model. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{alipour_et_al:LIPIcs.STACS.2025.9,
  author =	{Alipour, Sharareh and Farokhnejad, Ermiya and M\"{o}mke, Tobias},
  title =	{{Improved Approximation Algorithms for (1,2)-TSP and Max-TSP Using Path Covers in the Semi-Streaming Model}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.9},
  URN =		{urn:nbn:de:0030-drops-228342},
  doi =		{10.4230/LIPIcs.STACS.2025.9},
  annote =	{Keywords: (1,2)-TSP, Max-TSP, Maximum Path Cover, Semi-Streaming Algorithms, Approximation Algorithms, Graph Algorithms}
}
Document
Monotone Weak Distributive Laws over the Lifted Powerset Monad in Categories of Algebras

Authors: Quentin Aristote


Abstract
In both the category of sets and the category of compact Hausdorff spaces, there is a monotone weak distributive law that combines two layers of non-determinism. Noticing the similarity between these two laws, we study whether the latter can be obtained automatically as a weak lifting of the former. This holds partially, but does not generalize to other categories of algebras. We then characterize when exactly monotone weak distributive laws over powerset monads in categories of algebras exist, on the one hand exhibiting a law combining probabilities and non-determinism in compact Hausdorff spaces, and on the other hand showing that such laws do not exist in many other cases.

Cite as

Quentin Aristote. Monotone Weak Distributive Laws over the Lifted Powerset Monad in Categories of Algebras. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{aristote:LIPIcs.STACS.2025.10,
  author =	{Aristote, Quentin},
  title =	{{Monotone Weak Distributive Laws over the Lifted Powerset Monad in Categories of Algebras}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.10},
  URN =		{urn:nbn:de:0030-drops-228356},
  doi =		{10.4230/LIPIcs.STACS.2025.10},
  annote =	{Keywords: weak distributive law, weak extension, weak lifting, iterated distributive law, Yang-Baxter equation, powerset monad, Vietoris monad, Radon monad, Eilenberg-Moore category, regular category, relational extension}
}
Document
Generalized Inner Product Estimation with Limited Quantum Communication

Authors: Srinivasan Arunachalam and Louis Schatzki


Abstract
In this work, we consider the fundamental task of distributed inner product estimation when allowed limited communication. Suppose Alice and Bob are given k copies of an unknown n-qubit quantum state |ψ⟩,|ϕ⟩ respectively, are allowed to send q qubits to one another, and the task is to estimate |⟨ψ|ϕ⟩|² up to constant additive error. We show that k = Θ(√{2^{n-q}}) copies are essentially necessary and sufficient for this task (extending the work of Anshu, Landau and Liu (STOC'22) who considered the case when q = 0). Additionally, we also consider the task when the goal of the players is to estimate |⟨ψ|M|ϕ⟩|², for arbitrary Hermitian M. For this task we show that certain norms on M determine the sample complexity of estimating |⟨ψ|M|ϕ⟩|² when using only classical communication.
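For context, when quantum communication is unrestricted the classic SWAP test estimates |⟨ψ|ϕ⟩|² directly: one test on a pair of copies outputs 0 with probability (1 + |⟨ψ|ϕ⟩|²)/2. A numerical sketch of this baseline (not the limited-communication protocols of the paper):

```python
import numpy as np

def swap_test_estimate(psi, phi, k, rng):
    """Estimate |<psi|phi>|^2 from k simulated SWAP tests.

    Assumes the two parties can run joint quantum operations freely,
    unlike the q-qubit-communication regime studied in the paper.
    """
    overlap = abs(np.vdot(psi, phi)) ** 2
    # Each test outputs 0 with probability (1 + overlap) / 2.
    zeros = rng.binomial(k, (1 + overlap) / 2)
    return 2 * zeros / k - 1

rng = np.random.default_rng(0)
psi = np.array([1, 0, 0, 0], dtype=complex)
phi = np.array([1, 1, 0, 0], dtype=complex) / np.sqrt(2)
est = swap_test_estimate(psi, phi, 100_000, rng)  # true value is 0.5
```

By a Chernoff bound, O(1/ε²) copies suffice for additive error ε in this unrestricted setting; the paper's point is how the copy complexity grows as communication shrinks to q qubits.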

Cite as

Srinivasan Arunachalam and Louis Schatzki. Generalized Inner Product Estimation with Limited Quantum Communication. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 11:1-11:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{arunachalam_et_al:LIPIcs.STACS.2025.11,
  author =	{Arunachalam, Srinivasan and Schatzki, Louis},
  title =	{{Generalized Inner Product Estimation with Limited Quantum Communication}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{11:1--11:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.11},
  URN =		{urn:nbn:de:0030-drops-228366},
  doi =		{10.4230/LIPIcs.STACS.2025.11},
  annote =	{Keywords: Quantum property testing, Quantum Distributed Algorithms}
}
Document
Results on H-Freeness Testing in Graphs of Bounded r-Admissibility

Authors: Christine Awofeso, Patrick Greaves, Oded Lachish, and Felix Reidl


Abstract
We study the property of H-freeness in graphs with known bounded average degree, i.e. the property of a graph not containing some graph H as a subgraph. H-freeness is one of the fundamental graph properties that has been studied in the property testing framework. Levi [Reut Levi, 2021] showed that triangle-freeness is testable in graphs of bounded arboricity, which is a superset of e.g. planar graphs or graphs of bounded degree. Complementing this result is a recent preprint [Talya Eden et al., 2024] by Eden et al., which shows that, for every r ≥ 4, C_r-freeness is not testable in graphs of bounded arboricity. We proceed in this line of research by using the r-admissibility measure that originates from the field of structural sparse graph theory. Graphs of bounded 1-admissibility are identical to graphs of bounded arboricity, while graphs of bounded degree, planar graphs, graphs of bounded genus, and even graphs excluding a fixed graph as a (topological) minor have bounded r-admissibility for any value of r [Nešetřil and Ossona de Mendez, 2012]. In this work we show that H-freeness is testable in graphs with bounded 2-admissibility for all graphs H of diameter 2. Furthermore, we show the testability of C₄-freeness in bounded 2-admissible graphs directly (with better query complexity) and extend this result to C₅-freeness. Using our techniques it is also possible to show that C₆-freeness and C₇-freeness are testable in graphs with bounded 3-admissibility. The formal proofs will appear in the journal version of this paper. These positive results are supplemented with a lower bound showing that, for every r ≥ 4, C_r-freeness is not testable for graphs of bounded (⌊r/2⌋ - 1)-admissibility. This lower bound will appear in the journal version of this paper. This implies that, for every r > 0, there exists a graph H of diameter r+1, such that H-freeness is not testable on graphs with bounded r-admissibility.
These results lead us to the conjecture that, for every r > 4, and t ≤ 2r+1, C_t-freeness is testable in graphs of bounded r-admissibility, and for every r > 2, H-freeness for graphs H of diameter r is testable in graphs with bounded r-admissibility.

Cite as

Christine Awofeso, Patrick Greaves, Oded Lachish, and Felix Reidl. Results on H-Freeness Testing in Graphs of Bounded r-Admissibility. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{awofeso_et_al:LIPIcs.STACS.2025.12,
  author =	{Awofeso, Christine and Greaves, Patrick and Lachish, Oded and Reidl, Felix},
  title =	{{Results on H-Freeness Testing in Graphs of Bounded r-Admissibility}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.12},
  URN =		{urn:nbn:de:0030-drops-228378},
  doi =		{10.4230/LIPIcs.STACS.2025.12},
  annote =	{Keywords: Property Testing, Sparse Graphs, Degeneracy, Admissibility}
}
Document
Hyperbolic Random Graphs: Clique Number and Degeneracy with Implications for Colouring

Authors: Samuel Baguley, Yannic Maus, Janosch Ruff, and George Skretas


Abstract
Hyperbolic random graphs inherit many properties that are present in real-world networks. The hyperbolic geometry imposes a scale-free network with a strong clustering coefficient. Other properties like a giant component, the small-world phenomenon and others follow. This motivates the design of simple algorithms for hyperbolic random graphs. In this paper we consider threshold hyperbolic random graphs (HRGs). Greedy heuristics are commonly used in practice as they deliver good approximations to the optimal solution even though their theoretical analysis would suggest otherwise. A typical example for HRGs is degeneracy-based greedy algorithms [Bläsius, Fischbeck; Transactions of Algorithms '24]. In an attempt to bridge this theory-practice gap we characterise the degeneracy parameter, yielding a simple approximation algorithm for colouring HRGs. The approximation ratio of our algorithm ranges from (2/√3) to 4/3 depending on the power-law exponent of the model. We complement our findings for the degeneracy with new insights on the clique number of hyperbolic random graphs. We show that degeneracy and clique number are substantially different and derive an improved upper bound on the clique number. Additionally, we show that the core of HRGs does not constitute the largest clique. Lastly we demonstrate that the degeneracy of the closely related standard model of geometric inhomogeneous random graphs behaves inherently differently from that of hyperbolic random graphs.

Cite as

Samuel Baguley, Yannic Maus, Janosch Ruff, and George Skretas. Hyperbolic Random Graphs: Clique Number and Degeneracy with Implications for Colouring. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 13:1-13:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{baguley_et_al:LIPIcs.STACS.2025.13,
  author =	{Baguley, Samuel and Maus, Yannic and Ruff, Janosch and Skretas, George},
  title =	{{Hyperbolic Random Graphs: Clique Number and Degeneracy with Implications for Colouring}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{13:1--13:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.13},
  URN =		{urn:nbn:de:0030-drops-228386},
  doi =		{10.4230/LIPIcs.STACS.2025.13},
  annote =	{Keywords: hyperbolic random graphs, scale-free networks, power-law graphs, cliques, degeneracy, vertex colouring, chromatic number}
}
Document
Multivariate Exploration of Metric Dilation

Authors: Aritra Banik, Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Satyabrata Jana, and Saket Saurabh


Abstract
Let G be a weighted graph embedded in a metric space (M, d_M). The vertices of G correspond to the points in M, with the weight of each edge uv being the distance d_M(u,v) between their respective points in M. The dilation (or stretch) of G is defined as the minimum factor t such that, for any pair of vertices u,v, the distance between u and v - represented by the weight of a shortest u,v-path - is at most t⋅ d_M(u,v). We study Dilation t-Augmentation, where the objective is, given a metric M, a graph G, and numerical values k and t, to determine whether G can be transformed into a graph with dilation t by adding at most k edges. Our primary focus is on the scenario where the metric M is the shortest path metric of an unweighted graph Γ. Even in this specific case, Dilation t-Augmentation remains computationally challenging. In particular, the problem is W[2]-hard parameterized by k when Γ is a complete graph, already for t = 2. Our main contribution lies in providing new insights into the impact of combinations of various parameters on the computational complexity of the problem. We establish the following. - The parameterized dichotomy of the problem with respect to dilation t, when the graph G is sparse: Parameterized by k, the problem is FPT for graphs excluding a biclique K_{d,d} as a subgraph for t ≤ 2 and the problem is W[1]-hard for t ≥ 3 even if G is a forest consisting of disjoint stars. - The problem is FPT parameterized by the combined parameter k+t+Δ, where Δ is the maximum degree of the graph G or Γ.
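As a concrete reading of the definitions, the dilation of G over the shortest-path metric of an unweighted graph Γ can be computed by brute force (an illustrative sketch, not one of the paper's algorithms; it assumes both graphs are connected on the same vertex set):

```python
from collections import deque
import heapq

def bfs_dist(adj, s):
    """Unweighted shortest-path distances from s (the metric of Gamma)."""
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def dilation(gamma_adj, g_edges, vertices):
    # Metric M: shortest-path distances of the unweighted graph Gamma.
    dM = {v: bfs_dist(gamma_adj, v) for v in vertices}
    # G is weighted by the metric: edge uv has weight d_M(u, v).
    g_adj = {v: [] for v in vertices}
    for u, v in g_edges:
        g_adj[u].append((v, dM[u][v]))
        g_adj[v].append((u, dM[u][v]))
    worst = 1.0
    for s in vertices:
        dist, pq = {s: 0}, [(0, s)]       # Dijkstra in G from s
        while pq:
            du, u = heapq.heappop(pq)
            if du > dist.get(u, float("inf")):
                continue
            for v, w in g_adj[u]:
                if du + w < dist.get(v, float("inf")):
                    dist[v] = du + w
                    heapq.heappush(pq, (dist[v], v))
        for t in vertices:
            if t != s:
                worst = max(worst, dist[t] / dM[s][t])
    return worst

# Example: Gamma is the 4-cycle 0-1-2-3-0, G is the path 0-1-2-3.
# The pair (0, 3) is adjacent in Gamma but 3 edges apart in G,
# so the dilation is 3.0.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
t = dilation(cycle, [(0, 1), (1, 2), (2, 3)], [0, 1, 2, 3])
```

Dilation t-Augmentation then asks whether at most k extra edges (each weighted by the metric) can bring this maximum ratio down to the target t.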

Cite as

Aritra Banik, Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Satyabrata Jana, and Saket Saurabh. Multivariate Exploration of Metric Dilation. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 14:1-14:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{banik_et_al:LIPIcs.STACS.2025.14,
  author =	{Banik, Aritra and Fomin, Fedor V. and Golovach, Petr A. and Inamdar, Tanmay and Jana, Satyabrata and Saurabh, Saket},
  title =	{{Multivariate Exploration of Metric Dilation}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{14:1--14:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.14},
  URN =		{urn:nbn:de:0030-drops-228395},
  doi =		{10.4230/LIPIcs.STACS.2025.14},
  annote =	{Keywords: Metric dilation, geometric spanner, fixed-parameter tractability}
}
Document
Structure-Guided Automated Reasoning

Authors: Max Bannach and Markus Hecher


Abstract
Algorithmic meta-theorems state that problems definable in a fixed logic can be solved efficiently on structures with certain properties. An example is Courcelle’s Theorem, which states that all problems expressible in monadic second-order logic can be solved efficiently on structures of small treewidth. Such theorems are usually proven by algorithms for the model-checking problem of the logic, which is often complex and rarely leads to highly efficient solutions. Alternatively, we can solve the model-checking problem by grounding the given logic to propositional logic, for which dedicated solvers are available. Such encodings will, however, usually not preserve the input’s treewidth. This paper investigates whether all problems definable in monadic second-order logic can efficiently be encoded into SAT such that the input’s treewidth bounds the treewidth of the resulting formula. We answer this in the affirmative and, hence, provide an alternative proof of Courcelle’s Theorem. Our technique can naturally be extended: There are treewidth-aware reductions from the optimization version of Courcelle’s Theorem to MAXSAT and from the counting version of the theorem to #SAT. By using encodings to SAT, we obtain, ignoring polynomial factors, the same running time for the model-checking problem as we would with dedicated algorithms. Another immediate consequence is a treewidth-preserving reduction from the model-checking problem of monadic second-order logic to integer linear programming (ILP). We complement our upper bounds with new lower bounds based on ETH; and we show that the block size of the input’s formula and the treewidth of the input’s structure are tightly linked. Finally, we present various side results needed to prove the main theorems: treewidth-preserving cardinality constraints, treewidth-preserving encodings from CNFs into DNFs, and a treewidth-aware quantifier elimination scheme for QBF, implying a treewidth-preserving reduction from QSAT to SAT.
We also present a reduction from projected model counting to #SAT that increases the treewidth by at most a factor of 2^{k+3.59}, yielding an algorithm for projected model counting that beats the currently best running time of 2^{2^{k+4}}⋅poly(|ψ|).
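For a flavour of the encodings involved: a standard CNF encoding of cardinality constraints ("at most k of these variables are true") is Sinz's sequential counter, whose clauses only link counter variables of adjacent levels; this locality is what makes such encodings amenable to treewidth-style arguments. The paper's own treewidth-preserving construction differs in detail; a generic sketch in DIMACS-style integer literals:

```python
def at_most_k(xs, k, top):
    """Sinz-style sequential-counter CNF for sum(xs) <= k.

    xs: positive DIMACS literals; top: largest variable index in use.
    Returns (clauses, new_top). Auxiliary variable s[i, j] reads
    "at least j of x_1..x_i are true". Illustrative sketch only, not
    the treewidth-preserving construction from the paper.
    """
    if k == 0:
        return [[-x] for x in xs], top
    n = len(xs)
    s, nxt = {}, top
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            nxt += 1
            s[i, j] = nxt
    clauses = []
    for i in range(1, n + 1):
        clauses.append([-xs[i - 1], s[i, 1]])               # x_i -> s(i,1)
        if i > 1:
            for j in range(1, k + 1):                       # counters are monotone
                clauses.append([-s[i - 1, j], s[i, j]])
            for j in range(2, k + 1):                       # x_i bumps the count
                clauses.append([-xs[i - 1], -s[i - 1, j - 1], s[i, j]])
            clauses.append([-xs[i - 1], -s[i - 1, k]])      # never exceed k
    return clauses, nxt
```

Every clause touches only variables of levels i-1 and i, so the primal graph of the encoding is path-like and the constraint adds little to the formula's treewidth.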

Cite as

Max Bannach and Markus Hecher. Structure-Guided Automated Reasoning. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 15:1-15:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bannach_et_al:LIPIcs.STACS.2025.15,
  author =	{Bannach, Max and Hecher, Markus},
  title =	{{Structure-Guided Automated Reasoning}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{15:1--15:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.15},
  URN =		{urn:nbn:de:0030-drops-228408},
  doi =		{10.4230/LIPIcs.STACS.2025.15},
  annote =	{Keywords: automated reasoning, treewidth, satisfiability, max-sat, sharp-sat, monadic second-order logic, fixed-parameter tractability}
}
Document
Listing Spanning Trees of Outerplanar Graphs by Pivot-Exchanges

Authors: Nastaran Behrooznia and Torsten Mütze


Abstract
We prove that the spanning trees of any outerplanar triangulation G can be listed so that any two consecutive spanning trees differ in an exchange of two edges that share an end vertex. For outerplanar graphs G with faces of arbitrary lengths (not necessarily 3) we establish a similar result, with the condition that the two exchanged edges share an end vertex or lie on a common face. These listings of spanning trees are obtained from a simple greedy algorithm that can be implemented efficiently, i.e., in time O(n log n) per generated spanning tree, where n is the number of vertices of G. Furthermore, the listings correspond to Hamilton paths on the 0/1-polytope that is obtained as the convex hull of the characteristic vectors of all spanning trees of G.

Cite as

Nastaran Behrooznia and Torsten Mütze. Listing Spanning Trees of Outerplanar Graphs by Pivot-Exchanges. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{behrooznia_et_al:LIPIcs.STACS.2025.16,
  author =	{Behrooznia, Nastaran and M\"{u}tze, Torsten},
  title =	{{Listing Spanning Trees of Outerplanar Graphs by Pivot-Exchanges}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.16},
  URN =		{urn:nbn:de:0030-drops-228411},
  doi =		{10.4230/LIPIcs.STACS.2025.16},
  annote =	{Keywords: Spanning tree, generation, edge exchange, Hamilton path, Gray code}
}
Document
Tight Approximation and Kernelization Bounds for Vertex-Disjoint Shortest Paths

Authors: Matthias Bentert, Fedor V. Fomin, and Petr A. Golovach


Abstract
We examine the possibility of approximating Maximum Vertex-Disjoint Shortest Paths. In this problem, the input is an edge-weighted (directed or undirected) n-vertex graph G along with k terminal pairs (s_1,t_1),(s_2,t_2),…,(s_k,t_k). The task is to connect as many terminal pairs as possible by pairwise vertex-disjoint paths such that each path is a shortest path between the respective terminals. Our work is anchored in the recent breakthrough by Lochet [SODA '21], which demonstrates the polynomial-time solvability of the problem for a fixed value of k. Lochet’s result implies the existence of a polynomial-time ck-approximation for Maximum Vertex-Disjoint Shortest Paths, where c ≤ 1 is a constant. (One can guess 1/c terminal pairs to connect in k^O(1/c) time and then utilize Lochet’s algorithm to compute the solution in n^f(1/c) time.) Our first result suggests that this approximation algorithm is, in a sense, the best we can hope for. More precisely, assuming the gap-ETH, we exclude the existence of an o(k)-approximation within f(k) ⋅ poly(n) time for any function f that only depends on k. Our second result demonstrates the infeasibility of achieving an approximation ratio of m^{1/2-ε} in polynomial time, unless P = NP. It is not difficult to show that a greedy algorithm selecting a path with the minimum number of arcs results in a ⌈√𝓁⌉-approximation, where 𝓁 is the number of edges in all the paths of an optimal solution. Since 𝓁 ≤ n, this underscores the tightness of the m^{1/2-ε}-inapproximability bound. Additionally, we establish that Maximum Vertex-Disjoint Shortest Paths is fixed-parameter tractable when parameterized by 𝓁 but does not admit a polynomial kernel. Our hardness results hold for undirected graphs with unit weights, while our positive results extend to scenarios where the input graph is directed and features arbitrary (non-negative) edge weights.
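The greedy baseline mentioned above (repeatedly route the remaining pair whose shortest path uses the fewest edges, provided that path is still a shortest path of the original graph, then delete its vertices) can be sketched for unweighted undirected graphs. An illustrative sketch only, not the paper's construction:

```python
from collections import deque

def bfs_path(adj, alive, s, t):
    """Shortest s-t path (as a vertex list) using only alive vertices."""
    if s not in alive or t not in alive:
        return None
    parent, q = {s: None}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v in alive and v not in parent:
                parent[v] = u
                q.append(v)
    return None

def greedy_disjoint_shortest_paths(adj, pairs):
    # Distances in the *original* graph: a routed path must stay a
    # shortest path between its terminals, not just a shortest path
    # of the residual graph.
    orig_dist = {}
    for s, _ in pairs:
        if s not in orig_dist:
            d, q = {s: 0}, deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        q.append(v)
            orig_dist[s] = d
    alive, routed, remaining = set(adj), [], list(pairs)
    while True:
        best = None
        for s, t in remaining:
            p = bfs_path(adj, alive, s, t)
            if p and len(p) - 1 == orig_dist[s].get(t, -1):
                if best is None or len(p) < len(best[0]):
                    best = (p, (s, t))
        if best is None:
            return routed
        p, pair = best
        routed.append((pair, p))
        alive -= set(p)       # paths must be vertex-disjoint
        remaining.remove(pair)
```

On the path 0-1-2 with pairs (0,2) and (0,1), the greedy rule routes (0,1) (one edge beats two) and the remaining pair is blocked, matching the intuition behind the ⌈√𝓁⌉-approximation bound.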

Cite as

Matthias Bentert, Fedor V. Fomin, and Petr A. Golovach. Tight Approximation and Kernelization Bounds for Vertex-Disjoint Shortest Paths. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 17:1-17:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bentert_et_al:LIPIcs.STACS.2025.17,
  author =	{Bentert, Matthias and Fomin, Fedor V. and Golovach, Petr A.},
  title =	{{Tight Approximation and Kernelization Bounds for Vertex-Disjoint Shortest Paths}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{17:1--17:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.17},
  URN =		{urn:nbn:de:0030-drops-228422},
  doi =		{10.4230/LIPIcs.STACS.2025.17},
  annote =	{Keywords: Inapproximability, Fixed-parameter tractability, Parameterized approximation}
}
Document
Online Disjoint Set Covers: Randomization Is Not Necessary

Authors: Marcin Bienkowski, Jarosław Byrka, and Łukasz Jeż


Abstract
In the online disjoint set covers problem, the edges of a hypergraph are revealed online, and the goal is to partition them into a maximum number of disjoint set covers. That is, n nodes of a hypergraph are given at the beginning, and then a sequence of hyperedges (subsets of [n]) is presented to an algorithm. For each hyperedge, an online algorithm must assign a color (an integer). Once the input terminates, the gain of the algorithm is the number of colors that correspond to valid set covers (i.e., the union of hyperedges that have that color contains all n nodes). We present a deterministic online algorithm that is O(log² n)-competitive, exponentially improving on the previous bound of O(n) and matching the performance of the best randomized algorithm by Emek et al. [ESA 2019]. For color selection, our algorithm uses a novel potential function, which can be seen as an online counterpart of the derandomization method of conditional probabilities and pessimistic estimators. There are only a few cases where derandomization has been successfully used in the field of online algorithms. In contrast to previous approaches, our result extends to the following new challenges: (i) the potential function derandomizes not only the Chernoff bound, but also the coupon collector’s problem, (ii) the value of Opt of the maximization problem is not bounded a priori, and (iii) we do not produce a fractional solution first, but work directly on the input.

Cite as

Marcin Bienkowski, Jarosław Byrka, and Łukasz Jeż. Online Disjoint Set Covers: Randomization Is Not Necessary. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 18:1-18:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bienkowski_et_al:LIPIcs.STACS.2025.18,
  author =	{Bienkowski, Marcin and Byrka, Jaros{\l}aw and Je\.{z}, {\L}ukasz},
  title =	{{Online Disjoint Set Covers: Randomization Is Not Necessary}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{18:1--18:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.18},
  URN =		{urn:nbn:de:0030-drops-228433},
  doi =		{10.4230/LIPIcs.STACS.2025.18},
  annote =	{Keywords: Disjoint Set Covers, Derandomization, Pessimistic Estimator, Potential Function, Online Algorithms, Competitive Analysis}
}
Document
The Complexity of Learning LTL, CTL and ATL Formulas

Authors: Benjamin Bordais, Daniel Neider, and Rajarshi Roy


Abstract
We consider the problem of learning temporal logic formulas from examples of system behavior. Learning temporal properties has crystallized as an effective means to explain complex temporal behaviors. Several efficient algorithms have been designed for learning temporal formulas. However, the theoretical understanding of the complexity of the learning decision problems remains largely unexplored. To address this, we study the complexity of the passive learning problems of three prominent temporal logics, Linear Temporal Logic (LTL), Computation Tree Logic (CTL) and Alternating-time Temporal Logic (ATL) and several of their fragments. We show that learning formulas with unbounded occurrences of binary operators is NP-complete for all of these logics. On the other hand, when investigating the complexity of learning formulas with bounded occurrences of binary operators, we exhibit discrepancies between the complexity of learning LTL, CTL and ATL formulas (with a varying number of agents).

Cite as

Benjamin Bordais, Daniel Neider, and Rajarshi Roy. The Complexity of Learning LTL, CTL and ATL Formulas. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 19:1-19:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bordais_et_al:LIPIcs.STACS.2025.19,
  author =	{Bordais, Benjamin and Neider, Daniel and Roy, Rajarshi},
  title =	{{The Complexity of Learning LTL, CTL and ATL Formulas}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{19:1--19:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.19},
  URN =		{urn:nbn:de:0030-drops-228441},
  doi =		{10.4230/LIPIcs.STACS.2025.19},
  annote =	{Keywords: Temporal logic, passive learning, complexity}
}
Document
On Cascades of Reset Automata

Authors: Roberto Borelli, Luca Geatti, Marco Montali, and Angelo Montanari


Abstract
The Krohn-Rhodes decomposition theorem is a pivotal result in automata theory. It introduces the concept of cascade product, where two semiautomata, that is, automata devoid of initial and final states, are combined in a feed-forward fashion. The theorem states that any semiautomaton can be decomposed into a sequence of permutation-reset semiautomata. For the counter-free case, this decomposition consists entirely of reset components with two states each. This decomposition has significantly impacted recent research in various areas of computer science, including the identification of a class of transformer encoders equivalent to star-free languages and the conversion of Linear Temporal Logic formulas into past-only expressions (pastification). The paper revisits the cascade product in the context of reset automata, thus considering each component of the cascade as a language acceptor. First, we give regular expression counterparts of cascades of reset automata. We then establish several expressiveness results, identifying hierarchies of languages based on the restriction of the height (number of components) of the cascade or of the number of states in each level. We also show that any cascade of reset automata can be transformed, with a quadratic increase in height, into a cascade that only includes two-state components. Finally, we show that some fundamental operations on cascades, like intersection, union, negation, and concatenation with a symbol to the left, can be directly and efficiently computed by adding a two-state component.

Cite as

Roberto Borelli, Luca Geatti, Marco Montali, and Angelo Montanari. On Cascades of Reset Automata. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 20:1-20:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{borelli_et_al:LIPIcs.STACS.2025.20,
  author =	{Borelli, Roberto and Geatti, Luca and Montali, Marco and Montanari, Angelo},
  title =	{{On Cascades of Reset Automata}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{20:1--20:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.20},
  URN =		{urn:nbn:de:0030-drops-228453},
  doi =		{10.4230/LIPIcs.STACS.2025.20},
  annote =	{Keywords: Automata, Cascade products, Regular expressions, Krohn-Rhodes theory}
}
Document
Computability of Extender Sets in Multidimensional Subshifts

Authors: Antonin Callard, Léo Paviet Salomon, and Pascal Vanier


Abstract
Subshifts are sets of colorings of ℤ^d defined by families of forbidden patterns. Given a subshift and a finite pattern, its extender set is the set of admissible completions of this pattern. It has been conjectured that the behavior of extender sets, and in particular their growth called extender entropy [French and Pavlov, 2019], could provide a way to separate the classes of sofic and effective subshifts. We prove here that both classes have the same possible extender entropies: exactly the Π₃ real numbers of [0,+∞).

Cite as

Antonin Callard, Léo Paviet Salomon, and Pascal Vanier. Computability of Extender Sets in Multidimensional Subshifts. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 21:1-21:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{callard_et_al:LIPIcs.STACS.2025.21,
  author =	{Callard, Antonin and Paviet Salomon, L\'{e}o and Vanier, Pascal},
  title =	{{Computability of Extender Sets in Multidimensional Subshifts}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{21:1--21:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.21},
  URN =		{urn:nbn:de:0030-drops-228462},
  doi =		{10.4230/LIPIcs.STACS.2025.21},
  annote =	{Keywords: Symbolic dynamics, subshifts, extender sets, extender entropy, computability, sofic shifts, tilings}
}
Document
CMSO-Transducing Tree-Like Graph Decompositions

Authors: Rutger Campbell, Bruno Guillon, Mamadou Moustapha Kanté, Eun Jung Kim, and Noleen Köhler


Abstract
We show that given a graph G we can CMSO-transduce its modular decomposition, its split decomposition and its bi-join decomposition. This improves results by Courcelle [Logical Methods in Computer Science, 2006] who gave such transductions using order-invariant MSO, a strictly more expressive logic than CMSO. Our methods more generally yield C_{2}MSO-transductions of the canonical decomposition of weakly-partitive set systems and weakly-bipartitive systems of bipartitions.

Cite as

Rutger Campbell, Bruno Guillon, Mamadou Moustapha Kanté, Eun Jung Kim, and Noleen Köhler. CMSO-Transducing Tree-Like Graph Decompositions. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 22:1-22:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{campbell_et_al:LIPIcs.STACS.2025.22,
  author =	{Campbell, Rutger and Guillon, Bruno and Kant\'{e}, Mamadou Moustapha and Kim, Eun Jung and K\"{o}hler, Noleen},
  title =	{{CMSO-Transducing Tree-Like Graph Decompositions}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{22:1--22:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.22},
  URN =		{urn:nbn:de:0030-drops-228474},
  doi =		{10.4230/LIPIcs.STACS.2025.22},
  annote =	{Keywords: MSO-transduction, MSO-definability, graph decompositions}
}
Document
How to Play the Accordion: Uniformity and the (Non-)Conservativity of the Linear Approximation of the λ-Calculus

Authors: Rémy Cerda and Lionel Vaux Auclair


Abstract
Twenty years after its introduction by Ehrhard and Regnier, differentiation in λ-calculus and in linear logic is now a celebrated tool. In particular, it allows one to write the Taylor formula in various λ-calculi, hence providing a theory of linear approximations for these calculi. In the standard λ-calculus, this linear approximation is expressed by results stating that the (possibly) infinitary β-reduction of λ-terms is simulated by the reduction of their Taylor expansion: in terms of rewriting systems, the resource reduction (operating on Taylor approximants) is an extension of the β-reduction. In this paper, we address the converse property, conservativity: are there reductions of the Taylor approximants that do not arise from an actual β-reduction of the approximated term? We show that if we restrict the setting to finite terms and β-reduction sequences, then the linear approximation is conservative. However, as soon as one allows infinitary reduction sequences this property is broken. We design a counter-example, the Accordion. Then we show how restricting the reduction of the Taylor approximants allows one to build a conservative extension of the β-reduction preserving good simulation properties. This restriction relies on uniformity, a property that was already at the core of Ehrhard and Regnier’s pioneering work.

Cite as

Rémy Cerda and Lionel Vaux Auclair. How to Play the Accordion: Uniformity and the (Non-)Conservativity of the Linear Approximation of the λ-Calculus. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 23:1-23:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{cerda_et_al:LIPIcs.STACS.2025.23,
  author =	{Cerda, R\'{e}my and Vaux Auclair, Lionel},
  title =	{{How to Play the Accordion: Uniformity and the (Non-)Conservativity of the Linear Approximation of the \lambda-Calculus}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{23:1--23:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.23},
  URN =		{urn:nbn:de:0030-drops-228480},
  doi =		{10.4230/LIPIcs.STACS.2025.23},
  annote =	{Keywords: program approximation, quantitative semantics, lambda-calculus, linear approximation, Taylor expansion, conservativity}
}
Document
A Deterministic Approach to Shortest Path Restoration in Edge Faulty Graphs

Authors: Keerti Choudhary and Rishabh Dhiman


Abstract
Afek, Bremler-Barr, Kaplan, Cohen, and Merritt (PODC'01) in their seminal work on shortest path restorations demonstrated that after a single edge failure in a graph G, a replacement shortest path between any two vertices s and t, which avoids the failed edge, can be represented as the concatenation of two original shortest paths in G. They also showed that we cannot associate a canonical shortest path between the vertex pairs in G that consistently allows for the replacement path (in the surviving graph) to be represented as a concatenation of these canonical paths. Recently, Bodwin and Parter (PODC'21) proposed a randomized tie-breaking scheme for selecting canonical paths for the "ordered" vertex pairs in graph G with the desired property of representing the replacement shortest path as a concatenation of canonical shortest-paths provided for ordered pairs. An interesting open question is whether it is possible to provide a deterministic construction of canonical paths in an efficient manner. We address this question in our paper by presenting an O(mn) time deterministic algorithm to compute a canonical path family ℱ = {P_{x,y}, Q_{x,y} | x,y ∈ V} comprising two paths per (unordered) vertex pair. Each replacement path is either a PQ-path (of type P_{x,y}∘Q_{y,z}), a QP-path, a QQ-path, or a PP-path. Our construction is fairly simple and is a straightforward application of independent spanning trees. We also present various applications of family ℱ in computing fault-tolerant structures.
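For intuition on the structure theorem, here is a naive baseline (not the paper's O(mn) construction): recompute a shortest path by BFS on the surviving graph after deleting the failed edge. The 4-cycle instance is a hypothetical example in which the replacement path is visibly a concatenation of two original shortest paths.

```python
from collections import deque

def shortest_path(adj, s, t, banned=None):
    """BFS shortest path from s to t in an unweighted graph,
    optionally avoiding one failed edge."""
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if banned in ((u, v), (v, u)):
                continue  # skip the failed edge
            if v not in parent:
                parent[v] = u
                q.append(v)
    return None

# Hypothetical 4-cycle: the shortest s-t path 0-1 is destroyed by the fault,
# and the replacement 0-3-2-1 concatenates the original shortest paths
# 0-3 and 3-2-1, as the structure theorem above predicts.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(shortest_path(adj, 0, 1))                 # [0, 1]
print(shortest_path(adj, 0, 1, banned=(0, 1)))  # [0, 3, 2, 1]
```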

Cite as

Keerti Choudhary and Rishabh Dhiman. A Deterministic Approach to Shortest Path Restoration in Edge Faulty Graphs. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 24:1-24:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{choudhary_et_al:LIPIcs.STACS.2025.24,
  author =	{Choudhary, Keerti and Dhiman, Rishabh},
  title =	{{A Deterministic Approach to Shortest Path Restoration in Edge Faulty Graphs}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{24:1--24:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.24},
  URN =		{urn:nbn:de:0030-drops-228499},
  doi =		{10.4230/LIPIcs.STACS.2025.24},
  annote =	{Keywords: Fault-tolerant Data-structures, Shortest Path Restoration, Replacement path}
}
Document
Local Density and Its Distributed Approximation

Authors: Aleksander Bjørn Christiansen, Ivor van der Hoog, and Eva Rotenberg


Abstract
The densest subgraph problem is a classic problem in combinatorial optimisation. Graphs with low maximum subgraph density are often called "uniformly sparse", leading to algorithms parameterised by this density. However, in reality, the sparsity of a graph is not necessarily uniform. This calls for a formally well-defined, fine-grained notion of density. Danisch, Chan, and Sozio propose a definition for local density that assigns to each vertex v a value ρ^*(v). This local density is a generalisation of the maximum subgraph density of a graph. That is, if ρ(G) is the subgraph density of a finite graph G, then ρ(G) equals the maximum local density ρ^*(v) over vertices v in G. They present a Frank-Wolfe-based algorithm to approximate the local density of each vertex with no theoretical (asymptotic) guarantees. We provide an extensive study of this local density measure. Just as with (global) maximum subgraph density, we show that there is a dual relation between the local out-degrees and the minimum out-degree orientations of the graph. We introduce the definition of the local out-degree g^*(v) of a vertex v, and show it to be equal to the local density ρ^*(v). We consider the local out-degree to be conceptually simpler, shorter to define, and easier to compute. Using the local out-degree we show a previously unknown fact: that existing algorithms already dynamically approximate the local density for each vertex with polylogarithmic update time. Next, we provide the first distributed algorithms that compute the local density with provable guarantees: given any ε such that ε^{-1} ∈ O(poly n), we show a deterministic distributed algorithm in the LOCAL model where, after O(ε^{-2} log² n) rounds, every vertex v outputs a (1 + ε)-approximation of their local density ρ^*(v). In CONGEST, we show a deterministic distributed algorithm that requires poly(log n,ε^{-1}) ⋅ 2^{O(√{log n})} rounds, which is sublinear in n. 
As a corollary, we obtain the first deterministic algorithm running in a sublinear number of rounds for (1+ε)-approximate densest subgraph detection in the CONGEST model.
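To make the global quantity concrete, the following brute-force sketch computes the maximum subgraph density ρ(G) = max over nonempty S of |E(S)|/|S|, which by the duality discussed above equals the largest local density ρ^*(v). The example graph is hypothetical; the exhaustive search is only feasible for tiny graphs.

```python
from itertools import combinations

def max_subgraph_density(vertices, edges):
    """Brute-force rho(G) = max over nonempty S of |E(S)| / |S|."""
    best = 0.0
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            s = set(S)
            e = sum(1 for u, v in edges if u in s and v in s)
            best = max(best, e / k)
    return best

# Hypothetical example: a triangle with a pendant vertex. The triangle is a
# densest subgraph (3 edges / 3 vertices = 1), so rho(G) = 1, and the
# triangle vertices attain local density 1.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(max_subgraph_density(V, E))  # 1.0
```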

Cite as

Aleksander Bjørn Christiansen, Ivor van der Hoog, and Eva Rotenberg. Local Density and Its Distributed Approximation. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{christiansen_et_al:LIPIcs.STACS.2025.25,
  author =	{Christiansen, Aleksander Bj{\o}rn and van der Hoog, Ivor and Rotenberg, Eva},
  title =	{{Local Density and Its Distributed Approximation}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.25},
  URN =		{urn:nbn:de:0030-drops-228502},
  doi =		{10.4230/LIPIcs.STACS.2025.25},
  annote =	{Keywords: Distributed graph algorithms, graph density computation, graph density approximation, network analysis theory}
}
Document
Toward Better Depth Lower Bounds: Strong Composition of XOR and a Random Function

Authors: Nikolai Chukhin, Alexander S. Kulikov, and Ivan Mihajlin


Abstract
Proving formula depth lower bounds is a fundamental challenge in complexity theory, with the strongest known bound of (3 - o(1))log n established by Håstad over 25 years ago. The Karchmer-Raz-Wigderson (KRW) conjecture offers a promising approach to advance these bounds and separate P from NC¹. It suggests that the depth complexity of a function composition f ⋄ g approximates the sum of the depth complexities of f and g. The Karchmer-Wigderson (KW) relation framework translates formula depth into communication complexity, restating the KRW conjecture as CC(KW_f ⋄ KW_g) ≈ CC(KW_f) + CC(KW_g). Prior work has confirmed the conjecture under various relaxations, often replacing one or both KW relations with the universal relation or constraining the communication game through strong composition. In this paper, we examine the strong composition KW_XOR ⊛ KW_f of the parity function and a random Boolean function f. We prove that with probability 1-o(1), any protocol solving this composition requires at least n^{3 - o(1)} leaves. This result establishes a depth lower bound of (3 - o(1))log n, matching Håstad’s bound, but is applicable to a broader class of inner functions, even when the outer function is simple. Though bounds for the strong composition do not translate directly to formula depth bounds, they usually help to analyze the standard composition (of the corresponding two functions) which is directly related to formula depth. Our proof utilizes formal complexity measures. First, we apply Khrapchenko’s method to show that numerous instances of f remain unsolved after several communication steps. Subsequently, we transition to a different formal complexity measure to demonstrate that the remaining communication problem is at least as hard as KW_OR ⊛ KW_f. This hybrid approach not only achieves the desired lower bound, but also introduces a novel technique for analyzing formula depth, potentially informing future research in complexity theory.

Cite as

Nikolai Chukhin, Alexander S. Kulikov, and Ivan Mihajlin. Toward Better Depth Lower Bounds: Strong Composition of XOR and a Random Function. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 26:1-26:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{chukhin_et_al:LIPIcs.STACS.2025.26,
  author =	{Chukhin, Nikolai and Kulikov, Alexander S. and Mihajlin, Ivan},
  title =	{{Toward Better Depth Lower Bounds: Strong Composition of XOR and a Random Function}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{26:1--26:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.26},
  URN =		{urn:nbn:de:0030-drops-228513},
  doi =		{10.4230/LIPIcs.STACS.2025.26},
  annote =	{Keywords: complexity, formula complexity, lower bounds, Boolean functions, depth}
}
Document
Local Equivalence of Stabilizer States: A Graphical Characterisation

Authors: Nathan Claudet and Simon Perdrix


Abstract
Stabilizer states form a ubiquitous family of quantum states that can be graphically represented through the graph state formalism. A fundamental property of graph states is that applying a local complementation - a well-known and extensively studied graph transformation - results in a graph that represents the same entanglement as the original. In other words, the corresponding graph states are LU-equivalent. This property served as the cornerstone for capturing non-trivial quantum properties in a simple graphical manner, in the study of quantum entanglement but also for developing protocols and models based on graph states and stabilizer states, such as measurement-based quantum computing, secret sharing, error correction, entanglement distribution... However, local complementation falls short of fully characterising entanglement: there exist pairs of graph states that are LU-equivalent but cannot be transformed one into the other using local complementations. Little is known about the equivalence of graph states beyond local complementation. We introduce a generalisation of local complementation which graphically characterises the LU-equivalence of graph states. We use this characterisation to show the existence of a strict infinite hierarchy of equivalences of graph states. Our approach is based on minimal local sets, which are subsets of vertices that are known to cover any graph, and to be invariant under local complementation and even LU-equivalence. We use these structures to provide a type to each vertex of a graph, leading to a natural standard form in which the LU-equivalence can be exhibited and captured by means of generalised local complementation.
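Local complementation itself is a purely combinatorial operation: complement the subgraph induced by the neighbourhood of a chosen vertex. A minimal sketch on a hypothetical 3-vertex graph (the example graph is ours, not the paper's):

```python
def local_complement(adj, v):
    """Return a copy of the graph with the subgraph induced by the
    neighbourhood of v complemented. adj maps each vertex to a set
    of neighbours."""
    nbrs = adj[v]
    new = {u: set(N) for u, N in adj.items()}
    for a in nbrs:
        for b in nbrs:
            if a < b:
                if b in new[a]:           # edge present: delete it
                    new[a].discard(b); new[b].discard(a)
                else:                     # edge absent: add it
                    new[a].add(b); new[b].add(a)
    return new

# Hypothetical example: a path 0-1-2. Locally complementing at 1 adds the
# edge {0,2}, turning the path into a triangle; the corresponding graph
# states are LU-equivalent. The operation is an involution.
path = {0: {1}, 1: {0, 2}, 2: {1}}
tri = local_complement(path, 1)
print(sorted(tri[0]))  # [1, 2]
```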

Cite as

Nathan Claudet and Simon Perdrix. Local Equivalence of Stabilizer States: A Graphical Characterisation. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 27:1-27:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{claudet_et_al:LIPIcs.STACS.2025.27,
  author =	{Claudet, Nathan and Perdrix, Simon},
  title =	{{Local Equivalence of Stabilizer States: A Graphical Characterisation}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{27:1--27:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.27},
  URN =		{urn:nbn:de:0030-drops-228527},
  doi =		{10.4230/LIPIcs.STACS.2025.27},
  annote =	{Keywords: Quantum computing, Graph theory, Entanglement, Local complementation}
}
Document
Can You Link Up With Treewidth?

Authors: Radu Curticapean, Simon Döring, Daniel Neuen, and Jiaheng Wang


Abstract
A central result by Marx [ToC '10] constructs k-vertex graphs H of maximum degree 3 such that n^o(k/log k) time algorithms for detecting colorful H-subgraphs would refute the Exponential-Time Hypothesis (ETH). This result is widely used to obtain almost-tight conditional lower bounds for parameterized problems under ETH. Our first contribution is a new and fully self-contained proof of this result that further simplifies a recent work by Karthik et al. [SOSA 2024]. In our proof, we introduce a novel graph parameter of independent interest, the linkage capacity γ(H), and show that detecting colorful H-subgraphs in time n^o(γ(H)) refutes ETH. Then, we use a simple construction of communication networks credited to Beneš to obtain k-vertex graphs of maximum degree 3 and linkage capacity Ω(k/log k), avoiding arguments involving expander graphs, which were required in previous papers. We also show that every graph H of treewidth t has linkage capacity Ω(t/log t), thus recovering a stronger result shown by Marx [ToC '10] with a simplified proof. Additionally, we obtain new tight lower bounds on the complexity of subgraph detection for certain types of patterns by analyzing their linkage capacity: We prove that almost all k-vertex graphs of polynomial average degree Ω(k^β) for β > 0 have linkage capacity Θ(k), which implies tight lower bounds for finding such patterns H. As an application of these results, we also obtain tight lower bounds for counting small induced subgraphs having a fixed property Φ, improving bounds from, e.g., [Roth et al., FOCS 2020].

Cite as

Radu Curticapean, Simon Döring, Daniel Neuen, and Jiaheng Wang. Can You Link Up With Treewidth?. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 28:1-28:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{curticapean_et_al:LIPIcs.STACS.2025.28,
  author =	{Curticapean, Radu and D\"{o}ring, Simon and Neuen, Daniel and Wang, Jiaheng},
  title =	{{Can You Link Up With Treewidth?}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{28:1--28:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.28},
  URN =		{urn:nbn:de:0030-drops-228534},
  doi =		{10.4230/LIPIcs.STACS.2025.28},
  annote =	{Keywords: subgraph isomorphism, constraint satisfaction problems, linkage capacity, exponential-time hypothesis, parameterized complexity, counting complexity}
}
Document
Noisy (Binary) Searching: Simple, Fast and Correct

Authors: Dariusz Dereniowski, Aleksander Łukasiewicz, and Przemysław Uznański


Abstract
This work considers the problem of noisy binary search in a sorted array. The noise is modeled by a parameter p that dictates that a comparison can be incorrect with probability p, independently of other queries. We state two types of upper bounds on the number of queries: for the worst-case and for the expected query complexity scenarios. The bounds improve the ones known to date, i.e., our algorithms require fewer queries. Additionally, they have simpler statements, and work for the full range of parameters. All query complexities for the expected query scenarios are tight up to lower-order terms. For the problem where the target prior is uniform over all possible inputs, we provide an algorithm with expected complexity upper-bounded by (log₂ n + log₂ δ^{-1} + 3)/I(p), where n is the domain size, 0 ≤ p < 1/2 is the noise ratio, δ > 0 is the failure probability, and I(p) is the information gain function. As a side-effect, we close some correctness issues regarding previous work. Also, en route, we obtain new and improved query complexities for the search generalized to arbitrary graphs. This paper continues and improves the lines of research of Burnashev-Zigangirov [Prob. Per. Informatsii, 1974], Ben-Or and Hassidim [FOCS 2008], Gu and Xu [STOC 2023], and Emamjomeh-Zadeh et al. [STOC 2016], Dereniowski et al. [SOSA@SODA 2019].
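A simple way to see the problem setting (not the paper's algorithm, whose query count is sharper): repeat each noisy comparison and take a majority vote, which already makes ordinary binary search reliable at the cost of a multiplicative overhead in queries. All parameters below are illustrative.

```python
import random

def noisy_search(arr, target, p=0.3, reps=31):
    """Binary search in a sorted array where each comparison lies with
    probability p; each comparison is repeated reps times and decided
    by majority vote. A naive baseline, not an optimal algorithm."""
    def noisy_leq(x):  # noisy answer to "is target <= x?"
        truth = target <= x
        return truth ^ (random.random() < p)
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        votes = sum(noisy_leq(arr[mid]) for _ in range(reps))
        if votes > reps // 2:
            hi = mid
        else:
            lo = mid + 1
    return lo

random.seed(0)
arr = list(range(100))
print(noisy_search(arr, 42, p=0.0))  # 42: exact when the oracle never lies
print(noisy_search(arr, 42))         # correct with high probability
```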

Cite as

Dariusz Dereniowski, Aleksander Łukasiewicz, and Przemysław Uznański. Noisy (Binary) Searching: Simple, Fast and Correct. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dereniowski_et_al:LIPIcs.STACS.2025.29,
  author =	{Dereniowski, Dariusz and {\L}ukasiewicz, Aleksander and Uzna\'{n}ski, Przemys{\l}aw},
  title =	{{Noisy (Binary) Searching: Simple, Fast and Correct}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.29},
  URN =		{urn:nbn:de:0030-drops-228551},
  doi =		{10.4230/LIPIcs.STACS.2025.29},
  annote =	{Keywords: Graph Algorithms, Noisy Binary Search, Query Complexity, Reliability}
}
Document
Being Efficient in Time, Space, and Workload: a Self-Stabilizing Unison and Its Consequences

Authors: Stéphane Devismes, David Ilcinkas, Colette Johnen, and Frédéric Mazoit


Abstract
We present a self-stabilizing algorithm for the unison problem which is efficient in time, workload, and space in a weak model. Precisely, our algorithm is defined in the atomic-state model and works in anonymous asynchronous connected networks in which even local ports are unlabeled. It makes no assumption on the daemon and thus stabilizes under the weakest one: the distributed unfair daemon. In an n-node network of diameter D and assuming the knowledge B ≥ 2D+2, our algorithm only requires Θ(log(B)) bits per node and is fully polynomial as it stabilizes in at most 2D+2 rounds and O(min(n²B, n³)) moves. In particular, it is the first self-stabilizing unison for arbitrary asynchronous anonymous networks achieving an asymptotically optimal stabilization time in rounds using a bounded memory at each node. Furthermore, we show that our solution can be used to efficiently simulate synchronous self-stabilizing algorithms in asynchronous environments. For example, this simulation allows us to design a new state-of-the-art algorithm solving both the leader election and the BFS (Breadth-First Search) spanning tree construction in any identified connected network which, to the best of our knowledge, beats all existing solutions in the literature.
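To illustrate the unison specification (neighbouring clocks stay close under an arbitrary daemon), here is a toy simulation of a textbook-style unison rule: a node may tick when its clock does not exceed any neighbour's clock. This is only an illustration of the problem under an assumed rule and a random daemon, not the algorithm of the paper.

```python
import random

def step(clocks, adj, rng):
    """One asynchronous step: a daemon-chosen subset of the enabled nodes
    fires. A node is enabled when its clock is minimal in its closed
    neighbourhood; firing increments its clock by one."""
    enabled = [v for v in clocks
               if all(clocks[v] <= clocks[u] for u in adj[v])]
    # random daemon; ensure at least one enabled node fires (progress)
    fired = [v for v in enabled if rng.random() < 0.5] or enabled[:1]
    for v in fired:
        clocks[v] += 1
    return clocks

# Hypothetical 4-cycle, started in a legitimate configuration: under this
# rule, clocks of neighbouring nodes never drift apart by more than 1.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
clocks = {v: 0 for v in adj}
rng = random.Random(1)
for _ in range(50):
    step(clocks, adj, rng)
print(clocks)  # clocks advanced, neighbours within 1 of each other
```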

Cite as

Stéphane Devismes, David Ilcinkas, Colette Johnen, and Frédéric Mazoit. Being Efficient in Time, Space, and Workload: a Self-Stabilizing Unison and Its Consequences. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 30:1-30:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{devismes_et_al:LIPIcs.STACS.2025.30,
  author =	{Devismes, St\'{e}phane and Ilcinkas, David and Johnen, Colette and Mazoit, Fr\'{e}d\'{e}ric},
  title =	{{Being Efficient in Time, Space, and Workload: a Self-Stabilizing Unison and Its Consequences}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{30:1--30:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.30},
  URN =		{urn:nbn:de:0030-drops-228568},
  doi =		{10.4230/LIPIcs.STACS.2025.30},
  annote =	{Keywords: Self-stabilization, unison, time complexity, synchronizer}
}
Document
Efficient Approximation Schemes for Scheduling on a Stochastic Number of Machines

Authors: Leah Epstein and Asaf Levin


Abstract
We study three two-stage optimization problems with a similar structure and different objectives. In the first stage of each problem, the goal is to assign input jobs of positive sizes to unsplittable bags. After this assignment is decided, the realization of the number of identical machines that will be available is revealed. Then, in the second stage, the bags are assigned to machines. The probability vector of the number of machines in the second stage is known to the algorithm as part of the input before making the decisions of the first stage. Thus, the vector of machine completion times is a random variable. The goal of the first problem is to minimize the expected value of the makespan of the second stage schedule, while the goal of the second problem is to maximize the expected value of the minimum completion time of the machines in the second stage solution. The goal of the third problem is to minimize the ℓ_p norm for a fixed p > 1, where the norm is applied to the vector of machine completion times. Each one of the first two problems admits a PTAS, as Buchem et al. showed recently. Here we significantly improve all their results by designing an EPTAS for each one of these problems. We also design an EPTAS for ℓ_p norm minimization for any p > 1.
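The two-stage objective can be sketched concretely: fix a first-stage packing of jobs into bags, then evaluate the expected makespan over the machine-count distribution, scheduling the bags in each scenario with a simple heuristic (LPT here, as a stand-in for the optimal second stage). The bag sizes and distribution are hypothetical.

```python
import heapq

def lpt_makespan(bags, m):
    """Longest-Processing-Time list scheduling of bag sizes on m machines:
    repeatedly place the largest remaining bag on the least-loaded machine."""
    loads = [0] * m
    heapq.heapify(loads)
    for b in sorted(bags, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + b)
    return max(loads)

def expected_makespan(bags, dist):
    """dist maps each possible machine count m to its probability."""
    return sum(p * lpt_makespan(bags, m) for m, p in dist.items())

# Hypothetical first-stage assignment: jobs already packed into bags of
# sizes 5, 4, 3, 3; the number of machines is 1 or 2 with equal probability.
bags = [5, 4, 3, 3]
dist = {1: 0.5, 2: 0.5}
print(expected_makespan(bags, dist))  # 0.5*15 + 0.5*8 = 11.5
```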

Cite as

Leah Epstein and Asaf Levin. Efficient Approximation Schemes for Scheduling on a Stochastic Number of Machines. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 31:1-31:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{epstein_et_al:LIPIcs.STACS.2025.31,
  author =	{Epstein, Leah and Levin, Asaf},
  title =	{{Efficient Approximation Schemes for Scheduling on a Stochastic Number of Machines}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{31:1--31:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.31},
  URN =		{urn:nbn:de:0030-drops-228579},
  doi =		{10.4230/LIPIcs.STACS.2025.31},
  annote =	{Keywords: Approximation algorithms, Approximation schemes, Two-stage stochastic optimization problems, Multiprocessor scheduling}
}
A Faster Algorithm for Constrained Correlation Clustering

Authors: Nick Fischer, Evangelos Kipouridis, Jonas Klausen, and Mikkel Thorup


Abstract
In the Correlation Clustering problem we are given n nodes, and a preference for each pair of nodes indicating whether we prefer the two endpoints to be in the same cluster or not. The output is a clustering inducing the minimum number of violated preferences. In certain cases, however, the preference between some pairs may be too important to be violated. The constrained version of this problem specifies pairs of nodes that must be in the same cluster as well as pairs that must not be in the same cluster (hard constraints). The output clustering has to satisfy all hard constraints while minimizing the number of violated preferences. Constrained Correlation Clustering is APX-hard and has been approximated within a factor of 3 by van Zuylen et al. [SODA '07]. Their algorithm is based on rounding an LP with Θ(n³) constraints, resulting in an Ω(n^{3ω}) running time. In this work, using a more combinatorial approach, we show how to approximate this problem significantly faster at the cost of a slightly weaker approximation factor. In particular, our algorithm runs in Õ(n³) time (notice that the input size is Θ(n²)) and approximates Constrained Correlation Clustering within a factor of 16. To achieve our result, we need properties guaranteed by a particular influential algorithm for (unconstrained) Correlation Clustering, the CC-PIVOT algorithm. This algorithm chooses a pivot node u, creates a cluster containing u and all its preferred nodes, and recursively solves the rest of the problem. It is known that selecting pivots at random gives a 3-approximation. As a byproduct of our work, we provide a derandomization of the CC-PIVOT algorithm that still achieves the 3-approximation; furthermore, we show that there exist instances where no ordering of the pivots can give a (3-ε)-approximation, for any constant ε.
Finally, we introduce a node-weighted version of Correlation Clustering, which can be approximated within a factor of 3 using our insights on Constrained Correlation Clustering. As the general weighted version of Correlation Clustering would require a major breakthrough to approximate within a factor of o(log n), Node-Weighted Correlation Clustering may be a practical alternative.
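The pivoting rule described in the abstract is compact enough to sketch directly (a minimal sketch, not the paper's derandomization; preferences are assumed to be given as a set of unordered "same-cluster" pairs):

```python
import random

def cc_pivot(nodes, similar, rng=None):
    """CC-PIVOT: pick a random pivot u, cluster u together with all
    remaining nodes that prefer to be with u, recurse on the rest."""
    rng = rng or random.Random(0)
    remaining = set(nodes)
    clusters = []
    while remaining:
        u = rng.choice(sorted(remaining))
        cluster = {u} | {v for v in remaining if frozenset((u, v)) in similar}
        clusters.append(cluster)
        remaining -= cluster
    return clusters
```

Selecting the pivots uniformly at random yields the 3-approximation in expectation, per the result cited above.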

Cite as

Nick Fischer, Evangelos Kipouridis, Jonas Klausen, and Mikkel Thorup. A Faster Algorithm for Constrained Correlation Clustering. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 32:1-32:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{fischer_et_al:LIPIcs.STACS.2025.32,
  author =	{Fischer, Nick and Kipouridis, Evangelos and Klausen, Jonas and Thorup, Mikkel},
  title =	{{A Faster Algorithm for Constrained Correlation Clustering}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{32:1--32:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.32},
  URN =		{urn:nbn:de:0030-drops-228585},
  doi =		{10.4230/LIPIcs.STACS.2025.32},
  annote =	{Keywords: Clustering, Constrained Correlation Clustering, Approximation}
}
Metric Dimension and Geodetic Set Parameterized by Vertex Cover

Authors: Florent Foucaud, Esther Galby, Liana Khazaliya, Shaohua Li, Fionn Mc Inerney, Roohani Sharma, and Prafullkumar Tale


Abstract
For a graph G, a subset S ⊆ V(G) is called a resolving set of G if, for any two vertices u,v ∈ V(G), there exists a vertex w ∈ S such that d(w,u) ≠ d(w,v). The Metric Dimension problem takes as input a graph G on n vertices and a positive integer k, and asks whether there exists a resolving set of size at most k. In another metric-based graph problem, Geodetic Set, the input is a graph G and an integer k, and the objective is to determine whether there exists a subset S ⊆ V(G) of size at most k such that, for any vertex u ∈ V(G), there are two vertices s₁, s₂ ∈ S such that u lies on a shortest path from s₁ to s₂. These two classical problems are known to be intractable with respect to the natural parameter, i.e., the solution size, as well as most structural parameters, including the feedback vertex set number and pathwidth. We observe that both problems admit an FPT algorithm running in 2^𝒪(vc²) ⋅ n^𝒪(1) time, and a kernelization algorithm that outputs a kernel with 2^𝒪(vc) vertices, where vc is the vertex cover number. We prove that unless the Exponential Time Hypothesis (ETH) fails, Metric Dimension and Geodetic Set, even on graphs of bounded diameter, do not admit
- an FPT algorithm running in 2^o(vc²) ⋅ n^𝒪(1) time, nor
- a kernelization algorithm that does not increase the solution size and outputs a kernel with 2^o(vc) vertices.
We only know of one other problem in the literature that admits such a tight algorithmic lower bound with respect to vc. Similarly, the list of known problems with exponential lower bounds on the number of vertices in kernelized instances is very short.
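The resolving-set condition translates directly into a brute-force checker (a sketch from the definition; BFS hop distances play the role of d, and the graph is an adjacency-list dict — names are ours):

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src to every vertex reachable in adj."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_resolving_set(adj, S):
    """S resolves G iff every vertex gets a distinct vector of
    distances to the vertices of S (assumes G is connected)."""
    landmarks = [bfs_distances(adj, w) for w in sorted(S)]
    vectors = {tuple(d[v] for d in landmarks) for v in adj}
    return len(vectors) == len(adj)
```

On the path 1-2-3, the endpoint {1} resolves the graph, while the middle vertex {2} does not (both endpoints are at distance 1 from it).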

Cite as

Florent Foucaud, Esther Galby, Liana Khazaliya, Shaohua Li, Fionn Mc Inerney, Roohani Sharma, and Prafullkumar Tale. Metric Dimension and Geodetic Set Parameterized by Vertex Cover. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 33:1-33:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{foucaud_et_al:LIPIcs.STACS.2025.33,
  author =	{Foucaud, Florent and Galby, Esther and Khazaliya, Liana and Li, Shaohua and Mc Inerney, Fionn and Sharma, Roohani and Tale, Prafullkumar},
  title =	{{Metric Dimension and Geodetic Set Parameterized by Vertex Cover}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{33:1--33:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.33},
  URN =		{urn:nbn:de:0030-drops-228593},
  doi =		{10.4230/LIPIcs.STACS.2025.33},
  annote =	{Keywords: Parameterized Complexity, ETH-based Lower Bounds, Kernelization, Vertex Cover, Metric Dimension, Geodetic Set}
}
Agreement Tasks in Fault-Prone Synchronous Networks of Arbitrary Structure

Authors: Pierre Fraigniaud, Minh Hang Nguyen, and Ami Paz


Abstract
Consensus is arguably the most studied problem in distributed computing as a whole, and particularly in the distributed message-passing setting. In this latter framework, research on consensus has considered various hypotheses regarding the failure types, the memory constraints, the algorithmic performances (e.g., early stopping and obliviousness), etc. Surprisingly, almost all of this work assumes that messages are passed in a complete network, i.e., each process has a direct link to every other process. A noticeable exception is the recent work of Castañeda et al. (Inf. Comput. 2023) who designed a generic oblivious algorithm for consensus running in radius(G,t) rounds in every graph G, when up to t nodes can crash by irrevocably stopping, where t is smaller than the node-connectivity κ of G. Here, radius(G,t) denotes a graph parameter called the radius of G whenever up to t nodes can crash. For t = 0, this parameter coincides with radius(G), the standard radius of a graph, and, for G = K_n, the running time radius(K_n,t) = t+1 of the algorithm exactly matches the known round-complexity of consensus in the clique K_n. Our main result is a proof that radius(G,t) rounds are necessary for oblivious algorithms solving consensus in G when up to t nodes can crash, thus validating a conjecture of Castañeda et al., and demonstrating that their consensus algorithm is optimal for any graph G. We also extend the result of Castañeda et al. to two different settings: First, to the case where the number t of failures is not necessarily smaller than the connectivity κ of the considered graph; Second, to the k-set agreement problem for which agreement is not restricted to be on a single value as in consensus, but on up to k different values.

Cite as

Pierre Fraigniaud, Minh Hang Nguyen, and Ami Paz. Agreement Tasks in Fault-Prone Synchronous Networks of Arbitrary Structure. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 34:1-34:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{fraigniaud_et_al:LIPIcs.STACS.2025.34,
  author =	{Fraigniaud, Pierre and Nguyen, Minh Hang and Paz, Ami},
  title =	{{Agreement Tasks in Fault-Prone Synchronous Networks of Arbitrary Structure}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{34:1--34:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.34},
  URN =		{urn:nbn:de:0030-drops-228606},
  doi =		{10.4230/LIPIcs.STACS.2025.34},
  annote =	{Keywords: Consensus, set-agreement, fault tolerance, crash failures}
}
Dimension-Free Parameterized Approximation Schemes for Hybrid Clustering

Authors: Ameet Gadekar and Tanmay Inamdar


Abstract
Hybrid k-Clustering is a model of clustering that generalizes two of the most widely studied clustering objectives: k-Center and k-Median. In this model, given a set of n points P, the goal is to find k centers such that the sum of the r-distances of each point to its nearest center is minimized. The r-distance between two points p and q is defined as max{dist(p, q)-r, 0} - this represents the distance of p to the boundary of the r-radius ball around q if p is outside the ball, and 0 otherwise. This problem was recently introduced by Fomin et al. [APPROX 2024], who designed a (1+ε, 1+ε)-bicriteria approximation that runs in time 2^{(kd/ε)^{O(1)}} ⋅ n^{O(1)} for inputs in ℝ^d; such a bicriteria solution uses balls of radius (1+ε)r instead of r, and has a cost at most 1+ε times the cost of an optimal solution using balls of radius r. In this paper, we significantly improve upon this result by designing an approximation algorithm with the same bicriteria guarantee, but with running time that is FPT only in k and ε - crucially, removing the exponential dependence on the dimension d. This resolves an open question posed in their paper. Our results extend further in several directions. First, our approximation scheme works in a broader class of metric spaces, including doubling spaces, minor-free, and bounded-treewidth metrics. Secondly, our techniques yield similar bicriteria FPT approximation schemes for other variants of Hybrid k-Clustering, e.g., when the objective features the sum of the z-th power of the r-distances. Finally, we also design a coreset for Hybrid k-Clustering in doubling spaces, answering another open question from the work of Fomin et al.
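The r-distance underlying the objective is a one-liner, and the Hybrid k-Clustering cost follows from it (a sketch in the Euclidean plane; function names are illustrative):

```python
import math

def r_dist(p, q, r):
    """max{dist(p, q) - r, 0}: the distance from p to the boundary of
    the r-radius ball around q, or 0 if p lies inside that ball."""
    return max(math.dist(p, q) - r, 0.0)

def hybrid_cost(points, centers, r):
    """Hybrid k-Clustering objective: sum over all points of the
    r-distance to the nearest center."""
    return sum(min(r_dist(p, c, r) for c in centers) for p in points)
```

Setting r = 0 recovers the k-Median objective, while minimizing the smallest r that makes the cost 0 recovers k-Center.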

Cite as

Ameet Gadekar and Tanmay Inamdar. Dimension-Free Parameterized Approximation Schemes for Hybrid Clustering. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 35:1-35:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gadekar_et_al:LIPIcs.STACS.2025.35,
  author =	{Gadekar, Ameet and Inamdar, Tanmay},
  title =	{{Dimension-Free Parameterized Approximation Schemes for Hybrid Clustering}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{35:1--35:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.35},
  URN =		{urn:nbn:de:0030-drops-228615},
  doi =		{10.4230/LIPIcs.STACS.2025.35},
  annote =	{Keywords: Clustering, Parameterized algorithms, FPT approximation, k-Median, k-Center}
}
MaxMin Separation Problems: FPT Algorithms for st-Separator and Odd Cycle Transversal

Authors: Ajinkya Gaikwad, Hitendra Kumar, Soumen Maity, Saket Saurabh, and Roohani Sharma


Abstract
In this paper, we study the parameterized complexity of the MaxMin versions of two fundamental separation problems: Maximum Minimal st-Separator and Maximum Minimal Odd Cycle Transversal (OCT), both parameterized by the solution size. In the Maximum Minimal st-Separator problem, given a graph G, two distinct vertices s and t and a positive integer k, the goal is to determine whether there exists a minimal st-separator in G of size at least k. Similarly, the Maximum Minimal OCT problem seeks to determine if there exists a minimal set of vertices whose deletion results in a bipartite graph, and whose size is at least k. We demonstrate that both problems are fixed-parameter tractable parameterized by k. Our FPT algorithm for Maximum Minimal st-Separator answers an open question of Hanaka, Bodlaender, van der Zanden & Ono [TCS 2019]. One unique insight from this work is the following. We use the meta-result of Lokshtanov, Ramanujan, Saurabh & Zehavi [ICALP 2018] that enables us to reduce our problems to highly unbreakable graphs. This is interesting, as an explicit use of the recursive understanding and randomized contractions framework of Chitnis, Cygan, Hajiaghayi, Pilipczuk & Pilipczuk [SICOMP 2016] to reduce to the highly unbreakable graphs setting (which is the result that Lokshtanov et al. try to abstract out in their meta-theorem) does not seem obvious, because certain "extension" variants of our problems are W[1]-hard.
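The solution concept — a minimal st-separator — can be verified from first principles; since separation is monotone under taking supersets, minimality only needs to be checked against the subsets missing a single vertex (a brute-force sketch using DFS reachability; names are ours):

```python
def reachable(adj, src, removed):
    """Vertices reachable from src in adj once `removed` is deleted."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen and v not in removed:
                seen.add(v)
                stack.append(v)
    return seen

def is_minimal_st_separator(adj, s, t, S):
    """S separates s from t, but no S - {x} does (by monotonicity of
    separation, this implies no proper subset of S separates them)."""
    if s in S or t in S or t in reachable(adj, s, S):
        return False
    return all(t in reachable(adj, s, S - {x}) for x in S)
```

On the 4-cycle s-a-t-b-s, the pair {a, b} is a minimal st-separator, while {a} alone separates nothing.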

Cite as

Ajinkya Gaikwad, Hitendra Kumar, Soumen Maity, Saket Saurabh, and Roohani Sharma. MaxMin Separation Problems: FPT Algorithms for st-Separator and Odd Cycle Transversal. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 36:1-36:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gaikwad_et_al:LIPIcs.STACS.2025.36,
  author =	{Gaikwad, Ajinkya and Kumar, Hitendra and Maity, Soumen and Saurabh, Saket and Sharma, Roohani},
  title =	{{MaxMin Separation Problems: FPT Algorithms for st-Separator and Odd Cycle Transversal}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{36:1--36:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.36},
  URN =		{urn:nbn:de:0030-drops-228622},
  doi =		{10.4230/LIPIcs.STACS.2025.36},
  annote =	{Keywords: Parameterized Complexity, FPT, MaxMin problems, Maximum Minimal st-separator, Maximum Minimal Odd Cycle Transversal, Unbreakable Graphs, CMSO, Long Induced Odd Cycles, Sunflower Lemma}
}
On the Existential Theory of the Reals Enriched with Integer Powers of a Computable Number

Authors: Jorge Gallego-Hernández and Alessio Mansutti


Abstract
This paper investigates ∃ℝ(ξ^ℤ), that is, the extension of the existential theory of the reals by an additional unary predicate ξ^ℤ for the integer powers of a fixed computable real number ξ > 0. If all we have access to is a Turing machine computing ξ, it is not possible to decide whether an input formula from this theory is satisfiable. However, we show an algorithm to decide this problem when
- ξ is known to be transcendental, or
- ξ is a root of some given integer polynomial (that is, ξ is algebraic).
In other words, knowing the algebraicity of ξ suffices to circumvent undecidability. Furthermore, we establish complexity results under the proviso that ξ enjoys what we call a polynomial root barrier. Using this notion, we show that the satisfiability problem of ∃ℝ(ξ^ℤ) is
- in ExpSpace if ξ is an algebraic number, and
- in 3Exp if ξ is a logarithm of an algebraic number, Euler’s e, or the number π, among others.
To establish our results, we first observe that the satisfiability problem of ∃ℝ(ξ^ℤ) reduces in exponential time to the problem of solving quantifier-free instances of the theory of the reals where variables range over ξ^ℤ. We then prove that these instances have a small witness property: only finitely many integer powers of ξ must be considered to find whether a formula is satisfiable. Our complexity results are shown by relying on well-established machinery from Diophantine approximation and transcendental number theory, such as bounds for the transcendence measure of numbers. As a by-product of our results, we are able to remove the appeal to Schanuel’s conjecture from the proof of decidability of the entropic risk threshold problem for stochastic games with rational probabilities, rewards and threshold [Baier et al., MFCS, 2023]: when the base of the entropic risk is e and the aversion factor is a fixed algebraic number, the problem is (unconditionally) in Exp.

Cite as

Jorge Gallego-Hernández and Alessio Mansutti. On the Existential Theory of the Reals Enriched with Integer Powers of a Computable Number. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 37:1-37:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gallegohernandez_et_al:LIPIcs.STACS.2025.37,
  author =	{Gallego-Hern\'{a}ndez, Jorge and Mansutti, Alessio},
  title =	{{On the Existential Theory of the Reals Enriched with Integer Powers of a Computable Number}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{37:1--37:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.37},
  URN =		{urn:nbn:de:0030-drops-228635},
  doi =		{10.4230/LIPIcs.STACS.2025.37},
  annote =	{Keywords: Theory of the reals with exponentiation, decision procedures, computability}
}
Two-Dimensional Longest Common Extension Queries in Compact Space

Authors: Arnab Ganguly, Daniel Gibney, Rahul Shah, and Sharma V. Thankachan


Abstract
For a length-n text over an alphabet of size σ, we can encode the suffix tree data structure in 𝒪(n log σ) bits of space. It supports suffix array (SA), inverse suffix array (ISA), and longest common extension (LCE) queries in 𝒪(log^ε_σ n) time, which enables efficient pattern matching; here ε > 0 is an arbitrarily small constant. Further improvements are possible for LCE queries, where 𝒪(1) time queries can be achieved using an index of space 𝒪(n log σ) bits. However, compactly indexing a two-dimensional text (i.e., an n × n matrix) has been a major open problem. We show progress in this direction by first presenting an 𝒪(n² log σ)-bit structure supporting LCE queries in near 𝒪((log_σ n)^{2/3}) time. We then present an 𝒪(n² log σ + n² log log n)-bit structure supporting ISA queries in near 𝒪(log n ⋅ (log_σ n)^{2/3}) time. Within a similar space, achieving SA queries in poly-logarithmic (even strongly sub-linear) time is a significant challenge. However, our 𝒪(n² log σ + n² log log n)-bit structure can support SA queries in 𝒪(n²/(σ log n)^c) time, where c is an arbitrarily large constant, which enables pattern matching in time faster than what is possible without preprocessing. We then design a repetition-aware data structure. The δ_2D compressibility measure for two-dimensional texts was recently introduced by Carfagna and Manzini [SPIRE 2023]. The measure ranges from 1 to n², with smaller δ_2D indicating a highly compressible two-dimensional text. The current data structure utilizing δ_2D allows only element access. We obtain the first structure based on δ_2D for LCE queries. It takes Õ(n^{5/3} + n^{8/5}δ_2D^{1/5}) space and answers queries in 𝒪(log n) time.
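An LCE query itself is simple to state; the structures above accelerate what a naive scan computes. For reference, the one-dimensional query looks like this (a sketch, not the paper's data structure):

```python
def lce(text, i, j):
    """Longest common extension: length of the longest common prefix
    of the suffixes text[i:] and text[j:], by naive scanning."""
    n, k = len(text), 0
    while i + k < n and j + k < n and text[i + k] == text[j + k]:
        k += 1
    return k
```

For "banana", the suffixes starting at positions 1 and 3 ("anana" and "ana") share the prefix "ana", so the query returns 3.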

Cite as

Arnab Ganguly, Daniel Gibney, Rahul Shah, and Sharma V. Thankachan. Two-Dimensional Longest Common Extension Queries in Compact Space. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 38:1-38:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{ganguly_et_al:LIPIcs.STACS.2025.38,
  author =	{Ganguly, Arnab and Gibney, Daniel and Shah, Rahul and Thankachan, Sharma V.},
  title =	{{Two-Dimensional Longest Common Extension Queries in Compact Space}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{38:1--38:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.38},
  URN =		{urn:nbn:de:0030-drops-228649},
  doi =		{10.4230/LIPIcs.STACS.2025.38},
  annote =	{Keywords: String matching, text indexing, two-dimensional text}
}
A Quasi-Polynomial Time Algorithm for Multi-Arrival on Tree-Like Multigraphs

Authors: Ebrahim Ghorbani, Jonah Leander Hoff, and Matthias Mnich


Abstract
Propp machines, or rotor-router models, are a classic tool to simulate random systems in the form of Markov chains by deterministic systems. To this end, the nodes of the Markov chain are replaced by switching nodes, which maintain a queue over their outgoing arcs; a particle sent through the system traverses the top arc of the queue, which is then moved to the end of the queue, and the particle arrives at the next node. A key question to answer about such systems is whether a single particle can reach a particular target node, given as input an initial configuration of the queues at all switching nodes. This question was introduced by Dohrau et al. (2017) under the name of Arrival. A major open question is whether Arrival can be solved in polynomial time, as it is known to lie in NP ∩ co-NP; yet the fastest known algorithm for general instances takes subexponential time (Gärtner et al., ICALP 2021). We consider a generalized version of Arrival introduced by Auger et al. (RP 2023), which requires routing multiple (potentially exponentially many) particles through a rotor graph. The Multi-Arrival problem is to determine the particle configuration that results from moving all particles from a given initial configuration to sinks. Auger et al. showed that for path-like rotor graphs with a certain uniform rotor order, the problem can be solved in polynomial time. Our main result is a quasi-polynomial-time algorithm for Multi-Arrival on tree-like rotor graphs for arbitrary rotor orders. Tree-like rotor graphs are directed multigraphs which can be obtained from undirected trees by replacing each edge by an arbitrary number of arcs in either or both directions. For trees of bounded contracted height, such as paths, the algorithm runs in polynomial time and thereby generalizes the result of Auger et al. Moreover, we give a polynomial-time algorithm for Multi-Arrival on tree-like rotor graphs without parallel arcs.
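The switching rule is easy to simulate (a minimal sketch of single-particle routing; rotor queues are deques of out-neighbours, and the helper names are ours):

```python
from collections import deque

def route_particle(rotors, start, sinks, max_steps=10**6):
    """Move one particle by the rotor rule: at each switching node,
    traverse the top arc of the queue and rotate that arc to the back.
    Returns the sink reached, or None if max_steps is exhausted."""
    v = start
    for _ in range(max_steps):
        if v in sinks:
            return v
        nxt = rotors[v][0]       # top arc of the queue
        rotors[v].rotate(-1)     # move it to the end of the queue
        v = nxt
    return None
```

Because the queues persist between particles, routing a second particle from the same node continues where the first rotation left off — which is exactly what makes Multi-Arrival's resulting configuration nontrivial to determine quickly.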

Cite as

Ebrahim Ghorbani, Jonah Leander Hoff, and Matthias Mnich. A Quasi-Polynomial Time Algorithm for Multi-Arrival on Tree-Like Multigraphs. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 39:1-39:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{ghorbani_et_al:LIPIcs.STACS.2025.39,
  author =	{Ghorbani, Ebrahim and Leander Hoff, Jonah and Mnich, Matthias},
  title =	{{A Quasi-Polynomial Time Algorithm for Multi-Arrival on Tree-Like Multigraphs}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{39:1--39:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.39},
  URN =		{urn:nbn:de:0030-drops-228658},
  doi =		{10.4230/LIPIcs.STACS.2025.39},
  annote =	{Keywords: Arrival, Rotor-routing, Tree-like Multigraph, Path-Like Multigraph, Fixed-Parameter Tractability}
}
Identity-Preserving Lax Extensions and Where to Find Them

Authors: Sergey Goncharov, Dirk Hofmann, Pedro Nora, Lutz Schröder, and Paul Wild


Abstract
Generic notions of bisimulation for various types of systems (nondeterministic, probabilistic, weighted, etc.) rely on identity-preserving (normal) lax extensions of the functor encapsulating the system type, in the paradigm of universal coalgebra. It is known that preservation of weak pullbacks is a sufficient condition for a functor to admit a normal lax extension (the Barr extension, which in fact is then even strict); in the converse direction, nothing is currently known about necessary (weak) pullback preservation conditions for the existence of normal lax extensions. In the present work, we narrow this gap by showing on the one hand that functors admitting a normal lax extension preserve 1/4-iso pullbacks, i.e., pullbacks in which at least one of the projections is an isomorphism. On the other hand, we give sufficient conditions, showing that a functor admits a normal lax extension if it weakly preserves either 1/4-iso pullbacks and 4/4-epi pullbacks (i.e., pullbacks in which all morphisms are epic) or inverse images. We apply these criteria to concrete examples, in particular to functors modelling neighbourhood systems and weighted systems.

Cite as

Sergey Goncharov, Dirk Hofmann, Pedro Nora, Lutz Schröder, and Paul Wild. Identity-Preserving Lax Extensions and Where to Find Them. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 40:1-40:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{goncharov_et_al:LIPIcs.STACS.2025.40,
  author =	{Goncharov, Sergey and Hofmann, Dirk and Nora, Pedro and Schr\"{o}der, Lutz and Wild, Paul},
  title =	{{Identity-Preserving Lax Extensions and Where to Find Them}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{40:1--40:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{ă}ng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.40},
  URN =		{urn:nbn:de:0030-drops-228665},
  doi =		{10.4230/LIPIcs.STACS.2025.40},
  annote =	{Keywords: (Bi-)simulations, lax extensions, modal logics, coalgebra}
}
Residue Domination in Bounded-Treewidth Graphs

Authors: Jakob Greilhuber, Philipp Schepper, and Philip Wellnitz


Abstract
For the vertex selection problem (σ,ρ)-DomSet one is given two fixed sets σ and ρ of integers, and the task is to decide whether we can select vertices of the input graph such that, for every selected vertex, the number of selected neighbors is in σ and, for every unselected vertex, the number of selected neighbors is in ρ [Telle, Nord. J. Comp. 1994]. This framework covers many fundamental graph problems such as Independent Set and Dominating Set. We significantly extend the recent result by Focke et al. [SODA 2023] to investigate the case when σ and ρ are two (potentially different) residue classes modulo m ≥ 2. We study the problem parameterized by treewidth and present an algorithm that solves the decision, minimization, and maximization versions of the problem in time m^tw ⋅ n^O(1). This significantly improves upon previous algorithms, for which, in the case m ≥ 3, not even an explicit running time is known. We complement our algorithm by providing matching lower bounds: there is no (m-ε)^pw ⋅ n^O(1)-time algorithm parameterized by pathwidth pw, unless SETH fails. For m = 2, we extend these bounds to the minimization version, as the decision version is efficiently solvable.
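The selection condition with residue classes modulo m can be written down directly (a sketch of the feasibility check only; adjacency-list graph, illustrative names):

```python
def is_residue_domset(adj, selected, sigma_res, rho_res, m):
    """Check the (σ,ρ)-condition where σ and ρ are the residue classes
    sigma_res and rho_res modulo m: every selected vertex must have
    #selected neighbours ≡ sigma_res (mod m), and every unselected
    vertex must have #selected neighbours ≡ rho_res (mod m)."""
    for v, nbrs in adj.items():
        count = sum(1 for u in nbrs if u in selected)
        target = sigma_res if v in selected else rho_res
        if count % m != target:
            return False
    return True
```

On a triangle, selecting two of the three vertices satisfies σ ≡ 1, ρ ≡ 0 (mod 2): each selected vertex has one selected neighbour, and the unselected vertex has two.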

Cite as

Jakob Greilhuber, Philipp Schepper, and Philip Wellnitz. Residue Domination in Bounded-Treewidth Graphs. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 41:1-41:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{greilhuber_et_al:LIPIcs.STACS.2025.41,
  author =	{Greilhuber, Jakob and Schepper, Philipp and Wellnitz, Philip},
  title =	{{Residue Domination in Bounded-Treewidth Graphs}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{41:1--41:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.41},
  URN =		{urn:nbn:de:0030-drops-228675},
  doi =		{10.4230/LIPIcs.STACS.2025.41},
  annote =	{Keywords: Parameterized Complexity, Treewidth, Generalized Dominating Set, Strong Exponential Time Hypothesis}
}
Document
Local Enumeration: The Not-All-Equal Case

Authors: Mohit Gurumukhani, Ramamohan Paturi, Michael Saks, and Navid Talebanfard


Abstract
Gurumukhani et al. (CCC'24) proposed the local enumeration problem Enum(k, t) as an approach to break the Super Strong Exponential Time Hypothesis (SSETH): for a natural number k and a parameter t, given an n-variate k-CNF with no satisfying assignment of Hamming weight less than t(n), enumerate all satisfying assignments of Hamming weight exactly t(n). Furthermore, they gave a randomized algorithm for Enum(k, t) and employed new ideas to analyze the first non-trivial case, namely k = 3. In particular, they solved Enum(3, n/2) in expected 1.598ⁿ time. A simple construction shows a lower bound of 6^{n/4} ≈ 1.565ⁿ. In this paper, we show that to break SSETH, it is sufficient to consider a simpler local enumeration problem NAE-Enum(k, t): for a natural number k and a parameter t, given an n-variate k-CNF with no satisfying assignment of Hamming weight less than t(n), enumerate all Not-All-Equal (NAE) solutions of Hamming weight exactly t(n), i.e., those that satisfy and falsify some literal in every clause. We refine the algorithm of Gurumukhani et al. and show that it optimally solves NAE-Enum(3, n/2), namely, in expected time poly(n) ⋅ 6^{n/4}.
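The NAE condition can be checked directly from its definition. The brute-force sketch below (our illustration; `nae_enum` is a hypothetical helper, not the authors' poly(n) ⋅ 6^{n/4}-time algorithm) enumerates NAE solutions of a fixed Hamming weight.

```python
from itertools import combinations

def is_nae_solution(cnf, assignment):
    """True iff `assignment` satisfies and falsifies a literal in every clause.

    cnf: list of clauses; a clause is a list of non-zero ints, where
    literal v means variable v is true and -v means variable v is false.
    assignment: set of variables assigned true.
    """
    for clause in cnf:
        vals = [(lit > 0) == (abs(lit) in assignment) for lit in clause]
        if not (any(vals) and not all(vals)):
            return False
    return True

def nae_enum(cnf, n, t):
    """All NAE solutions of Hamming weight exactly t, by exhaustive search."""
    return [set(c) for c in combinations(range(1, n + 1), t)
            if is_nae_solution(cnf, set(c))]
```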

Cite as

Mohit Gurumukhani, Ramamohan Paturi, Michael Saks, and Navid Talebanfard. Local Enumeration: The Not-All-Equal Case. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 42:1-42:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gurumukhani_et_al:LIPIcs.STACS.2025.42,
  author =	{Gurumukhani, Mohit and Paturi, Ramamohan and Saks, Michael and Talebanfard, Navid},
  title =	{{Local Enumeration: The Not-All-Equal Case}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{42:1--42:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.42},
  URN =		{urn:nbn:de:0030-drops-228680},
  doi =		{10.4230/LIPIcs.STACS.2025.42},
  annote =	{Keywords: Depth 3 circuits, k-CNF satisfiability, Circuit lower bounds, Majority function}
}
Document
Approximating Densest Subgraph in Geometric Intersection Graphs

Authors: Sariel Har-Peled and Saladi Rahul


Abstract
For an undirected graph 𝖦 = (𝖵, 𝖤) with n vertices and m edges, the densest subgraph problem is to compute a subset S ⊆ 𝖵 which maximizes the ratio |𝖤_S|/|S|, where 𝖤_S ⊆ 𝖤 is the set of all edges of 𝖦 with both endpoints in S. The densest subgraph problem is a well-studied problem in computer science. Existing exact and approximation algorithms for computing the densest subgraph require Ω(m) time. We present near-linear time (in n) approximation algorithms for the densest subgraph problem on implicit geometric intersection graphs, where the vertices are explicitly given but the edges are not. As a concrete example, we consider n disks in the plane with arbitrary radii and present two different approximation algorithms. As a by-product, we show a reduction from (shallow) range-reporting to approximate counting/sampling which seems to be new and is useful for other problems such as independent query sampling.
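The objective |𝖤_S|/|S| is easy to state in code. This exhaustive sketch (exponential time, purely illustrative, in contrast to the paper's near-linear algorithms; the function names are ours) makes the definition concrete.

```python
from itertools import combinations

def density(edges, subset):
    """|E_S| / |S| for S = subset, counting edges with both endpoints in S."""
    s = set(subset)
    if not s:
        return 0.0
    return sum(1 for u, v in edges if u in s and v in s) / len(s)

def densest_subgraph(vertices, edges):
    """Exhaustive search for a subset maximising |E_S|/|S|."""
    best, best_d = set(), 0.0
    for r in range(1, len(vertices) + 1):
        for cand in combinations(vertices, r):
            d = density(edges, cand)
            if d > best_d:
                best, best_d = set(cand), d
    return best, best_d
```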

Cite as

Sariel Har-Peled and Saladi Rahul. Approximating Densest Subgraph in Geometric Intersection Graphs. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 43:1-43:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{harpeled_et_al:LIPIcs.STACS.2025.43,
  author =	{Har-Peled, Sariel and Rahul, Saladi},
  title =	{{Approximating Densest Subgraph in Geometric Intersection Graphs}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{43:1--43:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.43},
  URN =		{urn:nbn:de:0030-drops-228697},
  doi =		{10.4230/LIPIcs.STACS.2025.43},
  annote =	{Keywords: Geometric intersection graphs, Densest subgraph, Range searching, Approximation algorithms}
}
Document
Independence and Domination on Bounded-Treewidth Graphs: Integer, Rational, and Irrational Distances

Authors: Tim A. Hartmann and Dániel Marx


Abstract
The distance-d variants of the Independent Set and Dominating Set problems have been extensively studied from different algorithmic viewpoints. In particular, the complexity of these problems is well understood on bounded-treewidth graphs [Katsikarelis, Lampis, and Paschos, Discret. Appl. Math 2022] [Borradaile and Le, IPEC 2016]: given a tree decomposition of width t, the two problems can be solved in time d^t ⋅ n^O(1) and (2d+1)^t ⋅ n^O(1), respectively. Furthermore, assuming the Strong Exponential-Time Hypothesis (SETH), the base constants are best possible in these running times: they cannot be improved to d-ε and 2d+1-ε, respectively, for any ε > 0. We investigate continuous versions of these problems in a setting introduced by Megiddo and Tamir [SICOMP 1983], where every edge is modeled by a unit-length interval of points. In the δ-Dispersion problem, the task is to find a maximum number of points (possibly inside edges) that are pairwise at distance at least δ from each other. Similarly, in the δ-Covering problem, the task is to find a minimum number of points (possibly inside edges) such that every point of the graph (including those inside edges) is at distance at most δ from the selected point set. We provide a comprehensive understanding of these two problems on bounded-treewidth graphs. 1) Let δ = a/b with a and b coprime. If a ≤ 2, then δ-Dispersion is polynomial-time solvable. For a ≥ 3, given a tree decomposition of width t, the problem can be solved in time (2a)^t ⋅ n^O(1), and, assuming SETH, there is no (2a-ε)^t ⋅ n^O(1)-time algorithm for any ε > 0. 2) Let δ = a/b with a and b coprime. If a = 1, then δ-Covering is polynomial-time solvable. For a ≥ 2, given a tree decomposition of width t, the problem can be solved in time ((2+2(b mod 2))a)^t ⋅ n^O(1), and, assuming SETH, there is no ((2+2(b mod 2))a-ε)^t ⋅ n^O(1)-time algorithm for any ε > 0.
3) For every fixed irrational number δ > 0 satisfying some mild computability condition, both δ-Dispersion and δ-Covering can be solved in time n^O(t) on graphs of treewidth t. We show a very explicitly defined irrational number δ = (4∑_{j=1}^∞ 2^{-2^j})^{-1} ≈ 0.790085 such that δ-Dispersion and δ/2-Covering are W[1]-hard parameterized by the treewidth t of the input graph, and, assuming ETH, cannot be solved in time f(t)⋅n^o(t). As a key step in obtaining these results, we extend earlier results on distance-d versions of Independent Set and Dominating Set: We determine the exact complexity of these problems in the special case when the input graph arises from some graph G' by subdividing every edge exactly b times.
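The explicitly defined irrational threshold δ = (4∑_{j=1}^∞ 2^{-2^j})^{-1} can be evaluated numerically: the series converges doubly exponentially, so a handful of terms already determine the value to full double precision. A small sketch (the function name `delta` is our own):

```python
def delta(terms=8):
    """Numerically evaluate delta = (4 * sum_{j>=1} 2^(-2^j))^(-1).

    Each term is the square of the previous one, so truncating after a
    few terms is far below double-precision resolution.
    """
    s = sum(2.0 ** -(2 ** j) for j in range(1, terms))
    return 1.0 / (4.0 * s)
```

Evaluating `delta()` yields approximately 0.790085, matching the value quoted in the abstract.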

Cite as

Tim A. Hartmann and Dániel Marx. Independence and Domination on Bounded-Treewidth Graphs: Integer, Rational, and Irrational Distances. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 44:1-44:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{hartmann_et_al:LIPIcs.STACS.2025.44,
  author =	{Hartmann, Tim A. and Marx, D\'{a}niel},
  title =	{{Independence and Domination on Bounded-Treewidth Graphs: Integer, Rational, and Irrational Distances}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{44:1--44:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.44},
  URN =		{urn:nbn:de:0030-drops-228700},
  doi =		{10.4230/LIPIcs.STACS.2025.44},
  annote =	{Keywords: Independence, Domination, Irrationals, Treewidth, SETH}
}
Document
Forbidden Patterns in Mixed Linear Layouts

Authors: Deborah Haun, Laura Merker, and Sergey Pupyrev


Abstract
An ordered graph is a graph with a total order over its vertices. A linear layout of an ordered graph is a partition of the edges into sets of either non-crossing edges, called stacks, or non-nesting edges, called queues. The stack (queue) number of an ordered graph is the minimum number of required stacks (queues). Mixed linear layouts combine these layouts by allowing each set of edges to form either a stack or a queue. The minimum number of stacks plus queues is called the mixed page number. It is well known that ordered graphs with small stack number are characterized, up to a function, by the absence of large twists (that is, pairwise crossing edges). Similarly, ordered graphs with small queue number are characterized by the absence of large rainbows (that is, pairwise nesting edges). However, no such characterization via forbidden patterns is known for mixed linear layouts. We address this gap by introducing patterns similar to twists and rainbows, which we call thick patterns; such patterns allow a characterization, again up to a function, of mixed linear layouts of bounded-degree graphs. That is, we show that a family of ordered graphs with bounded maximum degree has bounded mixed page number if and only if the size of the largest thick pattern is bounded. In addition, we investigate an exact characterization of ordered graphs whose mixed page number equals a fixed integer k via a finite set of forbidden patterns. We show that for k = 2, there is no such characterization, which supports the nature of our first result.
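The crossing/nesting dichotomy underlying stacks and queues is simple to state. The small sketch below (our illustration; `relation` is a hypothetical helper, not from the paper) classifies a pair of edges of an ordered graph.

```python
def relation(e, f):
    """Classify two edges of an ordered graph as "cross", "nest" or "disjoint".

    An edge is a pair of positions in the vertex order. A stack may contain
    no two crossing edges; a queue may contain no two nesting edges.
    Edges sharing an endpoint are treated as neither crossing nor nesting.
    """
    (a, b), (c, d) = sorted(e), sorted(f)
    if (a, b) > (c, d):              # normalise: first edge opens no later
        (a, b), (c, d) = (c, d), (a, b)
    if a < c < b < d:
        return "cross"
    if a < c and d < b:
        return "nest"
    return "disjoint"
```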

Cite as

Deborah Haun, Laura Merker, and Sergey Pupyrev. Forbidden Patterns in Mixed Linear Layouts. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 45:1-45:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{haun_et_al:LIPIcs.STACS.2025.45,
  author =	{Haun, Deborah and Merker, Laura and Pupyrev, Sergey},
  title =	{{Forbidden Patterns in Mixed Linear Layouts}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{45:1--45:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.45},
  URN =		{urn:nbn:de:0030-drops-228717},
  doi =		{10.4230/LIPIcs.STACS.2025.45},
  annote =	{Keywords: Ordered Graphs, linear Layout, mixed linear Layout, Stack Layout, Queue Layout}
}
Document
Sampling Unlabeled Chordal Graphs in Expected Polynomial Time

Authors: Úrsula Hébert-Johnson and Daniel Lokshtanov


Abstract
We design an algorithm that generates an n-vertex unlabeled chordal graph uniformly at random in expected polynomial time. Along the way, we develop the following two results: (1) an FPT algorithm for counting and sampling labeled chordal graphs with a given automorphism π, parameterized by the number of moved points of π, and (2) a proof that the probability that a random n-vertex labeled chordal graph has a given automorphism π ∈ S_n is at most 1/2^{c max{μ²,n}}, where μ is the number of moved points of π and c is a constant. Our algorithm for sampling unlabeled chordal graphs calls the aforementioned FPT algorithm as a black box with potentially large values of the parameter μ, but the probability of calling this algorithm with a large value of μ is exponentially small.

Cite as

Úrsula Hébert-Johnson and Daniel Lokshtanov. Sampling Unlabeled Chordal Graphs in Expected Polynomial Time. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 46:1-46:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{hebertjohnson_et_al:LIPIcs.STACS.2025.46,
  author =	{H\'{e}bert-Johnson, \'{U}rsula and Lokshtanov, Daniel},
  title =	{{Sampling Unlabeled Chordal Graphs in Expected Polynomial Time}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{46:1--46:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.46},
  URN =		{urn:nbn:de:0030-drops-228726},
  doi =		{10.4230/LIPIcs.STACS.2025.46},
  annote =	{Keywords: Chordal graphs, graph sampling, graph counting, unlabeled graphs}
}
Document
Minimizing the Number of Tardy Jobs with Uniform Processing Times on Parallel Machines

Authors: Klaus Heeger and Hendrik Molter


Abstract
In this work, we study the computational (parameterized) complexity of P∣ r_j, p_j = p ∣∑ w_j U_j. Here, we are given m identical parallel machines and n jobs with equal processing time, each characterized by a release date, a due date, and a weight. The task is to find a feasible schedule, that is, an assignment of the jobs to starting times on machines, such that no job starts before its release date and no machine processes several jobs at the same time, that minimizes the weighted number of tardy jobs. A job is considered tardy if it finishes after its due date. Our main contribution is showing that P∣r_j, p_j = p∣∑ U_j (the unweighted version of the problem) is NP-hard and W[2]-hard when parameterized by the number of machines. The former resolves an open problem in Note 2.1.19 by Kravchenko and Werner [Journal of Scheduling, 2011] and Open Problem 2 by Sgall [ESA, 2012], and the latter resolves Open Problem 7 by Mnich and van Bevern [Computers & Operations Research, 2018]. Furthermore, our result shows that the known XP-algorithm by Baptiste et al. [4OR, 2004] for P∣r_j, p_j = p∣∑ w_j U_j parameterized by the number of machines is optimal from a classification standpoint. On the algorithmic side, we provide alternative running time bounds for the above-mentioned known XP-algorithm. Our analysis shows that P∣r_j, p_j = p∣∑ w_j U_j is contained in XP when parameterized by the processing time, and that it is contained in FPT when parameterized by the combination of the number of machines and the processing time. Finally, we give an FPT-algorithm for P∣r_j, p_j = p∣∑ w_j U_j parameterized by the number of release dates or the number of due dates. With this work, we lay out the foundation for a systematic study of the parameterized complexity of P∣r_j, p_j = p∣∑ w_j U_j.
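The feasibility conditions and the objective ∑ w_j U_j can be checked mechanically. The sketch below is an illustrative verifier we wrote for this listing (not an algorithm from the paper): it validates a given schedule and counts the weighted number of tardy jobs.

```python
def weighted_tardy(jobs, p, schedule):
    """Check a schedule for P | r_j, p_j = p | sum w_j U_j and count tardiness.

    jobs: dict job -> (release, due, weight); every job has processing time p.
    schedule: dict job -> (machine, start_time).
    Returns the weighted number of tardy jobs, or None if the schedule is
    infeasible (a job starts before release, or two jobs overlap on a machine).
    """
    busy = {}
    for job, (machine, start) in schedule.items():
        release, _due, _weight = jobs[job]
        if start < release:
            return None                       # starts before its release date
        busy.setdefault(machine, []).append((start, start + p))
    for intervals in busy.values():
        intervals.sort()
        for (_, end1), (start2, _) in zip(intervals, intervals[1:]):
            if start2 < end1:
                return None                   # two jobs overlap on one machine
    return sum(weight for job, (release, due, weight) in jobs.items()
               if schedule[job][1] + p > due)
```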

Cite as

Klaus Heeger and Hendrik Molter. Minimizing the Number of Tardy Jobs with Uniform Processing Times on Parallel Machines. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 47:1-47:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{heeger_et_al:LIPIcs.STACS.2025.47,
  author =	{Heeger, Klaus and Molter, Hendrik},
  title =	{{Minimizing the Number of Tardy Jobs with Uniform Processing Times on Parallel Machines}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{47:1--47:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.47},
  URN =		{urn:nbn:de:0030-drops-228736},
  doi =		{10.4230/LIPIcs.STACS.2025.47},
  annote =	{Keywords: Scheduling, Identical Parallel Machines, Weighted Number of Tardy Jobs, Uniform Processing Times, Release Dates, NP-hard Problems, Parameterized Complexity}
}
Document
Subshifts Defined by Nondeterministic and Alternating Plane-Walking Automata

Authors: Benjamin Hellouin de Menibus and Pacôme Perrotin


Abstract
Plane-walking automata were introduced by Salo & Törmä to recognise languages of two-dimensional infinite words (subshifts), the counterpart of 4-way finite automata for two-dimensional finite words. We extend the model to allow for nondeterminism and alternation of quantifiers. We prove that the recognised subshifts form a strict subclass of sofic subshifts, and that the classes corresponding to existential and universal nondeterminism are incomparable and both larger than the deterministic class. We define a hierarchy of subshifts recognised by plane-walking automata with alternating quantifiers, which we conjecture to be strict.

Cite as

Benjamin Hellouin de Menibus and Pacôme Perrotin. Subshifts Defined by Nondeterministic and Alternating Plane-Walking Automata. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 48:1-48:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{hellouindemenibus_et_al:LIPIcs.STACS.2025.48,
  author =	{Hellouin de Menibus, Benjamin and Perrotin, Pac\^{o}me},
  title =	{{Subshifts Defined by Nondeterministic and Alternating Plane-Walking Automata}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{48:1--48:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.48},
  URN =		{urn:nbn:de:0030-drops-228540},
  doi =		{10.4230/LIPIcs.STACS.2025.48},
  annote =	{Keywords: Formal languages, Finite automata, Subshifts, Symbolic dynamics, Tilings}
}
Document
Cycle Counting Under Local Differential Privacy for Degeneracy-Bounded Graphs

Authors: Quentin Hillebrand, Vorapong Suppakitpaisarn, and Tetsuo Shibuya


Abstract
We propose an algorithm for counting the number of cycles under local differential privacy for degeneracy-bounded input graphs. Numerous studies have focused on counting the number of triangles under the privacy notion, demonstrating that the expected 𝓁₂-error of these algorithms is Ω(n^{1.5}), where n is the number of nodes in the graph. When parameterized by the number of cycles of length four (C₄), the best existing triangle counting algorithm has an error of O(n^{1.5} + √C₄) = O(n²). In this paper, we introduce an algorithm with an expected 𝓁₂-error of O(δ^{1.5} n^{0.5} + δ^{0.5} d_{max}^{0.5} n^{0.5}), where δ is the degeneracy and d_{max} is the maximum degree of the graph. For degeneracy-bounded graphs (δ ∈ Θ(1)) commonly found in practical social networks, our algorithm achieves an expected 𝓁₂-error of O(d_{max}^{0.5} n^{0.5}) = O(n). Our algorithm’s core idea is a precise count of triangles following a preprocessing step that approximately sorts the degrees of all nodes. This approach can be extended to approximate the number of cycles of length k, maintaining a similar 𝓁₂-error, namely O(δ^{(k-2)/2} d_{max}^{0.5} n^{(k-2)/2} + δ^{k/2} n^{(k-2)/2}), or O(d_{max}^{0.5} n^{(k-2)/2}) = O(n^{(k-1)/2}) for degeneracy-bounded graphs.
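The degree-ordering idea has a simple exact, non-private baseline: orient every edge towards the endpoint of higher degree, so each triangle is counted exactly once. The sketch below shows only this baseline; the paper's local-differential-privacy mechanism is deliberately not modelled here.

```python
def count_triangles(adj):
    """Count triangles via a degree ordering: orient each edge towards the
    endpoint of higher (degree, name) rank, so every triangle is counted once.

    adj: dict vertex -> set of neighbours (undirected, no self-loops).
    Exact and non-private; the paper's algorithm adds privacy noise on top
    of an approximate version of such a degree ordering.
    """
    rank = {v: (len(adj[v]), v) for v in adj}
    out = {v: {u for u in adj[v] if rank[u] > rank[v]} for v in adj}
    return sum(1 for v in adj for u in out[v] for w in out[u] if w in out[v])
```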

Cite as

Quentin Hillebrand, Vorapong Suppakitpaisarn, and Tetsuo Shibuya. Cycle Counting Under Local Differential Privacy for Degeneracy-Bounded Graphs. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 49:1-49:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{hillebrand_et_al:LIPIcs.STACS.2025.49,
  author =	{Hillebrand, Quentin and Suppakitpaisarn, Vorapong and Shibuya, Tetsuo},
  title =	{{Cycle Counting Under Local Differential Privacy for Degeneracy-Bounded Graphs}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{49:1--49:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.49},
  URN =		{urn:nbn:de:0030-drops-228748},
  doi =		{10.4230/LIPIcs.STACS.2025.49},
  annote =	{Keywords: Differential privacy, triangle counting, degeneracy, arboricity, graph theory, parameterized accuracy}
}
Document
Designing Exploration Contracts

Authors: Martin Hoefer, Conrad Schecker, and Kevin Schewior


Abstract
We study a natural application of contract design in the context of sequential exploration problems. In our principal-agent setting, a search task is delegated to an agent. The agent performs a sequential exploration of n boxes, suffers the exploration cost for each inspected box, and selects the content (called the prize) of one inspected box as outcome. Agent and principal obtain an individual value based on the selected prize. To influence the search, the principal designs, a priori, a contract with a non-negative payment to the agent for each potential prize. The goal of the principal is to maximize her expected reward, i.e., value minus payment. Interestingly, this natural contract scenario shares close relations with the Pandora’s Box problem. We show how to compute optimal contracts for the principal in several scenarios. A popular and important subclass is that of linear contracts, and we show how to compute optimal linear contracts in polynomial time. For general contracts, we obtain optimal contracts under the standard assumption that the agent suffers cost but obtains value only from the transfers by the principal. More generally, for general contracts with non-zero agent values for outcomes, we show how to compute an optimal contract in two cases: (1) when each box has only one prize with non-zero value for principal and agent, (2) for i.i.d. boxes with a single prize with positive value for the principal.

Cite as

Martin Hoefer, Conrad Schecker, and Kevin Schewior. Designing Exploration Contracts. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 50:1-50:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{hoefer_et_al:LIPIcs.STACS.2025.50,
  author =	{Hoefer, Martin and Schecker, Conrad and Schewior, Kevin},
  title =	{{Designing Exploration Contracts}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{50:1--50:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.50},
  URN =		{urn:nbn:de:0030-drops-228755},
  doi =		{10.4230/LIPIcs.STACS.2025.50},
  annote =	{Keywords: Exploration, Contract Design, Pandora’s Box Problem}
}
Document
Protecting the Connectivity of a Graph Under Non-Uniform Edge Failures

Authors: Felix Hommelsheim, Zhenwei Liu, Nicole Megow, and Guochuan Zhang


Abstract
We study the problem of guaranteeing the connectivity of a given graph by protecting or strengthening edges. Herein, a protected edge is assumed to be robust and will not fail, which features a non-uniform failure model. We introduce the (p,q)-Steiner-Connectivity Preservation problem where we protect a minimum-cost set of edges such that the underlying graph maintains p-edge-connectivity between given terminal pairs against edge failures, assuming at most q unprotected edges can fail. We design polynomial-time exact algorithms for the cases where p and q are small and approximation algorithms for general values of p and q. Additionally, we show that when both p and q are part of the input, even deciding whether a given solution is feasible is NP-complete. This hardness also carries over to Flexible Network Design, a research direction that has gained significant attention. In particular, previous work focuses on problem settings where either p or q is constant, for which our new hardness result now provides justification.

Cite as

Felix Hommelsheim, Zhenwei Liu, Nicole Megow, and Guochuan Zhang. Protecting the Connectivity of a Graph Under Non-Uniform Edge Failures. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 51:1-51:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{hommelsheim_et_al:LIPIcs.STACS.2025.51,
  author =	{Hommelsheim, Felix and Liu, Zhenwei and Megow, Nicole and Zhang, Guochuan},
  title =	{{Protecting the Connectivity of a Graph Under Non-Uniform Edge Failures}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{51:1--51:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.51},
  URN =		{urn:nbn:de:0030-drops-228761},
  doi =		{10.4230/LIPIcs.STACS.2025.51},
  annote =	{Keywords: Network Design, Edge Failures, Graph Connectivity, Approximation Algorithms}
}
Document
Polynomial Kernel and Incompressibility for Prison-Free Edge Deletion and Completion

Authors: Séhane Bel Houari-Durand, Eduard Eiben, and Magnus Wahlström


Abstract
Given a graph G and an integer k, the H-free Edge Deletion problem asks whether there exists a set of at most k edges of G whose deletion makes G free of induced copies of H. Significant attention has been given to the kernelizability aspects of this problem, i.e., for which graphs H does the problem admit an "efficient preprocessing" procedure, known as a polynomial kernelization, where an instance I of the problem with parameter k is reduced to an equivalent instance I' whose size and parameter value are bounded polynomially in k? Although such routines are known for many graphs H where the class of H-free graphs has significantly restricted structure, it is also clear that for most graphs H the problem is incompressible, i.e., admits no polynomial kernelization parameterized by k unless the polynomial hierarchy collapses. These results led Marx and Sandeep to the conjecture that H-free Edge Deletion is incompressible for any graph H with at least five vertices, unless H is complete or has at most one edge (JCSS 2022). This conjecture was reduced to the incompressibility of H-free Edge Deletion for a finite list of graphs H. We consider one of these graphs, which we dub the prison, and show that Prison-Free Edge Deletion has a polynomial kernel, refuting the conjecture. On the other hand, the same problem for the complement of the prison is incompressible.

Cite as

Séhane Bel Houari-Durand, Eduard Eiben, and Magnus Wahlström. Polynomial Kernel and Incompressibility for Prison-Free Edge Deletion and Completion. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 52:1-52:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{houaridurand_et_al:LIPIcs.STACS.2025.52,
  author =	{Houari-Durand, S\'{e}hane Bel and Eiben, Eduard and Wahlstr\"{o}m, Magnus},
  title =	{{Polynomial Kernel and Incompressibility for Prison-Free Edge Deletion and Completion}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{52:1--52:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.52},
  URN =		{urn:nbn:de:0030-drops-228770},
  doi =		{10.4230/LIPIcs.STACS.2025.52},
  annote =	{Keywords: Graph modification problems, parameterized complexity, polynomial kernelization}
}
On Read-k Projections of the Determinant

Authors: Pavel Hrubeš and Pushkar S. Joglekar


Abstract
We consider read-k determinantal representations of polynomials and prove some non-expressibility results. A square matrix M whose entries are variables or field elements is called read-k if every variable occurs at most k times in M. It is called a determinantal representation of a polynomial f if f = det(M). We show that the n × n permanent polynomial does not have a read-k determinantal representation for k ∈ o(√n/log n) (over a field of characteristic different from two). We also obtain a quantitative strengthening of this result by giving a similar non-expressibility result for k ∈ o(√n/log n) for an explicit n-variate multilinear polynomial (as opposed to the permanent, which is n²-variate).
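For a toy illustration of the definition (not the paper's construction): f(x,y,z) = xy + z has the read-1 representation det of the 2×2 matrix [[x, -z], [1, y]], since every variable occurs exactly once in the matrix. A quick randomized sanity check of the identity:

```python
import random

def det2(m):
    # determinant of a 2x2 matrix
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def f(x, y, z):
    return x * y + z

# Schwartz-Zippel-style check: the identity det(M) = f holds at many random
# points, hence (with high confidence) as polynomials.
for _ in range(100):
    x, y, z = (random.randint(-10**6, 10**6) for _ in range(3))
    assert det2([[x, -z], [1, y]]) == f(x, y, z)
```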

Cite as

Pavel Hrubeš and Pushkar S. Joglekar. On Read-k Projections of the Determinant. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 53:1-53:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{hrubes_et_al:LIPIcs.STACS.2025.53,
  author =	{Hrube\v{s}, Pavel and Joglekar, Pushkar S.},
  title =	{{On Read-k Projections of the Determinant}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{53:1--53:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.53},
  URN =		{urn:nbn:de:0030-drops-228785},
  doi =		{10.4230/LIPIcs.STACS.2025.53},
  annote =	{Keywords: determinant, permanent, projection of determinant, VNP completeness of permanent}
}
Multidimensional Quantum Walks, Recursion, and Quantum Divide & Conquer

Authors: Stacey Jeffery and Galina Pass


Abstract
We introduce an object called a subspace graph that formalizes the technique of multidimensional quantum walks. Composing subspace graphs allows one to seamlessly combine quantum and classical reasoning, keeping a classical structure in mind, while abstracting quantum parts into subgraphs with simple boundaries as needed. As an example, we show how to combine a switching network with arbitrary quantum subroutines, to compute a composed function. As another application, we give a time-efficient implementation of quantum Divide & Conquer when the sub-problems are combined via a Boolean formula. We use this to quadratically speed up Savitch’s algorithm for directed st-connectivity.

Cite as

Stacey Jeffery and Galina Pass. Multidimensional Quantum Walks, Recursion, and Quantum Divide & Conquer. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 54:1-54:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{jeffery_et_al:LIPIcs.STACS.2025.54,
  author =	{Jeffery, Stacey and Pass, Galina},
  title =	{{Multidimensional Quantum Walks, Recursion, and Quantum Divide \& Conquer}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{54:1--54:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.54},
  URN =		{urn:nbn:de:0030-drops-228791},
  doi =		{10.4230/LIPIcs.STACS.2025.54},
  annote =	{Keywords: Quantum Divide \& Conquer, Time-Efficient, Subspace Graphs, Quantum Walks, Switching Networks, Directed st-Connectivity}
}
Modal Separation of Fixpoint Formulae

Authors: Jean Christoph Jung and Jędrzej Kołodziejski


Abstract
Modal separability for modal fixpoint formulae is the problem of deciding, for two given modal fixpoint formulae φ,φ', whether there is a modal formula ψ that separates them, in the sense that φ ⊧ ψ and ψ ⊧ ¬φ'. We study modal separability and its special case, modal definability, over various classes of models, such as arbitrary models, finite models, trees, and models of bounded outdegree. Our main results are that modal separability is PSpace-complete over words, that is, models of outdegree ≤ 1, ExpTime-complete over unrestricted and over binary models, and 2-ExpTime-complete over models of outdegree bounded by some d ≥ 3. Interestingly, this latter case behaves fundamentally differently from the other cases, also in that modal logic does not enjoy the Craig interpolation property over this class. Motivated by this, we also study the induced interpolant existence problem as a special case of modal separability, and show that it is coNExpTime-complete and thus harder than validity in the logic. Besides deciding separability, we also investigate the problem of efficiently constructing separators. Finally, we consider, in a case study, the extension of modal fixpoint formulae by graded modalities and investigate separability by modal formulae and graded modal formulae.

Cite as

Jean Christoph Jung and Jędrzej Kołodziejski. Modal Separation of Fixpoint Formulae. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 55:1-55:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{jung_et_al:LIPIcs.STACS.2025.55,
  author =	{Jung, Jean Christoph and Ko{\l}odziejski, J\k{e}drzej},
  title =	{{Modal Separation of Fixpoint Formulae}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{55:1--55:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.55},
  URN =		{urn:nbn:de:0030-drops-228804},
  doi =		{10.4230/LIPIcs.STACS.2025.55},
  annote =	{Keywords: Modal Logic, Fixpoint Logic, Separability, Interpolation}
}
Transforming Stacks into Queues: Mixed and Separated Layouts of Graphs

Authors: Julia Katheder, Michael Kaufmann, Sergey Pupyrev, and Torsten Ueckerdt


Abstract
Some of the most important open problems for linear layouts of graphs ask for the relation between a graph’s queue number and its stack number or mixed number. In such a layout, we seek a vertex order and edge partition of G into parts with pairwise non-crossing edges (a stack) or with pairwise non-nesting edges (a queue). Allowing only stacks, only queues, or both, the minimum number of required parts is the graph’s stack number sn(G), queue number qn(G), and mixed number mn(G), respectively. Already in 1992, Heath and Rosenberg asked whether qn(G) is bounded in terms of sn(G), that is, whether stacks "can be transformed into" queues. This is equivalent to bipartite 3-stack graphs having bounded queue number (Dujmović and Wood, 2005). Recently, Alam et al. asked whether qn(G) is bounded in terms of mn(G), which we show to also be equivalent to the previous questions. We approach the problem by considering separated linear layouts of bipartite graphs. In this natural setting, all vertices of one part must precede all vertices of the other part. Separated stack and queue numbers coincide, and for fixed vertex orders, graphs with bounded separated stack/queue number can be characterized and efficiently recognized, whereas the separated mixed layouts are more challenging. In this work, we thoroughly investigate the relationship between separated and non-separated, mixed and pure linear layouts.
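For a fixed vertex order, the stack and queue conditions are simple pairwise tests on edges; a minimal sketch of the definitions, with edges given as pairs of positions in the order:

```python
def _normalize(e, f):
    # sort endpoints within each edge, then order the two edges
    (a, b), (c, d) = sorted(e), sorted(f)
    return ((a, b), (c, d)) if (a, b) <= (c, d) else ((c, d), (a, b))

def crosses(e, f):
    ((a, b), (c, d)) = _normalize(e, f)
    return a < c < b < d   # the two edges interleave

def nests(e, f):
    ((a, b), (c, d)) = _normalize(e, f)
    return a < c < d < b   # one edge sits strictly inside the other

def is_stack(edges):
    """A stack (page) is a set of pairwise non-crossing edges."""
    return not any(crosses(e, f) for i, e in enumerate(edges) for f in edges[i + 1:])

def is_queue(edges):
    """A queue is a set of pairwise non-nesting edges."""
    return not any(nests(e, f) for i, e in enumerate(edges) for f in edges[i + 1:])
```

Two interleaving edges such as (0,2) and (1,3) form a queue but not a stack; two nested edges such as (0,3) and (1,2) form a stack but not a queue.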

Cite as

Julia Katheder, Michael Kaufmann, Sergey Pupyrev, and Torsten Ueckerdt. Transforming Stacks into Queues: Mixed and Separated Layouts of Graphs. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 56:1-56:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{katheder_et_al:LIPIcs.STACS.2025.56,
  author =	{Katheder, Julia and Kaufmann, Michael and Pupyrev, Sergey and Ueckerdt, Torsten},
  title =	{{Transforming Stacks into Queues: Mixed and Separated Layouts of Graphs}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{56:1--56:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.56},
  URN =		{urn:nbn:de:0030-drops-228819},
  doi =		{10.4230/LIPIcs.STACS.2025.56},
  annote =	{Keywords: Separated linear Layouts, Stack Number, Queue Number, mixed Number, bipartite Graphs}
}
Approximate Minimum Tree Cover in All Symmetric Monotone Norms Simultaneously

Authors: Matthias Kaul, Kelin Luo, Matthias Mnich, and Heiko Röglin


Abstract
We study the problem of partitioning a set of n objects in a metric space into k clusters V₁,...,V_k. The quality of the clustering is measured by considering the vector of cluster costs and then minimizing some monotone symmetric norm of that vector (in particular, this includes the 𝓁_p-norms). For the cost of a cluster V_i we take the weight of a minimum-weight spanning tree on the objects in V_i, which may serve as a proxy for the cost of traversing all objects in the cluster, for example in the context of Multirobot Coverage as studied by Zheng, Koenig, Kempe, Jain (IROS 2005), but also as a shape-invariant measure of cluster density similar to Single-Linkage Clustering. This problem has been studied by Even, Garg, Könemann, Ravi, Sinha (Oper. Res. Lett., 2004) for the setting of minimizing the weight of the largest cluster (i.e., using 𝓁_∞) as Min-Max Tree Cover, for which they gave a constant-factor approximation algorithm. We provide a careful adaptation of their algorithm to compute solutions which are approximately optimal with respect to all monotone symmetric norms simultaneously, and show how to find them in polynomial time. In fact, our algorithm is purely combinatorial and can process metric spaces with 10,000 points in less than a second. As an extension, we also consider the case where instead of a target number of clusters we are provided with a set of depots in the space such that every cluster should contain at least one such depot. One can consider these as the fixed starting points of some agents that will traverse all points of a cluster. For this setting, too, we give a polynomial-time algorithm computing a constant-factor approximation with respect to all monotone symmetric norms simultaneously. To show that the algorithmic results are tight up to the precise constant of approximation attainable, we also prove that such clustering problems are already APX-hard when considering only a single 𝓁_p-norm for the objective.
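The cost model can be sketched directly (an illustration of the objective only, not the paper's approximation algorithm): compute each cluster's minimum spanning tree weight, here with a lazy variant of Prim's algorithm, then apply a monotone symmetric norm to the resulting cost vector.

```python
import heapq

def mst_weight(points, dist):
    """Weight of a minimum spanning tree of `points` under metric `dist`
    (lazy Prim's algorithm: stale heap entries are skipped on pop)."""
    points = list(points)
    if len(points) <= 1:
        return 0.0
    in_tree = {points[0]}
    heap = [(dist(points[0], p), p) for p in points[1:]]
    heapq.heapify(heap)
    total = 0.0
    while len(in_tree) < len(points):
        d, p = heapq.heappop(heap)
        if p in in_tree:
            continue
        in_tree.add(p)
        total += d
        for q in points:
            if q not in in_tree:
                heapq.heappush(heap, (dist(p, q), q))
    return total

def lp_norm(vec, p):
    """l_p norm of the cost vector; p = float('inf') gives Min-Max Tree Cover."""
    return max(vec) if p == float("inf") else sum(x ** p for x in vec) ** (1 / p)

def clustering_cost(clusters, dist, p):
    return lp_norm([mst_weight(c, dist) for c in clusters], p)
```

For points on a line with clusters {0,1,2} and {10,12}, the cost vector is (2, 2): the 𝓁_∞ cost is 2 and the 𝓁₁ cost is 4.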

Cite as

Matthias Kaul, Kelin Luo, Matthias Mnich, and Heiko Röglin. Approximate Minimum Tree Cover in All Symmetric Monotone Norms Simultaneously. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 57:1-57:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{kaul_et_al:LIPIcs.STACS.2025.57,
  author =	{Kaul, Matthias and Luo, Kelin and Mnich, Matthias and R\"{o}glin, Heiko},
  title =	{{Approximate Minimum Tree Cover in All Symmetric Monotone Norms Simultaneously}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{57:1--57:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.57},
  URN =		{urn:nbn:de:0030-drops-228821},
  doi =		{10.4230/LIPIcs.STACS.2025.57},
  annote =	{Keywords: Clustering, spanning trees, all-norm approximation}
}
Violating Constant Degree Hypothesis Requires Breaking Symmetry

Authors: Piotr Kawałek and Armin Weiß


Abstract
The Constant Degree Hypothesis was introduced by Barrington et al. [David A. Mix Barrington et al., 1990] to study some extensions of q-groups by nilpotent groups and the power of these groups in a computation model called NuDFA (non-uniform DFA). In its simplest formulation, it establishes exponential lower bounds for MOD_q∘MOD_m∘AND_d circuits computing AND of unbounded arity n (for constant integers d,m and a prime q). While it has been proved in some special cases (including d = 1), it has remained wide open in its general form for over 30 years. In this paper we prove that the hypothesis holds when we restrict our attention to symmetric circuits with m being a prime. While we build upon techniques by Grolmusz and Tardos [Vince Grolmusz and Gábor Tardos, 2000], we have to prove a new symmetric version of their Degree Decreasing Lemma and use it to simplify circuits in a symmetry-preserving way. Moreover, to establish the result, we perform a careful analysis of automorphism groups of MOD_m∘AND_d subcircuits and study the periodic behaviour of the computed functions. Our methods also yield lower bounds when d is treated as a function of n. Finally, we present a construction of symmetric MOD_q∘MOD_m∘AND_d circuits that almost matches our lower bound and conclude that a symmetric function f can be computed by symmetric MOD_q∘MOD_p∘AND_d circuits of quasipolynomial size if and only if f has periods of polylogarithmic length of the form p^k q^𝓁.

Cite as

Piotr Kawałek and Armin Weiß. Violating Constant Degree Hypothesis Requires Breaking Symmetry. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 58:1-58:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{kawalek_et_al:LIPIcs.STACS.2025.58,
  author =	{Kawa{\l}ek, Piotr and Wei{\ss}, Armin},
  title =	{{Violating Constant Degree Hypothesis Requires Breaking Symmetry}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{58:1--58:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.58},
  URN =		{urn:nbn:de:0030-drops-228837},
  doi =		{10.4230/LIPIcs.STACS.2025.58},
  annote =	{Keywords: Circuit lower bounds, constant degree hypothesis, permutation groups, CC⁰-circuits}
}
Online Matching with Delays and Size-Based Costs

Authors: Yasushi Kawase and Tomohiro Nakayoshi


Abstract
In this paper, we introduce the problem of Online Matching with Delays and Size-based Costs (OMDSC). The OMDSC problem involves m requests arriving online. At any time, a group can be formed by matching any number of requests that have been received but remain unmatched. The cost associated with each group is determined by the waiting time for each request within the group and a size-dependent cost. The size-dependent cost is specified by a penalty function. Our goal is to partition all the incoming requests into multiple groups while minimizing the total associated cost. This problem is an extension of the TCP acknowledgment problem proposed by Dooly et al. (J. ACM, 2001), generalizing the cost model for sending acknowledgments. This study reveals the competitive ratios for a fundamental case, in which the penalty function takes only values of either 0 or 1. We classify such penalty functions into three distinct cases: (i) a fixed penalty of 1 regardless of the group size, (ii) a penalty of 0 if and only if the group size is a multiple of a specific integer k, and (iii) other situations. The problem in case (i) is equivalent to the TCP acknowledgment problem, for which Dooly et al. proposed a 2-competitive algorithm. For case (ii), we first show that natural algorithms that match all remaining requests are Ω(√k)-competitive. We then propose an O(log k / log log k)-competitive deterministic algorithm by carefully managing the match size and timing, and prove its optimality. For any penalty function in case (iii), we demonstrate the non-existence of a competitive online algorithm. Additionally, we discuss competitive ratios for other typical penalty functions that are not restricted to take values of 0 or 1.

Cite as

Yasushi Kawase and Tomohiro Nakayoshi. Online Matching with Delays and Size-Based Costs. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 59:1-59:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{kawase_et_al:LIPIcs.STACS.2025.59,
  author =	{Kawase, Yasushi and Nakayoshi, Tomohiro},
  title =	{{Online Matching with Delays and Size-Based Costs}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{59:1--59:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.59},
  URN =		{urn:nbn:de:0030-drops-228846},
  doi =		{10.4230/LIPIcs.STACS.2025.59},
  annote =	{Keywords: Online matching, competitive analysis, delayed service}
}
Modular Counting CSP: Reductions and Algorithms

Authors: Amirhossein Kazeminia and Andrei A. Bulatov


Abstract
The Constraint Satisfaction Problem (CSP) is ubiquitous in various areas of mathematics and computer science. Many of its variations have been studied including the Counting CSP, where the goal is to find the number of solutions to a CSP instance. The complexity of finding the exact number of solutions of a CSP is well understood (Bulatov, 2013, and Dyer and Richerby, 2013) and the focus has shifted to other variations of the Counting CSP such as counting the number of solutions modulo an integer. This problem has attracted considerable attention recently. In the case of CSPs based on undirected graphs Bulatov and Kazeminia (STOC 2022) obtained a complexity classification for the problem of counting solutions modulo p for arbitrary prime p. In this paper we report on the progress made towards a similar classification for the general CSP, not necessarily based on graphs. We identify several features that make the general case very different from the graph case such as a stronger form of rigidity and the structure of automorphisms of powers of relational structures. We provide a solution algorithm in the case p = 2 that works under some additional conditions and prove the hardness of the problem under some assumptions about automorphisms of the powers of the relational structure. We also reduce the general CSP to the case that only uses binary relations satisfying strong additional conditions.

Cite as

Amirhossein Kazeminia and Andrei A. Bulatov. Modular Counting CSP: Reductions and Algorithms. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 60:1-60:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{kazeminia_et_al:LIPIcs.STACS.2025.60,
  author =	{Kazeminia, Amirhossein and Bulatov, Andrei A.},
  title =	{{Modular Counting CSP: Reductions and Algorithms}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{60:1--60:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.60},
  URN =		{urn:nbn:de:0030-drops-228853},
  doi =		{10.4230/LIPIcs.STACS.2025.60},
  annote =	{Keywords: Constraint Satisfaction Problem, Modular Counting}
}
Efficiently Computing the Minimum Rank of a Matrix in a Monoid of Zero-One Matrices

Authors: Stefan Kiefer and Andrew Ryzhikov


Abstract
A zero-one matrix is a matrix with entries from {0, 1}. We study monoids containing only such matrices. A finite set of zero-one matrices generating such a monoid can be seen as the matrix representation of an unambiguous finite automaton, an important generalisation of deterministic finite automata which shares many of their good properties. Let 𝒜 be a finite set of n×n zero-one matrices generating a monoid of zero-one matrices, and m be the cardinality of 𝒜. We study the computational complexity of computing the minimum rank of a matrix in the monoid generated by 𝒜. By using linear-algebraic techniques, we show that this problem is in NC and can be solved in 𝒪(mn⁴) time. We also provide a combinatorial algorithm finding a matrix of minimum rank in 𝒪(n^{2 + ω} + mn⁴) time, where 2 ≤ ω ≤ 2.4 is the matrix multiplication exponent. As a byproduct, we show a very weak version of a generalisation of the Černý conjecture: there always exists a straight line program of size 𝒪(n²) describing a product resulting in a matrix of minimum rank. For the special case corresponding to complete DFAs (that is, for the case where all matrices have exactly one 1 in each row), the minimum rank is the size of the smallest image of the set of states under the action of a word. Our combinatorial algorithm finds a matrix of minimum rank in time 𝒪(n³ + mn²) in this case.
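For intuition only, the minimum rank can be found by exhaustively closing the monoid on toy instances (exponential in general; the point of the paper's algorithms is precisely to avoid this). The sketch below uses Boolean matrix products, which for unambiguous automata agree with integer products, and computes rank by Gaussian elimination over the rationals.

```python
from fractions import Fraction

def bool_mul(A, B):
    n = len(A)
    return tuple(tuple(int(any(A[i][k] and B[k][j] for k in range(n)))
                       for j in range(n)) for i in range(n))

def rank(M):
    """Rank over the rationals via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, r = len(M), 0
    for col in range(n):
        piv = next((i for i in range(r, n) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(n):
            if i != r and M[i][col]:
                factor = M[i][col] / M[r][col]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def min_rank_in_monoid(generators):
    """BFS closure of the generated monoid; returns the minimum rank attained."""
    gens = [tuple(tuple(row) for row in g) for g in generators]
    seen, frontier = set(gens), list(gens)
    while frontier:
        new = []
        for A in frontier:
            for B in gens:
                C = bool_mul(A, B)
                if C not in seen:
                    seen.add(C)
                    new.append(C)
        frontier = new
    return min(rank(M) for M in seen)
```

For the two-letter automaton with a resetting letter [[1,0],[1,0]] and a swap [[0,1],[1,0]], the minimum rank is 1; with the swap alone it is 2.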

Cite as

Stefan Kiefer and Andrew Ryzhikov. Efficiently Computing the Minimum Rank of a Matrix in a Monoid of Zero-One Matrices. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 61:1-61:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{kiefer_et_al:LIPIcs.STACS.2025.61,
  author =	{Kiefer, Stefan and Ryzhikov, Andrew},
  title =	{{Efficiently Computing the Minimum Rank of a Matrix in a Monoid of Zero-One Matrices}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{61:1--61:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.61},
  URN =		{urn:nbn:de:0030-drops-228867},
  doi =		{10.4230/LIPIcs.STACS.2025.61},
  annote =	{Keywords: matrix monoids, minimum rank, unambiguous automata}
}
Faster Algorithms on Linear Delta-Matroids

Authors: Tomohiro Koana and Magnus Wahlström


Abstract
We present new algorithms and constructions for linear delta-matroids. Delta-matroids are generalizations of matroids that also capture structures such as matchable vertex sets in graphs and path-packing problems. As with matroids, an important class of delta-matroids is given by linear delta-matroids, which generalize linear matroids and are represented via a "twist" of a skew-symmetric matrix. We observe an alternative representation, termed a contraction representation over a skew-symmetric matrix. This representation is equivalent to the more standard twist representation up to O(n^ω)-time transformations (where n is the dimension of the delta-matroid and ω < 2.372 the matrix multiplication exponent), but it is much more convenient for algorithmic tasks. For instance, the problem of finding a max-weight feasible set now reduces directly to finding a max-weight basis in a linear matroid. Supported by this representation, we provide new algorithms and constructions for linear delta-matroids. In particular, we show that the union and delta-sum of linear delta-matroids are again linear delta-matroids, and that a representation for the resulting delta-matroid can be constructed in randomized time O(n^ω) (or more precisely, in O(n^ω) field operations, over a field of size at least Ω(n⋅(1/ε)), where ε > 0 is an error parameter). Previously, it was only known that these operations define delta-matroids. We also note that every projected linear delta-matroid can be represented as an elementary projection. This implies that several optimization problems over (projected) linear delta-matroids, including the coverage, delta-coverage, and parity problems, reduce (in their decision versions) to a single O(n^ω)-time matrix rank computation. Using the methods of Harvey, previously applied by Cheung, Lao and Leung for linear matroid parity, we furthermore show how to solve the search versions in the same time. 
This improves on the O(n⁴)-time augmenting path algorithm of Geelen, Iwata and Murota, albeit with randomization. Finally, we consider the maximum-cardinality delta-matroid intersection problem (equivalently, the maximum-cardinality delta-matroid matching problem). Using Storjohann’s algorithms for symbolic determinants, we show that such a solution can be found in O(n^{ω+1}) time. This provides the first (randomized) polynomial-time solution for the problem, thereby solving an open question of Kakimura and Takamatsu.

Cite as

Tomohiro Koana and Magnus Wahlström. Faster Algorithms on Linear Delta-Matroids. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 62:1-62:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{koana_et_al:LIPIcs.STACS.2025.62,
  author =	{Koana, Tomohiro and Wahlstr\"{o}m, Magnus},
  title =	{{Faster Algorithms on Linear Delta-Matroids}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{62:1--62:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.62},
  URN =		{urn:nbn:de:0030-drops-228876},
  doi =		{10.4230/LIPIcs.STACS.2025.62},
  annote =	{Keywords: Delta-matroids, Randomized algorithms}
}
Approximation of Spanning Tree Congestion Using Hereditary Bisection

Authors: Petr Kolman


Abstract
The Spanning Tree Congestion (STC) problem is the following NP-hard problem: given a graph G, construct a spanning tree T of G minimizing its maximum edge congestion where the congestion of an edge e ∈ T is the number of edges uv in G such that the unique path between u and v in T passes through e; the optimal value for a given graph G is denoted STC(G). It is known that every spanning tree is an n/2-approximation for the STC problem. A long-standing problem is to design a better approximation algorithm. Our contribution towards this goal is an 𝒪(Δ⋅log^{3/2}n)-approximation algorithm where Δ is the maximum degree in G and n the number of vertices. For graphs with a maximum degree bounded by a polylog of the number of vertices, this is an exponential improvement over the previous best approximation. Our main tool for the algorithm is a new lower bound on the spanning tree congestion which is of independent interest. Denoting by hb(G) the hereditary bisection of G which is the maximum bisection width over all subgraphs of G, we prove that for every graph G, STC(G) ≥ Ω(hb(G)/Δ).
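The congestion of a given spanning tree is straightforward to evaluate from the definition; a minimal sketch (not the approximation algorithm itself):

```python
from collections import defaultdict

def tree_path_edges(tree_adj, u, v):
    """Edges on the unique u-v path in a tree (DFS from u, then walk parents)."""
    parent, stack = {u: None}, [u]
    while stack:
        x = stack.pop()
        if x == v:
            break
        for y in tree_adj[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    path = []
    while parent[v] is not None:
        path.append(frozenset((v, parent[v])))
        v = parent[v]
    return path

def tree_congestion(graph_edges, tree_edges):
    """Max over tree edges e of the number of graph edges uv whose unique
    tree path passes through e."""
    tree_adj = defaultdict(list)
    for u, v in tree_edges:
        tree_adj[u].append(v)
        tree_adj[v].append(u)
    load = defaultdict(int)
    for u, v in graph_edges:
        for e in tree_path_edges(tree_adj, u, v):
            load[e] += 1
    return max(load.values())
```

On the 4-cycle with the Hamiltonian path 0-1-2-3 as spanning tree, every tree edge has congestion 2, so STC(C₄) = 2.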

Cite as

Petr Kolman. Approximation of Spanning Tree Congestion Using Hereditary Bisection. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 63:1-63:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{kolman:LIPIcs.STACS.2025.63,
  author =	{Kolman, Petr},
  title =	{{Approximation of Spanning Tree Congestion Using Hereditary Bisection}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{63:1--63:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.63},
  URN =		{urn:nbn:de:0030-drops-228880},
  doi =		{10.4230/LIPIcs.STACS.2025.63},
  annote =	{Keywords: Spanning Tree Congestion, Bisection, Expansion, Divide and Conquer}
}
Cluster Editing on Cographs and Related Classes

Authors: Manuel Lafond, Alitzel López Sánchez, and Weidong Luo


Abstract
In the Cluster Editing problem, sometimes known as (unweighted) Correlation Clustering, we must insert and delete a minimum number of edges to achieve a graph in which every connected component is a clique. Owing to its applications in computational biology, social network analysis, machine learning, and others, this problem has been widely studied for decades and is still undergoing active research. There exist several parameterized algorithms for general graphs, but little is known about the complexity of the problem on specific classes of graphs. Among the few important results in this direction, if only deletions are allowed, the problem can be solved in polynomial time on cographs, which are the P₄-free graphs. However, the complexity of the broader editing problem on cographs is still open. We show that even on a very restricted subclass of cographs, the problem is NP-hard, W[1]-hard when parameterized by the number p of desired clusters, and that running time n^o(p/log p) is ruled out under the ETH. This shows that the editing variant is substantially harder than the deletion-only case, and that hardness holds for the many superclasses of cographs (including graphs of clique-width at most 2, perfect graphs, circle graphs, and permutation graphs). On the other hand, we provide an almost tight upper bound of time n^O(p), which is a consequence of a more general n^O(cw⋅p) time algorithm, where cw is the clique-width. Given that forbidding P₄s maintains NP-hardness, we look at {P₄, C₄}-free graphs, also known as trivially perfect graphs, and provide a cubic-time algorithm for this class.
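As a sketch of the objective only (the paper's algorithms are far more refined), Cluster Editing can be evaluated by brute force over set partitions on toy instances: the cost of a partition is the number of edges between blocks (deletions) plus the number of missing edges inside blocks (insertions).

```python
def partitions(items):
    """All set partitions of a list (Bell-number many; toy sizes only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i, block in enumerate(part):
            yield part[:i] + [block + [first]] + part[i + 1:]
        yield part + [[first]]

def cluster_editing_cost(n, edges):
    """Minimum number of edge edits turning the graph into a union of cliques."""
    E = {frozenset(e) for e in edges}
    best = None
    for part in partitions(list(range(n))):
        block_of = {v: i for i, block in enumerate(part) for v in block}
        # an edit is needed exactly when edge presence disagrees with
        # the pair being in the same block
        cost = sum(
            (frozenset((u, v)) in E) != (block_of[u] == block_of[v])
            for u in range(n) for v in range(u + 1, n)
        )
        best = cost if best is None else min(best, cost)
    return best
```

For example, the path P₄ needs a single edit: deleting its middle edge splits it into two K₂'s.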

Cite as

Manuel Lafond, Alitzel López Sánchez, and Weidong Luo. Cluster Editing on Cographs and Related Classes. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 64:1-64:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{lafond_et_al:LIPIcs.STACS.2025.64,
  author =	{Lafond, Manuel and L\'{o}pez S\'{a}nchez, Alitzel and Luo, Weidong},
  title =	{{Cluster Editing on Cographs and Related Classes}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{64:1--64:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.64},
  URN =		{urn:nbn:de:0030-drops-228895},
  doi =		{10.4230/LIPIcs.STACS.2025.64},
  annote =	{Keywords: Cluster editing, cographs, parameterized algorithms, clique-width, trivially perfect graphs}
}
On Average Baby PIH and Its Applications

Authors: Yuwei Liu, Yijia Chen, Shuangle Li, Bingkai Lin, and Xin Zheng


Abstract
The Parameterized Inapproximability Hypothesis (PIH) asserts that no FPT algorithm can decide whether a given 2CSP instance parameterized by the number of variables is satisfiable, or at most a constant fraction of its constraints can be satisfied simultaneously. In a recent breakthrough, Guruswami, Lin, Ren, Sun, and Wu (STOC 2024) proved the PIH under the Exponential Time Hypothesis (ETH). However, it remains a major open problem whether the PIH can be established assuming only W[1]≠FPT. Towards this goal, Guruswami, Ren, and Sandeep (CCC 2024) showed a weaker version of the PIH called the Baby PIH under W[1]≠FPT. In addition, they proposed one more intermediate assumption known as the Average Baby PIH, which might lead to further progress on the PIH. As the main contribution of this paper, we prove that the Average Baby PIH holds assuming W[1]≠FPT. Given a 2CSP instance where the number of its variables is the parameter, the Average Baby PIH states that no FPT algorithm can decide whether (i) it is satisfiable or (ii) any multi-assignment that satisfies all constraints must assign each variable more than r values on average, for any fixed constant r > 1. So there is a gap between (i) and (ii) in the average number of values assigned to a variable, i.e., 1 vs. r. If this gap occurs in each variable instead of on average, we get the original Baby PIH. Central to our paper, therefore, is an FPT self-reduction for 2CSP instances that turns the above per-variable gap into a gap on average. By the known W[1]-hardness of the Baby PIH, this proves that the Average Baby PIH holds under W[1]≠FPT. As applications, we obtain (i) for the first time, the W[1]-hardness of constant-factor approximation of k-ExactCover, and (ii) a tight relationship between running time lower bounds in the Average Baby PIH and approximating the parameterized Nearest Codeword Problem (k-NCP).

Cite as

Yuwei Liu, Yijia Chen, Shuangle Li, Bingkai Lin, and Xin Zheng. On Average Baby PIH and Its Applications. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 65:1-65:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{liu_et_al:LIPIcs.STACS.2025.65,
  author =	{Liu, Yuwei and Chen, Yijia and Li, Shuangle and Lin, Bingkai and Zheng, Xin},
  title =	{{On Average Baby PIH and Its Applications}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{65:1--65:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.65},
  URN =		{urn:nbn:de:0030-drops-228900},
  doi =		{10.4230/LIPIcs.STACS.2025.65},
  annote =	{Keywords: Average Baby PIH, Parameterized Inapproximability, Constraint Satisfaction Problem, Exact Set Cover, W\lbrack1\rbrack-hardness}
}
The Hardness of Decision Tree Complexity

Authors: Bruno Loff and Alexey Milovanov


Abstract
Let f be a Boolean function given as either a truth table or a circuit. How difficult is it to find the decision tree complexity, also known as deterministic query complexity, of f in both cases? We prove that this problem is NC¹-hard and PSPACE-hard, respectively. The second bound is tight, and the first bound is close to being tight.
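As an illustration of the quantity whose hardness is studied (and not of the paper's reductions), the decision tree complexity D(f) can be computed from a truth table by the textbook recursion: D(f) = 0 if f is constant, and otherwise D(f) = min over variables i of 1 + max(D(f restricted to x_i=0), D(f restricted to x_i=1)). A naive sketch, feasible only for very small n:

```python
def decision_tree_complexity(f, n):
    """Deterministic query complexity D(f) of a Boolean function given by
    its truth table: f is a sequence of 2**n bits, where bit i of the
    index x encodes variable i. Naive exponential recursion over subcubes."""
    def solve(inputs):
        if len({f[x] for x in inputs}) == 1:
            return 0                  # f is constant here: no queries needed
        best = n                      # querying every variable always suffices
        for i in range(n):
            zero = frozenset(x for x in inputs if not (x >> i) & 1)
            one = inputs - zero
            if zero and one:          # variable i actually splits the subcube
                best = min(best, 1 + max(solve(zero), solve(one)))
        return best
    return solve(frozenset(range(2 ** n)))
```

For example, D(OR on two variables) = 2: whichever variable is queried first, an answer of 0 forces a second query.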

Cite as

Bruno Loff and Alexey Milovanov. The Hardness of Decision Tree Complexity. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 66:1-66:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{loff_et_al:LIPIcs.STACS.2025.66,
  author =	{Loff, Bruno and Milovanov, Alexey},
  title =	{{The Hardness of Decision Tree Complexity}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{66:1--66:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.66},
  URN =		{urn:nbn:de:0030-drops-228913},
  doi =		{10.4230/LIPIcs.STACS.2025.66},
  annote =	{Keywords: Decision tree, Log-depth circuits}
}
Commutative ℕ-Rational Series of Polynomial Growth

Authors: Aliaume Lopez


Abstract
This paper studies which functions computed by ℤ-weighted automata can be realised by ℕ-weighted automata, under two extra assumptions: commutativity (the order of letters in the input does not matter) and polynomial growth (the output of the function is bounded by a polynomial in the size of the input). We obtain an effective characterization of these functions, and leverage it to decide whether a function computed by a commutative ℕ-weighted automaton of polynomial growth is star-free, a notion borrowed from the theory of regular languages that has been the subject of many investigations in the context of string-to-string functions during the last decade.
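To fix intuition (a generic sketch, not the paper's construction): a weighted automaton maps a word w to init^T · M_{w_1} ··· M_{w_n} · final, where each letter has an associated transition matrix. The example matrices below, chosen here purely for illustration, compute the commutative, linearly growing, ℕ-weighted function "number of a's in w":

```python
def run_weighted_automaton(init, trans, final, word):
    """Evaluate f(w) = init^T * M_{w_1} * ... * M_{w_n} * final over the
    integers; init/final are vectors, trans maps each letter to a matrix."""
    vec = list(init)
    for a in word:
        M = trans[a]
        # vec := vec^T * M (row vector times matrix)
        vec = [sum(vec[i] * M[i][j] for i in range(len(vec)))
               for j in range(len(M[0]))]
    return sum(v * f for v, f in zip(vec, final))

# A 2-state N-weighted automaton counting occurrences of 'a':
COUNT_A = dict(
    init=[1, 0],
    trans={"a": [[1, 1], [0, 1]], "b": [[1, 0], [0, 1]]},
    final=[0, 1],
)
```

The function is commutative (only the number of a's matters, not their positions) and of polynomial (here linear) growth, placing it in the class the paper characterizes.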

Cite as

Aliaume Lopez. Commutative ℕ-Rational Series of Polynomial Growth. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 67:1-67:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{lopez:LIPIcs.STACS.2025.67,
  author =	{Lopez, Aliaume},
  title =	{{Commutative \mathbb{N}-Rational Series of Polynomial Growth}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{67:1--67:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.67},
  URN =		{urn:nbn:de:0030-drops-228924},
  doi =		{10.4230/LIPIcs.STACS.2025.67},
  annote =	{Keywords: Rational series, weighted automata, polyregular function, commutative}
}
Slightly Non-Linear Higher-Order Tree Transducers

Authors: Lê Thành Dũng (Tito) Nguyễn and Gabriele Vanoni


Abstract
We investigate the tree-to-tree functions computed by "affine λ-transducers": tree automata whose memory consists of an affine λ-term instead of a finite state. They can be seen as variations on Gallot, Lemay and Salvati’s Linear High-Order Deterministic Tree Transducers. When the memory is almost purely affine (à la Kanazawa), we show that these machines can be translated to tree-walking transducers (and with a purely affine memory, we get a reversible tree-walking transducer). This leads to a proof of an inexpressivity conjecture of Nguyễn and Pradic on "implicit automata" in an affine λ-calculus. We also prove that a more powerful variant, extended with preprocessing by an MSO relabeling and allowing a limited amount of non-linearity, is equivalent in expressive power to Engelfriet, Hoogeboom and Samwel’s invisible pebble tree transducers. The key technical tool in our proofs is the Interaction Abstract Machine (IAM), an operational avatar of Girard’s geometry of interaction, a semantics of linear logic. We work with ad-hoc specializations to λ-terms of low exponential depth of a tree-generating version of the IAM.

Cite as

Lê Thành Dũng (Tito) Nguyễn and Gabriele Vanoni. Slightly Non-Linear Higher-Order Tree Transducers. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 68:1-68:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{nguyen_et_al:LIPIcs.STACS.2025.68,
  author =	{Nguy\~{ê}n, L\^{e} Th\`{a}nh D\~{u}ng (Tito) and Vanoni, Gabriele},
  title =	{{Slightly Non-Linear Higher-Order Tree Transducers}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{68:1--68:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.68},
  URN =		{urn:nbn:de:0030-drops-228934},
  doi =		{10.4230/LIPIcs.STACS.2025.68},
  annote =	{Keywords: Almost affine lambda-calculus, geometry of interaction, reversibility, tree transducers, tree-walking automata}
}
A Dichotomy Theorem for Ordinal Ranks in MSO

Authors: Damian Niwiński, Paweł Parys, and Michał Skrzypczak


Abstract
We focus on formulae ∃X.φ(Y, X) of monadic second-order logic over the full binary tree, such that the witness X is a well-founded set. The ordinal rank rank(X) < ω₁ of such a set X measures its depth and branching structure. We search for the least upper bound for these ranks, and discover the following dichotomy depending on the formula φ. Let η_φ be the minimal ordinal such that, whenever an instance Y satisfies the formula, there is a witness X with rank(X) ≤ η_φ. Then η_φ is either strictly smaller than ω² or it reaches the maximal possible value ω₁. Moreover, it is decidable which of the cases holds. The result has potential for applications in a variety of ordinal-related problems, in particular it entails a result about the closure ordinal of a fixed-point formula.

Cite as

Damian Niwiński, Paweł Parys, and Michał Skrzypczak. A Dichotomy Theorem for Ordinal Ranks in MSO. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 69:1-69:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{niwinski_et_al:LIPIcs.STACS.2025.69,
  author =	{Niwi\'{n}ski, Damian and Parys, Pawe{\l} and Skrzypczak, Micha{\l}},
  title =	{{A Dichotomy Theorem for Ordinal Ranks in MSO}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{69:1--69:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.69},
  URN =		{urn:nbn:de:0030-drops-228942},
  doi =		{10.4230/LIPIcs.STACS.2025.69},
  annote =	{Keywords: dichotomy result, limit ordinal, countable ordinals, nondeterministic tree automata}
}
Colorful Vertex Recoloring of Bipartite Graphs

Authors: Boaz Patt-Shamir, Adi Rosén, and Seeun William Umboh


Abstract
We consider the problem of vertex recoloring: we are given n vertices with their initial coloring, and edges arrive in an online fashion. The algorithm is required to maintain a valid coloring by means of vertex recoloring, where recoloring a vertex incurs a cost. The problem abstracts a scenario of job placement on machines (possibly in the cloud), where vertices represent jobs, colors represent machines, and edges represent "anti-affinity" (disengagement) constraints. Online coloring in this setting is a hard problem, and only a few cases have been analyzed. One family of instances which is fairly well understood is bipartite graphs, i.e., instances in which two colors suffice to satisfy all constraints. In this case it is known that the competitive ratio of vertex recoloring is Θ(log n). In this paper we propose a generalization of the problem, which allows using additional colors (possibly at a higher cost) to improve overall performance. Concretely, we analyze the simple case of bipartite graphs of bounded largest bond (a bond of a connected graph is an edge-cut that partitions the graph into two connected components). From the upper-bound perspective, we propose two algorithms. One algorithm exhibits a trade-off for the uniform-cost case: given Ω(log β) ≤ c ≤ O(log n) colors, the algorithm guarantees that its cost is at most O((log n)/c) times the optimal offline cost for two colors, where n is the number of vertices and β is the size of the largest bond of the graph. The other algorithm is designed for the case where the additional colors come at a higher cost D > 1: given Δ additional colors, where Δ is the maximum degree in the graph, the algorithm guarantees a competitive ratio of O(log D). From the lower-bound viewpoint, we show that if the cost of the extra colors is D > 1, no algorithm (even randomized) can achieve a competitive ratio of o(log D).
We also show that in the case of general bipartite graphs (i.e., of unbounded bond size), any deterministic online algorithm has competitive ratio Ω(min(D,log n)).

Cite as

Boaz Patt-Shamir, Adi Rosén, and Seeun William Umboh. Colorful Vertex Recoloring of Bipartite Graphs. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 70:1-70:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{pattshamir_et_al:LIPIcs.STACS.2025.70,
  author =	{Patt-Shamir, Boaz and Ros\'{e}n, Adi and Umboh, Seeun William},
  title =	{{Colorful Vertex Recoloring of Bipartite Graphs}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{70:1--70:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.70},
  URN =		{urn:nbn:de:0030-drops-228955},
  doi =		{10.4230/LIPIcs.STACS.2025.70},
  annote =	{Keywords: online algorithms, competitive analysis, resource augmentation, graph coloring}
}
Unfairly Splitting Separable Necklaces

Authors: Patrick Schnider, Linus Stalder, and Simon Weber


Abstract
The Necklace Splitting problem is a classical problem in combinatorics that has been intensively studied both from a combinatorial and a computational point of view. It is well-known that the Necklace Splitting problem reduces to the discrete Ham Sandwich problem. This reduction was crucial in the proof of PPA-completeness of the Ham Sandwich problem. Recently, Borzechowski, Schnider and Weber [ISAAC'23] introduced a variant of Necklace Splitting that similarly reduces to the α-Ham Sandwich problem, which lies in the complexity class UEOPL but is not known to be complete. To make this reduction work, the input necklace is guaranteed to be n-separable. They showed that these necklaces can be fairly split in polynomial time and thus this subproblem cannot be used to prove UEOPL-hardness for α-Ham Sandwich. We consider the more general unfair necklace splitting problem on n-separable necklaces, i.e., the problem of splitting these necklaces such that each thief gets a desired fraction of each type of jewels. This more general problem is the natural necklace-splitting-type version of α-Ham Sandwich, and its complexity status is one of the main open questions posed by Borzechowski, Schnider and Weber. We show that the unfair splitting problem is also polynomial-time solvable, and can thus also not be used to show UEOPL-hardness for α-Ham Sandwich.

Cite as

Patrick Schnider, Linus Stalder, and Simon Weber. Unfairly Splitting Separable Necklaces. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 71:1-71:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{schnider_et_al:LIPIcs.STACS.2025.71,
  author =	{Schnider, Patrick and Stalder, Linus and Weber, Simon},
  title =	{{Unfairly Splitting Separable Necklaces}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{71:1--71:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.71},
  URN =		{urn:nbn:de:0030-drops-228963},
  doi =		{10.4230/LIPIcs.STACS.2025.71},
  annote =	{Keywords: Necklace splitting, n-separability, well-separation, Ham Sandwich, alpha-Ham Sandwich, unfair splitting, fair division}
}
Card-Based Protocols Imply PSM Protocols

Authors: Kazumasa Shinagawa and Koji Nuida


Abstract
Card-based cryptography is the art of performing cryptography with a deck of physical cards. While this area is known as recreational cryptography and has recently attracted attention for educational purposes, there has been no systematic study of the relationship between card-based cryptography and other "conventional" cryptography. This paper establishes the first generic conversion from card-based protocols to private simultaneous messages (PSM) protocols, a special kind of secure multiparty computation. Our compiler supports "simple" card-based protocols, which form a natural subclass of finite-runtime protocols. The communication complexity of the resulting PSM protocol depends on how many cards are opened in total across all possible branches of the original card-based protocol. This result shows the theoretical importance of such "opening complexity" of card-based protocols, which had not previously been a focus of study in this area. As a consequence, lower bounds for PSM protocols imply lower bounds for simple card-based protocols. In particular, if there exists no PSM protocol with subexponential communication complexity for a function f, then there exists no simple card-based protocol with subexponential opening complexity for the same f.

Cite as

Kazumasa Shinagawa and Koji Nuida. Card-Based Protocols Imply PSM Protocols. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 72:1-72:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{shinagawa_et_al:LIPIcs.STACS.2025.72,
  author =	{Shinagawa, Kazumasa and Nuida, Koji},
  title =	{{Card-Based Protocols Imply PSM Protocols}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{72:1--72:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.72},
  URN =		{urn:nbn:de:0030-drops-228975},
  doi =		{10.4230/LIPIcs.STACS.2025.72},
  annote =	{Keywords: Card-based cryptography, private simultaneous messages}
}
Dominating Set, Independent Set, Discrete k-Center, Dispersion, and Related Problems for Planar Points in Convex Position

Authors: Anastasiia Tkachenko and Haitao Wang


Abstract
Given a set P of n points in the plane, its unit-disk graph G(P) is a graph with P as its vertex set such that two points of P are connected by an edge if their (Euclidean) distance is at most 1. We consider several classical problems on G(P) in the special setting where the points of P are in convex position. These problems are all NP-hard in the general case; we present efficient algorithms for them under the convex position assumption.
● For the problem of finding a smallest dominating set of G(P), we present an O(kn log n) time algorithm, where k is the smallest dominating set size. We also consider the weighted case, in which each point of P has a weight and the goal is to find a dominating set in G(P) of minimum total weight; our algorithm runs in O(n³ log² n) time. In particular, for a given k, our algorithm can compute in O(kn² log² n) time a minimum-weight dominating set of size at most k (if one exists).
● For the discrete k-center problem, which asks, for a given k, for a subset of k points in P (called centers) such that the maximum distance between any point in P and its nearest center is minimized, we present an algorithm that solves the problem in O(min{n^{4/3} log n + kn log² n, k²n log² n}) time, which is O(n² log² n) in the worst case when k = Θ(n). For comparison, the current best algorithm for the continuous version of the problem, where centers can be anywhere in the plane, runs in O(n³ log n) time.
● For the problem of finding a maximum independent set in G(P), we give an algorithm of O(n^{7/2}) time and a randomized algorithm of O(n^{37/11}) expected time, improving the previous best result of O(n⁶ log n) time. Our algorithms extend to computing a maximum-weight independent set in G(P) within the same time complexities when the points of P have weights.
- For an (unweighted) independent set of size 3, we derive an O(n log n) time algorithm; the previous best algorithm runs in O(n^{4/3} log² n) time (and works in the general case, where the points of P are not necessarily in convex position).
- If the points of P have weights and are not necessarily in convex position, we present an algorithm that finds a maximum-weight independent set of size 3 in O(n^{5/3+δ}) time for an arbitrarily small constant δ > 0. A slight modification of the algorithm finds a maximum-weight clique of size 3 within the same time complexity.
● For the dispersion problem, which asks, for a given k, for a subset of k points from P such that the minimum pairwise distance of the points in the subset is maximized, we present an algorithm of O(n^{7/2} log n) time and a randomized algorithm of O(n^{37/11} log n) expected time, improving the previous best result of O(n⁶) time.
- For k = 3, we present an O(n log² n) time algorithm and a randomized algorithm of O(n log n) expected time; the previous best algorithm runs in O(n^{4/3} log² n) time (and works in the general case).

Cite as

Anastasiia Tkachenko and Haitao Wang. Dominating Set, Independent Set, Discrete k-Center, Dispersion, and Related Problems for Planar Points in Convex Position. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 73:1-73:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{tkachenko_et_al:LIPIcs.STACS.2025.73,
  author =	{Tkachenko, Anastasiia and Wang, Haitao},
  title =	{{Dominating Set, Independent Set, Discrete k-Center, Dispersion, and Related Problems for Planar Points in Convex Position}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{73:1--73:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.73},
  URN =		{urn:nbn:de:0030-drops-228982},
  doi =		{10.4230/LIPIcs.STACS.2025.73},
  annote =	{Keywords: Dominating set, k-center, geometric set cover, independent set, clique, vertex cover, unit-disk graphs, convex position, dispersion, maximally separated sets}
}
Nearly-Optimal Algorithm for Non-Clairvoyant Service with Delay

Authors: Noam Touitou


Abstract
We consider the online service with delay problem, in which a server traverses a metric space to serve requests that arrive over time. Requests gather individual delay cost while awaiting service, penalizing service latency; an algorithm seeks to minimize both its movement cost and the total delay cost. Algorithms for the problem (on general metric spaces) are only known for the clairvoyant model, where the algorithm knows future delay cost in advance (e.g., Azar et al., STOC'17; Azar and Touitou, FOCS'19; Touitou, STOC'23). In the non-clairvoyant setting, however, only negative results are known: where n is the size of the metric space and m is the number of requests, there are lower bounds of Ω(√n) and Ω(√m) on competitiveness (Azar et al., STOC'17), which hold even for randomized algorithms (Le et al., SODA'23). In this paper, we present the first algorithm for non-clairvoyant online service with delay. Our algorithm is deterministic and O(min{√n log n, √m log m})-competitive; combined with the lower bounds of Azar et al., this settles the competitive ratio of the problem up to logarithmic factors, in terms of both n and m.

Cite as

Noam Touitou. Nearly-Optimal Algorithm for Non-Clairvoyant Service with Delay. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 74:1-74:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{touitou:LIPIcs.STACS.2025.74,
  author =	{Touitou, Noam},
  title =	{{Nearly-Optimal Algorithm for Non-Clairvoyant Service with Delay}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{74:1--74:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.74},
  URN =		{urn:nbn:de:0030-drops-228995},
  doi =		{10.4230/LIPIcs.STACS.2025.74},
  annote =	{Keywords: Online, Delay, Deadlines, k-server, Non-clairvoyant}
}
Canonical Labeling of Sparse Random Graphs

Authors: Oleg Verbitsky and Maksim Zhukovskii


Abstract
We show that if p = O(1/n), then the Erdős-Rényi random graph G(n,p) with high probability admits a canonical labeling computable in time O(n log n). Combined with previous results on the canonization of random graphs, this implies that G(n,p) with high probability admits a polynomial-time canonical labeling for every edge probability function p. Our algorithm combines the standard color refinement routine with simple post-processing based on the classical linear-time tree canonization. Notably, our analysis of how well color refinement performs in this setting allows us to complete the description of the automorphism group of the 2-core of G(n,p).
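A minimal sketch of the color refinement routine mentioned above (plain one-dimensional Weisfeiler-Leman, without the paper's tree-based post-processing): starting from a monochromatic coloring, each round replaces a vertex's color by its old color together with the multiset of its neighbors' colors, until the partition stabilizes.

```python
def color_refinement(adj):
    """Color refinement (1-WL): iteratively refine vertex colors by the
    multiset of neighbor colors until the coloring is stable.
    `adj` maps each vertex to a list of its neighbors."""
    color = {v: 0 for v in adj}  # start monochromatic
    while True:
        # Signature: old color plus the sorted multiset of neighbor colors.
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        # Canonically rename signatures to small integers.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == color:
            return color
        color = new
```

On a path with three vertices, refinement stabilizes with the two endpoints sharing a color distinct from the midpoint's; on a triangle, all vertices stay in one color class.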

Cite as

Oleg Verbitsky and Maksim Zhukovskii. Canonical Labeling of Sparse Random Graphs. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 75:1-75:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{verbitsky_et_al:LIPIcs.STACS.2025.75,
  author =	{Verbitsky, Oleg and Zhukovskii, Maksim},
  title =	{{Canonical Labeling of Sparse Random Graphs}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{75:1--75:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.75},
  URN =		{urn:nbn:de:0030-drops-229003},
  doi =		{10.4230/LIPIcs.STACS.2025.75},
  annote =	{Keywords: Graph isomorphism, random graphs, canonical labeling, color refinement}
}
Dynamic Unit-Disk Range Reporting

Authors: Haitao Wang and Yiming Zhao


Abstract
For a set P of n points in the plane and a value r > 0, the unit-disk range reporting problem is to construct a data structure so that given any query disk of radius r, all points of P in the disk can be reported efficiently. We consider the dynamic version of the problem where point insertions and deletions of P are allowed. The previous best method provides a data structure of O(n log n) space that supports O(log^{3+ε} n) amortized insertion time, O(log^{5+ε} n) amortized deletion time, and O(log² n/log log n+k) query time, where ε is an arbitrarily small positive constant and k is the output size. In this paper, we improve the query time to O(log n+k) while keeping other complexities the same as before. A key ingredient of our approach is a shallow cutting algorithm for circular arcs, which may be interesting in its own right. A related problem that can also be solved by our techniques is the dynamic unit-disk range emptiness queries: Given a query unit disk, we wish to determine whether the disk contains a point of P. The best previous work can maintain P in a data structure of O(n) space that supports O(log² n) amortized insertion time, O(log⁴n) amortized deletion time, and O(log² n) query time. Our new data structure also uses O(n) space but can support each update in O(log^{1+ε} n) amortized time and support each query in O(log n) time.
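As a baseline for the query being supported (purely illustrative, not the paper's data structure): storing P in a flat list gives O(1) updates but forces a linear scan per query, which is exactly the O(n) cost that the O(log n + k)-time structure above is designed to beat.

```python
def unit_disk_report(points, center, r):
    """Naive unit-disk range reporting: scan all points and report those
    within (squared) distance r**2 of the query center. O(n) per query."""
    cx, cy = center
    return [(x, y) for (x, y) in points
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r]
```

Insertion and deletion in this baseline are trivial list operations; the whole difficulty addressed by the paper is retaining fast updates while reducing the per-query cost from linear to logarithmic plus output size.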

Cite as

Haitao Wang and Yiming Zhao. Dynamic Unit-Disk Range Reporting. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 76:1-76:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{wang_et_al:LIPIcs.STACS.2025.76,
  author =	{Wang, Haitao and Zhao, Yiming},
  title =	{{Dynamic Unit-Disk Range Reporting}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{76:1--76:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.76},
  URN =		{urn:nbn:de:0030-drops-229019},
  doi =		{10.4230/LIPIcs.STACS.2025.76},
  annote =	{Keywords: Unit disks, range reporting, range emptiness, alpha-hulls, dynamic data structures, shallow cuttings}
}
