204 Search Results for "Sanders, Peter"


Volume

LIPIcs, Volume 173

28th Annual European Symposium on Algorithms (ESA 2020)

ESA 2020, September 7-9, 2020, Pisa, Italy (Virtual Conference)

Editors: Fabrizio Grandoni, Grzegorz Herman, and Peter Sanders

Document
Mind the Gap. Doubling Constant Parametrization of Weighted Problems: TSP, Max-Cut, and More

Authors: Mihail Stoian

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
Despite much research, hard weighted problems still resist super-polynomial improvements over their textbook solution. On the other hand, the unweighted versions of these problems have recently witnessed the sought-after speedups. Currently, the only way to repurpose the algorithm of the unweighted version for the weighted version is to employ a polynomial embedding of the input weights. This, however, introduces a pseudo-polynomial factor into the running time, which becomes impractical for arbitrarily weighted instances. In this paper, we introduce a new way to repurpose the algorithm of the unweighted problem. Specifically, we show that the time complexity of several well-known NP-hard problems operating over the (min, +) and (max, +) semirings, such as TSP, Weighted Max-Cut, and Edge-Weighted k-Clique, is proportional to that of their unweighted versions when the set of input weights has small doubling. We achieve this via a meta-algorithm that converts the input weights into polynomially bounded integers using the recent constructive version of Freiman’s theorem by Randolph and Węgrzycki [ESA 2024] before applying the polynomial embedding.
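
For intuition, here is a minimal Python sketch (not from the paper) of the quantity the parametrization is based on: the doubling constant |A+A|/|A| of the set A of input weights. The meta-algorithm applies when this ratio is small; the function name and the example inputs below are illustrative only.

def doubling_ratio(weights):
    # A is the set of distinct input weights; its doubling constant is
    # |A + A| / |A|, where A + A = {a + b : a, b in A} is the sumset.
    A = set(weights)
    sumset = {a + b for a in A for b in A}
    return len(sumset) / len(A)

# An arithmetic progression has doubling ratio close to 2 (small doubling),
# while generic weight sets have a ratio that grows linearly in |A|.
print(doubling_ratio(range(0, 1000, 7)))          # close to 2
print(doubling_ratio([2**i for i in range(20)]))  # about |A| / 2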

Cite as

Mihail Stoian. Mind the Gap. Doubling Constant Parametrization of Weighted Problems: TSP, Max-Cut, and More. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 79:1-79:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{stoian:LIPIcs.STACS.2026.79,
  author =	{Stoian, Mihail},
  title =	{{Mind the Gap. Doubling Constant Parametrization of Weighted Problems: TSP, Max-Cut, and More}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{79:1--79:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.79},
  URN =		{urn:nbn:de:0030-drops-255680},
  doi =		{10.4230/LIPIcs.STACS.2026.79},
  annote =	{Keywords: doubling constant parametrization, weighted problems, traveling salesman, weighted max-cut, edge-weighted k-clique}
}
Document
Protrusion Decompositions Revisited: Uniform Lossy Kernels for Reducing Treewidth and Linear Kernels for Hitting Disconnected Minors

Authors: Roohani Sharma and Michał Włodarczyk

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
Let ℱ be a finite family of graphs. In the ℱ-Deletion problem, one is given a graph G and an integer k, and the goal is to find k vertices whose deletion results in a graph with no minor from the family ℱ. This may be regarded as a far-reaching generalization of Vertex Cover and Feedback Vertex Set. In their seminal work, Fomin, Lokshtanov, Misra & Saurabh [FOCS 2012] gave a polynomial kernel for this problem when the family ℱ contains a planar graph. As the size of their kernel is g(ℱ) ⋅ k^{f(ℱ)}, a natural follow-up question was whether the dependence on ℱ in the exponent of k can be avoided. The answer turned out to be negative: Giannopoulou, Jansen, Lokshtanov & Saurabh [TALG 2017] proved that this is already inevitable for the special case of the Treewidth-η-Deletion problem. In this work, we show that this non-uniformity can be avoided at the expense of a small loss. First, we present a simple 2-approximate kernelization algorithm for Treewidth-η-Deletion with kernel size g(η) ⋅ k⁶. Next, we show that the approximation factor can be made arbitrarily close to 1 if we settle for a kernelization protocol with 𝒪(1) calls to an oracle that solves instances of size bounded by a uniform polynomial in k. We extend the above results to general ℱ-Deletion, whenever ℱ contains a planar graph, as long as an oracle for Treewidth-η-Deletion is available for small instances. Notably, all our constants are computable functions of ℱ, and our techniques also work when some graphs in ℱ may be disconnected. Our results rely on two novel techniques. First, we transform so-called "near-protrusion decompositions" into true protrusion decompositions at the cost of a small accuracy loss. Second, we show how to optimally compress such a decomposition with respect to general ℱ-Deletion. Using our second technique, we also obtain linear kernels on sparse graph classes when ℱ contains a planar graph, whereas the previously known theorems required all graphs in ℱ to be connected. Specifically, we generalize the kernelization algorithm by Kim, Langer, Paul, Reidl, Rossmanith, Sau & Sikdar [TALG 2015] to graph classes that exclude a topological minor.

Cite as

Roohani Sharma and Michał Włodarczyk. Protrusion Decompositions Revisited: Uniform Lossy Kernels for Reducing Treewidth and Linear Kernels for Hitting Disconnected Minors. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 78:1-78:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{sharma_et_al:LIPIcs.STACS.2026.78,
  author =	{Sharma, Roohani and W{\l}odarczyk, Micha{\l}},
  title =	{{Protrusion Decompositions Revisited: Uniform Lossy Kernels for Reducing Treewidth and Linear Kernels for Hitting Disconnected Minors}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{78:1--78:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.78},
  URN =		{urn:nbn:de:0030-drops-255674},
  doi =		{10.4230/LIPIcs.STACS.2026.78},
  annote =	{Keywords: kernelization, graph minors, treewidth, uniform kernels, minor hitting}
}
Document
Time-Optimal Construction of String Synchronizing Sets

Authors: Jonas Ellert and Tomasz Kociumaka

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
A powerful design principle behind many modern string algorithms is local consistency: breaking the symmetry between string positions based on their small contexts so that matching fragments are handled consistently. Among the most influential instantiations of this principle are string synchronizing sets [Kempa & Kociumaka; STOC 2019]. A τ-synchronizing set of a string of length n is a set of O(n/τ) string positions, chosen using their length-2τ contexts, such that (outside of highly periodic regions) every block of τ consecutive positions contains at least one element of the set. Synchronizing sets have found dozens of applications in diverse settings, from quantum and dynamic algorithms to fully compressed computation. In the classic word RAM model, particularly for strings over small alphabets, they enabled faster solutions to core problems in data compression, text indexing, and string similarity. In this work, we show that any string T ∈ [0 .. σ)ⁿ can be preprocessed in O(n log σ / log n) time so that, for any given integer τ ∈ [1 .. n], a τ-synchronizing set of T can be constructed in O((n log τ)/(τ log n)) time. Both bounds are optimal in the word RAM model with machine word size w = Θ(log n), matching the information-theoretic minimum for the input and output sizes, respectively. Previously, constructing a τ-synchronizing set required O(n/τ) time after O(n)-time preprocessing [Kociumaka, Radoszewski, Rytter, and Waleń; SICOMP 2024] or, in the restricted regime τ < 0.2 log_σ n, no preprocessing at all [Kempa & Kociumaka; STOC 2019]. A simple instantiation of our method outputs the synchronizing set as a sorted list in O(n/τ) time, or as a bitmask in O(n/log n) time. Our optimal construction produces a compact fully indexable dictionary, supporting select queries in O(1) time and rank queries in O(log ((log τ)/(log log n))) time. The latter complexity matches known unconditional cell-probe lower bounds for τ ≤ n^{1-Ω(1)}. To achieve this, we introduce a general framework for efficiently processing sparse integer sequences via a custom variable-length encoding. We also augment the optimal variant of van Emde Boas trees [Pătraşcu & Thorup; STOC 2006] with a deterministic linear-time construction. When the set is represented as a bitmask under our sparse encoding, the same guarantees for select and rank queries hold after preprocessing in time proportional to the size of our encoding (in words).
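
As an illustration of the local-consistency principle (and emphatically not the paper's optimal construction), the following naive O(nτ)-time Python sketch selects a position i whenever the lexicographically smallest length-τ fragment in the window of positions i .. i+τ starts at one of the window's endpoints. Membership of i then depends only on T[i .. i+2τ), and outside highly periodic regions every block of τ positions contains a selected one.

def naive_sync_set(T, tau):
    # Select i if the minimal length-tau fragment among start positions
    # i .. i+tau occurs at the first or last position of the window.
    # The decision reads only the context T[i : i + 2*tau].
    S = []
    for i in range(len(T) - 2 * tau + 1):
        window = [T[j:j + tau] for j in range(i, i + tau + 1)]
        smallest = min(window)
        if window[0] == smallest or window[-1] == smallest:
            S.append(i)
    return S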

Cite as

Jonas Ellert and Tomasz Kociumaka. Time-Optimal Construction of String Synchronizing Sets. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 36:1-36:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{ellert_et_al:LIPIcs.STACS.2026.36,
  author =	{Ellert, Jonas and Kociumaka, Tomasz},
  title =	{{Time-Optimal Construction of String Synchronizing Sets}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{36:1--36:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.36},
  URN =		{urn:nbn:de:0030-drops-255258},
  doi =		{10.4230/LIPIcs.STACS.2026.36},
  annote =	{Keywords: synchronizing sets, local consistency, packed strings}
}
Document
Broadcast in Almost Mixing Time

Authors: Anton Paramonov and Roger Wattenhofer

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
We study the problem of broadcasting multiple messages in the CONGEST model. In this problem, a dedicated source node s possesses a set M of messages, each of size O(log n), where n is the total number of nodes. The objective is to ensure that every node in the network learns all messages in M. The execution of an algorithm progresses in rounds, and we focus on optimizing the round complexity of broadcasting multiple messages. Our primary contribution is a randomized algorithm for networks with expander topology. The algorithm succeeds with high probability and achieves a round complexity that is optimal up to a factor of the network’s mixing time and polylogarithmic terms. It leverages a multi-COBRA primitive, which uses multiple branching random walks running in parallel. A crucial aspect of our method is the use of these branching random walks to construct an optimal (up to a polylogarithmic factor) tree packing of a random graph, which is then used for efficient broadcasting. We also prove the problem to be NP-hard in a centralized setting and provide insights into why lower bounds that can be matched in expanders, namely graph diameter and |M|/minCut, cannot be tight in general graphs.
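
A bare-bones sequential simulation of a single COBRA (coalescing-branching random walk) may help fix the idea: each active node forwards the token to a few random neighbors, and copies arriving at the same node coalesce. This toy (assuming a connected graph given as an adjacency-list dict) illustrates the primitive only; the paper's multi-COBRA runs many such walks in parallel to extract a tree packing.

import random

def cobra_rounds(adj, source, branch=2):
    # adj: dict mapping each node to a list of its neighbors (connected graph)
    active, informed, rounds = {source}, {source}, 0
    while len(informed) < len(adj):
        nxt = set()
        for u in active:
            for _ in range(branch):
                nxt.add(random.choice(adj[u]))  # copies coalesce via the set
        active = nxt
        informed |= nxt
        rounds += 1
    return rounds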

Cite as

Anton Paramonov and Roger Wattenhofer. Broadcast in Almost Mixing Time. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 71:1-71:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{paramonov_et_al:LIPIcs.STACS.2026.71,
  author =	{Paramonov, Anton and Wattenhofer, Roger},
  title =	{{Broadcast in Almost Mixing Time}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{71:1--71:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.71},
  URN =		{urn:nbn:de:0030-drops-255603},
  doi =		{10.4230/LIPIcs.STACS.2026.71},
  annote =	{Keywords: Distributed algorithms, Expander Graphs, Random graphs, Broadcast, Branching random walks, Tree packing, CONGEST model}
}
Document
Dynamic Pattern Matching with Wildcards

Authors: Arshia Ataee Naeini, Amir-Parsa Mobed, Masoud Seddighin, and Saeed Seddighin

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
We study the fully dynamic pattern matching problem where the pattern may contain up to k wildcard symbols, each matching any symbol of the alphabet. Both the text and the pattern are subject to updates (insert, delete, change). We design an algorithm with 𝒪(n log² n) preprocessing and update/query time 𝒪̃(kn^{k/(k+1)} + k² log n). The bound is truly sublinear for constant k, and sublinear when k = o(log n). We further complement our results with a conditional lower bound: assuming subquadratic preprocessing time, achieving truly sublinear update time for the case k = Ω(log n) would contradict the Strong Exponential Time Hypothesis (SETH). Finally, we develop sublinear algorithms for two special cases:
- If the pattern contains w non-wildcard symbols, we give an algorithm with preprocessing time 𝒪(nw) and update time 𝒪(w + log n), which is truly sublinear whenever w is truly sublinear.
- Using an FFT technique combined with block decomposition, we design a deterministic truly sublinear algorithm with preprocessing time 𝒪(n^{1.8}) and update time 𝒪(n^{0.8} log n) for the case that there are at most two non-wildcards.
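
For the FFT ingredient mentioned above, here is a minimal static (non-dynamic) Python sketch of the classical wildcard-matching trick: encode wildcards as 0 and test Σ_j p_j · t_{i+j} · (p_j − t_{i+j})² = 0, which expands into three cross-correlations computable with FFTs. It illustrates the primitive only, not the paper's dynamic algorithm; function names are ours.

import numpy as np

def _corr(a, b):
    # cross-correlation c[i] = sum_j a[j] * b[i + j], via FFT convolution
    n = len(a) + len(b)
    fa = np.fft.rfft(a[::-1], n)
    fb = np.fft.rfft(b, n)
    full = np.fft.irfft(fa * fb, n)
    return full[len(a) - 1:]

def wildcard_occurrences(pattern, text):
    # symbols are positive integers; 0 encodes a wildcard in the pattern
    p, t = np.asarray(pattern, float), np.asarray(text, float)
    s = _corr(p**3, t) - 2 * _corr(p**2, t**2) + _corr(p, t**3)
    return [i for i in range(len(t) - len(p) + 1) if abs(s[i]) < 0.5]

# pattern 1 * 2 occurs in 1 3 2 1 2 2 at positions 0 and 3
print(wildcard_occurrences([1, 0, 2], [1, 3, 2, 1, 2, 2]))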

Cite as

Arshia Ataee Naeini, Amir-Parsa Mobed, Masoud Seddighin, and Saeed Seddighin. Dynamic Pattern Matching with Wildcards. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 68:1-68:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{naeini_et_al:LIPIcs.STACS.2026.68,
  author =	{Naeini, Arshia Ataee and Mobed, Amir-Parsa and Seddighin, Masoud and Seddighin, Saeed},
  title =	{{Dynamic Pattern Matching with Wildcards}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{68:1--68:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.68},
  URN =		{urn:nbn:de:0030-drops-255579},
  doi =		{10.4230/LIPIcs.STACS.2026.68},
  annote =	{Keywords: pattern matching, wildcards, dynamic algorithms, string algorithms, data structures}
}
Document
A Polylogarithmic Competitive Algorithm for Stochastic Online Sorting and TSP

Authors: Andreas Kalavas, Charalampos Platanos, and Thanos Tolias

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
In Online Sorting, an array of n initially empty cells is given. At each time step t, an element x_t ∈ [0,1] arrives and must be irrevocably placed in an empty cell without knowledge of future arrivals. We aim to minimize the sum of absolute differences between pairs of elements placed in consecutive array cells, seeking an online placement strategy that results in a final array close to a sorted one. An interesting multidimensional generalization, referred to as the Online Traveling Salesperson Problem, arises when the request sequence consists of points in the d-dimensional unit cube and the objective is to minimize the sum of Euclidean distances between points in consecutive cells. Motivated by the recent work of (Abrahamsen, Bercea, Beretta, Klausen and Kozma; ESA 2024), we consider the stochastic version of Online Sorting (resp. Online TSP), where each element (resp. point) x_t is an i.i.d. sample from the uniform distribution on [0, 1] (resp. [0,1]^d). By carefully decomposing the request sequence into a hierarchy of balls-into-bins instances, where the balls-to-bins ratio is large enough so that bin occupancy is sharply concentrated around its mean and small enough so that we can efficiently deal with the elements placed in the same bin, we obtain an online algorithm that approximates the optimal cost within a factor of O(log² n) with high probability. Our result constitutes an exponential improvement over the previously best known competitive ratio of Õ(n^{1/4}) for Stochastic Online Sorting due to (Abrahamsen et al.; ESA 2024) and O(√n) for (adversarial) Online TSP due to (Bertram; ESA 2025).
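
A one-level toy of the balls-into-bins idea (the paper uses a full hierarchy) can be sketched as follows: reserve a contiguous block of cells per bin, route each uniform arrival to its bin's block, and spill into leftover empty cells on overflow. The bin count and capacity below are illustrative choices, not the paper's parameters.

import math, random

def one_level_online_sort(n, c=8):
    bins = max(1, n // (c * max(1, int(math.log2(n)))))  # illustrative choice
    cap = n // bins                    # cells reserved per bin
    arr, fill, overflow = [None] * n, [0] * bins, []
    for _ in range(n):
        x = random.random()
        b = min(int(x * bins), bins - 1)
        if fill[b] < cap:
            arr[b * cap + fill[b]] = x   # place inside the bin's block
            fill[b] += 1
        else:
            overflow.append(x)           # handled by the hierarchy in the paper
    spill = iter(overflow)
    for i in range(n):                   # dump overflow into remaining empty cells
        if arr[i] is None:
            arr[i] = next(spill)
    return arr

# cost of the final placement:
# sum(abs(a - b) for a, b in zip(arr, arr[1:]))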

Cite as

Andreas Kalavas, Charalampos Platanos, and Thanos Tolias. A Polylogarithmic Competitive Algorithm for Stochastic Online Sorting and TSP. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 58:1-58:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{kalavas_et_al:LIPIcs.STACS.2026.58,
  author =	{Kalavas, Andreas and Platanos, Charalampos and Tolias, Thanos},
  title =	{{A Polylogarithmic Competitive Algorithm for Stochastic Online Sorting and TSP}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{58:1--58:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.58},
  URN =		{urn:nbn:de:0030-drops-255473},
  doi =		{10.4230/LIPIcs.STACS.2026.58},
  annote =	{Keywords: sorting, online algorithm, balls-into-bins, TSP}
}
Document
Range Longest Increasing Subsequence and Its Relatives

Authors: Karthik C. S. and Saladi Rahul

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
Longest increasing subsequence (LIS) is a classical textbook problem which is still actively studied in various computational models. In this work, we present a few results for the range longest increasing subsequence problem (Range-LIS) and its variants. The input to Range-LIS is a sequence 𝒮 of n real numbers and a collection 𝒬 of m query ranges, and for each query in 𝒬, the goal is to report the LIS of the sequence 𝒮 restricted to that query. Our two main results are for the following generalizations of the Range-LIS problem:
- 2D Range Queries: In this variant, each query is a pair of ranges, one of indices and the other of values, and we provide a randomized algorithm with running time Õ(mn^{1/2} + n^{3/2}) + O(k), where k is the cumulative length of the m output subsequences. This improves on the elementary Õ(mn)-time algorithm when m = Ω(√n). Previously, the only known result breaking the quadratic barrier was that of Tiskin [SODA'10], which could only handle 1D range queries (i.e., each query was a range of indices) and also output only the length of the LIS (instead of reporting a subsequence achieving that length). Subsequent to our paper, Gawrychowski, Gorbachev, and Kociumaka in a preprint have extended Tiskin’s approach to handle reporting 1D range queries in O(n(log n)³ + m + k) time.
- Colored Sequences: In this variant, each element in 𝒮 is colored and for each query in 𝒬, the goal is to report a monochromatic LIS contained in the sequence 𝒮 restricted to that query. For 2D queries, we provide a randomized algorithm for this colored version with running time Õ(mn^{2/3} + n^{5/3}) + O(k). Moreover, for 1D queries, we provide an improved algorithm with running time Õ(mn^{1/2} + n^{3/2}) + O(k). Thus, we again improve on the elementary Õ(mn)-time algorithm. Additionally, we prove that, assuming the well-known Combinatorial Boolean Matrix Multiplication Hypothesis, the runtime for 1D queries is essentially tight for combinatorial algorithms.
Our algorithms combine several tools, such as dynamic programming (to precompute increasing subsequences with some desirable properties), geometric data structures (to efficiently compute the dynamic programming entries), random sampling (to capture elements which are part of the LIS), classification of query ranges into large LIS and small LIS, and classification of colors into light and heavy. We believe that our techniques will be of interest for tackling other variants of the LIS problem and other range-searching problems.
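
For reference, the elementary Õ(mn) baseline that both results improve on answers each 2D query independently with patience sorting; a minimal Python sketch (length only, strictly increasing) follows.

from bisect import bisect_left

def range_lis_length(S, i_lo, i_hi, v_lo, v_hi):
    # LIS length of S[i_lo .. i_hi] restricted to values in [v_lo, v_hi],
    # via patience sorting: tails[l] is the smallest tail of an increasing
    # subsequence of length l + 1 seen so far.
    tails = []
    for x in S[i_lo:i_hi + 1]:
        if v_lo <= x <= v_hi:
            pos = bisect_left(tails, x)
            if pos == len(tails):
                tails.append(x)
            else:
                tails[pos] = x
    return len(tails)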

Cite as

Karthik C. S. and Saladi Rahul. Range Longest Increasing Subsequence and Its Relatives. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 87:1-87:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


Copy BibTex To Clipboard

@InProceedings{karthikc.s._et_al:LIPIcs.ITCS.2026.87,
  author =	{Karthik C. S. and Rahul, Saladi},
  title =	{{Range Longest Increasing Subsequence and Its Relatives}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{87:1--87:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.87},
  URN =		{urn:nbn:de:0030-drops-253740},
  doi =		{10.4230/LIPIcs.ITCS.2026.87},
  annote =	{Keywords: Longest Increasing Subsequence, Range Query, Fine-Grained Complexity}
}
Document
Bridging Treewidth and Clique-Width via Cograph-Modular-Treewidth

Authors: Václav Blažej, Satyabrata Jana, M. S. Ramanujan, and Peter Strulo

Published in: LIPIcs, Volume 358, 20th International Symposium on Parameterized and Exact Computation (IPEC 2025)


Abstract
Many classical graph problems - such as Max Cut, Chromatic Number, Edge Dominating Set, and Hamiltonian Cycle - are polynomial-time solvable on cographs, fixed-parameter tractable (FPT) when parameterized by treewidth, but W[1]-hard when parameterized by clique-width. In contrast, Graph Isomorphism is FPT parameterized by treewidth, but for clique-width it is only known to be in XP; whether it is FPT or W[1]-hard is open. This reveals a sharp tractability gap between treewidth and clique-width. In this work, we propose a new structural graph parameter, 𝒞-modular-treewidth, which lies between treewidth and clique-width. The parameter leverages modular decomposition and restricts modules to induce graphs from a fixed class 𝒞 (e.g., cographs or edgeless graphs). By exploiting true and false twins - a hallmark of cograph-like structure - our parameter allows the design of efficient algorithms for several hard problems beyond the reach of treewidth-based methods. We show that 𝒞-modular-treewidth enables efficient solutions under suitable choices of 𝒞, opening a new pathway in the parameterized complexity landscape between treewidth and clique-width. In particular, we show that:
- When parameterized by cograph-modular-treewidth, Isomorphism admits an FPT algorithm, whereas Chromatic Number remains W[1]-hard.
- When parameterized by independent-modular-treewidth, Hamiltonian Cycle and Edge Dominating Set remain W[1]-hard.
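
Since true and false twins are the structural hook here, a short Python check may clarify the two notions: u and v are true twins if they are adjacent with the same closed neighborhood (N[u] = N[v]), and false twins if they are non-adjacent with the same open neighborhood (N(u) = N(v)).

def twin_type(adj, u, v):
    # adj: dict mapping each vertex to the set of its neighbors
    if adj[u] - {v} == adj[v] - {u}:   # neighborhoods agree outside {u, v}
        return "true twins" if v in adj[u] else "false twins"
    return None                        # not twins

# In a cograph, every nontrivial induced subgraph contains a pair of twins.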

Cite as

Václav Blažej, Satyabrata Jana, M. S. Ramanujan, and Peter Strulo. Bridging Treewidth and Clique-Width via Cograph-Modular-Treewidth. In 20th International Symposium on Parameterized and Exact Computation (IPEC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 358, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{blazej_et_al:LIPIcs.IPEC.2025.18,
  author =	{Bla\v{z}ej, V\'{a}clav and Jana, Satyabrata and Ramanujan, M. S. and Strulo, Peter},
  title =	{{Bridging Treewidth and Clique-Width via Cograph-Modular-Treewidth}},
  booktitle =	{20th International Symposium on Parameterized and Exact Computation (IPEC 2025)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-407-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{358},
  editor =	{Agrawal, Akanksha and van Leeuwen, Erik Jan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2025.18},
  URN =		{urn:nbn:de:0030-drops-251507},
  doi =		{10.4230/LIPIcs.IPEC.2025.18},
  annote =	{Keywords: Treewidth, Clique-width, Cograph, FPT, W\lbrack1\rbrack-hard}
}
Document
Tight Bounds for Connected Odd Cycle Transversal Parameterized by Clique-Width

Authors: Narek Bojikian and Stefan Kratsch

Published in: LIPIcs, Volume 358, 20th International Symposium on Parameterized and Exact Computation (IPEC 2025)


Abstract
Recently, Bojikian and Kratsch [ICALP 2024] presented a novel approach to tackle connectivity problems parameterized by clique-width (cw), based on counting (modulo 2) the number of representations of partial solutions, while allowing for possibly multiple representations to exist for the same partial solution. Using this technique, they obtained a SETH-tight bound of 𝒪^*(3^{cw}) for the Steiner Tree problem, which was left open by Hegerfeld and Kratsch [ESA 2023]. We use the same technique to solve the Connected Odd Cycle Transversal problem in time 𝒪^*(12^{cw}). Moreover, we prove that our result is tight by providing a SETH-based lower bound excluding algorithms with running time 𝒪^*((12-ε)^{cw}). This answers another question of Hegerfeld and Kratsch [ESA 2023].

Cite as

Narek Bojikian and Stefan Kratsch. Tight Bounds for Connected Odd Cycle Transversal Parameterized by Clique-Width. In 20th International Symposium on Parameterized and Exact Computation (IPEC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 358, pp. 19:1-19:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{bojikian_et_al:LIPIcs.IPEC.2025.19,
  author =	{Bojikian, Narek and Kratsch, Stefan},
  title =	{{Tight Bounds for Connected Odd Cycle Transversal Parameterized by Clique-Width}},
  booktitle =	{20th International Symposium on Parameterized and Exact Computation (IPEC 2025)},
  pages =	{19:1--19:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-407-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{358},
  editor =	{Agrawal, Akanksha and van Leeuwen, Erik Jan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2025.19},
  URN =		{urn:nbn:de:0030-drops-251516},
  doi =		{10.4230/LIPIcs.IPEC.2025.19},
  annote =	{Keywords: Parameterized complexity, connected odd cycle transversal, clique-width}
}
Document
Research
Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web

Authors: Florian Ruosch, Cristina Sarasua, and Abraham Bernstein

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3 (2025)


Abstract
In Argument Mining, predicting argumentative relations between texts (or spans) remains one of the most challenging aspects, even more so in the cross-document setting. This paper makes three key contributions to advance research in this domain. We first extend an existing dataset, the Sci-Arg corpus, by annotating it with explicit inter-document argumentative relations, thereby allowing arguments to be distributed over several documents forming an Argument Web; these new annotations are published using Semantic Web technologies (RDF, OWL). Second, we explore and evaluate three automated approaches for predicting these inter-document argumentative relations, establishing critical baselines on the new dataset. We find that a simple classifier based on discourse indicators with access to context outperforms neural methods. Third, we conduct a comparative analysis of these approaches for both intra- and inter-document settings, identifying statistically significant differences in results that indicate the necessity of distinguishing between these two scenarios. Our findings highlight significant challenges in this complex domain and open crucial avenues for future research on the Argument Web of Science, particularly for those interested in leveraging Semantic Web technologies and knowledge graphs to understand scholarly discourse. With this, we provide the first stepping stones in the form of a benchmark dataset, three baseline methods, and an initial analysis for a systematic exploration of this field relevant to the Web of Data and Science.

Cite as

Florian Ruosch, Cristina Sarasua, and Abraham Bernstein. Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 4:1-4:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@Article{ruosch_et_al:TGDK.3.3.4,
  author =	{Ruosch, Florian and Sarasua, Cristina and Bernstein, Abraham},
  title =	{{Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:33},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.4},
  URN =		{urn:nbn:de:0030-drops-252159},
  doi =		{10.4230/TGDK.3.3.4},
  annote =	{Keywords: Argument Mining, Large Language Models, Knowledge Graphs, Link Prediction}
}
Document
On the Hardness of Approximating Distances of Quantum Codes

Authors: Elena Grigorescu, Vatsal Jha, and Eric Samperton

Published in: LIPIcs, Volume 360, 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)


Abstract
The problem of computing distances of error-correcting codes is fundamental in both the classical and quantum settings. While hardness for the classical version of these problems has been known for some time (in both the exact and approximate settings), it was only recently that Kapshikar and Kundu showed these problems are also hard in the quantum setting. As our first main result, we reprove this using arguably simpler arguments based on hypergraph product codes. In particular, we get a direct reduction to CSS codes, the most commonly used type of quantum code, from the minimum distance problem for classical linear codes. Our second set of results considers the distance of a graph state, which is a key parameter for quantum codes obtained via the codeword stabilized formalism. We show that it is NP-hard to compute/approximate the distance of a graph state when the adjacency matrix of the graph is the input. In fact, we show this is true even if we only consider X-type errors of a graph state. Our techniques moreover imply an interesting classical consequence: the hardness of computing or approximating the distance of classical codes with rate equal to 1/2. One of the main motivations of the present work is a question raised by Kapshikar and Kundu concerning the NP-hardness of approximation when there is an additive error proportional to a quantum code’s length. We show that no such hardness can hold for hypergraph product codes. These observations suggest the possibility of a new kind of square root barrier.
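
The hypergraph product construction that drives the first result can be written down in a few lines. The sketch below is the standard construction, not code from the paper: it builds the CSS check matrices from two classical parity-check matrices and verifies the commutation condition H_X H_Z^T = 0 over GF(2).

import numpy as np

def hypergraph_product(H1, H2):
    # H1: r1 x n1 and H2: r2 x n2 classical parity-check matrices over GF(2)
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))]) % 2
    assert not ((HX @ HZ.T) % 2).any()  # X- and Z-checks commute
    return HX, HZ

# Example: the product of two small repetition-code check matrices
# yields a small surface-code-like CSS code.
H_rep = np.array([[1, 1, 0], [0, 1, 1]])
HX, HZ = hypergraph_product(H_rep, H_rep)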

Cite as

Elena Grigorescu, Vatsal Jha, and Eric Samperton. On the Hardness of Approximating Distances of Quantum Codes. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 34:1-34:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{grigorescu_et_al:LIPIcs.FSTTCS.2025.34,
  author =	{Grigorescu, Elena and Jha, Vatsal and Samperton, Eric},
  title =	{{On the Hardness of Approximating Distances of Quantum Codes}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{34:1--34:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.34},
  URN =		{urn:nbn:de:0030-drops-251152},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.34},
  annote =	{Keywords: quantum codes, minimum distance problem, NP-hardness, graph state distance}
}
Document
Compressibility Measures and Succinct Data Structures for Piecewise Linear Approximations

Authors: Paolo Ferragina and Filippo Lari

Published in: LIPIcs, Volume 359, 36th International Symposium on Algorithms and Computation (ISAAC 2025)


Abstract
We study the problem of deriving compressibility measures for Piecewise Linear Approximations (PLAs), i.e., error-bounded approximations of a set of two-dimensional increasing data points using a sequence of segments. Such approximations are widely used tools in implementing many learned data structures, which mix learning models with traditional algorithmic design blocks to exploit regularities in the underlying data distribution, providing novel and effective space-time trade-offs. We introduce the first lower bounds to the cost of storing PLAs in two settings, namely compression and indexing. We then compare these compressibility measures against known data structures, showing that the latter are asymptotically within a constant factor of the space lower bounds. Finally, we design the first data structures for the aforementioned settings that achieve the space lower bounds plus small additive terms, which turn out to be succinct in most practical cases. Our data structures support the efficient retrieval and evaluation of a segment in the (compressed) PLA for a given x-value, which is a core operation in any learned data structure relying on PLAs. As a result, our paper offers the first theoretical analysis of the maximum compressibility achievable by PLA-based learned data structures, and provides novel storage schemes for PLAs offering strong theoretical guarantees while also suggesting simple and efficient practical implementations.
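
To make the object concrete, here is a simple greedy ε-bounded PLA builder in Python (an illustration of what a PLA is, not the paper's optimal-space schemes). It anchors each segment at its first point, maintains a feasible slope interval, and starts a new segment when the interval empties.

def greedy_pla(points, eps):
    # points: (x, y) pairs with strictly increasing x; every point must lie
    # within eps of its segment. Greedy, not optimal in segment count.
    segments, i, n = [], 0, len(points)
    while i < n:
        x0, y0 = points[i]
        lo, hi = float('-inf'), float('inf')
        j = i + 1
        while j < n:
            x, y = points[j]
            lo = max(lo, (y - eps - y0) / (x - x0))  # slope must reach y - eps
            hi = min(hi, (y + eps - y0) / (x - x0))  # ... and stay below y + eps
            if lo > hi:
                break                                # no single segment fits
            j += 1
        slope = (lo + hi) / 2 if j > i + 1 else 0.0
        segments.append((x0, y0, slope, j - i))      # anchor, slope, length
        i = j
    return segments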

Cite as

Paolo Ferragina and Filippo Lari. Compressibility Measures and Succinct Data Structures for Piecewise Linear Approximations. In 36th International Symposium on Algorithms and Computation (ISAAC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 359, pp. 31:1-31:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{ferragina_et_al:LIPIcs.ISAAC.2025.31,
  author =	{Ferragina, Paolo and Lari, Filippo},
  title =	{{Compressibility Measures and Succinct Data Structures for Piecewise Linear Approximations}},
  booktitle =	{36th International Symposium on Algorithms and Computation (ISAAC 2025)},
  pages =	{31:1--31:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-408-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{359},
  editor =	{Chen, Ho-Lin and Hon, Wing-Kai and Tsai, Meng-Tsung},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2025.31},
  URN =		{urn:nbn:de:0030-drops-249397},
  doi =		{10.4230/LIPIcs.ISAAC.2025.31},
  annote =	{Keywords: Piecewise Linear Approximations, Succinct Data Structures, Lower Bounds}
}
Document
Parallel Joinable B-Trees in the Fork-Join I/O Model

Authors: Michael T. Goodrich, Yan Gu, Ryuto Kitagawa, and Yihan Sun

Published in: LIPIcs, Volume 359, 36th International Symposium on Algorithms and Computation (ISAAC 2025)


Abstract
Balanced search trees are widely used in computer science to efficiently maintain dynamic ordered data. To support efficient set operations (e.g., union, intersection, difference) using trees, the join-based framework is widely studied. This framework has received particular attention in the parallel setting, and has been shown to be effective in enabling simple and theoretically efficient set operations on trees. Despite the widespread adoption of parallel join-based trees, a major drawback of previous work on such data structures is the inefficiency of their input/output (I/O) access patterns. Some recent work (e.g., C-trees and PaC-trees) focused on more I/O-friendly implementations of these algorithms. Surprisingly, however, there have been no results on bounding the I/O costs of these algorithms. It remains open whether these algorithms can provide tight, provable I/O-cost guarantees on trees. This paper studies efficient parallel algorithms for set operations based on search tree algorithms using a join-based framework, with a special focus on achieving I/O efficiency in these algorithms. To better capture the I/O efficiency of these algorithms in the parallel setting, we introduce a new computational model, the Fork-Join I/O Model, to measure the I/O costs in fork-join parallelism. This model measures the total block transfers (I/O work) and their critical path (I/O span). Under this model, we propose our new solution based on B-trees. Our parallel algorithm computes the union, intersection, and difference of two B-trees with O(m log_B(n/m)) I/O work and O(log_B m ⋅ log₂ log_B n + log_B n) I/O span, where n and m ≤ n are the sizes of the two trees, and B is the block size.
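
The join-based framework itself fits in a few lines; the Python sketch below shows split and union on plain unbalanced BSTs, with join degenerating to node construction. It conveys only the recursion: the paper's contribution is carrying this out on B-trees with provable I/O work and span, which this toy does not attempt.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def split(t, k):
    # partition t into keys < k and keys > k (k itself is dropped)
    if t is None:
        return None, None
    if k < t.key:
        l, r = split(t.left, k)
        return l, Node(t.key, r, t.right)
    if k > t.key:
        l, r = split(t.right, k)
        return Node(t.key, t.left, l), r
    return t.left, t.right

def union(t1, t2):
    # join-based union: split t2 by t1's root and recurse on both sides;
    # the two recursive calls are independent (the source of parallelism)
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    l2, r2 = split(t2, t1.key)
    return Node(t1.key, union(t1.left, l2), union(t1.right, r2))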

Cite as

Michael T. Goodrich, Yan Gu, Ryuto Kitagawa, and Yihan Sun. Parallel Joinable B-Trees in the Fork-Join I/O Model. In 36th International Symposium on Algorithms and Computation (ISAAC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 359, pp. 37:1-37:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{goodrich_et_al:LIPIcs.ISAAC.2025.37,
  author =	{Goodrich, Michael T. and Gu, Yan and Kitagawa, Ryuto and Sun, Yihan},
  title =	{{Parallel Joinable B-Trees in the Fork-Join I/O Model}},
  booktitle =	{36th International Symposium on Algorithms and Computation (ISAAC 2025)},
  pages =	{37:1--37:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-408-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{359},
  editor =	{Chen, Ho-Lin and Hon, Wing-Kai and Tsai, Meng-Tsung},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2025.37},
  URN =		{urn:nbn:de:0030-drops-249451},
  doi =		{10.4230/LIPIcs.ISAAC.2025.37},
  annote =	{Keywords: Parallel algorithm, I/O efficiency, search trees, B-trees}
}
Document
PIPQ: Strict Insert-Optimized Concurrent Priority Queue

Authors: Olivia Grimes, Ahmed Hassan, Panagiota Fatourou, and Roberto Palmieri

Published in: LIPIcs, Volume 356, 39th International Symposium on Distributed Computing (DISC 2025)


Abstract
This paper presents PIPQ, a strict and linearizable concurrent priority queue whose design differs from existing solutions in the literature because it focuses on enabling parallelism of insert operations, as opposed to accelerating delete-min operations, as traditionally done. In a nutshell, PIPQ’s structure includes two levels: the worker level and the leader level. The worker level provides per-thread data structures enabling fast and parallel insertions. The leader level contains the highest priority elements in the priority queue and can thus serve delete-min operations. Our evaluation, which includes an exploration of different data access patterns, operation mixes, runtime settings, and an integration into a graph-based application, shows that PIPQ outperforms competitors in a variety of cases, especially with insert-dominant workloads.
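
A sequential toy of the two-level layout may help fix ideas; it captures only the structure, not PIPQ's actual contribution, which is the concurrent, linearizable coordination between workers and leader. Each worker appends into its own buffer, and delete-min first promotes each buffer's best element into the leader heap, which then holds the global minimum.

import heapq

class TwoLevelPQ:
    def __init__(self, workers=4):
        self.buffers = [[] for _ in range(workers)]  # worker level
        self.leader = []                             # leader level (heap)

    def insert(self, worker, item):
        self.buffers[worker].append(item)  # uncontended fast path in PIPQ

    def delete_min(self):
        # promote each buffer's minimum, so the leader holds the global min
        for buf in self.buffers:
            if buf:
                heapq.heappush(self.leader, buf.pop(buf.index(min(buf))))
        return heapq.heappop(self.leader) if self.leader else None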

Cite as

Olivia Grimes, Ahmed Hassan, Panagiota Fatourou, and Roberto Palmieri. PIPQ: Strict Insert-Optimized Concurrent Priority Queue. In 39th International Symposium on Distributed Computing (DISC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 356, pp. 35:1-35:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{grimes_et_al:LIPIcs.DISC.2025.35,
  author =	{Grimes, Olivia and Hassan, Ahmed and Fatourou, Panagiota and Palmieri, Roberto},
  title =	{{PIPQ: Strict Insert-Optimized Concurrent Priority Queue}},
  booktitle =	{39th International Symposium on Distributed Computing (DISC 2025)},
  pages =	{35:1--35:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-402-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{356},
  editor =	{Kowalski, Dariusz R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2025.35},
  URN =		{urn:nbn:de:0030-drops-248525},
  doi =		{10.4230/LIPIcs.DISC.2025.35},
  annote =	{Keywords: Priority Queue, Concurrent Data Structures, Synchronization}
}
  • Refine by Type
  • 199 Document/PDF
  • 79 Document/HTML
  • 4 Artifact
  • 1 Volume

  • Refine by Publication Year
  • 7 2026
  • 75 2025
  • 6 2024
  • 5 2023
  • 2 2022
  • Show More...

  • Refine by Author
  • 34 Sanders, Peter
  • 8 Lehmann, Hans-Peter
  • 7 Walzer, Stefan
  • 5 Schulz, Christian
  • 4 Fomin, Fedor V.
  • Show More...

  • Refine by Series/Journal
  • 172 LIPIcs
  • 15 OASIcs
  • 2 LITES
  • 2 TGDK
  • 8 DagSemProc

  • Refine by Classification
  • 30 Mathematics of computing → Graph algorithms
  • 23 Theory of computation → Data structures design and analysis
  • 22 Theory of computation → Parameterized complexity and exact algorithms
  • 21 Theory of computation → Graph algorithms analysis
  • 20 Theory of computation → Computational geometry
  • Show More...

  • Refine by Keyword
  • 6 Treewidth
  • 5 approximation algorithms
  • 5 kernelization
  • 4 Approximation Algorithms
  • 4 Burrows-Wheeler transform
  • Show More...
