277 Search Results for "Marx, Daniel"


Volume

LIPIcs, Volume 107

45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)

ICALP 2018, July 9-13, 2018, Prague, Czech Republic

Editors: Ioannis Chatzigiannakis, Christos Kaklamanis, Dániel Marx, and Donald Sannella

Document
Survey
Temporal Modelling in Cultural Heritage Knowledge Graphs: Use Cases, Requirements, Evaluation, and Decision Support

Authors: Oleksandra Bruns, Jörg Waitelonis, Jeff Z. Pan, and Harald Sack

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 4, Issue 1 (2026)


Abstract
Our culture, history and world are in constant motion, continuously shaped by the flow of time, evolving narratives, and shifting relationships. Capturing this temporal complexity within cultural heritage (CH) knowledge graphs is essential for preserving the dynamic nature of human heritage. However, standard RDF predicates fail to effectively model the temporal aspects of cultural data, such as changing facts, evolving relationships, and temporal concepts. Over the past two decades, a variety of RDF-based approaches have been proposed to address this limitation, yet there is little guidance on which method best suits a specific CH context. This paper presents a systematic evaluation of temporal RDF modelling approaches from a CH perspective. Based on an analysis of real-world CH use cases, core temporal requirements are identified that reflect both modelling expressivity and practical concerns. Six prominent approaches - RDF*, tRDF, Named Graphs, Singleton Property, N-ary Relations, and 4D Fluents - are assessed against these requirements. Our findings reveal that no single solution fits all scenarios, but suitable approaches can be selected based on project-specific priorities. To support practitioners, a decision-support tool is introduced that guides them in selecting the most suitable extension for their specific needs. This work provides practical guidance for CH modelling and contributes to the broader development of temporally aware Linked Data.
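The contrast between two of the surveyed patterns can be sketched with plain Python tuples (a hypothetical mini-example: all `ex:` names and the fact itself are invented for illustration, and no RDF library is used):

```python
# Hypothetical fact: "painting P was located in museum M from 1900 to 1945",
# encoded with two of the patterns the survey compares, as plain triples/quads.

# N-ary relation: reify the statement as its own node carrying a time span.
nary = [
    ("ex:stmt1", "rdf:type", "ex:Location"),
    ("ex:stmt1", "ex:subject", "ex:paintingP"),
    ("ex:stmt1", "ex:object", "ex:museumM"),
    ("ex:stmt1", "ex:from", "1900"),
    ("ex:stmt1", "ex:to", "1945"),
]

# Named graph: keep the plain triple, and let the graph name carry validity.
named_graph = [
    ("ex:paintingP", "ex:locatedIn", "ex:museumM", "ex:graph_1900_1945"),
]

print(len(nary), len(named_graph))  # 5 1
```

The trade-off the paper evaluates is visible even here: the n-ary pattern keeps everything in standard triples but loses the direct `locatedIn` predicate, while the named-graph pattern keeps the triple intact but moves temporal information out of the triple store's default graph.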

Cite as

Oleksandra Bruns, Jörg Waitelonis, Jeff Z. Pan, and Harald Sack. Temporal Modelling in Cultural Heritage Knowledge Graphs: Use Cases, Requirements, Evaluation, and Decision Support. In Transactions on Graph Data and Knowledge (TGDK), Volume 4, Issue 1, pp. 2:1-2:46, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@Article{bruns_et_al:TGDK.4.1.2,
  author =	{Bruns, Oleksandra and Waitelonis, J\"{o}rg and Pan, Jeff Z. and Sack, Harald},
  title =	{{Temporal Modelling in Cultural Heritage Knowledge Graphs: Use Cases, Requirements, Evaluation, and Decision Support}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:46},
  ISSN =	{2942-7517},
  year =	{2026},
  volume =	{4},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.4.1.2},
  URN =		{urn:nbn:de:0030-drops-256871},
  doi =		{10.4230/TGDK.4.1.2},
  annote =	{Keywords: Temporal Data Representation, RDF Extensions, Cultural Heritage, Knowledge Graphs}
}
Document
Density Matters: A Complexity Dichotomy of Deleting Edges to Bound Subgraph Density

Authors: Matthias Bentert, Tom-Lukas Breitkopf, Vincent Froese, Anton Herrmann, and André Nichterlein

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
We study τ-Bounded-Density Edge Deletion (τ-BDED), where, given an undirected graph G, the task is to remove as few edges as possible to obtain a graph G' in which no subgraph has density more than τ. The density of a (sub)graph is the number of edges divided by the number of vertices. This problem was recently introduced and shown to be NP-hard for τ ∈ {2/3, 3/4, 1 + 1/25}, but polynomial-time solvable for τ ∈ {0, 1/2, 1} [Bazgan et al., JCSS 2025]. We provide a complete dichotomy with respect to the target density τ: 1) If 2τ ∈ ℕ (half-integral target density) or τ < 2/3, then τ-BDED is polynomial-time solvable. 2) Otherwise, τ-BDED is NP-hard. We complement the NP-hardness with fixed-parameter tractability with respect to the treewidth of G. Moreover, for integral target density τ ∈ ℕ, we show that τ-BDED is solvable in randomized O(m^{1 + o(1)}) time. Our algorithmic results are based on a reduction to a new general flow problem on restricted networks that, depending on τ, can be solved via Maximum s-t-Flow or General Factors. We believe this connection between these variants of flow and matching to be of independent interest.
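The density objective above can be pinned down with a tiny brute-force check (an illustrative sketch of the definition only, not the paper's algorithm; exponential in the number of vertices):

```python
from itertools import combinations

def max_subgraph_density(vertices, edges):
    """Maximum density |E(S)| / |S| over all non-empty induced subgraphs,
    computed by enumerating every vertex subset. Tiny inputs only."""
    best = 0.0
    for k in range(1, len(vertices) + 1):
        for sub in combinations(vertices, k):
            s = set(sub)
            m = sum(1 for u, v in edges if u in s and v in s)
            best = max(best, m / k)
    return best

# A triangle has 3 edges on 3 vertices, so its maximum density is 1.0
print(max_subgraph_density([1, 2, 3], [(1, 2), (2, 3), (1, 3)]))  # 1.0
```

An instance is a yes-instance of τ-BDED for budget b exactly when some set of at most b edges can be removed so that this value drops to at most τ.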

Cite as

Matthias Bentert, Tom-Lukas Breitkopf, Vincent Froese, Anton Herrmann, and André Nichterlein. Density Matters: A Complexity Dichotomy of Deleting Edges to Bound Subgraph Density. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{bentert_et_al:LIPIcs.STACS.2026.12,
  author =	{Bentert, Matthias and Breitkopf, Tom-Lukas and Froese, Vincent and Herrmann, Anton and Nichterlein, Andr\'{e}},
  title =	{{Density Matters: A Complexity Dichotomy of Deleting Edges to Bound Subgraph Density}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{12:1--12:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.12},
  URN =		{urn:nbn:de:0030-drops-255012},
  doi =		{10.4230/LIPIcs.STACS.2026.12},
  annote =	{Keywords: Transshipment, Maximum Flow, General Factors, Matching, Graph Modification Problem}
}
Document
Line Cover and Related Problems

Authors: Matthias Bentert, Fedor V. Fomin, Petr A. Golovach, Souvik Saha, Sanjay Seetharaman, and Anannya Upasana

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
We study several extensions of the classic Line Cover problem of covering a set of n points in the plane with k lines. Line Cover is known to be NP-hard and our focus is on two natural generalizations: (1) Line Clustering, where the objective is to find k lines in the plane that minimize the sum of squares of distances of a given set of input points to the closest line, and (2) Hyperplane Cover, where the goal is to cover n points in ℝ^d by k hyperplanes. We also consider the more general Projective Clustering problem, which unifies both of these and has numerous applications in machine learning, data mining, and computational geometry. In this problem one seeks k affine subspaces of dimension r minimizing the sum of squares of distances of a given set of n points in ℝ^d to the closest point within one of the k affine subspaces. Our main contributions reveal interesting differences in the parameterized complexity of these problems. While Line Cover is fixed-parameter tractable parameterized by the number k of lines in the solution, we show that Line Clustering is W[1]-hard when parameterized by k and rule out algorithms of running time n^{o(k)} under the Exponential Time Hypothesis. Hyperplane Cover is known to be NP-hard even when d = 2 and by the work of Langerman and Morin [Discrete & Computational Geometry, 2005], it is FPT parameterized by k and d. We complement this result by establishing that Hyperplane Cover is W[2]-hard when parameterized by only k. We complement our hardness results by presenting an algorithm for Projective Clustering. We show that this problem is solvable in n^{𝒪(dk(r+1))} time. Not only does this yield an upper bound for Line Clustering that asymptotically matches our lower bound, but it also significantly extends the seminal work on k-Means Clustering (the special case r = 0) by Inaba, Katoh, and Imai [SoCG 1994].
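The Line Clustering objective can be made concrete with a direct cost evaluation (a sketch; the normalized (a, b, c) line representation and the example points are our own, and finding the optimal k lines is the hard part the paper studies):

```python
def line_clustering_cost(points, lines):
    """Sum over all points of the squared distance to the closest line.

    Each line is (a, b, c) with a*x + b*y + c = 0 and a^2 + b^2 = 1,
    so |a*x + b*y + c| is the point-to-line distance."""
    return sum(min((a * x + b * y + c) ** 2 for a, b, c in lines)
               for x, y in points)

pts = [(0, 0), (1, 0), (2, 1)]
# Two lines: the x-axis (0, 1, 0) and the vertical line x = 2, i.e. (1, 0, -2).
# Every point lies on one of them, so the cost is 0.
print(line_clustering_cost(pts, [(0, 1, 0), (1, 0, -2)]))  # 0
```

Projective Clustering generalizes this by replacing lines with r-dimensional affine subspaces of ℝ^d; the special case r = 0 (points as centers) is k-Means.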

Cite as

Matthias Bentert, Fedor V. Fomin, Petr A. Golovach, Souvik Saha, Sanjay Seetharaman, and Anannya Upasana. Line Cover and Related Problems. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 13:1-13:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{bentert_et_al:LIPIcs.STACS.2026.13,
  author =	{Bentert, Matthias and Fomin, Fedor V. and Golovach, Petr A. and Saha, Souvik and Seetharaman, Sanjay and Upasana, Anannya},
  title =	{{Line Cover and Related Problems}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{13:1--13:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.13},
  URN =		{urn:nbn:de:0030-drops-255023},
  doi =		{10.4230/LIPIcs.STACS.2026.13},
  annote =	{Keywords: Point Line Cover, Projective Clustering, W-hardness, XP algorithm}
}
Document
List Coloring Ordered Graphs with Forbidden Induced Subgraphs

Authors: Marta Piecyk and Paweł Rzążewski

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
In the List k-Coloring problem we are given a graph G whose every vertex is equipped with a list, i.e., a subset of {1,…,k}. We need to decide whether G admits a proper coloring in which every vertex receives a color from its list. The complexity of the problem in classes defined by forbidding induced subgraphs is a widely studied topic in algorithmic graph theory. Recently, Hajebi, Li, and Spirkl [SIAM J. Discr. Math. 38 (2024)] initiated the study of List 3-Coloring in ordered graphs, i.e., graphs with a fixed linear ordering of the vertices. Forbidding ordered induced subgraphs allows us to investigate the boundary of tractability more closely. We continue this direction of research, focusing mostly on the case of List 4-Coloring. We present several algorithmic and hardness results, which altogether provide an almost complete dichotomy for classes defined by forbidding one fixed ordered graph: our investigations leave one minimal open case.
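The problem definition can be stated as an exhaustive check (an illustrative sketch only; the paper is about when this exponential search can be avoided):

```python
from itertools import product

def list_colorable(n, edges, lists):
    """Decide List k-Coloring by trying every list-respecting assignment.

    lists[v] is the set of colors allowed at vertex v (vertices 0..n-1).
    Exponential time; meant only to pin down the definition."""
    for assignment in product(*(sorted(lists[v]) for v in range(n))):
        if all(assignment[u] != assignment[v] for u, v in edges):
            return True
    return False

# Path 0-1-2 where the middle vertex is forced to color 1: colorable (2, 1, 2)
print(list_colorable(3, [(0, 1), (1, 2)], [{1, 2}, {1}, {1, 2}]))  # True
# Triangle with only two colors available everywhere: not colorable
print(list_colorable(3, [(0, 1), (1, 2), (0, 2)], [{1, 2}] * 3))  # False
```

In the ordered setting studied here, the input additionally fixes a linear order on the vertices, and the forbidden patterns are ordered induced subgraphs.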

Cite as

Marta Piecyk and Paweł Rzążewski. List Coloring Ordered Graphs with Forbidden Induced Subgraphs. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 74:1-74:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{piecyk_et_al:LIPIcs.STACS.2026.74,
  author =	{Piecyk, Marta and Rz\k{a}\.{z}ewski, Pawe{\l}},
  title =	{{List Coloring Ordered Graphs with Forbidden Induced Subgraphs}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{74:1--74:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.74},
  URN =		{urn:nbn:de:0030-drops-255634},
  doi =		{10.4230/LIPIcs.STACS.2026.74},
  annote =	{Keywords: coloring, ordered graphs, forbidden induced subgraphs}
}
Document
Mind the Gap. Doubling Constant Parametrization of Weighted Problems: TSP, Max-Cut, and More

Authors: Mihail Stoian

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
Despite much research, hard weighted problems still resist super-polynomial improvements over their textbook solution. On the other hand, the unweighted versions of these problems have recently witnessed the sought-after speedups. Currently, the only way to repurpose the algorithm of the unweighted version for the weighted version is to employ a polynomial embedding of the input weights. This, however, introduces a pseudo-polynomial factor into the running time, which becomes impractical for arbitrarily weighted instances. In this paper, we introduce a new way to repurpose the algorithm of the unweighted problem. Specifically, we show that the time complexity of several well-known NP-hard problems operating over the (min, +) and (max, +) semirings, such as TSP, Weighted Max-Cut, and Edge-Weighted k-Clique, is proportional to that of their unweighted versions when the set of input weights has small doubling. We achieve this by a meta-algorithm that converts the input weights into polynomially bounded integers using the recent constructive Freiman’s theorem by Randolph and Węgrzycki [ESA 2024] before applying the polynomial embedding.

Cite as

Mihail Stoian. Mind the Gap. Doubling Constant Parametrization of Weighted Problems: TSP, Max-Cut, and More. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 79:1-79:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{stoian:LIPIcs.STACS.2026.79,
  author =	{Stoian, Mihail},
  title =	{{Mind the Gap. Doubling Constant Parametrization of Weighted Problems: TSP, Max-Cut, and More}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{79:1--79:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.79},
  URN =		{urn:nbn:de:0030-drops-255680},
  doi =		{10.4230/LIPIcs.STACS.2026.79},
  annote =	{Keywords: doubling constant parametrization, weighted problems, traveling salesman, weighted max-cut, edge-weighted k-clique}
}
Document
Protrusion Decompositions Revisited: Uniform Lossy Kernels for Reducing Treewidth and Linear Kernels for Hitting Disconnected Minors

Authors: Roohani Sharma and Michał Włodarczyk

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
Let ℱ be a finite family of graphs. In the ℱ-Deletion problem, one is given a graph G and an integer k, and the goal is to find k vertices whose deletion results in a graph with no minor from the family ℱ. This may be regarded as a far-reaching generalization of Vertex Cover and Feedback Vertex Set. In their seminal work, Fomin, Lokshtanov, Misra & Saurabh [FOCS 2012] gave a polynomial kernel for this problem when the family ℱ contains a planar graph. As the size of their kernel is g(ℱ) ⋅ k^{f(ℱ)}, a natural follow-up question was whether the dependence on ℱ in the exponent of k can be avoided. The answer turned out to be negative: Giannopoulou, Jansen, Lokshtanov & Saurabh [TALG 2017] proved that this is already inevitable for the special case of the Treewidth-η-Deletion problem. In this work, we show that this non-uniformity can be avoided at the expense of a small loss. First, we present a simple 2-approximate kernelization algorithm for Treewidth-η-Deletion with a kernel size of g(η) ⋅ k⁶. Next, we show that the approximation factor can be made arbitrarily close to 1 if we settle for a kernelization protocol with 𝒪(1) calls to an oracle that solves instances of size bounded by a uniform polynomial in k. We extend the above results to general ℱ-Deletion, whenever ℱ contains a planar graph, as long as an oracle for Treewidth-η-Deletion is available for small instances. Notably, all our constants are computable functions of ℱ, and our techniques also work when some graphs in ℱ may be disconnected. Our results rely on two novel techniques. First, we transform so-called "near-protrusion decompositions" into true protrusion decompositions by sacrificing a small accuracy loss. Second, we show how to optimally compress such a decomposition with respect to general ℱ-Deletion. Using our second technique, we also obtain linear kernels on sparse graph classes when ℱ contains a planar graph, whereas the previously known theorems required all graphs in ℱ to be connected. Specifically, we generalize the kernelization algorithm of Kim, Langer, Paul, Reidl, Rossmanith, Sau & Sikdar [TALG 2015] on graph classes that exclude a topological minor.

Cite as

Roohani Sharma and Michał Włodarczyk. Protrusion Decompositions Revisited: Uniform Lossy Kernels for Reducing Treewidth and Linear Kernels for Hitting Disconnected Minors. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 78:1-78:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{sharma_et_al:LIPIcs.STACS.2026.78,
  author =	{Sharma, Roohani and W{\l}odarczyk, Micha{\l}},
  title =	{{Protrusion Decompositions Revisited: Uniform Lossy Kernels for Reducing Treewidth and Linear Kernels for Hitting Disconnected Minors}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{78:1--78:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.78},
  URN =		{urn:nbn:de:0030-drops-255674},
  doi =		{10.4230/LIPIcs.STACS.2026.78},
  annote =	{Keywords: kernelization, graph minors, treewidth, uniform kernels, minor hitting}
}
Document
Simple Circuit Extensions for XOR in PTIME

Authors: Marco Carmosino, Ngu Dang, and Tim Jackman

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
The Minimum Circuit Size Problem for Partial Functions (MCSP^*) is hard assuming the Exponential Time Hypothesis (ETH) (Ilango, 2020). This breakthrough result leveraged a characterization of the optimal {∧, ∨, ¬} circuits for n-bit OR (OR_n) and a reduction from the partial f-Simple Extension Problem where f = OR_n. It remains open to extend that reduction to show ETH-hardness of total MCSP. However, Ilango observed that the total f-Simple Extension Problem is easy whenever f is computed by read-once formulas (like OR_n). Therefore, extending Ilango’s proof to total MCSP would require replacing OR_n with a more complex but similarly well-understood Boolean function. This work shows that the f-Simple Extension Problem remains easy when f is the next natural candidate: XOR_n. We first develop a fixed-parameter tractable algorithm for the f-Simple Extension Problem that is efficient whenever the optimal circuits for f are (1) linear in size, (2) polynomially "few" and efficiently enumerable in the truth-table size (up to isomorphism and permutation of inputs), and (3) of fan-out bounded by a constant. XOR_n satisfies all three of these conditions. When ¬ gates count towards circuit size, optimal XOR_n circuits are binary trees of n-1 subcircuits computing (¬)XOR₂ (Kombarov, 2011). We extend this characterization to the case when ¬ gates do not contribute to the circuit size. Thus, the XOR-Simple Extension Problem is in polynomial time under both measures of circuit complexity. We conclude by discussing conjectures about the complexity of the f-Simple Extension Problem for each explicit function f with known and unrestricted circuit lower bounds over the DeMorgan basis. Examining the conditions under which our Simple Extension Solver is efficient, we argue that multiplexer functions (MUX) are the most promising candidates for ETH-hardness of a Simple Extension Problem, towards proving ETH-hardness of total MCSP.

Cite as

Marco Carmosino, Ngu Dang, and Tim Jackman. Simple Circuit Extensions for XOR in PTIME. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 23:1-23:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{carmosino_et_al:LIPIcs.STACS.2026.23,
  author =	{Carmosino, Marco and Dang, Ngu and Jackman, Tim},
  title =	{{Simple Circuit Extensions for XOR in PTIME}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{23:1--23:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.23},
  URN =		{urn:nbn:de:0030-drops-255127},
  doi =		{10.4230/LIPIcs.STACS.2026.23},
  annote =	{Keywords: Minimum Circuit Size Problem, Circuit Lower Bounds, Exponential Time Hypothesis}
}
Document
Foremost, Fastest, Shortest: Temporal Graph Realization Under Various Path Metrics

Authors: Justine Cauvi, Nils Morawietz, and Laurent Viennot

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
In this work, we follow the current trend on temporal graph realization, where one is given a property P and the goal is to determine whether there is a temporal graph, that is, a graph whose edge set changes over time, with property P. We consider the problems in which the given property P is a prescribed matrix for the duration, length, or earliest arrival time of pairwise temporal paths. This means that we are given a matrix D and ask whether there is a temporal graph such that for any ordered pair of vertices (s,t), D_{s,t} equals the duration (length, or earliest arrival time, respectively) of any temporal path from s to t minimizing that specific temporal path metric. For shortest and earliest arrival temporal paths, we are, as far as we know, the first to consider these problems. We analyze these problems in many settings: strict and non-strict paths, periodic and non-periodic temporal graphs, and a limited number of labels per edge (a limited number of occurrences per edge over time). In contrast to all other path metrics, we show that for earliest arrival times, we can achieve polynomial-time algorithms in periodic and non-periodic temporal graphs and for strict and non-strict paths. However, the problem becomes NP-hard when the matrix does not contain a single integer but a set or range of possible allowed values. As we show, the problem can still be solved efficiently in this scenario when the number of entries with more than one value is small; that is, we develop an FPT algorithm for the number of such entries. For fastest paths, we achieve new hardness results that answer an open question by Klobas, Mertzios, Molter, and Spirakis [Theor. Comput. Sci. '25] about the parameterized complexity of the problem with respect to the vertex cover number and significantly improve over a previous hardness result for the feedback vertex set number. When considering shortest paths, we show that the periodic versions are polynomial-time solvable whereas the non-periodic versions become NP-hard.
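The "earliest arrival" metric itself is easy to compute once a temporal graph is fixed (an illustrative sketch for strict paths with positive integer time labels; the realization problem, i.e. going from a matrix back to a graph, is what the paper studies):

```python
def earliest_arrival(n, labeled_edges, s):
    """Earliest arrival times from s via strict temporal paths.

    labeled_edges: list of (u, v, t) meaning edge {u, v} is present at time t,
    with t >= 1. Processing edges in increasing time order suffices, because
    a strict path must use strictly increasing time labels."""
    INF = float("inf")
    arr = [INF] * n
    arr[s] = 0
    for u, v, t in sorted(labeled_edges, key=lambda e: e[2]):
        # We may traverse the edge at time t only if we arrived strictly earlier.
        if arr[u] < t and t < arr[v]:
            arr[v] = t
        if arr[v] < t and t < arr[u]:
            arr[u] = t
    return arr

# Edges (0,1) at time 1, (1,2) at time 2, (0,2) at time 5: vertex 2 is
# reached at time 2 via the two-hop path, beating the direct time-5 edge.
print(earliest_arrival(3, [(0, 1, 1), (1, 2, 2), (0, 2, 5)], 0))  # [0, 1, 2]
```

For non-strict paths one would relax the comparison on the departure side to allow equal time labels along a path.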

Cite as

Justine Cauvi, Nils Morawietz, and Laurent Viennot. Foremost, Fastest, Shortest: Temporal Graph Realization Under Various Path Metrics. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 24:1-24:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{cauvi_et_al:LIPIcs.STACS.2026.24,
  author =	{Cauvi, Justine and Morawietz, Nils and Viennot, Laurent},
  title =	{{Foremost, Fastest, Shortest: Temporal Graph Realization Under Various Path Metrics}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{24:1--24:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.24},
  URN =		{urn:nbn:de:0030-drops-255139},
  doi =		{10.4230/LIPIcs.STACS.2026.24},
  annote =	{Keywords: network design, temporal paths, foremost paths, fastest paths, shortest paths, non-strict paths, periodic temporal graphs}
}
Document
Homomorphism Indistinguishability, Multiplicity Automata Equivalence, and Polynomial Identity Testing

Authors: Marek Černý and Tim Seppelt

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
Two graphs G and H are homomorphism indistinguishable over a graph class ℱ if they admit the same number of homomorphisms from every graph F ∈ ℱ. Many graph isomorphism relaxations such as (quantum) isomorphism and cospectrality can be characterised as homomorphism indistinguishability over specific graph classes. Thereby, the problems HomInd(ℱ) of deciding homomorphism indistinguishability over ℱ subsume diverse graph isomorphism relaxations whose complexities range from logspace to undecidable. Establishing the first general result on the complexity of HomInd(ℱ), Seppelt (MFCS 2024) showed that HomInd(ℱ) is in randomised polynomial time for every graph class ℱ of bounded treewidth that can be defined in counting monadic second-order logic CMSO₂. We show that this algorithm is conditionally optimal, i.e. it cannot be derandomised unless polynomial identity testing is in P. For CMSO₂-definable graph classes ℱ of bounded pathwidth, we improve the previous complexity upper bound for HomInd(ℱ) from P to C_=L and show that this is tight. Secondly, we establish a connection between homomorphism indistinguishability and multiplicity automata equivalence which allows us to pinpoint the complexity of the latter problem as C_=L-complete.
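The underlying quantity, the number of homomorphisms from F to G, can be computed naively (an illustrative sketch of the definition; runs in |V(G)|^|V(F)| time, which is exactly the cost the paper's machinery avoids):

```python
from itertools import product

def hom_count(F_vertices, F_edges, G_vertices, G_adj):
    """Count maps h: V(F) -> V(G) such that every edge (u, v) of F
    is sent to an edge of G. G_adj is a set of ordered pairs, symmetric
    for an undirected G."""
    count = 0
    for image in product(G_vertices, repeat=len(F_vertices)):
        h = dict(zip(F_vertices, image))
        if all((h[u], h[v]) in G_adj for u, v in F_edges):
            count += 1
    return count

# Homomorphisms from a single edge into a triangle: any ordered pair of
# distinct vertices works, so there are 3 * 2 = 6.
tri = {(a, b) for a in range(3) for b in range(3) if a != b}
print(hom_count([0, 1], [(0, 1)], range(3), tri))  # 6
```

Two graphs G and H are homomorphism indistinguishable over ℱ precisely when these counts agree for every F ∈ ℱ.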

Cite as

Marek Černý and Tim Seppelt. Homomorphism Indistinguishability, Multiplicity Automata Equivalence, and Polynomial Identity Testing. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{cerny_et_al:LIPIcs.STACS.2026.25,
  author =	{\v{C}ern\'{y}, Marek and Seppelt, Tim},
  title =	{{Homomorphism Indistinguishability, Multiplicity Automata Equivalence, and Polynomial Identity Testing}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.25},
  URN =		{urn:nbn:de:0030-drops-255144},
  doi =		{10.4230/LIPIcs.STACS.2026.25},
  annote =	{Keywords: treewidth, Courcelle’s theorem, logspace, multiplicity automata, polynomial identity testing}
}
Document
When Is Local Search Both Effective and Efficient?

Authors: Artem Kaznatcheev and Sofia Vazquez Alferez

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
Combinatorial optimization problems implicitly define fitness landscapes that combine the numeric structure of the "fitness" function to be maximized with the combinatorial structure of which assignments are "adjacent". Local search starts at an assignment in this landscape and successively moves between assignments until no further improvement is possible among the adjacent assignments. Classic analyses of local search algorithms have focused more on the question of effectiveness ("did we find a good solution?") and often implicitly assumed that there are no doubts about their efficiency ("did we find it quickly?"). But there are many reasons to doubt the efficiency of local search. Even if we focus on fitness landscapes on the hypercube that are single peaked on every subcube (known as semismooth fitness landscapes, completely unimodal pseudo-Boolean functions, or acyclic unique sink orientations), where effectiveness is obvious, many local search algorithms are known to be inefficient. Since fitness landscapes are unwieldy exponentially large objects, we focus on their polynomial-sized representations by instances of valued constraint satisfaction problems (VCSP). We define a "direction" for valued constraints such that directed VCSPs generate semismooth fitness landscapes. We call directed VCSPs oriented if they do not have any pair of variables with arcs in both directions. Since recognizing whether a VCSP instance is directed or oriented is coNP-complete, we generalize oriented VCSPs to conditionally-smooth fitness landscapes, where the structural property of "conditionally-smooth" is recognizable in polynomial time for a VCSP instance. We prove that many popular local search algorithms like random ascent, simulated annealing, history-based rules, jumping rules, and the Kernighan-Lin heuristic are very efficient on conditionally-smooth landscapes. But conditionally-smooth landscapes are still expressive enough that other well-regarded local search algorithms like steepest ascent and random facet require a super-polynomial number of steps to find the fitness peak.
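The local search framework being analyzed can be sketched in a few lines (an illustrative random-ascent implementation on the hypercube; the fitness function and seed are our own, and the paper's results concern how many such steps are needed on structured landscapes):

```python
import random

def random_ascent(f, n, seed=0):
    """Random-ascent local search on {0,1}^n: repeatedly flip a uniformly
    random improving bit until no single-bit flip increases f."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    while True:
        improving = []
        for i in range(n):
            y = x.copy()
            y[i] ^= 1  # flip bit i and test the adjacent assignment
            if f(y) > f(x):
                improving.append(i)
        if not improving:
            return x  # local (here: global) peak reached
        x[rng.choice(improving)] ^= 1

# A smooth landscape f(x) = sum(x) has its unique peak at the all-ones vector.
print(random_ascent(lambda x: sum(x), 5))  # [1, 1, 1, 1, 1]
```

On a semismooth landscape every run terminates at the unique peak; the question the paper answers is on which landscape classes rules like this one provably take only polynomially many steps.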

Cite as

Artem Kaznatcheev and Sofia Vazquez Alferez. When Is Local Search Both Effective and Efficient?. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 59:1-59:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{kaznatcheev_et_al:LIPIcs.STACS.2026.59,
  author =	{Kaznatcheev, Artem and Vazquez Alferez, Sofia},
  title =	{{When Is Local Search Both Effective and Efficient?}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{59:1--59:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.59},
  URN =		{urn:nbn:de:0030-drops-255480},
  doi =		{10.4230/LIPIcs.STACS.2026.59},
  annote =	{Keywords: valued constraint satisfaction problem, local search, algorithm analysis, constraint graphs, pseudo-Boolean functions, parameterized complexity}
}
Document
Computing Twin-Width via Treedepth and Vertex Integrity

Authors: Robert Ganian and Mathis Rocton

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
Twin-width is a graph parameter that has become central to explaining the fixed-parameter tractability of first-order model checking across many graph classes. Despite its algorithmic importance, computing twin-width remains poorly understood: even recognizing graphs of twin-width at most four is NP-hard, and no fixed-parameter approximations parameterized by twin-width itself are known. A recent approach towards breaking this barrier focuses on first developing fixed-parameter algorithms for computing or approximating twin-width under parameterizations distinct from twin-width. Our first result establishes that approximating twin-width is fixed-parameter tractable when parameterized by treedepth, thereby breaking the long-standing barrier that all previous tractable parameterizations were based on deletion distance. The proof proceeds via oriented twin-width, yielding the first constructive evidence that this variant may be easier to handle algorithmically. As our second main result, we show that computing twin-width exactly is fixed-parameter tractable with respect to vertex integrity. This constitutes the first non-trivial parameterized algorithm for computing optimal contraction sequences.

Cite as

Robert Ganian and Mathis Rocton. Computing Twin-Width via Treedepth and Vertex Integrity. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 42:1-42:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{ganian_et_al:LIPIcs.STACS.2026.42,
  author =	{Ganian, Robert and Rocton, Mathis},
  title =	{{Computing Twin-Width via Treedepth and Vertex Integrity}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{42:1--42:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.42},
  URN =		{urn:nbn:de:0030-drops-255318},
  doi =		{10.4230/LIPIcs.STACS.2026.42},
  annote =	{Keywords: twin-width, fixed-parameter algorithms, treedepth, vertex integrity}
}
Document
Symmetric Algebraic Circuits and Homomorphism Polynomials

Authors: Anuj Dawar, Benedikt Pago, and Tim Seppelt

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
The central open question of algebraic complexity is whether VP ≠ VNP, that is, whether the permanent cannot be represented by families of polynomial-size algebraic circuits. For symmetric algebraic circuits, this has been confirmed by Dawar and Wilsenach (2020), who showed exponential lower bounds on the size of symmetric circuits for the permanent. In this work, we set out to develop a more general symmetric algebraic complexity theory. Our main result is that a family of symmetric polynomials admits small symmetric circuits if and only if it can be written as a linear combination of homomorphism counting polynomials of graphs of bounded treewidth. We also establish a relationship between the symmetric complexity of subgraph counting polynomials and the vertex cover number of the pattern graph. As a concrete example, we examine the symmetric complexity of immanant families (a generalisation of the determinant and permanent) and show that a known conditional dichotomy due to Curticapean (2021) holds unconditionally in the symmetric setting.
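For readers unfamiliar with homomorphism counting, the combinatorial quantity underlying the homomorphism counting polynomials mentioned above can be sketched in a few lines of Python. This brute-force counter and its names are ours for illustration; the paper works with polynomial-weighted versions of these counts, not this unweighted special case.

```python
from itertools import product

def count_homomorphisms(pattern_edges, host_edges, n_pattern, n_host):
    """Count homomorphisms from an undirected pattern graph to a host
    graph by brute force: a vertex map phi is a homomorphism iff it
    sends every pattern edge to a host edge."""
    # Symmetrize the host edge set so (u, v) and (v, u) both count.
    host = set(host_edges) | {(v, u) for u, v in host_edges}
    count = 0
    for phi in product(range(n_host), repeat=n_pattern):
        if all((phi[u], phi[v]) in host for u, v in pattern_edges):
            count += 1
    return count

# Homomorphisms from a single edge K2 into a triangle K3: 3 * 2 = 6.
hom = count_homomorphisms([(0, 1)], [(0, 1), (1, 2), (0, 2)], 2, 3)
```

The brute-force loop takes n_host^n_pattern steps; bounded treewidth of the pattern is exactly what allows such counts to be computed efficiently, which is the structural condition appearing in the paper's main result.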

Cite as

Anuj Dawar, Benedikt Pago, and Tim Seppelt. Symmetric Algebraic Circuits and Homomorphism Polynomials. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 46:1-46:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{dawar_et_al:LIPIcs.ITCS.2026.46,
  author =	{Dawar, Anuj and Pago, Benedikt and Seppelt, Tim},
  title =	{{Symmetric Algebraic Circuits and Homomorphism Polynomials}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{46:1--46:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.46},
  URN =		{urn:nbn:de:0030-drops-253330},
  doi =		{10.4230/LIPIcs.ITCS.2026.46},
  annote =	{Keywords: algebraic complexity, finite model theory, symmetric circuits, homomorphism counting, graph homomorphism, treewidth, counting width, first-order logic with counting quantifiers}
}
Document
On the PTAS Complexity of Multidimensional Knapsack

Authors: Ilan Doron-Arad, Ariel Kulik, and Pasin Manurangsi

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
We study the d-dimensional knapsack problem. We are given a set of items, each with a d-dimensional cost vector and a profit, along with a d-dimensional budget vector. The goal is to select a set of items that does not exceed the budget in any dimension and maximizes the total profit. A polynomial-time approximation scheme (PTAS) with running time n^{Θ(d/ε)} has long been known for this problem, where ε is the error parameter and n is the encoding size. Despite decades of active research, the best running time of a PTAS has remained O(n^{⌈d/ε⌉ - d}). Unfortunately, existing lower bounds only cover the special case of two dimensions (d = 2), and do not answer whether there is an n^{o(d/ε)}-time PTAS for larger values of d. In this work, we show that the running time of the best-known PTAS cannot be improved by more than a polylogarithmic factor assuming the Exponential Time Hypothesis (ETH). Our techniques are based on a robust reduction from 2-CSP, which embeds 2-CSP constraints into a desired number of dimensions. Then, using a recent result of [Bafna, Karthik, and Minzer, STOC'25], we exhibit a tight trade-off between d and ε for all regimes of the parameters, assuming d is sufficiently large. Informally, our result also shows that under ETH, for any function f there is no f(d/ε) ⋅ n^{õ(d/ε)}-time (1-ε)-approximation for d-dimensional knapsack, where n is the number of items and õ hides polylogarithmic factors in d/ε.
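To make the problem definition concrete, here is a small exact solver for d-dimensional knapsack by exhaustive search. It is exponential in the number of items and purely illustrative (the paper's subject is the n^{Θ(d/ε)}-time PTAS, not this brute force); the function name and example instance are ours.

```python
from itertools import combinations

def knapsack_d_bruteforce(costs, profits, budget):
    """Exact d-dimensional knapsack by trying all item subsets.

    costs[i] is the d-dimensional cost vector of item i, profits[i] its
    profit, and budget the d-dimensional budget vector.  A subset is
    feasible iff its total cost stays within the budget in every
    dimension; we return the maximum total profit over feasible subsets.
    """
    n, d = len(costs), len(budget)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            total = [sum(costs[i][j] for i in subset) for j in range(d)]
            if all(total[j] <= budget[j] for j in range(d)):
                best = max(best, sum(profits[i] for i in subset))
    return best

# Two dimensions, three items: items 0 and 2 together cost (3, 3) <= (4, 4)
# and yield profit 7, while any superset violates some budget dimension.
best = knapsack_d_bruteforce(
    costs=[(2, 1), (3, 3), (1, 2)], profits=[4, 5, 3], budget=(4, 4))
```

The feasibility check "within budget in every dimension" is exactly the d-fold constraint that makes the problem harder than ordinary (d = 1) knapsack.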

Cite as

Ilan Doron-Arad, Ariel Kulik, and Pasin Manurangsi. On the PTAS Complexity of Multidimensional Knapsack. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 50:1-50:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{doronarad_et_al:LIPIcs.ITCS.2026.50,
  author =	{Doron-Arad, Ilan and Kulik, Ariel and Manurangsi, Pasin},
  title =	{{On the PTAS Complexity of Multidimensional Knapsack}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{50:1--50:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.50},
  URN =		{urn:nbn:de:0030-drops-253377},
  doi =		{10.4230/LIPIcs.ITCS.2026.50},
  annote =	{Keywords: d-dimensional Knapsack, Multidimensional Knapsack, PTAS, CSP}
}
Document
Efficient Algorithms for the Disjoint Shortest Paths Problem and Its Extensions

Authors: Keerti Choudhary, Amit Kumar, and Lakshay Saggi

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
We study the 2-Disjoint Shortest Paths (2-DSP) problem: given a directed weighted graph and two terminal pairs (s₁,t₁) and (s₂,t₂), decide whether there exist vertex-disjoint shortest paths between the two pairs. Building on recent advances in disjoint shortest paths for DAGs and undirected graphs (Akmal et al. 2024), we present an O(mn log n)-time algorithm for this problem in weighted directed graphs that contain no negative or zero weight cycles. This significantly improves on the previously known O(m⁵n)-time bound (Berczi et al. 2017). Our approach exploits the algebraic structure of polynomials that enumerate shortest paths between terminal pairs. A key insight is that these polynomials admit a recursive decomposition, enabling efficient evaluation via dynamic programming over fields of characteristic two. Furthermore, we demonstrate how to report the corresponding paths in O(mn² log n) time. In addition, we extend our techniques to a more general setting: given two terminal pairs (s₁, t₁) and (s₂, t₂) in a directed graph, find the minimum possible number of vertex intersections between any shortest path from s₁ to t₁ and any shortest path from s₂ to t₂. We call this the Minimum 2-Disjoint Shortest Paths (Min-2-DSP) problem. We provide the first efficient algorithms for this problem, including an O(m²n³)-time algorithm for directed graphs with positive edge weights, and an O(m+n)-time algorithm for DAGs and undirected graphs. Moreover, if the number of intersecting vertices is at least one, we show that it is possible to report the paths in the same O(m+n) time. This is somewhat surprising, as there is no known o(mn)-time algorithm for explicitly reporting the paths when they are vertex-disjoint, which is left as an open problem in (Akmal et al. 2024).
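The feasibility condition in the 2-DSP definition (each path is shortest for its pair, and the two paths share no vertex) is easy to verify for a given pair of candidate paths, even though deciding existence is the hard part the paper addresses. The following Python sketch is a certificate checker of our own devising, not any algorithm from the paper; names like `is_2dsp_certificate` are illustrative.

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest distances in a directed graph with
    non-negative weights; adj[u] is a list of (v, w) pairs."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def is_2dsp_certificate(adj, p1, p2):
    """Check that paths p1 (s1 -> t1) and p2 (s2 -> t2), given as vertex
    lists, form a valid 2-DSP solution: each is a shortest path for its
    terminal pair and the two paths share no vertex."""
    def length(p):
        return sum(dict(adj[u])[v] for u, v in zip(p, p[1:]))
    ok1 = length(p1) == dijkstra(adj, p1[0])[p1[-1]]
    ok2 = length(p2) == dijkstra(adj, p2[0])[p2[-1]]
    return ok1 and ok2 and not set(p1) & set(p2)

# Two terminal pairs connected by vertex-disjoint two-edge shortest paths.
adj = {
    "s1": [("a", 1)], "a": [("t1", 1)],
    "s2": [("b", 1)], "b": [("t2", 1)],
}
ok = is_2dsp_certificate(adj, ["s1", "a", "t1"], ["s2", "b", "t2"])
```

Note the asymmetry the paper exploits: verifying a certificate like this is cheap, while finding the disjoint pair (or the minimum overlap in Min-2-DSP) requires the algebraic machinery of the abstract.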

Cite as

Keerti Choudhary, Amit Kumar, and Lakshay Saggi. Efficient Algorithms for the Disjoint Shortest Paths Problem and Its Extensions. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 39:1-39:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{choudhary_et_al:LIPIcs.ITCS.2026.39,
  author =	{Choudhary, Keerti and Kumar, Amit and Saggi, Lakshay},
  title =	{{Efficient Algorithms for the Disjoint Shortest Paths Problem and Its Extensions}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{39:1--39:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.39},
  URN =		{urn:nbn:de:0030-drops-253267},
  doi =		{10.4230/LIPIcs.ITCS.2026.39},
  annote =	{Keywords: Disjoint paths, Disjoint shortest paths, Algebraic graph algorithms}
}
