17 Search Results for "Bannach, Max"


Document
Faster Graph Algorithms Through DAG Compression

Authors: Max Bannach, Florian Andreas Marwitz, and Till Tantau

Published in: LIPIcs, Volume 289, 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)


Abstract
The runtime of graph algorithms such as depth-first search or Dijkstra’s algorithm is dominated by the fact that all edges of the graph need to be processed at least once, leading to prohibitive runtimes for large, dense graphs. We introduce a simple data structure for storing graphs (and more general structures) in a compressed manner using directed acyclic graphs (dags). We then show that numerous standard graph problems can be solved in time linear in the size of the dag compression of a graph, rather than in the number of edges of the graph. Crucially, many dense graphs, including but not limited to graphs of bounded twinwidth, have a dag compression of size linear in the number of vertices rather than edges. This insight allows us to improve the previous best results for the runtime of standard algorithms from quasi-linear to linear for the large class of graphs of bounded twinwidth, which includes all cographs, graphs of bounded treewidth, and graphs of bounded cliquewidth.
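
To make the intended speed-up concrete, here is a minimal sketch under a simplifying assumption: the edge set is given as a union of bicliques A_i × B_i, a toy stand-in for the paper's dag-based structure (class and method names are ours). A breadth-first search then expands each side of each biclique at most once, so the total work is linear in the size of the compressed representation rather than in the number of edges it encodes.

import java.util.*;

// BFS on a graph whose edge set is the union of bicliques A_i x B_i.
// Each side of each biclique is flushed into the queue at most once, so the
// running time is linear in sum(|A_i| + |B_i|) instead of sum(|A_i| * |B_i|).
// Illustrative sketch only; the paper uses a dag compression, not bicliques.
public class CompressedBfs {
    // bicliques[i][0] and bicliques[i][1] are the two sides of biclique i
    public static int[] bfs(int n, int[][][] bicliques, int source) {
        int[] dist = new int[n];
        Arrays.fill(dist, -1);
        List<List<int[]>> membership = new ArrayList<>();   // vertex -> (biclique, side)
        for (int v = 0; v < n; v++) membership.add(new ArrayList<>());
        for (int i = 0; i < bicliques.length; i++)
            for (int side = 0; side < 2; side++)
                for (int v : bicliques[i][side])
                    membership.get(v).add(new int[]{i, side});
        boolean[][] flushed = new boolean[bicliques.length][2];
        Deque<Integer> queue = new ArrayDeque<>();
        dist[source] = 0;
        queue.add(source);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            for (int[] m : membership.get(u)) {
                int i = m[0], other = 1 - m[1];        // neighbors sit on the other side
                if (flushed[i][other]) continue;       // that side was already enqueued
                flushed[i][other] = true;
                for (int w : bicliques[i][other])
                    if (dist[w] == -1) { dist[w] = dist[u] + 1; queue.add(w); }
            }
        }
        return dist;
    }
}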

Cite as

Max Bannach, Florian Andreas Marwitz, and Till Tantau. Faster Graph Algorithms Through DAG Compression. In 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 289, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.STACS.2024.8,
  author =	{Bannach, Max and Marwitz, Florian Andreas and Tantau, Till},
  title =	{{Faster Graph Algorithms Through DAG Compression}},
  booktitle =	{41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-311-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{289},
  editor =	{Beyersdorff, Olaf and Kant\'{e}, Mamadou Moustapha and Kupferman, Orna and Lokshtanov, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2024.8},
  URN =		{urn:nbn:de:0030-drops-197188},
  doi =		{10.4230/LIPIcs.STACS.2024.8},
  annote =	{Keywords: graph compression, graph traversal, twinwidth, parameterized algorithms}
}
Document
Existential Second-Order Logic over Graphs: Parameterized Complexity

Authors: Max Bannach, Florian Chudigiewitsch, and Till Tantau

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
By Fagin’s Theorem, NP contains precisely those problems that can be described by formulas starting with an existential second-order quantifier, followed by only first-order quantifiers (eso formulas). Subsequent research refined this result, culminating in powerful theorems that characterize for each possible sequence of first-order quantifiers how difficult the described problem can be. We transfer this line of inquiry to the parameterized setting, where the size of the set quantified by the second-order quantifier is the parameter. Many natural parameterized problems can be described in this way using simple sequences of first-order quantifiers: For the clique or vertex cover problems, two universal first-order quantifiers suffice ("for all pairs of vertices ... must hold"); for the dominating set problem, a universal followed by an existential quantifier suffices ("for all vertices, there is a vertex such that ..."); and so on. We present a complete characterization that states for each possible sequence of first-order quantifiers how high the parameterized complexity of the described problems can be. The uncovered dividing line between quantifier sequences that lead to tractable versus intractable problems is distinct from that known from the classical setting, and it depends on whether the parameter is a lower bound on, an upper bound on, or equal to the size of the quantified set.
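
For concreteness, the two quantifier patterns mentioned in the abstract can be written as eso formulas in the standard textbook way (the renderings below are ours, not quoted from the paper); the parameter is the size of the quantified set X:

\[
\textsc{Vertex Cover:}\quad \exists X\;\forall x\,\forall y\;\bigl(E(x,y)\rightarrow X(x)\lor X(y)\bigr)
\]
\[
\textsc{Dominating Set:}\quad \exists X\;\forall x\,\exists y\;\bigl(X(x)\lor(X(y)\land E(x,y))\bigr)
\]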

Cite as

Max Bannach, Florian Chudigiewitsch, and Till Tantau. Existential Second-Order Logic over Graphs: Parameterized Complexity. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 3:1-3:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2023.3,
  author =	{Bannach, Max and Chudigiewitsch, Florian and Tantau, Till},
  title =	{{Existential Second-Order Logic over Graphs: Parameterized Complexity}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{3:1--3:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.3},
  URN =		{urn:nbn:de:0030-drops-194224},
  doi =		{10.4230/LIPIcs.IPEC.2023.3},
  annote =	{Keywords: existential second-order logic, graph problems, parallel algorithms, fixed-parameter tractability, descriptive complexity}
}
Document
PACE Solver Description: The PACE 2023 Parameterized Algorithms and Computational Experiments Challenge: Twinwidth

Authors: Max Bannach and Sebastian Berndt

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
This article is a report by the challenge organizers on the 8th Parameterized Algorithms and Computational Experiments Challenge (PACE 2023). As in previous iterations of the competition, this year’s iteration featured an exact and a heuristic track for a parameterized problem that has gained attention in the theory community. This year, the problem was to compute the twinwidth of a graph, a recently introduced width parameter that measures the similarity of a graph to a cograph. In the exact track, participants were asked to develop an exact algorithm that solves as many instances as possible from a benchmark set of 100 instances, with a time limit of 30 minutes per instance. In the heuristic track, the same task had to be accomplished within 5 minutes, but the result was not required to be optimal. As in previous iterations, the organizers handed out awards to the best solutions in both tracks and to the best student submissions. New this year was a dedicated theory award recognizing theoretical insights found by the participants during the development of their tools.

Cite as

Max Bannach and Sebastian Berndt. PACE Solver Description: The PACE 2023 Parameterized Algorithms and Computational Experiments Challenge: Twinwidth. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 35:1-35:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2023.35,
  author =	{Bannach, Max and Berndt, Sebastian},
  title =	{{PACE Solver Description: The PACE 2023 Parameterized Algorithms and Computational Experiments Challenge: Twinwidth}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{35:1--35:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.35},
  URN =		{urn:nbn:de:0030-drops-194548},
  doi =		{10.4230/LIPIcs.IPEC.2023.35},
  annote =	{Keywords: Twinwidth, Algorithm Engineering, FPT, Kernelization}
}
Document
On the Parallel Parameterized Complexity of MaxSAT Variants

Authors: Max Bannach, Malte Skambath, and Till Tantau

Published in: LIPIcs, Volume 236, 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022)


Abstract
In the maximum satisfiability problem (max-sat) we are given a propositional formula in conjunctive normal form and have to find an assignment that satisfies as many clauses as possible. We study the parallel parameterized complexity of various versions of max-sat and provide the first constant-time algorithms parameterized either by the solution size or by the allowed excess relative to some guarantee ("above guarantee" versions). For the dual parameterized version where the parameter is the number of clauses we are allowed to leave unsatisfied, we present the first parallel algorithm for max-2sat (known as almost-2sat). The difficulty in solving almost-2sat in parallel comes from the fact that the iterative compression method, originally developed to prove that the problem is fixed-parameter tractable at all, is inherently sequential. We observe that a graph flow whose value is a parameter can be computed in parallel and use this fact to develop a parallel algorithm for the vertex cover problem parameterized above the size of a given matching. Finally, we study the parallel complexity of max-sat parameterized by the vertex cover number, the treedepth, the feedback vertex set number, and the treewidth of the input’s incidence graph. While max-sat is fixed-parameter tractable for all of these parameters, we show that they allow different degrees of possible parallelization. For all four we develop dedicated parallel algorithms that are constructive, meaning that they output an optimal assignment - in contrast to results that can be obtained by parallel meta-theorems, which often only solve the decision version.

Cite as

Max Bannach, Malte Skambath, and Till Tantau. On the Parallel Parameterized Complexity of MaxSAT Variants. In 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 236, pp. 19:1-19:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.SAT.2022.19,
  author =	{Bannach, Max and Skambath, Malte and Tantau, Till},
  title =	{{On the Parallel Parameterized Complexity of MaxSAT Variants}},
  booktitle =	{25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022)},
  pages =	{19:1--19:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-242-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{236},
  editor =	{Meel, Kuldeep S. and Strichman, Ofer},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2022.19},
  URN =		{urn:nbn:de:0030-drops-166934},
  doi =		{10.4230/LIPIcs.SAT.2022.19},
  annote =	{Keywords: max-sat, almost-sat, parallel algorithms, fixed-parameter tractability}
}
Document
MaxSAT with Absolute Value Functions: A Parameterized Perspective

Authors: Max Bannach, Pamela Fleischmann, and Malte Skambath

Published in: LIPIcs, Volume 227, 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)


Abstract
The natural generalization of the Boolean satisfiability problem to optimization problems is the task of determining the maximum number of clauses that can simultaneously be satisfied in a propositional formula in conjunctive normal form. In the weighted maximum satisfiability problem each clause has a positive weight and one seeks an assignment of maximum weight. The literature almost solely considers the case of positive weights. While the general case of the problem is only restricted slightly by this constraint, many special cases become trivial in the absence of negative weights. In this work we study the problem with negative weights and observe that the problem becomes computationally harder - which we formalize from a parameterized perspective in the sense that various variations of the problem become W[1]-hard if negative weights are present. Allowing negative weights also introduces new variants of the problem: Instead of maximizing the sum of weights of satisfied clauses, we can maximize the absolute value of that sum. This turns out to be surprisingly expressive even restricted to monotone formulas in disjunctive normal form with at most two literals per clause. In contrast to the versions without the absolute value, however, we prove that these variants are fixed-parameter tractable. As a technical contribution, we present a kernelization for an auxiliary problem on hypergraphs in which we seek, given an edge-weighted hypergraph, an induced subgraph that maximizes the absolute value of the sum of edge-weights.
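
A tiny brute-force comparison of the two objectives may help; it is purely illustrative (exponential in the number of variables) and uses a made-up three-clause formula, with all names chosen by us:

import java.util.*;

// Enumerate all assignments of a small weighted CNF formula and compare
// "maximize the sum of weights of satisfied clauses" with "maximize the
// absolute value of that sum". Literals are +i / -i for variable i
// (1-indexed); weights may be negative.
public class AbsMaxSat {
    public static void main(String[] args) {
        int n = 2;                                    // variables x1, x2
        int[][] clauses = { {1}, {2}, {-1, -2} };     // (x1), (x2), (not x1 or not x2)
        int[] weight    = { -4, -4, 1 };
        long bestSum = Long.MIN_VALUE, bestAbs = Long.MIN_VALUE;
        for (int mask = 0; mask < (1 << n); mask++) {
            long sum = 0;
            for (int c = 0; c < clauses.length; c++) {
                boolean sat = false;
                for (int lit : clauses[c]) {
                    boolean val = ((mask >> (Math.abs(lit) - 1)) & 1) == 1;
                    if ((lit > 0) == val) { sat = true; break; }
                }
                if (sat) sum += weight[c];
            }
            bestSum = Math.max(bestSum, sum);
            bestAbs = Math.max(bestAbs, Math.abs(sum));
        }
        // prints: max sum = 1 (satisfy only the last clause),
        //         max |sum| = 8 (satisfy the two negatively weighted clauses)
        System.out.println("max sum = " + bestSum + ", max |sum| = " + bestAbs);
    }
}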

Cite as

Max Bannach, Pamela Fleischmann, and Malte Skambath. MaxSAT with Absolute Value Functions: A Parameterized Perspective. In 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 227, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.SWAT.2022.12,
  author =	{Bannach, Max and Fleischmann, Pamela and Skambath, Malte},
  title =	{{MaxSAT with Absolute Value Functions: A Parameterized Perspective}},
  booktitle =	{18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)},
  pages =	{12:1--12:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-236-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{227},
  editor =	{Czumaj, Artur and Xin, Qin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2022.12},
  URN =		{urn:nbn:de:0030-drops-161728},
  doi =		{10.4230/LIPIcs.SWAT.2022.12},
  annote =	{Keywords: parameterized complexity, kernelization, weighted maximum satisfiability, absolute value maximization}
}
Document
Dynamic Kernels for Hitting Sets and Set Packing

Authors: Max Bannach, Zacharias Heinrich, Rüdiger Reischuk, and Till Tantau

Published in: LIPIcs, Volume 214, 16th International Symposium on Parameterized and Exact Computation (IPEC 2021)


Abstract
Computing small kernels for the hitting set problem is a well-studied computational problem where we are given a hypergraph with n vertices and m hyperedges, each of size d for some small constant d, and a parameter k. The task is to compute a new hypergraph, called a kernel, whose size is polynomial with respect to the parameter k and which has a size-k hitting set if, and only if, the original hypergraph has one. State-of-the-art algorithms compute kernels of size k^d (which is a polynomial kernel size as d is a constant), and they do so in time m⋅ 2^d poly(d) for a small polynomial poly(d) (which is a linear runtime as d is again a constant). We generalize this task to the dynamic setting where hyperedges may continuously be added or deleted and one constantly has to keep track of a size-k^d hitting set kernel in memory (including moments when no size-k hitting set exists). This paper presents a deterministic solution with worst-case time 3^d poly(d) for updating the kernel upon hyperedge inserts and time 5^d poly(d) for updates upon deletions. These bounds nearly match the time 2^d poly(d) needed by the best static algorithm per hyperedge. Let us stress that for constant d our algorithm maintains a dynamic hitting set kernel with constant, deterministic, worst-case update time that is independent of n, m, and the parameter k. As a consequence, we also get a deterministic dynamic algorithm for keeping track of size-k hitting sets in d-hypergraphs with update times O(1) and query times O(c^k) where c = d - 1 + O(1/d) equals the best base known for the static setting.
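
For context, the quoted query time can be contrasted with the standard bounded search tree for hitting set on d-hypergraphs, sketched below (names are ours): pick an edge that is not yet hit and branch on which of its at most d vertices joins the solution, giving the simpler bound O(d^k) instead of the paper's O(c^k) with c = d - 1 + O(1/d).

import java.util.*;

// Classic bounded search tree for Hitting Set: branch on the vertices of an
// uncovered edge. Called as hasHittingSet(edges, k, new HashSet<>()).
// Baseline sketch only; it does not use the dynamic kernel of the paper.
public class HittingSetQuery {
    public static boolean hasHittingSet(List<int[]> edges, int k, Set<Integer> chosen) {
        int[] uncovered = null;
        for (int[] e : edges) {
            boolean hit = false;
            for (int v : e) if (chosen.contains(v)) { hit = true; break; }
            if (!hit) { uncovered = e; break; }
        }
        if (uncovered == null) return true;   // every edge is hit
        if (k == 0) return false;             // budget exhausted
        for (int v : uncovered) {             // branch on the <= d vertices of the edge
            chosen.add(v);
            if (hasHittingSet(edges, k - 1, chosen)) return true;
            chosen.remove(v);
        }
        return false;
    }
}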

Cite as

Max Bannach, Zacharias Heinrich, Rüdiger Reischuk, and Till Tantau. Dynamic Kernels for Hitting Sets and Set Packing. In 16th International Symposium on Parameterized and Exact Computation (IPEC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 214, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2021.7,
  author =	{Bannach, Max and Heinrich, Zacharias and Reischuk, R\"{u}diger and Tantau, Till},
  title =	{{Dynamic Kernels for Hitting Sets and Set Packing}},
  booktitle =	{16th International Symposium on Parameterized and Exact Computation (IPEC 2021)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-216-7},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{214},
  editor =	{Golovach, Petr A. and Zehavi, Meirav},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2021.7},
  URN =		{urn:nbn:de:0030-drops-153900},
  doi =		{10.4230/LIPIcs.IPEC.2021.7},
  annote =	{Keywords: Kernelization, Dynamic Algorithms, Hitting Set, Set Packings}
}
Document
PACE Solver Description: Fluid

Authors: Max Bannach, Sebastian Berndt, Martin Schuster, and Marcel Wienöbst

Published in: LIPIcs, Volume 180, 15th International Symposium on Parameterized and Exact Computation (IPEC 2020)


Abstract
This document describes the heuristic for computing treedepth decompositions of undirected graphs used by our solver Fluid. The heuristic runs four different strategies to find a solution and finally outputs the best solution obtained by any of them. Two strategies are score-based and iteratively remove the vertex with the best score. The other two strategies iteratively search for vertex separators and remove them. We also present implementation strategies and data structures that significantly improve the running time and might be of independent interest.
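
The flavor of the score-based strategies can be illustrated by the following generic elimination scheme, which uses the degree as a stand-in score and is our simplification rather than Fluid's actual scoring: repeatedly remove the vertex with the highest score, recurse on the remaining connected components, and report the depth of the recursion, which is an upper bound on the treedepth.

import java.util.*;

// Greedy treedepth upper bound: in every component, delete a highest-degree
// vertex and recurse; the returned depth corresponds to an elimination forest
// and hence upper-bounds the treedepth. Degree is an illustrative score only.
public class GreedyTreedepth {
    public static int upperBound(Map<Integer, Set<Integer>> g) {
        if (g.isEmpty()) return 0;
        int best = 0;
        for (Map<Integer, Set<Integer>> comp : components(g)) {
            int v = comp.keySet().stream()
                        .max(Comparator.comparingInt((Integer x) -> comp.get(x).size())).get();
            best = Math.max(best, 1 + upperBound(copyWithout(comp, v)));
        }
        return best;
    }

    private static Map<Integer, Set<Integer>> copyWithout(Map<Integer, Set<Integer>> g, int v) {
        Map<Integer, Set<Integer>> h = new HashMap<>();
        for (int u : g.keySet()) if (u != v) h.put(u, new HashSet<>(g.get(u)));
        for (Set<Integer> nbrs : h.values()) nbrs.remove(v);
        return h;
    }

    private static List<Map<Integer, Set<Integer>>> components(Map<Integer, Set<Integer>> g) {
        List<Map<Integer, Set<Integer>>> comps = new ArrayList<>();
        Set<Integer> seen = new HashSet<>();
        for (int s : g.keySet()) {
            if (seen.contains(s)) continue;
            Set<Integer> comp = new HashSet<>();
            Deque<Integer> stack = new ArrayDeque<>(List.of(s));
            while (!stack.isEmpty()) {
                int u = stack.pop();
                if (comp.add(u)) stack.addAll(g.get(u));
            }
            seen.addAll(comp);
            Map<Integer, Set<Integer>> sub = new HashMap<>();
            for (int u : comp) {
                Set<Integer> nbrs = new HashSet<>(g.get(u));
                nbrs.retainAll(comp);
                sub.put(u, nbrs);
            }
            comps.add(sub);
        }
        return comps;
    }
}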

Cite as

Max Bannach, Sebastian Berndt, Martin Schuster, and Marcel Wienöbst. PACE Solver Description: Fluid. In 15th International Symposium on Parameterized and Exact Computation (IPEC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 180, pp. 27:1-27:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2020.27,
  author =	{Bannach, Max and Berndt, Sebastian and Schuster, Martin and Wien\"{o}bst, Marcel},
  title =	{{PACE Solver Description: Fluid}},
  booktitle =	{15th International Symposium on Parameterized and Exact Computation (IPEC 2020)},
  pages =	{27:1--27:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-172-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{180},
  editor =	{Cao, Yixin and Pilipczuk, Marcin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2020.27},
  URN =		{urn:nbn:de:0030-drops-133300},
  doi =		{10.4230/LIPIcs.IPEC.2020.27},
  annote =	{Keywords: treedepth, heuristics}
}
Document
PACE Solver Description: PID^⋆

Authors: Max Bannach, Sebastian Berndt, Martin Schuster, and Marcel Wienöbst

Published in: LIPIcs, Volume 180, 15th International Symposium on Parameterized and Exact Computation (IPEC 2020)


Abstract
This document provides a short overview of our treedepth solver PID^{⋆} in the version that we submitted to the exact track of the PACE challenge 2020. The solver relies on the positive-instance driven dynamic programming (PID) paradigm that was discovered in the course of earlier iterations of PACE in the context of treewidth. It was recently shown that PID can be used to solve a general class of vertex pursuit-evasion games - which include the game-theoretic characterization of treedepth. Our solver PID^{⋆} is built on top of this characterization.

Cite as

Max Bannach, Sebastian Berndt, Martin Schuster, and Marcel Wienöbst. PACE Solver Description: PID^⋆. In 15th International Symposium on Parameterized and Exact Computation (IPEC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 180, pp. 28:1-28:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2020.28,
  author =	{Bannach, Max and Berndt, Sebastian and Schuster, Martin and Wien\"{o}bst, Marcel},
  title =	{{PACE Solver Description: PID^⋆}},
  booktitle =	{15th International Symposium on Parameterized and Exact Computation (IPEC 2020)},
  pages =	{28:1--28:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-172-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{180},
  editor =	{Cao, Yixin and Pilipczuk, Marcin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2020.28},
  URN =		{urn:nbn:de:0030-drops-133312},
  doi =		{10.4230/LIPIcs.IPEC.2020.28},
  annote =	{Keywords: treedepth, positive-instance driven}
}
Document
Solving Packing Problems with Few Small Items Using Rainbow Matchings

Authors: Max Bannach, Sebastian Berndt, Marten Maack, Matthias Mnich, Alexandra Lassota, Malin Rau, and Malte Skambath

Published in: LIPIcs, Volume 170, 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020)


Abstract
An important area of combinatorial optimization is the study of packing and covering problems, such as Bin Packing, Multiple Knapsack, and Bin Covering. Those problems have been studied extensively from the viewpoint of approximation algorithms, but their parameterized complexity has barely been investigated. For problem instances containing no "small" items, classical matching algorithms yield optimal solutions in polynomial time. In this paper we approach them by their distance from triviality, measuring the problem complexity by the number k of small items. Our main results are fixed-parameter algorithms for vector versions of Bin Packing, Multiple Knapsack, and Bin Covering parameterized by k. The algorithms are randomized with one-sided error and run in time 4^k⋅ k!⋅ n^{O(1)}. To achieve this, we introduce a colored matching problem to which we reduce all these packing problems. The colored matching problem is natural in itself and we expect it to be useful for other applications. We also present a deterministic fixed-parameter algorithm for Bin Covering with run time O((k!)² ⋅ k ⋅ 2^k ⋅ n log(n)).

Cite as

Max Bannach, Sebastian Berndt, Marten Maack, Matthias Mnich, Alexandra Lassota, Malin Rau, and Malte Skambath. Solving Packing Problems with Few Small Items Using Rainbow Matchings. In 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 170, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.MFCS.2020.11,
  author =	{Bannach, Max and Berndt, Sebastian and Maack, Marten and Mnich, Matthias and Lassota, Alexandra and Rau, Malin and Skambath, Malte},
  title =	{{Solving Packing Problems with Few Small Items Using Rainbow Matchings}},
  booktitle =	{45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-159-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{170},
  editor =	{Esparza, Javier and Kr\'{a}l', Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2020.11},
  URN =		{urn:nbn:de:0030-drops-126816},
  doi =		{10.4230/LIPIcs.MFCS.2020.11},
  annote =	{Keywords: Bin Packing, Knapsack, matching, fixed-parameter tractable}
}
Document
Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time

Authors: Max Bannach, Malte Skambath, and Till Tantau

Published in: LIPIcs, Volume 162, 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020)


Abstract
We analyze a reduction rule for computing kernels for the hitting set problem: In a hypergraph, the link of a set c of vertices consists of all edges that are supersets of c. We call such a set critical if its link has certain easy-to-check size properties. The rule states that the link of a critical c can be replaced by c. It is known that a simple linear-time algorithm for computing hitting set kernels of size (number of edges) at most k^d (k is the hitting set size, d is the maximum edge size) can be derived from this rule. We parallelize this algorithm and obtain the first AC⁰ kernel algorithm that outputs polynomial-size kernels. Previously, such algorithms were not even known for artificial problems. An interesting application of our methods lies in traditional, non-parameterized approximation theory: Our results imply that uniform AC⁰-circuits can compute a hitting set whose size is polynomial in the size of an optimal hitting set.
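
In the notation above, the link of a vertex set c is simply the set of hyperedges containing c, and the rule replaces that link by the single edge c. A direct sequential rendering of these two operations follows (method names are ours; the criticality test itself is the technical part of the paper and omitted here):

import java.util.*;

// link(c): all hyperedges that are supersets of c.
// contract(c): apply the reduction rule once, i.e., delete the link of a
// critical set c and insert c itself as a new edge.
public class LinkRule {
    public static List<Set<Integer>> link(List<Set<Integer>> edges, Set<Integer> c) {
        List<Set<Integer>> result = new ArrayList<>();
        for (Set<Integer> e : edges)
            if (e.containsAll(c)) result.add(e);
        return result;
    }

    public static List<Set<Integer>> contract(List<Set<Integer>> edges, Set<Integer> c) {
        List<Set<Integer>> result = new ArrayList<>();
        for (Set<Integer> e : edges)
            if (!e.containsAll(c)) result.add(e);
        result.add(new HashSet<>(c));
        return result;
    }
}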

Cite as

Max Bannach, Malte Skambath, and Till Tantau. Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time. In 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 162, pp. 9:1-9:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.SWAT.2020.9,
  author =	{Bannach, Max and Skambath, Malte and Tantau, Till},
  title =	{{Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time}},
  booktitle =	{17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020)},
  pages =	{9:1--9:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-150-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{162},
  editor =	{Albers, Susanne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2020.9},
  URN =		{urn:nbn:de:0030-drops-122566},
  doi =		{10.4230/LIPIcs.SWAT.2020.9},
  annote =	{Keywords: Kernelization, Approximation, Hitting Set, Constant-Depth Circuits}
}
Document
On the Descriptive Complexity of Color Coding

Authors: Max Bannach and Till Tantau

Published in: LIPIcs, Volume 126, 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)


Abstract
Color coding is an algorithmic technique used in parameterized complexity theory to detect "small" structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing - purely in terms of the syntactic structure of describing formulas - when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in fpt just because of the way they are commonly described using logical formulas.
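
As background, the technique being transferred is easiest to see for the long path problem: color the vertices with k colors uniformly at random and then use dynamic programming over subsets of colors to look for a colorful path, which a fixed k-vertex path survives with probability at least k!/k^k. The sketch below is the standard textbook formulation (names are ours), not code from the paper:

import java.util.*;

// One trial of color coding for a simple path on k vertices: random coloring
// followed by a DP over color subsets. reach[v][S] is true iff some path
// ending in v uses exactly the colors in the bit set S (hence is simple).
public class ColorCodingPath {
    public static boolean trial(int n, List<int[]> edges, int k, Random rng) {
        int[] color = new int[n];
        for (int v = 0; v < n; v++) color[v] = rng.nextInt(k);
        boolean[][] reach = new boolean[n][1 << k];
        for (int v = 0; v < n; v++) reach[v][1 << color[v]] = true;
        for (int S = 1; S < (1 << k); S++) {             // subsets in increasing order
            for (int[] e : edges) {                      // undirected edge {e[0], e[1]}
                for (int dir = 0; dir < 2; dir++) {
                    int u = e[dir], w = e[1 - dir];
                    if ((S & (1 << color[w])) != 0) continue;  // w's color already used
                    if (reach[u][S]) reach[w][S | (1 << color[w])] = true;
                }
            }
        }
        int full = (1 << k) - 1;
        for (int v = 0; v < n; v++) if (reach[v][full]) return true;
        return false;
    }
}

Repeating the trial O(e^k) times, or derandomizing it with a suitable family of hash functions, yields the usual fixed-parameter algorithm for the k-path problem.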

Cite as

Max Bannach and Till Tantau. On the Descriptive Complexity of Color Coding. In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 11:1-11:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.STACS.2019.11,
  author =	{Bannach, Max and Tantau, Till},
  title =	{{On the Descriptive Complexity of Color Coding}},
  booktitle =	{36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)},
  pages =	{11:1--11:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-100-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{126},
  editor =	{Niedermeier, Rolf and Paul, Christophe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2019.11},
  URN =		{urn:nbn:de:0030-drops-102509},
  doi =		{10.4230/LIPIcs.STACS.2019.11},
  annote =	{Keywords: color coding, descriptive complexity, fixed-parameter tractability, quantifier elimination, para-AC^0}
}
Document
Computing Kernels in Parallel: Lower and Upper Bounds

Authors: Max Bannach and Till Tantau

Published in: LIPIcs, Volume 115, 13th International Symposium on Parameterized and Exact Computation (IPEC 2018)


Abstract
Parallel fixed-parameter tractability studies how parameterized problems can be solved in parallel. A surprisingly large number of parameterized problems admit a high level of parallelization, but this does not mean that we can also efficiently compute small problem kernels in parallel: known kernelization algorithms are typically highly sequential. In the present paper, we establish a number of upper and lower bounds concerning the sizes of kernels that can be computed in parallel. An intriguing finding is that there are complex trade-offs between kernel size and the depth of the circuits needed to compute them: For the vertex cover problem, an exponential kernel can be computed by AC^0-circuits, a quadratic kernel by TC^0-circuits, and a linear kernel by randomized NC-circuits with derandomization being possible only if it is also possible for the matching problem. Other natural problems for which similar (but quantitatively different) effects can be observed include tree decomposition problems parameterized by the vertex cover number, the undirected feedback vertex set problem, the matching problem, or the point line cover problem. We also present natural problems for which computing kernels is inherently sequential.
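
For orientation, the quadratic kernel mentioned for the vertex cover problem is the classic Buss kernel; its textbook sequential version is sketched below (our code, included only as context, since the point of the paper is which circuit classes can compute such kernels in parallel):

import java.util.*;

// Buss kernelization for Vertex Cover: a vertex of degree > k belongs to
// every size-k cover, so take it and decrease the budget; afterwards more
// than k*k remaining edges rule out a size-k cover. Returns the kernel
// (remaining edges) or null for a no-instance.
public class BussKernel {
    public static List<int[]> kernelize(int n, List<int[]> edges, int k) {
        List<int[]> current = new ArrayList<>(edges);
        boolean changed = true;
        while (changed && k >= 0) {
            changed = false;
            int[] degree = new int[n];
            for (int[] e : current) { degree[e[0]]++; degree[e[1]]++; }
            for (int v = 0; v < n; v++) {
                if (degree[v] > k) {                       // v is forced into the cover
                    final int fv = v;
                    current.removeIf(e -> e[0] == fv || e[1] == fv);
                    k--;
                    changed = true;
                    break;
                }
            }
        }
        if (k < 0 || current.size() > (long) k * k) return null;
        return current;                                    // at most k^2 edges remain
    }
}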

Cite as

Max Bannach and Till Tantau. Computing Kernels in Parallel: Lower and Upper Bounds. In 13th International Symposium on Parameterized and Exact Computation (IPEC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 115, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2018.13,
  author =	{Bannach, Max and Tantau, Till},
  title =	{{Computing Kernels in Parallel: Lower and Upper Bounds}},
  booktitle =	{13th International Symposium on Parameterized and Exact Computation (IPEC 2018)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-084-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{115},
  editor =	{Paul, Christophe and Pilipczuk, Michal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2018.13},
  URN =		{urn:nbn:de:0030-drops-102148},
  doi =		{10.4230/LIPIcs.IPEC.2018.13},
  annote =	{Keywords: parallel computation, fixed-parameter tractability, kernelization}
}
Document
Practical Access to Dynamic Programming on Tree Decompositions

Authors: Max Bannach and Sebastian Berndt

Published in: LIPIcs, Volume 112, 26th Annual European Symposium on Algorithms (ESA 2018)


Abstract
Parameterized complexity theory has led to a wide range of algorithmic breakthroughs within the last decades, but the practicability of these methods for real-world problems is still not well understood. We investigate the practicability of one of the fundamental approaches of this field: dynamic programming on tree decompositions. Indisputably, this is a key technique in parameterized algorithms and modern algorithm design. Despite the enormous impact of this approach in theory, it still has very little influence on practical implementations. The reasons for this phenomenon are manifold. One of them is the simple fact that such an implementation requires a long chain of non-trivial tasks (such as computing the decomposition, preparing it, ...). We provide an easy way to implement such dynamic programs that only requires the definition of the update rules. With this interface, dynamic programs for various problems, such as 3-coloring, can be implemented easily in about 100 lines of structured Java code. The theoretical foundation of the success of dynamic programming on tree decompositions is well understood due to Courcelle's celebrated theorem, which states that every MSO-definable problem can be efficiently solved if a tree decomposition of small width is given. We seek to provide practical access to this theorem as well, by presenting a lightweight model-checker for a small fragment of MSO. This fragment is powerful enough to describe many natural problems, and our model-checker turns out to be very competitive against similar state-of-the-art tools.
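
The update-rule interface described above can be imagined roughly as follows; this is a hypothetical sketch of the programming style, not the actual API of the authors' implementation:

import java.util.Set;

// Hypothetical interface: the user supplies only the update rules for the
// four node types of a nice tree decomposition, and a generic driver folds
// them bottom-up over the decomposition. S is the type of a partial-solution
// state stored at a bag.
public interface TreeDecompositionDP<S> {
    Set<S> leaf();                                       // empty bag
    Set<S> introduce(Set<S> childStates, int vertex);    // vertex enters the bag
    Set<S> forget(Set<S> childStates, int vertex);       // vertex leaves the bag
    Set<S> join(Set<S> leftStates, Set<S> rightStates);  // merge two equal bags
}

For 3-coloring, for instance, a state would be a proper coloring of the current bag: introduce extends it in all ways consistent with the edges inside the bag, forget projects the forgotten vertex away, and join keeps the colorings that appear on both sides.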

Cite as

Max Bannach and Sebastian Berndt. Practical Access to Dynamic Programming on Tree Decompositions. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.ESA.2018.6,
  author =	{Bannach, Max and Berndt, Sebastian},
  title =	{{Practical Access to Dynamic Programming on Tree Decompositions}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.6},
  URN =		{urn:nbn:de:0030-drops-94692},
  doi =		{10.4230/LIPIcs.ESA.2018.6},
  annote =	{Keywords: fixed-parameter tractability, treewidth, model-checking}
}
Document
Computing Hitting Set Kernels By AC^0-Circuits

Authors: Max Bannach and Till Tantau

Published in: LIPIcs, Volume 96, 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)


Abstract
Given a hypergraph H = (V,E), what is the smallest subset X of V such that e and X are not disjoint for all e in E? This problem, known as the hitting set problem, is a basic problem in parameterized complexity theory. There are well-known kernelization algorithms for it, which get a hypergraph H and a number k as input and output a hypergraph H' such that (1) H has a hitting set of size k if, and only if, H' has such a hitting set and (2) the size of H' depends only on k and on the maximum cardinality d of edges in H. The algorithms run in polynomial time, but are highly sequential. Recently, it has been shown that one of them can be parallelized to a certain degree: one can compute hitting set kernels in parallel time O(d) - but it was conjectured that this is the best parallel algorithm possible. We refute this conjecture and show how hitting set kernels can be computed in constant parallel time. For our proof, we introduce a new, generalized notion of hypergraph sunflowers and show how iterated applications of the color coding technique can sometimes be collapsed into a single application.

Cite as

Max Bannach and Till Tantau. Computing Hitting Set Kernels By AC^0-Circuits. In 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 96, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.STACS.2018.9,
  author =	{Bannach, Max and Tantau, Till},
  title =	{{Computing Hitting Set Kernels By AC^0-Circuits}},
  booktitle =	{35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-062-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{96},
  editor =	{Niedermeier, Rolf and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2018.9},
  URN =		{urn:nbn:de:0030-drops-84998},
  doi =		{10.4230/LIPIcs.STACS.2018.9},
  annote =	{Keywords: parallel computation, fixed-parameter tractability, kernelization}
}
Document
Jdrasil: A Modular Library for Computing Tree Decompositions

Authors: Max Bannach, Sebastian Berndt, and Thorsten Ehlers

Published in: LIPIcs, Volume 75, 16th International Symposium on Experimental Algorithms (SEA 2017)


Abstract
While the theoretical aspects concerning the computation of tree width - one of the most important graph parameters - are well understood, it is not clear how it can be computed practically. We present the open-source Java library Jdrasil, which implements several state-of-the-art algorithms for this task. By experimentally comparing these algorithms, we show that the default choices made in Jdrasil lead to a competitive implementation (it took third place in the first PACE challenge) while also being easy to use and easy to extend.

Cite as

Max Bannach, Sebastian Berndt, and Thorsten Ehlers. Jdrasil: A Modular Library for Computing Tree Decompositions. In 16th International Symposium on Experimental Algorithms (SEA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 75, pp. 28:1-28:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.SEA.2017.28,
  author =	{Bannach, Max and Berndt, Sebastian and Ehlers, Thorsten},
  title =	{{Jdrasil: A Modular Library for Computing Tree Decompositions}},
  booktitle =	{16th International Symposium on Experimental Algorithms (SEA 2017)},
  pages =	{28:1--28:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-036-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{75},
  editor =	{Iliopoulos, Costas S. and Pissis, Solon P. and Puglisi, Simon J. and Raman, Rajeev},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2017.28},
  URN =		{urn:nbn:de:0030-drops-76051},
  doi =		{10.4230/LIPIcs.SEA.2017.28},
  annote =	{Keywords: tree width, algorithmic library, experimental evaluation}
}
