Search Results

Documents authored by Tantau, Till


Document
On the Descriptive Complexity of Vertex Deletion Problems

Authors: Max Bannach, Florian Chudigiewitsch, and Till Tantau

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Vertex deletion problems for graphs are studied intensely in classical and parameterized complexity theory. They ask whether we can delete at most k vertices from an input graph such that the resulting graph has a certain property. Regarding k as the parameter, a dichotomy was recently shown based on the number of quantifier alternations of first-order formulas that describe the property. In this paper, we refine this classification by moving from quantifier alternations to individual quantifier patterns and from a dichotomy to a trichotomy, resulting in a complete classification of the complexity of vertex deletion problems based on their quantifier pattern. The more fine-grained approach uncovers new tractable fragments, which we show to not only lie in FPT, but even in parameterized constant-depth circuit complexity classes. On the other hand, we show that vertex deletion becomes intractable already for just one quantifier per alternation, that is, there is a formula of the form ∀ x∃ y∀ z (ψ), with ψ quantifier-free, for which the vertex deletion problem is W[1]-hard. The fine-grained analysis also allows us to uncover differences in the complexity landscape when we consider different kinds of graphs and more general structures: While basic graphs (undirected graphs without self-loops), undirected graphs, and directed graphs each have a different frontier of tractability, the frontier for arbitrary logical structures coincides with that of directed graphs.
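To make the notion of a quantifier pattern concrete, here is a standard illustration chosen for this listing (it is not taken from the paper and assumes the usual encoding of graphs with a binary edge relation E): the classical vertex cover problem is the vertex deletion problem for the pattern ∀∀, since deleting at most k vertices so that the remaining graph satisfies

\[ \varphi \;=\; \forall x\,\forall y\;\neg E(x,y) \]

asks exactly whether the graph has a vertex cover of size at most k. The paper's hardness result says that already the slightly richer pattern ∀∃∀ contains a W[1]-hard vertex deletion problem.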

Cite as

Max Bannach, Florian Chudigiewitsch, and Till Tantau. On the Descriptive Complexity of Vertex Deletion Problems. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 17:1-17:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.MFCS.2024.17,
  author =	{Bannach, Max and Chudigiewitsch, Florian and Tantau, Till},
  title =	{{On the Descriptive Complexity of Vertex Deletion Problems}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{17:1--17:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.17},
  URN =		{urn:nbn:de:0030-drops-205733},
  doi =		{10.4230/LIPIcs.MFCS.2024.17},
  annote =	{Keywords: graph problems, fixed-parameter tractability, descriptive complexity, vertex deletion}
}
Document
Faster Graph Algorithms Through DAG Compression

Authors: Max Bannach, Florian Andreas Marwitz, and Till Tantau

Published in: LIPIcs, Volume 289, 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)


Abstract
The runtime of graph algorithms such as depth-first search or Dijkstra’s algorithm is dominated by the fact that all edges of the graph need to be processed at least once, leading to prohibitive runtimes for large, dense graphs. We introduce a simple data structure for storing graphs (and more general structures) in a compressed manner using directed acyclic graphs (dags). We then show that numerous standard graph problems can be solved in time linear in the size of the dag compression of a graph, rather than in the number of edges of the graph. Crucially, many dense graphs, including but not limited to graphs of bounded twinwidth, have a dag compression of size linear in the number of vertices rather than edges. This insight allows us to improve the previous best results for the runtime of standard algorithms from quasi-linear to linear for the large class of graphs of bounded twinwidth, which includes all cographs, graphs of bounded treewidth, or graphs of bounded cliquewidth.
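The following Python sketch is only a toy illustration of the general idea behind such compressions (auxiliary "hub" nodes stand in for many edges); it is not the data structure or an algorithm from the paper, and the names biclique_compression and reachable are made up here. It shows that a complete bipartite graph A × B with |A|·|B| edges can be stored with |A| + |B| arcs, and that reachability can be answered by traversing the compressed dag instead of the original edges.

from collections import defaultdict

def biclique_compression(a_side, b_side):
    """Toy dag compression of the directed biclique A x B: instead of
    |A|*|B| edges, route everything through one auxiliary hub node,
    which needs only |A| + |B| arcs."""
    dag = defaultdict(list)        # adjacency lists of the compressed dag
    hub = "hub"                    # auxiliary node, not a vertex of the graph
    for a in a_side:
        dag[a].append(hub)
    dag[hub] = list(b_side)
    return dag

def reachable(dag, source, original_vertices):
    """Depth-first search on the compressed dag; auxiliary nodes are
    traversed but filtered out at the end, so the result equals the set
    of original vertices reachable from source in the uncompressed graph."""
    seen, stack = set(), [source]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(dag.get(v, []))
    return seen & set(original_vertices)

A, B = ["a1", "a2", "a3"], ["b1", "b2", "b3"]
print(reachable(biclique_compression(A, B), "a1", A + B))
# {'a1', 'b1', 'b2', 'b3'} -- computed in time linear in the dag, not in |A|*|B|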

Cite as

Max Bannach, Florian Andreas Marwitz, and Till Tantau. Faster Graph Algorithms Through DAG Compression. In 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 289, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.STACS.2024.8,
  author =	{Bannach, Max and Marwitz, Florian Andreas and Tantau, Till},
  title =	{{Faster Graph Algorithms Through DAG Compression}},
  booktitle =	{41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-311-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{289},
  editor =	{Beyersdorff, Olaf and Kant\'{e}, Mamadou Moustapha and Kupferman, Orna and Lokshtanov, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2024.8},
  URN =		{urn:nbn:de:0030-drops-197188},
  doi =		{10.4230/LIPIcs.STACS.2024.8},
  annote =	{Keywords: graph compression, graph traversal, twinwidth, parameterized algorithms}
}
Document
Existential Second-Order Logic over Graphs: Parameterized Complexity

Authors: Max Bannach, Florian Chudigiewitsch, and Till Tantau

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
By Fagin’s Theorem, NP contains precisely those problems that can be described by formulas starting with an existential second-order quantifier, followed by only first-order quantifiers (eso formulas). Subsequent research refined this result, culminating in powerful theorems that characterize for each possible sequence of first-order quantifiers how difficult the described problem can be. We transfer this line of inquiry to the parameterized setting, where the size of the set quantified by the second-order quantifier is the parameter. Many natural parameterized problems can be described in this way using simple sequences of first-order quantifiers: For the clique or vertex cover problems, two universal first-order quantifiers suffice ("for all pairs of vertices ... must hold"); for the dominating set problem, a universal followed by an existential quantifier suffices ("for all vertices, there is a vertex such that ..."); and so on. We present a complete characterization that states for each possible sequence of first-order quantifiers how high the parameterized complexity of the described problems can be. The uncovered dividing line between quantifier sequences that lead to tractable versus intractable problems is distinct from that known from the classical setting, and it depends on whether the parameter is a lower bound on, an upper bound on, or equal to the size of the quantified set.
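The two examples from the abstract can be written out as eso formulas in the following standard way (this rendering is ours, assumes a binary edge relation E, and may differ from the paper's exact syntax); in both cases the parameter is the size of the quantified set X:

\[ \text{vertex cover:}\quad \exists X\,\forall u\,\forall v\,\bigl(E(u,v) \rightarrow X(u) \lor X(v)\bigr) \]
\[ \text{dominating set:}\quad \exists X\,\forall u\,\exists v\,\bigl(X(u) \lor (E(u,v) \land X(v))\bigr) \]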

Cite as

Max Bannach, Florian Chudigiewitsch, and Till Tantau. Existential Second-Order Logic over Graphs: Parameterized Complexity. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 3:1-3:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2023.3,
  author =	{Bannach, Max and Chudigiewitsch, Florian and Tantau, Till},
  title =	{{Existential Second-Order Logic over Graphs: Parameterized Complexity}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{3:1--3:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.3},
  URN =		{urn:nbn:de:0030-drops-194224},
  doi =		{10.4230/LIPIcs.IPEC.2023.3},
  annote =	{Keywords: existential second-order logic, graph problems, parallel algorithms, fixed-parameter tractability, descriptive complexity}
}
Document
Computational Complexity of Discrete Problems (Dagstuhl Seminar 23111)

Authors: Anna Gál, Meena Mahajan, Rahul Santhanam, Till Tantau, and Manaswi Paraashar

Published in: Dagstuhl Reports, Volume 13, Issue 3 (2023)


Abstract
This report documents the program and activities of Dagstuhl Seminar 23111 "Computational Complexity of Discrete Problems", which was held in person in March 2023 (the previous instance of the seminar series had been held online in March 2021). Following a description of the seminar’s objectives and its overall organization, this report lists the different major talks given during the seminar in alphabetical order of speakers, followed by the abstracts of the talks, including the main references and relevant sources where applicable. The return to an in-person setting allowed an intense atmosphere of active research and interaction throughout the five-day seminar.

Cite as

Anna Gál, Meena Mahajan, Rahul Santhanam, Till Tantau, and Manaswi Paraashar. Computational Complexity of Discrete Problems (Dagstuhl Seminar 23111). In Dagstuhl Reports, Volume 13, Issue 3, pp. 17-31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{gal_et_al:DagRep.13.3.17,
  author =	{G\'{a}l, Anna and Mahajan, Meena and Santhanam, Rahul and Tantau, Till and Paraashar, Manaswi},
  title =	{{Computational Complexity of Discrete Problems (Dagstuhl Seminar 23111)}},
  pages =	{17--31},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{13},
  number =	{3},
  editor =	{G\'{a}l, Anna and Mahajan, Meena and Santhanam, Rahul and Tantau, Till and Paraashar, Manaswi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.13.3.17},
  URN =		{urn:nbn:de:0030-drops-192261},
  doi =		{10.4230/DagRep.13.3.17},
  annote =	{Keywords: circuit complexity, communication complexity, computational complexity, lower bounds, randomness}
}
Document
On the Parallel Parameterized Complexity of MaxSAT Variants

Authors: Max Bannach, Malte Skambath, and Till Tantau

Published in: LIPIcs, Volume 236, 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022)


Abstract
In the maximum satisfiability problem (max-sat) we are given a propositional formula in conjunctive normal form and have to find an assignment that satisfies as many clauses as possible. We study the parallel parameterized complexity of various versions of max-sat and provide the first constant-time algorithms parameterized either by the solution size or by the allowed excess relative to some guarantee ("above guarantee" versions). For the dual parameterized version where the parameter is the number of clauses we are allowed to leave unsatisfied, we present the first parallel algorithm for max-2sat (known as almost-2sat). The difficulty in solving almost-2sat in parallel comes from the fact that the iterative compression method, originally developed to prove that the problem is fixed-parameter tractable at all, is inherently sequential. We observe that a graph flow whose value is a parameter can be computed in parallel and use this fact to develop a parallel algorithm for the vertex cover problem parameterized above the size of a given matching. Finally, we study the parallel complexity of max-sat parameterized by the vertex cover number, the treedepth, the feedback vertex set number, and the treewidth of the input’s incidence graph. While max-sat is fixed-parameter tractable for all of these parameters, we show that they allow different degrees of possible parallelization. For all four we develop dedicated parallel algorithms that are constructive, meaning that they output an optimal assignment - in contrast to results that can be obtained by parallel meta-theorems, which often only solve the decision version.

Cite as

Max Bannach, Malte Skambath, and Till Tantau. On the Parallel Parameterized Complexity of MaxSAT Variants. In 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 236, pp. 19:1-19:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.SAT.2022.19,
  author =	{Bannach, Max and Skambath, Malte and Tantau, Till},
  title =	{{On the Parallel Parameterized Complexity of MaxSAT Variants}},
  booktitle =	{25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022)},
  pages =	{19:1--19:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-242-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{236},
  editor =	{Meel, Kuldeep S. and Strichman, Ofer},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2022.19},
  URN =		{urn:nbn:de:0030-drops-166934},
  doi =		{10.4230/LIPIcs.SAT.2022.19},
  annote =	{Keywords: max-sat, almost-sat, parallel algorithms, fixed-parameter tractability}
}
Document
On the Satisfaction Probability of k-CNF Formulas

Authors: Till Tantau

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
The satisfaction probability σ(ϕ) := Pr_{β:vars(ϕ) → {0,1}}[β ⊧ ϕ] of a propositional formula ϕ is the likelihood that a random assignment β makes the formula true. We study the complexity of the problem kSAT-PROB_{> δ} = {ϕ is a kCNF formula ∣ σ(ϕ) > δ} for fixed k and δ. While 3SAT-PROB_{> 0} = 3SAT is NP-complete and SAT-PROB_{> 1/2} is PP-complete, Akmal and Williams recently showed 3SAT-PROB_{> 1/2} ∈ P and that 4SAT-PROB_{> 1/2} is NP-complete; but the methods used to prove these striking results stay silent about, say, 4SAT-PROB_{> 3/4}, leaving the computational complexity of kSAT-PROB_{> δ} open for most k and δ. In the present paper we give a complete characterization in the form of a trichotomy: kSAT-PROB_{> δ} lies in AC⁰, is NL-complete, or is NP-complete; and given k and δ we can decide which of the three applies. The proof of the trichotomy hinges on a new order-theoretic insight: Every set of kCNF formulas contains a formula of maximal satisfaction probability. This deceptively simple result allows us to (1) kernelize kSAT-PROB_{≥ δ}, (2) show that the variables of the kernel form a strong backdoor set when the trichotomy states membership in AC⁰ or NL, and (3) prove a locality property by which for every kCNF formula ϕ we have σ(ϕ) ≥ δ iff σ(ψ) ≥ δ for every fixed-size subset ψ of ϕ’s clauses. The locality property will allow us to prove a conjecture of Akmal and Williams: The majority-of-majority satisfaction problem for kCNFs lies in P for all k.
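The central definition can be made concrete with a few lines of brute-force Python (an illustration added for this listing, not code from the paper; the function name and the DIMACS-style clause encoding are assumptions of this sketch):

from itertools import product

def satisfaction_probability(clauses, num_vars):
    """sigma(phi): the fraction of assignments that satisfy every clause.
    A clause is a list of nonzero integers; literal v means variable v is
    true and -v means it is false (DIMACS-style encoding, assumed here)."""
    satisfied = 0
    for bits in product([False, True], repeat=num_vars):
        beta = {v + 1: bits[v] for v in range(num_vars)}
        if all(any(beta[abs(l)] == (l > 0) for l in c) for c in clauses):
            satisfied += 1
    return satisfied / 2 ** num_vars

# Example: the 2-CNF formula (x1 or x2) and (x2 or x3) over three variables.
phi = [[1, 2], [2, 3]]
print(satisfaction_probability(phi, 3))          # 0.625
print(satisfaction_probability(phi, 3) > 0.5)    # True, so phi is in 2SAT-PROB_{>1/2}

Deciding kSAT-PROB_{> δ} is, of course, about avoiding this enumeration of all 2^n assignments; the trichotomy states for which k and δ this is possible and at what cost.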

Cite as

Till Tantau. On the Satisfaction Probability of k-CNF Formulas. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 2:1-2:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{tantau:LIPIcs.CCC.2022.2,
  author =	{Tantau, Till},
  title =	{{On the Satisfaction Probability of k-CNF Formulas}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{2:1--2:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.2},
  URN =		{urn:nbn:de:0030-drops-165648},
  doi =		{10.4230/LIPIcs.CCC.2022.2},
  annote =	{Keywords: Satisfaction probability, majority k-sat, kernelization, well orderings, locality}
}
Document
An FPT-Algorithm for Longest Common Subsequence Parameterized by the Maximum Number of Deletions

Authors: Laurent Bulteau, Mark Jones, Rolf Niedermeier, and Till Tantau

Published in: LIPIcs, Volume 223, 33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022)


Abstract
In the NP-hard Longest Common Subsequence problem (LCS), given a set of strings, the task is to find a string that can be obtained from every input string using as few deletions as possible. LCS is one of the most fundamental string problems with numerous applications in various areas, having gained a lot of attention in the algorithms and complexity research community. Significantly improving on an algorithm by Irving and Fraser [CPM'92], featured as a research challenge in a 2014 survey paper, we show that LCS is fixed-parameter tractable (FPT) when parameterized by the maximum number of deletions per input string. Given the relatively moderate running time of our algorithm (linear time when the parameter is a constant) and small parameter values to be expected in several applications, we believe that our purely theoretical analysis could finally pave the way to a new, exact and practically useful algorithm for this notoriously hard string problem.
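The paper treats an arbitrary number of input strings; the sketch below only illustrates, for the special case of two strings, what "parameterized by the maximum number of deletions" means: a search tree that branches only when a deletion is forced, so the running time is linear in the string lengths once the deletion budgets are fixed. The function name and interface are made up for this illustration, and this is not the paper's algorithm.

def common_within_deletions(s, t, ks, kt):
    """Is there a common subsequence reachable from s with at most ks
    deletions and from t with at most kt deletions?  Branching happens
    only on mismatches, so at most 2^(ks+kt) leaves are explored."""
    while s and t and s[0] == t[0]:   # matching equal front characters is always safe
        s, t = s[1:], t[1:]
    if not s or not t:                # whatever remains of the other string must be deleted
        return len(s) <= ks and len(t) <= kt
    # The two front characters differ, so at least one of them must be deleted.
    return (ks > 0 and common_within_deletions(s[1:], t, ks - 1, kt)) or \
           (kt > 0 and common_within_deletions(s, t[1:], ks, kt - 1))

print(common_within_deletions("abcde", "axcye", 2, 2))   # True  (common subsequence "ace")
print(common_within_deletions("abcde", "axcye", 1, 1))   # False (would need a common subsequence of length 4)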

Cite as

Laurent Bulteau, Mark Jones, Rolf Niedermeier, and Till Tantau. An FPT-Algorithm for Longest Common Subsequence Parameterized by the Maximum Number of Deletions. In 33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 223, pp. 6:1-6:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bulteau_et_al:LIPIcs.CPM.2022.6,
  author =	{Bulteau, Laurent and Jones, Mark and Niedermeier, Rolf and Tantau, Till},
  title =	{{An FPT-Algorithm for Longest Common Subsequence Parameterized by the Maximum Number of Deletions}},
  booktitle =	{33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022)},
  pages =	{6:1--6:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-234-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{223},
  editor =	{Bannai, Hideo and Holub, Jan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2022.6},
  URN =		{urn:nbn:de:0030-drops-161338},
  doi =		{10.4230/LIPIcs.CPM.2022.6},
  annote =	{Keywords: NP-hard string problems, multiple sequence alignment, center string, parameterized complexity, search tree algorithms, enumerative algorithms}
}
Document
Dynamic Kernels for Hitting Sets and Set Packing

Authors: Max Bannach, Zacharias Heinrich, Rüdiger Reischuk, and Till Tantau

Published in: LIPIcs, Volume 214, 16th International Symposium on Parameterized and Exact Computation (IPEC 2021)


Abstract
Computing small kernels for the hitting set problem is a well-studied computational problem where we are given a hypergraph with n vertices and m hyperedges, each of size d for some small constant d, and a parameter k. The task is to compute a new hypergraph, called a kernel, whose size is polynomial with respect to the parameter k and which has a size-k hitting set if, and only if, the original hypergraph has one. State-of-the-art algorithms compute kernels of size k^d (which is a polynomial kernel size as d is a constant), and they do so in time m⋅ 2^d poly(d) for a small polynomial poly(d) (which is a linear runtime as d is again a constant). We generalize this task to the dynamic setting where hyperedges may continuously be added or deleted and one constantly has to keep track of a size-k^d hitting set kernel in memory (including moments when no size-k hitting set exists). This paper presents a deterministic solution with worst-case time 3^d poly(d) for updating the kernel upon hyperedge inserts and time 5^d poly(d) for updates upon deletions. These bounds nearly match the time 2^d poly(d) needed by the best static algorithm per hyperedge. Let us stress that for constant d our algorithm maintains a dynamic hitting set kernel with constant, deterministic, worst-case update time that is independent of n, m, and the parameter k. As a consequence, we also get a deterministic dynamic algorithm for keeping track of size-k hitting sets in d-hypergraphs with update times O(1) and query times O(c^k) where c = d - 1 + O(1/d) equals the best base known for the static setting.

Cite as

Max Bannach, Zacharias Heinrich, Rüdiger Reischuk, and Till Tantau. Dynamic Kernels for Hitting Sets and Set Packing. In 16th International Symposium on Parameterized and Exact Computation (IPEC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 214, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2021.7,
  author =	{Bannach, Max and Heinrich, Zacharias and Reischuk, R\"{u}diger and Tantau, Till},
  title =	{{Dynamic Kernels for Hitting Sets and Set Packing}},
  booktitle =	{16th International Symposium on Parameterized and Exact Computation (IPEC 2021)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-216-7},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{214},
  editor =	{Golovach, Petr A. and Zehavi, Meirav},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2021.7},
  URN =		{urn:nbn:de:0030-drops-153900},
  doi =		{10.4230/LIPIcs.IPEC.2021.7},
  annote =	{Keywords: Kernelization, Dynamic Algorithms, Hitting Set, Set Packings}
}
Document
Computational Complexity of Discrete Problems (Dagstuhl Seminar 21121)

Authors: Anna Gál, Meena Mahajan, Rahul Santhanam, and Till Tantau

Published in: Dagstuhl Reports, Volume 11, Issue 2 (2021)


Abstract
This report documents the program and activities of Dagstuhl Seminar 21121 "Computational Complexity of Discrete Problems," which was held online in March 2021. Starting with a description of the organization of the online meeting and the topics covered, we then list the different talks given during the seminar in alphabetical order of speakers, followed by the abstracts of the talks, including the main references and relevant sources where applicable. Despite the fact that only a compressed daily time slot was available for the seminar with participants from time zones spanning the whole globe and despite the fact that informal discussions were harder to hold than in a typical on-site seminar, the rate of participation throughout the seminar was very high and many lively scientific debates were held.

Cite as

Anna Gál, Meena Mahajan, Rahul Santhanam, and Till Tantau. Computational Complexity of Discrete Problems (Dagstuhl Seminar 21121). In Dagstuhl Reports, Volume 11, Issue 2, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Article{gal_et_al:DagRep.11.2.1,
  author =	{G\'{a}l, Anna and Mahajan, Meena and Santhanam, Rahul and Tantau, Till},
  title =	{{Computational Complexity of Discrete Problems (Dagstuhl Seminar 21121)}},
  pages =	{1--16},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2021},
  volume =	{11},
  number =	{2},
  editor =	{G\'{a}l, Anna and Mahajan, Meena and Santhanam, Rahul and Tantau, Till},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.11.2.1},
  URN =		{urn:nbn:de:0030-drops-146836},
  doi =		{10.4230/DagRep.11.2.1},
  annote =	{Keywords: circuit complexity, communication complexity, computational complexity, lower bounds, randomness}
}
Document
Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time

Authors: Max Bannach, Malte Skambath, and Till Tantau

Published in: LIPIcs, Volume 162, 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020)


Abstract
We analyze a reduction rule for computing kernels for the hitting set problem: In a hypergraph, the link of a set c of vertices consists of all edges that are supersets of c. We call such a set critical if its link has certain easy-to-check size properties. The rule states that the link of a critical c can be replaced by c. It is known that a simple linear-time algorithm for computing hitting set kernels of size (number of edges) at most k^d (k is the hitting set size, d is the maximum edge size) can be derived from this rule. We parallelize this algorithm and obtain the first AC⁰ kernel algorithm that outputs polynomial-size kernels. Previously, such algorithms were not even known for artificial problems. An interesting application of our methods lies in traditional, non-parameterized approximation theory: Our results imply that uniform AC⁰-circuits can compute a hitting set whose size is polynomial in the size of an optimal hitting set.
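The reduction rule itself is easy to state in code. The sketch below is written for this listing (all names are made up, and the criticality test is a caller-supplied placeholder because the abstract only describes it as an "easy-to-check size property"); it shows the link computation and the replacement step, but not the parallel AC⁰ implementation that is the point of the paper.

def link(hyperedges, c):
    """Link of a vertex set c: all hyperedges that are supersets of c."""
    c = frozenset(c)
    return [e for e in hyperedges if c <= set(e)]

def apply_reduction_rule(hyperedges, c, is_critical):
    """If c is critical (is_critical is a placeholder for the paper's
    easy-to-check size property of the link), replace the entire link
    of c by the single edge c."""
    lnk = link(hyperedges, c)
    if not is_critical(c, lnk):
        return hyperedges
    lnk_set = {frozenset(e) for e in lnk}
    kept = [set(e) for e in hyperedges if frozenset(e) not in lnk_set]
    return kept + [set(c)]

H = [{1, 2, 3}, {1, 2, 4}, {1, 5, 6}, {2, 5, 7}]
rule = lambda c, lnk: len(lnk) >= 3            # placeholder threshold, not the paper's
print(apply_reduction_rule(H, {1}, rule))      # [{2, 5, 7}, {1}]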

Cite as

Max Bannach, Malte Skambath, and Till Tantau. Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time. In 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 162, pp. 9:1-9:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.SWAT.2020.9,
  author =	{Bannach, Max and Skambath, Malte and Tantau, Till},
  title =	{{Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time}},
  booktitle =	{17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020)},
  pages =	{9:1--9:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-150-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{162},
  editor =	{Albers, Susanne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2020.9},
  URN =		{urn:nbn:de:0030-drops-122566},
  doi =		{10.4230/LIPIcs.SWAT.2020.9},
  annote =	{Keywords: Kernelization, Approximation, Hitting Set, Constant-Depth Circuits}
}
Document
Computational Complexity of Discrete Problems (Dagstuhl Seminar 19121)

Authors: Anna Gál, Rahul Santhanam, and Till Tantau

Published in: Dagstuhl Reports, Volume 9, Issue 3 (2019)


Abstract
The following report archives the presentations and activities of the March 2019 Dagstuhl Seminar 19121 "Computational Complexity of Discrete Problems". Section 1 summarizes the topics and some specific results offered in selected talks during the course of the week. Section 2 provides a table of contents, listing each of the talks given in alphabetical order. Section 3 contains the abstracts, indicating both the main reference and other relevant sources (where applicable) to allow the reader to investigate the topics further.

Cite as

Anna Gál, Rahul Santhanam, and Till Tantau. Computational Complexity of Discrete Problems (Dagstuhl Seminar 19121). In Dagstuhl Reports, Volume 9, Issue 3, pp. 64-82, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{gal_et_al:DagRep.9.3.64,
  author =	{G\'{a}l, Anna and Santhanam, Rahul and Tantau, Till},
  title =	{{Computational Complexity of Discrete Problems (Dagstuhl Seminar 19121)}},
  pages =	{64--82},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{9},
  number =	{3},
  editor =	{G\'{a}l, Anna and Santhanam, Rahul and Tantau, Till},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.9.3.64},
  URN =		{urn:nbn:de:0030-drops-112920},
  doi =		{10.4230/DagRep.9.3.64},
  annote =	{Keywords: circuit complexity, communication complexity, computational complexity, parametrisation, randomness}
}
Document
On the Descriptive Complexity of Color Coding

Authors: Max Bannach and Till Tantau

Published in: LIPIcs, Volume 126, 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)


Abstract
Color coding is an algorithmic technique used in parameterized complexity theory to detect "small" structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing - purely in terms of the syntactic structure of describing formulas - when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in fpt just because of the way they are commonly described using logical formulas.
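For readers who have not seen it, the color coding technique that the paper analyzes can be recalled with a short sequential sketch: the textbook randomized procedure, in the style of Alon, Yuster, and Zwick, for deciding whether a graph contains a simple path on k vertices. The code below is written for this listing (names and interface are ours) and is unrelated to the circuit-level and logical constructions in the paper.

import random

def has_k_path_one_round(adj, k):
    """One round of color coding: color every vertex with one of k colors
    uniformly at random, then use dynamic programming over color sets to
    decide whether some path is colorful (uses each color exactly once).
    Colorful walks are automatically simple paths."""
    color = {v: random.randrange(k) for v in adj}
    # reachable[v] = color sets S such that some colorful path ending in v
    # uses exactly the colors in S
    reachable = {v: {frozenset([color[v]])} for v in adj}
    for _ in range(k - 1):
        new = {v: set() for v in adj}
        for u in adj:
            for v in adj[u]:
                for S in reachable[u]:
                    if color[v] not in S:
                        new[v].add(S | {color[v]})
        for v in adj:
            reachable[v] |= new[v]
    return any(len(S) == k for v in adj for S in reachable[v])

def has_k_path(adj, k, rounds=200):
    """Repeat enough rounds so that an existing k-path is colorful in at
    least one round with high probability."""
    return any(has_k_path_one_round(adj, k) for _ in range(rounds))

# Undirected 5-cycle as adjacency lists; it contains a path on 4 vertices.
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(has_k_path(C5, 4))   # True (with high probability)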

Cite as

Max Bannach and Till Tantau. On the Descriptive Complexity of Color Coding. In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 11:1-11:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.STACS.2019.11,
  author =	{Bannach, Max and Tantau, Till},
  title =	{{On the Descriptive Complexity of Color Coding}},
  booktitle =	{36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)},
  pages =	{11:1--11:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-100-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{126},
  editor =	{Niedermeier, Rolf and Paul, Christophe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2019.11},
  URN =		{urn:nbn:de:0030-drops-102509},
  doi =		{10.4230/LIPIcs.STACS.2019.11},
  annote =	{Keywords: color coding, descriptive complexity, fixed-parameter tractability, quantifier elimination, para-AC^0}
}
Document
Computing Kernels in Parallel: Lower and Upper Bounds

Authors: Max Bannach and Till Tantau

Published in: LIPIcs, Volume 115, 13th International Symposium on Parameterized and Exact Computation (IPEC 2018)


Abstract
Parallel fixed-parameter tractability studies how parameterized problems can be solved in parallel. A surprisingly large number of parameterized problems admit a high level of parallelization, but this does not mean that we can also efficiently compute small problem kernels in parallel: known kernelization algorithms are typically highly sequential. In the present paper, we establish a number of upper and lower bounds concerning the sizes of kernels that can be computed in parallel. An intriguing finding is that there are complex trade-offs between kernel size and the depth of the circuits needed to compute them: For the vertex cover problem, an exponential kernel can be computed by AC^0-circuits, a quadratic kernel by TC^0-circuits, and a linear kernel by randomized NC-circuits with derandomization being possible only if it is also possible for the matching problem. Other natural problems for which similar (but quantitatively different) effects can be observed include tree decomposition problems parameterized by the vertex cover number, the undirected feedback vertex set problem, the matching problem, or the point line cover problem. We also present natural problems for which computing kernels is inherently sequential.
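To recall what "computing a kernel" means operationally, here is the classical sequential Buss kernelization for vertex cover in a few lines of Python (our own illustration: the rule is folklore, the function name is made up, and the paper's contribution concerns circuit-depth bounds for such computations, not this sequential code). Note that the rule needs vertex degrees, which is exactly the kind of counting that constant-depth circuits without threshold gates cannot do.

def buss_kernel(edges, k):
    """Classical Buss rule: a vertex of degree greater than k must belong to
    every vertex cover of size at most k, so take it and remove its edges.
    Afterwards the instance has at most k*k edges or is a provable no-instance.
    Returns (reduced edge set, remaining budget) or None for 'no'."""
    edges = {frozenset(e) for e in edges}
    while k >= 0:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = next((v for v, d in degree.items() if d > k), None)
        if high is None:
            break
        edges = {e for e in edges if high not in e}   # take high into the cover
        k -= 1
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k

# Path a-b-c-d-e with a star attached to c.
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"),
     ("c", "x1"), ("c", "x2"), ("c", "x3")]
print(buss_kernel(E, 3))   # kernel with the two edges {a,b} and {d,e}, budget 2
print(buss_kernel(E, 2))   # None: there is no vertex cover of size 2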

Cite as

Max Bannach and Till Tantau. Computing Kernels in Parallel: Lower and Upper Bounds. In 13th International Symposium on Parameterized and Exact Computation (IPEC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 115, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2018.13,
  author =	{Bannach, Max and Tantau, Till},
  title =	{{Computing Kernels in Parallel: Lower and Upper Bounds}},
  booktitle =	{13th International Symposium on Parameterized and Exact Computation (IPEC 2018)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-084-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{115},
  editor =	{Paul, Christophe and Pilipczuk, Michal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2018.13},
  URN =		{urn:nbn:de:0030-drops-102148},
  doi =		{10.4230/LIPIcs.IPEC.2018.13},
  annote =	{Keywords: parallel computation, fixed-parameter tractability, kernelization}
}
Document
Computing Hitting Set Kernels By AC^0-Circuits

Authors: Max Bannach and Till Tantau

Published in: LIPIcs, Volume 96, 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)


Abstract
Given a hypergraph H = (V,E), what is the smallest subset X of V such that e and X are not disjoint for all e in E? This problem, known as the hitting set problem, is a basic problem in parameterized complexity theory. There are well-known kernelization algorithms for it, which get a hypergraph H and a number k as input and output a hypergraph H' such that (1) H has a hitting set of size k if, and only if, H' has such a hitting set and (2) the size of H' depends only on k and on the maximum cardinality d of edges in H. The algorithms run in polynomial time, but are highly sequential. Recently, it has been shown that one of them can be parallelized to a certain degree: one can compute hitting set kernels in parallel time O(d) - but it was conjectured that this is the best parallel algorithm possible. We refute this conjecture and show how hitting set kernels can be computed in constant parallel time. For our proof, we introduce a new, generalized notion of hypergraph sunflowers and show how iterated applications of the color coding technique can sometimes be collapsed into a single application.

Cite as

Max Bannach and Till Tantau. Computing Hitting Set Kernels By AC^0-Circuits. In 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 96, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.STACS.2018.9,
  author =	{Bannach, Max and Tantau, Till},
  title =	{{Computing Hitting Set Kernels By AC^0-Circuits}},
  booktitle =	{35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-062-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{96},
  editor =	{Niedermeier, Rolf and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2018.9},
  URN =		{urn:nbn:de:0030-drops-84998},
  doi =		{10.4230/LIPIcs.STACS.2018.9},
  annote =	{Keywords: parallel computation, fixed-parameter tractability, kernelization}
}
Document
Computational Complexity of Discrete Problems (Dagstuhl Seminar 17121)

Authors: Anna Gál, Michal Koucký, Oded Regev, and Till Tantau

Published in: Dagstuhl Reports, Volume 7, Issue 3 (2017)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 17121 "Computational Complexity of Discrete Problems". The first section gives an overview of the topics covered and the organization of the meeting. Section 2 lists the talks given in alphabetical order. The last section contains the abstracts of the talks.

Cite as

Anna Gál, Michal Koucký, Oded Regev, and Till Tantau. Computational Complexity of Discrete Problems (Dagstuhl Seminar 17121). In Dagstuhl Reports, Volume 7, Issue 3, pp. 45-69, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Article{gal_et_al:DagRep.7.3.45,
  author =	{G\'{a}l, Anna and Kouck\'{y}, Michal and Regev, Oded and Tantau, Till},
  title =	{{Computational Complexity of Discrete Problems (Dagstuhl Seminar 17121)}},
  pages =	{45--69},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2017},
  volume =	{7},
  number =	{3},
  editor =	{G\'{a}l, Anna and Kouck\'{y}, Michal and Regev, Oded and Tantau, Till},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.7.3.45},
  URN =		{urn:nbn:de:0030-drops-73611},
  doi =		{10.4230/DagRep.7.3.45},
  annote =	{Keywords: Computational Complexity}
}
Document
Applications of Algorithmic Metatheorems to Space Complexity and Parallelism (Invited Talk)

Authors: Till Tantau

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
Algorithmic metatheorems state that if a problem can be described in a certain logic and the inputs are structured in a certain way, then the problem can be solved with a certain amount of resources. As an example, by Courcelle's Theorem all monadic second-order ("in a certain logic") properties of graphs of bounded tree width ("structured in a certain way") can be solved in linear time ("with a certain amount of resources"). Such theorems have become a valuable tool in algorithmics: If a problem happens to have the right structure and can be described in the right logic, they immediately yield a (typically tight) upper bound on the time complexity of the problem. Perhaps even more importantly, several complex algorithms rely on algorithmic metatheorems internally to solve subproblems, which considerably broadens the range of applications of these theorems. The talk is intended as a gentle introduction to the ideas behind algorithmic metatheorems, especially behind some recent results concerning space classes and parallel computation, and tries to give a flavor of the range of their applications.
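As a concrete example of the kind of property covered by Courcelle's Theorem (a standard textbook formula, not one from the talk), 3-colorability is definable in monadic second-order logic and is therefore decidable in linear time on graphs of bounded tree width:

\[ \exists R\,\exists G\,\exists B\;\Bigl( \forall x\,\bigl(R(x) \lor G(x) \lor B(x)\bigr) \;\land\; \forall x\,\forall y\,\bigl(E(x,y) \rightarrow \neg(R(x)\land R(y)) \land \neg(G(x)\land G(y)) \land \neg(B(x)\land B(y))\bigr) \Bigr) \]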

Cite as

Till Tantau. Applications of Algorithmic Metatheorems to Space Complexity and Parallelism (Invited Talk). In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 4:1-4:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{tantau:LIPIcs.STACS.2017.4,
  author =	{Tantau, Till},
  title =	{{Applications of Algorithmic Metatheorems to Space Complexity and Parallelism}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{4:1--4:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.4},
  URN =		{urn:nbn:de:0030-drops-70303},
  doi =		{10.4230/LIPIcs.STACS.2017.4},
  annote =	{Keywords: Algorithmic metatheorems, Courcelle’s Theorem, tree width, monadic second-order logic, logarithmic space, parallel computations}
}
Document
Parallel Multivariate Meta-Theorems

Authors: Max Bannach and Till Tantau

Published in: LIPIcs, Volume 63, 11th International Symposium on Parameterized and Exact Computation (IPEC 2016)


Abstract
Fixed-parameter tractability is based on the observation that many hard problems become tractable even on large inputs as long as certain input parameters are small. Originally, "tractable" just meant "solvable in polynomial time," but especially modern hardware raises the question of whether we can also achieve "solvable in polylogarithmic parallel time." A framework for this study of parallel fixed-parameter tractability is available and a number of isolated algorithmic results have been obtained in recent years, but one of the unifying core tools of classical FPT theory has been missing: algorithmic meta-theorems. We establish two such theorems by giving new upper bounds on the circuit depth necessary to solve the model checking problem for monadic second-order logic, once parameterized by the tree width and the formula (this is a parallel version of Courcelle's Theorem) and once by the tree depth and the formula. For our proofs we refine the analysis of earlier algorithms, especially of Bodlaender's, but also need to add new ideas, especially in the context where the parallel runtime is bounded by a function of the parameter and does not depend on the length of the input.

Cite as

Max Bannach and Till Tantau. Parallel Multivariate Meta-Theorems. In 11th International Symposium on Parameterized and Exact Computation (IPEC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 63, pp. 4:1-4:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2016.4,
  author =	{Bannach, Max and Tantau, Till},
  title =	{{Parallel Multivariate Meta-Theorems}},
  booktitle =	{11th International Symposium on Parameterized and Exact Computation (IPEC 2016)},
  pages =	{4:1--4:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-023-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{63},
  editor =	{Guo, Jiong and Hermelin, Danny},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2016.4},
  URN =		{urn:nbn:de:0030-drops-69227},
  doi =		{10.4230/LIPIcs.IPEC.2016.4},
  annote =	{Keywords: Parallel computation, FPT, meta-theorems, tree width, tree depth}
}
Document
Fast Parallel Fixed-parameter Algorithms via Color Coding

Authors: Max Bannach, Christoph Stockhusen, and Till Tantau

Published in: LIPIcs, Volume 43, 10th International Symposium on Parameterized and Exact Computation (IPEC 2015)


Abstract
Fixed-parameter algorithms have been successfully applied to solve numerous difficult problems within acceptable time bounds on large inputs. However, most fixed-parameter algorithms are inherently sequential and, thus, make no use of the parallel hardware present in modern computers. We show that parallel fixed-parameter algorithms do not only exist for numerous parameterized problems from the literature - including vertex cover, packing problems, cluster editing, cutting vertices, finding embeddings, or finding matchings - but that there are parallel algorithms working in constant time or at least in time depending only on the parameter (and not on the size of the input) for these problems. Phrased in terms of complexity classes, we place numerous natural parameterized problems in parameterized versions of AC^0. On a more technical level, we show how the color coding method can be implemented in constant time and apply it to embedding problems for graphs of bounded tree-width or tree-depth and to model checking first-order formulas in graphs of bounded degree.

Cite as

Max Bannach, Christoph Stockhusen, and Till Tantau. Fast Parallel Fixed-parameter Algorithms via Color Coding. In 10th International Symposium on Parameterized and Exact Computation (IPEC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 43, pp. 224-235, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.IPEC.2015.224,
  author =	{Bannach, Max and Stockhusen, Christoph and Tantau, Till},
  title =	{{Fast Parallel Fixed-parameter Algorithms via Color Coding}},
  booktitle =	{10th International Symposium on Parameterized and Exact Computation (IPEC 2015)},
  pages =	{224--235},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-92-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{43},
  editor =	{Husfeldt, Thore and Kanj, Iyad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2015.224},
  URN =		{urn:nbn:de:0030-drops-55857},
  doi =		{10.4230/LIPIcs.IPEC.2015.224},
  annote =	{Keywords: color coding, parallel computation, fixed-parameter tractability, graph packing, cutting $\ell$ vertices, cluster editing, tree-width, tree-depth}
}
Document
Existential Second-order Logic over Graphs: A Complete Complexity-theoretic Classification

Authors: Till Tantau

Published in: LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)


Abstract
Descriptive complexity theory aims at inferring a problem's computational complexity from the syntactic complexity of its description. A cornerstone of this theory is Fagin's Theorem, by which a property is expressible in existential second-order logic (ESO logic) if, and only if, it is in NP. A natural question, from the theory's point of view, is which syntactic fragments of ESO logic also still characterize NP. Research on this question has culminated in a dichotomy result by Gottlob, Kolaitis, and Schwentick: for each possible quantifier prefix of an ESO formula, the resulting prefix class over graphs either contains an NP-complete problem or is contained in P. However, the exact complexity of the prefix classes inside P remained elusive. In the present paper, we clear up the picture by showing that for each prefix class of ESO logic, its reduction closure under first-order reductions is either FO, L, NL, or NP. For undirected self-loop-free graphs two containment results are especially challenging to prove: containment in L for the prefix ∃R_1 ⋯ ∃R_n ∀x∃y and containment in FO for the prefix ∃M ∀x∃y for monadic M. The complex argument by Gottlob et al. concerning polynomial time needs to be carefully reexamined and either combined with the logspace version of Courcelle's Theorem or directly improved to first-order computations. A different challenge is posed by formulas with the prefix ∃M ∀x∀y, which we show to express special constraint satisfaction problems that lie in L.
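To make the last prefix class concrete, a typical ∃M ∀x∀y property with a monadic M is 2-colorability (this example is ours and chosen to be consistent with the abstract; it need not be one used in the paper): the formula below expresses that the graph is bipartite, a constraint-satisfaction-flavored property that is indeed decidable in logarithmic space.

\[ \exists M\;\forall x\,\forall y\;\bigl(E(x,y) \rightarrow (M(x) \leftrightarrow \neg M(y))\bigr) \]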

Cite as

Till Tantau. Existential Second-order Logic over Graphs: A Complete Complexity-theoretic Classification. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 703-715, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{tantau:LIPIcs.STACS.2015.703,
  author =	{Tantau, Till},
  title =	{{Existential Second-order Logic over Graphs: A Complete Complexity-theoretic Classification}},
  booktitle =	{32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)},
  pages =	{703--715},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-78-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{30},
  editor =	{Mayr, Ernst W. and Ollinger, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.703},
  URN =		{urn:nbn:de:0030-drops-49524},
  doi =		{10.4230/LIPIcs.STACS.2015.703},
  annote =	{Keywords: existential second-order logic, descriptive complexity, logarithmic space}
}
Document
Algorithmic Meta Theorems for Circuit Classes of Constant and Logarithmic Depth

Authors: Michael Elberfeld, Andreas Jakoby, and Till Tantau

Published in: LIPIcs, Volume 14, 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012)


Abstract
An algorithmic meta theorem for a logic and a class C of structures states that all problems expressible in this logic can be solved efficiently for inputs from C. The prime example is Courcelle's Theorem, which states that monadic second-order (MSO) definable problems are linear-time solvable on graphs of bounded tree width. We contribute new algorithmic meta theorems, which state that MSO-definable problems are (a) solvable by uniform constant-depth circuit families (AC0 for decision problems and TC0 for counting problems) when restricted to input structures of bounded tree depth and (b) solvable by uniform logarithmic-depth circuit families (NC1 for decision problems and #NC1 for counting problems) when a tree decomposition of bounded width in term representation is part of the input. Applications of our theorems include a TC0-completeness proof for the unary version of integer linear programming with a fixed number of equations and extensions of a recent result that counting the number of accepting paths of a visible pushdown automaton lies in #NC1. Our main technical contributions are a new tree automata model for unordered, unranked, labeled trees; a method for representing the tree automata's computations algebraically using convolution circuits; and a lemma on computing balanced width-3 tree decompositions of trees in TC0, which encapsulates most of the technical difficulties surrounding earlier results connecting tree automata and NC1.

Cite as

Michael Elberfeld, Andreas Jakoby, and Till Tantau. Algorithmic Meta Theorems for Circuit Classes of Constant and Logarithmic Depth. In 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 14, pp. 66-77, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{elberfeld_et_al:LIPIcs.STACS.2012.66,
  author =	{Elberfeld, Michael and Jakoby, Andreas and Tantau, Till},
  title =	{{Algorithmic Meta Theorems for Circuit Classes of Constant and Logarithmic Depth}},
  booktitle =	{29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012)},
  pages =	{66--77},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-35-4},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{14},
  editor =	{D\"{u}rr, Christoph and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2012.66},
  URN =		{urn:nbn:de:0030-drops-34059},
  doi =		{10.4230/LIPIcs.STACS.2012.66},
  annote =	{Keywords: algorithmic meta theorem, monadic second-order logic, circuit complexity, tree width, tree depth}
}
Document
Smoothed Analysis of Binary Search Trees and Quicksort Under Additive Noise

Authors: Bodo Manthey and Till Tantau

Published in: Dagstuhl Seminar Proceedings, Volume 7391, Probabilistic Methods in the Design and Analysis of Algorithms (2007)


Abstract
While the height of binary search trees is linear in the worst case, their average height is logarithmic. We investigate what happens in between, i.e., when the randomness is limited, by analyzing the smoothed height of binary search trees: Randomly perturb a given (adversarial) sequence and then take the expected height of the binary search tree generated by the resulting sequence. As perturbation models, we consider partial permutations, where some elements are randomly permuted, and additive noise, where random numbers are added to the adversarial sequence. We prove tight bounds for the smoothed height of binary search trees under these models. We also obtain tight bounds for the smoothed number of left-to-right maxima. Furthermore, we exploit the results obtained to get bounds for the smoothed number of comparisons that quicksort needs.
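A small simulation makes the additive-noise model concrete (this is an illustration written for this listing, not code or analysis from the paper; all names are ours): perturb an adversarial sequence by adding independent uniform noise of magnitude d, build the binary search tree, and average the height over several trials.

import random

def bst_height(seq):
    """Height (nodes on the longest root-to-leaf path) of the binary search
    tree obtained by inserting the elements of seq in the given order."""
    nodes = []                 # each node is [key, left_index, right_index]
    height = 0
    for x in seq:
        if not nodes:
            nodes.append([x, None, None])
            height = 1
            continue
        i, depth = 0, 1
        while True:
            side = 1 if x < nodes[i][0] else 2
            if nodes[i][side] is None:
                nodes.append([x, None, None])
                nodes[i][side] = len(nodes) - 1
                height = max(height, depth + 1)
                break
            i, depth = nodes[i][side], depth + 1
    return height

def smoothed_height(adversarial, d, trials=100):
    """Average height when every element is perturbed by an independent
    uniform random number from [0, d] (the additive-noise model)."""
    total = 0
    for _ in range(trials):
        total += bst_height([x + random.uniform(0, d) for x in adversarial])
    return total / trials

n = 1000
worst_case = list(range(n))                   # sorted input: height n without noise
print(bst_height(worst_case))                 # 1000
print(smoothed_height(worst_case, d=50.0))    # much smaller than 1000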

Cite as

Bodo Manthey and Till Tantau. Smoothed Analysis of Binary Search Trees and Quicksort Under Additive Noise. In Probabilistic Methods in the Design and Analysis of Algorithms. Dagstuhl Seminar Proceedings, Volume 7391, pp. 1-19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2007)


BibTeX

@InProceedings{manthey_et_al:DagSemProc.07391.3,
  author =	{Manthey, Bodo and Tantau, Till},
  title =	{{Smoothed Analysis of Binary Search Trees and Quicksort Under Additive Noise}},
  booktitle =	{Probabilistic Methods in the Design and Analysis of Algorithms},
  pages =	{1--19},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2007},
  volume =	{7391},
  editor =	{Dietzfelbinger, Martin and Teng, Shang-Hua and Upfal, Eli and V\"{o}cking, Berthold},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.07391.3},
  URN =		{urn:nbn:de:0030-drops-12893},
  doi =		{10.4230/DagSemProc.07391.3},
  annote =	{Keywords: Smoothed Analysis, Binary Search Trees, Quicksort, Left-to-right Maxima}
}
Document
Computing Shortest Paths in Series-Parallel Graphs in Logarithmic Space

Authors: Andreas Jakoby and Till Tantau

Published in: Dagstuhl Seminar Proceedings, Volume 6111, Complexity of Boolean Functions (2006)


Abstract
Series-parallel graphs, which are built by repeatedly applying series or parallel composition operations to paths, play an important role in computer science as they model the flow of information in many types of programs. For directed series-parallel graphs, we study the problem of finding a shortest path between two given vertices. Our main result is that we can find such a path in logarithmic space, which shows that the distance problem for series-parallel graphs is L-complete. Previously, it was known that one can compute some path in logarithmic space; but for other graph types, like undirected graphs or tournament graphs, constructing some path between given vertices is possible in logarithmic space while constructing a shortest path is NL-complete.

Cite as

Andreas Jakoby and Till Tantau. Computing Shortest Paths in Series-Parallel Graphs in Logarithmic Space. In Complexity of Boolean Functions. Dagstuhl Seminar Proceedings, Volume 6111, pp. 1-9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{jakoby_et_al:DagSemProc.06111.6,
  author =	{Jakoby, Andreas and Tantau, Till},
  title =	{{Computing Shortest Paths in Series-Parallel Graphs in Logarithmic Space}},
  booktitle =	{Complexity of Boolean Functions},
  pages =	{1--9},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{6111},
  editor =	{Krause, Matthias and Pudl\'{a}k, Pavel and Reischuk, R\"{u}diger and van Melkebeek, Dieter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.06111.6},
  URN =		{urn:nbn:de:0030-drops-6185},
  doi =		{10.4230/DagSemProc.06111.6},
  annote =	{Keywords: Series-parallel graphs, shortest path, logspace}
}