42 Search Results for "Bringmann, Karl"


Document
The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds

Authors: Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets. We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning the detection of an s-t-walk with a prescribed color sequence in a given edge- or node-colored graph. For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications:
- It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis.
- It gives a tight (n+nm^{1/3})^{1-o(1)} lower bound for the Word Break problem on strings of length n and dictionaries of total size m.
- It implies the popular OMv hypothesis. Since the NFA Acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems.
Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a refutation of the NFA Acceptance hypothesis may lead the way to attacking the current barriers observed for Context-Free Language Reachability, the Word Break problem, and the growing list of dynamic problems proven hard under the OMv hypothesis.
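To make the hypothesis concrete, the following is a minimal sketch of the textbook NFA Acceptance algorithm it refers to: maintain the set of currently reachable states and advance it letter by letter, in O(n·m) time for a word of length n and an NFA with m transitions. All names and the transition encoding are our own illustrative choices; ε-transitions are omitted:

```python
def nfa_accepts(transitions, start, accepting, word):
    """transitions: dict mapping (state, symbol) -> set of successor states."""
    current = {start}
    for symbol in word:
        # One step of the textbook simulation: follow all transitions
        # labeled with the current symbol from all reachable states.
        current = {q for p in current for q in transitions.get((p, symbol), ())}
        if not current:  # no reachable state left: reject early
            return False
    return bool(current & accepting)

# Example: NFA over {0,1} accepting exactly the words that end in 1.
trans = {(0, '0'): {0}, (0, '1'): {0, 1}}
assert nfa_accepts(trans, start=0, accepting={1}, word="0101")
assert not nfa_accepts(trans, start=0, accepting={1}, word="10")
```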

Cite as

Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen. The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 22:1-22:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ITCS.2024.22,
  author =	{Bringmann, Karl and Gr{\o}nlund, Allan and K\"{u}nnemann, Marvin and Larsen, Kasper Green},
  title =	{{The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{22:1--22:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.22},
  URN =		{urn:nbn:de:0030-drops-195500},
  doi =		{10.4230/LIPIcs.ITCS.2024.22},
  annote =	{Keywords: Fine-grained complexity theory, non-deterministic finite automata}
}
Document
Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution

Authors: Karl Bringmann and Alejandro Cassis

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We revisit the classic 0-1-Knapsack problem, in which we are given n items with their weights and profits as well as a weight budget W, and the goal is to find a subset of items of total weight at most W that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item, p_max, and the largest weight of any item, w_max. Our main results are algorithms for 0-1-Knapsack running in time Õ(n w_max p_max^{2/3}) and Õ(n p_max w_max^{2/3}), improving upon an algorithm in time O(n p_max w_max) by Pisinger [J. Algorithms '99]. In the regime p_max ≈ w_max ≈ n (and W ≈ OPT ≈ n²) our algorithms are the first to break the cubic barrier n³. To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function f : [n] → ℤ is Δ-near convex with Δ ≥ 1, if there is a convex function f̆ such that f̆(i) ≤ f(i) ≤ f̆(i) + Δ for every i. We design an algorithm computing the min-plus convolution of two Δ-near convex functions in time Õ(nΔ). This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
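For reference, the object of the main tool is easy to state in code: the naive O(n²) min-plus convolution below is just the definition, against which the paper's Õ(nΔ) algorithm for Δ-near convex inputs should be compared. Purely illustrative, not the paper's method:

```python
def min_plus_convolution(f, g):
    """Naive O(n^2) reference: h[k] = min over i + j = k of f[i] + g[j]."""
    n, m = len(f), len(g)
    h = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            h[i + j] = min(h[i + j], f[i] + g[j])
    return h

print(min_plus_convolution([0, 2, 5], [0, 1, 4]))  # [0, 1, 3, 6, 9]
```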

Cite as

Karl Bringmann and Alejandro Cassis. Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2023.24,
  author =	{Bringmann, Karl and Cassis, Alejandro},
  title =	{{Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.24},
  URN =		{urn:nbn:de:0030-drops-186776},
  doi =		{10.4230/LIPIcs.ESA.2023.24},
  annote =	{Keywords: Knapsack, Fine-Grained Complexity, Min-Plus Convolution}
}
Document
Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4

Authors: Egor Gorbachev and Marvin Künnemann

Published in: LIPIcs, Volume 258, 39th International Symposium on Computational Geometry (SoCG 2023)


Abstract
Klee’s measure problem (computing the volume of the union of n axis-parallel boxes in ℝ^d) is well known to have n^{d/2 ± o(1)}-time algorithms (Overmars, Yap, SICOMP'91; Chan, FOCS'13). Only recently, a conditional lower bound (without any restriction to "combinatorial" algorithms) could be shown for d = 3 (Künnemann, FOCS'22). Can this result be extended to a tight lower bound for dimensions d ≥ 4? In this paper, we formalize the technique behind the tight lower bound for d = 3 using a combinatorial object we call a prefix covering design. We show that these designs, which are related in spirit to combinatorial designs, directly translate to conditional lower bounds for Klee’s measure problem and various related problems. By devising good prefix covering designs, we give the following lower bounds for Klee’s measure problem in ℝ^d, the depth problem for axis-parallel boxes in ℝ^d, the largest-volume/max-perimeter empty (anchored) box problem in ℝ^{2d}, and related problems, assuming the 3-uniform hyperclique hypothesis:
- Ω(n^{1.90476}) for d = 4,
- Ω(n^{2.22222}) for d = 5,
- Ω(n^{d/3 + 2√d/9 - o(√d)}) for general d.
For Klee’s measure problem and the depth problem, these bounds improve the previous lower bounds of Ω(n^{1.777...}), Ω(n^{2.0833...}) and Ω(n^{d/3 + 1/3 + Θ(1/d)}), respectively. Our improved prefix covering designs were obtained by (1) exploiting a computer-aided search using problem-specific insights as well as SAT solvers, and (2) showing how to transform combinatorial covering designs known in the literature into strong prefix covering designs. In contrast, we show that our lower bounds are close to the best possible using this proof technique.
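To pin down the problem statement, here is a brute-force planar (d = 2) baseline via coordinate compression; it is our own toy sketch and has nothing to do with the n^{d/2}-time algorithms or the lower-bound machinery above:

```python
def union_area(boxes):
    """Klee's measure problem in the plane, naively: boxes are (x1, y1, x2, y2)
    with x1 < x2, y1 < y2. Split the plane into an O(n) x O(n) grid of
    elementary cells and sum up the cells covered by some box."""
    xs = sorted({x for b in boxes for x in (b[0], b[2])})
    ys = sorted({y for b in boxes for y in (b[1], b[3])})
    area = 0.0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx, cy = (xs[i] + xs[i+1]) / 2, (ys[j] + ys[j+1]) / 2  # cell center
            if any(b[0] <= cx <= b[2] and b[1] <= cy <= b[3] for b in boxes):
                area += (xs[i+1] - xs[i]) * (ys[j+1] - ys[j])
    return area

print(union_area([(0, 0, 2, 2), (1, 1, 3, 3)]))  # 7.0 = 4 + 4 - 1
```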

Cite as

Egor Gorbachev and Marvin Künnemann. Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4. In 39th International Symposium on Computational Geometry (SoCG 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 258, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gorbachev_et_al:LIPIcs.SoCG.2023.36,
  author =	{Gorbachev, Egor and K\"{u}nnemann, Marvin},
  title =	{{Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4}},
  booktitle =	{39th International Symposium on Computational Geometry (SoCG 2023)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-273-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{258},
  editor =	{Chambers, Erin W. and Gudmundsson, Joachim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2023.36},
  URN =		{urn:nbn:de:0030-drops-178861},
  doi =		{10.4230/LIPIcs.SoCG.2023.36},
  annote =	{Keywords: Fine-grained complexity theory, non-combinatorial lower bounds, computational geometry, clique detection}
}
Document
Computing Generalized Convolutions Faster Than Brute Force

Authors: Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
In this paper, we consider a general notion of convolution. Let D be a finite domain and let Dⁿ be the set of n-length vectors (tuples) over D. Let f : D × D → D be a function and let ⊕_f be the coordinate-wise application of f. The f-Convolution of two functions g, h : Dⁿ → {-M,…,M} is (g ⊛_f h)(v) := ∑_{v_g, v_h ∈ Dⁿ s.t. v = v_g ⊕_f v_h} g(v_g) ⋅ h(v_h) for every v ∈ Dⁿ. This problem generalizes many fundamental convolutions such as Subset Convolution, XOR Product, Covering Product, and Packing Product. For an arbitrary function f and domain D we can compute the f-Convolution via brute-force enumeration in Õ(|D|^{2n} ⋅ polylog(M)) time. Our main result is an improvement over this naive algorithm. We show that the f-Convolution can be computed exactly in Õ((c ⋅ |D|²)ⁿ ⋅ polylog(M)) time for the constant c := 5/6 when D has even cardinality. Our main observation is that a cyclic partition of a function f : D × D → D can be used to speed up the computation of the f-Convolution, and we show that an appropriate cyclic partition exists for every f. Furthermore, we demonstrate that a single entry of the f-Convolution can be computed more efficiently. In this variant, we are given two functions g, h : Dⁿ → {-M,…,M} together with a vector v ∈ Dⁿ, and the task of the f-Query problem is to compute the integer (g ⊛_f h)(v). This is a generalization of the well-known Orthogonal Vectors problem. We show that f-Query can be computed in Õ(|D|^{(ω/2)n} ⋅ polylog(M)) time, where ω ∈ [2, 2.373) is the exponent of the currently fastest matrix multiplication algorithm.
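The brute-force baseline from the abstract can be spelled out directly; the sketch below enumerates all |D|^{2n} pairs of vectors (with D = range(d), an illustrative encoding of our own) and is exactly the naive algorithm the paper improves upon:

```python
from itertools import product

def f_convolution(f, g, h, d, n):
    """f: function D x D -> D with D = range(d); g, h: dicts mapping n-tuples
    over D to integers (missing entries are treated as 0)."""
    out = {}
    for vg in product(range(d), repeat=n):
        for vh in product(range(d), repeat=n):
            v = tuple(f(a, b) for a, b in zip(vg, vh))  # v = vg (+)_f vh
            out[v] = out.get(v, 0) + g.get(vg, 0) * h.get(vh, 0)
    return {v: c for v, c in out.items() if c}

# With d = 2 and f = XOR, this is the XOR Product mentioned in the abstract.
g = {(0, 1): 2, (1, 1): 3}
h = {(1, 0): 5}
print(f_convolution(lambda a, b: a ^ b, g, h, d=2, n=2))  # {(1, 1): 10, (0, 1): 15}
```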

Cite as

Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki. Computing Generalized Convolutions Faster Than Brute Force. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{esmer_et_al:LIPIcs.IPEC.2022.12,
  author =	{Esmer, Bar{\i}\c{s} Can and Kulik, Ariel and Marx, D\'{a}niel and Schepper, Philipp and W\k{e}grzycki, Karol},
  title =	{{Computing Generalized Convolutions Faster Than Brute Force}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.12},
  URN =		{urn:nbn:de:0030-drops-173685},
  doi =		{10.4230/LIPIcs.IPEC.2022.12},
  annote =	{Keywords: Generalized Convolution, Fast Fourier Transform, Fast Subset Convolution}
}
Document
Track A: Algorithms, Complexity and Games
A Structural Investigation of the Approximability of Polynomial-Time Problems

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
An extensive research effort targets optimal (in)approximability results for various NP-hard optimization problems. Notably, the works of (Creignou'95) as well as (Khanna, Sudan, Trevisan, Williamson'00) establish a tight characterization of a large subclass of MaxSNP, namely Boolean MaxCSPs and further variants, in terms of their polynomial-time approximability. Can we obtain similarly encompassing characterizations for classes of polynomial-time optimization problems? To this end, we initiate the systematic study of a recently introduced polynomial-time analogue of MaxSNP, which includes a large number of well-studied problems (including Nearest and Furthest Neighbor in the Hamming metric, Maximum Inner Product, optimization variants of k-XOR, and Maximum k-Cover). Specifically, for each k, MaxSP_k denotes the class of O(m^k)-time problems of the form max_{x_1,…,x_k} #{y : ϕ(x_1,…,x_k,y)}, where ϕ is a quantifier-free first-order property and m denotes the size of the relational structure. Assuming central hypotheses about clique detection in hypergraphs and exact Max-3-SAT, we show that for any MaxSP_k problem definable by a quantifier-free m-edge graph formula ϕ, the best possible approximation guarantee in faster-than-exhaustive-search time O(m^{k-δ}) falls into one of four categories:
- optimizable to exactness in time O(m^{k-δ}),
- an (inefficient) approximation scheme, i.e., a (1+ε)-approximation in time O(m^{k-f(ε)}),
- a (fixed) constant-factor approximation in time O(m^{k-δ}), or
- an nm^ε-approximation in time O(m^{k-f(ε)}).
We obtain an almost complete characterization of these regimes, for MaxSP_k as well as for an analogously defined minimization class MinSP_k. As our main technical contribution, we show how to rule out the existence of approximation schemes for a large class of problems admitting constant-factor approximations, under a hypothesis for exact Sparse Max-3-SAT algorithms posed by (Alman, Vassilevska Williams'20). As general trends for the problems we consider, we observe: (1) exact optimizability has a simple algebraic characterization, (2) only few maximization problems do not admit a constant-factor approximation; these do not even have a subpolynomial-factor approximation, and (3) constant-factor approximation of minimization problems is equivalent to deciding whether the optimum is equal to 0.
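The classification above concerns objectives of the form max_{x_1,…,x_k} #{y : ϕ(x_1,…,x_k,y)}. As a purely illustrative sketch (our own toy instance, not from the paper), the following evaluates such an objective by naive enumeration; over a universe of size m this costs O(m^{k+1}), even above the O(m^k) exhaustive-search baseline the four categories are measured against. It only pins down the objective:

```python
from itertools import product

def max_sp(universe, k, phi):
    """max over k-tuples (x_1, ..., x_k) of the number of witnesses y
    satisfying the quantifier-free property phi(x_1, ..., x_k, y)."""
    return max(sum(1 for y in universe if phi(*xs, y))
               for xs in product(universe, repeat=k))

# Toy instance with k = 2: how many y divide both x_1 and x_2?
print(max_sp(range(1, 9), 2, lambda x1, x2, y: x1 % y == 0 and x2 % y == 0))  # 4
```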

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A Structural Investigation of the Approximability of Polynomial-Time Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 30:1-30:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.30,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Structural Investigation of the Approximability of Polynomial-Time Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{30:1--30:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.30},
  URN =		{urn:nbn:de:0030-drops-163713},
  doi =		{10.4230/LIPIcs.ICALP.2022.30},
  annote =	{Keywords: Classification Theorems, Hardness of Approximation in P, Fine-grained Complexity Theory}
}
Document
Track A: Algorithms, Complexity and Games
Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution

Authors: Karl Bringmann and Alejandro Cassis

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We present new exact and approximation algorithms for 0-1-Knapsack and Unbounded Knapsack:
- Exact Algorithm for 0-1-Knapsack: 0-1-Knapsack has known algorithms running in time Õ(n + min{n ⋅ OPT, n ⋅ W, OPT², W²}) [Bellman '57], where n is the number of items, W is the weight budget, and OPT is the optimal profit. We present an algorithm running in time Õ(n + (W + OPT)^{1.5}). This improves the running time in case n, W, OPT are roughly equal.
- Exact Algorithm for Unbounded Knapsack: Unbounded Knapsack has known algorithms running in time Õ(n + min{n ⋅ p_max, n ⋅ w_max, p_max², w_max²}) [Axiotis, Tzamos '19; Jansen, Rohwedder '19; Chan, He '22], where n is the number of items, w_max is the largest weight of any item, and p_max is the largest profit of any item. We present an algorithm running in time Õ(n + (p_max + w_max)^{1.5}), giving a similar improvement as for 0-1-Knapsack.
- Approximating Unbounded Knapsack with Resource Augmentation: Unbounded Knapsack has a known FPTAS with running time Õ(min{n/ε, n + 1/ε²}) [Jansen, Kraft '18]. We study weak approximation algorithms, which approximate the optimal profit but are allowed to overshoot the weight constraint (i.e., resource augmentation). We present the first approximation scheme for Unbounded Knapsack in this setting, achieving running time Õ(n + 1/ε^{1.5}). Along the way, we also give a simpler FPTAS with a lower-order improvement in the standard setting.
For all of these problem settings the previously known results had matching conditional lower bounds. We avoid these lower bounds in the approximation setting by allowing resource augmentation, and in the exact setting by analyzing the time complexity in terms of weight and profit parameters (instead of only weight or only profit parameters). Our algorithms can be seen as reductions to Min-Plus-Convolution on monotone sequences with bounded entries. These structured instances of Min-Plus-Convolution can be solved in time O(n^{1.5}) [Chi, Duan, Xie, Zhang '22] (in contrast to the conjectured n^{2-o(1)} lower bound for the general case). We complement our results by showing reductions in the opposite direction: achieving our results with the constant 1.5 replaced by any constant < 2 implies subquadratic algorithms for Min-Plus-Convolution on monotone sequences with bounded entries.
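For reference, the Õ(n ⋅ W) term in the first item is Bellman's classic dynamic program, sketched below in a minimal rendering of our own; the paper's algorithms are reductions to structured Min-Plus-Convolution, not this DP:

```python
def knapsack(items, W):
    """Bellman's O(n * W) DP for 0-1-Knapsack; items are (weight, profit)."""
    dp = [0] * (W + 1)  # dp[w] = best profit with total weight at most w
    for weight, profit in items:
        for w in range(W, weight - 1, -1):  # reverse scan: each item used once
            dp[w] = max(dp[w], dp[w - weight] + profit)
    return dp[W]

print(knapsack([(3, 4), (2, 3), (4, 5)], W=5))  # 7: take the weight-3 and weight-2 items
```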

Cite as

Karl Bringmann and Alejandro Cassis. Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 31:1-31:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.31,
  author =	{Bringmann, Karl and Cassis, Alejandro},
  title =	{{Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{31:1--31:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.31},
  URN =		{urn:nbn:de:0030-drops-163727},
  doi =		{10.4230/LIPIcs.ICALP.2022.31},
  annote =	{Keywords: Knapsack, Approximation Schemes, Fine-Grained Complexity, Min-Plus Convolution}
}
Document
Track A: Algorithms, Complexity and Games
Improved Sublinear-Time Edit Distance for Preprocessed Strings

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study the problem of approximating the edit distance of two strings in sublinear time, in a setting where one or both string(s) are preprocessed, as initiated by Goldenberg, Rubinstein, Saha (STOC '20). Specifically, in the (k, K)-gap edit distance problem, the goal is to distinguish whether the edit distance of two strings is at most k or at least K. We obtain the following results:
- After preprocessing one string in time n^{1+o(1)}, we can solve the (k, k ⋅ n^{o(1)})-gap edit distance problem in time (n/k + k) ⋅ n^{o(1)}.
- After preprocessing both strings separately in time n^{1+o(1)}, we can solve the (k, k ⋅ n^{o(1)})-gap edit distance problem in time k ⋅ n^{o(1)}.
Both results improve upon the previously best known results, with respect to either the gap, the query time, or the preprocessing time. Our algorithms build on the framework by Andoni, Krauthgamer and Onak (FOCS '10) and the recent sublinear-time algorithm by Bringmann, Cassis, Fischer and Nakos (STOC '22). We replace many complicated parts of their algorithm by faster and simpler solutions which exploit the preprocessing.
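For orientation, the exact-threshold counterpart of the (k, K)-gap question can be decided by the standard band-limited dynamic program in O(n·k) time, after reading the whole input; the sublinear-time results above are about beating precisely this. A minimal sketch of that classical baseline:

```python
def edit_distance_at_most(a, b, k):
    """Decide dist(a, b) <= k via the classic banded DP: only DP cells within
    distance k of the diagonal are filled; out-of-band cells have value > k."""
    if abs(len(a) - len(b)) > k:
        return False
    prev = {j: j for j in range(min(k, len(b)) + 1)}  # DP row i = 0
    for i in range(1, len(a) + 1):
        cur = {}
        for j in range(max(0, i - k), min(len(b), i + k) + 1):
            cur[j] = min(
                prev.get(j, k + 1) + 1,        # delete a[i-1]
                cur.get(j - 1, k + 1) + 1,     # insert b[j-1]
                prev.get(j - 1, k + 1)
                + (1 if j > 0 and a[i - 1] != b[j - 1] else 0),  # match/substitute
            )
        prev = cur
    return prev.get(len(b), k + 1) <= k

print(edit_distance_at_most("kitten", "sitting", 3))  # True: distance is exactly 3
print(edit_distance_at_most("kitten", "sitting", 2))  # False
```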

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos. Improved Sublinear-Time Edit Distance for Preprocessed Strings. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.32,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and Nakos, Vasileios},
  title =	{{Improved Sublinear-Time Edit Distance for Preprocessed Strings}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.32},
  URN =		{urn:nbn:de:0030-drops-163734},
  doi =		{10.4230/LIPIcs.ICALP.2022.32},
  annote =	{Keywords: Edit Distance, Property Testing, Preprocessing, Precision Sampling}
}
Document
Unlabeled Multi-Robot Motion Planning with Tighter Separation Bounds

Authors: Bahareh Banyassady, Mark de Berg, Karl Bringmann, Kevin Buchin, Henning Fernau, Dan Halperin, Irina Kostitsyna, Yoshio Okamoto, and Stijn Slot

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
We consider the unlabeled motion-planning problem of m unit-disc robots moving in a simple polygonal workspace of n edges. The goal is to find a motion plan that moves the robots to a given set of m target positions. In the unlabeled variant, it does not matter which robot reaches which target position, as long as all target positions are occupied in the end. If the workspace has narrow passages such that the robots cannot fit through them, then the free configuration space, representing all possible unobstructed positions of the robots, will consist of multiple connected components. Even if in each component of the free space the number of targets matches the number of start positions, the motion-planning problem does not always have a solution when the robots and their targets are positioned very densely. In this paper, we prove tight bounds on how much separation between start and target positions is necessary to always guarantee a solution. Moreover, we describe an algorithm that always finds a solution in time O(n log n + mn + m²) if the separation bounds are met. Specifically, we prove that the following separation is sufficient: any two start positions are at least distance 4 apart, any two target positions are at least distance 4 apart, and any pair of a start and a target position is at least distance 3 apart. We further show that when the free space consists of a single connected component, the separation between start and target positions is not necessary.
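The sufficient separation bounds from the theorem are easy to restate as an executable check (our own sketch; distances are Euclidean, in the same units as in the abstract):

```python
from math import dist  # Python 3.8+

def separation_ok(starts, targets):
    """Check the sufficient separation conditions stated above: starts pairwise
    >= 4 apart, targets pairwise >= 4 apart, every start-target pair >= 3 apart."""
    pairwise = lambda pts, d: all(dist(p, q) >= d
                                  for i, p in enumerate(pts) for q in pts[i+1:])
    cross = all(dist(s, t) >= 3 for s in starts for t in targets)
    return pairwise(starts, 4) and pairwise(targets, 4) and cross

print(separation_ok([(0, 0), (5, 0)], [(0, 4), (5, 4)]))  # True
```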

Cite as

Bahareh Banyassady, Mark de Berg, Karl Bringmann, Kevin Buchin, Henning Fernau, Dan Halperin, Irina Kostitsyna, Yoshio Okamoto, and Stijn Slot. Unlabeled Multi-Robot Motion Planning with Tighter Separation Bounds. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{banyassady_et_al:LIPIcs.SoCG.2022.12,
  author =	{Banyassady, Bahareh and de Berg, Mark and Bringmann, Karl and Buchin, Kevin and Fernau, Henning and Halperin, Dan and Kostitsyna, Irina and Okamoto, Yoshio and Slot, Stijn},
  title =	{{Unlabeled Multi-Robot Motion Planning with Tighter Separation Bounds}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.12},
  URN =		{urn:nbn:de:0030-drops-160203},
  doi =		{10.4230/LIPIcs.SoCG.2022.12},
  annote =	{Keywords: motion planning, computational geometry, simple polygon}
}
Document
Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, Dániel Marx, and André Nusser

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
The Dynamic Time Warping (DTW) distance is a popular measure of similarity for a variety of sequence data. For comparing polygonal curves π, σ in ℝ^d, it provides a robust, outlier-insensitive alternative to the Fréchet distance. However, like the Fréchet distance, the DTW distance is not invariant under translations. Can we efficiently optimize the DTW distance of π and σ under arbitrary translations, to compare the curves' shape irrespective of their absolute location? There are surprisingly few works in this direction, which may be due to its computational intricacy: for the Euclidean norm, this problem contains as a special case the geometric median problem, which provably admits no exact algebraic algorithm (that is, no algorithm using only addition, multiplication, and k-th roots). We thus investigate exact algorithms for non-Euclidean norms as well as approximation algorithms for the Euclidean norm. For the L₁ norm in ℝ^d, we provide an O(n^{2(d+1)})-time algorithm, i.e., an exact polynomial-time algorithm for constant d. Here and below, n bounds the curves' complexities. For the Euclidean norm in ℝ², we show that a simple problem-specific insight leads to a (1+ε)-approximation in time O(n³/ε²). Using significant new ideas, we then show how to obtain a subcubic Õ(n^{2.5}/ε²)-time algorithm; this running time comes close to the well-known quadratic-time barrier for computing DTW for fixed translations. Technically, the algorithm is obtained by speeding up repeated DTW distance estimations using a dynamic data structure for maintaining shortest paths in weighted planar digraphs. Crucially, we show how to traverse a candidate set of translations using space-filling curves in a way that incurs only few updates to the data structure. We hope that our results will facilitate the use of DTW under translation both in theory and practice, and inspire similar algorithmic approaches for related geometric optimization problems.
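The inner routine that the translation search repeatedly invokes is the classical quadratic DTW dynamic program for a fixed translation. A minimal sketch of our own, shown with the L₁ norm in the plane; the paper's data-structure machinery is not reproduced here:

```python
def dtw_l1(p, q):
    """Classic O(n*m) DTW DP for two point sequences p, q of (x, y) pairs,
    with point distances measured in the L1 norm."""
    INF = float("inf")
    n, m = len(p), len(q)
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(p[i-1][0] - q[j-1][0]) + abs(p[i-1][1] - q[j-1][1])
            dp[i][j] = cost + min(dp[i-1][j], dp[i][j-1], dp[i-1][j-1])
    return dp[n][m]

print(dtw_l1([(0, 0), (1, 0), (2, 0)], [(0, 0), (2, 0)]))  # 1.0
```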

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, Dániel Marx, and André Nusser. Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 20:1-20:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.20,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Marx, D\'{a}niel and Nusser, Andr\'{e}},
  title =	{{Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{20:1--20:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.20},
  URN =		{urn:nbn:de:0030-drops-160287},
  doi =		{10.4230/LIPIcs.SoCG.2022.20},
  annote =	{Keywords: Dynamic Time Warping, Sequence Similarity Measures}
}
Document
Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
We initiate the study of diameter computation in geometric intersection graphs from the fine-grained complexity perspective. A geometric intersection graph is a graph whose vertices correspond to some shapes in d-dimensional Euclidean space, such as balls, segments, or hypercubes, and whose edges correspond to pairs of intersecting shapes. The diameter of a graph is the largest distance realized by a pair of vertices in the graph. Computing the diameter in near-quadratic time is possible in several classes of intersection graphs [Chan and Skrepetos 2019], but it is not at all clear if these algorithms are optimal, especially since in the related class of planar graphs the diameter can be computed in Õ(n^{5/3}) time [Cabello 2019, Gawrychowski et al. 2021]. In this work we (conditionally) rule out sub-quadratic algorithms in several classes of intersection graphs, i.e., algorithms of running time O(n^{2-δ}) for some δ > 0. In particular, there are no sub-quadratic algorithms already for fat objects in small dimensions: unit balls in ℝ³ or congruent equilateral triangles in ℝ². For unit segments and congruent equilateral triangles, we can even rule out strong sub-quadratic approximations already in ℝ². It seems that the hardness of approximation may also depend on dimensionality: for axis-parallel unit hypercubes in ℝ^{12}, distinguishing between diameter 2 and 3 needs quadratic time (ruling out (3/2-ε)-approximations), whereas for axis-parallel unit squares, we give an algorithm that distinguishes between diameter 2 and 3 in near-linear time. Note that many of our lower bounds match the best known algorithms up to sub-polynomial factors. Ultimately, this fine-grained perspective may enable us to determine for which shapes we can have efficient algorithms and approximation schemes for diameter computation.
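The baseline that these lower bounds address is the obvious one: build the intersection graph explicitly and run BFS from every vertex. A sketch for unit discs (taken here with radius 1, so two discs intersect iff their centers are within distance 2; conventions vary), purely for illustration:

```python
from collections import deque

def unit_disc_diameter(centers):
    """Build the unit-disc intersection graph in O(n^2), then compute the
    diameter by BFS from every vertex in O(n * m) total."""
    n = len(centers)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if dx * dx + dy * dy <= 4:  # radius-1 discs intersect
                adj[i].append(j)
                adj[j].append(i)
    diam = 0
    for s in range(n):
        distances, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in distances:
                    distances[v] = distances[u] + 1
                    queue.append(v)
        diam = max(diam, max(distances.values()))
    return diam

print(unit_disc_diameter([(0, 0), (1.5, 0), (3, 0)]))  # 2
```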

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian. Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 21:1-21:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.21,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Nusser, Andr\'{e} and Parsaeian, Zahra},
  title =	{{Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{21:1--21:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.21},
  URN =		{urn:nbn:de:0030-drops-160294},
  doi =		{10.4230/LIPIcs.SoCG.2022.21},
  annote =	{Keywords: Hardness in P, Geometric Intersection Graph, Graph Diameter, Orthogonal Vectors, Hyperclique Detection}
}
Document
Superlinear Lower Bounds Based on ETH

Authors: András Z. Salamon and Michael Wehar

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
We introduce techniques for proving superlinear conditional lower bounds for polynomial time problems. In particular, we show that CircuitSAT for circuits with m gates and log(m) inputs (denoted by log-CircuitSAT) is not decidable in essentially-linear time unless the exponential time hypothesis (ETH) is false and k-Clique is decidable in essentially-linear time (in terms of the graph’s size) for all fixed k. Such conditional lower bounds have previously only been demonstrated relative to the strong exponential time hypothesis (SETH). Our results therefore offer significant progress towards proving unconditional superlinear time complexity lower bounds for natural problems in polynomial time.
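To see why log-CircuitSAT is a polynomial-time problem at all: with log(m) inputs there are only 2^{log m} = m assignments, each evaluable in O(m) time, so brute force runs in O(m²); the paper's point is that beating essentially-linear time already has strong consequences. A minimal sketch with an illustrative gate encoding of our own:

```python
def circuit_sat(gates, num_inputs):
    """Brute-force CircuitSAT. Gates, in topological order, are ('IN', i),
    ('NOT', a), ('AND', a, b), or ('OR', a, b), where a, b index earlier gates;
    the last gate is the output."""
    for mask in range(1 << num_inputs):  # all 2^{num_inputs} assignments
        vals = []
        for gate in gates:
            op = gate[0]
            if op == 'IN':    vals.append((mask >> gate[1]) & 1)
            elif op == 'NOT': vals.append(1 - vals[gate[1]])
            elif op == 'AND': vals.append(vals[gate[1]] & vals[gate[2]])
            elif op == 'OR':  vals.append(vals[gate[1]] | vals[gate[2]])
        if vals[-1]:
            return True
    return False

# x0 AND (NOT x1) is satisfiable:
print(circuit_sat([('IN', 0), ('IN', 1), ('NOT', 1), ('AND', 0, 2)], 2))  # True
```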

Cite as

András Z. Salamon and Michael Wehar. Superlinear Lower Bounds Based on ETH. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 55:1-55:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{salamon_et_al:LIPIcs.STACS.2022.55,
  author =	{Salamon, Andr\'{a}s Z. and Wehar, Michael},
  title =	{{Superlinear Lower Bounds Based on ETH}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{55:1--55:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.55},
  URN =		{urn:nbn:de:0030-drops-158652},
  doi =		{10.4230/LIPIcs.STACS.2022.55},
  annote =	{Keywords: Circuit Satisfiability, Conditional Lower Bounds, Exponential Time Hypothesis, Limited Nondeterminism}
}
Document
APPROX
Fine-Grained Completeness for Optimization in P

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search, and optimization variants of the k-XOR problem. Specifically, we define MaxSP as the class of problems definable as max_{x₁,…,x_k} #{(y₁,…,y_𝓁) : ϕ(x₁,…,x_k, y₁,…,y_𝓁)}, where ϕ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On m-sized structures, we can solve each such problem in time O(m^{k+𝓁-1}). Our results are:
- We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under deterministic fine-grained reductions: a strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of O(m^{k+𝓁-1}) for all problems in MaxSP/MinSP by a polynomial factor.
- This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic c-approximation would give a (c+ε)-approximation for all MaxSP/MinSP problems in time O(m^{k+𝓁-1-δ}), where ε > 0 can be chosen arbitrarily small. Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is equivalent to giving an O(1)-approximation for all MinSP problems in faster-than-O(m^{k+𝓁-1}) time.
- By fine-tuning our reductions, we obtain mild algorithmic improvements for solving and approximating all problems in MaxSP and MinSP, using the fastest known algorithms for Maximum/Minimum Inner Product.
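The complete problem is simple to state as code: the quadratic baseline for (sparse) Maximum Inner Product below tries all pairs; by the completeness above, any polynomially faster algorithm would accelerate the whole class. Sets stand in for sparse binary vectors, an encoding of our own:

```python
def max_inner_product_sparse(sets):
    """Quadratic baseline: the inner product of two sparse binary vectors is
    the size of the intersection of their supports."""
    best = 0
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            best = max(best, len(sets[i] & sets[j]))
    return best

print(max_inner_product_sparse([{1, 2, 3}, {2, 3, 4}, {9}]))  # 2
```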

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. Fine-Grained Completeness for Optimization in P. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 9:1-9:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.APPROX/RANDOM.2021.9,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{Fine-Grained Completeness for Optimization in P}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{9:1--9:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  URN =		{urn:nbn:de:0030-drops-147024},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  annote =	{Keywords: Fine-grained Complexity \& Algorithm Design, Completeness, Hardness of Approximation in P, Dimensionality Reductions}
}
Document
Hitting Sets and Reconstruction for Dense Orbits in VP_{e} and ΣΠΣ Circuits

Authors: Dori Medini and Amir Shpilka

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
In this paper we study polynomials in VP_e (polynomial-sized formulas) and in ΣΠΣ (polynomial-size depth-3 circuits) whose orbits, under the action of the affine group GL^{aff}_n(𝔽) (the action of (A,b) ∈ GL^{aff}_n(𝔽) on a polynomial f ∈ 𝔽[x] is defined as (A,b)∘f = f(A^T x + b)), are dense in their ambient class. We construct hitting sets and interpolating sets for these orbits as well as give reconstruction algorithms. Specifically, we obtain the following results:
1) For C_n(𝓁₁(x),…,𝓁_n(x)) ≜ Trace([𝓁₁(x), 1; 1, 0] ⋅ … ⋅ [𝓁_n(x), 1; 1, 0]), where the 𝓁_i's are linearly independent linear functions, we construct a polynomial-sized interpolating set and give a polynomial-time reconstruction algorithm. By a result of Bringmann, Ikenmeyer and Zuiddam [Karl Bringmann et al., 2018], the set of all such polynomials is dense in VP_e; thus our construction gives the first polynomial-size interpolating set for a dense subclass of VP_e.
2) For polynomials of the form ANF_Δ(𝓁₁(x),…,𝓁_{4^Δ}(x)), where ANF_Δ(x) is the canonical read-once formula in alternating normal form of depth 2Δ, and the 𝓁_i's are linearly independent linear functions, we provide a quasipolynomial-size interpolating set. We also observe that the reconstruction algorithm of [Ankit Gupta et al., 2014] works for all polynomials in this class. This class is also dense in VP_e.
3) Similarly, we give a quasipolynomial-sized hitting set for read-once formulas (not necessarily in alternating normal form) composed with a set of linearly independent linear functions. This gives another dense class in VP_e.
4) We give a quasipolynomial-sized hitting set for polynomials of the form f(𝓁₁(x),…,𝓁_m(x)), where f is an m-variate s-sparse polynomial, and the 𝓁_i's are linearly independent linear functions in n ≥ m variables. This class is dense in ΣΠΣ.
5) For polynomials of the form ∑_{i=1}^{s} ∏_{j=1}^{d} 𝓁_{i,j}(x), where the 𝓁_{i,j}'s are linearly independent linear functions, we construct a polynomial-sized interpolating set. We also observe that the reconstruction algorithm of [Neeraj Kayal and Chandan Saha, 2019] works for every polynomial in the class. This class is dense in ΣΠΣ.
As VP = VNC², our results for VP_e translate immediately to VP with a quasipolynomial blow-up in parameters. If any of our hitting or interpolating sets could be made robust then this would immediately yield a hitting set for the superclass in which the relevant class is dense, and as a consequence also a lower bound for the superclass. Unfortunately, we also prove that the kind of constructions that we have found (which are defined in terms of k-independent polynomial maps) do not necessarily yield robust hitting sets.
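The polynomial family in item 1 is easy to evaluate at a point: C_n is the trace of a product of 2×2 matrices [𝓁_i, 1; 1, 0]. The sketch below (our own illustration, with the linear forms already evaluated to numbers) computes that trace; the paper's contribution concerns interpolating sets and reconstruction, not this evaluation:

```python
def continuant_trace(ells):
    """Trace of the product of the 2x2 matrices [[l, 1], [1, 0]], one per
    value l in ells (the linear forms evaluated at a point)."""
    a, b, c, d = 1, 0, 0, 1  # 2x2 identity
    for l in ells:
        # Right-multiply (a b; c d) by (l 1; 1 0).
        a, b, c, d = a * l + b, a, c * l + d, c
    return a + d

print(continuant_trace([2, 3]))  # Trace([[2,1],[1,0]] . [[3,1],[1,0]]) = 8
```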

Cite as

Dori Medini and Amir Shpilka. Hitting Sets and Reconstruction for Dense Orbits in VP_{e} and ΣΠΣ Circuits. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 19:1-19:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{medini_et_al:LIPIcs.CCC.2021.19,
  author =	{Medini, Dori and Shpilka, Amir},
  title =	{{Hitting Sets and Reconstruction for Dense Orbits in VP\underline\{e\} and \Sigma\Pi\Sigma Circuits}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{19:1--19:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.19},
  URN =		{urn:nbn:de:0030-drops-142930},
  doi =		{10.4230/LIPIcs.CCC.2021.19},
  annote =	{Keywords: Algebraic complexity, VP, VNP, algebraic circuits, algebraic formula}
}
Document
Track A: Algorithms, Complexity and Games
A Linear-Time n^{0.4}-Approximation for Longest Common Subsequence

Authors: Karl Bringmann and Debarati Das

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We consider the classic problem of computing the Longest Common Subsequence (LCS) of two strings of length n. While a simple quadratic algorithm has been known for the problem for more than 40 years, no faster algorithm has been found despite an extensive effort. The lack of progress on the problem has recently been explained by Abboud, Backurs, and Vassilevska Williams [FOCS'15] and Bringmann and Künnemann [FOCS'15], who proved that there is no subquadratic algorithm unless the Strong Exponential Time Hypothesis fails. This major roadblock for getting faster exact algorithms has led the community to look for subquadratic approximation algorithms for the problem. Yet, unlike the edit distance problem, for which a constant-factor approximation in almost-linear time is known, very little progress has been made on LCS, making it a notoriously difficult problem also in the realm of approximation. For the general setting (where we make no assumption on the length of the optimum solution or the alphabet size), only a naive O(n^{ε/2})-approximation algorithm with running time Õ(n^{2-ε}) has been known, for any constant 0 < ε ≤ 1. Recently, a breakthrough result by Hajiaghayi, Seddighin, Seddighin, and Sun [SODA'19] provided a linear-time algorithm that yields an O(n^{0.497956})-approximation in expectation, improving upon the naive O(√n)-approximation for the first time. In this paper, we provide an algorithm that in time O(n^{2-ε}) computes an Õ(n^{2ε/5})-approximation with high probability, for any 0 < ε ≤ 1. Our result (1) gives an Õ(n^{0.4})-approximation in linear time, improving upon the bound of Hajiaghayi, Seddighin, Seddighin, and Sun, (2) provides an algorithm whose approximation factor scales with any subquadratic running time O(n^{2-ε}), improving upon the naive bound of O(n^{ε/2}) for any ε, and (3) instead of only in expectation, succeeds with high probability.
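For reference, the "simple quadratic algorithm" mentioned above is the textbook LCS dynamic program; the approximation algorithms trade away its exactness for subquadratic, down to linear, running time. A minimal sketch:

```python
def lcs_length(a, b):
    """Textbook O(n^2) DP: dp[i][j] = LCS length of the prefixes a[:i], b[:j]."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i-1] == b[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

print(lcs_length("BDCABA", "ABCBDAB"))  # 4, e.g. "BCBA"
```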

Cite as

Karl Bringmann and Debarati Das. A Linear-Time n^{0.4}-Approximation for Longest Common Subsequence. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 39:1-39:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2021.39,
  author =	{Bringmann, Karl and Das, Debarati},
  title =	{{A Linear-Time n^\{0.4\}-Approximation for Longest Common Subsequence}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{39:1--39:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.39},
  URN =		{urn:nbn:de:0030-drops-141082},
  doi =		{10.4230/LIPIcs.ICALP.2021.39},
  annote =	{Keywords: approximation algorithm, longest common subsequence, string algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Current Algorithms for Detecting Subgraphs of Bounded Treewidth Are Probably Optimal

Authors: Karl Bringmann and Jasper Slusallek

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
The Subgraph Isomorphism problem is of considerable importance in computer science. We examine the problem when the pattern graph H is of bounded treewidth, as occurs in a variety of applications. This problem has a well-known algorithm via color-coding that runs in time O(n^{tw(H)+1}) [Alon, Yuster, Zwick'95], where n is the number of vertices of the host graph G. While there are pattern graphs known for which Subgraph Isomorphism can be solved in an improved running time of O(n^{tw(H)+1-ε}) or even faster (e.g. for k-cliques), it is not known whether such improvements are possible for all patterns. The only known lower bound rules out time n^{o(tw(H) / log(tw(H)))} for any class of patterns of unbounded treewidth assuming the Exponential Time Hypothesis [Marx'07]. In this paper, we demonstrate the existence of maximally hard pattern graphs H that require time n^{tw(H)+1-o(1)}. Specifically, under the Strong Exponential Time Hypothesis (SETH), a standard assumption from fine-grained complexity theory, we prove the following asymptotic statement for large treewidth t: For any ε > 0 there exists t ≥ 3 and a pattern graph H of treewidth t such that Subgraph Isomorphism on pattern H has no algorithm running in time O(n^{t+1-ε}). Under the more recent 3-uniform Hyperclique hypothesis, we even obtain tight lower bounds for each specific treewidth t ≥ 3: For any t ≥ 3 there exists a pattern graph H of treewidth t such that for any ε > 0 Subgraph Isomorphism on pattern H has no algorithm running in time O(n^{t+1-ε}). In addition to these main results, we explore (1) colored and uncolored problem variants (and why they are equivalent for most cases), (2) Subgraph Isomorphism for tw < 3, (3) Subgraph Isomorphism parameterized by pathwidth instead of treewidth, and (4) a weighted variant that we call Exact Weight Subgraph Isomorphism, for which we examine pseudo-polynomial time algorithms. For many of these settings we obtain similarly tight upper and lower bounds.
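For reference, the problem itself in its most naive form: try all injective mappings of the pattern's vertices into the host graph, at cost roughly O(n^h ⋅ h²) for an h-vertex pattern, far above the O(n^{tw(H)+1}) color-coding bound discussed above. An illustrative sketch of our own, only to fix the problem definition:

```python
from itertools import permutations

def subgraph_isomorphic(h_vertices, h_edges, g_vertices, g_edges):
    """Brute-force (non-induced) Subgraph Isomorphism for undirected graphs,
    with vertices numbered 0..n-1 and edges given as pairs."""
    g_edge_set = {frozenset(e) for e in g_edges}
    for image in permutations(range(g_vertices), h_vertices):
        if all(frozenset((image[u], image[v])) in g_edge_set
               for u, v in h_edges):
            return True
    return False

# A triangle pattern inside a 4-cycle with one chord:
print(subgraph_isomorphic(3, [(0, 1), (1, 2), (2, 0)],
                          4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # True
```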

Cite as

Karl Bringmann and Jasper Slusallek. Current Algorithms for Detecting Subgraphs of Bounded Treewidth Are Probably Optimal. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 40:1-40:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2021.40,
  author =	{Bringmann, Karl and Slusallek, Jasper},
  title =	{{Current Algorithms for Detecting Subgraphs of Bounded Treewidth Are Probably Optimal}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{40:1--40:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.40},
  URN =		{urn:nbn:de:0030-drops-141095},
  doi =		{10.4230/LIPIcs.ICALP.2021.40},
  annote =	{Keywords: subgraph isomorphism, treewidth, fine-grained complexity, hyperclique}
}
  • Refine by Author
  • 34 Bringmann, Karl
  • 9 Künnemann, Marvin
  • 5 Cassis, Alejandro
  • 5 Fischer, Nick
  • 5 Nusser, André
  • Show More...

  • Refine by Classification
  • 10 Theory of computation → Design and analysis of algorithms
  • 9 Theory of computation → Problems, reductions and completeness
  • 8 Theory of computation → Computational geometry
  • 3 Theory of computation → Approximation algorithms analysis
  • 3 Theory of computation → Parameterized complexity and exact algorithms
  • Show More...

  • Refine by Keyword
  • 5 Fine-Grained Complexity
  • 4 Fréchet distance
  • 4 Hardness in P
  • 2 Exponential Time Hypothesis
  • 2 Fine-grained complexity theory
  • Show More...

  • Refine by Type
  • 42 document

  • Refine by Publication Year
  • 8 2019
  • 8 2022
  • 7 2021
  • 6 2017
  • 4 2020
  • Show More...
