Search Results

Documents authored by Künnemann, Marvin


Document
Exploring the Approximability Landscape of 3SUM

Authors: Karl Bringmann, Ahmed Ghazy, and Marvin Künnemann

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Since an increasing number of problems in P have conditional lower bounds against exact algorithms, it is natural to study which of these problems can be efficiently approximated. Often, however, there are many potential ways to formulate an approximate version of a problem. We ask: How sensitive is the (in-)approximability of a problem in P to its precise formulation? To this end, we perform a case study using the popular 3SUM problem. Its many equivalent formulations give rise to a wide range of potential approximate relaxations. Specifically, to obtain an approximate relaxation in our framework, one can choose among the options: (a) 3SUM or Convolution 3SUM, (b) monochromatic or trichromatic, (c) allowing under-approximation, over-approximation, or both, (d) approximate decision or approximate optimization, (e) single output or multiple outputs and (f) implicit or explicit target (given as input). We show general reduction principles between some variants and find that we can classify the remaining problems (over polynomially bounded positive integers) into three regimes: 1) (1+ε)-approximable in near-linear time Õ(n + 1/ε), 2) (1+ε)-approximable in near-quadratic time Õ(n/ε) or Õ(n+1/ε²), or 3) non-approximable, i.e., requiring time n^{2± o(1)} even for any approximation factor. In each of these three regimes, we provide matching upper and conditional lower bounds. To prove our results, we establish two results that may be of independent interest: Over polynomially bounded integers, we show subquadratic equivalence of (min,+)-convolution and polyhedral 3SUM, and we prove equivalence of the Strong 3SUM conjecture and the Strong Convolution 3SUM conjecture.
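
For concreteness, the exact trichromatic variant underlying this study has a classic quadratic-time hashing baseline, sketched below for orientation (textbook material, not code from the paper); the paper asks which approximate relaxations of this problem escape the conjectured quadratic barrier.

```python
def three_sum(A, B, C, t=0):
    """Trichromatic 3SUM: is there (a, b, c) in A x B x C with a + b + c == t?

    Classic O(n^2) baseline: for each pair (a, b), probe a hash set of C.
    """
    C_set = set(C)
    return any((t - a - b) in C_set for a in A for b in B)

# 1 + 4 + (-5) == 0, so this instance is a yes-instance.
print(three_sum([1, 2], [3, 4], [-5, 10]))  # True
```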

Cite as

Karl Bringmann, Ahmed Ghazy, and Marvin Künnemann. Exploring the Approximability Landscape of 3SUM. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2024.34,
  author =	{Bringmann, Karl and Ghazy, Ahmed and K\"{u}nnemann, Marvin},
  title =	{{Exploring the Approximability Landscape of 3SUM}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.34},
  URN =		{urn:nbn:de:0030-drops-211057},
  doi =		{10.4230/LIPIcs.ESA.2024.34},
  annote =	{Keywords: Fine-grained Complexity, Conditional Lower Bounds, Approximation Schemes, Min-Plus Convolution}
}
Document
The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds

Authors: Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets. We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning detection of an s-t-walk with a prescribed color sequence in a given edge- or node-colored graph. For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications: - It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis. - It gives a tight (n+nm^{1/3})^{1-o(1)} lower bound for the Word Break problem on strings of length n and dictionaries of total size m. - It implies the popular OMv hypothesis. Since the NFA acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems. Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a refutation of the NFA Acceptance hypothesis may lead the way to attacking the current barriers observed for Context-Free Language Reachability, the Word Break problem and the growing list of dynamic problems proven hard under the OMv hypothesis.
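
For orientation, the textbook algorithm the hypothesis refers to is the standard subset simulation, sketched below with an encoding of our own choosing (transitions as a dict); it runs in time proportional to the word length times the NFA size.

```python
def nfa_accepts(delta, start, accepting, word):
    """Textbook NFA acceptance: maintain the set of currently reachable states.

    delta maps (state, symbol) to a set of successor states. The running time
    is O(|word| * |transitions|), the bound the hypothesis posits is optimal
    up to subpolynomial factors for dense NFAs.
    """
    current = {start}
    for symbol in word:
        current = {q for p in current for q in delta.get((p, symbol), ())}
        if not current:
            return False  # no run survives this prefix
    return bool(current & accepting)

# NFA over {a, b} accepting words ending in "ab".
delta = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
print(nfa_accepts(delta, 0, {2}, "aab"))  # True
```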

Cite as

Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen. The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 22:1-22:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ITCS.2024.22,
  author =	{Bringmann, Karl and Gr{\o}nlund, Allan and K\"{u}nnemann, Marvin and Larsen, Kasper Green},
  title =	{{The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{22:1--22:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.22},
  URN =		{urn:nbn:de:0030-drops-195500},
  doi =		{10.4230/LIPIcs.ITCS.2024.22},
  annote =	{Keywords: Fine-grained complexity theory, non-deterministic finite automata}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality

Authors: Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks, and Karol Węgrzycki

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Seminal results establish that the coverability problem for Vector Addition Systems with States (VASS) is in EXPSPACE (Rackoff, '78) and is EXPSPACE-hard already under unary encodings (Lipton, '76). More precisely, Rosier and Yen later utilise Rackoff’s bounding technique to show that if coverability holds then there is a run of length at most n^{2^𝒪(d log d)}, where d is the dimension and n is the size of the given unary VASS. Earlier, Lipton showed that there exist instances of coverability in d-dimensional unary VASS that are only witnessed by runs of length at least n^{2^Ω(d)}. Our first result closes this gap. We improve the upper bound by removing the twice-exponentiated log(d) factor, thus matching Lipton’s lower bound. This closes the corresponding gap for the exact space required to decide coverability. This also yields a deterministic n^{2^𝒪(d)}-time algorithm for coverability. Our second result is a matching lower bound, that there does not exist a deterministic n^{2^o(d)}-time algorithm, conditioned upon the Exponential Time Hypothesis. When analysing coverability, a standard proof technique is to consider VASS with bounded counters. Bounded VASS make for an interesting and popular model due to strong connections with timed automata. Withal, we study a natural setting where the counter bound is linear in the size of the VASS. Here the trivial exhaustive search algorithm runs in 𝒪(n^{d+1})-time. We give evidence that this is near-optimal. We prove that in dimension one this trivial algorithm is conditionally optimal, by showing that n^{2-o(1)}-time is required under the k-cycle hypothesis. In general fixed dimension d, we show that n^{d-2-o(1)}-time is required under the 3-uniform hyperclique hypothesis.
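
A minimal sketch of the trivial exhaustive search for coverability in bounded VASS mentioned above (the encoding of states and transitions is our own illustration): a BFS over the at most |states|·(B+1)^d configurations.

```python
from collections import deque

def bounded_coverability(transitions, init, target, bound):
    """Coverability in a bounded VASS by exhaustive search (BFS).

    transitions: list of (state, update_vector, state); all counters must stay
    within [0, bound]. A configuration (state, counters) covers the target if
    the states agree and every counter is >= the target's counter.
    Explores O(|states| * (bound + 1)^d) configurations in the worst case.
    """
    queue, seen = deque([init]), {init}
    while queue:
        state, counters = queue.popleft()
        if state == target[0] and all(c >= t for c, t in zip(counters, target[1])):
            return True
        for p, update, q in transitions:
            if p != state:
                continue
            new = tuple(c + u for c, u in zip(counters, update))
            if all(0 <= c <= bound for c in new) and (q, new) not in seen:
                seen.add((q, new))
                queue.append((q, new))
    return False

# One state, one counter that can only be incremented: cover counter value 3.
print(bounded_coverability([(0, (1,), 0)], (0, (0,)), (0, (3,)), bound=5))  # True
```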

Cite as

Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks, and Karol Węgrzycki. Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 131:1-131:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kunnemann_et_al:LIPIcs.ICALP.2023.131,
  author =	{K\"{u}nnemann, Marvin and Mazowiecki, Filip and Sch\"{u}tze, Lia and Sinclair-Banks, Henry and W\k{e}grzycki, Karol},
  title =	{{Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{131:1--131:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.131},
  URN =		{urn:nbn:de:0030-drops-181834},
  doi =		{10.4230/LIPIcs.ICALP.2023.131},
  annote =	{Keywords: Vector Addition System, Coverability, Reachability, Fine-Grained Complexity, Exponential Time Hypothesis, k-Cycle Hypothesis, Hyperclique Hypothesis}
}
Document
Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4

Authors: Egor Gorbachev and Marvin Künnemann

Published in: LIPIcs, Volume 258, 39th International Symposium on Computational Geometry (SoCG 2023)


Abstract
Klee’s measure problem (computing the volume of the union of n axis-parallel boxes in ℝ^d) is well known to have n^{d/2± o(1)}-time algorithms (Overmars, Yap, SICOMP'91; Chan FOCS'13). Only recently, a conditional lower bound (without any restriction to "combinatorial" algorithms) could be shown for d = 3 (Künnemann, FOCS'22). Can this result be extended to a tight lower bound for dimensions d ≥ 4? In this paper, we formalize the technique of the tight lower bound for d = 3 using a combinatorial object we call prefix covering design. We show that these designs, which are related in spirit to combinatorial designs, directly translate to conditional lower bounds for Klee’s measure problem and various related problems. By devising good prefix covering designs, we give the following lower bounds for Klee’s measure problem in ℝ^d, the depth problem for axis-parallel boxes in ℝ^d, the largest-volume/max-perimeter empty (anchored) box problem in ℝ^{2d}, and related problems: - Ω(n^1.90476) for d = 4, - Ω(n^2.22222) for d = 5, - Ω(n^{d/3 + 2√d/9-o(√d)}) for general d, assuming the 3-uniform hyperclique hypothesis. For Klee’s measure problem and the depth problem, these bounds improve previous lower bounds of Ω(n^{1.777...}), Ω(n^{2.0833...}) and Ω(n^{d/3 + 1/3 + Θ(1/d)}) respectively. Our improved prefix covering designs were obtained by (1) exploiting a computer-aided search using problem-specific insights as well as SAT solvers, and (2) showing how to transform combinatorial covering designs known in the literature to strong prefix covering designs. In contrast, we show that our lower bounds are close to best possible using this proof technique.
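
Not one of the paper's algorithms, but for intuition: the naive coordinate-compression baseline for Klee's measure problem in the plane, a simple O(n³) sketch against which the n^{d/2} algorithms and the lower bounds above can be compared.

```python
def klee_measure_2d(boxes):
    """Area of the union of axis-parallel rectangles via coordinate compression.

    boxes: list of (x1, y1, x2, y2). The O(n) x O(n) grid induced by all box
    coordinates is scanned cell by cell; a cell counts if any box covers it.
    """
    xs = sorted({x for b in boxes for x in (b[0], b[2])})
    ys = sorted({y for b in boxes for y in (b[1], b[3])})
    area = 0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx, cy = xs[i], ys[j]  # lower-left corner of the grid cell
            if any(b[0] <= cx < b[2] and b[1] <= cy < b[3] for b in boxes):
                area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return area

# Two 2x2 squares overlapping in a unit square: area 4 + 4 - 1 = 7.
print(klee_measure_2d([(0, 0, 2, 2), (1, 1, 3, 3)]))  # 7
```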

Cite as

Egor Gorbachev and Marvin Künnemann. Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4. In 39th International Symposium on Computational Geometry (SoCG 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 258, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gorbachev_et_al:LIPIcs.SoCG.2023.36,
  author =	{Gorbachev, Egor and K\"{u}nnemann, Marvin},
  title =	{{Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4}},
  booktitle =	{39th International Symposium on Computational Geometry (SoCG 2023)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-273-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{258},
  editor =	{Chambers, Erin W. and Gudmundsson, Joachim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2023.36},
  URN =		{urn:nbn:de:0030-drops-178861},
  doi =		{10.4230/LIPIcs.SoCG.2023.36},
  annote =	{Keywords: Fine-grained complexity theory, non-combinatorial lower bounds, computational geometry, clique detection}
}
Document
Track A: Algorithms, Complexity and Games
A Structural Investigation of the Approximability of Polynomial-Time Problems

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
An extensive research effort targets optimal (in)approximability results for various NP-hard optimization problems. Notably, the works of (Creignou'95) as well as (Khanna, Sudan, Trevisan, Williamson'00) establish a tight characterization of a large subclass of MaxSNP, namely Boolean MaxCSPs and further variants, in terms of their polynomial-time approximability. Can we obtain similarly encompassing characterizations for classes of polynomial-time optimization problems? To this end, we initiate the systematic study of a recently introduced polynomial-time analogue of MaxSNP, which includes a large number of well-studied problems (including Nearest and Furthest Neighbor in the Hamming metric, Maximum Inner Product, optimization variants of k-XOR and Maximum k-Cover). Specifically, for each k, MaxSP_k denotes the class of O(m^k)-time problems of the form max_{x_1,… , x_k} #{y : ϕ(x_1,… ,x_k,y)} where ϕ is a quantifier-free first-order property and m denotes the size of the relational structure. Assuming central hypotheses about clique detection in hypergraphs and exact Max-3-SAT, we show that for any MaxSP_k problem definable by a quantifier-free m-edge graph formula ϕ, the best possible approximation guarantee in faster-than-exhaustive-search time O(m^{k-δ}) falls into one of four categories: - optimizable to exactness in time O(m^{k-δ}), - an (inefficient) approximation scheme, i.e., a (1+ε)-approximation in time O(m^{k-f(ε)}), - a (fixed) constant-factor approximation in time O(m^{k-δ}), or - an m^ε-approximation in time O(m^{k-f(ε)}). We obtain an almost complete characterization of these regimes, for MaxSP_k as well as for an analogously defined minimization class MinSP_k. As our main technical contribution, we show how to rule out the existence of approximation schemes for a large class of problems admitting constant-factor approximations, under a hypothesis for exact Sparse Max-3-SAT algorithms posed by (Alman, Vassilevska Williams'20). As general trends for the problems we consider, we observe: (1) Exact optimizability has a simple algebraic characterization, (2) only few maximization problems do not admit a constant-factor approximation; these do not even have a subpolynomial-factor approximation, and (3) constant-factor approximation of minimization problems is equivalent to deciding whether the optimum is equal to 0.
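
To make the exhaustive-search baseline concrete, here is a sketch (with a toy encoding of our own) that evaluates a MaxSP_k-style objective by brute force; the classification above concerns when and how well this baseline can be beaten.

```python
from itertools import product

def max_sp(universe, k, phi):
    """Exhaustive-search baseline for a MaxSP_k-style objective:
    max over (x_1, ..., x_k) of #{y : phi(x_1, ..., x_k, y)}.

    phi is an arbitrary quantifier-free property given as a predicate.
    Costs O(|universe|^(k+1)) predicate evaluations.
    """
    return max(
        sum(1 for y in universe if phi(*xs, y))
        for xs in product(universe, repeat=k)
    )

# Toy instance over {0,1}^3: count the y's orthogonal to both chosen vectors.
vecs = list(product([0, 1], repeat=3))
orth = lambda x1, x2, y: all(a * c == 0 and b * c == 0 for a, b, c in zip(x1, x2, y))
print(max_sp(vecs, 2, orth))  # 8: choosing x1 = x2 = (0,0,0) admits every y
```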

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A Structural Investigation of the Approximability of Polynomial-Time Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 30:1-30:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.30,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Structural Investigation of the Approximability of Polynomial-Time Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{30:1--30:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.30},
  URN =		{urn:nbn:de:0030-drops-163713},
  doi =		{10.4230/LIPIcs.ICALP.2022.30},
  annote =	{Keywords: Classification Theorems, Hardness of Approximation in P, Fine-grained Complexity Theory}
}
Document
Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, Dániel Marx, and André Nusser

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
The Dynamic Time Warping (DTW) distance is a popular measure of similarity for a variety of sequence data. For comparing polygonal curves π, σ in ℝ^d, it provides a robust, outlier-insensitive alternative to the Fréchet distance. However, like the Fréchet distance, the DTW distance is not invariant under translations. Can we efficiently optimize the DTW distance of π and σ under arbitrary translations, to compare the curves' shape irrespective of their absolute location? There are surprisingly few works in this direction, which may be due to its computational intricacy: For the Euclidean norm, this problem contains as a special case the geometric median problem, which provably admits no exact algebraic algorithm (that is, no algorithm using only addition, multiplication, and k-th roots). We thus investigate exact algorithms for non-Euclidean norms as well as approximation algorithms for the Euclidean norm. For the L₁ norm in ℝ^d, we provide an 𝒪(n^{2(d+1)})-time algorithm, i.e., an exact polynomial-time algorithm for constant d. Here and below, n bounds the curves' complexities. For the Euclidean norm in ℝ², we show that a simple problem-specific insight leads to a (1+ε)-approximation in time 𝒪(n³/ε²). We then show how to obtain a subcubic 𝒪̃(n^{2.5}/ε²) time algorithm with significant new ideas; this time comes close to the well-known quadratic time barrier for computing DTW for fixed translations. Technically, the algorithm is obtained by speeding up repeated DTW distance estimations using a dynamic data structure for maintaining shortest paths in weighted planar digraphs. Crucially, we show how to traverse a candidate set of translations using space-filling curves in a way that incurs only few updates to the data structure. We hope that our results will facilitate the use of DTW under translation both in theory and practice, and inspire similar algorithmic approaches for related geometric optimization problems.
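
For reference, the standard quadratic dynamic program for the DTW distance at a fixed translation, the subroutine that algorithms for the translated variant repeatedly re-evaluate; a minimal sketch with a pluggable norm (the L₁ norm in the example is our choice, not the paper's setting).

```python
def dtw(pi, sigma, dist):
    """Dynamic Time Warping distance between point sequences pi and sigma.

    Standard O(n*m) dynamic program:
    d[i][j] = dist(pi[i-1], sigma[j-1]) + min(d[i-1][j], d[i][j-1], d[i-1][j-1]).
    """
    n, m = len(pi), len(sigma)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = dist(pi[i - 1], sigma[j - 1]) + min(
                d[i - 1][j], d[i][j - 1], d[i - 1][j - 1]
            )
    return d[n][m]

l1 = lambda p, q: sum(abs(a - b) for a, b in zip(p, q))
print(dtw([(0, 0), (1, 0)], [(0, 0), (1, 1)], l1))  # 1.0
```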

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, Dániel Marx, and André Nusser. Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 20:1-20:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.20,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Marx, D\'{a}niel and Nusser, Andr\'{e}},
  title =	{{Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{20:1--20:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.20},
  URN =		{urn:nbn:de:0030-drops-160287},
  doi =		{10.4230/LIPIcs.SoCG.2022.20},
  annote =	{Keywords: Dynamic Time Warping, Sequence Similarity Measures}
}
Document
Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
We initiate the study of diameter computation in geometric intersection graphs from the fine-grained complexity perspective. A geometric intersection graph is a graph whose vertices correspond to some shapes in d-dimensional Euclidean space, such as balls, segments, or hypercubes, and whose edges correspond to pairs of intersecting shapes. The diameter of a graph is the largest distance realized by a pair of vertices in the graph. Computing the diameter in near-quadratic time is possible in several classes of intersection graphs [Chan and Skrepetos 2019], but it is not at all clear if these algorithms are optimal, especially since in the related class of planar graphs the diameter can be computed in 𝒪̃(n^{5/3}) time [Cabello 2019, Gawrychowski et al. 2021]. In this work we (conditionally) rule out sub-quadratic algorithms in several classes of intersection graphs, i.e., algorithms of running time 𝒪(n^{2-δ}) for some δ > 0. In particular, there are no sub-quadratic algorithms already for fat objects in small dimensions: unit balls in ℝ³ or congruent equilateral triangles in ℝ². For unit segments and congruent equilateral triangles, we can even rule out strong sub-quadratic approximations already in ℝ². It seems that the hardness of approximation may also depend on dimensionality: for axis-parallel unit hypercubes in ℝ^{12}, distinguishing between diameter 2 and 3 needs quadratic time (ruling out (3/2-ε)-approximations), whereas for axis-parallel unit squares, we give an algorithm that distinguishes between diameter 2 and 3 in near-linear time. Note that many of our lower bounds match the best known algorithms up to sub-polynomial factors. Ultimately, this fine-grained perspective may enable us to determine for which shapes we can have efficient algorithms and approximation schemes for diameter computation.
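
The near-quadratic baseline these lower bounds address is straightforward: build the intersection graph explicitly and run BFS from every vertex. A sketch for unit disks (our own illustration, not from the paper):

```python
from collections import deque

def unit_disk_diameter(centers):
    """Diameter of the intersection graph of disks of radius 1/2.

    Two such disks intersect iff their centers are at distance <= 1.
    Builds the graph in O(n^2) and runs BFS from every vertex: the kind of
    near-quadratic baseline the conditional lower bounds address.
    """
    n = len(centers)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if sum((a - b) ** 2 for a, b in zip(centers[i], centers[j])) <= 1:
                adj[i].append(j)
                adj[j].append(i)
    diameter = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        diameter = max(diameter, max(dist.values()))
    return diameter

print(unit_disk_diameter([(0, 0), (0.8, 0), (1.6, 0)]))  # 2
```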

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian. Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 21:1-21:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.21,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Nusser, Andr\'{e} and Parsaeian, Zahra},
  title =	{{Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{21:1--21:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.21},
  URN =		{urn:nbn:de:0030-drops-160294},
  doi =		{10.4230/LIPIcs.SoCG.2022.21},
  annote =	{Keywords: Hardness in P, Geometric Intersection Graph, Graph Diameter, Orthogonal Vectors, Hyperclique Detection}
}
Document
The Fine-Grained Complexity of Multi-Dimensional Ordering Properties

Authors: Haozhe An, Mohit Gurumukhani, Russell Impagliazzo, Michael Jaber, Marvin Künnemann, and Maria Paula Parga Nina

Published in: LIPIcs, Volume 214, 16th International Symposium on Parameterized and Exact Computation (IPEC 2021)


Abstract
We define a class of problems whose input is an n-sized set of d-dimensional vectors, and where the problem is first-order definable using comparisons between coordinates. This class captures a wide variety of tasks, such as complex types of orthogonal range search, model-checking first-order properties on geometric intersection graphs, and elementary questions on multidimensional data like verifying Pareto optimality of a choice of data points. Focusing on constant dimension d, we show that any k-quantifier, d-dimensional such problem is solvable in O(n^{k-1} log^{d-1} n) time. Furthermore, this algorithm is conditionally tight up to subpolynomial factors: we show that assuming the 3-uniform hyperclique hypothesis, there is a k-quantifier, (3k-3)-dimensional problem in this class that requires time Ω(n^{k-1-o(1)}). Towards identifying a single representative problem for this class, we study the existence of complete problems for the 3-quantifier setting (since 2-quantifier problems can already be solved in near-linear time O(n log^{d-1} n), and k-quantifier problems with k > 3 reduce to the 3-quantifier case). We define a problem Vector Concatenated Non-Domination VCND_d (Given three sets of vectors X, Y and Z of dimension d, d and 2d, respectively, is there an x ∈ X and a y ∈ Y so that their concatenation x∘y is not dominated by any z ∈ Z, where vector u is dominated by vector v if u_i ≤ v_i for each coordinate 1 ≤ i ≤ d?), and determine it as the "unique" candidate to be complete for this class (under fine-grained assumptions).
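
VCND_d is concrete enough to state directly in code; below is the brute-force check straight from the definition (the encoding is our own), far slower than the conditionally optimal bounds discussed above.

```python
from itertools import product

def vcnd(X, Y, Z):
    """Vector Concatenated Non-Domination (VCND_d), from the definition:
    is there x in X and y in Y whose concatenation x + y is not dominated
    by any z in Z? (u is dominated by v iff u_i <= v_i in every coordinate.)

    Brute force in O(|X| * |Y| * |Z| * d) time.
    """
    dominated = lambda u, v: all(a <= b for a, b in zip(u, v))
    return any(
        not any(dominated(x + y, z) for z in Z)
        for x, y in product(X, Y)
    )

X, Y = [(1, 0)], [(0, 2)]
Z = [(1, 1, 1, 1)]  # does not dominate (1, 0, 0, 2): its last coordinate is 2 > 1
print(vcnd(X, Y, Z))  # True
```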

Cite as

Haozhe An, Mohit Gurumukhani, Russell Impagliazzo, Michael Jaber, Marvin Künnemann, and Maria Paula Parga Nina. The Fine-Grained Complexity of Multi-Dimensional Ordering Properties. In 16th International Symposium on Parameterized and Exact Computation (IPEC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 214, pp. 3:1-3:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{an_et_al:LIPIcs.IPEC.2021.3,
  author =	{An, Haozhe and Gurumukhani, Mohit and Impagliazzo, Russell and Jaber, Michael and K\"{u}nnemann, Marvin and Nina, Maria Paula Parga},
  title =	{{The Fine-Grained Complexity of Multi-Dimensional Ordering Properties}},
  booktitle =	{16th International Symposium on Parameterized and Exact Computation (IPEC 2021)},
  pages =	{3:1--3:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-216-7},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{214},
  editor =	{Golovach, Petr A. and Zehavi, Meirav},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2021.3},
  URN =		{urn:nbn:de:0030-drops-153869},
  doi =		{10.4230/LIPIcs.IPEC.2021.3},
  annote =	{Keywords: Fine-grained complexity, First-order logic, Orthogonal vectors}
}
Document
APPROX
Fine-Grained Completeness for Optimization in P

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search and optimization variants of the k-XOR problem. Specifically, we define MaxSP as the class of problems definable as max_{x₁,… ,x_k} #{(y₁,… ,y_𝓁) : ϕ(x₁,… ,x_k, y₁,… ,y_𝓁)}, where ϕ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On m-sized structures, we can solve each such problem in time O(m^{k+𝓁-1}). Our results are: - We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under deterministic fine-grained reductions: A strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of O(m^{k+𝓁-1}) for all problems in MaxSP/MinSP by a polynomial factor. - This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic c-approximation would give a (c+ε)-approximation for all MaxSP/MinSP problems in time O(m^{k+𝓁-1-δ}), where ε > 0 can be chosen arbitrarily small. Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is equivalent to giving an O(1)-approximation for all MinSP problems in faster-than-O(m^{k+𝓁-1}) time. - By fine-tuning our reductions, we obtain mild algorithmic improvements for solving and approximating all problems in MaxSP and MinSP, using the fastest known algorithms for Maximum/Minimum Inner Product.
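
The complete problem is simple to state; below is a sketch of the exhaustive-search baseline for Maximum Inner Product, the running time that, by the completeness theorem, cannot be polynomially improved (under the stated hypotheses) without improving all of MaxSP.

```python
from itertools import product

def max_inner_product(A, B):
    """Maximum Inner Product by exhaustive search over all pairs.

    O(|A| * |B| * d) time; a strongly subquadratic (approximation) algorithm
    would, by the completeness results, transfer to all of MaxSP/MinSP.
    """
    return max(sum(a * b for a, b in zip(x, y)) for x, y in product(A, B))

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (1, 0, 1)]
print(max_inner_product(A, B))  # 2, attained by (1,0,1) . (1,0,1)
```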

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. Fine-Grained Completeness for Optimization in P. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 9:1-9:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.APPROX/RANDOM.2021.9,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{Fine-Grained Completeness for Optimization in P}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{9:1--9:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  URN =		{urn:nbn:de:0030-drops-147024},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  annote =	{Keywords: Fine-grained Complexity \& Algorithm Design, Completeness, Hardness of Approximation in P, Dimensionality Reductions}
}
Document
When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fréchet Distance Under Translation

Authors: Karl Bringmann, Marvin Künnemann, and André Nusser

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
Consider the natural question of how to measure the similarity of curves in the plane by a quantity that is invariant under translations of the curves. Such a measure is justified whenever we aim to quantify the similarity of the curves' shapes rather than their positioning in the plane, e.g., to compare the similarity of handwritten characters. Perhaps the most natural such notion is the (discrete) Fréchet distance under translation. Unfortunately, the algorithmic literature on this problem yields a very pessimistic view: On polygonal curves with n vertices, the fastest algorithm runs in time 𝒪(n^4.667) and cannot be improved below n^{4-o(1)} unless the Strong Exponential Time Hypothesis fails. Can we still obtain an implementation that is efficient on realistic datasets? Spurred by the surprising performance of recent implementations for the Fréchet distance, we perform algorithm engineering for the Fréchet distance under translation. Our solution combines fast, but inexact tools from continuous optimization (specifically, branch-and-bound algorithms for global Lipschitz optimization) with exact, but expensive algorithms from computational geometry (specifically, problem-specific algorithms based on an arrangement construction). We combine these two ingredients to obtain an exact decision algorithm for the Fréchet distance under translation. For the related task of computing the distance value up to a desired precision, we engineer and compare different methods. On a benchmark set involving handwritten characters and route trajectories, our implementation answers a typical query for either task in the range of a few milliseconds up to a second on standard desktop hardware. We believe that our implementation will enable, for the first time, the use of the Fréchet distance under translation in applications, whereas previous algorithmic approaches would have been computationally infeasible. Furthermore, we hope that our combination of continuous optimization and computational geometry will inspire similar approaches for further algorithmic questions.
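
The continuous-optimization ingredient can be illustrated generically: below is a one-dimensional Lipschitz branch-and-bound template (our own simplified sketch; the paper optimizes over two-dimensional translations with problem-specific bounds and an exact geometric fallback).

```python
import heapq

def lipschitz_minimize(f, lo, hi, L, eps=1e-3):
    """Global minimization of an L-Lipschitz function on [lo, hi] by
    branch-and-bound: on an interval of radius r with midpoint m, f is at
    least f(m) - L*r, so intervals that cannot beat the incumbent are pruned.
    Returns a value within eps of the global minimum.
    """
    mid = (lo + hi) / 2
    best = f(mid)
    heap = [(best - L * (hi - lo) / 2, lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound > best - eps:  # no remaining interval can improve by eps
            return best
        for a2, b2 in ((a, (a + b) / 2), ((a + b) / 2, b)):
            m = (a2 + b2) / 2
            best = min(best, f(m))
            heapq.heappush(heap, (f(m) - L * (b2 - a2) / 2, a2, b2))
    return best

# |x - 0.3| + 0.1 is 1-Lipschitz with global minimum 0.1 at x = 0.3.
print(lipschitz_minimize(lambda x: abs(x - 0.3) + 0.1, 0.0, 1.0, L=1.0))  # ~0.1
```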

Cite as

Karl Bringmann, Marvin Künnemann, and André Nusser. When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fréchet Distance Under Translation. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2020.25,
  author =	{Bringmann, Karl and K\"{u}nnemann, Marvin and Nusser, Andr\'{e}},
  title =	{{When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fr\'{e}chet Distance Under Translation}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.25},
  URN =		{urn:nbn:de:0030-drops-128912},
  doi =		{10.4230/LIPIcs.ESA.2020.25},
  annote =	{Keywords: Fr\'{e}chet Distance, Computational Geometry, Continuous Optimization, Algorithm Engineering}
}
Document
Finding Small Satisfying Assignments Faster Than Brute Force: A Fine-Grained Perspective into Boolean Constraint Satisfaction

Authors: Marvin Künnemann and Dániel Marx

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
To study the question under which circumstances small solutions can be found faster than by exhaustive search (and by how much), we study the fine-grained complexity of Boolean constraint satisfaction with size constraint exactly k. More precisely, we aim to determine, for any finite constraint family, the optimal running time f(k)·n^{g(k)} required to find satisfying assignments that set precisely k of the n variables to 1. Under central hardness assumptions on detecting cliques in graphs and 3-uniform hypergraphs, we give an almost tight characterization of g(k) into four regimes: 1) Brute force is essentially best-possible, i.e., g(k) = (1 ± o(1))k, 2) the best algorithms are as fast as current k-clique algorithms, i.e., g(k) = (ω/3 ± o(1))k, 3) the exponent has sublinear dependence on k with g(k) ∈ [Ω(∛k), O(√k)], or 4) the problem is fixed-parameter tractable, i.e., g(k) = O(1). This yields a more fine-grained perspective than a previous FPT/W[1]-hardness dichotomy (Marx, Computational Complexity 2005). Our most interesting technical contribution is an f(k)·n^{4√k}-time algorithm for SubsetSum with precedence constraints parameterized by the target k; in particular, the approach, based on generalizing a bound on the Frobenius coin problem to a setting with precedence constraints, might be of independent interest.
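
The brute-force regime g(k) ≈ k corresponds to the obvious algorithm below, which tries all C(n, k) assignments of Hamming weight exactly k; the constraint encoding is our own illustration.

```python
from itertools import combinations

def exists_size_k_assignment(n, k, constraints):
    """Brute-force baseline for Boolean CSP with size constraint exactly k:
    try all C(n, k) assignments setting exactly k of the n variables to 1.

    constraints: list of (scope, allowed), where scope is a tuple of variable
    indices and allowed is a set of 0/1-tuples. Time O(n^k * poly), matching
    the g(k) = (1 +- o(1))k regime of the classification.
    """
    for ones in combinations(range(n), k):
        on = set(ones)
        if all(tuple(int(v in on) for v in scope) in allowed
               for scope, allowed in constraints):
            return True
    return False

# Not-all-equal-style constraints on a path of 4 variables, k = 2 ones.
nae = {(0, 1), (1, 0)}
cons = [((i, i + 1), nae) for i in range(3)]
print(exists_size_k_assignment(4, 2, cons))  # True: 0101 or 1010
```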

Cite as

Marvin Künnemann and Dániel Marx. Finding Small Satisfying Assignments Faster Than Brute Force: A Fine-Grained Perspective into Boolean Constraint Satisfaction. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 27:1-27:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kunnemann_et_al:LIPIcs.CCC.2020.27,
  author =	{K\"{u}nnemann, Marvin and Marx, D\'{a}niel},
  title =	{{Finding Small Satisfying Assignments Faster Than Brute Force: A Fine-Grained Perspective into Boolean Constraint Satisfaction}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{27:1--27:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.27},
  URN =		{urn:nbn:de:0030-drops-125791},
  doi =		{10.4230/LIPIcs.CCC.2020.27},
  annote =	{Keywords: Fine-grained complexity theory, algorithmic classification theorem, multivariate algorithms and complexity, constraint satisfaction problems, satisfiability}
}
Document
A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties

Authors: Karl Bringmann, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
An important class of problems in logics and database theory is given by fixing a first-order property ψ over a relational structure, and considering the model-checking problem for ψ. Recently, Gao, Impagliazzo, Kolokolova, and Williams (SODA 2017) identified this class as fundamental for the theory of fine-grained complexity in P, by showing that the (Sparse) Orthogonal Vectors problem is complete for this class under fine-grained reductions. This raises the question whether fine-grained complexity can yield a precise understanding of all first-order model-checking problems. Specifically, can we determine, for any fixed first-order property ψ, the exponent of the optimal running time O(m^{c_ψ}), where m denotes the number of tuples in the relational structure? Towards answering this question, in this work we give a dichotomy for the class of ∃^k∀-quantified graph properties. For every such property ψ, we either give a polynomial-time improvement over the baseline O(m^k)-time algorithm or show that it requires time m^{k-o(1)} under the hypothesis that MAX-3-SAT has no O((2-ε)^n)-time algorithm. More precisely, we define a hardness parameter h = H(ψ) such that ψ can be decided in time O(m^{k-ε}) if h ≤ 2 and requires time m^{k-o(1)} for h ≥ 3 unless the h-uniform HyperClique hypothesis fails. This unveils a natural hardness hierarchy within first-order properties: for any h ≥ 3, we show that there exists an ∃^k∀-quantified graph property ψ with hardness H(ψ) = h that is solvable in time O(m^{k-ε}) if and only if the h-uniform HyperClique hypothesis fails. Finally, we give more precise upper and lower bounds for an exemplary class of formulas with k = 3 and extend our classification to a counting dichotomy.
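
The baseline the dichotomy measures against is plain model checking by exhaustive search; a sketch for a toy ∃²∀ graph property (the encoding is ours, and it counts vertex tuples rather than the paper's edge-count parameter m):

```python
from itertools import product

def exists_k_forall(vertices, edges, k, psi):
    """Baseline model checking for an Exists^k-Forall graph property:
    is there a k-tuple (x_1, ..., x_k) with psi(x_1, ..., x_k, y) for all y?

    psi is a quantifier-free predicate with access to an adjacency query.
    Exhaustive search costs O(n^(k+1)) predicate evaluations.
    """
    adj = lambda u, v: (u, v) in edges or (v, u) in edges
    return any(
        all(psi(adj, *xs, y) for y in vertices)
        for xs in product(vertices, repeat=k)
    )

# k = 2 example property: a pair (x1, x2) dominating every other vertex.
V, E = [0, 1, 2, 3], {(0, 1), (0, 2), (1, 3)}
psi = lambda adj, x1, x2, y: y in (x1, x2) or adj(x1, y) or adj(x2, y)
print(exists_k_forall(V, E, 2, psi))  # True: (0, 1) dominates all vertices
```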

Cite as

Karl Bringmann, Nick Fischer, and Marvin Künnemann. A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 31:1-31:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.CCC.2019.31,
  author =	{Bringmann, Karl and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{31:1--31:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.31},
  URN =		{urn:nbn:de:0030-drops-108533},
  doi =		{10.4230/LIPIcs.CCC.2019.31},
  annote =	{Keywords: Fine-grained Complexity, Hardness in P, Hyperclique Conjecture, Constrained Triangle Detection}
}
Document
Walking the Dog Fast in Practice: Algorithm Engineering of the Fréchet Distance

Authors: Karl Bringmann, Marvin Künnemann, and André Nusser

Published in: LIPIcs, Volume 129, 35th International Symposium on Computational Geometry (SoCG 2019)


Abstract
The Fréchet distance provides a natural and intuitive measure for the popular task of computing the similarity of two (polygonal) curves. While a simple algorithm computes it in near-quadratic time, a strongly subquadratic algorithm cannot exist unless the Strong Exponential Time Hypothesis fails. Still, fast practical implementations of the Fréchet distance, in particular for realistic input curves, are highly desirable. This has even led to a designated competition, the ACM SIGSPATIAL GIS Cup 2017: Here, the challenge was to implement a near-neighbor data structure under the Fréchet distance. The bottleneck of the top three implementations turned out to be precisely the decision procedure for the Fréchet distance. In this work, we present a fast, certifying implementation for deciding the Fréchet distance, in order to (1) complement its pessimistic worst-case hardness by an empirical analysis on realistic input data and to (2) improve the state of the art for the GIS Cup challenge. We experimentally evaluate our implementation on a large benchmark consisting of several data sets (including handwritten characters and GPS trajectories). Compared to the winning implementation of the GIS Cup, we obtain running time improvements of up to more than two orders of magnitude for the decision procedure and of up to a factor of 30 for queries to the near-neighbor data structure.
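
At the core of such an implementation is the decision procedure; for the discrete Fréchet distance it is a simple reachability dynamic program, sketched below without any of the engineering (pruning, certificates) described above.

```python
def decide_discrete_frechet(P, Q, delta):
    """Decide whether the discrete Fréchet distance of P and Q is <= delta.

    Reachability DP over the n x m grid of position pairs: (i, j) is reachable
    if |P[i] - Q[j]| <= delta and one of (i-1, j), (i, j-1), (i-1, j-1) is
    reachable. O(n*m) time.
    """
    close = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) <= delta ** 2
    n, m = len(P), len(Q)
    reach = [[False] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if not close(P[i], Q[j]):
                continue
            if i == 0 and j == 0:
                reach[i][j] = True
            else:
                reach[i][j] = (i > 0 and reach[i - 1][j]) or \
                              (j > 0 and reach[i][j - 1]) or \
                              (i > 0 and j > 0 and reach[i - 1][j - 1])
    return reach[n - 1][m - 1]

P, Q = [(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]
print(decide_discrete_frechet(P, Q, 1.0))  # True: walk in lockstep at distance 1
```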

Cite as

Karl Bringmann, Marvin Künnemann, and André Nusser. Walking the Dog Fast in Practice: Algorithm Engineering of the Fréchet Distance. In 35th International Symposium on Computational Geometry (SoCG 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 129, pp. 17:1-17:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2019.17,
  author =	{Bringmann, Karl and K\"{u}nnemann, Marvin and Nusser, Andr\'{e}},
  title =	{{Walking the Dog Fast in Practice: Algorithm Engineering of the Fr\'{e}chet Distance}},
  booktitle =	{35th International Symposium on Computational Geometry (SoCG 2019)},
  pages =	{17:1--17:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-104-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{129},
  editor =	{Barequet, Gill and Wang, Yusu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2019.17},
  URN =		{urn:nbn:de:0030-drops-104219},
  doi =		{10.4230/LIPIcs.SoCG.2019.17},
  annote =	{Keywords: curve simplification, Fr\'{e}chet distance, algorithm engineering}
}
Document
On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress

Authors: Marvin Künnemann

Published in: LIPIcs, Volume 112, 26th Annual European Symposium on Algorithms (ESA 2018)


Abstract
Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two n x n matrices can be performed in near-optimal nondeterministic time Õ(n^2). Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time O(n^2), our question is a relaxation of the open problem of derandomizing Freivalds' algorithm. We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. Particularly, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present the partial algorithmic progress that distinguishing whether an integer matrix product is correct or contains between 1 and n erroneous entries can be performed in time Õ(n^2); interestingly, the difficult case of deterministic matrix product verification is not a problem of "finding a needle in the haystack", but rather one of cancellation effects in the presence of many errors. Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most t errors in time Õ(√t · n^2 + t^2). To obtain this result, we show how to compute an integer matrix product with at most t nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for t = Ω(n^{2/3}) nonzeroes, which is of independent interest.
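
Freivalds' algorithm itself is short enough to state in full; a standard sketch (over the integers, with 0/1 random vectors):

```python
import random

def freivalds(A, B, C, rounds=20):
    """Freivalds' probabilistic check that A @ B == C for n x n matrices.

    Each round picks a random 0/1 vector r and compares A(Br) with Cr in
    O(n^2) time; a wrong product is caught with probability >= 1/2 per round.
    """
    n = len(A)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # certainly incorrect
    return True  # correct with high probability

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad = [[19, 22], [43, 51]]
print(freivalds(A, B, C_good), freivalds(A, B, C_bad))  # True False (w.h.p.)
```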

Cite as

Marvin Künnemann. On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 56:1-56:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kunnemann:LIPIcs.ESA.2018.56,
  author =	{K\"{u}nnemann, Marvin},
  title =	{{On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{56:1--56:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.56},
  URN =		{urn:nbn:de:0030-drops-95195},
  doi =		{10.4230/LIPIcs.ESA.2018.56},
  annote =	{Keywords: matrix product verification, certifying computation, fine-grained complexity and algorithms}
}
Document
Tight Conditional Lower Bounds for Longest Common Increasing Subsequence

Authors: Lech Duraj, Marvin Künnemann, and Adam Polak

Published in: LIPIcs, Volume 89, 12th International Symposium on Parameterized and Exact Computation (IPEC 2017)


Abstract
We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called k-LCIS: Given k integer sequences X_1,...,X_k of length at most n, the task is to determine the length of the longest common subsequence of X_1,...,X_k that is also strictly increasing. Especially for the case of k=2 (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case. Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound, specifically, that no algorithm solves LCIS in (strongly) subquadratic time. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as LCS. We further strengthen this lower bound to rule out O((nL)^{1-ε}) time algorithms for LCIS, where L denotes the solution size, and to rule out O(n^{k-ε}) time algorithms for k-LCIS. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem.
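
For reference, one of the quadratic-time LCIS algorithms that the lower bound shows to be essentially optimal; a standard O(n·m) dynamic program:

```python
def lcis(X, Y):
    """Length of the Longest Common Increasing Subsequence of X and Y.

    Standard O(n*m) dynamic program: f[j] holds the length of the best LCIS
    of the processed prefix of X and Y that ends exactly at Y[j].
    """
    f = [0] * len(Y)
    for x in X:
        best = 0  # best f[j'] over positions j' with Y[j'] < x, seen so far
        for j, y in enumerate(Y):
            if x == y and best + 1 > f[j]:
                f[j] = best + 1
            elif y < x:
                best = max(best, f[j])
    return max(f, default=0)

print(lcis([3, 1, 2, 4], [1, 2, 3, 4]))  # 3: the subsequence 1, 2, 4
```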

Cite as

Lech Duraj, Marvin Künnemann, and Adam Polak. Tight Conditional Lower Bounds for Longest Common Increasing Subsequence. In 12th International Symposium on Parameterized and Exact Computation (IPEC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 89, pp. 15:1-15:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{duraj_et_al:LIPIcs.IPEC.2017.15,
  author =	{Duraj, Lech and K\"{u}nnemann, Marvin and Polak, Adam},
  title =	{{Tight Conditional Lower Bounds for Longest Common Increasing Subsequence}},
  booktitle =	{12th International Symposium on Parameterized and Exact Computation (IPEC 2017)},
  pages =	{15:1--15:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-051-4},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{89},
  editor =	{Lokshtanov, Daniel and Nishimura, Naomi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2017.15},
  URN =		{urn:nbn:de:0030-drops-85706},
  doi =		{10.4230/LIPIcs.IPEC.2017.15},
  annote =	{Keywords: fine-grained complexity, combinatorial pattern matching, sequence alignments, parameterized complexity, SETH}
}
Document
On the Fine-Grained Complexity of One-Dimensional Dynamic Programming

Authors: Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
In this paper, we investigate the complexity of one-dimensional dynamic programming, or more specifically, of the Least-Weight Subsequence (LWS) problem: Given a sequence of n data items together with weights for every pair of the items, the task is to determine a subsequence S minimizing the total weight of the pairs adjacent in S. A large number of natural problems can be formulated as LWS problems, yielding obvious O(n^2)-time solutions. In many interesting instances, the O(n^2)-many weights can be succinctly represented. Yet except for near-linear time algorithms for some specific special cases, little is known about when an LWS instantiation admits a subquadratic-time algorithm and when it does not. In particular, no lower bounds for LWS instantiations have been known before. In an attempt to remedy this situation, we provide a general approach to study the fine-grained complexity of succinct instantiations of the LWS problem: Given an LWS instantiation we identify a highly parallel core problem that is subquadratically equivalent. This provides either an explanation for the apparent hardness of the problem or an avenue to find improved algorithms as the case may be. More specifically, we prove subquadratic equivalences between the following pairs (an LWS instantiation and the corresponding core problem) of problems: a low-rank version of LWS and minimum inner product, finding the longest chain of nested boxes and vector domination, and a coin change problem which is closely related to the knapsack problem and (min,+)-convolution. Using these equivalences and known SETH-hardness results for some of the core problems, we deduce tight conditional lower bounds for the corresponding LWS instantiations. We also establish the (min,+)-convolution-hardness of the knapsack problem. Furthermore, we revisit some of the LWS instantiations which are known to be solvable in near-linear time and explain their easiness in terms of the easiness of the corresponding core problems.
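
The generic quadratic dynamic program for LWS follows directly from the problem statement; a minimal sketch with a succinctly represented weight function (the weights in the example are our own toy choice):

```python
def least_weight_subsequence(n, w):
    """Least-Weight Subsequence: the generic O(n^2) dynamic program.

    Finds a subsequence 0 = s_0 < s_1 < ... < s_r = n minimizing the total
    weight of adjacent pairs: T[j] = min over i < j of T[i] + w(i, j).
    """
    INF = float("inf")
    T = [INF] * (n + 1)
    T[0] = 0
    for j in range(1, n + 1):
        T[j] = min(T[i] + w(i, j) for i in range(j))
    return T[n]

# Succinct weights: w(i, j) = (j - i)^2 favors many short hops.
print(least_weight_subsequence(4, lambda i, j: (j - i) ** 2))  # 4 = 1+1+1+1
```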

Cite as

Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider. On the Fine-Grained Complexity of One-Dimensional Dynamic Programming. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{kunnemann_et_al:LIPIcs.ICALP.2017.21,
  author =	{K\"{u}nnemann, Marvin and Paturi, Ramamohan and Schneider, Stefan},
  title =	{{On the Fine-Grained Complexity of One-Dimensional Dynamic Programming}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{21:1--21:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.21},
  URN =		{urn:nbn:de:0030-drops-74688},
  doi =		{10.4230/LIPIcs.ICALP.2017.21},
  annote =	{Keywords: Least-Weight Subsequence, SETH, Fine-Grained Complexity, Knapsack, Subquadratic Algorithms}
}
Document
Improved Protocols and Hardness Results for the Two-Player Cryptogenography Problem

Authors: Benjamin Doerr and Marvin Künnemann

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
The cryptogenography problem, introduced by Brody, Jakobsen, Scheder, and Winkler (ITCS 2014), is to collaboratively leak a piece of information known to only one member of a group (i) without revealing who was the origin of this information and (ii) without any private communication, neither during the process nor before. Despite several deep structural results, even the smallest case of leaking one bit of information present at one of two players is not well understood. Brody et al. gave a 2-round protocol enabling the two players to succeed with probability 1/3 and showed the hardness result that no protocol can give a success probability of more than 3/8. In this work, we show that neither bound is tight. Our new hardness result, obtained by a different application of the concavity method used also in the previous work, states that a success probability of better than 0.3672 is not possible. Using both theoretical and numerical approaches, we improve the lower bound to 0.3384, that is, give a protocol leading to this success probability. To ease the design of new protocols, we prove an equivalent formulation of the cryptogenography problem as a solitaire vector splitting game. Via an automated game tree search, we find good strategies for this game. We then translate the splits that occurred in this strategy into inequalities relating position values and use an LP solver to find an optimal solution for these inequalities. This gives slightly better game values, but more importantly, also a more compact representation of the protocol and a way to easily verify the claimed quality of the protocol. Unfortunately, already the smallest protocol we found that beats the previous 1/3 success probability takes up 16 rounds of communication. Even in a compact representation, the protocol leading to the bound of 0.3384 consists of 18248 game states. These numbers suggest that the task of finding good protocols for the cryptogenography problem as well as understanding their structure is harder than what the simple problem formulation suggests.

Cite as

Benjamin Doerr and Marvin Künnemann. Improved Protocols and Hardness Results for the Two-Player Cryptogenography Problem. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 150:1-150:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{doerr_et_al:LIPIcs.ICALP.2016.150,
  author =	{Doerr, Benjamin and K\"{u}nnemann, Marvin},
  title =	{{Improved Protocols and Hardness Results for the Two-Player Cryptogenography Problem}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{150:1--150:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.150},
  URN =		{urn:nbn:de:0030-drops-62946},
  doi =		{10.4230/LIPIcs.ICALP.2016.150},
  annote =	{Keywords: randomized protocols, anonymous communication, computer-aided proofs, solitaire games}
}