20 Search Results for "Künnemann, Marvin"


Document
The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds

Authors: Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets. We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning detection of an s-t-walk with a prescribed color sequence in a given edge- or node-colored graph.

For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications:
- It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis.
- It gives a tight (n+nm^{1/3})^{1-o(1)} lower bound for the Word Break problem on strings of length n and dictionaries of total size m.
- It implies the popular OMv hypothesis. Since the NFA acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems.

Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a refutation of the NFA Acceptance hypothesis may lead the way to attacking the current barriers observed for Context-Free Language Reachability, the Word Break problem and the growing list of dynamic problems proven hard under the OMv hypothesis.
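
For concreteness, the textbook algorithm referred to by the hypothesis is the standard state-set simulation, which processes a length-n string over an NFA with transition relation Δ in O(n·|Δ|) time. A minimal Python sketch of this simulation (identifiers are ours and purely illustrative):

def nfa_accepts(delta, start, accepting, word):
    """Textbook NFA simulation: maintain the set of currently reachable
    states. delta maps (state, symbol) to a set of successor states; the
    loop costs O(|word| * |transitions|) time, the bound that the NFA
    Acceptance hypothesis conjectures to be optimal for dense NFAs."""
    current = {start}
    for symbol in word:
        current = {t for s in current for t in delta.get((s, symbol), ())}
        if not current:
            return False  # no state reachable: reject early
    return bool(current & accepting)

# Two-state NFA accepting words that end in 'a':
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
print(nfa_accepts(delta, start=0, accepting={1}, word="aba"))  # True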

Cite as

Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen. The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 22:1-22:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ITCS.2024.22,
  author =	{Bringmann, Karl and Gr{\o}nlund, Allan and K\"{u}nnemann, Marvin and Larsen, Kasper Green},
  title =	{{The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{22:1--22:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.22},
  URN =		{urn:nbn:de:0030-drops-195500},
  doi =		{10.4230/LIPIcs.ITCS.2024.22},
  annote =	{Keywords: Fine-grained complexity theory, non-deterministic finite automata}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality

Authors: Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks, and Karol Węgrzycki

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Seminal results establish that the coverability problem for Vector Addition Systems with States (VASS) is in EXPSPACE (Rackoff, '78) and is EXPSPACE-hard already under unary encodings (Lipton, '76). More precisely, Rosier and Yen later utilise Rackoff’s bounding technique to show that if coverability holds then there is a run of length at most n^{2^𝒪(d log d)}, where d is the dimension and n is the size of the given unary VASS. Earlier, Lipton showed that there exist instances of coverability in d-dimensional unary VASS that are only witnessed by runs of length at least n^{2^Ω(d)}. Our first result closes this gap. We improve the upper bound by removing the twice-exponentiated log(d) factor, thus matching Lipton’s lower bound. This closes the corresponding gap for the exact space required to decide coverability. This also yields a deterministic n^{2^𝒪(d)}-time algorithm for coverability. Our second result is a matching lower bound: conditioned upon the Exponential Time Hypothesis, there does not exist a deterministic n^{2^o(d)}-time algorithm.

When analysing coverability, a standard proof technique is to consider VASS with bounded counters. Bounded VASS make for an interesting and popular model due to strong connections with timed automata. Withal, we study a natural setting where the counter bound is linear in the size of the VASS. Here the trivial exhaustive search algorithm runs in 𝒪(n^{d+1}) time. We give evidence that this is near-optimal: we prove that in dimension one the trivial algorithm is conditionally optimal, by showing that n^{2-o(1)} time is required under the k-cycle hypothesis, and that in general fixed dimension d, n^{d-2-o(1)} time is required under the 3-uniform hyperclique hypothesis.
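
The bounded-counter setting admits a direct reading of the trivial exhaustive search: breadth-first search over the at most |Q|·B^d configurations whose counters stay below the bound B, i.e. 𝒪(n^{d+1}) configurations when B is linear in n. A hedged Python sketch (the encoding is ours, and coverability is simplified to reaching a target control state):

from collections import deque

def bounded_coverability(transitions, bound, source, target):
    """BFS over configurations (state, counter vector) of a d-dimensional
    VASS, keeping every counter in [0, bound). transitions is a list of
    (state, update_vector, successor_state) triples. Explores at most
    |states| * bound^d configurations: the trivial exhaustive search."""
    d = len(transitions[0][1])
    start = (source, (0,) * d)
    seen, queue = {start}, deque([start])
    while queue:
        state, counters = queue.popleft()
        if state == target:
            return True
        for p, update, q in transitions:
            if p != state:
                continue
            new = tuple(c + u for c, u in zip(counters, update))
            if all(0 <= c < bound for c in new) and (q, new) not in seen:
                seen.add((q, new))
                queue.append((q, new))
    return False

# 1-dimensional toy VASS: pump the counter in state 0, then move to state 1.
transitions = [(0, (1,), 0), (0, (0,), 1)]
print(bounded_coverability(transitions, bound=4, source=0, target=1))  # True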

Cite as

Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks, and Karol Węgrzycki. Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 131:1-131:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kunnemann_et_al:LIPIcs.ICALP.2023.131,
  author =	{K\"{u}nnemann, Marvin and Mazowiecki, Filip and Sch\"{u}tze, Lia and Sinclair-Banks, Henry and W\k{e}grzycki, Karol},
  title =	{{Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{131:1--131:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.131},
  URN =		{urn:nbn:de:0030-drops-181834},
  doi =		{10.4230/LIPIcs.ICALP.2023.131},
  annote =	{Keywords: Vector Addition System, Coverability, Reachability, Fine-Grained Complexity, Exponential Time Hypothesis, k-Cycle Hypothesis, Hyperclique Hypothesis}
}
Document
Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4

Authors: Egor Gorbachev and Marvin Künnemann

Published in: LIPIcs, Volume 258, 39th International Symposium on Computational Geometry (SoCG 2023)


Abstract
Klee’s measure problem (computing the volume of the union of n axis-parallel boxes in ℝ^d) is well known to have n^{d/2 ± o(1)}-time algorithms (Overmars, Yap, SICOMP'91; Chan FOCS'13). Only recently, a conditional lower bound (without any restriction to "combinatorial" algorithms) could be shown for d = 3 (Künnemann, FOCS'22). Can this result be extended to a tight lower bound for dimensions d ≥ 4?

In this paper, we formalize the technique of the tight lower bound for d = 3 using a combinatorial object we call prefix covering design. We show that these designs, which are related in spirit to combinatorial designs, directly translate to conditional lower bounds for Klee’s measure problem and various related problems. By devising good prefix covering designs, we give the following lower bounds for Klee’s measure problem in ℝ^d, the depth problem for axis-parallel boxes in ℝ^d, the largest-volume/max-perimeter empty (anchored) box problem in ℝ^{2d}, and related problems:
- Ω(n^{1.90476}) for d = 4,
- Ω(n^{2.22222}) for d = 5,
- Ω(n^{d/3 + 2√d/9 - o(√d)}) for general d,
assuming the 3-uniform hyperclique hypothesis. For Klee’s measure problem and the depth problem, these bounds improve previous lower bounds of Ω(n^{1.777...}), Ω(n^{2.0833...}) and Ω(n^{d/3 + 1/3 + Θ(1/d)}) respectively.

Our improved prefix covering designs were obtained by (1) exploiting a computer-aided search using problem-specific insights as well as SAT solvers, and (2) showing how to transform combinatorial covering designs known in the literature to strong prefix covering designs. In contrast, we show that our lower bounds are close to best possible using this proof technique.
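
As a reference point for the problem itself (not the paper's lower-bound machinery), the union volume can always be computed by brute force over the grid induced by the boxes' coordinates; a Python sketch, illustrative only:

from itertools import product
from math import prod

def union_volume(boxes):
    """Brute-force statement of Klee's measure problem via coordinate
    compression: sum the volumes of all elementary grid cells covered by
    some box. With n boxes in d dimensions this takes O((2n)^d * n * d)
    time, far above the n^{d/2} state of the art, but it pins down the
    problem the lower bounds above concern. A box is a tuple of d
    (lo, hi) intervals."""
    d = len(boxes[0])
    grids = [sorted({b[i][0] for b in boxes} | {b[i][1] for b in boxes})
             for i in range(d)]
    total = 0.0
    for cell in product(*(range(len(g) - 1) for g in grids)):
        lo = [grids[i][cell[i]] for i in range(d)]
        hi = [grids[i][cell[i] + 1] for i in range(d)]
        if any(all(b[i][0] <= lo[i] and hi[i] <= b[i][1] for i in range(d))
               for b in boxes):
            total += prod(h - l for l, h in zip(lo, hi))
    return total

# Two overlapping unit squares in the plane: union area 1.5.
print(union_volume([((0, 1), (0, 1)), ((0.5, 1.5), (0, 1))]))  # 1.5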

Cite as

Egor Gorbachev and Marvin Künnemann. Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4. In 39th International Symposium on Computational Geometry (SoCG 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 258, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gorbachev_et_al:LIPIcs.SoCG.2023.36,
  author =	{Gorbachev, Egor and K\"{u}nnemann, Marvin},
  title =	{{Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4}},
  booktitle =	{39th International Symposium on Computational Geometry (SoCG 2023)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-273-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{258},
  editor =	{Chambers, Erin W. and Gudmundsson, Joachim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2023.36},
  URN =		{urn:nbn:de:0030-drops-178861},
  doi =		{10.4230/LIPIcs.SoCG.2023.36},
  annote =	{Keywords: Fine-grained complexity theory, non-combinatorial lower bounds, computational geometry, clique detection}
}
Document
Track A: Algorithms, Complexity and Games
A Structural Investigation of the Approximability of Polynomial-Time Problems

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
An extensive research effort targets optimal (in)approximability results for various NP-hard optimization problems. Notably, the works of (Creignou'95) as well as (Khanna, Sudan, Trevisan, Williamson'00) establish a tight characterization of a large subclass of MaxSNP, namely Boolean MaxCSPs and further variants, in terms of their polynomial-time approximability. Can we obtain similarly encompassing characterizations for classes of polynomial-time optimization problems?

To this end, we initiate the systematic study of a recently introduced polynomial-time analogue of MaxSNP, which includes a large number of well-studied problems (including Nearest and Furthest Neighbor in the Hamming metric, Maximum Inner Product, optimization variants of k-XOR and Maximum k-Cover). Specifically, for each k, MaxSP_k denotes the class of O(m^k)-time problems of the form max_{x_1,…,x_k} #{y : ϕ(x_1,…,x_k,y)}, where ϕ is a quantifier-free first-order property and m denotes the size of the relational structure. Assuming central hypotheses about clique detection in hypergraphs and exact Max-3-SAT, we show that for any MaxSP_k problem definable by a quantifier-free m-edge graph formula ϕ, the best possible approximation guarantee in faster-than-exhaustive-search time O(m^{k-δ}) falls into one of four categories:
- optimizable to exactness in time O(m^{k-δ}),
- an (inefficient) approximation scheme, i.e., a (1+ε)-approximation in time O(m^{k-f(ε)}),
- a (fixed) constant-factor approximation in time O(m^{k-δ}), or
- an nm^ε-approximation in time O(m^{k-f(ε)}).
We obtain an almost complete characterization of these regimes, for MaxSP_k as well as for an analogously defined minimization class MinSP_k. As our main technical contribution, we show how to rule out the existence of approximation schemes for a large class of problems admitting constant-factor approximations, under a hypothesis for exact Sparse Max-3-SAT algorithms posed by (Alman, Vassilevska Williams'20).

As general trends for the problems we consider, we observe: (1) Exact optimizability has a simple algebraic characterization, (2) only few maximization problems do not admit a constant-factor approximation; these do not even have a subpolynomial-factor approximation, and (3) constant-factor approximation of minimization problems is equivalent to deciding whether the optimum is equal to 0.
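
To make the class definition concrete, here is the exhaustive-search baseline for a MaxSP_k-style problem, with the quantifier-free property ϕ supplied as a Python predicate (a toy rendering of the definition, not code from the paper):

from itertools import product

def max_sp(universe, phi, k):
    """Exhaustive search for max over (x_1,...,x_k) of #{y : phi(x_1,...,x_k,y)}.
    Costs O(m^{k+1}) predicate evaluations on a universe of size m; the
    paper's classification concerns how well this baseline can be beaten
    in time O(m^{k-delta})."""
    return max(sum(1 for y in universe if phi(*xs, y))
               for xs in product(universe, repeat=k))

# Toy property phi(x1, x2, y) := x1 <= y <= x2 over {0,...,9}: optimum 10.
print(max_sp(range(10), lambda x1, x2, y: x1 <= y <= x2, k=2))  # 10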

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A Structural Investigation of the Approximability of Polynomial-Time Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 30:1-30:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.30,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Structural Investigation of the Approximability of Polynomial-Time Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{30:1--30:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.30},
  URN =		{urn:nbn:de:0030-drops-163713},
  doi =		{10.4230/LIPIcs.ICALP.2022.30},
  annote =	{Keywords: Classification Theorems, Hardness of Approximation in P, Fine-grained Complexity Theory}
}
Document
Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, Dániel Marx, and André Nusser

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
The Dynamic Time Warping (DTW) distance is a popular measure of similarity for a variety of sequence data. For comparing polygonal curves π, σ in ℝ^d, it provides a robust, outlier-insensitive alternative to the Fréchet distance. However, like the Fréchet distance, the DTW distance is not invariant under translations. Can we efficiently optimize the DTW distance of π and σ under arbitrary translations, to compare the curves' shape irrespective of their absolute location? There are surprisingly few works in this direction, which may be due to its computational intricacy: For the Euclidean norm, this problem contains as a special case the geometric median problem, which provably admits no exact algebraic algorithm (that is, no algorithm using only addition, multiplication, and k-th roots). We thus investigate exact algorithms for non-Euclidean norms as well as approximation algorithms for the Euclidean norm.

For the L₁ norm in ℝ^d, we provide an 𝒪(n^{2(d+1)})-time algorithm, i.e., an exact polynomial-time algorithm for constant d. Here and below, n bounds the curves' complexities. For the Euclidean norm in ℝ², we show that a simple problem-specific insight leads to a (1+ε)-approximation in time 𝒪(n³/ε²). We then show how to obtain a subcubic 𝒪̃(n^{2.5}/ε²)-time algorithm with significant new ideas; this time comes close to the well-known quadratic time barrier for computing DTW for fixed translations. Technically, the algorithm is obtained by speeding up repeated DTW distance estimations using a dynamic data structure for maintaining shortest paths in weighted planar digraphs. Crucially, we show how to traverse a candidate set of translations using space-filling curves in a way that incurs only few updates to the data structure.

We hope that our results will facilitate the use of DTW under translation both in theory and practice, and inspire similar algorithmic approaches for related geometric optimization problems.
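
For reference, the fixed-translation baseline that the algorithms above repeatedly (approximately) evaluate is the classic quadratic dynamic program for DTW; a self-contained sketch:

def dtw(pi, sigma, dist):
    """Quadratic dynamic program for the DTW distance of two point
    sequences. dist is the underlying point distance (e.g. L1 or L2);
    runs in O(len(pi) * len(sigma)) time."""
    INF = float("inf")
    n, m = len(pi), len(sigma)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Extend the cheaper of the three possible warping steps.
            D[i][j] = dist(pi[i - 1], sigma[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

l1 = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
print(dtw([(0, 0), (1, 0)], [(0, 1), (1, 1)], l1))  # 2.0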

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, Dániel Marx, and André Nusser. Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 20:1-20:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.20,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Marx, D\'{a}niel and Nusser, Andr\'{e}},
  title =	{{Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{20:1--20:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.20},
  URN =		{urn:nbn:de:0030-drops-160287},
  doi =		{10.4230/LIPIcs.SoCG.2022.20},
  annote =	{Keywords: Dynamic Time Warping, Sequence Similarity Measures}
}
Document
Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
We initiate the study of diameter computation in geometric intersection graphs from the fine-grained complexity perspective. A geometric intersection graph is a graph whose vertices correspond to some shapes in d-dimensional Euclidean space, such as balls, segments, or hypercubes, and whose edges correspond to pairs of intersecting shapes. The diameter of a graph is the largest distance realized by a pair of vertices in the graph. Computing the diameter in near-quadratic time is possible in several classes of intersection graphs [Chan and Skrepetos 2019], but it is not at all clear if these algorithms are optimal, especially since in the related class of planar graphs the diameter can be computed in 𝒪̃(n^{5/3}) time [Cabello 2019, Gawrychowski et al. 2021]. In this work we (conditionally) rule out sub-quadratic algorithms in several classes of intersection graphs, i.e., algorithms of running time 𝒪(n^{2-δ}) for some δ > 0. In particular, there are no sub-quadratic algorithms already for fat objects in small dimensions: unit balls in ℝ³ or congruent equilateral triangles in ℝ². For unit segments and congruent equilateral triangles, we can even rule out strong sub-quadratic approximations already in ℝ². It seems that the hardness of approximation may also depend on dimensionality: for axis-parallel unit hypercubes in ℝ^{12}, distinguishing between diameter 2 and 3 needs quadratic time (ruling out (3/2-ε)-approximations), whereas for axis-parallel unit squares, we give an algorithm that distinguishes between diameter 2 and 3 in near-linear time. Note that many of our lower bounds match the best known algorithms up to sub-polynomial factors. Ultimately, this fine-grained perspective may enable us to determine for which shapes we can have efficient algorithms and approximation schemes for diameter computation.
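
For orientation, the quadratic baseline these lower bounds (conditionally) justify is simply: build the intersection graph explicitly and run BFS from every vertex. A sketch for axis-parallel unit squares, given by their centers (encoding ours, illustrative only):

from collections import deque

def diameter_unit_squares(centers):
    """Naive diameter computation in the intersection graph of axis-parallel
    unit squares: two squares intersect iff their centers differ by at most
    1 in each coordinate. O(n^2) edge construction plus n BFS runs."""
    n = len(centers)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if all(abs(a - b) <= 1 for a, b in zip(centers[i], centers[j])):
                adj[i].append(j)
                adj[j].append(i)
    diam = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        diam = max(diam, max(dist.values()))
    return diam

print(diameter_unit_squares([(0, 0), (1, 0), (2, 0)]))  # 2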

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian. Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 21:1-21:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.21,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Nusser, Andr\'{e} and Parsaeian, Zahra},
  title =	{{Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{21:1--21:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.21},
  URN =		{urn:nbn:de:0030-drops-160294},
  doi =		{10.4230/LIPIcs.SoCG.2022.21},
  annote =	{Keywords: Hardness in P, Geometric Intersection Graph, Graph Diameter, Orthogonal Vectors, Hyperclique Detection}
}
Document
The Fine-Grained Complexity of Multi-Dimensional Ordering Properties

Authors: Haozhe An, Mohit Gurumukhani, Russell Impagliazzo, Michael Jaber, Marvin Künnemann, and Maria Paula Parga Nina

Published in: LIPIcs, Volume 214, 16th International Symposium on Parameterized and Exact Computation (IPEC 2021)


Abstract
We define a class of problems whose input is an n-sized set of d-dimensional vectors, and where the problem is first-order definable using comparisons between coordinates. This class captures a wide variety of tasks, such as complex types of orthogonal range search, model-checking first-order properties on geometric intersection graphs, and elementary questions on multidimensional data like verifying Pareto optimality of a choice of data points. Focusing on constant dimension d, we show that any k-quantifier, d-dimensional such problem is solvable in O(n^{k-1} log^{d-1} n) time. Furthermore, this algorithm is conditionally tight up to subpolynomial factors: we show that assuming the 3-uniform hyperclique hypothesis, there is a k-quantifier, (3k-3)-dimensional problem in this class that requires time Ω(n^{k-1-o(1)}). Towards identifying a single representative problem for this class, we study the existence of complete problems for the 3-quantifier setting (since 2-quantifier problems can already be solved in near-linear time O(n log^{d-1} n), and k-quantifier problems with k > 3 reduce to the 3-quantifier case). We define a problem Vector Concatenated Non-Domination VCND_d (given three sets of vectors X, Y and Z of dimension d, d and 2d, respectively, is there an x ∈ X and a y ∈ Y so that their concatenation x∘y is not dominated by any z ∈ Z, where vector u is dominated by vector v if u_i ≤ v_i for each coordinate 1 ≤ i ≤ d?), and determine it as the "unique" candidate to be complete for this class (under fine-grained assumptions).
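
A brute-force checker makes the VCND_d definition concrete (our encoding; the point of the paper is that this problem is the candidate complete problem for the 3-quantifier class):

def vcnd(X, Y, Z):
    """Vector Concatenated Non-Domination, brute force: is there x in X and
    y in Y whose concatenation x+y is dominated by no z in Z? (u is
    dominated by v iff u_i <= v_i in every coordinate.) Exhaustive search
    in O(|X| * |Y| * |Z| * d) time."""
    dominated = lambda u, v: all(a <= b for a, b in zip(u, v))
    return any(not any(dominated(x + y, z) for z in Z)
               for x in X for y in Y)

print(vcnd(X=[(0,)], Y=[(1,)], Z=[(0, 0)]))  # True: (0, 1) is undominated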

Cite as

Haozhe An, Mohit Gurumukhani, Russell Impagliazzo, Michael Jaber, Marvin Künnemann, and Maria Paula Parga Nina. The Fine-Grained Complexity of Multi-Dimensional Ordering Properties. In 16th International Symposium on Parameterized and Exact Computation (IPEC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 214, pp. 3:1-3:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{an_et_al:LIPIcs.IPEC.2021.3,
  author =	{An, Haozhe and Gurumukhani, Mohit and Impagliazzo, Russell and Jaber, Michael and K\"{u}nnemann, Marvin and Nina, Maria Paula Parga},
  title =	{{The Fine-Grained Complexity of Multi-Dimensional Ordering Properties}},
  booktitle =	{16th International Symposium on Parameterized and Exact Computation (IPEC 2021)},
  pages =	{3:1--3:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-216-7},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{214},
  editor =	{Golovach, Petr A. and Zehavi, Meirav},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2021.3},
  URN =		{urn:nbn:de:0030-drops-153869},
  doi =		{10.4230/LIPIcs.IPEC.2021.3},
  annote =	{Keywords: Fine-grained complexity, First-order logic, Orthogonal vectors}
}
Document
APPROX
Fine-Grained Completeness for Optimization in P

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search and optimization variants of the k-XOR problem. Specifically, we define MaxSP as the class of problems definable as max_{x₁,…,x_k} #{(y₁,…,y_𝓁) : ϕ(x₁,…,x_k, y₁,…,y_𝓁)}, where ϕ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On m-sized structures, we can solve each such problem in time O(m^{k+𝓁-1}). Our results are:
- We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under deterministic fine-grained reductions: A strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of O(m^{k+𝓁-1}) for all problems in MaxSP/MinSP by a polynomial factor.
- This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic c-approximation would give a (c+ε)-approximation for all MaxSP/MinSP problems in time O(m^{k+𝓁-1-δ}), where ε > 0 can be chosen arbitrarily small. Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is equivalent to giving a O(1)-approximation for all MinSP problems in faster-than-O(m^{k+𝓁-1}) time.
- By fine-tuning our reductions, we obtain mild algorithmic improvements for solving and approximating all problems in MaxSP and MinSP, using the fastest known algorithms for Maximum/Minimum Inner Product.
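
The complete problem is easy to state; below is the quadratic brute-force baseline for Maximum Inner Product over Boolean vectors (illustrative only; by the completeness result, beating this baseline by a polynomial factor would speed up the entire class):

def max_inner_product(A, B):
    """Brute-force Maximum Inner Product over two sets of Boolean vectors,
    the problem shown complete for MaxSP: O(|A| * |B| * dim) time."""
    return max(sum(x * y for x, y in zip(a, b)) for a in A for b in B)

A = [(1, 0, 1), (1, 1, 1)]
B = [(0, 1, 1), (1, 1, 0)]
print(max_inner_product(A, B))  # 2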

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. Fine-Grained Completeness for Optimization in P. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 9:1-9:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.APPROX/RANDOM.2021.9,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{Fine-Grained Completeness for Optimization in P}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{9:1--9:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  URN =		{urn:nbn:de:0030-drops-147024},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  annote =	{Keywords: Fine-grained Complexity \& Algorithm Design, Completeness, Hardness of Approximation in P, Dimensionality Reductions}
}
Document
Approximating the (Continuous) Fréchet Distance

Authors: Connor Colombe and Kyle Fox

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
We describe the first strongly subquadratic time algorithm with subexponential approximation ratio for approximately computing the Fréchet distance between two polygonal chains. Specifically, let P and Q be two polygonal chains with n vertices in d-dimensional Euclidean space, and let α ∈ [√n, n]. Our algorithm deterministically finds an O(α)-approximate Fréchet correspondence in time O((n³ / α²) log n). In particular, we get an O(n)-approximation in near-linear O(n log n) time, a vast improvement over the previously best known result, a linear time 2^O(n)-approximation. As part of our algorithm, we also describe how to turn any approximate decision procedure for the Fréchet distance into an approximate optimization algorithm whose approximation ratio is the same up to arbitrarily small constant factors. The transformation into an approximate optimization algorithm increases the running time of the decision procedure by only an O(log n) factor.
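
The decision-to-optimization transformation can be pictured as a multiplicative binary search over candidate distance values; the sketch below is a generic, simplified stand-in (the paper's transformation is more careful and loses only an O(log n) factor in running time), and the decide interface is hypothetical:

import math

def approx_optimize(decide, lo, hi, slack=1.01):
    """Turn an approximate decider into an approximate optimizer.
    decide(delta) must return True whenever the distance is at most
    c * delta (for the decider's ratio c) and False whenever it exceeds
    delta. Assuming decide(lo) is False and decide(hi) is True, the
    returned value overestimates the true distance by at most a factor
    slack, up to the decider's own ratio c."""
    while hi / lo > slack:
        mid = math.sqrt(lo * hi)  # bisect in log-space
        if decide(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Toy usage with an exact decider (c = 1) for a hidden distance of 3.7:
print(approx_optimize(lambda d: 3.7 <= d, lo=0.1, hi=100.0))  # ~3.7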

Cite as

Connor Colombe and Kyle Fox. Approximating the (Continuous) Fréchet Distance. In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 26:1-26:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{colombe_et_al:LIPIcs.SoCG.2021.26,
  author =	{Colombe, Connor and Fox, Kyle},
  title =	{{Approximating the (Continuous) Fr\'{e}chet Distance}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{26:1--26:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.26},
  URN =		{urn:nbn:de:0030-drops-138259},
  doi =		{10.4230/LIPIcs.SoCG.2021.26},
  annote =	{Keywords: Fr\'{e}chet distance, approximation algorithm, approximate decision procedure}
}
Document
When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fréchet Distance Under Translation

Authors: Karl Bringmann, Marvin Künnemann, and André Nusser

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
Consider the natural question of how to measure the similarity of curves in the plane by a quantity that is invariant under translations of the curves. Such a measure is justified whenever we aim to quantify the similarity of the curves' shapes rather than their positioning in the plane, e.g., to compare the similarity of handwritten characters. Perhaps the most natural such notion is the (discrete) Fréchet distance under translation. Unfortunately, the algorithmic literature on this problem yields a very pessimistic view: On polygonal curves with n vertices, the fastest algorithm runs in time 𝒪(n^{4.667}) and cannot be improved below n^{4-o(1)} unless the Strong Exponential Time Hypothesis fails. Can we still obtain an implementation that is efficient on realistic datasets?

Spurred by the surprising performance of recent implementations for the Fréchet distance, we perform algorithm engineering for the Fréchet distance under translation. Our solution combines fast, but inexact tools from continuous optimization (specifically, branch-and-bound algorithms for global Lipschitz optimization) with exact, but expensive algorithms from computational geometry (specifically, problem-specific algorithms based on an arrangement construction). We combine these two ingredients to obtain an exact decision algorithm for the Fréchet distance under translation. For the related task of computing the distance value up to a desired precision, we engineer and compare different methods. On a benchmark set involving handwritten characters and route trajectories, our implementation answers a typical query for either task in the range of a few milliseconds up to a second on standard desktop hardware.

We believe that our implementation will enable, for the first time, the use of the Fréchet distance under translation in applications, whereas previous algorithmic approaches would have been computationally infeasible. Furthermore, we hope that our combination of continuous optimization and computational geometry will inspire similar approaches for further algorithmic questions.
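
The continuous-optimization ingredient rests on a simple observation: viewed as a function of the translation, the Fréchet distance is 1-Lipschitz, so evaluating it at a box's center lower-bounds its value on the entire box. A toy branch-and-bound sketch under that observation (interfaces ours; the actual implementation couples such bounds with exact geometric decision algorithms):

import heapq, math

def lipschitz_bnb(f, box, eps):
    """Global Lipschitz optimization by branch-and-bound: if f is
    1-Lipschitz, then f(center) - radius lower-bounds f on the whole box,
    so boxes whose bound cannot beat the incumbent get pruned."""
    def center_radius(b):
        (x0, x1), (y0, y1) = b
        return ((x0 + x1) / 2, (y0 + y1) / 2), math.hypot(x1 - x0, y1 - y0) / 2

    c, r = center_radius(box)
    best = f(c)
    heap = [(best - r, box)]
    while heap:
        lb, ((x0, x1), (y0, y1)) = heapq.heappop(heap)
        if lb >= best - eps:
            return best  # no remaining box can improve by more than eps
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        for sub in (((x0, xm), (y0, ym)), ((xm, x1), (y0, ym)),
                    ((x0, xm), (ym, y1)), ((xm, x1), (ym, y1))):
            c, r = center_radius(sub)
            val = f(c)
            best = min(best, val)
            heapq.heappush(heap, (val - r, sub))
    return best

# Toy 1-Lipschitz objective standing in for the distance-under-translation map:
f = lambda t: math.hypot(t[0] - 0.3, t[1] - 0.7)
print(round(lipschitz_bnb(f, ((0.0, 1.0), (0.0, 1.0)), eps=1e-3), 3))  # ~0.0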

Cite as

Karl Bringmann, Marvin Künnemann, and André Nusser. When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fréchet Distance Under Translation. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2020.25,
  author =	{Bringmann, Karl and K\"{u}nnemann, Marvin and Nusser, Andr\'{e}},
  title =	{{When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fr\'{e}chet Distance Under Translation}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.25},
  URN =		{urn:nbn:de:0030-drops-128912},
  doi =		{10.4230/LIPIcs.ESA.2020.25},
  annote =	{Keywords: Fr\'{e}chet Distance, Computational Geometry, Continuous Optimization, Algorithm Engineering}
}
Document
Finding Small Satisfying Assignments Faster Than Brute Force: A Fine-Grained Perspective into Boolean Constraint Satisfaction

Authors: Marvin Künnemann and Dániel Marx

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
To study the question under which circumstances small solutions can be found faster than by exhaustive search (and by how much), we study the fine-grained complexity of Boolean constraint satisfaction with size constraint exactly k. More precisely, we aim to determine, for any finite constraint family, the optimal running time f(k)n^{g(k)} required to find satisfying assignments that set precisely k of the n variables to 1. Under central hardness assumptions on detecting cliques in graphs and 3-uniform hypergraphs, we give an almost tight characterization of g(k) into four regimes:
1) Brute force is essentially best-possible, i.e., g(k) = (1 ± o(1))k,
2) the best algorithms are as fast as current k-clique algorithms, i.e., g(k) = (ω/3 ± o(1))k,
3) the exponent has sublinear dependence on k with g(k) ∈ [Ω(∛k), O(√k)], or
4) the problem is fixed-parameter tractable, i.e., g(k) = O(1).
This yields a more fine-grained perspective than a previous FPT/W[1]-hardness dichotomy (Marx, Computational Complexity 2005). Our most interesting technical contribution is an f(k)n^{4√k}-time algorithm for SubsetSum with precedence constraints parameterized by the target k. In particular, the approach, based on generalizing a bound on the Frobenius coin problem to a setting with precedence constraints, might be of independent interest.
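
In the first regime, the baseline is plain brute force over all weight-k assignments; a sketch with a hypothetical constraint encoding (each constraint is a tuple of variable indices plus a predicate):

from itertools import combinations

def solve_size_k_csp(n, constraints, k):
    """Exhaustive search over all assignments setting exactly k of the n
    Boolean variables to 1: roughly n^k / k! candidates, matching the
    g(k) = (1 +/- o(1))k regime where brute force is essentially optimal."""
    for ones in combinations(range(n), k):
        chosen = set(ones)
        if all(pred(*(i in chosen for i in idx)) for idx, pred in constraints):
            return chosen
    return None

# Toy instance: (x0 or x1) and not (x2 and x3), with exactly two 1s.
constraints = [((0, 1), lambda a, b: a or b),
               ((2, 3), lambda a, b: not (a and b))]
print(solve_size_k_csp(4, constraints, k=2))  # {0, 1}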

Cite as

Marvin Künnemann and Dániel Marx. Finding Small Satisfying Assignments Faster Than Brute Force: A Fine-Grained Perspective into Boolean Constraint Satisfaction. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 27:1-27:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kunnemann_et_al:LIPIcs.CCC.2020.27,
  author =	{K\"{u}nnemann, Marvin and Marx, D\'{a}niel},
  title =	{{Finding Small Satisfying Assignments Faster Than Brute Force: A Fine-Grained Perspective into Boolean Constraint Satisfaction}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{27:1--27:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.27},
  URN =		{urn:nbn:de:0030-drops-125791},
  doi =		{10.4230/LIPIcs.CCC.2020.27},
  annote =	{Keywords: Fine-grained complexity theory, algorithmic classification theorem, multivariate algorithms and complexity, constraint satisfaction problems, satisfiability}
}
Document
On the Fine-Grained Complexity of Least Weight Subsequence in Multitrees and Bounded Treewidth DAGs

Authors: Jiawei Gao

Published in: LIPIcs, Volume 148, 14th International Symposium on Parameterized and Exact Computation (IPEC 2019)


Abstract
This paper introduces a new technique that generalizes previously known fine-grained reductions from linear structures to graphs. Least Weight Subsequence (LWS) [Hirschberg and Larmore, 1987] is a class of highly sequential optimization problems of the form F(j) = min_{i < j} [F(i) + c_{i,j}]. They can be solved in quadratic time using dynamic programming, but it is not known whether these problems can be solved faster than n^{2-o(1)} time. Surprisingly, each such problem is subquadratic-time reducible to a highly parallel, non-dynamic-programming problem [Marvin Künnemann et al., 2017]. In other words, if a "static" problem is faster than quadratic time, so is an LWS problem. For many instances of LWS, the sequential versions are equivalent to their static versions by subquadratic-time reductions.

The previous result applies to LWS on linear structures, and this paper extends it to LWS on paths in sparse graphs, the Least Weight Subpath (LWSP) problems. When the graph is a multitree (i.e., a DAG where any pair of vertices can have at most one path) or when the graph is a DAG whose underlying undirected graph has constant treewidth, we show that LWSP on such graphs is still subquadratically reducible to the corresponding static problems. For many instances, the graph versions are still equivalent to their static versions. Moreover, this paper shows that if we can decide a property of the form ∃x ∃y P(x,y) in subquadratic time, where P is a quickly checkable property on a pair of elements, then on these classes of graphs, we can also decide in subquadratic time whether there exists a pair x, y in the transitive closure of the graph that also satisfies P(x,y).
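
The LWS recurrence itself is a two-line quadratic dynamic program; a self-contained sketch of the baseline whose static equivalences the paper transfers from sequences to graphs:

def lws(n, cost):
    """Least Weight Subsequence: F(0) = 0 and
    F(j) = min over i < j of F(i) + c_{i,j}, in O(n^2) cost evaluations."""
    F = [0.0] * (n + 1)
    for j in range(1, n + 1):
        F[j] = min(F[i] + cost(i, j) for i in range(j))
    return F[n]

# Toy cost penalizing long jumps quadratically: optimum is four unit steps.
print(lws(4, lambda i, j: (j - i) ** 2))  # 4.0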

Cite as

Jiawei Gao. On the Fine-Grained Complexity of Least Weight Subsequence in Multitrees and Bounded Treewidth DAGs. In 14th International Symposium on Parameterized and Exact Computation (IPEC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 148, pp. 16:1-16:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{gao:LIPIcs.IPEC.2019.16,
  author =	{Gao, Jiawei},
  title =	{{On the Fine-Grained Complexity of Least Weight Subsequence in Multitrees and Bounded Treewidth DAGs}},
  booktitle =	{14th International Symposium on Parameterized and Exact Computation (IPEC 2019)},
  pages =	{16:1--16:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-129-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{148},
  editor =	{Jansen, Bart M. P. and Telle, Jan Arne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2019.16},
  URN =		{urn:nbn:de:0030-drops-114778},
  doi =		{10.4230/LIPIcs.IPEC.2019.16},
  annote =	{Keywords: fine-grained complexity, dynamic programming, graph reachability}
}
Document
Searching for Cryptogenography Upper Bounds via Sum of Square Programming

Authors: Dominik Scheder, Shuyang Tang, and Jiaheng Zhang

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
Cryptogenography is a secret-leaking game in which one of n players is holding a secret to be leaked. The n players engage in communication so as to (1) reveal the secret while (2) keeping the identity of the secret holder as obscure as possible. All communication is public, and no computational hardness assumptions are made, i.e., the setting is purely information theoretic. Brody, Jakobsen, Scheder, and Winkler [Joshua Brody et al., 2014] formally defined this problem, showed that it has an equivalent geometric characterization, and gave upper and lower bounds for the case in which the n players want to leak a single bit. Surprisingly, even the easiest case, where two players want to leak a secret consisting of a single bit, is not completely understood. Doerr and Künnemann [Benjamin Doerr and Marvin Künnemann, 2016] showed how to automatically search for good protocols using a computer, thus finding an improved protocol for the 1-bit two-player case. In this work, we show how the search for upper bounds (impossibility results) can be formulated as a Sum of Squares program. We implement this idea for the 1-bit two-player case and significantly improve the previous upper bound from 47/128 = 0.3671875 to 0.35183.
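
Formulating a bound search as a Sum of Squares program means searching for a positive semidefinite Gram matrix that certifies a polynomial inequality. The snippet below is a generic toy SOS feasibility program, assuming cvxpy with an SDP-capable solver (e.g. SCS) is installed; it is not the paper's actual program:

import cvxpy as cp

# Certify that p(x) = x^4 - 2x^2 + 1 is a sum of squares: find PSD Q with
# p(x) = z^T Q z for the monomial vector z = (1, x, x^2).
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                        # Gram matrix must be PSD
    Q[0, 0] == 1,                  # x^0 coefficient
    2 * Q[0, 1] == 0,              # x^1 coefficient
    2 * Q[0, 2] + Q[1, 1] == -2,   # x^2 coefficient
    2 * Q[1, 2] == 0,              # x^3 coefficient
    Q[2, 2] == 1,                  # x^4 coefficient
]
cp.Problem(cp.Minimize(0), constraints).solve()
print(Q.value)  # a Gram matrix of (x^2 - 1)^2, e.g. [[1,0,-1],[0,0,0],[-1,0,1]]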

Cite as

Dominik Scheder, Shuyang Tang, and Jiaheng Zhang. Searching for Cryptogenography Upper Bounds via Sum of Square Programming. In 30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 31:1-31:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{scheder_et_al:LIPIcs.ISAAC.2019.31,
  author =	{Scheder, Dominik and Tang, Shuyang and Zhang, Jiaheng},
  title =	{{Searching for Cryptogenography Upper Bounds via Sum of Square Programming}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{31:1--31:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.31},
  URN =		{urn:nbn:de:0030-drops-115276},
  doi =		{10.4230/LIPIcs.ISAAC.2019.31},
  annote =	{Keywords: Communication Complexity, Secret Leaking, Sum of Squares Programming}
}
Document
A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties

Authors: Karl Bringmann, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
An important class of problems in logics and database theory is given by fixing a first-order property ψ over a relational structure, and considering the model-checking problem for ψ. Recently, Gao, Impagliazzo, Kolokolova, and Williams (SODA 2017) identified this class as fundamental for the theory of fine-grained complexity in P, by showing that the (Sparse) Orthogonal Vectors problem is complete for this class under fine-grained reductions. This raises the question whether fine-grained complexity can yield a precise understanding of all first-order model-checking problems. Specifically, can we determine, for any fixed first-order property ψ, the exponent of the optimal running time O(m^{c_ψ}), where m denotes the number of tuples in the relational structure?

Towards answering this question, in this work we give a dichotomy for the class of ∃^k∀-quantified graph properties. For every such property ψ, we either give a polynomial-time improvement over the baseline O(m^k)-time algorithm or show that it requires time m^{k-o(1)} under the hypothesis that MAX-3-SAT has no O((2-ε)^n)-time algorithm. More precisely, we define a hardness parameter h = H(ψ) such that ψ can be decided in time O(m^{k-ε}) if h ≤ 2 and requires time m^{k-o(1)} for h ≥ 3 unless the h-uniform HyperClique hypothesis fails. This unveils a natural hardness hierarchy within first-order properties: for any h ≥ 3, we show that there exists an ∃^k∀-quantified graph property ψ with hardness H(ψ) = h that is solvable in time O(m^{k-ε}) if and only if the h-uniform HyperClique hypothesis fails. Finally, we give more precise upper and lower bounds for an exemplary class of formulas with k = 3 and extend our classification to a counting dichotomy.
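
The baseline the dichotomy measures against is exhaustive model checking; for a k = 2 example on a graph given by its adjacency predicate, a sketch (encoding ours, illustrative only):

from itertools import product

def exists2_forall(n, edge, psi):
    """Baseline model checking of an exists^2-forall graph property:
    is there a pair (x1, x2) with psi(edge, x1, x2, y) for all y?
    Exhaustive search over all triples, i.e. O(n^3) adjacency queries."""
    return any(all(psi(edge, x1, x2, y) for y in range(n))
               for x1, x2 in product(range(n), repeat=2))

# Toy property "dominating pair": every y is one of, or adjacent to, x1 or x2.
edges = {(0, 1), (1, 0), (0, 2), (2, 0)}
edge = lambda u, v: (u, v) in edges
psi = lambda E, x1, x2, y: y in (x1, x2) or E(x1, y) or E(x2, y)
print(exists2_forall(3, edge, psi))  # True (take x1 = 0, x2 = 1)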

Cite as

Karl Bringmann, Nick Fischer, and Marvin Künnemann. A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 31:1-31:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.CCC.2019.31,
  author =	{Bringmann, Karl and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{31:1--31:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.31},
  URN =		{urn:nbn:de:0030-drops-108533},
  doi =		{10.4230/LIPIcs.CCC.2019.31},
  annote =	{Keywords: Fine-grained Complexity, Hardness in P, Hyperclique Conjecture, Constrained Triangle Detection}
}
Document
Track A: Algorithms, Complexity and Games
Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms

Authors: Kyriakos Axiotis and Christos Tzamos

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
One of the most fundamental problems in Computer Science is the Knapsack problem. Given a set of n items with different weights and values, it asks to pick the most valuable subset whose total weight is below a capacity threshold T. Despite its wide applicability in various areas in Computer Science, Operations Research, and Finance, the best known running time for the problem is O(T n). The main result of our work is an improved algorithm running in time O(T D), where D is the number of distinct weights. Previously, faster runtimes for Knapsack were only possible when both weights and values are bounded by M and V respectively, running in time O(nMV) [Pisinger, 1999]. In comparison, our algorithm implies a bound of O(n M²) without any dependence on V, or O(n V²) without any dependence on M. Additionally, for the unbounded Knapsack problem, we provide an algorithm running in time O(M²) or O(V²). Both our algorithms match recent conditional lower bounds shown for the Knapsack problem [Marek Cygan et al., 2017; Marvin Künnemann et al., 2017].

We also initiate a systematic study of general capacitated dynamic programming, of which Knapsack is a core problem. This problem asks to compute the maximum weight path of length k in an edge- or node-weighted directed acyclic graph. In a graph with m edges, these problems are solvable by dynamic programming in time O(k m), and we explore under which conditions the dependence on k can be eliminated. We identify large classes of graphs where this is possible and apply our results to obtain linear time algorithms for the problem of k-sparse Δ-separated sequences. The main technical innovation behind our results is identifying and exploiting concavity that appears in relaxations and subproblems of the tasks we consider.
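
For contrast with the O(T D) result, the standard O(T n) dynamic program processes the items one at a time; a compact sketch (per the abstract, the improvement instead handles all items of each distinct weight together, exploiting concavity):

def knapsack(items, T):
    """Classic O(T n) dynamic program for 0/1 Knapsack. best[c] is the most
    valuable subset of the processed items with total weight at most c."""
    best = [0] * (T + 1)
    for weight, value in items:
        for c in range(T, weight - 1, -1):  # descending: use each item once
            best[c] = max(best[c], best[c - weight] + value)
    return best[T]

print(knapsack([(3, 30), (4, 50), (2, 15)], T=6))  # 65 (weights 4 and 2)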

Cite as

Kyriakos Axiotis and Christos Tzamos. Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 19:1-19:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{axiotis_et_al:LIPIcs.ICALP.2019.19,
  author =	{Axiotis, Kyriakos and Tzamos, Christos},
  title =	{{Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{19:1--19:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.19},
  URN =		{urn:nbn:de:0030-drops-105952},
  doi =		{10.4230/LIPIcs.ICALP.2019.19},
  annote =	{Keywords: Knapsack, Fine-Grained Complexity, Dynamic Programming}
}
  • Refine by Author
  • 16 Künnemann, Marvin
  • 8 Bringmann, Karl
  • 4 Nusser, André
  • 3 Fischer, Nick
  • 2 Cassis, Alejandro

  • Refine by Classification
  • 6 Theory of computation → Problems, reductions and completeness
  • 5 Theory of computation → Computational geometry
  • 4 Theory of computation → Design and analysis of algorithms
  • 2 Theory of computation → Approximation algorithms analysis
  • 1 Theory of computation → Algorithm design techniques

  • Refine by Keyword
  • 3 Fine-Grained Complexity
  • 3 Fine-grained complexity theory
  • 2 Fréchet distance
  • 2 Hardness in P
  • 2 Hardness of Approximation in P

  • Refine by Type
  • 20 document

  • Refine by Publication Year
  • 5 2019
  • 3 2021
  • 3 2022
  • 2 2018
  • 2 2020
