
**Published in:** LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)

We present a pseudopolynomial-time algorithm for the Knapsack problem that has running time Õ(n + t√{p_{max}}), where n is the number of items, t is the knapsack capacity, and p_{max} is the maximum item profit. This improves over the Õ(n + t p_{max})-time algorithm based on the convolution and prediction technique by Bateni et al. (STOC 2018). Moreover, we give some evidence, based on a strengthening of the Min-Plus Convolution Hypothesis, that our running time might be optimal.
Our algorithm uses two new technical tools, which might be of independent interest. First, we generalize the Õ(n^{1.5})-time algorithm for bounded monotone min-plus convolution by Chi et al. (STOC 2022) to the rectangular case where the range of entries can be different from the sequence length. Second, we give a reduction from general knapsack instances to balanced instances, where all items have nearly the same profit-to-weight ratio, up to a constant factor.
Using these techniques, we can also obtain algorithms that run in time Õ(n + OPT√{w_{max}}), Õ(n + (nw_{max}p_{max})^{1/3}t^{2/3}), and Õ(n + (nw_{max}p_{max})^{1/3} OPT^{2/3}), where OPT is the optimal total profit and w_{max} is the maximum item weight.
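
For reference, a minimal Python sketch of the textbook dynamic program over capacities, i.e. the Õ(n·t)-type baseline that this line of work improves upon; it is an illustration of the parameters n, t, and the profits only, not the algorithm of the paper, and the function name is ours.

```python
def knapsack_by_capacity(weights, profits, t):
    """Textbook O(n*t) dynamic program for 0-1 Knapsack.

    best[c] = maximum total profit achievable with total weight at most c,
    over the items processed so far.
    """
    best = [0] * (t + 1)
    for w, p in zip(weights, profits):
        # iterate capacities downwards so that each item is used at most once
        for c in range(t, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[t]

# Example: items (weight, profit) = (3, 4), (2, 3), (4, 5) with capacity t = 5
assert knapsack_by_capacity([3, 2, 4], [4, 3, 5], 5) == 7
```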

Karl Bringmann, Anita Dürr, and Adam Polak. Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bringmann_et_al:LIPIcs.ESA.2024.33, author = {Bringmann, Karl and D\"{u}rr, Anita and Polak, Adam}, title = {{Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing}}, booktitle = {32nd Annual European Symposium on Algorithms (ESA 2024)}, pages = {33:1--33:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-338-6}, ISSN = {1868-8969}, year = {2024}, volume = {308}, editor = {Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.33}, URN = {urn:nbn:de:0030-drops-211047}, doi = {10.4230/LIPIcs.ESA.2024.33}, annote = {Keywords: 0-1-Knapsack problem, bounded monotone min-plus convolution, fine-grained complexity} }


**Published in:** LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)

Since an increasing number of problems in P have conditional lower bounds against exact algorithms, it is natural to study which of these problems can be efficiently approximated. Often, however, there are many potential ways to formulate an approximate version of a problem. We ask: How sensitive is the (in-)approximability of a problem in P to its precise formulation?
To this end, we perform a case study using the popular 3SUM problem. Its many equivalent formulations give rise to a wide range of potential approximate relaxations. Specifically, to obtain an approximate relaxation in our framework, one can choose among the options: (a) 3SUM or Convolution 3SUM, (b) monochromatic or trichromatic, (c) allowing under-approximation, over-approximation, or both, (d) approximate decision or approximate optimization, (e) single output or multiple outputs, and (f) implicit or explicit target (given as input).
We show general reduction principles between some variants and find that we can classify the remaining problems (over polynomially bounded positive integers) into three regimes:
1) (1+ε)-approximable in near-linear time Õ(n + 1/ε),
2) (1+ε)-approximable in near-quadratic time Õ(n/ε) or Õ(n+1/ε²), or
3) non-approximable, i.e., requiring time n^{2±o(1)} regardless of the approximation factor.
In each of these three regimes, we provide matching upper and conditional lower bounds.
To prove our results, we establish two results that may be of independent interest: Over polynomially bounded integers, we show subquadratic equivalence of (min,+)-convolution and polyhedral 3SUM, and we prove equivalence of the Strong 3SUM conjecture and the Strong Convolution 3SUM conjecture.
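
For concreteness, here is a naive quadratic sketch of the two exact base problems whose relaxations are classified above. The precise conventions (e.g. a + b = c versus a + b + c = 0, and whether indices may repeat) differ between formulations, so this is one common choice rather than the paper's exact definitions; function names are ours.

```python
def three_sum(A):
    """Naive O(n^2) 3SUM: do there exist a, b, c in A with a + b = c?"""
    values = set(A)
    return any(a + b in values for i, a in enumerate(A) for b in A[i:])

def convolution_three_sum(A):
    """Naive O(n^2) Convolution 3SUM: are there indices i, j with A[i] + A[j] = A[i + j]?"""
    n = len(A)
    return any(A[i] + A[j] == A[i + j] for i in range(n) for j in range(n - i))

assert three_sum([1, 4, 9, 5]) is True               # 1 + 4 = 5
assert convolution_three_sum([0, 1, 2, 3]) is True   # A[1] + A[2] = A[3]
```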

Karl Bringmann, Ahmed Ghazy, and Marvin Künnemann. Exploring the Approximability Landscape of 3SUM. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bringmann_et_al:LIPIcs.ESA.2024.34, author = {Bringmann, Karl and Ghazy, Ahmed and K\"{u}nnemann, Marvin}, title = {{Exploring the Approximability Landscape of 3SUM}}, booktitle = {32nd Annual European Symposium on Algorithms (ESA 2024)}, pages = {34:1--34:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-338-6}, ISSN = {1868-8969}, year = {2024}, volume = {308}, editor = {Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.34}, URN = {urn:nbn:de:0030-drops-211057}, doi = {10.4230/LIPIcs.ESA.2024.34}, annote = {Keywords: Fine-grained Complexity, Conditional Lower Bounds, Approximation Schemes, Min-Plus Convolution} }


Complete Volume

**Published in:** LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)

LIPIcs, Volume 297, ICALP 2024, Complete Volume

51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 1-2938, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@Proceedings{bringmann_et_al:LIPIcs.ICALP.2024, title = {{LIPIcs, Volume 297, ICALP 2024, Complete Volume}}, booktitle = {51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)}, pages = {1--2938}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-322-5}, ISSN = {1868-8969}, year = {2024}, volume = {297}, editor = {Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024}, URN = {urn:nbn:de:0030-drops-201424}, doi = {10.4230/LIPIcs.ICALP.2024}, annote = {Keywords: LIPIcs, Volume 297, ICALP 2024, Complete Volume} }


Front Matter

**Published in:** LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)

Front Matter, Table of Contents, Preface, Conference Organization

51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 0:i-0:xl, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bringmann_et_al:LIPIcs.ICALP.2024.0, author = {Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola}, title = {{Front Matter, Table of Contents, Preface, Conference Organization}}, booktitle = {51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)}, pages = {0:i--0:xl}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-322-5}, ISSN = {1868-8969}, year = {2024}, volume = {297}, editor = {Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.0}, URN = {urn:nbn:de:0030-drops-201437}, doi = {10.4230/LIPIcs.ICALP.2024.0}, annote = {Keywords: Front Matter, Table of Contents, Preface, Conference Organization} }


**Published in:** LIPIcs, Volume 293, 40th International Symposium on Computational Geometry (SoCG 2024)

The Earth Mover’s Distance is a popular similarity measure in several branches of computer science. It measures the minimum total edge length of a perfect matching between two point sets. The Earth Mover’s Distance under Translation (EMDuT) is a translation-invariant version thereof. It minimizes the Earth Mover’s Distance over all translations of one point set.
For EMDuT in ℝ¹, we present an 𝒪̃(n²)-time algorithm. We also show that this algorithm is nearly optimal by presenting a matching conditional lower bound based on the Orthogonal Vectors Hypothesis. For EMDuT in ℝ^d, we present an 𝒪̃(n^{2d+2})-time algorithm for the L₁ and L_∞ metrics. We show that this dependence on d is asymptotically tight, as an n^{o(d)}-time algorithm for L₁ or L_∞ would contradict the Exponential Time Hypothesis (ETH). Prior to our work, only approximation algorithms were known for these problems.
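
For intuition, a small Python sketch of the fixed-translation problem on the line: by a standard exchange argument, in ℝ¹ an optimal matching of the smaller set into the larger one can be taken order-preserving, giving a simple O(nm) dynamic program. EMDuT additionally minimizes over all translations of one set, which this sketch does not attempt; it illustrates the underlying distance under assumptions of our own (finite multisets, absolute-value metric), not the paper's algorithm.

```python
def emd_1d_fixed(A, B):
    """Minimum total length of a matching that matches every point of the smaller
    set to a distinct point of the larger set, on the real line, at a fixed translation.

    An optimal matching in R^1 can be taken order-preserving, so a prefix-based
    O(|A| * |B|) dynamic program suffices.
    """
    A, B = sorted(A), sorted(B)
    if len(B) > len(A):
        A, B = B, A
    n, m = len(A), len(B)
    INF = float("inf")
    # dp[j] = cost of matching the first j points of B into the processed prefix of A
    dp = [0.0] + [INF] * m
    for i in range(1, n + 1):
        new = dp[:]                        # option: leave A[i-1] unmatched
        for j in range(1, m + 1):
            new[j] = min(new[j], dp[j - 1] + abs(A[i - 1] - B[j - 1]))
        dp = new
    return dp[m]

assert emd_1d_fixed([0, 2, 5], [1, 6]) == 2   # match 1 -> 2 and 6 -> 5
```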

Karl Bringmann, Frank Staals, Karol Węgrzycki, and Geert van Wordragen. Fine-Grained Complexity of Earth Mover’s Distance Under Translation. In 40th International Symposium on Computational Geometry (SoCG 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 293, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bringmann_et_al:LIPIcs.SoCG.2024.25, author = {Bringmann, Karl and Staals, Frank and W\k{e}grzycki, Karol and van Wordragen, Geert}, title = {{Fine-Grained Complexity of Earth Mover’s Distance Under Translation}}, booktitle = {40th International Symposium on Computational Geometry (SoCG 2024)}, pages = {25:1--25:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-316-4}, ISSN = {1868-8969}, year = {2024}, volume = {293}, editor = {Mulzer, Wolfgang and Phillips, Jeff M.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2024.25}, URN = {urn:nbn:de:0030-drops-199706}, doi = {10.4230/LIPIcs.SoCG.2024.25}, annote = {Keywords: Earth Mover’s Distance, Earth Mover’s Distance under Translation, Fine-Grained Complexity, Maximum Weight Bipartite Matching} }


**Published in:** LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)

We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets.
We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning detection of an s-t-walk with a prescribed color sequence in a given edge- or node-colored graph. For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications:
- It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis.
- It gives a tight (n+nm^{1/3})^{1-o(1)} lower bound for the Word Break problem on strings of length n and dictionaries of total size m.
- It implies the popular OMv hypothesis. Since the NFA acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems. Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a refutation of the NFA Acceptance hypothesis may lead the way to attacking the current barriers observed for Context-Free Language Reachability, the Word Break problem and the growing list of dynamic problems proven hard under the OMv hypothesis.
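
For reference, the textbook NFA Acceptance algorithm referred to above is the state-set simulation sketched below (ε-transitions omitted for brevity); each of the |word| steps touches every transition at most once, giving the dense baseline the hypothesis conjectures to be essentially optimal. This is an illustration only; names and the example automaton are ours.

```python
def nfa_accepts(transitions, start_states, accept_states, word):
    """Textbook NFA Acceptance: simulate the set of currently reachable states.

    `transitions` maps (state, symbol) -> set of successor states; the running
    time is O(|word| * number of transitions).
    """
    current = set(start_states)
    for symbol in word:
        current = {q for p in current for q in transitions.get((p, symbol), ())}
        if not current:
            return False
    return bool(current & set(accept_states))

# Tiny example over the alphabet {a, b}: accept exactly the words containing "ab".
delta = {
    (0, "a"): {0, 1}, (0, "b"): {0},
    (1, "b"): {2},
    (2, "a"): {2}, (2, "b"): {2},
}
assert nfa_accepts(delta, {0}, {2}, "bbab") is True
assert nfa_accepts(delta, {0}, {2}, "ba") is False
```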

Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen. The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 22:1-22:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{bringmann_et_al:LIPIcs.ITCS.2024.22, author = {Bringmann, Karl and Gr{\o}nlund, Allan and K\"{u}nnemann, Marvin and Larsen, Kasper Green}, title = {{The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds}}, booktitle = {15th Innovations in Theoretical Computer Science Conference (ITCS 2024)}, pages = {22:1--22:25}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-309-6}, ISSN = {1868-8969}, year = {2024}, volume = {287}, editor = {Guruswami, Venkatesan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.22}, URN = {urn:nbn:de:0030-drops-195500}, doi = {10.4230/LIPIcs.ITCS.2024.22}, annote = {Keywords: Fine-grained complexity theory, non-deterministic finite automata} }


**Published in:** LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)

We revisit the classic 0-1-Knapsack problem, in which we are given n items with their weights and profits as well as a weight budget W, and the goal is to find a subset of items of total weight at most W that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item, p_max, and the largest weight of any item, w_max. Our main results are algorithms for 0-1-Knapsack running in time Õ(n w_max p_max^{2/3}) and Õ(n p_max w_max^{2/3}), improving upon an algorithm running in time O(n p_max w_max) by Pisinger [J. Algorithms '99]. In the regime p_max ≈ w_max ≈ n (and W ≈ OPT ≈ n²), our algorithms are the first to break the cubic barrier n³.
To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function f : [n] → ℤ is Δ-near convex, for Δ ≥ 1, if there is a convex function f̆ such that f̆(i) ≤ f(i) ≤ f̆(i) + Δ for every i. We design an algorithm computing the min-plus convolution of two Δ-near convex functions in time Õ(nΔ). This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin, and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
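
For concreteness, a naive quadratic sketch of min-plus convolution as defined above; the paper's contribution is an Õ(nΔ)-time algorithm for Δ-near convex inputs, which this sketch does not implement.

```python
def min_plus_convolution(f, g):
    """Naive O(n^2) min-plus convolution: h[k] = min_{i+j=k} f[i] + g[j]."""
    n, m = len(f), len(g)
    return [min(f[i] + g[k - i] for i in range(max(0, k - m + 1), min(k, n - 1) + 1))
            for k in range(n + m - 1)]

# h[2] = min(f[0]+g[2], f[1]+g[1], f[2]+g[0]) = min(0+5, 1+2, 3+0) = 3
assert min_plus_convolution([0, 1, 3], [0, 2, 5]) == [0, 1, 3, 5, 8]
```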

Karl Bringmann and Alejandro Cassis. Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{bringmann_et_al:LIPIcs.ESA.2023.24, author = {Bringmann, Karl and Cassis, Alejandro}, title = {{Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution}}, booktitle = {31st Annual European Symposium on Algorithms (ESA 2023)}, pages = {24:1--24:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-295-2}, ISSN = {1868-8969}, year = {2023}, volume = {274}, editor = {G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.24}, URN = {urn:nbn:de:0030-drops-186776}, doi = {10.4230/LIPIcs.ESA.2023.24}, annote = {Keywords: Knapsack, Fine-Grained Complexity, Min-Plus Convolution} }


Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)

An extensive research effort targets optimal (in)approximability results for various NP-hard optimization problems. Notably, the works of (Creignou'95) as well as (Khanna, Sudan, Trevisan, Williamson'00) establish a tight characterization of a large subclass of MaxSNP, namely Boolean MaxCSPs and further variants, in terms of their polynomial-time approximability. Can we obtain similarly encompassing characterizations for classes of polynomial-time optimization problems?
To this end, we initiate the systematic study of a recently introduced polynomial-time analogue of MaxSNP, which includes a large number of well-studied problems (including Nearest and Furthest Neighbor in the Hamming metric, Maximum Inner Product, optimization variants of k-XOR, and Maximum k-Cover). Specifically, for each k, MaxSP_k denotes the class of O(m^k)-time problems of the form max_{x_1,…,x_k} #{y : ϕ(x_1,…,x_k,y)}, where ϕ is a quantifier-free first-order property and m denotes the size of the relational structure. Assuming central hypotheses about clique detection in hypergraphs and exact Max-3-SAT, we show that for any MaxSP_k problem definable by a quantifier-free m-edge graph formula ϕ, the best possible approximation guarantee in faster-than-exhaustive-search time O(m^{k-δ}) falls into one of four categories:
- optimizable to exactness in time O(m^{k-δ}),
- an (inefficient) approximation scheme, i.e., a (1+ε)-approximation in time O(m^{k-f(ε)}),
- a (fixed) constant-factor approximation in time O(m^{k-δ}), or
- an nm^ε-approximation in time O(m^{k-f(ε)}).
We obtain an almost complete characterization of these regimes, for MaxSP_k as well as for an analogously defined minimization class MinSP_k. As our main technical contribution, we show how to rule out the existence of approximation schemes for a large class of problems admitting constant-factor approximations, under a hypothesis for exact Sparse Max-3-SAT algorithms posed by (Alman, Vassilevska Williams'20). As general trends for the problems we consider, we observe: (1) Exact optimizability has a simple algebraic characterization, (2) only a few maximization problems do not admit a constant-factor approximation; these do not even have a subpolynomial-factor approximation, and (3) constant-factor approximation of minimization problems is equivalent to deciding whether the optimum is equal to 0.
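
As a concrete instance of the MaxSP_k template above (with k = 2), Maximum Inner Product over Boolean vectors can be read as max_{x₁,x₂} #{y : x₁[y] = x₂[y] = 1}. The exhaustive-search baseline then looks as follows; this is an illustration of the problem format under our own toy encoding, not the paper's relational-structure formulation.

```python
def max_inner_product(vectors):
    """Brute-force Maximum Inner Product over Boolean vectors.

    Exhaustive search over all pairs of vectors, counting coordinates y where
    both vectors are 1 -- the baseline against which the dichotomy is measured.
    """
    best = 0
    for i, a in enumerate(vectors):
        for b in vectors[i + 1:]:
            best = max(best, sum(u & v for u, v in zip(a, b)))
    return best

assert max_inner_product([(1, 0, 1, 1), (1, 1, 0, 1), (0, 1, 1, 0)]) == 2
```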

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A Structural Investigation of the Approximability of Polynomial-Time Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 30:1-30:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.30, author = {Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin}, title = {{A Structural Investigation of the Approximability of Polynomial-Time Problems}}, booktitle = {49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)}, pages = {30:1--30:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-235-8}, ISSN = {1868-8969}, year = {2022}, volume = {229}, editor = {Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.30}, URN = {urn:nbn:de:0030-drops-163713}, doi = {10.4230/LIPIcs.ICALP.2022.30}, annote = {Keywords: Classification Theorems, Hardness of Approximation in P, Fine-grained Complexity Theory} }


Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)

We present new exact and approximation algorithms for 0-1-Knapsack and Unbounded Knapsack:
- Exact Algorithm for 0-1-Knapsack: 0-1-Knapsack has known algorithms running in time Õ(n + min{n ⋅ OPT, n ⋅ W, OPT², W²}) [Bellman '57], where n is the number of items, W is the weight budget, and OPT is the optimal profit. We present an algorithm running in time Õ(n + (W + OPT)^{1.5}). This improves the running time when n, W, and OPT are roughly equal.
- Exact Algorithm for Unbounded Knapsack: Unbounded Knapsack has known algorithms running in time Õ(n + min{n ⋅ p_max, n ⋅ w_max, p_max², w_max²}) [Axiotis, Tzamos '19, Jansen, Rohwedder '19, Chan, He '22], where n is the number of items, w_max is the largest weight of any item, and p_max is the largest profit of any item. We present an algorithm running in time Õ(n + (p_max + w_max)^{1.5}), giving a similar improvement as for 0-1-Knapsack.
- Approximating Unbounded Knapsack with Resource Augmentation: Unbounded Knapsack has a known FPTAS with running time Õ(min{n/ε, n + 1/ε²}) [Jansen, Kraft '18]. We study weak approximation algorithms, which approximate the optimal profit but are allowed to overshoot the weight constraint (i.e. resource augmentation). We present the first approximation scheme for Unbounded Knapsack in this setting, achieving running time Õ(n + 1/ε^{1.5}). Along the way, we also give a simpler FPTAS with lower order improvement in the standard setting.
For all of these problem settings the previously known results had matching conditional lower bounds. We avoid these lower bounds in the approximation setting by allowing resource augmentation, and in the exact setting by analyzing the time complexity in terms of weight and profit parameters (instead of only weight or only profit parameters).
Our algorithms can be seen as reductions to Min-Plus-Convolution on monotone sequences with bounded entries. These structured instances of Min-Plus-Convolution can be solved in time O(n^{1.5}) [Chi, Duan, Xie, Zhang '22] (in contrast to the conjectured n^{2-o(1)} lower bound for the general case). We complement our results by showing reductions in the opposite direction, that is, we show that achieving our results with the constant 1.5 replaced by any constant < 2 implies subquadratic algorithms for Min-Plus-Convolution on monotone sequences with bounded entries.

Karl Bringmann and Alejandro Cassis. Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 31:1-31:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.31, author = {Bringmann, Karl and Cassis, Alejandro}, title = {{Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution}}, booktitle = {49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)}, pages = {31:1--31:21}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-235-8}, ISSN = {1868-8969}, year = {2022}, volume = {229}, editor = {Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.31}, URN = {urn:nbn:de:0030-drops-163727}, doi = {10.4230/LIPIcs.ICALP.2022.31}, annote = {Keywords: Knapsack, Approximation Schemes, Fine-Grained Complexity, Min-Plus Convolution} }


Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)

We study the problem of approximating the edit distance of two strings in sublinear time, in a setting where one or both string(s) are preprocessed, as initiated by Goldenberg, Rubinstein, Saha (STOC '20). Specifically, in the (k, K)-gap edit distance problem, the goal is to distinguish whether the edit distance of two strings is at most k or at least K. We obtain the following results:
- After preprocessing one string in time n^{1+o(1)}, we can solve (k, k ⋅ n^{o(1)})-gap edit distance in time (n/k + k) ⋅ n^{o(1)}.
- After preprocessing both strings separately in time n^{1+o(1)}, we can solve (k, k ⋅ n^{o(1)})-gap edit distance in time k ⋅ n^{o(1)}.
Both results improve upon the previous state of the art with respect to either the gap, the query time, or the preprocessing time.
Our algorithms build on the framework by Andoni, Krauthgamer and Onak (FOCS '10) and the recent sublinear-time algorithm by Bringmann, Cassis, Fischer and Nakos (STOC '22). We replace many complicated parts in their algorithm by faster and simpler solutions which exploit the preprocessing.
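
For context, the exact (non-sublinear) baseline for this gap problem is the classic banded dynamic program that decides whether the edit distance is at most k in O(nk) time; a minimal Python sketch, not the sublinear-time algorithm of the paper, and the function name is ours.

```python
def edit_distance_at_most_k(x, y, k):
    """Decide whether the edit distance of x and y is at most k.

    Classic banded dynamic program: a cell (i, j) with |i - j| > k cannot lie on
    an alignment of cost at most k, so only a band of width 2k + 1 is filled in,
    giving O((|x| + |y|) * k) time.
    """
    n, m = len(x), len(y)
    if abs(n - m) > k:
        return False
    INF = k + 1                                   # any value above k may be capped
    prev = {j: j for j in range(min(m, k) + 1)}   # DP row i = 0
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            if j == 0:
                best = i
            else:
                best = min(cur.get(j - 1, INF) + 1,                        # insert y[j-1]
                           prev.get(j - 1, INF) + (x[i - 1] != y[j - 1]))  # substitute / match
            best = min(best, prev.get(j, INF) + 1)                         # delete x[i-1]
            cur[j] = min(best, INF)
        prev = cur
    return prev.get(m, INF) <= k

assert edit_distance_at_most_k("kitten", "sitting", 3) is True   # edit distance is exactly 3
assert edit_distance_at_most_k("kitten", "sitting", 2) is False
```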

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos. Improved Sublinear-Time Edit Distance for Preprocessed Strings. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.32, author = {Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and Nakos, Vasileios}, title = {{Improved Sublinear-Time Edit Distance for Preprocessed Strings}}, booktitle = {49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)}, pages = {32:1--32:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-235-8}, ISSN = {1868-8969}, year = {2022}, volume = {229}, editor = {Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.32}, URN = {urn:nbn:de:0030-drops-163734}, doi = {10.4230/LIPIcs.ICALP.2022.32}, annote = {Keywords: Edit Distance, Property Testing, Preprocessing, Precision Sampling} }


**Published in:** LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)

We consider the unlabeled motion-planning problem of m unit-disc robots moving in a simple polygonal workspace of n edges. The goal is to find a motion plan that moves the robots to a given set of m target positions. For the unlabeled variant, it does not matter which robot reaches which target position as long as all target positions are occupied in the end.
If the workspace has narrow passages such that the robots cannot fit through them, then the free configuration space, representing all possible unobstructed positions of the robots, will consist of multiple connected components. Even if in each component of the free space the number of targets matches the number of start positions, the motion-planning problem does not always have a solution when the robots and their targets are positioned very densely. In this paper, we prove tight bounds on how much separation between start and target positions is necessary to always guarantee a solution. Moreover, we describe an algorithm that always finds a solution in time O(n log n + mn + m²) if the separation bounds are met. Specifically, we prove that the following separation is sufficient: any two start positions are at least distance 4 apart, any two target positions are at least distance 4 apart, and any start position and any target position are at least distance 3 apart. We further show that when the free space consists of a single connected component, the separation between start and target positions is not necessary.

Bahareh Banyassady, Mark de Berg, Karl Bringmann, Kevin Buchin, Henning Fernau, Dan Halperin, Irina Kostitsyna, Yoshio Okamoto, and Stijn Slot. Unlabeled Multi-Robot Motion Planning with Tighter Separation Bounds. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{banyassady_et_al:LIPIcs.SoCG.2022.12, author = {Banyassady, Bahareh and de Berg, Mark and Bringmann, Karl and Buchin, Kevin and Fernau, Henning and Halperin, Dan and Kostitsyna, Irina and Okamoto, Yoshio and Slot, Stijn}, title = {{Unlabeled Multi-Robot Motion Planning with Tighter Separation Bounds}}, booktitle = {38th International Symposium on Computational Geometry (SoCG 2022)}, pages = {12:1--12:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-227-3}, ISSN = {1868-8969}, year = {2022}, volume = {224}, editor = {Goaoc, Xavier and Kerber, Michael}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.12}, URN = {urn:nbn:de:0030-drops-160203}, doi = {10.4230/LIPIcs.SoCG.2022.12}, annote = {Keywords: motion planning, computational geometry, simple polygon} }


**Published in:** LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)

The Dynamic Time Warping (DTW) distance is a popular measure of similarity for a variety of sequence data. For comparing polygonal curves π, σ in ℝ^d, it provides a robust, outlier-insensitive alternative to the Fréchet distance. However, like the Fréchet distance, the DTW distance is not invariant under translations. Can we efficiently optimize the DTW distance of π and σ under arbitrary translations, to compare the curves' shape irrespective of their absolute location?
There are surprisingly few works in this direction, which may be due to its computational intricacy: For the Euclidean norm, this problem contains as a special case the geometric median problem, which provably admits no exact algebraic algorithm (that is, no algorithm using only addition, multiplication, and k-th roots). We thus investigate exact algorithms for non-Euclidean norms as well as approximation algorithms for the Euclidean norm.
For the L₁ norm in ℝ^d, we provide an 𝒪(n^{2(d+1)})-time algorithm, i.e., an exact polynomial-time algorithm for constant d. Here and below, n bounds the curves' complexities. For the Euclidean norm in ℝ², we show that a simple problem-specific insight leads to a (1+ε)-approximation in time 𝒪(n³/ε²). We then show how to obtain a subcubic 𝒪̃(n^{2.5}/ε²)-time algorithm using significant new ideas; this running time comes close to the well-known quadratic-time barrier for computing DTW for a fixed translation. Technically, the algorithm is obtained by speeding up repeated DTW distance estimations using a dynamic data structure for maintaining shortest paths in weighted planar digraphs. Crucially, we show how to traverse a candidate set of translations using space-filling curves in a way that incurs only a few updates to the data structure.
We hope that our results will facilitate the use of DTW under translation both in theory and practice, and inspire similar algorithmic approaches for related geometric optimization problems.
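
For reference, the quadratic-time baseline for a fixed translation is the classic DTW dynamic program below (Euclidean ground distance assumed); a minimal Python sketch, not the paper's translation-optimizing algorithm.

```python
import math

def dtw(P, Q):
    """Classic O(nm) dynamic program for the DTW distance of two point sequences.

    D[i][j] = cost of the best monotone alignment of P[:i] and Q[:j], where each
    aligned pair contributes its Euclidean distance; DTW under translation
    additionally minimizes this value over translations of one curve.
    """
    n, m = len(P), len(Q)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(P[i - 1], Q[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Two short curves in R^2; the second is the first shifted by (1, 0).
P = [(0, 0), (1, 0), (2, 0)]
Q = [(1, 0), (2, 0), (3, 0)]
# Warping lets interior points be reused, so only the two endpoints pay cost 1 each.
assert dtw(P, Q) == 2.0
```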

Karl Bringmann, Sándor Kisfaludi‑Bak, Marvin Künnemann, Dániel Marx, and André Nusser. Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 20:1-20:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.20, author = {Bringmann, Karl and Kisfaludi‑Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Marx, D\'{a}niel and Nusser, Andr\'{e}}, title = {{Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves}}, booktitle = {38th International Symposium on Computational Geometry (SoCG 2022)}, pages = {20:1--20:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-227-3}, ISSN = {1868-8969}, year = {2022}, volume = {224}, editor = {Goaoc, Xavier and Kerber, Michael}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.20}, URN = {urn:nbn:de:0030-drops-160287}, doi = {10.4230/LIPIcs.SoCG.2022.20}, annote = {Keywords: Dynamic Time Warping, Sequence Similarity Measures} }


**Published in:** LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)

We initiate the study of diameter computation in geometric intersection graphs from the fine-grained complexity perspective. A geometric intersection graph is a graph whose vertices correspond to some shapes in d-dimensional Euclidean space, such as balls, segments, or hypercubes, and whose edges correspond to pairs of intersecting shapes. The diameter of a graph is the largest distance realized by a pair of vertices in the graph.
Computing the diameter in near-quadratic time is possible in several classes of intersection graphs [Chan and Skrepetos 2019], but it is not at all clear if these algorithms are optimal, especially since in the related class of planar graphs the diameter can be computed in 𝒪̃(n^{5/3}) time [Cabello 2019, Gawrychowski et al. 2021].
In this work we (conditionally) rule out sub-quadratic algorithms in several classes of intersection graphs, i.e., algorithms of running time 𝒪(n^{2-δ}) for some δ > 0. In particular, there are no sub-quadratic algorithms even for fat objects in small dimensions: unit balls in ℝ³ or congruent equilateral triangles in ℝ². For unit segments and congruent equilateral triangles, we can even rule out strong sub-quadratic approximations already in ℝ². It seems that the hardness of approximation may also depend on dimensionality: for axis-parallel unit hypercubes in ℝ^{12}, distinguishing between diameter 2 and 3 needs quadratic time (ruling out (3/2-ε)-approximations), whereas for axis-parallel unit squares, we give an algorithm that distinguishes between diameter 2 and 3 in near-linear time.
Note that many of our lower bounds match the best known algorithms up to sub-polynomial factors. Ultimately, this fine-grained perspective may enable us to determine for which shapes we can have efficient algorithms and approximation schemes for diameter computation.
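
For context, the most naive way to compute the diameter is one BFS per vertex of the intersection graph, which is the flavor of (near-)quadratic baseline these lower bounds address (the cited near-quadratic algorithms are more clever). A small Python sketch for axis-parallel unit squares, illustrative only, with names and the example of our own choosing.

```python
from collections import deque
from itertools import combinations

def unit_square_intersection_graph(centers):
    """Adjacency lists of the intersection graph of axis-parallel unit squares (side 1)."""
    adj = {i: [] for i in range(len(centers))}
    for (i, (xi, yi)), (j, (xj, yj)) in combinations(enumerate(centers), 2):
        if abs(xi - xj) <= 1 and abs(yi - yj) <= 1:   # unit squares intersect
            adj[i].append(j)
            adj[j].append(i)
    return adj

def diameter(adj):
    """Largest BFS eccentricity (assumes a connected graph)."""
    best = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

# Four squares arranged along a line form a path graph of diameter 3.
squares = [(0, 0), (0.8, 0), (1.6, 0), (2.4, 0)]
assert diameter(unit_square_intersection_graph(squares)) == 3
```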

Karl Bringmann, Sándor Kisfaludi‑Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian. Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 21:1-21:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.21, author = {Bringmann, Karl and Kisfaludi‑Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Nusser, Andr\'{e} and Parsaeian, Zahra}, title = {{Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs}}, booktitle = {38th International Symposium on Computational Geometry (SoCG 2022)}, pages = {21:1--21:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-227-3}, ISSN = {1868-8969}, year = {2022}, volume = {224}, editor = {Goaoc, Xavier and Kerber, Michael}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.21}, URN = {urn:nbn:de:0030-drops-160294}, doi = {10.4230/LIPIcs.SoCG.2022.21}, annote = {Keywords: Hardness in P, Geometric Intersection Graph, Graph Diameter, Orthogonal Vectors, Hyperclique Detection} }


APPROX

**Published in:** LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)

We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime.
Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search and optimization variants of the k-XOR problem. Specifically, we define MaxSP as the class of problems definable as max_{x₁,… ,x_k} #{(y₁,… ,y_𝓁) : ϕ(x₁,… ,x_k, y₁,… ,y_𝓁)}, where ϕ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On m-sized structures, we can solve each such problem in time O(m^{k+𝓁-1}). Our results are:
- We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under deterministic fine-grained reductions: A strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of O(m^{k+𝓁-1}) for all problems in MaxSP/MinSP by a polynomial factor.
- This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic c-approximation would give a (c+ε)-approximation for all MaxSP/MinSP problems in time O(m^{k+𝓁-1-δ}), where ε > 0 can be chosen arbitrarily small. Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is equivalent to giving an O(1)-approximation for all MinSP problems in faster-than-O(m^{k+𝓁-1}) time.
- By fine-tuning our reductions, we obtain mild algorithmic improvements for solving and approximating all problems in MaxSP and MinSP, using the fastest known algorithms for Maximum/Minimum Inner Product.

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. Fine-Grained Completeness for Optimization in P. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 9:1-9:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{bringmann_et_al:LIPIcs.APPROX/RANDOM.2021.9, author = {Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin}, title = {{Fine-Grained Completeness for Optimization in P}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)}, pages = {9:1--9:22}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-207-5}, ISSN = {1868-8969}, year = {2021}, volume = {207}, editor = {Wootters, Mary and Sanit\`{a}, Laura}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.9}, URN = {urn:nbn:de:0030-drops-147024}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2021.9}, annote = {Keywords: Fine-grained Complexity \& Algorithm Design, Completeness, Hardness of Approximation in P, Dimensionality Reductions} }


Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)

We consider the classic problem of computing the Longest Common Subsequence (LCS) of two strings of length n. While a simple quadratic algorithm has been known for the problem for more than 40 years, no faster algorithm has been found despite an extensive effort. The lack of progress on the problem has recently been explained by Abboud, Backurs, and Vassilevska Williams [FOCS'15] and Bringmann and Künnemann [FOCS'15] who proved that there is no subquadratic algorithm unless the Strong Exponential Time Hypothesis fails. This major roadblock for getting faster exact algorithms has led the community to look for subquadratic approximation algorithms for the problem.
Yet, unlike the edit distance problem, for which a constant-factor approximation in almost-linear time is known, very little progress has been made on LCS, making it a notoriously difficult problem also in the realm of approximation. For the general setting (where we make no assumption on the length of the optimum solution or the alphabet size), only a naive O(n^{ε/2})-approximation algorithm with running time Õ(n^{2-ε}) has been known, for any constant 0 < ε ≤ 1. Recently, a breakthrough result by Hajiaghayi, Seddighin, Seddighin, and Sun [SODA'19] provided a linear-time algorithm that yields an O(n^{0.497956})-approximation in expectation, improving upon the naive O(√n)-approximation for the first time.
In this paper, we provide an algorithm that in time O(n^{2-ε}) computes an Õ(n^{2ε/5})-approximation with high probability, for any 0 < ε ≤ 1. Our result (1) gives an Õ(n^{0.4})-approximation in linear time, improving upon the bound of Hajiaghayi, Seddighin, Seddighin, and Sun, (2) provides an algorithm whose approximation factor scales with any subquadratic running time O(n^{2-ε}), improving upon the naive bound of O(n^{ε/2}) for any ε, and (3) succeeds with high probability rather than only in expectation.
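
For reference, the "simple quadratic algorithm" mentioned above is the classic LCS dynamic program; a minimal Python sketch of that exact baseline, not of the approximation algorithm.

```python
def lcs_length(x, y):
    """The simple quadratic dynamic program for Longest Common Subsequence.

    L[i][j] = length of an LCS of the prefixes x[:i] and y[:j].
    """
    n, m = len(x), len(y)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]

assert lcs_length("ABCBDAB", "BDCABA") == 4  # e.g. "BCAB" or "BDAB"
```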

Karl Bringmann and Debarati Das. A Linear-Time n^{0.4}-Approximation for Longest Common Subsequence. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 39:1-39:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{bringmann_et_al:LIPIcs.ICALP.2021.39, author = {Bringmann, Karl and Das, Debarati}, title = {{A Linear-Time n^\{0.4\}-Approximation for Longest Common Subsequence}}, booktitle = {48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)}, pages = {39:1--39:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-195-5}, ISSN = {1868-8969}, year = {2021}, volume = {198}, editor = {Bansal, Nikhil and Merelli, Emanuela and Worrell, James}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.39}, URN = {urn:nbn:de:0030-drops-141082}, doi = {10.4230/LIPIcs.ICALP.2021.39}, annote = {Keywords: approximation algorithm, longest common subsequence, string algorithm} }


Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)

The Subgraph Isomorphism problem is of considerable importance in computer science. We examine the problem when the pattern graph H is of bounded treewidth, as occurs in a variety of applications. This problem has a well-known algorithm via color-coding that runs in time O(n^{tw(H)+1}) [Alon, Yuster, Zwick'95], where n is the number of vertices of the host graph G. While there are pattern graphs known for which Subgraph Isomorphism can be solved in an improved running time of O(n^{tw(H)+1-ε}) or even faster (e.g. for k-cliques), it is not known whether such improvements are possible for all patterns. The only known lower bound rules out time n^{o(tw(H) / log(tw(H)))} for any class of patterns of unbounded treewidth assuming the Exponential Time Hypothesis [Marx'07].
In this paper, we demonstrate the existence of maximally hard pattern graphs H that require time n^{tw(H)+1-o(1)}. Specifically, under the Strong Exponential Time Hypothesis (SETH), a standard assumption from fine-grained complexity theory, we prove the following asymptotic statement for large treewidth t:
For any ε > 0 there exists t ≥ 3 and a pattern graph H of treewidth t such that Subgraph Isomorphism on pattern H has no algorithm running in time O(n^{t+1-ε}).
Under the more recent 3-uniform Hyperclique hypothesis, we even obtain tight lower bounds for each specific treewidth t ≥ 3:
For any t ≥ 3 there exists a pattern graph H of treewidth t such that for any ε > 0 Subgraph Isomorphism on pattern H has no algorithm running in time O(n^{t+1-ε}).
In addition to these main results, we explore (1) colored and uncolored problem variants (and why they are equivalent for most cases), (2) Subgraph Isomorphism for tw < 3, (3) Subgraph Isomorphism parameterized by pathwidth instead of treewidth, and (4) a weighted variant that we call Exact Weight Subgraph Isomorphism, for which we examine pseudo-polynomial time algorithms. For many of these settings we obtain similarly tight upper and lower bounds.

Karl Bringmann and Jasper Slusallek. Current Algorithms for Detecting Subgraphs of Bounded Treewidth Are Probably Optimal. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 40:1-40:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{bringmann_et_al:LIPIcs.ICALP.2021.40, author = {Bringmann, Karl and Slusallek, Jasper}, title = {{Current Algorithms for Detecting Subgraphs of Bounded Treewidth Are Probably Optimal}}, booktitle = {48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)}, pages = {40:1--40:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-195-5}, ISSN = {1868-8969}, year = {2021}, volume = {198}, editor = {Bansal, Nikhil and Merelli, Emanuela and Worrell, James}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.40}, URN = {urn:nbn:de:0030-drops-141095}, doi = {10.4230/LIPIcs.ICALP.2021.40}, annote = {Keywords: subgraph isomorphism, treewidth, fine-grained complexity, hyperclique} }


Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)

We consider the problem of computing the Boolean convolution (with wraparound) of n vectors of dimension m, or, equivalently, the problem of computing the sumset A₁+A₂+…+A_n for A₁,…,A_n ⊆ ℤ_m. Boolean convolution formalizes the frequent task of combining two subproblems, where the whole problem has a solution of size k if for some i the first subproblem has a solution of size i and the second subproblem has a solution of size k-i. Our problem formalizes a natural generalization, namely combining solutions of n subproblems subject to a modular constraint. This simultaneously generalises Modular Subset Sum and Boolean Convolution (Sumset Computation). Although nearly optimal algorithms are known for special cases of this problem, not even tiny improvements are known for the general case.
We almost resolve the computational complexity of this problem, shaving essentially a factor of n from the running time of previous algorithms. Specifically, we present a deterministic algorithm running in almost linear time with respect to the input plus output size k. We also present a Las Vegas algorithm running in nearly linear expected time with respect to the input plus output size k. Previously, no deterministic or randomized o(nk) algorithm was known.
At the heart of our approach lies a careful usage of Kneser’s theorem from Additive Combinatorics, and a new deterministic almost linear output-sensitive algorithm for non-negative sparse convolution. In total, our work builds a solid toolbox that could be of independent interest.
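
For concreteness, the naive baseline computes the sumsets one set at a time; a minimal Python sketch illustrating the problem statement (function name ours), not the paper's near-linear output-sensitive algorithms.

```python
def iterated_sumset_mod(sets, m):
    """Naive computation of A_1 + A_2 + ... + A_n (mod m).

    Each step takes the sumset of the partial result with the next set, i.e. an
    n-fold Boolean convolution with wraparound; this naive version can take up to
    O(n * m^2) time, far from linear in input plus output size.
    """
    result = {0}
    for A in sets:
        result = {(s + a) % m for s in result for a in A}
    return sorted(result)

# Three residue sets modulo 7: {1,2} + {0,3} + {1} = {2, 3, 5, 6} (mod 7)
assert iterated_sumset_mod([{1, 2}, {0, 3}, {1}], 7) == [2, 3, 5, 6]
```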

Karl Bringmann and Vasileios Nakos. Fast n-Fold Boolean Convolution via Additive Combinatorics. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 41:1-41:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{bringmann_et_al:LIPIcs.ICALP.2021.41, author = {Bringmann, Karl and Nakos, Vasileios}, title = {{Fast n-Fold Boolean Convolution via Additive Combinatorics}}, booktitle = {48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)}, pages = {41:1--41:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-195-5}, ISSN = {1868-8969}, year = {2021}, volume = {198}, editor = {Bansal, Nikhil and Merelli, Emanuela and Worrell, James}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.41}, URN = {urn:nbn:de:0030-drops-141108}, doi = {10.4230/LIPIcs.ICALP.2021.41}, annote = {Keywords: convolution, sumset computation, modular subset sum, output sensitive} }


**Published in:** LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)

Computing the similarity of two point sets is a ubiquitous task in medical imaging, geometric shape comparison, trajectory analysis, and many more settings. Arguably the most basic distance measure for this task is the Hausdorff distance, which assigns to each point from one set the closest point in the other set and then evaluates the maximum distance of any assigned pair. A drawback is that this distance measure is not translation invariant, that is, comparing two objects just according to their shape while disregarding their position in space is impossible.
Fortunately, there is a canonical translation-invariant version, the Hausdorff distance under translation, which minimizes the Hausdorff distance over all translations of one of the point sets. For point sets of size n and m, the Hausdorff distance under translation can be computed in time 𝒪̃(nm) for the L₁ and L_∞ norms [Chew, Kedem SWAT'92] and 𝒪̃(nm(n+m)) for the L₂ norm [Huttenlocher, Kedem, Sharir DCG'93].
As these bounds have not been improved for over 25 years, in this paper we approach the Hausdorff distance under translation from the perspective of fine-grained complexity theory. We show (i) a matching lower bound of (nm)^{1-o(1)} for L₁ and L_∞ assuming the Orthogonal Vectors Hypothesis and (ii) a matching lower bound of n^{2-o(1)} for L₂ in the imbalanced case of m = 𝒪(1) assuming the 3SUM Hypothesis.
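
For reference, the fixed-position Hausdorff distance described above admits a naive O(nm) computation; the translation-minimizing version studied here wraps a minimization over translations around it. A minimal Python sketch for the Euclidean norm, illustrative only.

```python
import math

def hausdorff(A, B):
    """Naive O(nm) bidirectional Hausdorff distance between finite point sets.

    Each point is assigned its nearest neighbour in the other set; the distance
    is the largest assigned distance over both directions.
    """
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

A = [(0, 0), (4, 0)]
B = [(0, 1), (4, 0), (10, 0)]
assert hausdorff(A, B) == 6.0  # the point (10, 0) is 6 away from its nearest point in A
```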

Karl Bringmann and André Nusser. Translating Hausdorff Is Hard: Fine-Grained Lower Bounds for Hausdorff Distance Under Translation. In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 18:1-18:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{bringmann_et_al:LIPIcs.SoCG.2021.18, author = {Bringmann, Karl and Nusser, Andr\'{e}}, title = {{Translating Hausdorff Is Hard: Fine-Grained Lower Bounds for Hausdorff Distance Under Translation}}, booktitle = {37th International Symposium on Computational Geometry (SoCG 2021)}, pages = {18:1--18:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-184-9}, ISSN = {1868-8969}, year = {2021}, volume = {189}, editor = {Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.18}, URN = {urn:nbn:de:0030-drops-138177}, doi = {10.4230/LIPIcs.SoCG.2021.18}, annote = {Keywords: Hausdorff Distance Under Translation, Fine-Grained Complexity Theory, Lower Bounds} }


**Published in:** LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)

Consider the natural question of how to measure the similarity of curves in the plane by a quantity that is invariant under translations of the curves. Such a measure is justified whenever we aim to quantify the similarity of the curves' shapes rather than their positioning in the plane, e.g., to compare the similarity of handwritten characters. Perhaps the most natural such notion is the (discrete) Fréchet distance under translation. Unfortunately, the algorithmic literature on this problem yields a very pessimistic view: On polygonal curves with n vertices, the fastest algorithm runs in time 𝒪(n^{4.667}) and cannot be improved below n^{4-o(1)} unless the Strong Exponential Time Hypothesis fails. Can we still obtain an implementation that is efficient on realistic datasets?
Spurred by the surprising performance of recent implementations for the Fréchet distance, we perform algorithm engineering for the Fréchet distance under translation. Our solution combines fast, but inexact tools from continuous optimization (specifically, branch-and-bound algorithms for global Lipschitz optimization) with exact, but expensive algorithms from computational geometry (specifically, problem-specific algorithms based on an arrangement construction). We combine these two ingredients to obtain an exact decision algorithm for the Fréchet distance under translation. For the related task of computing the distance value up to a desired precision, we engineer and compare different methods. On a benchmark set involving handwritten characters and route trajectories, our implementation answers a typical query for either task in the range of a few milliseconds up to a second on standard desktop hardware.
We believe that our implementation will enable, for the first time, the use of the Fréchet distance under translation in applications, whereas previous algorithmic approaches would have been computationally infeasible. Furthermore, we hope that our combination of continuous optimization and computational geometry will inspire similar approaches for further algorithmic questions.
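
For reference, the fixed-translation discrete Fréchet distance is computed by a quadratic dynamic program that mirrors the DTW recursion sketched earlier, combining aligned pairs with max instead of sum; the engineered decision procedure above additionally minimizes over translations. A minimal Python sketch, illustrative only.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Classic O(nm) dynamic program for the discrete Fréchet distance (fixed translation)."""
    @lru_cache(maxsize=None)
    def d(i, j):
        cost = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return cost
        if i == 0:
            return max(cost, d(0, j - 1))
        if j == 0:
            return max(cost, d(i - 1, 0))
        return max(cost, min(d(i - 1, j), d(i, j - 1), d(i - 1, j - 1)))
    return d(len(P) - 1, len(Q) - 1)

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
assert discrete_frechet(P, Q) == 1.0  # walking both curves in lockstep keeps the leash at length 1
```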

Karl Bringmann, Marvin Künnemann, and André Nusser. When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fréchet Distance Under Translation. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{bringmann_et_al:LIPIcs.ESA.2020.25, author = {Bringmann, Karl and K\"{u}nnemann, Marvin and Nusser, Andr\'{e}}, title = {{When Lipschitz Walks Your Dog: Algorithm Engineering of the Discrete Fr\'{e}chet Distance Under Translation}}, booktitle = {28th Annual European Symposium on Algorithms (ESA 2020)}, pages = {25:1--25:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-162-7}, ISSN = {1868-8969}, year = {2020}, volume = {173}, editor = {Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.25}, URN = {urn:nbn:de:0030-drops-128912}, doi = {10.4230/LIPIcs.ESA.2020.25}, annote = {Keywords: Fr\'{e}chet Distance, Computational Geometry, Continuous Optimization, Algorithm Engineering} }


Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)

Given N instances (X_1,t_1),…,(X_N,t_N) of Subset Sum, the AND Subset Sum problem asks to determine whether all of these instances are yes-instances; that is, whether each set of integers X_i has a subset that sums up to the target integer t_i. We prove that this problem cannot be solved in time Õ((N ⋅ t_max)^{1-ε}), for t_max = max_i t_i and any ε > 0, assuming the ∀∃ Strong Exponential Time Hypothesis (∀∃-SETH). We then use this result to exclude Õ(n + P_max ⋅ n^{1-ε})-time algorithms for several scheduling problems on n jobs with maximum processing time P_max, assuming ∀∃-SETH. These include classical problems such as 1||∑ w_jU_j, the problem of minimizing the total weight of tardy jobs on a single machine, and P₂||∑ U_j, the problem of minimizing the number of tardy jobs on two identical parallel machines.
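
For context, the natural baseline solves the N instances one by one; the sketch below does so with the standard bitset dynamic program (a near-linear-time Subset Sum algorithm per instance would roughly match the Õ(N ⋅ t_max) bound that the lower bound shows to be essentially optimal). Function names are ours; this is an illustration only.

```python
def subset_sum(X, t):
    """Bitset dynamic program for Subset Sum: is there a subset of X summing to t?"""
    reachable = 1                          # bit s is set iff sum s is attainable; initially only s = 0
    for x in X:
        reachable |= reachable << x
        reachable &= (1 << (t + 1)) - 1    # sums above t are never useful
    return bool((reachable >> t) & 1)

def and_subset_sum(instances):
    """AND Subset Sum: all N instances (X_i, t_i) must be yes-instances."""
    return all(subset_sum(X, t) for X, t in instances)

assert and_subset_sum([([3, 5, 7], 8), ([2, 2, 9], 4)]) is True
assert and_subset_sum([([3, 5, 7], 8), ([2, 2, 9], 5)]) is False
```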

Amir Abboud, Karl Bringmann, Danny Hermelin, and Dvir Shabtay. Scheduling Lower Bounds via AND Subset Sum. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 4:1-4:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)

Copy BibTex To Clipboard

@InProceedings{abboud_et_al:LIPIcs.ICALP.2020.4, author = {Abboud, Amir and Bringmann, Karl and Hermelin, Danny and Shabtay, Dvir}, title = {{Scheduling Lower Bounds via AND Subset Sum}}, booktitle = {47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)}, pages = {4:1--4:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-138-2}, ISSN = {1868-8969}, year = {2020}, volume = {168}, editor = {Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.4}, URN = {urn:nbn:de:0030-drops-124119}, doi = {10.4230/LIPIcs.ICALP.2020.4}, annote = {Keywords: SETH, fine grained complexity, Subset Sum, scheduling} }

Document

Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)

This paper is concerned with the 1||∑ p_jU_j problem, the problem of minimizing the total processing time of tardy jobs on a single machine. This is not only a fundamental scheduling problem, but also a very important problem from a theoretical point of view as it generalizes the Subset Sum problem and is closely related to the 0/1-Knapsack problem. The problem is well-known to be NP-hard, but only in a weak sense, meaning it admits pseudo-polynomial time algorithms. The fastest known pseudo-polynomial time algorithm for the problem is the famous algorithm of Lawler and Moore, which runs in O(P ⋅ n) time, where P is the total processing time of all n jobs in the input. This algorithm was developed in the late 1960s and has not been improved since.
In this paper we develop two new algorithms for 1||∑ p_jU_j, each improving on Lawler and Moore’s algorithm in a different scenario:
- Our first algorithm runs in Õ(P^{7/4}) time, and outperforms Lawler and Moore’s algorithm in instances where n = ω̃(P^{3/4}).
- Our second algorithm runs in Õ(min{P ⋅ D_#, P + D}) time, where D_# is the number of different due dates in the instance, and D is the sum of all different due dates. This algorithm improves on Lawler and Moore’s algorithm when n = ω̃(D_#) or n = ω̃(D/P). Further, it extends the known Õ(P) algorithm for the single due date special case of 1||∑ p_jU_j in a natural way.
Both algorithms rely on basic primitive operations between sets of integers and vectors of integers for the speedup in their running times. The second algorithm relies on fast polynomial multiplication as its main engine, while for the first algorithm we define a new "skewed" version of (max,min)-convolution which is interesting in its own right.
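
For reference, the Lawler and Moore dynamic program mentioned above can be stated compactly when specialized to 1||∑ p_jU_j, where each job's weight equals its processing time. The sketch below is this classical O(P ⋅ n) baseline, not either of the new algorithms; jobs are assumed to be given as (p_j, d_j) pairs.

```python
def min_tardy_processing_time(jobs):
    """Lawler-Moore DP for 1||sum p_j U_j in O(n * P) time.

    jobs: list of (p_j, d_j) pairs.  Returns the minimum total processing time of
    tardy jobs.  dp[t] = minimum tardy processing time over the jobs considered so
    far, given that the early (on-time) jobs occupy exactly t time units; jobs are
    processed in earliest-due-date (EDD) order.
    """
    P = sum(p for p, _ in jobs)
    INF = float("inf")
    dp = [INF] * (P + 1)
    dp[0] = 0
    for p, d in sorted(jobs, key=lambda job: job[1]):      # EDD order
        new = [INF] * (P + 1)
        for t in range(P + 1):
            if dp[t] == INF:
                continue
            # Option 1: the job is tardy and contributes its processing time p.
            new[t] = min(new[t], dp[t] + p)
            # Option 2: the job is early, allowed only if it finishes by its due date.
            if t + p <= d:
                new[t + p] = min(new[t + p], dp[t])
        dp = new
    return min(dp)

# Example: with these three jobs, the best schedule leaves only the job of length 3 tardy.
print(min_tardy_processing_time([(2, 2), (3, 4), (4, 6)]))
```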

Karl Bringmann, Nick Fischer, Danny Hermelin, Dvir Shabtay, and Philip Wellnitz. Faster Minimization of Tardy Processing Time on a Single Machine. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2020.19, author = {Bringmann, Karl and Fischer, Nick and Hermelin, Danny and Shabtay, Dvir and Wellnitz, Philip}, title = {{Faster Minimization of Tardy Processing Time on a Single Machine}}, booktitle = {47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)}, pages = {19:1--19:12}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-138-2}, ISSN = {1868-8969}, year = {2020}, volume = {168}, editor = {Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.19}, URN = {urn:nbn:de:0030-drops-124269}, doi = {10.4230/LIPIcs.ICALP.2020.19}, annote = {Keywords: Weighted number of tardy jobs, sumsets, convolutions} }

Document

**Published in:** LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)

We study SET COVER for orthants: Given a set of points in a d-dimensional Euclidean space and a set of orthants of the form (-infty,p_1] x ... x (-infty,p_d], select a minimum number of orthants so that every point is contained in at least one selected orthant. This problem draws its motivation from applications in multi-objective optimization problems. While for d=2 the problem can be solved in polynomial time, for d>2 no algorithm is known that avoids the enumeration of all size-k subsets of the input to test whether there is a set cover of size k. Our contribution is a precise understanding of the complexity of this problem in any dimension d >= 3, when k is considered a parameter:
- For d=3, we give an algorithm with runtime n^O(sqrt{k}), thus avoiding exhaustive enumeration.
- For d=3, we prove a tight lower bound of n^Omega(sqrt{k}) (assuming ETH).
- For d >= 4, we prove a tight lower bound of n^Omega(k) (assuming ETH).
Here n is the size of the set of points plus the size of the set of orthants. The first statement comes as a corollary of a more general result: an algorithm for SET COVER for half-spaces in dimension 3. In particular, we show that given a set of points U in R^3, a set of half-spaces D in R^3, and an integer k, one can decide whether U can be covered by the union of at most k half-spaces from D in time |D|^O(sqrt{k}) * |U|^O(1).
We also study approximation for SET COVER for orthants. While in dimension 3 a PTAS can be inferred from existing results, we show that in dimension 4 and larger, there is no 1.05-approximation algorithm with runtime f(k) * n^o(k) for any computable f, where k is the optimum.

Karl Bringmann, Sándor Kisfaludi-Bak, Michał Pilipczuk, and Erik Jan van Leeuwen. On Geometric Set Cover for Orthants. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 26:1-26:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ESA.2019.26, author = {Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and Pilipczuk, Micha{\l} and van Leeuwen, Erik Jan}, title = {{On Geometric Set Cover for Orthants}}, booktitle = {27th Annual European Symposium on Algorithms (ESA 2019)}, pages = {26:1--26:18}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-124-5}, ISSN = {1868-8969}, year = {2019}, volume = {144}, editor = {Bender, Michael A. and Svensson, Ola and Herman, Grzegorz}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.26}, URN = {urn:nbn:de:0030-drops-111476}, doi = {10.4230/LIPIcs.ESA.2019.26}, annote = {Keywords: Set Cover, parameterized complexity, algorithms, Exponential Time Hypothesis} }

Document

**Published in:** LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)

An important class of problems in logics and database theory is given by fixing a first-order property psi over a relational structure, and considering the model-checking problem for psi. Recently, Gao, Impagliazzo, Kolokolova, and Williams (SODA 2017) identified this class as fundamental for the theory of fine-grained complexity in P, by showing that the (Sparse) Orthogonal Vectors problem is complete for this class under fine-grained reductions. This raises the question whether fine-grained complexity can yield a precise understanding of all first-order model-checking problems. Specifically, can we determine, for any fixed first-order property psi, the exponent of the optimal running time O(m^{c_psi}), where m denotes the number of tuples in the relational structure?
Towards answering this question, in this work we give a dichotomy for the class of exists^k-forall-quantified graph properties. For every such property psi, we either give a polynomial-time improvement over the baseline O(m^k)-time algorithm or show that it requires time m^{k-o(1)} under the hypothesis that MAX-3-SAT has no O((2-epsilon)^n)-time algorithm. More precisely, we define a hardness parameter h = H(psi) such that psi can be decided in time O(m^{k-epsilon}) if h <= 2 and requires time m^{k-o(1)} for h >= 3 unless the h-uniform HyperClique hypothesis fails. This unveils a natural hardness hierarchy within first-order properties: for any h >= 3, we show that there exists an exists^k-forall-quantified graph property psi with hardness H(psi)=h that is solvable in time O(m^{k-epsilon}) if and only if the h-uniform HyperClique hypothesis fails. Finally, we give more precise upper and lower bounds for an exemplary class of formulas with k=3 and extend our classification to a counting dichotomy.

Karl Bringmann, Nick Fischer, and Marvin Künnemann. A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 31:1-31:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.CCC.2019.31, author = {Bringmann, Karl and Fischer, Nick and K\"{u}nnemann, Marvin}, title = {{A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties}}, booktitle = {34th Computational Complexity Conference (CCC 2019)}, pages = {31:1--31:27}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-116-0}, ISSN = {1868-8969}, year = {2019}, volume = {137}, editor = {Shpilka, Amir}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.31}, URN = {urn:nbn:de:0030-drops-108533}, doi = {10.4230/LIPIcs.CCC.2019.31}, annote = {Keywords: Fine-grained Complexity, Hardness in P, Hyperclique Conjecture, Constrained Triangle Detection} }

Document

**Published in:** LIPIcs, Volume 129, 35th International Symposium on Computational Geometry (SoCG 2019)

The Fréchet distance provides a natural and intuitive measure for the popular task of computing the similarity of two (polygonal) curves. While a simple algorithm computes it in near-quadratic time, a strongly subquadratic algorithm cannot exist unless the Strong Exponential Time Hypothesis fails. Still, fast practical implementations of the Fréchet distance, in particular for realistic input curves, are highly desirable. This has even led to a designated competition, the ACM SIGSPATIAL GIS Cup 2017: Here, the challenge was to implement a near-neighbor data structure under the Fréchet distance. The bottleneck of the top three implementations turned out to be precisely the decision procedure for the Fréchet distance.
In this work, we present a fast, certifying implementation for deciding the Fréchet distance, in order to (1) complement its pessimistic worst-case hardness by an empirical analysis on realistic input data and to (2) improve the state of the art for the GIS Cup challenge. We experimentally evaluate our implementation on a large benchmark consisting of several data sets (including handwritten characters and GPS trajectories). Compared to the winning implementation of the GIS Cup, we obtain running time improvements of up to more than two orders of magnitude for the decision procedure and of up to a factor of 30 for queries to the near-neighbor data structure.
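
The decision procedure that forms the bottleneck here is, at its core, a textbook O(nm) reachability dynamic program; the certifying implementation described above adds pruning and certificates on top of it. A plain baseline sketch, assuming the curves are given as lists of coordinate tuples:

```python
import math

def frechet_decision(A, B, delta):
    """Decide whether the discrete Frechet distance of point sequences A, B is <= delta.

    reach[i][j] is True iff the prefix pair (A[:i+1], B[:j+1]) admits a monotone
    traversal in which every matched pair of points is within distance delta.
    """
    n, m = len(A), len(B)
    close = [[math.dist(A[i], B[j]) <= delta for j in range(m)] for i in range(n)]
    reach = [[False] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if not close[i][j]:
                continue
            if i == 0 and j == 0:
                reach[i][j] = True
            else:
                reach[i][j] = ((i > 0 and reach[i - 1][j]) or
                               (j > 0 and reach[i][j - 1]) or
                               (i > 0 and j > 0 and reach[i - 1][j - 1]))
    return reach[n - 1][m - 1]

# Example query: two parallel segments at vertical distance 1.
print(frechet_decision([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)], 1.0))
```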

Karl Bringmann, Marvin Künnemann, and André Nusser. Walking the Dog Fast in Practice: Algorithm Engineering of the Fréchet Distance. In 35th International Symposium on Computational Geometry (SoCG 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 129, pp. 17:1-17:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2019.17, author = {Bringmann, Karl and K\"{u}nnemann, Marvin and Nusser, Andr\'{e}}, title = {{Walking the Dog Fast in Practice: Algorithm Engineering of the Fr\'{e}chet Distance}}, booktitle = {35th International Symposium on Computational Geometry (SoCG 2019)}, pages = {17:1--17:21}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-104-7}, ISSN = {1868-8969}, year = {2019}, volume = {129}, editor = {Barequet, Gill and Wang, Yusu}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2019.17}, URN = {urn:nbn:de:0030-drops-104219}, doi = {10.4230/LIPIcs.SoCG.2019.17}, annote = {Keywords: curve simplification, Fr\'{e}chet distance, algorithm engineering} }

Document

**Published in:** LIPIcs, Volume 129, 35th International Symposium on Computational Geometry (SoCG 2019)

In the classic polyline simplification problem we want to replace a given polygonal curve P, consisting of n vertices, by a subsequence P' of k vertices from P such that the polygonal curves P and P' are "close". Closeness is usually measured using the Hausdorff or Fréchet distance. These distance measures can be applied globally, i.e., to the whole curves P and P', or locally, i.e., to each simplified subcurve and the line segment that it was replaced with separately (and then taking the maximum). We provide an O(n^3) time algorithm for simplification under Global-Fréchet distance, improving the previous best algorithm by a factor of Omega(kn^2). We also provide evidence that in high dimensions cubic time is essentially optimal for all three problems (Local-Hausdorff, Local-Fréchet, and Global-Fréchet). Specifically, improving the cubic time to O(n^{3-epsilon} poly(d)) for polyline simplification over (R^d,L_p) for p = 1 would violate plausible conjectures. We obtain similar results for all p in [1,infty), p != 2. In total, in high dimensions and over general L_p-norms we resolve the complexity of polyline simplification with respect to Local-Hausdorff, Local-Fréchet, and Global-Fréchet, by providing new algorithms and conditional lower bounds.

Karl Bringmann and Bhaskar Ray Chaudhury. Polyline Simplification has Cubic Complexity. In 35th International Symposium on Computational Geometry (SoCG 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 129, pp. 18:1-18:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2019.18, author = {Bringmann, Karl and Chaudhury, Bhaskar Ray}, title = {{Polyline Simplification has Cubic Complexity}}, booktitle = {35th International Symposium on Computational Geometry (SoCG 2019)}, pages = {18:1--18:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-104-7}, ISSN = {1868-8969}, year = {2019}, volume = {129}, editor = {Barequet, Gill and Wang, Yusu}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2019.18}, URN = {urn:nbn:de:0030-drops-104224}, doi = {10.4230/LIPIcs.SoCG.2019.18}, annote = {Keywords: Polyline simplification, Fr\'{e}chet distance, Hausdorff distance, Conditional lower bounds} }

Document

Tutorial

**Published in:** LIPIcs, Volume 126, 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)

Suppose the fastest algorithm that we can design for some problem runs in time O(n^2). However, we want to solve the problem on big data inputs, for which quadratic time is impractically slow. We can keep searching for a faster algorithm, but maybe none exists. Is there any reasoning that provides evidence against significantly faster algorithms, and thus allows us to stop searching? In other words, is there an analogue of NP-hardness for polynomial-time problems?
In this tutorial, we will give an introduction to fine-grained complexity theory, which allows us to rule out faster algorithms by proving conditional lower bounds via fine-grained reductions from certain key conjectures. We will define these terms and show exemplary lower bounds.

Karl Bringmann. Fine-Grained Complexity Theory (Tutorial). In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 4:1-4:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

Copy BibTex To Clipboard

@InProceedings{bringmann:LIPIcs.STACS.2019.4, author = {Bringmann, Karl}, title = {{Fine-Grained Complexity Theory}}, booktitle = {36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)}, pages = {4:1--4:7}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-100-9}, ISSN = {1868-8969}, year = {2019}, volume = {126}, editor = {Niedermeier, Rolf and Paul, Christophe}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2019.4}, URN = {urn:nbn:de:0030-drops-102437}, doi = {10.4230/LIPIcs.STACS.2019.4}, annote = {Keywords: Hardness in P, conditional lower bound, fine-grained reduction} }

Document

**Published in:** LIPIcs, Volume 115, 13th International Symposium on Parameterized and Exact Computation (IPEC 2018)

We show that the eccentricities, diameter, radius, and Wiener index of an undirected n-vertex graph with nonnegative edge lengths can be computed in time O(n * binom{k+ceil[log n]}{k} * 2^k k^2 log n), where k is the treewidth of the graph. For every epsilon > 0, this bound is n^{1+epsilon} * exp(O(k)), which matches a hardness result of Abboud, Vassilevska Williams, and Wang (SODA 2015) and closes an open problem in the multivariate analysis of polynomial-time computation. To this end, we show that the analysis of an algorithm of Cabello and Knauer (Comp. Geom., 2009) in the regime of non-constant treewidth can be improved by revisiting the analysis of orthogonal range searching, improving bounds of the form log^d n to binom{d+ceil[log n]}{d}, as originally observed by Monier (J. Alg. 1980).
We also investigate the parameterization by vertex cover number.

Karl Bringmann, Thore Husfeldt, and Måns Magnusson. Multivariate Analysis of Orthogonal Range Searching and Graph Distances. In 13th International Symposium on Parameterized and Exact Computation (IPEC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 115, pp. 4:1-4:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.IPEC.2018.4, author = {Bringmann, Karl and Husfeldt, Thore and Magnusson, M\r{a}ns}, title = {{Multivariate Analysis of Orthogonal Range Searching and Graph Distances}}, booktitle = {13th International Symposium on Parameterized and Exact Computation (IPEC 2018)}, pages = {4:1--4:13}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-084-2}, ISSN = {1868-8969}, year = {2019}, volume = {115}, editor = {Paul, Christophe and Pilipczuk, Michal}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2018.4}, URN = {urn:nbn:de:0030-drops-102050}, doi = {10.4230/LIPIcs.IPEC.2018.4}, annote = {Keywords: Diameter, radius, Wiener index, orthogonal range searching, treewidth, vertex cover number} }

Document

**Published in:** LIPIcs, Volume 122, 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)

We study sketching and streaming algorithms for the Longest Common Subsequence problem (LCS) on strings of small alphabet size |Sigma|. For the problem of deciding whether the LCS of strings x,y has length at least L, we obtain a sketch size and streaming space usage of O(L^{|Sigma| - 1} log L). We also prove matching unconditional lower bounds.
As an application, we study a variant of LCS where each alphabet symbol is equipped with a weight that is given as input, and the task is to compute a common subsequence of maximum total weight. Using our sketching algorithm, we obtain an O(min{nm, n + m^{|Sigma|}})-time algorithm for this problem, on strings x,y of length n,m, with n >= m. We prove optimality of this running time up to lower order factors, assuming the Strong Exponential Time Hypothesis.
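
The O(nm) term in this running time corresponds to the straightforward weighted extension of the classical LCS dynamic program. A minimal sketch, assuming the symbol weights are given as a dictionary:

```python
def weighted_lcs(x, y, weight):
    """O(n*m) DP: maximum total weight of a common subsequence of x and y."""
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            if x[i - 1] == y[j - 1]:
                dp[i][j] = max(dp[i][j], dp[i - 1][j - 1] + weight[x[i - 1]])
    return dp[n][m]

# Example over alphabet {a, b}: "ab" is a heaviest common subsequence, of weight 6.
print(weighted_lcs("aab", "aba", {"a": 1, "b": 5}))
```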

Karl Bringmann and Bhaskar Ray Chaudhury. Sketching, Streaming, and Fine-Grained Complexity of (Weighted) LCS. In 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 122, pp. 40:1-40:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.FSTTCS.2018.40, author = {Bringmann, Karl and Chaudhury, Bhaskar Ray}, title = {{Sketching, Streaming, and Fine-Grained Complexity of (Weighted) LCS}}, booktitle = {38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)}, pages = {40:1--40:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-093-4}, ISSN = {1868-8969}, year = {2018}, volume = {122}, editor = {Ganguly, Sumit and Pandya, Paritosh}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2018.40}, URN = {urn:nbn:de:0030-drops-99390}, doi = {10.4230/LIPIcs.FSTTCS.2018.40}, annote = {Keywords: algorithms, SETH, communication complexity, run-length encoding} }

Document

**Published in:** LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)

A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the O(n^2) dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to O(n^2/log^2 n) in several ways and using a variety of ingenious tricks. This line of research, also known as the art of shaving log factors, lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time O(n^2/log^3 n)?
Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on boolean formulas (Formula-SAT) faster than exhaustive search. They show that an O(n^2/log^{1000} n) algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear.
In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way for shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fréchet distance problem from Computational Geometry, we show that an O(n^2/log^{7+epsilon} n) runtime would imply new Formula-SAT algorithms.
Our main result is a reduction from SAT on formulas of size s over n variables to LCS on sequences of length N=2^{n/2} * s^{1+o(1)}. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS with N=2^{n/2} * s^c, for some c >= 100.

Amir Abboud and Karl Bringmann. Tighter Connections Between Formula-SAT and Shaving Logs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)

Copy BibTex To Clipboard

@InProceedings{abboud_et_al:LIPIcs.ICALP.2018.8, author = {Abboud, Amir and Bringmann, Karl}, title = {{Tighter Connections Between Formula-SAT and Shaving Logs}}, booktitle = {45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)}, pages = {8:1--8:18}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-076-7}, ISSN = {1868-8969}, year = {2018}, volume = {107}, editor = {Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.8}, URN = {urn:nbn:de:0030-drops-90129}, doi = {10.4230/LIPIcs.ICALP.2018.8}, annote = {Keywords: Fine-Grained Complexity, Hardness in P, Formula-SAT, Longest Common Subsequence, Frechet Distance} }

Document

Brief Announcement

**Published in:** LIPIcs, Volume 91, 31st International Symposium on Distributed Computing (DISC 2017)

We revisit the hardness of approximating the diameter of a network. In the CONGEST model, ~Omega(n) rounds are necessary to compute the diameter [Frischknecht et al. SODA'12]. Abboud et al. [DISC 2016] extended this result to sparse graphs and, at a more fine-grained level, showed that, for any integer 1 <= l <= polylog(n), distinguishing between networks of diameter 4l + 2 and 6l + 1 requires ~Omega(n) rounds. We slightly tighten this result by showing that even distinguishing between diameter 2l + 1 and 3l + 1 requires ~Omega(n) rounds. The reduction of Abboud et al. is inspired by recent conditional lower bounds in the RAM model, where the orthogonal vectors problem plays a pivotal role. In our new lower bound, we make the connection to orthogonal vectors explicit, leading to a conceptually more streamlined exposition. This is suited for teaching both the lower bound in the CONGEST model and the conditional lower bound in the RAM model.

Karl Bringmann and Sebastian Krinninger. Brief Announcement: A Note on Hardness of Diameter Approximation. In 31st International Symposium on Distributed Computing (DISC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 91, pp. 44:1-44:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.DISC.2017.44, author = {Bringmann, Karl and Krinninger, Sebastian}, title = {{Brief Announcement: A Note on Hardness of Diameter Approximation}}, booktitle = {31st International Symposium on Distributed Computing (DISC 2017)}, pages = {44:1--44:3}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-053-8}, ISSN = {1868-8969}, year = {2017}, volume = {91}, editor = {Richa, Andr\'{e}a}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2017.44}, URN = {urn:nbn:de:0030-drops-79874}, doi = {10.4230/LIPIcs.DISC.2017.44}, annote = {Keywords: diameter, fine-grained reductions, conditional lower bounds} }

Document

**Published in:** LIPIcs, Volume 87, 25th Annual European Symposium on Algorithms (ESA 2017)

Real-world networks, like social networks or the internet infrastructure, have structural properties such as large clustering coefficients that can best be described in terms of an underlying geometry. This is why the focus of the literature on theoretical models for real-world networks shifted from classic models without geometry, such as Chung-Lu random graphs, to modern geometry-based models, such as hyperbolic random graphs.
With this paper we contribute to the theoretical analysis of these modern, more realistic random graph models. Instead of directly studying hyperbolic random graphs, we introduce a generalization that we call geometric inhomogeneous random graphs (GIRGs). Since we ignore constant factors in the edge probabilities, GIRGs are technically simpler (specifically, we avoid hyperbolic cosines), while preserving the qualitative behaviour of hyperbolic random graphs, and we suggest replacing hyperbolic random graphs with this new model in future theoretical studies.
We prove the following fundamental structural and algorithmic results on GIRGs. (1) As our main contribution we provide a sampling algorithm that generates a random graph from our model in expected linear time, improving the best-known sampling algorithm for hyperbolic random graphs by a substantial factor O(n^0.5). (2) We establish that GIRGs have clustering coefficients in Omega(1), (3) we prove that GIRGs have small separators, i.e., it suffices to delete a sublinear number of edges to break the giant component into two large pieces, and (4) we show how to compress GIRGs using an expected linear number of bits.
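
For intuition about the sampling problem, the classic non-geometric Chung-Lu model mentioned above is trivial to sample in Theta(n^2) time by flipping one coin per vertex pair; the point of result (1) is that the geometric model can still be sampled in expected linear time. The sketch below is only this naive Chung-Lu baseline with the standard connection probability min(1, w_u * w_v / W), not the paper's GIRG sampler:

```python
import random

def sample_chung_lu(weights, seed=None):
    """Naive Theta(n^2) sampler for the Chung-Lu random graph.

    Vertices u, v are connected independently with probability
    min(1, w_u * w_v / W), where W is the total weight.  Returns the edge list.
    """
    rng = random.Random(seed)
    n = len(weights)
    W = sum(weights)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < min(1.0, weights[u] * weights[v] / W):
                edges.append((u, v))
    return edges

# Example: a heavy-tailed weight sequence on 10 vertices.
print(sample_chung_lu([10, 5, 3, 2, 2, 1, 1, 1, 1, 1], seed=42))
```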

Karl Bringmann, Ralph Keusch, and Johannes Lengler. Sampling Geometric Inhomogeneous Random Graphs in Linear Time. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 20:1-20:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ESA.2017.20, author = {Bringmann, Karl and Keusch, Ralph and Lengler, Johannes}, title = {{Sampling Geometric Inhomogeneous Random Graphs in Linear Time}}, booktitle = {25th Annual European Symposium on Algorithms (ESA 2017)}, pages = {20:1--20:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-049-1}, ISSN = {1868-8969}, year = {2017}, volume = {87}, editor = {Pruhs, Kirk and Sohler, Christian}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.20}, URN = {urn:nbn:de:0030-drops-78396}, doi = {10.4230/LIPIcs.ESA.2017.20}, annote = {Keywords: real-world networks, random graph models, sampling algorithms, compression algorithms, hyperbolic random graphs} }

Document

**Published in:** LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)

In 1979 Valiant showed that the complexity class VP_e of families with polynomially bounded formula size is contained in the class VP_s of families that have algebraic branching programs (ABPs) of polynomially bounded size. Motivated by the problem of separating these classes we study the topological closure VP_e-bar, i.e. the class of polynomials that can be approximated arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a strikingly simple complete polynomial (in characteristic different from 2) whose recursive definition is similar to the Fibonacci numbers. Further understanding this polynomial seems to be a promising route to new formula lower bounds.
Our methods are rooted in the study of ABPs of small constant width. In 1992 Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction lead to the aforementioned characterization of VP_e-bar.
As a natural continuation of this work we prove that the class VNP can be described as the class of families that admit a hypercube summation of polynomially bounded dimension over a product of polynomially many affine linear forms. This gives the first separations of algebraic complexity classes from their nondeterministic analogs.

Karl Bringmann, Christian Ikenmeyer, and Jeroen Zuiddam. On Algebraic Branching Programs of Small Width. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 20:1-20:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.CCC.2017.20, author = {Bringmann, Karl and Ikenmeyer, Christian and Zuiddam, Jeroen}, title = {{On Algebraic Branching Programs of Small Width}}, booktitle = {32nd Computational Complexity Conference (CCC 2017)}, pages = {20:1--20:31}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-040-8}, ISSN = {1868-8969}, year = {2017}, volume = {79}, editor = {O'Donnell, Ryan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.20}, URN = {urn:nbn:de:0030-drops-75217}, doi = {10.4230/LIPIcs.CCC.2017.20}, annote = {Keywords: algebraic branching programs, algebraic complexity theory, border complexity, formula size, iterated matrix multiplication} }

Document

**Published in:** LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)

We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with n nodes and m edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use-case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time ~O(m^{3/4} n^{3/2}), which gives the first improvement over Megiddo's ~O(n^3) algorithm [JACM'83] for sparse graphs. (We use the notation ~O(.) to hide factors that are polylogarithmic in n.) We further demonstrate how to obtain both an algorithm with running time n^3/2^{Omega(sqrt(log n))} on general graphs and an algorithm with running time ~O(n) on constant treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest.
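
For background on why negative cycle detection appears as a subroutine, note the classical parametric view of the problem: the minimum cost-to-time ratio is below lambda exactly when the graph with edge weights c_e - lambda * t_e contains a negative cycle, so one can search over lambda. The sketch below is this standard binary-search baseline (assuming positive transit times and at least one cycle), not the ~O(m^{3/4} n^{3/2}) algorithm of the paper; the graph format and the number of iterations are illustrative.

```python
def has_negative_cycle(n, edges, weight):
    """Bellman-Ford negative cycle detection.  edges: list of (u, v); weight: edge index -> float."""
    dist = [0.0] * n                            # as if a virtual source were connected to every vertex
    for _ in range(n):
        updated = False
        for idx, (u, v) in enumerate(edges):
            if dist[u] + weight(idx) < dist[v] - 1e-12:
                dist[v] = dist[u] + weight(idx)
                updated = True
        if not updated:
            return False
    return True                                 # still relaxing after n rounds => negative cycle

def min_ratio_cycle(n, edges, cost, time, iters=60):
    """Binary search for the minimum over cycles of sum(cost) / sum(time).

    Assumes all transit times are positive and the graph contains at least one cycle.
    """
    lo = min(c / t for c, t in zip(cost, time))
    hi = max(c / t for c, t in zip(cost, time))
    for _ in range(iters):
        lam = (lo + hi) / 2
        if has_negative_cycle(n, edges, lambda i: cost[i] - lam * time[i]):
            hi = lam                            # some cycle has ratio < lam
        else:
            lo = lam
    return hi

# Example: the triangle 0 -> 1 -> 2 -> 0 with costs 2,2,2 and times 1,1,2 has ratio 6/4 = 1.5.
edges = [(0, 1), (1, 2), (2, 0)]
print(round(min_ratio_cycle(3, edges, cost=[2, 2, 2], time=[1, 1, 2]), 3))
```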

Karl Bringmann, Thomas Dueholm Hansen, and Sebastian Krinninger. Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 124:1-124:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2017.124, author = {Bringmann, Karl and Dueholm Hansen, Thomas and Krinninger, Sebastian}, title = {{Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs}}, booktitle = {44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)}, pages = {124:1--124:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-041-5}, ISSN = {1868-8969}, year = {2017}, volume = {80}, editor = {Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.124}, URN = {urn:nbn:de:0030-drops-74398}, doi = {10.4230/LIPIcs.ICALP.2017.124}, annote = {Keywords: quantitative verification and synthesis, parametric search, shortest paths, negative cycle detection} }

Document

**Published in:** LIPIcs, Volume 78, 28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017)

Tree-adjoining grammars are a generalization of context-free grammars that are well suited to model human languages and are thus popular in computational linguistics. In the tree-adjoining grammar recognition problem, given a grammar G and a string s of length n, the task is to decide whether s can be obtained from G. Rajasekaran and Yooseph’s parser (JCSS’98) solves this problem in time O(n^{2w}), where w < 2.373 is the matrix multiplication exponent. The best algorithms avoiding fast matrix multiplication take time O(n^6). The first evidence for hardness was given by Satta (J. Comp. Linguist.’94): For a more general parsing problem, any algorithm that avoids fast matrix multiplication and is significantly faster than O(|G|·n^6) in the case of |G| = Theta(n^12) would imply a breakthrough for Boolean matrix multiplication. Following an approach by Abboud et al. (FOCS’15) for context-free grammar recognition, in this paper we resolve many of the disadvantages of the previous lower bound. We show that, even on constant-size grammars, any improvement on Rajasekaran and Yooseph’s parser would imply a breakthrough for the k-Clique problem. This establishes tree-adjoining grammar parsing as a practically relevant problem with the unusual running time of n^{2w}, up to lower order factors.

Karl Bringmann and Philip Wellnitz. Clique-Based Lower Bounds for Parsing Tree-Adjoining Grammars. In 28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 78, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.CPM.2017.12, author = {Bringmann, Karl and Wellnitz, Philip}, title = {{Clique-Based Lower Bounds for Parsing Tree-Adjoining Grammars}}, booktitle = {28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017)}, pages = {12:1--12:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-039-2}, ISSN = {1868-8969}, year = {2017}, volume = {78}, editor = {K\"{a}rkk\"{a}inen, Juha and Radoszewski, Jakub and Rytter, Wojciech}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2017.12}, URN = {urn:nbn:de:0030-drops-73329}, doi = {10.4230/LIPIcs.CPM.2017.12}, annote = {Keywords: conditional lower bounds, k-Clique, parsing, tree-adjoining grammars} }

Document

**Published in:** LIPIcs, Volume 77, 33rd International Symposium on Computational Geometry (SoCG 2017)

Let B be a set of n axis-parallel boxes in d-dimensions such that each box has a corner at the origin and the other corner in the positive quadrant, and let k be a positive integer. We study the problem of selecting k boxes in B that maximize the volume of the union of the selected boxes. The research is motivated by applications in skyline queries for databases and in multicriteria optimization, where the problem is known as the hypervolume subset selection problem. It is known that the problem can be solved in polynomial time in the plane, while the best known algorithms in any dimension d>2 enumerate all size-k subsets. We show that:
* The problem is NP-hard already in 3 dimensions.
* In 3 dimensions, we break the enumeration of all size-k subsets, by providing an n^O(sqrt(k)) algorithm.
* For any constant dimension d, we give an efficient polynomial-time approximation scheme.
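
To make the objective concrete, here is a small sketch in dimension 2, where the problem is polynomial-time solvable: the union volume (area) of anchored boxes can be computed by a simple sweep, and the baseline that the n^O(sqrt(k)) algorithm improves upon in dimension 3 is plain enumeration of all size-k subsets. Function names are illustrative.

```python
from itertools import combinations

def union_area_anchored(boxes):
    """Area of the union of 2D boxes [0, x] x [0, y]; sweep over boxes by decreasing x."""
    area, max_y = 0.0, 0.0
    for x, y in sorted(boxes, reverse=True):
        if y > max_y:
            area += x * (y - max_y)
            max_y = y
    return area

def best_k_subset(boxes, k):
    """Brute-force hypervolume subset selection: the size-k subset of maximum union area."""
    return max(combinations(boxes, k), key=union_area_anchored)

# Example: for k = 2, the two "extreme" boxes beat any pair involving the middle one.
boxes = [(4, 1), (2, 2), (1, 4)]
best = best_k_subset(boxes, 2)
print(best, union_area_anchored(best))
```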

Karl Bringmann, Sergio Cabello, and Michael T. M. Emmerich. Maximum Volume Subset Selection for Anchored Boxes. In 33rd International Symposium on Computational Geometry (SoCG 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 77, pp. 22:1-22:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2017.22, author = {Bringmann, Karl and Cabello, Sergio and Emmerich, Michael T. M.}, title = {{Maximum Volume Subset Selection for Anchored Boxes}}, booktitle = {33rd International Symposium on Computational Geometry (SoCG 2017)}, pages = {22:1--22:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-038-5}, ISSN = {1868-8969}, year = {2017}, volume = {77}, editor = {Aronov, Boris and Katz, Matthew J.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2017.22}, URN = {urn:nbn:de:0030-drops-72011}, doi = {10.4230/LIPIcs.SoCG.2017.22}, annote = {Keywords: geometric optimization, subset selection, hypervolume indicator, Klee’s measure problem, boxes, NP-hardness, PTAS} }

Document

**Published in:** LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)

We study the complexity of the Hitting Set problem in set systems (hypergraphs) that avoid certain sub-structures. In particular, we characterize the classical and parameterized complexity of the problem when the Vapnik-Chervonenkis dimension (VC-dimension) of the input is small.
VC-dimension is a natural measure of complexity of set systems. Several tractable instances of Hitting Set with a geometric or graph-theoretical flavor are known to have low VC-dimension. In set systems of bounded VC-dimension, Hitting Set is known to admit efficient and almost optimal approximation algorithms (Brönnimann and Goodrich, 1995; Even, Rawitz, and Shahar, 2005; Agarwal and Pan, 2014).
In contrast to these approximation results, a low VC-dimension does not necessarily imply tractability in the parameterized sense. In fact, we show that Hitting Set is W[1]-hard already on inputs with VC-dimension 2, even if the VC-dimension of the dual set system is also 2. Thus, Hitting Set is very unlikely to be fixed-parameter tractable even in this arguably simple case. This answers an open question raised by King in 2010. For set systems whose (primal or dual) VC-dimension is 1, we show that Hitting Set is solvable in polynomial time.
To bridge the gap in complexity between the classes of inputs with VC-dimension 1 and 2, we use a measure that is more fine-grained than VC-dimension. In terms of this measure, we identify a sharp threshold where the complexity of Hitting Set transitions from polynomial-time-solvable to NP-hard. The tractable class that lies just under the threshold is a generalization of Edge Cover, and thus extends the domain of polynomial-time tractability of Hitting Set.

Karl Bringmann, László Kozma, Shay Moran, and N. S. Narayanaswamy. Hitting Set for Hypergraphs of Low VC-dimension. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 23:1-23:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ESA.2016.23, author = {Bringmann, Karl and Kozma, L\'{a}szl\'{o} and Moran, Shay and Narayanaswamy, N. S.}, title = {{Hitting Set for Hypergraphs of Low VC-dimension}}, booktitle = {24th Annual European Symposium on Algorithms (ESA 2016)}, pages = {23:1--23:18}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-015-6}, ISSN = {1868-8969}, year = {2016}, volume = {57}, editor = {Sankowski, Piotr and Zaroliagis, Christos}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.23}, URN = {urn:nbn:de:0030-drops-63749}, doi = {10.4230/LIPIcs.ESA.2016.23}, annote = {Keywords: hitting set, VC-dimension} }

Document

**Published in:** LIPIcs, Volume 34, 31st International Symposium on Computational Geometry (SoCG 2015)

The Fréchet distance is a popular and widespread distance measure for point sequences and for curves. About two years ago, Agarwal et al. [SIAM J. Comput. 2014] presented a new (mildly) subquadratic algorithm for the discrete version of the problem. This spawned a flurry of activity that has led to several new algorithms and lower bounds.
In this paper, we study the approximability of the discrete Fréchet distance. Building on a recent result by Bringmann [FOCS 2014], we present a new conditional lower bound showing that strongly subquadratic algorithms for the discrete Fréchet distance are unlikely to exist, even in the one-dimensional case and even if the solution may be approximated up to a factor of 1.399.
This raises the question of how well we can approximate the Fréchet distance (of two given d-dimensional point sequences of length n) in strongly subquadratic time. Previously, no general results were known. We present the first such algorithm by showing that a simple, linear-time greedy algorithm has approximation ratio 2^Theta(n). Moreover, we design an alpha-approximation algorithm that runs in time O(n log n + n^2 / alpha), for any alpha in [1, n]. Hence, an n^epsilon-approximation of the Fréchet distance can be computed in strongly subquadratic time, for any epsilon > 0.
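
One natural reading of the simple linear-time greedy algorithm referred to above is the following: walk both sequences from front to back, always taking the advance that brings the current pair of points closest, and report the largest distance seen. The sketch below follows this reading (the tie-breaking rule is an assumption); its value is always an upper bound on the discrete Fréchet distance but, as stated above, it can overshoot by a factor of 2^Theta(n).

```python
import math

def greedy_frechet(A, B):
    """Linear-time greedy upper bound on the discrete Frechet distance of A and B.

    At each step, advance in A, in B, or in both, whichever minimizes the distance
    of the new pair of points, and record the largest distance encountered.
    """
    i, j = 0, 0
    answer = math.dist(A[0], B[0])
    while i < len(A) - 1 or j < len(B) - 1:
        candidates = []
        if i < len(A) - 1:
            candidates.append((math.dist(A[i + 1], B[j]), i + 1, j))
        if j < len(B) - 1:
            candidates.append((math.dist(A[i], B[j + 1]), i, j + 1))
        if i < len(A) - 1 and j < len(B) - 1:
            candidates.append((math.dist(A[i + 1], B[j + 1]), i + 1, j + 1))
        d, i, j = min(candidates)
        answer = max(answer, d)
    return answer

# The greedy value is an upper bound; on these parallel segments it happens to be exact (1.0).
print(greedy_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))
```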

Karl Bringmann and Wolfgang Mulzer. Approximability of the Discrete Fréchet Distance. In 31st International Symposium on Computational Geometry (SoCG 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 34, pp. 739-753, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.SOCG.2015.739, author = {Bringmann, Karl and Mulzer, Wolfgang}, title = {{Approximability of the Discrete Fr\'{e}chet Distance}}, booktitle = {31st International Symposium on Computational Geometry (SoCG 2015)}, pages = {739--753}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-83-5}, ISSN = {1868-8969}, year = {2015}, volume = {34}, editor = {Arge, Lars and Pach, J\'{a}nos}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SOCG.2015.739}, URN = {urn:nbn:de:0030-drops-51072}, doi = {10.4230/LIPIcs.SOCG.2015.739}, annote = {Keywords: Fr\'{e}chet distance, approximation, lower bounds, Strong Exponential Time Hypothesis} }

Document

**Published in:** LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)

We consider the Steiner Multicut problem, which asks, given an undirected graph G, a collection T = {T_1, ..., T_t} of terminal sets T_i subseteq V(G), each of size at most p, and an integer k, whether there is a set S of at most k edges or nodes such that for each set T_i at least one pair of terminals lies in different connected components of G \ S. This problem generalizes several well-studied graph cut problems, in particular the Multicut problem, which corresponds to the case p = 2. The Multicut problem was recently shown to be fixed-parameter tractable for parameter k [Marx and Razgon, Bousquet et al., STOC 2011]. The question whether this result generalizes to Steiner Multicut motivates the present work.
We answer the question that motivated this work, and in fact provide a dichotomy of the parameterized complexity of Steiner Multicut on general graphs. That is, for any combination of k, t, p, and the treewidth tw(G) as constant, parameter, or unbounded, and for all versions of the problem (edge deletion and node deletion with and without deletable terminals), we prove either that the problem is fixed-parameter tractable or that the problem is hard (W[1]-hard or even (para-)NP-complete). Among the many results in the paper, we highlight that:
- The edge deletion version of Steiner Multicut is fixed-parameter tractable for parameter k+t on general graphs (but has no polynomial kernel, even on trees).
- In contrast, both node deletion versions of Steiner Multicut are W[1]-hard for the parameter k+t on general graphs.
- All versions of Steiner Multicut are W[1]-hard for the parameter k, even when p=3 and the graph is a tree plus one node.
Since we allow k, t, p, and tw(G) to be any constants, our characterization includes a dichotomy for Steiner Multicut on trees (for tw(G) = 1) as well as a polynomial time versus NP-hardness dichotomy (by restricting k,t,p,tw(G) to constant or unbounded).

Karl Bringmann, Danny Hermelin, Matthias Mnich, and Erik Jan van Leeuwen. Parameterized Complexity Dichotomy for Steiner Multicut. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 157-170, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.STACS.2015.157, author = {Bringmann, Karl and Hermelin, Danny and Mnich, Matthias and van Leeuwen, Erik Jan}, title = {{Parameterized Complexity Dichotomy for Steiner Multicut}}, booktitle = {32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)}, pages = {157--170}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-78-1}, ISSN = {1868-8969}, year = {2015}, volume = {30}, editor = {Mayr, Ernst W. and Ollinger, Nicolas}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.157}, URN = {urn:nbn:de:0030-drops-49115}, doi = {10.4230/LIPIcs.STACS.2015.157}, annote = {Keywords: graph cut problems, Steiner cut, fixed-parameter tractability} }

Document

**Published in:** LIPIcs, Volume 25, 31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014)

We study a natural process for allocating m balls into n bins that are organized as the vertices of an undirected graph G. Balls arrive one at a time. When a ball arrives, it first chooses a vertex u in G uniformly at random. Then the ball performs a local search in G starting from u until it reaches a vertex with locally minimum load, where the ball is finally placed. Then the next ball arrives and this procedure is repeated. For the case m = n, we give an upper bound for the maximum load on graphs with bounded degrees. We also propose the study of the cover time of this process, which is defined as the smallest m so that every bin has at least one ball allocated to it. We establish an upper bound for the cover time on graphs with bounded degrees. Our bounds for the maximum load and the cover time are tight when the graph is vertex transitive or sufficiently homogeneous. We also give upper bounds for the maximum load when m >= n.
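
A direct simulation of this allocation process is straightforward and may help fix the definitions. The sketch below assumes the graph is given as adjacency lists and, as one concrete tie-breaking rule, always moves to a least-loaded neighbor (the process only requires moving to some neighbor of strictly smaller load).

```python
import random

def balls_into_bins_local_search(adj, m, seed=None):
    """Simulate the allocation process described above on a graph.

    adj: adjacency lists of an undirected graph on n vertices (the bins).
    Each of the m balls picks a uniformly random start vertex, walks to a neighbor
    with strictly smaller load as long as one exists, and is placed at the locally
    minimum vertex it reaches.  Returns the resulting load vector.
    """
    rng = random.Random(seed)
    n = len(adj)
    load = [0] * n
    for _ in range(m):
        u = rng.randrange(n)
        while True:
            best = min(adj[u], key=lambda v: load[v], default=u)
            if adj[u] and load[best] < load[u]:
                u = best
            else:
                break                        # u has locally minimum load
        load[u] += 1
    return load

# Example: a cycle on 8 vertices with m = n balls.
cycle = [[(i - 1) % 8, (i + 1) % 8] for i in range(8)]
print(balls_into_bins_local_search(cycle, 8, seed=1))
```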

Karl Bringmann, Thomas Sauerwald, Alexandre Stauffer, and He Sun. Balls into bins via local search: cover time and maximum load. In 31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 25, pp. 187-198, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)

Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.STACS.2014.187, author = {Bringmann, Karl and Sauerwald, Thomas and Stauffer, Alexandre and Sun, He}, title = {{Balls into bins via local search: cover time and maximum load}}, booktitle = {31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014)}, pages = {187--198}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-65-1}, ISSN = {1868-8969}, year = {2014}, volume = {25}, editor = {Mayr, Ernst W. and Portier, Natacha}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2014.187}, URN = {urn:nbn:de:0030-drops-44570}, doi = {10.4230/LIPIcs.STACS.2014.187}, annote = {Keywords: Balls and Bins, Stochastic Process, Randomized Algorithm} }
