18 Search Results for "Däubel, Karl"


Document
Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4

Authors: Egor Gorbachev and Marvin Künnemann

Published in: LIPIcs, Volume 258, 39th International Symposium on Computational Geometry (SoCG 2023)


Abstract
Klee’s measure problem (computing the volume of the union of n axis-parallel boxes in ℝ^d) is well known to have n^{d/2 ± o(1)}-time algorithms (Overmars and Yap, SICOMP'91; Chan, FOCS'13). Only recently, a conditional lower bound (without any restriction to "combinatorial" algorithms) could be shown for d = 3 (Künnemann, FOCS'22). Can this result be extended to a tight lower bound for dimensions d ≥ 4? In this paper, we formalize the technique of the tight lower bound for d = 3 using a combinatorial object we call prefix covering design. We show that these designs, which are related in spirit to combinatorial designs, directly translate to conditional lower bounds for Klee’s measure problem and various related problems. By devising good prefix covering designs, we give the following lower bounds for Klee’s measure problem in ℝ^d, the depth problem for axis-parallel boxes in ℝ^d, the largest-volume/max-perimeter empty (anchored) box problem in ℝ^{2d}, and related problems, assuming the 3-uniform hyperclique hypothesis:
- Ω(n^{1.90476}) for d = 4,
- Ω(n^{2.22222}) for d = 5,
- Ω(n^{d/3 + 2√d/9 - o(√d)}) for general d.
For Klee’s measure problem and the depth problem, these bounds improve the previous lower bounds of Ω(n^{1.777...}), Ω(n^{2.0833...}), and Ω(n^{d/3 + 1/3 + Θ(1/d)}), respectively. Our improved prefix covering designs were obtained by (1) exploiting a computer-aided search using problem-specific insights as well as SAT solvers, and (2) showing how to transform combinatorial covering designs known in the literature to strong prefix covering designs. In contrast, we show that our lower bounds are close to best possible using this proof technique.
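To make the problem statement concrete, here is a minimal brute-force sketch (our illustration, not from the paper): it computes the volume of a union of axis-parallel boxes by coordinate compression, visiting O(n^d) grid cells, which is far above the n^{d/2 ± o(1)} algorithms cited above but enough to pin down what is being computed. All identifiers are ours.

from itertools import product

def union_volume(boxes):
    # boxes: list of (lows, highs) pairs, each a tuple of d coordinates
    d = len(boxes[0][0])
    # Coordinate compression: candidate cell boundaries per dimension.
    grids = [sorted({b[0][i] for b in boxes} | {b[1][i] for b in boxes})
             for i in range(d)]
    vol = 0.0
    for cell in product(*(range(len(g) - 1) for g in grids)):
        lo = tuple(grids[i][cell[i]] for i in range(d))
        hi = tuple(grids[i][cell[i] + 1] for i in range(d))
        # Each grid cell is either fully inside or fully outside every box.
        if any(all(b[0][i] <= lo[i] and hi[i] <= b[1][i] for i in range(d))
               for b in boxes):
            side = 1.0
            for i in range(d):
                side *= hi[i] - lo[i]
            vol += side
    return vol

# Two overlapping 2x2 squares: area 4 + 4 - 1 = 7.
print(union_volume([((0, 0), (2, 2)), ((1, 1), (3, 3))]))  # 7.0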

Cite as

Egor Gorbachev and Marvin Künnemann. Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4. In 39th International Symposium on Computational Geometry (SoCG 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 258, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{gorbachev_et_al:LIPIcs.SoCG.2023.36,
  author =	{Gorbachev, Egor and K\"{u}nnemann, Marvin},
  title =	{{Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4}},
  booktitle =	{39th International Symposium on Computational Geometry (SoCG 2023)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-273-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{258},
  editor =	{Chambers, Erin W. and Gudmundsson, Joachim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2023.36},
  URN =		{urn:nbn:de:0030-drops-178861},
  doi =		{10.4230/LIPIcs.SoCG.2023.36},
  annote =	{Keywords: Fine-grained complexity theory, non-combinatorial lower bounds, computational geometry, clique detection}
}
Document
Reflections on the Birth of Spoofax

Authors: Karl Trygve Kalleberg

Published in: OASIcs, Volume 109, Eelco Visser Commemorative Symposium (EVCS 2023)


Abstract
The Spoofax Language Workbench is one of the many successful research projects to come out of Eelco’s tremendously productive career, as evidenced by the Most Influential Paper designation awarded at OOPSLA 2020 to the famous 2010 paper by Lennart and Eelco [Lennart C. L. Kats and Eelco Visser, 2010]. In connection with the award, Eelco wrote an excellent retrospective [Eelco Visser, 2021]. Here, I will reflect briefly on the birth of Spoofax, subject to omissions and inaccuracies, as the topic may be of interest to hardcore fans. Eelco was my PhD co-supervisor, and I was fortunate to be part of his group during the inception of the project.

Cite as

Karl Trygve Kalleberg. Reflections on the Birth of Spoofax. In Eelco Visser Commemorative Symposium (EVCS 2023). Open Access Series in Informatics (OASIcs), Volume 109, pp. 14:1-14:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{kalleberg:OASIcs.EVCS.2023.14,
  author =	{Kalleberg, Karl Trygve},
  title =	{{Reflections on the Birth of Spoofax}},
  booktitle =	{Eelco Visser Commemorative Symposium (EVCS 2023)},
  pages =	{14:1--14:4},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-267-9},
  ISSN =	{2190-6807},
  year =	{2023},
  volume =	{109},
  editor =	{L\"{a}mmel, Ralf and Mosses, Peter D. and Steimann, Friedrich},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.EVCS.2023.14},
  URN =		{urn:nbn:de:0030-drops-177842},
  doi =		{10.4230/OASIcs.EVCS.2023.14},
  annote =	{Keywords: Spoofax, Language Workbench, Program Transformation}
}
Document
Computing Generalized Convolutions Faster Than Brute Force

Authors: Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
In this paper, we consider a general notion of convolution. Let D be a finite domain and let Dⁿ be the set of n-length vectors (tuples) over D. Let f : D × D → D be a function and let ⊕_f be the coordinate-wise application of f. The f-Convolution of two functions g, h : Dⁿ → {-M,…,M} is (g ⊛_f h)(v) := ∑_{v_g, v_h ∈ Dⁿ s.t. v = v_g ⊕_f v_h} g(v_g) ⋅ h(v_h) for every v ∈ Dⁿ. This problem generalizes many fundamental convolutions, such as Subset Convolution, XOR Product, Covering Product, and Packing Product. For an arbitrary function f and domain D, we can compute the f-Convolution via brute-force enumeration in 𝒪̃(|D|^{2n} ⋅ polylog(M)) time. Our main result is an improvement over this naive algorithm. We show that the f-Convolution can be computed exactly in 𝒪̃((c ⋅ |D|²)ⁿ ⋅ polylog(M)) time for the constant c := 5/6 when D has even cardinality. Our main observation is that a cyclic partition of a function f : D × D → D can be used to speed up the computation of the f-Convolution, and we show that an appropriate cyclic partition exists for every f. Furthermore, we demonstrate that a single entry of the f-Convolution can be computed more efficiently. In this variant, we are given two functions g, h : Dⁿ → {-M,…,M} along with a vector v ∈ Dⁿ, and the task of the f-Query problem is to compute the integer (g ⊛_f h)(v). This is a generalization of the well-known Orthogonal Vectors problem. We show that f-Query can be computed in 𝒪̃(|D|^{(ω/2)n} ⋅ polylog(M)) time, where ω ∈ [2, 2.373) is the exponent of the currently fastest matrix multiplication algorithm.
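A direct transcription of this definition (a sketch under our own naming, matching the 𝒪̃(|D|^{2n}) brute-force baseline the paper improves on):

from itertools import product

def f_convolution(f, g, h, D, n):
    # g, h: dicts from vectors in D^n (tuples) to integers in {-M,...,M}.
    result = {v: 0 for v in product(D, repeat=n)}
    for vg in product(D, repeat=n):          # |D|^n choices
        for vh in product(D, repeat=n):      # |D|^n choices
            v = tuple(f(a, b) for a, b in zip(vg, vh))  # v = vg (+)_f vh
            result[v] += g[vg] * h[vh]
    return result

# Example: D = {0,1} with f = OR yields the Covering Product.
D, n = (0, 1), 2
g = {v: 1 for v in product(D, repeat=n)}
h = {v: 1 for v in product(D, repeat=n)}
print(f_convolution(lambda a, b: a | b, g, h, D, n)[(1, 1)])  # 9 pairs OR to (1,1)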

Cite as

Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki. Computing Generalized Convolutions Faster Than Brute Force. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{esmer_et_al:LIPIcs.IPEC.2022.12,
  author =	{Esmer, Bar{\i}\c{s} Can and Kulik, Ariel and Marx, D\'{a}niel and Schepper, Philipp and W\k{e}grzycki, Karol},
  title =	{{Computing Generalized Convolutions Faster Than Brute Force}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.12},
  URN =		{urn:nbn:de:0030-drops-173685},
  doi =		{10.4230/LIPIcs.IPEC.2022.12},
  annote =	{Keywords: Generalized Convolution, Fast Fourier Transform, Fast Subset Convolution}
}
Document
Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, Dániel Marx, and André Nusser

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
The Dynamic Time Warping (DTW) distance is a popular measure of similarity for a variety of sequence data. For comparing polygonal curves π, σ in ℝ^d, it provides a robust, outlier-insensitive alternative to the Fréchet distance. However, like the Fréchet distance, the DTW distance is not invariant under translations. Can we efficiently optimize the DTW distance of π and σ under arbitrary translations, to compare the curves' shape irrespective of their absolute location? There are surprisingly few works in this direction, which may be due to its computational intricacy: For the Euclidean norm, this problem contains as a special case the geometric median problem, which provably admits no exact algebraic algorithm (that is, no algorithm using only addition, multiplication, and k-th roots). We thus investigate exact algorithms for non-Euclidean norms as well as approximation algorithms for the Euclidean norm. For the L₁ norm in ℝ^d, we provide an 𝒪(n^{2(d+1)})-time algorithm, i.e., an exact polynomial-time algorithm for constant d. Here and below, n bounds the curves' complexities. For the Euclidean norm in ℝ², we show that a simple problem-specific insight leads to a (1+ε)-approximation in time 𝒪(n³/ε²). We then show how to obtain a subcubic 𝒪̃(n^{2.5}/ε²) time algorithm with significant new ideas; this time comes close to the well-known quadratic time barrier for computing DTW for fixed translations. Technically, the algorithm is obtained by speeding up repeated DTW distance estimations using a dynamic data structure for maintaining shortest paths in weighted planar digraphs. Crucially, we show how to traverse a candidate set of translations using space-filling curves in a way that incurs only few updates to the data structure. We hope that our results will facilitate the use of DTW under translation both in theory and practice, and inspire similar algorithmic approaches for related geometric optimization problems.
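For orientation, the quadratic time barrier mentioned above refers to the classic dynamic program for DTW at a fixed translation; a minimal sketch (ours, not the paper's data-structure machinery) follows. Optimizing under translation would additionally minimize dtw(P, Q shifted by τ) over candidate translations τ.

import math

def dtw(P, Q):
    # Classic O(n*m) dynamic program for the DTW distance of two
    # point sequences under the Euclidean point distance.
    n, m = len(P), len(Q)
    dp = [[math.inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(P[i - 1], Q[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # advance in P only
                                  dp[i][j - 1],      # advance in Q only
                                  dp[i - 1][j - 1])  # advance in both
    return dp[n][m]

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(dtw(P, Q))  # 3.0: each point pairs with the point directly above it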

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, Dániel Marx, and André Nusser. Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 20:1-20:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.20,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Marx, D\'{a}niel and Nusser, Andr\'{e}},
  title =	{{Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{20:1--20:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.20},
  URN =		{urn:nbn:de:0030-drops-160287},
  doi =		{10.4230/LIPIcs.SoCG.2022.20},
  annote =	{Keywords: Dynamic Time Warping, Sequence Similarity Measures}
}
Document
Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
We initiate the study of diameter computation in geometric intersection graphs from the fine-grained complexity perspective. A geometric intersection graph is a graph whose vertices correspond to some shapes in d-dimensional Euclidean space, such as balls, segments, or hypercubes, and whose edges correspond to pairs of intersecting shapes. The diameter of a graph is the largest distance realized by a pair of vertices in the graph. Computing the diameter in near-quadratic time is possible in several classes of intersection graphs [Chan and Skrepetos 2019], but it is not at all clear if these algorithms are optimal, especially since in the related class of planar graphs the diameter can be computed in 𝒪̃(n^{5/3}) time [Cabello 2019, Gawrychowski et al. 2021]. In this work we (conditionally) rule out sub-quadratic algorithms in several classes of intersection graphs, i.e., algorithms of running time 𝒪(n^{2-δ}) for some δ > 0. In particular, there are no sub-quadratic algorithms already for fat objects in small dimensions: unit balls in ℝ³ or congruent equilateral triangles in ℝ². For unit segments and congruent equilateral triangles, we can even rule out strong sub-quadratic approximations already in ℝ². It seems that the hardness of approximation may also depend on dimensionality: for axis-parallel unit hypercubes in ℝ^{12}, distinguishing between diameter 2 and 3 needs quadratic time (ruling out (3/2-ε)-approximations), whereas for axis-parallel unit squares, we give an algorithm that distinguishes between diameter 2 and 3 in near-linear time. Note that many of our lower bounds match the best known algorithms up to sub-polynomial factors. Ultimately, this fine-grained perspective may enable us to determine for which shapes we can have efficient algorithms and approximation schemes for diameter computation.
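As a baseline for the diameter problem discussed here, a naive sketch (ours, not from the paper): materialize the intersection graph of unit disks explicitly in O(n²) time, then compute the diameter by BFS from every vertex. The lower bounds above say that for several shape classes one cannot do substantially better than quadratic.

from collections import deque

def diameter_unit_disks(centers):
    n = len(centers)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if dx * dx + dy * dy <= 4:  # radius-1 disks intersect iff dist <= 2
                adj[i].append(j)
                adj[j].append(i)
    diam = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    q.append(v)
        diam = max(diam, max(dist))  # assumes the graph is connected
    return diam

print(diameter_unit_disks([(0, 0), (1.5, 0), (3, 0)]))  # 2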

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian. Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 21:1-21:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.21,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Nusser, Andr\'{e} and Parsaeian, Zahra},
  title =	{{Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{21:1--21:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.21},
  URN =		{urn:nbn:de:0030-drops-160294},
  doi =		{10.4230/LIPIcs.SoCG.2022.21},
  annote =	{Keywords: Hardness in P, Geometric Intersection Graph, Graph Diameter, Orthogonal Vectors, Hyperclique Detection}
}
Document
Hitting Sets and Reconstruction for Dense Orbits in VP_{e} and ΣΠΣ Circuits

Authors: Dori Medini and Amir Shpilka

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
In this paper we study polynomials in VP_e (polynomial-sized formulas) and in ΣΠΣ (polynomial-size depth-3 circuits) whose orbits, under the action of the affine group GL^{aff}_n(𝔽) (the action of (A,b) ∈ GL^{aff}_n(𝔽) on a polynomial f ∈ 𝔽[x] is defined as (A,b)∘f = f(A^T x + b)), are dense in their ambient class. We construct hitting sets and interpolating sets for these orbits as well as give reconstruction algorithms. Specifically, we obtain the following results:
1) For C_n(ℓ₁(x),…,ℓ_n(x)) ≜ Trace(M(ℓ₁) ⋅ … ⋅ M(ℓ_n)), where M(ℓ) denotes the 2×2 matrix with rows (ℓ(x), 1) and (1, 0), and the ℓ_i are linearly independent linear functions, we construct a polynomial-sized interpolating set and give a polynomial-time reconstruction algorithm (a small evaluation sketch follows this abstract). By a result of Bringmann, Ikenmeyer and Zuiddam, the set of all such polynomials is dense in VP_e [Karl Bringmann et al., 2018], thus our construction gives the first polynomial-size interpolating set for a dense subclass of VP_e.
2) For polynomials of the form ANF_Δ(ℓ₁(x),…,ℓ_{4^Δ}(x)), where ANF_Δ(x) is the canonical read-once formula in alternating normal form of depth 2Δ, and the ℓ_i are linearly independent linear functions, we provide a quasipolynomial-size interpolating set. We also observe that the reconstruction algorithm of [Ankit Gupta et al., 2014] works for all polynomials in this class. This class is also dense in VP_e.
3) Similarly, we give a quasipolynomial-sized hitting set for read-once formulas (not necessarily in alternating normal form) composed with a set of linearly independent linear functions. This gives another dense class in VP_e.
4) We give a quasipolynomial-sized hitting set for polynomials of the form f(ℓ₁(x),…,ℓ_m(x)), where f is an m-variate s-sparse polynomial and the ℓ_i are linearly independent linear functions in n ≥ m variables. This class is dense in ΣΠΣ.
5) For polynomials of the form ∑_{i=1}^{s} ∏_{j=1}^{d} ℓ_{i,j}(x), where the ℓ_{i,j} are linearly independent linear functions, we construct a polynomial-sized interpolating set. We also observe that the reconstruction algorithm of [Neeraj Kayal and Chandan Saha, 2019] works for every polynomial in the class. This class is dense in ΣΠΣ.
As VP = VNC², our results for VP_e translate immediately to VP with a quasipolynomial blow-up in parameters. If any of our hitting or interpolating sets could be made robust, then this would immediately yield a hitting set for the superclass in which the relevant class is dense, and as a consequence also a lower bound for the superclass. Unfortunately, we also prove that the kind of constructions we have found (which are defined in terms of k-independent polynomial maps) do not necessarily yield robust hitting sets.
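To illustrate item 1), here is a small sketch (ours, not from the paper) that evaluates C_n at a point by multiplying out the 2×2 matrices M(ℓ_i) and taking the trace; ensuring the ℓ_i are linearly independent is the caller's responsibility.

def C_n_at(linear_forms, x):
    # linear_forms: list of functions l_i mapping a point x to a field element
    a, b, c, d = 1, 0, 0, 1  # running 2x2 product, starting from the identity
    for l in linear_forms:
        v = l(x)
        # Right-multiply (a b; c d) by (v 1; 1 0).
        a, b = a * v + b, a
        c, d = c * v + d, c
    return a + d  # trace of the product

# Example with l_1 = x1, l_2 = x2 at the point (2, 3):
# Trace([[2,1],[1,0]] . [[3,1],[1,0]]) = Trace([[7,2],[3,1]]) = 8.
print(C_n_at([lambda x: x[0], lambda x: x[1]], (2, 3)))  # 8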

Cite as

Dori Medini and Amir Shpilka. Hitting Sets and Reconstruction for Dense Orbits in VP_{e} and ΣΠΣ Circuits. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 19:1-19:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{medini_et_al:LIPIcs.CCC.2021.19,
  author =	{Medini, Dori and Shpilka, Amir},
  title =	{{Hitting Sets and Reconstruction for Dense Orbits in VP\underline{e} and \Sigma\Pi\Sigma Circuits}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{19:1--19:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.19},
  URN =		{urn:nbn:de:0030-drops-142930},
  doi =		{10.4230/LIPIcs.CCC.2021.19},
  annote =	{Keywords: Algebraic complexity, VP, VNP, algebraic circuits, algebraic formula}
}
Document
Approximating the (Continuous) Fréchet Distance

Authors: Connor Colombe and Kyle Fox

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
We describe the first strongly subquadratic time algorithm with subexponential approximation ratio for approximately computing the Fréchet distance between two polygonal chains. Specifically, let P and Q be two polygonal chains with n vertices in d-dimensional Euclidean space, and let α ∈ [√n, n]. Our algorithm deterministically finds an O(α)-approximate Fréchet correspondence in time O((n³ / α²) log n). In particular, we get an O(n)-approximation in near-linear O(n log n) time, a vast improvement over the previously best known result, a linear-time 2^O(n)-approximation. As part of our algorithm, we also describe how to turn any approximate decision procedure for the Fréchet distance into an approximate optimization algorithm whose approximation ratio is the same up to arbitrarily small constant factors. The transformation into an approximate optimization algorithm increases the running time of the decision procedure by only an O(log n) factor.
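The decision-to-optimization step can be pictured with a generic geometric binary search. The sketch below is our simplification under stated assumptions (an oracle dec such that dec(δ) is True whenever the true distance is at most δ and False whenever it exceeds α·δ), not the paper's more refined transformation; the naive version below pays O(log(range/ε)) oracle calls rather than the O(log n) overhead achieved in the paper.

def approx_optimize(dec, lo, hi, eps):
    # Invariant: dec(lo) is False and dec(hi) is True, so the true
    # optimum lies in the interval (lo, alpha * hi].
    while hi / lo > 1 + eps:
        mid = (lo * hi) ** 0.5  # geometric midpoint
        if dec(mid):
            hi = mid
        else:
            lo = mid
    return hi  # optimum is in (hi / (1 + eps), alpha * hi]

# Toy decider with alpha = 2 around a hidden optimum of 5.0:
opt, alpha = 5.0, 2.0
dec = lambda d: alpha * d >= opt  # one valid behaviour of a 2-approx decider
print(approx_optimize(dec, 1e-3, 1e3, 1e-2))  # about 2.5; optimum 5.0 <= 2 * output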

Cite as

Connor Colombe and Kyle Fox. Approximating the (Continuous) Fréchet Distance. In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 26:1-26:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{colombe_et_al:LIPIcs.SoCG.2021.26,
  author =	{Colombe, Connor and Fox, Kyle},
  title =	{{Approximating the (Continuous) Fr\'{e}chet Distance}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{26:1--26:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.26},
  URN =		{urn:nbn:de:0030-drops-138259},
  doi =		{10.4230/LIPIcs.SoCG.2021.26},
  annote =	{Keywords: Fr\'{e}chet distance, approximation algorithm, approximate decision procedure}
}
Document
Track A: Algorithms, Complexity and Games
Faster Minimization of Tardy Processing Time on a Single Machine

Authors: Karl Bringmann, Nick Fischer, Danny Hermelin, Dvir Shabtay, and Philip Wellnitz

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
This paper is concerned with the 1||∑ p_jU_j problem, the problem of minimizing the total processing time of tardy jobs on a single machine. This is not only a fundamental scheduling problem, but also a very important problem from a theoretical point of view, as it generalizes the Subset Sum problem and is closely related to the 0/1-Knapsack problem. The problem is well known to be NP-hard, but only in a weak sense, meaning it admits pseudo-polynomial time algorithms. The fastest known pseudo-polynomial time algorithm for the problem is the famous Lawler and Moore algorithm, which runs in O(P ⋅ n) time, where P is the total processing time of all n jobs in the input. This algorithm was developed in the late 1960s and has yet to be improved to date. In this paper we develop two new algorithms for 1||∑ p_jU_j, each improving on Lawler and Moore’s algorithm in a different scenario:
- Our first algorithm runs in Õ(P^{7/4}) time, and outperforms Lawler and Moore’s algorithm in instances where n = ω̃(P^{3/4}).
- Our second algorithm runs in Õ(min{P ⋅ D_#, P + D}) time, where D_# is the number of different due dates in the instance, and D is the sum of all different due dates. This algorithm improves on Lawler and Moore’s algorithm when n = ω̃(D_#) or n = ω̃(D/P). Further, it extends the known Õ(P) algorithm for the single-due-date special case of 1||∑ p_jU_j in a natural way.
Both algorithms rely on basic primitive operations between sets of integers and vectors of integers for the speedup in their running times. The second algorithm relies on fast polynomial multiplication as its main engine, while for the first algorithm we define a new "skewed" version of (max,min)-convolution which is interesting in its own right.
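For context, a compact rendering (ours, textbook form, not the paper's new algorithms) of the Lawler and Moore O(P ⋅ n) dynamic program mentioned above: process jobs in earliest-due-date order and mark which total processing times of on-time jobs are achievable.

def min_tardy_processing_time(jobs):
    # jobs: list of (processing_time, due_date) with positive integers
    jobs = sorted(jobs, key=lambda j: j[1])  # earliest due date first
    P = sum(p for p, _ in jobs)
    feasible = [False] * (P + 1)  # feasible[t]: some on-time set has total time t
    feasible[0] = True
    for p, d in jobs:
        # Descending t avoids scheduling the same job twice; t + p <= d
        # keeps the job on time when appended last among the early jobs.
        for t in range(min(d, P) - p, -1, -1):
            if feasible[t]:
                feasible[t + p] = True
    best_on_time = max(t for t in range(P + 1) if feasible[t])
    return P - best_on_time  # total processing time of tardy jobs

print(min_tardy_processing_time([(2, 3), (2, 3), (3, 7)]))  # 2: one due-3 job is tardy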

Cite as

Karl Bringmann, Nick Fischer, Danny Hermelin, Dvir Shabtay, and Philip Wellnitz. Faster Minimization of Tardy Processing Time on a Single Machine. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{bringmann_et_al:LIPIcs.ICALP.2020.19,
  author =	{Bringmann, Karl and Fischer, Nick and Hermelin, Danny and Shabtay, Dvir and Wellnitz, Philip},
  title =	{{Faster Minimization of Tardy Processing Time on a Single Machine}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.19},
  URN =		{urn:nbn:de:0030-drops-124269},
  doi =		{10.4230/LIPIcs.ICALP.2020.19},
  annote =	{Keywords: Weighted number of tardy jobs, sumsets, convolutions}
}
Document
On Geometric Set Cover for Orthants

Authors: Karl Bringmann, Sándor Kisfaludi-Bak, Michał Pilipczuk, and Erik Jan van Leeuwen

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
We study SET COVER for orthants: Given a set of points in d-dimensional Euclidean space and a set of orthants of the form (-infty, p_1] x ... x (-infty, p_d], select a minimum number of orthants so that every point is contained in at least one selected orthant. This problem draws its motivation from applications in multi-objective optimization problems. While for d = 2 the problem can be solved in polynomial time, for d > 2 no algorithm is known that avoids the enumeration of all size-k subsets of the input to test whether there is a set cover of size k. Our contribution is a precise understanding of the complexity of this problem in any dimension d >= 3, when k is considered a parameter:
- For d = 3, we give an algorithm with runtime n^O(sqrt{k}), thus avoiding exhaustive enumeration.
- For d = 3, we prove a tight lower bound of n^Omega(sqrt{k}) (assuming ETH).
- For d >= 4, we prove a tight lower bound of n^Omega(k) (assuming ETH).
Here n is the size of the set of points plus the size of the set of orthants. The first statement comes as a corollary of a more general result: an algorithm for SET COVER for half-spaces in dimension 3. In particular, we show that given a set of points U in R^3, a set of half-spaces D in R^3, and an integer k, one can decide whether U can be covered by the union of at most k half-spaces from D in time |D|^O(sqrt{k}) * |U|^O(1). We also study approximation for SET COVER for orthants. While in dimension 3 a PTAS can be inferred from existing results, we show that in dimension 4 and larger there is no 1.05-approximation algorithm with runtime f(k) * n^o(k) for any computable f, where k is the optimum.
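The containment test behind the problem is simple coordinate-wise dominance, and the n^O(k) baseline the paper improves on for d = 3 is plain subset enumeration; a small sketch (ours, identifiers illustrative):

from itertools import combinations

def covered(u, q):
    # u lies in the orthant (-inf, q_1] x ... x (-inf, q_d]
    return all(ui <= qi for ui, qi in zip(u, q))

def has_cover_of_size(points, orthants, k):
    # Brute force over all size-k subsets of the orthants.
    for subset in combinations(orthants, k):
        if all(any(covered(u, q) for q in subset) for u in points):
            return True
    return False

points = [(1, 1, 1), (3, 0, 0), (0, 3, 0)]
orthants = [(3, 1, 1), (1, 3, 1), (3, 3, 0)]
print(has_cover_of_size(points, orthants, 2))  # True: the first two orthants suffice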

Cite as

Karl Bringmann, Sándor Kisfaludi-Bak, Michał Pilipczuk, and Erik Jan van Leeuwen. On Geometric Set Cover for Orthants. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 26:1-26:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bringmann_et_al:LIPIcs.ESA.2019.26,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and Pilipczuk, Micha{\l} and van Leeuwen, Erik Jan},
  title =	{{On Geometric Set Cover for Orthants}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{26:1--26:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.26},
  URN =		{urn:nbn:de:0030-drops-111476},
  doi =		{10.4230/LIPIcs.ESA.2019.26},
  annote =	{Keywords: Set Cover, parameterized complexity, algorithms, Exponential Time Hypothesis}
}
Document
Polyline Simplification has Cubic Complexity

Authors: Karl Bringmann and Bhaskar Ray Chaudhury

Published in: LIPIcs, Volume 129, 35th International Symposium on Computational Geometry (SoCG 2019)


Abstract
In the classic polyline simplification problem we want to replace a given polygonal curve P, consisting of n vertices, by a subsequence P' of k vertices from P such that the polygonal curves P and P' are "close". Closeness is usually measured using the Hausdorff or Fréchet distance. These distance measures can be applied globally, i.e., to the whole curves P and P', or locally, i.e., to each simplified subcurve and the line segment that it was replaced with separately (and then taking the maximum). We provide an O(n^3) time algorithm for simplification under Global-Fréchet distance, improving the previous best algorithm by a factor of Omega(kn^2). We also provide evidence that in high dimensions cubic time is essentially optimal for all three problems (Local-Hausdorff, Local-Fréchet, and Global-Fréchet). Specifically, improving the cubic time to O(n^{3-epsilon} poly(d)) for polyline simplification over (R^d,L_p) for p = 1 would violate plausible conjectures. We obtain similar results for all p in [1,infty), p != 2. In total, in high dimensions and over general L_p-norms we resolve the complexity of polyline simplification with respect to Local-Hausdorff, Local-Fréchet, and Global-Fréchet, by providing new algorithms and conditional lower bounds.
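For intuition about the cubic regime, here is a classic Imai-Iri-style O(n³) sketch for the Local-Hausdorff variant (ours; the paper's O(n³) algorithm is for the harder Global-Fréchet measure): a shortcut from P[i] to P[j] is allowed if all skipped vertices lie within eps of the segment, and a DP selects the fewest vertices.

import math

def point_segment_dist(p, a, b):
    ax, ay = b[0] - a[0], b[1] - a[1]
    px, py = p[0] - a[0], p[1] - a[1]
    denom = ax * ax + ay * ay
    t = 0.0 if denom == 0 else max(0.0, min(1.0, (px * ax + py * ay) / denom))
    return math.dist(p, (a[0] + t * ax, a[1] + t * ay))

def simplify(P, eps):
    n = len(P)
    dp = [math.inf] * n  # dp[j]: fewest vertices simplifying P[0..j]
    dp[0] = 1
    back = [0] * n
    for j in range(1, n):
        for i in range(j):
            # Shortcut i -> j is valid if all skipped points stay within eps.
            if all(point_segment_dist(P[k], P[i], P[j]) <= eps
                   for k in range(i + 1, j)):
                if dp[i] + 1 < dp[j]:
                    dp[j], back[j] = dp[i] + 1, i
    out, j = [], n - 1
    while True:
        out.append(P[j])
        if j == 0:
            break
        j = back[j]
    return out[::-1]

print(simplify([(0, 0), (1, 0.1), (2, -0.1), (3, 0)], 0.2))  # [(0, 0), (3, 0)]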

Cite as

Karl Bringmann and Bhaskar Ray Chaudhury. Polyline Simplification has Cubic Complexity. In 35th International Symposium on Computational Geometry (SoCG 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 129, pp. 18:1-18:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bringmann_et_al:LIPIcs.SoCG.2019.18,
  author =	{Bringmann, Karl and Chaudhury, Bhaskar Ray},
  title =	{{Polyline Simplification has Cubic Complexity}},
  booktitle =	{35th International Symposium on Computational Geometry (SoCG 2019)},
  pages =	{18:1--18:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-104-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{129},
  editor =	{Barequet, Gill and Wang, Yusu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2019.18},
  URN =		{urn:nbn:de:0030-drops-104224},
  doi =		{10.4230/LIPIcs.SoCG.2019.18},
  annote =	{Keywords: Polyline simplification, Fr\'{e}chet distance, Hausdorff distance, Conditional lower bounds}
}
Document
Multivariate Analysis of Orthogonal Range Searching and Graph Distances

Authors: Karl Bringmann, Thore Husfeldt, and Måns Magnusson

Published in: LIPIcs, Volume 115, 13th International Symposium on Parameterized and Exact Computation (IPEC 2018)


Abstract
We show that the eccentricities, diameter, radius, and Wiener index of an undirected n-vertex graph with nonnegative edge lengths can be computed in time O(n * binom{k+ceil[log n]}{k} * 2^k k^2 log n), where k is the treewidth of the graph. For every epsilon>0, this bound is n^{1+epsilon}exp O(k), which matches a hardness result of Abboud, Vassilevska Williams, and Wang (SODA 2015) and closes an open problem in the multivariate analysis of polynomial-time computation. To this end, we show that the analysis of an algorithm of Cabello and Knauer (Comp. Geom., 2009) in the regime of non-constant treewidth can be improved by revisiting the analysis of orthogonal range searching, improving bounds of the form log^d n to binom{d+ceil[log n]}{d}, as originally observed by Monier (J. Alg. 1980). We also investigate the parameterization by vertex cover number.
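A quick numeric illustration (ours, not from the paper) of the improvement from log^d n to binom(d + ceil(log n), d) described above:

import math

def bound_pair(n, d):
    L = math.ceil(math.log2(n))
    return math.comb(d + L, d), math.log2(n) ** d

# The binomial bound grows far more slowly once d is non-constant.
for n, d in [(10**6, 2), (10**6, 5), (10**6, 10)]:
    b, l = bound_pair(n, d)
    print(f"n={n}, d={d}: binom={b}, log^d={l:.3g}")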

Cite as

Karl Bringmann, Thore Husfeldt, and Måns Magnusson. Multivariate Analysis of Orthogonal Range Searching and Graph Distances. In 13th International Symposium on Parameterized and Exact Computation (IPEC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 115, pp. 4:1-4:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bringmann_et_al:LIPIcs.IPEC.2018.4,
  author =	{Bringmann, Karl and Husfeldt, Thore and Magnusson, M\r{a}ns},
  title =	{{Multivariate Analysis of Orthogonal Range Searching and Graph Distances}},
  booktitle =	{13th International Symposium on Parameterized and Exact Computation (IPEC 2018)},
  pages =	{4:1--4:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-084-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{115},
  editor =	{Paul, Christophe and Pilipczuk, Michal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2018.4},
  URN =		{urn:nbn:de:0030-drops-102050},
  doi =		{10.4230/LIPIcs.IPEC.2018.4},
  annote =	{Keywords: Diameter, radius, Wiener index, orthogonal range searching, treewidth, vertex cover number}
}
Document
Flipping out with Many Flips: Hardness of Testing k-Monotonicity

Authors: Elena Grigorescu, Akash Kumar, and Karl Wimmer

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
A function f : {0,1}^n -> {0,1} is said to be k-monotone if it flips between 0 and 1 at most k times on every ascending chain. Such functions represent a natural generalization of (1-)monotone functions, and have been recently studied in circuit complexity, PAC learning, and cryptography. Our work is part of a renewed focus on understanding the testability of properties characterized by freeness of arbitrary order patterns, as a generalization of monotonicity. Recently, Canonne et al. (ITCS 2017) initiated the study of k-monotone functions in the area of property testing, and Newman et al. (SODA 2017) studied testability of families characterized by freeness from order patterns on real-valued functions over the line [n] domain. We study k-monotone functions in the more relaxed parametrized property testing model, introduced by Parnas et al. (JCSS, 72(6), 2006). In this process we resolve a problem left open in previous work. Specifically, our results include the following.
1) Testing 2-monotonicity on the hypercube non-adaptively with one-sided error requires a number of queries exponential in sqrt{n}. This behavior shows a stark contrast with testing (1-)monotonicity, which only needs O~(sqrt{n}) queries (Khot et al. (FOCS 2015)). Furthermore, even the apparently easier task of distinguishing 2-monotone functions from functions that are far from being n^{.01}-monotone also requires an exponential number of queries.
2) On the hypercube [n]^d domain, there exists a testing algorithm that makes a constant number of queries and distinguishes functions that are k-monotone from functions that are far from being O(kd^2)-monotone. Such a dependency is likely necessary, given the lower bound above for the hypercube.
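A brute-force check of the definition (ours, exponential in n, for intuition only): since a subsequence of a 0/1 sequence has at most as many alternations as the full sequence, it suffices to track flips along cover edges, so a DP in order of popcount computes the maximum number of flips on any ascending chain.

def max_flips(f, n):
    # f: dict mapping each bitmask in [0, 2^n) to 0 or 1.
    best = [0] * (1 << n)  # best[x]: max flips on a chain from 0...0 to x
    for x in sorted(range(1 << n), key=lambda m: bin(m).count("1")):
        for i in range(n):
            if x >> i & 1:
                y = x ^ (1 << i)  # y is covered by x in the hypercube poset
                best[x] = max(best[x], best[y] + (f[x] != f[y]))
    return max(best)  # f is k-monotone iff this is at most k

# Parity on {0,1}^2 flips on every edge: 2-monotone but not 1-monotone.
f = {x: bin(x).count("1") % 2 for x in range(4)}
print(max_flips(f, 2))  # 2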

Cite as

Elena Grigorescu, Akash Kumar, and Karl Wimmer. Flipping out with Many Flips: Hardness of Testing k-Monotonicity. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 40:1-40:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{grigorescu_et_al:LIPIcs.APPROX-RANDOM.2018.40,
  author =	{Grigorescu, Elena and Kumar, Akash and Wimmer, Karl},
  title =	{{Flipping out with Many Flips: Hardness of Testing k-Monotonicity}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{40:1--40:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.40},
  URN =		{urn:nbn:de:0030-drops-94448},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.40},
  annote =	{Keywords: Property Testing, Boolean Functions, k-Monotonicity, Lower Bounds}
}
Document
Tighter Connections Between Formula-SAT and Shaving Logs

Authors: Amir Abboud and Karl Bringmann

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the O(n^2) dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to O(n^2/log^2 n) in several ways and using a variety of ingenious tricks. This line of research, also known as the art of shaving log factors, lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time O(n^2/log^3 n)? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on boolean formulas (Formula-SAT) faster than exhaustive search. They show that an O(n^2/log^{1000} n) algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way for shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fréchet distance problem from Computational Geometry, we show that an O(n^2/log^{7+epsilon} n) runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size s over n variables to LCS on sequences of length N = 2^{n/2} * s^{1+o(1)}. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS with N = 2^{n/2} * s^c, for some c >= 100.
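For reference, the O(n²) LCS dynamic program whose polylog improvements the abstract discusses (textbook form, ours):

def lcs(a, b):
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

print(lcs("formula", "format"))  # 5 ("forma")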

Cite as

Amir Abboud and Karl Bringmann. Tighter Connections Between Formula-SAT and Shaving Logs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{abboud_et_al:LIPIcs.ICALP.2018.8,
  author =	{Abboud, Amir and Bringmann, Karl},
  title =	{{Tighter Connections Between Formula-SAT and Shaving Logs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.8},
  URN =		{urn:nbn:de:0030-drops-90129},
  doi =		{10.4230/LIPIcs.ICALP.2018.8},
  annote =	{Keywords: Fine-Grained Complexity, Hardness in P, Formula-SAT, Longest Common Subsequence, Frechet Distance}
}
Document
Distance-Preserving Graph Contractions

Authors: Aaron Bernstein, Karl Däubel, Yann Disser, Max Klimm, Torsten Mütze, and Frieder Smolny

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Compression and sparsification algorithms are frequently applied in a preprocessing step before analyzing or optimizing large networks/graphs. In this paper we propose and study a new framework for contracting edges of a graph (merging vertices into super-vertices) with the goal of preserving pairwise distances as accurately as possible. Formally, given an edge-weighted graph, the contraction should guarantee that for any two vertices at distance d, the corresponding super-vertices remain at distance at least φ(d) in the contracted graph, where φ is a tolerance function bounding the permitted distance distortion. We present a comprehensive picture of the algorithmic complexity of the contraction problem for affine tolerance functions φ(x) = x/α - β, where α ≥ 1 and β ≥ 0 are arbitrary real-valued parameters. Specifically, we present polynomial-time algorithms for trees as well as hardness and inapproximability results for different graph classes, precisely separating easy and hard cases. Further, we analyze the asymptotic behavior of the size of contractions, and find efficient algorithms to compute (non-optimal) contractions despite our hardness results.
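A small verification sketch (ours, not the paper's algorithms): given a graph, a partition into super-vertices, and parameters α, β, check the guarantee that vertices at distance d end up at distance at least φ(d) = d/α - β after contraction, using Dijkstra before and after. All names are illustrative.

import heapq

def dijkstra(adj, s):
    dist = {v: float("inf") for v in adj}
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def contract(adj, part):
    # part: vertex -> super-vertex id; keep the cheapest parallel edge.
    cadj = {a: {} for a in set(part.values())}
    for u in adj:
        for v, w in adj[u]:
            a, b = part[u], part[v]
            if a != b:
                cadj[a][b] = min(cadj[a].get(b, float("inf")), w)
    return {a: list(nbrs.items()) for a, nbrs in cadj.items()}

def respects_tolerance(adj, part, alpha, beta):
    cadj = contract(adj, part)
    for u in adj:
        du = dijkstra(adj, u)
        dc = dijkstra(cadj, part[u])
        for v in adj:
            # Pairs merged into one super-vertex are at distance 0 by design.
            if part[u] != part[v] and dc[part[v]] < du[v] / alpha - beta:
                return False
    return True

# Path a-b-c with unit weights; merging a and b respects phi(x) = x/1 - 1.
adj = {"a": [("b", 1)], "b": [("a", 1), ("c", 1)], "c": [("b", 1)]}
print(respects_tolerance(adj, {"a": 0, "b": 0, "c": 1}, 1, 1))  # True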

Cite as

Aaron Bernstein, Karl Däubel, Yann Disser, Max Klimm, Torsten Mütze, and Frieder Smolny. Distance-Preserving Graph Contractions. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{bernstein_et_al:LIPIcs.ITCS.2018.51,
  author =	{Bernstein, Aaron and D\"{a}ubel, Karl and Disser, Yann and Klimm, Max and M\"{u}tze, Torsten and Smolny, Frieder},
  title =	{{Distance-Preserving Graph Contractions}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.51},
  URN =		{urn:nbn:de:0030-drops-83427},
  doi =		{10.4230/LIPIcs.ITCS.2018.51},
  annote =	{Keywords: distance oracle, contraction, spanner}
}
Document
Testing k-Monotonicity

Authors: Clément L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, and Karl Wimmer

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
A Boolean k-monotone function defined over a finite poset domain D alternates between the values 0 and 1 at most k times on any ascending chain in D. Therefore, k-monotone functions are natural generalizations of the classical monotone functions, which are the 1-monotone functions. Motivated by the recent interest in k-monotone functions in the context of circuit complexity and learning theory, and by the central role that monotonicity testing plays in the context of property testing, we initiate a systematic study of k-monotone functions, in the property testing model. In this model, the goal is to distinguish functions that are k-monotone (or are close to being k-monotone) from functions that are far from being k-monotone. Our results include the following:
1. We demonstrate a separation between testing k-monotonicity and testing monotonicity, on the hypercube domain {0,1}^d, for k >= 3;
2. We demonstrate a separation between testing and learning on {0,1}^d, for k = ω(log d): testing k-monotonicity can be performed with 2^{O(√d ⋅ log d ⋅ log(1/ε))} queries, while learning k-monotone functions requires 2^{Ω(k ⋅ √d ⋅ 1/ε)} queries (Blais et al. (RANDOM 2015));
3. We present a tolerant test for functions f : [n]^d -> {0,1} with complexity independent of n, which makes progress on a problem left open by Berman et al. (STOC 2014).
Our techniques exploit the testing-by-learning paradigm, use novel applications of Fourier analysis on the grid [n]^d, and draw connections to distribution testing techniques.

Cite as

Clément L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, and Karl Wimmer. Testing k-Monotonicity. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 29:1-29:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{canonne_et_al:LIPIcs.ITCS.2017.29,
  author =	{Canonne, Cl\'{e}ment L. and Grigorescu, Elena and Guo, Siyao and Kumar, Akash and Wimmer, Karl},
  title =	{{Testing k-Monotonicity}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{29:1--29:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.29},
  URN =		{urn:nbn:de:0030-drops-81583},
  doi =		{10.4230/LIPIcs.ITCS.2017.29},
  annote =	{Keywords: Boolean Functions, Learning, Monotonicity, Property Testing}
}
  • Refine by Author
  • 9 Bringmann, Karl
  • 3 Grigorescu, Elena
  • 3 Künnemann, Marvin
  • 3 Wimmer, Karl
  • 2 Kisfaludi-Bak, Sándor

  • Refine by Classification
  • 5 Theory of computation → Computational geometry
  • 3 Theory of computation → Design and analysis of algorithms
  • 2 Theory of computation → Parameterized complexity and exact algorithms
  • 1 Mathematics of computing → Paths and connectivity problems
  • 1 Software and its engineering → Compilers

  • Refine by Keyword
  • 3 Fréchet distance
  • 2 Boolean Functions
  • 2 Hardness in P
  • 2 Property Testing
  • 2 lower bounds

  • Refine by Type
  • 18 document

  • Refine by Publication Year
  • 3 2018
  • 3 2019
  • 3 2022
  • 2 2017
  • 2 2021
