Search Results

Documents authored by Wegrzycki, Karol


Found 2 Possible Name Variants:

Wegrzycki, Karol
Węgrzycki, Karol

Document
Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments

Authors: Jana Cslovjecsek, Michał Pilipczuk, and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Maximum Weight Independent Set of Rectangles problem (MWISR) we are given a weighted set of n axis-parallel rectangles in the plane. The task is to find a subset of pairwise non-overlapping rectangles with the maximum possible total weight. This problem is NP-hard, and the best-known polynomial-time approximation algorithm, due to Chalermsook and Walczak [SODA 2021], achieves approximation factor 𝒪(log log n). While constant-factor approximation algorithms are known in the unweighted setting, due to Mitchell [FOCS 2021] and to Gálvez et al. [SODA 2022], it remains open to extend these techniques to the weighted setting. In this paper, we consider MWISR through the lens of parameterized approximation. Grandoni, Kratsch and Wiese [ESA 2019] gave a (1-ε)-approximation algorithm running in k^{𝒪(k/ε⁸)} n^{𝒪(1/ε⁸)} time, where k is the number of rectangles in an optimum solution. Unfortunately, their algorithm works only in the unweighted setting, and they left it as an open problem to give a parameterized approximation scheme in the weighted setting. We give a parameterized approximation algorithm for MWISR that, given a parameter k ∈ ℕ, finds a set of non-overlapping rectangles of weight at least (1-ε) opt_k in 2^{𝒪(k log(k/ε))} n^{𝒪(1/ε)} time, where opt_k is the maximum weight of a solution of cardinality at most k. We also propose a parameterized approximation scheme with running time 2^{𝒪(k² log(k/ε))} n^{𝒪(1)} that finds a solution of cardinality at most k and total weight at least (1-ε) opt_k for the special case of axis-parallel segments.
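
For intuition, here is a brute-force sketch of the objective opt_k on toy instances, not the paper's algorithm. The encoding (a rectangle as corner coordinates plus weight) and the convention that only interior intersections count as overlaps are assumptions of the sketch.

from itertools import combinations

# Brute force for the MWISR objective on tiny instances (exponential time).
# A rectangle is (x1, y1, x2, y2, weight); two rectangles are considered
# overlapping when their interiors intersect, so shared boundaries are fine.

def overlaps(r, s):
    return r[0] < s[2] and s[0] < r[2] and r[1] < s[3] and s[1] < r[3]

def opt_k(rects, k):
    """Maximum total weight of a non-overlapping subset of cardinality <= k."""
    best = 0.0
    for size in range(1, k + 1):
        for subset in combinations(rects, size):
            if all(not overlaps(a, b) for a, b in combinations(subset, 2)):
                best = max(best, sum(r[4] for r in subset))
    return best

rects = [(0, 0, 2, 2, 3.0), (1, 1, 3, 3, 2.0), (2, 0, 4, 2, 2.5)]
print(opt_k(rects, k=2))  # 5.5: the first and third rectangles only touch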

Cite as

Jana Cslovjecsek, Michał Pilipczuk, and Karol Węgrzycki. Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 43:1-43:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cslovjecsek_et_al:LIPIcs.ESA.2024.43,
  author =	{Cslovjecsek, Jana and Pilipczuk, Micha{\l} and W\k{e}grzycki, Karol},
  title =	{{Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{43:1--43:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy M. and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.43},
  URN =		{urn:nbn:de:0030-drops-211146},
  doi =		{10.4230/LIPIcs.ESA.2024.43},
  annote =	{Keywords: parameterized approximation, Maximum Weight Independent Set, rectangles, segments}
}
Document
Hitting Meets Packing: How Hard Can It Be?

Authors: Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study a general family of problems that form a common generalization of classic hitting (also referred to as covering or transversal) and packing problems. An instance of 𝒳-HitPack asks: Can removing k (deletable) vertices of a graph G prevent us from packing 𝓁 vertex-disjoint objects of type 𝒳? This problem captures a spectrum of problems with standard hitting and packing on opposite ends. Our main motivating question is whether the combination 𝒳-HitPack can be significantly harder than these two base problems. Already for one particular choice of 𝒳, this question can be posed for many different complexity notions, leading to a large, so-far unexplored domain at the intersection of the areas of hitting and packing problems. At a high level, we present two case studies: (1) 𝒳 being all cycles, and (2) 𝒳 being all copies of a fixed graph H. In each, we explore the classical complexity as well as the parameterized complexity with the natural parameters k+𝓁 and treewidth. We observe that the combined problem can be drastically harder than the base problems: for cycles or for H being a connected graph on at least 3 vertices, the problem is Σ₂^𝖯-complete and requires double-exponential dependence on the treewidth of the graph (assuming the Exponential-Time Hypothesis). In contrast, in some cases the combined problem admits qualitatively similar running times as the base problems, although significant novel ideas are required. For example, for 𝒳 being all cycles, we establish a 2^{poly(k+𝓁)} ⋅ n^{𝒪(1)}-time algorithm using an involved branching method. Also, for 𝒳 being all edges (i.e., H = K₂; this combines Vertex Cover and Maximum Matching) the problem can be solved in time 2^{poly(tw)} ⋅ n^{𝒪(1)} on graphs of treewidth tw. The key step enabling this running time relies on a combinatorial bound obtained from an algebraic (linear delta-matroid) representation of possible matchings.
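
To make the quantifier structure of 𝒳-HitPack concrete, here is a toy brute-force decision procedure for the H = K₂ case (Vertex Cover meets Maximum Matching). The encoding is a hypothetical choice made for illustration; the paper's algorithms are far more involved.

from itertools import combinations

# Tiny brute force for K_2-HitPack: does deleting some <= k vertices leave
# no matching of size ell? Exponential time, for toy graphs only.

def max_matching(edges):
    """Maximum matching size by exhaustive recursion."""
    if not edges:
        return 0
    (u, v), rest = edges[0], edges[1:]
    skip = max_matching(rest)
    take = 1 + max_matching([e for e in rest if u not in e and v not in e])
    return max(skip, take)

def k2_hitpack(vertices, edges, k, ell):
    for r in range(k + 1):
        for s in map(set, combinations(vertices, r)):
            remaining = [e for e in edges if not (s & set(e))]
            if max_matching(remaining) < ell:
                return True
    return False

# Path on 4 vertices: deleting one middle vertex kills every 2-matching.
print(k2_hitpack([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)], k=1, ell=2))  # True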

Cite as

Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki. Hitting Meets Packing: How Hard Can It Be? In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 55:1-55:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{focke_et_al:LIPIcs.ESA.2024.55,
  author =	{Focke, Jacob and Frei, Fabian and Li, Shaohua and Marx, D\'{a}niel and Schepper, Philipp and Sharma, Roohani and W\k{e}grzycki, Karol},
  title =	{{Hitting Meets Packing: How Hard Can It Be?}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{55:1--55:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy M. and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.55},
  URN =		{urn:nbn:de:0030-drops-211261},
  doi =		{10.4230/LIPIcs.ESA.2024.55},
  annote =	{Keywords: Hitting, Packing, Covering, Parameterized Algorithms, Lower Bounds, Treewidth}
}
Document
Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM

Authors: Tim Randolph and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study the parameterized complexity of algorithmic problems whose input is an integer set A in terms of the doubling constant 𝒞 := |A+A| / |A|, a fundamental measure of additive structure. We present evidence that this new parameterization is algorithmically useful in the form of new results for two difficult, well-studied problems: Integer Programming and Subset Sum. First, we show that determining the feasibility of bounded Integer Programs is a tractable problem when parameterized by the doubling constant. Specifically, we prove that the feasibility of an integer program ℐ with n polynomially-bounded variables and m constraints can be determined in time n^{O_𝒞(1)} ⋅ poly(|ℐ|) when the column set of the constraint matrix has doubling constant 𝒞. Second, we show that the Subset Sum and Unbounded Subset Sum problems can be solved in time n^{O_𝒞(1)} and n^{O_𝒞(log log log n)}, respectively, where the O_𝒞 notation hides functions that depend only on the doubling constant 𝒞. We also show the equivalence of achieving an FPT algorithm for Subset Sum with bounded doubling and achieving a milestone result for the parameterized complexity of Box ILP. Finally, we design near-linear time algorithms for k-SUM as well as tight lower bounds for 4-SUM and nearly tight lower bounds for k-SUM, under the k-SUM conjecture. Several of our results rely on a new proof that Freiman’s Theorem, a central result in additive combinatorics, can be made efficiently constructive. This result may be of independent interest.
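
The parameter itself is straightforward to evaluate from its definition; the sketch below (for a finite set of integers) simply materializes A + A.

# Direct computation of the doubling constant C = |A+A| / |A|.

def doubling_constant(A):
    A = set(A)
    sumset = {a + b for a in A for b in A}  # the sumset A + A
    return len(sumset) / len(A)

# An arithmetic progression has almost no additive growth ...
print(doubling_constant(range(1, 11)))             # 19/10 = 1.9
# ... while a geometric progression grows almost maximally.
print(doubling_constant(2**i for i in range(10)))  # 55/10 = 5.5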

Cite as

Tim Randolph and Karol Węgrzycki. Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 96:1-96:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{randolph_et_al:LIPIcs.ESA.2024.96,
  author =	{Randolph, Tim and W\k{e}grzycki, Karol},
  title =	{{Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{96:1--96:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy M. and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.96},
  URN =		{urn:nbn:de:0030-drops-211672},
  doi =		{10.4230/LIPIcs.ESA.2024.96},
  annote =	{Keywords: Parameterized algorithms, parameterized complexity, additive combinatorics, Subset Sum, integer programming, doubling constant}
}
Document
Fine-Grained Complexity of Earth Mover’s Distance Under Translation

Authors: Karl Bringmann, Frank Staals, Karol Węgrzycki, and Geert van Wordragen

Published in: LIPIcs, Volume 293, 40th International Symposium on Computational Geometry (SoCG 2024)


Abstract
The Earth Mover’s Distance is a popular similarity measure in several branches of computer science. It measures the minimum total edge length of a perfect matching between two point sets. The Earth Mover’s Distance under Translation (EMDuT) is a translation-invariant version thereof. It minimizes the Earth Mover’s Distance over all translations of one point set. For EMDuT in ℝ¹, we present an 𝒪̃(n²)-time algorithm. We also show that this algorithm is nearly optimal by presenting a matching conditional lower bound based on the Orthogonal Vectors Hypothesis. For EMDuT in ℝ^d, we present an 𝒪̃(n^{2d+2})-time algorithm for the L₁ and L_∞ metrics. We show that this dependence on d is asymptotically tight, as an n^{o(d)}-time algorithm for L₁ or L_∞ would contradict the Exponential Time Hypothesis (ETH). Prior to our work, only approximation algorithms were known for these problems.
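
For intuition, a naive sketch of EMDuT in ℝ¹ for equal-size multisets, resting on two standard facts: in one dimension an optimal matching pairs the points in sorted order, and the cost is piecewise linear in the translation with breakpoints among the differences b - a, so some aligning translation is optimal. This 𝒪(n³ log n) baseline is far slower than the paper's 𝒪̃(n²) algorithm.

def emd_1d(A, B):
    """EMD of equal-size multisets in R^1: match in sorted order."""
    return sum(abs(a - b) for a, b in zip(sorted(A), sorted(B)))

def emd_under_translation_1d(A, B):
    """Minimize EMD(A + t, B) over t by trying all aligning translations."""
    candidates = {b - a for a in A for b in B}
    return min(emd_1d([a + t for a in A], B) for t in candidates)

A, B = [0, 1, 5], [10, 11, 14]
print(emd_under_translation_1d(A, B))  # 1: shift A by t = 10, then match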

Cite as

Karl Bringmann, Frank Staals, Karol Węgrzycki, and Geert van Wordragen. Fine-Grained Complexity of Earth Mover’s Distance Under Translation. In 40th International Symposium on Computational Geometry (SoCG 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 293, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2024.25,
  author =	{Bringmann, Karl and Staals, Frank and W\k{e}grzycki, Karol and van Wordragen, Geert},
  title =	{{Fine-Grained Complexity of Earth Mover’s Distance Under Translation}},
  booktitle =	{40th International Symposium on Computational Geometry (SoCG 2024)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-316-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{293},
  editor =	{Mulzer, Wolfgang and Phillips, Jeff M.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2024.25},
  URN =		{urn:nbn:de:0030-drops-199706},
  doi =		{10.4230/LIPIcs.SoCG.2024.25},
  annote =	{Keywords: Earth Mover’s Distance, Earth Mover’s Distance under Translation, Fine-Grained Complexity, Maximum Weight Bipartite Matching}
}
Document
Separator Theorem and Algorithms for Planar Hyperbolic Graphs

Authors: Sándor Kisfaludi-Bak, Jana Masaříková, Erik Jan van Leeuwen, Bartosz Walczak, and Karol Węgrzycki

Published in: LIPIcs, Volume 293, 40th International Symposium on Computational Geometry (SoCG 2024)


Abstract
The hyperbolicity of a graph, informally, measures how close a graph is (metrically) to a tree. Hence, it is intuitively similar to treewidth, but the measures are formally incomparable. Motivated by the broad study of algorithms and separators on planar graphs and their relation to treewidth, we initiate the study of planar graphs of bounded hyperbolicity. Our main technical contribution is a novel balanced separator theorem for planar δ-hyperbolic graphs that is substantially stronger than the classic planar separator theorem. For any fixed δ ⩾ 0, we can find a small balanced separator that induces either a single geodesic (shortest) path or a single geodesic cycle in the graph. An important advantage of our separator is that the union of our separator (vertex set Z) with any subset of the connected components of G - Z induces again a planar δ-hyperbolic graph, which would not be guaranteed with an arbitrary separator. Our construction runs in near-linear time and guarantees that the size of the separator is poly(δ) ⋅ log n. As an application of our separator theorem and its strong properties, we obtain two novel approximation schemes on planar δ-hyperbolic graphs. We prove that both Maximum Independent Set and the Traveling Salesperson problem have a near-linear time FPTAS for any constant δ, running in n polylog(n) ⋅ 2^{𝒪(δ²)} ⋅ ε^{-𝒪(δ)} time. We also show that our approximation scheme for Maximum Independent Set has essentially the best possible running time under the Exponential Time Hypothesis (ETH). This immediately follows from our third contribution: we prove that Maximum Independent Set has no n^{o(δ)}-time algorithm on planar δ-hyperbolic graphs, unless ETH fails.
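
To make "δ-hyperbolic" concrete: the hyperbolicity of a finite graph can be computed, slowly, from the standard four-point condition. The brute-force sketch below is purely illustrative and plays no role in the paper's near-linear-time constructions.

from collections import deque
from itertools import combinations

# Gromov hyperbolicity via the four-point condition: delta is half the
# maximum gap between the two largest of the three pairwise distance sums,
# over all vertex quadruples. O(n^4) after all-pairs BFS.

def hyperbolicity(adj):
    dist = {}
    for s in adj:                     # BFS from every vertex
        d, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[s] = d
    delta = 0.0
    for u, v, x, y in combinations(adj, 4):
        sums = sorted([dist[u][v] + dist[x][y],
                       dist[u][x] + dist[v][y],
                       dist[u][y] + dist[v][x]])
        delta = max(delta, (sums[2] - sums[1]) / 2)
    return delta

cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(hyperbolicity(cycle6))  # 1.0: long cycles are far from tree-like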

Cite as

Sándor Kisfaludi-Bak, Jana Masaříková, Erik Jan van Leeuwen, Bartosz Walczak, and Karol Węgrzycki. Separator Theorem and Algorithms for Planar Hyperbolic Graphs. In 40th International Symposium on Computational Geometry (SoCG 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 293, pp. 67:1-67:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kisfaludibak_et_al:LIPIcs.SoCG.2024.67,
  author =	{Kisfaludi-Bak, S\'{a}ndor and Masa\v{r}{\'\i}kov\'{a}, Jana and van Leeuwen, Erik Jan and Walczak, Bartosz and W\k{e}grzycki, Karol},
  title =	{{Separator Theorem and Algorithms for Planar Hyperbolic Graphs}},
  booktitle =	{40th International Symposium on Computational Geometry (SoCG 2024)},
  pages =	{67:1--67:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-316-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{293},
  editor =	{Mulzer, Wolfgang and Phillips, Jeff M.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2024.67},
  URN =		{urn:nbn:de:0030-drops-200126},
  doi =		{10.4230/LIPIcs.SoCG.2024.67},
  annote =	{Keywords: Hyperbolic metric, Planar Graphs, r-Division, Approximation Algorithms}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality

Authors: Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks, and Karol Węgrzycki

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Seminal results establish that the coverability problem for Vector Addition Systems with States (VASS) is in EXPSPACE (Rackoff, '78) and is EXPSPACE-hard already under unary encodings (Lipton, '76). More precisely, Rosier and Yen later utilise Rackoff’s bounding technique to show that if coverability holds then there is a run of length at most n^{2^𝒪(d log d)}, where d is the dimension and n is the size of the given unary VASS. Earlier, Lipton showed that there exist instances of coverability in d-dimensional unary VASS that are only witnessed by runs of length at least n^{2^Ω(d)}. Our first result closes this gap. We improve the upper bound by removing the twice-exponentiated log(d) factor, thus matching Lipton’s lower bound. This closes the corresponding gap for the exact space required to decide coverability. This also yields a deterministic n^{2^𝒪(d)}-time algorithm for coverability. Our second result is a matching lower bound: there does not exist a deterministic n^{2^o(d)}-time algorithm, conditioned upon the Exponential Time Hypothesis. When analysing coverability, a standard proof technique is to consider VASS with bounded counters. Bounded VASS make for an interesting and popular model due to strong connections with timed automata. Additionally, we study a natural setting where the counter bound is linear in the size of the VASS. Here the trivial exhaustive search algorithm runs in 𝒪(n^{d+1}) time. We give evidence that this is near-optimal. We prove that in dimension one this trivial algorithm is conditionally optimal, by showing that n^{2-o(1)} time is required under the k-cycle hypothesis. In general fixed dimension d, we show that n^{d-2-o(1)} time is required under the 3-uniform hyperclique hypothesis.
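
The "trivial exhaustive search algorithm" for bounded VASS mentioned above is easy to spell out: breadth-first search over the finitely many configurations. The encoding of a VASS below is an assumption made purely for illustration.

from collections import deque

# Exhaustive-search coverability for a bounded VASS: BFS over all
# (state, counter-vector) configurations with counters kept in [0, B]^d.

def coverable(transitions, init, target, bound):
    """transitions: list of (q, delta_vector, q'). init/target: (state, vector)."""
    start = (init[0], tuple(init[1]))
    seen, queue = {start}, deque([start])
    while queue:
        q, v = queue.popleft()
        if q == target[0] and all(a >= b for a, b in zip(v, target[1])):
            return True                     # target configuration is covered
        for p, delta, p2 in transitions:
            if p != q:
                continue
            w = tuple(a + d for a, d in zip(v, delta))
            if all(0 <= c <= bound for c in w) and (p2, w) not in seen:
                seen.add((p2, w))
                queue.append((p2, w))
    return False

# One state, two counters: pump counter 0, then trade it for counter 1.
ts = [("q", (1, 0), "q"), ("q", (-1, 1), "q")]
print(coverable(ts, ("q", (0, 0)), ("q", (0, 3)), bound=5))  # True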

Cite as

Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks, and Karol Węgrzycki. Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 131:1-131:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kunnemann_et_al:LIPIcs.ICALP.2023.131,
  author =	{K\"{u}nnemann, Marvin and Mazowiecki, Filip and Sch\"{u}tze, Lia and Sinclair-Banks, Henry and W\k{e}grzycki, Karol},
  title =	{{Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{131:1--131:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.131},
  URN =		{urn:nbn:de:0030-drops-181834},
  doi =		{10.4230/LIPIcs.ICALP.2023.131},
  annote =	{Keywords: Vector Addition System, Coverability, Reachability, Fine-Grained Complexity, Exponential Time Hypothesis, k-Cycle Hypothesis, Hyperclique Hypothesis}
}
Document
Dynamic Data Structures for Parameterized String Problems

Authors: Jędrzej Olkowski, Michał Pilipczuk, Mateusz Rychlicki, Karol Węgrzycki, and Anna Zych-Pawlewicz

Published in: LIPIcs, Volume 254, 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)


Abstract
We revisit classic string problems considered in the area of parameterized complexity, and study them through the lens of dynamic data structures. That is, instead of asking for a static algorithm that solves the given instance efficiently, our goal is to design a data structure that efficiently maintains a solution, or reports a lack thereof, upon updates in the instance. We first consider the CLOSEST STRING problem, for which we design randomized dynamic data structures with amortized update times d^𝒪(d) and |Σ|^𝒪(d), respectively, where Σ is the alphabet and d is the assumed bound on the maximum distance. These are obtained by combining known static approaches to CLOSEST STRING with color-coding. Next, we note that from a result of Frandsen et al. [J. ACM'97] one can easily infer a meta-theorem that provides dynamic data structures for parameterized string problems with worst-case update time of the form 𝒪_k(log log n), where k is the parameter in question and n is the length of the string. We showcase the utility of this meta-theorem by giving such data structures for the problems DISJOINT FACTORS and EDIT DISTANCE. We also give explicit data structures for these problems, with worst-case update times 𝒪(k 2^k log log n) and 𝒪(k² log log n), respectively. Finally, we discuss how a lower bound methodology introduced by Amarilli et al. [ICALP'21] can be used to show that obtaining update time 𝒪(f(k)) for DISJOINT FACTORS and EDIT DISTANCE is unlikely already for a constant value of the parameter k.
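
For context, here is a sketch of the classic static branching algorithm for CLOSEST STRING (Gramm, Niedermeier and Rossmanith), one flavor of the "known static approaches" that the dynamic data structures combine with color-coding; its runtime is roughly d^𝒪(d) times a polynomial.

# Static CLOSEST STRING by bounded branching. Assumes all strings have
# equal length; returns a center within Hamming distance d of every
# string, or None if no such center exists.

def closest_string(strings, d):
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))

    def branch(center, budget):
        far = next((s for s in strings if dist(center, s) > d), None)
        if far is None:
            return center                    # every string is within d
        if budget == 0:
            return None
        mism = [i for i in range(len(center)) if center[i] != far[i]]
        for i in mism[: d + 1]:              # branching on d+1 positions suffices
            cand = branch(center[:i] + far[i] + center[i + 1:], budget - 1)
            if cand is not None:
                return cand
        return None

    return branch(strings[0], d)

print(closest_string(["ACTG", "ACTT", "GCTG"], d=1))  # 'ACTG'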

Cite as

Jędrzej Olkowski, Michał Pilipczuk, Mateusz Rychlicki, Karol Węgrzycki, and Anna Zych-Pawlewicz. Dynamic Data Structures for Parameterized String Problems. In 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 254, pp. 50:1-50:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{olkowski_et_al:LIPIcs.STACS.2023.50,
  author =	{Olkowski, J\k{e}drzej and Pilipczuk, Micha{\l} and Rychlicki, Mateusz and W\k{e}grzycki, Karol and Zych-Pawlewicz, Anna},
  title =	{{Dynamic Data Structures for Parameterized String Problems}},
  booktitle =	{40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)},
  pages =	{50:1--50:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-266-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{254},
  editor =	{Berenbrink, Petra and Bouyer, Patricia and Dawar, Anuj and Kant\'{e}, Mamadou Moustapha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2023.50},
  URN =		{urn:nbn:de:0030-drops-177026},
  doi =		{10.4230/LIPIcs.STACS.2023.50},
  annote =	{Keywords: Parameterized algorithms, Dynamic data structures, String problems, Closest String, Edit Distance, Disjoint Factors, Predecessor problem}
}
Document
Computing Generalized Convolutions Faster Than Brute Force

Authors: Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
In this paper, we consider a general notion of convolution. Let D be a finite domain and let Dⁿ be the set of n-length vectors (tuples) over D. Let f : D × D → D be a function and let ⊕_f be a coordinate-wise application of f. The f-Convolution of two functions g, h : Dⁿ → {-M, …, M} is (g ⊛_f h)(𝐯) := ∑_{𝐯_g, 𝐯_h ∈ Dⁿ : 𝐯 = 𝐯_g ⊕_f 𝐯_h} g(𝐯_g) ⋅ h(𝐯_h) for every 𝐯 ∈ Dⁿ. This problem generalizes many fundamental convolutions, such as Subset Convolution, XOR Product, Covering Product, and Packing Product. For an arbitrary function f and domain D we can compute the f-Convolution via brute-force enumeration in 𝒪̃(|D|^{2n} ⋅ polylog(M)) time. Our main result is an improvement over this naive algorithm. We show that f-Convolution can be computed exactly in 𝒪̃((c ⋅ |D|²)ⁿ ⋅ polylog(M)) time for constant c := 5/6 when D has even cardinality. Our main observation is that a cyclic partition of a function f : D × D → D can be used to speed up the computation of f-Convolution, and we show that an appropriate cyclic partition exists for every f. Furthermore, we demonstrate that a single entry of the f-Convolution can be computed more efficiently. In this variant, we are given two functions g, h : Dⁿ → {-M, …, M} alongside a vector 𝐯 ∈ Dⁿ, and the task of the f-Query problem is to compute the integer (g ⊛_f h)(𝐯). This is a generalization of the well-known Orthogonal Vectors problem. We show that f-Query can be computed in 𝒪̃(|D|^{(ω/2)n} ⋅ polylog(M)) time, where ω ∈ [2, 2.373) is the exponent of the currently fastest matrix multiplication algorithm.
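
The definition translates directly into the 𝒪̃(|D|^{2n}) brute force that the paper improves on. A sketch, with the instance encoding (dicts keyed by tuples) chosen purely for illustration:

from itertools import product

# Brute-force f-Convolution straight from the definition: the
# O(|D|^{2n}) baseline.

def f_convolution(f, D, n, g, h):
    """Return (g *_f h) as a dict over D^n; g, h map tuples in D^n to ints."""
    out = {v: 0 for v in product(D, repeat=n)}
    for vg in product(D, repeat=n):
        for vh in product(D, repeat=n):
            v = tuple(f(a, b) for a, b in zip(vg, vh))  # coordinate-wise f
            out[v] += g[vg] * h[vh]
    return out

# With D = {0, 1} and f = OR, this specializes to the Covering Product.
D, n = (0, 1), 2
g = {v: 1 for v in product(D, repeat=n)}
h = {v: 1 for v in product(D, repeat=n)}
conv = f_convolution(lambda a, b: a | b, D, n, g, h)
print(conv[(1, 1)])  # 9 pairs (vg, vh) satisfy vg OR vh = (1, 1)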

Cite as

Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki. Computing Generalized Convolutions Faster Than Brute Force. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{esmer_et_al:LIPIcs.IPEC.2022.12,
  author =	{Esmer, Bar{\i}\c{s} Can and Kulik, Ariel and Marx, D\'{a}niel and Schepper, Philipp and W\k{e}grzycki, Karol},
  title =	{{Computing Generalized Convolutions Faster Than Brute Force}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.12},
  URN =		{urn:nbn:de:0030-drops-173685},
  doi =		{10.4230/LIPIcs.IPEC.2022.12},
  annote =	{Keywords: Generalized Convolution, Fast Fourier Transform, Fast Subset Convolution}
}
Document
Isolation Schemes for Problems on Decomposable Graphs

Authors: Jesper Nederlof, Michał Pilipczuk, Céline M. F. Swennenhuis, and Karol Węgrzycki

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
The Isolation Lemma of Mulmuley, Vazirani and Vazirani [Combinatorica'87] provides a self-reduction scheme that allows one to assume that a given instance of a problem has a unique solution, provided a solution exists at all. Since its introduction, much effort has been dedicated towards derandomization of the Isolation Lemma for specific classes of problems. So far, the focus was mainly on problems solvable in polynomial time. In this paper, we study a setting that is more typical for NP-complete problems, and obtain partial derandomizations in the form of significantly decreasing the number of required random bits. In particular, motivated by the advances in parameterized algorithms, we focus on problems on decomposable graphs. For example, for the problem of detecting a Hamiltonian cycle, we build upon the rank-based approach from [Bodlaender et al., Inf. Comput.'15] and design isolation schemes that use 𝒪(t log n + log² n) random bits on graphs of treewidth at most t; 𝒪(√n) random bits on planar or H-minor-free graphs; and 𝒪(n) random bits on general graphs. In all these schemes, the weights are bounded exponentially in the number of random bits used. As a corollary, for every fixed H we obtain an algorithm for detecting a Hamiltonian cycle in an H-minor-free graph that runs in deterministic time 2^{𝒪(√n)} and uses polynomial space; this is the first algorithm to achieve such complexity guarantees. For problems of a more local nature, such as finding an independent set of maximum size, we obtain isolation schemes on graphs of treedepth at most d that use 𝒪(d) random bits and assign polynomially-bounded weights. We also complement our findings with several unconditional and conditional lower bounds, which show that many of the results cannot be significantly improved.
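
As a quick empirical illustration of the Isolation Lemma itself (not of the paper's schemes): weights drawn uniformly from {1, …, 2n} isolate a unique minimum-weight set in any fixed set family with probability at least 1/2, which the sketch below checks experimentally.

import random
from itertools import combinations

# Empirical check of the Mulmuley-Vazirani-Vazirani Isolation Lemma on a
# dense family: all 4-element subsets of an 8-element ground set.

def is_isolated(family, weights):
    totals = [sum(weights[e] for e in s) for s in family]
    return totals.count(min(totals)) == 1

n = 8
family = [set(c) for c in combinations(range(n), 4)]
trials = 10_000
hits = sum(
    is_isolated(family, [random.randint(1, 2 * n) for _ in range(n)])
    for _ in range(trials)
)
print(hits / trials)  # empirically well above the 1/2 guarantee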

Cite as

Jesper Nederlof, Michał Pilipczuk, Céline M. F. Swennenhuis, and Karol Węgrzycki. Isolation Schemes for Problems on Decomposable Graphs. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{nederlof_et_al:LIPIcs.STACS.2022.50,
  author =	{Nederlof, Jesper and Pilipczuk, Micha{\l} and Swennenhuis, C\'{e}line M. F. and W\k{e}grzycki, Karol},
  title =	{{Isolation Schemes for Problems on Decomposable Graphs}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.50},
  URN =		{urn:nbn:de:0030-drops-158601},
  doi =		{10.4230/LIPIcs.STACS.2022.50},
  annote =	{Keywords: Isolation Lemma, Derandomization, Hamiltonian Cycle, Exact Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Knapsack and Subset Sum with Small Items

Authors: Adam Polak, Lars Rohwedder, and Karol Węgrzycki

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
Knapsack and Subset Sum are fundamental NP-hard problems in combinatorial optimization. Recently there has been a growing interest in understanding the best possible pseudopolynomial running times for these problems with respect to various parameters. In this paper we focus on the maximum item size s and the maximum item value v. We give algorithms that run in time O(n + s³) and O(n + v³) for the Knapsack problem, and in time Õ(n + s^{5/3}) for the Subset Sum problem. Our algorithms work for the more general problem variants with multiplicities, where each input item comes with a (binary encoded) multiplicity, which succinctly describes how many times the item appears in the instance. In these variants n denotes the (possibly much smaller) number of distinct items. Our results follow from combining and optimizing several diverse lines of research, notably proximity arguments for integer programming due to Eisenbrand and Weismantel (TALG 2019), fast structured (min,+)-convolution by Kellerer and Pferschy (J. Comb. Optim. 2004), and additive combinatorics methods originating from Galil and Margalit (SICOMP 1991).
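
For orientation, here is the textbook baseline these results compete against: Bellman's dynamic program for Subset Sum, with the standard binary-splitting trick for binary-encoded multiplicities. This is not the paper's algorithm; the proximity and additive-combinatorics machinery goes well beyond it.

# Bellman's Subset Sum dynamic program; a Python integer serves as a
# bitset of reachable sums, so one item costs a shift and an OR.

def subset_sums_up_to(items, t):
    """Bitmask whose bit j says whether some sub-multiset sums to j <= t."""
    reach = 1                      # bit 0: the empty sum
    mask = (1 << (t + 1)) - 1      # keep only sums in [0, t]
    for x in items:
        reach = (reach | (reach << x)) & mask
    return reach

def multiplicity_items(distinct):
    """Expand (item, multiplicity) pairs via binary splitting, so that any
    count between 0 and the multiplicity remains representable."""
    for x, m in distinct:
        power = 1
        while m > 0:
            take = min(power, m)
            yield x * take
            m -= take
            power *= 2

reach = subset_sums_up_to(multiplicity_items([(3, 5), (7, 2)]), t=20)
print([j for j in range(21) if reach >> j & 1])
# [0, 3, 6, 7, 9, 10, 12, 13, 14, 15, 16, 17, 19, 20]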

Cite as

Adam Polak, Lars Rohwedder, and Karol Węgrzycki. Knapsack and Subset Sum with Small Items. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 106:1-106:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{polak_et_al:LIPIcs.ICALP.2021.106,
  author =	{Polak, Adam and Rohwedder, Lars and W\k{e}grzycki, Karol},
  title =	{{Knapsack and Subset Sum with Small Items}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{106:1--106:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.106},
  URN =		{urn:nbn:de:0030-drops-141752},
  doi =		{10.4230/LIPIcs.ICALP.2021.106},
  annote =	{Keywords: Knapsack, Subset Sum, Proximity, Additive Combinatorics, Multiset}
}
Document
Equal-Subset-Sum Faster Than the Meet-in-the-Middle

Authors: Marcin Mucha, Jesper Nederlof, Jakub Pawlewicz, and Karol Węgrzycki

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
In the Equal-Subset-Sum problem, we are given a set S of n integers and the task is to decide whether there exist two disjoint nonempty subsets A, B ⊆ S whose elements sum up to the same value. The problem is NP-complete. The state-of-the-art algorithm runs in O^*(3^{n/2}) ≤ O^*(1.7321^n) time and is based on the meet-in-the-middle technique. In this paper, we improve upon this algorithm and give an O^*(1.7088^n) worst-case Monte Carlo algorithm. This answers a question suggested by Woeginger in his inspirational survey. Additionally, we analyse polynomial-space algorithms for Equal-Subset-Sum. A naive polynomial-space algorithm for Equal-Subset-Sum runs in O^*(3^n) time. With read-only access to exponentially many random bits, we show a randomized algorithm running in O^*(2.6817^n) time and polynomial space.
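
The meet-in-the-middle baseline is easy to sketch: encode a candidate pair (A, B) as a vector in {-1, 0, +1}ⁿ (elements of A get +1, elements of B get -1), so that valid pairs are exactly the vectors with signed sum 0 and both signs present, and search for one by splitting S into halves. This is the O^*(3^{n/2}) algorithm that the paper improves; the sketch is meant for tiny instances only.

from itertools import product

# Meet-in-the-middle for Equal-Subset-Sum: enumerate signed sums of each
# half, then look for a left total cancelled by a right total.

def equal_subset_sum(S):
    half = len(S) // 2
    L, R = S[:half], S[half:]

    def signed_sums(part):
        table = {}
        for signs in product((-1, 0, 1), repeat=len(part)):
            total = sum(s * x for s, x in zip(signs, part))
            table.setdefault(total, []).append(signs)
        return table

    left, right = signed_sums(L), signed_sums(R)
    for total, sign_lists in left.items():
        for ls in sign_lists:
            for rs in right.get(-total, []):
                signs = ls + rs
                A = [x for s, x in zip(signs, S) if s == +1]
                B = [x for s, x in zip(signs, S) if s == -1]
                if A and B:                  # both parts must be nonempty
                    return A, B
    return None

print(equal_subset_sum([1, 3, 4, 6, 9]))  # a valid pair, e.g. ([4], [1, 3])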

Cite as

Marcin Mucha, Jesper Nederlof, Jakub Pawlewicz, and Karol Węgrzycki. Equal-Subset-Sum Faster Than the Meet-in-the-Middle. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 73:1-73:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{mucha_et_al:LIPIcs.ESA.2019.73,
  author =	{Mucha, Marcin and Nederlof, Jesper and Pawlewicz, Jakub and W\k{e}grzycki, Karol},
  title =	{{Equal-Subset-Sum Faster Than the Meet-in-the-Middle}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{73:1--73:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.73},
  URN =		{urn:nbn:de:0030-drops-111946},
  doi =		{10.4230/LIPIcs.ESA.2019.73},
  annote =	{Keywords: Equal-Subset-Sum, Subset-Sum, meet-in-the-middle, enumeration technique, randomized algorithm}
}
Document
On Problems Equivalent to (min,+)-Convolution

Authors: Marek Cygan, Marcin Mucha, Karol Wegrzycki, and Michal Wlodarczyk

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
In recent years, significant progress has been made in explaining the apparent hardness of improving over naive solutions for many fundamental polynomially solvable problems. This came in the form of conditional lower bounds: reductions from a problem assumed to be hard, such as 3SUM, All-Pairs Shortest Paths, SAT, or Orthogonal Vectors. In the (min,+)-convolution problem, the goal is to compute a sequence c, where c[k] = min_i a[i] + b[k-i], given sequences a and b. This can easily be done in O(n²) time, but no O(n^{2-ε}) algorithm is known for any ε > 0. In this paper we undertake a systematic study of the (min,+)-convolution problem as a hardness assumption. As a first step, we establish the equivalence of this problem to a group of other problems, including variants of the classic knapsack problem and problems related to subadditive sequences. (min,+)-convolution has been used as a building block in algorithms for many problems, notably problems in stringology. It has also already appeared as an ad hoc hardness assumption. We investigate some of these connections and provide new reductions and other results.
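
For reference, the naive quadratic algorithm falls straight out of the definition; the paper's subject is precisely the apparent difficulty of beating it.

# Naive O(n^2) (min,+)-convolution: c[k] = min over i of a[i] + b[k-i].

def min_plus_convolution(a, b):
    n = len(a)
    assert len(b) == n
    return [min(a[i] + b[k - i] for i in range(k + 1)) for k in range(n)]

print(min_plus_convolution([0, 2, 5], [0, 1, 4]))  # [0, 1, 3]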

Cite as

Marek Cygan, Marcin Mucha, Karol Wegrzycki, and Michal Wlodarczyk. On Problems Equivalent to (min,+)-Convolution. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 22:1-22:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{cygan_et_al:LIPIcs.ICALP.2017.22,
  author =	{Cygan, Marek and Mucha, Marcin and Wegrzycki, Karol and Wlodarczyk, Michal},
  title =	{{On Problems Equivalent to (min,+)-Convolution}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{22:1--22:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.22},
  URN =		{urn:nbn:de:0030-drops-74216},
  doi =		{10.4230/LIPIcs.ICALP.2017.22},
  annote =	{Keywords: fine-grained complexity, knapsack, conditional lower bounds, (min,+)-convolution, subquadratic equivalence}
}
Document
Improved Distance Queries and Cycle Counting by Frobenius Normal Form

Authors: Piotr Sankowski and Karol Wegrzycki

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
Consider an unweighted, directed graph G with diameter D. In this paper, we introduce a framework for counting cycles and walks of given length in matrix multiplication time 𝒪̃(n^ω). The framework is based on the fast decomposition into Frobenius normal form and Hankel matrix-vector multiplication. It allows us to solve the following problems efficiently.
* All Nodes Shortest Cycles: for every node, return the length of the shortest cycle containing it. We give an 𝒪̃(n^ω) algorithm that improves the previous 𝒪̃(n^{(ω+3)/2}) algorithm for unweighted digraphs.
* We show how to compute all D sets of vertices lying on cycles of length c ∈ {1, …, D} in randomized time 𝒪̃(n^ω). This improves upon an algorithm of Cygan, which computes only a single such set.
* We present a functional improvement of distance queries for directed, unweighted graphs.
* All Pairs All Walks: we show an almost optimal 𝒪̃(n³)-time algorithm for the all-walks counting problem, improving upon the naive 𝒪(D ⋅ n^ω)-time algorithm.
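
The naive 𝒪(D ⋅ n^ω) walk-counting baseline mentioned above rests on the textbook fact that entry (i, j) of A^𝓁 counts the walks of length 𝓁 from i to j. A small pure-Python sketch of that baseline (illustrative only; the paper's Frobenius-normal-form framework is what removes the factor D):

# Count walks of every length up to max_len by repeated multiplication
# with the adjacency matrix.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_walks(adj, max_len):
    """powers[l - 1][i][j] = number of walks of length l from i to j."""
    powers, P = [], adj
    for _ in range(max_len):
        powers.append(P)           # P currently equals A^(len(powers))
        P = mat_mul(P, adj)
    return powers

adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]      # a triangle
print(count_walks(adj, 3)[2][0][0])          # 2 closed walks of length 3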

Cite as

Piotr Sankowski and Karol Wegrzycki. Improved Distance Queries and Cycle Counting by Frobenius Normal Form. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{sankowski_et_al:LIPIcs.STACS.2017.56,
  author =	{Sankowski, Piotr and Wegrzycki, Karol},
  title =	{{Improved Distance Queries and Cycle Counting by Frobenius Normal Form}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.56},
  URN =		{urn:nbn:de:0030-drops-69773},
  doi =		{10.4230/LIPIcs.STACS.2017.56},
  annote =	{Keywords: Frobenius Normal Form, Graph Algorithms, All Nodes Shortest Cycles}
}

Węgrzycki, Karol

Document
Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments

Authors: Jana Cslovjecsek, Michał Pilipczuk, and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Maximum Weight Independent Set of Rectangles problem (MWISR) we are given a weighted set of n axis-parallel rectangles in the plane. The task is to find a subset of pairwise non-overlapping rectangles with the maximum possible total weight. This problem is NP-hard and the best-known polynomial-time approximation algorithm, due to Chalermsook and Walczak [SODA 2021], achieves approximation factor 𝒪(log log n). While in the unweighted setting, constant factor approximation algorithms are known, due to Mitchell [FOCS 2021] and to Gálvez et al. [SODA 2022], it remains open to extend these techniques to the weighted setting. In this paper, we consider MWISR through the lens of parameterized approximation. Grandoni, Kratsch and Wiese [ESA 2019] gave a (1-ε)-approximation algorithm running in k^{𝒪(k/ε⁸)} n^{𝒪(1/ε⁸)} time, where k is the number of rectangles in an optimum solution. Unfortunately, their algorithm works only in the unweighted setting and they left it as an open problem to give a parameterized approximation scheme in the weighted setting. We give a parameterized approximation algorithm for MWISR that given a parameter k ∈ ℕ, finds a set of non-overlapping rectangles of weight at least (1-ε) opt_k in 2^{𝒪(k log(k/ε))} n^{𝒪(1/ε)} time, where opt_k is the maximum weight of a solution of cardinality at most k. We also propose a parameterized approximation scheme with running time 2^{𝒪(k² log(k/ε))} n^{𝒪(1)} that finds a solution with cardinality at most k and total weight at least (1-ε)opt_k for the special case of axis-parallel segments.

Cite as

Jana Cslovjecsek, Michał Pilipczuk, and Karol Węgrzycki. Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 43:1-43:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{cslovjecsek_et_al:LIPIcs.ESA.2024.43,
  author =	{Cslovjecsek, Jana and Pilipczuk, Micha{\l} and W\k{e}grzycki, Karol},
  title =	{{Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{43:1--43:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.43},
  URN =		{urn:nbn:de:0030-drops-211146},
  doi =		{10.4230/LIPIcs.ESA.2024.43},
  annote =	{Keywords: parameterized approximation, Maximum Weight Independent Set, rectangles, segments}
}
Document
Hitting Meets Packing: How Hard Can It Be?

Authors: Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study a general family of problems that form a common generalization of classic hitting (also referred to as covering or transversal) and packing problems. An instance of 𝒳-HitPack asks: Can removing k (deletable) vertices of a graph G prevent us from packing 𝓁 vertex-disjoint objects of type 𝒳? This problem captures a spectrum of problems with standard hitting and packing on opposite ends. Our main motivating question is whether the combination 𝒳-HitPack can be significantly harder than these two base problems. Already for one particular choice of 𝒳, this question can be posed for many different complexity notions, leading to a large, so-far unexplored domain at the intersection of the areas of hitting and packing problems. At a high level, we present two case studies: (1) 𝒳 being all cycles, and (2) 𝒳 being all copies of a fixed graph H. In each, we explore the classical complexity as well as the parameterized complexity with the natural parameters k+𝓁 and treewidth. We observe that the combined problem can be drastically harder than the base problems: for cycles or for H being a connected graph on at least 3 vertices, the problem is Σ₂^𝖯-complete and requires double-exponential dependence on the treewidth of the graph (assuming the Exponential-Time Hypothesis). In contrast, the combined problem admits qualitatively similar running times as the base problems in some cases, although significant novel ideas are required. For 𝒳 being all cycles, we establish a 2^{poly(k+𝓁)}⋅ n^{𝒪(1)} algorithm using an involved branching method, for example. Also, for 𝒳 being all edges (i.e., H = K₂; this combines Vertex Cover and Maximum Matching) the problem can be solved in time 2^{poly(tw)}⋅ n^{𝒪(1)} on graphs of treewidth tw. The key step enabling this running time relies on a combinatorial bound obtained from an algebraic (linear delta-matroid) representation of possible matchings.

Cite as

Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki. Hitting Meets Packing: How Hard Can It Be?. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 55:1-55:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{focke_et_al:LIPIcs.ESA.2024.55,
  author =	{Focke, Jacob and Frei, Fabian and Li, Shaohua and Marx, D\'{a}niel and Schepper, Philipp and Sharma, Roohani and W\k{e}grzycki, Karol},
  title =	{{Hitting Meets Packing: How Hard Can It Be?}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{55:1--55:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.55},
  URN =		{urn:nbn:de:0030-drops-211261},
  doi =		{10.4230/LIPIcs.ESA.2024.55},
  annote =	{Keywords: Hitting, Packing, Covering, Parameterized Algorithms, Lower Bounds, Treewidth}
}
Document
Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM

Authors: Tim Randolph and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study the parameterized complexity of algorithmic problems whose input is an integer set A in terms of the doubling constant 𝒞 := |A+A| / |A|, a fundamental measure of additive structure. We present evidence that this new parameterization is algorithmically useful in the form of new results for two difficult, well-studied problems: Integer Programming and Subset Sum. First, we show that determining the feasibility of bounded Integer Programs is a tractable problem when parameterized in the doubling constant. Specifically, we prove that the feasibility of an integer program ℐ with n polynomially-bounded variables and m constraints can be determined in time n^{O_𝒞(1)} ⋅ poly(|ℐ|) when the column set of the constraint matrix has doubling constant 𝒞. Second, we show that the Subset Sum and Unbounded Subset Sum problems can be solved in time n^{O_C(1)} and n^{O_𝒞(log log log n)}, respectively, where the O_C notation hides functions that depend only on the doubling constant 𝒞. We also show the equivalence of achieving an FPT algorithm for Subset Sum with bounded doubling and achieving a milestone result for the parameterized complexity of Box ILP. Finally, we design near-linear time algorithms for k-SUM as well as tight lower bounds for 4-SUM and nearly tight lower bounds for k-SUM, under the k-SUM conjecture. Several of our results rely on a new proof that Freiman’s Theorem, a central result in additive combinatorics, can be made efficiently constructive. This result may be of independent interest.

Cite as

Tim Randolph and Karol Węgrzycki. Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 96:1-96:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{randolph_et_al:LIPIcs.ESA.2024.96,
  author =	{Randolph, Tim and W\k{e}grzycki, Karol},
  title =	{{Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{96:1--96:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.96},
  URN =		{urn:nbn:de:0030-drops-211672},
  doi =		{10.4230/LIPIcs.ESA.2024.96},
  annote =	{Keywords: Parameterized algorithms, parameterized complexity, additive combinatorics, Subset Sum, integer programming, doubling constant}
}
Document
Fine-Grained Complexity of Earth Mover’s Distance Under Translation

Authors: Karl Bringmann, Frank Staals, Karol Węgrzycki, and Geert van Wordragen

Published in: LIPIcs, Volume 293, 40th International Symposium on Computational Geometry (SoCG 2024)


Abstract
The Earth Mover’s Distance is a popular similarity measure in several branches of computer science. It measures the minimum total edge length of a perfect matching between two point sets. The Earth Mover’s Distance under Translation (EMDuT) is a translation-invariant version thereof. It minimizes the Earth Mover’s Distance over all translations of one point set. For EMDuT in ℝ¹, we present an 𝒪̃(n²)-time algorithm. We also show that this algorithm is nearly optimal by presenting a matching conditional lower bound based on the Orthogonal Vectors Hypothesis. For EMDuT in ℝ^d, we present an 𝒪̃(n^{2d+2})-time algorithm for the L₁ and L_∞ metric. We show that this dependence on d is asymptotically tight, as an n^o(d)-time algorithm for L_1 or L_∞ would contradict the Exponential Time Hypothesis (ETH). Prior to our work, only approximation algorithms were known for these problems.

Cite as

Karl Bringmann, Frank Staals, Karol Węgrzycki, and Geert van Wordragen. Fine-Grained Complexity of Earth Mover’s Distance Under Translation. In 40th International Symposium on Computational Geometry (SoCG 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 293, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2024.25,
  author =	{Bringmann, Karl and Staals, Frank and W\k{e}grzycki, Karol and van Wordragen, Geert},
  title =	{{Fine-Grained Complexity of Earth Mover’s Distance Under Translation}},
  booktitle =	{40th International Symposium on Computational Geometry (SoCG 2024)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-316-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{293},
  editor =	{Mulzer, Wolfgang and Phillips, Jeff M.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2024.25},
  URN =		{urn:nbn:de:0030-drops-199706},
  doi =		{10.4230/LIPIcs.SoCG.2024.25},
  annote =	{Keywords: Earth Mover’s Distance, Earth Mover’s Distance under Translation, Fine-Grained Complexity, Maximum Weight Bipartite Matching}
}
Document
Separator Theorem and Algorithms for Planar Hyperbolic Graphs

Authors: Sándor Kisfaludi-Bak, Jana Masaříková, Erik Jan van Leeuwen, Bartosz Walczak, and Karol Węgrzycki

Published in: LIPIcs, Volume 293, 40th International Symposium on Computational Geometry (SoCG 2024)


Abstract
The hyperbolicity of a graph, informally, measures how close a graph is (metrically) to a tree. Hence, it is intuitively similar to treewidth, but the measures are formally incomparable. Motivated by the broad study of algorithms and separators on planar graphs and their relation to treewidth, we initiate the study of planar graphs of bounded hyperbolicity. Our main technical contribution is a novel balanced separator theorem for planar δ-hyperbolic graphs that is substantially stronger than the classic planar separator theorem. For any fixed δ ⩾ 0, we can find a small balanced separator that induces either a single geodesic (shortest) path or a single geodesic cycle in the graph. An important advantage of our separator is that the union of our separator (vertex set Z) with any subset of the connected components of G - Z induces again a planar δ-hyperbolic graph, which would not be guaranteed with an arbitrary separator. Our construction runs in near-linear time and guarantees that the size of the separator is poly(δ) ⋅ log n. As an application of our separator theorem and its strong properties, we obtain two novel approximation schemes on planar δ-hyperbolic graphs. We prove that both Maximum Independent Set and the Traveling Salesperson problem have a near-linear time FPTAS for any constant δ, running in n polylog(n) ⋅ 2^𝒪(δ²) ⋅ ε^{-𝒪(δ)} time. We also show that our approximation scheme for Maximum Independent Set has essentially the best possible running time under the Exponential Time Hypothesis (ETH). This immediately follows from our third contribution: we prove that Maximum Independent Set has no n^{o(δ)}-time algorithm on planar δ-hyperbolic graphs, unless ETH fails.

Cite as

Sándor Kisfaludi-Bak, Jana Masaříková, Erik Jan van Leeuwen, Bartosz Walczak, and Karol Węgrzycki. Separator Theorem and Algorithms for Planar Hyperbolic Graphs. In 40th International Symposium on Computational Geometry (SoCG 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 293, pp. 67:1-67:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kisfaludibak_et_al:LIPIcs.SoCG.2024.67,
  author =	{Kisfaludi-Bak, S\'{a}ndor and Masa\v{r}{\'\i}kov\'{a}, Jana and van Leeuwen, Erik Jan and Walczak, Bartosz and W\k{e}grzycki, Karol},
  title =	{{Separator Theorem and Algorithms for Planar Hyperbolic Graphs}},
  booktitle =	{40th International Symposium on Computational Geometry (SoCG 2024)},
  pages =	{67:1--67:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-316-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{293},
  editor =	{Mulzer, Wolfgang and Phillips, Jeff M.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2024.67},
  URN =		{urn:nbn:de:0030-drops-200126},
  doi =		{10.4230/LIPIcs.SoCG.2024.67},
  annote =	{Keywords: Hyperbolic metric, Planar Graphs, r-Division, Approximation Algorithms}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality

Authors: Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks, and Karol Węgrzycki

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Seminal results establish that the coverability problem for Vector Addition Systems with States (VASS) is in EXPSPACE (Rackoff, '78) and is EXPSPACE-hard already under unary encodings (Lipton, '76). More precisely, Rosier and Yen later utilise Rackoff’s bounding technique to show that if coverability holds then there is a run of length at most n^{2^𝒪(d log d)}, where d is the dimension and n is the size of the given unary VASS. Earlier, Lipton showed that there exist instances of coverability in d-dimensional unary VASS that are only witnessed by runs of length at least n^{2^Ω(d)}. Our first result closes this gap. We improve the upper bound by removing the twice-exponentiated log(d) factor, thus matching Lipton’s lower bound. This closes the corresponding gap for the exact space required to decide coverability. This also yields a deterministic n^{2^𝒪(d)}-time algorithm for coverability. Our second result is a matching lower bound, that there does not exist a deterministic n^{2^o(d)}-time algorithm, conditioned upon the Exponential Time Hypothesis. When analysing coverability, a standard proof technique is to consider VASS with bounded counters. Bounded VASS make for an interesting and popular model due to strong connections with timed automata. Withal, we study a natural setting where the counter bound is linear in the size of the VASS. Here the trivial exhaustive search algorithm runs in 𝒪(n^{d+1})-time. We give evidence to this being near-optimal. We prove that in dimension one this trivial algorithm is conditionally optimal, by showing that n^{2-o(1)}-time is required under the k-cycle hypothesis. In general fixed dimension d, we show that n^{d-2-o(1)}-time is required under the 3-uniform hyperclique hypothesis.

Cite as

Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks, and Karol Węgrzycki. Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 131:1-131:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kunnemann_et_al:LIPIcs.ICALP.2023.131,
  author =	{K\"{u}nnemann, Marvin and Mazowiecki, Filip and Sch\"{u}tze, Lia and Sinclair-Banks, Henry and W\k{e}grzycki, Karol},
  title =	{{Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{131:1--131:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.131},
  URN =		{urn:nbn:de:0030-drops-181834},
  doi =		{10.4230/LIPIcs.ICALP.2023.131},
  annote =	{Keywords: Vector Addition System, Coverability, Reachability, Fine-Grained Complexity, Exponential Time Hypothesis, k-Cycle Hypothesis, Hyperclique Hypothesis}
}
Document
Dynamic Data Structures for Parameterized String Problems

Authors: Jędrzej Olkowski, Michał Pilipczuk, Mateusz Rychlicki, Karol Węgrzycki, and Anna Zych-Pawlewicz

Published in: LIPIcs, Volume 254, 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)


Abstract
We revisit classic string problems considered in the area of parameterized complexity, and study them through the lens of dynamic data structures. That is, instead of asking for a static algorithm that solves a given instance efficiently, our goal is to design a data structure that efficiently maintains a solution, or reports that none exists, under updates to the instance. We first consider the CLOSEST STRING problem, for which we design two randomized dynamic data structures with amortized update times d^𝒪(d) and |Σ|^𝒪(d), respectively, where Σ is the alphabet and d is the assumed bound on the maximum distance. These are obtained by combining known static approaches to CLOSEST STRING with color-coding. Next, we note that from a result of Frandsen et al. [J. ACM'97] one can easily infer a meta-theorem that provides dynamic data structures for parameterized string problems with worst-case update time of the form 𝒪_k(log log n), where k is the parameter in question and n is the length of the string. We showcase the utility of this meta-theorem by giving such data structures for the problems DISJOINT FACTORS and EDIT DISTANCE. We also give explicit data structures for these problems, with worst-case update times 𝒪(k 2^k log log n) and 𝒪(k² log log n), respectively. Finally, we discuss how a lower bound methodology introduced by Amarilli et al. [ICALP'21] can be used to show that obtaining update time 𝒪(f(k)) for DISJOINT FACTORS and EDIT DISTANCE is unlikely even for constant values of the parameter k.
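
As static background for the CLOSEST STRING part (an editorial sketch, not the dynamic data structure from the paper), the classic search-tree algorithm of Gramm, Niedermeier, and Rossmanith solves the problem with a d^𝒪(d)-type dependence on d, the same flavor of dependence the dynamic update time above echoes:

def closest_string(strings, d):
    # Returns a string within Hamming distance d of every input string,
    # or None if no such string exists.  Assumes equal-length strings.
    def ham(a, b):
        return sum(x != y for x, y in zip(a, b))

    def branch(candidate, budget):
        far = next((s for s in strings if ham(candidate, s) > d), None)
        if far is None:
            return candidate  # candidate is within distance d of all
        if budget == 0:
            return None
        # Branch on d+1 mismatching positions: any valid solution must
        # agree with the far string on at least one of them.
        mism = [i for i, (x, y) in enumerate(zip(candidate, far)) if x != y]
        for i in mism[:d + 1]:
            fixed = candidate[:i] + far[i] + candidate[i + 1:]
            result = branch(fixed, budget - 1)
            if result is not None:
                return result
        return None

    return branch(strings[0], d)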

Cite as

Jędrzej Olkowski, Michał Pilipczuk, Mateusz Rychlicki, Karol Węgrzycki, and Anna Zych-Pawlewicz. Dynamic Data Structures for Parameterized String Problems. In 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 254, pp. 50:1-50:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{olkowski_et_al:LIPIcs.STACS.2023.50,
  author =	{Olkowski, J\k{e}drzej and Pilipczuk, Micha{\l} and Rychlicki, Mateusz and W\k{e}grzycki, Karol and Zych-Pawlewicz, Anna},
  title =	{{Dynamic Data Structures for Parameterized String Problems}},
  booktitle =	{40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)},
  pages =	{50:1--50:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-266-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{254},
  editor =	{Berenbrink, Petra and Bouyer, Patricia and Dawar, Anuj and Kant\'{e}, Mamadou Moustapha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2023.50},
  URN =		{urn:nbn:de:0030-drops-177026},
  doi =		{10.4230/LIPIcs.STACS.2023.50},
  annote =	{Keywords: Parameterized algorithms, Dynamic data structures, String problems, Closest String, Edit Distance, Disjoint Factors, Predecessor problem}
}
Document
Computing Generalized Convolutions Faster Than Brute Force

Authors: Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
In this paper, we consider a general notion of convolution. Let D be a finite domain and let Dⁿ be the set of n-length vectors (tuples) over D. Let f : D × D → D be a function and let ⊕_f be its coordinate-wise application. The f-Convolution of two functions g, h : Dⁿ → {-M,…,M} is (g ⊛_f h)(v) := ∑_{v_g, v_h ∈ Dⁿ s.t. v = v_g ⊕_f v_h} g(v_g) ⋅ h(v_h) for every v ∈ Dⁿ. This problem generalizes many fundamental convolutions, such as the Subset Convolution, XOR Product, Covering Product, and Packing Product. For an arbitrary function f and domain D, f-Convolution can be computed via brute-force enumeration in 𝒪̃(|D|^{2n} ⋅ polylog(M)) time. Our main result is an improvement over this naive algorithm: we show that f-Convolution can be computed exactly in 𝒪̃((c ⋅ |D|²)ⁿ ⋅ polylog(M)) time for the constant c := 5/6, whenever D has even cardinality. Our main observation is that a cyclic partition of a function f : D × D → D can be used to speed up the computation of f-Convolution, and we show that an appropriate cyclic partition exists for every f. Furthermore, we demonstrate that a single entry of the f-Convolution can be computed more efficiently. In this variant, we are given two functions g, h : Dⁿ → {-M,…,M} together with a vector v ∈ Dⁿ, and the task of the f-Query problem is to compute the integer (g ⊛_f h)(v). This is a generalization of the well-known Orthogonal Vectors problem. We show that f-Query can be computed in 𝒪̃(|D|^{(ω/2)n} ⋅ polylog(M)) time, where ω ∈ [2, 2.373) is the exponent of the currently fastest matrix multiplication algorithm.
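
A direct transcription of the definition (an editorial illustration; the paper's contribution is precisely to beat this brute force) shows what the 𝒪̃(|D|^{2n}) baseline computes:

from itertools import product

def f_convolution(f, D, g, h, n):
    # g, h: dicts mapping n-tuples over D to integers in {-M, ..., M}.
    # Returns the dict v -> (g *_f h)(v), straight from the definition:
    # sum over all pairs (v_g, v_h) with v = v_g (+)_f v_h of
    # g(v_g) * h(v_h).  Runs in O(|D|^{2n}) time -- the naive baseline.
    result = {v: 0 for v in product(D, repeat=n)}
    for vg in product(D, repeat=n):
        for vh in product(D, repeat=n):
            v = tuple(f(a, b) for a, b in zip(vg, vh))
            result[v] += g.get(vg, 0) * h.get(vh, 0)
    return result

For instance, f_convolution(lambda a, b: (a + b) % 2, (0, 1), g, h, n) is the XOR product, while taking f = max over {0, 1} (coordinate-wise OR, i.e. set union) gives the Covering Product.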

Cite as

Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki. Computing Generalized Convolutions Faster Than Brute Force. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{esmer_et_al:LIPIcs.IPEC.2022.12,
  author =	{Esmer, Bar{\i}\c{s} Can and Kulik, Ariel and Marx, D\'{a}niel and Schepper, Philipp and W\k{e}grzycki, Karol},
  title =	{{Computing Generalized Convolutions Faster Than Brute Force}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.12},
  URN =		{urn:nbn:de:0030-drops-173685},
  doi =		{10.4230/LIPIcs.IPEC.2022.12},
  annote =	{Keywords: Generalized Convolution, Fast Fourier Transform, Fast Subset Convolution}
}
Document
Isolation Schemes for Problems on Decomposable Graphs

Authors: Jesper Nederlof, Michał Pilipczuk, Céline M. F. Swennenhuis, and Karol Węgrzycki

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
The Isolation Lemma of Mulmuley, Vazirani and Vazirani [Combinatorica'87] provides a self-reduction scheme that allows one to assume that a given instance of a problem has a unique solution, provided a solution exists at all. Since its introduction, much effort has been dedicated to derandomizing the Isolation Lemma for specific classes of problems. So far, the focus has mainly been on problems solvable in polynomial time. In this paper, we study a setting that is more typical for NP-complete problems, and obtain partial derandomizations in the form of significantly decreasing the number of required random bits. In particular, motivated by the advances in parameterized algorithms, we focus on problems on decomposable graphs. For example, for the problem of detecting a Hamiltonian cycle, we build upon the rank-based approach from [Bodlaender et al., Inf. Comput.'15] and design isolation schemes that use: 𝒪(t log n + log² n) random bits on graphs of treewidth at most t; 𝒪(√n) random bits on planar or H-minor-free graphs; and 𝒪(n) random bits on general graphs. In all these schemes, the weights are bounded exponentially in the number of random bits used. As a corollary, for every fixed H we obtain an algorithm for detecting a Hamiltonian cycle in an H-minor-free graph that runs in deterministic time 2^{𝒪(√n)} and uses polynomial space; this is the first algorithm to achieve such complexity guarantees. For problems of a more local nature, such as finding an independent set of maximum size, we obtain isolation schemes on graphs of treedepth at most d that use 𝒪(d) random bits and assign polynomially bounded weights. We also complement our findings with several unconditional and conditional lower bounds, which show that many of the results cannot be significantly improved.
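
For reference, the original Isolation Lemma's weight assignment is a one-liner; the point of the paper is that, on decomposable graphs, most of its roughly n log n independent random bits can be avoided. A minimal editorial sketch (names are illustrative):

import random

def isolating_weights(universe):
    # Mulmuley-Vazirani-Vazirani weights: each element gets a weight
    # drawn uniformly and independently from {1, ..., 2n}.  Then ANY
    # family of subsets of the universe has a unique minimum-weight
    # member with probability at least 1/2, at a cost of roughly
    # n * log(2n) random bits.
    n = len(universe)
    return {x: random.randint(1, 2 * n) for x in universe}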

Cite as

Jesper Nederlof, Michał Pilipczuk, Céline M. F. Swennenhuis, and Karol Węgrzycki. Isolation Schemes for Problems on Decomposable Graphs. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{nederlof_et_al:LIPIcs.STACS.2022.50,
  author =	{Nederlof, Jesper and Pilipczuk, Micha{\l} and Swennenhuis, C\'{e}line M. F. and W\k{e}grzycki, Karol},
  title =	{{Isolation Schemes for Problems on Decomposable Graphs}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.50},
  URN =		{urn:nbn:de:0030-drops-158601},
  doi =		{10.4230/LIPIcs.STACS.2022.50},
  annote =	{Keywords: Isolation Lemma, Derandomization, Hamiltonian Cycle, Exact Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Knapsack and Subset Sum with Small Items

Authors: Adam Polak, Lars Rohwedder, and Karol Węgrzycki

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
Knapsack and Subset Sum are fundamental NP-hard problems in combinatorial optimization. Recently, there has been growing interest in understanding the best possible pseudopolynomial running times for these problems with respect to various parameters. In this paper, we focus on the maximum item size s and the maximum item value v. We give algorithms that run in O(n + s³) and O(n + v³) time for the Knapsack problem, and in Õ(n + s^{5/3}) time for the Subset Sum problem. Our algorithms work for the more general problem variants with multiplicities, where each input item comes with a (binary encoded) multiplicity that succinctly describes how many times the item appears in the instance; in these variants, n denotes the (possibly much smaller) number of distinct items. Our results follow from combining and optimizing several diverse lines of research, notably proximity arguments for integer programming due to Eisenbrand and Weismantel (TALG 2019), fast structured (min,+)-convolution by Kellerer and Pferschy (J. Comb. Optim. 2004), and additive combinatorics methods originating from Galil and Margalit (SICOMP 1991).
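
For comparison, the textbook pseudopolynomial baseline that these parameterized bounds refine is the Bellman dynamic program; a minimal sketch (editorial illustration, not the paper's algorithm):

def knapsack(items, capacity):
    # items: list of (size, value) pairs; classic O(n * capacity) DP.
    # best[c] = maximum value achievable with total size at most c.
    best = [0] * (capacity + 1)
    for size, value in items:
        for c in range(capacity, size - 1, -1):  # reverse: each item used at most once
            best[c] = max(best[c], best[c - size] + value)
    return best[capacity]

When every item size is at most s, proximity arguments in the spirit of Eisenbrand and Weismantel restrict how far an optimal solution can stray from a greedy one, which is what makes running times like O(n + s³), independent of the capacity, possible.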

Cite as

Adam Polak, Lars Rohwedder, and Karol Węgrzycki. Knapsack and Subset Sum with Small Items. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 106:1-106:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{polak_et_al:LIPIcs.ICALP.2021.106,
  author =	{Polak, Adam and Rohwedder, Lars and W\k{e}grzycki, Karol},
  title =	{{Knapsack and Subset Sum with Small Items}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{106:1--106:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.106},
  URN =		{urn:nbn:de:0030-drops-141752},
  doi =		{10.4230/LIPIcs.ICALP.2021.106},
  annote =	{Keywords: Knapsack, Subset Sum, Proximity, Additive Combinatorics, Multiset}
}
Document
Equal-Subset-Sum Faster Than the Meet-in-the-Middle

Authors: Marcin Mucha, Jesper Nederlof, Jakub Pawlewicz, and Karol Węgrzycki

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
In the Equal-Subset-Sum problem, we are given a set S of n integers and the task is to decide whether there exist two disjoint nonempty subsets A, B ⊆ S whose elements sum up to the same value. The problem is NP-complete. The state-of-the-art algorithm runs in O*(3^{n/2}) ≤ O*(1.7321^n) time and is based on the meet-in-the-middle technique. In this paper, we improve upon this algorithm and give an O*(1.7088^n)-time worst-case Monte Carlo algorithm. This answers a question suggested by Woeginger in his inspirational survey. Additionally, we analyse polynomial-space algorithms for Equal-Subset-Sum. A naive polynomial-space algorithm runs in O*(3^n) time. Assuming read-only access to exponentially many random bits, we give a randomized algorithm running in O*(2.6817^n) time and polynomial space.
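
To make the baseline concrete, here is an editorial sketch of the O*(3^{n/2}) meet-in-the-middle algorithm that the paper improves upon (illustrative code, not the paper's Monte Carlo algorithm). Each element receives coefficient +1 (goes to A), -1 (goes to B), or 0 (unused), so equal sums correspond to total signed sum 0:

from itertools import product

def equal_subset_sum(S):
    # Split S in half, enumerate all 3^{n/2} sign assignments per half,
    # and look for two halves whose signed sums cancel, with at least
    # one +1 and one -1 overall (so that A and B are both nonempty).
    half = len(S) // 2
    L, R = S[:half], S[half:]

    def table(part):
        # signed sum -> set of (has_plus, has_minus) flag pairs achieving it
        t = {}
        for coeffs in product((-1, 0, 1), repeat=len(part)):
            s = sum(c * x for c, x in zip(coeffs, part))
            t.setdefault(s, set()).add((1 in coeffs, -1 in coeffs))
        return t

    left, right = table(L), table(R)
    for s, lflags in left.items():
        for lp, lm in lflags:
            for rp, rm in right.get(-s, ()):
                if (lp or rp) and (lm or rm):
                    return True
    return False

Hashing the 3^{n/2} partial sums per half, as the dictionaries above do, is what keeps the total time at O*(3^{n/2}) rather than O*(3^n).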

Cite as

Marcin Mucha, Jesper Nederlof, Jakub Pawlewicz, and Karol Węgrzycki. Equal-Subset-Sum Faster Than the Meet-in-the-Middle. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 73:1-73:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{mucha_et_al:LIPIcs.ESA.2019.73,
  author =	{Mucha, Marcin and Nederlof, Jesper and Pawlewicz, Jakub and W\k{e}grzycki, Karol},
  title =	{{Equal-Subset-Sum Faster Than the Meet-in-the-Middle}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{73:1--73:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.73},
  URN =		{urn:nbn:de:0030-drops-111946},
  doi =		{10.4230/LIPIcs.ESA.2019.73},
  annote =	{Keywords: Equal-Subset-Sum, Subset-Sum, meet-in-the-middle, enumeration technique, randomized algorithm}
}
Document
On Problems Equivalent to (min,+)-Convolution

Authors: Marek Cygan, Marcin Mucha, Karol Wegrzycki, and Michal Wlodarczyk

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
In recent years, significant progress has been made in explaining the apparent hardness of improving over naive solutions for many fundamental polynomially solvable problems. This came in the form of conditional lower bounds: reductions from problems assumed to be hard, such as 3SUM, All-Pairs Shortest Paths, SAT, and Orthogonal Vectors. In the (min,+)-convolution problem, the goal is to compute a sequence c, where c[k] = min_i (a[i] + b[k-i]), given sequences a and b. This can easily be done in O(n²) time, but no O(n^{2-ε}) algorithm is known for any ε > 0. In this paper, we undertake a systematic study of the (min,+)-convolution problem as a hardness assumption. As a first step, we establish the equivalence of this problem to a group of other problems, including variants of the classic knapsack problem and problems related to subadditive sequences. (min,+)-convolution has been used as a building block in algorithms for many problems, notably problems in stringology, and has also already appeared as an ad hoc hardness assumption. We investigate some of these connections and provide new reductions and other results.
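
The problem statement itself fits in a few lines; an editorial sketch of the naive quadratic algorithm, whose conjectured optimality the paper uses as a hardness assumption:

def min_plus_convolution(a, b):
    # c[k] = min over i of a[i] + b[k-i]; naive O(n^2) time.
    # No O(n^{2-eps}) algorithm is known for any eps > 0.
    n = len(a)
    c = [float('inf')] * n
    for k in range(n):
        for i in range(k + 1):
            c[k] = min(c[k], a[i] + b[k - i])
    return c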

Cite as

Marek Cygan, Marcin Mucha, Karol Wegrzycki, and Michal Wlodarczyk. On Problems Equivalent to (min,+)-Convolution. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 22:1-22:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{cygan_et_al:LIPIcs.ICALP.2017.22,
  author =	{Cygan, Marek and Mucha, Marcin and Wegrzycki, Karol and Wlodarczyk, Michal},
  title =	{{On Problems Equivalent to (min,+)-Convolution}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{22:1--22:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.22},
  URN =		{urn:nbn:de:0030-drops-74216},
  doi =		{10.4230/LIPIcs.ICALP.2017.22},
  annote =	{Keywords: fine-grained complexity, knapsack, conditional lower bounds, (min,+)-convolution, subquadratic equivalence}
}
Document
Improved Distance Queries and Cycle Counting by Frobenius Normal Form

Authors: Piotr Sankowski and Karol Wegrzycki

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
Consider an unweighted, directed graph G with diameter D. In this paper, we introduce a framework for counting cycles and walks of given length in matrix multiplication time Õ(n^ω). The framework is based on fast decomposition into Frobenius normal form and Hankel matrix-vector multiplication. It allows us to solve the following problems efficiently. * All Nodes Shortest Cycles: for every node, return the length of the shortest cycle containing it. We give an Õ(n^ω) algorithm that improves the previous Õ(n^{(ω+3)/2}) algorithm for unweighted digraphs. * We show how to compute all D sets of vertices lying on cycles of length c ∈ {1, ..., D} in randomized Õ(n^ω) time. This improves upon an algorithm by Cygan, which computes only a single such set. * We present a functional improvement of distance queries for directed, unweighted graphs. * All Pairs All Walks: we show an almost optimal Õ(n³)-time algorithm for the all-walks counting problem, improving upon the naive O(D n^ω)-time algorithm.
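
As a point of reference for the All Pairs All Walks result, the naive baseline simply takes successive powers of the adjacency matrix, since the l-th power counts walks of length l. An editorial sketch (pure-Python cubic multiplication; with fast matrix multiplication this is the O(D n^ω) baseline the paper improves):

def count_walks(adj, max_len):
    # adj: n x n adjacency matrix (lists of 0/1 ints).
    # Returns counts where counts[l][u][v] is the number of u -> v
    # walks of length l, for l = 0, ..., max_len.
    n = len(adj)
    identity = [[int(i == j) for j in range(n)] for i in range(n)]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    counts = [identity]
    for _ in range(max_len):
        counts.append(matmul(counts[-1], adj))
    return counts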

Cite as

Piotr Sankowski and Karol Wegrzycki. Improved Distance Queries and Cycle Counting by Frobenius Normal Form. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{sankowski_et_al:LIPIcs.STACS.2017.56,
  author =	{Sankowski, Piotr and Wegrzycki, Karol},
  title =	{{Improved Distance Queries and Cycle Counting by Frobenius Normal Form}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.56},
  URN =		{urn:nbn:de:0030-drops-69773},
  doi =		{10.4230/LIPIcs.STACS.2017.56},
  annote =	{Keywords: Frobenius Normal Form, Graph Algorithms, All Nodes Shortest Cycles}
}