28 Search Results for "Wegrzycki, Karol"


Document
A (5/3+ε)-Approximation for Tricolored Non-Crossing Euclidean TSP

Authors: Júlia Baligács, Yann Disser, Andreas Emil Feldmann, and Anna Zych-Pawlewicz

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Tricolored Euclidean Traveling Salesperson problem, we are given k = 3 sets of points in the plane and are looking for disjoint tours, each covering one of the sets. Arora (1998) famously gave a PTAS based on "patching" for the case k = 1 and, recently, Dross et al. (2023) generalized this result to k = 2. Our contribution is a (5/3+ε)-approximation algorithm for k = 3 that further generalizes Arora’s approach. It is believed that patching is generally no longer possible for more than two tours. We circumvent this issue by either applying a conditional patching scheme for three tours or using an alternative approach based on a weighted solution for k = 2.

Cite as

Júlia Baligács, Yann Disser, Andreas Emil Feldmann, and Anna Zych-Pawlewicz. A (5/3+ε)-Approximation for Tricolored Non-Crossing Euclidean TSP. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 15:1-15:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{baligacs_et_al:LIPIcs.ESA.2024.15,
  author =	{Balig\'{a}cs, J\'{u}lia and Disser, Yann and Feldmann, Andreas Emil and Zych-Pawlewicz, Anna},
  title =	{{A (5/3+\epsilon)-Approximation for Tricolored Non-Crossing Euclidean TSP}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{15:1--15:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.15},
  URN =		{urn:nbn:de:0030-drops-210862},
  doi =		{10.4230/LIPIcs.ESA.2024.15},
  annote =	{Keywords: Approximation Algorithms, geometric Network Optimization, Euclidean TSP, non-crossing Structures}
}
Document
Improved Space Bounds for Subset Sum

Authors: Tatiana Belova, Nikolai Chukhin, Alexander S. Kulikov, and Ivan Mihajlin

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
More than 40 years ago, Schroeppel and Shamir presented an algorithm that solves the Subset Sum problem for n integers in time O^*(2^{0.5n}) and space O^*(2^{0.25n}). The time upper bound remains unbeaten, but the space upper bound has been improved to O^*(2^{0.249999n}) in a recent breakthrough paper by Nederlof and Węgrzycki (STOC 2021). Their algorithm is a clever combination of a number of previously known techniques with a new reduction and a new algorithm for the Orthogonal Vectors problem. In this paper, we give two new algorithms for Subset Sum. We start by presenting an Arthur-Merlin algorithm: upon receiving the verifier’s randomness, the prover sends an n/4-bit long proof to the verifier who checks it in (deterministic) time and space O^*(2^{n/4}). An interesting consequence of this result is the following fine-grained lower bound: assuming that 4-SUM cannot be solved in time O(n^{2-ε}) for all ε > 0, Circuit SAT cannot be solved in time O(g2^{(1-ε)n}), for all ε > 0 (where n and g denote the number of inputs and the number of gates, respectively). Then, we improve the space bound by Nederlof and Węgrzycki to O^*(2^{0.246n}) and also simplify their algorithm and its analysis. We achieve this space bound by further filtering sets of subsets using a random prime number. This allows us to reduce an instance of Subset Sum to a larger number of instances of smaller size.
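
For intuition, the O^*(2^{0.5n}) baseline that all of these algorithms build on is the classical meet-in-the-middle idea: enumerate the subset sums of each half of the input and search for a complementary pair. Below is a minimal Python sketch of that baseline (function names and test values are illustrative; this is neither the space-efficient Schroeppel-Shamir variant nor the new algorithms of the paper):

from bisect import bisect_left

def subset_sums(items):
    """All 2^len(items) subset sums of the given list."""
    sums = [0]
    for x in items:
        sums += [s + x for s in sums]
    return sums

def subset_sum_mitm(a, t):
    """Meet-in-the-middle Subset Sum: O*(2^{n/2}) time and space."""
    left, right = a[:len(a) // 2], a[len(a) // 2:]
    right_sums = sorted(subset_sums(right))
    for s in subset_sums(left):
        i = bisect_left(right_sums, t - s)
        if i < len(right_sums) and right_sums[i] == t - s:
            return True
    return False

assert subset_sum_mitm([3, 34, 4, 12, 5, 2], 9)       # 4 + 5 = 9
assert not subset_sum_mitm([3, 34, 4, 12, 5, 2], 30)  # no subset sums to 30

Schroeppel and Shamir's contribution was to generate the two sorted half-enumerations lazily with priority queues, cutting the space to O^*(2^{n/4}); the paper above pushes the space further down to O^*(2^{0.246n}) via the random-prime filtering step described in the abstract.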

Cite as

Tatiana Belova, Nikolai Chukhin, Alexander S. Kulikov, and Ivan Mihajlin. Improved Space Bounds for Subset Sum. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 21:1-21:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{belova_et_al:LIPIcs.ESA.2024.21,
  author =	{Belova, Tatiana and Chukhin, Nikolai and Kulikov, Alexander S. and Mihajlin, Ivan},
  title =	{{Improved Space Bounds for Subset Sum}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{21:1--21:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.21},
  URN =		{urn:nbn:de:0030-drops-210925},
  doi =		{10.4230/LIPIcs.ESA.2024.21},
  annote =	{Keywords: algorithms, subset sum, complexity, space, upper bounds}
}
Document
Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing

Authors: Karl Bringmann, Anita Dürr, and Adam Polak

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We present a pseudopolynomial-time algorithm for the Knapsack problem that has running time Õ(n + t√{p_{max}}), where n is the number of items, t is the knapsack capacity, and p_{max} is the maximum item profit. This improves over the Õ(n + t p_{max})-time algorithm based on the convolution and prediction technique by Bateni et al. (STOC 2018). Moreover, we give some evidence, based on a strengthening of the Min-Plus Convolution Hypothesis, that our running time might be optimal. Our algorithm uses two new technical tools, which might be of independent interest. First, we generalize the Õ(n^{1.5})-time algorithm for bounded monotone min-plus convolution by Chi et al. (STOC 2022) to the rectangular case where the range of entries can be different from the sequence length. Second, we give a reduction from general knapsack instances to balanced instances, where all items have nearly the same profit-to-weight ratio, up to a constant factor. Using these techniques, we can also obtain algorithms that run in time Õ(n + OPT√{w_{max}}), Õ(n + (nw_{max}p_{max})^{1/3}t^{2/3}), and Õ(n + (nw_{max}p_{max})^{1/3} OPT^{2/3}), where OPT is the optimal total profit and w_{max} is the maximum item weight.
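
The engine behind such knapsack algorithms is (max,+)-convolution of profit-indexed sequences (equivalently, min-plus convolution after negation). A naive quadratic Python sketch of that single step follows, purely to fix the semantics; the paper's speed-up for bounded monotone rectangular sequences is exactly the part not shown here:

def max_plus_convolution(a, b):
    """Naive (max,+)-convolution: c[k] = max over i + j = k of a[i] + b[j].

    Knapsack reading: if a[w] and b[w] are the best profits achievable
    at total weight w from two disjoint item groups, then c[w] is the
    best profit for the union of the two groups.
    """
    c = [float("-inf")] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = max(c[i + j], ai + bj)
    return c

# Group 1: one item (weight 1, profit 4); group 2: one item (weight 2, profit 3).
# Index = weight, value = best profit (-inf where the weight is unreachable).
ninf = float("-inf")
print(max_plus_convolution([0, 4], [0, ninf, 3]))  # [0, 4, 3, 7]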

Cite as

Karl Bringmann, Anita Dürr, and Adam Polak. Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bringmann_et_al:LIPIcs.ESA.2024.33,
  author =	{Bringmann, Karl and D\"{u}rr, Anita and Polak, Adam},
  title =	{{Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{33:1--33:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.33},
  URN =		{urn:nbn:de:0030-drops-211047},
  doi =		{10.4230/LIPIcs.ESA.2024.33},
  annote =	{Keywords: 0-1-Knapsack problem, bounded monotone min-plus convolution, fine-grained complexity}
}
Document
Exploring the Approximability Landscape of 3SUM

Authors: Karl Bringmann, Ahmed Ghazy, and Marvin Künnemann

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Since an increasing number of problems in P have conditional lower bounds against exact algorithms, it is natural to study which of these problems can be efficiently approximated. Often, however, there are many potential ways to formulate an approximate version of a problem. We ask: How sensitive is the (in-)approximability of a problem in P to its precise formulation? To this end, we perform a case study using the popular 3SUM problem. Its many equivalent formulations give rise to a wide range of potential approximate relaxations. Specifically, to obtain an approximate relaxation in our framework, one can choose among the options: (a) 3SUM or Convolution 3SUM, (b) monochromatic or trichromatic, (c) allowing under-approximation, over-approximation, or both, (d) approximate decision or approximate optimization, (e) single output or multiple outputs and (f) implicit or explicit target (given as input). We show general reduction principles between some variants and find that we can classify the remaining problems (over polynomially bounded positive integers) into three regimes: 1) (1+ε)-approximable in near-linear time Õ(n + 1/ε), 2) (1+ε)-approximable in near-quadratic time Õ(n/ε) or Õ(n+1/ε²), or 3) non-approximable, i.e., requiring time n^{2± o(1)} even for any approximation factor. In each of these three regimes, we provide matching upper and conditional lower bounds. To prove our results, we establish two results that may be of independent interest: Over polynomially bounded integers, we show subquadratic equivalence of (min,+)-convolution and polyhedral 3SUM, and we prove equivalence of the Strong 3SUM conjecture and the Strong Convolution 3SUM conjecture.
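
As a baseline for all of these variants, here is the classic exact quadratic 3SUM decision procedure (sorted two-pointer scan) in Python; the relaxations classified above replace the exact equality test with approximate versions in the senses (a)-(f). The inputs are illustrative:

def three_sum(nums, target=0):
    """Exact 3SUM decision in O(n^2) after sorting: do three elements
    (at distinct positions) sum to target?"""
    xs = sorted(nums)
    for i in range(len(xs) - 2):
        lo, hi = i + 1, len(xs) - 1
        while lo < hi:
            s = xs[i] + xs[lo] + xs[hi]
            if s == target:
                return True
            elif s < target:
                lo += 1
            else:
                hi -= 1
    return False

assert three_sum([2, 7, 11, 15, -9])    # 2 + 7 + (-9) = 0
assert not three_sum([1, 2, 4, 8, 16])  # all positive, cannot reach 0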

Cite as

Karl Bringmann, Ahmed Ghazy, and Marvin Künnemann. Exploring the Approximability Landscape of 3SUM. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bringmann_et_al:LIPIcs.ESA.2024.34,
  author =	{Bringmann, Karl and Ghazy, Ahmed and K\"{u}nnemann, Marvin},
  title =	{{Exploring the Approximability Landscape of 3SUM}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.34},
  URN =		{urn:nbn:de:0030-drops-211057},
  doi =		{10.4230/LIPIcs.ESA.2024.34},
  annote =	{Keywords: Fine-grained Complexity, Conditional Lower Bounds, Approximation Schemes, Min-Plus Convolution}
}
Document
Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments

Authors: Jana Cslovjecsek, Michał Pilipczuk, and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Maximum Weight Independent Set of Rectangles problem (MWISR) we are given a weighted set of n axis-parallel rectangles in the plane. The task is to find a subset of pairwise non-overlapping rectangles with the maximum possible total weight. This problem is NP-hard and the best-known polynomial-time approximation algorithm, due to Chalermsook and Walczak [SODA 2021], achieves approximation factor 𝒪(log log n). While in the unweighted setting, constant factor approximation algorithms are known, due to Mitchell [FOCS 2021] and to Gálvez et al. [SODA 2022], it remains open to extend these techniques to the weighted setting. In this paper, we consider MWISR through the lens of parameterized approximation. Grandoni, Kratsch and Wiese [ESA 2019] gave a (1-ε)-approximation algorithm running in k^{𝒪(k/ε⁸)} n^{𝒪(1/ε⁸)} time, where k is the number of rectangles in an optimum solution. Unfortunately, their algorithm works only in the unweighted setting and they left it as an open problem to give a parameterized approximation scheme in the weighted setting. We give a parameterized approximation algorithm for MWISR that given a parameter k ∈ ℕ, finds a set of non-overlapping rectangles of weight at least (1-ε) opt_k in 2^{𝒪(k log(k/ε))} n^{𝒪(1/ε)} time, where opt_k is the maximum weight of a solution of cardinality at most k. We also propose a parameterized approximation scheme with running time 2^{𝒪(k² log(k/ε))} n^{𝒪(1)} that finds a solution with cardinality at most k and total weight at least (1-ε)opt_k for the special case of axis-parallel segments.
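
For contrast with the planar problem, the 1-dimensional special case (weighted independent set of intervals on a line) is solvable exactly by a textbook dynamic program. A short Python sketch follows; this is classical background, not the paper's algorithm, and the input values are illustrative:

from bisect import bisect_right

def max_weight_independent_intervals(intervals):
    """Weighted interval scheduling in O(n log n): intervals are
    (start, end, weight) triples, and two intervals conflict iff they
    overlap in more than a single point. best[i] is the optimum over
    the first i intervals in order of right endpoint."""
    intervals = sorted(intervals, key=lambda iv: iv[1])
    ends = [iv[1] for iv in intervals]
    best = [0] * (len(intervals) + 1)
    for i, (s, e, w) in enumerate(intervals):
        j = bisect_right(ends, s, 0, i)  # count of intervals ending by s
        best[i + 1] = max(best[i], best[j] + w)
    return best[-1]

print(max_weight_independent_intervals(
    [(0, 3, 5), (2, 5, 6), (4, 7, 5), (6, 9, 4)]))  # 10, e.g. (0,3,5) + (4,7,5)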

Cite as

Jana Cslovjecsek, Michał Pilipczuk, and Karol Węgrzycki. Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 43:1-43:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{cslovjecsek_et_al:LIPIcs.ESA.2024.43,
  author =	{Cslovjecsek, Jana and Pilipczuk, Micha{\l} and W\k{e}grzycki, Karol},
  title =	{{Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{43:1--43:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.43},
  URN =		{urn:nbn:de:0030-drops-211146},
  doi =		{10.4230/LIPIcs.ESA.2024.43},
  annote =	{Keywords: parameterized approximation, Maximum Weight Independent Set, rectangles, segments}
}
Document
Hitting Meets Packing: How Hard Can It Be?

Authors: Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study a general family of problems that form a common generalization of classic hitting (also referred to as covering or transversal) and packing problems. An instance of 𝒳-HitPack asks: Can removing k (deletable) vertices of a graph G prevent us from packing 𝓁 vertex-disjoint objects of type 𝒳? This problem captures a spectrum of problems with standard hitting and packing on opposite ends. Our main motivating question is whether the combination 𝒳-HitPack can be significantly harder than these two base problems. Already for one particular choice of 𝒳, this question can be posed for many different complexity notions, leading to a large, so-far unexplored domain at the intersection of the areas of hitting and packing problems. At a high level, we present two case studies: (1) 𝒳 being all cycles, and (2) 𝒳 being all copies of a fixed graph H. In each, we explore the classical complexity as well as the parameterized complexity with the natural parameters k+𝓁 and treewidth. We observe that the combined problem can be drastically harder than the base problems: for cycles or for H being a connected graph on at least 3 vertices, the problem is Σ₂^𝖯-complete and requires double-exponential dependence on the treewidth of the graph (assuming the Exponential-Time Hypothesis). In contrast, in some cases the combined problem admits running times qualitatively similar to those of the base problems, although significant novel ideas are required. For example, for 𝒳 being all cycles, we establish a 2^{poly(k+𝓁)}⋅ n^{𝒪(1)} algorithm using an involved branching method. Also, for 𝒳 being all edges (i.e., H = K₂; this combines Vertex Cover and Maximum Matching), the problem can be solved in time 2^{poly(tw)}⋅ n^{𝒪(1)} on graphs of treewidth tw. The key step enabling this running time relies on a combinatorial bound obtained from an algebraic (linear delta-matroid) representation of possible matchings.
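
To make the problem concrete, here is a brute-force Python check for the smallest case 𝒳 = K₂ (edges), i.e. the Vertex Cover / Maximum Matching combination. Everything below is exponential-time and only illustrates the definition, not the paper's algorithms; the names and example graph are ours:

from itertools import combinations

def max_matching(edges):
    """Maximum matching by brute force: size of the largest set of
    pairwise vertex-disjoint edges."""
    for r in range(len(edges), 0, -1):
        for sub in combinations(edges, r):
            used = [v for e in sub for v in e]
            if len(used) == len(set(used)):
                return r
    return 0

def hitpack_k2(vertices, edges, k, l):
    """K2-HitPack: can deleting some k vertices prevent packing l
    vertex-disjoint edges, i.e. push the maximum matching below l?
    Returns a witness deletion set, or None."""
    for deleted in combinations(vertices, k):
        remaining = [e for e in edges if not set(e) & set(deleted)]
        if max_matching(remaining) < l:
            return deleted
    return None

# The path a-b-c-d has a matching of size 2, but loses it after any
# single vertex deletion.
print(hitpack_k2("abcd", [("a", "b"), ("b", "c"), ("c", "d")], k=1, l=2))  # ('a',)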

Cite as

Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki. Hitting Meets Packing: How Hard Can It Be?. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 55:1-55:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{focke_et_al:LIPIcs.ESA.2024.55,
  author =	{Focke, Jacob and Frei, Fabian and Li, Shaohua and Marx, D\'{a}niel and Schepper, Philipp and Sharma, Roohani and W\k{e}grzycki, Karol},
  title =	{{Hitting Meets Packing: How Hard Can It Be?}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{55:1--55:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.55},
  URN =		{urn:nbn:de:0030-drops-211261},
  doi =		{10.4230/LIPIcs.ESA.2024.55},
  annote =	{Keywords: Hitting, Packing, Covering, Parameterized Algorithms, Lower Bounds, Treewidth}
}
Document
Random-Order Online Independent Set of Intervals and Hyperrectangles

Authors: Mohit Garg, Debajyoti Kar, and Arindam Khan

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Maximum Independent Set of Hyperrectangles problem, we are given a set of n (possibly overlapping) d-dimensional axis-aligned hyperrectangles, and the goal is to find a subset of non-overlapping hyperrectangles of maximum cardinality. For d = 1, this corresponds to the classical Interval Scheduling problem, where a simple greedy algorithm returns an optimal solution. In the offline setting, for d-dimensional hyperrectangles, polynomial-time (log n)^{O(d)}-approximation algorithms are known [Chalermsook and Chuzhoy, 2009]. However, the problem becomes notably challenging in the online setting, where the input objects (hyperrectangles) appear one by one in an adversarial order, and on the arrival of an object, the algorithm needs to make an immediate and irrevocable decision whether or not to select the object while maintaining feasibility. Even for interval scheduling, an Ω(n) lower bound is known on the competitive ratio. To circumvent these negative results, in this work, we study the online maximum independent set of axis-aligned hyperrectangles in the random-order arrival model, where the adversary specifies the set of input objects which then arrive in a uniformly random order. Starting from the prototypical secretary problem, the random-order model has received significant attention to study algorithms beyond worst-case competitive analysis (see the survey by Gupta and Singla [Anupam Gupta and Sahil Singla, 2020]). Surprisingly, we show that the problem in the random-order model almost matches the best-known offline approximation guarantees, up to polylogarithmic factors. In particular, we give a simple (log n)^{O(d)}-competitive algorithm for d-dimensional hyperrectangles in this model, which runs in Õ_d(n) time. Our approach also yields (log n)^{O(d)}-competitive algorithms in the random-order model for more general objects such as d-dimensional fat objects and ellipsoids. Furthermore, all our competitiveness guarantees hold with high probability, and not just in expectation.
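
The d = 1 offline greedy referenced above is worth seeing once: process intervals by increasing right endpoint and take every interval that does not clash with the last one taken. A minimal Python sketch of that classical algorithm, with illustrative inputs (the online random-order setting studied in the paper is much harder, since accept/reject decisions are irrevocable):

def greedy_interval_scheduling(intervals):
    """Exact offline Interval Scheduling: a maximum-cardinality set of
    pairwise disjoint intervals via the earliest-finishing-time greedy."""
    chosen, last_end = [], float("-inf")
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_end:  # half-open intervals: touching endpoints are fine
            chosen.append((s, e))
            last_end = e
    return chosen

print(greedy_interval_scheduling([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9)]))
# [(1, 4), (5, 7)]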

Cite as

Mohit Garg, Debajyoti Kar, and Arindam Khan. Random-Order Online Independent Set of Intervals and Hyperrectangles. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 58:1-58:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{garg_et_al:LIPIcs.ESA.2024.58,
  author =	{Garg, Mohit and Kar, Debajyoti and Khan, Arindam},
  title =	{{Random-Order Online Independent Set of Intervals and Hyperrectangles}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{58:1--58:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.58},
  URN =		{urn:nbn:de:0030-drops-211298},
  doi =		{10.4230/LIPIcs.ESA.2024.58},
  annote =	{Keywords: Online Algorithms, Random-Order Model, Maximum Independent Set of Rectangles, Hyperrectangles, Fat Objects, Interval Scheduling}
}
Document
Minimizing the Weighted Number of Tardy Jobs Is W[1]-Hard

Authors: Klaus Heeger and Danny Hermelin

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We consider the 1∣∣∑ w_jU_j problem, the problem of minimizing the weighted number of tardy jobs on a single machine. This problem is one of the most basic and fundamental problems in scheduling theory, with several different applications both in theory and practice. Using a reduction from the Multicolored Clique problem, we prove that 1∣∣∑ w_jU_j is W[1]-hard with respect to the number p_# of different processing times in the input, as well as with respect to the number w_# of different weights in the input. This, along with previous work, provides a complete picture for 1∣∣∑ w_jU_j from the perspective of parameterized complexity, as well as almost tight complexity bounds for the problem under the Exponential Time Hypothesis (ETH).

Cite as

Klaus Heeger and Danny Hermelin. Minimizing the Weighted Number of Tardy Jobs Is W[1]-Hard. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 68:1-68:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{heeger_et_al:LIPIcs.ESA.2024.68,
  author =	{Heeger, Klaus and Hermelin, Danny},
  title =	{{Minimizing the Weighted Number of Tardy Jobs Is W\lbrack1\rbrack-Hard}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{68:1--68:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.68},
  URN =		{urn:nbn:de:0030-drops-211392},
  doi =		{10.4230/LIPIcs.ESA.2024.68},
  annote =	{Keywords: single-machine scheduling, number of different weights, number of different processing times}
}
Document
Parameterized Dynamic Data Structure for Split Completion

Authors: Konrad Majewski, Michał Pilipczuk, and Anna Zych-Pawlewicz

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We design a randomized data structure that, for a fully dynamic graph G updated by edge insertions and deletions and integers k, d fixed upon initialization, maintains the answer to the Split Completion problem: whether one can add k edges to G to obtain a split graph. The data structure can be initialized on an edgeless n-vertex graph in time n ⋅ (k d ⋅ log n)^{𝒪(1)}, and the amortized time complexity of an update is 5^k ⋅ (k d ⋅ log n)^{𝒪(1)}. The answer provided by the data structure is correct with probability 1-𝒪(n^{-d}).
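
The static question underneath this data structure, recognizing a split graph, has a neat degree-sequence characterization due to Hammer and Simeone. A Python sketch of that static check follows (the dynamic maintenance of Split Completion under edge updates, which is the paper's actual contribution, is far more involved; the example graphs are ours):

def is_split(adj):
    """Hammer-Simeone test: with degrees d_1 >= ... >= d_n and
    m = max{i : d_i >= i - 1}, a graph is split (its vertex set
    partitions into a clique plus an independent set) iff
    sum_{i <= m} d_i == m(m-1) + sum_{i > m} d_i.
    adj: dict mapping each vertex to its set of neighbours."""
    d = sorted((len(nbrs) for nbrs in adj.values()), reverse=True)
    m = max((i + 1 for i in range(len(d)) if d[i] >= i), default=0)
    return sum(d[:m]) == m * (m - 1) + sum(d[m:])

# Triangle {a,b,c} plus a pendant vertex d attached to b: split.
print(is_split({"a": {"b", "c"}, "b": {"a", "c", "d"},
                "c": {"a", "b"}, "d": {"b"}}))                 # True
# The 4-cycle is a forbidden induced subgraph for split graphs.
print(is_split({1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}))  # False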

Cite as

Konrad Majewski, Michał Pilipczuk, and Anna Zych-Pawlewicz. Parameterized Dynamic Data Structure for Split Completion. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 87:1-87:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{majewski_et_al:LIPIcs.ESA.2024.87,
  author =	{Majewski, Konrad and Pilipczuk, Micha{\l} and Zych-Pawlewicz, Anna},
  title =	{{Parameterized Dynamic Data Structure for Split Completion}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{87:1--87:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.87},
  URN =		{urn:nbn:de:0030-drops-211587},
  doi =		{10.4230/LIPIcs.ESA.2024.87},
  annote =	{Keywords: parameterized complexity, dynamic data structures, split graphs}
}
Document
Euclidean Capacitated Vehicle Routing in the Random Setting: A 1.55-Approximation Algorithm

Authors: Zipei Nie and Hang Zhou

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study the unit-demand capacitated vehicle routing problem in the random setting of the Euclidean plane. The objective is to visit n random terminals in a square using a set of tours of minimum total length, such that each tour visits the depot and at most k terminals. We design an algorithm combining the classical sweep heuristic and the framework for the Euclidean traveling salesman problem due to Arora [J. ACM 1998] and Mitchell [SICOMP 1999]. We show that our algorithm is a polynomial-time approximation of ratio at most 1.55 asymptotically almost surely. This improves on the prior ratio of 1.915 due to Mathieu and Zhou [RSA 2022]. In addition, we conjecture that, for any ε > 0, our algorithm is a (1+ε)-approximation asymptotically almost surely.
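
The sweep heuristic half of this combination is simple to state: order the terminals by polar angle around the depot and cut that circular order into consecutive groups of at most k, one tour per group. A minimal Python sketch of the grouping step only (the paper routes each group with Arora/Mitchell-style machinery, which is not shown; names and coordinates are illustrative):

import math

def sweep_groups(depot, terminals, k):
    """Sweep heuristic skeleton for capacitated vehicle routing:
    sort terminals by angle around the depot, then split the order
    into consecutive batches of at most k terminals each."""
    ordered = sorted(terminals,
                     key=lambda p: math.atan2(p[1] - depot[1], p[0] - depot[0]))
    return [ordered[i:i + k] for i in range(0, len(ordered), k)]

print(sweep_groups((0, 0), [(1, 0), (0, 1), (-1, 0), (0, -1), (1, 1)], k=2))
# [[(0, -1), (1, 0)], [(1, 1), (0, 1)], [(-1, 0)]]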

Cite as

Zipei Nie and Hang Zhou. Euclidean Capacitated Vehicle Routing in the Random Setting: A 1.55-Approximation Algorithm. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 91:1-91:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{nie_et_al:LIPIcs.ESA.2024.91,
  author =	{Nie, Zipei and Zhou, Hang},
  title =	{{Euclidean Capacitated Vehicle Routing in the Random Setting: A 1.55-Approximation Algorithm}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{91:1--91:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.91},
  URN =		{urn:nbn:de:0030-drops-211627},
  doi =		{10.4230/LIPIcs.ESA.2024.91},
  annote =	{Keywords: capacitated vehicle routing, approximation algorithm, combinatorial optimization}
}
Document
Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM

Authors: Tim Randolph and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study the parameterized complexity of algorithmic problems whose input is an integer set A in terms of the doubling constant 𝒞 := |A+A| / |A|, a fundamental measure of additive structure. We present evidence that this new parameterization is algorithmically useful in the form of new results for two difficult, well-studied problems: Integer Programming and Subset Sum. First, we show that determining the feasibility of bounded Integer Programs is a tractable problem when parameterized in the doubling constant. Specifically, we prove that the feasibility of an integer program ℐ with n polynomially bounded variables and m constraints can be determined in time n^{O_𝒞(1)} ⋅ poly(|ℐ|) when the column set of the constraint matrix has doubling constant 𝒞. Second, we show that the Subset Sum and Unbounded Subset Sum problems can be solved in time n^{O_𝒞(1)} and n^{O_𝒞(log log log n)}, respectively, where the O_𝒞 notation hides functions that depend only on the doubling constant 𝒞. We also show the equivalence of achieving an FPT algorithm for Subset Sum with bounded doubling and achieving a milestone result for the parameterized complexity of Box ILP. Finally, we design near-linear time algorithms for k-SUM, and prove tight lower bounds for 4-SUM and nearly tight lower bounds for k-SUM under the k-SUM conjecture. Several of our results rely on a new proof that Freiman’s Theorem, a central result in additive combinatorics, can be made efficiently constructive. This result may be of independent interest.
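
The parameter itself is easy to compute by brute force, which helps to internalize the definition. A quadratic-time Python sketch with illustrative sets:

def doubling_constant(a):
    """Doubling constant C = |A+A| / |A| of a finite integer set A,
    computed naively in O(|A|^2) time."""
    a = set(a)
    return len({x + y for x in a for y in a}) / len(a)

# Arithmetic progressions have nearly minimal doubling ...
print(doubling_constant(range(1, 11)))                 # 1.9  (|A+A| = 19)
# ... while geometric progressions have nearly maximal doubling.
print(doubling_constant([2 ** i for i in range(10)]))  # 5.5  (|A+A| = 55)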

Cite as

Tim Randolph and Karol Węgrzycki. Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 96:1-96:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{randolph_et_al:LIPIcs.ESA.2024.96,
  author =	{Randolph, Tim and W\k{e}grzycki, Karol},
  title =	{{Parameterized Algorithms on Integer Sets with Small Doubling: Integer Programming, Subset Sum and k-SUM}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{96:1--96:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.96},
  URN =		{urn:nbn:de:0030-drops-211672},
  doi =		{10.4230/LIPIcs.ESA.2024.96},
  annote =	{Keywords: Parameterized algorithms, parameterized complexity, additive combinatorics, Subset Sum, integer programming, doubling constant}
}
Document
Track A: Algorithms, Complexity and Games
Lower Bounds for Matroid Optimization Problems with a Linear Constraint

Authors: Ilan Doron-Arad, Ariel Kulik, and Hadas Shachnai

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We study a family of matroid optimization problems with a linear constraint (MOL). In these problems, we seek a subset of elements which optimizes (i.e., maximizes or minimizes) a linear objective function subject to (i) a matroid independent set or matroid basis constraint, and (ii) an additional linear constraint. A notable member in this family is budgeted matroid independent set (BM), which can be viewed as classic 0/1-Knapsack with a matroid constraint. While special cases of BM, such as knapsack with cardinality constraint and multiple-choice knapsack, admit a fully polynomial-time approximation scheme (Fully PTAS), the best known result for BM on a general matroid is an Efficient PTAS. Prior to this work, the existence of a Fully PTAS for BM, and more generally, for any problem in the family of MOL problems, was open. In this paper, we answer this question negatively by showing that none of the (non-trivial) problems in this family admits a Fully PTAS. This resolves the complexity status of several well-studied problems. Our main result is obtained by showing first that exact weight matroid basis (EMB) does not admit a pseudo-polynomial time algorithm. This distinguishes EMB from the special cases of k-subset sum and EMB on a linear matroid, which are solvable in pseudo-polynomial time. We then obtain unconditional hardness results for the family of MOL problems in the oracle model (even if randomization is allowed), and show that the same results hold when the matroids are encoded as part of the input, assuming P ≠ NP. For the hardness proof of EMB, we introduce the Π-matroid family. This intricate subclass of matroids, which exploits the interaction between a weight function and the matroid constraint, may find use in tackling other matroid optimization problems.

Cite as

Ilan Doron-Arad, Ariel Kulik, and Hadas Shachnai. Lower Bounds for Matroid Optimization Problems with a Linear Constraint. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 56:1-56:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{doronarad_et_al:LIPIcs.ICALP.2024.56,
  author =	{Doron-Arad, Ilan and Kulik, Ariel and Shachnai, Hadas},
  title =	{{Lower Bounds for Matroid Optimization Problems with a Linear Constraint}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{56:1--56:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.56},
  URN =		{urn:nbn:de:0030-drops-201990},
  doi =		{10.4230/LIPIcs.ICALP.2024.56},
  annote =	{Keywords: matroid optimization, budgeted problems, knapsack, approximation schemes}
}
Document
Track A: Algorithms, Complexity and Games
Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time

Authors: Nick Fischer and Leo Wennmann

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this work we revisit the elementary scheduling problem 1||∑ p_j U_j. The goal is to select, among n jobs with processing times and due dates, a subset of jobs with maximum total processing time that can be scheduled in sequence without violating their due dates. This problem is NP-hard, but a classical algorithm by Lawler and Moore from the 60s solves this problem in pseudo-polynomial time O(nP), where P is the total processing time of all jobs. With the aim to develop best-possible pseudo-polynomial-time algorithms, a recent wave of results has improved Lawler and Moore’s algorithm for 1||∑ p_j U_j: First to time Õ(P^{7/4}) [Bringmann, Fischer, Hermelin, Shabtay, Wellnitz; ICALP'20], then to time Õ(P^{5/3}) [Klein, Polak, Rohwedder; SODA'23], and finally to time Õ(P^{7/5}) [Schieber, Sitaraman; WADS'23]. It remained an exciting open question whether these works can be improved further. In this work we develop an algorithm in near-linear time Õ(P) for the 1||∑ p_j U_j problem. This running time not only significantly improves upon the previous results, but also matches conditional lower bounds based on the Strong Exponential Time Hypothesis or the Set Cover Hypothesis and is therefore likely optimal (up to subpolynomial factors). Our new algorithm also extends to the case of m machines in time Õ(P^m). In contrast to the previous improvements, we take a different, more direct approach inspired by the recent reductions from Modular Subset Sum to dynamic string problems. We thereby arrive at a satisfyingly simple algorithm.
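
The Lawler-Moore baseline is a one-line dynamic program once the jobs are in earliest-due-date order. A Python sketch of that O(nP) classic follows (the paper's Õ(P) algorithm, based on dynamic string data structures, is not reproduced here; the example jobs are ours):

def max_on_time_processing(jobs):
    """Lawler-Moore DP for 1||sum p_j U_j: maximum total processing time
    of a job subset that can be scheduled without tardiness. Sort by due
    date; dp holds the achievable on-time processing-time totals.
    jobs: list of (processing_time, due_date) pairs. O(nP) time."""
    dp = {0}
    for p, d in sorted(jobs, key=lambda j: j[1]):
        dp |= {t + p for t in dp if t + p <= d}
    return max(dp)

# (p, d) pairs: jobs (2, 3) and (3, 6) fit on time; adding (4, 5) cannot help.
print(max_on_time_processing([(2, 3), (4, 5), (3, 6)]))  # 5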

Cite as

Nick Fischer and Leo Wennmann. Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 64:1-64:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{fischer_et_al:LIPIcs.ICALP.2024.64,
  author =	{Fischer, Nick and Wennmann, Leo},
  title =	{{Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{64:1--64:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.64},
  URN =		{urn:nbn:de:0030-drops-202079},
  doi =		{10.4230/LIPIcs.ICALP.2024.64},
  annote =	{Keywords: Scheduling, Fine-Grained Complexity, Dynamic Strings}
}
Document
Track A: Algorithms, Complexity and Games
A Faster Algorithm for Pigeonhole Equal Sums

Authors: Ce Jin and Hongxun Wu

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
An important area of research in exact algorithms is to solve Subset-Sum-type problems faster than meet-in-middle. In this paper we study Pigeonhole Equal Sums, a total search problem proposed by Papadimitriou (1994): given n positive integers w₁,… ,w_n of total sum ∑_{i = 1}ⁿ w_i < 2ⁿ-1, the task is to find two distinct subsets A, B ⊆ [n] such that ∑_{i ∈ A}w_i = ∑_{i ∈ B}w_i. Similar to the status of the Subset Sum problem, the best known algorithm for Pigeonhole Equal Sums runs in O^*(2^{n/2}) time, via either meet-in-middle or dynamic programming (Allcock, Hamoudi, Joux, Klingelhöfer, and Santha, 2022). Our main result is an improved algorithm for Pigeonhole Equal Sums in O^*(2^{0.4n}) time. We also give a polynomial-space algorithm in O^*(2^{0.75n}) time. Unlike many previous works in this area, our approach does not use the representation method, but rather exploits a simple structural characterization of input instances with few solutions.
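
The pigeonhole guarantee is immediate: with total sum below 2ⁿ-1 there are 2ⁿ subsets but fewer than 2ⁿ possible sums, so two subsets must collide. The following Python sketch finds such a collision by plain enumeration in O^*(2ⁿ) time, purely to illustrate the problem statement; the algorithms above are far faster:

def pigeonhole_equal_sums(w):
    """Find two distinct subsets of w (returned as index sets) with the
    same sum, by enumerating all subsets and hashing their sums.
    Guaranteed to succeed whenever sum(w) < 2^len(w) - 1."""
    subsets = [frozenset()]
    for i in range(len(w)):
        subsets += [s | {i} for s in subsets]
    seen = {}
    for s in subsets:
        total = sum(w[i] for i in s)
        if total in seen:
            return seen[total], s
        seen[total] = s
    return None

print(pigeonhole_equal_sums([1, 2, 3, 5]))
# (frozenset({0, 1}), frozenset({2})): both sum to 3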

Cite as

Ce Jin and Hongxun Wu. A Faster Algorithm for Pigeonhole Equal Sums. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 94:1-94:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{jin_et_al:LIPIcs.ICALP.2024.94,
  author =	{Jin, Ce and Wu, Hongxun},
  title =	{{A Faster Algorithm for Pigeonhole Equal Sums}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{94:1--94:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.94},
  URN =		{urn:nbn:de:0030-drops-202375},
  doi =		{10.4230/LIPIcs.ICALP.2024.94},
  annote =	{Keywords: Subset Sum, Pigeonhole, PPP}
}
Document
Track A: Algorithms, Complexity and Games
No Polynomial Kernels for Knapsack

Authors: Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
This paper focuses on kernelization algorithms for the fundamental Knapsack problem. A kernelization algorithm (or kernel) is a polynomial-time reduction from a problem onto itself, where the output size is bounded by a function of some problem-specific parameter. Such algorithms provide a theoretical model for data reduction and preprocessing and are central in the area of parameterized complexity. In this way, a kernel for Knapsack for some parameter k reduces any instance of Knapsack to an equivalent instance of size at most f(k) in polynomial time, for some computable function f. When f(k) = k^{O(1)} we call such a reduction a polynomial kernel. Our study focuses on two natural parameters for Knapsack: the number w_# of different item weights, and the number p_# of different item profits. Our main technical contribution is a proof showing that Knapsack does not admit a polynomial kernel for either of these two parameters under standard complexity-theoretic assumptions. Our proof develops an elaborate application of the standard kernelization lower bound framework, introducing along the way novel ideas that should be useful for other problems as well. We complement our lower bounds by showing that Knapsack admits a polynomial kernel for the combined parameter w_# ⋅ p_#.

Cite as

Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay. No Polynomial Kernels for Knapsack. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 83:1-83:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{heeger_et_al:LIPIcs.ICALP.2024.83,
  author =	{Heeger, Klaus and Hermelin, Danny and Mnich, Matthias and Shabtay, Dvir},
  title =	{{No Polynomial Kernels for Knapsack}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{83:1--83:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.83},
  URN =		{urn:nbn:de:0030-drops-202261},
  doi =		{10.4230/LIPIcs.ICALP.2024.83},
  annote =	{Keywords: Knapsack, polynomial kernels, compositions, number of different weights, number of different profits}
}
  • Refine by Author
  • 11 Węgrzycki, Karol
  • 4 Pilipczuk, Michał
  • 3 Bringmann, Karl
  • 3 Künnemann, Marvin
  • 3 Zych-Pawlewicz, Anna

  • Refine by Classification
  • 8 Theory of computation → Parameterized complexity and exact algorithms
  • 5 Theory of computation → Design and analysis of algorithms
  • 4 Mathematics of computing → Combinatorial optimization
  • 4 Theory of computation → Computational geometry
  • 2 Theory of computation → Algorithm design techniques

  • Refine by Keyword
  • 3 Fine-Grained Complexity
  • 3 Subset Sum
  • 2 Approximation Algorithms
  • 2 Knapsack
  • 2 Parameterized algorithms

  • Refine by Type
  • 28 document

  • Refine by Publication Year
  • 18 2024
  • 4 2023
  • 2 2017
  • 2 2022
  • 1 2019
