12 Search Results for "Stein, Max"


Document
Multiway Cuts with a Choice of Representatives

Authors: Kristóf Bérczi, Tamás Király, and Daniel P. Szabo

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
In the Multiway Cut problem, we are given an undirected graph with nonnegative edge weights and a subset of k terminals, and the goal is to determine a set of edges of minimum total weight whose removal disconnects each terminal from the rest. The problem is APX-hard for k ≥ 3, and an extensive line of research has concentrated on closing the gap between the best upper and lower bounds for approximability and inapproximability, respectively. In this paper, we study several generalizations of Multiway Cut where the terminals can be chosen as representatives from sets of candidates T₁,…,T_q. In this setting, one is allowed to choose these representatives so that the minimum-weight cut separating these sets via their representatives is as small as possible. We distinguish different cases depending on (A) whether the representative of a candidate set has to be separated from the other candidate sets completely or only from the representatives, and (B) whether there is a single representative for each candidate set or the choice of representative is independent for each pair of candidate sets. For fixed q, we give approximation algorithms for each of these problems that match the best known approximation guarantee for Multiway Cut. Our technical contribution is a new extension of the CKR relaxation that preserves approximation guarantees. For general q, we show o(log q)-inapproximability for all cases where the choice of representatives may depend on the pair of candidate sets, as well as for the case where the goal is to separate a fixed node from a single representative from each candidate set. As a positive result, we give a 2-approximation algorithm for the case where we need to choose a single representative from each candidate set. This is a generalization of the (2-2/k)-approximation for k-Cut, and we can solve it by relating the tree case to optimization over a gammoid.
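
For orientation, and not as a restatement of the paper's new relaxation: the CKR relaxation of Călinescu, Karloff, and Rabani, which the paper extends from single terminals to candidate sets, embeds every vertex into the standard simplex and pins each terminal to a distinct corner:

\[
  \min \;\; \tfrac{1}{2} \sum_{uv \in E} w(uv)\, \lVert x_u - x_v \rVert_1
  \quad \text{s.t.} \quad x_v \in \Delta_k \;\; (v \in V), \qquad x_{t_i} = e_i \;\; (i = 1, \dots, k),
\]

where \Delta_k is the standard simplex in \mathbb{R}^k and e_i is its i-th vertex. The rounding schemes behind the approximation guarantees mentioned above operate on a fractional solution of this program.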

Cite as

Kristóf Bérczi, Tamás Király, and Daniel P. Szabo. Multiway Cuts with a Choice of Representatives. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX


@InProceedings{berczi_et_al:LIPIcs.MFCS.2024.25,
  author =	{B\'{e}rczi, Krist\'{o}f and Kir\'{a}ly, Tam\'{a}s and Szabo, Daniel P.},
  title =	{{Multiway Cuts with a Choice of Representatives}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.25},
  URN =		{urn:nbn:de:0030-drops-205813},
  doi =		{10.4230/LIPIcs.MFCS.2024.25},
  annote =	{Keywords: Approximation algorithms, Multiway cut, CKR relaxation, Steiner multicut}
}
Document
Track A: Algorithms, Complexity and Games
The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants

Authors: Emile Anand, Jan van den Brand, Mehrdad Ghadiri, and Daniel J. Zhang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Many iterative algorithms in computer science require repeated computation of some algebraic expression whose input varies slightly from one iteration to the next. Although efficient data structures have been proposed for maintaining the solution of such algebraic expressions under low-rank updates, most of these results are only analyzed under exact arithmetic (real-RAM model and finite fields), which may not accurately reflect the more limited complexity guarantees of real computers. In this paper, we analyze the stability and bit complexity of such data structures for expressions that involve the inversion, multiplication, addition, and subtraction of matrices under the word-RAM model. We show that the bit complexity only increases linearly in the number of matrix operations in the expression. In addition, we consider the bit complexity of maintaining the determinant of a matrix expression. We show that the required bit complexity depends on the logarithm of the condition number of matrices instead of the logarithm of their determinant. Finally, we discuss rank maintenance and its connections to determinant maintenance. Our results have wide applications ranging from computational geometry (e.g., computing the volume of a polytope) to optimization (e.g., solving linear programs using the simplex algorithm).
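
The kind of low-rank maintenance discussed above can be illustrated with the standard Sherman–Morrison–Woodbury identity, here in plain NumPy floating-point arithmetic rather than the word-RAM analysis the paper carries out; the function name and the random test instance are ours, purely for illustration.

import numpy as np

def woodbury_update(A_inv, U, V):
    """Return (A + U V^T)^{-1} given A^{-1}, via the Woodbury identity.

    A_inv: (n, n) inverse of A; U, V: (n, r) factors of a rank-r update.
    Costs O(n^2 r + r^3) instead of the O(n^3) of recomputing from scratch.
    """
    r = U.shape[1]
    C = np.eye(r) + V.T @ A_inv @ U          # small r x r "capacitance" matrix
    return A_inv - A_inv @ U @ np.linalg.solve(C, V.T @ A_inv)

# Sanity check on a random, well-conditioned instance.
rng = np.random.default_rng(0)
n, r = 50, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
U, V = 0.1 * rng.standard_normal((n, r)), 0.1 * rng.standard_normal((n, r))
assert np.allclose(woodbury_update(np.linalg.inv(A), U, V), np.linalg.inv(A + U @ V.T))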

Cite as

Emile Anand, Jan van den Brand, Mehrdad Ghadiri, and Daniel J. Zhang. The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{anand_et_al:LIPIcs.ICALP.2024.10,
  author =	{Anand, Emile and van den Brand, Jan and Ghadiri, Mehrdad and Zhang, Daniel J.},
  title =	{{The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.10},
  URN =		{urn:nbn:de:0030-drops-201538},
  doi =		{10.4230/LIPIcs.ICALP.2024.10},
  annote =	{Keywords: Data Structures, Online Algorithms, Bit Complexity}
}
Document
Track A: Algorithms, Complexity and Games
High-Accuracy Multicommodity Flows via Iterative Refinement

Authors: Li Chen and Mingquan Ye

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
The multicommodity flow problem is a classic problem in network flow and combinatorial optimization, with applications in transportation, communication, logistics, supply chain management, and more. Existing algorithms often focus on low-accuracy approximate solutions, while high-accuracy algorithms typically rely on general linear program solvers. In this paper, we present efficient high-accuracy algorithms for a broad family of multicommodity flow problems on undirected graphs, demonstrating improved running times compared to general linear program solvers. Our main result shows that we can solve the 𝓁_{q, p}-norm multicommodity flow problem to a (1 + ε) approximation in time O_{q, p}(m^{1+o(1)} k² log(1/ε)), where k is the number of commodities, and O_{q, p}(⋅) hides constants depending only on q or p. As q and p approach 1 and ∞, respectively, 𝓁_{q, p}-norm flow tends to maximum concurrent flow. We introduce the first iterative refinement framework for 𝓁_{q, p}-norm minimization problems, which reduces the problem to solving a series of decomposable residual problems. In the case of k-commodity flow, each residual problem can be decomposed into k single-commodity convex flow problems, each of which can be solved in almost-linear time. As many classical variants of multicommodity flows were shown to be complete for linear programs in the high-accuracy regime [Ding-Kyng-Zhang, ICALP'22], our result provides new directions for studying more efficient high-accuracy multicommodity flow algorithms.

Cite as

Li Chen and Mingquan Ye. High-Accuracy Multicommodity Flows via Iterative Refinement. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 45:1-45:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2024.45,
  author =	{Chen, Li and Ye, Mingquan},
  title =	{{High-Accuracy Multicommodity Flows via Iterative Refinement}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{45:1--45:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.45},
  URN =		{urn:nbn:de:0030-drops-201887},
  doi =		{10.4230/LIPIcs.ICALP.2024.45},
  annote =	{Keywords: High-accuracy multicommodity flow, Iterative refinement framework, Convex flow solver}
}
Document
Track A: Algorithms, Complexity and Games
Simultaneously Approximating All 𝓁_p-Norms in Correlation Clustering

Authors: Sami Davies, Benjamin Moseley, and Heather Newman

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
This paper considers correlation clustering on unweighted complete graphs. We give a combinatorial algorithm that returns a single clustering solution that is simultaneously O(1)-approximate for all 𝓁_p-norms of the disagreement vector; in other words, a combinatorial O(1)-approximation of the all-norms objective for correlation clustering. This is the first proof that minimal sacrifice is needed in order to optimize different norms of the disagreement vector. In addition, our algorithm is the first combinatorial approximation algorithm for the 𝓁₂-norm objective, and more generally the first combinatorial algorithm for the 𝓁_p-norm objective when 1 < p < ∞. It is also faster than all previous algorithms that minimize the 𝓁_p-norm of the disagreement vector, with run-time O(n^ω), where n^ω is the time for matrix multiplication on n × n matrices. When the maximum positive degree in the graph is at most Δ, this can be improved to a run-time of O(nΔ² log n).
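
To pin down the objective (this is only the objective, not the paper's combinatorial algorithm): given a clustering of an unweighted complete graph with "+"/"-" labelled edges, the disagreement vector records, per vertex, the number of incident "+" edges cut by the clustering plus the number of incident "-" edges kept inside its cluster, and the 𝓁_p-norm objective is the 𝓁_p-norm of this vector. A small Python sketch with a toy instance of ours:

from itertools import combinations

def disagreement_vector(n, plus_edges, cluster_of):
    """Per-vertex disagreement counts for correlation clustering on K_n.

    plus_edges: set of frozensets {u, v} labelled '+'; all other pairs are '-'.
    cluster_of: dict mapping each vertex 0..n-1 to a cluster id.
    An edge disagrees if it is '+' across clusters or '-' inside a cluster.
    """
    disagree = [0] * n
    for u, v in combinations(range(n), 2):
        same_cluster = cluster_of[u] == cluster_of[v]
        positive = frozenset((u, v)) in plus_edges
        if positive != same_cluster:
            disagree[u] += 1
            disagree[v] += 1
    return disagree

def lp_norm(vec, p):
    return max(vec) if p == float("inf") else sum(x ** p for x in vec) ** (1 / p)

# Two natural clusters {0, 1, 2} and {3, 4} plus one noisy '+' edge (2, 3).
plus = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (3, 4), (2, 3)]}
d = disagreement_vector(5, plus, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1})
print(d, lp_norm(d, 1), lp_norm(d, 2), lp_norm(d, float("inf")))  # [0, 0, 1, 1, 0] 2.0 1.414... 1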

Cite as

Sami Davies, Benjamin Moseley, and Heather Newman. Simultaneously Approximating All 𝓁_p-Norms in Correlation Clustering. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 52:1-52:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{davies_et_al:LIPIcs.ICALP.2024.52,
  author =	{Davies, Sami and Moseley, Benjamin and Newman, Heather},
  title =	{{Simultaneously Approximating All 𝓁\_p-Norms in Correlation Clustering}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{52:1--52:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.52},
  URN =		{urn:nbn:de:0030-drops-201950},
  doi =		{10.4230/LIPIcs.ICALP.2024.52},
  annote =	{Keywords: Approximation algorithms, correlation clustering, all-norms, lp-norms}
}
Document
Track A: Algorithms, Complexity and Games
Two-Sets Cut-Uncut on Planar Graphs

Authors: Matthias Bentert, Pål Grønås Drange, Fedor V. Fomin, Petr A. Golovach, and Tuukka Korhonen

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We study Two-Sets Cut-Uncut on planar graphs. Therein, one is given an undirected planar graph G and two disjoint sets S and T of vertices as input. The question is: what is the minimum number of edges to remove from G so that all vertices in S are separated from all vertices in T, while all vertices of S stay in one connected component and, likewise, all vertices of T stay in one connected component? We show that this problem can be solved in 2^{|S|+|T|} n^𝒪(1) time with a one-sided-error randomized algorithm. Our algorithm implies a polynomial-time algorithm for the network diversion problem on planar graphs, which resolves an open question from the literature. More generally, we show that Two-Sets Cut-Uncut is fixed-parameter tractable when parameterized by the number r of faces in a planar embedding covering the terminals S ∪ T, by providing a 2^𝒪(r) n^𝒪(1)-time algorithm.
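
On tiny instances the definition can be checked mechanically; the exhaustive search below (ours, for illustration only, and exponential in the number of edges) merely encodes the problem statement, not the paper's randomized 2^{|S|+|T|} n^𝒪(1) algorithm.

from itertools import combinations

def two_sets_cut_uncut_bruteforce(vertices, edges, S, T):
    """Fewest edges whose removal separates S from T while keeping all of S
    in one connected component and all of T in one connected component."""
    def component_finder(kept_edges):
        parent = {v: v for v in vertices}          # simple union-find
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in kept_edges:
            parent[find(u)] = find(v)
        return find

    for k in range(len(edges) + 1):
        for removed in combinations(edges, k):
            find = component_finder([e for e in edges if e not in removed])
            s_roots = {find(v) for v in S}
            t_roots = {find(v) for v in T}
            if len(s_roots) == 1 and len(t_roots) == 1 and s_roots != t_roots:
                return k
    return None                                    # impossible, e.g. S and T intersect

# Path 0-1-2-3 with S = {0, 1}, T = {3}: removing the single edge (1, 2) suffices.
print(two_sets_cut_uncut_bruteforce(range(4), [(0, 1), (1, 2), (2, 3)], {0, 1}, {3}))  # 1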

Cite as

Matthias Bentert, Pål Grønås Drange, Fedor V. Fomin, Petr A. Golovach, and Tuukka Korhonen. Two-Sets Cut-Uncut on Planar Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 22:1-22:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bentert_et_al:LIPIcs.ICALP.2024.22,
  author =	{Bentert, Matthias and Drange, P\r{a}l Gr{\o}n\r{a}s and Fomin, Fedor V. and Golovach, Petr A. and Korhonen, Tuukka},
  title =	{{Two-Sets Cut-Uncut on Planar Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{22:1--22:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.22},
  URN =		{urn:nbn:de:0030-drops-201654},
  doi =		{10.4230/LIPIcs.ICALP.2024.22},
  annote =	{Keywords: planar graphs, cut-uncut, group-constrained paths}
}
Document
Track A: Algorithms, Complexity and Games
Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time

Authors: Nick Fischer and Leo Wennmann

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this work we revisit the elementary scheduling problem 1||∑ p_j U_j. The goal is to select, among n jobs with processing times and due dates, a subset of jobs with maximum total processing time that can be scheduled in sequence without violating their due dates. This problem is NP-hard, but a classical algorithm by Lawler and Moore from the 60s solves this problem in pseudo-polynomial time O(nP), where P is the total processing time of all jobs. With the aim to develop best-possible pseudo-polynomial-time algorithms, a recent wave of results has improved Lawler and Moore’s algorithm for 1||∑ p_j U_j: First to time Õ(P^{7/4}) [Bringmann, Fischer, Hermelin, Shabtay, Wellnitz; ICALP'20], then to time Õ(P^{5/3}) [Klein, Polak, Rohwedder; SODA'23], and finally to time Õ(P^{7/5}) [Schieber, Sitaraman; WADS'23]. It remained an exciting open question whether these works can be improved further. In this work we develop an algorithm in near-linear time Õ(P) for the 1||∑ p_j U_j problem. This running time not only significantly improves upon the previous results, but also matches conditional lower bounds based on the Strong Exponential Time Hypothesis or the Set Cover Hypothesis and is therefore likely optimal (up to subpolynomial factors). Our new algorithm also extends to the case of m machines in time Õ(P^m). In contrast to the previous improvements, we take a different, more direct approach inspired by the recent reductions from Modular Subset Sum to dynamic string problems. We thereby arrive at a satisfyingly simple algorithm.
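
For context, the classical Lawler–Moore dynamic program that this line of work accelerates fits in a few lines; the sketch below implements that O(nP) baseline (not the new near-linear-time algorithm), with a small example of ours.

def min_tardy_processing_time(jobs):
    """Lawler-Moore dynamic program for 1 || sum p_j U_j.

    jobs: list of (processing_time, due_date) pairs.
    Returns the minimum total processing time of tardy jobs, in O(n P) time
    where P is the total processing time of all jobs.
    """
    jobs = sorted(jobs, key=lambda job: job[1])     # earliest-due-date order
    P = sum(p for p, _ in jobs)
    reachable = [False] * (P + 1)                   # reachable[t]: some subset of
    reachable[0] = True                             # on-time jobs finishes at time t
    for p, d in jobs:
        for t in range(min(d, P) - p, -1, -1):      # schedule the job only if t + p <= d
            if reachable[t]:
                reachable[t + p] = True
    return P - max(t for t in range(P + 1) if reachable[t])

# Jobs given as (processing_time, due_date); the best schedule leaves 2 units tardy.
print(min_tardy_processing_time([(2, 2), (3, 5), (2, 4)]))  # 2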

Cite as

Nick Fischer and Leo Wennmann. Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 64:1-64:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{fischer_et_al:LIPIcs.ICALP.2024.64,
  author =	{Fischer, Nick and Wennmann, Leo},
  title =	{{Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{64:1--64:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.64},
  URN =		{urn:nbn:de:0030-drops-202079},
  doi =		{10.4230/LIPIcs.ICALP.2024.64},
  annote =	{Keywords: Scheduling, Fine-Grained Complexity, Dynamic Strings}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Electrical Oblivious Routing on Expanders

Authors: Cella Florescu, Rasmus Kyng, Maximilian Probst Gutenberg, and Sushant Sachdeva

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this paper, we investigate the question of whether the electrical flow routing is a good oblivious routing scheme on an m-edge graph G = (V, E) that is a Φ-expander, i.e., where |∂ S| ≥ Φ ⋅ vol(S) for every S ⊆ V with vol(S) ≤ vol(V)/2. Beyond its simplicity and structural importance, this question is well-motivated by the current state-of-the-art of fast algorithms for 𝓁_∞ oblivious routings that reduce to the expander case, which is in turn solved by electrical flow routing. Our main result proves that the electrical routing is an O(Φ^{-1} log m)-competitive oblivious routing in the 𝓁₁- and 𝓁_∞-norms. We further observe that the oblivious routing is O(log² m)-competitive in the 𝓁₂-norm and, in fact, O(log m)-competitive if 𝓁₂-localization is O(log m), which is widely believed. Using these three upper bounds, we can smoothly interpolate to obtain upper bounds for every p ∈ [2, ∞] and q given by 1/p + 1/q = 1. Assuming 𝓁₂-localization in O(log m), we obtain that in 𝓁_p and 𝓁_q, the electrical oblivious routing is O(Φ^{-(1-2/p)} log m)-competitive. Using the currently known result for 𝓁₂-localization, this ratio deteriorates by at most a sublogarithmic factor for every p, q ≠ 2. We complement our upper bounds with lower bounds that show that the electrical routing for any such p and q is Ω(Φ^{-(1-2/p)} log m)-competitive. This renders our results in 𝓁₁ and 𝓁_∞ unconditionally tight up to constants, and the result in any 𝓁_p- and 𝓁_q-norm tight if 𝓁₂-localization is O(log m).

Cite as

Cella Florescu, Rasmus Kyng, Maximilian Probst Gutenberg, and Sushant Sachdeva. Optimal Electrical Oblivious Routing on Expanders. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 65:1-65:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{florescu_et_al:LIPIcs.ICALP.2024.65,
  author =	{Florescu, Cella and Kyng, Rasmus and Gutenberg, Maximilian Probst and Sachdeva, Sushant},
  title =	{{Optimal Electrical Oblivious Routing on Expanders}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{65:1--65:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.65},
  URN =		{urn:nbn:de:0030-drops-202083},
  doi =		{10.4230/LIPIcs.ICALP.2024.65},
  annote =	{Keywords: Expanders, Oblivious routing for 𝓁\_p, Electrical flow routing}
}
Document
Incremental (1-ε)-Approximate Dynamic Matching in O(poly(1/ε)) Update Time

Authors: Joakim Blikstad and Peter Kiss

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
In the dynamic approximate maximum bipartite matching problem we are given a bipartite graph G undergoing updates, and our goal is to maintain a matching of G that is large compared to the maximum matching size μ(G). We define a dynamic matching algorithm to be α-approximate (respectively (α, β)-approximate) if it maintains a matching M such that at all times |M| ≥ μ(G) ⋅ α (respectively |M| ≥ μ(G) ⋅ α - β). We present the first deterministic (1-ε)-approximate dynamic matching algorithm with O(poly(ε^{-1})) amortized update time for graphs undergoing edge insertions. Previous solutions required update time that was either super-constant [Gupta FSTTCS'14, Bhattacharya-Kiss-Saranurak SODA'23] or exponential in 1/ε [Grandoni-Leonardi-Sankowski-Schwiegelshohn-Solomon SODA'19]. Our implementation is arguably simpler than the mentioned algorithms and its description is self-contained. Moreover, we show that if we allow for an additive (1, ε⋅n)-approximation, our algorithm seamlessly extends to also handle vertex deletions, on top of edge insertions. This makes our algorithm one of the few small-update-time algorithms for (1-ε)-approximate dynamic matching allowing for updates both increasing and decreasing the maximum matching size of G in a fully dynamic manner. Our algorithm relies on the weighted variant of the celebrated Edge-Degree-Constrained-Subgraph (EDCS) data structure introduced by [Bernstein-Stein ICALP'15]. As far as we are aware, we introduce the first application of the weighted EDCS to arbitrarily dense graphs. We also present a significantly simplified proof for the approximation ratio of the weighted EDCS as a matching sparsifier compared to [Bernstein-Stein], as well as simple descriptions of a fractional matching and a fractional vertex cover defined on top of the EDCS. Considering the wide range of applications the EDCS has found in settings such as streaming, sub-linear, and stochastic algorithms, we hope our techniques will be of independent interest outside of the dynamic setting.
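
For readers unfamiliar with the EDCS: in its standard unweighted form, a subgraph H of G is a (β, β⁻)-EDCS if every edge of H has degree sum at most β in H and every edge of G outside H has degree sum at least β⁻ in H. The checker below (ours, for illustration) encodes this unweighted definition; the algorithm above relies on the weighted variant.

def is_edcs(G_edges, H_edges, beta, beta_minus):
    """Check whether H is a (beta, beta_minus)-EDCS of G (unweighted definition):
    (i)  every edge {u, v} of H satisfies deg_H(u) + deg_H(v) <= beta, and
    (ii) every edge {u, v} of G outside H satisfies deg_H(u) + deg_H(v) >= beta_minus.
    """
    H = {frozenset(e) for e in H_edges}
    deg = {}
    for u, v in H_edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    for u, v in G_edges:
        degree_sum = deg.get(u, 0) + deg.get(v, 0)
        if frozenset((u, v)) in H:
            if degree_sum > beta:
                return False
        elif degree_sum < beta_minus:
            return False
    return True

# Two opposite edges of a 4-cycle form a (2, 2)-EDCS of the cycle.
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_edcs(C4, [(0, 1), (2, 3)], beta=2, beta_minus=2))  # True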

Cite as

Joakim Blikstad and Peter Kiss. Incremental (1-ε)-Approximate Dynamic Matching in O(poly(1/ε)) Update Time. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{blikstad_et_al:LIPIcs.ESA.2023.22,
  author =	{Blikstad, Joakim and Kiss, Peter},
  title =	{{Incremental (1-\epsilon)-Approximate Dynamic Matching in O(poly(1/\epsilon)) Update Time}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.22},
  URN =		{urn:nbn:de:0030-drops-186756},
  doi =		{10.4230/LIPIcs.ESA.2023.22},
  annote =	{Keywords: Bipartite Matching, Incremental Matching, Dynamic Algorithms, Approximation Algorithms, EDCS}
}
Document
Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution

Authors: Karl Bringmann and Alejandro Cassis

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We revisit the classic 0-1-Knapsack problem, in which we are given n items with their weights and profits as well as a weight budget W, and the goal is to find a subset of items of total weight at most W that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item, p_max, and the largest weight of any item, w_max. Our main results are algorithms for 0-1-Knapsack running in time Õ(n w_max p_max^{2/3}) and Õ(n p_max w_max^{2/3}), improving upon an algorithm in time O(n p_max w_max) by Pisinger [J. Algorithms '99]. In the regime p_max ≈ w_max ≈ n (and W ≈ OPT ≈ n²), our algorithms are the first to break the cubic barrier n³. To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function f : [n] → ℤ is Δ-near convex, for Δ ≥ 1, if there is a convex function f̆ such that f̆(i) ≤ f(i) ≤ f̆(i) + Δ for every i. We design an algorithm computing the min-plus convolution of two Δ-near convex functions in time Õ(nΔ). This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
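
To fix notation, the min-plus convolution of two sequences is (f ⊞ g)(k) = min_{i+j=k} f(i) + g(j); the quadratic-time baseline below (not the paper's Õ(nΔ) algorithm for Δ-near convex inputs) simply spells out this definition.

def min_plus_convolution(f, g):
    """Naive O(len(f) * len(g)) min-plus convolution: h[k] = min_{i+j=k} f[i] + g[j]."""
    h = [float("inf")] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if fi + gj < h[i + j]:
                h[i + j] = fi + gj
    return h

print(min_plus_convolution([0, 2, 5], [0, 1, 4]))  # [0, 1, 3, 6, 9]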

Cite as

Karl Bringmann and Alejandro Cassis. Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2023.24,
  author =	{Bringmann, Karl and Cassis, Alejandro},
  title =	{{Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.24},
  URN =		{urn:nbn:de:0030-drops-186776},
  doi =		{10.4230/LIPIcs.ESA.2023.24},
  annote =	{Keywords: Knapsack, Fine-Grained Complexity, Min-Plus Convolution}
}
Document
Maximum Weight b-Matchings in Random-Order Streams

Authors: Chien-Chung Huang and François Sellier

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
We consider the maximum weight b-matching problem in the random-order semi-streaming model. Assuming all weights are small integers drawn from [1,W], we present a (2 - 1/(2W) + ε)-approximation algorithm, using memory O(max(|M_G|, n) ⋅ poly(log(m), W, 1/ε)), where |M_G| denotes the cardinality of the optimal matching. Our result generalizes that of Bernstein [Aaron Bernstein, 2020], which achieves a (3/2 + ε)-approximation for the maximum cardinality simple matching. When W is small, our result also improves upon that of Gamlath et al. [Gamlath et al., 2019], which obtains a (2 - δ)-approximation (for some small constant δ ∼ 10^{-17}) for the maximum weight simple matching. In particular, for the weighted b-matching problem, ours is the first result beating the approximation ratio of 2. Our technique hinges on a generalized weighted version of edge-degree constrained subgraphs, originally developed by Bernstein and Stein [Aaron Bernstein and Cliff Stein, 2015]. Such a subgraph has bounded vertex degree (hence uses only a small number of edges), and can be easily computed. The fact that it contains a (2 - 1/(2W) + ε)-approximation of the maximum weight matching is proved using the classical Kőnig–Egerváry duality theorem.

Cite as

Chien-Chung Huang and François Sellier. Maximum Weight b-Matchings in Random-Order Streams. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 68:1-68:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{huang_et_al:LIPIcs.ESA.2022.68,
  author =	{Huang, Chien-Chung and Sellier, Fran\c{c}ois},
  title =	{{Maximum Weight b-Matchings in Random-Order Streams}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{68:1--68:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.68},
  URN =		{urn:nbn:de:0030-drops-170062},
  doi =		{10.4230/LIPIcs.ESA.2022.68},
  annote =	{Keywords: Maximum weight matching, b-matching, streaming, random order}
}
Document
Telescoping Filter: A Practical Adaptive Filter

Authors: David J. Lee, Samuel McCauley, Shikha Singh, and Max Stein

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
Filters are small, fast, and approximate set membership data structures. They are often used to filter out expensive accesses to a remote set S for negative queries (that is, filtering out queries x ∉ S). Filters have one-sided errors: on a negative query, a filter may say "present" with a tunable false-positive probability of ε. Correctness is traded for space: filters only use log (1/ε) + O(1) bits per element. The false-positive guarantees of most filters, however, hold only for a single query. In particular, if x is a false positive, a subsequent query to x is a false positive with probability 1, not ε. With this in mind, recent work has introduced the notion of an adaptive filter. A filter is adaptive if each query is a false positive with probability ε, regardless of answers to previous queries. This requires "fixing" false positives as they occur. Adaptive filters not only provide strong false positive guarantees in adversarial environments but also improve query performance on practical workloads by eliminating repeated false positives. Existing work on adaptive filters falls into two categories. On the one hand, there are practical filters, based on the cuckoo filter, that attempt to fix false positives heuristically without meeting the adaptivity guarantee. On the other hand, the broom filter is a very complex adaptive filter that meets the optimal theoretical bounds. In this paper, we bridge this gap by designing the telescoping adaptive filter (TAF), a practical, provably adaptive filter. We provide theoretical false-positive and space guarantees for our filter, along with empirical results where we compare its performance against state-of-the-art filters. We also implement the broom filter and compare it to the TAF. Our experiments show that theoretical adaptivity can lead to improved false-positive performance on practical inputs, and can be achieved while maintaining throughput that is similar to non-adaptive filters.
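
As a baseline for the guarantees discussed above, a minimal non-adaptive filter already exhibits the one-sided error and the ε-versus-space trade-off; the sketch below is a plain Bloom filter with the textbook parameter choices, not the telescoping adaptive filter, whose point is precisely that a repeated query to a false positive does not keep failing.

import hashlib
import math

class BloomFilter:
    """Minimal non-adaptive filter: no false negatives, tunable false-positive rate.

    Uses roughly 1.44 * log2(1/eps) bits per element; once an element is a false
    positive here, it stays one forever, which is what adaptive filters fix.
    """
    def __init__(self, capacity, eps):
        self.m = max(1, math.ceil(-capacity * math.log(eps) / math.log(2) ** 2))
        self.k = max(1, round(-math.log2(eps)))          # number of hash functions
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.blake2b(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "little") % self.m

    def insert(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def query(self, item):                               # "present" or "definitely absent"
        return all(self.bits[pos // 8] >> (pos % 8) & 1 for pos in self._positions(item))

bf = BloomFilter(capacity=1000, eps=0.01)
bf.insert("key-42")
assert bf.query("key-42")                                # members always answer "present"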

Cite as

David J. Lee, Samuel McCauley, Shikha Singh, and Max Stein. Telescoping Filter: A Practical Adaptive Filter. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 60:1-60:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{lee_et_al:LIPIcs.ESA.2021.60,
  author =	{Lee, David J. and McCauley, Samuel and Singh, Shikha and Stein, Max},
  title =	{{Telescoping Filter: A Practical Adaptive Filter}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{60:1--60:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.60},
  URN =		{urn:nbn:de:0030-drops-146410},
  doi =		{10.4230/LIPIcs.ESA.2021.60},
  annote =	{Keywords: Filters, approximate-membership query data structures (AMQs), Bloom filters, quotient filters, cuckoo filters, adaptivity, succinct data structures}
}
Document
Routing in Polygonal Domains

Authors: Bahareh Banyassady, Man-Kwun Chiu, Matias Korman, Wolfgang Mulzer, André van Renssen, Marcel Roeloffzen, Paul Seiferth, Yannik Stein, Birgit Vogtenhuber, and Max Willert

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
We consider the problem of routing a data packet through the visibility graph of a polygonal domain P with n vertices and h holes. We may preprocess P to obtain a label and a routing table for each vertex. Then, we must be able to route a data packet between any two vertices p and q of P, where each step must use only the label of the target node q and the routing table of the current node. For any fixed ε > 0, we present a routing scheme that always achieves a routing path that exceeds the shortest path by a factor of at most 1 + ε. The labels have O(log n) bits, and the routing tables are of size O((ε^{-1} + h) log n). The preprocessing time is O(n^2 log n + hn^2 + ε^{-1}hn). It can be improved to O(n^2 + ε^{-1}n) for simple polygons.

Cite as

Bahareh Banyassady, Man-Kwun Chiu, Matias Korman, Wolfgang Mulzer, André van Renssen, Marcel Roeloffzen, Paul Seiferth, Yannik Stein, Birgit Vogtenhuber, and Max Willert. Routing in Polygonal Domains. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 10:1-10:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{banyassady_et_al:LIPIcs.ISAAC.2017.10,
  author =	{Banyassady, Bahareh and Chiu, Man-Kwun and Korman, Matias and Mulzer, Wolfgang and van Renssen, Andr\'{e} and Roeloffzen, Marcel and Seiferth, Paul and Stein, Yannik and Vogtenhuber, Birgit and Willert, Max},
  title =	{{Routing in Polygonal Domains}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{10:1--10:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.10},
  URN =		{urn:nbn:de:0030-drops-82379},
  doi =		{10.4230/LIPIcs.ISAAC.2017.10},
  annote =	{Keywords: polygonal domains, routing scheme, small stretch, Yao graph}
}