13 Search Results for "Cassis, Alejandro"


Document
Faster Algorithm for Bounded Tree Edit Distance in the Low-Distance Regime

Authors: Tomasz Kociumaka and Ali Shahali

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
The tree edit distance is a natural dissimilarity measure between rooted ordered trees whose nodes are labeled over an alphabet Σ. It is defined as the minimum number of node edits - insertions, deletions, and relabelings - required to transform one tree into the other. The weighted variant assigns each edit a cost ≥ 1 (depending on the node labels involved) and minimizes the total cost rather than the number of edits. The unweighted tree edit distance between two trees of total size n can be computed in 𝒪(n^{2.6857}) time; in contrast, computing the weighted tree edit distance is fine-grained equivalent to the All-Pairs Shortest Paths (APSP) problem and requires n³/2^Ω(√{log n}) time [Nogler, Polak, Saha, Vassilevska Williams, Xu, Ye; STOC'25]. These super-quadratic running times are impractical for large trees that are similar to each other, which motivates the bounded version of the problem: the running time is parameterized by the distance k, enabling faster algorithms when k ≪ n. Prior algorithms for bounded unweighted tree edit distance run in 𝒪(nk² log n) time [Akmal & Jin; ICALP'21] and 𝒪(n + k⁷ log k) time [Das, Gilbert, Hajiaghayi, Kociumaka, Saha; STOC'23]. For the weighted variant, only an 𝒪(n + k^{15})-time algorithm is known [Das, Gilbert, Hajiaghayi, Kociumaka, Saha; STOC'23]. We present an 𝒪(n + k⁶ log k)-time algorithm for bounded tree edit distance in both the weighted and the unweighted setting. First, we devise a simpler weighted 𝒪(nk² log n)-time algorithm. Next, we exploit periodic structures in the input trees via an optimized universal kernel: we modify prior 𝒪(n)-time 𝒪(k⁵)-size kernels so that they produce such structured instances, which enables an efficient analysis.
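
As a concrete reference point for the definition above, the following Python sketch evaluates the unit-cost (unweighted) tree edit distance via the textbook forest recursion on rightmost roots. It is a naive memoized implementation intended for tiny trees only, with our own tuple encoding of trees; it is not the algorithm of this paper.

from functools import lru_cache

# A tree is a pair (label, children) with children a tuple of trees;
# a forest is a tuple of trees.

def size(forest):
    return sum(1 + size(t[1]) for t in forest)

@lru_cache(maxsize=None)
def ted(f1, f2):
    # Unit-cost forest edit distance via the classic rightmost-root recursion.
    if not f1:
        return size(f2)  # insert every remaining node
    if not f2:
        return size(f1)  # delete every remaining node
    (a, ca), rest1 = f1[-1], f1[:-1]
    (b, cb), rest2 = f2[-1], f2[:-1]
    return min(
        ted(rest1 + ca, f2) + 1,                     # delete root a
        ted(f1, rest2 + cb) + 1,                     # insert root b
        ted(ca, cb) + ted(rest1, rest2) + (a != b),  # match roots (relabel if a != b)
    )

# Example: a(b, c) vs. a(b(d), c) differ by one insertion of the node d.
t1 = ("a", (("b", ()), ("c", ())))
t2 = ("a", (("b", (("d", ()),)), ("c", ())))
assert ted((t1,), (t2,)) == 1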

Cite as

Tomasz Kociumaka and Ali Shahali. Faster Algorithm for Bounded Tree Edit Distance in the Low-Distance Regime. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 94:1-94:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{kociumaka_et_al:LIPIcs.ESA.2025.94,
  author =	{Kociumaka, Tomasz and Shahali, Ali},
  title =	{{Faster Algorithm for Bounded Tree Edit Distance in the Low-Distance Regime}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{94:1--94:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.94},
  URN =		{urn:nbn:de:0030-drops-245634},
  doi =		{10.4230/LIPIcs.ESA.2025.94},
  annote =	{Keywords: tree edit distance, edit distance, kernelization, dynamic programming}
}
Document
Bounded Weighted Edit Distance: Dynamic Algorithms and Matching Lower Bounds

Authors: Itai Boneh, Egor Gorbachev, and Tomasz Kociumaka

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
The edit distance ed(X,Y) of two strings X,Y ∈ Σ^* is the minimum number of character edits (insertions, deletions, and substitutions) needed to transform X into Y. Its weighted counterpart ed^w(X,Y) minimizes the total cost of edits, where the costs of individual edits, depending on the edit type and the characters involved, are specified using a function w, normalized so that each edit costs at least one. The textbook dynamic-programming procedure, given strings X,Y ∈ Σ^{≤ n} and oracle access to w, computes ed^w(X,Y) in 𝒪(n²) time. Nevertheless, one can achieve better running times if the computed distance, denoted k, is small: 𝒪(n+k²) for unit weights [Landau and Vishkin; JCSS'88] and Õ(n+√{nk³}) for arbitrary weights [Cassis, Kociumaka, Wellnitz; FOCS'23]. In this paper, we study the dynamic version of the weighted edit distance problem, where the goal is to maintain ed^w(X,Y) for strings X,Y ∈ Σ^{≤ n} that change over time, with each update specified as an edit in X or Y. Very recently, Gorbachev and Kociumaka [STOC'25] showed that the unweighted distance ed(X,Y) can be maintained in Õ(k) time per update after Õ(n+k²)-time preprocessing; here, k denotes the current value of ed(X,Y). Their algorithm generalizes to small integer weights, but the underlying approach is incompatible with large weights. Our main result is a dynamic algorithm that maintains ed^w(X,Y) in Õ(k^{3-γ}) time per update after Õ(nk^γ)-time preprocessing. Here, γ ∈ [0,1] is a real trade-off parameter and k ≥ 1 is an integer threshold fixed at preprocessing time, with ∞ returned whenever ed^w(X,Y) > k. We complement our algorithm with conditional lower bounds showing fine-grained optimality of our trade-off for γ ∈ [0.5,1) and justifying our choice to fix k. We also generalize our solution to a much more robust setting while preserving the fine-grained optimal trade-off. Our full algorithm maintains X ∈ Σ^{≤ n} subject not only to character edits but also substring deletions and copy-pastes, each supported in Õ(k²) time. Instead of dynamically maintaining Y, it answers queries that, given any string Y specified through a sequence of 𝒪(k) arbitrary edits transforming X into Y, in Õ(k^{3-γ}) time compute ed^w(X,Y) or report that ed^w(X,Y) > k.
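
For reference, the textbook O(n²) dynamic program mentioned in the abstract is short enough to state in full. Here is a minimal Python sketch, with our own encoding of the oracle w as a function and None marking the absent character of an insertion or deletion:

def weighted_edit_distance(X, Y, w):
    # Textbook O(|X|*|Y|) DP for ed^w(X, Y). The oracle w(a, b) returns the
    # substitution cost; w(a, None) the deletion cost of a; w(None, b) the
    # insertion cost of b (the None-sentinel encoding is our own choice).
    n, m = len(X), len(Y)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + w(X[i - 1], None)
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + w(None, Y[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if X[i - 1] == Y[j - 1] else w(X[i - 1], Y[j - 1])
            D[i][j] = min(D[i - 1][j] + w(X[i - 1], None),   # delete X[i-1]
                          D[i][j - 1] + w(None, Y[j - 1]),   # insert Y[j-1]
                          D[i - 1][j - 1] + sub)             # keep or substitute
    return D[n][m]

# Unit weights recover the unweighted distance: ed("kitten", "sitting") = 3.
assert weighted_edit_distance("kitten", "sitting", lambda a, b: 1) == 3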

Cite as

Itai Boneh, Egor Gorbachev, and Tomasz Kociumaka. Bounded Weighted Edit Distance: Dynamic Algorithms and Matching Lower Bounds. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 45:1-45:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{boneh_et_al:LIPIcs.ESA.2025.45,
  author =	{Boneh, Itai and Gorbachev, Egor and Kociumaka, Tomasz},
  title =	{{Bounded Weighted Edit Distance: Dynamic Algorithms and Matching Lower Bounds}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{45:1--45:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.45},
  URN =		{urn:nbn:de:0030-drops-245139},
  doi =		{10.4230/LIPIcs.ESA.2025.45},
  annote =	{Keywords: Edit distance, dynamic algorithms, conditional lower bounds}
}
Document
Algorithm Engineering of SSSP with Negative Edge Weights

Authors: Alejandro Cassis, Andreas Karrenbauer, André Nusser, and Paolo Luigi Rinaldi

Published in: LIPIcs, Volume 338, 23rd International Symposium on Experimental Algorithms (SEA 2025)


Abstract
Computing shortest paths is one of the most fundamental algorithmic graph problems. It has been known for decades that this problem can be solved in near-linear time if all weights are nonnegative. A recent breakthrough by [Aaron Bernstein et al., 2022] presented a randomized near-linear time algorithm for the case of negative edge weights. A subsequent improvement in [Karl Bringmann et al., 2023] significantly reduced the number of logarithmic factors and thereby also simplified the algorithm. It is surprising and exciting that both of these algorithms are combinatorial and do not contain any fundamental obstacles to being practical. We launch, to the best of our knowledge, the first extensive investigation towards a practical implementation of [Karl Bringmann et al., 2023]. To this end, we give an accessible overview of the algorithm and discuss which adaptations are necessary to obtain a fast algorithm in practice. We realize these adaptations in an efficient implementation. We test our implementation on a benchmark data set that is adapted to be more difficult for our implementation in order to allow for a fair comparison. As both [Karl Bringmann et al., 2023] and our implementation expose multiple tunable parameters, we empirically evaluate their effect and thereby determine the best choices. Our implementation is then extensively compared to one of the state-of-the-art algorithms for this problem [Andrew V. Goldberg and Tomasz Radzik, 1993]. On the hardest instance type, we are faster by up to almost two orders of magnitude.
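
For orientation, the classical O(nm) baseline that near-linear algorithms compete against is Bellman-Ford. A minimal Python sketch (shown only as the textbook reference point, not the algorithm engineered in the paper):

def bellman_ford(n, edges, s):
    # edges is a list of (u, v, w) triples with possibly negative w.
    # Returns distances from s, or None if a negative cycle is reachable.
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None  # negative cycle reachable from s
    return dist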

Cite as

Alejandro Cassis, Andreas Karrenbauer, André Nusser, and Paolo Luigi Rinaldi. Algorithm Engineering of SSSP with Negative Edge Weights. In 23rd International Symposium on Experimental Algorithms (SEA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 338, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{cassis_et_al:LIPIcs.SEA.2025.10,
  author =	{Cassis, Alejandro and Karrenbauer, Andreas and Nusser, Andr\'{e} and Rinaldi, Paolo Luigi},
  title =	{{Algorithm Engineering of SSSP with Negative Edge Weights}},
  booktitle =	{23rd International Symposium on Experimental Algorithms (SEA 2025)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-375-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{338},
  editor =	{Mutzel, Petra and Prezza, Nicola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2025.10},
  URN =		{urn:nbn:de:0030-drops-232486},
  doi =		{10.4230/LIPIcs.SEA.2025.10},
  annote =	{Keywords: Single Source Shortest Paths, Negative Weights, Near-Linear Time}
}
Document
Track A: Algorithms, Complexity and Games
Weakly Approximating Knapsack in Subquadratic Time

Authors: Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
We consider the classic Knapsack problem. Let t and OPT be the capacity and the optimal value, respectively. If one seeks a solution with total profit at least OPT/(1 + ε) and total weight at most t, then Knapsack can be solved in Õ(n + (1/ε)²) time [Chen, Lian, Mao, and Zhang '24][Mao '24]. This running time is the best possible (up to a logarithmic factor), assuming that (min,+)-convolution cannot be solved in truly subquadratic time [Künnemann, Paturi, and Schneider '17][Cygan, Mucha, Węgrzycki, and Włodarczyk '19]. The same upper and lower bounds hold if one seeks a solution with total profit at least OPT and total weight at most (1 + ε)t. Therefore, it is natural to ask the following question. If one seeks a solution with total profit at least OPT/(1+ε) and total weight at most (1 + ε)t, can Knapsack be solved in Õ(n + (1/ε)^{2-δ}) time for some constant δ > 0? We answer this open question affirmatively by proposing an Õ(n + (1/ε)^{7/4})-time algorithm.
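
To make the weak (two-sided) guarantee concrete, here is a brute-force checker for toy instances; all names are our own and OPT is computed exhaustively, so this is purely illustrative and exponential-time:

from itertools import combinations

def satisfies_weak_guarantee(items, t, eps, chosen):
    # items and chosen are lists of (weight, profit) pairs. Checks the
    # relaxed guarantee: profit(chosen) >= OPT/(1+eps) for OPT the best
    # profit of weight <= t, and weight(chosen) <= (1+eps)*t.
    opt = max((sum(p for _, p in sub)
               for r in range(len(items) + 1)
               for sub in combinations(items, r)
               if sum(w for w, _ in sub) <= t), default=0)
    profit = sum(p for _, p in chosen)
    weight = sum(w for w, _ in chosen)
    return profit * (1 + eps) >= opt and weight <= (1 + eps) * t

# With t = 5 and eps = 0.5, OPT = 7 is attained by the first two items.
items = [(2, 3), (3, 4), (5, 6)]
assert satisfies_weak_guarantee(items, 5, 0.5, [(2, 3), (3, 4)])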

Cite as

Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang. Weakly Approximating Knapsack in Subquadratic Time. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 51:1-51:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2025.51,
  author =	{Chen, Lin and Lian, Jiayi and Mao, Yuchen and Zhang, Guochuan},
  title =	{{Weakly Approximating Knapsack in Subquadratic Time}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{51:1--51:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.51},
  URN =		{urn:nbn:de:0030-drops-234286},
  doi =		{10.4230/LIPIcs.ICALP.2025.51},
  annote =	{Keywords: Knapsack, FPTAS}
}
Document
Track A: Algorithms, Complexity and Games
The Role of Regularity in (Hyper-)Clique Detection and Implications for Optimizing Boolean CSPs

Authors: Nick Fischer, Marvin Künnemann, Mirza Redžić, and Julian Stieß

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
Is detecting a k-clique in k-partite regular (hyper-)graphs as hard as in the general case? Intuition suggests yes, but proving this - especially for hypergraphs - poses notable challenges. Concretely, we consider a strong notion of regularity in h-uniform hypergraphs, where we essentially require that any subset of at most h-1 vertices is incident to a uniform number of hyperedges. Such notions are studied intensively in the combinatorial block design literature. We show that any f(k)n^{g(k)}-time algorithm for detecting k-cliques in such graphs transfers to an f'(k)n^{g(k)}-time algorithm for the general case, establishing a fine-grained equivalence between the h-uniform hyperclique hypothesis and its natural regular analogue. Equipped with this regularization result, we then fully resolve the fine-grained complexity of optimizing Boolean constraint satisfaction problems over assignments with k non-zeros. Our characterization depends on the maximum degree d of a constraint function. Specifically, if d ≤ 1, we obtain a linear-time solvable problem; if d = 2, the time complexity is essentially equivalent to that of k-clique detection; and if d ≥ 3, the problem requires exhaustive-search time under the 3-uniform hyperclique hypothesis. To obtain our hardness results, the regularization result plays a crucial role, enabling a very convenient approach when applied carefully. We believe that our regularization result will find further applications in the future.
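
The exhaustive-search baseline referenced throughout is easy to state. A Python sketch of k-clique detection in a k-partite graph, with one vertex taken from each part (the interface is our own):

from itertools import product

def has_kpartite_clique(parts, adj):
    # parts is a list of k vertex lists; adj(u, v) -> bool tests an edge.
    # Runs in O(n^k) time; the paper's reductions preserve the exponent
    # g(k) of any faster detection algorithm on regular instances.
    for combo in product(*parts):
        if all(adj(u, v) for i, u in enumerate(combo)
                          for v in combo[i + 1:]):
            return True
    return False

# Two parts {0,1} and {2,3} with edges 0-2 and 1-3: a 2-clique exists.
edges = {(0, 2), (1, 3)}
adj = lambda u, v: (u, v) in edges or (v, u) in edges
assert has_kpartite_clique([[0, 1], [2, 3]], adj) is True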

Cite as

Nick Fischer, Marvin Künnemann, Mirza Redžić, and Julian Stieß. The Role of Regularity in (Hyper-)Clique Detection and Implications for Optimizing Boolean CSPs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 78:1-78:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{fischer_et_al:LIPIcs.ICALP.2025.78,
  author =	{Fischer, Nick and K\"{u}nnemann, Marvin and Red\v{z}i\'{c}, Mirza and Stie{\ss}, Julian},
  title =	{{The Role of Regularity in (Hyper-)Clique Detection and Implications for Optimizing Boolean CSPs}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{78:1--78:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.78},
  URN =		{urn:nbn:de:0030-drops-234559},
  doi =		{10.4230/LIPIcs.ICALP.2025.78},
  annote =	{Keywords: fine-grained complexity theory, clique detections in hypergraphs, constraint satisfaction, parameterized algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Near-Optimal Directed Low-Diameter Decompositions

Authors: Karl Bringmann, Nick Fischer, Bernhard Haeupler, and Rustam Latypov

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
Low Diameter Decompositions (LDDs) are invaluable tools in the design of combinatorial graph algorithms. While historically they have been applied mainly to undirected graphs, in the recent breakthrough for the negative-length Single Source Shortest Path problem, Bernstein, Nanongkai, and Wulff-Nilsen [FOCS '22] extended the use of LDDs to directed graphs for the first time. Specifically, their LDD deletes each edge with probability at most O(1/D ⋅ log²n), while ensuring that each strongly connected component in the remaining graph has a (weak) diameter of at most D. In this work, we make further advancements in the study of directed LDDs. We reveal a natural and intuitive (in hindsight) connection to Expander Decompositions, and leveraging this connection along with additional techniques, we establish the existence of an LDD with an edge-cutting probability of O(1/D ⋅ log n log log n). This improves the previous bound by nearly a logarithmic factor and closely approaches the lower bound of Ω(1/D ⋅ log n). With significantly more technical effort, we also develop two efficient algorithms for computing our LDDs: a deterministic algorithm that runs in time Õ(m poly(D)) and a randomized algorithm that runs in near-linear time Õ(m). We believe that our work provides a solid conceptual and technical foundation for future research relying on directed LDDs, which will undoubtedly follow soon.
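
For background, the flavor of randomized LDD constructions can be seen in the classical exponential-shift clustering of Miller, Peng, and Xu for undirected, unweighted graphs: each vertex draws an exponential shift and joins the cluster of the vertex minimizing shifted distance, so that clusters have diameter O(D) and each edge is cut with probability O(log n / D) w.h.p. A Python sketch under those assumptions (background only; the paper's directed LDDs are substantially more involved):

import heapq, math, random

def mpx_ldd(n, adj, D):
    # Exponential shifts with rate Theta(log n) / D.
    beta = 2.0 * math.log(max(n, 2)) / D
    delta = [random.expovariate(beta) for _ in range(n)]
    dmax = max(delta)
    center = [-1] * n
    # Multi-source BFS via a heap: vertex v starts at time dmax - delta[v],
    # and every vertex joins the cluster of the first source reaching it.
    pq = [(dmax - delta[v], v, v) for v in range(n)]
    heapq.heapify(pq)
    while pq:
        d, u, c = heapq.heappop(pq)
        if center[u] != -1:
            continue
        center[u] = c
        for v in adj[u]:
            if center[v] == -1:
                heapq.heappush(pq, (d + 1, v, c))
    return center  # center[u] labels u's low-diameter cluster

# Example: a path on 6 vertices with target diameter D = 2.
adj = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]
labels = mpx_ldd(6, adj, 2)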

Cite as

Karl Bringmann, Nick Fischer, Bernhard Haeupler, and Rustam Latypov. Near-Optimal Directed Low-Diameter Decompositions. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 35:1-35:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2025.35,
  author =	{Bringmann, Karl and Fischer, Nick and Haeupler, Bernhard and Latypov, Rustam},
  title =	{{Near-Optimal Directed Low-Diameter Decompositions}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{35:1--35:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.35},
  URN =		{urn:nbn:de:0030-drops-234125},
  doi =		{10.4230/LIPIcs.ICALP.2025.35},
  annote =	{Keywords: Low Diameter Decompositions, Expander Decompositions, Directed Graphs}
}
Document
Completeness Theorems for k-SUM and Geometric Friends: Deciding Fragments of Linear Integer Arithmetic

Authors: Geri Gokaj and Marvin Künnemann

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
In the last three decades, the k-SUM hypothesis has emerged as a satisfying explanation of long-standing time barriers for a variety of algorithmic problems. Yet to this day, the literature knows of only a few proven consequences of a refutation of this hypothesis. Taking a descriptive complexity viewpoint, we ask: what is the largest logically defined class of problems captured by the k-SUM problem? To this end, we introduce a class FOP_ℤ of problems corresponding to deciding sentences in Presburger arithmetic/linear integer arithmetic over finite subsets of integers. We establish two large fragments for which the k-SUM problem is complete under fine-grained reductions: 1) The k-SUM problem is complete for deciding the sentences with k existential quantifiers. 2) The 3-SUM problem is complete for all 3-quantifier sentences of FOP_ℤ expressible using at most 3 linear inequalities. Specifically, a faster-than-n^{⌈k/2⌉ ± o(1)} algorithm for k-SUM (or a faster-than-n^{2 ± o(1)} algorithm for 3-SUM, respectively) directly translates into a polynomial speedup of a general algorithm for all sentences in the respective fragment. Observing a barrier for proving completeness of 3-SUM for the entire class FOP_ℤ, we turn to the question of which other - seemingly more general - problems are complete for FOP_ℤ. In this direction, we establish FOP_ℤ-completeness of the problem pair of Pareto Sum Verification and Hausdorff Distance under n Translations under the L_∞/L₁ norm in ℤ^d. In particular, our results invite the investigation of Pareto Sum Verification as a high-dimensional generalization of 3-SUM.
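
The n^{2 ± o(1)} baseline for 3-SUM that these completeness results calibrate against is the classic sorted two-pointer scan. A short Python sketch:

def three_sum(A):
    # Classic O(n^2) algorithm deciding whether a + b + c = 0 for some
    # three (index-distinct) elements of A.
    A = sorted(A)
    n = len(A)
    for i in range(n):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = A[i] + A[lo] + A[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

assert three_sum([-5, 1, 4, 2, -9]) is True   # -5 + 1 + 4 = 0
assert three_sum([1, 2, 3]) is False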

Cite as

Geri Gokaj and Marvin Künnemann. Completeness Theorems for k-SUM and Geometric Friends: Deciding Fragments of Linear Integer Arithmetic. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 55:1-55:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gokaj_et_al:LIPIcs.ITCS.2025.55,
  author =	{Gokaj, Geri and K\"{u}nnemann, Marvin},
  title =	{{Completeness Theorems for k-SUM and Geometric Friends: Deciding Fragments of Linear Integer Arithmetic}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{55:1--55:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.55},
  URN =		{urn:nbn:de:0030-drops-226835},
  doi =		{10.4230/LIPIcs.ITCS.2025.55},
  annote =	{Keywords: fine-grained complexity theory, descriptive complexity, presburger arithmetic, completeness results, k-SUM}
}
Document
Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing

Authors: Karl Bringmann, Anita Dürr, and Adam Polak

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We present a pseudopolynomial-time algorithm for the Knapsack problem that has running time Õ(n + t√{p_{max}}), where n is the number of items, t is the knapsack capacity, and p_{max} is the maximum item profit. This improves over the Õ(n + t p_{max})-time algorithm based on the convolution and prediction technique by Bateni et al. (STOC 2018). Moreover, we give some evidence, based on a strengthening of the Min-Plus Convolution Hypothesis, that our running time might be optimal. Our algorithm uses two new technical tools, which might be of independent interest. First, we generalize the Õ(n^{1.5})-time algorithm for bounded monotone min-plus convolution by Chi et al. (STOC 2022) to the rectangular case where the range of entries can be different from the sequence length. Second, we give a reduction from general knapsack instances to balanced instances, where all items have nearly the same profit-to-weight ratio, up to a constant factor. Using these techniques, we can also obtain algorithms that run in time Õ(n + OPT√{w_{max}}), Õ(n + (nw_{max}p_{max})^{1/3}t^{2/3}), and Õ(n + (nw_{max}p_{max})^{1/3} OPT^{2/3}), where OPT is the optimal total profit and w_{max} is the maximum item weight.
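
Knapsack profit tables combine via (max,+)-convolution, the mirror image of min-plus convolution. A naive quadratic Python sketch of this combine step (the paper's improvement lies in solving the rectangular bounded monotone case in subquadratic time, which this sketch does not attempt):

def max_plus_convolution(A, B):
    # Naive O(|A|*|B|) computation of C[k] = max_{i+j=k} A[i] + B[j].
    n, m = len(A), len(B)
    C = [float("-inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            C[i + j] = max(C[i + j], A[i] + B[j])
    return C

# Merging the profit tables of two halves of the item set works this way.
assert max_plus_convolution([0, 3], [0, 5, 6]) == [0, 5, 8, 9]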

Cite as

Karl Bringmann, Anita Dürr, and Adam Polak. Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2024.33,
  author =	{Bringmann, Karl and D\"{u}rr, Anita and Polak, Adam},
  title =	{{Even Faster Knapsack via Rectangular Monotone Min-Plus Convolution and Balancing}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{33:1--33:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.33},
  URN =		{urn:nbn:de:0030-drops-211047},
  doi =		{10.4230/LIPIcs.ESA.2024.33},
  annote =	{Keywords: 0-1-Knapsack problem, bounded monotone min-plus convolution, fine-grained complexity}
}
Document
Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution

Authors: Karl Bringmann and Alejandro Cassis

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We revisit the classic 0-1-Knapsack problem, in which we are given n items with their weights and profits as well as a weight budget W, and the goal is to find a subset of items of total weight at most W that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item p_{max} and the largest weight of any item w_{max}. Our main results are algorithms for 0-1-Knapsack running in time Õ(n w_{max} p_{max}^{2/3}) and Õ(n p_{max} w_{max}^{2/3}), improving upon an algorithm in time O(n p_{max} w_{max}) by Pisinger [J. Algorithms '99]. In the regime p_{max} ≈ w_{max} ≈ n (and W ≈ OPT ≈ n²), our algorithms are the first to break the cubic barrier n³. To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function f : [n] → ℤ is Δ-near convex with Δ ≥ 1 if there is a convex function f̆ such that f̆(i) ≤ f(i) ≤ f̆(i) + Δ for every i. We design an algorithm computing the min-plus convolution of two Δ-near convex functions in time Õ(nΔ). This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
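
The Δ-near convexity condition is easy to test on concrete sequences: the pointwise largest convex minorant is the lower convex envelope of the points (i, f(i)), so the smallest valid Δ is the largest gap between f and that envelope. A Python sketch of this checker (illustrative only; not part of the paper's algorithm):

import bisect

def near_convexity_delta(f):
    # Smallest Delta such that the sequence f(0..n-1) is Delta-near convex:
    # build the lower convex envelope via a monotone-chain lower hull and
    # return the largest gap f(i) - envelope(i). Runs in O(n log n).
    pts = list(enumerate(f))
    hull = []
    for px, py in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or above the segment hull[-2] -> (px, py)
            if (x2 - x1) * (py - y1) <= (px - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append((px, py))
    xs = [x for x, _ in hull]
    delta = 0
    for i, fi in pts:
        j = bisect.bisect_right(xs, i) - 1
        if j == len(hull) - 1:
            env = hull[j][1]  # i is the rightmost hull vertex
        else:
            (x1, y1), (x2, y2) = hull[j], hull[j + 1]
            env = y1 + (y2 - y1) * (i - x1) / (x2 - x1)
        delta = max(delta, fi - env)
    return delta

# A convex sequence has Delta = 0; bumping the middle entry of [4,1,0,1,4]
# up to 3 yields envelope value 1 there, hence a 2-near convex sequence.
assert near_convexity_delta([4, 1, 0, 1, 4]) == 0
assert near_convexity_delta([4, 1, 3, 1, 4]) == 2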

Cite as

Karl Bringmann and Alejandro Cassis. Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2023.24,
  author =	{Bringmann, Karl and Cassis, Alejandro},
  title =	{{Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.24},
  URN =		{urn:nbn:de:0030-drops-186776},
  doi =		{10.4230/LIPIcs.ESA.2023.24},
  annote =	{Keywords: Knapsack, Fine-Grained Complexity, Min-Plus Convolution}
}
Document
Track A: Algorithms, Complexity and Games
A Structural Investigation of the Approximability of Polynomial-Time Problems

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
An extensive research effort targets optimal (in)approximability results for various NP-hard optimization problems. Notably, the works of (Creignou '95) as well as (Khanna, Sudan, Trevisan, Williamson '00) establish a tight characterization of a large subclass of MaxSNP, namely Boolean MaxCSPs and further variants, in terms of their polynomial-time approximability. Can we obtain similarly encompassing characterizations for classes of polynomial-time optimization problems? To this end, we initiate the systematic study of a recently introduced polynomial-time analogue of MaxSNP, which includes a large number of well-studied problems (including Nearest and Furthest Neighbor in the Hamming metric, Maximum Inner Product, optimization variants of k-XOR, and Maximum k-Cover). Specifically, for each k, MaxSP_k denotes the class of O(m^k)-time problems of the form max_{x_1,…,x_k} #{y : ϕ(x_1,…,x_k,y)}, where ϕ is a quantifier-free first-order property and m denotes the size of the relational structure. Assuming central hypotheses about clique detection in hypergraphs and exact Max-3-SAT, we show that for any MaxSP_k problem definable by a quantifier-free m-edge graph formula φ, the best possible approximation guarantee in faster-than-exhaustive-search time O(m^{k-δ}) falls into one of four categories: - optimizable to exactness in time O(m^{k-δ}), - an (inefficient) approximation scheme, i.e., a (1+ε)-approximation in time O(m^{k-f(ε)}), - a (fixed) constant-factor approximation in time O(m^{k-δ}), or - an nm^ε-approximation in time O(m^{k-f(ε)}). We obtain an almost complete characterization of these regimes, for MaxSP_k as well as for an analogously defined minimization class MinSP_k. As our main technical contribution, we show how to rule out the existence of approximation schemes for a large class of problems admitting constant-factor approximations, under a hypothesis for exact Sparse Max-3-SAT algorithms posed by (Alman, Vassilevska Williams '20). As general trends for the problems we consider, we observe: (1) exact optimizability has a simple algebraic characterization, (2) only few maximization problems do not admit a constant-factor approximation; these do not even have a subpolynomial-factor approximation, and (3) constant-factor approximation of minimization problems is equivalent to deciding whether the optimum is equal to 0.
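
The O(m^k)-time exhaustive-search baseline for a MaxSP_k-style objective is a two-level loop. A tiny Python sketch with our own naming (phi is an arbitrary quantifier-free predicate):

from itertools import product

def maxsp_value(x_domains, y_domain, phi):
    # Exhaustive evaluation of max_{x_1..x_k} #{y : phi(x_1, ..., x_k, y)}
    # over finite domains; this is the baseline the classification of
    # faster-than-exhaustive approximation is measured against.
    return max(sum(1 for y in y_domain if phi(*xs, y))
               for xs in product(*x_domains))

# Toy MaxSP_2 instance: maximize the number of y's between x_1 and x_2.
val = maxsp_value([range(4), range(4)], range(4),
                  lambda x1, x2, y: x1 <= y <= x2)
assert val == 4  # x_1 = 0, x_2 = 3 covers all four y's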

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A Structural Investigation of the Approximability of Polynomial-Time Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 30:1-30:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.30,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Structural Investigation of the Approximability of Polynomial-Time Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{30:1--30:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.30},
  URN =		{urn:nbn:de:0030-drops-163713},
  doi =		{10.4230/LIPIcs.ICALP.2022.30},
  annote =	{Keywords: Classification Theorems, Hardness of Approximation in P, Fine-grained Complexity Theory}
}
Document
Track A: Algorithms, Complexity and Games
Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution

Authors: Karl Bringmann and Alejandro Cassis

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We present new exact and approximation algorithms for 0-1-Knapsack and Unbounded Knapsack: - Exact Algorithm for 0-1-Knapsack: 0-1-Knapsack has known algorithms running in time Õ(n + min{n ⋅ OPT, n ⋅ W, OPT², W²}) [Bellman '57], where n is the number of items, W is the weight budget, and OPT is the optimal profit. We present an algorithm running in time Õ(n + (W + OPT)^{1.5}). This improves the running time in case n, W, OPT are roughly equal. - Exact Algorithm for Unbounded Knapsack: Unbounded Knapsack has known algorithms running in time Õ(n + min{n ⋅ p_{max}, n ⋅ w_{max}, p_{max}², w_{max}²}) [Axiotis, Tzamos '19, Jansen, Rohwedder '19, Chan, He '22], where n is the number of items, w_{max} is the largest weight of any item, and p_{max} is the largest profit of any item. We present an algorithm running in time Õ(n + (p_{max} + w_{max})^{1.5}), giving a similar improvement as for 0-1-Knapsack. - Approximating Unbounded Knapsack with Resource Augmentation: Unbounded Knapsack has a known FPTAS with running time Õ(min{n/ε, n + 1/ε²}) [Jansen, Kraft '18]. We study weak approximation algorithms, which approximate the optimal profit but are allowed to overshoot the weight constraint (i.e., resource augmentation). We present the first approximation scheme for Unbounded Knapsack in this setting, achieving running time Õ(n + 1/ε^{1.5}). Along the way, we also give a simpler FPTAS with a lower-order improvement in the standard setting. For all of these problem settings the previously known results had matching conditional lower bounds. We avoid these lower bounds in the approximation setting by allowing resource augmentation, and in the exact setting by analyzing the time complexity in terms of weight and profit parameters (instead of only weight or only profit parameters). Our algorithms can be seen as reductions to Min-Plus-Convolution on monotone sequences with bounded entries. These structured instances of Min-Plus-Convolution can be solved in time O(n^{1.5}) [Chi, Duan, Xie, Zhang '22] (in contrast to the conjectured n^{2-o(1)} lower bound for the general case). We complement our results by showing reductions in the opposite direction, that is, we show that achieving our results with the constant 1.5 replaced by any constant < 2 implies subquadratic algorithms for Min-Plus-Convolution on monotone sequences with bounded entries.
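
The sequences being convolved in these reductions are the standard knapsack profit tables P[w] = maximum profit of total weight at most w, which are monotone with bounded entries. A textbook O(nW) Python sketch of how such a table arises (the paper's algorithms avoid this pseudopolynomial loop):

def profit_table(items, W):
    # P[w] = best profit achievable with total weight <= w; items is a
    # list of (weight, profit) pairs. Classic 0-1-Knapsack DP.
    P = [0] * (W + 1)
    for wt, pf in items:
        for w in range(W, wt - 1, -1):
            P[w] = max(P[w], P[w - wt] + pf)
    return P

# The tables of two halves of the item set merge via (max,+)-convolution.
assert profit_table([(2, 3), (3, 4)], 5) == [0, 0, 3, 4, 4, 7]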

Cite as

Karl Bringmann and Alejandro Cassis. Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 31:1-31:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.31,
  author =	{Bringmann, Karl and Cassis, Alejandro},
  title =	{{Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{31:1--31:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.31},
  URN =		{urn:nbn:de:0030-drops-163727},
  doi =		{10.4230/LIPIcs.ICALP.2022.31},
  annote =	{Keywords: Knapsack, Approximation Schemes, Fine-Grained Complexity, Min-Plus Convolution}
}
Document
Track A: Algorithms, Complexity and Games
Improved Sublinear-Time Edit Distance for Preprocessed Strings

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study the problem of approximating the edit distance of two strings in sublinear time, in a setting where one or both string(s) are preprocessed, as initiated by Goldenberg, Rubinstein, Saha (STOC '20). Specifically, in the (k, K)-gap edit distance problem, the goal is to distinguish whether the edit distance of two strings is at most k or at least K. We obtain the following results: - After preprocessing one string in time n^{1+o(1)}, we can solve (k, k ⋅ n^o(1))-gap edit distance in time (n/k + k) ⋅ n^o(1). - After preprocessing both strings separately in time n^{1+o(1)}, we can solve (k, k ⋅ n^o(1))-gap edit distance in time kn^o(1). Both results improve upon some previously best known result, with respect to either the gap, the query time, or the preprocessing time. Our algorithms build on the framework by Andoni, Krauthgamer and Onak (FOCS '10) and the recent sublinear-time algorithm by Bringmann, Cassis, Fischer and Nakos (STOC '22). We replace many complicated parts of their algorithm with faster and simpler solutions that exploit the preprocessing.
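
A useful reference point for the bounded and gap regimes is the O((n+m)k) banded dynamic program that decides whether ed(X,Y) ≤ k exactly. A Python sketch (exact and superlinear, unlike the sublinear gap algorithms of the paper):

def edit_distance_at_most_k(X, Y, k):
    # Banded DP over the diagonals |i - j| <= k; values beyond k are capped,
    # which is sound because DP values never decrease along a path.
    # Returns ed(X, Y) if it is <= k, and None otherwise.
    n, m = len(X), len(Y)
    if abs(n - m) > k:
        return None
    INF = k + 1
    prev = {j: j for j in range(0, min(m, k) + 1)}
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            if j == 0:
                cur[j] = i
                continue
            best = prev.get(j - 1, INF) + (X[i - 1] != Y[j - 1])
            best = min(best, prev.get(j, INF) + 1, cur.get(j - 1, INF) + 1)
            cur[j] = best
        prev = cur
    d = prev.get(m, INF)
    return d if d <= k else None

assert edit_distance_at_most_k("banana", "ananas", 2) == 2
assert edit_distance_at_most_k("abc", "xyz", 2) is None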

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos. Improved Sublinear-Time Edit Distance for Preprocessed Strings. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.32,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and Nakos, Vasileios},
  title =	{{Improved Sublinear-Time Edit Distance for Preprocessed Strings}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.32},
  URN =		{urn:nbn:de:0030-drops-163734},
  doi =		{10.4230/LIPIcs.ICALP.2022.32},
  annote =	{Keywords: Edit Distance, Property Testing, Preprocessing, Precision Sampling}
}
Document
APPROX
Fine-Grained Completeness for Optimization in P

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search and optimization variants of the k-XOR problem. Specifically, we define MaxSP as the class of problems definable as max_{x₁,… ,x_k} #{(y₁,… ,y_𝓁) : ϕ(x₁,… ,x_k, y₁,… ,y_𝓁)}, where ϕ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On m-sized structures, we can solve each such problem in time O(m^{k+𝓁-1}). Our results are: - We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under deterministic fine-grained reductions: A strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of O(m^{k+𝓁-1}) for all problems in MaxSP/MinSP by a polynomial factor. - This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic c-approximation would give a (c+ε)-approximation for all MaxSP/MinSP problems in time O(m^{k+𝓁-1-δ}), where ε > 0 can be chosen arbitrarily small. Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is equivalent to giving a O(1)-approximation for all MinSP problems in faster-than-O(m^{k+𝓁-1}) time. - By fine-tuning our reductions, we obtain mild algorithmic improvements for solving and approximating all problems in MaxSP and MinSP, using the fastest known algorithms for Maximum/Minimum Inner Product.
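
The complete problem itself has a one-line exhaustive baseline: Maximum Inner Product over Boolean vectors, where a strongly subquadratic algorithm would, by the completeness theorem, speed up the whole class. A Python sketch:

def max_inner_product(A, B):
    # Quadratic brute force: max over a in A, b in B of <a, b>, where the
    # inner product counts coordinates set to 1 in both vectors.
    return max(sum(x & y for x, y in zip(a, b)) for a in A for b in B)

assert max_inner_product([(1, 0, 1), (0, 1, 1)], [(1, 1, 0), (1, 0, 1)]) == 2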

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. Fine-Grained Completeness for Optimization in P. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 9:1-9:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.APPROX/RANDOM.2021.9,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{Fine-Grained Completeness for Optimization in P}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{9:1--9:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  URN =		{urn:nbn:de:0030-drops-147024},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  annote =	{Keywords: Fine-grained Complexity \& Algorithm Design, Completeness, Hardness of Approximation in P, Dimensionality Reductions}
}
  • Refine by Type
  • 13 Document/PDF
  • 7 Document/HTML

  • Refine by Publication Year
  • 7 2025
  • 1 2024
  • 1 2023
  • 3 2022
  • 1 2021

  • Refine by Author
  • 7 Bringmann, Karl
  • 6 Cassis, Alejandro
  • 5 Fischer, Nick
  • 4 Künnemann, Marvin
  • 2 Kociumaka, Tomasz

  • Refine by Series/Journal
  • 13 LIPIcs

  • Refine by Classification
  • 3 Theory of computation → Design and analysis of algorithms
  • 3 Theory of computation → Problems, reductions and completeness
  • 2 Theory of computation → Approximation algorithms analysis
  • 2 Theory of computation → Graph algorithms analysis
  • 2 Theory of computation → Pattern matching

  • Refine by Keyword
  • 3 Knapsack
  • 2 Fine-Grained Complexity
  • 2 Hardness of Approximation in P
  • 2 Min-Plus Convolution
  • 2 fine-grained complexity theory
