Search Results

Documents authored by Abboud, Amir


Document
From Donkeys to Kings in Tournaments

Authors: Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A tournament is an orientation of a complete graph. A vertex that can reach every other vertex within two steps is called a king. We study the complexity of finding k kings in a tournament graph. We show that the randomized query complexity of finding k ≤ 3 kings is O(n), and that in the deterministic case it takes the same number of queries (up to a constant) as finding a single king (the best known deterministic algorithm makes O(n^{3/2}) queries). On the other hand, we show that finding k ≥ 4 kings requires Ω(n²) queries, even in the randomized case. For k ≥ 4 we also consider the RAM model. We give an algorithm that finds k kings in time O(kn²), which is optimal for constant values of k. Alternatively, one can find k ≥ 4 kings in time n^{ω} (the time for matrix multiplication). We provide evidence that this is optimal for large k by suggesting a fine-grained reduction from a variant of the triangle detection problem.
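
For intuition (not from the paper): the classical fact underlying king-finding, due to Landau, is that any vertex of maximum out-degree in a tournament is a king. A minimal Python sketch of this Θ(n²)-query baseline, assuming a hypothetical boolean-matrix representation where beats[u][v] is True iff the edge between u and v is oriented from u to v:

def find_king(beats):
    # A maximum out-degree vertex is always a king: if some v beat u
    # and all of u's out-neighbors, v's out-degree would exceed u's.
    n = len(beats)
    return max(range(n), key=lambda u: sum(beats[u]))

def is_king(beats, u):
    # Verify that u reaches every other vertex in at most two steps.
    n = len(beats)
    return all(
        v == u or beats[u][v]
        or any(beats[u][w] and beats[w][v] for w in range(n))
        for v in range(n)
    )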

Cite as

Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon. From Donkeys to Kings in Tournaments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{abboud_et_al:LIPIcs.ESA.2024.3,
  author =	{Abboud, Amir and Grossman, Tomer and Naor, Moni and Solomon, Tomer},
  title =	{{From Donkeys to Kings in Tournaments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.3},
  URN =		{urn:nbn:de:0030-drops-210740},
  doi =		{10.4230/LIPIcs.ESA.2024.3},
  annote =	{Keywords: Tournament Graphs, Kings, Query Complexity, Fine Grained Complexity}
}
Document
Worst-Case to Expander-Case Reductions: Derandomized and Generalized

Authors: Amir Abboud and Nathan Wallheimer

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A recent paper by Abboud and Wallheimer [ITCS 2023] presents self-reductions for various fundamental graph problems, which transform worst-case instances into expanders, thus proving that the complexity remains unchanged if the input is assumed to be an expander. An interesting corollary of their self-reductions is that if a problem admits such a reduction, then the popular algorithmic paradigm based on expander decompositions is useless against it. In this paper, we improve their core gadget, which augments a graph to make it an expander while retaining its important structure. Our new core construction has the benefit of being simple to analyze and generalize, and it yields the following results:
- A derandomization of the self-reductions, showing that the equivalence between worst-case and expander-case holds even for deterministic algorithms, and ruling out the use of expander decompositions as a derandomization tool.
- An extension of the results to other models of computation, such as the Fully Dynamic model and the Congested Clique model. In the former, we either improve or provide an alternative approach to some recent hardness results for dynamic expander graphs by Henzinger, Paz, and Sricharan [ESA 2022].
In addition, we continue this line of research by designing new self-reductions for more problems, such as Max-Cut and dynamic Densest Subgraph, and by demonstrating that the core gadget can be utilized to lift lower bounds based on the OMv Conjecture to expanders.

Cite as

Amir Abboud and Nathan Wallheimer. Worst-Case to Expander-Case Reductions: Derandomized and Generalized. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{abboud_et_al:LIPIcs.ESA.2024.4,
  author =	{Abboud, Amir and Wallheimer, Nathan},
  title =	{{Worst-Case to Expander-Case Reductions: Derandomized and Generalized}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.4},
  URN =		{urn:nbn:de:0030-drops-210751},
  doi =		{10.4230/LIPIcs.ESA.2024.4},
  annote =	{Keywords: Fine-grained complexity, expander graphs, self-reductions, worst-case to expander-case, expander decomposition, dynamic algorithms, exact and parameterized complexity, max-cut, maximum matching, k-clique detection, densest subgraph}
}
Document
Listing 4-Cycles

Authors: Amir Abboud, Seri Khoury, Oree Leibowitz, and Ron Safier

Published in: LIPIcs, Volume 284, 43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023)


Abstract
We study the fine-grained complexity of listing all 4-cycles in a graph on n nodes, m edges, and t such 4-cycles. The main result is an Õ(min(n², m^{4/3}) + t) upper bound, which is best possible up to log factors unless the long-standing O(min(n², m^{4/3})) upper bound for detecting a 4-cycle can be broken. Moreover, it almost matches recent 3-SUM-based lower bounds for the problem by Abboud, Bringmann, and Fischer (STOC 2023) and independently by Jin and Xu (STOC 2023). Notably, our result separates 4-cycle listing from the closely related triangle listing, for which higher conditional lower bounds exist that rule out such a "detection plus t" bound. We also show by simple arguments that our bound cannot be extended to mild generalizations of the problem, such as reporting all pairs of nodes that participate in a 4-cycle. [Independent work: Jin and Xu [Ce Jin and Yinzhan Xu, 2023] also present an algorithm with the same time bound.]
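
For context (a folklore fact, not the paper's listing algorithm): a graph contains a 4-cycle iff some pair of vertices has two common neighbors, and scanning pairs of neighbors finds such a repeat within O(n²) work, which is the detection baseline cited above. A minimal Python sketch:

from itertools import combinations

def has_4cycle(adj):
    # adj: dict mapping each vertex to a set of neighbors.
    # A pair (u, v) recorded twice, via middles w' and w, closes the
    # 4-cycle u - w' - v - w - u. At most ~n^2/2 pairs can be stored
    # before a repeat occurs, so the total work is O(n^2).
    seen = {}
    for w in adj:
        for u, v in combinations(sorted(adj[w]), 2):
            if (u, v) in seen:
                return True
            seen[(u, v)] = w
    return False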

Cite as

Amir Abboud, Seri Khoury, Oree Leibowitz, and Ron Safier. Listing 4-Cycles. In 43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 284, pp. 25:1-25:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abboud_et_al:LIPIcs.FSTTCS.2023.25,
  author =	{Abboud, Amir and Khoury, Seri and Leibowitz, Oree and Safier, Ron},
  title =	{{Listing 4-Cycles}},
  booktitle =	{43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023)},
  pages =	{25:1--25:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-304-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{284},
  editor =	{Bouyer, Patricia and Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2023.25},
  URN =		{urn:nbn:de:0030-drops-193985},
  doi =		{10.4230/LIPIcs.FSTTCS.2023.25},
  annote =	{Keywords: Graph algorithms, cycles listing, subgraph detection, fine-grained complexity}
}
Document
APPROX
On Complexity of 1-Center in Various Metrics

Authors: Amir Abboud, MohammadHossein Bateni, Vincent Cohen-Addad, Karthik C. S., and Saeed Seddighin

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
We consider the classic 1-center problem: given a set P of n points in a metric space, find the point in P that minimizes the maximum distance to the other points of P. We study the complexity of this problem in d-dimensional 𝓁_p-metrics and in edit and Ulam metrics over strings of length d. Our results for the 1-center problem may be classified based on d as follows.
- Small d. Assuming the hitting set conjecture (HSC), we show that when d = ω(log n), no subquadratic algorithm can solve the 1-center problem in any of the 𝓁_p-metrics, or in the edit or Ulam metrics.
- Large d. When d = Ω(n), we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming Quantified SETH). On the other hand, we give a (1+ε)-approximation for 1-center in the Ulam metric with running time Õ_ε(nd + n²√d).
We also strengthen some of the above lower bounds by allowing approximation algorithms or by reducing the dimension d, but only against a weaker class of algorithms that list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where given a set of n strings each of length n, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
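
As a point of reference, the quadratic baseline targeted by the lower bounds above is plain exhaustive search: evaluate all pairwise distances and return the point of smallest eccentricity. A minimal sketch, instantiated with the Hamming metric purely for concreteness (the paper treats 𝓁_p, edit, and Ulam metrics):

def hamming(a, b):
    return sum(ca != cb for ca, cb in zip(a, b))

def one_center(points, dist=hamming):
    # O(n^2) distance evaluations: for each candidate center, compute
    # its maximum distance to the rest and keep the best candidate.
    best, best_radius = None, float("inf")
    for x in points:
        radius = max(dist(x, y) for y in points)
        if radius < best_radius:
            best, best_radius = x, radius
    return best, best_radius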

Cite as

Amir Abboud, MohammadHossein Bateni, Vincent Cohen-Addad, Karthik C. S., and Saeed Seddighin. On Complexity of 1-Center in Various Metrics. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abboud_et_al:LIPIcs.APPROX/RANDOM.2023.1,
  author =	{Abboud, Amir and Bateni, MohammadHossein and Cohen-Addad, Vincent and Karthik C. S. and Seddighin, Saeed},
  title =	{{On Complexity of 1-Center in Various Metrics}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.1},
  URN =		{urn:nbn:de:0030-drops-188260},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.1},
  annote =	{Keywords: Center, Clustering, Edit metric, Ulam metric, Hamming metric, Fine-grained Complexity, Approximation}
}
Document
On Diameter Approximation in Directed Graphs

Authors: Amir Abboud, Mina Dalirrooyfard, Ray Li, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Computing the diameter of a graph, i.e., the largest distance, is a fundamental problem that is central in fine-grained complexity. In undirected graphs, the Strong Exponential Time Hypothesis (SETH) yields a lower bound on the time vs. approximation trade-off that is quite close to the upper bounds. In directed graphs, however, where only some of the upper bounds apply, much larger gaps remain. Since d(u,v) may not be the same as d(v,u), there are multiple ways to define the problem, the two most natural being the (one-way) diameter (max_{u,v} d(u,v)) and the roundtrip diameter (max_{u,v} d(u,v) + d(v,u)). In this paper we make progress on the outstanding open question for each of them.
- We design the first algorithm for diameter in sparse directed graphs that achieves n^{1.5-ε} time with an approximation factor better than 2. The new upper-bound trade-off makes the directed case appear more similar to the undirected case. Notably, this is the first algorithm for diameter in sparse graphs that benefits from fast matrix multiplication.
- We design new hardness reductions separating roundtrip diameter from directed and undirected diameter. In particular, a 1.5-approximation in subquadratic time would refute the All-Nodes k-Cycle hypothesis, and any (2-ε)-approximation would imply a breakthrough algorithm for approximate 𝓁_∞-Closest-Pair. Notably, these are the first conditional lower bounds for diameter that are not based on SETH.
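
For background (folklore, not the paper's algorithm): in a strongly connected digraph, the triangle inequality gives D ≤ ecc_out(s) + ecc_in(s) for any fixed vertex s, so the larger of the two eccentricities, computable with two BFS runs, is a linear-time 2-approximation of the one-way diameter. A minimal sketch under that strong-connectivity assumption:

from collections import deque

def ecc(adj, s):
    # Eccentricity of s in an unweighted graph via BFS.
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter_2approx(adj, radj, s=0):
    # adj / radj: forward and reversed adjacency lists.
    # For any u, w: d(u, w) <= d(u, s) + d(s, w) <= ecc_in + ecc_out,
    # hence D/2 <= max(ecc_in, ecc_out) <= D.
    return max(ecc(adj, s), ecc(radj, s))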

Cite as

Amir Abboud, Mina Dalirrooyfard, Ray Li, and Virginia Vassilevska Williams. On Diameter Approximation in Directed Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abboud_et_al:LIPIcs.ESA.2023.2,
  author =	{Abboud, Amir and Dalirrooyfard, Mina and Li, Ray and Vassilevska Williams, Virginia},
  title =	{{On Diameter Approximation in Directed Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{2:1--2:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.2},
  URN =		{urn:nbn:de:0030-drops-186552},
  doi =		{10.4230/LIPIcs.ESA.2023.2},
  annote =	{Keywords: Diameter, Directed Graphs, Approximation Algorithms, Fine-grained complexity}
}
Document
Can You Solve Closest String Faster Than Exhaustive Search?

Authors: Amir Abboud, Nick Fischer, Elazar Goldenberg, Karthik C. S., and Ron Safier

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We study the fundamental problem of finding the best string to represent a given set, in the form of the Closest String problem: Given a set X ⊆ Σ^d of n strings, find the string x^* minimizing the radius of the smallest Hamming ball around x^* that encloses all the strings in X. In this paper, we investigate whether the Closest String problem admits algorithms that are faster than the trivial exhaustive search algorithm. We obtain the following results for the two natural versions of the problem:
- In the continuous Closest String problem, the goal is to find the solution string x^* anywhere in Σ^d. For binary strings, the exhaustive search algorithm runs in time O(2^d poly(nd)) and we prove that it cannot be improved to time O(2^{(1-ε)d} poly(nd)), for any ε > 0, unless the Strong Exponential Time Hypothesis fails.
- In the discrete Closest String problem, x^* is required to be in the input set X. While this problem is clearly in polynomial time, its fine-grained complexity has been pinpointed to be quadratic time n^{2 ± o(1)} whenever the dimension is ω(log n) < d < n^{o(1)}. We complement this known hardness result with new algorithms, proving essentially that whenever d falls out of this hard range, the discrete Closest String problem can be solved faster than exhaustive search. In the small-d regime, our algorithm is based on a novel application of the inclusion-exclusion principle.
Interestingly, all of our results apply (and some are even stronger) to the natural dual of the Closest String problem, called the Remotest String problem, where the task is to find a string maximizing the Hamming distance to all the strings in X.
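
To make the quadratic baseline for the discrete version concrete: exhaustive search tries each input string as the center and measures its Hamming radius, for O(n²·d) time in total. A minimal sketch of that trivial algorithm (the paper's new algorithms beat it outside the hard range of d):

def discrete_closest_string(X):
    # O(n^2 * d): evaluate every candidate center x in X against all
    # strings, keeping the one with the smallest Hamming radius.
    def hamming(a, b):
        return sum(ca != cb for ca, cb in zip(a, b))
    return min(X, key=lambda x: max(hamming(x, y) for y in X))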

Cite as

Amir Abboud, Nick Fischer, Elazar Goldenberg, Karthik C. S., and Ron Safier. Can You Solve Closest String Faster Than Exhaustive Search?. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 3:1-3:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abboud_et_al:LIPIcs.ESA.2023.3,
  author =	{Abboud, Amir and Fischer, Nick and Goldenberg, Elazar and Karthik C. S. and Safier, Ron},
  title =	{{Can You Solve Closest String Faster Than Exhaustive Search?}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{3:1--3:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.3},
  URN =		{urn:nbn:de:0030-drops-186566},
  doi =		{10.4230/LIPIcs.ESA.2023.3},
  annote =	{Keywords: Closest string, fine-grained complexity, SETH, inclusion-exclusion}
}
Document
What Else Can Voronoi Diagrams Do for Diameter in Planar Graphs?

Authors: Amir Abboud, Shay Mozes, and Oren Weimann

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
The Voronoi diagrams technique, introduced by Cabello [SODA'17] to compute the diameter of planar graphs in subquadratic time, has revolutionized the field of distance computations in planar graphs. We present novel applications of this technique in static, fault-tolerant, and partially-dynamic undirected unweighted planar graphs, as well as some new limitations.
- In the static case, we give n^{3+o(1)}/D² and Õ(n⋅D²) time algorithms for computing the diameter of a planar graph G with diameter D. These are faster than the state-of-the-art Õ(n^{5/3}) algorithm [SODA'18] when D < n^{1/3} or D > n^{2/3}.
- In the fault-tolerant setting, we give an n^{7/3+o(1)} time algorithm for computing the diameter of G ⧵ {e} for every edge e in G (the replacement diameter problem). This should be compared with the naive Õ(n^{8/3}) time algorithm that runs the static algorithm for every edge.
- In the incremental setting, where we wish to maintain the diameter while adding edges, we present an algorithm with total running time n^{7/3+o(1)}. This should be compared with the naive Õ(n^{8/3}) time algorithm that runs the static algorithm after every update.
- We give a lower bound (conditioned on the SETH) ruling out an amortized O(n^{1-ε}) update time for maintaining the diameter in weighted planar graphs. The lower bound holds even for incremental or decremental updates.
Our upper bounds are obtained by novel uses and manipulations of Voronoi diagrams. These include maintaining the Voronoi diagram when edges of the graph are deleted, allowing the sites of the Voronoi diagram to lie on a BFS tree level (rather than on the boundaries of an r-division), and a new reduction from incremental diameter to incremental distance oracles that could be of interest beyond planar graphs. Our lower bound is the first lower bound for a dynamic planar graph problem that is conditioned on the SETH.

Cite as

Amir Abboud, Shay Mozes, and Oren Weimann. What Else Can Voronoi Diagrams Do for Diameter in Planar Graphs?. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abboud_et_al:LIPIcs.ESA.2023.4,
  author =	{Abboud, Amir and Mozes, Shay and Weimann, Oren},
  title =	{{What Else Can Voronoi Diagrams Do for Diameter in Planar Graphs?}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.4},
  URN =		{urn:nbn:de:0030-drops-186575},
  doi =		{10.4230/LIPIcs.ESA.2023.4},
  annote =	{Keywords: Planar graphs, diameter, dynamic graphs, fault tolerance}
}
Document
Worst-Case to Expander-Case Reductions

Authors: Amir Abboud and Nathan Wallheimer

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
In recent years, the expander decomposition method was used to develop many graph algorithms, resulting in major improvements over longstanding complexity barriers. This powerful hammer has led the community to (1) believe that most problems are as easy on worst-case graphs as they are on expanders, and (2) suspect that expander decompositions are the key to breaking the remaining longstanding barriers in fine-grained complexity. We set out to investigate the extent to which these two things are true (and for which problems). Towards this end, we put forth the concept of worst-case to expander-case self-reductions. We design a collection of such reductions for fundamental graph problems, verifying belief (1) for them. The list includes k-Clique, 4-Cycle, Maximum Cardinality Matching, Vertex-Cover, and Minimum Dominating Set. Interestingly, for most (but not all) of these problems the proof is via a simple gadget reduction, not via expander decompositions, showing that this hammer is effectively useless against the problem and contradicting (2).

Cite as

Amir Abboud and Nathan Wallheimer. Worst-Case to Expander-Case Reductions. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 1:1-1:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abboud_et_al:LIPIcs.ITCS.2023.1,
  author =	{Abboud, Amir and Wallheimer, Nathan},
  title =	{{Worst-Case to Expander-Case Reductions}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{1:1--1:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.1},
  URN =		{urn:nbn:de:0030-drops-175044},
  doi =		{10.4230/LIPIcs.ITCS.2023.1},
  annote =	{Keywords: Fine-Grained Complexity, Expander Decomposition, Reductions, Exact and Parameterized Complexity, Expander Graphs, Triangle, Maximum Matching, Clique, 4-Cycle, Vertex Cover, Dominating Set}
}
Document
Track A: Algorithms, Complexity and Games
Improved Approximation Algorithms and Lower Bounds for Search-Diversification Problems

Authors: Amir Abboud, Vincent Cohen-Addad, Euiwoong Lee, and Pasin Manurangsi

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study several questions related to diversifying search results. We give improved approximation algorithms for each of the following problems, together with some lower bounds.
1) We give a polynomial-time approximation scheme (PTAS) for a diversified search ranking problem [Nikhil Bansal et al., 2010] whose objective is to minimize the discounted cumulative gain. Our PTAS runs in time n^{2^{O(log(1/ε)/ε)}} ⋅ m^{O(1)}, where n denotes the number of elements in the databases and m denotes the number of constraints. Complementing this result, we show that no PTAS can run in time f(ε) ⋅ (nm)^{2^{o(1/ε)}} assuming Gap-ETH, and therefore our running time is nearly tight. Both our upper and lower bounds answer open questions from [Nikhil Bansal et al., 2010].
2) We next consider the Max-Sum Dispersion problem, whose objective is to select k out of n elements from a database that maximize the dispersion, defined as the sum of the pairwise distances under a given metric. We give a quasipolynomial-time approximation scheme (QPTAS) for the problem which runs in time n^{O_ε(log n)}. This improves upon previously known polynomial-time algorithms with approximation ratio 0.5 [Refael Hassin et al., 1997; Allan Borodin et al., 2017]. Furthermore, we observe that reductions from previous work rule out approximation schemes that run in n^{õ_ε(log n)} time, assuming ETH.
3) Finally, we consider a generalization of Max-Sum Dispersion called Max-Sum Diversification. In addition to the sum of pairwise distances, the objective also includes another function f. For monotone submodular f, we give a quasipolynomial-time algorithm with approximation ratio arbitrarily close to (1-1/e). This improves upon the best polynomial-time algorithm, which has approximation ratio 0.5 [Allan Borodin et al., 2017]. Furthermore, the (1-1/e) factor is tight, as achieving a better-than-(1-1/e) approximation is NP-hard [Uriel Feige, 1998].
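
For the Max-Sum Dispersion item above, the classical polynomial-time baseline is greedy pair-picking in the spirit of the 0.5-approximation of [Refael Hassin et al., 1997]: repeatedly commit to the farthest remaining pair. A hedged sketch (assumes k is even; the paper's QPTAS is a different, stronger algorithm):

from itertools import combinations

def dispersion_greedy(points, k, dist):
    # Repeatedly pick the pair of remaining indices at maximum
    # distance; with k even this matches the classical greedy
    # 0.5-approximation for Max-Sum Dispersion.
    chosen, rest = [], set(range(len(points)))
    while len(chosen) < k:
        u, v = max(combinations(rest, 2),
                   key=lambda p: dist(points[p[0]], points[p[1]]))
        chosen += [u, v]
        rest -= {u, v}
    return [points[i] for i in chosen]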

Cite as

Amir Abboud, Vincent Cohen-Addad, Euiwoong Lee, and Pasin Manurangsi. Improved Approximation Algorithms and Lower Bounds for Search-Diversification Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{abboud_et_al:LIPIcs.ICALP.2022.7,
  author =	{Abboud, Amir and Cohen-Addad, Vincent and Lee, Euiwoong and Manurangsi, Pasin},
  title =	{{Improved Approximation Algorithms and Lower Bounds for Search-Diversification Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.7},
  URN =		{urn:nbn:de:0030-drops-163481},
  doi =		{10.4230/LIPIcs.ICALP.2022.7},
  annote =	{Keywords: Approximation Algorithms, Complexity, Data Mining, Diversification}
}
Document
Track A: Algorithms, Complexity and Games
Fine-Grained Hardness for Edit Distance to a Fixed Sequence

Authors: Amir Abboud and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
Nearly all quadratic lower bounds conditioned on the Strong Exponential Time Hypothesis (SETH) start by reducing k-SAT to the Orthogonal Vectors (OV) problem: Given two sets A,B of n binary vectors, decide if there is an orthogonal pair a ∈ A, b ∈ B. In this paper, we give an alternative reduction in which the set A does not depend on the input to k-SAT; thus, the quadratic lower bound for OV holds even if one of the sets is fixed in advance. Using the reductions in the literature from OV to other problems such as computing similarity measures on strings, we get hardness results of a stronger kind: there is a family of sequences {S_n}_{n = 1}^{∞}, |S_n| = n such that computing the Edit Distance between an input sequence X of length n and the (fixed) sequence S_n requires n^{2-o(1)} time under SETH.
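
The OV baseline behind these reductions is the trivial O(n²·d)-time pairwise check, conjectured (under SETH) to be essentially optimal; the paper shows the quadratic hardness survives even when the set A is fixed in advance. A minimal sketch:

def has_orthogonal_pair(A, B):
    # A, B: lists of 0/1 tuples. A pair (a, b) is orthogonal iff no
    # coordinate is 1 in both vectors.
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a in A for b in B)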

Cite as

Amir Abboud and Virginia Vassilevska Williams. Fine-Grained Hardness for Edit Distance to a Fixed Sequence. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{abboud_et_al:LIPIcs.ICALP.2021.7,
  author =	{Abboud, Amir and Vassilevska Williams, Virginia},
  title =	{{Fine-Grained Hardness for Edit Distance to a Fixed Sequence}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.7},
  URN =		{urn:nbn:de:0030-drops-140768},
  doi =		{10.4230/LIPIcs.ICALP.2021.7},
  annote =	{Keywords: SAT, edit distance, fine-grained complexity, conditional lower bound, sequence alignment}
}
Document
Track A: Algorithms, Complexity and Games
Scheduling Lower Bounds via AND Subset Sum

Authors: Amir Abboud, Karl Bringmann, Danny Hermelin, and Dvir Shabtay

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
Given N instances (X_1,t_1),…,(X_N,t_N) of Subset Sum, the AND Subset Sum problem asks to determine whether all of these instances are yes-instances; that is, whether each set of integers X_i has a subset that sums up to the target integer t_i. We prove that this problem cannot be solved in time Õ((N ⋅ t_max)^{1-ε}), for t_max = max_i t_i and any ε > 0, assuming the ∀ ∃ Strong Exponential Time Hypothesis (∀∃-SETH). We then use this result to exclude Õ(n+P_max⋅n^{1-ε})-time algorithms for several scheduling problems on n jobs with maximum processing time P_max, assuming ∀∃-SETH. These include classical problems such as 1||∑ w_jU_j, the problem of minimizing the total weight of tardy jobs on a single machine, and P₂||∑ U_j, the problem of minimizing the number of tardy jobs on two identical parallel machines.
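
For a single instance, the reference point is Bellman's classical O(|X|·t) dynamic program; running it on all N instances costs Õ(N ⋅ t_max), the bound the paper shows cannot be improved under ∀∃-SETH. A minimal sketch of the per-instance DP:

def subset_sum(X, t):
    # reachable[s] becomes True iff some subset of the processed
    # prefix of X sums to s; iterating s downward uses each x once.
    reachable = [True] + [False] * t
    for x in X:
        for s in range(t, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[t]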

Cite as

Amir Abboud, Karl Bringmann, Danny Hermelin, and Dvir Shabtay. Scheduling Lower Bounds via AND Subset Sum. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 4:1-4:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{abboud_et_al:LIPIcs.ICALP.2020.4,
  author =	{Abboud, Amir and Bringmann, Karl and Hermelin, Danny and Shabtay, Dvir},
  title =	{{Scheduling Lower Bounds via AND Subset Sum}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{4:1--4:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.4},
  URN =		{urn:nbn:de:0030-drops-124119},
  doi =		{10.4230/LIPIcs.ICALP.2020.4},
  annote =	{Keywords: SETH, fine grained complexity, Subset Sum, scheduling}
}
Document
Track A: Algorithms, Complexity and Games
On the Fine-Grained Complexity of Parity Problems

Authors: Amir Abboud, Shon Feller, and Oren Weimann

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
We consider the parity variants of basic problems studied in fine-grained complexity. We show that finding the exact solution is just as hard as finding its parity (i.e., whether the solution is even or odd) for a large number of classical problems, including All-Pairs Shortest Paths (APSP), Diameter, Radius, Median, Second Shortest Path, Maximum Consecutive Subsums, Min-Plus Convolution, and 0/1-Knapsack. A direct reduction from a problem to its parity version is often difficult to design. Instead, we revisit the existing hardness reductions and tailor them in a problem-specific way to the parity version. Nearly all reductions from APSP in the literature proceed via the (subcubic-equivalent but simpler) Negative Weight Triangle (NWT) problem. Our new modified reductions also start from NWT or a non-standard parity variant of it. We are not able to establish a subcubic equivalence with the more natural parity counting variant of NWT, where we ask if the number of negative triangles is even or odd. Perhaps surprisingly, we justify this by designing a reduction from the seemingly-harder Zero Weight Triangle problem, showing that parity is (conditionally) strictly harder than decision for NWT.
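
For reference, NWT itself has a trivial cubic brute force, and its subcubic equivalence with APSP is what makes it the standard source of the reductions revisited above. A minimal sketch (assuming a symmetric weight matrix for concreteness):

from itertools import combinations

def has_negative_triangle(w):
    # w: symmetric n x n weight matrix of a complete graph.
    n = len(w)
    return any(w[i][j] + w[j][k] + w[i][k] < 0
               for i, j, k in combinations(range(n), 3))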

Cite as

Amir Abboud, Shon Feller, and Oren Weimann. On the Fine-Grained Complexity of Parity Problems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 5:1-5:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{abboud_et_al:LIPIcs.ICALP.2020.5,
  author =	{Abboud, Amir and Feller, Shon and Weimann, Oren},
  title =	{{On the Fine-Grained Complexity of Parity Problems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{5:1--5:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.5},
  URN =		{urn:nbn:de:0030-drops-124127},
  doi =		{10.4230/LIPIcs.ICALP.2020.5},
  annote =	{Keywords: All-pairs shortest paths, Fine-grained complexity, Diameter, Distance product, Min-plus convolution, Parity problems}
}
Document
Track A: Algorithms, Complexity and Games
Faster Algorithms for All-Pairs Bounded Min-Cuts

Authors: Amir Abboud, Loukas Georgiadis, Giuseppe F. Italiano, Robert Krauthgamer, Nikos Parotsidis, Ohad Trabelsi, Przemysław Uznański, and Daniel Wolleb-Graf

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
The All-Pairs Min-Cut problem (aka All-Pairs Max-Flow) asks to compute a minimum s-t cut (or just its value) for all pairs of vertices s,t. We study this problem in directed graphs with unit edge/vertex capacities (corresponding to edge/vertex connectivity). Our focus is on the k-bounded case, where the algorithm has to find all pairs with min-cut value less than k, and report only those. The most basic case k=1 is the Transitive Closure (TC) problem, which can be solved in graphs with n vertices and m edges in time O(mn) combinatorially, and in time O(n^ω), where ω < 2.38 is the matrix-multiplication exponent. These time bounds are conjectured to be optimal. We present new algorithms and conditional lower bounds that advance the frontier for larger k, as follows:
- A randomized algorithm for vertex capacities that runs in time O((nk)^ω). This is only a factor k^ω away from the TC bound, and nearly matches it for all k = n^{o(1)}.
- Two deterministic algorithms for edge capacities (which is more general) that work in DAGs and further report a minimum cut for each pair. The first algorithm is combinatorial (does not involve matrix multiplication) and runs in time O(2^{O(k²)} ⋅ mn). The second algorithm can be faster on dense DAGs and runs in time O((k log n)^{4^{k+o(k)}} ⋅ n^ω). Previously, Georgiadis et al. [ICALP 2017] could match the TC bound (up to n^{o(1)} factors) only when k=2; now our two algorithms match it for all k = o(√(log n)) and k = o(log log n), respectively.
- The first super-cubic lower bound of n^{ω-1-o(1)} k² time under the 4-Clique conjecture, which holds even in the simplest case of DAGs with unit vertex capacities. It improves on the previous (SETH-based) lower bounds even in the unbounded setting k=n. For combinatorial algorithms, our reduction implies an n^{2-o(1)} k² conditional lower bound. Thus, we identify new settings where the complexity of the problem is (conditionally) higher than that of TC.
Our three sets of results are obtained via different techniques. The first one adapts the network coding method of Cheung, Lau, and Leung [SICOMP 2013] to vertex-capacitated digraphs. The second set exploits new insights on the structure of latest cuts, together with suitable algebraic tools. The lower bounds arise from a novel reduction of a different structure than the SETH-based constructions.
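
The k=1 base case, Transitive Closure, is the benchmark all the bounds above are measured against; the O(mn) combinatorial bound is simply one graph search per source. A minimal sketch of that baseline (not the paper's algorithms):

from collections import deque

def transitive_closure(adj):
    # One BFS per source vertex: O(mn) total for n sources.
    reach = {}
    for s in adj:
        seen = {s}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        reach[s] = seen
    return reach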

Cite as

Amir Abboud, Loukas Georgiadis, Giuseppe F. Italiano, Robert Krauthgamer, Nikos Parotsidis, Ohad Trabelsi, Przemysław Uznański, and Daniel Wolleb-Graf. Faster Algorithms for All-Pairs Bounded Min-Cuts. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 7:1-7:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{abboud_et_al:LIPIcs.ICALP.2019.7,
  author =	{Abboud, Amir and Georgiadis, Loukas and Italiano, Giuseppe F. and Krauthgamer, Robert and Parotsidis, Nikos and Trabelsi, Ohad and Uzna\'{n}ski, Przemys{\l}aw and Wolleb-Graf, Daniel},
  title =	{{Faster Algorithms for All-Pairs Bounded Min-Cuts}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{7:1--7:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.7},
  URN =		{urn:nbn:de:0030-drops-105833},
  doi =		{10.4230/LIPIcs.ICALP.2019.7},
  annote =	{Keywords: All-pairs min-cut, k-reachability, network coding, Directed graphs, fine-grained complexity}
}
Document
Track A: Algorithms, Complexity and Games
Fine-Grained Reductions and Quantum Speedups for Dynamic Programming

Authors: Amir Abboud

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
This paper points at a connection between certain (classical) fine-grained reductions and the question: do quantum algorithms offer an advantage for problems whose (classical) best solution is via dynamic programming? A remarkable recent result of Ambainis et al. [SODA 2019] indicates that the answer is positive for some fundamental problems such as Set-Cover and Travelling Salesman. They design a quantum O^*(1.728^n) time algorithm, whereas the dynamic programming O^*(2^n) time algorithms are conjectured to be classically optimal. In this paper, fine-grained reductions are extracted from their algorithms, giving the first lower bounds for problems in P that are based on the intriguing Set-Cover Conjecture (SeCoCo) of Cygan et al. [CCC 2010]. In particular, SeCoCo implies:
- a super-linear Ω(n^{1.08}) lower bound for 3-SUM on n integers,
- an Ω(n^{k/c_k - ε}) lower bound for k-SUM on n integers and k-Clique on n-node graphs, for any integer k ≥ 3, where c_k ≤ log₂ k + 1.4427.
While far from being tight, these lower bounds are significantly stronger than what is known to follow from the Strong Exponential Time Hypothesis (SETH); the well-known n^{Ω(k)} ETH-based lower bounds for k-Clique and k-SUM are vacuous when k is constant. Going in the opposite direction, this paper observes that some "sequential" problems with previously known fine-grained reductions to a "parallelizable" core also enjoy quantum speedups over their classical dynamic programming solutions. Examples include RNA Folding and Least-Weight Subsequence.
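
The conjecturally optimal classical algorithm at the heart of SeCoCo is the textbook O^*(2^n) dynamic program over subsets of the universe, the same baseline Ambainis et al. beat quantumly. A minimal sketch, assuming the input sets are encoded as bitmasks over an n-element universe:

def set_cover_dp(n, sets):
    # dp[mask] = minimum number of sets whose union covers `mask`.
    INF = float("inf")
    dp = [INF] * (1 << n)
    dp[0] = 0
    for mask in range(1 << n):
        if dp[mask] == INF:
            continue
        for s in sets:
            new = mask | s
            if dp[mask] + 1 < dp[new]:
                dp[new] = dp[mask] + 1
    return dp[(1 << n) - 1]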

Cite as

Amir Abboud. Fine-Grained Reductions and Quantum Speedups for Dynamic Programming. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 8:1-8:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{abboud:LIPIcs.ICALP.2019.8,
  author =	{Abboud, Amir},
  title =	{{Fine-Grained Reductions and Quantum Speedups for Dynamic Programming}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{8:1--8:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.8},
  URN =		{urn:nbn:de:0030-drops-105846},
  doi =		{10.4230/LIPIcs.ICALP.2019.8},
  annote =	{Keywords: Fine-Grained Complexity, Set-Cover, 3-SUM, k-Clique, k-SUM, Dynamic Programming, Quantum Algorithms}
}
Document
Tighter Connections Between Formula-SAT and Shaving Logs

Authors: Amir Abboud and Karl Bringmann

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the O(n²) dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to O(n²/log² n) in several ways and using a variety of ingenious tricks. This line of research, also known as the art of shaving log factors, lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time O(n²/log³ n)? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on boolean formulas (Formula-SAT) faster than exhaustive search. They show that an O(n²/log^{1000} n) algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way of shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fréchet distance problem from Computational Geometry, we show that an O(n²/log^{7+ε} n) runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size s over n variables to LCS on sequences of length N = 2^{n/2} ⋅ s^{1+o(1)}. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS, with N = 2^{n/2} ⋅ s^c for some c ≥ 100.
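
The O(n²) LCS baseline referenced above is the standard dynamic program over prefix pairs; the log-shaving line of work starts from exactly this table. A minimal sketch:

def lcs_length(a, b):
    # dp[i][j] = length of an LCS of a[:i] and b[:j].
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]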

Cite as

Amir Abboud and Karl Bringmann. Tighter Connections Between Formula-SAT and Shaving Logs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{abboud_et_al:LIPIcs.ICALP.2018.8,
  author =	{Abboud, Amir and Bringmann, Karl},
  title =	{{Tighter Connections Between Formula-SAT and Shaving Logs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.8},
  URN =		{urn:nbn:de:0030-drops-90129},
  doi =		{10.4230/LIPIcs.ICALP.2018.8},
  annote =	{Keywords: Fine-Grained Complexity, Hardness in P, Formula-SAT, Longest Common Subsequence, Frechet Distance}
}
Document
Fast and Deterministic Constant Factor Approximation Algorithms for LCS Imply New Circuit Lower Bounds

Authors: Amir Abboud and Aviad Rubinstein

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
The Longest Common Subsequence (LCS) is one of the most basic similarity measures and it captures important applications in bioinformatics and text analysis. Following the SETH-based nearly-quadratic time lower bounds for LCS from recent years, it is a major open problem to understand the complexity of approximate LCS. In the last ITCS, [AB17] drew an interesting connection between this problem and the area of circuit complexity: they proved that approximation algorithms for LCS in deterministic truly-subquadratic time imply new circuit lower bounds (E^NP does not have non-uniform linear-size Valiant Series-Parallel circuits). In this work, we strengthen this connection between approximate LCS and circuit complexity by applying the Distributed PCP framework of [ARW17]. We obtain a reduction that holds against much larger approximation factors (super-constant versus 1+o(1)), yields a lower bound for a larger class of circuits (linear-size NC¹), and is also easier to analyze.

Cite as

Amir Abboud and Aviad Rubinstein. Fast and Deterministic Constant Factor Approximation Algorithms for LCS Imply New Circuit Lower Bounds. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 35:1-35:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{abboud_et_al:LIPIcs.ITCS.2018.35,
  author =	{Abboud, Amir and Rubinstein, Aviad},
  title =	{{Fast and Deterministic Constant Factor Approximation Algorithms for LCS Imply New Circuit Lower Bounds}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{35:1--35:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.35},
  URN =		{urn:nbn:de:0030-drops-83490},
  doi =		{10.4230/LIPIcs.ITCS.2018.35},
  annote =	{Keywords: Distributed PCP, Longest Common Subsequence, Fine-grained Complexity, Strong Exponential Time Hypothesis}
}
Document
Towards Hardness of Approximation for Polynomial Time Problems

Authors: Amir Abboud and Arturs Backurs

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Proving hardness of approximation is a major challenge in the field of fine-grained complexity and conditional lower bounds in P. How well can the Longest Common Subsequence (LCS) or the Edit Distance be approximated by an algorithm that runs in near-linear time? In this paper, we make progress towards answering these questions. We introduce a framework that exhibits barriers for truly subquadratic and deterministic algorithms with good approximation guarantees. Our framework highlights a novel connection between deterministic approximation algorithms for natural problems in P and circuit lower bounds. In particular, we discover a curious connection of the following form: if there exists a δ > 0 such that for all ε > 0 there is a deterministic (1+ε)-approximation algorithm for LCS on two sequences of length n over an alphabet of size n^{o(1)} that runs in O(n^{2-δ}) time, then a certain plausible hypothesis is refuted, and the class E^NP does not have non-uniform linear-size Valiant Series-Parallel circuits. Thus, designing a "truly subquadratic PTAS" for LCS is as hard as resolving an old open question in complexity theory.

Cite as

Amir Abboud and Arturs Backurs. Towards Hardness of Approximation for Polynomial Time Problems. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 11:1-11:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{abboud_et_al:LIPIcs.ITCS.2017.11,
  author =	{Abboud, Amir and Backurs, Arturs},
  title =	{{Towards Hardness of Approximation for Polynomial Time Problems}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{11:1--11:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.11},
  URN =		{urn:nbn:de:0030-drops-81443},
  doi =		{10.4230/LIPIcs.ITCS.2017.11},
  annote =	{Keywords: LCS, Edit Distance, Hardness in P}
}