12 Search Results for "Fischer, Nick"


Document
Memoization on Shared Subtrees Accelerates Computations on Genealogical Forests

Authors: Lukas Hübner and Alexandros Stamatakis

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
The field of population genetics attempts to advance our understanding of evolutionary processes. It has applications, for example, in medical research, wildlife conservation, and - in conjunction with recent advances in ancient DNA sequencing technology - studying human migration patterns over the past few thousand years. The basic toolbox of population genetics includes genealogical trees, which describe the shared evolutionary history among individuals of the same species. They are calculated on the basis of genetic variations. However, in recombining organisms, a single tree is insufficient to describe the evolutionary history of the whole genome. Instead, a collection of correlated trees can be used, where each describes the evolutionary history of a consecutive region of the genome. The current state-of-the-art data structure, tree sequences, compresses these genealogical trees via edit operations when moving from one tree to the next along the genome, instead of storing the full, often redundant, description for each tree. We propose a new data structure, genealogical forests, which compresses the set of genealogical trees into a DAG. In this DAG, identical subtrees that are shared across the input trees are encoded only once, thereby allowing for straightforward memoization of intermediate results. Additionally, we provide a C++ implementation of our proposed data structure, called gfkit, which is 2.1 to 11.2 (median 4.0) times faster than the state-of-the-art tool on empirical and simulated datasets at computing important population genetics statistics such as the Allele Frequency Spectrum, Patterson’s f, the Fixation Index, Tajima’s D, pairwise Lowest Common Ancestors, and others. On Lowest Common Ancestor queries with more than two samples as input, gfkit scales asymptotically better than the state of the art, and is thus up to 990 times faster. In conclusion, our proposed data structure compresses genealogical trees by storing shared subtrees only once, thereby enabling straightforward memoization of intermediate results, yielding a substantial runtime reduction and a potentially more intuitive data representation compared to the state of the art. Our improvements will boost the development of novel analyses and models in the field of population genetics and increase scalability to ever-growing genomic datasets.
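
The sketch below (in Python, purely illustrative and not gfkit’s actual API) shows the core idea: since each shared subtree is a single DAG node, a per-node cache computes every intermediate result at most once across all trees. The toy DAG and the sample-count statistic are hypothetical stand-ins.

from functools import lru_cache

# Hypothetical toy genealogical forest: node id -> tuple of child ids;
# leaves (the samples) map to None.
children = {
    0: None, 1: None, 2: None, 3: None,  # samples
    4: (0, 1),                           # subtree shared by both trees
    5: (4, 2),                           # root of the first tree
    6: (4, 3),                           # root of the second tree
}

@lru_cache(maxsize=None)
def num_samples(node):
    # Memoized per DAG node: the shared subtree 4 is evaluated only once,
    # even though it occurs in both trees.
    if children[node] is None:
        return 1
    return sum(num_samples(child) for child in children[node])

print([num_samples(root) for root in (5, 6)])  # [3, 3]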

Cite as

Lukas Hübner and Alexandros Stamatakis. Memoization on Shared Subtrees Accelerates Computations on Genealogical Forests. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hubner_et_al:LIPIcs.WABI.2024.5,
  author =	{H\"{u}bner, Lukas and Stamatakis, Alexandros},
  title =	{{Memoization on Shared Subtrees Accelerates Computations on Genealogical Forests}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.5},
  URN =		{urn:nbn:de:0030-drops-206499},
  doi =		{10.4230/LIPIcs.WABI.2024.5},
  annote =	{Keywords: bioinformatics, population genetics, algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time

Authors: Nick Fischer and Leo Wennmann

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this work we revisit the elementary scheduling problem 1||∑ p_j U_j. The goal is to select, among n jobs with processing times and due dates, a subset of jobs with maximum total processing time that can be scheduled in sequence without violating their due dates. This problem is NP-hard, but a classical algorithm by Lawler and Moore from the 60s solves it in pseudo-polynomial time O(nP), where P is the total processing time of all jobs. With the aim of developing best-possible pseudo-polynomial-time algorithms, a recent wave of results has improved Lawler and Moore’s algorithm for 1||∑ p_j U_j: first to time Õ(P^{7/4}) [Bringmann, Fischer, Hermelin, Shabtay, Wellnitz; ICALP'20], then to time Õ(P^{5/3}) [Klein, Polak, Rohwedder; SODA'23], and finally to time Õ(P^{7/5}) [Schieber, Sitaraman; WADS'23]. It remained an exciting open question whether these results could be improved further. In this work we develop an algorithm running in near-linear time Õ(P) for the 1||∑ p_j U_j problem. This running time not only significantly improves upon the previous results, but also matches conditional lower bounds based on the Strong Exponential Time Hypothesis or the Set Cover Hypothesis and is therefore likely optimal (up to subpolynomial factors). Our new algorithm also extends to the case of m machines in time Õ(P^m). In contrast to the previous improvements, we take a different, more direct approach inspired by the recent reductions from Modular Subset Sum to dynamic string problems. We thereby arrive at a satisfyingly simple algorithm.
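
For reference, the Lawler and Moore O(nP) baseline that this line of work improves upon fits in a few lines. A minimal Python sketch, assuming jobs are given as (processing time, due date) pairs: process jobs in earliest-due-date order and track which total on-time processing times are achievable.

def min_tardy_processing_time(jobs):
    """jobs: list of (p_j, d_j); returns the minimum total tardy processing time."""
    jobs = sorted(jobs, key=lambda job: job[1])   # earliest due date first
    P = sum(p for p, _ in jobs)
    feasible = [False] * (P + 1)   # feasible[t]: some on-time set has total time t
    feasible[0] = True
    for p, d in jobs:
        # Descend so each job is used at most once; the job fits iff t + p <= d.
        for t in range(min(d, P) - p, -1, -1):
            if feasible[t]:
                feasible[t + p] = True
    return P - max(t for t in range(P + 1) if feasible[t])

print(min_tardy_processing_time([(2, 3), (3, 5), (4, 6)]))   # 3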

Cite as

Nick Fischer and Leo Wennmann. Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 64:1-64:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{fischer_et_al:LIPIcs.ICALP.2024.64,
  author =	{Fischer, Nick and Wennmann, Leo},
  title =	{{Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{64:1--64:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.64},
  URN =		{urn:nbn:de:0030-drops-202079},
  doi =		{10.4230/LIPIcs.ICALP.2024.64},
  annote =	{Keywords: Scheduling, Fine-Grained Complexity, Dynamic Strings}
}
Document
Deterministic 3SUM-Hardness

Authors: Nick Fischer, Piotr Kaliciak, and Adam Polak

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
As one of the three main pillars of fine-grained complexity theory, the 3SUM problem explains the hardness of many diverse polynomial-time problems via fine-grained reductions. Many of these reductions are either directly based on or heavily inspired by Pătraşcu’s framework involving additive hashing, and are thus randomized. Some selected reductions were derandomized in previous work [Chan, He; SOSA'20], but the current techniques are limited and a major fraction of the reductions remains randomized. In this work we gather a toolkit aimed at derandomizing reductions based on additive hashing. Using this toolkit, we manage to derandomize almost all known 3SUM-hardness reductions. As technical highlights, we derandomize the hardness reductions to (offline) Set Disjointness, (offline) Set Intersection and Triangle Listing - these questions were explicitly left open in previous work [Kopelowitz, Pettie, Porat; SODA'16]. The few exceptions fall into a special category of recent reductions based on structure-versus-randomness dichotomies. We expect that our toolkit can be readily applied to derandomize future reductions as well. As a conceptual innovation, our work thereby promotes the theory of deterministic 3SUM-hardness. As our second contribution, we prove that there is a deterministic universe reduction for 3SUM. Specifically, using additive hashing it is a standard trick to assume that the numbers in 3SUM have size at most n³. We prove that this assumption is similarly valid for deterministic algorithms.
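
For concreteness, below is a standard "almost-linear" additive hash family of the kind such reductions rely on; the random draw of the multiplier a is precisely what a derandomization must replace. This demo is illustrative and is not the paper's toolkit; the parameter values are arbitrary.

import random

W, S = 64, 16                        # word size and bucket bits (example values)

def h(a, x):
    # h_a(x) = ((a * x) mod 2^W) >> (W - S); linear up to a +1 carry error.
    return ((a * x) % (1 << W)) >> (W - S)

a = random.randrange(1, 1 << W, 2)   # random odd multiplier
for _ in range(10000):
    x, y = random.randrange(1 << 40), random.randrange(1 << 40)
    # Almost-linearity: h(x + y) equals h(x) + h(y) or h(x) + h(y) + 1 mod 2^S.
    assert (h(a, x + y) - h(a, x) - h(a, y)) % (1 << S) in (0, 1)
print("almost-linearity held on all sampled pairs")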

Cite as

Nick Fischer, Piotr Kaliciak, and Adam Polak. Deterministic 3SUM-Hardness. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 49:1-49:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{fischer_et_al:LIPIcs.ITCS.2024.49,
  author =	{Fischer, Nick and Kaliciak, Piotr and Polak, Adam},
  title =	{{Deterministic 3SUM-Hardness}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{49:1--49:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.49},
  URN =		{urn:nbn:de:0030-drops-195772},
  doi =		{10.4230/LIPIcs.ITCS.2024.49},
  annote =	{Keywords: 3SUM, derandomization, fine-grained complexity}
}
Document
RANDOM
Testing Versus Estimation of Graph Properties, Revisited

Authors: Lior Gishboliner, Nick Kushnir, and Asaf Shapira

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
A graph G on n vertices is ε-far from property P if one must add/delete at least ε n² edges to turn G into a graph satisfying P. A distance estimator for P is an algorithm that, given G and α, ε > 0, distinguishes between the case that G is (α-ε)-close to P and the case that G is α-far from P. If P has a distance estimator whose query complexity depends only on ε, then P is said to be estimable. Every estimable property is clearly also testable, since testing corresponds to estimating with α = ε. A central result in the area of property testing is the Fischer-Newman theorem, stating that an inverse statement also holds, that is, that every testable property is in fact estimable. The proof of Fischer and Newman was highly ineffective, since it incurred a tower-type loss when transforming a testing algorithm for P into a distance estimator. This raised the natural problem, studied recently by Fiat-Ron and by Hoppen-Kohayakawa-Lang-Lefmann-Stagni, of whether one can find a transformation with a polynomial loss. We obtain the following results. - We show that if P is hereditary, then one can turn a tester for P into a distance estimator with an exponential loss. This is an exponential improvement over the result of Hoppen et al., who obtained a transformation with a double exponential loss. - We show that for every P, one can turn a testing algorithm for P into a distance estimator with a double exponential loss. This improves over the transformation of Fischer-Newman that incurred a tower-type loss. Our main conceptual contribution in this work is that we manage to turn the approach of Fischer-Newman, which was inherently ineffective, into an efficient one. On the technical level, our main contribution is in establishing certain properties of Frieze-Kannan Weak Regular partitions that are of independent interest.
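
As a toy illustration of a distance estimator (far simpler than the general transformations studied here), take the property P = "G has no edges": the distance of G to P is essentially its edge density, which plain sampling estimates with query complexity depending only on ε. All names below are hypothetical.

import random

def edge_density_estimate(edges, n, eps):
    """Estimate |E| / C(n,2) by sampling; the distance to the edgeless graph
    (under the eps * n^2 convention above) is about half this value."""
    q = int(4 / eps ** 2)            # O(1/eps^2) queries suffice
    hits = sum(frozenset(random.sample(range(n), 2)) in edges for _ in range(q))
    return hits / q

# Example: a random graph with edge probability 0.3.
n = 200
edges = {frozenset((u, v)) for u in range(n) for v in range(u + 1, n)
         if random.random() < 0.3}
print(edge_density_estimate(edges, n, eps=0.05))   # close to 0.3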

Cite as

Lior Gishboliner, Nick Kushnir, and Asaf Shapira. Testing Versus Estimation of Graph Properties, Revisited. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 46:1-46:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gishboliner_et_al:LIPIcs.APPROX/RANDOM.2023.46,
  author =	{Gishboliner, Lior and Kushnir, Nick and Shapira, Asaf},
  title =	{{Testing Versus Estimation of Graph Properties, Revisited}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{46:1--46:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.46},
  URN =		{urn:nbn:de:0030-drops-188713},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.46},
  annote =	{Keywords: Testing, estimation, weak regularity, randomized algorithms, graph theory, Frieze-Kannan Regularity}
}
Document
Can You Solve Closest String Faster Than Exhaustive Search?

Authors: Amir Abboud, Nick Fischer, Elazar Goldenberg, Karthik C. S., and Ron Safier

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We study the fundamental problem of finding the best string to represent a given set, in the form of the Closest String problem: Given a set X ⊆ Σ^d of n strings, find the string x^* minimizing the radius of the smallest Hamming ball around x^* that encloses all the strings in X. In this paper, we investigate whether the Closest String problem admits algorithms that are faster than the trivial exhaustive search algorithm. We obtain the following results for the two natural versions of the problem: - In the continuous Closest String problem, the goal is to find the solution string x^* anywhere in Σ^d. For binary strings, the exhaustive search algorithm runs in time O(2^d poly(nd)) and we prove that it cannot be improved to time O(2^{(1-ε) d} poly(nd)), for any ε > 0, unless the Strong Exponential Time Hypothesis fails. - In the discrete Closest String problem, x^* is required to be in the input set X. While this problem is clearly in polynomial time, its fine-grained complexity has been pinpointed to be quadratic time n^{2 ± o(1)} whenever the dimension is ω(log n) < d < n^o(1). We complement this known hardness result with new algorithms, proving essentially that whenever d falls out of this hard range, the discrete Closest String problem can be solved faster than exhaustive search. In the small-d regime, our algorithm is based on a novel application of the inclusion-exclusion principle. Interestingly, all of our results apply (and some are even stronger) to the natural dual of the Closest String problem, called the Remotest String problem, where the task is to find a string maximizing the Hamming distance to all the strings in X.
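
The exhaustive-search baseline in question is easy to state precisely; a short Python sketch for the binary continuous version (illustrative only):

from itertools import product

def closest_string_brute_force(X, d):
    """X: list of length-d 0/1 tuples. Returns (radius, center); O(2^d poly(nd))."""
    best = (d + 1, None)
    for c in product((0, 1), repeat=d):
        radius = max(sum(a != b for a, b in zip(c, x)) for x in X)
        best = min(best, (radius, c))
    return best

X = [(0, 0, 1, 0), (1, 0, 1, 1), (0, 1, 1, 1)]
print(closest_string_brute_force(X, d=4))   # radius 1, e.g. center (0, 0, 1, 1)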

Cite as

Amir Abboud, Nick Fischer, Elazar Goldenberg, Karthik C. S., and Ron Safier. Can You Solve Closest String Faster Than Exhaustive Search?. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 3:1-3:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2023.3,
  author =	{Abboud, Amir and Fischer, Nick and Goldenberg, Elazar and Karthik C. S. and Safier, Ron},
  title =	{{Can You Solve Closest String Faster Than Exhaustive Search?}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{3:1--3:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.3},
  URN =		{urn:nbn:de:0030-drops-186566},
  doi =		{10.4230/LIPIcs.ESA.2023.3},
  annote =	{Keywords: Closest string, fine-grained complexity, SETH, inclusion-exclusion}
}
Document
Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4

Authors: Egor Gorbachev and Marvin Künnemann

Published in: LIPIcs, Volume 258, 39th International Symposium on Computational Geometry (SoCG 2023)


Abstract
Klee’s measure problem (computing the volume of the union of n axis-parallel boxes in ℝ^d) is well known to have n^{d/2 ± o(1)}-time algorithms (Overmars, Yap, SICOMP'91; Chan, FOCS'13). Only recently, a conditional lower bound (without any restriction to "combinatorial" algorithms) could be shown for d = 3 (Künnemann, FOCS'22). Can this result be extended to a tight lower bound for dimensions d ≥ 4? In this paper, we formalize the technique of the tight lower bound for d = 3 using a combinatorial object we call prefix covering design. We show that these designs, which are related in spirit to combinatorial designs, directly translate to conditional lower bounds for Klee’s measure problem and various related problems. By devising good prefix covering designs, we give the following lower bounds for Klee’s measure problem in ℝ^d, the depth problem for axis-parallel boxes in ℝ^d, the largest-volume/max-perimeter empty (anchored) box problem in ℝ^{2d}, and related problems: - Ω(n^{1.90476}) for d = 4, - Ω(n^{2.22222}) for d = 5, - Ω(n^{d/3 + 2√d/9 - o(√d)}) for general d, assuming the 3-uniform hyperclique hypothesis. For Klee’s measure problem and the depth problem, these bounds improve previous lower bounds of Ω(n^{1.777...}), Ω(n^{2.0833...}) and Ω(n^{d/3 + 1/3 + Θ(1/d)}), respectively. Our improved prefix covering designs were obtained by (1) exploiting a computer-aided search using problem-specific insights as well as SAT solvers, and (2) showing how to transform combinatorial covering designs known in the literature to strong prefix covering designs. In contrast, we show that our lower bounds are close to best possible using this proof technique.
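
For orientation, d = 1 is the setting of Klee’s original question, the union length of n intervals, solvable in O(n log n) by sorting; the hardness results above concern d ≥ 4. A minimal sketch:

def union_length(intervals):
    """Total length covered by a union of half-open intervals [l, r)."""
    total, covered_until = 0, float("-inf")
    for l, r in sorted(intervals):
        if r <= covered_until:
            continue                  # already fully counted
        total += r - max(l, covered_until)
        covered_until = r
    return total

print(union_length([(1, 4), (2, 6), (8, 9)]))   # (1,6) contributes 5, (8,9) adds 1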

Cite as

Egor Gorbachev and Marvin Künnemann. Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4. In 39th International Symposium on Computational Geometry (SoCG 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 258, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gorbachev_et_al:LIPIcs.SoCG.2023.36,
  author =	{Gorbachev, Egor and K\"{u}nnemann, Marvin},
  title =	{{Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4}},
  booktitle =	{39th International Symposium on Computational Geometry (SoCG 2023)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-273-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{258},
  editor =	{Chambers, Erin W. and Gudmundsson, Joachim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2023.36},
  URN =		{urn:nbn:de:0030-drops-178861},
  doi =		{10.4230/LIPIcs.SoCG.2023.36},
  annote =	{Keywords: Fine-grained complexity theory, non-combinatorial lower bounds, computational geometry, clique detection}
}
Document
Track A: Algorithms, Complexity and Games
A Structural Investigation of the Approximability of Polynomial-Time Problems

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
An extensive research effort targets optimal (in)approximability results for various NP-hard optimization problems. Notably, the works of (Creignou'95) as well as (Khanna, Sudan, Trevisan, Williamson'00) establish a tight characterization of a large subclass of MaxSNP, namely Boolean MaxCSPs and further variants, in terms of their polynomial-time approximability. Can we obtain similarly encompassing characterizations for classes of polynomial-time optimization problems? To this end, we initiate the systematic study of a recently introduced polynomial-time analogue of MaxSNP, which includes a large number of well-studied problems (including Nearest and Furthest Neighbor in the Hamming metric, Maximum Inner Product, optimization variants of k-XOR and Maximum k-Cover). Specifically, for each k, MaxSP_k denotes the class of O(m^k)-time problems of the form max_{x_1,…,x_k} #{y : ϕ(x_1,…,x_k,y)}, where ϕ is a quantifier-free first-order property and m denotes the size of the relational structure. Assuming central hypotheses about clique detection in hypergraphs and exact Max-3-SAT, we show that for any MaxSP_k problem definable by a quantifier-free m-edge graph formula ϕ, the best possible approximation guarantee in faster-than-exhaustive-search time O(m^{k-δ}) falls into one of four categories: - optimizable to exactness in time O(m^{k-δ}), - an (inefficient) approximation scheme, i.e., a (1+ε)-approximation in time O(m^{k-f(ε)}), - a (fixed) constant-factor approximation in time O(m^{k-δ}), or - an nm^ε-approximation in time O(m^{k-f(ε)}). We obtain an almost complete characterization of these regimes, for MaxSP_k as well as for an analogously defined minimization class MinSP_k. As our main technical contribution, we show how to rule out the existence of approximation schemes for a large class of problems admitting constant-factor approximations, under a hypothesis for exact Sparse Max-3-SAT algorithms posed by (Alman, Vassilevska Williams'20). As general trends for the problems we consider, we observe: (1) exact optimizability has a simple algebraic characterization, (2) only few maximization problems do not admit a constant-factor approximation; these do not even have a subpolynomial-factor approximation, and (3) constant-factor approximation of minimization problems is equivalent to deciding whether the optimum is equal to 0.
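
To make the class definition concrete, here is a hypothetical toy MaxSP_2 instance on a graph, maximum number of common neighbors, written as max_{x_1,x_2} #{y : E(x_1,y) ∧ E(x_2,y)}, together with the exhaustive-search baseline that the classification above asks when one can beat:

def max_common_neighbors(adj):
    """Brute force over all pairs (x1, x2), counting witnesses y."""
    n = len(adj)
    return max(
        sum(1 for y in range(n) if adj[x1][y] and adj[x2][y])
        for x1 in range(n) for x2 in range(n) if x1 != x2
    )

adj = [
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
]
print(max_common_neighbors(adj))   # 2 (vertices 0 and 1 share neighbors 2 and 3)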

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A Structural Investigation of the Approximability of Polynomial-Time Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 30:1-30:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.30,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Structural Investigation of the Approximability of Polynomial-Time Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{30:1--30:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.30},
  URN =		{urn:nbn:de:0030-drops-163713},
  doi =		{10.4230/LIPIcs.ICALP.2022.30},
  annote =	{Keywords: Classification Theorems, Hardness of Approximation in P, Fine-grained Complexity Theory}
}
Document
Track A: Algorithms, Complexity and Games
Improved Sublinear-Time Edit Distance for Preprocessed Strings

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study the problem of approximating the edit distance of two strings in sublinear time, in a setting where one or both string(s) are preprocessed, as initiated by Goldenberg, Rubinstein, Saha (STOC '20). Specifically, in the (k, K)-gap edit distance problem, the goal is to distinguish whether the edit distance of two strings is at most k or at least K. We obtain the following results: - After preprocessing one string in time n^{1+o(1)}, we can solve (k, k ⋅ n^{o(1)})-gap edit distance in time (n/k + k) ⋅ n^{o(1)}. - After preprocessing both strings separately in time n^{1+o(1)}, we can solve (k, k ⋅ n^{o(1)})-gap edit distance in time k ⋅ n^{o(1)}. Each of these results improves upon a previously best known result, with respect to either the gap, the query time, or the preprocessing time. Our algorithms build on the framework by Andoni, Krauthgamer and Onak (FOCS '10) and the recent sublinear-time algorithm by Bringmann, Cassis, Fischer and Nakos (STOC '22). We replace many complicated parts of their algorithm by faster and simpler solutions which exploit the preprocessing.
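
For context, the exact (non-gap) primitive underlying these algorithms is the classical banded dynamic program, which decides whether the edit distance is at most k by inspecting only the O(nk) cells within distance k of the main diagonal. A sketch:

def bounded_edit_distance(a, b, k):
    """Return ed(a, b) if it is at most k, else None."""
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return None
    INF = k + 1
    prev = [min(j, INF) for j in range(m + 1)]           # DP row i = 0
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        cur[0] = min(i, INF)
        for j in range(max(1, i - k), min(m, i + k) + 1):
            cur[j] = min(
                prev[j] + 1,                             # delete a[i-1]
                cur[j - 1] + 1,                          # insert b[j-1]
                prev[j - 1] + (a[i - 1] != b[j - 1]),    # match / substitute
            )
        prev = cur
    return prev[m] if prev[m] <= k else None

print(bounded_edit_distance("kitten", "sitting", 3))     # 3
print(bounded_edit_distance("kitten", "sitting", 2))     # None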

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos. Improved Sublinear-Time Edit Distance for Preprocessed Strings. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.32,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and Nakos, Vasileios},
  title =	{{Improved Sublinear-Time Edit Distance for Preprocessed Strings}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.32},
  URN =		{urn:nbn:de:0030-drops-163734},
  doi =		{10.4230/LIPIcs.ICALP.2022.32},
  annote =	{Keywords: Edit Distance, Property Testing, Preprocessing, Precision Sampling}
}
Document
Superlinear Lower Bounds Based on ETH

Authors: András Z. Salamon and Michael Wehar

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
We introduce techniques for proving superlinear conditional lower bounds for polynomial time problems. In particular, we show that CircuitSAT for circuits with m gates and log(m) inputs (denoted by log-CircuitSAT) is not decidable in essentially-linear time unless the exponential time hypothesis (ETH) is false and k-Clique is decidable in essentially-linear time in terms of the graph’s size for all fixed k. Such conditional lower bounds have previously only been demonstrated relative to the strong exponential time hypothesis (SETH). Our results therefore offer significant progress towards proving unconditional superlinear time complexity lower bounds for natural problems in polynomial time.
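
To make the problem concrete: a circuit with m gates but only log(m) inputs has just 2^{log m} = m assignments, so brute force costs roughly m ⋅ m gate evaluations, and the result above conditionally rules out improving this to essentially-linear time. A sketch with a hypothetical gate-list encoding:

def circuit_sat(num_inputs, gates):
    """gates: list of (op, i, j) with op in {'AND', 'OR', 'NOT'} over wire
    indices; wires 0..num_inputs-1 are the inputs, the last gate is the output."""
    for assignment in range(1 << num_inputs):
        wires = [(assignment >> i) & 1 for i in range(num_inputs)]
        for op, i, j in gates:
            if op == 'AND':
                wires.append(wires[i] & wires[j])
            elif op == 'OR':
                wires.append(wires[i] | wires[j])
            else:                                  # 'NOT' (second index unused)
                wires.append(1 - wires[i])
        if wires[-1]:
            return True
    return False

# (x0 AND x1) OR (NOT x0) is satisfiable:
print(circuit_sat(2, [('AND', 0, 1), ('NOT', 0, 0), ('OR', 2, 3)]))   # True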

Cite as

András Z. Salamon and Michael Wehar. Superlinear Lower Bounds Based on ETH. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 55:1-55:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{salamon_et_al:LIPIcs.STACS.2022.55,
  author =	{Salamon, Andr\'{a}s Z. and Wehar, Michael},
  title =	{{Superlinear Lower Bounds Based on ETH}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{55:1--55:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.55},
  URN =		{urn:nbn:de:0030-drops-158652},
  doi =		{10.4230/LIPIcs.STACS.2022.55},
  annote =	{Keywords: Circuit Satisfiability, Conditional Lower Bounds, Exponential Time Hypothesis, Limited Nondeterminism}
}
Document
APPROX
Fine-Grained Completeness for Optimization in P

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search, and optimization variants of the k-XOR problem. Specifically, we define MaxSP as the class of problems definable as max_{x₁,…,x_k} #{(y₁,…,y_𝓁) : ϕ(x₁,…,x_k, y₁,…,y_𝓁)}, where ϕ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On m-sized structures, we can solve each such problem in time O(m^{k+𝓁-1}). Our results are: - We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under deterministic fine-grained reductions: A strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of O(m^{k+𝓁-1}) for all problems in MaxSP/MinSP by a polynomial factor. - This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic c-approximation would give a (c+ε)-approximation for all MaxSP/MinSP problems in time O(m^{k+𝓁-1-δ}), where ε > 0 can be chosen arbitrarily small. Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is equivalent to giving an O(1)-approximation for all MinSP problems in faster-than-O(m^{k+𝓁-1}) time. - By fine-tuning our reductions, we obtain mild algorithmic improvements for solving and approximating all problems in MaxSP and MinSP, using the fastest known algorithms for Maximum/Minimum Inner Product.
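
The complete problem identified above, Maximum Inner Product, in its simplest dense quadratic-baseline form (by the completeness theorem, a strongly subquadratic algorithm for this single problem would transfer to all of MaxSP):

def max_inner_product(vectors):
    """Maximize <x, y> over all pairs of 0/1 vectors, by exhaustive search."""
    n = len(vectors)
    return max(
        sum(a * b for a, b in zip(vectors[i], vectors[j]))
        for i in range(n) for j in range(i + 1, n)
    )

vecs = [(1, 0, 1, 1), (0, 1, 1, 1), (1, 1, 0, 1)]
print(max_inner_product(vecs))   # 2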

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. Fine-Grained Completeness for Optimization in P. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 9:1-9:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.APPROX/RANDOM.2021.9,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{Fine-Grained Completeness for Optimization in P}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{9:1--9:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  URN =		{urn:nbn:de:0030-drops-147024},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.9},
  annote =	{Keywords: Fine-grained Complexity \& Algorithm Design, Completeness, Hardness of Approximation in P, Dimensionality Reductions}
}
Document
Track A: Algorithms, Complexity and Games
Faster Minimization of Tardy Processing Time on a Single Machine

Authors: Karl Bringmann, Nick Fischer, Danny Hermelin, Dvir Shabtay, and Philip Wellnitz

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
This paper is concerned with the 1||∑ p_j U_j problem, the problem of minimizing the total processing time of tardy jobs on a single machine. This is not only a fundamental scheduling problem, but also a very important problem from a theoretical point of view, as it generalizes the Subset Sum problem and is closely related to the 0/1-Knapsack problem. The problem is well known to be NP-hard, but only in a weak sense, meaning it admits pseudo-polynomial time algorithms. The fastest known pseudo-polynomial time algorithm for the problem is the famous Lawler and Moore algorithm, which runs in O(P ⋅ n) time, where P is the total processing time of all n jobs in the input. This algorithm was developed in the late 60s and has yet to be improved. In this paper we develop two new algorithms for 1||∑ p_j U_j, each improving on Lawler and Moore’s algorithm in a different scenario: - Our first algorithm runs in Õ(P^{7/4}) time, and outperforms Lawler and Moore’s algorithm in instances where n = ω̃(P^{3/4}). - Our second algorithm runs in Õ(min{P ⋅ D_#, P + D}) time, where D_# is the number of different due dates in the instance, and D is the sum of all different due dates. This algorithm improves on Lawler and Moore’s algorithm when n = ω̃(D_#) or n = ω̃(D/P). Further, it extends the known Õ(P) algorithm for the single-due-date special case of 1||∑ p_j U_j in a natural way. Both algorithms rely on basic primitive operations between sets of integers and vectors of integers for the speedup in their running times. The second algorithm relies on fast polynomial multiplication as its main engine, while for the first algorithm we define a new "skewed" version of (max,min)-convolution which is interesting in its own right.
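
One of the primitive set operations alluded to above is the sumset A + B = {a + b : a ∈ A, b ∈ B}, which is exactly the support of the product of the 0/1 coefficient polynomials of A and B. The sketch below multiplies naively for clarity; substituting FFT-based multiplication gives the near-linear behavior the second algorithm exploits.

def sumset(A, B):
    """A, B: non-empty sets of non-negative integers. Returns A + B."""
    fa = [0] * (max(A) + 1)
    fb = [0] * (max(B) + 1)
    for a in A:
        fa[a] = 1
    for b in B:
        fb[b] = 1
    prod = [0] * (len(fa) + len(fb) - 1)     # naive polynomial product
    for i, ca in enumerate(fa):
        if ca:
            for j, cb in enumerate(fb):
                if cb:
                    prod[i + j] = 1
    return {s for s, c in enumerate(prod) if c}

print(sumset({0, 2, 3}, {1, 4}))   # {1, 3, 4, 6, 7}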

Cite as

Karl Bringmann, Nick Fischer, Danny Hermelin, Dvir Shabtay, and Philip Wellnitz. Faster Minimization of Tardy Processing Time on a Single Machine. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2020.19,
  author =	{Bringmann, Karl and Fischer, Nick and Hermelin, Danny and Shabtay, Dvir and Wellnitz, Philip},
  title =	{{Faster Minimization of Tardy Processing Time on a Single Machine}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.19},
  URN =		{urn:nbn:de:0030-drops-124269},
  doi =		{10.4230/LIPIcs.ICALP.2020.19},
  annote =	{Keywords: Weighted number of tardy jobs, sumsets, convolutions}
}
Document
A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties

Authors: Karl Bringmann, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
An important class of problems in logics and database theory is given by fixing a first-order property ψ over a relational structure and considering the model-checking problem for ψ. Recently, Gao, Impagliazzo, Kolokolova, and Williams (SODA 2017) identified this class as fundamental for the theory of fine-grained complexity in P, by showing that the (Sparse) Orthogonal Vectors problem is complete for this class under fine-grained reductions. This raises the question whether fine-grained complexity can yield a precise understanding of all first-order model-checking problems. Specifically, can we determine, for any fixed first-order property ψ, the exponent of the optimal running time O(m^{c_ψ}), where m denotes the number of tuples in the relational structure? Towards answering this question, in this work we give a dichotomy for the class of ∃^k∀-quantified graph properties. For every such property ψ, we either give a polynomial-time improvement over the baseline O(m^k)-time algorithm or show that it requires time m^{k-o(1)} under the hypothesis that MAX-3-SAT has no O((2-ε)^n)-time algorithm. More precisely, we define a hardness parameter h = H(ψ) such that ψ can be decided in time O(m^{k-ε}) if h ≤ 2, and requires time m^{k-o(1)} for h ≥ 3 unless the h-uniform HyperClique hypothesis fails. This unveils a natural hardness hierarchy within first-order properties: for every h ≥ 3, we show that there exists an ∃^k∀-quantified graph property ψ with hardness H(ψ) = h that is solvable in time O(m^{k-ε}) if and only if the h-uniform HyperClique hypothesis fails. Finally, we give more precise upper and lower bounds for an exemplary class of formulas with k = 3, and extend our classification to a counting dichotomy.
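
A concrete toy member of this class: the ∃²∀-property "there exist x_1, x_2 such that every y is adjacent to x_1 or to x_2" (a dominating pair), decided here by the baseline brute force that the dichotomy asks when one can improve:

def has_dominating_pair(adj):
    """Exists x1, x2 such that forall y: y in {x1, x2} or y ~ x1 or y ~ x2."""
    n = len(adj)
    return any(
        all(y in (x1, x2) or adj[x1][y] or adj[x2][y] for y in range(n))
        for x1 in range(n) for x2 in range(n)
    )

# Path 0 - 1 - 2 - 3: vertices 1 and 2 together dominate all vertices.
adj = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]
print(has_dominating_pair(adj))   # True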

Cite as

Karl Bringmann, Nick Fischer, and Marvin Künnemann. A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 31:1-31:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.CCC.2019.31,
  author =	{Bringmann, Karl and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{31:1--31:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.31},
  URN =		{urn:nbn:de:0030-drops-108533},
  doi =		{10.4230/LIPIcs.CCC.2019.31},
  annote =	{Keywords: Fine-grained Complexity, Hardness in P, Hyperclique Conjecture, Constrained Triangle Detection}
}
