316 Search Results for "Puglisi, Simon J."


Volume

LIPIcs, Volume 296

35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024)

CPM 2024, June 25-27, 2024, Fukuoka, Japan

Editors: Shunsuke Inenaga and Simon J. Puglisi

Volume

LIPIcs, Volume 274

31st Annual European Symposium on Algorithms (ESA 2023)

ESA 2023, September 4-6, 2023, Amsterdam, The Netherlands

Editors: Inge Li Gørtz, Martin Farach-Colton, Simon J. Puglisi, and Grzegorz Herman

Volume

LIPIcs, Volume 202

46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)

MFCS 2021, August 23-27, 2021, Tallinn, Estonia

Editors: Filippo Bonchi and Simon J. Puglisi

Volume

LIPIcs, Volume 75

16th International Symposium on Experimental Algorithms (SEA 2017)

SEA 2017, June 21-23, 2017, London, UK

Editors: Costas S. Iliopoulos, Solon P. Pissis, Simon J. Puglisi, and Rajeev Raman

Document
Invited Talk
Simple (Invited Talk)

Authors: Eva Rotenberg

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Simplicity in algorithms has various aspects, interpretations, and implications. One is the simplicity of the algorithmic solution itself: if an algorithm (or data structure) has a brief verbal description or can be written with few lines of pseudocode, this can lead to easier, more robust, and possibly more efficient implementations. Another aspect of simplicity relates to the proofs of correctness and efficiency of our algorithmic solutions. Here, we experience that algorithms and data structures with simpler proofs of statements about their properties can be easier to understand, easier to teach, and sometimes easier to generalise. Simplification of proofs also receives attention in mathematics; here, too, simplification benefits clarity of exposition and the possibility of generalisation. There are even examples of proof simplification leading to the design of new and more efficient algorithms. This talk will present examples illustrating these various aspects of simplicity: examples where algorithmic simplification or proof simplification has led to improved performance of algorithms and data structures, in theory, in practice, or both. Finally, some of the most attractive questions in discrete mathematics and in the theory of computing have a property in common: they are very simple to pose but, surprisingly, to our knowledge not very simple to answer. The talk will include examples of such questions, which I leave as open problems for the audience.

Cite as

Eva Rotenberg. Simple (Invited Talk). In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 2:1-2:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{rotenberg:LIPIcs.ESA.2024.2,
  author =	{Rotenberg, Eva},
  title =	{{Simple}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{2:1--2:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.2},
  URN =		{urn:nbn:de:0030-drops-210739},
  doi =		{10.4230/LIPIcs.ESA.2024.2},
  annote =	{Keywords: Simplicity, graph algorithms, computational geometry, algorithmic simplification, data structures, combinatorics, proof simplification, dynamic graphs}
}
Document
From Donkeys to Kings in Tournaments

Authors: Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A tournament is an orientation of a complete graph. A vertex that can reach every other vertex within two steps is called a king. We study the complexity of finding k kings in a tournament graph. We show that the randomized query complexity of finding k ≤ 3 kings is O(n), and that in the deterministic case it takes the same number of queries (up to a constant factor) as finding a single king (the best known deterministic algorithm makes O(n^{3/2}) queries). On the other hand, we show that finding k ≥ 4 kings requires Ω(n²) queries, even in the randomized case. For k ≥ 4 we therefore consider the RAM model. We give an algorithm that finds k kings in time O(kn²), which is optimal for constant values of k. Alternatively, one can find k ≥ 4 kings in time n^{ω} (the time for matrix multiplication). We provide evidence that this is optimal for large k via a fine-grained reduction from a variant of the triangle detection problem.
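As a concrete baseline (a minimal sketch, not the paper's query-efficient algorithms; the adjacency-matrix input and function names are illustrative), the classical fact that a vertex of maximum out-degree in a tournament is a king already yields a simple Θ(n²)-query procedure:

# A minimal baseline sketch: in any tournament, a vertex of maximum
# out-degree is a king (a classical fact). beats[u][v] is True iff the
# edge between u and v is oriented u -> v. This inspects Θ(n²) edges,
# far more than the query-efficient algorithms discussed above.

def find_king(beats):
    n = len(beats)
    # A vertex with maximum out-degree is guaranteed to be a king.
    return max(range(n), key=lambda u: sum(beats[u]))

def is_king(beats, u):
    n = len(beats)
    # u is a king iff every other vertex is reached in at most two steps.
    for v in range(n):
        if v == u or beats[u][v]:
            continue
        if not any(beats[u][w] and beats[w][v] for w in range(n)):
            return False
    return True

# Example: the 3-cycle tournament 0 -> 1 -> 2 -> 0; every vertex is a king.
beats = [[False, True, False],
         [False, False, True],
         [True, False, False]]
assert is_king(beats, find_king(beats))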

Cite as

Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon. From Donkeys to Kings in Tournaments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{abboud_et_al:LIPIcs.ESA.2024.3,
  author =	{Abboud, Amir and Grossman, Tomer and Naor, Moni and Solomon, Tomer},
  title =	{{From Donkeys to Kings in Tournaments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.3},
  URN =		{urn:nbn:de:0030-drops-210740},
  doi =		{10.4230/LIPIcs.ESA.2024.3},
  annote =	{Keywords: Tournament Graphs, Kings, Query Complexity, Fine Grained Complexity}
}
Document
Longest Common Substring with Gaps and Related Problems

Authors: Aranya Banerjee, Daniel Gibney, and Sharma V. Thankachan

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The longest common substring (also known as longest common factor) and longest common subsequence problems are two well-studied classical string problems. The former is solvable in optimal 𝒪(n) time for two strings of length m and n with m ≤ n, and the latter is solvable in 𝒪(nm) time, which is conditionally optimal under the Strong Exponential Time Hypothesis. In this work, we study the problem of longest common factor with gaps, that is, finding a set of at most k matching substrings obeying precedence conditions with maximum total length. For k = 1, this is equivalent to the longest common factor problem, and for k = m, it is equivalent to the longest common subsequence problem. We demonstrate that, for constant k, this problem can be solved in strongly subquadratic time, i.e., nm^{1 - Θ(1)}. Motivated by co-linear chaining applications in computational biology, we further show that our results on the longest common factor with gaps extend to the case where the matches are restricted to maximal exact matches (MEMs). To further demonstrate the applicability of our techniques, we show that a similar approach can be used for a restricted version of the episode matching problem, in which one seeks an ordered set of at most k matches whose concatenation equals a query pattern P while minimizing the length of the substring of T containing the matches. All of these solutions run in strongly subquadratic time for constant k.
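For orientation, the two extreme cases k = 1 and k = m correspond to the following textbook O(nm)-time dynamic programs (baselines only, not the paper's subquadratic algorithms; for k = 1 the optimal bound is attained with suffix structures, not this DP):

# k = 1: longest common substring; k = m: longest common subsequence.
# Both dynamic programs run in O(nm) time.

def longest_common_substring(a, b):
    best = 0
    dp = [0] * (len(b) + 1)  # dp[j]: length of common suffix of a[..i], b[..j]
    for ai in a:
        prev = 0  # dp value from the previous row, previous column
        for j, bj in enumerate(b, 1):
            cur = dp[j]
            dp[j] = prev + 1 if ai == bj else 0
            best = max(best, dp[j])
            prev = cur
    return best

def longest_common_subsequence(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ai in enumerate(a, 1):
        for j, bj in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ai == bj else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

assert longest_common_substring("banana", "antenna") == 2   # "an"
assert longest_common_subsequence("banana", "antenna") == 4  # "anna"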

Cite as

Aranya Banerjee, Daniel Gibney, and Sharma V. Thankachan. Longest Common Substring with Gaps and Related Problems. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{banerjee_et_al:LIPIcs.ESA.2024.16,
  author =	{Banerjee, Aranya and Gibney, Daniel and Thankachan, Sharma V.},
  title =	{{Longest Common Substring with Gaps and Related Problems}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.16},
  URN =		{urn:nbn:de:0030-drops-210877},
  doi =		{10.4230/LIPIcs.ESA.2024.16},
  annote =	{Keywords: Pattern Matching, Longest Common Subsequence, Episode Matching}
}
Document
Height-Bounded Lempel-Ziv Encodings

Authors: Hideo Bannai, Mitsuru Funakoshi, Diptarama Hendrian, Myuji Matsuda, and Simon J. Puglisi

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We introduce height-bounded LZ encodings (LZHB), a new family of compressed representations that are variants of Lempel-Ziv parsings focused on bounding the worst-case access time to arbitrary positions in the text directly via the compressed representation. An LZ-like encoding is a partitioning of the string into phrases: phrases of length 1, which are encoded literally, and phrases of length at least 2, which have a previous occurrence in the string and are encoded by the position and length of such an occurrence. An LZ-like encoding induces an implicit referencing forest on the set of positions of the string. An LZHB encoding is an LZ-like encoding in which the height of this implicit referencing forest is bounded. An LZHB encoding with height constraint h allows access to an arbitrary position of the underlying text using O(h) predecessor queries. While computing the optimal (i.e., smallest) LZHB encoding efficiently seems to be difficult [Cicalese & Ugazio 2024, arXiv, to appear at DLT 2024], we give the first linear-time algorithm for strings over a constant-size alphabet that computes the greedy LZHB encoding, i.e., the string is processed from beginning to end, and the longest prefix of the remaining string that can satisfy the height constraint is taken as the next phrase. Our algorithms significantly improve, both theoretically and practically, on the very recently and independently proposed algorithms by Lipták et al. (CPM 2024). We also analyze the size of height-bounded LZ encodings in the context of repetitiveness measures, and show that there exists a constant c such that the size ẑ_{HB(c log n)} of the optimal LZHB encoding whose height is bounded by c log n, for any string of length n, is O(ĝ_{rl}), where ĝ_{rl} is the size of the smallest run-length grammar. Furthermore, we show that there exists a family of strings for which ẑ_{HB(c log n)} = o(ĝ_{rl}), thus making ẑ_{HB(c log n)} one of the smallest known repetitiveness measures for which O(polylog n)-time access is possible using linear (O(ẑ_{HB(c log n)})) space.
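The following is a minimal sketch of the greedy height-bounded parsing idea under simplifying assumptions (naive quadratic search, sources restricted to strictly earlier positions); the paper's algorithm achieves linear time. The invariant it illustrates: literal positions get height 0, each copied position gets its source's height plus one, and a phrase is extended only while the bound h is respected.

# Naive sketch of greedy LZHB parsing. height[i] is the depth of position i
# in the implicit referencing forest: 0 for literals, height[src] + 1 for
# copied positions. Phrases are (pos, length, src), src = None for literals.

def greedy_lzhb(s, h):
    n = len(s)
    height = [0] * n
    phrases = []
    i = 0
    while i < n:
        best_len, best_src = 1, None  # fall back to a literal phrase
        for src in range(i):  # candidate earlier occurrence
            l = 0
            while (i + l < n and s[src + l] == s[i + l]
                   and height[src + l] + 1 <= h):
                l += 1
                if src + l >= i:  # no self-overlapping sources in this sketch
                    break
            if l >= 2 and l > best_len:
                best_len, best_src = l, src
        if best_src is not None:
            for l in range(best_len):
                height[i + l] = height[best_src + l] + 1
        phrases.append((i, best_len, best_src))
        i += best_len
    return phrases

# With h = 1, copies may only reference literal positions.
print(greedy_lzhb("abababab", 1))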

Cite as

Hideo Bannai, Mitsuru Funakoshi, Diptarama Hendrian, Myuji Matsuda, and Simon J. Puglisi. Height-Bounded Lempel-Ziv Encodings. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bannai_et_al:LIPIcs.ESA.2024.18,
  author =	{Bannai, Hideo and Funakoshi, Mitsuru and Hendrian, Diptarama and Matsuda, Myuji and Puglisi, Simon J.},
  title =	{{Height-Bounded Lempel-Ziv Encodings}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.18},
  URN =		{urn:nbn:de:0030-drops-210899},
  doi =		{10.4230/LIPIcs.ESA.2024.18},
  annote =	{Keywords: Lempel-Ziv parsing, data compression}
}
Document
Longest Common Extensions with Wildcards: Trade-Off and Applications

Authors: Gabriel Bathie, Panagiotis Charalampopoulos, and Tatiana Starikovskaya

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study the Longest Common Extension (LCE) problem in a string containing wildcards. Wildcards (also called "don't cares" or "holes") are special characters that match any other character in the alphabet, similar to the character "?" in Unix commands or "." in regular expression engines. We consider the problem parameterized by G, the number of maximal contiguous groups of wildcards in the input string. Our main contribution is a simple data structure for this problem that can be built in O(n (G/t) log n) time, occupies O(nG/t) space, and answers queries in O(t) time, for any t ∈ [1 .. G]. Up to the O(log n) factor, this interpolates smoothly between the data structure of Crochemore et al. [JDA 2015], which has O(nG) preprocessing time and space and O(1) query time, and a simple solution based on the "kangaroo jumping" technique [Landau and Vishkin, STOC 1986], which has O(n) preprocessing time and space and O(G) query time. By establishing a connection between this problem and Boolean matrix multiplication, we show that our solution is optimal up to subpolynomial factors when G = Ω(n) under a widely believed hypothesis. In addition, we develop a new simple, deterministic and combinatorial algorithm for sparse Boolean matrix multiplication. Finally, we show that our data structure can be used to obtain efficient algorithms for approximate pattern matching and structural analysis of strings with wildcards. First, we consider the problem of pattern matching with k errors (i.e., edit operations) in the setting where both the pattern and the text may contain wildcards. The "kangaroo jumping" technique can be adapted to yield an algorithm for this problem with runtime O(n(k+G)), where G is the total number of maximal contiguous groups of wildcards in the text and the pattern and n is the length of the text. By combining "kangaroo jumping" with a tailor-made data structure for LCE queries, Akutsu [IPL 1995] devised an O(n√{km} polylog m)-time algorithm. We improve on both algorithms when k ≪ G ≪ m by giving an algorithm with runtime O(n(k + √{Gk log n})). Second, we give O(n√G log n)-time and O(n)-space algorithms for computing the prefix array, as well as the quantum/deterministic border and period arrays of a string with wildcards. This is an improvement over the O(n√{n log n})-time algorithms of Iliopoulos and Radoszewski [CPM 2016] when G = O(n / log n).
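A minimal sketch of the kangaroo-jumping idea for LCE queries with wildcards: take an exact longest-common-extension, and whenever the mismatching position holds a wildcard, jump over it and continue. The exact-LCE oracle below is a naive scan purely for illustration; in the data structures above it would be an O(1)-time suffix-array/LCP oracle.

# Each query costs O(1 + number of wildcard mismatches crossed) oracle calls.

def exact_lce(s, i, j):
    l = 0
    while i + l < len(s) and j + l < len(s) and s[i + l] == s[j + l]:
        l += 1
    return l

def wildcard_lce(s, i, j, wildcard="?"):
    l = 0
    while i + l < len(s) and j + l < len(s):
        l += exact_lce(s, i + l, j + l)
        if i + l >= len(s) or j + l >= len(s):
            break
        if s[i + l] == wildcard or s[j + l] == wildcard:
            l += 1  # a wildcard matches any character: jump over it
        else:
            break  # a genuine mismatch ends the extension
    return l

s = "ab?aaba?"
print(wildcard_lce(s, 0, 4))  # 4: positions (2,6) and (3,7) match via '?'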

Cite as

Gabriel Bathie, Panagiotis Charalampopoulos, and Tatiana Starikovskaya. Longest Common Extensions with Wildcards: Trade-Off and Applications. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 19:1-19:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bathie_et_al:LIPIcs.ESA.2024.19,
  author =	{Bathie, Gabriel and Charalampopoulos, Panagiotis and Starikovskaya, Tatiana},
  title =	{{Longest Common Extensions with Wildcards: Trade-Off and Applications}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{19:1--19:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.19},
  URN =		{urn:nbn:de:0030-drops-210904},
  doi =		{10.4230/LIPIcs.ESA.2024.19},
  annote =	{Keywords: Longest common prefix, longest common extension, wildcards, Boolean matrix multiplication, approximate pattern matching, periodicity arrays}
}
Document
Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams

Authors: Amit Chakrabarti, Andrew McGregor, and Anthony Wirth

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The maximum coverage problem is to select k sets, from a collection of m sets, such that the cardinality of their union, in a universe of size n, is maximized. We consider (1-1/e-ε)-approximation algorithms for this NP-hard problem in three standard data stream models. 1) Dynamic Model. The stream consists of a sequence of sets being inserted and deleted. Our multi-pass algorithm uses ε^{-2} k ⋅ polylog(n,m) space. The best previous result (Assadi and Khanna, SODA 2018) used (n + ε^{-4} k) polylog(n,m) space. While both algorithms use O(ε^{-1} log m) passes, our analysis shows that, when ε ≤ 1/log log m, it is possible to reduce the number of passes by a 1/log log m factor without incurring additional space. 2) Random Order Model. In this model, there are no deletions, and the sets forming the instance are uniformly randomly permuted to form the input stream. We show that a single pass and k polylog(n,m) space suffice for arbitrarily small constant ε. The best previous result, by Warneke et al. (ESA 2023), used k² polylog(n,m) space. 3) Insert-Only Model. Lastly, our results, along with numerous previous results, use a sub-sampling technique introduced by McGregor and Vu (ICDT 2017) to sparsify the input instance. We explain how this technique and others used in the paper can be implemented so that the amortized update time of our algorithm is polylogarithmic. This also implies an improvement of the state-of-the-art insert-only algorithms in terms of update time: polylog(m,n) update time suffices, whereas the best previous result by Jaud et al. (SEA 2023) required update time linear in k.
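For reference, this is the classical offline greedy against which the (1-1/e-ε) guarantee is measured (a sketch; the streaming algorithms above emulate its guarantee under space constraints):

# Offline greedy for maximum coverage: k times, pick the set covering the
# most still-uncovered elements. Achieves a (1 - 1/e)-approximation.

def greedy_max_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # nothing new can be covered
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 5}]
chosen, covered = greedy_max_coverage(sets, 2)
print(chosen, covered)  # picks {4,5,6,7} then {1,2,3}: 7 elements covered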

Cite as

Amit Chakrabarti, Andrew McGregor, and Anthony Wirth. Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{chakrabarti_et_al:LIPIcs.ESA.2024.40,
  author =	{Chakrabarti, Amit and McGregor, Andrew and Wirth, Anthony},
  title =	{{Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{40:1--40:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.40},
  URN =		{urn:nbn:de:0030-drops-211114},
  doi =		{10.4230/LIPIcs.ESA.2024.40},
  annote =	{Keywords: Data Stream Computation, Maximum Coverage, Submodular Maximization}
}
Document
Bicriterial Approximation for the Incremental Prize-Collecting Steiner-Tree Problem

Authors: Yann Disser, Svenja M. Griesbach, Max Klimm, and Annette Lutz

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We consider an incremental variant of the rooted prize-collecting Steiner-tree problem with a growing budget constraint. While no incremental solution exists that simultaneously approximates the optimum for all budgets, we show that a bicriterial (α,μ)-approximation is possible, i.e., a solution that, for every budget B ∈ ℝ_{≥ 0}, is with budget B+α a multiplicative μ-approximation of the optimum solution with budget B. For the case that the underlying graph is a tree, we present a polynomial-time density-greedy algorithm that computes a (χ,1)-approximation, where χ denotes the eccentricity of the root vertex in the underlying graph, and we show that this is best possible. An adaptation of the density-greedy algorithm for general graphs is (γ,2)-competitive, where γ is the maximal length of a vertex-disjoint path starting in the root. While this algorithm does not run in polynomial time, it can be adapted to a (γ,3)-competitive algorithm that does. We further devise a capacity-scaling algorithm that guarantees a (3χ,8)-approximation and, more generally, a ((4𝓁 - 1)χ, (2^{𝓁 + 2})/(2^𝓁 - 1))-approximation for every fixed 𝓁 ∈ ℕ.
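For concreteness, a quick check (not from the paper text) of how the general capacity-scaling guarantee specializes: 𝓁 = 1 recovers the stated (3χ,8) base case, and as 𝓁 grows the ratio tends to 4 while the additive term grows linearly in 𝓁:

\[
\ell = 1:\ \bigl((4\cdot 1-1)\chi,\ \tfrac{2^{1+2}}{2^{1}-1}\bigr) = (3\chi,\ 8),\qquad
\ell = 2:\ \bigl(7\chi,\ \tfrac{16}{3}\bigr),\qquad
\lim_{\ell\to\infty}\frac{2^{\ell+2}}{2^{\ell}-1} = 4.
\]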

Cite as

Yann Disser, Svenja M. Griesbach, Max Klimm, and Annette Lutz. Bicriterial Approximation for the Incremental Prize-Collecting Steiner-Tree Problem. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 47:1-47:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{disser_et_al:LIPIcs.ESA.2024.47,
  author =	{Disser, Yann and Griesbach, Svenja M. and Klimm, Max and Lutz, Annette},
  title =	{{Bicriterial Approximation for the Incremental Prize-Collecting Steiner-Tree Problem}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{47:1--47:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.47},
  URN =		{urn:nbn:de:0030-drops-211188},
  doi =		{10.4230/LIPIcs.ESA.2024.47},
  annote =	{Keywords: incremental optimization, competitive analysis, prize-collecting Steiner-tree}
}
Document
Better Diameter Algorithms for Bounded VC-Dimension Graphs and Geometric Intersection Graphs

Authors: Lech Duraj, Filip Konieczny, and Krzysztof Potępa

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We develop a framework for algorithms that find the diameter in graphs of bounded distance Vapnik-Chervonenkis dimension, in (parameterized) subquadratic time complexity. The class of graphs of bounded distance VC-dimension is wide, including, e.g., all minor-free graphs. We build on the work of Ducoffe et al. [SODA'20, SICOMP'22], improving their technique. With our approach the algorithms become simpler and faster, working in 𝒪(k ⋅ n^{1-1/d} ⋅ m ⋅ polylog(n)) time complexity for a graph on n vertices and m edges, where k is the diameter and d is the distance VC-dimension of the graph. Furthermore, our approach allows the improved technique to be used in a more general setting. In particular, we use this framework for geometric intersection graphs, i.e., graphs where vertices are identical geometric objects in the plane and adjacency is defined by intersection. Applying our approach to these graphs, we partially answer a question posed by Bringmann et al. [SoCG'22], giving an 𝒪(n^{7/4} ⋅ polylog(n)) parameterized diameter algorithm for unit square intersection graphs of size n, as well as a more general algorithm for convex polygon intersection graphs.

Cite as

Lech Duraj, Filip Konieczny, and Krzysztof Potępa. Better Diameter Algorithms for Bounded VC-Dimension Graphs and Geometric Intersection Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 51:1-51:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{duraj_et_al:LIPIcs.ESA.2024.51,
  author =	{Duraj, Lech and Konieczny, Filip and Pot\k{e}pa, Krzysztof},
  title =	{{Better Diameter Algorithms for Bounded VC-Dimension Graphs and Geometric Intersection Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{51:1--51:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.51},
  URN =		{urn:nbn:de:0030-drops-211229},
  doi =		{10.4230/LIPIcs.ESA.2024.51},
  annote =	{Keywords: Graph Diameter, Geometric Intersection Graphs, Vapnik-Chervonenkis Dimension}
}
Document
A Simple Representation of Tree Covering Utilizing Balanced Parentheses and Efficient Implementation of Average-Case Optimal RMQs

Authors: Kou Hamada, Sankardeep Chakraborty, Seungbum Jo, Takuto Koriyama, Kunihiko Sadakane, and Srinivasa Rao Satti

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Tree covering is a technique for decomposing a tree into smaller trees with desirable properties, and it has been employed in various succinct data structures. However, significant hurdles stand in the way of a practical implementation of tree covering: many pointers are used to maintain the tree-covering hierarchy, and the many indices for tree navigational queries consume theoretically negligible yet practically vast space. To tackle these problems, we propose a simple representation of tree covering using a balanced-parentheses (BP) representation. The key to the proposal is the observation that every micro tree splits into at most two intervals of the BP representation. Utilizing this representation, we propose several data structures that represent a tree and its tree cover, which consequently allow micro tree compression with arbitrary coding and efficient tree navigational queries. We also apply our data structure to the average-case optimal RMQ structure of Munro et al. [ESA 2021] and implement it. Our RMQ data structures use less than 2n bits and process queries in practical time in several performance-evaluation settings, narrowing the gap between theoretical space complexity and actual space consumption. For example, our implementation consumes 1.822n bits and processes queries in 5µs on average for random queries and in 13µs on average for the worst query widths. We also implement tree navigational operations while using the same amount of space as the RMQ data structures. We believe the representation can be widely utilized for designing practically memory-efficient data structures based on tree covering.
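A minimal sketch of the underlying balanced-parentheses encoding (illustrative only; real succinct structures answer the matching-parenthesis query in O(1) time with o(n) extra bits rather than by scanning):

# A DFS writes '(' on entering a node and ')' on leaving it, so every
# subtree occupies one balanced interval of the sequence.

def tree_to_bp(children, root=0):
    bp = []
    def dfs(u):
        bp.append("(")
        for c in children.get(u, []):
            dfs(c)
        bp.append(")")
    dfs(root)
    return "".join(bp)

def find_close(bp, i):
    # Position of the ')' matching the '(' at position i (linear scan).
    depth = 0
    for j in range(i, len(bp)):
        depth += 1 if bp[j] == "(" else -1
        if depth == 0:
            return j
    raise ValueError("unbalanced sequence")

children = {0: [1, 4], 1: [2, 3]}  # a 5-node tree
bp = tree_to_bp(children)
print(bp)                  # ((()())())
print(find_close(bp, 1))   # 6: subtree of node 1 occupies bp[1..6]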

Cite as

Kou Hamada, Sankardeep Chakraborty, Seungbum Jo, Takuto Koriyama, Kunihiko Sadakane, and Srinivasa Rao Satti. A Simple Representation of Tree Covering Utilizing Balanced Parentheses and Efficient Implementation of Average-Case Optimal RMQs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 64:1-64:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{hamada_et_al:LIPIcs.ESA.2024.64,
  author =	{Hamada, Kou and Chakraborty, Sankardeep and Jo, Seungbum and Koriyama, Takuto and Sadakane, Kunihiko and Satti, Srinivasa Rao},
  title =	{{A Simple Representation of Tree Covering Utilizing Balanced Parentheses and Efficient Implementation of Average-Case Optimal RMQs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{64:1--64:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.64},
  URN =		{urn:nbn:de:0030-drops-211359},
  doi =		{10.4230/LIPIcs.ESA.2024.64},
  annote =	{Keywords: Hypersuccinct trees, Succinct data structures, Range minimum queries, Binary trees}
}
Document
Re²Pair: Increasing the Scalability of RePair by Decreasing Memory Usage

Authors: Justin Kim, Rahul Varki, Marco Oliva, and Christina Boucher

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The RePair compression algorithm produces a context-free grammar by iteratively substituting the most frequently occurring pair of consecutive symbols with a new symbol, until every pair of consecutive symbols appears only once in the compressed text. It is widely used in bioinformatics, machine learning, and information retrieval, where random access to the original input text is needed. For example, in pangenomics, RePair is used for random access to a population of genomes. BigRePair improves the scalability of the original RePair algorithm by using Prefix-Free Parsing (PFP) to preprocess the text prior to building the RePair grammar. Despite the efficiency of PFP on repetitive text, the size of the parse causes a memory bottleneck in BigRePair. In this paper, we design and implement recursive RePair (denoted Re²Pair), which builds the RePair grammar using recursive PFP. Our novel algorithm faces the challenge of constructing the RePair grammar without direct access to the parse of the text, relying solely on the dictionary of the text and the parse and dictionary of the parse of the text. We compare Re²Pair to BigRePair using SARS-CoV-2 haplotypes and haplotypes from the 1000 Genomes Project. We show that Re²Pair achieves over a 40% reduction in peak memory and a speed-up ranging from 12% to 79% over BigRePair when compressing the largest input texts in all experiments. Re²Pair is publicly available under the GNU Public License at https://github.com/jkim210/Recursive-RePair
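For intuition, a naive in-memory sketch of the core RePair rewriting rule (not the paper's PFP-based pipeline; production implementations avoid the repeated rescanning done here):

# Repeatedly replace the most frequent adjacent pair with a fresh
# nonterminal until every adjacent pair occurs at most once.

def repair(text):
    seq = list(text)
    rules, next_sym = {}, 0
    while True:
        counts = {}
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
        if not counts or max(counts.values()) < 2:
            return seq, rules
        pair = max(counts, key=counts.get)
        sym = f"R{next_sym}"
        next_sym += 1
        rules[sym] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out

seq, rules = repair("abracadabra")
print(seq, rules)  # e.g. the pair ('a', 'b') becomes rule R0 first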

Cite as

Justin Kim, Rahul Varki, Marco Oliva, and Christina Boucher. Re²Pair: Increasing the Scalability of RePair by Decreasing Memory Usage. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 78:1-78:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.ESA.2024.78,
  author =	{Kim, Justin and Varki, Rahul and Oliva, Marco and Boucher, Christina},
  title =	{{Re²Pair: Increasing the Scalability of RePair by Decreasing Memory Usage}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{78:1--78:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.78},
  URN =		{urn:nbn:de:0030-drops-211496},
  doi =		{10.4230/LIPIcs.ESA.2024.78},
  annote =	{Keywords: RePair, Compressed Data Structures, Prefix-free Parsing}
}
Document
Giving Some Slack: Shortcuts and Transitive Closure Compressions

Authors: Shimon Kogan and Merav Parter

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We consider the fundamental problems of reachability shortcuts and compression schemes for the transitive closure (TC) of n-vertex directed acyclic graphs (DAGs) G when we are allowed to neglect the distance (or reachability) constraints for an ε fraction of the pairs in the transitive closure of G, denoted TC(G). Shortcuts with Slack. For a directed graph G = (V,E), a d-reachability shortcut is a set of edges H ⊆ TC(G) whose addition decreases the directed diameter of G to at most d. We introduce the notion of shortcuts with slack, which provide the desired distance bound d for all but a small fraction ε of the vertex pairs in TC(G). For ε ∈ (0,1), a (d,ε)-shortcut H ⊆ TC(G) is a subset of edges with the property that dist_{G ∪ H}(u,v) ≤ d for at least a (1-ε) fraction of the (u,v) pairs in TC(G). Our constructions hold for any DAG G, and their size bounds are parameterized by the width ω of G, defined as the smallest number of directed paths in G that cover all vertices of G. - For every ε ∈ (0,1] and integer d ≥ 5, every n-vertex DAG G of width ω admits a (d,ε)-shortcut of size Õ(ω²/(εd) + n). A more delicate construction yields a (3,ε)-shortcut of size Õ(ω²/(εd) + n/ε), hence of linear size for ω ≤ √n. We show that without slack (i.e., for ε = 0), graphs with ω ≤ √n cannot be shortcut to diameter below n^{1/6} using a linear number of shortcut edges. - There exists an n-vertex DAG G for which any (3, ε = 1/2^{√{log ω}})-shortcut set has Ω(ω²/2^{√{log ω}} + n) edges. Hence, for d = Õ(1), our constructions are almost optimal. Approximate TC Representations. A key application of our shortcut constructions is a (1-ε)-approximate all-successors data structure which, given a vertex v, reports a list containing a (1-ε) fraction of the successors of v in the graph. We present an Õ(ω²/ε + n)-space data structure with near-linear (in the output size) query time. Using connections to error-correcting codes, we also present a near-matching space lower bound of Ω(ω² + n) bits (regardless of the query time) for constant ε. This improves upon the state-of-the-art space bound of O(ω ⋅ n) for ε = 0 from the prior work of Jagadish [ACM Trans. Database Syst., 1990].

Cite as

Shimon Kogan and Merav Parter. Giving Some Slack: Shortcuts and Transitive Closure Compressions. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 79:1-79:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{kogan_et_al:LIPIcs.ESA.2024.79,
  author =	{Kogan, Shimon and Parter, Merav},
  title =	{{Giving Some Slack: Shortcuts and Transitive Closure Compressions}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{79:1--79:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.79},
  URN =		{urn:nbn:de:0030-drops-211509},
  doi =		{10.4230/LIPIcs.ESA.2024.79},
  annote =	{Keywords: Reachability Shortcuts, Width, DAG}
}
  • Refine by Author
  • 23 Puglisi, Simon J.
  • 6 Gørtz, Inge Li
  • 6 Prezza, Nicola
  • 5 Köppl, Dominik
  • 5 Pissis, Solon P.

  • Refine by Classification
  • 41 Theory of computation → Design and analysis of algorithms
  • 31 Theory of computation → Parameterized complexity and exact algorithms
  • 30 Theory of computation → Pattern matching
  • 28 Theory of computation → Graph algorithms analysis
  • 19 Mathematics of computing → Graph algorithms

  • Refine by Keyword
  • 8 data structures
  • 7 Approximation Algorithms
  • 7 parameterized complexity
  • 6 Algorithms
  • 6 Parameterized Complexity

  • Refine by Type
  • 312 document
  • 4 volume

  • Refine by Publication Year
  • 110 2023
  • 96 2021
  • 65 2024
  • 39 2017
  • 2 2020
