102 Search Results for "Bender, Michael A."


Volume

LIPIcs, Volume 144

27th Annual European Symposium on Algorithms (ESA 2019)

ESA 2019, September 9-11, 2019, Munich/Garching, Germany

Editors: Michael A. Bender, Ola Svensson, and Grzegorz Herman

Document
Partitioning the Bags of a Tree Decomposition into Cliques

Authors: Thomas Bläsius, Maximilian Katzmann, and Marcus Wilhelm

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
We consider a variant of treewidth that we call clique-partitioned treewidth in which each bag is partitioned into cliques. This is motivated by the recent development of FPT-algorithms based on similar parameters for various problems. With this paper, we take a first step towards computing clique-partitioned tree decompositions. Our focus lies on the subproblem of computing clique partitions, i.e., for each bag of a given tree decomposition, we compute an optimal partition of the induced subgraph into cliques. The goal here is to minimize the product of the clique sizes (plus 1). We show that this problem is NP-hard. We also describe four heuristic approaches as well as an exact branch-and-bound algorithm. Our evaluation shows that the branch-and-bound solver is sufficiently efficient to serve as a good baseline. Moreover, our heuristics yield solutions close to the optimum. As a bonus, our algorithms allow us to compute the first upper bounds for the clique-partitioned treewidth of real-world networks. A comparison to traditional treewidth indicates that clique-partitioned treewidth is a promising parameter for graphs with high clustering.
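As a concrete illustration of the cost function, the following brute-force sketch enumerates all partitions of one bag's induced subgraph into cliques and returns one minimizing the product of (clique size + 1), which is our reading of the abstract's objective. It is exponential in the bag size and is not the paper's branch-and-bound solver or heuristics; the function names are ours.

# Brute-force sketch (assumption: the cost of a partition is the product of
# (|clique| + 1) over its cliques). Only suitable for tiny bags.
from itertools import combinations
from math import prod

def partitions(items):
    """Yield all set partitions of a list of vertices."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):           # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i+1:]
        yield [[first]] + part                # or into a new singleton block

def is_clique(block, adj):
    return all(v in adj[u] for u, v in combinations(block, 2))

def best_clique_partition(vertices, adj):
    best, best_cost = None, float("inf")
    for part in partitions(list(vertices)):
        if all(is_clique(b, adj) for b in part):
            cost = prod(len(b) + 1 for b in part)
            if cost < best_cost:
                best, best_cost = part, cost
    return best, best_cost

# toy bag: a 4-cycle, whose best clique partition is two edges, cost (2+1)*(2+1) = 9
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(best_clique_partition({1, 2, 3, 4}, adj))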

Cite as

Thomas Bläsius, Maximilian Katzmann, and Marcus Wilhelm. Partitioning the Bags of a Tree Decomposition into Cliques. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 3:1-3:19, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{blasius_et_al:LIPIcs.SEA.2023.3,
  author =	{Bl\"{a}sius, Thomas and Katzmann, Maximilian and Wilhelm, Marcus},
  title =	{{Partitioning the Bags of a Tree Decomposition into Cliques}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.3},
  URN =		{urn:nbn:de:0030-drops-183533},
  doi =		{10.4230/LIPIcs.SEA.2023.3},
  annote =	{Keywords: treewidth, weighted treewidth, algorithm engineering, cliques, clustering, complex networks}
}
Document
Engineering Shared-Memory Parallel Shuffling to Generate Random Permutations In-Place

Authors: Manuel Penschuck

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
Shuffling is the process of placing elements into a random order such that any permutation occurs with equal probability. It is an important building block in virtually all scientific areas. We engineer - to the best of our knowledge, for the first time - a practically fast, parallel shuffling algorithm with O(√n log n) parallel depth that requires only poly-logarithmic auxiliary memory (with high probability). In an empirical evaluation, we compare our implementations with a number of existing solutions on various computer architectures. Our algorithms consistently achieve the highest throughput on all machines. Further, we demonstrate that the runtime of our parallel algorithm is comparable to the time that other algorithms may take to acquire the memory from the operating system to copy the input.
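For reference, the classical sequential building block looks as follows; this is the textbook Fisher-Yates shuffle, not the paper's parallel in-place algorithm.

# Sequential Fisher-Yates shuffle: every permutation occurs with probability 1/n!.
import random

def fisher_yates(a):
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)      # uniform index in [0, i]
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(list(range(10))))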

Cite as

Manuel Penschuck. Engineering Shared-Memory Parallel Shuffling to Generate Random Permutations In-Place. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 5:1-5:20, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{penschuck:LIPIcs.SEA.2023.5,
  author =	{Penschuck, Manuel},
  title =	{{Engineering Shared-Memory Parallel Shuffling to Generate Random Permutations In-Place}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.5},
  URN =		{urn:nbn:de:0030-drops-183550},
  doi =		{10.4230/LIPIcs.SEA.2023.5},
  annote =	{Keywords: Shuffling, random permutation, parallelism, in-place, algorithm engineering, practical implementation}
}
Document
Noisy Sorting Without Searching: Data Oblivious Sorting with Comparison Errors

Authors: Ramtin Afshar, Michael Dillencourt, Michael T. Goodrich, and Evrim Ozel

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
We provide and study several algorithms for sorting an array of n comparable distinct elements subject to probabilistic comparison errors. In this model, the comparison of two elements returns the wrong answer with a fixed probability, p_e < 1/2, and otherwise returns the correct answer. The dislocation of an element is the distance between its position in a given (current or output) array and its position in a sorted array. There are various algorithms that can be utilized for sorting or near-sorting elements subject to probabilistic comparison errors, but these algorithms are not data oblivious because they all make heavy use of noisy binary searching. In this paper, we provide new methods for sorting with comparison errors that are data oblivious while avoiding the use of noisy binary search methods. In addition, we experimentally compare our algorithms and other sorting algorithms.
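The error model and the dislocation measure can be made concrete with a small simulation. This is illustration only, not the paper's data-oblivious algorithms, and it assumes errors are drawn independently per comparison.

# Simulate sorting with a comparator that errs with probability p_e on each
# invocation (one simple reading of the error model), then measure the maximum
# dislocation of the resulting order.
import random
from functools import cmp_to_key

def noisy_cmp(p_e):
    def cmp(x, y):
        correct = -1 if x < y else 1
        return -correct if random.random() < p_e else correct
    return cmp

def max_dislocation(arr):
    # elements are 0..n-1, so element x's sorted position is x
    return max(abs(i - x) for i, x in enumerate(arr))

n, p_e = 100, 0.05
out = sorted(range(n), key=cmp_to_key(noisy_cmp(p_e)))
print("max dislocation:", max_dislocation(out))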

Cite as

Ramtin Afshar, Michael Dillencourt, Michael T. Goodrich, and Evrim Ozel. Noisy Sorting Without Searching: Data Oblivious Sorting with Comparison Errors. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 8:1-8:18, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{afshar_et_al:LIPIcs.SEA.2023.8,
  author =	{Afshar, Ramtin and Dillencourt, Michael and Goodrich, Michael T. and Ozel, Evrim},
  title =	{{Noisy Sorting Without Searching: Data Oblivious Sorting with Comparison Errors}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.8},
  URN =		{urn:nbn:de:0030-drops-183585},
  doi =		{10.4230/LIPIcs.SEA.2023.8},
  annote =	{Keywords: sorting, algorithms, randomization, experimentation}
}
Document
Exact and Approximate Range Mode Query Data Structures in Practice

Authors: Meng He and Zhen Liu

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
We conduct an experimental study on the range mode problem. In the exact version of the problem, we preprocess an array A, such that given a query range [a, b], the most frequent element in A[a, b] can be found efficiently. For this problem, our most important finding is that the strategy of using succinct data structures to encode more precomputed information not only helped Chan et al. (Linear-space data structures for range mode query in arrays, Theory of Computing Systems, 2013) improve previous results in theory but also helps us achieve the best time/space tradeoff in practice; we even go a step further to replace more components in their solution with succinct data structures and improve the performance further. In the approximate version of this problem, a (1+ε)-approximate range mode query looks for an element whose number of occurrences in A[a,b] is at least F_{a,b}/(1+ε), where F_{a,b} is the frequency of the mode in A[a,b]. We implement all previous solutions to this problem and find that, even when ε = 1/2, the average approximation ratio of these solutions is close to 1 in practice, and they provide much faster query time than the best exact solution. These solutions achieve different useful time-space tradeoffs, and among them, El-Zein et al. (On Approximate Range Mode and Range Selection, 30th International Symposium on Algorithms and Computation, 2019) provide us with one solution whose space usage is only 35.6% to 93.8% of the cost of storing the input array of 32-bit integers (in most cases, the space cost is closer to the lower end, and the average space cost is 20.2 bits per symbol among all datasets). Its non-succinct version also stands out with query support at least several times faster than other O(n/ε)-word structures while using only slightly more space in practice.
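For context, the trivial exact baseline answers a range mode query by scanning the range; the succinct and approximate structures studied in the paper trade preprocessing space for much faster queries. The sketch below is only this naive baseline.

# Naive exact range-mode query: O(b - a + 1) time per query, no preprocessing.
from collections import Counter

def range_mode(A, a, b):
    """Most frequent element in A[a..b] (0-based, inclusive)."""
    counts = Counter(A[a:b + 1])
    return max(counts, key=counts.get)

A = [3, 1, 3, 2, 2, 2, 3, 1]
print(range_mode(A, 1, 5))   # -> 2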

Cite as

Meng He and Zhen Liu. Exact and Approximate Range Mode Query Data Structures in Practice. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 19:1-19:22, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{he_et_al:LIPIcs.SEA.2023.19,
  author =	{He, Meng and Liu, Zhen},
  title =	{{Exact and Approximate Range Mode Query Data Structures in Practice}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{19:1--19:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.19},
  URN =		{urn:nbn:de:0030-drops-183693},
  doi =		{10.4230/LIPIcs.SEA.2023.19},
  annote =	{Keywords: range mode query, exact range mode query, approximate range mode query}
}
Document
The Safe and Effective Use of Low-Assurance Predictions in Safety-Critical Systems

Authors: Kunal Agrawal, Sanjoy Baruah, Michael A. Bender, and Alberto Marchetti-Spaccamela

Published in: LIPIcs, Volume 262, 35th Euromicro Conference on Real-Time Systems (ECRTS 2023)


Abstract
The algorithm-design paradigm of algorithms using predictions is explored as a means of incorporating the computations of lower-assurance components (such as machine-learning based ones) into safety-critical systems that must have their correctness validated to very high levels of assurance. The paradigm is applied to two simple example applications that are relevant to the real-time systems community: energy-aware scheduling, and classification using ML-based classifiers in conjunction with more reliable but slower deterministic classifiers. It is shown how algorithms using predictions achieve much-improved performance when the low-assurance computations are correct, at a cost of no more than a slight performance degradation even when they turn out to be completely wrong.
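A standard textbook illustration of the algorithms-using-predictions paradigm is ski rental with a predicted season length; the sketch below shows the consistency/robustness trade-off governed by a trust parameter. It is not the paper's energy-aware scheduling or classification application, and the parameter names are ours.

# Ski rental with a prediction: with trust parameter lam in (0, 1], the policy
# is (1 + lam)-consistent (cost when the prediction is right) and
# (1 + 1/lam)-robust (worst case when it is completely wrong).
import math

def ski_rental_with_prediction(buy_cost, predicted_days, actual_days, lam=0.5):
    """Cost paid by the prediction-aware policy (rent 1/day, buy once for buy_cost)."""
    if predicted_days >= buy_cost:           # prediction says buying pays off
        buy_day = math.ceil(lam * buy_cost)
    else:                                    # prediction says keep renting
        buy_day = math.ceil(buy_cost / lam)
    if actual_days >= buy_day:
        return (buy_day - 1) + buy_cost      # rent until buy_day, then buy
    return actual_days                       # season ends before we buy

b = 10
for pred, actual in [(30, 30), (30, 2), (2, 30)]:
    alg = ski_rental_with_prediction(b, pred, actual)
    opt = min(actual, b)
    print(f"pred={pred:2d} actual={actual:2d}  alg={alg:2d}  opt={opt:2d}")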

Cite as

Kunal Agrawal, Sanjoy Baruah, Michael A. Bender, and Alberto Marchetti-Spaccamela. The Safe and Effective Use of Low-Assurance Predictions in Safety-Critical Systems. In 35th Euromicro Conference on Real-Time Systems (ECRTS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 262, pp. 3:1-3:19, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{agrawal_et_al:LIPIcs.ECRTS.2023.3,
  author =	{Agrawal, Kunal and Baruah, Sanjoy and Bender, Michael A. and Marchetti-Spaccamela, Alberto},
  title =	{{The Safe and Effective Use of Low-Assurance Predictions in Safety-Critical Systems}},
  booktitle =	{35th Euromicro Conference on Real-Time Systems (ECRTS 2023)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-280-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{262},
  editor =	{Papadopoulos, Alessandro V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2023.3},
  URN =		{urn:nbn:de:0030-drops-180323},
  doi =		{10.4230/LIPIcs.ECRTS.2023.3},
  annote =	{Keywords: Algorithms using predictions, robust scheduling, energy minimization, classification, on-line scheduling}
}
Document
When Are Cache-Oblivious Algorithms Cache Adaptive? A Case Study of Matrix Multiplication and Sorting

Authors: Arghya Bhattacharya, Abiyaz Chowdhury, Helen Xu, Rathish Das, Rezaul A. Chowdhury, Rob Johnson, Rishab Nithyanand, and Michael A. Bender

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
Cache-adaptive algorithms are a class of algorithms that achieve optimal utilization of dynamically changing memory. These memory fluctuations are the norm in today’s multi-threaded shared-memory machines and time-sharing caches. Bender et al. [Bender et al., 2014] proved that many cache-oblivious algorithms are optimally cache-adaptive, but that some cache-oblivious algorithms can be relatively far from optimally cache-adaptive on worst-case memory fluctuations. This worst-case gap between cache obliviousness and cache adaptivity depends on a highly-structured, adversarial memory profile. Existing cache-adaptive analysis does not predict the relative performance of cache-oblivious and cache-adaptive algorithms on non-adversarial profiles. Does the worst-case gap appear in practice, or is it an artifact of an unrealistically powerful adversary? This paper sheds light on the question of whether cache-oblivious algorithms can effectively adapt to realistically fluctuating memory sizes; the paper focuses on matrix multiplication and sorting. The two matrix-multiplication algorithms in this paper are canonical examples of "(a, b, c)-regular" cache-oblivious algorithms, which underlie much of the existing theory on cache-adaptivity. Both algorithms have the same asymptotic I/O performance when the memory size remains fixed, but one is optimally cache-adaptive, and the other is not. In our experiments, we generate both adversarial and non-adversarial memory workloads. The performance gap between the algorithms for matrix multiplication grows with problem size (up to 3.8×) on the adversarial profiles, but the gap does not grow with problem size (stays at 2×) on non-adversarial profiles. The sorting algorithms in this paper are not "(a, b, c)-regular," but they have been well-studied in the classical external-memory model when the memory size does not fluctuate. The relative performance of a non-oblivious (cache-aware) sorting algorithm degrades with the problem size: it incurs up to 6 × the number of disk I/Os compared to an oblivious adaptive algorithm on both adversarial and non-adversarial profiles. To summarize, in all our experiments, the cache-oblivious matrix-multiplication and sorting algorithms that we tested empirically adapt well to memory fluctuations. We conjecture that cache-obliviousness will empirically help achieve adaptivity for other problems with similar structures.
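The recursive divide-and-conquer structure that makes matrix multiplication cache-oblivious can be sketched as follows; this is a didactic version for power-of-two sizes, not the tuned implementations used in the paper's experiments.

# Recursive (cache-oblivious style) matrix multiply: each n x n problem splits
# into eight n/2 x n/2 subproblems, so subproblems eventually fit in whatever
# memory is currently available without the code knowing the cache size.
def matmul_rec(A, B, C, n, ai=0, aj=0, bi=0, bj=0, ci=0, cj=0, base=2):
    """C[ci:ci+n, cj:cj+n] += A[ai:ai+n, aj:aj+n] @ B[bi:bi+n, bj:bj+n]."""
    if n <= base:
        for i in range(n):
            for k in range(n):
                a = A[ai + i][aj + k]
                for j in range(n):
                    C[ci + i][cj + j] += a * B[bi + k][bj + j]
        return
    h = n // 2
    for di, dj, dk in ((x, y, z) for x in (0, h) for y in (0, h) for z in (0, h)):
        matmul_rec(A, B, C, h, ai + di, aj + dk, bi + dk, bj + dj, ci + di, cj + dj, base)

n = 4
A = [[1 if i == j else 0 for j in range(n)] for i in range(n)]   # identity
B = [[i * n + j for j in range(n)] for i in range(n)]
C = [[0] * n for _ in range(n)]
matmul_rec(A, B, C, n)
print(C == B)   # identity * B == B -> True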

Cite as

Arghya Bhattacharya, Abiyaz Chowdhury, Helen Xu, Rathish Das, Rezaul A. Chowdhury, Rob Johnson, Rishab Nithyanand, and Michael A. Bender. When Are Cache-Oblivious Algorithms Cache Adaptive? A Case Study of Matrix Multiplication and Sorting. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 16:1-16:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bhattacharya_et_al:LIPIcs.ESA.2022.16,
  author =	{Bhattacharya, Arghya and Chowdhury, Abiyaz and Xu, Helen and Das, Rathish and Chowdhury, Rezaul A. and Johnson, Rob and Nithyanand, Rishab and Bender, Michael A.},
  title =	{{When Are Cache-Oblivious Algorithms Cache Adaptive? A Case Study of Matrix Multiplication and Sorting}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{16:1--16:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.16},
  URN =		{urn:nbn:de:0030-drops-169543},
  doi =		{10.4230/LIPIcs.ESA.2022.16},
  annote =	{Keywords: Cache-adaptive algorithms, cache-oblivious algorithms}
}
Document
What Does Dynamic Optimality Mean in External Memory?

Authors: Michael A. Bender, Martín Farach-Colton, and William Kuszmaul

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
A data structure A is said to be dynamically optimal over a class of data structures 𝒞 if A is constant-competitive with every data structure C ∈ 𝒞. Much of the research on binary search trees in the past forty years has focused on studying dynamic optimality over the class of binary search trees that are modified via rotations (and indeed, the question of whether splay trees are dynamically optimal has gained notoriety as the so-called dynamic-optimality conjecture). Recently, researchers have extended this to consider dynamic optimality over certain classes of external-memory search trees. In particular, Demaine, Iacono, Koumoutsos, and Langerman propose a class of external-memory trees that support a notion of tree rotations, and then give an elegant data structure, called the Belga B-tree, that is within an O(log log N)-factor of being dynamically optimal over this class. In this paper, we revisit the question of how dynamic optimality should be defined in external memory. A defining characteristic of external-memory data structures is that there is a stark asymmetry between queries and inserts/updates/deletes: by making the former slightly asymptotically slower, one can make the latter significantly asymptotically faster (even allowing for operations with sub-constant amortized I/Os). This asymmetry makes it so that rotation-based search trees are not optimal (or even close to optimal) in insert/update/delete-heavy external-memory workloads. To study dynamic optimality for such workloads, one must consider a different class of data structures. The natural class of data structures to consider is what we call buffered-propagation trees. Such trees can adapt dynamically to the locality properties of an input sequence in order to optimize the interactions between different inserts/updates/deletes and queries. We also present a new form of beyond-worst-case analysis that allows us to formally study a continuum between static and dynamic optimality. Finally, we give a novel data structure, called the Jεllo Tree, that is statically optimal and that achieves dynamic optimality for a large natural class of inputs defined by our beyond-worst-case analysis.

Cite as

Michael A. Bender, Martín Farach-Colton, and William Kuszmaul. What Does Dynamic Optimality Mean in External Memory?. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 18:1-18:23, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bender_et_al:LIPIcs.ITCS.2022.18,
  author =	{Bender, Michael A. and Farach-Colton, Mart{\'\i}n and Kuszmaul, William},
  title =	{{What Does Dynamic Optimality Mean in External Memory?}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{18:1--18:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.18},
  URN =		{urn:nbn:de:0030-drops-156145},
  doi =		{10.4230/LIPIcs.ITCS.2022.18},
  annote =	{Keywords: Dynamic optimality, external memory, buffer propagation, search trees}
}
Document
Incremental Edge Orientation in Forests

Authors: Michael A. Bender, Tsvi Kopelowitz, William Kuszmaul, Ely Porat, and Clifford Stein

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
For any forest G = (V, E) it is possible to orient the edges E so that no vertex in V has out-degree greater than 1. This paper considers the incremental edge-orientation problem, in which the edges E arrive over time and the algorithm must maintain a low-out-degree edge orientation at all times. We give an algorithm that maintains a maximum out-degree of 3 while flipping at most O(log log n) edge orientations per edge insertion, with high probability in n. The algorithm requires worst-case time O(log n log log n) per insertion, and takes amortized time O(1). The previous state of the art required up to O(log n / log log n) edge flips per insertion. We then apply our edge-orientation results to the problem of dynamic Cuckoo hashing. The problem of designing simple families ℋ of hash functions that are compatible with Cuckoo hashing has received extensive attention. These families ℋ are known to satisfy static guarantees, but do not typically come with dynamic guarantees for the running time of inserts and deletes. We show how to transform static guarantees (for 1-associativity) into near-state-of-the-art dynamic guarantees (for O(1)-associativity) in a black-box fashion. Rather than relying on the family ℋ to supply randomness, as in past work, we instead rely on randomness within our table-maintenance algorithm.
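For comparison, the naive re-rooting strategy below maintains out-degree at most 1 in a forest but may flip an entire root path per insertion; it only illustrates the problem, not the paper's O(log log n)-flip algorithm.

# Naive incremental orientation for a forest: out[v] is v's single outgoing
# neighbor (or None). Inserting (u, v) re-roots u's tree at u by flipping every
# edge on the path from u to its old root, then orients u -> v.
def insert_edge(out, u, v):
    prev, cur = None, u
    while cur is not None:          # flip the path u -> ... -> root
        nxt = out[cur]
        out[cur] = prev
        prev, cur = cur, nxt
    out[u] = v                      # new edge oriented away from u

out = {x: None for x in range(6)}
for u, v in [(0, 1), (2, 1), (3, 2), (4, 5), (4, 3)]:
    insert_edge(out, u, v)
print(out)   # every vertex keeps out-degree <= 1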

Cite as

Michael A. Bender, Tsvi Kopelowitz, William Kuszmaul, Ely Porat, and Clifford Stein. Incremental Edge Orientation in Forests. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 12:1-12:18, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{bender_et_al:LIPIcs.ESA.2021.12,
  author =	{Bender, Michael A. and Kopelowitz, Tsvi and Kuszmaul, William and Porat, Ely and Stein, Clifford},
  title =	{{Incremental Edge Orientation in Forests}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{12:1--12:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.12},
  URN =		{urn:nbn:de:0030-drops-145933},
  doi =		{10.4230/LIPIcs.ESA.2021.12},
  annote =	{Keywords: edge orientation, graph algorithms, Cuckoo hashing, hash functions}
}
Document
Incremental SCC Maintenance in Sparse Graphs

Authors: Aaron Bernstein, Aditi Dudeja, and Seth Pettie

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
In the incremental cycle detection problem, edges are added to a directed graph (initially empty), and the algorithm has to report the presence of the first cycle, once it is formed. A closely related problem is the incremental topological sort problem, where edges are added to an acyclic graph, and the algorithm is required to maintain a valid topological ordering. Since these problems arise naturally in many applications such as scheduling tasks, pointer analysis, and circuit evaluation, they have been studied extensively in the last three decades. Motivated by the fact that in many of these applications, the presence of a cycle is not fatal, we study a generalization of these problems, incremental maintenance of strongly connected components (incremental SCC). Several incremental algorithms in the literature which do cycle detection and topological sort in directed acyclic graphs, such as those by [Michael A. Bender et al., 2016] and [Haeupler et al., 2012], also generalize to maintain strongly connected components and their topological sort in general directed graphs. The algorithms of [Haeupler et al., 2012] and [Michael A. Bender et al., 2016] have a total update time of O(m^{3/2}) and O(m⋅ min{m^{1/2},n^{2/3}}) respectively, and this is the state of the art for incremental SCC. But the most recent algorithms for incremental cycle detection and topological sort ([Bernstein and Chechik, 2018] and [Bhattacharya and Kulkarni, 2020]), which yield total (randomized) update time Õ(min{m^{4/3}, n²}), do not extend to incremental SCC. Thus, there is a gap between the best known algorithms for these two closely related problems. In this paper, we bridge this gap by extending the framework of [Bhattacharya and Kulkarni, 2020] to general directed graphs. More concretely, we give a Las Vegas algorithm for incremental SCCs with an expected total update time of Õ(m^{4/3}). A key ingredient in the algorithm of [Bhattacharya and Kulkarni, 2020] is a structural theorem (first introduced in [Bernstein and Chechik, 2018]) that bounds the number of "equivalent" vertices. Unfortunately, this theorem only applies to DAGs. We show a natural way to extend this structural theorem to general directed graphs, and along the way we develop a significantly simpler and more intuitive proof of this theorem.
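For contrast with the algorithms discussed above, the textbook baseline simply re-runs a graph search after every insertion; the sketch below detects the first cycle this way in O(m) time per edge, i.e. O(m²) total, versus the roughly m^{4/3} bounds in the literature.

# Baseline incremental cycle detection: after inserting (u, v), search from v;
# if u is reachable, the new edge closes a cycle.
def inserts_with_cycle_check(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, [])
        stack, visited = [v], set()     # DFS from v looking for u
        while stack:
            x = stack.pop()
            if x == u:
                print(f"edge ({u}, {v}) closes the first cycle")
                return adj
            if x not in visited:
                visited.add(x)
                stack.extend(adj.get(x, []))
    return adj

inserts_with_cycle_check([(1, 2), (2, 3), (3, 4), (4, 2)])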

Cite as

Aaron Bernstein, Aditi Dudeja, and Seth Pettie. Incremental SCC Maintenance in Sparse Graphs. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 14:1-14:16, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{bernstein_et_al:LIPIcs.ESA.2021.14,
  author =	{Bernstein, Aaron and Dudeja, Aditi and Pettie, Seth},
  title =	{{Incremental SCC Maintenance in Sparse Graphs}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.14},
  URN =		{urn:nbn:de:0030-drops-145950},
  doi =		{10.4230/LIPIcs.ESA.2021.14},
  annote =	{Keywords: Directed Graphs, Strongly Connected Components, Dynamic Graph Algorithms}
}
Document
Train Tracks with Gaps

Authors: William Kuszmaul

Published in: LIPIcs, Volume 157, 10th International Conference on Fun with Algorithms (FUN 2021) (2020)


Abstract
We identify a tradeoff curve between the number of wheels on a train car and the amount of track that must be installed in order to ensure that the train car is supported by the track at all times. The goal is to build an elevated track that covers some large distance 𝓁, but that consists primarily of gaps, so that the total number of feet of train track that is actually installed is only a small fraction of 𝓁. In order for the track to support the train at all points, the requirement is that as the train drives across the track, at least one set of wheels from the rear quarter and at least one set of wheels from the front quarter of the train must be touching the track at all times. We show that, if a train car has n sets of wheels evenly spaced apart in its rear and n sets of wheels evenly spaced apart in its front, then it is possible to build a train track that supports the train car but uses only Θ(𝓁 / n) feet of track. We then consider what happens if the wheels on the train car are not evenly spaced (and may even be configured adversarially). We show that for any configuration of the train car, with n wheels in each of the front and rear quarters of the car, it is possible to build a track that supports the car for distance 𝓁 and uses only O((𝓁 log n)/n) feet of track. Additionally, we show that there exist configurations of the train car for which this tradeoff curve is asymptotically optimal. Both the upper and lower bounds are achieved via applications of the probabilistic method. The algorithms and lower bounds in this paper provide simple illustrative examples of many of the core techniques in probabilistic combinatorics and randomized algorithms. These include the probabilistic method with alterations, the use of McDiarmid’s inequality within the probabilistic method, the algorithmic Lovász Local Lemma, the min-hash technique, and the method of conditional probabilities.
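The support condition can be stated operationally as a small checker: given the installed track cells and the wheel offsets of a car, verify that at every position at least one rear-quarter wheel and one front-quarter wheel rest on installed track. This is an illustration of the problem statement only (in a discretized setting), not the paper's constructions.

# Discrete checker for the support condition described above.
def supported(track_cells, rear_wheels, front_wheels, length):
    track = set(track_cells)
    for pos in range(length):
        rear_ok = any((pos + w) in track for w in rear_wheels)
        front_ok = any((pos + w) in track for w in front_wheels)
        if not (rear_ok and front_ok):
            return False
    return True

# car with wheels at offsets {0, 1} (rear) and {6, 7} (front); installing every
# other cell supports it at every position of a length-40 run
track = set(range(0, 48, 2))
print(supported(track, rear_wheels={0, 1}, front_wheels={6, 7}, length=40))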

Cite as

William Kuszmaul. Train Tracks with Gaps. In 10th International Conference on Fun with Algorithms (FUN 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 157, pp. 19:1-19:11, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kuszmaul:LIPIcs.FUN.2021.19,
  author =	{Kuszmaul, William},
  title =	{{Train Tracks with Gaps}},
  booktitle =	{10th International Conference on Fun with Algorithms (FUN 2021)},
  pages =	{19:1--19:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-145-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{157},
  editor =	{Farach-Colton, Martin and Prencipe, Giuseppe and Uehara, Ryuhei},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2021.19},
  URN =		{urn:nbn:de:0030-drops-127800},
  doi =		{10.4230/LIPIcs.FUN.2021.19},
  annote =	{Keywords: probabilistic method, algorithms, trains, Lov\'{a}sz Local Lemma, McDiarmid’s Inequality}
}
Document
A Dynamic Space-Efficient Filter with Constant Time Operations

Authors: Ioana O. Bercea and Guy Even

Published in: LIPIcs, Volume 162, 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020)


Abstract
A dynamic dictionary is a data structure that maintains sets of cardinality at most n from a given universe and supports insertions, deletions, and membership queries. A filter approximates membership queries with a one-sided error that occurs with probability at most ε. The goal is to obtain dynamic filters that are space-efficient (the space is 1+o(1) times the information-theoretic lower bound) and support all operations in constant time with high probability. One approach to designing filters is to reduce to the retrieval problem. When the size of the universe is polynomial in n, this approach yields a space-efficient dynamic filter as long as the error parameter ε satisfies log(1/ε) = ω(log log n). For the case that log(1/ε) = O(log log n), we present the first space-efficient dynamic filter with constant time operations in the worst case (whp). In contrast, the space-efficient dynamic filter of Pagh et al. [Anna Pagh et al., 2005] supports insertions and deletions in amortized expected constant time. Our approach employs the classic reduction of Carter et al. [Carter et al., 1978] on a new type of dictionary construction that supports random multisets.
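As an illustration of the filter interface (insert, delete, and approximate membership with one-sided error ε), the sketch below uses a counting Bloom filter. It is far from the space-optimal construction in the paper; the sizing constants are the standard Bloom-filter ones.

# Counting Bloom filter: supports deletions by keeping a counter per cell.
import math, hashlib

class CountingBloomFilter:
    def __init__(self, n, eps):
        self.m = math.ceil(1.44 * n * math.log2(1 / eps))   # number of counters
        self.k = max(1, round(math.log2(1 / eps)))          # number of hash functions
        self.counters = [0] * self.m

    def _indices(self, x):
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{x}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "big") % self.m

    def insert(self, x):
        for idx in self._indices(x):
            self.counters[idx] += 1

    def delete(self, x):   # only call for elements previously inserted
        for idx in self._indices(x):
            self.counters[idx] -= 1

    def contains(self, x):
        return all(self.counters[idx] > 0 for idx in self._indices(x))

f = CountingBloomFilter(n=1000, eps=1 / 64)
f.insert("key"); print(f.contains("key"), f.contains("other"))
f.delete("key"); print(f.contains("key"))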

Cite as

Ioana O. Bercea and Guy Even. A Dynamic Space-Efficient Filter with Constant Time Operations. In 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 162, pp. 11:1-11:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bercea_et_al:LIPIcs.SWAT.2020.11,
  author =	{Bercea, Ioana O. and Even, Guy},
  title =	{{A Dynamic Space-Efficient Filter with Constant Time Operations}},
  booktitle =	{17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020)},
  pages =	{11:1--11:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-150-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{162},
  editor =	{Albers, Susanne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2020.11},
  URN =		{urn:nbn:de:0030-drops-122582},
  doi =		{10.4230/LIPIcs.SWAT.2020.11},
  annote =	{Keywords: Data Structures}
}
Document
Complete Volume
LIPIcs, Volume 144, ESA'19, Complete Volume

Authors: Michael A. Bender, Ola Svensson, and Grzegorz Herman

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
LIPIcs, Volume 144, ESA'19, Complete Volume

Cite as

Michael A. Bender, Ola Svensson, and Grzegorz Herman. LIPIcs, Volume 144, ESA'19, Complete Volume. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@Proceedings{bender_et_al:LIPIcs.ESA.2019,
  title =	{{LIPIcs, Volume 144, ESA'19, Complete Volume}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019},
  URN =		{urn:nbn:de:0030-drops-113004},
  doi =		{10.4230/LIPIcs.ESA.2019},
  annote =	{Keywords: Applied computing, Transportation; Computing methodologies, Algebraic algorithms; Hardware, External storage; Human-centered computing, Graph drawings}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Michael A. Bender, Ola Svensson, and Grzegorz Herman

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

Michael A. Bender, Ola Svensson, and Grzegorz Herman. Front Matter, Table of Contents, Preface, Conference Organization. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 0:i-0:xx, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{bender_et_al:LIPIcs.ESA.2019.0,
  author =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{0:i--0:xx},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.0},
  URN =		{urn:nbn:de:0030-drops-111215},
  doi =		{10.4230/LIPIcs.ESA.2019.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Constant-Factor FPT Approximation for Capacitated k-Median

Authors: Marek Adamczyk, Jarosław Byrka, Jan Marcinkowski, Syed M. Meesum, and Michał Włodarczyk

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
Capacitated k-median is one of the few outstanding optimization problems for which the existence of a polynomial time constant factor approximation algorithm remains an open problem. In a series of recent papers, algorithms producing solutions that violate either the number of facilities or the capacity by a multiplicative factor were obtained. However, to produce solutions without violations appears to be hard and potentially requires different algorithmic techniques. Notably, if parameterized by the number of facilities k, the problem is also W[2]-hard, making the existence of an exact FPT algorithm unlikely. In this work we provide an FPT-time constant factor approximation algorithm preserving both cardinality and capacity of the facilities. The algorithm runs in time 2^O(k log k) n^O(1) and achieves an approximation ratio of 7 + ε.
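To make the objective concrete, the helper below evaluates a candidate solution: it checks that at most k facilities are opened and no capacity is exceeded, and returns the total connection cost. This only illustrates the problem definition, not the FPT approximation algorithm; all names are ours.

# Evaluate a candidate capacitated k-median solution.
def capacitated_kmedian_cost(k, facilities, capacity, assign, dist):
    """facilities: set of opened facilities; assign: client -> facility."""
    if len(facilities) > k:
        raise ValueError("more than k facilities opened")
    load = {f: 0 for f in facilities}
    for c, f in assign.items():
        if f not in facilities:
            raise ValueError(f"client {c} assigned to unopened facility {f}")
        load[f] += 1
        if load[f] > capacity[f]:
            raise ValueError(f"capacity of facility {f} violated")
    return sum(dist[c][f] for c, f in assign.items())

dist = {"c1": {"f1": 1, "f2": 4}, "c2": {"f1": 2, "f2": 1}, "c3": {"f1": 3, "f2": 1}}
print(capacitated_kmedian_cost(2, {"f1", "f2"}, {"f1": 1, "f2": 2},
                               {"c1": "f1", "c2": "f2", "c3": "f2"}, dist))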

Cite as

Marek Adamczyk, Jarosław Byrka, Jan Marcinkowski, Syed M. Meesum, and Michał Włodarczyk. Constant-Factor FPT Approximation for Capacitated k-Median. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 1:1-1:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{adamczyk_et_al:LIPIcs.ESA.2019.1,
  author =	{Adamczyk, Marek and Byrka, Jaros{\l}aw and Marcinkowski, Jan and Meesum, Syed M. and W{\l}odarczyk, Micha{\l}},
  title =	{{Constant-Factor FPT Approximation for Capacitated k-Median}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{1:1--1:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.1},
  URN =		{urn:nbn:de:0030-drops-111225},
  doi =		{10.4230/LIPIcs.ESA.2019.1},
  annote =	{Keywords: K-median, Clustering, Approximation Algorithms, Fixed Parameter Tractability}
}
  • Refine by Author
  • 8 Bender, Michael A.
  • 4 Penschuck, Manuel
  • 3 Kuszmaul, William
  • 3 Pilipczuk, Marcin
  • 3 Wagner, Dorothea

  • Refine by Classification
  • 15 Mathematics of computing → Graph algorithms
  • 14 Theory of computation → Design and analysis of algorithms
  • 11 Theory of computation → Computational geometry
  • 11 Theory of computation → Graph algorithms analysis
  • 9 Theory of computation → Approximation algorithms analysis

  • Refine by Keyword
  • 5 Approximation Algorithms
  • 3 Clustering
  • 3 algorithm engineering
  • 3 algorithms
  • 3 approximation algorithms

  • Refine by Type
  • 101 document
  • 1 volume

  • Refine by Publication Year
  • 87 2019
  • 5 2023
  • 3 2005
  • 2 2020
  • 2 2021
