121 Search Results for "Sanders, Peter"


Volume

LIPIcs, Volume 173

28th Annual European Symposium on Algorithms (ESA 2020)

ESA 2020, September 7-9, 2020, Pisa, Italy (Virtual Conference)

Editors: Fabrizio Grandoni, Grzegorz Herman, and Peter Sanders

Document
High Performance Construction of RecSplit Based Minimal Perfect Hash Functions

Authors: Dominik Bez, Florian Kurpicz, Hans-Peter Lehmann, and Peter Sanders

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
A minimal perfect hash function (MPHF) bijectively maps a set S of objects to the first |S| integers. It can be used as a building block in databases and data compression. RecSplit [Esposito et al., ALENEX'20] is currently the most space-efficient practical minimal perfect hash function. It heavily relies on trying out hash functions in a brute-force way. We introduce rotation fitting, a new technique that makes the search more efficient by drastically reducing the number of tried hash functions. Additionally, we greatly improve the construction time of RecSplit by harnessing parallelism on the level of bits, vectors, cores, and GPUs. In combination, the resulting improvements yield speedups of up to 239 on an 8-core CPU and up to 5438 using a GPU. The original single-threaded RecSplit implementation needs 1.5 hours to construct an MPHF for 5 million objects with 1.56 bits per object. On the GPU, we achieve the same space usage in just 5 seconds. Given that the speedups are larger than the increase in energy consumption, our implementation is more energy efficient than the original implementation.
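
To make the brute-force building block concrete, here is a small Python sketch (not taken from the paper, and without rotation fitting or any parallelism): it searches for a hash seed under which a keyed hash maps a small key set bijectively onto {0, ..., n-1}, which is the kind of search RecSplit performs on small buckets.

import hashlib

def find_bijection_seed(keys, max_seed=1_000_000):
    # Try seeds until the keyed hash maps all keys to distinct slots in [0, n).
    n = len(keys)
    for seed in range(max_seed):
        slots = {int.from_bytes(
                     hashlib.blake2b(key.encode(),
                                     key=seed.to_bytes(8, "little"),
                                     digest_size=8).digest(), "little") % n
                 for key in keys}
        if len(slots) == n:
            return seed
    raise RuntimeError("no suitable seed within the search budget")

print(find_bijection_seed(["apple", "pear", "plum", "fig"]))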

Cite as

Dominik Bez, Florian Kurpicz, Hans-Peter Lehmann, and Peter Sanders. High Performance Construction of RecSplit Based Minimal Perfect Hash Functions. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 19:1-19:16, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{bez_et_al:LIPIcs.ESA.2023.19,
  author =	{Bez, Dominik and Kurpicz, Florian and Lehmann, Hans-Peter and Sanders, Peter},
  title =	{{High Performance Construction of RecSplit Based Minimal Perfect Hash Functions}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{19:1--19:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.19},
  URN =		{urn:nbn:de:0030-drops-186728},
  doi =		{10.4230/LIPIcs.ESA.2023.19},
  annote =	{Keywords: compressed data structure, parallel perfect hashing, bit parallelism, GPU, SIMD, parallel computing, vector instructions}
}
Document
Learned Monotone Minimal Perfect Hashing

Authors: Paolo Ferragina, Hans-Peter Lehmann, Peter Sanders, and Giorgio Vinciguerra

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
A Monotone Minimal Perfect Hash Function (MMPHF) constructed on a set S of keys is a function that maps each key in S to its rank. On keys not in S, the function returns an arbitrary value. Applications range from databases and search engines to data encryption and pattern-matching algorithms. In this paper, we describe LeMonHash, a new technique for constructing MMPHFs for integers. The core idea of LeMonHash is surprisingly simple and effective: we learn a monotone mapping from keys to their rank via an error-bounded piecewise linear model (the PGM-index), and then we resolve the collisions that might arise among keys mapping to the same rank estimate by associating small integers with them in a retrieval data structure (BuRR). On synthetic random datasets, LeMonHash needs 34% less space than the next larger competitor, while achieving about 16 times faster queries. On real-world datasets, the space usage is very close to or much better than the best competitors, while achieving up to 19 times faster queries than the next larger competitor. Construction of LeMonHash is faster by a factor of up to 2 compared to the competitor with the next best space usage. We also investigate the case of keys being variable-length strings, introducing the so-called LeMonHash-VL: it needs space within 13% of the best competitors while achieving up to 3 times faster queries than the next larger competitor.
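
A toy version of the LeMonHash idea, with a single global linear fit standing in for the PGM-index and a plain dict standing in for the BuRR retrieval structure (both substitutions are assumptions made for illustration only):

def build_mmphf(sorted_keys):
    n = len(sorted_keys)
    lo, hi = sorted_keys[0], sorted_keys[-1]
    slope = (n - 1) / (hi - lo) if hi > lo else 0.0

    def estimate(key):                 # monotone "learned" rank estimate
        return round((key - lo) * slope)

    # small per-key corrections; a succinct retrieval structure would store these
    corrections = {k: rank - estimate(k) for rank, k in enumerate(sorted_keys)}
    return lambda key: estimate(key) + corrections[key]

mmphf = build_mmphf([3, 8, 21, 22, 40, 97])
print([mmphf(k) for k in [3, 8, 21, 22, 40, 97]])   # [0, 1, 2, 3, 4, 5]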

Cite as

Paolo Ferragina, Hans-Peter Lehmann, Peter Sanders, and Giorgio Vinciguerra. Learned Monotone Minimal Perfect Hashing. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 46:1-46:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{ferragina_et_al:LIPIcs.ESA.2023.46,
  author =	{Ferragina, Paolo and Lehmann, Hans-Peter and Sanders, Peter and Vinciguerra, Giorgio},
  title =	{{Learned Monotone Minimal Perfect Hashing}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{46:1--46:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.46},
  URN =		{urn:nbn:de:0030-drops-186990},
  doi =		{10.4230/LIPIcs.ESA.2023.46},
  annote =	{Keywords: compressed data structure, monotone minimal perfect hashing, retrieval}
}
Document
A Sweep-Plane Algorithm for Calculating the Isolation of Mountains

Authors: Daniel Funke, Nicolai Hüning, and Peter Sanders

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
One established metric to classify the significance of a mountain peak is its isolation. It specifies the distance between a peak and the closest point of higher elevation. Peaks with high isolation dominate their surroundings and provide a nice view from the top. With the availability of worldwide Digital Elevation Models (DEMs), the isolation of all mountain peaks can be computed automatically. Previous algorithms run in worst-case time that is quadratic in the input size. We present a novel sweep-plane algorithm that runs in time 𝒪(n log n + p T_NN), where n is the input size, p is the number of considered peaks, and T_NN is the time for a 2D nearest-neighbor query in an appropriate geometric search tree. We refine this to a two-level approach that has high locality and good parallel scalability. Our implementation reduces the time for calculating the isolation of every peak on Earth from hours to minutes while improving precision.
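
For reference, a quadratic brute-force computation of isolation on a toy elevation grid (every cell is treated as a peak here); the sweep-plane algorithm from the paper computes the same quantity far more efficiently:

import math

def isolation(grid):
    cells = [(r, c, h) for r, row in enumerate(grid) for c, h in enumerate(row)]
    result = {}
    for r, c, h in cells:
        higher = [(r2, c2) for r2, c2, h2 in cells if h2 > h]
        if higher:   # every cell except the global maximum gets a finite isolation
            result[(r, c)] = min(math.hypot(r - r2, c - c2) for r2, c2 in higher)
    return result

grid = [[5, 2, 1],
        [2, 3, 2],
        [1, 2, 4]]
print(isolation(grid))   # e.g. the cell of height 4 is ~2.83 cells from higher ground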

Cite as

Daniel Funke, Nicolai Hüning, and Peter Sanders. A Sweep-Plane Algorithm for Calculating the Isolation of Mountains. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 51:1-51:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{funke_et_al:LIPIcs.ESA.2023.51,
  author =	{Funke, Daniel and H\"{u}ning, Nicolai and Sanders, Peter},
  title =	{{A Sweep-Plane Algorithm for Calculating the Isolation of Mountains}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{51:1--51:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.51},
  URN =		{urn:nbn:de:0030-drops-187040},
  doi =		{10.4230/LIPIcs.ESA.2023.51},
  annote =	{Keywords: computational geometry, Geo-information systems, sweepline algorithms}
}
Document
Pareto Sums of Pareto Sets

Authors: Demian Hespe, Peter Sanders, Sabine Storandt, and Carina Truschel

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
In bi-criteria optimization problems, the goal is typically to compute the set of Pareto-optimal solutions. Many algorithms for these types of problems rely on efficient merging or combining of partial solutions and on filtering of dominated solutions in the resulting sets. In this paper, we consider the task of computing the Pareto sum of two given Pareto sets A, B of size n. The Pareto sum contains all non-dominated points of the Minkowski sum M = {a+b | a ∈ A, b ∈ B}. Since the Minkowski sum has a size of n², but the Pareto sum C can be much smaller, the goal is to compute C without having to compute and store all of M. We present several new algorithms for efficient Pareto sum computation, including an output-sensitive one with a running time of 𝒪(n log n + nk) and a space consumption of 𝒪(n+k) for k = |C|. We also describe suitable engineering techniques to improve the practical running times of our algorithms and provide a comparative experimental study. As one showcase application, we consider preprocessing-based methods for bi-criteria route planning in road networks, where Pareto sum computation is a frequent task in the preprocessing phase. We show that using our algorithms with an output-sensitive space consumption allows us to tackle larger instances and reduces the preprocessing time compared to algorithms that fully store M.
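
A baseline sketch that materializes the full Minkowski sum and filters dominated points (for 2D minimization); the paper's output-sensitive algorithms compute the same Pareto sum C without storing all of M:

def pareto_sum(A, B):
    M = [(a1 + b1, a2 + b2) for a1, a2 in A for b1, b2 in B]   # Theta(n^2) points
    M.sort()                                # ascending in the first criterion
    C, best_second = [], float("inf")
    for x, y in M:
        if y < best_second:                 # not dominated by any earlier point
            C.append((x, y))
            best_second = y
    return C

A = [(1, 9), (4, 5), (7, 2)]                # Pareto-optimal inputs (minimization)
B = [(2, 6), (3, 3), (8, 1)]
print(pareto_sum(A, B))                     # the non-dominated sums only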

Cite as

Demian Hespe, Peter Sanders, Sabine Storandt, and Carina Truschel. Pareto Sums of Pareto Sets. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 60:1-60:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{hespe_et_al:LIPIcs.ESA.2023.60,
  author =	{Hespe, Demian and Sanders, Peter and Storandt, Sabine and Truschel, Carina},
  title =	{{Pareto Sums of Pareto Sets}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{60:1--60:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.60},
  URN =		{urn:nbn:de:0030-drops-187132},
  doi =		{10.4230/LIPIcs.ESA.2023.60},
  annote =	{Keywords: Minkowski sum, Skyline, Successive Algorithm}
}
Document
Engineering Shared-Memory Parallel Shuffling to Generate Random Permutations In-Place

Authors: Manuel Penschuck

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
Shuffling is the process of placing elements into a random order such that any permutation occurs with equal probability. It is an important building block in virtually all scientific areas. We engineer - to the best of our knowledge, for the first time - a practically fast, parallel shuffling algorithm with O(√n log n) parallel depth that requires only poly-logarithmic auxiliary memory (with high probability). In an empirical evaluation, we compare our implementations with a number of existing solutions on various computer architectures. Our algorithms consistently achieve the highest throughput on all machines. Further, we demonstrate that the runtime of our parallel algorithm is comparable to the time that other algorithms may take just to acquire the memory from the operating system in order to copy the input.
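
As the sequential point of reference, the classic Fisher-Yates shuffle, which produces every permutation with equal probability (the paper's contribution is a parallel, in-place algorithm with the stated depth and memory bounds, not shown here):

import random

def fisher_yates(a):
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)     # uniform position in a[0..i]
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(list(range(10))))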

Cite as

Manuel Penschuck. Engineering Shared-Memory Parallel Shuffling to Generate Random Permutations In-Place. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 5:1-5:20, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{penschuck:LIPIcs.SEA.2023.5,
  author =	{Penschuck, Manuel},
  title =	{{Engineering Shared-Memory Parallel Shuffling to Generate Random Permutations In-Place}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.5},
  URN =		{urn:nbn:de:0030-drops-183550},
  doi =		{10.4230/LIPIcs.SEA.2023.5},
  annote =	{Keywords: Shuffling, random permutation, parallelism, in-place, algorithm engineering, practical implementation}
}
Document
Simple Runs-Bounded FM-Index Designs Are Fast

Authors: Diego Díaz-Domínguez, Saska Dönges, Simon J. Puglisi, and Leena Salmela

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
Given a string X of length n on alphabet σ, the FM-index data structure allows counting all occurrences of a pattern P of length m in O(m) time via an algorithm called backward search. An important difficulty when searching with an FM-index is to support queries on L, the Burrows-Wheeler transform of X, while L is in compressed form. This problem has been the subject of intense research for 25 years now. Run-length encoding of L is an effective way to reduce index size, in particular when the data being indexed is highly repetitive, which is the case for many types of modern data, including those arising from versioned document collections and in pangenomics. This paper takes a back-to-basics look at supporting backward search in FM-indexes, exploring and engineering two simple designs. The first divides the BWT string into blocks containing b symbols each and then run-length compresses each block separately, possibly introducing new runs (compared to applying run-length encoding once, to the whole string). Each block stores counts of each symbol that occurs before the block. This method supports the operation rank_c(L, i) (i.e., count the number of times c occurs in the prefix L[1..i]) by first determining the block ⌊i/b⌋ in which i falls and scanning the block to the appropriate position, counting occurrences of c along the way. This partial answer to rank_c(L, i) is then added to the stored count of c symbols before the block to determine the final answer. Our second design has a similar structure, but instead divides the run-length-encoded version of L into blocks containing an equal number of runs. The trick then is to determine the block in which a query falls, which is achieved via a predecessor query over the block starting positions. We show via extensive experiments on a wide range of repetitive text collections that these FM-indexes are not only easy to implement, but also fast and space efficient in practice.
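
A minimal Python sketch of the first design as described above (block size b, per-block run-length encoding, and cumulative symbol counts stored before each block); details such as bit packing are omitted:

from collections import Counter
from itertools import groupby

class BlockRank:
    def __init__(self, L, b=4):
        self.b = b
        self.blocks = []      # run-length encoded blocks of b symbols each
        self.before = []      # per block: symbol counts in L before the block
        counts = Counter()
        for start in range(0, len(L), b):
            block = L[start:start + b]
            self.before.append(dict(counts))
            self.blocks.append([(ch, len(list(g))) for ch, g in groupby(block)])
            counts.update(block)

    def rank(self, c, i):
        # occurrences of c in the length-i prefix of L
        blk, off = divmod(i, self.b)
        total = self.before[blk].get(c, 0)
        for ch, run in self.blocks[blk]:
            take = min(run, off)
            if ch == c:
                total += take
            off -= take
            if off == 0:
                break
        return total

r = BlockRank("annbnaaannnbna")
print(r.rank("n", 9))   # 4 occurrences of 'n' in the first 9 symbols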

Cite as

Diego Díaz-Domínguez, Saska Dönges, Simon J. Puglisi, and Leena Salmela. Simple Runs-Bounded FM-Index Designs Are Fast. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 7:1-7:16, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{diazdominguez_et_al:LIPIcs.SEA.2023.7,
  author =	{D{\'\i}az-Dom{\'\i}nguez, Diego and D\"{o}nges, Saska and Puglisi, Simon J. and Salmela, Leena},
  title =	{{Simple Runs-Bounded FM-Index Designs Are Fast}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.7},
  URN =		{urn:nbn:de:0030-drops-183579},
  doi =		{10.4230/LIPIcs.SEA.2023.7},
  annote =	{Keywords: data structures, efficient algorithms}
}
Document
FREIGHT: Fast Streaming Hypergraph Partitioning

Authors: Kamal Eyubov, Marcelo Fonseca Faraj, and Christian Schulz

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
Partitioning the vertices of a (hyper)graph into k roughly balanced blocks such that few (hyper)edges run between blocks is a key problem for large-scale distributed processing. A current trend for partitioning huge (hyper)graphs using low computational resources is streaming algorithms. In this work, we propose FREIGHT: a Fast stREamInG Hypergraph parTitioning algorithm which is an adaptation of the widely known graph-based algorithm Fennel. By using an efficient data structure, we make the overall running time of FREIGHT linearly dependent on the pin count of the hypergraph and the memory consumption linearly dependent on the numbers of nets and blocks. The results of our extensive experiments showcase the promising performance of FREIGHT as a highly efficient and effective solution for streaming hypergraph partitioning. Our algorithm demonstrates running time competitive with the Hashing algorithm, differing by at most a factor of four on three fourths of the instances. Significantly, our findings highlight the superiority of FREIGHT over all existing (buffered) streaming algorithms and even the in-memory algorithm HYPE, with respect to both the cut-net and connectivity measures. This indicates that our proposed algorithm is a promising hypergraph partitioning tool for tackling the challenge posed by large-scale and dynamic data processing.
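
For intuition, a much-simplified Fennel-style streaming assignment on a plain graph (FREIGHT itself works on hypergraphs and relies on an efficient data structure; the penalty term below is a simplification chosen purely for illustration): each arriving vertex goes to the block that maximizes "neighbors already placed there" minus a balance penalty.

def stream_partition(adjacency, k, penalty=0.5):
    block_of = {}
    sizes = [0] * k
    for v, neighbors in adjacency.items():
        def score(b):
            gain = sum(1 for u in neighbors if block_of.get(u) == b)
            return gain - penalty * sizes[b]   # simplified balance penalty
        best = max(range(k), key=score)
        block_of[v] = best
        sizes[best] += 1
    return block_of

adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3, 5], 5: [4]}
print(stream_partition(adjacency, k=2))   # the triangle and the path end up in different blocks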

Cite as

Kamal Eyubov, Marcelo Fonseca Faraj, and Christian Schulz. FREIGHT: Fast Streaming Hypergraph Partitioning. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 15:1-15:16, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{eyubov_et_al:LIPIcs.SEA.2023.15,
  author =	{Eyubov, Kamal and Fonseca Faraj, Marcelo and Schulz, Christian},
  title =	{{FREIGHT: Fast Streaming Hypergraph Partitioning}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{15:1--15:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.15},
  URN =		{urn:nbn:de:0030-drops-183657},
  doi =		{10.4230/LIPIcs.SEA.2023.15},
  annote =	{Keywords: Hypergraph partitioning, graph partitioning, edge partitioning, streaming}
}
Document
Arc-Flags Meet Trip-Based Public Transit Routing

Authors: Ernestine Großmann, Jonas Sauer, Christian Schulz, and Patrick Steil

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
We present Arc-Flag TB, a journey planning algorithm for public transit networks which combines Trip-Based Public Transit Routing (TB) with the Arc-Flags speedup technique. Compared to previous attempts to apply Arc-Flags to public transit networks, which saw limited success, our approach uses stronger pruning rules to reduce the search space. Our experiments show that Arc-Flag TB achieves a speedup of up to two orders of magnitude over TB, offering query times of less than a millisecond even on large countrywide networks. Compared to the state-of-the-art speedup technique Trip-Based Public Transit Routing Using Condensed Search Trees (TB-CST), our algorithm achieves similar query times but requires significantly less additional memory. Other state-of-the-art algorithms which achieve even faster query times, e.g., Public Transit Labeling, require enormous amounts of memory. In contrast, Arc-Flag TB offers a tradeoff between query performance and memory usage because the number of regions in the network partition required by our algorithm is a configurable parameter. We also identify a previously undiscovered issue in the transfer precomputation of TB, which causes both TB-CST and Arc-Flag TB to answer some queries incorrectly, and we discuss how this issue can be resolved in the future. Currently, Arc-Flag TB answers 1-6% of queries incorrectly, compared to over 20% for TB-CST on some networks.

Cite as

Ernestine Großmann, Jonas Sauer, Christian Schulz, and Patrick Steil. Arc-Flags Meet Trip-Based Public Transit Routing. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 16:1-16:18, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{gromann_et_al:LIPIcs.SEA.2023.16,
  author =	{Gro{\ss}mann, Ernestine and Sauer, Jonas and Schulz, Christian and Steil, Patrick},
  title =	{{Arc-Flags Meet Trip-Based Public Transit Routing}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.16},
  URN =		{urn:nbn:de:0030-drops-183664},
  doi =		{10.4230/LIPIcs.SEA.2023.16},
  annote =	{Keywords: Public transit routing, graph algorithms, algorithm engineering}
}
Document
Greedy Heuristics for Judicious Hypergraph Partitioning

Authors: Noah Wahl and Lars Gottesbüren

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
We investigate the efficacy of greedy heuristics for the judicious hypergraph partitioning problem. In contrast to balanced partitioning problems, the goal of judicious hypergraph partitioning is to minimize the maximum load over all blocks of the partition. We devise strategies for initial partitioning and FM-style post-processing. In combination with a multilevel scheme, they beat the previous state-of-the-art solver - based on greedy set covers - in both running time (two to four orders of magnitude) and solution quality (18% to 45%). A major challenge that makes local greedy approaches difficult to use for this problem is the high frequency of zero-gain moves, for which we present and evaluate counteracting mechanisms.
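
An illustrative greedy pass, assuming a block's load counts the nets with at least one pin in that block (this load definition is an assumption made here for illustration); the paper combines such greedy rules with FM-style post-processing and a multilevel scheme:

def judicious_greedy(nets_of_vertex, k):
    block_of = {}
    nets_in_block = [set() for _ in range(k)]   # nets touching each block so far
    for v, nets in nets_of_vertex.items():
        # place v where the resulting load grows the least
        best = min(range(k), key=lambda b: len(nets_in_block[b] | set(nets)))
        block_of[v] = best
        nets_in_block[best].update(nets)
    return block_of, [len(s) for s in nets_in_block]

nets_of_vertex = {0: ["e1", "e2"], 1: ["e1"], 2: ["e2", "e3"], 3: ["e3"], 4: ["e4"]}
print(judicious_greedy(nets_of_vertex, k=2))    # assignment and resulting block loads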

Cite as

Noah Wahl and Lars Gottesbüren. Greedy Heuristics for Judicious Hypergraph Partitioning. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 17:1-17:16, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{wahl_et_al:LIPIcs.SEA.2023.17,
  author =	{Wahl, Noah and Gottesb\"{u}ren, Lars},
  title =	{{Greedy Heuristics for Judicious Hypergraph Partitioning}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{17:1--17:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.17},
  URN =		{urn:nbn:de:0030-drops-183674},
  doi =		{10.4230/LIPIcs.SEA.2023.17},
  annote =	{Keywords: hypergraph partitioning, local search algorithms, load balancing, local search}
}
Document
Efficient Yao Graph Construction

Authors: Daniel Funke and Peter Sanders

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
Yao graphs are geometric spanners that connect each point of a given point set to its nearest neighbor in each of k cones drawn around it. Yao graphs were introduced to construct minimum spanning trees in d-dimensional spaces. Moreover, they are used, for instance, in topology control in wireless networks. An optimal 𝒪(n log n)-time algorithm to construct Yao graphs for a given point set has been proposed in the literature but has - to the best of our knowledge - never been implemented. Instead, algorithms with quadratic complexity are used in popular packages to construct these graphs. In this paper we present the first implementation of the optimal Yao graph algorithm. We engineer the data structures required to achieve the 𝒪(n log n) time bound and detail the algorithmic adaptations necessary to take the original algorithm from theory to practice. We propose a priority queue data structure that separates static and dynamic events and might be of independent interest for other sweepline algorithms. Additionally, we propose a new Yao graph algorithm based on a uniform grid data structure that performs well for medium-sized inputs. We evaluate our implementations on a wide variety of synthetic and real-world datasets and show that our implementation outperforms current publicly available implementations by at least an order of magnitude.
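
For comparison, the straightforward quadratic construction in 2D (the kind of approach used in popular packages); the paper's contribution is an engineered O(n log n) sweepline implementation, which is not reproduced here:

import math

def yao_graph(points, k=6):
    edges = []
    for i, (px, py) in enumerate(points):
        nearest = [None] * k                      # (distance, point index) per cone
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            angle = math.atan2(qy - py, qx - px) % (2 * math.pi)
            cone = int(angle // (2 * math.pi / k))
            d = math.hypot(qx - px, qy - py)
            if nearest[cone] is None or d < nearest[cone][0]:
                nearest[cone] = (d, j)
        edges.extend((i, entry[1]) for entry in nearest if entry is not None)
    return edges

points = [(0, 0), (2, 1), (-1, 3), (1, -2), (-3, -1)]
print(yao_graph(points, k=6))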

Cite as

Daniel Funke and Peter Sanders. Efficient Yao Graph Construction. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 20:1-20:20, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{funke_et_al:LIPIcs.SEA.2023.20,
  author =	{Funke, Daniel and Sanders, Peter},
  title =	{{Efficient Yao Graph Construction}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{20:1--20:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.20},
  URN =		{urn:nbn:de:0030-drops-183706},
  doi =		{10.4230/LIPIcs.SEA.2023.20},
  annote =	{Keywords: computational geometry, geometric spanners, Yao graphs, sweepline algorithms, optimal algorithms}
}
Document
Fast Succinct Retrieval and Approximate Membership Using Ribbon

Authors: Peter C. Dillinger, Lorenz Hübschle-Schneider, Peter Sanders, and Stefan Walzer

Published in: LIPIcs, Volume 233, 20th International Symposium on Experimental Algorithms (SEA 2022)


Abstract
A retrieval data structure for a static function f: S → {0,1}^r supports queries that return f(x) for any x ∈ S. Retrieval data structures can be used to implement a static approximate membership query data structure (AMQ), i.e., a Bloom filter alternative, with false positive rate 2^{-r}. The information-theoretic lower bound for both tasks is r|S| bits. While succinct theoretical constructions using (1+o(1))r|S| bits were known, these could not achieve very small overheads in practice because they have an unfavorable space-time tradeoff hidden in the asymptotic costs or because small overheads would only be reached for physically impossible input sizes. With bumped ribbon retrieval (BuRR), we present the first practical succinct retrieval data structure. In an extensive experimental evaluation BuRR achieves space overheads well below 1% while being faster than most previously used retrieval data structures (typically with space overheads at least an order of magnitude larger) and faster than classical Bloom filters (with space overhead ≥ 44%). This efficiency, including favorable constants, stems from a combination of simplicity, word parallelism, and high locality. We additionally describe homogeneous ribbon filter AMQs, which are even simpler and faster at the price of slightly larger space overhead.
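
To illustrate the AMQ-from-retrieval reduction mentioned above, a toy sketch in which a plain dict stands in for the retrieval data structure (a real structure such as BuRR stores roughly r bits per key and no keys at all, and returns an arbitrary value for keys outside S):

import hashlib

def fingerprint(x, r):
    # r-bit fingerprint of the key
    digest = hashlib.sha256(x.encode()).digest()
    return int.from_bytes(digest, "big") & ((1 << r) - 1)

def build_amq(S, r=8):
    table = {x: fingerprint(x, r) for x in S}       # stand-in for the retrieval structure
    def maybe_member(x):
        stored = table.get(x, fingerprint("out-of-set:" + x, r))  # "arbitrary" value for x not in S
        return stored == fingerprint(x, r)          # false positive with probability ~2^-r
    return maybe_member

q = build_amq({"foo", "bar", "baz"}, r=8)
print(q("foo"), q("qux"))   # True, and almost surely False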

Cite as

Peter C. Dillinger, Lorenz Hübschle-Schneider, Peter Sanders, and Stefan Walzer. Fast Succinct Retrieval and Approximate Membership Using Ribbon. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 4:1-4:20, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)



@InProceedings{dillinger_et_al:LIPIcs.SEA.2022.4,
  author =	{Dillinger, Peter C. and H\"{u}bschle-Schneider, Lorenz and Sanders, Peter and Walzer, Stefan},
  title =	{{Fast Succinct Retrieval and Approximate Membership Using Ribbon}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.4},
  URN =		{urn:nbn:de:0030-drops-165385},
  doi =		{10.4230/LIPIcs.SEA.2022.4},
  annote =	{Keywords: AMQ, Bloom filter, dictionary, linear algebra, randomized algorithm, retrieval data structure, static function data structure, succinct data structure, perfect hashing}
}
Document
Parallel Flow-Based Hypergraph Partitioning

Authors: Lars Gottesbüren, Tobias Heuer, and Peter Sanders

Published in: LIPIcs, Volume 233, 20th International Symposium on Experimental Algorithms (SEA 2022)


Abstract
We present a shared-memory parallelization of flow-based refinement, which is currently considered the most powerful iterative improvement technique for hypergraph partitioning. Flow-based refinement works on bipartitions, so current sequential partitioners schedule it on different block pairs to improve k-way partitions. We investigate two different sources of parallelism: a parallel scheduling scheme and a parallel maximum flow algorithm based on the well-known push-relabel algorithm. In addition to thoroughly engineered implementations, we propose several optimizations that substantially accelerate the algorithm in practice, enabling its use on extremely large hypergraphs (up to 1 billion pins). We integrate our approach into the state-of-the-art parallel multilevel framework Mt-KaHyPar and conduct extensive experiments on a benchmark set of more than 500 real-world hypergraphs to show that the partition quality of our code is on par with that of the highest-quality sequential code (KaHyPar), while being an order of magnitude faster with 10 threads.

Cite as

Lars Gottesbüren, Tobias Heuer, and Peter Sanders. Parallel Flow-Based Hypergraph Partitioning. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 5:1-5:21, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)



@InProceedings{gottesburen_et_al:LIPIcs.SEA.2022.5,
  author =	{Gottesb\"{u}ren, Lars and Heuer, Tobias and Sanders, Peter},
  title =	{{Parallel Flow-Based Hypergraph Partitioning}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{5:1--5:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.5},
  URN =		{urn:nbn:de:0030-drops-165393},
  doi =		{10.4230/LIPIcs.SEA.2022.5},
  annote =	{Keywords: multilevel hypergraph partitioning, shared-memory algorithms, maximum flow}
}
Document
Deep Multilevel Graph Partitioning

Authors: Lars Gottesbüren, Tobias Heuer, Peter Sanders, Christian Schulz, and Daniel Seemaier

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
Partitioning a graph into blocks of "roughly equal" weight while cutting only a few edges is a fundamental problem in computer science with a wide range of applications. In particular, the problem is a building block in applications that require parallel processing. While the number of available cores in parallel architectures has increased significantly in recent years, state-of-the-art graph partitioning algorithms do not work well if the input needs to be partitioned into a large number of blocks. In this case, currently available algorithms often compute highly imbalanced solutions, solutions of low quality, or have excessive running time. This is because most high-quality general-purpose graph partitioners are multilevel algorithms which perform graph coarsening to build a hierarchy of graphs, initial partitioning to compute an initial solution, and local improvement to improve the solution throughout the hierarchy. However, for large numbers of blocks, the smallest graph in the hierarchy that is used for initial partitioning still has to be large. In this work, we substantially mitigate these problems by introducing deep multilevel graph partitioning and a shared-memory implementation thereof. Our scheme continues the multilevel approach deep into initial partitioning, integrating it into a framework where recursive bipartitioning and direct k-way partitioning are combined such that they can operate with high performance and quality. Our integrated approach is stronger, more flexible, arguably more elegant, and reduces bottlenecks for parallelization compared to existing multilevel approaches. For example, for large numbers of blocks our algorithm is on average at least an order of magnitude faster than competing algorithms while computing partitions with comparable solution quality. At the same time, our algorithm consistently produces balanced solutions. Moreover, for small numbers of blocks, our algorithms are the fastest among competing systems with comparable quality.
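
A bare-bones sketch of classical recursive bipartitioning into k blocks, the scheme that deep multilevel partitioning integrates with direct k-way partitioning; the bisection step below is only a placeholder, not a real (multilevel) graph bisection:

def recursive_bipartition(vertices, k):
    if k == 1:
        return {v: 0 for v in vertices}
    left_k = k // 2
    cut = len(vertices) * left_k // k                   # proportional split
    left, right = vertices[:cut], vertices[cut:]        # placeholder bisection
    blocks = recursive_bipartition(left, left_k)
    for v, b in recursive_bipartition(right, k - left_k).items():
        blocks[v] = b + left_k                          # shift block ids of the right side
    return blocks

print(recursive_bipartition(list(range(10)), k=4))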

Cite as

Lars Gottesbüren, Tobias Heuer, Peter Sanders, Christian Schulz, and Daniel Seemaier. Deep Multilevel Graph Partitioning. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 48:1-48:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)



@InProceedings{gottesburen_et_al:LIPIcs.ESA.2021.48,
  author =	{Gottesb\"{u}ren, Lars and Heuer, Tobias and Sanders, Peter and Schulz, Christian and Seemaier, Daniel},
  title =	{{Deep Multilevel Graph Partitioning}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{48:1--48:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.48},
  URN =		{urn:nbn:de:0030-drops-146298},
  doi =		{10.4230/LIPIcs.ESA.2021.48},
  annote =	{Keywords: graph partitioning, graph algorithms, multilevel, shared-memory, parallel}
}
Document
Engineering MultiQueues: Fast Relaxed Concurrent Priority Queues

Authors: Marvin Williams, Peter Sanders, and Roman Dementiev

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
Priority queues with parallel access are an attractive data structure for applications like prioritized online scheduling, discrete event simulation, or greedy algorithms. However, a classical priority queue constitutes a severe bottleneck in this context, leading to very small throughput. Hence, there has been significant interest in concurrent priority queues with relaxed semantics. We investigate the complementary quality criteria rank error (how close are deleted elements to the global minimum) and delay (for each element x, how many elements with lower priority are deleted before x). In this paper, we introduce MultiQueues as a natural approach to relaxed priority queues based on multiple sequential priority queues. Their naturally high theoretical scalability is further enhanced by using three orthogonal ways of batching operations on the sequential queues. Experiments indicate that MultiQueues present a very good performance-quality tradeoff and considerably outperform competing approaches in at least one of these aspects. We employ a seemingly paradoxical technique of "wait-free locking" that might be of more general interest to convert sequential data structures to relaxed concurrent data structures.
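
A sequential toy version of the MultiQueue idea, ignoring the locking and batching that the paper engineers: insertions go to a random sequential queue, and delete-min pops from the better of two randomly sampled queues, which keeps rank errors small with high probability.

import heapq, random

class MultiQueue:
    def __init__(self, c=4):
        self.queues = [[] for _ in range(c)]     # c sequential binary heaps

    def push(self, key):
        heapq.heappush(random.choice(self.queues), key)

    def pop(self):
        a, b = random.sample(self.queues, 2)     # inspect two random queues
        candidates = [q for q in (a, b) if q]
        if not candidates:
            return None                          # both sampled queues were empty
        return heapq.heappop(min(candidates, key=lambda q: q[0]))

mq = MultiQueue()
for x in random.sample(range(100), 20):
    mq.push(x)
print([mq.pop() for _ in range(10)])             # roughly, but not exactly, sorted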

Cite as

Marvin Williams, Peter Sanders, and Roman Dementiev. Engineering MultiQueues: Fast Relaxed Concurrent Priority Queues. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 81:1-81:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)



@InProceedings{williams_et_al:LIPIcs.ESA.2021.81,
  author =	{Williams, Marvin and Sanders, Peter and Dementiev, Roman},
  title =	{{Engineering MultiQueues: Fast Relaxed Concurrent Priority Queues}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{81:1--81:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.81},
  URN =		{urn:nbn:de:0030-drops-146627},
  doi =		{10.4230/LIPIcs.ESA.2021.81},
  annote =	{Keywords: concurrent data structure, priority queues, randomized algorithms, wait-free locking}
}
  • Refine by Author
  • 25 Sanders, Peter
  • 5 Schulz, Christian
  • 4 Fomin, Fedor V.
  • 3 Golovach, Petr A.
  • 3 Gottesbüren, Lars

  • Refine by Classification
  • 23 Mathematics of computing → Graph algorithms
  • 17 Theory of computation → Graph algorithms analysis
  • 16 Theory of computation → Computational geometry
  • 15 Theory of computation → Design and analysis of algorithms
  • 14 Theory of computation → Parameterized complexity and exact algorithms

  • Refine by Keyword
  • 5 algorithm engineering
  • 4 kernelization
  • 3 Approximation Algorithms
  • 3 approximation algorithm
  • 3 approximation algorithms

  • Refine by Type
  • 120 document
  • 1 volume

  • Refine by Publication Year
  • 87 2020
  • 10 2023
  • 9 2010
  • 4 2017
  • 3 2021
