Search Results

Documents authored by Meyer, Ulrich


Found 2 Possible Name Variants:

Meyer, Ulrich Carsten

Document
Data Structures for the Cloud and External Memory Data (Dagstuhl Seminar 19051)

Authors: Gerth Stølting Brodal, Ulrich Carsten Meyer, Bernhard E. Nebel, and Robert Sedgewick

Published in: Dagstuhl Reports, Volume 9, Issue 1 (2019)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 19051 "Data Structures for the Cloud and External Memory Data". In today's computing environment vast amounts of data are processed, exchanged and analyzed. The manner in which information is stored profoundly influences the efficiency of these operations over the data. In spite of the maturity of the field many data structuring problems are still open, while new ones arise due to technological advances. The seminar covered both recent advances in the "classical" data structuring topics as well as new models of computation adapted to modern architectures, scientific studies that reveal the need for such models, applications where large data sets play a central role, modern computing platforms for very large data, and new data structures for large data in modern architectures. The extended abstracts included in this report contain both recent state of the art advances and lay the foundation for new directions within data structures research.

Cite as

Gerth Stølting Brodal, Ulrich Carsten Meyer, Bernhard E. Nebel, and Robert Sedgewick. Data Structures for the Cloud and External Memory Data (Dagstuhl Seminar 19051). In Dagstuhl Reports, Volume 9, Issue 1, pp. 104-124, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{brodal_et_al:DagRep.9.1.104,
  author =	{Brodal, Gerth St{\o}lting and Meyer, Ulrich Carsten and Nebel, Bernhard E. and Sedgewick, Robert},
  title =	{{Data Structures for the Cloud and External Memory Data (Dagstuhl Seminar 19051)}},
  pages =	{104--124},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{9},
  number =	{1},
  editor =	{Brodal, Gerth St{\o}lting and Meyer, Ulrich Carsten and Nebel, Bernhard E. and Sedgewick, Robert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.9.1.104},
  URN =		{urn:nbn:de:0030-drops-105722},
  doi =		{10.4230/DagRep.9.1.104},
  annote =	{Keywords: algorithms, big data, cloud computing, data structures, external memory methods, large data sets, web-scale}
}
Document
Data Structures and Advanced Models of Computation on Big Data (Dagstuhl Seminar 16101)

Authors: Alejandro Lopez-Ortiz, Ulrich Carsten Meyer, Markus E. Nebel, and Robert Sedgewick

Published in: Dagstuhl Reports, Volume 6, Issue 3 (2016)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 16101 "Data Structures and Advanced Models of Computation on Big Data". In today's computing environment vast amounts of data are processed, exchanged and analyzed. The manner in which information is stored profoundly influences the efficiency of these operations over the data. In spite of the maturity of the field many data structuring problems are still open, while new ones arise due to technological advances. The seminar covered both recent advances in the "classical" data structuring topics as well as new models of computation adapted to modern architectures, scientific studies that reveal the need for such models, applications where large data sets play a central role, modern computing platforms for very large data, and new data structures for large data in modern architectures. The extended abstracts included in this report contain both recent state of the art advances and lay the foundation for new directions within data structures research.

Cite as

Alejandro Lopez-Ortiz, Ulrich Carsten Meyer, Markus E. Nebel, and Robert Sedgewick. Data Structures and Advanced Models of Computation on Big Data (Dagstuhl Seminar 16101). In Dagstuhl Reports, Volume 6, Issue 3, pp. 1-23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{lopezortiz_et_al:DagRep.6.3.1,
  author =	{Lopez-Ortiz, Alejandro and Meyer, Ulrich Carsten and Nebel, Markus E. and Sedgewick, Robert},
  title =	{{Data Structures and Advanced Models of Computation on Big Data (Dagstuhl Seminar 16101)}},
  pages =	{1--23},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2016},
  volume =	{6},
  number =	{3},
  editor =	{Lopez-Ortiz, Alejandro and Meyer, Ulrich Carsten and Nebel, Markus E. and Sedgewick, Robert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.3.1},
  URN =		{urn:nbn:de:0030-drops-61457},
  doi =		{10.4230/DagRep.6.3.1},
  annote =	{Keywords: algorithms, big data, cloud services, data structures, external memory methods, information theory, large data sets, streaming, web-scale}
}
Document
High-performance Graph Algorithms and Applications in Computational Science (Dagstuhl Seminar 14461)

Authors: Ulrich Carsten Meyer, Henning Meyerhenke, Ali Pinar, and Ilya Safro

Published in: Dagstuhl Reports, Volume 4, Issue 11 (2015)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 14461 "High-performance Graph Algorithms and Applications in Computational Science". The seminar reflected the recent qualitative change in how graph algorithms are used in practice due to (i) the complex structure of graphs in new and emerging applications, (ii) the size of typical inputs, and (iii) the computer systems on which graph problems are solved. This change is having a tremendous impact on the field of graph algorithms in terms of algorithm theory and implementation as well as hardware requirements and application areas. The seminar covered recent advances in all these aspects with a focus on practical algorithms and their efficient implementation for large-scale problems. The abstracts included in this report contain recent state-of-the-art results, but also point to promising new directions for high-performance graph algorithms and their applications.

Cite as

Ulrich Carsten Meyer, Henning Meyerhenke, Ali Pinar, and Ilya Safro. High-performance Graph Algorithms and Applications in Computational Science (Dagstuhl Seminar 14461). In Dagstuhl Reports, Volume 4, Issue 11, pp. 40-58, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@Article{meyer_et_al:DagRep.4.11.40,
  author =	{Meyer, Ulrich Carsten and Meyerhenke, Henning and Pinar, Ali and Safro, Ilya},
  title =	{{High-performance Graph Algorithms and Applications in Computational Science (Dagstuhl Seminar 14461)}},
  pages =	{40--58},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2015},
  volume =	{4},
  number =	{11},
  editor =	{Meyer, Ulrich Carsten and Meyerhenke, Henning and Pinar, Ali and Safro, Ilya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.4.11.40},
  URN =		{urn:nbn:de:0030-drops-49697},
  doi =		{10.4230/DagRep.4.11.40},
  annote =	{Keywords: graphs, graph algorithms, graph theory, computational science, complex networks, network science, graph partitioning, linear algebra, parallel program}
}
Document
Data Structures and Advanced Models of Computation on Big Data (Dagstuhl Seminar 14091)

Authors: Alejandro López-Ortiz, Ulrich Carsten Meyer, and Robert Sedgewick

Published in: Dagstuhl Reports, Volume 4, Issue 2 (2014)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 14091 "Data Structures and Advanced Models of Computation on Big Data". In today's computing environment vast amounts of data are processed, exchanged and analyzed. The manner in which information is stored profoundly influences the efficiency of these operations over the data. In spite of the maturity of the field many data structuring problems are still open, while new ones arise due to technological advances. The seminar covered both recent advances in the "classical" data structuring topics as well as new models of computation adapted to modern architectures, scientific studies that reveal the need for such models, applications where large data sets play a central role, modern computing platforms for very large data, and new data structures for large data in modern architectures. The extended abstracts included in this report contain both recent state of the art advances and lay the foundation for new directions within data structures research.

Cite as

Alejandro López-Ortiz, Ulrich Carsten Meyer, and Robert Sedgewick. Data Structures and Advanced Models of Computation on Big Data (Dagstuhl Seminar 14091). In Dagstuhl Reports, Volume 4, Issue 2, pp. 129-149, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@Article{lopezortiz_et_al:DagRep.4.2.129,
  author =	{L\'{o}pez-Ortiz, Alejandro and Meyer, Ulrich Carsten and Sedgewick, Robert},
  title =	{{Data Structures and Advanced Models of Computation on Big Data (Dagstuhl Seminar 14091)}},
  pages =	{129--149},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2014},
  volume =	{4},
  number =	{2},
  editor =	{L\'{o}pez-Ortiz, Alejandro and Meyer, Ulrich Carsten and Sedgewick, Robert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.4.2.129},
  URN =		{urn:nbn:de:0030-drops-45489},
  doi =		{10.4230/DagRep.4.2.129},
  annote =	{Keywords: data structures, big data, models of computation, I/O Model, sorting, quicksort, graph algorithms, hashing, compression, succinct data structures, trajectories, text search, GPU algorithms, MapReduce}
}

Meyer, Ulrich

Document
Insights into (k, ρ)-Shortcutting Algorithms

Authors: Alexander Leonhardt, Ulrich Meyer, and Manuel Penschuck

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A graph is called a (k, ρ)-graph iff every node can reach ρ of its nearest neighbors in at most k hops. This property has proven useful in the analysis and design of parallel shortest-path algorithms [Blelloch et al., 2016; Dong et al., 2021]. Any graph can be transformed into a (k, ρ)-graph by adding shortcuts. Formally, the (k,ρ)-Minimum-Shortcut-Problem (kρ-MSP) asks to find an appropriate shortcut set of minimal cardinality. We show that kρ-MSP is NP-complete in the practical regime of k ≥ 3 and ρ = Θ(n^ε) for ε > 0. With a related construction, we bound the approximation factor of known kρ-MSP heuristics [Blelloch et al., 2016] from below and propose algorithmic countermeasures improving the approximation quality. Further, we describe an integer linear program (ILP) that optimally solves kρ-MSP. Finally, we compare the practical performance and quality of all algorithms empirically.
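
For intuition only (this sketch is not from the paper): the (k, ρ)-property can be checked naively by running, from every node, a Dijkstra search that breaks distance ties in favour of fewer hops and then inspecting the ρ nearest reachable nodes. The function and variable names below are illustrative, and the paper's exact tie-breaking conventions may differ.

import heapq

def is_k_rho_graph(adj, k, rho):
    # adj: dict node -> list of (neighbour, weight) pairs, weights >= 0.
    # Checks naively whether every node reaches each of its rho nearest
    # neighbours (by shortest-path distance) via a shortest path of <= k edges.
    for s in adj:
        # Dijkstra on the lexicographic key (distance, hops): among all
        # shortest paths to a node we keep one with the fewest edges.
        best = {s: (0.0, 0)}
        pq = [(0.0, 0, s)]
        while pq:
            d, h, u = heapq.heappop(pq)
            if (d, h) > best[u]:
                continue                       # stale queue entry
            for v, w in adj[u]:
                cand = (d + w, h + 1)
                if cand < best.get(v, (float("inf"), 0)):
                    best[v] = cand
                    heapq.heappush(pq, (cand[0], cand[1], v))
        # The rho nearest other nodes reachable from s (unreachable nodes are ignored).
        nearest = sorted(dh for u, dh in best.items() if u != s)[:rho]
        if any(hops > k for _, hops in nearest):
            return False
    return True

# Tiny example: a weighted 4-cycle with one heavy chord.
g = {0: [(1, 1), (3, 1), (2, 5)],
     1: [(0, 1), (2, 1)],
     2: [(1, 1), (3, 1), (0, 5)],
     3: [(2, 1), (0, 1)]}
print(is_k_rho_graph(g, k=2, rho=3), is_k_rho_graph(g, k=1, rho=3))   # True False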

Cite as

Alexander Leonhardt, Ulrich Meyer, and Manuel Penschuck. Insights into (k, ρ)-Shortcutting Algorithms. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 84:1-84:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{leonhardt_et_al:LIPIcs.ESA.2024.84,
  author =	{Leonhardt, Alexander and Meyer, Ulrich and Penschuck, Manuel},
  title =	{{Insights into (k, \rho)-Shortcutting Algorithms}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{84:1--84:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.84},
  URN =		{urn:nbn:de:0030-drops-211554},
  doi =		{10.4230/LIPIcs.ESA.2024.84},
  annote =	{Keywords: Complexity, Approximation, Optimal algorithms, Parallel shortest path}
}
Document
PACE Solver Description: Exact (GUTHMI) and Heuristic (GUTHM)

Authors: Alexander Leonhardt, Holger Dell, Anselm Haak, Frank Kammer, Johannes Meintrup, Ulrich Meyer, and Manuel Penschuck

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
Twin-width (tww) is a parameter measuring the similarity of an undirected graph to a co-graph [Édouard Bonnet et al., 2022]. It is useful to analyze the parameterized complexity of various graph problems. This paper presents two algorithms to compute the twin-width and to provide a contraction sequence as witness. The two algorithms are motivated by the PACE 2023 challenge, one for the exact track and one for the heuristic track. Each algorithm produces a contraction sequence witnessing (i) the minimal twin-width admissible by the graph in the exact track (ii) an upper bound on the twin-width as tight as possible in the heuristic track. Our heuristic algorithm relies on several greedy approaches with different performance characteristics to find and improve solutions. For large graphs we use locality sensitive hashing to approximately identify suitable contraction candidates. The exact solver follows a branch-and-bound design. It relies on the heuristic algorithm to provide initial upper bounds, and uses lower bounds via contraction sequences to show the optimality of a heuristic solution found in some branch.
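
As an aside (not taken from the paper), the role of a contraction sequence as a witness can be made concrete with a small checker that replays the sequence and records the largest red degree it creates; the returned value is the twin-width bound the sequence certifies. The graph representation and names below are assumptions of this sketch.

def sequence_width(n, edges, contractions):
    # Replays a contraction sequence (each step merges v into u) and returns
    # the maximum red degree ever seen, i.e. the twin-width it certifies.
    black = {x: set() for x in range(n)}   # original (black) edges
    red = {x: set() for x in range(n)}     # error (red) edges created by merges
    for u, v in edges:
        black[u].add(v); black[v].add(u)

    width = 0
    for u, v in contractions:
        # After the merge, an edge of u to x stays black only if both u and v
        # had a black edge to x; every other former neighbour of u or v turns red.
        new_black = (black[u] & black[v]) - {u, v}
        new_red = ((black[u] | black[v] | red[u] | red[v]) - {u, v}) - new_black
        # Detach u and v from the rest of the graph, dropping v entirely.
        for x in black.pop(v) | red.pop(v) | black[u] | red[u]:
            if x in black:
                black[x].discard(u); black[x].discard(v)
                red[x].discard(u); red[x].discard(v)
        black[u], red[u] = new_black, new_red
        for x in new_black: black[x].add(u)
        for x in new_red: red[x].add(u)
        width = max(width, max(len(r) for r in red.values()))
    return width

# A path on 4 vertices contracted greedily has twin-width at most 1:
print(sequence_width(4, [(0, 1), (1, 2), (2, 3)], [(0, 1), (2, 3), (0, 2)]))  # 1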

Cite as

Alexander Leonhardt, Holger Dell, Anselm Haak, Frank Kammer, Johannes Meintrup, Ulrich Meyer, and Manuel Penschuck. PACE Solver Description: Exact (GUTHMI) and Heuristic (GUTHM). In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 37:1-37:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{leonhardt_et_al:LIPIcs.IPEC.2023.37,
  author =	{Leonhardt, Alexander and Dell, Holger and Haak, Anselm and Kammer, Frank and Meintrup, Johannes and Meyer, Ulrich and Penschuck, Manuel},
  title =	{{PACE Solver Description: Exact (GUTHMI) and Heuristic (GUTHM)}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{37:1--37:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.37},
  URN =		{urn:nbn:de:0030-drops-194563},
  doi =		{10.4230/LIPIcs.IPEC.2023.37},
  annote =	{Keywords: PACE 2023 Challenge, Heuristic, Exact, Twin-Width}
}
Document
An Experimental Study of External Memory Algorithms for Connected Components

Authors: Gerth Stølting Brodal, Rolf Fagerberg, David Hammer, Ulrich Meyer, Manuel Penschuck, and Hung Tran

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
We empirically investigate algorithms for solving Connected Components in the external memory model. In particular, we study whether the randomized O(Sort(E)) algorithm by Karger, Klein, and Tarjan can be implemented to compete with practically promising and simpler algorithms having only slightly worse theoretical cost, namely Borůvka’s algorithm and the algorithm by Sibeyn and collaborators. For all algorithms, we develop and test a number of tuning options. Our experiments are executed on a large set of different graph classes including random graphs, grids, geometric graphs, and hyperbolic graphs. Among our findings are: The Sibeyn algorithm is a very strong contender due to its simplicity and due to an added degree of freedom in its internal workings when used in the Connected Components setting. With the right tunings, the Karger-Klein-Tarjan algorithm can be implemented to be competitive in many cases. Higher graph density seems to benefit Karger-Klein-Tarjan relative to Sibeyn. Borůvka’s algorithm is not competitive with the two others.
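
To make the round structure of the contraction-based approach concrete, here is a minimal in-memory Borůvka-style connected-components sketch; it is purely illustrative and is not one of the external-memory implementations studied in the paper. In each round every component hooks itself to an arbitrary neighbouring component, so the number of components at least halves and O(log n) rounds suffice.

class DSU:
    # Minimal union-find with path halving.
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def boruvka_cc(n, edges):
    # Returns a component label for every vertex 0..n-1.
    dsu = DSU(n)
    while True:
        pick = {}                              # one outgoing edge per component
        for u, v in edges:
            ru, rv = dsu.find(u), dsu.find(v)
            if ru != rv:
                pick.setdefault(ru, (u, v))
                pick.setdefault(rv, (u, v))
        if not pick:                           # no edge crosses two components
            break
        for u, v in pick.values():
            dsu.union(u, v)
        # Discard edges that have become internal to a component.
        edges = [(u, v) for u, v in edges if dsu.find(u) != dsu.find(v)]
    return [dsu.find(x) for x in range(n)]

print(boruvka_cc(6, [(0, 1), (1, 2), (3, 4)]))   # e.g. [c0, c0, c0, c1, c1, c2]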

Cite as

Gerth Stølting Brodal, Rolf Fagerberg, David Hammer, Ulrich Meyer, Manuel Penschuck, and Hung Tran. An Experimental Study of External Memory Algorithms for Connected Components. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 23:1-23:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{brodal_et_al:LIPIcs.SEA.2021.23,
  author =	{Brodal, Gerth St{\o}lting and Fagerberg, Rolf and Hammer, David and Meyer, Ulrich and Penschuck, Manuel and Tran, Hung},
  title =	{{An Experimental Study of External Memory Algorithms for Connected Components}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{23:1--23:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.23},
  URN =		{urn:nbn:de:0030-drops-137958},
  doi =		{10.4230/LIPIcs.SEA.2021.23},
  annote =	{Keywords: Connected Components, Experimental Evaluation, External Memory, Graph Algorithms, Randomization}
}
Document
Simulating Population Protocols in Sub-Constant Time per Interaction

Authors: Petra Berenbrink, David Hammer, Dominik Kaaser, Ulrich Meyer, Manuel Penschuck, and Hung Tran

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
We consider the efficient simulation of population protocols. In the population model, we are given a system of n agents modeled as identical finite-state machines. In each step, two agents are selected uniformly at random to interact by updating their states according to a common transition function. We empirically and analytically analyze two classes of simulators for this model. First, we consider sequential simulators executing one interaction after the other. Key to the performance of these simulators is the data structure storing the agents' states. For our analysis, we consider plain arrays, binary search trees, and a novel Dynamic Alias Table data structure. Secondly, we consider batch processing to efficiently update the states of multiple independent agents in one step. For many protocols considered in the literature, our simulator requires amortized sub-constant time per interaction and is fast in practice: given a fixed time budget, the implementation of our batched simulator is able to simulate population protocols several orders of magnitude larger compared to the sequential competitors, and can carry out 2^50 interactions among the same number of agents in less than 400s.
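
For orientation only: a sequential simulator of this model is just a few lines, as sketched below. This is not the paper's optimized simulator (which relies on the data structures and batching described above), and the toy cancellation rule used as the transition function is an assumption of this sketch.

import random

def simulate(states, delta, steps, rng=random.Random(0)):
    # Sequential population-protocol simulation: in each step an ordered pair
    # of distinct agents is drawn uniformly at random and updated via delta.
    n = len(states)
    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.randrange(n - 1)
        if j >= i:                 # maps to a uniform j != i
            j += 1
        states[i], states[j] = delta(states[i], states[j])
    return states

def cancel(a, b):
    # Toy opinion-cancellation rule (illustrative only): opposing opinions
    # neutralise each other, undecided agents adopt any opinion they meet.
    if {a, b} == {"A", "B"}:
        return "U", "U"
    if "U" in (a, b) and a != b:
        winner = a if b == "U" else b
        return winner, winner
    return a, b

agents = ["A"] * 600 + ["B"] * 400
print(simulate(agents, cancel, 200_000).count("A"))   # typically drifts towards the majority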

Cite as

Petra Berenbrink, David Hammer, Dominik Kaaser, Ulrich Meyer, Manuel Penschuck, and Hung Tran. Simulating Population Protocols in Sub-Constant Time per Interaction. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 16:1-16:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{berenbrink_et_al:LIPIcs.ESA.2020.16,
  author =	{Berenbrink, Petra and Hammer, David and Kaaser, Dominik and Meyer, Ulrich and Penschuck, Manuel and Tran, Hung},
  title =	{{Simulating Population Protocols in Sub-Constant Time per Interaction}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{16:1--16:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.16},
  URN =		{urn:nbn:de:0030-drops-128827},
  doi =		{10.4230/LIPIcs.ESA.2020.16},
  annote =	{Keywords: Population Protocols, Simulation, Random Sampling, Dynamic Alias Table}
}
Document
On Optimal Balance in B-Trees: What Does It Cost to Stay in Perfect Shape?

Authors: Rolf Fagerberg, David Hammer, and Ulrich Meyer

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
Any B-tree has height at least $\lceil \log_B(n) \rceil$. Static B-trees achieving this height are easy to build. In the dynamic case, however, standard B-tree rebalancing algorithms only maintain a height within a constant factor of this optimum. We investigate exactly how close to $\lceil \log_B(n) \rceil$ the height of dynamic B-trees can be maintained as a function of the rebalancing cost. In this paper, we prove a lower bound on the cost of maintaining optimal height $\lceil \log_B(n) \rceil$, which shows that this cost must increase from $\Omega(1/B)$ to $\Omega(n/B)$ rebalancing per update as $n$ grows from one power of $B$ to the next. We also provide an almost matching upper bound, demonstrating this lower bound to be essentially tight. We then give a variant upper bound which can maintain near-optimal height at low cost. As two special cases, we can maintain optimal height for all but a vanishing fraction of values of $n$ using $\Theta(\log_B(n))$ amortized rebalancing cost per update and we can maintain a height of optimal plus one using $O(1/B)$ amortized rebalancing cost per update. More generally, for any rebalancing budget, we can maintain (as $n$ grows from one power of $B$ to the next) optimal height essentially up to the point where the lower bound requires the budget to be exceeded, after which optimal height plus one is maintained. Finally, we prove that this balancing scheme gives B-trees with very good storage utilization.
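
A quick way to see where the lower bound bites (this snippet is an illustration, not part of the paper): it computes the height as the smallest number of levels h with B^h ≥ n, which equals the ceiling of log_B(n) up to the usual convention differences.

def min_height(n, B):
    # Smallest h with B**h >= n, i.e. the ceil(log_B n) lower bound on
    # B-tree height, computed with integers to avoid floating-point noise.
    h, capacity = 0, 1
    while capacity < n:
        capacity *= B
        h += 1
    return h

B = 16
for n in (B**3 - 1, B**3, B**3 + 1):
    print(n, min_height(n, B))
# 4095 3 / 4096 3 / 4097 4 -- the optimal height jumps right after each power
# of B, which is exactly where maintaining it dynamically gets expensive.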

Cite as

Rolf Fagerberg, David Hammer, and Ulrich Meyer. On Optimal Balance in B-Trees: What Does It Cost to Stay in Perfect Shape? In 30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 35:1-35:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{fagerberg_et_al:LIPIcs.ISAAC.2019.35,
  author =	{Fagerberg, Rolf and Hammer, David and Meyer, Ulrich},
  title =	{{On Optimal Balance in B-Trees: What Does It Cost to Stay in Perfect Shape?}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{35:1--35:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.35},
  URN =		{urn:nbn:de:0030-drops-115313},
  doi =		{10.4230/LIPIcs.ISAAC.2019.35},
  annote =	{Keywords: B-trees, Data structures, Lower bounds, Complexity}
}
Document
Fragile Complexity of Comparison-Based Algorithms

Authors: Peyman Afshani, Rolf Fagerberg, David Hammer, Riko Jacob, Irina Kostitsyna, Ulrich Meyer, Manuel Penschuck, and Nodari Sitchinava

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
We initiate a study of algorithms with a focus on the computational complexity of individual elements, and introduce the fragile complexity of comparison-based algorithms as the maximal number of comparisons any individual element takes part in. We give a number of upper and lower bounds on the fragile complexity for fundamental problems, including Minimum, Selection, Sorting and Heap Construction. The results include both deterministic and randomized upper and lower bounds, and demonstrate a separation between the two settings for a number of problems. The depth of a comparator network is a straight-forward upper bound on the worst case fragile complexity of the corresponding fragile algorithm. We prove that fragile complexity is a different and strictly easier property than the depth of comparator networks, in the sense that for some problems a fragile complexity equal to the best network depth can be achieved with less total work and that with randomization, even a lower fragile complexity is possible.
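
To make the definition concrete (illustrative sketch, not from the paper): instrumenting a comparison-based algorithm with a per-element comparison counter directly yields the fragile complexity it incurs on a given input. A balanced knockout tournament for Minimum, for instance, realises fragile complexity ceil(log2 n), whereas a naive linear scan charges n-1 comparisons to the minimum element itself.

def tournament_min(xs):
    # Knockout tournament for the minimum, instrumented with a per-element
    # comparison counter; returns (minimum, fragile complexity incurred).
    count = [0] * len(xs)
    items = list(enumerate(xs))                # (original index, value)
    while len(items) > 1:
        nxt = []
        for a in range(0, len(items) - 1, 2):
            (i, x), (j, y) = items[a], items[a + 1]
            count[i] += 1
            count[j] += 1
            nxt.append((i, x) if x <= y else (j, y))
        if len(items) % 2:                     # odd one out gets a bye this round
            nxt.append(items[-1])
        items = nxt
    return items[0][1], max(count)

print(tournament_min(list(range(100))))        # (0, 7), and ceil(log2(100)) = 7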

Cite as

Peyman Afshani, Rolf Fagerberg, David Hammer, Riko Jacob, Irina Kostitsyna, Ulrich Meyer, Manuel Penschuck, and Nodari Sitchinava. Fragile Complexity of Comparison-Based Algorithms. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 2:1-2:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{afshani_et_al:LIPIcs.ESA.2019.2,
  author =	{Afshani, Peyman and Fagerberg, Rolf and Hammer, David and Jacob, Riko and Kostitsyna, Irina and Meyer, Ulrich and Penschuck, Manuel and Sitchinava, Nodari},
  title =	{{Fragile Complexity of Comparison-Based Algorithms}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{2:1--2:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.2},
  URN =		{urn:nbn:de:0030-drops-111235},
  doi =		{10.4230/LIPIcs.ESA.2019.2},
  annote =	{Keywords: Algorithms, comparison based algorithms, lower bounds}
}
Document
Efficiently Generating Geometric Inhomogeneous and Hyperbolic Random Graphs

Authors: Thomas Bläsius, Tobias Friedrich, Maximilian Katzmann, Ulrich Meyer, Manuel Penschuck, and Christopher Weyand

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
Hyperbolic random graphs (HRG) and geometric inhomogeneous random graphs (GIRG) are two similar generative network models that were designed to resemble complex real world networks. In particular, they have a power-law degree distribution with controllable exponent beta, and high clustering that can be controlled via the temperature T. We present the first implementation of an efficient GIRG generator running in expected linear time. Besides varying temperatures, it also supports underlying geometries of higher dimensions. It is capable of generating graphs with ten million edges in under a second on commodity hardware. The algorithm can be adapted to HRGs. Our resulting implementation is the fastest sequential HRG generator, despite the fact that we support non-zero temperatures. Though non-zero temperatures are crucial for many applications, most existing generators are restricted to T = 0. We also support parallelization, although this is not the focus of this paper. Moreover, we note that our generators draw from the correct probability distribution, i.e., they involve no approximation. Besides the generators themselves, we also provide an efficient algorithm to determine the non-trivial dependency between the average degree of the resulting graph and the input parameters of the GIRG model. This makes it possible to specify the desired expected average degree as input. Moreover, we investigate the differences between HRGs and GIRGs, shedding new light on the nature of the relation between the two models. Although HRGs represent, in a certain sense, a special case of the GIRG model, we find that a straight-forward inclusion does not hold in practice. However, the difference is negligible for most use cases.
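
Purely to fix intuition for the threshold case (T = 0), the quadratic-time baseline that the paper's generators are designed to avoid looks roughly as follows. The sampling conventions (radial density proportional to sinh(alpha*r), power-law exponent commonly beta = 2*alpha + 1) follow the usual HRG literature and are assumptions of this sketch, not code from the paper.

import math, random

def naive_hrg(n, alpha, R, rng=random.Random(42)):
    # Naive O(n^2) generator for threshold (T = 0) hyperbolic random graphs:
    # sample n points in the hyperbolic disk of radius R and connect every
    # pair at hyperbolic distance at most R. alpha steers the power-law
    # exponent, R controls the average degree.
    pts = []
    for _ in range(n):
        u = rng.random()
        r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
        pts.append((r, rng.uniform(0.0, 2.0 * math.pi)))
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            (r1, t1), (r2, t2) = pts[i], pts[j]
            dphi = math.pi - abs(math.pi - abs(t1 - t2))     # angular distance
            cosh_d = (math.cosh(r1) * math.cosh(r2)
                      - math.sinh(r1) * math.sinh(r2) * math.cos(dphi))
            if math.acosh(max(1.0, cosh_d)) <= R:            # hyperbolic distance test
                edges.append((i, j))
    return edges

print(len(naive_hrg(1000, alpha=0.75, R=12.0)))   # edge count of one sample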

Cite as

Thomas Bläsius, Tobias Friedrich, Maximilian Katzmann, Ulrich Meyer, Manuel Penschuck, and Christopher Weyand. Efficiently Generating Geometric Inhomogeneous and Hyperbolic Random Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 21:1-21:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{blasius_et_al:LIPIcs.ESA.2019.21,
  author =	{Bl\"{a}sius, Thomas and Friedrich, Tobias and Katzmann, Maximilian and Meyer, Ulrich and Penschuck, Manuel and Weyand, Christopher},
  title =	{{Efficiently Generating Geometric Inhomogeneous and Hyperbolic Random Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{21:1--21:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.21},
  URN =		{urn:nbn:de:0030-drops-111424},
  doi =		{10.4230/LIPIcs.ESA.2019.21},
  annote =	{Keywords: hyperbolic random graphs, geometric inhomogeneous random graph, efficient network generation}
}
Document
Parallel and I/O-efficient Randomisation of Massive Networks using Global Curveball Trades

Authors: Corrie Jacobien Carstens, Michael Hamann, Ulrich Meyer, Manuel Penschuck, Hung Tran, and Dorothea Wagner

Published in: LIPIcs, Volume 112, 26th Annual European Symposium on Algorithms (ESA 2018)


Abstract
Graph randomisation is a crucial task in the analysis and synthesis of networks. It is typically implemented as an edge switching Markov chain (ESMC) repeatedly swapping the nodes of random edge pairs while maintaining the degrees involved [Mihail and Zegura, 2003]. Curveball is a novel approach that instead considers the whole neighbourhoods of randomly drawn node pairs. Its Markov chain converges to a uniform distribution, and experiments suggest that it requires fewer steps than the established ESMC [Carstens et al., 2016]. Since trades, however, are more expensive, we study Curveball's practical runtime by introducing the first efficient Curveball algorithms: the I/O-efficient EM-CB for simple undirected graphs and its internal-memory counterpart IM-CB. Further, we investigate global trades [Carstens et al., 2016] processing every node in a single super step, and show that undirected global trades converge to a uniform distribution and perform better in practice. We then discuss EM-GCB and EM-PGCB for global trades and give experimental evidence that EM-PGCB achieves the quality of the state-of-the-art ESMC algorithm EM-ES [M. Hamann et al., 2017] nearly one order of magnitude faster.
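
A single Curveball trade is easy to state in full: the shared neighbours of the two drawn nodes stay fixed, the remaining neighbours are pooled and re-split at random, so both degrees are preserved. The sketch below operates on an in-memory dict of neighbour sets and is only meant to illustrate the trade itself, not the I/O-efficient EM-CB/IM-CB/EM-PGCB algorithms of the paper.

import random

def curveball_trade(adj, u, v, rng=random.Random(0)):
    # One Curveball trade on a simple undirected graph stored as a dict of
    # neighbour sets: neighbours shared by u and v (and the edge u-v itself,
    # if present) stay fixed; the remaining neighbours are pooled, shuffled
    # and re-split so that deg(u) and deg(v) are preserved.
    a, b = adj[u] - {v}, adj[v] - {u}
    shared = a & b
    pool = list((a | b) - shared)
    rng.shuffle(pool)
    keep = len(a) - len(shared)                 # how many tradeables u keeps
    new_a = shared | set(pool[:keep])
    new_b = shared | set(pool[keep:])
    for x in a - new_a: adj[x].discard(u)       # update reverse adjacency
    for x in b - new_b: adj[x].discard(v)
    for x in new_a - a: adj[x].add(u)
    for x in new_b - b: adj[x].add(v)
    adj[u] = new_a | ({v} if v in adj[u] else set())
    adj[v] = new_b | ({u} if u in adj[v] else set())

# Example: every node keeps its degree under a trade.
g = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2, 4}, 4: {3}}
curveball_trade(g, 0, 3)
print({x: len(g[x]) for x in g})                # {0: 2, 1: 1, 2: 2, 3: 2, 4: 1}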

Cite as

Corrie Jacobien Carstens, Michael Hamann, Ulrich Meyer, Manuel Penschuck, Hung Tran, and Dorothea Wagner. Parallel and I/O-efficient Randomisation of Massive Networks using Global Curveball Trades. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{carstens_et_al:LIPIcs.ESA.2018.11,
  author =	{Carstens, Corrie Jacobien and Hamann, Michael and Meyer, Ulrich and Penschuck, Manuel and Tran, Hung and Wagner, Dorothea},
  title =	{{Parallel and I/O-efficient Randomisation of Massive Networks using Global Curveball Trades}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{11:1--11:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.11},
  URN =		{urn:nbn:de:0030-drops-94745},
  doi =		{10.4230/LIPIcs.ESA.2018.11},
  annote =	{Keywords: Graph randomisation, Curveball, I/O-efficiency, Parallelism}
}
Document
On Dynamic Breadth-First Search in External-Memory

Authors: Ulrich Meyer

Published in: LIPIcs, Volume 1, 25th International Symposium on Theoretical Aspects of Computer Science (2008)


Abstract
We provide the first non-trivial result on dynamic breadth-first search (BFS) in external-memory: For general sparse undirected graphs of initially $n$ nodes and $O(n)$ edges and monotone update sequences of either $\Theta(n)$ edge insertions or $\Theta(n)$ edge deletions, we prove an amortized high-probability bound of $O(n/B^{2/3} + \mathrm{sort}(n) \cdot \log B)$ I/Os per update. In contrast, the currently best approach for static BFS on sparse undirected graphs requires $\Omega(n/B^{1/2} + \mathrm{sort}(n))$ I/Os.

Cite as

Ulrich Meyer. On Dynamic Breadth-First Search in External-Memory. In 25th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 1, pp. 551-560, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{meyer:LIPIcs.STACS.2008.1316,
  author =	{Meyer, Ulrich},
  title =	{{On Dynamic Breadth-First Search in External-Memory}},
  booktitle =	{25th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{551--560},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-06-4},
  ISSN =	{1868-8969},
  year =	{2008},
  volume =	{1},
  editor =	{Albers, Susanne and Weil, Pascal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2008.1316},
  URN =		{urn:nbn:de:0030-drops-13167},
  doi =		{10.4230/LIPIcs.STACS.2008.1316},
  annote =	{Keywords: External Memory, Dynamic Graph Algorithms, BFS, Randomization}
}