12 Search Results for "Heuer, Tobias"


Document
Recent Trends in Graph Decomposition (Dagstuhl Seminar 23331)

Authors: George Karypis, Christian Schulz, Darren Strash, Deepak Ajwani, Rob H. Bisseling, Katrin Casel, Ümit V. Çatalyürek, Cédric Chevalier, Florian Chudigiewitsch, Marcelo Fonseca Faraj, Michael Fellows, Lars Gottesbüren, Tobias Heuer, Kamer Kaya, Jakub Lacki, Johannes Langguth, Xiaoye Sherry Li, Ruben Mayer, Johannes Meintrup, Yosuke Mizutani, François Pellegrini, Fabrizio Petrini, Frances Rosamond, Ilya Safro, Sebastian Schlag, Roohani Sharma, Blair D. Sullivan, Bora Uçar, and Albert-Jan Yzelman

Published in: Dagstuhl Reports, Volume 13, Issue 8 (2024)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 23331 "Recent Trends in Graph Decomposition", which took place from August 13 to August 18, 2023. The seminar brought together 33 experts from academia and industry to discuss graph decomposition, a pivotal technique for handling massive graphs in applications such as social networks and scientific simulations. The seminar addressed the challenges posed by contemporary hardware designs, the potential of deep neural networks and reinforcement learning in developing heuristics, the unique optimization requirements of large sparse data, and the need for scalable algorithms suitable for emerging architectures. Through presentations, discussions, and collaborative sessions, the event fostered an exchange of innovative ideas, leading to the creation of community notes highlighting key open problems in the field.

Cite as

George Karypis, Christian Schulz, Darren Strash, Deepak Ajwani, Rob H. Bisseling, Katrin Casel, Ümit V. Çatalyürek, Cédric Chevalier, Florian Chudigiewitsch, Marcelo Fonseca Faraj, Michael Fellows, Lars Gottesbüren, Tobias Heuer, Kamer Kaya, Jakub Lacki, Johannes Langguth, Xiaoye Sherry Li, Ruben Mayer, Johannes Meintrup, Yosuke Mizutani, François Pellegrini, Fabrizio Petrini, Frances Rosamond, Ilya Safro, Sebastian Schlag, Roohani Sharma, Blair D. Sullivan, Bora Uçar, and Albert-Jan Yzelman. Recent Trends in Graph Decomposition (Dagstuhl Seminar 23331). In Dagstuhl Reports, Volume 13, Issue 8, pp. 1-45, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{karypis_et_al:DagRep.13.8.1,
  author =	{Karypis, George and Schulz, Christian and Strash, Darren and Ajwani, Deepak and Bisseling, Rob H. and Casel, Katrin and \c{C}ataly\"{u}rek, \"{U}mit V. and Chevalier, C\'{e}dric and Chudigiewitsch, Florian and Faraj, Marcelo Fonseca and Fellows, Michael and Gottesb\"{u}ren, Lars and Heuer, Tobias and Kaya, Kamer and Lacki, Jakub and Langguth, Johannes and Li, Xiaoye Sherry and Mayer, Ruben and Meintrup, Johannes and Mizutani, Yosuke and Pellegrini, Fran\c{c}ois and Petrini, Fabrizio and Rosamond, Frances and Safro, Ilya and Schlag, Sebastian and Sharma, Roohani and Sullivan, Blair D. and U\c{c}ar, Bora and Yzelman, Albert-Jan},
  title =	{{Recent Trends in Graph Decomposition (Dagstuhl Seminar 23331)}},
  pages =	{1--45},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2024},
  volume =	{13},
  number =	{8},
  editor =	{Karypis, George and Schulz, Christian and Strash, Darren},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.13.8.1},
  URN =		{urn:nbn:de:0030-drops-198114},
  doi =		{10.4230/DagRep.13.8.1},
  annote =	{Keywords: combinatorial optimization, experimental algorithmics, parallel algorithms}
}
Document
The PACE 2022 Parameterized Algorithms and Computational Experiments Challenge: Directed Feedback Vertex Set

Authors: Ernestine Großmann, Tobias Heuer, Christian Schulz, and Darren Strash

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
The Parameterized Algorithms and Computational Experiments challenge (PACE) 2022 was devoted to engineering algorithms for the NP-hard Directed Feedback Vertex Set (DFVS) problem. The DFVS problem is to find a minimum subset X ⊆ V in a given directed graph G = (V,E) such that, when all vertices of X and their adjacent edges are deleted from G, the remainder is acyclic. Overall, the challenge had 90 participants from 26 teams, 12 countries, and 3 continents who submitted their implementations to this year's competition. In this report, we briefly describe the setup of the challenge, the selection of benchmark instances, and the ranking of the participating teams. We also briefly outline the approaches used in the submitted solvers.
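Illustrative note (not part of the report): the feasibility condition above can be checked directly with a topological-sort test. The minimal Python sketch below, using a toy edge-list representation and function names chosen only for this example, verifies whether deleting a candidate set X leaves the remaining digraph acyclic.

from collections import defaultdict

def is_feedback_vertex_set(vertices, edges, x):
    """Check whether removing the vertex set `x` from the digraph
    (vertices, edges) leaves an acyclic remainder (Kahn's algorithm)."""
    x = set(x)
    remaining = [v for v in vertices if v not in x]
    succ = defaultdict(list)
    indeg = {v: 0 for v in remaining}
    for u, v in edges:
        if u not in x and v not in x:
            succ[u].append(v)
            indeg[v] += 1
    # Repeatedly remove sources; the remainder is acyclic iff every
    # remaining vertex can be removed this way.
    queue = [v for v in remaining if indeg[v] == 0]
    removed = 0
    while queue:
        u = queue.pop()
        removed += 1
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return removed == len(remaining)

# Example: the 2-cycle 1 <-> 2 becomes acyclic after deleting vertex 2.
print(is_feedback_vertex_set([1, 2, 3], [(1, 2), (2, 1), (2, 3)], {2}))  # True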

Cite as

Ernestine Großmann, Tobias Heuer, Christian Schulz, and Darren Strash. The PACE 2022 Parameterized Algorithms and Computational Experiments Challenge: Directed Feedback Vertex Set. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 26:1-26:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{gromann_et_al:LIPIcs.IPEC.2022.26,
  author =	{Gro{\ss}mann, Ernestine and Heuer, Tobias and Schulz, Christian and Strash, Darren},
  title =	{{The PACE 2022 Parameterized Algorithms and Computational Experiments Challenge: Directed Feedback Vertex Set}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{26:1--26:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.26},
  URN =		{urn:nbn:de:0030-drops-173826},
  doi =		{10.4230/LIPIcs.IPEC.2022.26},
  annote =	{Keywords: Feedback Vertex Set, Algorithm Engineering, FPT, Kernelization, Heuristics}
}
Document
Parallel Flow-Based Hypergraph Partitioning

Authors: Lars Gottesbüren, Tobias Heuer, and Peter Sanders

Published in: LIPIcs, Volume 233, 20th International Symposium on Experimental Algorithms (SEA 2022)


Abstract
We present a shared-memory parallelization of flow-based refinement, currently considered the most powerful iterative improvement technique for hypergraph partitioning. Flow-based refinement works on bipartitions, so current sequential partitioners schedule it on different block pairs to improve k-way partitions. We investigate two different sources of parallelism: a parallel scheduling scheme and a parallel maximum-flow algorithm based on the well-known push-relabel algorithm. In addition to thoroughly engineered implementations, we propose several optimizations that substantially accelerate the algorithm in practice, enabling its use on extremely large hypergraphs (up to 1 billion pins). We integrate our approach into the state-of-the-art parallel multilevel framework Mt-KaHyPar and conduct extensive experiments on a benchmark set of more than 500 real-world hypergraphs to show that the partition quality of our code is on par with the highest-quality sequential code (KaHyPar), while being an order of magnitude faster with 10 threads.
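Illustrative note (not from the paper): the scheduling idea can be pictured as running 2-way refinements on vertex-disjoint block pairs concurrently. The Python sketch below uses a simple round-robin pairing and a thread pool; refine_pair is a hypothetical stand-in for an actual flow-based refinement, and the paper's adaptive scheduler and parallel push-relabel algorithm are not modeled here. Pairs within one round share no block, so running them in parallel cannot interleave moves on the same block.

from concurrent.futures import ThreadPoolExecutor

def round_robin_rounds(k):
    """Circle method: split blocks 0..k-1 into rounds of disjoint pairs,
    so all pairs within a round can be refined concurrently."""
    blocks = list(range(k))
    if k % 2:
        blocks.append(None)          # dummy block -> its partner idles
    n = len(blocks)
    rounds = []
    for _ in range(n - 1):
        pairs = [(blocks[i], blocks[n - 1 - i]) for i in range(n // 2)]
        rounds.append([(a, b) for a, b in pairs if a is not None and b is not None])
        blocks = [blocks[0]] + [blocks[-1]] + blocks[1:-1]
    return rounds

def refine_pair(a, b):
    # Placeholder for a 2-way flow-based refinement on blocks a and b.
    return f"refined blocks {a} and {b}"

def parallel_refinement(k, threads=10):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for rnd in round_robin_rounds(k):
            # Pairs in one round are block-disjoint, so they may run in parallel.
            for result in pool.map(lambda p: refine_pair(*p), rnd):
                print(result)

parallel_refinement(k=4)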

Cite as

Lars Gottesbüren, Tobias Heuer, and Peter Sanders. Parallel Flow-Based Hypergraph Partitioning. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 5:1-5:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{gottesburen_et_al:LIPIcs.SEA.2022.5,
  author =	{Gottesb\"{u}ren, Lars and Heuer, Tobias and Sanders, Peter},
  title =	{{Parallel Flow-Based Hypergraph Partitioning}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{5:1--5:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.5},
  URN =		{urn:nbn:de:0030-drops-165393},
  doi =		{10.4230/LIPIcs.SEA.2022.5},
  annote =	{Keywords: multilevel hypergraph partitioning, shared-memory algorithms, maximum flow}
}
Document
A Branch-And-Bound Algorithm for Cluster Editing

Authors: Thomas Bläsius, Philipp Fischbeck, Lars Gottesbüren, Michael Hamann, Tobias Heuer, Jonas Spinner, Christopher Weyand, and Marcus Wilhelm

Published in: LIPIcs, Volume 233, 20th International Symposium on Experimental Algorithms (SEA 2022)


Abstract
The cluster editing problem asks to transform a given graph into a disjoint union of cliques by inserting and deleting as few edges as possible. We describe and evaluate an exact branch-and-bound algorithm for cluster editing. For this, we introduce new reduction rules and adapt existing ones. Moreover, we generalize a known packing technique to obtain lower bounds and experimentally show that it contributes significantly to the performance of the solver. Our experiments further evaluate the effectiveness of the different reduction rules and examine the effects of structural properties of the input graph on solver performance. Our solver won the exact track of the 2021 PACE challenge.
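Illustrative note (not from the paper): the objective minimized by the solver can be made concrete by counting, for a fixed clustering, the edge insertions and deletions needed to turn the graph into the corresponding cluster graph. The Python sketch below does exactly that for a toy instance; names and representation are chosen for this example only.

from itertools import combinations

def editing_cost(n, edges, clusters):
    """Number of edge insertions/deletions needed so that each cluster
    becomes a clique and no edges remain between clusters."""
    edge_set = {frozenset(e) for e in edges}
    cluster_of = {}
    for cid, cluster in enumerate(clusters):
        for v in cluster:
            cluster_of[v] = cid
    insertions = deletions = 0
    for u, v in combinations(range(n), 2):
        same = cluster_of[u] == cluster_of[v]
        present = frozenset((u, v)) in edge_set
        if same and not present:
            insertions += 1            # missing intra-cluster edge
        elif not same and present:
            deletions += 1             # edge between different clusters
    return insertions + deletions

# A path 0-1-2 with clustering {{0,1,2}} needs one insertion (edge 0-2).
print(editing_cost(3, [(0, 1), (1, 2)], [[0, 1, 2]]))  # 1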

Cite as

Thomas Bläsius, Philipp Fischbeck, Lars Gottesbüren, Michael Hamann, Tobias Heuer, Jonas Spinner, Christopher Weyand, and Marcus Wilhelm. A Branch-And-Bound Algorithm for Cluster Editing. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 13:1-13:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{blasius_et_al:LIPIcs.SEA.2022.13,
  author =	{Bl\"{a}sius, Thomas and Fischbeck, Philipp and Gottesb\"{u}ren, Lars and Hamann, Michael and Heuer, Tobias and Spinner, Jonas and Weyand, Christopher and Wilhelm, Marcus},
  title =	{{A Branch-And-Bound Algorithm for Cluster Editing}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{13:1--13:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.13},
  URN =		{urn:nbn:de:0030-drops-165473},
  doi =		{10.4230/LIPIcs.SEA.2022.13},
  annote =	{Keywords: cluster editing}
}
Document
PACE Solver Description: The KaPoCE Exact Cluster Editing Algorithm

Authors: Thomas Bläsius, Philipp Fischbeck, Lars Gottesbüren, Michael Hamann, Tobias Heuer, Jonas Spinner, Christopher Weyand, and Marcus Wilhelm

Published in: LIPIcs, Volume 214, 16th International Symposium on Parameterized and Exact Computation (IPEC 2021)


Abstract
The cluster editing problem is to transform an input graph into a cluster graph by performing a minimum number of edge editing operations. A cluster graph is a graph where each connected component is a clique. An edit operation can be either adding a new edge or removing an existing edge. In this write-up we outline the core techniques used in the exact cluster editing algorithm of the KaPoCE framework (which also contains a heuristic solver), submitted to the exact track of the 2021 PACE challenge.

Cite as

Thomas Bläsius, Philipp Fischbeck, Lars Gottesbüren, Michael Hamann, Tobias Heuer, Jonas Spinner, Christopher Weyand, and Marcus Wilhelm. PACE Solver Description: The KaPoCE Exact Cluster Editing Algorithm. In 16th International Symposium on Parameterized and Exact Computation (IPEC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 214, pp. 27:1-27:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{blasius_et_al:LIPIcs.IPEC.2021.27,
  author =	{Bl\"{a}sius, Thomas and Fischbeck, Philipp and Gottesb\"{u}ren, Lars and Hamann, Michael and Heuer, Tobias and Spinner, Jonas and Weyand, Christopher and Wilhelm, Marcus},
  title =	{{PACE Solver Description: The KaPoCE Exact Cluster Editing Algorithm}},
  booktitle =	{16th International Symposium on Parameterized and Exact Computation (IPEC 2021)},
  pages =	{27:1--27:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-216-7},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{214},
  editor =	{Golovach, Petr A. and Zehavi, Meirav},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2021.27},
  URN =		{urn:nbn:de:0030-drops-154109},
  doi =		{10.4230/LIPIcs.IPEC.2021.27},
  annote =	{Keywords: cluster editing}
}
Document
PACE Solver Description: KaPoCE: A Heuristic Cluster Editing Algorithm

Authors: Thomas Bläsius, Philipp Fischbeck, Lars Gottesbüren, Michael Hamann, Tobias Heuer, Jonas Spinner, Christopher Weyand, and Marcus Wilhelm

Published in: LIPIcs, Volume 214, 16th International Symposium on Parameterized and Exact Computation (IPEC 2021)


Abstract
The cluster editing problem is to transform an input graph into a cluster graph by performing a minimum number of edge editing operations. A cluster graph is a graph where each connected component is a clique. An edit operation can be either adding a new edge or removing an existing edge. In this write-up we outline the core techniques used in the heuristic cluster editing algorithm of the Karlsruhe and Potsdam Cluster Editing (KaPoCE) framework, submitted to the heuristic track of the 2021 PACE challenge.

Cite as

Thomas Bläsius, Philipp Fischbeck, Lars Gottesbüren, Michael Hamann, Tobias Heuer, Jonas Spinner, Christopher Weyand, and Marcus Wilhelm. PACE Solver Description: KaPoCE: A Heuristic Cluster Editing Algorithm. In 16th International Symposium on Parameterized and Exact Computation (IPEC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 214, pp. 31:1-31:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{blasius_et_al:LIPIcs.IPEC.2021.31,
  author =	{Bl\"{a}sius, Thomas and Fischbeck, Philipp and Gottesb\"{u}ren, Lars and Hamann, Michael and Heuer, Tobias and Spinner, Jonas and Weyand, Christopher and Wilhelm, Marcus},
  title =	{{PACE Solver Description: KaPoCE: A Heuristic Cluster Editing Algorithm}},
  booktitle =	{16th International Symposium on Parameterized and Exact Computation (IPEC 2021)},
  pages =	{31:1--31:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-216-7},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{214},
  editor =	{Golovach, Petr A. and Zehavi, Meirav},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2021.31},
  URN =		{urn:nbn:de:0030-drops-154147},
  doi =		{10.4230/LIPIcs.IPEC.2021.31},
  annote =	{Keywords: cluster editing, local search, variable neighborhood search}
}
Document
Deep Multilevel Graph Partitioning

Authors: Lars Gottesbüren, Tobias Heuer, Peter Sanders, Christian Schulz, and Daniel Seemaier

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
Partitioning a graph into blocks of "roughly equal" weight while cutting only a few edges is a fundamental problem in computer science with a wide range of applications. In particular, the problem is a building block in applications that require parallel processing. While the number of available cores in parallel architectures has increased significantly in recent years, state-of-the-art graph partitioning algorithms do not work well if the input needs to be partitioned into a large number of blocks. In this case, currently available algorithms often compute highly imbalanced solutions, solutions of low quality, or have excessive running times. This is because most high-quality general-purpose graph partitioners are multilevel algorithms, which perform graph coarsening to build a hierarchy of graphs, initial partitioning to compute an initial solution, and local improvement to improve the solution throughout the hierarchy. However, for a large number of blocks, the smallest graph in the hierarchy that is used for initial partitioning still has to be large. In this work, we substantially mitigate these problems by introducing deep multilevel graph partitioning and a shared-memory implementation thereof. Our scheme continues the multilevel approach deep into initial partitioning, integrating it into a framework in which recursive bipartitioning and direct k-way partitioning are combined such that they can operate with high performance and quality. Our integrated approach is stronger, more flexible, arguably more elegant, and reduces bottlenecks for parallelization compared to existing multilevel approaches. For example, for a large number of blocks our algorithm is on average at least an order of magnitude faster than competing algorithms while computing partitions with comparable solution quality. At the same time, our algorithm consistently produces balanced solutions. Moreover, for a small number of blocks, our algorithm is the fastest among competing systems with comparable quality.
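Illustrative note (not from the paper): plain recursive bipartitioning, one of the two ingredients combined by deep multilevel partitioning, splits the target block count k across the two sides of each bipartition. The Python sketch below shows only this driver; the toy 2-way "partitioner" simply splits a vertex list at the requested fraction and stands in for a real multilevel bipartitioner.

def recursive_bipartition(vertices, k, bipartition):
    """Partition `vertices` into k blocks by recursive bipartitioning.
    `bipartition(vertices, weight_fraction)` must return (left, right);
    here it is a stand-in for a (multilevel) 2-way partitioner."""
    if k == 1:
        return [list(vertices)]
    k_left = (k + 1) // 2                     # split k proportionally
    left, right = bipartition(vertices, k_left / k)
    return (recursive_bipartition(left, k_left, bipartition)
            + recursive_bipartition(right, k - k_left, bipartition))

# Toy 2-way "partitioner": split a vertex list at the requested fraction.
split = lambda vs, frac: (vs[:round(len(vs) * frac)], vs[round(len(vs) * frac):])
print(recursive_bipartition(list(range(8)), 3, split))
# -> [[0, 1], [2, 3, 4], [5, 6, 7]] (three blocks of roughly equal size)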

Cite as

Lars Gottesbüren, Tobias Heuer, Peter Sanders, Christian Schulz, and Daniel Seemaier. Deep Multilevel Graph Partitioning. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 48:1-48:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{gottesburen_et_al:LIPIcs.ESA.2021.48,
  author =	{Gottesb\"{u}ren, Lars and Heuer, Tobias and Sanders, Peter and Schulz, Christian and Seemaier, Daniel},
  title =	{{Deep Multilevel Graph Partitioning}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{48:1--48:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.48},
  URN =		{urn:nbn:de:0030-drops-146298},
  doi =		{10.4230/LIPIcs.ESA.2021.48},
  annote =	{Keywords: graph partitioning, graph algorithms, multilevel, shared-memory, parallel}
}
Document
Multilevel Hypergraph Partitioning with Vertex Weights Revisited

Authors: Tobias Heuer, Nikolai Maas, and Sebastian Schlag

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
The balanced hypergraph partitioning problem (HGP) is to partition the vertex set of a hypergraph into k disjoint blocks of bounded weight, while minimizing an objective function defined on the hyperedges. Whereas real-world applications often use vertex and edge weights to accurately model the underlying problem, the HGP research community commonly works with unweighted instances. In this paper, we argue that, in the presence of vertex weights, current balance constraint definitions either yield infeasible partitioning problems or allow unnecessarily large imbalances, and we propose a new definition that overcomes these problems. We show that state-of-the-art hypergraph partitioners often struggle considerably with weighted instances and tight balance constraints (even with our new balance definition). Thus, we present a recursive-bipartitioning technique that reliably computes balanced (and hence feasible) solutions. The proposed method balances the partition by pre-assigning a small subset of the heaviest vertices to the two blocks of each bipartition (using an algorithm originally developed for the job scheduling problem) and optimizes the actual partitioning objective on the remaining vertices. We integrate our algorithm into the multilevel hypergraph partitioner KaHyPar and show that our approach is able to compute balanced partitions of high quality on a diverse set of benchmark instances.
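Illustrative note (not from the paper): a conventional balance constraint bounds every block weight by (1+ε)·⌈total/k⌉, and a plain longest-processing-time (LPT) scheduling pass is one simple way to pre-assign heavy vertices to two blocks. The Python sketch below shows both only as points of reference; the paper's new balance definition and its actual pre-assignment algorithm are not reproduced, and all names are illustrative.

import heapq

def is_balanced(block_weights, total, k, eps):
    """Conventional HGP balance constraint: every block weight must be
    at most (1 + eps) * ceil(total / k)."""
    l_max = (1 + eps) * -(-total // k)        # ceiling division
    return all(w <= l_max for w in block_weights)

def lpt_preassign(weights, num_blocks=2):
    """Greedy LPT scheduling: assign vertices (heaviest first) to the
    currently lightest block; used here to pre-balance heavy vertices."""
    heap = [(0, b) for b in range(num_blocks)]  # (block weight, block id)
    heapq.heapify(heap)
    assignment = {}
    for v, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        load, b = heapq.heappop(heap)
        assignment[v] = b
        heapq.heappush(heap, (load + w, b))
    return assignment

weights = {"a": 9, "b": 8, "c": 2, "d": 1}
print(lpt_preassign(weights))                      # {'a': 0, 'b': 1, 'c': 1, 'd': 0}
print(is_balanced([10, 10], total=20, k=2, eps=0.03))  # True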

Cite as

Tobias Heuer, Nikolai Maas, and Sebastian Schlag. Multilevel Hypergraph Partitioning with Vertex Weights Revisited. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{heuer_et_al:LIPIcs.SEA.2021.8,
  author =	{Heuer, Tobias and Maas, Nikolai and Schlag, Sebastian},
  title =	{{Multilevel Hypergraph Partitioning with Vertex Weights Revisited}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.8},
  URN =		{urn:nbn:de:0030-drops-137802},
  doi =		{10.4230/LIPIcs.SEA.2021.8},
  annote =	{Keywords: multilevel hypergraph partitioning, balanced partitioning, vertex weights}
}
Document
Network Flow-Based Refinement for Multilevel Hypergraph Partitioning

Authors: Tobias Heuer, Peter Sanders, and Sebastian Schlag

Published in: LIPIcs, Volume 103, 17th International Symposium on Experimental Algorithms (SEA 2018)


Abstract
We present a refinement framework for multilevel hypergraph partitioning that uses max-flow computations on pairs of blocks to improve the solution quality of a k-way partition. The framework generalizes the flow-based improvement algorithm of KaFFPa from graphs to hypergraphs and is integrated into the hypergraph partitioner KaHyPar. By reducing the size of hypergraph flow networks, improving the flow model used in KaFFPa, and developing techniques to improve the running time of our algorithm, we obtain a partitioner that computes the best solutions for a wide range of benchmark hypergraphs from different application areas while still having a running time comparable to that of hMetis.
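Illustrative note (not from the paper): the core subroutine such refinement builds on is a maximum-flow (equivalently, minimum-cut) computation between two terminals. The Python sketch below is a minimal Edmonds-Karp implementation on a toy network; the hypergraph flow networks, their size reductions, and the improved flow model described in the paper are not modeled here.

from collections import deque, defaultdict

def max_flow(capacity, s, t):
    """Minimal Edmonds-Karp max-flow; `capacity` maps (u, v) -> capacity.
    Returns the flow value, which by max-flow/min-cut duality equals the
    value of a minimum s-t cut in the flow network."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:            # BFS for an augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck along the augmenting path and push flow.
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[(parent[v], v)])
            v = parent[v]
        v = t
        while parent[v] is not None:
            residual[(parent[v], v)] -= bottleneck
            residual[(v, parent[v])] += bottleneck
            v = parent[v]
        flow += bottleneck

# Tiny flow network: the minimum cut between s and t has value 2.
caps = {("s", "a"): 2, ("a", "t"): 1, ("a", "b"): 1, ("b", "t"): 1, ("s", "b"): 1}
print(max_flow(caps, "s", "t"))  # 2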

Cite as

Tobias Heuer, Peter Sanders, and Sebastian Schlag. Network Flow-Based Refinement for Multilevel Hypergraph Partitioning. In 17th International Symposium on Experimental Algorithms (SEA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 103, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{heuer_et_al:LIPIcs.SEA.2018.1,
  author =	{Heuer, Tobias and Sanders, Peter and Schlag, Sebastian},
  title =	{{Network Flow-Based Refinement for Multilevel Hypergraph Partitioning}},
  booktitle =	{17th International Symposium on Experimental Algorithms (SEA 2018)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-070-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{103},
  editor =	{D'Angelo, Gianlorenzo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2018.1},
  URN =		{urn:nbn:de:0030-drops-89368},
  doi =		{10.4230/LIPIcs.SEA.2018.1},
  annote =	{Keywords: Multilevel Hypergraph Partitioning, Network Flows, Refinement}
}
Document
Practical Range Minimum Queries Revisited

Authors: Niklas Baumstark, Simon Gog, Tobias Heuer, and Julian Labeit

Published in: LIPIcs, Volume 75, 16th International Symposium on Experimental Algorithms (SEA 2017)


Abstract
Finding the position of the minimal element in a subarray A[i..j] of an array A of size n is a fundamental operation in many applications. In 2011, Fischer and Heun presented the first index of size 2n+o(n) bits that answers this operation in constant time for any subarray. The index can be computed in linear time, and queries can be answered without consulting the original array. The most recent and currently fastest practical index is due to Ferrada and Navarro (DCC'16). It reduces the range minimum query (RMQ) to more fundamental and well-studied queries on binary vectors, namely rank and select, and an RMQ on an array of sublinear size derived from A. A range min-max tree is employed to solve this recursive RMQ call. In this paper, we review their practical design and suggest a series of changes which result in consistently faster query times. Specifically, we provide a customized select implementation, switch to two levels of recursion, and use the sparse table solution for the recursion base case instead of a range min-max tree. We provide an extensive empirical evaluation of our new implementation and also compare it to the state of the art. Our experimental study shows that our proposal significantly outperforms the previous solutions on established benchmarks (up to a factor of three) and furthermore accelerates real-world applications such as traversing a succinct tree or listing all distinct elements in an interval of an array.
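Illustrative note (not from the paper): the sparse-table solution mentioned above answers range minimum queries in constant time after O(n log n) preprocessing by precomputing window minima for all power-of-two lengths. A minimal Python sketch, with names chosen for this example:

def build_sparse_table(a):
    """O(n log n) sparse table storing, for each power-of-two length,
    the index of the minimum of every window of that length."""
    n = len(a)
    table = [list(range(n))]                  # windows of length 1
    length = 1
    while 2 * length <= n:
        prev = table[-1]
        row = []
        for i in range(n - 2 * length + 1):
            left, right = prev[i], prev[i + length]
            row.append(left if a[left] <= a[right] else right)
        table.append(row)
        length *= 2
    return table

def rmq(a, table, i, j):
    """Index of the minimum of a[i..j] (inclusive) in O(1) time."""
    k = (j - i + 1).bit_length() - 1          # largest power of two <= length
    left, right = table[k][i], table[k][j - (1 << k) + 1]
    return left if a[left] <= a[right] else right

a = [5, 2, 4, 7, 1, 3, 6]
t = build_sparse_table(a)
print(rmq(a, t, 2, 6))  # 4, since a[4] = 1 is the minimum of a[2..6]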

Cite as

Niklas Baumstark, Simon Gog, Tobias Heuer, and Julian Labeit. Practical Range Minimum Queries Revisited. In 16th International Symposium on Experimental Algorithms (SEA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 75, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{baumstark_et_al:LIPIcs.SEA.2017.12,
  author =	{Baumstark, Niklas and Gog, Simon and Heuer, Tobias and Labeit, Julian},
  title =	{{Practical Range Minimum Queries Revisited}},
  booktitle =	{16th International Symposium on Experimental Algorithms (SEA 2017)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-036-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{75},
  editor =	{Iliopoulos, Costas S. and Pissis, Solon P. and Puglisi, Simon J. and Raman, Rajeev},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2017.12},
  URN =		{urn:nbn:de:0030-drops-76158},
  doi =		{10.4230/LIPIcs.SEA.2017.12},
  annote =	{Keywords: Succinct Data Structures, Range Minimum Queries, Algorithm Engineering}
}
Document
The Quantile Index - Succinct Self-Index for Top-k Document Retrieval

Authors: Niklas Baumstark, Simon Gog, Tobias Heuer, and Julian Labeit

Published in: LIPIcs, Volume 75, 16th International Symposium on Experimental Algorithms (SEA 2017)


Abstract
One of the central problems in information retrieval is to find the k documents in a large text collection that best match a query given by a user. A recent result of Navarro and Nekrich (SODA 2012) showed that single-term and phrase queries of length m can be solved in optimal O(m+k) time using an index of linear size in words. While a verbatim implementation of the index would be at least an order of magnitude larger than the original collection, various authors have incrementally improved the index to a point where the space requirement is currently within a factor of 1.5 to 2.0 of the text size for standard collections. In this paper, we propose a new time/space trade-off for different top-k indexes. This is achieved by sampling only a quantile of the postings in the original inverted-file or suffix-array-based index. For those queries that cannot be answered using the sampled version of the index, we show how to compute the query results on the fly efficiently. As an example, we apply our method to the top-k framework by Navarro and Nekrich. Under probabilistic assumptions that hold for most standard texts, and for a standard scoring function called term frequency, our index can be represented with only sublinearly many bits plus the space needed for a compressed suffix array of the text, while maintaining poly-logarithmic query times. We evaluate our solution on real-world datasets and compare its practical space usage and performance against state-of-the-art implementations. Our experiments show that our index compresses below the size of the original text. To our knowledge, it is the first suffix-array-based text index that is able to break this bound in practice even for non-repetitive collections, while still maintaining reasonable query times of under half a millisecond on average for top-10 queries.
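Illustrative note (not from the paper): the sample-then-fall-back idea can be sketched on a toy inverted file: keep only a top quantile of each posting list and answer from the sample whenever it is large enough. The Python sketch below uses hypothetical names and ignores the succinct-data-structure machinery that gives the actual index its space guarantees.

def build_quantile_index(postings, q):
    """Keep, for each term, only the top ceil(len/q) postings by score
    (a 'quantile' sample); the remaining queries use a fallback."""
    sampled = {}
    for term, plist in postings.items():
        keep = -(-len(plist) // q)                 # ceiling division
        sampled[term] = sorted(plist, key=lambda p: -p[1])[:keep]
    return sampled

def top_k(term, k, sampled, full_postings):
    plist = sampled.get(term, [])
    if k <= len(plist):
        return plist[:k]                           # answered from the sample
    # Fall back to the full (e.g. suffix-array-based) index otherwise.
    return sorted(full_postings[term], key=lambda p: -p[1])[:k]

postings = {"graph": [("d1", 7), ("d2", 3), ("d3", 9), ("d4", 1), ("d5", 5)]}
idx = build_quantile_index(postings, q=2)
print(top_k("graph", 2, idx, postings))  # [('d3', 9), ('d1', 7)] from the sample
print(top_k("graph", 4, idx, postings))  # falls back to the full postings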

Cite as

Niklas Baumstark, Simon Gog, Tobias Heuer, and Julian Labeit. The Quantile Index - Succinct Self-Index for Top-k Document Retrieval. In 16th International Symposium on Experimental Algorithms (SEA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 75, pp. 15:1-15:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{baumstark_et_al:LIPIcs.SEA.2017.15,
  author =	{Baumstark, Niklas and Gog, Simon and Heuer, Tobias and Labeit, Julian},
  title =	{{The Quantile Index - Succinct Self-Index for Top-k Document Retrieval}},
  booktitle =	{16th International Symposium on Experimental Algorithms (SEA 2017)},
  pages =	{15:1--15:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-036-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{75},
  editor =	{Iliopoulos, Costas S. and Pissis, Solon P. and Puglisi, Simon J. and Raman, Rajeev},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2017.15},
  URN =		{urn:nbn:de:0030-drops-76183},
  doi =		{10.4230/LIPIcs.SEA.2017.15},
  annote =	{Keywords: Text Indexing, Succinct Data Structures, Top-k Document Retrieval}
}
Document
Improving Coarsening Schemes for Hypergraph Partitioning by Exploiting Community Structure

Authors: Tobias Heuer and Sebastian Schlag

Published in: LIPIcs, Volume 75, 16th International Symposium on Experimental Algorithms (SEA 2017)


Abstract
We present an improved coarsening process for multilevel hypergraph partitioning that incorporates global information about the community structure. Community detection is performed via modularity maximization on a bipartite graph representation. The approach is made suitable for different classes of hypergraphs by defining weights for the graph edges that express structural properties of the hypergraph. We integrate our approach into a leading multilevel hypergraph partitioner with strong local search algorithms and perform extensive experiments on a large benchmark set of hypergraphs stemming from application areas such as VLSI design, SAT solving, and scientific computing. Our results indicate that respecting community structure during coarsening not only significantly improves the solutions found by the initial partitioning algorithm, but also consistently improves overall solution quality.
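Illustrative note (not from the paper): the bipartite graph representation referred to above is the star expansion of the hypergraph, with one node per vertex, one node per hyperedge, and an edge connecting each hyperedge node to its pins. The Python sketch below builds such a graph with a simple 1/|e| edge weight; the paper defines several weighting schemes depending on the hypergraph class, and a modularity-based community detector (e.g., Louvain) would then be run on the resulting graph.

def star_expansion(num_vertices, hyperedges):
    """Bipartite (star-expansion) graph of a hypergraph: vertex nodes
    ('v', i) plus one node ('he', j) per hyperedge, connected to its
    pins.  Edge weights 1/|e| are one simple way to reflect hyperedge
    sizes."""
    graph = {}          # node -> {neighbor: weight}
    def add(u, v, w):
        graph.setdefault(u, {})[v] = w
        graph.setdefault(v, {})[u] = w
    for eid, pins in enumerate(hyperedges):
        he_node = ("he", eid)
        for v in pins:
            add(("v", v), he_node, 1.0 / len(pins))
    return graph

# Hypergraph with 4 vertices and hyperedges {0,1,2} and {2,3}.
g = star_expansion(4, [[0, 1, 2], [2, 3]])
print(sorted(g[("v", 2)].items()))
# Vertex 2 is connected to both hyperedge nodes, with weights 1/3 and 1/2.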

Cite as

Tobias Heuer and Sebastian Schlag. Improving Coarsening Schemes for Hypergraph Partitioning by Exploiting Community Structure. In 16th International Symposium on Experimental Algorithms (SEA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 75, pp. 21:1-21:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{heuer_et_al:LIPIcs.SEA.2017.21,
  author =	{Heuer, Tobias and Schlag, Sebastian},
  title =	{{Improving Coarsening Schemes for Hypergraph Partitioning by Exploiting Community Structure}},
  booktitle =	{16th International Symposium on Experimental Algorithms (SEA 2017)},
  pages =	{21:1--21:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-036-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{75},
  editor =	{Iliopoulos, Costas S. and Pissis, Solon P. and Puglisi, Simon J. and Raman, Rajeev},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2017.21},
  URN =		{urn:nbn:de:0030-drops-76226},
  doi =		{10.4230/LIPIcs.SEA.2017.21},
  annote =	{Keywords: multilevel hypergraph partitioning, coarsening algorithms, community detection}
}