35 Search Results for "Natale, Emanuele"


Volume

LIPIcs, Volume 190

19th International Symposium on Experimental Algorithms (SEA 2021)

SEA 2021, June 7-9, 2021, Nice, France

Editors: David Coudert and Emanuele Natale

Document
Revisiting the Random Subset Sum Problem

Authors: Arthur Carvalho Walraven Da Cunha, Francesco d'Amore, Frédéric Giroire, Hicham Lesfari, Emanuele Natale, and Laurent Viennot

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
The average properties of the well-known Subset Sum Problem can be studied by means of its randomised version, where we are given a target value z, random variables X_1, …, X_n, and an error parameter ε > 0, and we seek a subset of the X_i's whose sum approximates z up to error ε. In this setup, it has been shown that, under mild assumptions on the distribution of the random variables, a sample of size 𝒪(log(1/ε)) suffices to obtain, with high probability, approximations for all values in [-1/2, 1/2]. Recently, this result has been rediscovered outside the algorithms community, enabling meaningful progress in other fields. In this work, we present an alternative proof of this theorem, with a more direct approach and resorting to more elementary tools.
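
As a rough empirical illustration of the statement above (not of the paper's proof), the following Python sketch assumes, for concreteness, that the X_i are uniform on [-1, 1], draws roughly c * log2(1/ε) of them, and brute-forces all subset sums to check that every target on a grid over [-1/2, 1/2] is hit within ε. The constant c = 3 and the grid spacing are arbitrary choices for the demo.

import math, random

def all_subset_sums(xs):
    sums = {0.0}                          # the empty subset
    for x in xs:                          # incremental subset-sum enumeration
        sums |= {s + x for s in sums}
    return sums

def check_random_subset_sum(eps=0.05, c=3.0, trials=5, seed=0):
    rng = random.Random(seed)
    n = max(1, round(c * math.log2(1 / eps)))      # sample size ~ c * log(1/eps)
    ok = True
    for _ in range(trials):
        xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        sums = all_subset_sums(xs)
        targets = [k * eps / 2 - 0.5 for k in range(2 * round(1 / eps) + 1)]
        worst = max(min(abs(s - z) for s in sums) for z in targets)
        ok = ok and worst <= eps
    return n, ok

print(check_random_subset_sum())          # typically prints (13, True)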

Cite as

Arthur Carvalho Walraven Da Cunha, Francesco d'Amore, Frédéric Giroire, Hicham Lesfari, Emanuele Natale, and Laurent Viennot. Revisiting the Random Subset Sum Problem. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 37:1-37:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{dacunha_et_al:LIPIcs.ESA.2023.37,
  author =	{Da Cunha, Arthur Carvalho Walraven and d'Amore, Francesco and Giroire, Fr\'{e}d\'{e}ric and Lesfari, Hicham and Natale, Emanuele and Viennot, Laurent},
  title =	{{Revisiting the Random Subset Sum Problem}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{37:1--37:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.37},
  URN =		{urn:nbn:de:0030-drops-186905},
  doi =		{10.4230/LIPIcs.ESA.2023.37},
  annote =	{Keywords: Random subset sum, Randomised method, Subset-sum, Combinatorics}
}
Document
Complete Volume
LIPIcs, Volume 190, SEA 2021, Complete Volume

Authors: David Coudert and Emanuele Natale

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
LIPIcs, Volume 190, SEA 2021, Complete Volume

Cite as

19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 1-434, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Proceedings{coudert_et_al:LIPIcs.SEA.2021,
  title =	{{LIPIcs, Volume 190, SEA 2021, Complete Volume}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{1--434},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021},
  URN =		{urn:nbn:de:0030-drops-137716},
  doi =		{10.4230/LIPIcs.SEA.2021},
  annote =	{Keywords: LIPIcs, Volume 190, SEA 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: David Coudert and Emanuele Natale

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

David Coudert and Emanuele Natale. Front Matter, Table of Contents, Preface, Conference Organization. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 0:i-0:xii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{coudert_et_al:LIPIcs.SEA.2021.0,
  author =	{Coudert, David and Natale, Emanuele},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{0:i--0:xii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.0},
  URN =		{urn:nbn:de:0030-drops-137722},
  doi =		{10.4230/LIPIcs.SEA.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Engineering Nearly Linear-Time Algorithms for Small Vertex Connectivity

Authors: Max Franck and Sorrachai Yingchareonthawornchai

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
Vertex connectivity is a well-studied concept in graph theory with numerous applications. A graph is k-connected if it remains connected after removing any k-1 vertices. The vertex connectivity of a graph is the maximum k such that the graph is k-connected. There is a long history of algorithmic development for efficiently computing vertex connectivity. Recently, two near-linear-time algorithms for small k were introduced by [Forster et al. SODA 2020]. Prior to that, the best known algorithm was one by [Henzinger et al. FOCS'96] with quadratic running time when k is small. In this paper, we study the practical performance of the algorithms by Forster et al. In addition, we introduce a new heuristic on a key subroutine called local cut detection, which we call degree counting. We prove that the new heuristic improves space-efficiency (which can be good for caching purposes) and allows the subroutine to terminate earlier. According to experimental results on random graphs with planted vertex cuts, random hyperbolic graphs, and real-world graphs with vertex connectivity between 4 and 15, the degree counting heuristic offers a factor of 2-4 speedup over the original non-degree counting version for most of our data. It also outperforms the previous state-of-the-art algorithm by Henzinger et al. even on relatively small graphs.
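
A tiny brute-force baseline, taken straight from the definition in the abstract, can help sanity-check results on very small graphs; it is exponential in the cut size and is in no way the Forster et al. algorithm or the degree-counting heuristic studied in the paper. The adjacency-dictionary format is an arbitrary choice for the sketch.

from itertools import combinations

def is_connected(adj, removed=frozenset()):
    nodes = [v for v in adj if v not in removed]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:                                   # plain DFS on the remaining vertices
        u = stack.pop()
        for w in adj[u]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def vertex_connectivity_bruteforce(adj):
    """Smallest k such that removing some k vertices disconnects the graph
    (n-1 for a complete graph), i.e. the vertex connectivity by definition."""
    n = len(adj)
    for k in range(n - 1):                         # try cut sizes 0, 1, 2, ...
        for cut in combinations(adj, k):
            if not is_connected(adj, frozenset(cut)):
                return k
    return n - 1

# vertex_connectivity_bruteforce({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}) == 1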

Cite as

Max Franck and Sorrachai Yingchareonthawornchai. Engineering Nearly Linear-Time Algorithms for Small Vertex Connectivity. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 1:1-1:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{franck_et_al:LIPIcs.SEA.2021.1,
  author =	{Franck, Max and Yingchareonthawornchai, Sorrachai},
  title =	{{Engineering Nearly Linear-Time Algorithms for Small Vertex Connectivity}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{1:1--1:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.1},
  URN =		{urn:nbn:de:0030-drops-137735},
  doi =		{10.4230/LIPIcs.SEA.2021.1},
  annote =	{Keywords: Algorithm Engineering, Algorithmic Graph Theory, Sublinear Algorithms}
}
Document
Parallel Five-Cycle Counting Algorithms

Authors: Louisa Ruixue Huang, Jessica Shi, and Julian Shun

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
Counting the frequency of subgraphs in large networks is a classic research question that reveals the underlying substructures of these networks for important applications. However, subgraph counting is a challenging problem, even for subgraph sizes as small as five, due to the combinatorial explosion in the number of possible occurrences. This paper focuses on the five-cycle, which is an important special case of five-vertex subgraph counting and one of the most difficult to count efficiently. We design two new parallel five-cycle counting algorithms and prove that they are work-efficient and achieve polylogarithmic span. Both algorithms are based on computing low out-degree orientations, which enables the efficient computation of directed two-paths and three-paths, and the algorithms differ in the ways in which they use this orientation to eliminate double-counting. We develop fast multicore implementations of the algorithms and propose a work scheduling optimization to improve their performance. Our experiments on a variety of real-world graphs using a 36-core machine with two-way hyper-threading show that our algorithms achieve 10-46x self-relative speed-up, outperform our serial benchmarks by 10-32x, and outperform the previous state-of-the-art serial algorithm by up to 818x.
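
For anyone experimenting with five-cycle counting, a naive reference counter is handy for validating faster implementations on small inputs. The sketch below counts each 5-cycle exactly once by anchoring it at its smallest vertex and fixing a traversal direction; it is an illustrative baseline only, not the work-efficient parallel algorithms of the paper.

def count_five_cycles(adj):
    """adj: dict mapping each vertex to a set of neighbours (undirected graph)."""
    count = 0
    for s in adj:
        # enumerate paths s -> a -> b -> c -> d using only vertices larger than s
        for a in adj[s]:
            if a <= s:
                continue
            for b in adj[a]:
                if b <= s or b == a:
                    continue
                for c in adj[b]:
                    if c <= s or c in (a, b):
                        continue
                    for d in adj[c]:
                        if d <= s or d in (a, b, c):
                            continue
                        if s in adj[d] and a < d:   # close the cycle; a < d fixes the direction
                            count += 1
    return count

# count_five_cycles({i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}) == 1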

Cite as

Louisa Ruixue Huang, Jessica Shi, and Julian Shun. Parallel Five-Cycle Counting Algorithms. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 2:1-2:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{huang_et_al:LIPIcs.SEA.2021.2,
  author =	{Huang, Louisa Ruixue and Shi, Jessica and Shun, Julian},
  title =	{{Parallel Five-Cycle Counting Algorithms}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{2:1--2:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.2},
  URN =		{urn:nbn:de:0030-drops-137749},
  doi =		{10.4230/LIPIcs.SEA.2021.2},
  annote =	{Keywords: Cycle counting, parallel algorithms, graph algorithms}
}
Document
Fast and Robust Vectorized In-Place Sorting of Primitive Types

Authors: Mark Blacher, Joachim Giesen, and Lars Kühne

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
Modern CPUs provide single instruction-multiple data (SIMD) instructions. SIMD instructions process several elements of a primitive data type simultaneously in fixed-size vectors. Classical sorting algorithms are not directly expressible in SIMD instructions. Accelerating sorting algorithms with SIMD instructions is therefore a creative endeavor. A promising approach for sorting with SIMD instructions is to use sorting networks for small arrays and Quicksort for large arrays. In this paper we improve vectorization techniques for sorting networks and Quicksort. In particular, we show how to use the full capacity of vector registers in sorting networks and how to make vectorized Quicksort robust with respect to different key distributions. To demonstrate the performance of our techniques we implement an in-place hybrid sorting algorithm for the data type int with AVX2 intrinsics. Our implementation is at least 30% faster than state-of-the-art high-performance sorting alternatives.
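
The vectorized implementation requires AVX2 intrinsics in C/C++, but the overall control structure, a compare-exchange sorting network for tiny subarrays and Quicksort above a threshold, can be sketched in scalar Python. The threshold of 4 elements and the median-of-three pivot below are illustrative choices for the sketch, not the parameters of the paper.

NETWORKS = {                                      # optimal comparator networks for 2-4 elements
    2: [(0, 1)],
    3: [(0, 1), (1, 2), (0, 1)],
    4: [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)],
}

def sort_small(a, lo, hi):
    """Sort a[lo..hi] (at most 4 elements) with a fixed compare-exchange network."""
    for i, j in NETWORKS.get(hi - lo + 1, []):
        if a[lo + i] > a[lo + j]:
            a[lo + i], a[lo + j] = a[lo + j], a[lo + i]

def hybrid_sort(a, lo=0, hi=None):
    """In-place hybrid of Quicksort (large ranges) and sorting networks (small ranges)."""
    if hi is None:
        hi = len(a) - 1
    while hi - lo + 1 > 4:
        pivot = sorted((a[lo], a[(lo + hi) // 2], a[hi]))[1]   # median of three
        i, j = lo, hi
        while i <= j:                                          # Hoare-style partition
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        if j - lo < hi - i:                                    # recurse on the smaller half
            hybrid_sort(a, lo, j)
            lo = i
        else:
            hybrid_sort(a, i, hi)
            hi = j
    sort_small(a, lo, hi)

# data = [5, 1, 4, 1, 5, 9, 2, 6]; hybrid_sort(data) sorts data in place.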

Cite as

Mark Blacher, Joachim Giesen, and Lars Kühne. Fast and Robust Vectorized In-Place Sorting of Primitive Types. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 3:1-3:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{blacher_et_al:LIPIcs.SEA.2021.3,
  author =	{Blacher, Mark and Giesen, Joachim and K\"{u}hne, Lars},
  title =	{{Fast and Robust Vectorized In-Place Sorting of Primitive Types}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{3:1--3:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.3},
  URN =		{urn:nbn:de:0030-drops-137758},
  doi =		{10.4230/LIPIcs.SEA.2021.3},
  annote =	{Keywords: Quicksort, Sorting Networks, Vectorization, SIMD, AVX2}
}
Document
Minimum Scan Cover and Variants - Theory and Experiments

Authors: Kevin Buchin, Sándor P. Fekete, Alexander Hill, Linda Kleist, Irina Kostitsyna, Dominik Krupke, Roel Lambers, and Martijn Struijs

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
We consider a spectrum of geometric optimization problems motivated by contexts such as satellite communication and astrophysics. In the problem Minimum Scan Cover with Angular Costs, we are given a graph G that is embedded in Euclidean space. The edges of G need to be scanned, i.e., probed from both of their vertices. In order to scan their edge, two vertices need to face each other; changing the heading of a vertex incurs some cost in terms of energy or rotation time that is proportional to the corresponding rotation angle. Our goal is to compute schedules that minimize the following objective functions: (i) in Minimum Makespan Scan Cover (MSC-MS), this is the time until all edges are scanned; (ii) in Minimum Total Energy Scan Cover (MSC-TE), the sum of all rotation angles; (iii) in Minimum Bottleneck Energy Scan Cover (MSC-BE), the maximum total rotation angle at one vertex. Previous theoretical work on MSC-MS revealed a close connection to graph coloring and the cut cover problem, leading to hardness and approximability results. In this paper, we present polynomial-time algorithms for 1D instances of MSC-TE and MSC-BE, but NP-hardness proofs for bipartite 2D instances. For bipartite graphs in 2D, we also give 2-approximation algorithms for both MSC-TE and MSC-BE. Most importantly, we provide a comprehensive study of practical methods for all three problems. We compare three different mixed-integer programming and two constraint programming approaches, and show how to compute provably optimal solutions for geometric instances with up to 300 edges. Additionally, we compare the performance of different meta-heuristics for even larger instances.
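
The angular cost model is easy to make concrete: the heading a vertex needs in order to scan an edge is the direction towards the other endpoint, and turning between two headings costs the smaller of the two rotation angles. The helper below (2D coordinates; all names are chosen for the sketch) computes exactly that quantity, which MSC-TE sums over all turns and MSC-BE maximises per vertex.

import math

def heading(p, q):
    """Direction (radians) vertex p must face to scan an edge towards q."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def rotation_cost(theta_from, theta_to):
    """Angular cost of turning between two headings, always in [0, pi]."""
    d = abs(theta_to - theta_from) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def rotation_at_vertex(pos, v, scan_order):
    """Total rotation performed by v while scanning towards its neighbours in scan_order."""
    headings = [heading(pos[v], pos[w]) for w in scan_order]
    return sum(rotation_cost(a, b) for a, b in zip(headings, headings[1:]))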

Cite as

Kevin Buchin, Sándor P. Fekete, Alexander Hill, Linda Kleist, Irina Kostitsyna, Dominik Krupke, Roel Lambers, and Martijn Struijs. Minimum Scan Cover and Variants - Theory and Experiments. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 4:1-4:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{buchin_et_al:LIPIcs.SEA.2021.4,
  author =	{Buchin, Kevin and Fekete, S\'{a}ndor P. and Hill, Alexander and Kleist, Linda and Kostitsyna, Irina and Krupke, Dominik and Lambers, Roel and Struijs, Martijn},
  title =	{{Minimum Scan Cover and Variants - Theory and Experiments}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{4:1--4:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.4},
  URN =		{urn:nbn:de:0030-drops-137765},
  doi =		{10.4230/LIPIcs.SEA.2021.4},
  annote =	{Keywords: Graph scanning, angular metric, makespan, energy, bottleneck, complexity, approximation, algorithm engineering, mixed-integer programming, constraint programming}
}
Document
Three Is Enough for Steiner Trees

Authors: Emmanuel Arrighi and Mateus de Oliveira Oliveira

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
In the Steiner tree problem, the input consists of an edge-weighted graph G together with a set S of terminal vertices. The goal is to find a minimum weight tree in G that spans all terminals. This fundamental NP-hard problem has direct applications in many subfields of combinatorial optimization, such as planning and scheduling. In this work, we introduce a new heuristic for the Steiner tree problem, based on a simple routine for improving the cost of sub-optimal Steiner trees: first, the sub-optimal tree is split into three connected components, and then these components are reconnected by using an algorithm that computes an optimal Steiner tree with three terminals (the roots of the three components). We have implemented our heuristic in a solver and compared it with several state-of-the-art solvers on well-known data sets. Our solver performs very well across all the data sets, and outperforms most of the other benchmarked solvers on very large graphs, which have been either obtained from real-world applications or from randomly generated data sets.
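
The reconnection step can rely on a classical fact: an optimal Steiner tree for three terminals has at most one branching vertex m, so its cost is the minimum over all vertices m of d(t1, m) + d(t2, m) + d(t3, m). Below is a minimal sketch of that computation with a plain Dijkstra (connected graph and non-negative weights assumed); the paper's heuristic naturally wraps much more machinery around this kind of subroutine.

import heapq

def dijkstra(adj, src):
    """adj: {u: [(v, w), ...]} with non-negative weights; returns distances from src."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def steiner_cost_3_terminals(adj, t1, t2, t3):
    """Cost of an optimal Steiner tree for three terminals (one branching vertex)."""
    d1, d2, d3 = dijkstra(adj, t1), dijkstra(adj, t2), dijkstra(adj, t3)
    return min(d1[v] + d2[v] + d3[v] for v in adj)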

Cite as

Emmanuel Arrighi and Mateus de Oliveira Oliveira. Three Is Enough for Steiner Trees. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 5:1-5:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{arrighi_et_al:LIPIcs.SEA.2021.5,
  author =	{Arrighi, Emmanuel and de Oliveira Oliveira, Mateus},
  title =	{{Three Is Enough for Steiner Trees}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{5:1--5:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.5},
  URN =		{urn:nbn:de:0030-drops-137775},
  doi =		{10.4230/LIPIcs.SEA.2021.5},
  annote =	{Keywords: Steiner Tree, Heuristics, 3TST}
}
Document
A Fast and Tight Heuristic for A* in Road Networks

Authors: Ben Strasser and Tim Zeitz

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
We study exact, efficient and practical algorithms for route planning in large road networks. Routing applications often require integrating the current traffic situation, planning ahead with traffic predictions for the future, respecting forbidden turns, and many other features depending on the exact application. While Dijkstra’s algorithm can be used to solve these problems, it is too slow for many applications. A* is a classical approach to accelerate Dijkstra’s algorithm. A* can support many extended scenarios without much additional implementation complexity. However, A*’s performance depends on the availability of a good heuristic that estimates distances. Computing tight distance estimates is a challenge on its own. On road networks, shortest paths can also be quickly computed using hierarchical speedup techniques. They achieve speed and exactness but sacrifice A*’s flexibility. Extending them to certain practical applications can be hard. In this paper, we present an algorithm to efficiently extract distance estimates for A* from Contraction Hierarchies (CH), a hierarchical technique. We call our heuristic CH-Potentials. Our approach allows decoupling the supported extensions from the hierarchical speed-up technique. Additionally, we describe A* optimizations to accelerate the processing of low degree nodes, which often occur in road networks.
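
Extracting the CH-based distance estimates is the technical core of the paper and is not reproduced here, but the A* side is small. The sketch below assumes non-negative edge weights and a consistent potential function potential(v) that lower-bounds the distance from v to the target; with potential ≡ 0 it degenerates to Dijkstra's algorithm, and the paper instead plugs in lower bounds obtained from a Contraction Hierarchy.

import heapq

def a_star(adj, source, target, potential):
    """adj: {u: [(v, w), ...]}; returns the length of a shortest source-target path."""
    dist = {source: 0.0}
    pq = [(potential(source), source)]              # priority = dist + potential
    while pq:
        key, u = heapq.heappop(pq)
        if u == target:
            return dist[u]
        if key > dist[u] + potential(u):            # stale queue entry
            continue
        for v, w in adj[u]:
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd + potential(v), v))
    return float("inf")

# a_star(graph, s, t, lambda v: 0.0) behaves exactly like Dijkstra's algorithm.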

Cite as

Ben Strasser and Tim Zeitz. A Fast and Tight Heuristic for A* in Road Networks. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 6:1-6:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{strasser_et_al:LIPIcs.SEA.2021.6,
  author =	{Strasser, Ben and Zeitz, Tim},
  title =	{{A Fast and Tight Heuristic for A* in Road Networks}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{6:1--6:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.6},
  URN =		{urn:nbn:de:0030-drops-137780},
  doi =		{10.4230/LIPIcs.SEA.2021.6},
  annote =	{Keywords: route planning, shortest paths, realistic road networks}
}
Document
Engineering Predecessor Data Structures for Dynamic Integer Sets

Authors: Patrick Dinklage, Johannes Fischer, and Alexander Herlez

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
We present highly optimized data structures for the dynamic predecessor problem, where the task is to maintain a set S of w-bit numbers under insertions, deletions, and predecessor queries (return the largest element in S no larger than a given key). The problem of finding predecessors can be viewed as a generalized form of the membership problem, or as a simple version of the nearest neighbour problem. It lies at the core of various real-world problems such as internet routing. In this work, we engineer (1) a simple implementation of the idea of universe reduction, similar to van Emde Boas trees, (2) variants of y-fast tries [Willard, IPL'83], and (3) B-trees with different strategies for organizing the keys contained in the nodes, including an implementation of dynamic fusion nodes [Pǎtraşcu and Thorup, FOCS'14]. We implement our data structures for w = 32, 40, 64, which covers most typical scenarios. Our data structures finish workloads faster than previous approaches while being significantly more space-efficient, e.g., they clearly outperform standard implementations of the STL by finishing up to four times as fast using less than a third of the memory. Our tests also provide more general insights on data structure design, such as how small sets should be stored and handled and if and when new CPU instructions such as advanced vector extensions pay off.
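
Before comparing engineered variants, it helps to pin down the interface: insert, delete, and predecessor (the largest stored key no larger than the query). The sorted-list baseline below, built only on Python's bisect module, fixes these semantics; it is a naive reference for the sketch, not one of the universe-reduction, y-fast-trie, or fusion-node structures engineered in the paper.

from bisect import bisect_right

class SortedListPredecessor:
    """Naive dynamic predecessor structure: O(log n) queries, O(n) updates."""
    def __init__(self):
        self._keys = []
    def insert(self, x):
        i = bisect_right(self._keys, x)
        if i == 0 or self._keys[i - 1] != x:       # keep keys distinct (a set)
            self._keys.insert(i, x)
    def delete(self, x):
        i = bisect_right(self._keys, x)
        if i and self._keys[i - 1] == x:
            del self._keys[i - 1]
    def predecessor(self, x):
        """Largest stored key <= x, or None if every stored key exceeds x."""
        i = bisect_right(self._keys, x)
        return self._keys[i - 1] if i else None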

Cite as

Patrick Dinklage, Johannes Fischer, and Alexander Herlez. Engineering Predecessor Data Structures for Dynamic Integer Sets. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 7:1-7:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{dinklage_et_al:LIPIcs.SEA.2021.7,
  author =	{Dinklage, Patrick and Fischer, Johannes and Herlez, Alexander},
  title =	{{Engineering Predecessor Data Structures for Dynamic Integer Sets}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{7:1--7:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.7},
  URN =		{urn:nbn:de:0030-drops-137799},
  doi =		{10.4230/LIPIcs.SEA.2021.7},
  annote =	{Keywords: integer data structures, dynamic data structures, predecessor, universe reduction, y-fast trie, fusion tree, B-tree}
}
Document
Multilevel Hypergraph Partitioning with Vertex Weights Revisited

Authors: Tobias Heuer, Nikolai Maas, and Sebastian Schlag

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
The balanced hypergraph partitioning problem (HGP) is to partition the vertex set of a hypergraph into k disjoint blocks of bounded weight, while minimizing an objective function defined on the hyperedges. Whereas real-world applications often use vertex and edge weights to accurately model the underlying problem, the HGP research community commonly works with unweighted instances. In this paper, we argue that, in the presence of vertex weights, current balance constraint definitions either yield infeasible partitioning problems or allow unnecessarily large imbalances and propose a new definition that overcomes these problems. We show that state-of-the-art hypergraph partitioners often struggle considerably with weighted instances and tight balance constraints (even with our new balance definition). Thus, we present a recursive-bipartitioning technique that is able to reliably compute balanced (and hence feasible) solutions. The proposed method balances the partition by pre-assigning a small subset of the heaviest vertices to the two blocks of each bipartition (using an algorithm originally developed for the job scheduling problem) and optimizes the actual partitioning objective on the remaining vertices. We integrate our algorithm into the multilevel hypergraph partitioner KaHyPar and show that our approach is able to compute balanced partitions of high quality on a diverse set of benchmark instances.
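
For reference, the balance constraint that the paper starts from can be written down in a few lines: with total vertex weight c(V), k blocks, and imbalance parameter ε, every block may weigh at most (1+ε)·⌈c(V)/k⌉. The check below implements only this standard definition; the refined definition proposed in the paper for weighted instances is deliberately not reproduced here.

import math

def max_block_weight(total_weight, k, eps):
    """Standard HGP balance bound: (1 + eps) * ceil(c(V) / k)."""
    return (1 + eps) * math.ceil(total_weight / k)

def is_balanced(block_weights, eps):
    """True iff every block respects the standard balance bound."""
    bound = max_block_weight(sum(block_weights), len(block_weights), eps)
    return all(w <= bound for w in block_weights)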

Cite as

Tobias Heuer, Nikolai Maas, and Sebastian Schlag. Multilevel Hypergraph Partitioning with Vertex Weights Revisited. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{heuer_et_al:LIPIcs.SEA.2021.8,
  author =	{Heuer, Tobias and Maas, Nikolai and Schlag, Sebastian},
  title =	{{Multilevel Hypergraph Partitioning with Vertex Weights Revisited}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.8},
  URN =		{urn:nbn:de:0030-drops-137802},
  doi =		{10.4230/LIPIcs.SEA.2021.8},
  annote =	{Keywords: multilevel hypergraph partitioning, balanced partitioning, vertex weights}
}
Document
On Tamaki’s Algorithm to Compute Treewidths

Authors: Ernst Althaus, Daniela Schnurbusch, Julian Wüschner, and Sarah Ziegler

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
We revisit Tamaki’s exact algorithm for computing the treewidth of a graph and present it in a way that facilitates improvements. The so-called I-blocks and O-blocks enumerated by the algorithm are interpreted as subtrees of the tree decomposition under construction. This simplifies the proof of correctness and allows us to discard subtrees from the enumeration by some simple observations. In our experiments, we show that one of these modifications in particular reduces the number of enumerated objects considerably.
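
The algorithm itself is intricate, but the object it ultimately certifies is easy to validate: a tree decomposition must cover every edge of the graph, and the bags containing any fixed vertex must induce a connected subtree. The checker below is a minimal sketch (isolated graph vertices are ignored, and the input conventions are assumptions of the sketch) that verifies these conditions and reports the width.

def width_of_tree_decomposition(bags, tree_edges, graph_edges):
    """bags: {tree node: set of graph vertices}; returns max bag size minus one."""
    covered = set().union(*bags.values())
    for u, v in graph_edges:                       # every graph edge lies in some bag
        assert any(u in b and v in b for b in bags.values()), (u, v)
    adj = {t: set() for t in bags}
    for s, t in tree_edges:
        adj[s].add(t)
        adj[t].add(s)
    for x in covered:                              # bags containing x form a connected subtree
        nodes = [t for t, b in bags.items() if x in b]
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            t = stack.pop()
            for s in adj[t]:
                if x in bags[s] and s not in seen:
                    seen.add(s)
                    stack.append(s)
        assert len(seen) == len(nodes), x
    return max(len(b) for b in bags.values()) - 1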

Cite as

Ernst Althaus, Daniela Schnurbusch, Julian Wüschner, and Sarah Ziegler. On Tamaki’s Algorithm to Compute Treewidths. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{althaus_et_al:LIPIcs.SEA.2021.9,
  author =	{Althaus, Ernst and Schnurbusch, Daniela and W\"{u}schner, Julian and Ziegler, Sarah},
  title =	{{On Tamaki’s Algorithm to Compute Treewidths}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.9},
  URN =		{urn:nbn:de:0030-drops-137818},
  doi =		{10.4230/LIPIcs.SEA.2021.9},
  annote =	{Keywords: Tree Decomposition, Exact Algorithm, Algorithms Engineering}
}
Document
Practical Implementation of Encoding Range Top-2 Queries

Authors: Seungbum Jo, Wooyoung Park, and Srinivasa Rao Satti

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
We design a practical variant of an encoding for range Top-2 queries (RT2Q), and evaluate its performance. Given an array A[1,n] of n elements from a total order, the range Top-2 encoding problem is to construct a data structure that can answer RT2Q queries, which return the positions of the first and the second largest elements within a given query range of A, without accessing the array A at query time. Davoodi et al. [Phil. Trans. Royal Soc. A, 2016] proposed a (3.272n + o(n))-bit encoding, which answers RT2Q queries in O(1) time, while Gawrychowski and Nicholson [ICALP, 2015] gave an optimal (2.755n + o(n))-bit encoding that does not support efficient queries. In this paper, we propose the first practical implementation of the encoding data structure for answering RT2Q. Our implementation is based on an alternative representation of Davoodi et al.’s data structure. The experimental results show that our implementation is efficient in practice, and gives improved time-space trade-offs compared to the indexing data structures (which keep the original array A as part of the data structure) for range maximum queries.
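
The query itself is simple to state operationally, which is useful when testing an encoding against a reference: given i ≤ j, return the positions of the largest and second-largest entries of A[i..j]. The linear scan below (0-based indices, ties resolved towards the leftmost position) is only such a correctness baseline; an RT2Q encoding must answer the same query without reading A at all.

def range_top2(A, i, j):
    """Positions of the largest and second-largest elements of A[i..j], inclusive.
    Returns (best, second); second is None if the range holds a single element."""
    best = second = None
    for p in range(i, j + 1):
        if best is None or A[p] > A[best]:
            best, second = p, best
        elif second is None or A[p] > A[second]:
            second = p
    return best, second

# range_top2([3, 1, 4, 1, 5], 1, 4) == (4, 2)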

Cite as

Seungbum Jo, Wooyoung Park, and Srinivasa Rao Satti. Practical Implementation of Encoding Range Top-2 Queries. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 10:1-10:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{jo_et_al:LIPIcs.SEA.2021.10,
  author =	{Jo, Seungbum and Park, Wooyoung and Satti, Srinivasa Rao},
  title =	{{Practical Implementation of Encoding Range Top-2 Queries}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{10:1--10:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.10},
  URN =		{urn:nbn:de:0030-drops-137827},
  doi =		{10.4230/LIPIcs.SEA.2021.10},
  annote =	{Keywords: Range top-2 query, Range minimum query, Cartesian tree, Succinct encoding}
}
Document
On Computing the Diameter of (Weighted) Link Streams

Authors: Marco Calamai, Pierluigi Crescenzi, and Andrea Marino

Published in: LIPIcs, Volume 190, 19th International Symposium on Experimental Algorithms (SEA 2021)


Abstract
A weighted link stream is a pair (V,𝔼) comprising V, the set of nodes, and 𝔼, the list of temporal edges (u,v,t,λ), where u,v are two nodes in V, t is the starting time of the temporal edge, and λ is its travel time. By making use of this model, different notions of diameter can be defined, which refer to the following distances: earliest arrival time, latest departure time, fastest time, and shortest time. After proving that none of these diameters can be computed in time sub-quadratic with respect to the number of temporal edges, we propose different algorithms (inspired by the approach used for computing the diameter of graphs) which allow us to compute, very efficiently in practice, the diameter of quite large real-world weighted link streams for several definitions of the diameter. Indeed, the proposed algorithms very often require only a small number of single-source (or single-target) best path computations. We verify the effectiveness of our approach by means of an extensive set of experiments on real-world link streams. We also experimentally show that the temporal version of the well-known 2-sweep technique for computing a lower bound on the diameter of a graph is quite effective in the case of weighted link streams, very often returning tight bounds.
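
Of the four distances, the earliest arrival time has the simplest single-source algorithm, which also shows why link streams are handled differently from static graphs: if the temporal edges are processed in order of their starting time, a single pass suffices. The sketch below assumes a directed interpretation of the temporal edges and positive travel times; it is a building block of the kind the diameter algorithms invoke repeatedly, not the diameter computation itself.

def earliest_arrival(temporal_edges, source, t_start=0):
    """temporal_edges: iterable of (u, v, t, lam), usable at time t from u and
    reaching v at time t + lam. Returns earliest arrival times from source."""
    INF = float("inf")
    arrival = {source: t_start}
    for u, v, t, lam in sorted(temporal_edges, key=lambda e: e[2]):
        if arrival.get(u, INF) <= t and t + lam < arrival.get(v, INF):
            arrival[v] = t + lam
    return arrival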

Cite as

Marco Calamai, Pierluigi Crescenzi, and Andrea Marino. On Computing the Diameter of (Weighted) Link Streams. In 19th International Symposium on Experimental Algorithms (SEA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 190, pp. 11:1-11:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{calamai_et_al:LIPIcs.SEA.2021.11,
  author =	{Calamai, Marco and Crescenzi, Pierluigi and Marino, Andrea},
  title =	{{On Computing the Diameter of (Weighted) Link Streams}},
  booktitle =	{19th International Symposium on Experimental Algorithms (SEA 2021)},
  pages =	{11:1--11:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-185-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{190},
  editor =	{Coudert, David and Natale, Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2021.11},
  URN =		{urn:nbn:de:0030-drops-137836},
  doi =		{10.4230/LIPIcs.SEA.2021.11},
  annote =	{Keywords: Temporal graph, shortest path, diameter}
}
