LIPIcs, Volume 233

20th International Symposium on Experimental Algorithms (SEA 2022)



Event

SEA 2022, July 25-27, 2022, Heidelberg, Germany

Editors

Christian Schulz
  • Heidelberg University, Germany
Bora Uçar
  • CNRS, Laboratoire LIP, Lyon, France

Publication Details

  • published at: 2022-07-11
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-251-8
  • DBLP: db/conf/wea/sea2022

Documents

Document
Complete Volume
LIPIcs, Volume 233, SEA 2022, Complete Volume

Authors: Christian Schulz and Bora Uçar


Abstract
LIPIcs, Volume 233, SEA 2022, Complete Volume

Cite as

20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 1-434, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Proceedings{schulz_et_al:LIPIcs.SEA.2022,
  title =	{{LIPIcs, Volume 233, SEA 2022, Complete Volume}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{1--434},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022},
  URN =		{urn:nbn:de:0030-drops-165331},
  doi =		{10.4230/LIPIcs.SEA.2022},
  annote =	{Keywords: LIPIcs, Volume 233, SEA 2022, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Christian Schulz and Bora Uçar


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 0:i-0:xii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{schulz_et_al:LIPIcs.SEA.2022.0,
  author =	{Schulz, Christian and U\c{c}ar, Bora},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{0:i--0:xii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.0},
  URN =		{urn:nbn:de:0030-drops-165342},
  doi =		{10.4230/LIPIcs.SEA.2022.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Discrete Hyperbolic Random Graph Model

Authors: Dorota Celińska-Kopczyńska and Eryk Kopczyński


Abstract
The hyperbolic random graph model (HRG) has proven useful in the analysis of scale-free networks, which are ubiquitous in many fields, from social network analysis to biology. However, working with this model is algorithmically and conceptually challenging because of the nature of the distances in the hyperbolic plane. In this paper, we propose a discrete variant of the HRG model (DHRG) where nodes are mapped to the vertices of a triangulation; our algorithms allow us to work with this model in a simple yet efficient way. We present experimental results conducted on networks, both real-world and simulated, to evaluate the practical benefits of DHRG in comparison to the HRG model.

Cite as

Dorota Celińska-Kopczyńska and Eryk Kopczyński. Discrete Hyperbolic Random Graph Model. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{celinskakopczynska_et_al:LIPIcs.SEA.2022.1,
  author =	{Celi\'{n}ska-Kopczy\'{n}ska, Dorota and Kopczy\'{n}ski, Eryk},
  title =	{{Discrete Hyperbolic Random Graph Model}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.1},
  URN =		{urn:nbn:de:0030-drops-165356},
  doi =		{10.4230/LIPIcs.SEA.2022.1},
  annote =	{Keywords: hyperbolic geometry, scale-free networks, routing, tessellation}
}
Document
Solving and Generating Nagareru Puzzles

Authors: Masakazu Ishihata and Fumiya Tokumasu


Abstract
Solving paper-and-pencil puzzles is fun for people, and their analysis is also an important topic in computational complexity theory. There are practically efficient solvers for some NP-complete puzzles; however, automatically generating interesting puzzle instances remains a hard problem because it requires checking whether the generated instance has a unique solution. In this paper, we focus on a puzzle called Nagareru and propose two methods: one implicitly enumerates all the solutions of an instance, and the other efficiently generates an instance with a unique solution. The former constructs a ZDD that implicitly represents all the solutions. The latter employs the ZDD-based solver as a building block to check whether generated instances have a unique solution. We experimentally showed that the ZDD-based solver was drastically faster than a CSP-based solver, and that our generation method created an interesting instance in a reasonable time.

Cite as

Masakazu Ishihata and Fumiya Tokumasu. Solving and Generating Nagareru Puzzles. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ishihata_et_al:LIPIcs.SEA.2022.2,
  author =	{Ishihata, Masakazu and Tokumasu, Fumiya},
  title =	{{Solving and Generating Nagareru Puzzles}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{2:1--2:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.2},
  URN =		{urn:nbn:de:0030-drops-165366},
  doi =		{10.4230/LIPIcs.SEA.2022.2},
  annote =	{Keywords: Paper-and-pencil puzzle, SAT, CSP, ZDD}
}
Document
Fast Computation of Shortest Smooth Paths and Uniformly Bounded Stretch with Lazy RPHAST

Authors: Tim Zeitz


Abstract
We study the shortest smooth path problem (SSPP), which is motivated by traffic-aware routing in road networks. The goal is to compute the fastest route according to the current traffic situation while avoiding undesired detours, such as briefly using a parking area to bypass a jammed highway. Detours are prevented by limiting the uniformly bounded stretch (UBS) with respect to a second weight function which disregards the traffic situation. The UBS is a path quality metric which measures the maximum relative length of detours on a path. In this paper, we settle the complexity of the SSPP and show that it is strongly NP-complete. We then present practical algorithms to solve the problem on continental-sized road networks both heuristically and exactly. A crucial building block of these algorithms is the UBS evaluation. We propose a novel algorithm to compute the UBS with only a few shortest path computations on typical paths. All our algorithms utilize Lazy RPHAST, a recently proposed technique to incrementally compute distances from many vertices towards a common target. An extensive evaluation shows that our algorithms outperform competing SSPP algorithms by up to two orders of magnitude and that our new UBS algorithm is the first to consistently compute exact UBS values in a matter of milliseconds.
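
For orientation, the UBS of a path P with respect to the detour-free weight function w can be formalized roughly as the worst stretch over all subpaths; this is our reading of the metric, stated here only as a reference point:

\[
\mathrm{UBS}_w(P) \;=\; \max_{u, v \in P,\; u \text{ before } v} \frac{w(P[u..v])}{\mathrm{dist}_w(u, v)},
\]

where P[u..v] denotes the subpath of P between u and v and dist_w(u, v) is the shortest w-distance in the network. The SSPP then asks for a fastest route under the traffic-dependent weights whose UBS with respect to w stays below a given bound.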

Cite as

Tim Zeitz. Fast Computation of Shortest Smooth Paths and Uniformly Bounded Stretch with Lazy RPHAST. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 3:1-3:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{zeitz:LIPIcs.SEA.2022.3,
  author =	{Zeitz, Tim},
  title =	{{Fast Computation of Shortest Smooth Paths and Uniformly Bounded Stretch with Lazy RPHAST}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{3:1--3:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.3},
  URN =		{urn:nbn:de:0030-drops-165378},
  doi =		{10.4230/LIPIcs.SEA.2022.3},
  annote =	{Keywords: realistic road networks, route planning, shortest paths, traffic-aware routing, live traffic, uniformly bounded stretch}
}
Document
Fast Succinct Retrieval and Approximate Membership Using Ribbon

Authors: Peter C. Dillinger, Lorenz Hübschle-Schneider, Peter Sanders, and Stefan Walzer


Abstract
A retrieval data structure for a static function f: S → {0,1}^r supports queries that return f(x) for any x ∈ S. Retrieval data structures can be used to implement a static approximate membership query data structure (AMQ), i.e., a Bloom filter alternative, with false positive rate 2^{-r}. The information-theoretic lower bound for both tasks is r|S| bits. While succinct theoretical constructions using (1+o(1))r|S| bits were known, these could not achieve very small overheads in practice because they have an unfavorable space-time tradeoff hidden in the asymptotic costs or because small overheads would only be reached for physically impossible input sizes. With bumped ribbon retrieval (BuRR), we present the first practical succinct retrieval data structure. In an extensive experimental evaluation BuRR achieves space overheads well below 1% while being faster than most previously used retrieval data structures (typically with space overheads at least an order of magnitude larger) and faster than classical Bloom filters (with space overhead ≥ 44%). This efficiency, including favorable constants, stems from a combination of simplicity, word parallelism, and high locality. We additionally describe homogeneous ribbon filter AMQs, which are even simpler and faster at the price of slightly larger space overhead.
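
To illustrate the retrieval-to-AMQ connection described in the abstract, here is a minimal Python sketch. It stands in for a real succinct retrieval structure with a plain dictionary (BuRR's actual layout is linear-algebraic and far more compact), and only shows why storing r-bit fingerprints yields a filter with false positive rate about 2^{-r}; all names are illustrative.

import hashlib

R = 8  # fingerprint width in bits; false positive rate is about 2**-R

def fingerprint(key: str) -> int:
    # r-bit hash of a key (any good hash works).
    digest = hashlib.blake2b(key.encode(), digest_size=8).digest()
    return int.from_bytes(digest, "little") & ((1 << R) - 1)

class ToyRetrievalAMQ:
    # Approximate membership built on a retrieval structure: store
    # f(x) = fingerprint(x) for all x in S, and accept a query key iff the
    # retrieved value matches its fingerprint.
    def __init__(self, keys):
        self._table = {k: fingerprint(k) for k in keys}  # toy stand-in for BuRR

    def _retrieve(self, key: str) -> int:
        if key in self._table:
            return self._table[key]
        # A real retrieval structure returns an essentially arbitrary r-bit
        # value for keys outside S; we mimic that with a differently salted hash.
        digest = hashlib.blake2b(b"salt:" + key.encode(), digest_size=8).digest()
        return int.from_bytes(digest, "little") & ((1 << R) - 1)

    def might_contain(self, key: str) -> bool:
        # False positives happen when the arbitrary value matches by chance,
        # i.e., with probability about 2**-R; there are no false negatives.
        return self._retrieve(key) == fingerprint(key)

amq = ToyRetrievalAMQ(["apple", "banana", "cherry"])
print(amq.might_contain("apple"))   # True
print(amq.might_contain("durian"))  # usually False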

Cite as

Peter C. Dillinger, Lorenz Hübschle-Schneider, Peter Sanders, and Stefan Walzer. Fast Succinct Retrieval and Approximate Membership Using Ribbon. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{dillinger_et_al:LIPIcs.SEA.2022.4,
  author =	{Dillinger, Peter C. and H\"{u}bschle-Schneider, Lorenz and Sanders, Peter and Walzer, Stefan},
  title =	{{Fast Succinct Retrieval and Approximate Membership Using Ribbon}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.4},
  URN =		{urn:nbn:de:0030-drops-165385},
  doi =		{10.4230/LIPIcs.SEA.2022.4},
  annote =	{Keywords: AMQ, Bloom filter, dictionary, linear algebra, randomized algorithm, retrieval data structure, static function data structure, succinct data structure, perfect hashing}
}
Document
Parallel Flow-Based Hypergraph Partitioning

Authors: Lars Gottesbüren, Tobias Heuer, and Peter Sanders


Abstract
We present a shared-memory parallelization of flow-based refinement, which is considered the most powerful iterative improvement technique for hypergraph partitioning at the moment. Flow-based refinement works on bipartitions, so current sequential partitioners schedule it on different block pairs to improve k-way partitions. We investigate two different sources of parallelism: a parallel scheduling scheme and a parallel maximum flow algorithm based on the well-known push-relabel algorithm. In addition to thoroughly engineered implementations, we propose several optimizations that substantially accelerate the algorithm in practice, enabling its use on extremely large hypergraphs (up to 1 billion pins). We integrate our approach into the state-of-the-art parallel multilevel framework Mt-KaHyPar and conduct extensive experiments on a benchmark set of more than 500 real-world hypergraphs to show that the partition quality of our code is on par with the highest-quality sequential code (KaHyPar), while being an order of magnitude faster with 10 threads.

Cite as

Lars Gottesbüren, Tobias Heuer, and Peter Sanders. Parallel Flow-Based Hypergraph Partitioning. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 5:1-5:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{gottesburen_et_al:LIPIcs.SEA.2022.5,
  author =	{Gottesb\"{u}ren, Lars and Heuer, Tobias and Sanders, Peter},
  title =	{{Parallel Flow-Based Hypergraph Partitioning}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{5:1--5:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.5},
  URN =		{urn:nbn:de:0030-drops-165393},
  doi =		{10.4230/LIPIcs.SEA.2022.5},
  annote =	{Keywords: multilevel hypergraph partitioning, shared-memory algorithms, maximum flow}
}
Document
Routing in Multimodal Transportation Networks with Non-Scheduled Lines

Authors: Darko Drakulic, Christelle Loiodice, and Vassilissa Lehoux


Abstract
Over the last decades, new mobility services have emerged to extend the coverage and accessibility of public transportation systems. In many areas, public transit now incorporates on-demand transport lines that can be activated at the user's request. In this paper, we propose to integrate lines without predefined schedules but with predefined stop sequences into a state-of-the-art trip planning algorithm for public transit, the Trip-Based Public Transit Routing algorithm [Witt, 2015]. We extend this algorithm to non-scheduled lines and explain how to model other modes of transportation, such as bike sharing, with this approach. The resulting algorithm is exact and optimizes two criteria: the earliest arrival time and the minimum number of transfers. Experiments on two large datasets show the benefit of the proposed method over a baseline model.

Cite as

Darko Drakulic, Christelle Loiodice, and Vassilissa Lehoux. Routing in Multimodal Transportation Networks with Non-Scheduled Lines. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 6:1-6:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{drakulic_et_al:LIPIcs.SEA.2022.6,
  author =	{Drakulic, Darko and Loiodice, Christelle and Lehoux, Vassilissa},
  title =	{{Routing in Multimodal Transportation Networks with Non-Scheduled Lines}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{6:1--6:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.6},
  URN =		{urn:nbn:de:0030-drops-165406},
  doi =		{10.4230/LIPIcs.SEA.2022.6},
  annote =	{Keywords: Multimodal routing, on-demand public transportation, bicriteria shortest paths}
}
Document
Relating Real and Synthetic Social Networks Through Centrality Measures

Authors: Maria J. Blesa, Mihail Eduard Popa, and Maria Serna


Abstract
We perform here a comparative study on the behaviour of real and synthetic social networks with respect to a selection of nine centrality measures. Some of them are topology based (degree, closeness, betweenness), while others consider the relevance of the actors within the network (Katz, PageRank) or their ability to spread influence through it (Independent Cascade rank, Linear Threshold Rank). We run different experiments on synthetic social networks, with 1K, 10K, and 100K nodes, generated according to the Gaussian random partition model, the stochastic block model, the LFR benchmark graph model, and the hyperbolic geometric graph model. Some real social networks are also considered, with the aim of discovering how they relate to the synthetic models in terms of centrality. Apart from the usual statistical measures, we perform a correlation analysis among all nine measures. Our results indicate that, in general, the correlation matrices of the different models scale nicely with size. Moreover, the correlation plots distinguish four categories that classify most of the real networks studied here. Those categories have a clear correspondence with particular configurations of the models for synthetic networks.
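
As a rough illustration of this kind of correlation analysis (a small sketch with our own choice of generator, sizes, and a subset of the measures, not the authors' pipeline), one can generate a synthetic network with networkx, compute several centralities, and inspect their pairwise Spearman correlations:

import networkx as nx
import numpy as np
from scipy.stats import spearmanr

# Small stochastic block model instance; the paper uses several generators
# and networks with 1K-100K nodes, so these parameters are only illustrative.
sizes = [100, 100, 100]
probs = [[0.10, 0.01, 0.01],
         [0.01, 0.10, 0.01],
         [0.01, 0.01, 0.10]]
G = nx.stochastic_block_model(sizes, probs, seed=42)

measures = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
    "katz": nx.katz_centrality(G, alpha=0.005),
}

nodes = list(G.nodes())
names = list(measures)
vectors = np.array([[measures[m][v] for v in nodes] for m in names])

# Pairwise Spearman rank correlations between the centrality vectors.
corr, _ = spearmanr(vectors.T)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]:11s} vs {names[j]:11s}: {corr[i, j]:+.2f}")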

Cite as

Maria J. Blesa, Mihail Eduard Popa, and Maria Serna. Relating Real and Synthetic Social Networks Through Centrality Measures. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 7:1-7:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{blesa_et_al:LIPIcs.SEA.2022.7,
  author =	{Blesa, Maria J. and Popa, Mihail Eduard and Serna, Maria},
  title =	{{Relating Real and Synthetic Social Networks Through Centrality Measures}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{7:1--7:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.7},
  URN =		{urn:nbn:de:0030-drops-165410},
  doi =		{10.4230/LIPIcs.SEA.2022.7},
  annote =	{Keywords: centrality measures, influence spread models, synthetic social networks}
}
Document
Efficient and Accurate Group Testing via Belief Propagation: An Empirical Study

Authors: Amin Coja-Oghlan, Max Hahn-Klimroth, Philipp Loick, and Manuel Penschuck


Abstract
The group testing problem asks for efficient pooling schemes and inference algorithms that make it possible to screen moderately large numbers of samples for rare infections. The goal is to accurately identify the infected individuals while minimizing the number of tests. We propose adaptive Belief Propagation (ABP), a novel adaptive pooling scheme that acknowledges practical limitations such as limited pool sizes and noisy tests that may give imperfect answers. We demonstrate that the accuracy of ABP surpasses that of individual testing despite using fewer tests overall. The new design comes with Belief Propagation as an efficient inference algorithm. While the development of ABP is guided by mathematical analyses and asymptotic insights, we conduct an experimental study to obtain results on practical population sizes.
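
To make the setting concrete, the following toy simulation generates a random pooling design and decodes it with the simple COMP rule (every individual appearing in a negative test is declared healthy). This is a much simpler baseline than the adaptive Belief Propagation scheme proposed in the paper, and all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n, k, tests, pool_size = 1000, 20, 150, 30   # population, infected, tests, items per pool

infected = np.zeros(n, dtype=bool)
infected[rng.choice(n, size=k, replace=False)] = True

# Random pooling design: each test draws pool_size individuals at random.
design = np.zeros((tests, n), dtype=bool)
for t in range(tests):
    design[t, rng.choice(n, size=pool_size, replace=False)] = True

# Noiseless test outcome: a pool is positive iff it contains an infected sample.
results = design[:, infected].any(axis=1)

# COMP decoding: anyone in a negative pool is definitely healthy; everyone
# else is declared infected (no false negatives in the noiseless setting).
cleared = design[~results].any(axis=0)
declared = ~cleared

false_pos = int(np.sum(declared & ~infected))
false_neg = int(np.sum(~declared & infected))
print(f"{tests} tests for {n} individuals: {false_pos} false positives, {false_neg} false negatives")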

Cite as

Amin Coja-Oghlan, Max Hahn-Klimroth, Philipp Loick, and Manuel Penschuck. Efficient and Accurate Group Testing via Belief Propagation: An Empirical Study. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{cojaoghlan_et_al:LIPIcs.SEA.2022.8,
  author =	{Coja-Oghlan, Amin and Hahn-Klimroth, Max and Loick, Philipp and Penschuck, Manuel},
  title =	{{Efficient and Accurate Group Testing via Belief Propagation: An Empirical Study}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.8},
  URN =		{urn:nbn:de:0030-drops-165422},
  doi =		{10.4230/LIPIcs.SEA.2022.8},
  annote =	{Keywords: Group testing, Probabilistic Construction, Belief Propagation, Simulation}
}
Document
Efficient Exact Learning Algorithms for Road Networks and Other Graphs with Bounded Clustering Degrees

Authors: Ramtin Afshar, Michael T. Goodrich, and Evrim Ozel


Abstract
The completeness of road network data is important for the quality of various routing services and applications. We introduce an efficient randomized algorithm for exact learning of road networks using simple distance queries, which can find missing roads and improve the quality of routing services. The efficiency of our algorithm depends on a cluster degree parameter, d_max, which is an upper bound on the degrees of vertex clusters defined during our algorithm. Unfortunately, we leave open the problem of theoretically bounding d_max, although we conjecture that d_max is small for road networks and other similar types of graphs. We support this conjecture by experimentally evaluating our algorithm on road network data for the U.S. and five European countries of various sizes. This analysis provides experimental evidence that our algorithm issues a quasilinear number of queries in expectation for road networks and similar graphs.

Cite as

Ramtin Afshar, Michael T. Goodrich, and Evrim Ozel. Efficient Exact Learning Algorithms for Road Networks and Other Graphs with Bounded Clustering Degrees. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{afshar_et_al:LIPIcs.SEA.2022.9,
  author =	{Afshar, Ramtin and Goodrich, Michael T. and Ozel, Evrim},
  title =	{{Efficient Exact Learning Algorithms for Road Networks and Other Graphs with Bounded Clustering Degrees}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.9},
  URN =		{urn:nbn:de:0030-drops-165432},
  doi =		{10.4230/LIPIcs.SEA.2022.9},
  annote =	{Keywords: Road Networks, Exact Learning, Graph Reconstruction, Randomized Algorithms}
}
Document
A Parallel Framework for Approximate Max-Dicut in Partitionable Graphs

Authors: Nico Bertram, Jonas Ellert, and Johannes Fischer


Abstract
Computing a maximum cut in undirected and weighted graphs is a well studied problem and has many practical solutions that also scale well in shared memory (despite its NP-completeness). For its counterpart in directed graphs, however, we are not aware of practical solutions that also utilize parallelism. We engineer a framework that computes a high quality approximate cut in directed and weighted graphs by using a graph partitioning approach. The general idea is to partition a graph into k subgraphs using a parallel partitioning algorithm of our choice (the first ingredient of our framework). Then, for each subgraph in parallel, we compute a cut using any polynomial time approximation algorithm (the second ingredient). In a final step, we merge the locally computed solutions using a high-quality or exact parallel Max-Dicut algorithm (the third ingredient). On graphs that can be partitioned well, the quality of the computed cut is significantly better than the best cut achieved by any linear time algorithm. This is particularly relevant for large graphs, where linear time algorithms used to be the only feasible option.

Cite as

Nico Bertram, Jonas Ellert, and Johannes Fischer. A Parallel Framework for Approximate Max-Dicut in Partitionable Graphs. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bertram_et_al:LIPIcs.SEA.2022.10,
  author =	{Bertram, Nico and Ellert, Jonas and Fischer, Johannes},
  title =	{{A Parallel Framework for Approximate Max-Dicut in Partitionable Graphs}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.10},
  URN =		{urn:nbn:de:0030-drops-165441},
  doi =		{10.4230/LIPIcs.SEA.2022.10},
  annote =	{Keywords: maximum directed cut, graph partitioning, algorithm engineering, approximation, parallel algorithms}
}
Document
A Fast Data Structure for Dynamic Graphs Based on Hash-Indexed Adjacency Blocks

Authors: Alexander van der Grinten, Maria Predari, and Florian Willich


Abstract
Several dynamic graph data structures have been proposed in the literature. Yet, these data structures either offer limited support for arbitrary graph algorithms or they are designed as part of specific frameworks (e.g., for GPUs or specialized hardware). Such frameworks are difficult to adapt to arbitrary graph computations and lead practitioners to fall back to less sophisticated solutions when dealing with dynamic graphs. In this work, we propose a new "dynamic hashed blocks" (DHB) data structure for sparse dynamic graphs and matrices on general-purpose CPU architectures. DHB combines an efficient block-based memory layout to store incident edges with an additional per-vertex hash index for high-degree vertices. This hash index allows us to quickly insert edges without introducing duplicates, while the block-based memory layout retains the advantageous cache locality properties of traditional adjacency arrays. Experiments show that DHB outperforms competing dynamic graph structures for edge insertions, updates, deletions, and traversal operations. Compared to static CSR layouts, DHB exhibits only a small overhead in traversal performance. DHB’s interface is similar to general-purpose abstract graph data types and can be easily used as a drop-in replacement for traditional adjacency arrays. To demonstrate this, we modify the well-known NetworKit framework to use DHB instead of its own dynamic graph representation. Experiments show that this modification only slightly penalizes the performance of graph algorithms while considerably boosting update rates.
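
The core idea, an unsorted per-vertex edge block plus a hash index that is only materialized once a vertex's degree becomes large, can be sketched in a few lines of Python (the threshold and all names are illustrative; the real DHB operates on packed memory blocks in C++):

HASH_INDEX_THRESHOLD = 8  # build a per-vertex hash index beyond this degree (illustrative)

class HashIndexedAdjacency:
    # Low-degree vertices keep only a small unsorted block (good locality,
    # like an adjacency array). High-degree vertices additionally get a hash
    # set so duplicate checks on insertion stay O(1) expected.
    def __init__(self):
        self.blocks = {}   # vertex -> list of neighbors (the "block")
        self.indices = {}  # vertex -> set of neighbors, only for high-degree vertices

    def insert_edge(self, u, v) -> bool:
        block = self.blocks.setdefault(u, [])
        index = self.indices.get(u)
        duplicate = (v in index) if index is not None else (v in block)
        if duplicate:
            return False
        block.append(v)
        if index is not None:
            index.add(v)
        elif len(block) > HASH_INDEX_THRESHOLD:
            self.indices[u] = set(block)  # build the hash index lazily
        return True

    def delete_edge(self, u, v) -> bool:
        block = self.blocks.get(u, [])
        if v not in block:
            return False
        block.remove(v)
        if u in self.indices:
            self.indices[u].discard(v)
        return True

    def neighbors(self, u):
        # Traversal scans the contiguous block, as in an adjacency array.
        return iter(self.blocks.get(u, ()))

g = HashIndexedAdjacency()
g.insert_edge(0, 1); g.insert_edge(0, 2); g.insert_edge(0, 1)  # third call is a duplicate
print(sorted(g.neighbors(0)))  # [1, 2]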

Cite as

Alexander van der Grinten, Maria Predari, and Florian Willich. A Fast Data Structure for Dynamic Graphs Based on Hash-Indexed Adjacency Blocks. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 11:1-11:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{vandergrinten_et_al:LIPIcs.SEA.2022.11,
  author =	{van der Grinten, Alexander and Predari, Maria and Willich, Florian},
  title =	{{A Fast Data Structure for Dynamic Graphs Based on Hash-Indexed Adjacency Blocks}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{11:1--11:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.11},
  URN =		{urn:nbn:de:0030-drops-165453},
  doi =		{10.4230/LIPIcs.SEA.2022.11},
  annote =	{Keywords: dynamic graph data structures, sparse matrix layout, dynamic algorithms, parallel algorithms, graph analysis}
}
Document
Efficient Minimum Weight Vertex Cover Heuristics Using Graph Neural Networks

Authors: Kenneth Langedal, Johannes Langguth, Fredrik Manne, and Daniel Thilo Schroeder


Abstract
Minimum weighted vertex cover is the NP-hard graph problem of choosing a subset of vertices incident to all edges such that the sum of the weights of the chosen vertices is minimum. Previous efforts for solving this in practice have typically been based on search-based iterative heuristics or exact algorithms that rely on reduction rules and branching techniques. Although exact methods have shown success in solving instances with up to millions of vertices efficiently, they are limited in practice due to the NP-hardness of the problem. We present a new hybrid method that combines elements from exact methods, iterative search, and graph neural networks (GNNs). More specifically, we first compute a greedy solution using reduction rules whenever possible. If no such rule applies, we consult a GNN model that selects a vertex that is likely to be in or out of the solution, potentially opening up further reductions. Finally, we use an improved local search strategy to enhance the solution further. Extensive experiments on graphs of up to a billion edges show that the proposed GNN-based approach finds better solutions than existing heuristics. Compared to exact solvers, the method produced solutions that are, on average, 0.04% away from the optimum while taking less time than all state-of-the-art alternatives.
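
The overall loop (reduce when possible, otherwise ask a model which vertex to take) can be sketched as follows. The score function below is only a classic weight-per-covered-edge stand-in for the trained GNN, and the single reduction shown is the standard neighborhood rule; the paper uses a richer rule set and a learned model.

import networkx as nx

def score(G, v):
    # Stand-in for the GNN: likelihood-like score that v belongs to the cover.
    return G.degree(v) / G.nodes[v]["weight"]

def greedy_mwvc(G):
    G = G.copy()
    cover = set()
    while G.number_of_edges() > 0:
        reduced = False
        for v in list(G.nodes):
            if v not in G:
                continue
            if G.degree(v) == 0:
                G.remove_node(v)  # isolated vertices are never needed
                reduced = True
            elif G.nodes[v]["weight"] >= sum(G.nodes[u]["weight"] for u in G[v]):
                neighbors = list(G[v])        # neighborhood rule: taking N(v)
                cover.update(neighbors)       # is at least as good as taking v
                G.remove_nodes_from(neighbors + [v])
                reduced = True
        if not reduced and G.number_of_edges() > 0:
            v = max(G.nodes, key=lambda u: score(G, u))  # consult the (stand-in) model
            cover.add(v)
            G.remove_node(v)
    return cover

G = nx.gnm_random_graph(200, 600, seed=1)
nx.set_node_attributes(G, {v: (v % 10) + 1 for v in G}, "weight")
cover = greedy_mwvc(G)
print(len(cover), "vertices, total weight", sum((v % 10) + 1 for v in cover))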

Cite as

Kenneth Langedal, Johannes Langguth, Fredrik Manne, and Daniel Thilo Schroeder. Efficient Minimum Weight Vertex Cover Heuristics Using Graph Neural Networks. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{langedal_et_al:LIPIcs.SEA.2022.12,
  author =	{Langedal, Kenneth and Langguth, Johannes and Manne, Fredrik and Schroeder, Daniel Thilo},
  title =	{{Efficient Minimum Weight Vertex Cover Heuristics Using Graph Neural Networks}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{12:1--12:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.12},
  URN =		{urn:nbn:de:0030-drops-165462},
  doi =		{10.4230/LIPIcs.SEA.2022.12},
  annote =	{Keywords: Minimum weighted vertex cover, Maximum weighted independent set, Graph neural networks, Reducing-peeling}
}
Document
A Branch-And-Bound Algorithm for Cluster Editing

Authors: Thomas Bläsius, Philipp Fischbeck, Lars Gottesbüren, Michael Hamann, Tobias Heuer, Jonas Spinner, Christopher Weyand, and Marcus Wilhelm


Abstract
The cluster editing problem asks to transform a given graph into a disjoint union of cliques by inserting and deleting as few edges as possible. We describe and evaluate an exact branch-and-bound algorithm for cluster editing. For this, we introduce new reduction rules and adapt existing ones. Moreover, we generalize a known packing technique to obtain lower bounds and experimentally show that it contributes significantly to the performance of the solver. Our experiments further evaluate the effectiveness of the different reduction rules and examine the effects of structural properties of the input graph on solver performance. Our solver won the exact track of the 2021 PACE challenge.

Cite as

Thomas Bläsius, Philipp Fischbeck, Lars Gottesbüren, Michael Hamann, Tobias Heuer, Jonas Spinner, Christopher Weyand, and Marcus Wilhelm. A Branch-And-Bound Algorithm for Cluster Editing. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 13:1-13:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{blasius_et_al:LIPIcs.SEA.2022.13,
  author =	{Bl\"{a}sius, Thomas and Fischbeck, Philipp and Gottesb\"{u}ren, Lars and Hamann, Michael and Heuer, Tobias and Spinner, Jonas and Weyand, Christopher and Wilhelm, Marcus},
  title =	{{A Branch-And-Bound Algorithm for Cluster Editing}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{13:1--13:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.13},
  URN =		{urn:nbn:de:0030-drops-165473},
  doi =		{10.4230/LIPIcs.SEA.2022.13},
  annote =	{Keywords: cluster editing}
}
Document
An Experimental Study of Algorithms for Packing Arborescences

Authors: Loukas Georgiadis, Dionysios Kefallinos, Anna Mpanti, and Stavros D. Nikolopoulos


Abstract
A classic result of Edmonds states that the maximum number of edge-disjoint arborescences of a directed graph G, rooted at a designated vertex s, equals the minimum cardinality c_G(s) of an s-cut of G. This concept is related to the edge connectivity λ(G) of a strongly connected directed graph G, defined as the minimum number of edges whose deletion leaves a graph that is not strongly connected. In this paper, we address the question of how efficiently we can compute a maximum packing of edge-disjoint arborescences in practice, compared to the time required to determine the edge connectivity of a graph. To that end, we explore the design space of efficient algorithms for packing arborescences of a directed graph in practice and conduct a thorough empirical study to highlight the merits and weaknesses of each technique. In particular, we present an efficient implementation of Gabow’s arborescence packing algorithm and provide a simple but efficient heuristic that significantly improves its running time in practice.

Cite as

Loukas Georgiadis, Dionysios Kefallinos, Anna Mpanti, and Stavros D. Nikolopoulos. An Experimental Study of Algorithms for Packing Arborescences. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{georgiadis_et_al:LIPIcs.SEA.2022.14,
  author =	{Georgiadis, Loukas and Kefallinos, Dionysios and Mpanti, Anna and Nikolopoulos, Stavros D.},
  title =	{{An Experimental Study of Algorithms for Packing Arborescences}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.14},
  URN =		{urn:nbn:de:0030-drops-165480},
  doi =		{10.4230/LIPIcs.SEA.2022.14},
  annote =	{Keywords: Arborescences, Edge Connectivity, Graph Algorithms}
}
Document
Stochastic Route Planning for Electric Vehicles

Authors: Payas Rajan and Chinya V. Ravishankar


Abstract
Electric Vehicle routing is often modeled as a generalization of the energy-constrained shortest path problem, taking travel times and energy consumptions on road network edges to be deterministic. In practice, however, energy consumption and travel times are stochastic, with distributions typically estimated from real-world data. Consequently, real-world routing algorithms can make only probabilistic feasibility guarantees. Current stochastic route planning methods either fail to ensure that routes are energy-feasible, or if they do, have not been shown to scale well to large graphs. Our work bridges this gap by finding routes to maximize on-time arrival probability and the set of non-dominated routes under two criteria for stochastic route feasibility: 𝔼-feasibility and p-feasibility. Our 𝔼-feasibility criterion ensures energy-feasibility in expectation, using expected energy values along network edges. Our p-feasibility criterion accounts for the actual distribution along edges, and keeps the stranding probability along the route below a user-specified threshold p. We generalize the charging function propagation algorithm to accept stochastic edge weights to find routes that maximize the probability of on-time arrival, while maintaining 𝔼- or p-feasibility. We also extend multi-criteria Contraction Hierarchies to accept stochastic edge weights and offer heuristics to speed up queries. Our experiments on a real-world road network instance of the Los Angeles area show that our methods answer stochastic queries in reasonable time, that the two criteria produce similar routes for longer deadlines, but that 𝔼-feasibility queries can be much faster than p-feasibility queries.

Cite as

Payas Rajan and Chinya V. Ravishankar. Stochastic Route Planning for Electric Vehicles. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 15:1-15:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{rajan_et_al:LIPIcs.SEA.2022.15,
  author =	{Rajan, Payas and Ravishankar, Chinya V.},
  title =	{{Stochastic Route Planning for Electric Vehicles}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{15:1--15:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.15},
  URN =		{urn:nbn:de:0030-drops-165497},
  doi =		{10.4230/LIPIcs.SEA.2022.15},
  annote =	{Keywords: Stochastic Routing, Electric Vehicles, Route Planning Algorithms}
}
Document
RLBWT Tricks

Authors: Nathaniel K. Brown, Travis Gagie, and Massimiliano Rossi


Abstract
Until recently, most experts would probably have agreed we cannot backwards-step in constant time with a run-length compressed Burrows-Wheeler Transform (RLBWT), since doing so relies on rank queries on sparse bitvectors and those inherit lower bounds from predecessor queries. At ICALP '21, however, Nishimoto and Tabei described a new, simple and constant-time implementation. For a permutation π, it stores an O(r)-space table, where r is the number of positions i where either i = 0 or π(i+1) ≠ π(i) + 1, that enables the computation of successive values of π(i) by table look-ups and linear scans. Nishimoto and Tabei showed how to increase the number of rows in the table to bound the length of the linear scans such that the query time for computing π(i) is constant while maintaining O(r) space. In this paper we refine Nishimoto and Tabei’s approach, including a time-space tradeoff, and experimentally evaluate different implementations demonstrating the practicality of part of their result. We show that even without adding rows to the table, in practice we almost always scan only a few entries during queries. We propose a decomposition scheme of the permutation π corresponding to the LF-mapping that allows an improved compression of the data structure, while limiting the query time. We tested our implementation on real-world genomic datasets and found that without compression of the table, backward-stepping is drastically faster than with sparse bitvector implementations but, unfortunately, also uses drastically more space. After compression, backward-stepping is competitive both in time and space with the best existing implementations.
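
A minimal Python sketch of the table-plus-linear-scan idea (our simplified reading of Nishimoto and Tabei's structure, without the extra rows that bound the scan length): the permutation is stored as r runs, and each run records where it starts, where its image starts, and which run that image falls into.

from bisect import bisect_right

def build_move_table(pi):
    # Split pi into maximal runs with pi[i+1] == pi[i] + 1 and store, per run,
    # (run_start, image_start, index_of_run_containing_image_start).
    n = len(pi)
    starts = [i for i in range(n) if i == 0 or pi[i] != pi[i - 1] + 1]
    table = []
    for s in starts:
        image = pi[s]
        image_run = bisect_right(starts, image) - 1
        table.append((s, image, image_run))
    return table

def move(table, i, run):
    # Given position i lying in run 'run', return (pi[i], run') where run' is
    # the run containing pi[i]: one table lookup plus a short forward scan.
    start, image, j = table[run]
    value = image + (i - start)
    # In practice this scan inspects very few entries; Nishimoto and Tabei add
    # rows to the table to make it constant in the worst case.
    while j + 1 < len(table) and table[j + 1][0] <= value:
        j += 1
    return value, j

pi = [3, 4, 5, 0, 1, 2, 7, 8, 6]
table = build_move_table(pi)
i, run = 0, 0
for _ in range(4):
    i, run = move(table, i, run)
    print(i, run)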

Cite as

Nathaniel K. Brown, Travis Gagie, and Massimiliano Rossi. RLBWT Tricks. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 16:1-16:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{brown_et_al:LIPIcs.SEA.2022.16,
  author =	{Brown, Nathaniel K. and Gagie, Travis and Rossi, Massimiliano},
  title =	{{RLBWT Tricks}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{16:1--16:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.16},
  URN =		{urn:nbn:de:0030-drops-165500},
  doi =		{10.4230/LIPIcs.SEA.2022.16},
  annote =	{Keywords: Compressed String Indexes, Repetitive Text Collections, Burrows-Wheeler Transform}
}
Document
Heuristic Computation of Exact Treewidth

Authors: Hisao Tamaki


Abstract
We are interested in computing the treewidth tw(G) of a given graph G. Our approach is to design heuristic algorithms for computing a sequence of improving upper bounds and a sequence of improving lower bounds, which would hopefully converge to tw(G) from both sides. The upper bound algorithm extends and simplifies the present author’s unpublished work on a heuristic use of the dynamic programming algorithm for deciding treewidth due to Bouchitté and Todinca. The lower bound algorithm is based on the well-known fact that, for every minor H of G, we have tw(H) ≤ tw(G). Starting from a greedily computed minor H_0 of G, the algorithm tries to construct a sequence of minors H_0, H_1, ..., H_k with tw(H_i) < tw(H_{i + 1}) for 0 ≤ i < k and hopefully tw(H_k) = tw(G). We have implemented a treewidth solver based on this approach and have evaluated it on the bonus instances from the exact treewidth track of the PACE 2017 algorithm implementation challenge. The results show that our approach is extremely effective in tackling instances that are hard for conventional solvers. Our solver has an additional advantage over conventional ones in that it attaches a compact certificate to the lower bound it computes.
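
On the lower-bound side, the abstract relies on tw(H) ≤ tw(G) for every minor H of G. A well-known baseline in the same spirit (the minor-min-width contraction heuristic, shown here for orientation only, not the paper's algorithm) repeatedly contracts a minimum-degree vertex into a neighbor and reports the largest minimum degree seen, which is a valid lower bound on the treewidth:

import networkx as nx

def minor_min_width(G):
    # The minimum degree of any graph lower-bounds its treewidth, and
    # treewidth never increases under taking minors, so the maximum of the
    # minimum degrees over a sequence of contractions is still a lower bound.
    H = nx.Graph(G)
    lb = 0
    while H.number_of_nodes() > 1:
        v = min(H.nodes, key=H.degree)       # minimum-degree vertex
        lb = max(lb, H.degree(v))
        if H.degree(v) == 0:
            H.remove_node(v)
            continue
        u = min(H[v], key=H.degree)          # contract v into a minimum-degree neighbor
        H = nx.contracted_nodes(H, u, v, self_loops=False)
    return lb

print(minor_min_width(nx.complete_graph(8)))   # 7, and tw(K_8) = 7
print(minor_min_width(nx.grid_2d_graph(6, 6))) # a lower bound on tw = 6, usually not tight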

Cite as

Hisao Tamaki. Heuristic Computation of Exact Treewidth. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 17:1-17:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{tamaki:LIPIcs.SEA.2022.17,
  author =	{Tamaki, Hisao},
  title =	{{Heuristic Computation of Exact Treewidth}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{17:1--17:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.17},
  URN =		{urn:nbn:de:0030-drops-165512},
  doi =		{10.4230/LIPIcs.SEA.2022.17},
  annote =	{Keywords: graph algorithm, treewidth, heuristics, BT dynamic programming, contraction, obstruction, minimal forbidden minor, certifying algorithms}
}
Document
On the Satisfiability of Smooth Grid CSPs

Authors: Vasily Alferov and Mateus de Oliveira Oliveira


Abstract
Many important NP-hard problems, arising in a wide variety of contexts, can be reduced straightforwardly to the satisfiability problem for CSPs whose underlying graph is a grid. In this work, we push forward the study of grid CSPs by analyzing, from an experimental perspective, a symbolic parameter called smoothness. More specifically, we implement an algorithm that provably works in polynomial time on grids of polynomial smoothness. Subsequently, we compare our algorithm with standard combinatorial optimization techniques, such as SAT-solving and integer linear programming (ILP). For this comparison, we use a class of grid-CSPs encoding the pigeonhole principle. We demonstrate, empirically, that these CSPs have polynomial smoothness and that our algorithm terminates in polynomial time. On the other hand, as strong evidence that the grid-like encoding is not destroying the essence of the pigeonhole principle, we show that the standard propositional translation of pigeonhole CSPs remains hard for state-of-the-art SAT solvers, such as minisat and glucose, and even for state-of-the-art integer linear programming solvers, such as Coin-OR CBC.

Cite as

Vasily Alferov and Mateus de Oliveira Oliveira. On the Satisfiability of Smooth Grid CSPs. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{alferov_et_al:LIPIcs.SEA.2022.18,
  author =	{Alferov, Vasily and de Oliveira Oliveira, Mateus},
  title =	{{On the Satisfiability of Smooth Grid CSPs}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.18},
  URN =		{urn:nbn:de:0030-drops-165526},
  doi =		{10.4230/LIPIcs.SEA.2022.18},
  annote =	{Keywords: Grid CSPs, Smoothness, SAT Solving, Linear Programming}
}
Document
An Experimental Evaluation of Semidefinite Programming and Spectral Algorithms for Max Cut

Authors: Renee Mirka and David P. Williamson


Abstract
We experimentally evaluate the performance of several Max Cut approximation algorithms. In particular, we compare the results of the Goemans and Williamson algorithm using semidefinite programming with Trevisan’s algorithm using spectral partitioning. The former algorithm has a known .878 approximation guarantee whereas the latter has a .614 approximation guarantee. We investigate whether this gap in approximation guarantees is evident in practice or whether the spectral algorithm performs as well as the SDP. We also compare their performance with that of the standard greedy Max Cut algorithm, which has a .5 approximation guarantee, and two additional spectral algorithms. The algorithms are tested on Erdős-Rényi random graphs, complete graphs from TSPLIB, and real-world graphs from the Network Repository. We find, unsurprisingly, that the spectral algorithms provide a significant speed advantage over the SDP. In our experiments, the spectral algorithms return cuts with values which are competitive with those of the SDP.
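
For concreteness, here is a small numpy sketch of the two cheapest baselines in this comparison: the standard greedy 0.5-approximation and a plain sign-rounding of the eigenvector for the smallest adjacency eigenvalue. The latter is a simplification for illustration; Trevisan's algorithm additionally uses careful thresholding and recursion.

import numpy as np

def cut_value(A, side):
    # Total weight of edges crossing the cut given by the boolean vector side.
    side = np.asarray(side, dtype=bool)
    return A[np.ix_(side, ~side)].sum()

def greedy_max_cut(A):
    # Classic greedy: place each vertex on the side that cuts more weight to
    # the already placed vertices; guarantees at least half the total weight.
    n = A.shape[0]
    side = np.zeros(n, dtype=bool)
    for v in range(n):
        to_false = A[v, :v][~side[:v]].sum()
        to_true = A[v, :v][side[:v]].sum()
        side[v] = to_false >= to_true
    return side

def spectral_max_cut(A):
    # Sign-rounding of the eigenvector for the smallest adjacency eigenvalue.
    _, vecs = np.linalg.eigh(A)     # eigenvalues in ascending order
    return vecs[:, 0] >= 0

rng = np.random.default_rng(0)
A = np.triu(rng.random((60, 60)) < 0.2, 1).astype(float)
A = A + A.T                          # random unweighted graph, symmetric adjacency
for name, side in [("greedy", greedy_max_cut(A)), ("spectral", spectral_max_cut(A))]:
    print(name, cut_value(A, side), "of", A.sum() / 2, "edges")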

Cite as

Renee Mirka and David P. Williamson. An Experimental Evaluation of Semidefinite Programming and Spectral Algorithms for Max Cut. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{mirka_et_al:LIPIcs.SEA.2022.19,
  author =	{Mirka, Renee and Williamson, David P.},
  title =	{{An Experimental Evaluation of Semidefinite Programming and Spectral Algorithms for Max Cut}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.19},
  URN =		{urn:nbn:de:0030-drops-165533},
  doi =		{10.4230/LIPIcs.SEA.2022.19},
  annote =	{Keywords: Max Cut, Approximation Algorithms}
}
Document
Digraph k-Coloring Games: From Theory to Practice

Authors: Andrea D'Ascenzo, Mattia D'Emidio, Michele Flammini, and Gianpiero Monaco


Abstract
We study digraph k-coloring games where agents are vertices of a directed unweighted graph and arcs represent agents' mutual unidirectional idiosyncrasies or conflicts. Each agent can select one of k different colors, and her payoff in a given state is given by the number of outgoing neighbors with a different color. Such games model many strategic real-world scenarios and are related to several fundamental classes of anti-coordination games. Unfortunately, the problem of understanding whether an instance of the game admits a pure Nash equilibrium is NP-complete [Jeremy Kun et al., 2013]. Therefore, in the last few years a major research focus has been the design of polynomial-time algorithms able to compute approximate Nash equilibria, i.e., states in which no agent, by changing her strategy, can improve her payoff by some bounded multiplicative factor. The only two known algorithms in this respect are those in [Raffaello Carosi et al., 2017]. While they provide theoretical guarantees, their practical performance on real-world instances has so far not been investigated. In this paper, further motivated by the lack of practical approximation algorithms for the problem, we experimentally evaluate the above algorithms and conclude that, while they were designed to achieve a bounded worst-case behavior, they generally have poor performance. Therefore, we next focus on classical best-response dynamics and show that, despite the fact that they might not always converge, they are very effective in practice. In particular, we provide strong empirical evidence that they outperform existing methods: surprisingly, they quickly converge to exact Nash equilibria on almost all instances arising in practice. This also shows that, while this class of games is known to not always possess pure Nash equilibria, in almost all cases such equilibria exist and can be efficiently computed, even in a distributed, uncoordinated way through decentralized interaction of the agents.
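
Since best-response dynamics are at the heart of the experimental findings, here is a compact sketch of them (illustrative setup; as noted above, convergence is not guaranteed in general, which is why the loop is capped):

import random

def best_response_dynamics(out_neighbors, k, max_rounds=1000, seed=0):
    # Digraph k-coloring game: the payoff of agent v is the number of
    # out-neighbors with a color different from v's, so a best response is
    # any color that is least used among v's out-neighbors.
    rng = random.Random(seed)
    n = len(out_neighbors)
    color = [rng.randrange(k) for _ in range(n)]
    for _ in range(max_rounds):
        improved = False
        for v in rng.sample(range(n), n):          # agents move in random order
            counts = [0] * k
            for u in out_neighbors[v]:
                counts[color[u]] += 1
            best = min(range(k), key=lambda c: counts[c])
            if counts[best] < counts[color[v]]:    # strictly improving move only
                color[v] = best
                improved = True
        if not improved:
            return color, True                     # pure Nash equilibrium reached
    return color, False

# Tiny example: a directed 4-cycle with k = 2 colors.
out_neighbors = [[1], [2], [3], [0]]
coloring, is_equilibrium = best_response_dynamics(out_neighbors, k=2)
print(coloring, is_equilibrium)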

Cite as

Andrea D'Ascenzo, Mattia D'Emidio, Michele Flammini, and Gianpiero Monaco. Digraph k-Coloring Games: From Theory to Practice. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 20:1-20:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{dascenzo_et_al:LIPIcs.SEA.2022.20,
  author =	{D'Ascenzo, Andrea and D'Emidio, Mattia and Flammini, Michele and Monaco, Gianpiero},
  title =	{{Digraph k-Coloring Games: From Theory to Practice}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{20:1--20:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.20},
  URN =		{urn:nbn:de:0030-drops-165540},
  doi =		{10.4230/LIPIcs.SEA.2022.20},
  annote =	{Keywords: Algorithmic Game Theory, Coloring Games, Experimental Algorithmics, Exact vs Approximate Nash Equilibria, Decentralized Dynamics}
}
Document
Practical Performance of Random Projections in Linear Programming

Authors: Leo Liberti, Benedetto Manca, and Pierre-Louis Poirion


Abstract
The use of random projections in mathematical programming allows standard solution algorithms to solve instances of much larger sizes, at least approximately. Approximation results have been derived in the relevant literature for many specific problems, as well as for several mathematical programming subclasses. Despite the theoretical developments, it is not always clear that random projections are actually useful in solving mathematical programs in practice. In this paper we provide a computational assessment of the application of random projections to linear programming.
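
As a concrete illustration of the idea under test (a toy setup of our own, not the paper's benchmark or projection parameters), one can replace the equality constraints Ax = b of a standard-form LP by the randomly projected system TAx = Tb and compare the two optima with an off-the-shelf solver:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 300, 500, 60           # original constraints, variables, projected constraints

# Random feasible standard-form LP: min c^T x  subject to  A x = b, x >= 0.
A = rng.random((m, n))
b = A @ rng.random(n)            # b chosen so the LP is feasible
c = rng.random(n)

# Random Gaussian projector T in R^{k x m}, scaled as in the Johnson-Lindenstrauss lemma.
T = rng.normal(scale=1.0 / np.sqrt(k), size=(k, m))

full = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
proj = linprog(c, A_eq=T @ A, b_eq=T @ b, bounds=(0, None), method="highs")

# The projected LP is a relaxation (every x with Ax = b satisfies TAx = Tb),
# so its optimum is a lower bound on the original optimum for minimization.
print("original optimum :", full.fun)
print("projected optimum:", proj.fun)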

Cite as

Leo Liberti, Benedetto Manca, and Pierre-Louis Poirion. Practical Performance of Random Projections in Linear Programming. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{liberti_et_al:LIPIcs.SEA.2022.21,
  author =	{Liberti, Leo and Manca, Benedetto and Poirion, Pierre-Louis},
  title =	{{Practical Performance of Random Projections in Linear Programming}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{21:1--21:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.21},
  URN =		{urn:nbn:de:0030-drops-165550},
  doi =		{10.4230/LIPIcs.SEA.2022.21},
  annote =	{Keywords: Linear Programming, Johnson-Lindenstrauss Lemma, Computational testing}
}
Document
Computing Maximal Unique Matches with the r-Index

Authors: Sara Giuliani, Giuseppe Romana, and Massimiliano Rossi


Abstract
In recent years, pangenomes have received increasing attention from the scientific community for their ability to incorporate population variation information and alleviate reference genome bias. Maximal Exact Matches (MEMs) and Maximal Unique Matches (MUMs) have proven useful in multiple bioinformatics contexts, for example short-read alignment and multiple-genome alignment. However, standard techniques using suffix trees and FM-indexes do not scale to the pangenomic level. Recently, Gagie et al. [JACM 20] introduced the r-index, a Burrows-Wheeler Transform (BWT)-based index able to handle hundreds of human genomes. Later, Rossi et al. [JCB 22] enabled the computation of MEMs using the r-index, and Boucher et al. [DCC 21] showed how to compute them in a streaming fashion. In this paper, we show how to augment Boucher et al.'s approach to enable the computation of MUMs on the r-index, while preserving the space and time bounds. We add O(r) additional samples of the longest common prefix (LCP) array, where r is the number of equal-letter runs of the BWT, which permit the computation of the second-longest match of each pattern suffix with respect to the input text, which in turn allows the computation of candidate MUMs. We implemented a proof of concept of our approach, which we call MUM-PHINDER, and tested it on real-world datasets, comparing it with competing methods that are able to compute MUMs. We observe that, when the dataset is not highly repetitive, our method uses up to 8 times less memory while being up to 19 times slower; on highly repetitive data, it is up to 6.5 times slower and uses up to 25 times less memory.
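
To make the object being computed concrete, here is a naive editorial reference implementation of MUMs between two strings; it only illustrates the definition (a common substring that occurs exactly once in each string and can be extended in neither direction) and has nothing to do with the r-index machinery or MUM-PHINDER's efficiency.

def count_occurrences(text, sub):
    """Count possibly overlapping occurrences of sub in text."""
    count, start = 0, 0
    while True:
        idx = text.find(sub, start)
        if idx == -1:
            return count
        count, start = count + 1, idx + 1

def naive_mums(s, t):
    """Return (position in s, position in t, length) of every maximal unique match."""
    mums = []
    for i in range(len(s)):
        for j in range(len(t)):
            l = 0                   # longest common extension of s[i:] and t[j:]
            while i + l < len(s) and j + l < len(t) and s[i + l] == t[j + l]:
                l += 1
            if l == 0:
                continue
            left_maximal = i == 0 or j == 0 or s[i - 1] != t[j - 1]
            sub = s[i:i + l]
            if left_maximal and count_occurrences(s, sub) == 1 and count_occurrences(t, sub) == 1:
                mums.append((i, j, l))
    return mums

print(naive_mums("GATTACAT", "GATACAT"))   # [(0, 0, 3), (3, 2, 5)]: the MUMs "GAT" and "TACAT"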

Cite as

Sara Giuliani, Giuseppe Romana, and Massimiliano Rossi. Computing Maximal Unique Matches with the r-Index. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 22:1-22:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{giuliani_et_al:LIPIcs.SEA.2022.22,
  author =	{Giuliani, Sara and Romana, Giuseppe and Rossi, Massimiliano},
  title =	{{Computing Maximal Unique Matches with the r-Index}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{22:1--22:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.22},
  URN =		{urn:nbn:de:0030-drops-165568},
  doi =		{10.4230/LIPIcs.SEA.2022.22},
  annote =	{Keywords: Burrows-Wheeler Transform, r-index, maximal unique matches, bioinformatics, pangenomics}
}
Document
Automatic Reformulations for Convex Mixed-Integer Nonlinear Optimization: Perspective and Separability

Authors: Meenarli Sharma and Ashutosh Mahajan


Abstract
Tight reformulations of combinatorial optimization problems like Convex Mixed-Integer Nonlinear Programs (MINLPs) enable one to solve these problems faster by obtaining tight bounds on the optimal value. We consider two reformulation techniques: perspective reformulation and separability detection. We develop routines for the automatic detection of problem structures suitable for these reformulations and implement new extensions. Since detecting all "on-off" sets for perspective reformulation in a problem can be as hard as solving the original problem, we develop heuristic methods to identify them automatically. The LP/NLP branch-and-bound method is strengthened via "perspective cuts" derived from these automatic routines. We also provide methods to generate tight perspective cuts at different nodes of the branch-and-bound tree. The second structure, i.e., separability of nonlinear functions, is detected by means of the computational graph of the function. Our routines have been implemented in the open-source Minotaur solver for general convex MINLPs. Computational results show an improvement of up to 45% in the solution time and the size of the branch-and-bound tree for convex instances from the benchmark library MINLPLib. On instances where reformulation using function separability induces structures that are amenable to perspective reformulation, we observe an improvement of up to 88% in the solution time.
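
For context, the cuts referred to above are of the classical Frangioni-Gentile form, stated here from the literature rather than quoted from the paper: in its simplest form, for a convex function f whose term vanishes when its binary indicator z is off (so x = 0 when z = 0) and an epigraph variable \eta with \eta \ge f(x), the perspective cut generated at a trial point \bar{x} is

\[
  \eta \;\ge\; f(\bar{x})\,z + \nabla f(\bar{x})^{\top}\bigl(x - \bar{x}\,z\bigr),
\]

which is valid for the convex hull of the on-off set and reduces to the usual gradient (outer-approximation) cut when z = 1.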

Cite as

Meenarli Sharma and Ashutosh Mahajan. Automatic Reformulations for Convex Mixed-Integer Nonlinear Optimization: Perspective and Separability. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 23:1-23:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{sharma_et_al:LIPIcs.SEA.2022.23,
  author =	{Sharma, Meenarli and Mahajan, Ashutosh},
  title =	{{Automatic Reformulations for Convex Mixed-Integer Nonlinear Optimization: Perspective and Separability}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{23:1--23:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.23},
  URN =		{urn:nbn:de:0030-drops-165579},
  doi =		{10.4230/LIPIcs.SEA.2022.23},
  annote =	{Keywords: Convex MINLP, perspective reformulation, branch-and-bound, outer approximation, function separability}
}
Document
An Adaptive Refinement Algorithm for Discretizations of Nonconvex QCQP

Authors: Akshay Gupte, Arie M. C. A. Koster, and Sascha Kuhnke


Abstract
We present an iterative algorithm to compute, in reasonable running time, feasible solutions to quadratically constrained quadratic programs (QCQPs), which form a challenging class of nonconvex continuous optimization problems. The algorithm is based on a mixed-integer linear program (MILP) that is a restriction of the original QCQP, obtained by discretizing all quadratic terms. In each iteration, this MILP restriction is solved to obtain a feasible QCQP solution. Since the quality of this solution heavily depends on the chosen discretization of the MILP, we iteratively adapt the discretization values based on the MILP solution of the previous iteration. To maintain a reasonable problem size in each iteration of the algorithm, the discretization sizes are fixed at predefined values. Although our algorithm did not always yield good feasible solutions on arbitrary QCQP instances, an extensive computational study on almost 1300 test instances from two problem classes (box-constrained quadratic programs with complementarity constraints and disjoint bilinear programs) demonstrates the effectiveness of our approach. We compare the quality of our solutions against those from heuristics and local optimization algorithms in two state-of-the-art commercial solvers, and observe that on one instance class we clearly outperform the other methods, whereas on the other we obtain competitive results.
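
As an editorial illustration of the adapt-and-resolve loop described above, here is a toy with a one-dimensional discretized variable and a closed-form inner problem; the actual algorithm builds a genuine MILP restriction of the QCQP and discretizes all quadratic terms.

import numpy as np

def inner_min(x, ylo=0.0, yhi=4.0):
    """For fixed x the toy objective x*y - 3x - 2y is linear in y, so a bound is optimal."""
    f = lambda y: x * y - 3 * x - 2 * y
    return min((f(ylo), ylo), (f(yhi), yhi))

def adaptive_refinement(xlo=0.0, xhi=4.0, grid_size=5, iters=6):
    best = (np.inf, None, None)                      # (objective, x, y)
    for _ in range(iters):
        for x in np.linspace(xlo, xhi, grid_size):   # fixed-size discretization of x
            obj, y = inner_min(float(x))
            if obj < best[0]:
                best = (obj, float(x), y)
        # keep grid_size fixed but re-center and shrink the grid around the incumbent x
        width = (xhi - xlo) / 4
        xlo, xhi = max(0.0, best[1] - width), min(4.0, best[1] + width)
    return best

print(adaptive_refinement())   # reaches the global optimum: objective -12 at (x, y) = (4, 0)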

Cite as

Akshay Gupte, Arie M. C. A. Koster, and Sascha Kuhnke. An Adaptive Refinement Algorithm for Discretizations of Nonconvex QCQP. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 24:1-24:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{gupte_et_al:LIPIcs.SEA.2022.24,
  author =	{Gupte, Akshay and Koster, Arie M. C. A. and Kuhnke, Sascha},
  title =	{{An Adaptive Refinement Algorithm for Discretizations of Nonconvex QCQP}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{24:1--24:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.24},
  URN =		{urn:nbn:de:0030-drops-165585},
  doi =		{10.4230/LIPIcs.SEA.2022.24},
  annote =	{Keywords: Quadratically Constrained Quadratic Programs, Mixed Integer Linear Programming, Heuristics, BoxQP, Disjoint Bilinear}
}
