Search Results

Documents authored by Leucci, Stefano


Document
Graph Spanners for Group Steiner Distances

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, and Alessandro Straziota

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A spanner is a sparse subgraph of a given graph G which preserves distances, measured w.r.t. some distance metric, up to a multiplicative stretch factor. This paper addresses the problem of constructing graph spanners w.r.t. the group Steiner metric, which generalizes the recently introduced beer distance metric. In such a metric we are given a collection of groups of required vertices, and we measure the distance between two vertices as the length of the shortest path between them that traverses at least one required vertex from each group. We discuss the relation between group Steiner spanners and classic spanners, and we show that they exhibit strong ties with sourcewise spanners w.r.t. the shortest path metric. Nevertheless, group Steiner spanners capture several interesting scenarios that are not encompassed by existing spanners. This happens, e.g., for the singleton case, in which each group consists of a single required vertex, thus modeling the setting in which routes need to traverse certain points of interest (in any order). We provide several constructions of group Steiner spanners for both the all-pairs and single-source case, which exhibit various size-stretch trade-offs. Notably, we provide spanners with almost-optimal trade-offs for the singleton case. Moreover, some of our spanners also yield novel trade-offs for classical sourcewise spanners. Finally, we also investigate the query times that can be achieved when our spanners are turned into group Steiner distance oracles with the same size, stretch, and building time.
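
As a concrete illustration of the group Steiner metric in the singleton case (a brute-force check of the definition, not the paper's spanner constructions; the graph encoding and function names below are our own), the following Python sketch computes the distance between two vertices by trying every visiting order of the required vertices:

from itertools import permutations
import heapq


def dijkstra(adj, src):
    """Standard Dijkstra over an adjacency dict {u: [(v, w), ...]}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist


def group_steiner_distance(adj, u, v, required):
    """Minimum length of a u-v walk that visits every vertex in `required`
    (the singleton case: each group is a single required vertex)."""
    dist = {x: dijkstra(adj, x) for x in set(required) | {u}}
    best = float("inf")
    for order in permutations(required):
        stops = (u,) + order + (v,)
        total = sum(dist[a].get(b, float("inf")) for a, b in zip(stops, stops[1:]))
        best = min(best, total)
    return best


# Toy graph: a 4-cycle with unit weights; points of interest {2, 3}.
adj = {
    1: [(2, 1), (4, 1)],
    2: [(1, 1), (3, 1)],
    3: [(2, 1), (4, 1)],
    4: [(3, 1), (1, 1)],
}
print(group_steiner_distance(adj, 1, 4, [2, 3]))  # 3, via the walk 1-2-3-4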

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, and Alessandro Straziota. Graph Spanners for Group Steiner Distances. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.ESA.2024.25,
  author =	{Bil\`{o}, Davide and Gual\`{a}, Luciano and Leucci, Stefano and Straziota, Alessandro},
  title =	{{Graph Spanners for Group Steiner Distances}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.25},
  URN =		{urn:nbn:de:0030-drops-210968},
  doi =		{10.4230/LIPIcs.ESA.2024.25},
  annote =	{Keywords: Network sparsification, Graph spanners, Group Steiner tree, Distance oracles}
}
Document
Uniform-Budget Solo Chess with Only Rooks or Only Knights Is Hard

Authors: Davide Bilò, Luca Di Donato, Luciano Gualà, and Stefano Leucci

Published in: LIPIcs, Volume 291, 12th International Conference on Fun with Algorithms (FUN 2024)


Abstract
We study the Solo-Chess problem, which has been introduced in [Aravind et al., FUN 2022]. This is a single-player variant of chess in which the player must clear all but one piece from the board via a sequence of captures, while ensuring that the number of captures performed by each piece does not exceed the piece’s budget. The time complexity of finding a winning sequence of captures has already been pinpointed for several combinations of piece types and initial budgets. We contribute to a better understanding of the computational landscape of Solo-Chess by closing two problems left open in [Aravind et al., FUN 2022]. Namely, we show that Solo-Chess is hard even when all pieces are restricted to be only rooks with budget exactly 2, or only knights with budget exactly 11.
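
For intuition on the rook-only instances, here is a small brute-force sketch (our own modeling of the rules, not taken from the paper) that searches all capture sequences on a tiny board while respecting each rook's remaining budget:

def rook_sees(a, b, occupied):
    """True if a rook on square a attacks square b: same row or column,
    with no piece strictly between them."""
    (r1, c1), (r2, c2) = a, b
    if r1 != r2 and c1 != c2:
        return False
    blockers = [
        (r, c) for (r, c) in occupied
        if (r, c) not in (a, b)
        and min(r1, r2) <= r <= max(r1, r2)
        and min(c1, c2) <= c <= max(c1, c2)
    ]
    return not blockers


def solvable(pieces):
    """pieces: frozenset of ((row, col), remaining budget) rook entries.
    True iff some capture sequence leaves exactly one piece on the board."""
    if len(pieces) == 1:
        return True
    occupied = {sq for sq, _ in pieces}
    for (sq_a, bud_a) in pieces:
        if bud_a == 0:
            continue
        for (sq_b, bud_b) in pieces:
            if sq_a != sq_b and rook_sees(sq_a, sq_b, occupied):
                nxt = (pieces - {(sq_a, bud_a), (sq_b, bud_b)}) | {(sq_b, bud_a - 1)}
                if solvable(frozenset(nxt)):
                    return True
    return False


# Three rooks on one row, each with budget exactly 2 (the uniform budget).
print(solvable(frozenset({((0, 0), 2), ((0, 1), 2), ((0, 2), 2)})))  # True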

Cite as

Davide Bilò, Luca Di Donato, Luciano Gualà, and Stefano Leucci. Uniform-Budget Solo Chess with Only Rooks or Only Knights Is Hard. In 12th International Conference on Fun with Algorithms (FUN 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 291, pp. 4:1-4:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.FUN.2024.4,
  author =	{Bil\`{o}, Davide and Di Donato, Luca and Gual\`{a}, Luciano and Leucci, Stefano},
  title =	{{Uniform-Budget Solo Chess with Only Rooks or Only Knights Is Hard}},
  booktitle =	{12th International Conference on Fun with Algorithms (FUN 2024)},
  pages =	{4:1--4:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-314-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{291},
  editor =	{Broder, Andrei Z. and Tamir, Tami},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2024.4},
  URN =		{urn:nbn:de:0030-drops-199121},
  doi =		{10.4230/LIPIcs.FUN.2024.4},
  annote =	{Keywords: solo chess, puzzle games, board games, NP-completeness}
}
Document
Swapping Mixed-Up Beers to Keep Them Cool

Authors: Davide Bilò, Maurizio Fiusco, Luciano Gualà, and Stefano Leucci

Published in: LIPIcs, Volume 291, 12th International Conference on Fun with Algorithms (FUN 2024)


Abstract
There was a mix-up in Escher’s bar and n customers sitting at the same table have each received a beer ordered by somebody else in the party. The drinks can be rearranged by swapping them in pairs, but the eccentric table shape only allows drinks to be exchanged between people sitting on opposite sides of the table. We study the problem of finding the minimum number of swaps needed so that each customer receives their desired beer before it gets warm. Formally, we consider the Colored Token Swapping problem on complete bipartite graphs. This problem is known to be solvable in polynomial time when all ordered drinks are different [Yamanaka et al., FUN 2014], but no results are known for the more general case in which multiple people in the party can order the same beer. We prove that Colored Token Swapping on complete bipartite graphs is NP-hard and that it is fixed-parameter tractable when parameterized by the number of distinct types of beer served by the bar.
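
The following toy sketch (our own, unrelated to the paper's hardness proof or FPT algorithm) makes the model concrete: a brute-force BFS over token configurations of a complete bipartite instance, where a move swaps the drinks of one person on each side of the table:

from collections import deque


def min_swaps(start, target, left_size):
    """start, target: tuples of beer colors on vertices 0..n-1; the first
    `left_size` vertices are one side of the table, the rest the other."""
    left = range(left_size)
    right = range(left_size, len(start))
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, d = queue.popleft()
        if state == target:
            return d
        for i in left:
            for j in right:
                s = list(state)
                s[i], s[j] = s[j], s[i]           # swap across the table
                s = tuple(s)
                if s not in seen:
                    seen.add(s)
                    queue.append((s, d + 1))
    return None                                    # different color multisets


# Two people per side; beers 'a' and 'b' all ended up on the wrong side.
print(min_swaps(("a", "b", "b", "a"), ("b", "a", "a", "b"), left_size=2))  # 2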

Cite as

Davide Bilò, Maurizio Fiusco, Luciano Gualà, and Stefano Leucci. Swapping Mixed-Up Beers to Keep Them Cool. In 12th International Conference on Fun with Algorithms (FUN 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 291, pp. 5:1-5:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.FUN.2024.5,
  author =	{Bil\`{o}, Davide and Fiusco, Maurizio and Gual\`{a}, Luciano and Leucci, Stefano},
  title =	{{Swapping Mixed-Up Beers to Keep Them Cool}},
  booktitle =	{12th International Conference on Fun with Algorithms (FUN 2024)},
  pages =	{5:1--5:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-314-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{291},
  editor =	{Broder, Andrei Z. and Tamir, Tami},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2024.5},
  URN =		{urn:nbn:de:0030-drops-199132},
  doi =		{10.4230/LIPIcs.FUN.2024.5},
  annote =	{Keywords: Colored Token Swapping, Complete Bipartite Graphs, Labeled Token Swapping, FPT Algorithms, NP-Hardness}
}
Document
Sparse Temporal Spanners with Low Stretch

Authors: Davide Bilò, Gianlorenzo D'Angelo, Luciano Gualà, Stefano Leucci, and Mirko Rossi

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
A temporal graph is an undirected graph G = (V,E) along with a function λ : E → ℕ^+ that assigns a time-label to each edge in E. A path in G such that the traversed time-labels are non-decreasing is called a temporal path. Accordingly, the distance from u to v is the minimum length (i.e., the number of edges) of a temporal path from u to v. A temporal α-spanner of G is a (temporal) subgraph H that preserves the distances between any pair of vertices in V, up to a multiplicative stretch factor of α. The size of H is measured as the number of its edges. In this work, we study the size-stretch trade-offs of temporal spanners. In particular, we show that temporal cliques always admit a temporal (2k-1)-spanner with Õ(kn^{1+1/k}) edges, where k > 1 is an integer parameter of choice. Choosing k = ⌊log n⌋, we obtain a temporal O(log n)-spanner with Õ(n) edges that has almost the same size (up to logarithmic factors) as the temporal spanner given in [Casteigts et al., JCSS 2021], which only preserves temporal connectivity. We then turn our attention to general temporal graphs. Since Ω(n²) edges might be needed by any connectivity-preserving temporal subgraph [Axiotis et al., ICALP'16], we focus on approximating distances from a single source. We show that Õ(n/log(1+ε)) edges suffice to obtain a stretch of (1+ε), for any small ε > 0. This result is essentially tight in the following sense: there are temporal graphs G for which any temporal subgraph preserving exact distances from a single source must use Ω(n²) edges. Interestingly enough, our analysis can be extended to the case of additive stretch, for which we prove an upper bound of Õ(n² / β) on the size of any temporal β-additive spanner, which we show to be tight up to polylogarithmic factors. Finally, we investigate how the lifetime of G, i.e., the number of its distinct time-labels, affects the trade-off between the size and the stretch of a temporal spanner.
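
To make the distance notion concrete, the sketch below (a brute-force baseline of our own, unrelated to the spanner constructions) computes the temporal distance with a BFS over (vertex, last label) states, so that only paths with non-decreasing time-labels are explored:

from collections import deque


def temporal_distance(edges, u, v):
    """edges: list of (a, b, label) of an undirected temporal graph; returns
    the minimum number of edges of a temporal path from u to v."""
    adj = {}
    for a, b, t in edges:
        adj.setdefault(a, []).append((b, t))
        adj.setdefault(b, []).append((a, t))
    if u == v:
        return 0
    seen = {(u, 0)}                       # (vertex, label of last edge used)
    queue = deque([((u, 0), 0)])
    while queue:
        (x, last), d = queue.popleft()
        for y, t in adj.get(x, []):
            if t >= last:                 # time-labels must be non-decreasing
                if y == v:
                    return d + 1
                if (y, t) not in seen:
                    seen.add((y, t))
                    queue.append(((y, t), d + 1))
    return float("inf")


# 1-2-3 is shorter in the underlying graph, but its labels (3, 1) decrease,
# so the temporal distance from 1 to 3 is 3, attained by 1-4-5-3.
print(temporal_distance([(1, 2, 3), (2, 3, 1), (1, 4, 1), (4, 5, 2), (5, 3, 3)], 1, 3))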

Cite as

Davide Bilò, Gianlorenzo D'Angelo, Luciano Gualà, Stefano Leucci, and Mirko Rossi. Sparse Temporal Spanners with Low Stretch. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 19:1-19:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.ESA.2022.19,
  author =	{Bil\`{o}, Davide and D'Angelo, Gianlorenzo and Gual\`{a}, Luciano and Leucci, Stefano and Rossi, Mirko},
  title =	{{Sparse Temporal Spanners with Low Stretch}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{19:1--19:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.19},
  URN =		{urn:nbn:de:0030-drops-169575},
  doi =		{10.4230/LIPIcs.ESA.2022.19},
  annote =	{Keywords: temporal spanners, temporal graphs, graph sparsification, approximate distances}
}
Document
Single-Source Shortest p-Disjoint Paths: Fast Computation and Sparse Preservers

Authors: Davide Bilò, Gianlorenzo D'Angelo, Luciano Gualà, Stefano Leucci, Guido Proietti, and Mirko Rossi

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
Let G be a directed graph with n vertices, m edges, and non-negative edge costs. Given G, a fixed source vertex s, and a positive integer p, we consider the problem of computing, for each vertex t ≠ s, p edge-disjoint paths of minimum total cost from s to t in G. Suurballe and Tarjan [Networks, 1984] solved the above problem for p = 2 by designing an O(m + n log n) time algorithm which also computes a sparse single-source 2-multipath preserver, i.e., a subgraph containing 2 edge-disjoint paths of minimum total cost from s to every other vertex of G. The case p ≥ 3 was left as an open problem. We study the general problem (p ≥ 2) and prove that any graph admits a sparse single-source p-multipath preserver with p(n-1) edges. This size is optimal since the in-degree of each non-root vertex v must be at least p. Moreover, we design an algorithm that requires O(pn² (p + log n)) time to compute both p edge-disjoint paths of minimum total cost from the source to all other vertices and an optimal-size single-source p-multipath preserver. The running time of our algorithm outperforms that of a natural approach that solves n-1 single-pair instances using the well-known successive shortest paths algorithm by a factor of Θ(m/(np)) and is asymptotically near optimal if p = O(1) and m = Θ(n²). Our results extend naturally to the case of p vertex-disjoint paths.
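
For a single target t, the classical baseline mentioned above amounts to a min-cost flow of value p with unit edge capacities; the sketch below illustrates this using networkx (an assumed dependency, with an artificial "super_src" node of our own) and is not the paper's O(pn²(p + log n))-time algorithm:

import networkx as nx


def min_cost_p_disjoint(edges, s, t, p):
    """Minimum total cost of p edge-disjoint s-t paths, or None if fewer
    than p such paths exist.  edges: iterable of (u, v, cost), directed."""
    g = nx.DiGraph()
    for u, v, c in edges:
        g.add_edge(u, v, capacity=1, weight=c)        # unit capacity: disjointness
    g.add_edge("super_src", s, capacity=p, weight=0)  # caps the flow value at p
    flow = nx.max_flow_min_cost(g, "super_src", t)
    if sum(flow["super_src"].values()) < p:
        return None
    return nx.cost_of_flow(g, flow)


edges = [("s", "a", 1), ("a", "t", 1), ("s", "b", 2), ("b", "t", 2)]
print(min_cost_p_disjoint(edges, "s", "t", 2))        # 6: both paths must be used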

Cite as

Davide Bilò, Gianlorenzo D'Angelo, Luciano Gualà, Stefano Leucci, Guido Proietti, and Mirko Rossi. Single-Source Shortest p-Disjoint Paths: Fast Computation and Sparse Preservers. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 12:1-12:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.STACS.2022.12,
  author =	{Bil\`{o}, Davide and D'Angelo, Gianlorenzo and Gual\`{a}, Luciano and Leucci, Stefano and Proietti, Guido and Rossi, Mirko},
  title =	{{Single-Source Shortest p-Disjoint Paths: Fast Computation and Sparse Preservers}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{12:1--12:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.12},
  URN =		{urn:nbn:de:0030-drops-158221},
  doi =		{10.4230/LIPIcs.STACS.2022.12},
  annote =	{Keywords: multipath spanners, graph sparsification, edge-disjoint paths, min-cost flow}
}
Document
Resilient Level Ancestor, Bottleneck, and Lowest Common Ancestor Queries in Dynamic Trees

Authors: Luciano Gualà, Stefano Leucci, and Isabella Ziccardi

Published in: LIPIcs, Volume 212, 32nd International Symposium on Algorithms and Computation (ISAAC 2021)


Abstract
We study the problem of designing a resilient data structure maintaining a tree under the Faulty-RAM model [Finocchi and Italiano, STOC'04] in which up to δ memory words can be corrupted by an adversary. Our data structure stores a rooted dynamic tree that can be updated via the addition of new leaves, requires linear size, and supports resilient (weighted) level ancestor queries, lowest common ancestor queries, and bottleneck vertex queries in O(δ) worst-case time per operation.

Cite as

Luciano Gualà, Stefano Leucci, and Isabella Ziccardi. Resilient Level Ancestor, Bottleneck, and Lowest Common Ancestor Queries in Dynamic Trees. In 32nd International Symposium on Algorithms and Computation (ISAAC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 212, pp. 66:1-66:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{guala_et_al:LIPIcs.ISAAC.2021.66,
  author =	{Gual\`{a}, Luciano and Leucci, Stefano and Ziccardi, Isabella},
  title =	{{Resilient Level Ancestor, Bottleneck, and Lowest Common Ancestor Queries in Dynamic Trees}},
  booktitle =	{32nd International Symposium on Algorithms and Computation (ISAAC 2021)},
  pages =	{66:1--66:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-214-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{212},
  editor =	{Ahn, Hee-Kap and Sadakane, Kunihiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2021.66},
  URN =		{urn:nbn:de:0030-drops-154998},
  doi =		{10.4230/LIPIcs.ISAAC.2021.66},
  annote =	{Keywords: level ancestor queries, lowest common ancestor queries, bottleneck vertex queries, resilient data structures, faulty-RAM model, dynamic trees}
}
Document
Cutting Bamboo down to Size

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, Guido Proietti, and Giacomo Scornavacca

Published in: LIPIcs, Volume 157, 10th International Conference on Fun with Algorithms (FUN 2021) (2020)


Abstract
This paper studies the problem of programming a robotic panda gardener to keep a bamboo garden from obstructing the view of the lake by your house. The garden consists of n bamboo stalks with known daily growth rates and the gardener can cut at most one bamboo per day. As a computer scientist, you found out that this problem has already been formalized in [Gąsieniec et al., SOFSEM'17] as the Bamboo Garden Trimming (BGT) problem, where the goal is that of computing a perpetual schedule (i.e., the sequence of bamboos to cut) for the robotic gardener to follow in order to minimize the makespan, i.e., the maximum height ever reached by a bamboo. Two natural strategies are Reduce-Max and Reduce-Fastest(x). Reduce-Max trims the tallest bamboo of the day, while Reduce-Fastest(x) trims the fastest growing bamboo among the ones that are taller than x. It is known that Reduce-Max and Reduce-Fastest(x) achieve a makespan of O(log n) and 4 for the best choice of x = 2, respectively. We prove the first constant upper bound of 9 for Reduce-Max and improve the one for Reduce-Fastest(x) to (3+√5)/2 < 2.62 for x = 1+1/√5. Another critical aspect stems from the fact that your robotic gardener has a limited amount of processing power and memory. It is then important for the algorithm to be able to quickly determine the next bamboo to cut while requiring at most linear space. We formalize this aspect as the problem of designing a Trimming Oracle data structure, and we provide three efficient Trimming Oracles implementing different perpetual schedules, including those produced by Reduce-Max and Reduce-Fastest(x).
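
The two strategies are easy to simulate. The sketch below (illustrative only, assuming growth rates normalized to sum to 1, as is customary for BGT) plays each strategy for a fixed number of days and reports the observed makespan:

def simulate(rates, pick, days=10_000):
    heights = [0.0] * len(rates)
    makespan = 0.0
    for _ in range(days):
        heights = [h + r for h, r in zip(heights, rates)]  # daily growth
        makespan = max(makespan, max(heights))
        i = pick(heights, rates)
        if i is not None:
            heights[i] = 0.0                               # trim one bamboo per day
    return makespan


def reduce_max(heights, rates):
    """Cut the tallest bamboo of the day."""
    return max(range(len(heights)), key=lambda i: heights[i])


def reduce_fastest(x):
    """Cut the fastest-growing bamboo among those of height at least x."""
    def pick(heights, rates):
        tall = [i for i, h in enumerate(heights) if h >= x]
        return max(tall, key=lambda i: rates[i]) if tall else None
    return pick


rates = [0.5, 0.25, 0.125, 0.125]            # normalized: rates sum to 1
print(simulate(rates, reduce_max))
print(simulate(rates, reduce_fastest(1 + 1 / 5 ** 0.5)))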

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, Guido Proietti, and Giacomo Scornavacca. Cutting Bamboo down to Size. In 10th International Conference on Fun with Algorithms (FUN 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 157, pp. 5:1-5:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.FUN.2021.5,
  author =	{Bil\`{o}, Davide and Gual\`{a}, Luciano and Leucci, Stefano and Proietti, Guido and Scornavacca, Giacomo},
  title =	{{Cutting Bamboo down to Size}},
  booktitle =	{10th International Conference on Fun with Algorithms (FUN 2021)},
  pages =	{5:1--5:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-145-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{157},
  editor =	{Farach-Colton, Martin and Prencipe, Giuseppe and Uehara, Ryuhei},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2021.5},
  URN =		{urn:nbn:de:0030-drops-127663},
  doi =		{10.4230/LIPIcs.FUN.2021.5},
  annote =	{Keywords: bamboo garden trimming, trimming oracles, approximation algorithms, pinwheel scheduling}
}
Document
Dual-Mode Greedy Algorithms Can Save Energy

Authors: Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, Paolo Penna, and Guido Proietti

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
In real-world applications, important resources like energy are saved by deliberately using so-called low-cost operations that are less reliable. Some of these approaches are based on a dual mode technology where it is possible to choose between high-energy operations (always correct) and low-energy operations (prone to errors), and thus enable trading energy for correctness. In this work we initiate the study of algorithms for solving optimization problems that in their computation are allowed to choose between two types of operations: high-energy comparisons (always correct but expensive) and low-energy comparisons (cheaper but prone to errors). For the errors in low-energy comparisons, we assume the persistent setting, which usually makes it impossible to achieve optimal solutions without high-energy comparisons. We propose to study a natural complexity measure which accounts for the number of operations of either type separately. We provide a new family of algorithms which, for a fairly large class of maximization problems, return a constant approximation using only polylogarithmically many high-energy comparisons and only O(n log n) low-energy comparisons. This result applies to the class of p-extendible systems [Mestre, 2006], which includes several NP-hard problems and matroids as a special case (p = 1). These algorithmic solutions relate to some fundamental aspects studied earlier in different contexts: (i) the approximation guarantee when only ordinal information is available to the algorithm; (ii) the fact that even such ordinal information may be erroneous because of low-energy comparisons; and (iii) the ability to approximately sort a sequence of elements when comparisons are subject to persistent errors. Finally, our main result is quite general and can be parametrized and adapted to other error models.

Cite as

Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, Paolo Penna, and Guido Proietti. Dual-Mode Greedy Algorithms Can Save Energy. In 30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 64:1-64:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{geissmann_et_al:LIPIcs.ISAAC.2019.64,
  author =	{Geissmann, Barbara and Leucci, Stefano and Liu, Chih-Hung and Penna, Paolo and Proietti, Guido},
  title =	{{Dual-Mode Greedy Algorithms Can Save Energy}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{64:1--64:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.64},
  URN =		{urn:nbn:de:0030-drops-115604},
  doi =		{10.4230/LIPIcs.ISAAC.2019.64},
  annote =	{Keywords: matroids, p-extendible systems, greedy algorithm, approximation algorithms, high-low energy}
}
Document
Optimal Sorting with Persistent Comparison Errors

Authors: Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
We consider the problem of sorting n elements in the case of persistent comparison errors. In this problem, each comparison between two elements can be wrong with some fixed (small) probability p, and comparisons cannot be repeated (Braverman and Mossel, SODA'08). Sorting perfectly in this model is impossible, and the objective is to minimize the dislocation of each element in the output sequence, that is, the difference between its true rank and its position. Existing lower bounds for this problem show that no algorithm can guarantee, with high probability, maximum dislocation and total dislocation better than Ω(log n) and Ω(n), respectively, regardless of its running time. In this paper, we present the first O(n log n)-time sorting algorithm that guarantees both O(log n) maximum dislocation and O(n) total dislocation with high probability. This settles the time complexity of this problem and shows that comparison errors do not increase its computational difficulty: a sequence with the best possible dislocation can be obtained in O(n log n) time and, even without comparison errors, Ω(n log n) time is necessary to guarantee such dislocation bounds. In order to achieve this optimality result, we solve two sub-problems in the persistent comparison errors model, and the respective methods have their own merits for further application. One is how to locate a position in which to insert an element into an almost-sorted sequence having O(log n) maximum dislocation in such a way that the dislocation of the resulting sequence will still be O(log n). The other is how to simultaneously insert m elements into an almost-sorted sequence of m different elements, such that the resulting sequence of 2m elements remains almost sorted.
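
The dislocation measures used above are straightforward to compute for a given output sequence; a small helper (our own, for illustration only, assuming distinct elements) is sketched below:

def dislocations(output):
    """Return (maximum dislocation, total dislocation) of `output`, i.e. the
    largest and the summed |true rank - position| over all elements."""
    true_rank = {x: r for r, x in enumerate(sorted(output))}
    d = [abs(true_rank[x] - pos) for pos, x in enumerate(output)]
    return max(d), sum(d)


# An almost-sorted sequence: every element sits next to its true position.
print(dislocations([1, 0, 3, 2, 5, 4]))   # (1, 6)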

Cite as

Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna. Optimal Sorting with Persistent Comparison Errors. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{geissmann_et_al:LIPIcs.ESA.2019.49,
  author =	{Geissmann, Barbara and Leucci, Stefano and Liu, Chih-Hung and Penna, Paolo},
  title =	{{Optimal Sorting with Persistent Comparison Errors}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.49},
  URN =		{urn:nbn:de:0030-drops-111706},
  doi =		{10.4230/LIPIcs.ESA.2019.49},
  annote =	{Keywords: approximate sorting, comparison errors, persistent errors}
}
Document
Resilient Dictionaries for Randomly Unreliable Memory

Authors: Stefano Leucci, Chih-Hung Liu, and Simon Meierhans

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
We study the problem of designing a dictionary data structure that is resilient to memory corruptions. Our error model is a variation of the faulty RAM model in which, except for a constant amount of definitely reliable memory, each memory word is randomly unreliable with a probability p < 1/2, and the locations of the unreliable words are unknown to the algorithm. An adversary observes the whole memory and can, at any time, arbitrarily corrupt (i.e., modify) the contents of one or more unreliable words. Our dictionary has capacity n, stores N < n keys in the optimal O(N) amount of space, supports insertions and deletions in O(log n) amortized time, and allows searching for a key in O(log n) worst-case time. With a global probability of at least 1 - 1/n, all possible search operations are guaranteed to return the correct answer w.r.t. the set of uncorrupted keys. The closest related results are the ones of Finocchi et al. [Irene Finocchi et al., 2009] and Brodal et al. [Brodal et al., 2007] on the faulty RAM model, in which all but O(1) memory is unreliable. There, if an upper bound δ on the number of corruptions is known in advance, all dictionary operations can be implemented in Θ(log n + δ) amortized time, thus trading resiliency for speed as soon as δ = ω(log n). Our construction does not need to know the value of δ in advance and remains fast and effective even when up to a constant fraction of the available memory is corrupted. Our techniques can be immediately extended to implement other data types (e.g., associative containers and priority queues), which can then be used as a building block in the design of other resilient algorithms. For example, we are able to solve the resilient sorting problem in our model using O(n log n) time.

Cite as

Stefano Leucci, Chih-Hung Liu, and Simon Meierhans. Resilient Dictionaries for Randomly Unreliable Memory. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 70:1-70:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{leucci_et_al:LIPIcs.ESA.2019.70,
  author =	{Leucci, Stefano and Liu, Chih-Hung and Meierhans, Simon},
  title =	{{Resilient Dictionaries for Randomly Unreliable Memory}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{70:1--70:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.70},
  URN =		{urn:nbn:de:0030-drops-111911},
  doi =		{10.4230/LIPIcs.ESA.2019.70},
  annote =	{Keywords: resilient dictionary, unreliable memory, faulty RAM}
}
Document
Tracks from hell - when finding a proof may be easier than checking it

Authors: Matteo Almanza, Stefano Leucci, and Alessandro Panconesi

Published in: LIPIcs, Volume 100, 9th International Conference on Fun with Algorithms (FUN 2018)


Abstract
We consider the popular smartphone game Trainyard: a puzzle game that requires the player to lay down tracks in order to route colored trains from departure stations to suitable arrival stations. While it is already known [Almanza et al., FUN 2016] that the problem of finding a solution to a given Trainyard instance (i.e., game level) is NP-hard, determining the computational complexity of checking whether a candidate solution (i.e., a track layout) solves the level was left as an open problem. In this paper we prove that this verification problem is PSPACE-complete, thus implying that Trainyard players might not only have a hard time finding solutions to a given level, but they might even be unable to efficiently recognize them.

Cite as

Matteo Almanza, Stefano Leucci, and Alessandro Panconesi. Tracks from hell - when finding a proof may be easier than checking it. In 9th International Conference on Fun with Algorithms (FUN 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 100, pp. 4:1-4:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{almanza_et_al:LIPIcs.FUN.2018.4,
  author =	{Almanza, Matteo and Leucci, Stefano and Panconesi, Alessandro},
  title =	{{Tracks from hell - when finding a proof may be easier than checking it}},
  booktitle =	{9th International Conference on Fun with Algorithms (FUN 2018)},
  pages =	{4:1--4:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-067-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{100},
  editor =	{Ito, Hiro and Leonardi, Stefano and Pagli, Linda and Prencipe, Giuseppe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2018.4},
  URN =		{urn:nbn:de:0030-drops-87954},
  doi =		{10.4230/LIPIcs.FUN.2018.4},
  annote =	{Keywords: puzzle games, solitaire games, Trainyard, verification}
}
Document
On the Complexity of Two Dots for Narrow Boards and Few Colors

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, and Neeldhara Misra

Published in: LIPIcs, Volume 100, 9th International Conference on Fun with Algorithms (FUN 2018)


Abstract
Two Dots is a popular single-player puzzle video game for iOS and Android. A level of this game consists of a grid of colored dots. The player connects two or more adjacent dots, removing them from the grid and causing the remaining dots to fall, as if influenced by gravity. One special move, which is frequently a game-changer, consists of connecting a cycle of dots: this removes all the dots of the given color from the grid. The goal is to remove a certain number of dots of each color using a limited number of moves. The computational complexity of Two Dots has already been addressed in [Misra, FUN 2016], where it has been shown that the general version of the problem is NP-complete. Unfortunately, the known reductions produce Two Dots levels having both a large number of colors and many columns. This does not completely match the spirit of the game, where, on the one hand, only a few colors are allowed, and on the other hand, the grid of the game has only a constant number of columns. In this paper, we partially fill this gap by assessing the computational complexity of Two Dots instances having a small number of colors or columns. More precisely, we show that Two Dots is hard even for instances involving only 3 colors or 2 columns. As a contrast, we also prove that the problem can be solved in polynomial time on single-column instances with a constant number of goals.
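
The board mechanics described above are simple to model; the toy sketch below (our own encoding, not part of the paper) shows how removing a set of dots makes the remaining dots fall within their columns:

def apply_gravity(columns, removed):
    """columns: list of columns, each a list of colors with index 0 at the
    bottom; removed: set of (column, row) pairs.  Surviving dots fall down."""
    return [
        [dot for row, dot in enumerate(col) if (c, row) not in removed]
        for c, col in enumerate(columns)
    ]


board = [["R", "G", "R"], ["G", "G", "B"]]              # two columns, bottom first
print(apply_gravity(board, {(0, 1), (1, 0), (1, 1)}))   # [['R', 'R'], ['B']]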

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, and Neeldhara Misra. On the Complexity of Two Dots for Narrow Boards and Few Colors. In 9th International Conference on Fun with Algorithms (FUN 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 100, pp. 7:1-7:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.FUN.2018.7,
  author =	{Bil\`{o}, Davide and Gual\`{a}, Luciano and Leucci, Stefano and Misra, Neeldhara},
  title =	{{On the Complexity of Two Dots for Narrow Boards and Few Colors}},
  booktitle =	{9th International Conference on Fun with Algorithms (FUN 2018)},
  pages =	{7:1--7:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-067-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{100},
  editor =	{Ito, Hiro and Leonardi, Stefano and Pagli, Linda and Prencipe, Giuseppe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2018.7},
  URN =		{urn:nbn:de:0030-drops-87988},
  doi =		{10.4230/LIPIcs.FUN.2018.7},
  annote =	{Keywords: puzzle, NP-complete, perfect information, combinatorial game theory}
}
Document
On the PSPACE-completeness of Peg Duotaire and other Peg-Jumping Games

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, Guido Proietti, and Mirko Rossi

Published in: LIPIcs, Volume 100, 9th International Conference on Fun with Algorithms (FUN 2018)


Abstract
Peg Duotaire is a two-player version of the classical puzzle called Peg Solitaire. Players take turns making peg-jumping moves, and the first player who is left without available moves loses the game. Peg Duotaire has been studied from a combinatorial point of view and two versions of the game have been considered, namely the single- and the multi-hop variant. On the other hand, understanding the computational complexity of the game is explicitly mentioned as an open problem in the literature. We close this problem and prove that both versions of the game are PSPACE-complete. We also prove the PSPACE-completeness of other peg-jumping games where two players control pegs of different colors.

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, Guido Proietti, and Mirko Rossi. On the PSPACE-completeness of Peg Duotaire and other Peg-Jumping Games. In 9th International Conference on Fun with Algorithms (FUN 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 100, pp. 8:1-8:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.FUN.2018.8,
  author =	{Bil\`{o}, Davide and Gual\`{a}, Luciano and Leucci, Stefano and Proietti, Guido and Rossi, Mirko},
  title =	{{On the PSPACE-completeness of Peg Duotaire and other Peg-Jumping Games}},
  booktitle =	{9th International Conference on Fun with Algorithms (FUN 2018)},
  pages =	{8:1--8:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-067-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{100},
  editor =	{Ito, Hiro and Leonardi, Stefano and Pagli, Linda and Prencipe, Giuseppe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2018.8},
  URN =		{urn:nbn:de:0030-drops-87994},
  doi =		{10.4230/LIPIcs.FUN.2018.8},
  annote =	{Keywords: peg duotaire, pspace-completeness, peg solitaire, two-player games}
}
Document
Efficient Oracles and Routing Schemes for Replacement Paths

Authors: Davide Bilò, Keerti Choudhary, Luciano Gualà, Stefano Leucci, Merav Parter, and Guido Proietti

Published in: LIPIcs, Volume 96, 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)


Abstract
Real life graphs and networks are prone to failure of nodes (vertices) and links (edges). In particular, for a pair of nodes s and t and a failing edge e in an n-vertex unweighted graph G = (V(G), E(G)), the replacement path π_{G-e}(s,t) is a shortest s-t path that avoids e. In this paper we present several efficient constructions that, for every (s,t) ∈ S × T, where S, T ⊆ V(G), and every e ∈ E(G), maintain the collection of all π_{G-e}(s,t), either implicitly (i.e., through compact data structures a.k.a. distance sensitivity oracles (DSO)), or explicitly (i.e., through sparse subgraphs a.k.a. fault-tolerant preservers (FTP)). More precisely, we provide the following results: (1) DSO: For every S, T ⊆ V(G), we construct a DSO for maintaining S × T distances under single edge (or vertex) faults. This DSO has size Õ(n√(|S||T|)) and query time of O(√(|S||T|)). At the expense of having quasi-polynomial query time, the size of the oracle can be improved to Õ(n|S| + |T|√(|S|n)), which is optimal for |T| = Ω(√(n|S|)). When |T| = Ω(n^{3/4} |S|^{1/4}), the construction can be further refined in order to get a polynomial query time. We also consider the approximate additive setting, and show a family of DSOs that exhibits a trade-off between the additive stretch and the size of the oracle. Finally, for the meaningful single-source case, the above result is complemented by a lower bound conditioned on the Set-Intersection conjecture. This lower bound establishes a separation between the oracle and the subgraph settings. (2) FTP: We show the construction of a path-reporting DSO of size Õ(n^{4/3}(|S||T|)^{1/3}) reporting π_{G-e}(s,t) in O(|π_{G-e}(s,t)| + (n|S||T|)^{1/3}) time. Such a DSO can be transformed into an FTP having the same size, and moreover it can be elaborated in order to make it optimal (up to a poly-logarithmic factor) both in space and query time for the special case in which T = V(G). Our FTP improves over previous constructions when |T| = O(√(|S|n)) (up to inverse poly-logarithmic factors). (3) Routing and Labeling Schemes: For the well-studied single-source setting, we present a novel routing scheme that allows routing messages on π_{G-e}(s,t) by using edge labels and routing tables of size Õ(√n), and a header message of poly-logarithmic size. We also present a labeling scheme for this setting which is optimal in space up to constant factors.

Cite as

Davide Bilò, Keerti Choudhary, Luciano Gualà, Stefano Leucci, Merav Parter, and Guido Proietti. Efficient Oracles and Routing Schemes for Replacement Paths. In 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 96, pp. 13:1-13:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.STACS.2018.13,
  author =	{Bil\`{o}, Davide and Choudhary, Keerti and Gual\`{a}, Luciano and Leucci, Stefano and Parter, Merav and Proietti, Guido},
  title =	{{Efficient Oracles and Routing Schemes for Replacement Paths}},
  booktitle =	{35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)},
  pages =	{13:1--13:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-062-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{96},
  editor =	{Niedermeier, Rolf and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2018.13},
  URN =		{urn:nbn:de:0030-drops-85249},
  doi =		{10.4230/LIPIcs.STACS.2018.13},
  annote =	{Keywords: Fault tolerant, Shortest path, Oracle, Routing}
}
Document
Optimal Dislocation with Persistent Errors in Subquadratic Time

Authors: Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna

Published in: LIPIcs, Volume 96, 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)


Abstract
We study the problem of sorting N elements in presence of persistent errors in comparisons: In this classical model, each comparison between two elements is wrong independently with some probability p, but repeating the same comparison always gives the same result. The best known algorithms for this problem have running time O(N^2) and achieve an optimal maximum dislocation of O(log N) for constant error probability. Note that no algorithm can achieve dislocation o(log N), regardless of its running time. In this work we present the first subquadratic time algorithm with optimal maximum dislocation: Our algorithm runs in Õ(N^{3/2}) time and guarantees O(log N) maximum dislocation with high probability. Though the first version of our algorithm is randomized, it can be derandomized by extracting the necessary random bits from the results of the comparisons (errors).

Cite as

Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna. Optimal Dislocation with Persistent Errors in Subquadratic Time. In 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 96, pp. 36:1-36:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{geissmann_et_al:LIPIcs.STACS.2018.36,
  author =	{Geissmann, Barbara and Leucci, Stefano and Liu, Chih-Hung and Penna, Paolo},
  title =	{{Optimal Dislocation with Persistent Errors in Subquadratic Time}},
  booktitle =	{35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)},
  pages =	{36:1--36:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-062-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{96},
  editor =	{Niedermeier, Rolf and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2018.36},
  URN =		{urn:nbn:de:0030-drops-85266},
  doi =		{10.4230/LIPIcs.STACS.2018.36},
  annote =	{Keywords: sorting, recurrent comparison errors, maximum dislocation}
}
Document
An Improved Algorithm for Computing All the Best Swap Edges of a Tree Spanner

Authors: Davide Bilò, Feliciano Colella, Luciano Gualà, Stefano Leucci, and Guido Proietti

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
A tree σ-spanner of a positively real-weighted n-vertex and m-edge undirected graph G is a spanning tree T of G which approximately preserves (i.e., up to a multiplicative stretch factor σ) distances in G. Tree spanners with provably good stretch factors find applications in communication networks, distributed systems, and network design. However, finding an optimal or even a good tree spanner is a very hard computational task. Thus, if one has to face a transient edge failure in T, the overall effort that has to be afforded to rebuild a new tree spanner (i.e., computational costs, set-up of new links, updating of the routing tables, etc.) can be rather prohibitive. To circumvent this drawback, an effective alternative is that of associating with each tree edge a best possible (in terms of resulting stretch) swap edge -- a well-established approach in the literature for several other tree topologies. Correspondingly, the problem of computing all the best swap edges of a tree spanner is a challenging algorithmic problem, since solving it efficiently means to exploit the structure of shortest paths not only in G, but also in all the scenarios in which an edge of T has failed. For this problem we provide a very efficient solution, running in O(n^2 log^4 n) time, which drastically improves (almost by a quadratic factor in n in dense graphs!) on the previously known best result.

Cite as

Davide Bilò, Feliciano Colella, Luciano Gualà, Stefano Leucci, and Guido Proietti. An Improved Algorithm for Computing All the Best Swap Edges of a Tree Spanner. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 14:1-14:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.ISAAC.2017.14,
  author =	{Bil\`{o}, Davide and Colella, Feliciano and Gual\`{a}, Luciano and Leucci, Stefano and Proietti, Guido},
  title =	{{An Improved Algorithm for Computing All the Best Swap Edges of a Tree Spanner}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{14:1--14:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.14},
  URN =		{urn:nbn:de:0030-drops-82663},
  doi =		{10.4230/LIPIcs.ISAAC.2017.14},
  annote =	{Keywords: Transient edge failure, Swap algorithm, Tree spanner}
}
Document
Sorting with Recurrent Comparison Errors

Authors: Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
We present a sorting algorithm for the case of recurrent random comparison errors. The algorithm essentially achieves, simultaneously, the good properties of previous algorithms for sorting n distinct elements in this model. In particular, it runs in O(n^2) time, the maximum dislocation of the elements in the output is O(log n), while the total dislocation is O(n). These guarantees are the best possible since we prove that even randomized algorithms cannot achieve o(log n) maximum dislocation with high probability, or o(n) total dislocation in expectation, regardless of their running time.

Cite as

Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna. Sorting with Recurrent Comparison Errors. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 38:1-38:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{geissmann_et_al:LIPIcs.ISAAC.2017.38,
  author =	{Geissmann, Barbara and Leucci, Stefano and Liu, Chih-Hung and Penna, Paolo},
  title =	{{Sorting with Recurrent Comparison Errors}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{38:1--38:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.38},
  URN =		{urn:nbn:de:0030-drops-82652},
  doi =		{10.4230/LIPIcs.ISAAC.2017.38},
  annote =	{Keywords: sorting, recurrent comparison error, maximum and total dislocation}
}
Document
Compact and Fast Sensitivity Oracles for Single-Source Distances

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
Let s denote a distinguished source vertex of a non-negatively real-weighted undirected graph G with n vertices and m edges. In this paper we present two efficient single-source approximate-distance sensitivity oracles, namely compact data structures which are able to quickly report an approximate (by a multiplicative stretch factor) distance from s to any node of G following the failure of any edge in G. More precisely, we first present a sensitivity oracle of size O(n) which is able to report 2-approximate distances from the source in O(1) time. Then, we further develop our construction by building, for any 0 < ε < 1, another sensitivity oracle of size O(n·(1/ε)·log(1/ε)), which is able to report a (1+ε)-approximate distance from s to any vertex of G in O(log n·(1/ε)·log(1/ε)) time. Thus, this latter oracle is essentially optimal as far as size and stretch are concerned, and it only asks for a logarithmic query time. Finally, our results are complemented with a space lower bound for the related class of single-source additively-stretched sensitivity oracles, which is helpful to realize the hardness of designing compact oracles of this type.

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti. Compact and Fast Sensitivity Oracles for Single-Source Distances. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.ESA.2016.13,
  author =	{Bilo, Davide and Guala, Luciano and Leucci, Stefano and Proietti, Guido},
  title =	{{Compact and Fast Sensitivity Oracles for Single-Source Distances}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.13},
  URN =		{urn:nbn:de:0030-drops-63640},
  doi =		{10.4230/LIPIcs.ESA.2016.13},
  annote =	{Keywords: fault-tolerant shortest-path tree, approximate distance, distance sensitivity oracle}
}
Document
Trainyard is NP-hard

Authors: Matteo Almanza, Stefano Leucci, and Alessandro Panconesi

Published in: LIPIcs, Volume 49, 8th International Conference on Fun with Algorithms (FUN 2016)


Abstract
Recently, due to the widespread diffusion of smart-phones, mobile puzzle games have experienced a huge increase in their popularity. A successful puzzle has to be both captivating and challenging, and it has been suggested that these features are somehow related to their computational complexity. Indeed, many puzzle games - such as Mah-Jongg, Sokoban, Candy Crush, and 2048, to name a few - are known to be NP-hard. In this paper we consider Trainyard: a popular mobile puzzle game whose goal is to get colored trains from their initial stations to suitable destination stations. We prove that the problem of determining whether there exists a solution to a given Trainyard level is NP-hard. We also provide an implementation of our hardness reduction (see http://trainyard.isnphard.com).

Cite as

Matteo Almanza, Stefano Leucci, and Alessandro Panconesi. Trainyard is NP-hard. In 8th International Conference on Fun with Algorithms (FUN 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 49, pp. 2:1-2:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{almanza_et_al:LIPIcs.FUN.2016.2,
  author =	{Almanza, Matteo and Leucci, Stefano and Panconesi, Alessandro},
  title =	{{Trainyard is NP-hard}},
  booktitle =	{8th International Conference on Fun with Algorithms (FUN 2016)},
  pages =	{2:1--2:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-005-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{49},
  editor =	{Demaine, Erik D. and Grandoni, Fabrizio},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2016.2},
  URN =		{urn:nbn:de:0030-drops-58796},
  doi =		{10.4230/LIPIcs.FUN.2016.2},
  annote =	{Keywords: Complexity of Games, Trainyard}
}
Document
Large Peg-Army Maneuvers

Authors: Luciano Gualà, Stefano Leucci, Emanuele Natale, and Roberto Tauraso

Published in: LIPIcs, Volume 49, 8th International Conference on Fun with Algorithms (FUN 2016)


Abstract
Despite its long history, the classical game of peg solitaire continues to attract the attention of the scientific community. In this paper, we consider two problems with an algorithmic flavour which are related to this game, namely Solitaire-Reachability and Solitaire-Army. In the first one, we show that deciding whether there is a sequence of jumps which allows a given initial configuration of pegs to reach a target position is NP-complete. Regarding Solitaire-Army, the aim is to successfully deploy an army of pegs in a given region of the board in order to reach a target position. By solving an auxiliary problem with relaxed constraints, we are able to answer some open questions raised by Csakany and Juhasz (Mathematics Magazine, 2000).
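
For reference, the peg-jumping move underlying both problems can be modeled in a few lines; the sketch below (our own, ignoring board boundaries, as in the army setting on the plane) enumerates the successor configurations of a given set of pegs:

def jumps(pegs):
    """pegs: set of occupied (x, y) holes; yields every configuration that is
    reachable with a single jump (the jumped-over peg is removed)."""
    for (x, y) in pegs:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            over, dest = (x + dx, y + dy), (x + 2 * dx, y + 2 * dy)
            if over in pegs and dest not in pegs:
                yield (pegs - {(x, y), over}) | {dest}


for nxt in jumps({(0, 0), (1, 0)}):
    print(nxt)    # {(2, 0)} and {(-1, 0)}: either peg can jump over the other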

Cite as

Luciano Gualà, Stefano Leucci, Emanuele Natale, and Roberto Tauraso. Large Peg-Army Maneuvers. In 8th International Conference on Fun with Algorithms (FUN 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 49, pp. 18:1-18:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{guala_et_al:LIPIcs.FUN.2016.18,
  author =	{Gual\`{a}, Luciano and Leucci, Stefano and Natale, Emanuele and Tauraso, Roberto},
  title =	{{Large Peg-Army Maneuvers}},
  booktitle =	{8th International Conference on Fun with Algorithms (FUN 2016)},
  pages =	{18:1--18:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-005-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{49},
  editor =	{Demaine, Erik D. and Grandoni, Fabrizio},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2016.18},
  URN =		{urn:nbn:de:0030-drops-58709},
  doi =		{10.4230/LIPIcs.FUN.2016.18},
  annote =	{Keywords: Complexity of Games, Solitaire Army}
}
Document
Multiple-Edge-Fault-Tolerant Approximate Shortest-Path Trees

Authors: Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
Let G be an n-node and m-edge positively real-weighted undirected graph. For any given integer f ≥ 1, we study the problem of designing a sparse f-edge-fault-tolerant (f-EFT) σ-approximate single-source shortest-path tree (σ-ASPT), namely a subgraph of G having as few edges as possible and which, following the failure of a set F of at most f edges in G, contains paths from a fixed source that are stretched at most by a factor of σ. In this respect, we provide an algorithm that efficiently computes an f-EFT (2|F|+1)-ASPT of size O(fn). Our structure improves on a previous related construction designed for unweighted graphs, having the same size but guaranteeing a larger stretch factor of 3(f+1), plus an additive term of (f+1) log(n). Then, we show how to convert our structure into an efficient f-EFT single-source distance oracle (SSDO) that can be built in Õ(fm) time, has size O(fn log^2(n)), and is able to report, after the failure of the edge set F, in O(|F|^2 log^2(n)) time a (2|F|+1)-approximate distance from the source to any node, and a corresponding approximate path in the same amount of time plus the path's size. Such an oracle is obtained by handling another fundamental problem, namely that of updating a minimum spanning forest (MSF) of G after a batch of k simultaneous edge modifications (i.e., edge insertions, deletions, and weight changes) is performed. For this problem, we build in O(m log^3(n)) time a sensitivity oracle of size O(m log^2(n)) that reports in O(k^2 log^2(n)) time the (at most 2k) edges either exiting from or entering into the MSF. As a result of independent interest, it is worth noticing that our MSF oracle can be employed to handle arbitrary sequences of o(n^{1/4}/log(n)) (non-simultaneous) updates with a worst-case time per update of o(√n). Thus, for relatively short sequences of updates, our oracle should be preferred over the best-known (in a worst-case sense) MSF fully-dynamic algorithm, which requires O(√n) time per update.

Cite as

Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti. Multiple-Edge-Fault-Tolerant Approximate Shortest-Path Trees. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bilo_et_al:LIPIcs.STACS.2016.18,
  author =	{Bil\`{o}, Davide and Gual\`{a}, Luciano and Leucci, Stefano and Proietti, Guido},
  title =	{{Multiple-Edge-Fault-Tolerant Approximate Shortest-Path Trees}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.18},
  URN =		{urn:nbn:de:0030-drops-57196},
  doi =		{10.4230/LIPIcs.STACS.2016.18},
  annote =	{Keywords: fault-tolerant shortest-path tree, distance oracle, minimum spanning tree}
}