111 Search Results for "Liu, Yu"


Document
Indexing Graphs for Shortest Beer Path Queries

Authors: David Coudert, Andrea D'Ascenzo, and Mattia D'Emidio

Published in: OASIcs, Volume 123, 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)


Abstract
A beer graph is an edge-weighted graph G = (V,E,ω) with beer vertices B ⊆ V. A beer path between two vertices s and t of a beer graph is a path that connects s and t and visits at least one vertex in B. The beer distance between two vertices is the weight of a shortest beer path, i.e. a beer path having minimum total weight. A graph indexing scheme is a two-phase method that constructs an index data structure by a one-time preprocessing of an input graph and then exploits it to compute (or accelerate the computation of) answers to queries on structures of the graph dataset. In the last decade, such indexing schemes have been designed to perform, effectively, many relevant types of queries, e.g. on reachability, and have gained significant popularity in essentially all data-intensive application domains where large numbers of queries have to be routinely answered (e.g. journey planners), since they have been shown, through many experimental studies, to offer extremely low query times at the price of limited preprocessing time and space overheads. In this paper, we showcase that an indexing scheme, to efficiently execute queries on beer distances or shortest beer paths for pairs of vertices of a beer graph, can be obtained by adapting the highway labeling, a recently introduced indexing method to accelerate the computation of classical shortest paths. We design a preprocessing algorithm to build a whl index, i.e. a weighted highway labeling of a beer graph, and show how it can be queried to compute beer distances and shortest beer paths. Through extensive experimentation on real networks, we empirically demonstrate its practical effectiveness and superiority, in terms of offered trade-off between preprocessing time, space overhead and query time, with respect to the state-of-the-art.
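
The paper's whl index is not reproduced on this page; as a point of reference, the minimal sketch below (plain Python, graph given as an adjacency list, all names hypothetical) computes a beer distance the naive way, as min over b ∈ B of d(s,b) + d(b,t) from two Dijkstra runs.

```python
# Naive beer-distance baseline (not the paper's WHL index):
# beer_dist(s, t) = min over beer vertices b of d(s, b) + d(b, t),
# obtained from two Dijkstra runs on an undirected weighted graph.
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, w), ...]}; returns shortest distances from source."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def beer_distance(graph, beer_vertices, s, t):
    ds = dijkstra(graph, s)
    dt = dijkstra(graph, t)   # undirected graph, so d(b, t) = d(t, b)
    return min(ds.get(b, float("inf")) + dt.get(b, float("inf")) for b in beer_vertices)

# Example: the direct s-t edge is shorter but skips the beer vertex b.
G = {"s": [("b", 1.0), ("t", 1.5)],
     "b": [("s", 1.0), ("t", 1.0)],
     "t": [("b", 1.0), ("s", 1.5)]}
print(beer_distance(G, {"b"}, "s", "t"))  # 2.0 via s - b - t
```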

Cite as

David Coudert, Andrea D'Ascenzo, and Mattia D'Emidio. Indexing Graphs for Shortest Beer Path Queries. In 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024). Open Access Series in Informatics (OASIcs), Volume 123, pp. 2:1-2:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{coudert_et_al:OASIcs.ATMOS.2024.2,
  author =	{Coudert, David and D'Ascenzo, Andrea and D'Emidio, Mattia},
  title =	{{Indexing Graphs for Shortest Beer Path Queries}},
  booktitle =	{24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)},
  pages =	{2:1--2:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-350-8},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{123},
  editor =	{Bouman, Paul C. and Kontogiannis, Spyros C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ATMOS.2024.2},
  URN =		{urn:nbn:de:0030-drops-211907},
  doi =		{10.4230/OASIcs.ATMOS.2024.2},
  annote =	{Keywords: Graph Algorithms, Indexing Schemes, Beer Distances, Algorithms Engineering}
}
Document
Improved Algorithms for the Capacitated Team Orienteering Problem

Authors: Gianlorenzo D'Angelo, Mattia D'Emidio, Esmaeil Delfaraz, and Gabriele Di Stefano

Published in: OASIcs, Volume 123, 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)


Abstract
We study the Capacitated Team Orienteering Problem, where a fleet of vehicles with capacities has to meet customers with known demands and prizes for a single commodity. The objective is to maximize the total prize and to assign a sequence of customers to each vehicle while keeping the total distance traveled within a given budget and such that the total demand served by each vehicle does not exceed its capacity. The problem has been widely studied both from a theoretical and a practical point of view. The contribution of this paper is twofold: (1) We advance the theoretical knowledge on the problem by providing new approximation algorithms that achieve, under some natural assumption, improved approximation ratios compared to the current best algorithms; (2) We propose four efficient heuristics that outperform the current state-of-the-art practical methods in the sense that they compute solutions that collect nearly the same prize in a significantly smaller running time. We also experimentally test the scalability of the new heuristics, showing that their running time increases approximately linearly with the size of the input, allowing us to process large graphs that could not be analyzed before.
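
As a toy illustration of the problem setting only (not one of the four heuristics proposed in the paper), the sketch below greedily builds capacity- and budget-feasible routes, always appending the unserved customer with the best prize-to-detour ratio; the instance and all names are hypothetical.

```python
# Illustrative greedy for a small capacitated team-orienteering instance:
# each vehicle starts and ends at the depot; we repeatedly append the
# feasible unserved customer with the highest prize per unit of extra travel.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_cto(depot, customers, num_vehicles, capacity, budget):
    """customers: {name: ((x, y), demand, prize)}; returns (routes, total prize)."""
    unserved = set(customers)
    routes, total_prize = [], 0.0
    for _ in range(num_vehicles):
        route, pos, used_cap, used_dist = [], depot, 0.0, 0.0
        while True:
            best, best_ratio = None, -1.0
            for c in unserved:
                xy, demand, prize = customers[c]
                # Feasible only if we can still visit c and return to the depot.
                if (used_cap + demand <= capacity and
                        used_dist + dist(pos, xy) + dist(xy, depot) <= budget):
                    ratio = prize / (dist(pos, xy) + 1e-9)
                    if ratio > best_ratio:
                        best, best_ratio = c, ratio
            if best is None:
                break
            xy, demand, prize = customers[best]
            used_dist += dist(pos, xy)
            used_cap += demand
            total_prize += prize
            route.append(best)
            pos = xy
            unserved.remove(best)
        routes.append(route)
    return routes, total_prize

customers = {"a": ((1, 0), 2, 5.0), "b": ((2, 1), 3, 4.0), "c": ((0, 3), 4, 6.0)}
print(greedy_cto((0, 0), customers, num_vehicles=2, capacity=5, budget=8.0))
# ([['a', 'b'], ['c']], 15.0)
```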

Cite as

Gianlorenzo D'Angelo, Mattia D'Emidio, Esmaeil Delfaraz, and Gabriele Di Stefano. Improved Algorithms for the Capacitated Team Orienteering Problem. In 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024). Open Access Series in Informatics (OASIcs), Volume 123, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dangelo_et_al:OASIcs.ATMOS.2024.7,
  author =	{D'Angelo, Gianlorenzo and D'Emidio, Mattia and Delfaraz, Esmaeil and Di Stefano, Gabriele},
  title =	{{Improved Algorithms for the Capacitated Team Orienteering Problem}},
  booktitle =	{24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)},
  pages =	{7:1--7:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-350-8},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{123},
  editor =	{Bouman, Paul C. and Kontogiannis, Spyros C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ATMOS.2024.7},
  URN =		{urn:nbn:de:0030-drops-211957},
  doi =		{10.4230/OASIcs.ATMOS.2024.7},
  annote =	{Keywords: Vehicle Routing, Approximation algorithms, Algorithm Engineering}
}
Document
Solving the Electric Bus Scheduling Problem by an Integrated Flow and Set Partitioning Approach

Authors: Ralf Borndörfer, Andreas Löbel, Fabian Löbel, and Steffen Weider

Published in: OASIcs, Volume 123, 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)


Abstract
Attractive and cost-efficient public transport requires solving computationally difficult optimization problems from network design to crew rostering. While great progress has been made in many areas, new requirements to handle increasingly complex constraints are constantly coming up. One such challenge is a new type of resource constraints that are used to deal with the state-of-charge of battery-electric vehicles, which have limited driving ranges and need to be recharged in-service. Resource constrained vehicle scheduling problems can classically be modelled in terms of either a resource constrained (multi-commodity) flow problem or a path-based set partition problem. We demonstrate how a novel integrated version of both formulations can be leveraged to solve resource constrained vehicle scheduling with replenishment in general and the electric bus scheduling problem in particular by Lagrangian relaxation and the proximal bundle method.
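
For readers unfamiliar with the set-partitioning side, here is a minimal sketch of Lagrangian relaxation on a toy path-based model, with a plain subgradient step standing in for the proximal bundle method used in the paper; the schedules, trips, and costs are made up.

```python
# Lagrangian relaxation of a toy path-based set-partitioning model.
# Relaxing "each trip is served exactly once" with multipliers makes the
# subproblem decompose: each candidate vehicle schedule is priced independently.
trips = ["t1", "t2", "t3"]
schedules = {                      # candidate schedule: (trips covered, cost)
    "s1": ({"t1", "t2"}, 5.0),
    "s2": ({"t2", "t3"}, 4.0),
    "s3": ({"t1"}, 3.0),
    "s4": ({"t3"}, 2.0),
}

lam = {t: 0.0 for t in trips}      # multipliers of the relaxed constraints
for it in range(50):
    # Subproblem: take a schedule iff its reduced cost is negative.
    chosen = {s for s, (covered, cost) in schedules.items()
              if cost - sum(lam[t] for t in covered) < 0}
    # Lagrangian value = sum of multipliers + sum of chosen reduced costs;
    # it is a lower bound on the integer optimum for any multipliers.
    bound = sum(lam.values()) + sum(
        schedules[s][1] - sum(lam[t] for t in schedules[s][0]) for s in chosen)
    # Subgradient: 1 - (number of chosen schedules covering trip t).
    g = {t: 1 - sum(1 for s in chosen if t in schedules[s][0]) for t in trips}
    step = 1.0 / (it + 1)
    lam = {t: lam[t] + step * g[t] for t in trips}

print(f"Lagrangian lower bound after 50 iterations: {bound:.2f}")
```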

Cite as

Ralf Borndörfer, Andreas Löbel, Fabian Löbel, and Steffen Weider. Solving the Electric Bus Scheduling Problem by an Integrated Flow and Set Partitioning Approach. In 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024). Open Access Series in Informatics (OASIcs), Volume 123, pp. 11:1-11:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{borndorfer_et_al:OASIcs.ATMOS.2024.11,
  author =	{Bornd\"{o}rfer, Ralf and L\"{o}bel, Andreas and L\"{o}bel, Fabian and Weider, Steffen},
  title =	{{Solving the Electric Bus Scheduling Problem by an Integrated Flow and Set Partitioning Approach}},
  booktitle =	{24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)},
  pages =	{11:1--11:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-350-8},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{123},
  editor =	{Bouman, Paul C. and Kontogiannis, Spyros C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ATMOS.2024.11},
  URN =		{urn:nbn:de:0030-drops-211992},
  doi =		{10.4230/OASIcs.ATMOS.2024.11},
  annote =	{Keywords: Electric Bus Scheduling, Electric Vehicle Scheduling, Non-linear Charging, Multi-commodity Flow, Set Partition, Lagrangian Relaxation, Proximal Bundle Method}
}
Document
Game Development: Enhancing Creativity and Independent Creation in University Course

Authors: Lenka Bubenkova and Emilia Pietrikova

Published in: OASIcs, Volume 122, 5th International Computer Programming Education Conference (ICPEC 2024)


Abstract
In this study, we tested a novel method of teaching the Unity engine to computer game design and development students. Our objective was to determine if a flexible assignment structure is the most effective for students with minimal engine experience. The study demonstrated that independent work significantly improves students’ comprehension and problem-solving skills. Key findings include a 90% increase in students achieving more than the minimum required grade, a significant improvement in self-reported confidence with Unity (with 66.3% of students moving from no experience to higher skill levels), and diverse, innovative final projects that exceeded initial expectations. These results suggest that the flexible assignment approach enhances creativity and maintains high expectations for student work, ensuring their success in the game development industry. The combination of student project grades, innovative project elements, and positive feedback indicates that this method is highly beneficial and could be applied effectively in various educational settings.

Cite as

Lenka Bubenkova and Emilia Pietrikova. Game Development: Enhancing Creativity and Independent Creation in University Course. In 5th International Computer Programming Education Conference (ICPEC 2024). Open Access Series in Informatics (OASIcs), Volume 122, pp. 12:1-12:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bubenkova_et_al:OASIcs.ICPEC.2024.12,
  author =	{Bubenkova, Lenka and Pietrikova, Emilia},
  title =	{{Game Development: Enhancing Creativity and Independent Creation in University Course}},
  booktitle =	{5th International Computer Programming Education Conference (ICPEC 2024)},
  pages =	{12:1--12:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-347-8},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{122},
  editor =	{Santos, Andr\'{e} L. and Pinto-Albuquerque, Maria},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2024.12},
  URN =		{urn:nbn:de:0030-drops-209817},
  doi =		{10.4230/OASIcs.ICPEC.2024.12},
  annote =	{Keywords: novice programmers, assessment, learning analytics, motivation, unity engine, game development, problem-solving skills}
}
Document
Worst-Case to Expander-Case Reductions: Derandomized and Generalized

Authors: Amir Abboud and Nathan Wallheimer

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A recent paper by Abboud and Wallheimer [ITCS 2023] presents self-reductions for various fundamental graph problems, which transform worst-case instances to expanders, thus proving that the complexity remains unchanged if the input is assumed to be an expander. An interesting corollary of their self-reductions is that if some problem admits such a reduction, then the popular algorithmic paradigm based on expander-decompositions is useless against it. In this paper, we improve their core gadget, which augments a graph to make it an expander while retaining its important structure. Our new core construction has the benefit of being simple to analyze and generalize while obtaining the following results: - A derandomization of the self-reductions, showing that the equivalence between worst-case and expander-case holds even for deterministic algorithms, and ruling out the use of expander-decompositions as a derandomization tool. - An extension of the results to other models of computation, such as the Fully Dynamic model and the Congested Clique model. In the former, we either improve or provide an alternative approach to some recent hardness results for dynamic expander graphs by Henzinger, Paz, and Sricharan [ESA 2022]. In addition, we continue this line of research by designing new self-reductions for more problems, such as Max-Cut and dynamic Densest Subgraph, and demonstrating that the core gadget can be utilized to lift lower bounds based on the OMv Conjecture to expanders.

Cite as

Amir Abboud and Nathan Wallheimer. Worst-Case to Expander-Case Reductions: Derandomized and Generalized. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2024.4,
  author =	{Abboud, Amir and Wallheimer, Nathan},
  title =	{{Worst-Case to Expander-Case Reductions: Derandomized and Generalized}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.4},
  URN =		{urn:nbn:de:0030-drops-210751},
  doi =		{10.4230/LIPIcs.ESA.2024.4},
  annote =	{Keywords: Fine-grained complexity, expander graphs, self-reductions, worst-case to expander-case, expander decomposition, dynamic algorithms, exact and parameterized complexity, max-cut, maximum matching, k-clique detection, densest subgraph}
}
Document
On Connections Between k-Coloring and Euclidean k-Means

Authors: Enver Aman, Karthik C. S., and Sharath Punna

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Euclidean k-means problem we are given as input a set P of n points in ℝ^d and the goal is to find a set of k points C ⊆ ℝ^d, so as to minimize the sum of the squared Euclidean distances from each point in P to its closest center in C. In this paper, we formally explore connections between the k-coloring problem on graphs and the Euclidean k-means problem. Our results are as follows: - For all k ≥ 3, we provide a simple reduction from the k-coloring problem on regular graphs to the Euclidean k-means problem. Moreover, our technique extends to enable a reduction from a structured max-cut problem (which may be considered as a partial 2-coloring problem) to the Euclidean 2-means problem. Thus, we have a simple and alternate proof of the NP-hardness of the Euclidean 2-means problem. - In the other direction, we mimic the O(1.7297ⁿ) time algorithm of Williams [TCS'05] for the max-cut problem on n vertices to obtain an algorithm for the Euclidean 2-means problem with the same runtime, improving on the naive exhaustive search running in 2ⁿ⋅ poly(n,d) time. - We prove similar results and connections as above for the Euclidean k-min-sum problem.
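
For concreteness, the naive 2ⁿ⋅poly(n,d) exhaustive search mentioned above can be written in a few lines; the paper's O(1.7297ⁿ) algorithm is not reproduced here.

```python
# Naive exhaustive search for Euclidean 2-means: enumerate bipartitions of the
# points and keep the one with the smallest total squared distance to centroids.
import math

def sq_cost(points):
    if not points:
        return 0.0
    d = len(points[0])
    centroid = [sum(p[i] for p in points) / len(points) for i in range(d)]
    return sum(sum((p[i] - centroid[i]) ** 2 for i in range(d)) for p in points)

def two_means_bruteforce(points):
    n = len(points)
    best = math.inf
    # Masks below 2^(n-1) keep the last point on side A, skipping mirrored partitions.
    for mask in range(2 ** (n - 1)):
        part_a = [points[i] for i in range(n) if not (mask >> i) & 1]
        part_b = [points[i] for i in range(n) if (mask >> i) & 1]
        best = min(best, sq_cost(part_a) + sq_cost(part_b))
    return best

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(two_means_bruteforce(pts))  # ≈ 0.01: the two natural clusters are found
```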

Cite as

Enver Aman, Karthik C. S., and Sharath Punna. On Connections Between k-Coloring and Euclidean k-Means. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{aman_et_al:LIPIcs.ESA.2024.9,
  author =	{Aman, Enver and Karthik C. S. and Punna, Sharath},
  title =	{{On Connections Between k-Coloring and Euclidean k-Means}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.9},
  URN =		{urn:nbn:de:0030-drops-210808},
  doi =		{10.4230/LIPIcs.ESA.2024.9},
  annote =	{Keywords: k-means, k-minsum, Euclidean space, fine-grained complexity}
}
Document
Longest Common Substring with Gaps and Related Problems

Authors: Aranya Banerjee, Daniel Gibney, and Sharma V. Thankachan

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The longest common substring (also known as longest common factor) and longest common subsequence problems are two well-studied classical string problems. The former is solvable in optimal 𝒪(n) time for two strings of length m and n with m ≤ n, and the latter is solvable in 𝒪(nm) time, which is conditionally optimal under the Strong Exponential Time Hypothesis. In this work, we study the problem of longest common factor with gaps, that is, finding a set of at most k matching substrings obeying precedence conditions with maximum total length. For k = 1, this is equivalent to the longest common factor problem, and for k = m, this is equivalent to the longest common subsequence problem. Our work demonstrates that, for constant k, this problem can be solved in strongly subquadratic time, i.e., nm^{1 - Θ(1)}. Motivated by co-linear chaining applications in Computational Biology, we further demonstrate that the longest common factor with gaps results can be extended to the case where the matches are restricted to maximal exact matches (MEMs). To further demonstrate the applicability of our techniques, we show that a similar approach can be used for a restricted version of the episode matching problem where one seeks an ordered set of at most k matches whose concatenation equals a query pattern P and the length of the substring of the text T containing the matches is minimized. These solutions all run in strongly subquadratic time for constant k.
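
For reference, the k = 1 special case (the classical longest common substring/factor) with the textbook O(nm) dynamic program; the optimal linear-time suffix-structure solution and the paper's subquadratic gapped variants are more involved.

```python
# Textbook O(nm) dynamic program for the longest common substring (k = 1 case).
def longest_common_substring(s, t):
    m, n = len(s), len(t)
    best_len, best_end = 0, 0
    prev = [0] * (n + 1)              # prev[j]: longest common suffix of s[:i-1], t[:j]
    for i in range(1, m + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return s[best_end - best_len:best_end]

print(longest_common_substring("xabcdey", "zabcdw"))  # "abcd"
```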

Cite as

Aranya Banerjee, Daniel Gibney, and Sharma V. Thankachan. Longest Common Substring with Gaps and Related Problems. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{banerjee_et_al:LIPIcs.ESA.2024.16,
  author =	{Banerjee, Aranya and Gibney, Daniel and Thankachan, Sharma V.},
  title =	{{Longest Common Substring with Gaps and Related Problems}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.16},
  URN =		{urn:nbn:de:0030-drops-210877},
  doi =		{10.4230/LIPIcs.ESA.2024.16},
  annote =	{Keywords: Pattern Matching, Longest Common Subsequence, Episode Matching}
}
Document
Hypergraph Connectivity Augmentation in Strongly Polynomial Time

Authors: Kristóf Bérczi, Karthekeyan Chandrasekaran, Tamás Király, and Shubhang Kulkarni

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We consider hypergraph network design problems where the goal is to construct a hypergraph that satisfies certain connectivity requirements. For graph network design problems where the goal is to construct a graph that satisfies certain connectivity requirements, the number of edges in every feasible solution is at most quadratic in the number of vertices. In contrast, for hypergraph network design problems, we might have feasible solutions in which the number of hyperedges is exponential in the number of vertices. This presents an additional technical challenge in hypergraph network design problems compared to graph network design problems: in order to solve the problem in polynomial time, we first need to show that there exists a feasible solution in which the number of hyperedges is polynomial in the input size. The central theme of this work is to overcome this additional technical challenge for certain hypergraph network design problems. We show that these hypergraph network design problems admit solutions in which the number of hyperedges is polynomial in the number of vertices and moreover, can be solved in strongly polynomial time. Our work improves on the previous fastest pseudo-polynomial run-time for these problems. As applications of our results, we derive the first strongly polynomial time algorithms for (i) degree-specified hypergraph connectivity augmentation using hyperedges and (ii) degree-specified hypergraph node-to-area connectivity augmentation using hyperedges.

Cite as

Kristóf Bérczi, Karthekeyan Chandrasekaran, Tamás Király, and Shubhang Kulkarni. Hypergraph Connectivity Augmentation in Strongly Polynomial Time. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{berczi_et_al:LIPIcs.ESA.2024.22,
  author =	{B\'{e}rczi, Krist\'{o}f and Chandrasekaran, Karthekeyan and Kir\'{a}ly, Tam\'{a}s and Kulkarni, Shubhang},
  title =	{{Hypergraph Connectivity Augmentation in Strongly Polynomial Time}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.22},
  URN =		{urn:nbn:de:0030-drops-210938},
  doi =		{10.4230/LIPIcs.ESA.2024.22},
  annote =	{Keywords: Hypergraphs, Hypergraph Connectivity, Submodular Functions, Combinatorial Optimization}
}
Document
Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams

Authors: Amit Chakrabarti, Andrew McGregor, and Anthony Wirth

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The maximum coverage problem is to select k sets, from a collection of m sets, such that the cardinality of their union, in a universe of size n, is maximized. We consider (1-1/e-ε)-approximation algorithms for this NP-hard problem in three standard data stream models. 1) Dynamic Model. The stream consists of a sequence of sets being inserted and deleted. Our multi-pass algorithm uses ε^{-2} k ⋅ polylog(n,m) space. The best previous result (Assadi and Khanna, SODA 2018) used (n +ε^{-4} k) polylog(n,m) space. While both algorithms use O(ε^{-1} log m) passes, our analysis shows that, when ε ≤ 1/log log m, it is possible to reduce the number of passes by a 1/log log m factor without incurring additional space. 2) Random Order Model. In this model, there are no deletions, and the sets forming the instance are uniformly randomly permuted to form the input stream. We show that a single pass and k polylog(n,m) space suffices for arbitrary small constant ε. The best previous result, by Warneke et al. (ESA 2023), used k² polylog(n,m) space. 3) Insert-Only Model. Lastly, our results, along with numerous previous results, use a sub-sampling technique introduced by McGregor and Vu (ICDT 2017) to sparsify the input instance. We explain how this technique and others used in the paper can be implemented such that the amortized update time of our algorithm is polylogarithmic. This also implies an improvement of the state-of-the-art insert only algorithms in terms of the update time: polylog(m,n) update time suffices, whereas the best previous result by Jaud et al. (SEA 2023) required update time that was linear in k.
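
For context, this is the classical offline greedy whose (1 - 1/e) guarantee the streaming algorithms above approximate up to ε; unlike a streaming algorithm it keeps the whole instance in memory.

```python
# Classical greedy for maximum coverage: repeatedly pick the set with the
# largest number of still-uncovered elements. Achieves a (1 - 1/e) guarantee.
def greedy_max_coverage(sets, k):
    """sets: {name: set_of_elements}; returns (chosen_names, covered_elements)."""
    covered, chosen = set(), []
    for _ in range(k):
        name = max(sets, key=lambda s: len(sets[s] - covered) if s not in chosen else -1)
        if not sets[name] - covered:
            break                         # no remaining set adds new elements
        chosen.append(name)
        covered |= sets[name]
    return chosen, covered

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}, "D": {1, 5}}
print(greedy_max_coverage(sets, k=2))     # (['C', 'A'], {1, 2, 3, 4, 5, 6, 7})
```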

Cite as

Amit Chakrabarti, Andrew McGregor, and Anthony Wirth. Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chakrabarti_et_al:LIPIcs.ESA.2024.40,
  author =	{Chakrabarti, Amit and McGregor, Andrew and Wirth, Anthony},
  title =	{{Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{40:1--40:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.40},
  URN =		{urn:nbn:de:0030-drops-211114},
  doi =		{10.4230/LIPIcs.ESA.2024.40},
  annote =	{Keywords: Data Stream Computation, Maximum Coverage, Submodular Maximization}
}
Document
Faster Min-Cost Flow and Approximate Tree Decomposition on Bounded Treewidth Graphs

Authors: Sally Dong and Guanghao Ye

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We present an algorithm for min-cost flow in graphs with n vertices and m edges, given a tree decomposition of width τ and size S, and polynomially bounded, integral edge capacities and costs, running in Õ(m√{τ} + S) time. This improves upon the previous fastest algorithm in this setting achieved by the bounded-treewidth linear program solver of [Gu and Song, 2022; Dong et al., 2024], which runs in Õ(m τ^{(ω+1)/2}) time, where ω ≈ 2.37 is the matrix multiplication exponent. Our approach leverages recent advances in structured linear program solvers and robust interior point methods (IPM). In general graphs where treewidth is trivially bounded by n, the algorithm runs in Õ(m √ n) time, which is the best-known result without using the Lee-Sidford barrier or 𝓁₁ IPM, demonstrating the surprising power of robust interior point methods. As a corollary, we obtain a Õ(tw³ ⋅ m) time algorithm to compute a tree decomposition of width O(tw⋅ log(n)), given a graph with m edges.
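
As a baseline illustration of the problem interface only (not the paper's treewidth-parameterized interior point method), here is a tiny min-cost flow instance solved with networkx, followed by a heuristic tree decomposition as a rough stand-in for the width parameter τ.

```python
# A tiny min-cost flow instance solved with a general-purpose library routine.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

G = nx.DiGraph()
G.add_node("s", demand=-4)                 # supplies 4 units
G.add_node("t", demand=4)                  # demands 4 units
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=3, weight=2)
G.add_edge("a", "t", capacity=3, weight=1)
G.add_edge("b", "t", capacity=3, weight=1)

flow = nx.min_cost_flow(G)
print(flow, nx.cost_of_flow(G, flow))      # 3 units via a (cost 6) + 1 via b (cost 3) = 9

# Heuristic (min-degree) tree decomposition of the underlying undirected graph.
width, decomposition = treewidth_min_degree(G.to_undirected())
print("heuristic treewidth:", width)
```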

Cite as

Sally Dong and Guanghao Ye. Faster Min-Cost Flow and Approximate Tree Decomposition on Bounded Treewidth Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dong_et_al:LIPIcs.ESA.2024.49,
  author =	{Dong, Sally and Ye, Guanghao},
  title =	{{Faster Min-Cost Flow and Approximate Tree Decomposition on Bounded Treewidth Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.49},
  URN =		{urn:nbn:de:0030-drops-211207},
  doi =		{10.4230/LIPIcs.ESA.2024.49},
  annote =	{Keywords: Min-cost flow, tree decomposition, interior point method, bounded treewidth graphs}
}
Document
Invertible Bloom Lookup Tables with Less Memory and Randomness

Authors: Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In this work we study Invertible Bloom Lookup Tables (IBLTs) with small failure probabilities. IBLTs are highly versatile data structures that have found applications in set reconciliation protocols, error-correcting codes, and even the design of advanced cryptographic primitives. For storing n elements and ensuring correctness with probability at least 1 - δ, existing IBLT constructions require Ω(n(log(1/δ)/log n + 1)) space and they crucially rely on fully random hash functions. We present new constructions of IBLTs that are simultaneously more space efficient and require less randomness. For storing n elements with a failure probability of at most δ, our data structure only requires O(n + log(1/δ) log log(1/δ)) space and O(log(log(n)/δ))-wise independent hash functions. As a key technical ingredient we show that hashing n keys with any k-wise independent hash function h:U → [Cn] for some sufficiently large constant C guarantees with probability 1 - 2^{-Ω(k)} that at least n/2 keys will have a unique hash value. Proving this is non-trivial as k approaches n. We believe that the techniques used to prove this statement may be of independent interest. We apply our new IBLTs to the encrypted compression problem, recently studied by Fleischhacker, Larsen, Simkin (Eurocrypt 2023). We extend their approach to work for a more general class of encryption schemes and using our new IBLT we achieve an asymptotically better compression rate.
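
A minimal, textbook-style insert-only IBLT (partitioned hashing, peeling-based listing) may help fix ideas; it is not the more space- and randomness-efficient construction of the paper, and the hash choices are illustrative only.

```python
# Minimal insert-only Invertible Bloom Lookup Table: each key is hashed to one
# cell per sub-table, and listing repeatedly "peels" pure cells (count 1 with a
# matching checksum). Fails with small probability if the table is too loaded.
import hashlib

def _h(data, seed):
    return int.from_bytes(
        hashlib.blake2b(f"{seed}:{data}".encode(), digest_size=8).digest(), "big")

class IBLT:
    def __init__(self, m, num_hashes=3):
        assert m % num_hashes == 0
        self.m, self.k = m, num_hashes
        self.count = [0] * m
        self.key_sum = [0] * m
        self.check_sum = [0] * m

    def _cells(self, key):
        # Partitioned hashing: one cell in each of k sub-tables, so a key's
        # cells are always distinct.
        r = self.m // self.k
        return [seed * r + _h(key, seed) % r for seed in range(self.k)]

    def insert(self, key):
        for i in self._cells(key):
            self.count[i] += 1
            self.key_sum[i] ^= key
            self.check_sum[i] ^= _h(key, "chk")

    def list_entries(self):
        """Peel pure cells; returns a partial list if peeling gets stuck."""
        recovered, progress = [], True
        while progress:
            progress = False
            for i in range(self.m):
                if self.count[i] == 1 and self.check_sum[i] == _h(self.key_sum[i], "chk"):
                    key = self.key_sum[i]
                    recovered.append(key)
                    for j in self._cells(key):        # remove the key everywhere
                        self.count[j] -= 1
                        self.key_sum[j] ^= key
                        self.check_sum[j] ^= _h(key, "chk")
                    progress = True
        return recovered

t = IBLT(m=42)
for key in [11, 42, 2024, 7]:
    t.insert(key)
print(sorted(t.list_entries()))  # [7, 11, 42, 2024]
```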

Cite as

Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin. Invertible Bloom Lookup Tables with Less Memory and Randomness. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{fleischhacker_et_al:LIPIcs.ESA.2024.54,
  author =	{Fleischhacker, Nils and Larsen, Kasper Green and Obremski, Maciej and Simkin, Mark},
  title =	{{Invertible Bloom Lookup Tables with Less Memory and Randomness}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.54},
  URN =		{urn:nbn:de:0030-drops-211252},
  doi =		{10.4230/LIPIcs.ESA.2024.54},
  annote =	{Keywords: Invertible Bloom Lookup Tables}
}
Document
Random-Order Online Independent Set of Intervals and Hyperrectangles

Authors: Mohit Garg, Debajyoti Kar, and Arindam Khan

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Maximum Independent Set of Hyperrectangles problem, we are given a set of n (possibly overlapping) d-dimensional axis-aligned hyperrectangles, and the goal is to find a subset of non-overlapping hyperrectangles of maximum cardinality. For d = 1, this corresponds to the classical Interval Scheduling problem, where a simple greedy algorithm returns an optimal solution. In the offline setting, for d-dimensional hyperrectangles, polynomial time (log n)^{O(d)}-approximation algorithms are known [Chalermsook and Chuzhoy, 2009]. However, the problem becomes notably challenging in the online setting, where the input objects (hyperrectangles) appear one by one in an adversarial order, and on the arrival of an object, the algorithm needs to make an immediate and irrevocable decision whether or not to select the object while maintaining the feasibility. Even for interval scheduling, an Ω(n) lower bound is known on the competitive ratio. To circumvent these negative results, in this work, we study the online maximum independent set of axis-aligned hyperrectangles in the random-order arrival model, where the adversary specifies the set of input objects which then arrive in a uniformly random order. Starting from the prototypical secretary problem, the random-order model has received significant attention to study algorithms beyond the worst-case competitive analysis (see the survey by Gupta and Singla [Anupam Gupta and Sahil Singla, 2020]). Surprisingly, we show that the problem in the random-order model almost matches the best-known offline approximation guarantees, up to polylogarithmic factors. In particular, we give a simple (log n)^{O(d)}-competitive algorithm for d-dimensional hyperrectangles in this model, which runs in Õ_d(n) time. Our approach also yields (log n)^{O(d)}-competitive algorithms in the random-order model for more general objects such as d-dimensional fat objects and ellipsoids. Furthermore, all our competitiveness guarantees hold with high probability, and not just in expectation.
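
The d = 1 offline case mentioned above, where the earliest-finishing-time greedy is optimal, looks like this; the random-order online algorithms of the paper are substantially more involved.

```python
# Earliest-finishing-time greedy: optimal for offline interval scheduling (d = 1).
def max_independent_intervals(intervals):
    """intervals: list of (start, end) with start < end."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:            # does not overlap the last chosen interval
            chosen.append((start, end))
            last_end = end
    return chosen

print(max_independent_intervals([(0, 3), (2, 4), (3, 6), (5, 7), (6, 8)]))
# [(0, 3), (3, 6), (6, 8)]
```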

Cite as

Mohit Garg, Debajyoti Kar, and Arindam Khan. Random-Order Online Independent Set of Intervals and Hyperrectangles. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 58:1-58:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{garg_et_al:LIPIcs.ESA.2024.58,
  author =	{Garg, Mohit and Kar, Debajyoti and Khan, Arindam},
  title =	{{Random-Order Online Independent Set of Intervals and Hyperrectangles}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{58:1--58:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.58},
  URN =		{urn:nbn:de:0030-drops-211298},
  doi =		{10.4230/LIPIcs.ESA.2024.58},
  annote =	{Keywords: Online Algorithms, Random-Order Model, Maximum Independent Set of Rectangles, Hyperrectangles, Fat Objects, Interval Scheduling}
}
Document
Finding Perfect Matchings in Bridgeless Cubic Multigraphs Without Dynamic (2-)connectivity

Authors: Paweł Gawrychowski and Mateusz Wasylkiewicz

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Petersen’s theorem, one of the earliest results in graph theory, states that every bridgeless cubic multigraph contains a perfect matching. While the original proof was neither constructive nor algorithmic, Biedl, Bose, Demaine, and Lubiw [J. Algorithms 38(1)] showed how to implement a later constructive proof by Frink in 𝒪(n log⁴ n) time using a fully dynamic 2-edge-connectivity structure. Then, Diks and Stańczyk [SOFSEM 2010] described a faster approach that only needs a fully dynamic connectivity structure and works in 𝒪(n log² n) time. Both algorithms, while reasonably simple, utilize non-trivial (2-edge-)connectivity structures. We show that this is not necessary, and in fact a structure for maintaining a dynamic tree, e.g. link-cut trees, suffices to obtain a simple 𝒪(n log n) time algorithm.
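
To see Petersen's theorem in action, the snippet below checks that the Petersen graph is bridgeless and cubic and then finds a perfect matching with networkx's general blossom-based routine; the paper's simpler O(n log n) link-cut-tree algorithm is not reproduced here.

```python
# Petersen's theorem on the Petersen graph itself: bridgeless + cubic
# implies a perfect matching exists (found here by general matching).
import networkx as nx

G = nx.petersen_graph()
assert all(d == 3 for _, d in G.degree())          # cubic
assert not list(nx.bridges(G))                     # bridgeless

matching = nx.max_weight_matching(G, maxcardinality=True)
assert 2 * len(matching) == G.number_of_nodes()    # perfect matching
print(sorted(tuple(sorted(e)) for e in matching))
```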

Cite as

Paweł Gawrychowski and Mateusz Wasylkiewicz. Finding Perfect Matchings in Bridgeless Cubic Multigraphs Without Dynamic (2-)connectivity. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 59:1-59:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gawrychowski_et_al:LIPIcs.ESA.2024.59,
  author =	{Gawrychowski, Pawe{\l} and Wasylkiewicz, Mateusz},
  title =	{{Finding Perfect Matchings in Bridgeless Cubic Multigraphs Without Dynamic (2-)connectivity}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{59:1--59:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.59},
  URN =		{urn:nbn:de:0030-drops-211301},
  doi =		{10.4230/LIPIcs.ESA.2024.59},
  annote =	{Keywords: perfect matching, cubic graphs, bridgeless graphs, link-cut tree}
}
Document
New Algorithms and Lower Bounds for Streaming Tournaments

Authors: Prantar Ghosh and Sahil Kuchlous

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study fundamental directed graph (digraph) problems in the streaming model. An initial investigation by Chakrabarti, Ghosh, McGregor, and Vorotnikova [SODA'20] on streaming digraphs showed that while most of these problems are provably hard in general, some of them become tractable when restricted to the well-studied class of tournament graphs where every pair of nodes shares exactly one directed edge. Thus, we focus on tournaments and improve the state of the art for multiple problems in terms of both upper and lower bounds. Our primary upper bound is a deterministic single-pass semi-streaming algorithm (using Õ(n) space for n-node graphs, where Õ(.) hides polylog(n) factors) for decomposing a tournament into strongly connected components (SCC). It improves upon the previously best-known algorithm by Baweja, Jia, and Woodruff [ITCS'22] in terms of both space and passes: for p ⩾ 1, they used (p+1) passes and Õ(n^{1+1/p}) space. We further extend our algorithm to digraphs that are close to tournaments and establish tight bounds demonstrating that the problem’s complexity grows smoothly with the "distance" from tournaments. Applying our SCC-decomposition framework, we obtain improved - and in some cases, optimal - tournament algorithms for s,t-reachability, strong connectivity, Hamiltonian paths and cycles, and feedback arc set. On the other hand, we prove lower bounds exhibiting that some well-studied problems - such as (exact) feedback arc set and s,t-distance - remain hard (require Ω(n²) space) on tournaments. Moreover, we generalize the former problem’s lower bound to establish space-approximation tradeoffs: any single-pass (1± ε)-approximation algorithm requires Ω(n/√{ε}) space. Finally, we settle the streaming complexities of two basic digraph problems studied by prior work: acyclicity testing of tournaments and sink finding in DAGs. As a whole, our collection of results contributes significantly to the growing literature on streaming digraphs.
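
For orientation, here is an offline SCC decomposition of a random tournament with networkx; the condensation of a tournament is a transitive tournament, so the components come out in a unique ranking order. The paper's contribution is computing this in a single semi-streaming pass.

```python
# SCC decomposition of a random tournament (offline, for illustration).
import random
import networkx as nx

random.seed(7)
n = 8
T = nx.DiGraph()
for u in range(n):
    for v in range(u + 1, n):
        # Exactly one directed edge per pair of nodes.
        T.add_edge(*((u, v) if random.random() < 0.5 else (v, u)))

sccs = list(nx.strongly_connected_components(T))
C = nx.condensation(T, scc=sccs)
order = list(nx.topological_sort(C))   # unique: the condensation is a total order
print([sorted(C.nodes[c]["members"]) for c in order])
```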

Cite as

Prantar Ghosh and Sahil Kuchlous. New Algorithms and Lower Bounds for Streaming Tournaments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 60:1-60:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ghosh_et_al:LIPIcs.ESA.2024.60,
  author =	{Ghosh, Prantar and Kuchlous, Sahil},
  title =	{{New Algorithms and Lower Bounds for Streaming Tournaments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{60:1--60:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.60},
  URN =		{urn:nbn:de:0030-drops-211318},
  doi =		{10.4230/LIPIcs.ESA.2024.60},
  annote =	{Keywords: tournaments, streaming algorithms, graph algorithms, communication complexity, strongly connected components, reachability, feedback arc set}
}
Document
Practical Expander Decomposition

Authors: Lars Gottesbüren, Nikos Parotsidis, and Maximilian Probst Gutenberg

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The expander decomposition of a graph decomposes the set of vertices into clusters such that the induced subgraph of each cluster has high conductance, and there is only a small number of inter-cluster edges. Expander decompositions are at the forefront of recent theoretical developments in the area of efficient graph algorithms and act as a central component in several state-of-the-art graph algorithms for fundamental problems like maximum flow, min-cost flow, Gomory-Hu trees, global min-cut, and more. Despite this crucial role and the existence of theoretically efficient expander decomposition algorithms, little is known about their behavior in practice. In this paper we explore the engineering design space in implementations for computing expander decompositions. We base our implementation on the near-linear time algorithm of Saranurak and Wang [SODA'19], and enhance it with practical optimizations that accelerate its running time in practice and at the same time preserve the theoretical runtime and approximation guarantees. We evaluate our algorithm on real-world graphs with up to tens of millions of edges. We demonstrate significant speedups of up to two orders of magnitude over the only prior implementation. To the best of our knowledge, our implementation is the first to compute expander decompositions at this scale within reasonable time.
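
As a small illustration of the objective, the conductance of a cut (S, V∖S) is the number of crossing edges divided by the smaller of the two volumes; an expander decomposition guarantees that every cut inside a cluster has high conductance while only few edges cross between clusters. A hand-rolled check on a graph with an obvious sparse cut:

```python
# Conductance of a vertex set S: crossing edges / min(vol(S), vol(V \ S)).
import networkx as nx

def conductance(G, S):
    S = set(S)
    cut = sum(1 for _ in nx.edge_boundary(G, S))
    vol_S = sum(d for _, d in G.degree(S))
    vol_rest = sum(d for _, d in G.degree()) - vol_S
    return cut / min(vol_S, vol_rest)

G = nx.barbell_graph(5, 0)             # two 5-cliques joined by a single edge
print(conductance(G, range(5)))        # 1/21: the bridge is the low-conductance cut
```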

Cite as

Lars Gottesbüren, Nikos Parotsidis, and Maximilian Probst Gutenberg. Practical Expander Decomposition. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 61:1-61:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gottesburen_et_al:LIPIcs.ESA.2024.61,
  author =	{Gottesb\"{u}ren, Lars and Parotsidis, Nikos and Gutenberg, Maximilian Probst},
  title =	{{Practical Expander Decomposition}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{61:1--61:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.61},
  URN =		{urn:nbn:de:0030-drops-211323},
  doi =		{10.4230/LIPIcs.ESA.2024.61},
  annote =	{Keywords: Expander Decomposition, Clustering, Graph Algorithms}
}
