15 Search Results for "Rosén, Adi"


Document
Interval Selection in Sliding Windows

Authors: Cezar-Mihail Alexandru and Christian Konrad

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We initiate the study of the Interval Selection problem in the (streaming) sliding window model of computation. In this problem, an algorithm receives a potentially infinite stream of intervals on the line, and the objective is to maintain at every moment an approximation to a largest possible subset of disjoint intervals among the L most recent intervals, for some integer L. We give the following results: 1) In the unit-length intervals case, we give a 2-approximation sliding window algorithm with space Õ(|OPT|), and we show that any sliding window algorithm that computes a (2-ε)-approximation requires space Ω(L), for any ε > 0. 2) In the arbitrary-length case, we give a (11/3+ε)-approximation sliding window algorithm with space Õ(|OPT|), for any constant ε > 0, which constitutes our main result. We also show that space Ω(L) is needed for algorithms that compute a (2.5-ε)-approximation, for any ε > 0. Our main technical contribution is an improvement over the smooth histogram technique, which consists of running independent copies of a traditional streaming algorithm with different start times. By employing the one-pass 2-approximation streaming algorithm by Cabello and Pérez-Lantero [Theor. Comput. Sci. '17] for Interval Selection on arbitrary-length intervals as the underlying algorithm, the smooth histogram technique immediately yields a (4+ε)-approximation in this setting. Our improvement is obtained by forwarding the structure of the intervals identified in a run to the subsequent run, which constrains the shape of an optimal solution and allows us to target optimal intervals differently.
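
To make the smooth histogram idea concrete, the sketch below shows the generic bookkeeping (independent copies of a streaming algorithm started at different times, plus pruning of redundant copies) applied to the toy objective of counting the items in the window rather than to Interval Selection itself. The class name, the parameter beta, and the exact pruning rule are illustrative assumptions, not the authors' construction.

class SlidingWindowCounter:
    """Smooth-histogram-style bookkeeping on a toy objective: approximately
    count the items among the last L stream elements.  Each "copy" is a run
    of the underlying streaming algorithm started at a different time; here
    a copy simply counts the items it has seen."""

    def __init__(self, window_size, beta=0.1):
        self.L = window_size      # window length L
        self.beta = beta          # accuracy parameter used by the pruning rule
        self.t = 0                # number of items seen so far
        self.copies = []          # [start_time, count] pairs, oldest first

    def insert(self, _item):
        self.t += 1
        self.copies.append([self.t, 0])   # start a fresh copy at this item
        for copy in self.copies:          # feed the item to every copy
            copy[1] += 1
        # Prune: if the copy two positions ahead is already within a (1-beta)
        # factor of copy i, the copy between them is redundant.
        i = 0
        while i + 2 < len(self.copies):
            if self.copies[i + 2][1] >= (1 - self.beta) * self.copies[i][1]:
                del self.copies[i + 1]
            else:
                i += 1
        # Keep at most one copy that started before the current window.
        window_start = self.t - self.L + 1
        while len(self.copies) >= 2 and self.copies[1][0] <= window_start:
            del self.copies[0]

    def estimate(self):
        """Count of the oldest copy that started inside the window; this lies
        between (1 - beta) times and exactly the true window count."""
        if not self.copies:
            return 0
        window_start = self.t - self.L + 1
        return next(c[1] for c in self.copies if c[0] >= window_start)

Feeding a long stream with window_size = 1000 and beta = 0.1 keeps only O(log(window_size)/beta) copies alive while estimate() stays within a (1 - beta) factor of the true number of items in the window.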

Cite as

Cezar-Mihail Alexandru and Christian Konrad. Interval Selection in Sliding Windows. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 8:1-8:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alexandru_et_al:LIPIcs.ESA.2024.8,
  author =	{Alexandru, Cezar-Mihail and Konrad, Christian},
  title =	{{Interval Selection in Sliding Windows}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{8:1--8:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.8},
  URN =		{urn:nbn:de:0030-drops-210795},
  doi =		{10.4230/LIPIcs.ESA.2024.8},
  annote =	{Keywords: Sliding window algorithms, Streaming algorithms, Interval selection}
}
Document
Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams

Authors: Amit Chakrabarti, Andrew McGregor, and Anthony Wirth

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The maximum coverage problem is to select k sets, from a collection of m sets, such that the cardinality of their union, in a universe of size n, is maximized. We consider (1-1/e-ε)-approximation algorithms for this NP-hard problem in three standard data stream models. 1) Dynamic Model. The stream consists of a sequence of sets being inserted and deleted. Our multi-pass algorithm uses ε^{-2} k ⋅ polylog(n,m) space. The best previous result (Assadi and Khanna, SODA 2018) used (n + ε^{-4} k) polylog(n,m) space. While both algorithms use O(ε^{-1} log m) passes, our analysis shows that, when ε ≤ 1/log log m, it is possible to reduce the number of passes by a 1/log log m factor without incurring additional space. 2) Random Order Model. In this model, there are no deletions, and the sets forming the instance are uniformly randomly permuted to form the input stream. We show that a single pass and k polylog(n,m) space suffice for arbitrarily small constant ε. The best previous result, by Warneke et al. (ESA 2023), used k² polylog(n,m) space. 3) Insert-Only Model. Lastly, our results, along with numerous previous results, use a sub-sampling technique introduced by McGregor and Vu (ICDT 2017) to sparsify the input instance. We explain how this technique and others used in the paper can be implemented such that the amortized update time of our algorithm is polylogarithmic. This also implies an improvement of the state-of-the-art insert-only algorithms in terms of the update time: polylog(m,n) update time suffices, whereas the best previous result by Jaud et al. (SEA 2023) required update time that was linear in k.
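
As background for the approximation target above, the classical offline greedy for maximum coverage repeatedly picks the set covering the most new elements and achieves the (1 - 1/e) factor that these streaming algorithms aim to (nearly) match. A textbook sketch for illustration, not one of the paper's algorithms:

def greedy_max_coverage(sets, k):
    """Classical offline greedy: pick up to k sets, each time the one that
    covers the largest number of still-uncovered elements.  Guarantees a
    (1 - 1/e) fraction of the optimal coverage."""
    covered, chosen = set(), []
    for _ in range(min(k, len(sets))):
        best = max(range(len(sets)), key=lambda i: len(set(sets[i]) - covered))
        if not set(sets[best]) - covered:    # nothing new left to cover
            break
        chosen.append(best)
        covered |= set(sets[best])
    return chosen, covered

For instance, greedy_max_coverage([{1, 2, 3}, {3, 4}, {4, 5, 6}], 2) returns ([0, 2], {1, 2, 3, 4, 5, 6}).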

Cite as

Amit Chakrabarti, Andrew McGregor, and Anthony Wirth. Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chakrabarti_et_al:LIPIcs.ESA.2024.40,
  author =	{Chakrabarti, Amit and McGregor, Andrew and Wirth, Anthony},
  title =	{{Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{40:1--40:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.40},
  URN =		{urn:nbn:de:0030-drops-211114},
  doi =		{10.4230/LIPIcs.ESA.2024.40},
  annote =	{Keywords: Data Stream Computation, Maximum Coverage, Submodular Maximization}
}
Document
Random-Order Online Independent Set of Intervals and Hyperrectangles

Authors: Mohit Garg, Debajyoti Kar, and Arindam Khan

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Maximum Independent Set of Hyperrectangles problem, we are given a set of n (possibly overlapping) d-dimensional axis-aligned hyperrectangles, and the goal is to find a subset of non-overlapping hyperrectangles of maximum cardinality. For d = 1, this corresponds to the classical Interval Scheduling problem, where a simple greedy algorithm returns an optimal solution. In the offline setting, for d-dimensional hyperrectangles, polynomial time (log n)^{O(d)}-approximation algorithms are known [Chalermsook and Chuzhoy, 2009]. However, the problem becomes notably challenging in the online setting, where the input objects (hyperrectangles) appear one by one in an adversarial order, and on the arrival of an object, the algorithm needs to make an immediate and irrevocable decision whether or not to select the object while maintaining the feasibility. Even for interval scheduling, an Ω(n) lower bound is known on the competitive ratio. To circumvent these negative results, in this work, we study the online maximum independent set of axis-aligned hyperrectangles in the random-order arrival model, where the adversary specifies the set of input objects which then arrive in a uniformly random order. Starting from the prototypical secretary problem, the random-order model has received significant attention to study algorithms beyond the worst-case competitive analysis (see the survey by Gupta and Singla [Anupam Gupta and Sahil Singla, 2020]). Surprisingly, we show that the problem in the random-order model almost matches the best-known offline approximation guarantees, up to polylogarithmic factors. In particular, we give a simple (log n)^{O(d)}-competitive algorithm for d-dimensional hyperrectangles in this model, which runs in Õ_d(n) time. Our approach also yields (log n)^{O(d)}-competitive algorithms in the random-order model for more general objects such as d-dimensional fat objects and ellipsoids. Furthermore, all our competitiveness guarantees hold with high probability, and not just in expectation.
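
The "simple greedy algorithm" mentioned above for d = 1 is the classical earliest-finishing-time rule; a short sketch (standard material, included only for illustration):

def max_disjoint_intervals(intervals):
    """Greedy by earliest right endpoint; returns a maximum-cardinality set
    of pairwise disjoint intervals (intervals that merely touch are treated
    as disjoint here)."""
    selected, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            selected.append((start, end))
            last_end = end
    return selected

For example, max_disjoint_intervals([(1, 4), (3, 5), (0, 6), (5, 7), (6, 8)]) returns [(1, 4), (5, 7)], an optimal solution of size 2.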

Cite as

Mohit Garg, Debajyoti Kar, and Arindam Khan. Random-Order Online Independent Set of Intervals and Hyperrectangles. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 58:1-58:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{garg_et_al:LIPIcs.ESA.2024.58,
  author =	{Garg, Mohit and Kar, Debajyoti and Khan, Arindam},
  title =	{{Random-Order Online Independent Set of Intervals and Hyperrectangles}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{58:1--58:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.58},
  URN =		{urn:nbn:de:0030-drops-211298},
  doi =		{10.4230/LIPIcs.ESA.2024.58},
  annote =	{Keywords: Online Algorithms, Random-Order Model, Maximum Independent Set of Rectangles, Hyperrectangles, Fat Objects, Interval Scheduling}
}
Document
Locally Computing Edge Orientations

Authors: Slobodan Mitrović, Ronitt Rubinfeld, and Mihir Singhal

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We consider the question of orienting the edges in a graph G such that every vertex has bounded out-degree. For graphs of arboricity α, there is an orientation in which every vertex has out-degree at most α and, moreover, the best possible maximum out-degree of an orientation is at least α - 1. We are thus interested in algorithms that can achieve a maximum out-degree of close to α. A widely studied approach for this problem in the distributed algorithms setting is a "peeling algorithm" that provides an orientation with maximum out-degree α(2+ε) in a logarithmic number of iterations. We consider this problem in the local computation algorithm (LCA) model, which quickly answers queries of the form "What is the orientation of edge (u,v)?" by probing the input graph. When the peeling algorithm is executed in the LCA setting by applying standard techniques, e.g., the Parnas-Ron paradigm, it requires Ω(n) probes per query on an n-vertex graph. In the case where G has unbounded degree, we show that any LCA that orients its edges to yield maximum out-degree r must use Ω(√ n/r) probes to G per query in the worst case, even if G is known to be a forest (that is, α = 1). We also show several algorithms with sublinear probe complexity when G has unbounded degree. When G is a tree such that the maximum degree Δ of G is bounded, we demonstrate an algorithm that uses Δ n^{1-log_Δ r + o(1)} probes to G per query. To obtain this result, we develop an edge-coloring approach that ultimately yields a graph-shattering-like result. We also use this shattering-like approach to demonstrate an LCA which 4-colors any tree using sublinear probes per query.
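
A sequential (non-distributed, non-LCA) rendition of the peeling idea may help: repeatedly peel a vertex whose remaining degree is at most a chosen threshold and orient its remaining edges away from it. A threshold of 2α suffices for any graph of arboricity α; the distributed variant referenced above peels in parallel rounds with threshold (2+ε)α. The function name and graph representation below are illustrative.

from collections import defaultdict

def peel_orientation(edges, threshold):
    """Repeatedly peel a vertex whose remaining degree is at most `threshold`
    and orient its still-unoriented edges away from it.  If the threshold is
    at least twice the arboricity of the graph, peeling never gets stuck and
    every vertex ends up with out-degree at most `threshold`."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    oriented = []                        # directed edges (tail, head)
    active = set(adj)
    while active:
        low = next((v for v in active if len(adj[v]) <= threshold), None)
        if low is None:
            raise ValueError("threshold too small for this graph")
        for nb in list(adj[low]):        # these edges leave the peeled vertex
            oriented.append((low, nb))
            adj[nb].discard(low)
        adj[low].clear()
        active.discard(low)
    return oriented

For example, a 4-cycle has arboricity 1, so peel_orientation([(0, 1), (1, 2), (2, 3), (3, 0)], 2) orients every vertex with out-degree at most 2.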

Cite as

Slobodan Mitrović, Ronitt Rubinfeld, and Mihir Singhal. Locally Computing Edge Orientations. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 89:1-89:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mitrovic_et_al:LIPIcs.ESA.2024.89,
  author =	{Mitrovi\'{c}, Slobodan and Rubinfeld, Ronitt and Singhal, Mihir},
  title =	{{Locally Computing Edge Orientations}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{89:1--89:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.89},
  URN =		{urn:nbn:de:0030-drops-211603},
  doi =		{10.4230/LIPIcs.ESA.2024.89},
  annote =	{Keywords: local computation algorithms, edge orientation, tree coloring}
}
Document
APPROX
Improved Online Load Balancing with Known Makespan

Authors: Martin Böhm, Matej Lieskovský, Sören Schmitt, Jiří Sgall, and Rob van Stee

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We break the barrier of 3/2 for the problem of online load balancing with known makespan, also known as bin stretching. In this problem, m identical machines and the optimal makespan are given. The load of a machine is the total size of all the jobs assigned to it and the makespan is the maximum load of all the machines. Jobs arrive online and the goal is to assign each job to a machine while staying within a small factor (the competitive ratio) of the optimal makespan. We present an algorithm that maintains a competitive ratio of 139/93 < 1.495 for sufficiently large values of m, improving the previous bound of 3/2. The value 3/2 represents a natural bound for this problem: as long as the online bins are of size at least 3/2 of the offline bin, all items that fit at least two times in an offline bin have two nice properties. They fit three times in an online bin and a single such item can be packed together with an item of any size in an online bin. These properties are now both lost, which means that putting even one job on a wrong machine can leave some job unassigned at the end. It also makes it harder to determine good thresholds for the item types. This was one of the main technical issues in getting below 3/2. The analysis consists of an intricate mixture of size and weight arguments.
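
For context on the model (and not the 139/93-competitive algorithm of the paper), the snippet below simulates the bin-stretching setting with the naive online rule of assigning each job to the currently least-loaded machine, whose stretching factor can be as bad as 2 - 1/m.

def stretch_of_least_loaded(jobs, m, opt_makespan):
    """Assign jobs online to the currently least-loaded of m identical
    machines and report the achieved load as a multiple of the known optimal
    makespan (the 'stretching factor')."""
    loads = [0.0] * m
    for size in jobs:                    # jobs arrive one by one
        i = min(range(m), key=lambda j: loads[j])
        loads[i] += size                 # irrevocable assignment
    return max(loads) / opt_makespan

For example, stretch_of_least_loaded([1, 1, 1, 1, 1, 1, 3], 3, 3.0) returns 5/3, matching the 2 - 1/m bound for m = 3.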

Cite as

Martin Böhm, Matej Lieskovský, Sören Schmitt, Jiří Sgall, and Rob van Stee. Improved Online Load Balancing with Known Makespan. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 10:1-10:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bohm_et_al:LIPIcs.APPROX/RANDOM.2024.10,
  author =	{B\"{o}hm, Martin and Lieskovsk\'{y}, Matej and Schmitt, S\"{o}ren and Sgall, Ji\v{r}{\'\i} and van Stee, Rob},
  title =	{{Improved Online Load Balancing with Known Makespan}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{10:1--10:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.10},
  URN =		{urn:nbn:de:0030-drops-210032},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.10},
  annote =	{Keywords: Online algorithms, bin stretching, bin packing}
}
Document
Gap MCSP Is Not (Levin) NP-Complete in Obfustopia

Authors: Noam Mazor and Rafael Pass

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We demonstrate that under believable cryptographic hardness assumptions, Gap versions of standard meta-complexity problems, such as the Minimum Circuit Size Problem (MCSP) and the Minimum Time-Bounded Kolmogorov Complexity problem (MKTP) are not NP-complete w.r.t. Levin (i.e., witness-preserving many-to-one) reductions. In more detail: - Assuming the existence of indistinguishability obfuscation, and subexponentially-secure one-way functions, an appropriate Gap version of MCSP is not NP-complete under randomized Levin-reductions. - Assuming the existence of subexponentially-secure indistinguishability obfuscation, subexponentially-secure one-way functions and injective PRGs, an appropriate Gap version of MKTP is not NP-complete under randomized Levin-reductions.

Cite as

Noam Mazor and Rafael Pass. Gap MCSP Is Not (Levin) NP-Complete in Obfustopia. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 36:1-36:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mazor_et_al:LIPIcs.CCC.2024.36,
  author =	{Mazor, Noam and Pass, Rafael},
  title =	{{Gap MCSP Is Not (Levin) NP-Complete in Obfustopia}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{36:1--36:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.36},
  URN =		{urn:nbn:de:0030-drops-204322},
  doi =		{10.4230/LIPIcs.CCC.2024.36},
  annote =	{Keywords: Kolmogorov complexity, MCSP, Levin Reduction}
}
Document
Streaming Matching and Edge Cover in Practice

Authors: S M Ferdous, Alex Pothen, and Mahantesh Halappanavar

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Graph algorithms with polynomial space and time requirements often become infeasible for massive graphs with billions of edges or more. State-of-the-art approaches therefore employ approximate serial, parallel, and distributed algorithms to tackle these challenges. However, such approaches require storing the entire graph in memory and thus need access to costly computing resources such as clusters and supercomputers. In this paper, we present practical streaming approaches for solving massive graph problems using limited memory for two prototypical graph problems: maximum weighted matching and minimum weighted edge cover. For matching, we conduct a thorough computational study on two of the semi-streaming algorithms, including a recent breakthrough result that achieves a 1/(2+ε)-approximation of the weight while using O(n log W /ε) memory (here n is the number of vertices and W is the maximum edge weight), designed by Paz and Schwartzman [SODA, 2017]. Empirically, we show that the semi-streaming algorithms produce matchings whose weight is close to that of the best 1/2-approximate offline algorithm while requiring less time and an order-of-magnitude less memory. For minimum weighted edge cover, we develop three novel semi-streaming algorithms. Two of these algorithms require a single pass through the input graph, use O(n log n) memory, and provide a 2-approximation guarantee on the objective. We also leverage a relationship between approximate maximum weighted matching and approximate minimum weighted edge cover to develop a two-pass (3/2+ε)-approximate algorithm with the memory requirement of Paz and Schwartzman’s semi-streaming matching algorithm. These streaming approaches are compared against the state-of-the-art 3/2-approximate offline algorithm. The semi-streaming matching and the novel edge cover algorithms proposed in this paper can process graphs with several billions of edges in under 30 minutes using 6 GB of memory, which is at least an order of magnitude improvement over the offline (non-streaming) algorithms. For the largest graph, the best alternative offline parallel approximation algorithm (GPA+ROMA) could not finish in three hours even while employing hundreds of processors and 1 TB of memory. We also demonstrate an application of semi-streaming algorithms by computing a matching using linearly bounded memory on intersection graphs derived from three machine learning datasets, while the existing offline algorithms could not complete on one of these datasets since its memory requirement exceeded 1 TB.
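
To convey the flavour of the semi-streaming matching algorithm studied here, below is a simplified one-pass sketch of the local-ratio approach of Paz and Schwartzman: maintain a potential per vertex, stack an edge whenever its weight exceeds the sum of its endpoints' potentials, and unwind the stack greedily at the end. The (1+ε) gain threshold that bounds the stack size and yields the 1/(2+ε) guarantee is omitted here, so this sketch only illustrates the mechanics.

from collections import defaultdict

def local_ratio_matching(edge_stream):
    """One pass over weighted edges (u, v, w): stack an edge if its weight
    exceeds the current potentials of its endpoints, raising both potentials
    by the surplus; afterwards unwind the stack and greedily keep every edge
    whose endpoints are still free."""
    potential = defaultdict(float)
    stack = []
    for u, v, w in edge_stream:
        gain = w - potential[u] - potential[v]
        if gain > 0:
            stack.append((u, v))
            potential[u] += gain
            potential[v] += gain
    matched, matching = set(), []
    while stack:                         # later edges are considered first
        u, v = stack.pop()
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

For example, local_ratio_matching([("a", "b", 1.0), ("b", "c", 2.0), ("c", "d", 1.0)]) returns [("b", "c")], which has weight 2 and here equals the optimum.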

Cite as

S M Ferdous, Alex Pothen, and Mahantesh Halappanavar. Streaming Matching and Edge Cover in Practice. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ferdous_et_al:LIPIcs.SEA.2024.12,
  author =	{Ferdous, S M and Pothen, Alex and Halappanavar, Mahantesh},
  title =	{{Streaming Matching and Edge Cover in Practice}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.12},
  URN =		{urn:nbn:de:0030-drops-203773},
  doi =		{10.4230/LIPIcs.SEA.2024.12},
  annote =	{Keywords: Matching, Edge Cover, Semi-Streaming Algorithm, Parallel Algorithms, Algorithm Engineering}
}
Document
Distributed Partial Coloring via Gradual Rounding

Authors: Avinandan Das, Pierre Fraigniaud, and Adi Rosén

Published in: LIPIcs, Volume 286, 27th International Conference on Principles of Distributed Systems (OPODIS 2023)


Abstract
For k ≥ 0, k-partial (k+1)-coloring asks to color the nodes of an n-node graph using a palette of k+1 colors such that every node v has at least min{k,deg(v)} neighbors colored with colors different from its own color. Hence, proper (Δ+1)-coloring is the special case of k-partial (k+1)-coloring when k = Δ. Ghaffari and Kuhn [FOCS 2021] recently proved that there exists a deterministic distributed algorithm that solves proper (Δ+1)-coloring of n-node graphs with maximum degree Δ in O(log n ⋅ log²Δ) rounds under the LOCAL model of distributed computing. This breakthrough result is achieved via an original iterated rounding approach. Using the same technique, Ghaffari and Kuhn also showed that there exists a deterministic algorithm that solves proper O(a)-coloring of n-node graphs with arboricity a in O(log n ⋅ log³a) rounds. It directly follows from this latter result that k-partial O(k)-coloring can be solved deterministically in O(log n ⋅ log³k) rounds. We develop an extension of the Ghaffari and Kuhn algorithm for proper (Δ+1)-coloring, and show that it solves k-partial (k+1)-coloring, thus generalizing their main result. Our algorithm runs in O(log n ⋅ log³k) rounds, like the algorithm that follows from Ghaffari and Kuhn’s algorithm for graphs with bounded arboricity, but uses only k+1 colors, i.e., the smallest number c of colors such that every graph has a k-partial c-coloring. Like all the previously mentioned algorithms, our algorithm actually solves the general list-coloring version of the problem. Specifically, every node v receives as input an integer demand d(v) ≤ deg(v), and a list of at least d(v)+1 colors. Every node must then output a color from its list such that the resulting coloring satisfies that every node v has at least d(v) neighbors with colors different from its own. Our algorithm solves this problem in O(log n ⋅ log³k) rounds where k = max_v d(v). Moreover, in the specific case where all lists of colors given to the nodes as input share a common color c^* known to all nodes, one can save one log k factor. In particular, for standard k-partial (k+1)-coloring, which corresponds to the case where all nodes are given the same list {1,…,k+1}, one can modify our algorithm so that it runs in O(log n ⋅ log²k) rounds, and thus matches the complexity of Ghaffari and Kuhn’s algorithm for (Δ+1)-coloring for k = Δ.
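
A direct check of the definition may help fix the terminology: a coloring is k-partial if every node has at least min{k, deg(v)} neighbors colored differently from itself. The verifier below is illustrative only and plays no role in the paper's distributed algorithm.

def is_k_partial_coloring(adj, coloring, k):
    """adj: node -> set of neighbors; coloring: node -> color.
    True iff every node v has at least min(k, deg(v)) neighbors whose color
    differs from coloring[v]."""
    for v, neighbors in adj.items():
        differing = sum(1 for u in neighbors if coloring[u] != coloring[v])
        if differing < min(k, len(neighbors)):
            return False
    return True

On the triangle adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}, the map {0: 0, 1: 0, 2: 1} is a valid 1-partial 2-coloring (every node still has a differently-colored neighbor) even though it is not a proper coloring, and the same map fails the check for k = 2.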

Cite as

Avinandan Das, Pierre Fraigniaud, and Adi Rosén. Distributed Partial Coloring via Gradual Rounding. In 27th International Conference on Principles of Distributed Systems (OPODIS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 286, pp. 30:1-30:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{das_et_al:LIPIcs.OPODIS.2023.30,
  author =	{Das, Avinandan and Fraigniaud, Pierre and Ros\'{e}n, Adi},
  title =	{{Distributed Partial Coloring via Gradual Rounding}},
  booktitle =	{27th International Conference on Principles of Distributed Systems (OPODIS 2023)},
  pages =	{30:1--30:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-308-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{286},
  editor =	{Bessani, Alysson and D\'{e}fago, Xavier and Nakamura, Junya and Wada, Koichi and Yamauchi, Yukiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2023.30},
  URN =		{urn:nbn:de:0030-drops-195205},
  doi =		{10.4230/LIPIcs.OPODIS.2023.30},
  annote =	{Keywords: Distributed graph coloring, partial coloring, weak coloring}
}
Document
A New Approach to Multi-Party Peer-to-Peer Communication Complexity

Authors: Adi Rosén and Florent Urrutia

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We introduce new models and new information theoretic measures for the study of communication complexity in the natural peer-to-peer, multi-party, number-in-hand setting. We prove a number of properties of our new models and measures, and then, in order to exemplify their effectiveness, we use them to prove two lower bounds. The more elaborate one is a tight lower bound of Omega(kn) on the multi-party peer-to-peer randomized communication complexity of the k-player, n-bit function Disjointness, Disj_k^n. The other one is a tight lower bound of Omega(kn) on the multi-party peer-to-peer randomized communication complexity of the k-player, n-bit bitwise parity function, Par_k^n. Both lower bounds hold when n=Omega(k). The lower bound for Disj_k^n improves over the lower bound that can be inferred from the result of Braverman et al. (FOCS 2013), which was proved in the coordinator model and can yield a lower bound of Omega(kn/log k) in the peer-to-peer model. To the best of our knowledge, our lower bounds are the first tight (non-trivial) lower bounds on communication complexity in the natural peer-to-peer multi-party setting. In addition to the above results for communication complexity, we also prove, using the same tools, an Omega(n) lower bound on the number of random bits necessary for the (information theoretic) private computation of the function Disj_k^n.
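
For readers who want the two benchmark functions spelled out, the direct (non-distributed) evaluations below use common conventions: Par_k^n is the bitwise XOR of the k players' n-bit inputs, and Disj_k^n outputs 1 iff no coordinate is 1 in all k inputs, i.e., the k sets have empty common intersection. These conventions are assumptions for illustration; consult the paper for its exact definitions.

def parity_k_n(inputs):
    """Par_k^n: bitwise XOR of the k players' n-bit inputs."""
    out = [0] * len(inputs[0])
    for x in inputs:
        out = [a ^ b for a, b in zip(out, x)]
    return out

def disjointness_k_n(inputs):
    """Disj_k^n under the 'empty common intersection' convention: output 1
    iff there is no coordinate where all k inputs equal 1."""
    n = len(inputs[0])
    return int(not any(all(x[j] == 1 for x in inputs) for j in range(n)))

For example, parity_k_n([[1, 0, 1], [1, 1, 0], [0, 1, 1]]) returns [0, 0, 0], and disjointness_k_n on the same inputs returns 1 since no column is all ones.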

Cite as

Adi Rosén and Florent Urrutia. A New Approach to Multi-Party Peer-to-Peer Communication Complexity. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 64:1-64:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{rosen_et_al:LIPIcs.ITCS.2019.64,
  author =	{Ros\'{e}n, Adi and Urrutia, Florent},
  title =	{{A New Approach to Multi-Party Peer-to-Peer Communication Complexity}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{64:1--64:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.64},
  URN =		{urn:nbn:de:0030-drops-101576},
  doi =		{10.4230/LIPIcs.ITCS.2019.64},
  annote =	{Keywords: communication complexity, multi-party communication complexity, peer-to-peer communication complexity, information complexity, private computation}
}
Document
Sublinear Random Access Generators for Preferential Attachment Graphs

Authors: Guy Even, Reut Levi, Moti Medina, and Adi Rosén

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
We consider the problem of sampling from a distribution on graphs, specifically when the distribution is defined by an evolving graph model, and consider the time, space and randomness complexities of such samplers. In the standard approach, the whole graph is chosen randomly according to the randomized evolving process, stored in full, and then queries on the sampled graph are answered by simply accessing the stored graph. This may require prohibitive amounts of time, space and random bits, especially when only a small number of queries are actually issued. Instead, we propose to generate the graph on-the-fly, in response to queries, and therefore to require amounts of time, space, and random bits which are a function of the actual number of queries. We focus on two random graph models: the Barabási-Albert Preferential Attachment model (BA-graphs) and the random recursive tree model. We give on-the-fly generation algorithms for both models. With probability 1-1/poly(n), each and every query is answered in polylog(n) time, and the increase in space and the number of random bits consumed by any single query are both polylog(n), where n denotes the number of vertices in the graph. Our results show that, although the BA random graph model is defined by a sequential process, efficient random access to the graph's nodes is possible. In addition to the conceptual contribution, efficient on-the-fly generation of random graphs can serve as a tool for the efficient simulation of sublinear algorithms over large BA-graphs, and the efficient estimation of their performance on such graphs.
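
By way of contrast with the on-the-fly generators above, the standard sequential Barabási-Albert process stores everything and attaches each new vertex to m existing vertices chosen with probability proportional to their current degree. A sketch follows; the clique seeding and the repeated-endpoints sampling trick are common implementation choices, not requirements of the model or of the paper.

import random

def barabasi_albert(n, m, seed=None):
    """Standard sequential BA process: start from a clique on m + 1 vertices,
    then attach each new vertex to m distinct existing vertices sampled with
    probability proportional to their current degree (via a list that lists
    every vertex once per incident edge endpoint)."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i)]   # seed clique
    endpoints = [v for e in edges for v in e]   # vertex v appears deg(v) times
    for v in range(m + 1, n):
        targets = set()
        while len(targets) < m:                 # degree-proportional sampling
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((v, t))
        endpoints.extend(targets)
        endpoints.extend([v] * m)
    return edges

For example, barabasi_albert(1000, 3, seed=0) produces a graph whose degree distribution is heavy-tailed, in contrast to a uniformly random graph of the same size.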

Cite as

Guy Even, Reut Levi, Moti Medina, and Adi Rosén. Sublinear Random Access Generators for Preferential Attachment Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 6:1-6:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{even_et_al:LIPIcs.ICALP.2017.6,
  author =	{Even, Guy and Levi, Reut and Medina, Moti and Ros\'{e}n, Adi},
  title =	{{Sublinear Random Access Generators for Preferential Attachment Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{6:1--6:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.6},
  URN =		{urn:nbn:de:0030-drops-74242},
  doi =		{10.4230/LIPIcs.ICALP.2017.6},
  annote =	{Keywords: local computation algorithms, preferential attachment graphs, random recursive trees, sublinear algorithms}
}
Document
Multi-Party Protocols, Information Complexity and Privacy

Authors: Iordanis Kerenidis, Adi Rosén, and Florent Urrutia

Published in: LIPIcs, Volume 58, 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)


Abstract
We introduce the new measure of Public Information Complexity (PIC), as a tool for the study of multi-party computation protocols, and of quantities such as their communication complexity, or the amount of randomness they require in the context of information-theoretic private computations. We are able to use this measure directly in the natural asynchronous message-passing peer-to-peer model and show a number of interesting properties and applications of our new notion: the Public Information Complexity is a lower bound on the Communication Complexity and an upper bound on the Information Complexity; the difference between the Public Information Complexity and the Information Complexity provides a lower bound on the amount of randomness used in a protocol; any communication protocol can be compressed to its Public Information Cost; an explicit calculation of the zero-error Public Information Complexity of the k-party, n-bit Parity function, where a player outputs the bit-wise parity of the inputs. The latter result establishes that the amount of randomness needed for a private protocol that computes this function is Omega(n).

Cite as

Iordanis Kerenidis, Adi Rosén, and Florent Urrutia. Multi-Party Protocols, Information Complexity and Privacy. In 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 58, pp. 57:1-57:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kerenidis_et_al:LIPIcs.MFCS.2016.57,
  author =	{Kerenidis, Iordanis and Ros\'{e}n, Adi and Urrutia, Florent},
  title =	{{Multi-Party Protocols, Information Complexity and Privacy}},
  booktitle =	{41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)},
  pages =	{57:1--57:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-016-3},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{58},
  editor =	{Faliszewski, Piotr and Muscholl, Anca and Niedermeier, Rolf},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2016.57},
  URN =		{urn:nbn:de:0030-drops-64696},
  doi =		{10.4230/LIPIcs.MFCS.2016.57},
  annote =	{Keywords: multi-party protocols, information theory, communication complexity, multi-party private computation (MPC), randomness}
}
Document
A Constant Approximation Algorithm for Scheduling Packets on Line Networks

Authors: Guy Even, Moti Medina, and Adi Rosén

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
In this paper we improve the approximation ratio for the problem of scheduling packets on line networks with bounded buffers with the aim of maximizing the throughput. Each node in the network has a local buffer of bounded size B, and each edge (or link) can transmit a limited number c of packets in every time unit. The input to the problem consists of a set of packet requests, each defined by a source node, a destination node, and a release time. We denote by n the size of the network. A solution for this problem is a schedule that delivers (some of the) packets to their destinations without violating the capacity constraints of the network (buffers or edges). Our goal is to design an efficient algorithm that computes a schedule that maximizes the number of packets that arrive at their respective destinations. We give a randomized approximation algorithm with constant approximation ratio for the case where the buffer-size to link-capacity ratio, B/c, does not depend on the input size. This improves over the previously best result of O(log^* n) [Räcke and Rosén, SPAA 2009]. Our improvement is based on a new combinatorial lemma that we prove, stating, roughly speaking, that if packets are allowed to stay put in buffers only a limited number of time steps, 2d, where d is the longest source-destination distance, then the optimal solution is decreased by only a constant factor. This claim was not previously known in the integral (unsplittable, zero-one) case, and may find additional applications for routing and scheduling algorithms. While we are not able to give the same improvement for the related problem when packets have hard deadlines, our algorithm does support "soft deadlines". That is, if packets have deadlines, we achieve a constant approximation ratio when the produced solution is allowed to miss deadlines by at most log n time units.

Cite as

Guy Even, Moti Medina, and Adi Rosén. A Constant Approximation Algorithm for Scheduling Packets on Line Networks. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 40:1-40:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{even_et_al:LIPIcs.ESA.2016.40,
  author =	{Even, Guy and Medina, Moti and Ros\'{e}n, Adi},
  title =	{{A Constant Approximation Algorithm for Scheduling Packets on Line Networks}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{40:1--40:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.40},
  URN =		{urn:nbn:de:0030-drops-63524},
  doi =		{10.4230/LIPIcs.ESA.2016.40},
  annote =	{Keywords: approximation algorithms, linear programming, randomized rounding, packet scheduling, admission control}
}
Document
Online Budgeted Maximum Coverage

Authors: Dror Rawitz and Adi Rosén

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
We study the Online Budgeted Maximum Coverage (OBMC) problem. Subsets of a weighted ground set U arrive one by one, where each set has a cost. The online algorithm has to select a collection of sets, under the constraint that their cost is at most a given budget. Upon arrival of a set the algorithm must decide whether to accept or to reject the arriving set, and it may also drop previously accepted sets (preemption). Rejecting or dropping a set is irrevocable. The goal is to maximize the total weight of the elements covered by the sets in the chosen collection. We present a deterministic 4/(1-r)-competitive algorithm for OBMC, where r is the maximum ratio between the cost of a set and the total budget. Building on that algorithm, we then present a randomized O(1)-competitive algorithm for OBMC. On the other hand, we show that the competitive ratio of any deterministic online algorithm is Omega(1/(sqrt{1-r})). We also give a deterministic O(Delta)-competitive algorithm, where Delta is the maximum weight of a set (given that the minimum element weight is 1), and if the total weight of all elements, w(U), is known in advance, we show that a slight modification of that algorithm is O(min{Delta,sqrt{w(U)}})-competitive. A matching lower bound of Omega(min{Delta,sqrt{w(U)}}) is also given. Previous to the present work, only the unit cost version of OBMC was studied under the online setting, giving a 4-competitive algorithm [Saha, Getoor, 2009]. Finally, our results, including the lower bounds, apply to Removable Online Knapsack which is the preemptive version of the Online Knapsack problem.
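
For orientation, the classical offline heuristic for budgeted maximum coverage greedily adds the set with the best ratio of newly covered weight to cost while the budget allows; the online algorithms above must instead commit (and possibly preempt) as sets arrive. The sketch below is that offline baseline, not the paper's online algorithm, and it omits the standard comparison with the best single affordable set.

def offline_density_greedy(sets, costs, weights, budget):
    """Offline heuristic for budgeted maximum coverage: while the budget
    allows, add the set with the largest ratio of newly covered weight to
    cost.  (A constant-factor offline guarantee additionally compares the
    result with the best single affordable set, which is omitted here.)"""
    covered, chosen, remaining = set(), [], budget
    while True:
        best, best_ratio = None, 0.0
        for i in range(len(sets)):
            if i in chosen or costs[i] > remaining:
                continue
            gain = sum(weights[e] for e in sets[i] if e not in covered)
            if gain / costs[i] > best_ratio:
                best, best_ratio = i, gain / costs[i]
        if best is None:
            break
        chosen.append(best)
        covered |= set(sets[best])
        remaining -= costs[best]
    return chosen, sum(weights[e] for e in covered)

With element weights {1: 3, 2: 1, 3: 2}, unit costs and budget 2, offline_density_greedy([{1, 2}, {2, 3}, {3}], [1, 1, 1], {1: 3, 2: 1, 3: 2}, 2) returns ([0, 1], 6).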

Cite as

Dror Rawitz and Adi Rosén. Online Budgeted Maximum Coverage. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 73:1-73:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{rawitz_et_al:LIPIcs.ESA.2016.73,
  author =	{Rawitz, Dror and Ros\'{e}n, Adi},
  title =	{{Online Budgeted Maximum Coverage}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{73:1--73:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.73},
  URN =		{urn:nbn:de:0030-drops-64146},
  doi =		{10.4230/LIPIcs.ESA.2016.73},
  annote =	{Keywords: budgeted coverage, maximum coverage, online algorithms, competitive analysis, removable online knapsack}
}
Document
Paid Exchanges are Worth the Price

Authors: Alejandro López-Ortiz, Marc P. Renault, and Adi Rosén

Published in: LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)


Abstract
We consider the list update problem as defined in the seminal work on competitive analysis by Sleator and Tarjan [12]. In this problem, a sequence of requests, consisting of items to access in a linked list, is given. After an item is accessed it can be moved to any position forward in the list at no cost (free exchange), and, at any time, any two adjacent items can be swapped at a cost of 1 (paid exchange). The cost to access an item is its current position in the list. The goal is to dynamically rearrange the list so as to minimize the total cost (accrued from accesses and exchanges) over the request sequence. We show a lower bound of 12/11 on the worst-case ratio between the performance of an (offline) optimal algorithm that can only perform free exchanges and that of an (offline) optimal algorithm that can perform both paid and free exchanges. This answers an outstanding question that has been open since 1996 [10].
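
To make the cost model concrete: accessing an item costs its current 1-based position, moving the accessed item forward afterwards is free, and each swap of adjacent items costs 1. The snippet below simulates the total cost of the classical Move-To-Front rule (which uses only free exchanges) under this model; it illustrates the Sleator-Tarjan framework, not the paper's lower-bound construction.

def move_to_front_cost(initial_list, requests):
    """Total cost of serving `requests` with the Move-To-Front rule: each
    access costs the item's current 1-based position, and moving the accessed
    item to the front afterwards is a free exchange."""
    lst = list(initial_list)
    total = 0
    for item in requests:
        pos = lst.index(item) + 1        # access cost
        total += pos
        lst.insert(0, lst.pop(pos - 1))  # free exchange: move to front
    return total

For example, move_to_front_cost(["a", "b", "c"], ["c", "c", "b"]) returns 3 + 1 + 3 = 7.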

Cite as

Alejandro López-Ortiz, Marc P. Renault, and Adi Rosén. Paid Exchanges are Worth the Price. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 636-648, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{lopezortiz_et_al:LIPIcs.STACS.2015.636,
  author =	{L\'{o}pez-Ortiz, Alejandro and Renault, Marc P. and Ros\'{e}n, Adi},
  title =	{{Paid Exchanges are Worth the Price}},
  booktitle =	{32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)},
  pages =	{636--648},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-78-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{30},
  editor =	{Mayr, Ernst W. and Ollinger, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.636},
  URN =		{urn:nbn:de:0030-drops-49476},
  doi =		{10.4230/LIPIcs.STACS.2015.636},
  annote =	{Keywords: list update problem, online computation, online algorithms, competitive analysis, lower bounds}
}
Document
10071 Open Problems – Scheduling

Authors: Jim Anderson, Björn Andersson, Yossi Azar, Nikhil Bansal, Enrico Bini, Marek Chrobak, José Correa, Liliana Cucu-Grosjean, Rob Davis, Arvind Easwaran, Jeff Edmonds, Shelby Funk, Sathish Gopalakrishnan, Han Hoogeveen, Claire Mathieu, Nicole Megow, Seffi Naor, Kirk Pruhs, Maurice Queyranne, Adi Rosén, Nicolas Schabanel, Jiří Sgall, René Sitters, Sebastian Stiller, Marc Uetz, Tjark Vredeveld, and Gerhard J. Woeginger

Published in: Dagstuhl Seminar Proceedings, Volume 10071, Scheduling (2010)


Abstract
Collection of the open problems presented at the scheduling seminar.

Cite as

Jim Anderson, Björn Andersson, Yossi Azar, Nikhil Bansal, Enrico Bini, Marek Chrobak, José Correa, Liliana Cucu-Grosjean, Rob Davis, Arvind Easwaran, Jeff Edmonds, Shelby Funk, Sathish Gopalakrishnan, Han Hoogeveen, Claire Mathieu, Nicole Megow, Seffi Naor, Kirk Pruhs, Maurice Queyranne, Adi Rosén, Nicolas Schabanel, Jiří Sgall, René Sitters, Sebastian Stiller, Marc Uetz, Tjark Vredeveld, and Gerhard J. Woeginger. 10071 Open Problems – Scheduling. In Scheduling. Dagstuhl Seminar Proceedings, Volume 10071, pp. 1-24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{anderson_et_al:DagSemProc.10071.3,
  author =	{Anderson, Jim and Andersson, Bj\"{o}rn and Azar, Yossi and Bansal, Nikhil and Bini, Enrico and Chrobak, Marek and Correa, Jos\'{e} and Cucu-Grosjean, Liliana and Davis, Rob and Easwaran, Arvind and Edmonds, Jeff and Funk, Shelby and Gopalakrishnan, Sathish and Hoogeveen, Han and Mathieu, Claire and Megow, Nicole and Naor, Seffi and Pruhs, Kirk and Queyranne, Maurice and Ros\'{e}n, Adi and Schabanel, Nicolas and Sgall, Ji\v{r}{\'\i} and Sitters, Ren\'{e} and Stiller, Sebastian and Uetz, Marc and Vredeveld, Tjark and Woeginger, Gerhard J.},
  title =	{{10071 Open Problems – Scheduling}},
  booktitle =	{Scheduling},
  pages =	{1--24},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10071},
  editor =	{Susanne Albers and Sanjoy K. Baruah and Rolf H. M\"{o}hring and Kirk Pruhs},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10071.3},
  URN =		{urn:nbn:de:0030-drops-25367},
  doi =		{10.4230/DagSemProc.10071.3},
  annote =	{Keywords: Open problems, scheduling}
}
  • Refine by Author
  • 8 Rosén, Adi
  • 2 Even, Guy
  • 2 Medina, Moti
  • 2 Sgall, Jiří
  • 2 Urrutia, Florent
  • Show More...

  • Refine by Classification
  • 2 Theory of computation → Online algorithms
  • 2 Theory of computation → Streaming, sublinear and near linear time algorithms
  • 1 Computing methodologies
  • 1 Computing methodologies → Shared memory algorithms
  • 1 Mathematics of computing → Information theory
  • Show More...

  • Refine by Keyword
  • 2 communication complexity
  • 2 competitive analysis
  • 2 local computation algorithms
  • 2 online algorithms
  • 1 Algorithm Engineering
  • Show More...

  • Refine by Type
  • 15 document

  • Refine by Publication Year
  • 8 2024
  • 3 2016
  • 1 2010
  • 1 2015
  • 1 2017
  • Show More...
