18 Search Results for "Schwartzman, Gregory"


Document
Parallel Set Cover and Hypergraph Matching via Uniform Random Sampling

Authors: Laxman Dhulipala, Michael Dinitz, Jakub Łącki, and Slobodan Mitrović

Published in: LIPIcs, Volume 319, 38th International Symposium on Distributed Computing (DISC 2024)


Abstract
The SetCover problem has been extensively studied in many different models of computation, including parallel and distributed settings. From an approximation point of view, there are two standard guarantees: an O(log Δ)-approximation (where Δ is the maximum set size) and an O(f)-approximation (where f is the maximum number of sets containing any given element). In this paper, we introduce a new, surprisingly simple, model-independent approach to solving SetCover in unweighted graphs. We obtain multiple improved algorithms in the MPC and CRCW PRAM models. First, in the MPC model with sublinear space per machine, our algorithms can compute an O(f)-approximation to SetCover in Ô(√{log Δ} + log f) rounds and an O(log Δ)-approximation in O(log^{3/2} n) rounds. Moreover, in the PRAM model, we give an O(f)-approximate algorithm using linear work and O(log n) depth. All these bounds improve the existing round complexity/depth bounds by a log^{Ω(1)} n factor. Furthermore, our approach leads to many other new algorithms, including improved algorithms for the HypergraphMatching problem in the MPC model, as well as simpler SetCover algorithms that match the existing bounds.
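
For context, here is a minimal sequential sketch of the classical greedy SetCover heuristic, which attains the H(Δ) ≤ 1 + ln Δ approximation that the O(log Δ) guarantee above refers to. It is only a baseline illustration, not the paper's sampling-based parallel approach; the function name and the toy instance are invented for this example.

```python
# Classical greedy SetCover: repeatedly pick the set covering the most
# uncovered elements. Achieves an H(Delta) <= 1 + ln(Delta) approximation,
# where Delta is the maximum set size. Baseline only -- not the paper's
# sampling-based parallel algorithm.
def greedy_set_cover(universe, sets):
    """universe: iterable of elements; sets: dict name -> set of elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the set with the largest intersection with the uncovered elements.
        best = max(sets, key=lambda s: len(sets[s] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("instance is not coverable")
        cover.append(best)
        uncovered -= sets[best]
    return cover

if __name__ == "__main__":
    U = range(1, 8)
    S = {"A": {1, 2, 3}, "B": {3, 4, 5}, "C": {5, 6, 7}, "D": {1, 4, 7}}
    print(greedy_set_cover(U, S))
```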

Cite as

Laxman Dhulipala, Michael Dinitz, Jakub Łącki, and Slobodan Mitrović. Parallel Set Cover and Hypergraph Matching via Uniform Random Sampling. In 38th International Symposium on Distributed Computing (DISC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 319, pp. 19:1-19:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dhulipala_et_al:LIPIcs.DISC.2024.19,
  author =	{Dhulipala, Laxman and Dinitz, Michael and {\L}\k{a}cki, Jakub and Mitrovi\'{c}, Slobodan},
  title =	{{Parallel Set Cover and Hypergraph Matching via Uniform Random Sampling}},
  booktitle =	{38th International Symposium on Distributed Computing (DISC 2024)},
  pages =	{19:1--19:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-352-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{319},
  editor =	{Alistarh, Dan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2024.19},
  URN =		{urn:nbn:de:0030-drops-212453},
  doi =		{10.4230/LIPIcs.DISC.2024.19},
  annote =	{Keywords: approximate maximum matching, set cover, hypergraph matching, PRAM, massively parallel computation}
}
Document
Broadcast and Consensus in Stochastic Dynamic Networks with Byzantine Nodes and Adversarial Edges

Authors: Antoine El-Hayek, Monika Henzinger, and Stefan Schmid

Published in: LIPIcs, Volume 319, 38th International Symposium on Distributed Computing (DISC 2024)


Abstract
Broadcast and Consensus are among the most fundamental tasks in distributed computing. These tasks are particularly challenging in dynamic networks, where communication across the network links may be unreliable, e.g., due to mobility or failures. In recent years, researchers have derived several impossibility results and high time complexity lower bounds for these tasks. Specifically, for the setting where in each round of communication the adversary is allowed to choose one rooted tree along which the information is disseminated, there are lower and upper bounds for Broadcast that are linear in the number n of nodes, and for n ≥ 3 the adversary can guarantee that Consensus never happens. This setting is called the oblivious message adversary for rooted trees. Also note that if the adversary is allowed to choose a graph that does not contain a rooted tree, then it can guarantee that Broadcast and Consensus will never happen. However, such deterministic adversarial models may be overly pessimistic, as many processes in real-world settings are stochastic in nature rather than worst-case. This paper studies Broadcast on stochastic dynamic networks and shows that the situation is very different from the deterministic case. In particular, we show that if information dissemination occurs along random rooted trees and directed Erdős–Rényi graphs, Broadcast completes in O(log n) rounds of communication with high probability. The fundamental insight in our analysis is that key variables are mutually independent. We then study two adversarial models, (a) one with Byzantine nodes and (b) one where an adversary controls the edges. (a) Our techniques for the setting without Byzantine nodes are general enough to extend to Byzantine nodes. (b) In the spirit of smoothed analysis, we introduce the notion of a randomized oblivious message adversary, where in each round an adversary picks k ≤ 2n/3 edges to appear in the communication network, and then a graph (e.g., a rooted tree or a directed Erdős–Rényi graph) is chosen uniformly at random among the set of all such graphs that include these edges. We show that Broadcast completes in a finite number of rounds, which is, e.g., O(k + log n) rounds in rooted trees. We then extend these results to All-to-All Broadcast and Consensus, and give lower bounds that show that most of our upper bounds are tight.
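
As a toy illustration of the stochastic dissemination process described above, the following sketch simulates Broadcast where each round's communication graph is a fresh random rooted tree and messages travel one hop per round. The tree distribution used here (a random recursive tree over a random node permutation) is an assumption for illustration and need not match the distribution analyzed in the paper.

```python
import random

def broadcast_rounds(n, seed=0):
    """Toy simulation of Broadcast when, in each round, messages travel one
    hop along a fresh random rooted tree. The tree here is a random
    recursive tree over a random permutation of the nodes (an assumption;
    the paper's distribution over rooted trees may differ)."""
    rng = random.Random(seed)
    informed = {0}              # node 0 holds the initial message
    rounds = 0
    while len(informed) < n:
        rounds += 1
        order = list(range(n))
        rng.shuffle(order)      # order[0] is this round's root
        newly = set()
        for i in range(1, n):
            child = order[i]
            parent = order[rng.randrange(i)]   # attach to an earlier node
            if parent in informed and child not in informed:
                newly.add(child)
        informed |= newly
    return rounds

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        print(n, broadcast_rounds(n))   # grows roughly logarithmically in n
```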

Cite as

Antoine El-Hayek, Monika Henzinger, and Stefan Schmid. Broadcast and Consensus in Stochastic Dynamic Networks with Byzantine Nodes and Adversarial Edges. In 38th International Symposium on Distributed Computing (DISC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 319, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{elhayek_et_al:LIPIcs.DISC.2024.21,
  author =	{El-Hayek, Antoine and Henzinger, Monika and Schmid, Stefan},
  title =	{{Broadcast and Consensus in Stochastic Dynamic Networks with Byzantine Nodes and Adversarial Edges}},
  booktitle =	{38th International Symposium on Distributed Computing (DISC 2024)},
  pages =	{21:1--21:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-352-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{319},
  editor =	{Alistarh, Dan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2024.21},
  URN =		{urn:nbn:de:0030-drops-212476},
  doi =		{10.4230/LIPIcs.DISC.2024.21},
  annote =	{Keywords: Broadcast, Smoothed Analysis, Stochastic Networks, Dynamic Networks}
}
Document
Massively Parallel Ruling Set Made Deterministic

Authors: Jeff Giliberti and Zahra Parsaeian

Published in: LIPIcs, Volume 319, 38th International Symposium on Distributed Computing (DISC 2024)


Abstract
We study the deterministic complexity of the 2-Ruling Set problem in the model of Massively Parallel Computation (MPC) with linear and strongly sublinear local memory. - Linear MPC: We present a constant-round deterministic algorithm for the 2-Ruling Set problem that matches the randomized round complexity recently settled by Cambus, Kuhn, Pai, and Uitto [DISC'23], and improves upon the deterministic O(log log n)-round algorithm by Pai and Pemmaraju [PODC'22]. Our main ingredient is a simpler analysis of CKPU’s algorithm based solely on bounded independence, which makes its efficient derandomization possible. - Sublinear MPC: We present a deterministic algorithm that computes a 2-Ruling Set in Õ(√{log n}) rounds. Notably, this is the first deterministic ruling set algorithm with sublogarithmic round complexity, improving on the O(log Δ + log log^* n)-round complexity that stems from the deterministic MIS algorithm of Czumaj, Davies, and Parter [TALG'21]. Our result is based on a simple and fast randomness-efficient construction that achieves the same sparsification as that of the randomized Õ(√{log n})-round LOCAL algorithm by Kothapalli and Pemmaraju [FSTTCS'12].

Cite as

Jeff Giliberti and Zahra Parsaeian. Massively Parallel Ruling Set Made Deterministic. In 38th International Symposium on Distributed Computing (DISC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 319, pp. 29:1-29:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{giliberti_et_al:LIPIcs.DISC.2024.29,
  author =	{Giliberti, Jeff and Parsaeian, Zahra},
  title =	{{Massively Parallel Ruling Set Made Deterministic}},
  booktitle =	{38th International Symposium on Distributed Computing (DISC 2024)},
  pages =	{29:1--29:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-352-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{319},
  editor =	{Alistarh, Dan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2024.29},
  URN =		{urn:nbn:de:0030-drops-212551},
  doi =		{10.4230/LIPIcs.DISC.2024.29},
  annote =	{Keywords: deterministic algorithms, distributed computing, massively parallel computation, graph algorithms, derandomization}
}
Document
Interval Selection in Sliding Windows

Authors: Cezar-Mihail Alexandru and Christian Konrad

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We initiate the study of the Interval Selection problem in the (streaming) sliding window model of computation. In this problem, an algorithm receives a potentially infinite stream of intervals on the line, and the objective is to maintain at every moment an approximation to a largest possible subset of disjoint intervals among the L most recent intervals, for some integer L. We give the following results: 1) In the unit-length intervals case, we give a 2-approximation sliding window algorithm with space Õ(|OPT|), and we show that any sliding window algorithm that computes a (2-ε)-approximation requires space Ω(L), for any ε > 0. 2) In the arbitrary-length case, we give a (11/3+ε)-approximation sliding window algorithm with space Õ(|OPT|), for any constant ε > 0, which constitutes our main result. We also show that space Ω(L) is needed for algorithms that compute a (2.5-ε)-approximation, for any ε > 0. Our main technical contribution is an improvement over the smooth histogram technique, which consists of running independent copies of a traditional streaming algorithm with different start times. By employing the one-pass 2-approximation streaming algorithm by Cabello and Pérez-Lantero [Theor. Comput. Sci. '17] for Interval Selection on arbitrary-length intervals as the underlying algorithm, the smooth histogram technique immediately yields a (4+ε)-approximation in this setting. Our improvement is obtained by forwarding the structure of the intervals identified in a run to the subsequent run, which constrains the shape of an optimal solution and allows us to target optimal intervals differently.
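
For reference, the offline optimum that these sliding-window algorithms approximate can be computed exactly by the classical earliest-finishing-time greedy. The sketch below is that offline baseline, not a streaming or sliding-window algorithm; the function name and the toy instance are illustrative.

```python
def max_disjoint_intervals(intervals):
    """Offline exact algorithm for Interval Selection: sort by right endpoint
    and greedily keep every interval that does not overlap the last kept one.
    Intervals are (left, right) pairs; closed intervals assumed."""
    chosen = []
    last_right = float("-inf")
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_right:        # disjoint from everything chosen so far
            chosen.append((left, right))
            last_right = right
    return chosen

if __name__ == "__main__":
    print(max_disjoint_intervals([(0, 3), (2, 4), (5, 7), (6, 9), (8, 10)]))
    # -> [(0, 3), (5, 7), (8, 10)]
```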

Cite as

Cezar-Mihail Alexandru and Christian Konrad. Interval Selection in Sliding Windows. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 8:1-8:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alexandru_et_al:LIPIcs.ESA.2024.8,
  author =	{Alexandru, Cezar-Mihail and Konrad, Christian},
  title =	{{Interval Selection in Sliding Windows}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{8:1--8:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.8},
  URN =		{urn:nbn:de:0030-drops-210795},
  doi =		{10.4230/LIPIcs.ESA.2024.8},
  annote =	{Keywords: Sliding window algorithms, Streaming algorithms, Interval selection}
}
Document
Semi-Streaming Algorithms for Weighted k-Disjoint Matchings

Authors: S M Ferdous, Bhargav Samineni, Alex Pothen, Mahantesh Halappanavar, and Bala Krishnamoorthy

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We design and implement two single-pass semi-streaming algorithms for the maximum weight k-disjoint matching (k-DM) problem. Given an integer k, the k-DM problem is to find k pairwise edge-disjoint matchings such that the sum of the weights of the matchings is maximized. For k ≥ 2, this problem is NP-hard. Our first algorithm is based on the primal-dual framework of a linear programming relaxation of the problem and is 1/(3+ε)-approximate. We also develop an approximation preserving reduction from k-DM to the maximum weight b-matching problem. Leveraging this reduction and an existing semi-streaming b-matching algorithm, we design a (1/(2+ε))(1 - 1/(k+1))-approximate semi-streaming algorithm for k-DM. For any constant ε > 0, both of these algorithms require O(nk log_{1+ε}² n) bits of space. To the best of our knowledge, this is the first study of semi-streaming algorithms for the k-DM problem. We compare our two algorithms to state-of-the-art offline algorithms on 95 real-world and synthetic test problems, including thirteen graphs generated from data center network traces. On these instances, our streaming algorithms used significantly less memory (ranging from 6× to 512× less) and were faster in runtime than the offline algorithms. Our solutions were often within 5% of the best weights from the offline algorithms. We highlight that the existing offline algorithms run out of 1 TB memory for most of the large instances (> 1 billion edges), whereas our streaming algorithms can solve these problems using only 100 GB memory for k = 8.

Cite as

S M Ferdous, Bhargav Samineni, Alex Pothen, Mahantesh Halappanavar, and Bala Krishnamoorthy. Semi-Streaming Algorithms for Weighted k-Disjoint Matchings. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 53:1-53:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ferdous_et_al:LIPIcs.ESA.2024.53,
  author =	{Ferdous, S M and Samineni, Bhargav and Pothen, Alex and Halappanavar, Mahantesh and Krishnamoorthy, Bala},
  title =	{{Semi-Streaming Algorithms for Weighted k-Disjoint Matchings}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{53:1--53:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.53},
  URN =		{urn:nbn:de:0030-drops-211245},
  doi =		{10.4230/LIPIcs.ESA.2024.53},
  annote =	{Keywords: Matchings, Semi-Streaming Algorithms, Approximation Algorithms}
}
Document
Local Max-Cut on Sparse Graphs

Authors: Gregory Schwartzman

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We bound the smoothed running time of the FLIP algorithm for local Max-Cut as a function of α, the arboricity of the input graph. We show that, with high probability and in expectation, the following holds (where n is the number of nodes and ϕ is the smoothing parameter): 1) When α = O(log^{1-δ} n), FLIP terminates in ϕ ⋅ poly(n) iterations, where δ ∈ (0,1] is an arbitrarily small constant. Prior to our results, the only graph families for which FLIP was known to achieve a smoothed polynomial running time were complete graphs and graphs with logarithmic maximum degree. 2) For arbitrary values of α we get a running time of ϕ n^{O(α/(log n) + log α)}. This improves over the best known running time for general graphs of ϕ n^{O(√{log n})} for α = o(log^{1.5} n). Specifically, when α = O(log n) we get a significantly faster running time of ϕ n^{O(log log n)}.
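
A minimal sketch of the FLIP local search analyzed above, without any smoothing: repeatedly flip any vertex whose move to the other side strictly increases the cut weight, until no such vertex exists. The function name, tie-breaking order, and toy instance are illustrative choices.

```python
def flip_local_max_cut(n, weighted_edges):
    """FLIP local search for Max-Cut. weighted_edges: list of (u, v, w).
    Returns a locally optimal 0/1 side assignment: no single vertex flip
    can increase the cut weight. The number of flips is what smoothed
    analysis bounds."""
    side = [0] * n
    adj = [[] for _ in range(n)]
    for u, v, w in weighted_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Gain of flipping v = (weight to same side) - (weight to other side).
            gain = sum(w if side[u] == side[v] else -w for u, w in adj[v])
            if gain > 0:
                side[v] = 1 - side[v]
                improved = True
    return side

if __name__ == "__main__":
    edges = [(0, 1, 2.0), (1, 2, 1.5), (2, 3, 3.0), (3, 0, 1.0), (0, 2, 0.5)]
    print(flip_local_max_cut(4, edges))
```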

Cite as

Gregory Schwartzman. Local Max-Cut on Sparse Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 98:1-98:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{schwartzman:LIPIcs.ESA.2024.98,
  author =	{Schwartzman, Gregory},
  title =	{{Local Max-Cut on Sparse Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{98:1--98:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.98},
  URN =		{urn:nbn:de:0030-drops-211694},
  doi =		{10.4230/LIPIcs.ESA.2024.98},
  annote =	{Keywords: Algorithms, smoothed analysis}
}
Document
Parameterized Quantum Query Algorithms for Graph Problems

Authors: Tatsuya Terao and Ryuhei Mori

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In this paper, we consider the parameterized quantum query complexity for graph problems. We design parameterized quantum query algorithms for k-vertex cover and k-matching problems, and present lower bounds on the parameterized quantum query complexity. Then, we show that our quantum query algorithms are optimal up to a constant factor when the parameters are small. Our main results are as follows. Parameterized quantum query complexity of vertex cover. In the k-vertex cover problem, we are given an undirected graph G with n vertices and an integer k, and the objective is to determine whether G has a vertex cover of size at most k. We show that the quantum query complexity of the k-vertex cover problem is O(√kn + k^{3/2}√n) in the adjacency matrix model. For the design of the quantum query algorithm, we use the method of kernelization, a well-known tool for the design of parameterized classical algorithms, combined with Grover’s search. Parameterized quantum query complexity of matching. In the k-matching problem, we are given an undirected graph G with n vertices and an integer k, and the objective is to determine whether G has a matching of size at least k. We show that the quantum query complexity of the k-matching problem is O(√kn + k²) in the adjacency matrix model. We obtain this upper bound by using Grover’s search carefully and analyzing the number of Grover search invocations by means of potential functions. We also show that the quantum query complexity of the maximum matching problem is O(√pn + p²), where p is the size of the maximum matching. For small p, this improves upon the known bounds of Õ(n^{3/2}) for bipartite graphs [Blikstad-v.d.Brand-Efron-Mukhopadhyay-Nanongkai, FOCS 2022] and O(n^{7/4}) for general graphs [Kimmel-Witter, WADS 2021]. Lower bounds on parameterized quantum query complexity. We also present lower bounds on the quantum query complexities of the k-vertex cover and k-matching problems. The lower bounds prove the optimality of the above parameterized quantum query algorithms up to a constant factor when k is small. Indeed, the quantum query complexities of the k-vertex cover and k-matching problems are both Θ(√kn) when k = O(√n) and k = O(n^{2/3}), respectively.
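
For comparison, the classical (non-quantum) counterpart of the k-vertex cover routine is the standard O(2^k · m) bounded search tree: pick any uncovered edge and branch on which endpoint joins the cover. The sketch below shows that classical baseline only; it is not the quantum query algorithm, and the function name and toy instance are illustrative.

```python
def has_vertex_cover(edges, k):
    """Classical bounded-search-tree decision procedure for k-Vertex Cover.
    edges: list of (u, v) pairs. Returns True iff some set of at most k
    vertices touches every edge. Runs in O(2^k * m) time -- the classical
    counterpart of the parameterized bounds discussed above."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    # Either u or v must be in any vertex cover of the remaining graph.
    rest_u = [(a, b) for a, b in edges if a != u and b != u]
    rest_v = [(a, b) for a, b in edges if a != v and b != v]
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

if __name__ == "__main__":
    triangle_plus = [(0, 1), (1, 2), (2, 0), (2, 3)]
    print(has_vertex_cover(triangle_plus, 2))  # True, e.g. {1, 2}
    print(has_vertex_cover(triangle_plus, 1))  # False
```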

Cite as

Tatsuya Terao and Ryuhei Mori. Parameterized Quantum Query Algorithms for Graph Problems. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 99:1-99:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{terao_et_al:LIPIcs.ESA.2024.99,
  author =	{Terao, Tatsuya and Mori, Ryuhei},
  title =	{{Parameterized Quantum Query Algorithms for Graph Problems}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{99:1--99:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.99},
  URN =		{urn:nbn:de:0030-drops-211707},
  doi =		{10.4230/LIPIcs.ESA.2024.99},
  annote =	{Keywords: Quantum query complexity, parameterized algorithms, vertex cover, matching, kernelization}
}
Document
APPROX
Weighted Matching in the Random-Order Streaming and Robust Communication Models

Authors: Diba Hashemi and Weronika Wrzos-Kaminska

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study the maximum weight matching problem in the random-order semi-streaming model and in the robust communication model. Unlike many other sublinear models, in these two frameworks, there is a large gap between the guarantees of the best known algorithms for the unweighted and weighted versions of the problem. In the random-order semi-streaming setting, the edges of an n-vertex graph arrive in a stream in a random order. The goal is to compute an approximate maximum weight matching with a single pass over the stream using O(n polylog n) space. Our main result is a (2/3-ε)-approximation algorithm for maximum weight matching in random-order streams, using space O(n log n log R), where R is the ratio between the heaviest and the lightest edge in the graph. Our result nearly matches the best known unweighted (2/3+ε₀)-approximation (where ε₀ ∼ 10^{-14} is a small constant) achieved by Assadi and Behnezhad [Assadi and Behnezhad, 2021], and significantly improves upon previous weighted results. Our techniques also extend to the related robust communication model, in which the edges of a graph are partitioned randomly between Alice and Bob. Alice sends a single message of size O(n polylog n) to Bob, who must compute an approximate maximum weight matching. We achieve a (5/6-ε)-approximation using O(n log n log R) words of communication, matching the results of Azarmehr and Behnezhad [Azarmehr and Behnezhad, 2023] for unweighted graphs.

Cite as

Diba Hashemi and Weronika Wrzos-Kaminska. Weighted Matching in the Random-Order Streaming and Robust Communication Models. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 16:1-16:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hashemi_et_al:LIPIcs.APPROX/RANDOM.2024.16,
  author =	{Hashemi, Diba and Wrzos-Kaminska, Weronika},
  title =	{{Weighted Matching in the Random-Order Streaming and Robust Communication Models}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{16:1--16:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.16},
  URN =		{urn:nbn:de:0030-drops-210097},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.16},
  annote =	{Keywords: Maximum Weight Matching, Streaming, Random-Order Streaming, Robust Communication Complexity}
}
Document
RANDOM
Stochastic Distance in Property Testing

Authors: Uri Meir, Gregory Schwartzman, and Yuichi Yoshida

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We introduce a novel concept termed "stochastic distance" for property testing. Diverging from the traditional definition of distance, where a distance t implies that there exist t edges that can be added to ensure a graph possesses a certain property (such as k-edge-connectivity), our new notion implies that there is a high probability that adding t random edges will endow the graph with the desired property. While formulating testers based on this new distance proves challenging in a sequential environment, it is much easier in a distributed setting. Taking k-edge-connectivity as a case study, we design ultra-fast testing algorithms in the CONGEST model. Our introduction of stochastic distance offers a more natural fit for the distributed setting, providing a promising avenue for future research in emerging models of computation.
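
A toy Monte Carlo sketch of the stochastic-distance idea for the special case k = 1 (plain connectivity): estimate the probability that adding t uniformly random edges makes the graph connected. This only illustrates the definition, not the CONGEST testers from the paper; the function names and the two-triangle example are invented for illustration.

```python
import random

class DSU:
    """Union-find for tracking connected components."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def prob_connected_after_random_edges(n, edges, t, trials=2000, seed=0):
    """Estimate Pr[G + t uniformly random edges is connected] -- the k = 1
    analogue of having 'stochastic distance t' to k-edge-connectivity."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        dsu = DSU(n)
        for u, v in edges:
            dsu.union(u, v)
        for _ in range(t):
            dsu.union(rng.randrange(n), rng.randrange(n))
        if len({dsu.find(v) for v in range(n)}) == 1:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    # Two disjoint triangles: the classical distance to connectivity is one
    # edge, but a *random* edge joins them only with probability about 1/2.
    edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
    for t in (1, 2, 5):
        print(t, prob_connected_after_random_edges(6, edges, t))
```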

Cite as

Uri Meir, Gregory Schwartzman, and Yuichi Yoshida. Stochastic Distance in Property Testing. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 57:1-57:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{meir_et_al:LIPIcs.APPROX/RANDOM.2024.57,
  author =	{Meir, Uri and Schwartzman, Gregory and Yoshida, Yuichi},
  title =	{{Stochastic Distance in Property Testing}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{57:1--57:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.57},
  URN =		{urn:nbn:de:0030-drops-210506},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.57},
  annote =	{Keywords: Connectivity, k-edge connectivity}
}
Document
Streaming Matching and Edge Cover in Practice

Authors: S M Ferdous, Alex Pothen, and Mahantesh Halappanavar

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Graph algorithms with polynomial space and time requirements often become infeasible for massive graphs with billions of edges or more. State-of-the-art approaches therefore employ approximate serial, parallel, and distributed algorithms to tackle these challenges. However, such approaches require storing the entire graph in memory and thus need access to costly computing resources such as clusters and supercomputers. In this paper, we present practical streaming approaches for solving massive graph problems using limited memory, for two prototypical graph problems: maximum weighted matching and minimum weighted edge cover. For matching, we conduct a thorough computational study of two semi-streaming algorithms, including a recent breakthrough result that achieves a 1/(2+ε)-approximation of the weight while using O(n log W /ε) memory (here n is the number of vertices and W is the maximum edge weight), designed by Paz and Schwartzman [SODA, 2017]. Empirically, we show that the semi-streaming algorithms produce matchings whose weight is close to the best 1/2-approximate offline algorithm while requiring less time and an order of magnitude less memory. For minimum weighted edge cover, we develop three novel semi-streaming algorithms. Two of these algorithms require a single pass through the input graph, require O(n log n) memory, and provide a 2-approximation guarantee on the objective. We also leverage a relationship between approximate maximum weighted matching and approximate minimum weighted edge cover to develop a two-pass (3/2+ε)-approximate algorithm with the memory requirement of Paz and Schwartzman’s semi-streaming matching algorithm. These streaming approaches are compared against the state-of-the-art 3/2-approximate offline algorithm. The semi-streaming matching and the novel edge cover algorithms proposed in this paper can process graphs with several billions of edges in under 30 minutes using 6 GB of memory, which is at least an order of magnitude improvement over the offline (non-streaming) algorithms. For the largest graph, the best alternative offline parallel approximation algorithm (GPA+ROMA) could not finish in three hours even while employing hundreds of processors and 1 TB of memory. We also demonstrate an application of the semi-streaming algorithms by computing a matching using linearly bounded memory on intersection graphs derived from three machine learning datasets, while the existing offline algorithms could not complete on one of these datasets since its memory requirement exceeded 1 TB.
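
Below is a simplified single-pass sketch of the local-ratio idea behind the Paz–Schwartzman matching algorithm referenced above. The (1+ε) gain threshold that bounds the stack, and hence the O(n log W /ε) memory guarantee, is deliberately omitted, so this illustrates only the potentials-plus-stack mechanism; the function name and the toy stream are illustrative.

```python
def streaming_weighted_matching(n, edge_stream):
    """One-pass local-ratio matching in the spirit of Paz-Schwartzman
    [SODA 2017]. Simplified sketch: the (1+eps) gain threshold that bounds
    the stack (and hence the memory) is omitted, so this only shows the
    potentials + stack mechanism, not the semi-streaming space bound.
    edge_stream yields (u, v, w) triples; returns a matching as a set."""
    phi = [0.0] * n     # vertex potentials
    stack = []
    for u, v, w in edge_stream:
        gain = w - phi[u] - phi[v]
        if gain > 0:
            stack.append((u, v))
            phi[u] += gain
            phi[v] += gain
    # Unwind the stack: edges stacked later get priority.
    matched = [False] * n
    matching = set()
    while stack:
        u, v = stack.pop()
        if not matched[u] and not matched[v]:
            matching.add((u, v))
            matched[u] = matched[v] = True
    return matching

if __name__ == "__main__":
    stream = [(0, 1, 5.0), (1, 2, 6.0), (2, 3, 5.0), (0, 3, 1.0)]
    print(streaming_weighted_matching(4, stream))  # {(2, 3), (0, 1)}
```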

Cite as

S M Ferdous, Alex Pothen, and Mahantesh Halappanavar. Streaming Matching and Edge Cover in Practice. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ferdous_et_al:LIPIcs.SEA.2024.12,
  author =	{Ferdous, S M and Pothen, Alex and Halappanavar, Mahantesh},
  title =	{{Streaming Matching and Edge Cover in Practice}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.12},
  URN =		{urn:nbn:de:0030-drops-203773},
  doi =		{10.4230/LIPIcs.SEA.2024.12},
  annote =	{Keywords: Matching, Edge Cover, Semi-Streaming Algorithm, Parallel Algorithms, Algorithm Engineering}
}
Document
Smoothed Analysis of Population Protocols

Authors: Gregory Schwartzman and Yuichi Sudo

Published in: LIPIcs, Volume 209, 35th International Symposium on Distributed Computing (DISC 2021)


Abstract
In this work, we initiate the study of smoothed analysis of population protocols. We consider a population protocol model where an adaptive adversary dictates the interactions between agents, but with probability p every such interaction may change into an interaction between two agents chosen uniformly at random. That is, p-fraction of the interactions are random, while (1-p)-fraction are adversarial. The aim of our model is to bridge the gap between a uniformly random scheduler (which is too idealistic) and an adversarial scheduler (which is too strict). We focus on the fundamental problem of leader election in population protocols. We show that, for a population of size n, the leader election problem can be solved in O(p^{-2}n log³ n) steps with high probability, using O((log² n) ⋅ (log (n/p))) states per agent, for all values of p ≤ 1. Although our result does not match the best known running time of O(n log n) for the uniformly random scheduler (p = 1), we are able to present a smooth transition between a running time of O(n polylog n) for p = 1 and an infinite running time for the adversarial scheduler (p = 0), where the problem cannot be solved. The key technical contribution of our work is a novel phase clock algorithm for our model. This is a key primitive for much-studied fundamental population protocol algorithms (leader election, majority), and we believe it is of independent interest.
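
A toy simulation of the smoothed scheduler: with probability p the interacting pair is uniformly random, and otherwise a hypothetical adversary picks the pair (here, one that simply refuses to let two leaders meet). It is run on the classical pairwise-elimination leader election protocol, not on the paper's phase-clock-based algorithm, and all names and parameters are illustrative.

```python
import random

def leader_election_steps(n, p, max_steps=10**7, seed=0):
    """Pairwise-elimination leader election (every agent starts as a leader;
    when two leaders meet, one survives) under a smoothed scheduler: with
    probability p the interacting pair is uniformly random, otherwise a
    hypothetical adversary pairs a leader with a follower so that no
    elimination happens. Returns the number of interactions until a single
    leader remains, or None if it did not converge."""
    rng = random.Random(seed)
    leader = [True] * n
    num_leaders = n
    for step in range(1, max_steps + 1):
        if rng.random() < p:
            a, b = rng.sample(range(n), 2)          # random interaction
        else:
            leaders = [i for i in range(n) if leader[i]]
            followers = [i for i in range(n) if not leader[i]]
            if followers:                           # adversary avoids leader pairs
                a, b = rng.choice(leaders), rng.choice(followers)
            else:
                a, b = rng.sample(range(n), 2)
        if leader[a] and leader[b]:
            leader[b] = False
            num_leaders -= 1
            if num_leaders == 1:
                return step
    return None

if __name__ == "__main__":
    for p in (1.0, 0.5, 0.1):
        print(p, leader_election_steps(200, p))
```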

Cite as

Gregory Schwartzman and Yuichi Sudo. Smoothed Analysis of Population Protocols. In 35th International Symposium on Distributed Computing (DISC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 209, pp. 34:1-34:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{schwartzman_et_al:LIPIcs.DISC.2021.34,
  author =	{Schwartzman, Gregory and Sudo, Yuichi},
  title =	{{Smoothed Analysis of Population Protocols}},
  booktitle =	{35th International Symposium on Distributed Computing (DISC 2021)},
  pages =	{34:1--34:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-210-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{209},
  editor =	{Gilbert, Seth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2021.34},
  URN =		{urn:nbn:de:0030-drops-148362},
  doi =		{10.4230/LIPIcs.DISC.2021.34},
  annote =	{Keywords: Population protocols, Smoothed analysis, Leader election}
}
Document
Fast Deterministic Algorithms for Highly-Dynamic Networks

Authors: Keren Censor-Hillel, Neta Dafni, Victor I. Kolobov, Ami Paz, and Gregory Schwartzman

Published in: LIPIcs, Volume 184, 24th International Conference on Principles of Distributed Systems (OPODIS 2020)


Abstract
This paper provides an algorithmic framework for obtaining fast distributed algorithms for a highly-dynamic setting, in which arbitrarily many edge changes may occur in each round. Our algorithm significantly improves upon prior work in its combination of (1) having an O(1) amortized time complexity, (2) using only O(log{n})-bit messages, (3) not posing any restrictions on the dynamic behavior of the environment, (4) being deterministic, (5) having strong guarantees for intermediate solutions, and (6) being applicable for a wide family of tasks. The tasks for which we deduce such an algorithm are maximal matching, (degree+1)-coloring, 2-approximation for minimum weight vertex cover, and maximal independent set (which is the most subtle case). For some of these tasks, node insertions can also be among the allowed topology changes, and for some of them also abrupt node deletions.

Cite as

Keren Censor-Hillel, Neta Dafni, Victor I. Kolobov, Ami Paz, and Gregory Schwartzman. Fast Deterministic Algorithms for Highly-Dynamic Networks. In 24th International Conference on Principles of Distributed Systems (OPODIS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 184, pp. 28:1-28:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{censorhillel_et_al:LIPIcs.OPODIS.2020.28,
  author =	{Censor-Hillel, Keren and Dafni, Neta and Kolobov, Victor I. and Paz, Ami and Schwartzman, Gregory},
  title =	{{Fast Deterministic Algorithms for Highly-Dynamic Networks}},
  booktitle =	{24th International Conference on Principles of Distributed Systems (OPODIS 2020)},
  pages =	{28:1--28:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-176-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{184},
  editor =	{Bramas, Quentin and Oshman, Rotem and Romano, Paolo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2020.28},
  URN =		{urn:nbn:de:0030-drops-135138},
  doi =		{10.4230/LIPIcs.OPODIS.2020.28},
  annote =	{Keywords: dynamic distributed algorithms}
}
Document
Improved Distributed Approximations for Maximum Independent Set

Authors: Ken-ichi Kawarabayashi, Seri Khoury, Aaron Schild, and Gregory Schwartzman

Published in: LIPIcs, Volume 179, 34th International Symposium on Distributed Computing (DISC 2020)


Abstract
We present improved results for approximating maximum-weight independent set (MaxIS) in the CONGEST and LOCAL models of distributed computing. Given an input graph, let n and Δ be the number of nodes and maximum degree, respectively, and let MIS(n,Δ) be the running time of finding a maximal independent set (MIS) in the CONGEST model. Bar-Yehuda et al. [PODC 2017] showed that there is an algorithm in the CONGEST model that finds a Δ-approximation for MaxIS in O(MIS(n,Δ) log W) rounds, where W is the maximum weight of a node in the graph, which can be as large as poly(n). Whether their algorithm is deterministic or randomized (succeeding with high probability) depends on the MIS algorithm used as a black box. Our results: 1) A deterministic O(MIS(n,Δ)/ε)-round algorithm that finds a (1+ε)Δ-approximation for MaxIS in the CONGEST model. 2) A randomized (poly(log log n)/ε)-round algorithm that finds, with high probability, a (1+ε)Δ-approximation for MaxIS in the CONGEST model. That is, by sacrificing only a tiny fraction of the approximation guarantee, we achieve an exponential speed-up in the running time over the previous best known result. 3) A randomized O(log n ⋅ poly(log log n)/ε)-round algorithm that finds, with high probability, an 8(1+ε)α-approximation for MaxIS in the CONGEST model, where α is the arboricity of the graph. For graphs of arboricity α < Δ/(8(1+ε)), this result improves upon the previous best known result in both the approximation factor and the running time. One may wonder whether it is possible to approximate MaxIS with high probability in fewer than poly(log log n) rounds. Interestingly, a folklore randomized ranking algorithm by Boppana implies a single-round algorithm that gives an expected Δ-approximation in the CONGEST model. However, it is unclear how to convert this algorithm to one that succeeds with high probability without sacrificing a large number of rounds. For unweighted graphs of maximum degree Δ ≤ n/log n, we show a new analysis of the randomized ranking algorithm, which we combine with the local-ratio technique, to provide an O(1/ε)-round algorithm in the CONGEST model that, with high probability, finds an independent set of size at least n/((1+ε)(Δ+1)). This result cannot be extended to very high degree graphs, as we show a lower bound of Ω(log^* n) rounds for any randomized algorithm that with probability at least 1-1/log n finds an independent set of size Ω(n/Δ). This lower bound holds even for the LOCAL model. The hard instances that we use to prove our lower bound are graphs of maximum degree Δ = Ω(n/log^* n).
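
A minimal sketch of the folklore randomized ranking algorithm by Boppana mentioned above: every vertex draws a uniform random rank and joins the independent set iff its rank beats all of its neighbors', so the expected size is Σ_v 1/(deg(v)+1) ≥ n/(Δ+1). This sequential toy version stands in for what is a single CONGEST round; the function name and the 5-cycle example are illustrative.

```python
import random

def ranking_independent_set(n, edges, seed=0):
    """Folklore one-round ranking algorithm: each vertex picks a uniform
    random rank and enters the independent set iff its rank is strictly
    larger than every neighbor's. The expected size is sum_v 1/(deg(v)+1),
    which is at least n/(Delta+1)."""
    rng = random.Random(seed)
    rank = [rng.random() for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return {v for v in range(n)
            if all(rank[v] > rank[u] for u in adj[v])}

if __name__ == "__main__":
    # 5-cycle: Delta = 2, so the expected size is 5/3.
    cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(ranking_independent_set(5, cycle))
```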

Cite as

Ken-ichi Kawarabayashi, Seri Khoury, Aaron Schild, and Gregory Schwartzman. Improved Distributed Approximations for Maximum Independent Set. In 34th International Symposium on Distributed Computing (DISC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 179, pp. 35:1-35:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kawarabayashi_et_al:LIPIcs.DISC.2020.35,
  author =	{Kawarabayashi, Ken-ichi and Khoury, Seri and Schild, Aaron and Schwartzman, Gregory},
  title =	{{Improved Distributed Approximations for Maximum Independent Set}},
  booktitle =	{34th International Symposium on Distributed Computing (DISC 2020)},
  pages =	{35:1--35:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-168-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{179},
  editor =	{Attiya, Hagit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2020.35},
  URN =		{urn:nbn:de:0030-drops-131135},
  doi =		{10.4230/LIPIcs.DISC.2020.35},
  annote =	{Keywords: Distributed graph algorithms, Approximation algorithms, Lower bounds}
}
Document
Models of Smoothing in Dynamic Networks

Authors: Uri Meir, Ami Paz, and Gregory Schwartzman

Published in: LIPIcs, Volume 179, 34th International Symposium on Distributed Computing (DISC 2020)


Abstract
Smoothed analysis is a framework suggested for mediating gaps between worst-case and average-case complexities. In a recent work, Dinitz et al. [Distributed Computing, 2018] suggested using smoothed analysis to study dynamic networks. Their aim was to explain the gap between real-world networks, which function well despite being dynamic, and the strong theoretical lower bounds for arbitrary networks. To this end, they introduced a basic model of smoothing in dynamic networks, where an adversary picks a sequence of graphs, representing the topology of the network over time, and then each of these graphs is slightly perturbed in a random manner. The model above is based on per-round noise, and our aim in this work is to extend it to models of noise better suited for multiple rounds. This is motivated by long-lived networks, where the amount and location of noise may vary over time. To this end, we present several different models of noise. First, we extend the previous model to cases where the amount of noise is very small. Then, we move to more refined models, where the amount of noise can change between different rounds, e.g., as a function of the number of changes the network undergoes. We also study a model where the noise is not spread arbitrarily over the network, but is focused in each round on the areas where changes have occurred. Finally, we study the power of an adaptive adversary, who can choose its actions in accordance with the changes that have occurred so far. We use the flooding problem as a running case study, presenting very different behaviors under the different models of noise, and analyze the flooding time in each of them.
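
As a toy illustration of per-round smoothing, the sketch below floods a message over a dynamic network in which a hypothetical adversary presents a fixed path every round and k uniformly random extra edges are added as noise. This is not one of the paper's noise models verbatim; the base graph, function name, and parameters are assumptions for illustration.

```python
import random

def smoothed_flooding_rounds(n, k, seed=0):
    """Flooding time in a toy smoothed dynamic network: every round the
    (hypothetical) adversary presents a fixed path 0-1-...-(n-1), then k
    uniformly random edges are added as per-round noise. Information spreads
    one hop per round over the resulting graph."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n:
        rounds += 1
        edges = [(i, i + 1) for i in range(n - 1)]                    # adversarial base
        edges += [tuple(rng.sample(range(n), 2)) for _ in range(k)]   # random noise
        frontier = set()
        for u, v in edges:
            if u in informed and v not in informed:
                frontier.add(v)
            if v in informed and u not in informed:
                frontier.add(u)
        informed |= frontier
    return rounds

if __name__ == "__main__":
    n = 1000
    for k in (0, 1, 10, 100):
        print(k, smoothed_flooding_rounds(n, k))   # k = 0 gives n - 1 rounds
```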

Cite as

Uri Meir, Ami Paz, and Gregory Schwartzman. Models of Smoothing in Dynamic Networks. In 34th International Symposium on Distributed Computing (DISC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 179, pp. 36:1-36:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{meir_et_al:LIPIcs.DISC.2020.36,
  author =	{Meir, Uri and Paz, Ami and Schwartzman, Gregory},
  title =	{{Models of Smoothing in Dynamic Networks}},
  booktitle =	{34th International Symposium on Distributed Computing (DISC 2020)},
  pages =	{36:1--36:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-168-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{179},
  editor =	{Attiya, Hagit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2020.36},
  URN =		{urn:nbn:de:0030-drops-131145},
  doi =		{10.4230/LIPIcs.DISC.2020.36},
  annote =	{Keywords: Distributed dynamic graph algorithms, Smoothed analysis, Flooding}
}
Document
Optimal Distributed Covering Algorithms

Authors: Ran Ben-Basat, Guy Even, Ken-ichi Kawarabayashi, and Gregory Schwartzman

Published in: LIPIcs, Volume 146, 33rd International Symposium on Distributed Computing (DISC 2019)


Abstract
We present a time-optimal deterministic distributed algorithm for approximating a minimum weight vertex cover in hypergraphs of rank f. This problem is equivalent to the Minimum Weight Set Cover problem in which the frequency of every element is bounded by f. The approximation factor of our algorithm is (f+ε). Let Δ denote the maximum degree in the hypergraph. Our algorithm runs in the CONGEST model and requires O(log Δ / log log Δ) rounds, for constants ε ∈ (0,1] and f ∈ ℕ⁺. This is the first distributed algorithm for this problem whose running time depends neither on the vertex weights nor on the number of vertices, which adds another member to the exclusive family of provably optimal distributed algorithms. For constant values of f and ε, our algorithm improves over the (f+ε)-approximation algorithm of [Fabian Kuhn et al., 2006], whose running time is O(log Δ + log W), where W is the ratio between the largest and smallest vertex weights in the graph. Our algorithm also achieves an f-approximation for the problem in O(f log n) rounds, improving over the classical result of [Samir Khuller et al., 1994] that achieves a running time of O(f log² n). Finally, for weighted vertex cover (f = 2) our algorithm achieves a deterministic running time of O(log n), matching the previously best randomized result of [Koufogiannakis and Young, 2011]. We also show that integer covering programs can be reduced to the Minimum Weight Set Cover problem in the distributed setting. This allows us to achieve an (f+ε)-approximate integral solution in O((1 + f/log n) ⋅ ((log Δ)/(log log Δ) + (f ⋅ log M)^{1.01} ⋅ log ε^{-1} ⋅ (log Δ)^{0.01})) rounds, where f bounds the number of variables in a constraint, Δ bounds the number of constraints a variable appears in, and M = max{1, ⌈1/a_{min}⌉}, where a_{min} is the smallest normalized constraint coefficient. This improves over the results of [Fabian Kuhn et al., 2006] for the integral case, which combined with rounding achieves the same guarantees in O(ε^{-4} ⋅ f^4 ⋅ log f ⋅ log(M ⋅ Δ)) rounds.
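
As a sequential reference point for the (f+ε)-approximation above, here is a minimal sketch of the classical primal-dual f-approximation for weighted Set Cover with element frequency at most f: raise each uncovered element's dual until some containing set becomes tight, and take the tight sets. This is the textbook baseline, not the distributed algorithm; the function name and toy instance are illustrative.

```python
def primal_dual_set_cover(universe, sets, weight):
    """Classical primal-dual f-approximation for weighted Set Cover, where f
    is the maximum number of sets any element appears in. Sequential
    reference point for the distributed (f+eps)-approximations above.
    sets: dict name -> frozenset of elements; weight: dict name -> float."""
    residual = dict(weight)       # remaining (slack) weight of each set
    cover = set()
    covered = set()
    for e in universe:
        if e in covered:
            continue
        containing = [s for s in sets if e in sets[s]]
        if not containing:
            raise ValueError("element %r cannot be covered" % (e,))
        # Raise e's dual variable until the cheapest containing set is tight.
        raise_by = min(residual[s] for s in containing)
        for s in containing:
            residual[s] -= raise_by
            if residual[s] == 0 and s not in cover:
                cover.add(s)
                covered |= sets[s]
    return cover

if __name__ == "__main__":
    S = {"A": frozenset({1, 2}), "B": frozenset({2, 3}), "C": frozenset({3, 4})}
    w = {"A": 1.0, "B": 3.0, "C": 1.0}
    print(primal_dual_set_cover([1, 2, 3, 4], S, w))  # {'A', 'C'}
```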

Cite as

Ran Ben-Basat, Guy Even, Ken-ichi Kawarabayashi, and Gregory Schwartzman. Optimal Distributed Covering Algorithms. In 33rd International Symposium on Distributed Computing (DISC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 146, pp. 5:1-5:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{benbasat_et_al:LIPIcs.DISC.2019.5,
  author =	{Ben-Basat, Ran and Even, Guy and Kawarabayashi, Ken-ichi and Schwartzman, Gregory},
  title =	{{Optimal Distributed Covering Algorithms}},
  booktitle =	{33rd International Symposium on Distributed Computing (DISC 2019)},
  pages =	{5:1--5:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-126-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{146},
  editor =	{Suomela, Jukka},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2019.5},
  URN =		{urn:nbn:de:0030-drops-113129},
  doi =		{10.4230/LIPIcs.DISC.2019.5},
  annote =	{Keywords: Distributed Algorithms, Approximation Algorithms, Vertex Cover, Set Cover}
}
  • Refine by Author
  • 10 Schwartzman, Gregory
  • 4 Kawarabayashi, Ken-ichi
  • 2 Ben-Basat, Ran
  • 2 Censor-Hillel, Keren
  • 2 Ferdous, S M
  • Show More...

  • Refine by Classification
  • 5 Theory of computation → Distributed algorithms
  • 4 Theory of computation → Streaming, sublinear and near linear time algorithms
  • 3 Theory of computation → Distributed computing models
  • 3 Theory of computation → Graph algorithms analysis
  • 2 Theory of computation → Dynamic graph algorithms
  • Show More...

  • Refine by Keyword
  • 4 Approximation Algorithms
  • 2 Distributed Algorithms
  • 2 Smoothed analysis
  • 2 derandomization
  • 2 massively parallel computation
  • Show More...

  • Refine by Type
  • 18 document

  • Refine by Publication Year
  • 10 2024
  • 2 2019
  • 2 2020
  • 2 2021
  • 1 2017
  • Show More...
