Search Results

Documents authored by Bernstein, Aaron


Document
Parallel, Distributed, and Quantum Exact Single-Source Shortest Paths with Negative Edge Weights

Authors: Vikrant Ashvinkumar, Aaron Bernstein, Nairen Cao, Christoph Grunau, Bernhard Haeupler, Yonggang Jiang, Danupon Nanongkai, and Hsin-Hao Su

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
This paper presents parallel, distributed, and quantum algorithms for single-source shortest paths when edges can have negative integer weights (negative-weight SSSP). We show a framework that reduces negative-weight SSSP in all these settings to n^{o(1)} calls to any SSSP algorithm that works on inputs with non-negative integer edge weights (non-negative-weight SSSP) with a virtual source. More specifically, for a directed graph with m edges, n vertices, undirected hop-diameter D, and polynomially bounded integer edge weights, we show randomized algorithms for negative-weight SSSP with
- W_{SSSP}(m,n)n^{o(1)} work and S_{SSSP}(m,n)n^{o(1)} span, given access to a non-negative-weight SSSP algorithm with W_{SSSP}(m,n) work and S_{SSSP}(m,n) span in the parallel model, and
- T_{SSSP}(n,D)n^{o(1)} rounds, given access to a non-negative-weight SSSP algorithm that takes T_{SSSP}(n,D) rounds in CONGEST, and
- Q_{SSSP}(m,n)n^{o(1)} quantum edge queries, given access to a non-negative-weight SSSP algorithm that takes Q_{SSSP}(m,n) queries in the quantum edge query model.
This work builds on the recent result of Bernstein, Nanongkai, and Wulff-Nilsen [Bernstein et al., 2022], which gives a near-linear time algorithm for negative-weight SSSP in the sequential setting. Using current state-of-the-art non-negative-weight SSSP algorithms yields randomized algorithms for negative-weight SSSP with
- m^{1+o(1)} work and n^{1/2+o(1)} span in the parallel model, and
- (n^{2/5}D^{2/5} + √n + D)n^{o(1)} rounds in CONGEST, and
- m^{1/2}n^{1/2+o(1)} quantum queries to the adjacency list or n^{1.5+o(1)} quantum queries to the adjacency matrix.
Up to an n^{o(1)} factor, the parallel and distributed results match the current best upper bounds for reachability [Jambulapati et al., 2019; Cao et al., 2021]. Consequently, any improvement to negative-weight SSSP in these models beyond the n^{o(1)} factor necessitates an improvement to the current best bounds for reachability. The quantum result matches the lower bound up to an n^{o(1)} factor [Aija Berzina et al., 2004]. Our main technical contribution is an efficient reduction from computing a low-diameter decomposition (LDD) of directed graphs to computations of non-negative-weight SSSP with a virtual source. Efficiently computing an LDD has heretofore only been known for undirected graphs in both the parallel and distributed models, and it has been rather unstudied in quantum models. The directed LDD is a crucial step of the sequential algorithm in [Bernstein et al., 2022], and we think that its applications to other problems in parallel and distributed models are far from being exhausted. Other ingredients of our results include altering the recursion structure of the scaling algorithm in [Bernstein et al., 2022] to surmount difficulties that arise in these models, and an efficient reduction from computing strongly connected components to computations of SSSP with a virtual source in CONGEST. The latter result answers a question posed in [Bernstein and Nanongkai, 2019] in the negative.
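
To make the "reduce to non-negative-weight SSSP with a virtual source" idea concrete, here is a minimal Python sketch of the classical potential-based reduction (as in Johnson's algorithm): one Bellman-Ford computation from a virtual source produces potentials that make every edge weight non-negative, after which a single Dijkstra call answers the query. This is only the textbook one-call baseline, not the paper's n^{o(1)}-call framework; the function names and edge-list encoding are illustrative.

```python
import heapq

def bellman_ford_potentials(n, edges, src):
    """Compute potentials phi with phi[v] <= phi[u] + w for every edge (u, v, w).
    Returns None if a negative cycle is reachable from src."""
    phi = [float("inf")] * n
    phi[src] = 0
    for _ in range(n):
        changed = False
        for u, v, w in edges:
            if phi[u] + w < phi[v]:
                phi[v] = phi[u] + w
                changed = True
        if not changed:
            return phi
    return None  # still relaxing after n rounds: negative cycle

def dijkstra(n, adj, src):
    """Standard Dijkstra for non-negative edge weights."""
    dist = [float("inf")] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def negative_sssp_via_reweighting(n, edges, src):
    """Negative-weight SSSP via one call to a non-negative SSSP routine.

    A 'virtual source' with zero-weight edges to every vertex makes all
    vertices reachable, so the potentials are finite everywhere.
    """
    virtual = n
    aug = edges + [(virtual, v, 0) for v in range(n)]
    phi = bellman_ford_potentials(n + 1, aug, virtual)
    if phi is None:
        raise ValueError("negative cycle detected")
    # Reweight: w'(u, v) = w(u, v) + phi[u] - phi[v] >= 0.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + phi[u] - phi[v]))
    dist = dijkstra(n, adj, src)
    # Undo the reweighting to recover the true distances.
    return [d - phi[src] + phi[v] if d < float("inf") else d
            for v, d in enumerate(dist)]

# Example: edge list (u, v, weight) with a negative edge but no negative cycle.
edges = [(0, 1, 4), (0, 2, 2), (2, 1, -3), (1, 3, 1)]
print(negative_sssp_via_reweighting(4, edges, 0))  # [0, -1, 2, 0]
```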

Cite as

Vikrant Ashvinkumar, Aaron Bernstein, Nairen Cao, Christoph Grunau, Bernhard Haeupler, Yonggang Jiang, Danupon Nanongkai, and Hsin-Hao Su. Parallel, Distributed, and Quantum Exact Single-Source Shortest Paths with Negative Edge Weights. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 13:1-13:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ashvinkumar_et_al:LIPIcs.ESA.2024.13,
  author =	{Ashvinkumar, Vikrant and Bernstein, Aaron and Cao, Nairen and Grunau, Christoph and Haeupler, Bernhard and Jiang, Yonggang and Nanongkai, Danupon and Su, Hsin-Hao},
  title =	{{Parallel, Distributed, and Quantum Exact Single-Source Shortest Paths with Negative Edge Weights}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{13:1--13:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.13},
  URN =		{urn:nbn:de:0030-drops-210849},
  doi =		{10.4230/LIPIcs.ESA.2024.13},
  annote =	{Keywords: Parallel algorithm, distributed algorithm, shortest paths}
}
Document
Are There Graphs Whose Shortest Path Structure Requires Large Edge Weights?

Authors: Aaron Bernstein, Greg Bodwin, and Nicole Wein

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
The aspect ratio of a (positively) weighted graph G is the ratio of its maximum edge weight to its minimum edge weight. Aspect ratio commonly arises as a complexity measure in graph algorithms, especially related to the computation of shortest paths. Popular paradigms are to interpolate between the settings of weighted and unweighted input graphs by incurring a dependence on aspect ratio, or by simply restricting attention to input graphs of low aspect ratio. This paper studies the effects of these paradigms, investigating whether graphs of low aspect ratio have more structured shortest paths than graphs in general. In particular, we raise the question of whether one can generally take a graph of large aspect ratio and reweight its edges, to obtain a graph with bounded aspect ratio while preserving the structure of its shortest paths. Our findings are:
- Every weighted DAG on n nodes has a shortest-paths preserving graph of aspect ratio O(n). A simple lower bound shows that this is tight.
- The previous result does not extend to general directed or undirected graphs; in fact, the answer turns out to be exponential in these settings. In particular, we construct directed and undirected n-node graphs for which any shortest-paths preserving graph has aspect ratio 2^{Ω(n)}.
We also consider the approximate version of this problem, where the goal is for shortest paths in H to correspond to approximate shortest paths in G. We show that our exponential lower bounds extend even to this setting. We also show that in a closely related model, where approximate shortest paths in H must also correspond to approximate shortest paths in G, even DAGs require exponential aspect ratio.
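
As a small illustration of the objects involved, the sketch below computes the aspect ratio of a weighted graph and brute-forces whether a reweighting preserves the shortest-path structure, here read as "the same set of shortest paths between every ordered pair". That reading of "shortest-paths preserving" and the toy DAG are our own, and the exponential-time check is meant only for tiny examples.

```python
from itertools import permutations

def aspect_ratio(weights):
    """Ratio of the largest to the smallest (positive) edge weight."""
    vals = list(weights.values())
    return max(vals) / min(vals)

def all_simple_paths(adj, u, v, visited):
    """Yield every simple u -> v path (exponential; fine for tiny graphs)."""
    if u == v:
        yield [v]
        return
    for w in adj[u]:
        if w not in visited:
            for tail in all_simple_paths(adj, w, v, visited | {w}):
                yield [u] + tail

def shortest_path_sets(n, weights):
    """For every ordered pair (u, v), the set of minimum-weight simple paths."""
    adj = {u: set() for u in range(n)}
    for (a, b) in weights:
        adj[a].add(b)
    result = {}
    for u, v in permutations(range(n), 2):
        paths = list(all_simple_paths(adj, u, v, {u}))
        if not paths:
            continue
        cost = lambda p: sum(weights[(p[i], p[i + 1])] for i in range(len(p) - 1))
        best = min(cost(p) for p in paths)
        result[(u, v)] = {tuple(p) for p in paths if cost(p) == best}
    return result

def preserves_shortest_paths(n, w_G, w_H):
    """True if G and H (same edges, different weights) have identical
    sets of shortest paths for every ordered pair of vertices."""
    return shortest_path_sets(n, w_G) == shortest_path_sets(n, w_H)

# A DAG whose aspect ratio drops from 1000 to 3 under a reweighting that
# keeps the same (unique) shortest paths.
w_G = {(0, 1): 1, (1, 2): 1, (0, 2): 1000}
w_H = {(0, 1): 1, (1, 2): 1, (0, 2): 3}
print(aspect_ratio(w_G), aspect_ratio(w_H))   # 1000.0 3.0
print(preserves_shortest_paths(3, w_G, w_H))  # True
```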

Cite as

Aaron Bernstein, Greg Bodwin, and Nicole Wein. Are There Graphs Whose Shortest Path Structure Requires Large Edge Weights?. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.ITCS.2024.12,
  author =	{Bernstein, Aaron and Bodwin, Greg and Wein, Nicole},
  title =	{{Are There Graphs Whose Shortest Path Structure Requires Large Edge Weights?}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.12},
  URN =		{urn:nbn:de:0030-drops-195405},
  doi =		{10.4230/LIPIcs.ITCS.2024.12},
  annote =	{Keywords: shortest paths, graph theory, weighted graphs}
}
Document
Dynamic Graph Algorithms (Dagstuhl Seminar 22461)

Authors: Aaron Bernstein, Shiri Chechik, Sebastian Forster, Tsvi Kopelowitz, Yasamin Nazari, and Nicole Wein

Published in: Dagstuhl Reports, Volume 12, Issue 11 (2023)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 22461 “Dynamic Graph Algorithms”, which took place from November 13 to November 18, 2022. The field of dynamic graph algorithms studies algorithms for processing graphs that are changing over time. Formally, the goal is to process an interleaved sequence of update and query operations, where an update operation changes the input graph (e.g. inserts/deletes an edge), while the query operation is problem-specific and asks for some information about the current graph – for example, an s-t path, or a minimum spanning tree. The field has evolved rapidly over the past decade, and this Dagstuhl Seminar brought together leading researchers in dynamic algorithms and related areas of graph algorithms.

Cite as

Aaron Bernstein, Shiri Chechik, Sebastian Forster, Tsvi Kopelowitz, Yasamin Nazari, and Nicole Wein. Dynamic Graph Algorithms (Dagstuhl Seminar 22461). In Dagstuhl Reports, Volume 12, Issue 11, pp. 45-65, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{bernstein_et_al:DagRep.12.11.45,
  author =	{Bernstein, Aaron and Chechik, Shiri and Forster, Sebastian and Kopelowitz, Tsvi and Nazari, Yasamin and Wein, Nicole},
  title =	{{Dynamic Graph Algorithms (Dagstuhl Seminar 22461)}},
  pages =	{45--65},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{12},
  number =	{11},
  editor =	{Bernstein, Aaron and Chechik, Shiri and Forster, Sebastian and Kopelowitz, Tsvi and Nazari, Yasamin and Wein, Nicole},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.12.11.45},
  URN =		{urn:nbn:de:0030-drops-178354},
  doi =		{10.4230/DagRep.12.11.45},
  annote =	{Keywords: dynamic graphs, graph algorithms}
}
Document
All-Norm Load Balancing in Graph Streams via the Multiplicative Weights Update Method

Authors: Sepehr Assadi, Aaron Bernstein, and Zachary Langley

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
In the weighted load balancing problem, the input is an n-vertex bipartite graph between a set of clients and a set of servers, and each client comes with some nonnegative real weight. The output is an assignment that maps each client to one of its adjacent servers, and the load of a server is then the sum of the weights of the clients assigned to it. The goal is to find an assignment that is well-balanced, typically captured by (approximately) minimizing either the 𝓁_∞- or 𝓁₂-norm of the server loads. Generalizing both of these objectives, the all-norm load balancing problem asks for an assignment that approximately minimizes all 𝓁_p-norm objectives for p ≥ 1, including p = ∞, simultaneously. Our main result is a deterministic O(log n)-pass O(1)-approximation semi-streaming algorithm for the all-norm load balancing problem. Prior to our work, only an O(log n)-pass O(log n)-approximation algorithm for the 𝓁_∞-norm objective was known in the semi-streaming setting. Our algorithm uses a novel application of the multiplicative weights update method to a mixed covering/packing convex program for the all-norm load balancing problem involving an infinite number of constraints.
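
For orientation, the sketch below computes the 𝓁_p-norms of the server loads for an assignment and runs a naive greedy that sends each client to its currently least-loaded adjacent server. This greedy is only an illustrative baseline for the problem statement, not the paper's multiplicative-weights semi-streaming algorithm; the instance and function names are made up.

```python
def lp_norms(loads, ps=(1, 2, float("inf"))):
    """l_p norms of the vector of server loads."""
    out = {}
    for p in ps:
        if p == float("inf"):
            out[p] = max(loads.values(), default=0.0)
        else:
            out[p] = sum(l ** p for l in loads.values()) ** (1.0 / p)
    return out

def greedy_assign(clients, servers, adj, weight):
    """Assign each client to its currently least-loaded adjacent server.

    clients, servers: iterables of ids; adj[c]: servers adjacent to client c;
    weight[c]: nonnegative weight of client c.
    """
    load = {s: 0.0 for s in servers}
    assignment = {}
    for c in clients:
        s = min(adj[c], key=lambda t: load[t])
        assignment[c] = s
        load[s] += weight[c]
    return assignment, load

# Tiny instance: 4 clients, 2 servers, client 3 only adjacent to server "b".
adj = {0: ["a", "b"], 1: ["a", "b"], 2: ["a"], 3: ["b"]}
weight = {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0}
assignment, load = greedy_assign(adj.keys(), ["a", "b"], adj, weight)
print(assignment, lp_norms(load))
```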

Cite as

Sepehr Assadi, Aaron Bernstein, and Zachary Langley. All-Norm Load Balancing in Graph Streams via the Multiplicative Weights Update Method. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{assadi_et_al:LIPIcs.ITCS.2023.7,
  author =	{Assadi, Sepehr and Bernstein, Aaron and Langley, Zachary},
  title =	{{All-Norm Load Balancing in Graph Streams via the Multiplicative Weights Update Method}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.7},
  URN =		{urn:nbn:de:0030-drops-175106},
  doi =		{10.4230/LIPIcs.ITCS.2023.7},
  annote =	{Keywords: Load Balancing, Semi-Streaming Algorithms, Semi-Matching}
}
Document
Track A: Algorithms, Complexity and Games
Decremental Matching in General Graphs

Authors: Sepehr Assadi, Aaron Bernstein, and Aditi Dudeja

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We consider the problem of maintaining an approximate maximum integral matching in a dynamic graph G, while the adversary makes changes to the edges of the graph. The goal is to maintain a (1+ε)-approximate maximum matching for constant ε > 0, while minimizing the update time. In the fully dynamic setting, where both edge insertion and deletions are allowed, Gupta and Peng (see [Manoj Gupta and Richard Peng, 2013]) gave an algorithm for this problem with an update time of O(√m/ε²). Motivated by the fact that the O_ε(√m) barrier is hard to overcome (see Henzinger, Krinninger, Nanongkai, and Saranurak [Henzinger et al., 2015]; Kopelowitz, Pettie, and Porat [Kopelowitz et al., 2016]), we study this problem in the decremental model, where the adversary is only allowed to delete edges. Recently, Bernstein, Probst-Gutenberg, and Saranurak (see [Bernstein et al., 2020]) gave an O(poly({log n}/ε)) update time decremental algorithm for this problem in bipartite graphs. However, beating O(√m) update time remained an open problem for general graphs. In this paper, we bridge the gap between bipartite and general graphs, by giving an O_ε(poly(log n)) update time algorithm that maintains a (1+ε)-approximate maximum integral matching under adversarial deletions. Our algorithm is randomized, but works against an adaptive adversary. Together with the work of Grandoni, Leonardi, Sankowski, Schwiegelshohn, and Solomon [Fabrizio Grandoni et al., 2019] who give an O_ε(1) update time algorithm for general graphs in the incremental (insertion-only) model, our result essentially completes the picture for partially dynamic matching.

Cite as

Sepehr Assadi, Aaron Bernstein, and Aditi Dudeja. Decremental Matching in General Graphs. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 11:1-11:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{assadi_et_al:LIPIcs.ICALP.2022.11,
  author =	{Assadi, Sepehr and Bernstein, Aaron and Dudeja, Aditi},
  title =	{{Decremental Matching in General Graphs}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{11:1--11:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.11},
  URN =		{urn:nbn:de:0030-drops-163528},
  doi =		{10.4230/LIPIcs.ICALP.2022.11},
  annote =	{Keywords: Dynamic algorithms, matching, primal-dual algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary

Authors: Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
Designing efficient dynamic graph algorithms against an adaptive adversary is a major goal in the field of dynamic graph algorithms and has witnessed many exciting recent developments in, e.g., dynamic matching (Wajc STOC'20) and decremental shortest paths (Chuzhoy and Khanna STOC'19). Compared to other graph primitives (e.g. spanning trees and matchings), designing such algorithms for graph spanners and (more broadly) graph sparsifiers poses a unique challenge, since no fast deterministic algorithm is known for the static computation and there is no known way to adjust the output slowly (i.e., with small recourse/replacements). This paper presents the first non-trivial efficient adaptive algorithms for maintaining many sparsifiers against an adaptive adversary. Specifically, we present algorithms that maintain
1) a polylog(n)-spanner of size Õ(n) in polylog(n) amortized update time,
2) an O(k)-approximate cut sparsifier of size Õ(n) in Õ(n^{1/k}) amortized update time, and
3) a polylog(n)-approximate spectral sparsifier in polylog(n) amortized update time.
Our bounds are the first non-trivial ones even when only the recourse is considered. Our results hold even against a stronger adversary, who can access the random bits previously used by the algorithms, and the amortized update time of all algorithms can be made worst-case by paying sub-polynomial factors. Our spanner result resolves an open question by Ahmed et al. (2019), and our results and techniques imply additional improvements over existing results, including (i) answering open questions about decremental single-source shortest paths by Chuzhoy and Khanna (STOC'19) and Gutenberg and Wulff-Nilsen (SODA'20), implying a nearly-quadratic time algorithm for approximating minimum-cost unit-capacity flow, and (ii) de-amortizing a result of Abraham et al. (FOCS'16) for dynamic spectral sparsifiers. Our results are based on two novel techniques. The first technique is a generic black-box reduction that allows us to assume that the graph is initially an expander with almost uniform degree and, more importantly, stays an almost uniform-degree expander while undergoing only edge deletions. The second technique is called proactive resampling: here we constantly re-sample parts of the input graph so that, independent of an adversary's computational power, a desired structure of the underlying graph can always be maintained. Despite its simplicity, the analysis of this sampling scheme is far from trivial, because the adversary can potentially create dependencies between the random choices used by the algorithm. We believe these two techniques could be useful for developing other adaptive algorithms.
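
To make the maintained object concrete, here is a hedged sketch of the classical static greedy construction of a (2k-1)-spanner of an unweighted graph: an edge is kept only if its endpoints are currently more than 2k-1 apart in the spanner. It is a static baseline only, with no relation to the dynamic, adaptive-adversary machinery of the paper; the names are illustrative.

```python
from collections import deque

def bfs_distance(adj, src, dst, cap):
    """Distance from src to dst in the current spanner, explored up to cap."""
    if src == dst:
        return 0
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] >= cap:
            break
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if v == dst:
                    return dist[v]
                q.append(v)
    return dist.get(dst, float("inf"))

def greedy_spanner(n, edges, k):
    """Classical greedy (2k-1)-spanner of an unweighted graph.

    An edge (u, v) is added only if u and v are currently more than 2k-1
    apart in the spanner; the result has O(n^{1+1/k}) edges and stretch 2k-1.
    """
    adj = [set() for _ in range(n)]
    spanner = []
    for u, v in edges:
        if bfs_distance(adj, u, v, 2 * k - 1) > 2 * k - 1:
            adj[u].add(v)
            adj[v].add(u)
            spanner.append((u, v))
    return spanner

# A 4-cycle with a chord: for k = 2 the greedy keeps only the path 0-1-2-3,
# since the two dropped edges have endpoints within distance 2k-1 = 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(greedy_spanner(4, edges, k=2))
```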

Cite as

Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun. Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 20:1-20:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.ICALP.2022.20,
  author =	{Bernstein, Aaron and van den Brand, Jan and Probst Gutenberg, Maximilian and Nanongkai, Danupon and Saranurak, Thatchaphol and Sidford, Aaron and Sun, He},
  title =	{{Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{20:1--20:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.20},
  URN =		{urn:nbn:de:0030-drops-163611},
  doi =		{10.4230/LIPIcs.ICALP.2022.20},
  annote =	{Keywords: dynamic graph algorithm, adaptive adversary, spanner, sparsifier}
}
Document
Incremental SCC Maintenance in Sparse Graphs

Authors: Aaron Bernstein, Aditi Dudeja, and Seth Pettie

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
In the incremental cycle detection problem, edges are added to a directed graph (initially empty), and the algorithm has to report the presence of the first cycle, once it is formed. A closely related problem is the incremental topological sort problem, where edges are added to an acyclic graph, and the algorithm is required to maintain a valid topological ordering. Since these problems arise naturally in many applications such as scheduling tasks, pointer analysis, and circuit evaluation, they have been studied extensively in the last three decades. Motivated by the fact that in many of these applications, the presence of a cycle is not fatal, we study a generalization of these problems, incremental maintenance of strongly connected components (incremental SCC). Several incremental algorithms in the literature which do cycle detection and topological sort in directed acyclic graphs, such as those by [Michael A. Bender et al., 2016] and [Haeupler et al., 2012], also generalize to maintain strongly connected components and their topological sort in general directed graphs. The algorithms of [Haeupler et al., 2012] and [Michael A. Bender et al., 2016] have a total update time of O(m^{3/2}) and O(m⋅ min{m^{1/2},n^{2/3}}) respectively, and this is the state of the art for incremental SCC. But the most recent algorithms for incremental cycle detection and topological sort ([Bernstein and Chechik, 2018] and [Bhattacharya and Kulkarni, 2020]), which yield total (randomized) update time Õ(min{m^{4/3}, n²}), do not extend to incremental SCC. Thus, there is a gap between the best known algorithms for these two closely related problems. In this paper, we bridge this gap by extending the framework of [Bhattacharya and Kulkarni, 2020] to general directed graphs. More concretely, we give a Las Vegas algorithm for incremental SCCs with an expected total update time of Õ(m^{4/3}). A key ingredient in the algorithm of [Bhattacharya and Kulkarni, 2020] is a structural theorem (first introduced in [Bernstein and Chechik, 2018]) that bounds the number of "equivalent" vertices. Unfortunately, this theorem only applies to DAGs. We show a natural way to extend this structural theorem to general directed graphs, and along the way we develop a significantly simpler and more intuitive proof of this theorem.
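
As a point of reference for the interface, the sketch below maintains SCCs incrementally in the most naive way possible, recomputing them from scratch (via mutual reachability) after every edge insertion. This is an O(n·m)-per-update baseline for small examples, not the paper's Õ(m^{4/3}) algorithm; the class and function names are ours.

```python
from collections import deque

def reachable(adj, src):
    """Set of vertices reachable from src by BFS."""
    seen = {src}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def strongly_connected_components(n, adj):
    """SCCs via mutual reachability: u ~ v iff u reaches v and v reaches u.

    O(n * m) time; fine as a naive baseline, hopeless as a per-update
    routine on large graphs.
    """
    reach = [reachable(adj, u) for u in range(n)]
    placed, components = set(), []
    for u in range(n):
        if u in placed:
            continue
        scc = {v for v in reach[u] if u in reach[v]}
        placed |= scc
        components.append(scc)
    return components

class NaiveIncrementalSCC:
    """Recompute-from-scratch incremental SCC maintenance."""
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]
        self.sccs = [{u} for u in range(n)]

    def insert_edge(self, u, v):
        self.adj[u].append(v)
        self.sccs = strongly_connected_components(self.n, self.adj)
        return self.sccs

g = NaiveIncrementalSCC(3)
g.insert_edge(0, 1)
g.insert_edge(1, 2)
print(g.insert_edge(2, 0))  # the cycle 0 -> 1 -> 2 -> 0 merges into one SCC
```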

Cite as

Aaron Bernstein, Aditi Dudeja, and Seth Pettie. Incremental SCC Maintenance in Sparse Graphs. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.ESA.2021.14,
  author =	{Bernstein, Aaron and Dudeja, Aditi and Pettie, Seth},
  title =	{{Incremental SCC Maintenance in Sparse Graphs}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.14},
  URN =		{urn:nbn:de:0030-drops-145950},
  doi =		{10.4230/LIPIcs.ESA.2021.14},
  annote =	{Keywords: Directed Graphs, Strongly Connected Components, Dynamic Graph Algorithms}
}
Document
Online Matching with Recourse: Random Edge Arrivals

Authors: Aaron Bernstein and Aditi Dudeja

Published in: LIPIcs, Volume 182, 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)


Abstract
The matching problem in the online setting models the following situation: we are given a set of servers in advance, the clients arrive one at a time, and each client has edges to some of the servers. Each client must be matched to some incident server upon arrival (or left unmatched), and the algorithm is not allowed to reverse its decisions. Due to this no-reversal restriction, we are not able to guarantee an exact maximum matching in this model, only an approximate one. Therefore, it is natural to study a different setting, where the top priority is to match as many clients as possible, and changes to the matching are possible but expensive. Formally, the goal is to always maintain a maximum matching while minimizing the number of changes made to the matching (denoted the recourse). This model is called the online model with recourse, and has been studied extensively over the past few years. For the specific problem of matching, the focus has been on the vertex-arrival model, where clients arrive one at a time with all their edges. A recent result of Bernstein et al. [Bernstein et al., 2019] gives an upper bound of O(n log² n) recourse for the case of general bipartite graphs. For trees the best known bound is O(n log n) recourse, due to Bosek et al. [Bosek et al., 2018]. These are nearly tight, as a lower bound of Ω(n log n) is known. In this paper, we consider the more general model where all the vertices are known in advance, but the edges of the graph are revealed one at a time. Even for the simple case where the graph is a path, there is a lower bound of Ω(n²). Therefore, we instead consider the natural relaxation where the graph is worst-case, but the edges are revealed in a random order. This relaxation is motivated by the fact that in many related models, such as the streaming setting or the standard online setting without recourse, faster algorithms have been obtained for the matching problem when the input comes in a random order. Our results are as follows:
- Our main result is that for the case of general (non-bipartite) graphs, the problem with random edge arrivals is almost as hard as in the adversarial setting: we show a family of graphs for which the expected recourse is Ω(n²/log n).
- We show that for some special cases of graphs, random arrival is significantly easier. For the case of trees, we get an upper bound of O(n log² n) on the expected recourse. For the case of paths, this upper bound is O(n log n). We also show that the latter bound is tight, i.e. that the expected recourse is at least Ω(n log n).
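
The recourse measure can be illustrated on the bipartite special case: the sketch below maintains a maximum matching under edge arrivals by running one augmenting-path search per arrival and counting how many matched pairs change at each step. The paper itself concerns general graphs and random edge order; this bipartite toy, and its names, are only for intuition.

```python
def try_augment(u, adj, match_l, match_r, visited):
    """Kuhn's augmenting-path search from a free left vertex u."""
    for v in adj[u]:
        if v in visited:
            continue
        visited.add(v)
        if match_r.get(v) is None or try_augment(match_r[v], adj, match_l, match_r, visited):
            match_l[u] = v
            match_r[v] = u
            return True
    return False

def online_edge_arrivals(left, edges):
    """Maintain a maximum bipartite matching under edge arrivals.

    After each arrival the maximum matching size can grow by at most one,
    so a single successful augmenting-path search restores maximality.
    Recourse of a step is the number of matched pairs that changed.
    """
    adj = {u: [] for u in left}
    match_l, match_r, total_recourse = {}, {}, 0
    for u, v in edges:
        adj[u].append(v)
        old = set(match_l.items())
        for x in left:  # look for one augmenting path from a free left vertex
            if match_l.get(x) is None and try_augment(x, adj, match_l, match_r, set()):
                break
        new = set(match_l.items())
        total_recourse += len(old ^ new)
    return match_l, total_recourse

# A short path revealed edge by edge: the last edge forces the first choice to flip.
left = [0, 1]
edges = [(0, "x"), (1, "x"), (0, "y")]
matching, recourse = online_edge_arrivals(left, edges)
print(matching, recourse)  # {0: 'y', 1: 'x'} 4
```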

Cite as

Aaron Bernstein and Aditi Dudeja. Online Matching with Recourse: Random Edge Arrivals. In 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 182, pp. 11:1-11:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.FSTTCS.2020.11,
  author =	{Bernstein, Aaron and Dudeja, Aditi},
  title =	{{Online Matching with Recourse: Random Edge Arrivals}},
  booktitle =	{40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)},
  pages =	{11:1--11:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-174-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{182},
  editor =	{Saxena, Nitin and Simon, Sunil},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2020.11},
  URN =		{urn:nbn:de:0030-drops-132521},
  doi =		{10.4230/LIPIcs.FSTTCS.2020.11},
  annote =	{Keywords: matchings, edge-arrival, online model}
}
Document
Improved Bounds for Distributed Load Balancing

Authors: Sepehr Assadi, Aaron Bernstein, and Zachary Langley

Published in: LIPIcs, Volume 179, 34th International Symposium on Distributed Computing (DISC 2020)


Abstract
In the load balancing problem, the input is an n-vertex bipartite graph G = (C ∪ S, E) - where the two sides of the bipartite graph are referred to as the clients and the servers - and a positive weight for each client c ∈ C. The algorithm must assign each client c ∈ C to an adjacent server s ∈ S. The load of a server is then the weighted sum of all the clients assigned to it. The goal is to compute an assignment that minimizes some function of the server loads, typically either the maximum server load (i.e., the 𝓁_∞-norm) or the 𝓁_p-norm of the server loads. This problem has a variety of applications and has been widely studied under several different names, including: scheduling with restricted assignment, semi-matching, and distributed backup placement. We study load balancing in the distributed setting. There are two existing results in the CONGEST model. Czygrinow et al. [DISC 2012] showed a 2-approximation for unweighted clients with round-complexity O(Δ⁵), where Δ is the maximum degree of the input graph. Halldórsson et al. [SPAA 2015] showed an O(log n / log log n)-approximation for unweighted clients and O(log²n/log log n)-approximation for weighted clients with round-complexity polylog(n). In this paper, we show the first distributed algorithms to compute an O(1)-approximation to the load balancing problem in polylog(n) rounds:
- In the CONGEST model, we give an O(1)-approximation algorithm in polylog(n) rounds for unweighted clients. For weighted clients, the approximation ratio is O(log n).
- In the less constrained LOCAL model, we give an O(1)-approximation algorithm for weighted clients in polylog(n) rounds.
Our approach also has implications for the standard sequential setting in which we obtain the first O(1)-approximation for this problem that runs in near-linear time. A 2-approximation is already known, but it requires solving a linear program and is hence much slower. Finally, we note that all of our results simultaneously approximate all 𝓁_p-norms, including the 𝓁_∞-norm.

Cite as

Sepehr Assadi, Aaron Bernstein, and Zachary Langley. Improved Bounds for Distributed Load Balancing. In 34th International Symposium on Distributed Computing (DISC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 179, pp. 1:1-1:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{assadi_et_al:LIPIcs.DISC.2020.1,
  author =	{Assadi, Sepehr and Bernstein, Aaron and Langley, Zachary},
  title =	{{Improved Bounds for Distributed Load Balancing}},
  booktitle =	{34th International Symposium on Distributed Computing (DISC 2020)},
  pages =	{1:1--1:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-168-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{179},
  editor =	{Attiya, Hagit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2020.1},
  URN =		{urn:nbn:de:0030-drops-130798},
  doi =		{10.4230/LIPIcs.DISC.2020.1},
  annote =	{Keywords: Load Balancing, Distributed Algorithms, Matching, Semi-Matching}
}
Document
Track A: Algorithms, Complexity and Games
Improved Bounds for Matching in Random-Order Streams

Authors: Aaron Bernstein

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
We study the problem of computing an approximate maximum cardinality matching in the semi-streaming model when edges arrive in a random order. In the semi-streaming model, the edges of the input graph G = (V,E) are given as a stream e₁, …, e_m, and the algorithm is allowed to make a single pass over this stream while using O(n polylog(n)) space (m = |E| and n = |V|). If the order of edges is adversarial, a simple single-pass greedy algorithm yields a 1/2-approximation in O(n) space; achieving a better approximation in adversarial streams remains an elusive open question. A line of recent work shows that one can improve upon the 1/2-approximation if the edges of the stream arrive in a random order. The state of the art for this model is two-fold: Assadi et al. [SODA 2019] show how to compute a 2/3 (≈ 0.66)-approximate matching, but the space requirement is O(n^{1.5} polylog(n)). Very recently, Farhadi et al. [SODA 2020] presented an algorithm with the desired space usage of O(n polylog(n)), but a worse approximation ratio of 6/11 (≈ 0.545), or 3/5 (= 0.6) in bipartite graphs. In this paper, we present an algorithm that computes a 2/3 (≈ 0.66)-approximate matching using only O(n log(n)) space, improving upon both results above. We also note that for adversarial streams, a lower bound of Kapralov [SODA 2013] shows that any algorithm that achieves a 1-1/e (≈ 0.63)-approximation requires n^{1+Ω(1/log log n)} space. Our result for random-order streams is the first to go beyond the adversarial-order lower bound, thus establishing that computing a maximum matching is provably easier in random-order streams.
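
The 1/2-approximate greedy baseline mentioned in the abstract is short enough to state in full; the sketch below is that baseline (keep an edge if both endpoints are still free), not the paper's 2/3-approximation for random-order streams.

```python
def greedy_streaming_matching(edge_stream):
    """Single-pass greedy matching: keep an edge iff both endpoints are free.

    Uses O(n) space and is a 1/2-approximation to maximum matching for any
    arrival order (every kept edge can block at most two optimal edges).
    """
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching

# On a path 1-2-3-4 streamed in this order, greedy picks the middle edge only,
# while the maximum matching has two edges (the 1/2 worst case).
print(greedy_streaming_matching([(2, 3), (1, 2), (3, 4)]))  # [(2, 3)]
```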

Cite as

Aaron Bernstein. Improved Bounds for Matching in Random-Order Streams. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 12:1-12:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bernstein:LIPIcs.ICALP.2020.12,
  author =	{Bernstein, Aaron},
  title =	{{Improved Bounds for Matching in Random-Order Streams}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{12:1--12:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.12},
  URN =		{urn:nbn:de:0030-drops-124194},
  doi =		{10.4230/LIPIcs.ICALP.2020.12},
  annote =	{Keywords: Graph Algorithms, Sublinear Algorithms, Matching, Streaming}
}
Document
Towards a Unified Theory of Sparsification for Matching Problems

Authors: Sepehr Assadi and Aaron Bernstein

Published in: OASIcs, Volume 69, 2nd Symposium on Simplicity in Algorithms (SOSA 2019)


Abstract
In this paper, we present a construction of a "matching sparsifier", that is, a sparse subgraph of the given graph that preserves large matchings approximately and is robust to modifications of the graph. We use this matching sparsifier to obtain several new algorithmic results for the maximum matching problem:
- An almost (3/2)-approximation one-way communication protocol for the maximum matching problem, significantly simplifying the (3/2)-approximation protocol of Goel, Kapralov, and Khanna (SODA 2012) and extending it from bipartite graphs to general graphs.
- An almost (3/2)-approximation algorithm for the stochastic matching problem, improving upon and significantly simplifying the previous 1.999-approximation algorithm of Assadi, Khanna, and Li (EC 2017).
- An almost (3/2)-approximation algorithm for the fault-tolerant matching problem, which, to our knowledge, is the first non-trivial algorithm for this problem.
Our matching sparsifier is obtained by proving new properties of the edge-degree constrained subgraph (EDCS) of Bernstein and Stein (ICALP 2015; SODA 2016) - designed in the context of maintaining matchings in dynamic graphs - that identify the EDCS as an excellent choice for a matching sparsifier. This leads to surprisingly simple and non-technical proofs of the above results in a unified way. Along the way, we also provide a much simpler proof of the fact that an EDCS is guaranteed to contain a large matching, which may be of independent interest.
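
For context, an EDCS is usually defined by two degree constraints: every edge inside the subgraph has small total edge-degree, and every edge left out has large total edge-degree. The abstract does not spell out the parameters, so the thresholds β and β⁻ in the checker below should be read as the standard formulation recalled from the Bernstein-Stein line of work, not as the paper's exact statement.

```python
def degrees(edges, vertices):
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def is_edcs(G_edges, H_edges, beta, beta_minus, vertices):
    """Check the two EDCS degree conditions (as usually stated):

      (i)  every edge (u, v) in H has deg_H(u) + deg_H(v) <= beta, and
      (ii) every edge (u, v) in G but not in H has deg_H(u) + deg_H(v) >= beta_minus.
    """
    H = set(map(frozenset, H_edges))
    deg_H = degrees(H_edges, vertices)
    for u, v in H_edges:
        if deg_H[u] + deg_H[v] > beta:
            return False
    for u, v in G_edges:
        if frozenset((u, v)) not in H and deg_H[u] + deg_H[v] < beta_minus:
            return False
    return True

# A triangle with one edge left out: the omitted edge's endpoints both have
# degree 1 in H, so the check passes for beta = 3, beta_minus = 2.
G = [(0, 1), (1, 2), (0, 2)]
H = [(0, 1), (1, 2)]
print(is_edcs(G, H, beta=3, beta_minus=2, vertices=[0, 1, 2]))  # True
```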

Cite as

Sepehr Assadi and Aaron Bernstein. Towards a Unified Theory of Sparsification for Matching Problems. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{assadi_et_al:OASIcs.SOSA.2019.11,
  author =	{Assadi, Sepehr and Bernstein, Aaron},
  title =	{{Towards a Unified Theory of Sparsification for Matching Problems}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{11:1--11:20},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.11},
  URN =		{urn:nbn:de:0030-drops-100370},
  doi =		{10.4230/OASIcs.SOSA.2019.11},
  annote =	{Keywords: Maximum matching, matching sparsifiers, one-way communication complexity, stochastic matching, fault-tolerant matching}
}
Document
Distance-Preserving Graph Contractions

Authors: Aaron Bernstein, Karl Däubel, Yann Disser, Max Klimm, Torsten Mütze, and Frieder Smolny

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Compression and sparsification algorithms are frequently applied in a preprocessing step before analyzing or optimizing large networks/graphs. In this paper we propose and study a new framework for contracting edges of a graph (merging vertices into super-vertices) with the goal of preserving pairwise distances as accurately as possible. Formally, given an edge-weighted graph, the contraction should guarantee that for any two vertices at distance d, the corresponding super-vertices remain at distance at least φ(d) in the contracted graph, where φ is a tolerance function bounding the permitted distance distortion. We present a comprehensive picture of the algorithmic complexity of the contraction problem for affine tolerance functions φ(x) = x/α - β, where α ≥ 1 and β ≥ 0 are arbitrary real-valued parameters. Specifically, we present polynomial-time algorithms for trees as well as hardness and inapproximability results for different graph classes, precisely separating easy and hard cases. Further we analyze the asymptotic behavior of the size of contractions, and find efficient algorithms to compute (non-optimal) contractions despite our hardness results.
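
The distortion guarantee can be checked by brute force on small instances: the sketch below contracts a graph according to a given vertex partition and verifies that every contracted distance is at least φ(d) = d/α - β of the corresponding original distance d. The graph, partition, and parameter values are illustrative, and the check assumes super-vertex ids 0..k-1.

```python
INF = float("inf")

def floyd_warshall(n, weighted_edges):
    """All-pairs shortest distances of an undirected weighted graph."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in weighted_edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def contract(weighted_edges, block_of):
    """Contract each block into a super-vertex; keep the cheapest edge between blocks."""
    best = {}
    for u, v, w in weighted_edges:
        a, b = block_of[u], block_of[v]
        if a != b:
            key = (min(a, b), max(a, b))
            best[key] = min(best.get(key, INF), w)
    return [(a, b, w) for (a, b), w in best.items()]

def respects_tolerance(n, weighted_edges, block_of, alpha, beta):
    """For all u, v: dist_contracted(block(u), block(v)) >= dist_G(u, v)/alpha - beta."""
    k = max(block_of) + 1
    dG = floyd_warshall(n, weighted_edges)
    dH = floyd_warshall(k, contract(weighted_edges, block_of))
    for u in range(n):
        for v in range(n):
            if dG[u][v] < INF and dH[block_of[u]][block_of[v]] < dG[u][v] / alpha - beta:
                return False
    return True

# A 4-path with unit weights; merging the two middle vertices shrinks some
# distances by one, which the tolerance phi(x) = x/1 - 1 still allows.
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1)]
block_of = [0, 1, 1, 2]  # vertex -> super-vertex id
print(respects_tolerance(4, edges, block_of, alpha=1, beta=1))  # True
```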

Cite as

Aaron Bernstein, Karl Däubel, Yann Disser, Max Klimm, Torsten Mütze, and Frieder Smolny. Distance-Preserving Graph Contractions. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.ITCS.2018.51,
  author =	{Bernstein, Aaron and D\"{a}ubel, Karl and Disser, Yann and Klimm, Max and M\"{u}tze, Torsten and Smolny, Frieder},
  title =	{{Distance-Preserving Graph Contractions}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.51},
  URN =		{urn:nbn:de:0030-drops-83427},
  doi =		{10.4230/LIPIcs.ITCS.2018.51},
  annote =	{Keywords: distance oracle, contraction, spanner}
}
Document
Simultaneously Load Balancing for Every p-norm, With Reassignments

Authors: Aaron Bernstein, Tsvi Kopelowitz, Seth Pettie, Ely Porat, and Clifford Stein

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
This paper investigates the task of load balancing where the objective function is to minimize the p-norm of loads, for p ≥ 1, in both static and incremental settings. We consider two closely related load balancing problems. In the bipartite matching problem we are given a bipartite graph G = (C ∪ S, E) and the goal is to assign each client c ∈ C to a server s ∈ S so that the p-norm of assignment loads on S is minimized. In the graph orientation problem the goal is to orient (direct) the edges of a given undirected graph while minimizing the p-norm of the out-degrees. The graph orientation problem is a special case of the bipartite matching problem, but less complex, which leads to simpler algorithms. For the graph orientation problem we show that the celebrated Chiba-Nishizeki peeling algorithm provides a simple linear time load balancing scheme whose output is an orientation that is 2-competitive, in a p-norm sense, for all p ≥ 1. For the bipartite matching problem we first provide an offline algorithm that computes an optimal assignment. We then extend this solution to the online bipartite matching problem with reassignments, where vertices from C arrive in an online fashion together with their corresponding edges, and we are allowed to reassign an amortized O(1) vertices from C each time a new vertex arrives. In this online scenario we show how to maintain a single assignment that is 8-competitive, in a p-norm sense, for all p ≥ 1.
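
The peeling scheme can be sketched compactly, under the assumption that "Chiba-Nishizeki peeling" refers to repeatedly removing a minimum-degree vertex and orienting its remaining edges away from it; the sketch below implements that rule and reports a p-norm of the resulting out-degrees. The competitive-ratio analysis is in the paper, not in this toy code.

```python
import heapq

def peeling_orientation(n, edges):
    """Repeatedly remove a minimum-degree vertex, orienting its remaining
    incident edges away from it; every out-degree is then at most the
    degeneracy of the graph."""
    adj = [set() for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].add(i)
        adj[v].add(i)
    deg = [len(a) for a in adj]
    heap = [(deg[v], v) for v in range(n)]
    heapq.heapify(heap)
    removed = [False] * n
    oriented = {}
    while heap:
        d, u = heapq.heappop(heap)
        if removed[u] or d != deg[u]:
            continue                   # stale heap entry
        removed[u] = True
        for i in list(adj[u]):
            a, b = edges[i]
            w = b if a == u else a     # the other endpoint
            oriented[i] = (u, w)       # orient away from the peeled vertex
            adj[u].discard(i)
            adj[w].discard(i)
            deg[w] -= 1
            heapq.heappush(heap, (deg[w], w))
    return [oriented[i] for i in range(len(edges))]

def out_degree_pnorm(n, orientation, p):
    out = [0] * n
    for u, _ in orientation:
        out[u] += 1
    if p == float("inf"):
        return max(out)
    return sum(x ** p for x in out) ** (1.0 / p)

# A triangle plus a pendant edge: peeling keeps every out-degree at most 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
orientation = peeling_orientation(4, edges)
print(orientation, out_degree_pnorm(4, orientation, float("inf")))
```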

Cite as

Aaron Bernstein, Tsvi Kopelowitz, Seth Pettie, Ely Porat, and Clifford Stein. Simultaneously Load Balancing for Every p-norm, With Reassignments. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.ITCS.2017.51,
  author =	{Bernstein, Aaron and Kopelowitz, Tsvi and Pettie, Seth and Porat, Ely and Stein, Clifford},
  title =	{{Simultaneously Load Balancing for Every p-norm, With Reassignments}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.51},
  URN =		{urn:nbn:de:0030-drops-82009},
  doi =		{10.4230/LIPIcs.ITCS.2017.51},
  annote =	{Keywords: Online Matching, Graph Orientation, Minimizing the p-norm}
}
Document
General Bounds for Incremental Maximization

Authors: Aaron Bernstein, Yann Disser, and Martin Groß

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
We propose a theoretical framework to capture incremental solutions to cardinality constrained maximization problems. The defining characteristic of our framework is that the cardinality/support of the solution is bounded by a value k ∈ ℕ that grows over time, and we allow the solution to be extended one element at a time. We investigate the best-possible competitive ratio of such an incremental solution, i.e., the worst ratio over all k between the incremental solution after k steps and an optimum solution of cardinality k. We define a large class of problems that contains many important cardinality constrained maximization problems like maximum matching, knapsack, and packing/covering problems. We provide a general 2.618-competitive incremental algorithm for this class of problems, and show that no algorithm can have competitive ratio below 2.18 in general. In the second part of the paper, we focus on the inherently incremental greedy algorithm that increases the objective value as much as possible in each step. This algorithm is known to be 1.58-competitive for submodular objective functions, but it has unbounded competitive ratio for the class of incremental problems mentioned above. We define a relaxed submodularity condition for the objective function, capturing problems like maximum (weighted) (b-)matching and a variant of the maximum flow problem. We show that the greedy algorithm has competitive ratio (exactly) 2.313 for the class of problems that satisfy this relaxed submodularity condition. Note that our upper bounds on the competitive ratios translate to approximation ratios for the underlying cardinality constrained problems.
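
The "inherently incremental greedy algorithm" is easy to state in code: always add the element with the largest marginal gain and record the solution after every step k. The weighted-coverage objective below is a made-up example (coverage is submodular, so greedy behaves well on it); it is not an instance taken from the paper.

```python
def greedy_incremental(elements, objective, k_max):
    """Grow a solution one element at a time, always adding the element that
    increases the objective the most; report the solution after each step k."""
    solution, history = [], []
    for _ in range(k_max):
        remaining = [e for e in elements if e not in solution]
        if not remaining:
            break
        best = max(remaining, key=lambda e: objective(solution + [e]))
        solution.append(best)
        history.append((len(solution), list(solution), objective(solution)))
    return history

# Toy weighted-coverage objective: each element covers a set of items,
# and the value of a solution is the number of distinct items covered.
sets = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {5},
    "D": {1, 2, 4, 5},
}
coverage = lambda sol: len(set().union(*(sets[e] for e in sol))) if sol else 0
for k, sol, val in greedy_incremental(list(sets), coverage, k_max=3):
    print(k, sol, val)
```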

Cite as

Aaron Bernstein, Yann Disser, and Martin Groß. General Bounds for Incremental Maximization. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.ICALP.2017.43,
  author =	{Bernstein, Aaron and Disser, Yann and Gro{\ss}, Martin},
  title =	{{General Bounds for Incremental Maximization}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{43:1--43:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.43},
  URN =		{urn:nbn:de:0030-drops-74650},
  doi =		{10.4230/LIPIcs.ICALP.2017.43},
  annote =	{Keywords: incremental optimization, maximization problems, greedy algorithm, competitive analysis, cardinality constraint}
}
Document
Deterministic Partially Dynamic Single Source Shortest Paths in Weighted Graphs

Authors: Aaron Bernstein

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
In this paper we consider the decremental single-source shortest paths (SSSP) problem, where given a graph G and a source node s the goal is to maintain shortest distances between s and all other nodes in G under a sequence of online adversarial edge deletions. In their seminal work, Even and Shiloach [JACM 1981] presented an exact solution to the problem in unweighted graphs with only O(mn) total update time over all edge deletions. Their classic algorithm was the state of the art for the decremental SSSP problem for three decades, even when approximate shortest paths are allowed. The first improvement over the Even-Shiloach algorithm was given by Bernstein and Roditty [SODA 2011], who for the case of an unweighted and undirected graph presented a (1+ε)-approximate algorithm with constant query time and a total update time of O(n^{2+o(1)}). This work triggered a series of new results, culminating in a recent breakthrough of Henzinger, Krinninger and Nanongkai [FOCS 14], who presented a (1+ε)-approximate algorithm for undirected weighted graphs whose total update time is near linear: O(m^{1+o(1)} log W), where W is the ratio of the heaviest to the lightest edge weight in the graph. In that paper they posed as a major open problem the question of derandomizing their result. Until very recently, all known improvements over the Even-Shiloach algorithm were randomized and required the assumption of a non-adaptive adversary. In STOC 2016, Bernstein and Chechik showed the first deterministic algorithm to go beyond O(mn) total update time: the algorithm is also (1+ε)-approximate, and has total update time Õ(n²). In SODA 2017, the same authors presented an algorithm with total update time Õ(mn^{3/4}). However, both algorithms are restricted to undirected, unweighted graphs. We present the first deterministic algorithm for weighted undirected graphs to go beyond the O(mn) bound. The total update time is Õ(n² log W).
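
The Even-Shiloach structure referenced throughout the abstract maintains exact BFS distances under deletions by only ever raising vertex levels. The sketch below is a simplified level-repair variant of that idea, correct for unweighted undirected graphs but not engineered to meet the O(mn) total-update-time accounting; the class and method names are ours.

```python
from collections import deque

class DecrementalBFS:
    """Even-Shiloach-style exact decremental SSSP for an unweighted,
    undirected graph: maintain dist[v] under edge deletions by raising the
    level of any vertex that loses all neighbors on strictly lower levels."""

    def __init__(self, n, edges, source):
        self.n, self.s = n, source
        self.adj = [set() for _ in range(n)]
        for u, v in edges:
            self.adj[u].add(v)
            self.adj[v].add(u)
        self.dist = [float("inf")] * n
        self.dist[source] = 0
        q = deque([source])
        while q:                           # initial BFS from the source
            u = q.popleft()
            for w in self.adj[u]:
                if self.dist[w] == float("inf"):
                    self.dist[w] = self.dist[u] + 1
                    q.append(w)

    def _supported(self, x):
        """x keeps its level iff some neighbor sits at a strictly lower level."""
        return any(self.dist[w] <= self.dist[x] - 1 for w in self.adj[x])

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        q = deque([u, v])
        while q:
            x = q.popleft()
            if x == self.s or self.dist[x] == float("inf") or self._supported(x):
                continue
            self.dist[x] += 1              # levels only ever increase
            q.extend(self.adj[x])          # neighbors may have relied on x
            if self.dist[x] >= self.n:     # no path of length < n exists
                self.dist[x] = float("inf")
            else:
                q.append(x)                # re-check x at its new level

g = DecrementalBFS(4, [(0, 1), (1, 2), (2, 3), (0, 3)], source=0)
g.delete_edge(0, 3)
print(g.dist)  # [0, 1, 2, 3]: vertex 3 is now reached only via the long path
```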

Cite as

Aaron Bernstein. Deterministic Partially Dynamic Single Source Shortest Paths in Weighted Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 44:1-44:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bernstein:LIPIcs.ICALP.2017.44,
  author =	{Bernstein, Aaron},
  title =	{{Deterministic Partially Dynamic Single Source Shortest Paths in Weighted Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{44:1--44:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.44},
  URN =		{urn:nbn:de:0030-drops-74013},
  doi =		{10.4230/LIPIcs.ICALP.2017.44},
  annote =	{Keywords: Shortest Paths, Dynamic Algorithms, Deterministic, Weighted Graph}
}