Search Results

Documents authored by Vassilevska Williams, Virginia


Document
Faster Cycle Detection in the Congested Clique

Authors: Keren Censor-Hillel, Tomer Even, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 319, 38th International Symposium on Distributed Computing (DISC 2024)


Abstract
We provide a fast distributed algorithm for detecting h-cycles in the Congested Clique model; its running time decreases as the number of h-cycles in the graph increases. In undirected graphs, constant-round algorithms are known for cycles of even length. Our algorithm greatly improves upon the state of the art for odd values of h. Moreover, our running time applies also to directed graphs, in which case the improvement is for all values of h. Further, our techniques allow us to obtain a triangle detection algorithm in the quantum variant of this model, which is faster than prior work. A key technical contribution we develop to obtain our fast cycle detection algorithm is a new algorithm for computing the product of many pairs of small matrices in parallel, which may be of independent interest.
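
As background, a minimal sequential illustration (not the paper's distributed algorithm) of the standard connection between cycle detection and matrix multiplication: trace(A^h) counts closed h-walks, so for h = 3 the number of triangles in an undirected graph is trace(A³)/6.

import numpy as np

def triangle_count(adj):
    # adj: symmetric 0/1 adjacency matrix with zero diagonal.
    # trace(A^3) counts closed 3-walks; each triangle is counted
    # 6 times (3 starting vertices x 2 orientations).
    a = np.asarray(adj, dtype=np.int64)
    return int(np.trace(a @ a @ a)) // 6

# Example: one triangle on vertices 0, 1, 2 plus an isolated vertex 3.
A = np.zeros((4, 4), dtype=np.int64)
for u, v in [(0, 1), (1, 2), (0, 2)]:
    A[u, v] = A[v, u] = 1
print(triangle_count(A))  # 1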

Cite as

Keren Censor-Hillel, Tomer Even, and Virginia Vassilevska Williams. Faster Cycle Detection in the Congested Clique. In 38th International Symposium on Distributed Computing (DISC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 319, pp. 12:1-12:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{censorhillel_et_al:LIPIcs.DISC.2024.12,
  author =	{Censor-Hillel, Keren and Even, Tomer and Vassilevska Williams, Virginia},
  title =	{{Faster Cycle Detection in the Congested Clique}},
  booktitle =	{38th International Symposium on Distributed Computing (DISC 2024)},
  pages =	{12:1--12:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-352-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{319},
  editor =	{Alistarh, Dan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2024.12},
  URN =		{urn:nbn:de:0030-drops-212382},
  doi =		{10.4230/LIPIcs.DISC.2024.12},
  annote =	{Keywords: triangle detection, cycle detection, distributed computing, Congested Clique, quantum computing, Fast matrix multiplication, Fast rectangular matrix multiplication}
}
Document
Track A: Algorithms, Complexity and Games
Detecting Disjoint Shortest Paths in Linear Time and More

Authors: Shyan Akmal, Virginia Vassilevska Williams, and Nicole Wein

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In the k-Disjoint Shortest Paths (k-DSP) problem, we are given a graph G (with positive edge weights) on n nodes and m edges with specified source vertices s_1, … , s_k and target vertices t_1, … , t_k, and are tasked with determining if G contains vertex-disjoint (s_i,t_i)-shortest paths. For any constant k, it is known that k-DSP can be solved in polynomial time over undirected graphs and directed acyclic graphs (DAGs). However, the exact time complexity of k-DSP remains mysterious, with large gaps between the fastest known algorithms and the best conditional lower bounds. In this paper, we obtain faster algorithms for important cases of k-DSP, and present better conditional lower bounds for k-DSP and its variants.

Previous work solved 2-DSP over weighted undirected graphs in O(n⁷) time, and over weighted DAGs in O(mn) time. For the main result of this paper, we present optimal linear time algorithms for solving 2-DSP on weighted undirected graphs and DAGs. Our linear time algorithms are algebraic, however, and so only solve the detection rather than the search version of 2-DSP (we show how to solve the search version in O(mn) time, which is faster than the previous best runtime in weighted undirected graphs, but only matches the previous best runtime for DAGs).

We also obtain a faster algorithm for k-Edge Disjoint Shortest Paths (k-EDSP) in DAGs, the variant of k-DSP where one seeks edge-disjoint instead of vertex-disjoint paths between sources and their corresponding targets. Algorithms for k-EDSP on DAGs from previous work take Ω(m^k) time. We show that k-EDSP can be solved over DAGs in O(mn^{k-1}) time, matching the fastest known runtime for solving k-DSP over DAGs.

Previous work established conditional lower bounds for solving k-DSP and its variants via reductions from detecting cliques in graphs. Prior work implied that k-Clique can be reduced to 2k-DSP in DAGs and undirected graphs with O((kn)²) nodes. We improve this reduction by showing how to reduce from k-Clique to k-DSP in DAGs and undirected graphs with O((kn)²) nodes (halving the number of paths needed in the reduced instance). A variant of k-DSP is the k-Disjoint Paths (k-DP) problem, where the solution paths no longer need to be shortest paths. Previous work reduced from k-Clique to p-DP in DAGs with O(kn) nodes, for p = k + k(k-1)/2. We improve this by showing a reduction from k-Clique to p-DP, for p = k + ⌊k²/4⌋.

Under the k-Clique Hypothesis from fine-grained complexity, our results establish better conditional lower bounds for k-DSP for all k ≥ 4, and better conditional lower bounds for p-DP for all p ≤ 4031. Notably, our work gives the first nontrivial conditional lower bounds for 4-DP in DAGs and for 4-DSP in undirected graphs and DAGs. Before our work, nontrivial conditional lower bounds were only known for k-DP and k-DSP on such graphs when k ≥ 6.
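
As a concrete sketch of the problem statement (a solution verifier, not the paper's algebraic linear-time algorithm), the following Python code checks that two candidate paths form a valid 2-DSP solution: each must be a shortest (s_i,t_i)-path and the two must be vertex-disjoint. The adjacency format {u: [(v, w), ...]} is an assumption made for illustration.

import heapq

def dijkstra(graph, s):
    # graph: {u: [(v, w), ...]} with positive edge weights w.
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def path_weight(graph, path):
    # Total weight of the path, or None if some step is not an edge.
    total = 0
    for u, v in zip(path, path[1:]):
        w = dict(graph.get(u, [])).get(v)
        if w is None:
            return None
        total += w
    return total

def is_2dsp_solution(graph, paths, pairs):
    # paths = [p1, p2], pairs = [(s1, t1), (s2, t2)].
    for p, (s, t) in zip(paths, pairs):
        if p[0] != s or p[-1] != t:
            return False
        if path_weight(graph, p) != dijkstra(graph, s).get(t):
            return False
    return set(paths[0]).isdisjoint(paths[1])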

Cite as

Shyan Akmal, Virginia Vassilevska Williams, and Nicole Wein. Detecting Disjoint Shortest Paths in Linear Time and More. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{akmal_et_al:LIPIcs.ICALP.2024.9,
  author =	{Akmal, Shyan and Vassilevska Williams, Virginia and Wein, Nicole},
  title =	{{Detecting Disjoint Shortest Paths in Linear Time and More}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.9},
  URN =		{urn:nbn:de:0030-drops-201529},
  doi =		{10.4230/LIPIcs.ICALP.2024.9},
  annote =	{Keywords: disjoint shortest paths, algebraic graph algorithms, disjoint paths, fine-grained complexity, clique}
}
Document
Track A: Algorithms, Complexity and Games
Additive Spanner Lower Bounds with Optimal Inner Graph Structure

Authors: Greg Bodwin, Gary Hoppenworth, Virginia Vassilevska Williams, Nicole Wein, and Zixuan Xu

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We construct n-node graphs on which any O(n)-size spanner has additive error at least +Ω(n^{3/17}), improving on the previous best lower bound of Ω(n^{1/7}) [Bodwin-Hoppenworth FOCS '22]. Our construction completes the first two steps of a particular three-step research program, introduced in prior work and overviewed here, aimed at producing tight bounds for the problem by aligning aspects of the upper and lower bound constructions. More specifically, we develop techniques that enable the use of inner graphs in the lower bound framework whose technical properties are provably tight with the corresponding assumptions made in the upper bounds. As an additional application of our techniques, we improve the corresponding lower bound for O(n)-size additive emulators to +Ω(n^{1/14}).
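
To make the central object concrete, here is a small Python checker (an illustration only, not part of the paper's construction) that measures the additive error of a candidate spanner: the largest amount by which distances in the subgraph H exceed distances in G, computed by all-pairs BFS over unweighted adjacency lists.

from collections import deque

def bfs_dist(adj, src):
    # adj: list of adjacency lists over vertices 0..n-1 (unweighted).
    n = len(adj)
    dist = [None] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def additive_error(g_adj, h_adj):
    # Max over connected pairs of d_H(u,v) - d_G(u,v); a +beta spanner
    # is exactly a subgraph for which this value is at most beta.
    worst = 0
    for s in range(len(g_adj)):
        dg, dh = bfs_dist(g_adj, s), bfs_dist(h_adj, s)
        for v in range(len(g_adj)):
            if dg[v] is not None:
                d = dh[v] if dh[v] is not None else float('inf')
                worst = max(worst, d - dg[v])
    return worst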

Cite as

Greg Bodwin, Gary Hoppenworth, Virginia Vassilevska Williams, Nicole Wein, and Zixuan Xu. Additive Spanner Lower Bounds with Optimal Inner Graph Structure. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 28:1-28:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bodwin_et_al:LIPIcs.ICALP.2024.28,
  author =	{Bodwin, Greg and Hoppenworth, Gary and Vassilevska Williams, Virginia and Wein, Nicole and Xu, Zixuan},
  title =	{{Additive Spanner Lower Bounds with Optimal Inner Graph Structure}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{28:1--28:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.28},
  URN =		{urn:nbn:de:0030-drops-201715},
  doi =		{10.4230/LIPIcs.ICALP.2024.28},
  annote =	{Keywords: Additive Spanners, Graph Theory}
}
Document
Track A: Algorithms, Complexity and Games
Fast Approximate Counting of Cycles

Authors: Keren Censor-Hillel, Tomer Even, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We consider the problem of approximately counting triangles and longer fixed-length cycles in directed graphs. For triangles, Tětek [ICALP'22] gave an algorithm that returns a (1±ε)-approximation in Õ(n^ω/t^{ω-2}) time, where t is the unknown number of triangles in the given n-node graph and ω < 2.372 is the matrix multiplication exponent. We obtain an improved algorithm whose running time is, within polylogarithmic factors, the same as that for multiplying an n × n/t matrix by an n/t × n matrix. We then extend our framework to obtain the first nontrivial (1±ε)-approximation algorithms for the number of h-cycles in a graph, for any constant h ≥ 3. Our running time is Õ(MM(n, n/t^{1/(h-2)}, n)), the time to multiply an n × n/t^{1/(h-2)} matrix by an n/t^{1/(h-2)} × n matrix. Finally, we show that under popular fine-grained hypotheses, this running time is optimal.

Cite as

Keren Censor-Hillel, Tomer Even, and Virginia Vassilevska Williams. Fast Approximate Counting of Cycles. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 37:1-37:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{censorhillel_et_al:LIPIcs.ICALP.2024.37,
  author =	{Censor-Hillel, Keren and Even, Tomer and Vassilevska Williams, Virginia},
  title =	{{Fast Approximate Counting of Cycles}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{37:1--37:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.37},
  URN =		{urn:nbn:de:0030-drops-201809},
  doi =		{10.4230/LIPIcs.ICALP.2024.37},
  annote =	{Keywords: Approximate triangle counting, Approximate cycle counting, Fast matrix multiplication, Fast rectangular matrix multiplication}
}
Document
On Diameter Approximation in Directed Graphs

Authors: Amir Abboud, Mina Dalirrooyfard, Ray Li, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Computing the diameter of a graph, i.e. the largest distance, is a fundamental problem that is central in fine-grained complexity. In undirected graphs, the Strong Exponential Time Hypothesis (SETH) yields a lower bound on the time vs. approximation trade-off that is quite close to the upper bounds. In directed graphs, however, where only some of the upper bounds apply, much larger gaps remain. Since d(u,v) may not be the same as d(v,u), there are multiple ways to define the problem, the two most natural being the (one-way) diameter (max_{u,v} d(u,v)) and the roundtrip diameter (max_{u,v} d(u,v)+d(v,u)). In this paper we make progress on the outstanding open question for each of them.
- We design the first algorithm for diameter in sparse directed graphs to achieve n^{1.5-ε} time with an approximation factor better than 2. The new upper bound trade-off makes the directed case appear more similar to the undirected case. Notably, this is the first algorithm for diameter in sparse graphs that benefits from fast matrix multiplication.
- We design new hardness reductions separating roundtrip diameter from directed and undirected diameter. In particular, a 1.5-approximation in subquadratic time would refute the All-Nodes k-Cycle hypothesis, and any (2-ε)-approximation would imply a breakthrough algorithm for approximate 𝓁_∞-Closest-Pair. Notably, these are the first conditional lower bounds for diameter that are not based on SETH.
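
As a concrete baseline for the two notions above (an exhaustive O(mn) computation, not the paper's subquadratic approximation algorithms), this Python sketch computes both the one-way and the roundtrip diameter of a directed unweighted graph with one BFS per vertex; it assumes the graph is strongly connected, otherwise the maxima are infinite.

from collections import deque

def bfs(adj, s):
    n = len(adj)
    dist = [float('inf')] * n
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == float('inf'):
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameters(adj):
    # Returns (one-way diameter, roundtrip diameter).
    n = len(adj)
    d = [bfs(adj, s) for s in range(n)]
    one_way = max(d[u][v] for u in range(n) for v in range(n))
    roundtrip = max(d[u][v] + d[v][u] for u in range(n) for v in range(n))
    return one_way, roundtrip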

Cite as

Amir Abboud, Mina Dalirrooyfard, Ray Li, and Virginia Vassilevska Williams. On Diameter Approximation in Directed Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2023.2,
  author =	{Abboud, Amir and Dalirrooyfard, Mina and Li, Ray and Vassilevska Williams, Virginia},
  title =	{{On Diameter Approximation in Directed Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{2:1--2:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.2},
  URN =		{urn:nbn:de:0030-drops-186552},
  doi =		{10.4230/LIPIcs.ESA.2023.2},
  annote =	{Keywords: Diameter, Directed Graphs, Approximation Algorithms, Fine-grained complexity}
}
Document
Faster Detours in Undirected Graphs

Authors: Shyan Akmal, Virginia Vassilevska Williams, Ryan Williams, and Zixuan Xu

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
The k-Detour problem is a basic path-finding problem: given a graph G on n vertices, with specified nodes s and t, and a positive integer k, the goal is to determine if G has an st-path of length exactly dist(s,t) + k, where dist(s,t) is the length of a shortest path from s to t. The k-Detour problem is NP-hard when k is part of the input, so researchers have sought efficient parameterized algorithms for this task, running in f(k)poly(n) time, for f(⋅) as slow-growing as possible.

We present faster algorithms for k-Detour in undirected graphs, running in 1.853^k poly(n) randomized and 4.082^k poly(n) deterministic time. The previous fastest algorithms for this problem took 2.746^k poly(n) randomized and 6.523^k poly(n) deterministic time [Bezáková-Curticapean-Dell-Fomin, ICALP 2017].

Our algorithms use the fact that detecting a path of a given length in an undirected graph is easier if we are promised that the path belongs to what we call a "bipartitioned" subgraph, where the nodes are split into two parts and the path must satisfy constraints on those parts. Previously, this idea was used to obtain the fastest known algorithm for finding paths of length k in undirected graphs [Björklund-Husfeldt-Kaski-Koivisto, JCSS 2017], intuitively by looking for paths of length k in randomly bipartitioned subgraphs. Our algorithms for k-Detour stem from a new application of this idea, which does not involve choosing the bipartitioned subgraphs randomly.

Our work has direct implications for the k-Longest Detour problem, a related path-finding problem. In this problem, we are given the same input as in k-Detour, but are now tasked with determining if G has an st-path of length at least dist(s,t)+k. Our results for k-Detour imply that we can solve k-Longest Detour in 3.432^k poly(n) randomized and 16.661^k poly(n) deterministic time. The previous fastest algorithms for this problem took 7.539^k poly(n) randomized and 42.549^k poly(n) deterministic time [Fomin et al., STACS 2022].
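
To pin down the problem definition (a brute-force checker with exponential worst-case time, nothing like the paper's c^k poly(n) algorithms): BFS gives dist(s,t), after which one can search exhaustively for a simple st-path of length exactly dist(s,t) + k.

from collections import deque

def has_detour(adj, s, t, k):
    # adj: undirected adjacency lists over vertices 0..n-1.
    n = len(adj)
    dist = [None] * n
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    if dist[t] is None:
        return False
    target = dist[t] + k

    def dfs(u, used, length):
        # A simple st-path can visit t only as its final vertex.
        if u == t:
            return length == target
        if length >= target:
            return False
        for v in adj[u]:
            if v not in used:
                used.add(v)
                if dfs(v, used, length + 1):
                    return True
                used.remove(v)
        return False

    return dfs(s, {s}, 0)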

Cite as

Shyan Akmal, Virginia Vassilevska Williams, Ryan Williams, and Zixuan Xu. Faster Detours in Undirected Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{akmal_et_al:LIPIcs.ESA.2023.7,
  author =	{Akmal, Shyan and Vassilevska Williams, Virginia and Williams, Ryan and Xu, Zixuan},
  title =	{{Faster Detours in Undirected Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.7},
  URN =		{urn:nbn:de:0030-drops-186601},
  doi =		{10.4230/LIPIcs.ESA.2023.7},
  annote =	{Keywords: path finding, detours, parameterized complexity, exact algorithms}
}
Document
Approximating Min-Diameter: Standard and Bichromatic

Authors: Aaron Berger, Jenny Kaufmann, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
The min-diameter of a directed graph G is a measure of the largest distance between nodes. It is equal to the maximum min-distance d_{min}(u,v) across all pairs u,v ∈ V(G), where d_{min}(u,v) = min(d(u,v), d(v,u)). Min-diameter approximation in directed graphs has attracted attention recently as an offshoot of the classical and well-studied diameter approximation problem.

Our work provides a 3/2-approximation algorithm for min-diameter in DAGs running in time O(m^{1.426} n^{0.288}), and a faster almost-3/2-approximation variant which runs in time O(m^{0.713} n). (An almost-α-approximation algorithm determines the min-diameter to within a multiplicative factor of α plus constant additive error.) This is the first known algorithm to achieve a 3/2-approximation for min-diameter in sparse DAGs in truly subquadratic time O(m^{2-ε}) for ε > 0; previously only a 2-approximation was known. By a conditional lower bound result of [Abboud et al., SODA 2016], a better than 3/2-approximation cannot be achieved in truly subquadratic time under the Strong Exponential Time Hypothesis (SETH), so our result is conditionally tight. We additionally obtain a new conditional lower bound for min-diameter approximation in general directed graphs, showing that under SETH, one cannot achieve an approximation factor below 2 in truly subquadratic time.

Our work also presents the first study of approximating bichromatic min-diameter, which is the maximum min-distance between oppositely colored vertices in a 2-colored graph. We show that SETH implies that in DAGs, a better than 2-approximation cannot be achieved in truly subquadratic time, and that in general graphs, an approximation within a factor below 5/2 is similarly out of reach. We then obtain an O(m)-time algorithm which determines if the bichromatic min-diameter is finite, and an almost-2-approximation algorithm for bichromatic min-diameter with runtime Õ(min(m^{4/3} n^{1/3}, m^{1/2} n^{3/2})).

Cite as

Aaron Berger, Jenny Kaufmann, and Virginia Vassilevska Williams. Approximating Min-Diameter: Standard and Bichromatic. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 17:1-17:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{berger_et_al:LIPIcs.ESA.2023.17,
  author =	{Berger, Aaron and Kaufmann, Jenny and Vassilevska Williams, Virginia},
  title =	{{Approximating Min-Diameter: Standard and Bichromatic}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{17:1--17:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.17},
  URN =		{urn:nbn:de:0030-drops-186705},
  doi =		{10.4230/LIPIcs.ESA.2023.17},
  annote =	{Keywords: diameter, min distances, fine-grained, approximation algorithm}
}
Document
Hardness of Token Swapping on Trees

Authors: Oswin Aichholzer, Erik D. Demaine, Matias Korman, Anna Lubiw, Jayson Lynch, Zuzana Masárová, Mikhail Rudoy, Virginia Vassilevska Williams, and Nicole Wein

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
Given a graph where every vertex has exactly one labeled token, how can we most quickly execute a given permutation on the tokens? In (sequential) token swapping, the goal is to use the shortest possible sequence of swaps, each of which exchanges the tokens at the two endpoints of an edge of the graph. In parallel token swapping, the goal is to use the fewest rounds, each of which consists of one or more swaps on the edges of a matching. We prove that both of these problems remain NP-hard when the graph is restricted to be a tree.

These token swapping problems have been studied by disparate groups of researchers in discrete mathematics, theoretical computer science, robot motion planning, game theory, and engineering. Previous work establishes NP-completeness on general graphs (for both problems), constant-factor approximation algorithms, and some polynomial-time exact algorithms for simple graph classes such as cliques, stars, paths, and cycles. Sequential and parallel token swapping on trees were first studied over thirty years ago (as "sorting with a transposition tree") and over twenty-five years ago (as "routing permutations via matchings"), yet their complexities were previously unknown.

We also show limitations on approximation of sequential token swapping on trees: we identify a broad class of algorithms that encompasses all three known polynomial-time algorithms achieving the best known approximation factor (which is 2), and show that no algorithm in this class can achieve an approximation factor less than 2.
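
For tiny instances, the sequential version can be solved exactly by breadth-first search over token configurations (one state per placement of the tokens, one move per edge swap). This illustrative sketch is exponential in the graph size and unrelated to the paper's hardness proofs; it recovers, for example, the 3 swaps needed to exchange two leaf tokens on a star.

from collections import deque

def min_token_swaps(edges, start, goal):
    # start/goal: tuples where position v holds the token initially
    # (resp. finally) placed on vertex v.
    start, goal = tuple(start), tuple(goal)
    seen = {start}
    q = deque([(start, 0)])
    while q:
        state, d = q.popleft()
        if state == goal:
            return d
        for u, v in edges:
            nxt = list(state)
            nxt[u], nxt[v] = nxt[v], nxt[u]
            nxt = tuple(nxt)
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, d + 1))
    return None

# Star K_{1,3} with center 0: exchanging two leaf tokens takes 3 swaps.
edges = [(0, 1), (0, 2), (0, 3)]
print(min_token_swaps(edges, (0, 1, 2, 3), (0, 2, 1, 3)))  # 3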

Cite as

Oswin Aichholzer, Erik D. Demaine, Matias Korman, Anna Lubiw, Jayson Lynch, Zuzana Masárová, Mikhail Rudoy, Virginia Vassilevska Williams, and Nicole Wein. Hardness of Token Swapping on Trees. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 3:1-3:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{aichholzer_et_al:LIPIcs.ESA.2022.3,
  author =	{Aichholzer, Oswin and Demaine, Erik D. and Korman, Matias and Lubiw, Anna and Lynch, Jayson and Mas\'{a}rov\'{a}, Zuzana and Rudoy, Mikhail and Vassilevska Williams, Virginia and Wein, Nicole},
  title =	{{Hardness of Token Swapping on Trees}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{3:1--3:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.3},
  URN =		{urn:nbn:de:0030-drops-169413},
  doi =		{10.4230/LIPIcs.ESA.2022.3},
  annote =	{Keywords: Sorting, Token swapping, Trees, NP-hard, Approximation}
}
Document
New Lower Bounds and Upper Bounds for Listing Avoidable Vertices

Authors: Mingyang Deng, Virginia Vassilevska Williams, and Ziqian Zhong

Published in: LIPIcs, Volume 241, 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)


Abstract
We consider the problem of listing all avoidable vertices in a given n-vertex graph. A vertex is avoidable if every pair of its neighbors is connected by a path whose internal vertices are neither neighbors of the vertex nor the vertex itself. Recently, Papadopoulos and Zisis showed that one can list all avoidable vertices in O(n^{ω+1}) time, where ω < 2.373 is the square matrix multiplication exponent, and conjectured that a faster algorithm is not possible. In this paper we show that under the 3-OV Hypothesis, and thus the Strong Exponential Time Hypothesis, n^{3-o(1)} time is needed to list all avoidable vertices, so that the current best algorithm is conditionally optimal if ω = 2. We then show that if ω > 2, one can obtain an improved algorithm that, for the current value of ω, runs in O(n^{3.32}) time. We also show that our conditional lower bound is actually higher and supercubic, under a natural High Dimensional 3-OV hypothesis, implying that for our current knowledge of rectangular matrix multiplication, the avoidable vertex listing problem likely requires Ω(n^{3.25}) time. We obtain further algorithmic improvements for sparse graphs and bounded-degree graphs.
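
A direct check of the definition (a brute-force baseline for illustration, far from the paper's bounds): vertex v is avoidable exactly when, for every pair of its neighbors, some path connects them whose internal vertices all lie outside the closed neighborhood of v.

from collections import deque
from itertools import combinations

def is_avoidable(adj, v):
    # adj: list of adjacency sets over vertices 0..n-1.
    forbidden = set(adj[v]) | {v}
    for a, b in combinations(sorted(adj[v]), 2):
        # BFS from a toward b; only the endpoints may lie in the
        # forbidden set (they are never used as internal vertices).
        seen, q, found = {a}, deque([a]), False
        while q and not found:
            u = q.popleft()
            for w in adj[u]:
                if w == b:
                    found = True
                    break
                if w not in seen and w not in forbidden:
                    seen.add(w)
                    q.append(w)
        if not found:
            return False
    return True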

Cite as

Mingyang Deng, Virginia Vassilevska Williams, and Ziqian Zhong. New Lower Bounds and Upper Bounds for Listing Avoidable Vertices. In 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 241, pp. 41:1-41:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{deng_et_al:LIPIcs.MFCS.2022.41,
  author =	{Deng, Mingyang and Vassilevska Williams, Virginia and Zhong, Ziqian},
  title =	{{New Lower Bounds and Upper Bounds for Listing Avoidable Vertices}},
  booktitle =	{47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)},
  pages =	{41:1--41:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-256-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{241},
  editor =	{Szeider, Stefan and Ganian, Robert and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2022.41},
  URN =		{urn:nbn:de:0030-drops-168392},
  doi =		{10.4230/LIPIcs.MFCS.2022.41},
  annote =	{Keywords: Avoidable Vertex, Fine-Grained Complexity}
}
Document
Track A: Algorithms, Complexity and Games
New Additive Approximations for Shortest Paths and Cycles

Authors: Mingyang Deng, Yael Kirkpatrick, Victor Rong, Virginia Vassilevska Williams, and Ziqian Zhong

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
This paper considers additive approximation algorithms for All-Pairs Shortest Paths (APSP) and Shortest Cycle in undirected unweighted graphs. The results are as follows:
- We obtain the first +2-approximation algorithm for APSP in n-vertex graphs that improves upon Dor, Halperin and Zwick’s (SICOMP'00) Õ(n^{7/3}) time algorithm. The new algorithm runs in Õ(n^{2.29}) time and is obtained via a reduction to Min-Plus product of bounded difference matrices.
- We obtain the first additive approximation scheme for Shortest Cycle, generalizing the approximation algorithms of Itai and Rodeh (SICOMP'78) and Roditty and Vassilevska W. (SODA'12). For every integer r ≥ 0, we give an Õ(n+n^{2+r}/m^r) time algorithm that returns a +(2r+1)-approximate shortest cycle in any n-vertex, m-edge graph.

Cite as

Mingyang Deng, Yael Kirkpatrick, Victor Rong, Virginia Vassilevska Williams, and Ziqian Zhong. New Additive Approximations for Shortest Paths and Cycles. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 50:1-50:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{deng_et_al:LIPIcs.ICALP.2022.50,
  author =	{Deng, Mingyang and Kirkpatrick, Yael and Rong, Victor and Vassilevska Williams, Virginia and Zhong, Ziqian},
  title =	{{New Additive Approximations for Shortest Paths and Cycles}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{50:1--50:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.50},
  URN =		{urn:nbn:de:0030-drops-163919},
  doi =		{10.4230/LIPIcs.ICALP.2022.50},
  annote =	{Keywords: Fine-grained Complexity, Additive Approximation}
}
Document
Track A: Algorithms, Complexity and Games
Listing, Verifying and Counting Lowest Common Ancestors in DAGs: Algorithms and Fine-Grained Lower Bounds

Authors: Surya Mathialagan, Virginia Vassilevska Williams, and Yinzhan Xu

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
The AP-LCA problem asks, given an n-node directed acyclic graph (DAG), to compute for every pair of vertices u and v in the DAG a lowest common ancestor (LCA) of u and v if one exists, i.e. a node that is an ancestor of both u and v such that no proper descendant of it is a common ancestor of u and v. Recently [Grandoni et al. SODA'21] obtained the first sub-n^{2.5} time algorithm for AP-LCA, running in O(n^{2.447}) time. Meanwhile, the only known conditional lower bound for AP-LCA is that the problem requires n^{ω-o(1)} time, where ω is the matrix multiplication exponent. In this paper we study several interesting variants of AP-LCA, providing both algorithms and fine-grained lower bounds for them. The lower bounds we obtain are the first conditional lower bounds for LCA problems higher than n^{ω-o(1)}. Some of our results include:
- In any DAG, we can detect all vertex pairs that have at most two LCAs and list all of their LCAs in O(n^ω) time. This algorithm extends a result of [Kowaluk and Lingas ESA'07], which showed an Õ(n^ω) time algorithm that detects all pairs with a unique LCA in a DAG and outputs their corresponding LCAs.
- Listing 7 LCAs per vertex pair in DAGs requires n^{3-o(1)} time under the popular assumption that 3-uniform 5-hyperclique detection requires n^{5-o(1)} time. This is surprising since essentially cubic time is sufficient to list all LCAs (if ω = 2).
- Counting the number of LCAs for every vertex pair in a DAG requires n^{3-o(1)} time under the Strong Exponential Time Hypothesis, and n^{ω(1,2,1)-o(1)} time under the 4-Clique hypothesis. This shows that the algorithm of [Eckhardt, Mühling and Nowak ESA'07] for listing all LCAs for every pair of vertices is likely optimal.
- Given a DAG and a vertex w_{u,v} for every vertex pair u,v, verifying whether all w_{u,v} are valid LCAs requires n^{2.5-o(1)} time, assuming that 3-uniform 4-hyperclique detection requires n^{4-o(1)} time. This defies the common intuition that verification is easier than computation, since returning some LCA per vertex pair can be solved in O(n^{2.447}) time.
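
As a sketch of the definitions only (a brute-force baseline via explicit descendant sets, not the paper's algorithms), the following code lists every pair's LCAs in a small DAG; treating each vertex as an ancestor of itself is an assumption made here for illustration.

def all_lcas(n, edges):
    # edges: list of (u, v) arcs of a DAG on vertices 0..n-1.
    children = [[] for _ in range(n)]
    for u, v in edges:
        children[u].append(v)
    desc = [None] * n  # desc[u]: vertices reachable from u, u included

    def reach(u):
        if desc[u] is None:
            desc[u] = {u}
            for c in children[u]:
                desc[u] |= reach(c)
        return desc[u]

    for u in range(n):
        reach(u)
    lcas = {}
    for u in range(n):
        for v in range(n):
            common = {a for a in range(n) if u in desc[a] and v in desc[a]}
            # Keep the common ancestors none of whose proper
            # descendants are also common ancestors.
            lcas[(u, v)] = {a for a in common
                            if not any(b != a and b in desc[a] for b in common)}
    return lcas

# Diamond DAG 0 -> {1, 2} -> 3: the unique LCA of (1, 2) is 0.
print(all_lcas(4, [(0, 1), (0, 2), (1, 3), (2, 3)])[(1, 2)])  # {0}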

Cite as

Surya Mathialagan, Virginia Vassilevska Williams, and Yinzhan Xu. Listing, Verifying and Counting Lowest Common Ancestors in DAGs: Algorithms and Fine-Grained Lower Bounds. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 94:1-94:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{mathialagan_et_al:LIPIcs.ICALP.2022.94,
  author =	{Mathialagan, Surya and Vassilevska Williams, Virginia and Xu, Yinzhan},
  title =	{{Listing, Verifying and Counting Lowest Common Ancestors in DAGs: Algorithms and Fine-Grained Lower Bounds}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{94:1--94:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.94},
  URN =		{urn:nbn:de:0030-drops-164359},
  doi =		{10.4230/LIPIcs.ICALP.2022.94},
  annote =	{Keywords: All-Pairs Lowest Common Ancestor, Fine-Grained Complexity}
}
Document
Dynamic Matching Algorithms Under Vertex Updates

Authors: Hung Le, Lazar Milenković, Shay Solomon, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Dynamic graph matching algorithms have been extensively studied, but mostly under edge updates. This paper concerns dynamic matching algorithms under vertex updates, where in each update step a single vertex is either inserted or deleted along with its incident edges.

A basic setting arising in online algorithms and studied by Bosek et al. [FOCS'14] and Bernstein et al. [SODA'18] is that of dynamic approximate maximum cardinality matching (MCM) in bipartite graphs in which one side is fixed and vertices on the other side either arrive or depart via vertex updates. In the BASIC-incremental setting, vertices only arrive, while in the BASIC-decremental setting vertices only depart. When vertices can both arrive and depart, we have the BASIC-dynamic setting. In this paper we also consider the setting in which both sides of the bipartite graph are dynamic. We call this the MEDIUM-dynamic setting, and MEDIUM-decremental is the restriction where vertices can only depart. The GENERAL-dynamic setting is when the graph is not necessarily bipartite and vertices can both depart and arrive.

Denote by K the total number of edges inserted into and deleted from the graph throughout the entire update sequence. The recourse of a dynamic matching algorithm, a well-studied measure, is the number of changes made to the matching per step. We largely focus on Maximal Matching (MM), which is a 2-approximation to the MCM. Our main results are as follows.
- In the BASIC-dynamic setting, there is a straightforward algorithm for maintaining an MM, with a total runtime of O(K) and constant worst-case recourse. In fact, this algorithm never removes an edge from the matching; we refer to such an algorithm as irrevocable (a simplified sketch of this idea follows the abstract).
- For the MEDIUM-dynamic setting we give a strong conditional lower bound that even holds in the MEDIUM-decremental setting: if for any fixed η > 0 there is an irrevocable decremental MM algorithm with a total runtime of O(K ⋅ n^{1-η}), this would refute the OMv conjecture; a similar (but weaker) hardness result can be achieved via a reduction from the Triangle Detection conjecture.
- Next, we consider the GENERAL-dynamic setting, and design an MM algorithm with a total runtime of O(K) and constant worst-case recourse. We achieve this result via a 1-revocable algorithm, which may remove just one edge per update step. As argued above, an irrevocable algorithm with such a runtime is not likely to exist.
- Finally, back in the BASIC-dynamic setting, we present an algorithm with a total runtime of O(K) which provides an (e/(e-1))-approximation to the MCM. To this end, we build on the classic "ranking" online algorithm by Karp et al. [STOC'90].

Beyond these results, our work draws connections between the areas of dynamic graph algorithms and online algorithms, and it proposes several open questions that seem to have been overlooked thus far.
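
Below is a simplified sketch, under assumptions, of the straightforward BASIC-dynamic idea referenced in the first result; the class name and interface are hypothetical, and the paper's analysis achieving O(K) total time and constant worst-case recourse is more careful than this illustration.

class BasicDynamicMM:
    # Maximal matching where one side of the bipartite graph is fixed
    # and vertices on the other side arrive/depart with their edges.
    def __init__(self):
        self.nbrs = {}   # dynamic vertex -> set of fixed-side neighbors
        self.match = {}  # matched partner, stored in both directions

    def insert(self, v, fixed_neighbors):
        self.nbrs[v] = set(fixed_neighbors)
        self._match_to_free(v, self.nbrs[v])

    def delete(self, v):
        u = self.match.pop(v, None)
        self.nbrs.pop(v)
        if u is not None:
            del self.match[u]
            # Restore maximality: re-match the freed fixed vertex u to
            # an unmatched dynamic neighbor, if any. No matched edge is
            # ever removed voluntarily (the irrevocability property).
            for w, es in self.nbrs.items():
                if w not in self.match and u in es:
                    self.match[w] = u
                    self.match[u] = w
                    break

    def _match_to_free(self, v, candidates):
        for u in candidates:
            if u not in self.match:
                self.match[v] = u
                self.match[u] = v
                return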

Cite as

Hung Le, Lazar Milenković, Shay Solomon, and Virginia Vassilevska Williams. Dynamic Matching Algorithms Under Vertex Updates. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 96:1-96:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{le_et_al:LIPIcs.ITCS.2022.96,
  author =	{Le, Hung and Milenkovi\'{c}, Lazar and Solomon, Shay and Vassilevska Williams, Virginia},
  title =	{{Dynamic Matching Algorithms Under Vertex Updates}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{96:1--96:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.96},
  URN =		{urn:nbn:de:0030-drops-156923},
  doi =		{10.4230/LIPIcs.ITCS.2022.96},
  annote =	{Keywords: maximal matching, approximate matching, dynamic algorithm, vertex updates}
}
Document
Track A: Algorithms, Complexity and Games
Fine-Grained Hardness for Edit Distance to a Fixed Sequence

Authors: Amir Abboud and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
Nearly all quadratic lower bounds conditioned on the Strong Exponential Time Hypothesis (SETH) start by reducing k-SAT to the Orthogonal Vectors (OV) problem: given two sets A, B of n binary vectors, decide if there is an orthogonal pair a ∈ A, b ∈ B. In this paper, we give an alternative reduction in which the set A does not depend on the input to k-SAT; thus, the quadratic lower bound for OV holds even if one of the sets is fixed in advance. Using the reductions in the literature from OV to other problems, such as computing similarity measures on strings, we get hardness results of a stronger kind: there is a family of sequences {S_n}_{n=1}^{∞} with |S_n| = n, such that computing the Edit Distance between an input sequence X of length n and the (fixed) sequence S_n requires n^{2-o(1)} time under SETH.
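
For concreteness, here is the trivial quadratic-time algorithm for OV (a sketch of the problem itself, not of the paper's reduction); the OV hypothesis discussed above asserts that no n^{2-ε} poly(d) time algorithm exists for d-dimensional inputs.

def orthogonal_pair(A, B):
    # A, B: lists of 0/1 tuples; returns an orthogonal pair or None.
    for a in A:
        for b in B:
            if all(x & y == 0 for x, y in zip(a, b)):
                return a, b
    return None

A = [(1, 0, 1, 0), (0, 1, 1, 0)]
B = [(1, 1, 0, 0), (0, 0, 0, 1)]
print(orthogonal_pair(A, B))  # ((1, 0, 1, 0), (0, 0, 0, 1))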

Cite as

Amir Abboud and Virginia Vassilevska Williams. Fine-Grained Hardness for Edit Distance to a Fixed Sequence. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ICALP.2021.7,
  author =	{Abboud, Amir and Vassilevska Williams, Virginia},
  title =	{{Fine-Grained Hardness for Edit Distance to a Fixed Sequence}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.7},
  URN =		{urn:nbn:de:0030-drops-140768},
  doi =		{10.4230/LIPIcs.ICALP.2021.7},
  annote =	{Keywords: SAT, edit distance, fine-grained complexity, conditional lower bound, sequence alignment}
}
Document
Track A: Algorithms, Complexity and Games
Improved Approximation for Longest Common Subsequence over Small Alphabets

Authors: Shyan Akmal and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
This paper investigates the approximability of the Longest Common Subsequence (LCS) problem. The fastest algorithm for solving the LCS problem exactly runs in essentially quadratic time in the length of the input, and it is known that under the Strong Exponential Time Hypothesis the quadratic running time cannot be beaten. No such limitations are known for approximate computation of the LCS, however, except in some limited scenarios, and approximation algorithms themselves are scarce. When the two given strings are over an alphabet of size k, returning the subsequence formed by the most frequent symbol occurring in both strings achieves a 1/k approximation for the LCS. It is an open problem whether a better than 1/k approximation can be achieved in truly subquadratic time (O(n^{2-δ}) time for constant δ > 0). A recent result [Rubinstein and Song, SODA'20] showed that a 1/2+ε approximation for the LCS over a binary alphabet is possible in truly subquadratic time, provided the input strings have the same length. In this paper we show that if a 1/2+ε approximation (for ε > 0) is achievable for binary LCS in truly subquadratic time when the input strings can be unequal, then for every constant k, there is a truly subquadratic time algorithm that achieves a 1/k+δ approximation for k-ary alphabet LCS for some δ > 0. Thus the binary case is the hardest. We also show that for every constant k, if one is given two strings of equal length over a k-ary alphabet, one can obtain a 1/k+ε approximation for some constant ε > 0 in truly subquadratic time, thus extending the Rubinstein and Song result to all alphabets of constant size.
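
A minimal sketch of the 1/k baseline described above: pick the symbol maximizing its minimum frequency in the two strings. Since some symbol accounts for at least a 1/k fraction of any common subsequence over a size-k alphabet, the returned string has length at least LCS(x,y)/k.

from collections import Counter

def one_over_k_lcs(x, y):
    cx, cy = Counter(x), Counter(y)
    common = cx.keys() & cy.keys()
    if not common:
        return ''
    best = max(common, key=lambda s: min(cx[s], cy[s]))
    return best * min(cx[best], cy[best])

print(one_over_k_lcs("aab", "aaba"))  # "aa"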

Cite as

Shyan Akmal and Virginia Vassilevska Williams. Improved Approximation for Longest Common Subsequence over Small Alphabets. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 13:1-13:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{akmal_et_al:LIPIcs.ICALP.2021.13,
  author =	{Akmal, Shyan and Vassilevska Williams, Virginia},
  title =	{{Improved Approximation for Longest Common Subsequence over Small Alphabets}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{13:1--13:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.13},
  URN =		{urn:nbn:de:0030-drops-140821},
  doi =		{10.4230/LIPIcs.ICALP.2021.13},
  annote =	{Keywords: approximation algorithms, longest common subsequence, subquadratic}
}
Document
Track A: Algorithms, Complexity and Games
Algorithms, Reductions and Equivalences for Small Weight Variants of All-Pairs Shortest Paths

Authors: Timothy M. Chan, Virginia Vassilevska Williams, and Yinzhan Xu

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
All-Pairs Shortest Paths (APSP) is one of the most well-studied problems in graph algorithms. This paper studies several variants of APSP in unweighted graphs or graphs with small integer weights. APSP with small integer weights in undirected graphs [Seidel'95, Galil and Margalit'97] has an Õ(n^ω) time algorithm, where ω < 2.373 is the matrix multiplication exponent. APSP in directed graphs with small weights, however, has a much slower running time that would be Ω(n^{2.5}) even if ω = 2 [Zwick'02]. To understand this n^{2.5} bottleneck, we build a web of reductions around directed unweighted APSP. We show that it is fine-grained equivalent to computing a rectangular Min-Plus product for matrices with integer entries; the dimensions and entry size of the matrices depend on the value of ω.

As a consequence, we establish an equivalence between APSP in directed unweighted graphs, APSP in directed graphs with small (Õ(1)) integer weights, All-Pairs Longest Paths in DAGs with small weights, cRed-APSP in undirected graphs with small weights, for any c ≥ 2 (computing all-pairs shortest path distances among paths that use at most c red edges), #_{≤ c}APSP in directed graphs with small weights (counting the number of shortest paths for each vertex pair, up to c), and approximate APSP with additive error c in directed graphs with small weights, for c ≤ Õ(1). We also provide fine-grained reductions from directed unweighted APSP to All-Pairs Shortest Lightest Paths (APSLP) in undirected graphs with {0,1} weights and to #_{mod c}APSP in directed unweighted graphs (computing counts mod c), thus showing that unless the current algorithms for APSP in directed unweighted graphs can be improved substantially, these problems need at least Ω(n^{2.528}) time.

We complement our hardness results with new algorithms. We improve the known algorithms for APSLP in directed graphs with small integer weights (previously studied by Zwick [STOC'99]) and for approximate APSP with sublinear additive error in directed unweighted graphs (previously studied by Roditty and Shapira [ICALP'08]). Our algorithm for approximate APSP with sublinear additive error is optimal when viewed as a reduction to Min-Plus product. We also give new algorithms for variants of #APSP (such as #_{≤ U}APSP and #_{mod U}APSP for U ≤ n^{Õ(1)}) in unweighted graphs, as well as a near-optimal Õ(n³)-time algorithm for the original #APSP problem in unweighted graphs (when counts may be exponentially large). This also implies an Õ(n³)-time algorithm for Betweenness Centrality, improving on the previous Õ(n⁴) running time for the problem. Our techniques also lead to a simpler alternative to Shoshan and Zwick’s algorithm [FOCS'99] for the original APSP problem in undirected graphs with small integer weights.
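
For reference, the Min-Plus product at the heart of these equivalences, as a direct cubic-time Python sketch; the paper concerns much faster algorithms for rectangular and bounded-entry variants of this product.

def min_plus(A, B):
    # C[i][j] = min over k of A[i][k] + B[k][j]. With A = B = the
    # weighted adjacency matrix, iterating this product yields APSP.
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]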

Cite as

Timothy M. Chan, Virginia Vassilevska Williams, and Yinzhan Xu. Algorithms, Reductions and Equivalences for Small Weight Variants of All-Pairs Shortest Paths. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 47:1-47:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chan_et_al:LIPIcs.ICALP.2021.47,
  author =	{Chan, Timothy M. and Vassilevska Williams, Virginia and Xu, Yinzhan},
  title =	{{Algorithms, Reductions and Equivalences for Small Weight Variants of All-Pairs Shortest Paths}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{47:1--47:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.47},
  URN =		{urn:nbn:de:0030-drops-141166},
  doi =		{10.4230/LIPIcs.ICALP.2021.47},
  annote =	{Keywords: All-Pairs Shortest Paths, Fine-Grained Complexity, Graph Algorithm}
}
Document
Track A: Algorithms, Complexity and Games
Faster Monotone Min-Plus Product, Range Mode, and Single Source Replacement Paths

Authors: Yuzhou Gu, Adam Polak, Virginia Vassilevska Williams, and Yinzhan Xu

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
One of the most basic graph problems, All-Pairs Shortest Paths (APSP), is known to be solvable in n^{3-o(1)} time, and it is widely open whether it has an O(n^{3-ε}) time algorithm for ε > 0. To better understand APSP, one often strives to obtain subcubic time algorithms for structured instances of APSP and problems equivalent to it, such as the Min-Plus matrix product. A natural structured version of Min-Plus product is Monotone Min-Plus product, which has been studied in the context of the Batch Range Mode [SODA'20] and Dynamic Range Mode [ICALP'20] problems. This paper improves the known algorithms for Monotone Min-Plus Product and for Batch and Dynamic Range Mode, and establishes a connection between Monotone Min-Plus Product and the Single Source Replacement Paths (SSRP) problem on an n-vertex graph with potentially negative edge weights in {-M, …, M}. SSRP with positive integer edge weights bounded by M can be solved in Õ(Mn^ω) time, whereas the prior fastest algorithm for graphs with possibly negative weights [FOCS'12] runs in O(M^{0.7519} n^{2.5286}) time, matching the current best running time for directed APSP with small integer weights. Using Monotone Min-Plus Product, we obtain an improved O(M^{0.8043} n^{2.4957}) time SSRP algorithm, showing that SSRP with constant negative integer weights is likely easier than directed unweighted APSP, a problem that is believed to require n^{2.5-o(1)} time. Complementing our algorithm for SSRP, we give a reduction from the Bounded-Difference Min-Plus Product problem studied by Bringmann et al. [FOCS'16] to negative-weight SSRP. This reduction shows that it might be difficult to obtain an Õ(Mn^ω) time algorithm for SSRP with negative weight edges, thus separating the problem from SSRP with only positive weight edges.

Cite as

Yuzhou Gu, Adam Polak, Virginia Vassilevska Williams, and Yinzhan Xu. Faster Monotone Min-Plus Product, Range Mode, and Single Source Replacement Paths. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 75:1-75:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{gu_et_al:LIPIcs.ICALP.2021.75,
  author =	{Gu, Yuzhou and Polak, Adam and Vassilevska Williams, Virginia and Xu, Yinzhan},
  title =	{{Faster Monotone Min-Plus Product, Range Mode, and Single Source Replacement Paths}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{75:1--75:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.75},
  URN =		{urn:nbn:de:0030-drops-141440},
  doi =		{10.4230/LIPIcs.ICALP.2021.75},
  annote =	{Keywords: APSP, Min-Plus Product, Range Mode, Single-Source Replacement Paths}
}
Document
Invited Talk
3SUM and Related Problems in Fine-Grained Complexity (Invited Talk)

Authors: Virginia Vassilevska Williams

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
3SUM is a simple-to-state problem: given a set S of n numbers, determine whether S contains three elements a, b, c so that a+b+c = 0. The fastest algorithms for the problem run in n² poly(log log n)/(log n)² time, both when the input numbers are integers [Ilya Baran et al., 2005] (in the word RAM model with O(log n) bit words) and when they are real numbers [Timothy M. Chan, 2020] (in the real RAM model). A hypothesis that is now central in Fine-Grained Complexity (FGC) states that 3SUM requires n^{2-o(1)} time (on the real RAM for real inputs and on the word RAM with O(log n) bit numbers for integer inputs). This hypothesis was first used in Computational Geometry by Gajentaan and Overmars [A. Gajentaan and M. Overmars, 1995], who built a web of reductions showing that many geometric problems are hard, assuming that 3SUM is hard. The web of reductions within computational geometry has grown considerably since then (see some citations in [V. Vassilevska Williams, 2018]).

A seminal paper by Pǎtraşcu [Mihai Pǎtraşcu, 2010] showed that the integer version of the 3SUM hypothesis can be used to prove polynomial conditional lower bounds for several problems in data structures and graph algorithms as well, extending the implications of the hypothesis beyond computational geometry. Pǎtraşcu proved an important tight equivalence between (integer) 3SUM and a problem called 3SUM-Convolution (see also [Timothy M. Chan and Qizheng He, 2020]) that is easier to use in reductions: given an integer array a of length n, do there exist i,j ∈ [n] so that a[i]+a[j] = a[i+j]? From 3SUM-Convolution, many 3SUM-based hardness results have been proven: e.g. for listing triangles in a graph, dynamically maintaining shortest paths or bipartite matching, subset intersection, and many more.

It is interesting to consider more runtime-equivalent formulations of 3SUM, with the goal of uncovering more relationships to different problems. The talk will outline some such equivalences. For instance, 3SUM (over the reals or the integers) is equivalent to All-Numbers-3SUM: given a set S of n numbers, determine for every a ∈ S whether there are b,c ∈ S with a+b+c = 0 (e.g. [V. Vassilevska Williams and R. Williams, 2018]). The equivalences between 3SUM, 3SUM-Convolution and All-Numbers-3SUM are (n²,n²)-fine-grained equivalences, which imply that if there is an O(n^{2-ε}) time algorithm for one of the problems for ε > 0, then there is also an O(n^{2-ε'}) time algorithm for the other problems for some ε' > 0. More generally, for functions a(n), b(n), there is an (a,b)-fine-grained reduction [V. Vassilevska Williams, 2018; V. Vassilevska Williams and R. Williams, 2010; V. Vassilevska Williams and R. Williams, 2018] from problem A to problem B if for every ε > 0 there is a δ > 0 and an O(a(n)^{1-δ}) time algorithm for A that does oracle calls to instances of B of sizes n₁,…,n_k (for some k) so that ∑_{j = 1}^k b(n_j)^{1-ε} ≤ a(n)^{1-δ}. With such a reduction, an O(b(n)^{1-ε}) time algorithm for B can be converted into an O(a(n)^{1-δ}) time algorithm for A by replacing the oracle calls by calls to the B algorithm. A and B are (a,b)-fine-grained equivalent if A (a,b)-reduces to B and B (b,a)-reduces to A.

One of the main open problems in FGC is to determine the relationship between 3SUM and the other central FGC problems, in particular All-Pairs Shortest Paths (APSP). A classical graph problem, APSP in n-node graphs has been known to be solvable in O(n³) time since the 1950s. Its fastest known algorithm runs in n³/exp(√{log n}) time [Ryan Williams, 2014]. The APSP Hypothesis states that n^{3-o(1)} time is needed to solve APSP in graphs with integer edge weights in the word-RAM model with O(log n) bit words. It is unknown whether APSP and 3SUM are fine-grained reducible to each other, in either direction. The two problems are very similar. Problems such as (min,+)-convolution (believed to require n^{2-o(1)} time) have tight fine-grained reductions to both APSP and 3SUM, and both 3SUM and APSP have tight fine-grained reductions to problems such as Exact Triangle [V. Vassilevska Williams and R. Williams, 2018; V. Vassilevska and R. Williams, 2009; V. Vassilevska Williams and Ryan Williams, 2013] and (since very recently) Listing Triangles in sparse graphs [Mihai Pǎtraşcu, 2010; Tsvi Kopelowitz et al., 2016; V. Vassilevska Williams and Yinzhan Xu, 2020]. The talk will discuss these relationships and some of their implications, e.g. to dynamic algorithms.
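
For concreteness, the classic quadratic-time 3SUM algorithm (sort, then scan with two pointers), which the 3SUM hypothesis asserts is essentially optimal up to subpolynomial factors; this sketch reports a witness triple taken from three distinct positions, if one exists.

def three_sum(S):
    s = sorted(S)
    n = len(s)
    for i in range(n):
        lo, hi = i + 1, n - 1
        while lo < hi:
            t = s[i] + s[lo] + s[hi]
            if t == 0:
                return s[i], s[lo], s[hi]
            if t < 0:
                lo += 1
            else:
                hi -= 1
    return None

print(three_sum([5, -2, 9, -7, 3]))  # (-7, -2, 9)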

Cite as

Virginia Vassilevska Williams. 3SUM and Related Problems in Fine-Grained Complexity (Invited Talk). In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 2:1-2:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{vassilevskawilliams:LIPIcs.SoCG.2021.2,
  author =	{Vassilevska Williams, Virginia},
  title =	{{3SUM and Related Problems in Fine-Grained Complexity}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{2:1--2:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.2},
  URN =		{urn:nbn:de:0030-drops-138014},
  doi =		{10.4230/LIPIcs.SoCG.2021.2},
  annote =	{Keywords: fine-grained complexity}
}
Document
Distributed Distance Approximation

Authors: Bertie Ancona, Keren Censor-Hillel, Mina Dalirrooyfard, Yuval Efron, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 184, 24th International Conference on Principles of Distributed Systems (OPODIS 2020)


Abstract
Diameter, radius and eccentricities are fundamental graph parameters, which are extensively studied in various computational settings. Typically, computing approximate answers can be much more efficient than computing exact solutions. In this paper, we give a near-complete characterization of the trade-offs between approximation ratios and round complexity of distributed algorithms for approximating these parameters, with a focus on the weighted and directed variants. Furthermore, we study bi-chromatic variants of these parameters, defined on a graph whose vertices are colored either red or blue, where one focuses only on distances between pairs of vertices that are colored differently. Motivated by applications in computational geometry, bi-chromatic diameter, radius and eccentricities have been recently studied in the sequential setting [Backurs et al. STOC'18, Dalirrooyfard et al. ICALP'19]. We provide the first distributed upper and lower bounds for such problems. Our technical contributions include introducing the notion of an approximate pseudo-center, which extends the pseudo-centers of [Choudhary and Gold SODA'20], and presenting an efficient distributed algorithm for computing approximate pseudo-centers. On the lower bound side, our constructions introduce the usage of new functions into the framework of reductions from 2-party communication complexity to distributed algorithms.

Cite as

Bertie Ancona, Keren Censor-Hillel, Mina Dalirrooyfard, Yuval Efron, and Virginia Vassilevska Williams. Distributed Distance Approximation. In 24th International Conference on Principles of Distributed Systems (OPODIS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 184, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{ancona_et_al:LIPIcs.OPODIS.2020.30,
  author =	{Ancona, Bertie and Censor-Hillel, Keren and Dalirrooyfard, Mina and Efron, Yuval and Vassilevska Williams, Virginia},
  title =	{{Distributed Distance Approximation}},
  booktitle =	{24th International Conference on Principles of Distributed Systems (OPODIS 2020)},
  pages =	{30:1--30:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-176-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{184},
  editor =	{Bramas, Quentin and Oshman, Rotem and Romano, Paolo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2020.30},
  URN =		{urn:nbn:de:0030-drops-135150},
  doi =		{10.4230/LIPIcs.OPODIS.2020.30},
  annote =	{Keywords: Distributed Computing, Distance Computation, Algorithms, Lower Bounds}
}
Document
Track A: Algorithms, Complexity and Games
Conditionally Optimal Approximation Algorithms for the Girth of a Directed Graph

Authors: Mina Dalirrooyfard and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
The girth is one of the most basic graph parameters, and its computation has been studied for many decades. Under widely believed fine-grained assumptions, computing the girth exactly is known to require mn^{1-o(1)} time, both in sparse and dense m-edge, n-node graphs, motivating the search for fast approximations. Fast good-quality approximation algorithms for undirected graphs have been known for decades. For the girth in directed graphs, until recently the only constant-factor approximation algorithms ran in O(n^ω) time, where ω < 2.373 is the matrix multiplication exponent. These algorithms have two drawbacks: (1) they only offer an improvement over the mn running time for dense graphs, and (2) the current fast matrix multiplication methods are impractical. The first constant-factor approximation algorithm that runs in O(mn^{1-ε}) time for some ε > 0 and all sparsities m was only recently obtained by Chechik et al. [STOC 2020]; it is also combinatorial. It is known that a better-than-2 approximation algorithm for the girth in dense directed unweighted graphs needs n^{3-o(1)} time unless one uses fast matrix multiplication. Meanwhile, the best known approximation factor for a combinatorial algorithm running in O(mn^{1-ε}) time (by Chechik et al.) is 3. Is the true answer 2 or 3? The main result of this paper is a (conditionally) tight approximation algorithm for directed graphs. First, we show that under a popular hardness assumption, any algorithm, even one that exploits fast matrix multiplication, would need to take at least mn^{1-o(1)} time for some sparsity m if it achieves a (2-ε)-approximation for any ε > 0. Second, we give a 2-approximation algorithm for the girth of unweighted graphs running in Õ(mn^{3/4}) time, and a (2+ε)-approximation algorithm (for any ε > 0) that works in weighted graphs and runs in Õ(m√n) time. Our algorithms are combinatorial. We also obtain a (4+ε)-approximation of the girth running in Õ(mn^{√2-1}) time, improving upon the previous best Õ(m√n) running time by Chechik et al. Finally, we consider the computation of roundtrip spanners. We obtain a (5+ε)-approximate roundtrip spanner on Õ(n^{1.5}/ε²) edges in Õ(m√n/ε²) time. This improves upon the previous approximation factor of (8+ε) of Chechik et al. for the same running time.
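For context on the mn-type bounds above, the exact girth of an unweighted digraph can be computed in O(n(m+n)) time with one breadth-first search per vertex: the shortest cycle through a vertex s has length dist(s,u)+1 for some in-neighbor u of s. A minimal sketch of this standard baseline (not the paper's algorithm; self-loops assumed absent):

from collections import deque

def directed_girth(n, adj):
    # adj[u]: list of out-neighbours of vertex u (0..n-1).
    radj = [[] for _ in range(n)]        # in-neighbours, built once
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)
    best = float("inf")
    for s in range(n):
        dist = [-1] * n                  # BFS from s
        dist[s] = 0
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if dist[y] == -1:
                    dist[y] = dist[x] + 1
                    q.append(y)
        for u in radj[s]:                # edge (u, s) closes s -> ... -> u -> s
            if u != s and dist[u] != -1:
                best = min(best, dist[u] + 1)
    return best                          # inf if the graph is acyclic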

Cite as

Mina Dalirrooyfard and Virginia Vassilevska Williams. Conditionally Optimal Approximation Algorithms for the Girth of a Directed Graph. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 35:1-35:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{dalirrooyfard_et_al:LIPIcs.ICALP.2020.35,
  author =	{Dalirrooyfard, Mina and Vassilevska Williams, Virginia},
  title =	{{Conditionally Optimal Approximation Algorithms for the Girth of a Directed Graph}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{35:1--35:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.35},
  URN =		{urn:nbn:de:0030-drops-124421},
  doi =		{10.4230/LIPIcs.ICALP.2020.35},
  annote =	{Keywords: Shortest cycle, Girth, Graph algorithms, Approximation algorithms, Fine-grained complexity, Roundtrip Spanner}
}
Document
Track A: Algorithms, Complexity and Games
Towards Optimal Set-Disjointness and Set-Intersection Data Structures

Authors: Tsvi Kopelowitz and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
In the online set-disjointness problem the goal is to preprocess a family of sets ℱ, so that given two sets S,S' ∈ ℱ, one can quickly establish whether the two sets are disjoint or not. If N = ∑_{S ∈ ℱ} |S|, then let N^p be the preprocessing time and let N^q be the query time. The most efficient known combinatorial algorithm is a generalization of an algorithm by Cohen and Porat [TCS'10], which has a tradeoff curve of p+q = 2. Kopelowitz, Pettie, and Porat [SODA'16] showed that, based on the 3SUM hypothesis, there is a conditional lower bound curve of p+2q ≥ 2. Thus, the current state of the art exhibits a large gap. The online set-intersection problem is the reporting version of the online set-disjointness problem: given a query, the goal is to report all of the elements in the intersection. When considering algorithms with N^p preprocessing time and N^q + O(op) query time, where op is the size of the output, the combinatorial algorithm for online set-disjointness can be extended to solve online set-intersection with a tradeoff curve of p+q = 2. Kopelowitz, Pettie, and Porat [SODA'16] showed that, assuming the 3SUM hypothesis, for 0 ≤ q ≤ 2/3 this curve is tight. However, for 2/3 ≤ q < 1 there is no known lower bound. In this paper we close both gaps by showing the following: - For online set-disjointness we design an algorithm whose runtime, assuming ω = 2 (where ω is the exponent in the fastest matrix multiplication algorithm), matches the lower bound curve of Kopelowitz et al. for q ≤ 1/3. We then complement the new algorithm by a matching conditional lower bound for q > 1/3, which is based on a natural hypothesis on the time required to detect a triangle in an unbalanced tripartite graph. Remarkably, even if ω > 2, the algorithm matches the lower bound curve of Kopelowitz et al. for p ≥ 1.73688 and q ≤ 0.13156. - For set-intersection, we prove a conditional lower bound that matches the combinatorial upper bound curve for q ≥ 1/2, which is based on a hypothesis on the time required to enumerate all triangles in an unbalanced tripartite graph. - Finally, we design algorithms for detecting and enumerating triangles in unbalanced tripartite graphs which match the lower bounds of the corresponding hypotheses, assuming ω = 2.
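To make the N^p/N^q tradeoff concrete, here is a simplified sketch in the spirit of the combinatorial algorithm credited to Cohen and Porat above (illustrative names; not the paper's new algorithm). Sets larger than a threshold t are "heavy"; all heavy-heavy answers are precomputed, and any query touching a light set is answered by brute force:

def _disjoint(S, T):
    small, big = (S, T) if len(S) <= len(T) else (T, S)
    return not any(x in big for x in small)   # O(min(|S|, |T|)) probes

def preprocess(family, t):
    # With N = sum of set sizes there are at most N/t heavy sets, and
    # filling the whole table costs roughly O(N^2 / t).
    heavy = {i for i, S in enumerate(family) if len(S) > t}
    table = {(i, j): _disjoint(family[i], family[j])
             for i in heavy for j in heavy if i < j}
    return heavy, table

def query(family, heavy, table, i, j):
    if i in heavy and j in heavy:
        return table[(min(i, j), max(i, j))]  # O(1) table lookup
    return _disjoint(family[i], family[j])    # O(t): one side is light

Setting t = N^q gives roughly N^{2-q} preprocessing and N^q query time, i.e. the p+q = 2 curve.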

Cite as

Tsvi Kopelowitz and Virginia Vassilevska Williams. Towards Optimal Set-Disjointness and Set-Intersection Data Structures. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 74:1-74:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kopelowitz_et_al:LIPIcs.ICALP.2020.74,
  author =	{Kopelowitz, Tsvi and Vassilevska Williams, Virginia},
  title =	{{Towards Optimal Set-Disjointness and Set-Intersection Data Structures}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{74:1--74:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.74},
  URN =		{urn:nbn:de:0030-drops-124813},
  doi =		{10.4230/LIPIcs.ICALP.2020.74},
  annote =	{Keywords: Set-disjointness data structures, Triangle detection, Triangle enumeration, Fine-grained complexity, Fast matrix multiplication}
}
Document
Monochromatic Triangles, Intermediate Matrix Products, and Convolutions

Authors: Andrea Lincoln, Adam Polak, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
The most studied linear algebraic operation, matrix multiplication, has surprisingly fast O(n^ω) time algorithms for ω < 2.373. On the other hand, the (min,+) matrix product, which is at the heart of many fundamental graph problems such as All-Pairs Shortest Paths, has received only minor n^{o(1)} improvements over its brute-force cubic running time and is widely conjectured to require n^{3-o(1)} time. There is a plethora of matrix products and graph problems whose complexity seems to lie in the middle of these two problems. For instance, the Min-Max matrix product, the Minimum Witness matrix product, All-Pairs Shortest Paths in directed unweighted graphs, and determining whether an edge-colored graph contains a monochromatic triangle can all be solved in Õ(n^{(3+ω)/2}) time. While slight improvements are sometimes possible using rectangular matrix multiplication, if ω = 2, the best runtimes for these "intermediate" problems are all Õ(n^{2.5}). A similar phenomenon occurs for convolution problems. Here, using the FFT, the usual (+,×)-convolution of two n-length sequences can be computed in O(n log n) time, while the (min,+)-convolution is conjectured to require n^{2-o(1)} time, the brute-force running time for convolution problems. There are analogous intermediate problems that can be solved in O(n^{1.5}) time, but seemingly not much faster: Min-Max convolution, Minimum Witness convolution, etc. Can one improve upon the running times for these intermediate problems, in either the matrix product or the convolution world? Or, alternatively, can one relate these problems to each other and to other key problems in a meaningful way? This paper makes progress on these questions by providing a network of fine-grained reductions. We show for instance that APSP in directed unweighted graphs and Minimum Witness product can be reduced to both the Min-Max product and a variant of the monochromatic triangle problem, so that a significant improvement over n^{(3+ω)/2} time for any of the latter problems would result in a similar improvement for both of the former problems. We also show that a natural convolution variant of monochromatic triangle is fine-grained equivalent to the famous 3SUM problem. As this variant is solvable in O(n^{1.5}) time and 3SUM is in O(n²) time (and is conjectured to require n^{2-o(1)} time), our result gives the first fine-grained equivalence between natural problems of different running times. We also relate 3SUM to monochromatic triangle, and a coin change problem to monochromatic convolution, and thus to 3SUM.
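As a quick illustration of the convolution-side gap described above (standard baselines, not the paper's constructions): the (+,×)-convolution falls to the FFT, while for (min,+)-convolution nothing substantially better than brute force is known.

import numpy as np

def plus_times_convolution(a, b):
    # (+, x)-convolution in O(n log n) via the FFT.
    n = len(a) + len(b) - 1
    m = 1 << (n - 1).bit_length()        # pad to a power of two
    fa, fb = np.fft.rfft(a, m), np.fft.rfft(b, m)
    return np.rint(np.fft.irfft(fa * fb, m)[:n]).astype(int)

def min_plus_convolution(a, b):
    # (min, +)-convolution: c[k] = min over i+j=k of a[i] + b[j].
    # Brute force O(n^2); conjectured to need n^{2-o(1)} time.
    c = [float("inf")] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = min(c[i + j], ai + bj)
    return c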

Cite as

Andrea Lincoln, Adam Polak, and Virginia Vassilevska Williams. Monochromatic Triangles, Intermediate Matrix Products, and Convolutions. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 53:1-53:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{lincoln_et_al:LIPIcs.ITCS.2020.53,
  author =	{Lincoln, Andrea and Polak, Adam and Vassilevska Williams, Virginia},
  title =	{{Monochromatic Triangles, Intermediate Matrix Products, and Convolutions}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{53:1--53:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.53},
  URN =		{urn:nbn:de:0030-drops-117382},
  doi =		{10.4230/LIPIcs.ITCS.2020.53},
  annote =	{Keywords: 3SUM, fine-grained complexity, matrix multiplication, monochromatic triangle}
}
Document
OV Graphs Are (Probably) Hard Instances

Authors: Josh Alman and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
A graph G on n nodes is an Orthogonal Vectors (OV) graph of dimension d if there are vectors v_1, …, v_n ∈ {0,1}^d such that nodes i and j are adjacent in G if and only if ⟨v_i,v_j⟩ = 0 over ℤ. In this paper, we study a number of basic graph algorithm problems, except that the input is the list of vectors defining an OV graph rather than an explicit representation of a general graph. We show that for each of the following problems, an algorithm solving it faster on such OV graphs G of dimension only d = O(log n) than in the general case would refute a plausible conjecture about the time required to solve sparse MAX-k-SAT instances: - Determining whether G contains a triangle. - More generally, determining whether G contains a directed k-cycle for any k ≥ 3. - Computing the square of the adjacency matrix of G over ℤ or 𝔽₂. - Maintaining the shortest distance between two fixed nodes of G, or whether G has a perfect matching, when G is a dynamically updating OV graph. We also prove some complementary results about OV graphs. We show that any problem which is NP-hard on constant-degree graphs is also NP-hard on OV graphs of dimension O(log n), and we give two problems which can be solved faster on OV graphs than in general: Maximum Clique, and Online Matrix-Vector Multiplication.
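For concreteness, building the adjacency matrix of an OV graph from its defining vectors is a single integer matrix product, so the explicit graph costs only O(n² d) time to write down (a minimal sketch; names are illustrative):

import numpy as np

def ov_graph_adjacency(V):
    # V: (n, d) 0/1 matrix whose rows are the defining vectors.
    # Nodes i and j are adjacent iff <v_i, v_j> = 0 over the integers.
    G = V @ V.T                    # Gram matrix of inner products
    A = (G == 0)
    np.fill_diagonal(A, False)     # ignore self-loops
    return A

V = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0]])
print(ov_graph_adjacency(V).astype(int))   # only nodes 0 and 1 are adjacent

The results above say that, under the stated conjecture, access to the vectors does not make problems like triangle detection on G appreciably easier than on general graphs.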

Cite as

Josh Alman and Virginia Vassilevska Williams. OV Graphs Are (Probably) Hard Instances. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 83:1-83:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{alman_et_al:LIPIcs.ITCS.2020.83,
  author =	{Alman, Josh and Vassilevska Williams, Virginia},
  title =	{{OV Graphs Are (Probably) Hard Instances}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{83:1--83:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.83},
  URN =		{urn:nbn:de:0030-drops-117686},
  doi =		{10.4230/LIPIcs.ITCS.2020.83},
  annote =	{Keywords: Orthogonal Vectors, Fine-Grained Reductions, Cycle Finding}
}
Document
Fine-grained Algorithms and Complexity

Authors: Virginia Vassilevska Williams

Published in: LIPIcs, Volume 98, 21st International Conference on Database Theory (ICDT 2018)


Abstract
A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size n, can be solved in t(n) time but not in O(t(n)^{1-epsilon}) time for epsilon > 0. The main challenge is to determine where in this hierarchy various natural and important problems lie. Throughout the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of their classical algorithms from the 1950s and 1960s. Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been NP-hardness reductions, basing hardness on P ≠ NP. However, when we care about the exact running time (as opposed to merely polynomial vs non-polynomial), NP-hardness is not applicable, especially if the problem is already solvable in polynomial time. In recent years, a new theory has been developed, based on "fine-grained reductions" that focus on exact running times. Mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require essentially t(n) time for some t, and (2) reduce X in a fine-grained way to many important problems. This approach has led to the discovery of many meaningful relationships between problems, and even sometimes to equivalence classes. The main key problems used to base hardness on have been: the 3SUM problem, the CNF-SAT problem (based on the Strong Exponential Time Hypothesis (SETH)) and the All-Pairs Shortest Paths problem. Research on SETH-based lower bounds has flourished in particular in recent years, showing that the classical algorithms are optimal for problems such as Approximate Diameter, Edit Distance, Fréchet Distance, Longest Common Subsequence, etc. In this talk I will give an overview of the current progress in this area of study, and will highlight some exciting new developments and their relationship to database theory.

Cite as

Virginia Vassilevska Williams. Fine-grained Algorithms and Complexity. In 21st International Conference on Database Theory (ICDT 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 98, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{vassilevskawilliams:LIPIcs.ICDT.2018.1,
  author =	{Vassilevska Williams, Virginia},
  title =	{{Fine-grained Algorithms and Complexity}},
  booktitle =	{21st International Conference on Database Theory (ICDT 2018)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-063-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{98},
  editor =	{Kimelfeld, Benny and Amsterdamer, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2018.1},
  URN =		{urn:nbn:de:0030-drops-86135},
  doi =		{10.4230/LIPIcs.ICDT.2018.1},
  annote =	{Keywords: algorithms, complexity, fine-grained}
}
Document
Further Limitations of the Known Approaches for Matrix Multiplication

Authors: Josh Alman and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
We consider the techniques behind the current best algorithms for matrix multiplication. Our results are threefold. (1) We provide a unifying framework, showing that all known matrix multiplication running times since 1986 can be achieved from a single very natural tensor: the structural tensor T_q of addition modulo an integer q. (2) We show that if one applies a generalization of the known techniques (arbitrary zeroing out of tensor powers to obtain independent matrix products in order to use the asymptotic sum inequality of Schönhage) to an arbitrary monomial degeneration of T_q, then there is an explicit lower bound, depending on q, on the best bound on the matrix multiplication exponent ω that one can achieve. We also show upper bounds on the value α that one can achieve, where α is such that n × n^α × n matrix multiplication can be computed in n^{2+o(1)} time. (3) We show that our lower bound on ω approaches 2 as q goes to infinity. This suggests a promising approach to improving the bound on ω: for variable q, find a monomial degeneration of T_q which, using the known techniques, produces an upper bound on ω as a function of q. Then, take q to infinity. It is not ruled out, and hence possible, that one can obtain ω = 2 in this way.
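For reference, the tensor at the center of these results is, under its standard definition (restated here for the reader's convenience), the multiplication tensor of the group ℤ_q:

\[ T_q \;=\; \sum_{i,\,j \,\in\, \mathbb{Z}_q} x_i \, y_j \, z_{(i+j) \bmod q} . \]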

Cite as

Josh Alman and Virginia Vassilevska Williams. Further Limitations of the Known Approaches for Matrix Multiplication. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 25:1-25:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{alman_et_al:LIPIcs.ITCS.2018.25,
  author =	{Alman, Josh and Vassilevska Williams, Virginia},
  title =	{{Further Limitations of the Known Approaches for Matrix Multiplication}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{25:1--25:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.25},
  URN =		{urn:nbn:de:0030-drops-83609},
  doi =		{10.4230/LIPIcs.ITCS.2018.25},
  annote =	{Keywords: matrix multiplication, lower bound, monomial degeneration, structural tensor of addition mod p}
}
Document
Fine-grained I/O Complexity via Reductions: New Lower Bounds, Faster Algorithms, and a Time Hierarchy

Authors: Erik D. Demaine, Andrea Lincoln, Quanquan C. Liu, Jayson Lynch, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
This paper initiates the study of I/O algorithms (minimizing cache misses) from the perspective of fine-grained complexity (conditional polynomial lower bounds). Specifically, we aim to answer why sparse graph problems are so hard, and why the Longest Common Subsequence problem gets a savings factor of the cache size times the cache-line length, but no more. We take the reductions and techniques from complexity and fine-grained complexity and apply them to the I/O model to generate new (conditional) lower bounds as well as new faster algorithms. We also prove the existence of a time hierarchy for the I/O model, which motivates the fine-grained reductions. - Using fine-grained reductions, we give an algorithm for distinguishing 2 vs. 3 diameter and radius that runs in O(|E|²/(MB)) cache misses, which for sparse graphs improves over the previous O(|V|²/B) running time. - We give new reductions from radius and diameter to Wiener index and median. These reductions are new in both the RAM and I/O models. - We show meaningful reductions between problems that have linear-time solutions in the RAM model. The reductions use low I/O complexity (typically O(n/B)), and thus help to finely capture the distinction between "I/O linear time" O(n/B) and RAM linear time O(n). - We generate new I/O assumptions based on the difficulty of improving sparse graph problem running times in the I/O model. We conjecture that the current best known algorithms for Single Source Shortest Paths (SSSP), diameter, and radius are optimal. - From these I/O-model assumptions, we show that many of the known reductions in the word-RAM model naturally extend to hold in the I/O model as well (e.g., a lower bound on the I/O complexity of Longest Common Subsequence that matches the best known running time). - We prove an analog of the Time Hierarchy Theorem in the I/O model, further motivating the study of fine-grained algorithmic differences.

Cite as

Erik D. Demaine, Andrea Lincoln, Quanquan C. Liu, Jayson Lynch, and Virginia Vassilevska Williams. Fine-grained I/O Complexity via Reductions: New Lower Bounds, Faster Algorithms, and a Time Hierarchy. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 34:1-34:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{demaine_et_al:LIPIcs.ITCS.2018.34,
  author =	{Demaine, Erik D. and Lincoln, Andrea and Liu, Quanquan C. and Lynch, Jayson and Vassilevska Williams, Virginia},
  title =	{{Fine-grained I/O Complexity via Reductions: New Lower Bounds, Faster Algorithms, and a Time Hierarchy}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{34:1--34:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.34},
  URN =		{urn:nbn:de:0030-drops-83335},
  doi =		{10.4230/LIPIcs.ITCS.2018.34},
  annote =	{Keywords: IO model, Fine-grained Complexity, Algorithms}
}
Document
Conditional Hardness for Sensitivity Problems

Authors: Monika Henzinger, Andrea Lincoln, Stefan Neumann, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
In recent years it has become popular to study dynamic problems in a sensitivity setting: instead of allowing an arbitrary sequence of updates, the sensitivity model only allows batch updates of small size to be applied to the original input data. The sensitivity model is particularly appealing since recent strong conditional lower bounds have ruled out fast algorithms for many dynamic problems, such as shortest paths, reachability, or subgraph connectivity. In this paper we prove conditional lower bounds for these and additional problems in a sensitivity setting. For example, we show that under the Boolean Matrix Multiplication (BMM) conjecture, combinatorial algorithms cannot compute the (4/3-ε)-approximate diameter of an undirected unweighted dense graph with truly subcubic preprocessing time and truly subquadratic update/query time. This result is surprising since in the static setting it is not clear whether a reduction from BMM to diameter is possible. We further show under the BMM conjecture that many problems, such as reachability or approximate shortest paths, cannot be solved faster than by recomputation from scratch even after only one or two edge insertions. We extend our reduction from BMM to Diameter to give a reduction from All-Pairs Shortest Paths to Diameter under one deletion in weighted graphs. This is intriguing, as in the static setting it is a big open problem whether Diameter is as hard as APSP. We further get a nearly tight lower bound for shortest paths after two edge deletions based on the APSP conjecture. We give more lower bounds under the Strong Exponential Time Hypothesis. Many of our lower bounds also hold for static oracle data structures where no sensitivity is required. Finally, we give the first algorithm for the (1+ε)-approximate radius, diameter, and eccentricity problems in directed or undirected unweighted graphs in the case of single edge failures. The algorithm has a truly subcubic running time for graphs with a truly subquadratic number of edges; it is tight w.r.t. the conditional lower bounds we obtain.

Cite as

Monika Henzinger, Andrea Lincoln, Stefan Neumann, and Virginia Vassilevska Williams. Conditional Hardness for Sensitivity Problems. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 26:1-26:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{henzinger_et_al:LIPIcs.ITCS.2017.26,
  author =	{Henzinger, Monika and Lincoln, Andrea and Neumann, Stefan and Vassilevska Williams, Virginia},
  title =	{{Conditional Hardness for Sensitivity Problems}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{26:1--26:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.26},
  URN =		{urn:nbn:de:0030-drops-81783},
  doi =		{10.4230/LIPIcs.ITCS.2017.26},
  annote =	{Keywords: sensitivity, conditional lower bounds, data structures, dynamic graph algorithms}
}
Document
Metatheorems for Dynamic Weighted Matching

Authors: Daniel Stubbs and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
We consider the maximum weight matching (MWM) problem in dynamic graphs. We provide two reductions. The first reduces the dynamic MWM problem on m-edge, n-node graphs with weights bounded by N to the problem with weights bounded by (n/ε)², so that if the MWM problem can be α-approximated with update time t(m,n,N), then it can also be (1+ε)α-approximated with update time O(t(m,n,(n/ε)²) log² n + log n log log N). The second reduction reduces the dynamic MWM problem to the dynamic maximum cardinality matching (MCM) problem, in which the graph is unweighted. This reduction shows that if there is an α-approximation algorithm for MCM with update time t(m,n) in m-edge n-node graphs, then there is also a (2+ε)α-approximation algorithm for MWM with update time O(t(m,n) ε^{-2} log² N). We also obtain better bounds in our reductions if the ratio between the largest and the smallest edge weight is small. Combined with recent work on MCM, these two reductions substantially improve upon the state of the art of dynamic MWM algorithms.
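A standard first step in reductions of this kind is weight rounding, which already shows how approximation can buy a smaller weight range (an illustrative fragment of that common idea, not the paper's full reduction):

import math

def round_down_to_powers(weights, eps):
    # Round each weight down to the nearest power of (1 + eps). Any
    # matching loses at most a (1 + eps) factor of its weight, and only
    # O(log N / eps) distinct weight values remain.
    base = 1.0 + eps
    return {e: base ** math.floor(math.log(w, base))
            for e, w in weights.items()}

print(round_down_to_powers({("u", "v"): 7.0, ("x", "y"): 100.0}, 0.5))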

Cite as

Daniel Stubbs and Virginia Vassilevska Williams. Metatheorems for Dynamic Weighted Matching. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 58:1-58:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{stubbs_et_al:LIPIcs.ITCS.2017.58,
  author =	{Stubbs, Daniel and Vassilevska Williams, Virginia},
  title =	{{Metatheorems for Dynamic Weighted Matching}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{58:1--58:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.58},
  URN =		{urn:nbn:de:0030-drops-81944},
  doi =		{10.4230/LIPIcs.ITCS.2017.58},
  annote =	{Keywords: dynamic algorithms, maximum matching, maximum weight matching}
}
Document
Dynamic Parameterized Problems and Algorithms

Authors: Josh Alman, Matthias Mnich, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
Fixed-parameter algorithms and kernelization are two powerful methods to solve NP-hard problems. Yet, so far those algorithms have been largely restricted to static inputs. In this paper we provide fixed-parameter algorithms and kernelizations for fundamental NP-hard problems with dynamic inputs. We consider a variety of parameterized graph and hitting set problems which are known to have f(k)n^{1+o(1)} time algorithms on inputs of size n, and we consider the question of whether there is a data structure that supports small updates (such as edge/vertex/set/element insertions and deletions) with an update time of g(k)n^{o(1)}; such an update time would be essentially optimal. Update and query times independent of n are particularly desirable. Among many other results, we show that Feedback Vertex Set and k-Path admit dynamic algorithms with f(k) log^{O(1)} n update and query times for some function f depending on the solution size k only. We complement our positive results by several conditional and unconditional lower bounds. For example, we show that unlike their undirected counterparts, Directed Feedback Vertex Set and Directed k-Path do not admit dynamic algorithms with n^{o(1)} update and query times even for constant solution sizes k ≤ 3, assuming popular hardness hypotheses. We also show that unconditionally, in the cell probe model, Directed Feedback Vertex Set cannot be solved with update time that is purely a function of k.

Cite as

Josh Alman, Matthias Mnich, and Virginia Vassilevska Williams. Dynamic Parameterized Problems and Algorithms. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 41:1-41:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{alman_et_al:LIPIcs.ICALP.2017.41,
  author =	{Alman, Josh and Mnich, Matthias and Vassilevska Williams, Virginia},
  title =	{{Dynamic Parameterized Problems and Algorithms}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{41:1--41:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.41},
  URN =		{urn:nbn:de:0030-drops-74419},
  doi =		{10.4230/LIPIcs.ICALP.2017.41},
  annote =	{Keywords: Dynamic algorithms, fixed-parameter algorithms}
}
Document
Preserving Distances in Very Faulty Graphs

Authors: Greg Bodwin, Fabrizio Grandoni, Merav Parter, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
Preservers and additive spanners are sparse (hence cheap to store) subgraphs that preserve the distances between given pairs of nodes exactly or with some small additive error, respectively. Since real-world networks are prone to failures, it makes sense to study fault-tolerant versions of the above structures. This turns out to be a surprisingly difficult task. For every small but arbitrary set of edge or vertex failures, the preservers and spanners need to contain replacement paths around the faulted set. Unfortunately, the complexity of the interaction between replacement paths blows up significantly, even going from 1 to 2 faults, and the structure of optimal preservers and spanners is poorly understood. In particular, no nontrivial bounds for preservers and additive spanners are known when the number of faults is bigger than 2. Even the answer to the following innocent question is completely unknown: what is the worst-case size of a preserver for a single pair of nodes in the presence of f edge faults? There are no super-linear lower bounds, nor subquadratic upper bounds, for f > 2. In this paper we make substantial progress on this and other fundamental questions: - We present the first truly subquadratic-size fault-tolerant single-pair preserver in unweighted (possibly directed) graphs: for any n-node graph and any fixed number f of faults, Õ(fn^{2-1/2^f}) size suffices. Our result also generalizes to the single-source (all targets) case, and can be used to build new fault-tolerant additive spanners (for all pairs). - The size of the above single-pair preserver grows to O(n²) for increasing f. We show that this is necessary even in undirected unweighted graphs, and even if you allow for a small additive error: if you aim at size O(n^{2-ε}) for ε > 0, then the additive error has to be Ω(εf). This surprisingly matches known upper bounds in the literature. - For weighted graphs, we provide matching upper and lower bounds for the single-pair case. Namely, the size of the preserver is Θ(n²) for f > 1 in both directed and undirected graphs, while for f = 1 the size is Θ(n) in undirected graphs. For directed graphs, we have a superlinear upper bound and a matching lower bound. Most of our lower bounds extend to the distance oracle setting, where rather than a subgraph we ask for any compact data structure.

Cite as

Greg Bodwin, Fabrizio Grandoni, Merav Parter, and Virginia Vassilevska Williams. Preserving Distances in Very Faulty Graphs. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 73:1-73:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{bodwin_et_al:LIPIcs.ICALP.2017.73,
  author =	{Bodwin, Greg and Grandoni, Fabrizio and Parter, Merav and Vassilevska Williams, Virginia},
  title =	{{Preserving Distances in Very Faulty Graphs}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{73:1--73:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.73},
  URN =		{urn:nbn:de:0030-drops-74906},
  doi =		{10.4230/LIPIcs.ICALP.2017.73},
  annote =	{Keywords: Fault Tolerance, shortest paths, replacement paths}
}
Document
Structure and Hardness in P (Dagstuhl Seminar 16451)

Authors: Moshe Lewenstein, Seth Pettie, and Virginia Vassilevska Williams

Published in: Dagstuhl Reports, Volume 6, Issue 11 (2017)


Abstract
This document contains descriptions of the talks at the Dagstuhl Seminar 16451 "Structure and Hardness in P". The main goal of the seminar was to bring together researchers from several disciplines and connect those who work on proving conditional lower bounds with those who may benefit from such bounds. This resulted in an extensive list of open problems, which is also provided.

Cite as

Moshe Lewenstein, Seth Pettie, and Virginia Vassilevska Williams. Structure and Hardness in P (Dagstuhl Seminar 16451). In Dagstuhl Reports, Volume 6, Issue 11, pp. 1-34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@Article{lewenstein_et_al:DagRep.6.11.1,
  author =	{Lewenstein, Moshe and Pettie, Seth and Vassilevska Williams, Virginia},
  title =	{{Structure and Hardness in P (Dagstuhl Seminar 16451)}},
  pages =	{1--34},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2017},
  volume =	{6},
  number =	{11},
  editor =	{Lewenstein, Moshe and Pettie, Seth and Vassilevska Williams, Virginia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.11.1},
  URN =		{urn:nbn:de:0030-drops-70373},
  doi =		{10.4230/DagRep.6.11.1},
  annote =	{Keywords: Algorithmic equivalences, Classifying P, Hardness assumptions, Lower bounds}
}
Document
Deterministic Time-Space Trade-Offs for k-SUM

Authors: Andrea Lincoln, Virginia Vassilevska Williams, Joshua R. Wang, and R. Ryan Williams

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
Given a set of numbers, the k-SUM problem asks for a subset of k numbers that sums to zero. When the numbers are integers, the time and space complexity of k-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point. We present a time- and space-efficient deterministic self-reduction for the k-SUM problem which holds for both models, and has many interesting consequences. To illustrate: - 3-SUM is in deterministic time O(n² lg(lg(n))/lg(n)) and space O(sqrt(n lg(n)/lg(lg(n)))). In general, any polylogarithmic-time improvement over quadratic time for 3-SUM can be converted into an algorithm with an identical time improvement but low space complexity as well. - 3-SUM is in deterministic time O(n²) and space O(sqrt(n)), derandomizing an algorithm of Wang. - A popular conjecture states that 3-SUM requires n^{2-o(1)} time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every O(n^{0.51})-space algorithm for 3-SUM requires at least n^{2-o(1)} time on the word-RAM. - For k ≥ 4, k-SUM is in deterministic O(n^{k-2+2/k}) time and O(sqrt(n)) space.
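As a reference point for these trade-offs (a standard baseline, not the paper's self-reduction): meet-in-the-middle solves 4-SUM in roughly quadratic time by paying quadratic space, exactly the kind of space cost the results above work to avoid.

from itertools import combinations

def four_sum_exists(nums):
    # Index all pairwise sums, then look for a complementary pair on
    # four distinct indices. About O(n^2) time and O(n^2) space
    # (time can degrade when many pairs share a sum).
    pair_sums = {}
    for i, j in combinations(range(len(nums)), 2):
        pair_sums.setdefault(nums[i] + nums[j], []).append((i, j))
    for i, j in combinations(range(len(nums)), 2):
        for p, q in pair_sums.get(-(nums[i] + nums[j]), []):
            if len({i, j, p, q}) == 4:
                return True
    return False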

Cite as

Andrea Lincoln, Virginia Vassilevska Williams, Joshua R. Wang, and R. Ryan Williams. Deterministic Time-Space Trade-Offs for k-SUM. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 58:1-58:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{lincoln_et_al:LIPIcs.ICALP.2016.58,
  author =	{Lincoln, Andrea and Vassilevska Williams, Virginia and Wang, Joshua R. and Williams, R. Ryan},
  title =	{{Deterministic Time-Space Trade-Offs for k-SUM}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{58:1--58:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.58},
  URN =		{urn:nbn:de:0030-drops-62250},
  doi =		{10.4230/LIPIcs.ICALP.2016.58},
  annote =	{Keywords: 3SUM, kSUM, time-space tradeoff, algorithm}
}
Document
Invited Talk
RNA-Folding - From Hardness to Algorithms (Invited Talk)

Authors: Virginia Vassilevska Williams

Published in: LIPIcs, Volume 58, 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)


Abstract
This is an overview of the invited talk delivered at the 41st International Symposium on Mathematical Foundations of Computer Science (MFCS-2016).

Cite as

Virginia Vassilevska Williams. RNA-Folding - From Hardness to Algorithms (Invited Talk). In 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 58, p. 5:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{vassilevskawilliams:LIPIcs.MFCS.2016.5,
  author =	{Vassilevska Williams, Virginia},
  title =	{{RNA-Folding - From Hardness to Algorithms}},
  booktitle =	{41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)},
  pages =	{5:1--5:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-016-3},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{58},
  editor =	{Faliszewski, Piotr and Muscholl, Anca and Niedermeier, Rolf},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2016.5},
  URN =		{urn:nbn:de:0030-drops-65115},
  doi =		{10.4230/LIPIcs.MFCS.2016.5},
  annote =	{Keywords: RNA folding, matrix multiplication}
}
Document
A 7/3-Approximation for Feedback Vertex Sets in Tournaments

Authors: Matthias Mnich, Virginia Vassilevska Williams, and László A. Végh

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
We consider the minimum-weight feedback vertex set problem in tournaments: given a tournament with non-negative vertex weights, remove a minimum-weight set of vertices that intersects all cycles. This problem is NP-hard to solve exactly, and Unique Games-hard to approximate by a factor better than 2. We present the first 7/3-approximation algorithm for this problem, improving on the previous best known ratio of 5/2 given by Cai et al. [FOCS 1998, SICOMP 2001].
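A useful background fact (standard, and not specific to this paper): a tournament is acyclic if and only if it contains no directed triangle, so a vertex set is a feedback vertex set exactly when it hits every directed 3-cycle. A brute-force checker along these lines, with illustrative names:

from itertools import combinations

def is_feedback_vertex_set(n, beats, X):
    # beats[u][v] is True iff the edge between u and v is oriented u -> v.
    # O(n^3): no triple outside X may form a directed 3-cycle.
    rest = [v for v in range(n) if v not in X]
    for a, b, c in combinations(rest, 3):
        for u, v, w in ((a, b, c), (a, c, b)):   # the two cyclic orientations
            if beats[u][v] and beats[v][w] and beats[w][u]:
                return False
    return True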

Cite as

Matthias Mnich, Virginia Vassilevska Williams, and László A. Végh. A 7/3-Approximation for Feedback Vertex Sets in Tournaments. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 67:1-67:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{mnich_et_al:LIPIcs.ESA.2016.67,
  author =	{Mnich, Matthias and Vassilevska Williams, Virginia and V\'{e}gh, L\'{a}szl\'{o} A.},
  title =	{{A 7/3-Approximation for Feedback Vertex Sets in Tournaments}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{67:1--67:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.67},
  URN =		{urn:nbn:de:0030-drops-64098},
  doi =		{10.4230/LIPIcs.ESA.2016.67},
  annote =	{Keywords: Approximation algorithms, feedback vertex sets, tournaments}
}
Document
Invited Talk
Fine-Grained Algorithms and Complexity (Invited Talk)

Authors: Virginia Vassilevska Williams

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size n, can be solved in t(n) time but not in t(n)^{1-epsilon} time for epsilon > 0. The main challenge is to determine where in this hierarchy various natural and important problems lie. Throughout the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of the classical algorithms devised for them in the 1950s and 1960s. Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been NP-hardness reductions, basing hardness on P ≠ NP. However, when we care about the exact running time (as opposed to merely polynomial vs non-polynomial), NP-hardness is not applicable, especially if the problem can already be solved in polynomial time. In recent years, a new theory has been developed, based on "fine-grained reductions" that focus on exact running times. The goal of these reductions is as follows. Suppose problem A is solvable in a(n) time and problem B in b(n) time, and no a(n)^{1-epsilon} and b(n)^{1-epsilon} algorithms are known for A and B respectively, for any epsilon > 0. Then if A is fine-grained reducible to problem B (for a(n) and b(n)), a b(n)^{1-epsilon} time algorithm for B (for any epsilon > 0) implies an a(n)^{1-epsilon'} time algorithm for A (for some epsilon' > 0). Now, mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require t(n)^{1-o(1)} time for some t(n), and (2) reduce X in a fine-grained way to many important problems. This approach has led to the discovery of many meaningful relationships between problems, and even sometimes to equivalence classes. In this talk I will give an overview of the current progress in this area of study, and will highlight some new exciting developments.

Cite as

Virginia Vassilevska Williams. Fine-Grained Algorithms and Complexity (Invited Talk). In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{vassilevskawilliams:LIPIcs.STACS.2016.3,
  author =	{Vassilevska Williams, Virginia},
  title =	{{Fine-Grained Algorithms and Complexity}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.3},
  URN =		{urn:nbn:de:0030-drops-57044},
  doi =		{10.4230/LIPIcs.STACS.2016.3},
  annote =	{Keywords: algorithms, complexity, polynomial time problems}
}
Document
Invited Talk
Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk)

Authors: Virginia Vassilevska Williams

Published in: LIPIcs, Volume 43, 10th International Symposium on Parameterized and Exact Computation (IPEC 2015)


Abstract
Algorithmic research strives to develop fast algorithms for fundamental problems. Despite its many successes, however, many problems still do not have very efficient algorithms. For years researchers have explained the hardness of key problems by proving NP-hardness, utilizing polynomial time reductions to base the hardness of key problems on the famous conjecture P ≠ NP. For problems that already have polynomial time algorithms, however, it does not seem that one can show any sort of hardness based on P ≠ NP. Nevertheless, we would like to provide evidence that a problem A with a running time O(n^k) that has not been improved in decades also requires n^{k-o(1)} time, thus explaining the lack of progress on the problem. Such unconditional time lower bounds seem very difficult to obtain, unfortunately. Recent work has concentrated on an approach mimicking NP-hardness: (1) select a few key problems that are conjectured to require T(n) time to solve, (2) use special, fine-grained reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of the key problems. In this abstract we outline the approach, give some examples of hardness results based on the Strong Exponential Time Hypothesis, and present an overview of some of the recent work on the topic.

Cite as

Virginia Vassilevska Williams. Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk). In 10th International Symposium on Parameterized and Exact Computation (IPEC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 43, pp. 17-29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{vassilevskawilliams:LIPIcs.IPEC.2015.17,
  author =	{Vassilevska Williams, Virginia},
  title =	{{Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis}},
  booktitle =	{10th International Symposium on Parameterized and Exact Computation (IPEC 2015)},
  pages =	{17--29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-92-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{43},
  editor =	{Husfeldt, Thore and Kanj, Iyad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2015.17},
  URN =		{urn:nbn:de:0030-drops-55683},
  doi =		{10.4230/LIPIcs.IPEC.2015.17},
  annote =	{Keywords: reductions, satisfiability, strong exponential time hypothesis, shortest paths, 3SUM, equivalences, fine-grained complexity}
}