Search Results

Documents authored by Sachdeva, Sushant


Document
Track A: Algorithms, Complexity and Games
Optimal Electrical Oblivious Routing on Expanders

Authors: Cella Florescu, Rasmus Kyng, Maximilian Probst Gutenberg, and Sushant Sachdeva

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this paper, we investigate whether electrical flow routing is a good oblivious routing scheme on an m-edge graph G = (V, E) that is a Φ-expander, i.e., a graph where |∂S| ≥ Φ ⋅ vol(S) for every S ⊆ V with vol(S) ≤ vol(V)/2. Beyond its simplicity and structural importance, this question is well-motivated by the current state of the art of fast algorithms for 𝓁_∞ oblivious routings, which reduce to the expander case; that case is in turn solved by electrical flow routing. Our main result proves that the electrical routing is an O(Φ^{-1} log m)-competitive oblivious routing in the 𝓁₁- and 𝓁_∞-norms. We further observe that the oblivious routing is O(log² m)-competitive in the 𝓁₂-norm and, in fact, O(log m)-competitive if 𝓁₂-localization is O(log m), which is widely believed. Using these three upper bounds, we can smoothly interpolate to obtain upper bounds for every p ∈ [2, ∞] and q given by 1/p + 1/q = 1. Assuming O(log m) 𝓁₂-localization, we obtain that in 𝓁_p and 𝓁_q the electrical oblivious routing is O(Φ^{-(1-2/p)} log m)-competitive. Using the currently known result for 𝓁₂-localization, this ratio deteriorates by at most a sublogarithmic factor for every p, q ≠ 2. We complement our upper bounds with lower bounds showing that for any such p and q the electrical routing is Ω(Φ^{-(1-2/p)} log m)-competitive. This renders our results in 𝓁₁ and 𝓁_∞ unconditionally tight up to constants, and the results in every 𝓁_p- and 𝓁_q-norm tight assuming O(log m) 𝓁₂-localization.
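
As a concrete illustration of the object being analyzed, here is a minimal numpy sketch (not from the paper) of the electrical routing operator on a small unweighted graph: the flow routed for a demand vector b is f = Bᵀ L⁺ b, where B is the vertex-edge incidence matrix and L = B Bᵀ is the graph Laplacian. The operator Bᵀ L⁺ is fixed in advance, independently of the demands, which is what makes the routing oblivious.

# Minimal sketch (not from the paper): electrical flow routing on a small graph.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small example graph
n, m = 4, len(edges)

B = np.zeros((n, m))                # vertex-edge incidence matrix
for j, (u, v) in enumerate(edges):
    B[u, j], B[v, j] = 1.0, -1.0

L = B @ B.T                         # unweighted graph Laplacian
L_pinv = np.linalg.pinv(L)

b = np.zeros(n); b[0], b[3] = 1.0, -1.0   # demand: one unit from vertex 0 to vertex 3

f = B.T @ L_pinv @ b                # electrical flow on each edge
assert np.allclose(B @ f, b)        # the flow meets the demand exactly
print("edge flows:", np.round(f, 3))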

Cite as

Cella Florescu, Rasmus Kyng, Maximilian Probst Gutenberg, and Sushant Sachdeva. Optimal Electrical Oblivious Routing on Expanders. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 65:1-65:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{florescu_et_al:LIPIcs.ICALP.2024.65,
  author =	{Florescu, Cella and Kyng, Rasmus and Gutenberg, Maximilian Probst and Sachdeva, Sushant},
  title =	{{Optimal Electrical Oblivious Routing on Expanders}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{65:1--65:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.65},
  URN =		{urn:nbn:de:0030-drops-202083},
  doi =		{10.4230/LIPIcs.ICALP.2024.65},
  annote =	{Keywords: Expanders, Oblivious routing for 𝓁\underline{p}, Electrical flow routing}
}
Document
Track A: Algorithms, Complexity and Games
Better Sparsifiers for Directed Eulerian Graphs

Authors: Sushant Sachdeva, Anvith Thudi, and Yibin Zhao

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Spectral sparsification for directed Eulerian graphs is a key component in the design of fast algorithms for solving directed Laplacian linear systems. Directed Laplacian linear system solvers are crucial algorithmic primitives for the fast computation of fundamental quantities associated with random walks, such as stationary distributions, hitting and commute times, and personalized PageRank vectors. While spectral sparsification is well understood for undirected graphs, where (1+ε)-sparsifiers with an optimal O(nε^{-2}) edges exist for every graph G [Batson-Spielman-Srivastava, STOC '09], the best known constructions of Eulerian sparsifiers require Ω(nε^{-2} log⁴ n) edges and are based on short-cycle decompositions [Chu et al., FOCS '18]. In this paper, we give improved constructions of Eulerian sparsifiers. Specifically: 1) We show that for every directed Eulerian graph G⃗, there exists an Eulerian sparsifier with O(nε^{-2} log² n log² log n + nε^{-4/3} log^{8/3} n) edges. This result is based on combining short-cycle decompositions [Chu-Gao-Peng-Sachdeva-Sawlani-Wang, FOCS '18, SICOMP] and [Parter-Yogev, ICALP '19] with recent progress on the matrix Spencer conjecture [Bansal-Meka-Jiang, STOC '23]. 2) We give an improved analysis of the constructions based on short-cycle decompositions, yielding, for any constant δ > 0, an m^{1+δ}-time algorithm for constructing Eulerian sparsifiers with O(nε^{-2} log³ n) edges.
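
A brief numpy sketch (illustrative, not from the paper) of why the Eulerian condition matters spectrally: the directed Laplacian of a balanced (Eulerian) graph has a positive semidefinite symmetrization, so spectral approximation of Eulerian graphs is well defined.

# Minimal sketch (assumptions, not from the paper): an Eulerian directed graph,
# i.e. in-degree = out-degree at every vertex, and its directed Laplacian.
import numpy as np

arcs = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]   # chosen to be Eulerian
n = 4
A = np.zeros((n, n))
for u, v in arcs:
    A[u, v] += 1.0

d_out, d_in = A.sum(axis=1), A.sum(axis=0)
assert np.allclose(d_out, d_in)                 # Eulerian: balanced at every vertex

L_dir = np.diag(d_out) - A.T                    # directed Laplacian, L = D_out - A^T
U = 0.5 * (L_dir + L_dir.T)                     # symmetrization
assert np.all(np.linalg.eigvalsh(U) > -1e-9)    # PSD, so spectral approximation makes sense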

Cite as

Sushant Sachdeva, Anvith Thudi, and Yibin Zhao. Better Sparsifiers for Directed Eulerian Graphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 119:1-119:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sachdeva_et_al:LIPIcs.ICALP.2024.119,
  author =	{Sachdeva, Sushant and Thudi, Anvith and Zhao, Yibin},
  title =	{{Better Sparsifiers for Directed Eulerian Graphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{119:1--119:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.119},
  URN =		{urn:nbn:de:0030-drops-202628},
  doi =		{10.4230/LIPIcs.ICALP.2024.119},
  annote =	{Keywords: Graph algorithms, Linear algebra and computation, Discrepancy theory}
}
Document
Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra

Authors: Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Let S ∈ ℝ^{n × n} be any matrix satisfying ‖1-S‖₂ ≤ εn, where 1 is the all-ones matrix and ‖⋅‖₂ is the spectral norm. It is well known that there exists such an S with just O(n/ε²) non-zero entries: we can let S be the scaled adjacency matrix of a Ramanujan expander graph. We show that, beyond giving a sparse approximation to the all-ones matrix, S yields a universal sparsifier for any positive semidefinite (PSD) matrix. In particular, for any PSD A ∈ ℝ^{n×n} which is normalized so that its entries are bounded in magnitude by 1, we show that ‖A-A∘S‖₂ ≤ εn, where ∘ denotes the entrywise (Hadamard) product. Our techniques also yield universal sparsifiers for non-PSD matrices. In this case, we show that if S satisfies ‖1-S‖₂ ≤ (ε²n)/(c log²(1/ε)) for some sufficiently large constant c, then ‖A-A∘S‖₂ ≤ ε⋅max(n, ‖A‖₁), where ‖A‖₁ is the nuclear norm. Again letting S be a scaled Ramanujan graph adjacency matrix, this yields a sparsifier with Õ(n/ε⁴) entries. We prove that the above universal sparsification bounds for both PSD and non-PSD matrices are tight up to logarithmic factors. Since A∘S can be constructed deterministically without reading all of A, our result for PSD matrices derandomizes and improves upon established results for randomized matrix sparsification, which require sampling a random subset of O((n log n)/ε²) entries and only give an approximation to any fixed A with high probability. We further show that any randomized algorithm must read at least Ω(n/ε²) entries to spectrally approximate a general A to error εn, thus proving that these existing randomized algorithms are optimal up to logarithmic factors. We leverage our deterministic sparsification results to give the first deterministic algorithms for several problems, including singular value and singular vector approximation and positive semidefiniteness testing, that run in faster than matrix multiplication time. This partially addresses a significant gap between randomized and deterministic algorithms for fast linear algebraic computation. Finally, if A ∈ {-1,0,1}^{n × n} is PSD, we show that a spectral approximation Ã with ‖A-Ã‖₂ ≤ εn can be obtained by deterministically reading Õ(n/ε) entries of A. This improves the 1/ε dependence of our result for general PSD matrices by a quadratic factor and is information-theoretically optimal up to a logarithmic factor.
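
The following numpy sketch (illustrative only; the "expander" here is a cheap random stand-in built from permutations, not an actual Ramanujan graph, and all sizes are arbitrary) shows the universal sparsification pattern in action: S is a scaled d-regular adjacency matrix, and A∘S keeps only about nd of the n² entries of A.

# Minimal sketch (not from the paper): sparsify a PSD matrix A by a Hadamard
# product with a scaled expander-like adjacency matrix S.
import numpy as np

rng = np.random.default_rng(0)
n, half_d = 400, 32
d = 2 * half_d

Adj = np.zeros((n, n))
for _ in range(half_d):                       # union of random permutations, symmetrized
    P = np.eye(n)[rng.permutation(n)]
    Adj += P + P.T
S = (n / d) * Adj                             # scaled d-regular (multi)graph adjacency

M = rng.uniform(-1, 1, size=(n, n))
A = M @ M.T
A /= np.abs(A).max()                          # PSD, entries bounded by 1 in magnitude

ones = np.ones((n, n))
print("||1 - S||_2 / n   =", round(np.linalg.norm(ones - S, 2) / n, 3))
print("||A - A∘S||_2 / n =", round(np.linalg.norm(A - A * S, 2) / n, 3))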

Cite as

Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff. Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 13:1-13:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bhattacharjee_et_al:LIPIcs.ITCS.2024.13,
  author =	{Bhattacharjee, Rajarshi and Dexter, Gregory and Musco, Cameron and Ray, Archan and Sachdeva, Sushant and Woodruff, David P.},
  title =	{{Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{13:1--13:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.13},
  URN =		{urn:nbn:de:0030-drops-195415},
  doi =		{10.4230/LIPIcs.ITCS.2024.13},
  annote =	{Keywords: sublinear algorithms, randomized linear algebra, spectral sparsification, expanders}
}
Document
Electrical Flows for Polylogarithmic Competitive Oblivious Routing

Authors: Gramoz Goranci, Monika Henzinger, Harald Räcke, Sushant Sachdeva, and A. R. Sricharan

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Oblivious routing is a well-studied paradigm that uses static, precomputed routing tables for selecting routing paths within a network. Existing oblivious routing schemes with polylogarithmic competitive ratio for general networks are tree-based, in the sense that routing is performed according to a convex combination of trees. However, this restriction to trees leads to a construction whose running time is quadratic in the size of the network and which does not parallelize well. In this paper, we study oblivious routing schemes based on electrical routing. In particular, we show that general networks with n vertices and m edges admit a routing scheme that has competitive ratio O(log² n) and consists of a convex combination of only O(√m) electrical routings. This immediately leads to an improved construction algorithm with running time Õ(m^{3/2}) that can also be implemented in parallel with Õ(√m) depth.
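
A minimal numpy sketch (not from the paper; the resistances below are arbitrary placeholders) of the basic object: an oblivious routing formed as a convex combination of electrical routings, each defined by its own edge resistances. Any such combination routes every demand exactly, because each electrical routing does.

# Minimal sketch (assumptions, not from the paper): a convex combination of
# electrical routings as a single oblivious routing operator.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
n, m = 4, len(edges)
B = np.zeros((n, m))
for j, (u, v) in enumerate(edges):
    B[u, j], B[v, j] = 1.0, -1.0

def electrical_routing(resistances):
    """Linear operator mapping a demand vector to an edge flow."""
    W = np.diag(1.0 / np.asarray(resistances))      # conductances
    L = B @ W @ B.T                                 # weighted Laplacian
    return W @ B.T @ np.linalg.pinv(L)

rng = np.random.default_rng(1)
routings = [electrical_routing(rng.uniform(0.5, 2.0, size=m)) for _ in range(3)]
weights = np.full(3, 1.0 / 3.0)                     # convex combination weights

b = np.zeros(n); b[0], b[2] = 1.0, -1.0             # demand: one unit from 0 to 2
f = sum(w * (R @ b) for w, R in zip(weights, routings))
assert np.allclose(B @ f, b)                        # the combined routing meets the demand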

Cite as

Gramoz Goranci, Monika Henzinger, Harald Räcke, Sushant Sachdeva, and A. R. Sricharan. Electrical Flows for Polylogarithmic Competitive Oblivious Routing. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 55:1-55:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{goranci_et_al:LIPIcs.ITCS.2024.55,
  author =	{Goranci, Gramoz and Henzinger, Monika and R\"{a}cke, Harald and Sachdeva, Sushant and Sricharan, A. R.},
  title =	{{Electrical Flows for Polylogarithmic Competitive Oblivious Routing}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{55:1--55:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.55},
  URN =		{urn:nbn:de:0030-drops-195830},
  doi =		{10.4230/LIPIcs.ITCS.2024.55},
  annote =	{Keywords: oblivious routing, electrical flows}
}
Document
Track A: Algorithms, Complexity and Games
Almost-Linear-Time Weighted 𝓁_p-Norm Solvers in Slightly Dense Graphs via Sparsification

Authors: Deeksha Adil, Brian Bullins, Rasmus Kyng, and Sushant Sachdeva

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We give almost-linear-time algorithms for constructing sparsifiers with n poly(log n) edges that approximately preserve weighted (𝓁₂² + 𝓁_p^p) flow or voltage objectives on graphs. For flow objectives, this is the first sparsifier construction for such mixed objectives beyond unit 𝓁_p weights, and it is based on expander decompositions. For voltage objectives, we give the first sparsifier construction for these objectives, which we build using graph spanners and leverage-score sampling. Together with the iterative refinement framework of [Adil et al., SODA 2019] and a new multiplicative-weights-based constant-approximation algorithm for mixed-objective flows or voltages, we show how to find (1+2^{-poly(log n)})-approximations for weighted 𝓁_p-norm minimizing flows or voltages in p ⋅ (m^{1+o(1)} + n^{4/3 + o(1)}) time for p = ω(1), which is almost linear for graphs that are slightly dense (m ≥ n^{4/3 + o(1)}).
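
For illustration, a small scipy sketch (not from the paper, and not its algorithm) of the mixed (𝓁₂² + 𝓁_p^p) flow objective on a tiny graph, solved directly by a generic constrained optimizer; the paper's contribution is solving such problems at almost-linear scale via sparsification and iterative refinement.

# Minimal sketch (assumptions, not from the paper): minimize sum_e g_e f_e^2 + w_e |f_e|^p
# subject to the flow conservation constraint B f = b, on a tiny graph.
import numpy as np
from scipy.optimize import minimize

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
n, m = 4, len(edges)
B = np.zeros((n, m))
for j, (u, v) in enumerate(edges):
    B[u, j], B[v, j] = 1.0, -1.0

b = np.zeros(n); b[0], b[3] = 1.0, -1.0      # route one unit from vertex 0 to vertex 3
g = np.ones(m)                                # quadratic (l_2^2) weights
w = np.ones(m)                                # l_p^p weights
p = 8.0

objective = lambda f: np.sum(g * f**2) + np.sum(w * np.abs(f)**p)
res = minimize(objective, x0=np.zeros(m), method="SLSQP",
               constraints={"type": "eq", "fun": lambda f: B @ f - b})
print("flow:", np.round(res.x, 3), " objective:", round(float(res.fun), 4))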

Cite as

Deeksha Adil, Brian Bullins, Rasmus Kyng, and Sushant Sachdeva. Almost-Linear-Time Weighted 𝓁_p-Norm Solvers in Slightly Dense Graphs via Sparsification. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 9:1-9:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{adil_et_al:LIPIcs.ICALP.2021.9,
  author =	{Adil, Deeksha and Bullins, Brian and Kyng, Rasmus and Sachdeva, Sushant},
  title =	{{Almost-Linear-Time Weighted 𝓁\underline{p}-Norm Solvers in Slightly Dense Graphs via Sparsification}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{9:1--9:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.9},
  URN =		{urn:nbn:de:0030-drops-140782},
  doi =		{10.4230/LIPIcs.ICALP.2021.9},
  annote =	{Keywords: Weighted 𝓁\underline{p}-norm, Sparsification, Spanners, Iterative Refinement}
}
Document
Convergence Results for Neural Networks via Electrodynamics

Authors: Rina Panigrahy, Ali Rahimi, Sushant Sachdeva, and Qiuyi Zhang

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
We study whether a depth-two neural network can learn another depth-two network using gradient descent. Assuming a linear output node, we show that the question of whether gradient descent converges to the target function is equivalent to the following question in electrodynamics: given k fixed protons in ℝ^d and k electrons, each moving due to the attractive force from the protons and the repulsive force from the remaining electrons, will all the electrons, at equilibrium, be matched up with the protons, up to a permutation? Under the standard electrical force, this follows from Earnshaw's classical theorem. In our setting, the force is determined by the activation function and the input distribution. Building on this equivalence, we prove the existence of an activation function such that gradient descent learns at least one of the hidden nodes in the target network. Iterating, we show that gradient descent can be used to learn the entire network one node at a time.
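
A toy numpy sketch (assumptions: tanh activation and Gaussian inputs, neither of which is the paper's carefully designed setting) of the teacher-student setup described above: a depth-two student with a linear output node is trained by gradient descent on data labelled by a depth-two teacher, and whether the student's hidden units ("electrons") end up aligned with the teacher's ("protons") is exactly the question the paper studies.

# Toy sketch (not from the paper): gradient descent in a depth-two teacher-student setup.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_samples, lr, steps = 5, 3, 2000, 0.1, 2000

teacher = rng.normal(size=(k, d))             # "protons": fixed hidden directions
teacher /= np.linalg.norm(teacher, axis=1, keepdims=True)
student = rng.normal(size=(k, d))             # "electrons": trainable hidden directions
student /= np.linalg.norm(student, axis=1, keepdims=True)

act = np.tanh                                  # stand-in activation; the paper designs its own
X = rng.normal(size=(n_samples, d))            # Gaussian inputs
y = act(X @ teacher.T).sum(axis=1)             # linear (unit-weight) output node

for _ in range(steps):
    H = act(X @ student.T)                     # hidden activations, shape (n, k)
    resid = H.sum(axis=1) - y                  # linear output node with unit weights
    grad = ((1 - H**2) * resid[:, None]).T @ X / n_samples   # d tanh(z)/dz = 1 - tanh^2(z)
    student -= lr * grad

# compare each student unit to its closest teacher unit (up to permutation and sign)
aligned = np.abs(student / np.linalg.norm(student, axis=1, keepdims=True) @ teacher.T)
print("best alignment per student unit:", np.round(aligned.max(axis=1), 3))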

Cite as

Rina Panigrahy, Ali Rahimi, Sushant Sachdeva, and Qiuyi Zhang. Convergence Results for Neural Networks via Electrodynamics. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{panigrahy_et_al:LIPIcs.ITCS.2018.22,
  author =	{Panigrahy, Rina and Rahimi, Ali and Sachdeva, Sushant and Zhang, Qiuyi},
  title =	{{Convergence Results for Neural Networks via Electrodynamics}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.22},
  URN =		{urn:nbn:de:0030-drops-83521},
  doi =		{10.4230/LIPIcs.ITCS.2018.22},
  annote =	{Keywords: Deep Learning, Learning Theory, Non-convex Optimization}
}