15 Search Results for "Musco, Christopher"


Document
APPROX
Streaming Algorithms for Network Design

Authors: Chandra Chekuri, Rhea Jain, Sepideh Mahabadi, and Ali Vakilian

Published in: LIPIcs, Volume 353, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)


Abstract
We consider the Survivable Network Design problem (SNDP) in the single-pass insertion-only streaming model. The input to SNDP is an edge-weighted graph G = (V, E) and an integer connectivity requirement r(uv) for each u, v ∈ V. The objective is to find a minimum-weight subgraph H ⊆ G such that, for every pair of vertices u, v ∈ V, u and v are r(uv)-edge/vertex-connected. Recent work by [Ce Jin et al., 2024] obtained approximation algorithms for edge-connectivity augmentation, and via that, also derived algorithms for edge-connectivity SNDP (EC-SNDP). In this work we consider the vertex-connectivity setting (VC-SNDP) and obtain several results for it, as well as improved results for EC-SNDP.
- We provide a general framework for solving connectivity problems, including SNDP and others, in the streaming setting; it is based on a connection to fault-tolerant spanners. For VC-SNDP we provide an O(tk)-approximation in Õ(k^{1-1/t}n^{1 + 1/t}) space, where k is the maximum connectivity requirement, assuming an exact algorithm at the end of the stream. Using a refined LP-based analysis, we provide an O(βt)-approximation where β is the integrality gap of the natural cut-based LP relaxation. These are the first approximation algorithms for VC-SNDP in the streaming model. When applied to EC-SNDP, our framework provides an O(t)-approximation in Õ(k^{1/2-1/(2t)}n^{1 + 1/t} + kn) space, improving on the O(t log k)-approximation of [Ce Jin et al., 2024] that uses Õ(kn^{1+1/t}) space; this also extends to element-connectivity SNDP.
- We consider vertex-connectivity augmentation in the link-arrival model. The input is a k-vertex-connected spanning subgraph G, and additional weighted links L arrive in the stream; the goal is to store a minimum-weight set of links such that G ∪ L is (k+1)-vertex-connected. We obtain constant-factor approximations in near-linear space for k = 1, 2. Our result for k = 2 is based on the SPQR tree, a novel application of this well-known representation of 2-connected graphs.
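
The spanner connection underlying the framework above can be illustrated, in a much simplified and non-fault-tolerant form, by the classical greedy rule: keep a streamed edge only if its endpoints are not already within distance 2t-1 among the edges kept so far. The Python sketch below is a toy single-pass version of that rule for unweighted graphs; it is not the paper's algorithm, and the graph, stretch parameter t, and helper names are illustrative assumptions.

import networkx as nx

def greedy_spanner_stream(edges, n, t=2):
    """Toy single-pass greedy (2t-1)-spanner for an unweighted edge stream.

    An arriving edge (u, v) is stored only if u and v are not already
    connected by a path of length <= 2t - 1 among the stored edges.
    """
    H = nx.Graph()
    H.add_nodes_from(range(n))
    for u, v in edges:
        try:
            dist = nx.shortest_path_length(H, u, v)
        except nx.NetworkXNoPath:
            dist = float("inf")
        if dist > 2 * t - 1:          # no short path yet, so store the edge
            H.add_edge(u, v)
    return H

# Example: stream the edges of a small random graph (illustrative only).
G = nx.gnp_random_graph(50, 0.3, seed=1)
H = greedy_spanner_stream(list(G.edges()), 50, t=2)
print(G.number_of_edges(), "edges streamed ->", H.number_of_edges(), "edges stored")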

Cite as

Chandra Chekuri, Rhea Jain, Sepideh Mahabadi, and Ali Vakilian. Streaming Algorithms for Network Design. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 353, pp. 4:1-4:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{chekuri_et_al:LIPIcs.APPROX/RANDOM.2025.4,
  author =	{Chekuri, Chandra and Jain, Rhea and Mahabadi, Sepideh and Vakilian, Ali},
  title =	{{Streaming Algorithms for Network Design}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)},
  pages =	{4:1--4:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-397-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{353},
  editor =	{Ene, Alina and Chattopadhyay, Eshan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2025.4},
  URN =		{urn:nbn:de:0030-drops-243709},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2025.4},
  annote =	{Keywords: Streaming Algorithms, Survivable Network Design, Fault-Tolerant Spanners}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Oblivious Subspace Embeddings with Near-Optimal Sparsity

Authors: Shabarish Chenakkod, Michał Dereziński, and Xiaoyu Dong

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
An oblivious subspace embedding is a random m × n matrix Π such that, for any d-dimensional subspace, with high probability Π preserves the norms of all vectors in that subspace within a 1±ε factor. In this work, we give an oblivious subspace embedding with the optimal dimension m = Θ(d/ε²) that has a near-optimal sparsity of Õ(1/ε) non-zero entries per column of Π. This is the first result to nearly match the conjecture of Nelson and Nguyen [FOCS 2013] in terms of the best sparsity attainable by an optimal oblivious subspace embedding, improving on a prior bound of Õ(1/ε⁶) non-zeros per column [Chenakkod et al., STOC 2024]. We further extend our approach to the non-oblivious setting, proposing a new family of Leverage Score Sparsified embeddings with Independent Columns, which yield faster runtimes for matrix approximation and regression tasks. In our analysis, we develop a new method which uses a decoupling argument together with the cumulant method for bounding the edge universality error of isotropic random matrices. To achieve near-optimal sparsity, we combine this general-purpose approach with new trace inequalities that leverage the specific structure of our subspace embedding construction.
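
As a hedged numerical illustration of the object being studied, the sketch below builds a generic CountSketch/OSNAP-style sparse embedding with s random ±1/√s entries per column (not the authors' construction; all parameter choices are illustrative and constants are not tuned) and checks how well it preserves a random d-dimensional subspace.

import numpy as np

rng = np.random.default_rng(0)

def sparse_ose(m, n, s, rng):
    """Generic sparse embedding: s nonzero entries (each ±1/sqrt(s)) per column."""
    Pi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        Pi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Pi

n, d, eps = 2000, 20, 0.25
m = int(d / eps**2)          # target dimension of order d / eps^2
s = 8                        # nonzeros per column (illustrative choice)

U, _ = np.linalg.qr(rng.standard_normal((n, d)))   # orthonormal basis of a random subspace
Pi = sparse_ose(m, n, s, rng)

# If Pi is a good subspace embedding, all singular values of Pi @ U are close to 1.
sv = np.linalg.svd(Pi @ U, compute_uv=False)
print("singular values of Pi @ U lie in", (round(sv.min(), 3), round(sv.max(), 3)))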

Cite as

Shabarish Chenakkod, Michał Dereziński, and Xiaoyu Dong. Optimal Oblivious Subspace Embeddings with Near-Optimal Sparsity. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 55:1-55:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{chenakkod_et_al:LIPIcs.ICALP.2025.55,
  author =	{Chenakkod, Shabarish and Derezi\'{n}ski, Micha{\l} and Dong, Xiaoyu},
  title =	{{Optimal Oblivious Subspace Embeddings with Near-Optimal Sparsity}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{55:1--55:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.55},
  URN =		{urn:nbn:de:0030-drops-234324},
  doi =		{10.4230/LIPIcs.ICALP.2025.55},
  annote =	{Keywords: Randomized linear algebra, matrix sketching, subspace embeddings}
}
Document
Track A: Algorithms, Complexity and Games
Deterministic Complexity Analysis of Hermitian Eigenproblems

Authors: Aleksandros Sobczyk

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
In this work we revisit the arithmetic and bit complexity of Hermitian eigenproblems. Recently, [BGVKS, FOCS 2020] proved that a (non-Hermitian) matrix A can be diagonalized with a randomized algorithm in O(n^{ω}log²(n/ε)) arithmetic operations, where ω ≲ 2.371 is the square matrix multiplication exponent, and [Shah, SODA 2025] significantly improved the bit complexity for the Hermitian case. Our main goal is to obtain similar deterministic complexity bounds for various Hermitian eigenproblems. In the Real RAM model, we show that a Hermitian matrix can be diagonalized deterministically in O(n^{ω}log(n)+n²polylog(n/ε)) arithmetic operations, improving the classic deterministic Õ(n³) algorithms, and derandomizing the aforementioned state-of-the-art. The main technical step is a complete, detailed analysis of a well-known divide-and-conquer tridiagonal eigensolver of Gu and Eisenstat [GE95], when accelerated with the Fast Multipole Method, asserting that it can accurately diagonalize a symmetric tridiagonal matrix in nearly O(n²) operations. In finite precision, we show that an algorithm by Schönhage [Sch72] to reduce a Hermitian matrix to tridiagonal form is stable in the floating point model, using O(log(n/ε)) bits of precision. This leads to a deterministic algorithm to compute all the eigenvalues of a Hermitian matrix in O(n^{ω}ℱ(log(n/ε)) + n²polylog(n/ε)) bit operations, where ℱ(b) ∈ Õ(b) is the bit complexity of a single floating point operation on b bits. This improves the best known Õ(n³) deterministic and O(n^{ω}log²(n/ε)ℱ(log(n/ε))) randomized complexities. We conclude with some other useful subroutines such as computing spectral gaps, condition numbers, and spectral projectors, and with some open problems.
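
The pipeline analyzed above (Hermitian matrix → tridiagonal form → tridiagonal eigensolver) can be sketched with off-the-shelf routines; in the sketch below, SciPy's Householder-based Hessenberg reduction stands in for Schönhage's reduction and SciPy's tridiagonal solver stands in for the Gu-Eisenstat divide-and-conquer eigensolver, so this is a structural illustration only, not the algorithms whose complexity the paper analyzes.

import numpy as np
from scipy.linalg import hessenberg, eigvalsh_tridiagonal

rng = np.random.default_rng(0)
n = 300
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                          # random real symmetric (Hermitian) matrix

# Step 1: orthogonal reduction to tridiagonal form. (The Hessenberg form of a
# symmetric matrix is tridiagonal; this stands in for Schönhage's reduction.)
T = hessenberg(A)
diag, offdiag = np.diag(T), np.diag(T, k=1)

# Step 2: eigenvalues of the tridiagonal matrix. (Stand-in for the
# divide-and-conquer tridiagonal eigensolver of Gu and Eisenstat.)
evals = eigvalsh_tridiagonal(diag, offdiag)

print("matches dense eigensolver:", np.allclose(np.sort(evals), np.linalg.eigvalsh(A), atol=1e-6))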

Cite as

Aleksandros Sobczyk. Deterministic Complexity Analysis of Hermitian Eigenproblems. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 131:1-131:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{sobczyk:LIPIcs.ICALP.2025.131,
  author =	{Sobczyk, Aleksandros},
  title =	{{Deterministic Complexity Analysis of Hermitian Eigenproblems}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{131:1--131:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.131},
  URN =		{urn:nbn:de:0030-drops-235081},
  doi =		{10.4230/LIPIcs.ICALP.2025.131},
  annote =	{Keywords: Hermitian eigenproblem, eigenvalues, SVD, tridiagonal reduction, matrix multiplication time, diagonalization, bit complexity}
}
Document
Track A: Algorithms, Complexity and Games
Decremental (1+ε)-Approximate Maximum Eigenvector: Dynamic Power Method

Authors: Deeksha Adil and Thatchaphol Saranurak

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
We present a dynamic algorithm for maintaining a (1+ε)-approximate maximum eigenvector and eigenvalue of a positive semi-definite matrix A undergoing decreasing updates, i.e., updates which may only decrease eigenvalues. Given a vector v updating A ← A-vv^⊤, our algorithm takes Õ(nnz(v)) amortized update time, i.e., polylogarithmic time per non-zero entry of the update vector. Our technique is based on a novel analysis of the influential power method in the dynamic setting. The two previous sets of techniques have the following drawbacks: (1) algebraic techniques can maintain exact solutions but their update time is at least polynomial per non-zero, and (2) sketching techniques admit polylogarithmic update time but suffer from a crude additive approximation. Our algorithm assumes an oblivious adversary. Interestingly, we show that any algorithm with polylogarithmic update time per non-zero that works against an adaptive adversary and satisfies an additional natural property would imply a breakthrough for checking psd-ness of matrices in Õ(n²) time, instead of O(n^ω) time.
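
A toy illustration of the idea of warm-starting the power method across decreasing updates A ← A - vv^⊤; the actual update rule, restart schedule, and amortized analysis in the paper are more involved, and the matrix, iteration counts, and update magnitudes below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 200

# PSD matrix with a clear spectral gap (top eigenvalue 10, the rest below 5).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.sort(rng.uniform(0.0, 5.0, n))
eigs[-1] = 10.0
A = (Q * eigs) @ Q.T

def power_method(A, x0, iters):
    """Plain power iteration started from x0; returns (unit vector, Rayleigh quotient)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x, float(x @ A @ x)

x, lam = power_method(A, rng.standard_normal(n), iters=50)

# Decreasing updates A <- A - v v^T, warm-starting from the previous estimate
# instead of a fresh random vector, so only a few iterations are needed.
for _ in range(20):
    v = 0.05 * rng.standard_normal(n)       # small rank-one decrease
    A -= np.outer(v, v)
    x, lam = power_method(A, x, iters=5)

print("power-method estimate:", round(lam, 4), " exact top eigenvalue:", round(np.linalg.eigvalsh(A)[-1], 4))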

Cite as

Deeksha Adil and Thatchaphol Saranurak. Decremental (1+ε)-Approximate Maximum Eigenvector: Dynamic Power Method. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{adil_et_al:LIPIcs.ICALP.2025.6,
  author =	{Adil, Deeksha and Saranurak, Thatchaphol},
  title =	{{Decremental (1+\epsilon)-Approximate Maximum Eigenvector: Dynamic Power Method}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.6},
  URN =		{urn:nbn:de:0030-drops-233834},
  doi =		{10.4230/LIPIcs.ICALP.2025.6},
  annote =	{Keywords: Power Method, Dynamic Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Guessing Efficiently for Constrained Subspace Approximation

Authors: Aditya Bhaskara, Sepideh Mahabadi, Madhusudhan Reddy Pittu, Ali Vakilian, and David P. Woodruff

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
In this paper we study the constrained subspace approximation problem. Given a set of n points {a₁,…,a_n} in ℝ^d, the goal of the subspace approximation problem is to find a k-dimensional subspace that best approximates the input points. More precisely, for a given p ≥ 1, we aim to minimize the pth power of the 𝓁_p norm of the error vector (‖a₁-Pa₁‖,…,‖a_n-Pa_n‖), where P denotes the projection matrix onto the subspace and the norms are Euclidean. In constrained subspace approximation (CSA), we additionally have constraints on the projection matrix P. In its most general form, we require P to belong to a given subset 𝒮 that is described explicitly or implicitly. We introduce a general framework for constrained subspace approximation. Our approach, which we term coreset-guess-solve, yields either (1+ε)-multiplicative or ε-additive approximations for a variety of constraints. We show that it provides new algorithms for partition-constrained subspace approximation with applications to fair subspace approximation, k-means clustering, and projected non-negative matrix factorization, among others. Specifically, while we reconstruct the best known bounds for k-means clustering in Euclidean spaces, we improve the known results for the remainder of the problems.
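
For intuition about the objective, the unconstrained p = 2 case is solved exactly by projecting onto the top-k right singular subspace; the sketch below only evaluates the cost ∑_i ‖a_i - Pa_i‖^p for that projection and for a random subspace, and does not represent the constrained variants or the coreset-guess-solve framework (the data and sizes are illustrative).

import numpy as np

rng = np.random.default_rng(0)
n, d, k, p = 500, 30, 5, 2

A = rng.standard_normal((n, d))          # rows are the points a_1, ..., a_n

def subspace_cost(A, P, p):
    """sum_i ||a_i - P a_i||^p with Euclidean row norms, P a d x d projection."""
    residuals = A - A @ P
    return float(np.sum(np.linalg.norm(residuals, axis=1) ** p))

# Unconstrained optimum for p = 2: project onto the top-k right singular subspace.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
V_k = Vt[:k].T                           # d x k orthonormal basis
P_opt = V_k @ V_k.T

# A random k-dimensional subspace, for comparison.
Q, _ = np.linalg.qr(rng.standard_normal((d, k)))

print("optimal p=2 cost     :", round(subspace_cost(A, P_opt, p), 2))
print("random subspace cost :", round(subspace_cost(A, Q @ Q.T, p), 2))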

Cite as

Aditya Bhaskara, Sepideh Mahabadi, Madhusudhan Reddy Pittu, Ali Vakilian, and David P. Woodruff. Guessing Efficiently for Constrained Subspace Approximation. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 29:1-29:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{bhaskara_et_al:LIPIcs.ICALP.2025.29,
  author =	{Bhaskara, Aditya and Mahabadi, Sepideh and Pittu, Madhusudhan Reddy and Vakilian, Ali and Woodruff, David P.},
  title =	{{Guessing Efficiently for Constrained Subspace Approximation}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{29:1--29:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.29},
  URN =		{urn:nbn:de:0030-drops-234068},
  doi =		{10.4230/LIPIcs.ICALP.2025.29},
  annote =	{Keywords: parameterized complexity, low rank approximation, fairness, non-negative matrix factorization, clustering}
}
Document
Space Complexity of Minimum Cut Problems in Single-Pass Streams

Authors: Matthew Ding, Alexandro Garces, Jason Li, Honghao Lin, Jelani Nelson, Vihan Shah, and David P. Woodruff

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
We consider the problem of finding a minimum cut of a weighted graph presented as a single-pass stream. While graph sparsification in streams has been intensively studied, the specific application of finding minimum cuts in streams is less well-studied. To this end, we show upper and lower bounds on minimum cut problems in insertion-only streams for a variety of settings, including for both randomized and deterministic algorithms, for both arbitrary and random order streams, and for both approximate and exact algorithms. One of our main results is an Õ(n/ε) space algorithm with fast update time for approximating a spectral cut query with high probability on a stream given in an arbitrary order. Our result breaks the Ω(n/ε²) space lower bound required of a sparsifier that approximates all cuts simultaneously. Using this result, we provide streaming algorithms with near-optimal space of Õ(n/ε) for minimum cut and approximate all-pairs effective resistances, with matching space lower bounds. The amortized update time of our algorithms is Õ(1), provided that the number of edges in the input graph is at least (n/ε²)^{1+o(1)}. We also give a generic way of incorporating sketching into a recursive contraction algorithm to improve the post-processing time of our algorithms. In addition to these results, we give a random-order streaming algorithm that computes the exact minimum cut on a simple, unweighted graph using Õ(n) space. Finally, we give an Ω(n/ε²) space lower bound for deterministic minimum cut algorithms which matches the best-known upper bound up to polylogarithmic factors.
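
For reference only, the exact global minimum cut that the streaming algorithms above approximate can be computed offline with the Stoer-Wagner algorithm; the snippet below is that offline baseline on a small synthetic weighted graph (graph and weights are illustrative) and does not implement any of the streaming sketches.

import random
import networkx as nx

random.seed(0)

# Small connected weighted graph: a cycle backbone plus random chords.
G = nx.cycle_graph(30)
G.add_edges_from((random.randrange(30), random.randrange(30)) for _ in range(60))
G.remove_edges_from(list(nx.selfloop_edges(G)))
for u, v in G.edges():
    G[u][v]["weight"] = random.randint(1, 10)

cut_value, (side_a, side_b) = nx.stoer_wagner(G)   # exact global minimum cut
print("min cut weight:", cut_value, "| partition sizes:", len(side_a), len(side_b))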

Cite as

Matthew Ding, Alexandro Garces, Jason Li, Honghao Lin, Jelani Nelson, Vihan Shah, and David P. Woodruff. Space Complexity of Minimum Cut Problems in Single-Pass Streams. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 43:1-43:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{ding_et_al:LIPIcs.ITCS.2025.43,
  author =	{Ding, Matthew and Garces, Alexandro and Li, Jason and Lin, Honghao and Nelson, Jelani and Shah, Vihan and Woodruff, David P.},
  title =	{{Space Complexity of Minimum Cut Problems in Single-Pass Streams}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{43:1--43:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.43},
  URN =		{urn:nbn:de:0030-drops-226714},
  doi =		{10.4230/LIPIcs.ITCS.2025.43},
  annote =	{Keywords: minimum cut, approximate, random order, lower bound}
}
Document
Learning-Augmented Streaming Algorithms for Approximating MAX-CUT

Authors: Yinhao Dong, Pan Peng, and Ali Vakilian

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
We study learning-augmented streaming algorithms for estimating the value of MAX-CUT in a graph. In the classical streaming model, while a 1/2-approximation for estimating the value of MAX-CUT can be trivially achieved with O(1) words of space, Kapralov and Krachun [STOC’19] showed that this is essentially the best possible: for any ε > 0, any (randomized) single-pass streaming algorithm that achieves an approximation ratio of at least 1/2 + ε requires Ω(n / 2^poly(1/ε)) space. We show that it is possible to surpass the 1/2-approximation barrier using just O(1) words of space by leveraging a (machine learned) oracle. Specifically, we consider streaming algorithms that are equipped with an ε-accurate oracle that, for each vertex in the graph, returns its correct label in {-1, +1} (corresponding to an optimal MAX-CUT solution in the graph) with probability 1/2 + ε, and the incorrect label otherwise. Within this framework, we present a single-pass algorithm that approximates the value of MAX-CUT to within a factor of 1/2 + Ω(ε²) with probability at least 2/3 for insertion-only streams, using only poly(1/ε) words of space. We also extend our algorithm to fully dynamic streams while maintaining a space complexity of poly(1/ε, log n) words.
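
The Ω(ε²) advantage has a simple back-of-the-envelope explanation: if each endpoint's label is correct independently with probability 1/2 + ε, then the two endpoints of an edge cut by the optimal solution receive different oracle labels with probability 1/2 + 2ε², versus 1/2 - 2ε² for an uncut edge, so the observed disagreement rate can be debiased into a cut estimate. The sketch below runs this calculation on a bipartite graph (where the planted bisection is an optimal MAX-CUT); it is only a toy illustration of the oracle model, not the paper's streaming algorithm, and all sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, m, eps = 2000, 20000, 0.2

# Bipartite graph: the planted bisection cuts every edge, so it is an optimal MAX-CUT.
labels = np.array([1] * (n // 2) + [-1] * (n // 2))
left, right = np.where(labels == 1)[0], np.where(labels == -1)[0]
edges = np.column_stack([rng.choice(left, m), rng.choice(right, m)])

# eps-accurate oracle: reports each vertex's optimal label correctly w.p. 1/2 + eps.
correct = rng.random(n) < 0.5 + eps
oracle = np.where(correct, labels, -labels)

# Debias the fraction f of edges whose oracle labels disagree:
#   cut edge   -> labels disagree with probability 1/2 + 2 eps^2
#   uncut edge -> labels disagree with probability 1/2 - 2 eps^2
f = np.mean(oracle[edges[:, 0]] != oracle[edges[:, 1]])
est_cut = (f - 0.5 + 2 * eps**2) / (4 * eps**2) * m

print("estimated MAX-CUT value:", int(round(est_cut)), " true value:", m)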

Cite as

Yinhao Dong, Pan Peng, and Ali Vakilian. Learning-Augmented Streaming Algorithms for Approximating MAX-CUT. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 44:1-44:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dong_et_al:LIPIcs.ITCS.2025.44,
  author =	{Dong, Yinhao and Peng, Pan and Vakilian, Ali},
  title =	{{Learning-Augmented Streaming Algorithms for Approximating MAX-CUT}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{44:1--44:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.44},
  URN =		{urn:nbn:de:0030-drops-226728},
  doi =		{10.4230/LIPIcs.ITCS.2025.44},
  annote =	{Keywords: Learning-Augmented Algorithms, Graph Streaming Algorithms, MAX-CUT}
}
Document
RANDOM
Faster Algorithms for Schatten-p Low Rank Approximation

Authors: Praneeth Kacham and David P. Woodruff

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study algorithms for the Schatten-p Low Rank Approximation (LRA) problem. First, we show that by using fast rectangular matrix multiplication algorithms and different block sizes, we can improve the running time of the algorithms in the recent work of Bakshi, Clarkson and Woodruff (STOC 2022). We then show that by carefully combining our new algorithm with the algorithm of Li and Woodruff (ICML 2020), we can obtain even faster algorithms for Schatten-p LRA. While the block-based algorithms are fast in the real number model, we do not have a stability analysis which shows that the algorithms work when implemented on a machine with polylogarithmic bits of precision. We show that the LazySVD algorithm of Allen-Zhu and Li (NeurIPS 2016) can be implemented on a floating point machine with a number of bits of precision that is only logarithmic in the input parameters. As far as we are aware, this is the first stability analysis of any algorithm using O((k/√ε)poly(log n)) matrix-vector products with the matrix A to output a (1+ε)-approximate solution for the rank-k Schatten-p LRA problem.
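
For context, the Schatten-p error of any candidate rank-k approximation can be read off the singular values of the residual; the sketch below compares the exact truncated SVD with a simple randomized block (power) method, which is only a generic stand-in for the block-based and LazySVD-style algorithms discussed above, with illustrative parameters.

import numpy as np

rng = np.random.default_rng(0)
n, k, p = 400, 10, 3

# Matrix with a decaying spectrum (columns scaled by 1/i).
A = rng.standard_normal((n, n)) @ np.diag(1.0 / np.arange(1, n + 1))

def schatten_p_error(A, B, p):
    """||A - B||_{S_p}^p = sum of p-th powers of the singular values of A - B."""
    sv = np.linalg.svd(A - B, compute_uv=False)
    return float(np.sum(sv ** p))

# Exact best rank-k approximation in every Schatten norm: truncated SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k]

# Generic randomized block method: a few power iterations on a random block.
Y = A @ rng.standard_normal((n, 2 * k))
for _ in range(3):
    Y = A @ (A.T @ Y)
Q, _ = np.linalg.qr(Y)
U2, s2, Vt2 = np.linalg.svd(Q.T @ A, full_matrices=False)
A_rand = Q @ (U2[:, :k] * s2[:k]) @ Vt2[:k]

print("Schatten-p error, truncated SVD   :", round(schatten_p_error(A, A_k, p), 4))
print("Schatten-p error, randomized block:", round(schatten_p_error(A, A_rand, p), 4))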

Cite as

Praneeth Kacham and David P. Woodruff. Faster Algorithms for Schatten-p Low Rank Approximation. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 55:1-55:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{kacham_et_al:LIPIcs.APPROX/RANDOM.2024.55,
  author =	{Kacham, Praneeth and Woodruff, David P.},
  title =	{{Faster Algorithms for Schatten-p Low Rank Approximation}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{55:1--55:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.55},
  URN =		{urn:nbn:de:0030-drops-210488},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.55},
  annote =	{Keywords: Low Rank Approximation, Schatten Norm, Rectangular Matrix Multiplication, Stability Analysis}
}
Document
Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra

Authors: Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Let S ∈ ℝ^{n × n} be any matrix satisfying ‖1-S‖₂ ≤ εn, where 1 is the all ones matrix and ‖⋅‖₂ is the spectral norm. It is well-known that there exists S with just O(n/ε²) non-zero entries achieving this guarantee: we can let S be the scaled adjacency matrix of a Ramanujan expander graph. We show that, beyond giving a sparse approximation to the all ones matrix, S yields a universal sparsifier for any positive semidefinite (PSD) matrix. In particular, for any PSD A ∈ ℝ^{n×n} which is normalized so that its entries are bounded in magnitude by 1, we show that ‖A-A∘S‖₂ ≤ εn, where ∘ denotes the entrywise (Hadamard) product. Our techniques also yield universal sparsifiers for non-PSD matrices. In this case, we show that if S satisfies ‖1-S‖₂ ≤ (ε²n)/(c log²(1/ε)) for some sufficiently large constant c, then ‖A-A∘S‖₂ ≤ ε⋅max(n,‖A‖₁), where ‖A‖₁ is the nuclear norm. Again letting S be a scaled Ramanujan graph adjacency matrix, this yields a sparsifier with Õ(n/ε⁴) entries. We prove that the above universal sparsification bounds for both PSD and non-PSD matrices are tight up to logarithmic factors. Since A∘S can be constructed deterministically without reading all of A, our result for PSD matrices derandomizes and improves upon established results for randomized matrix sparsification, which require sampling a random subset of O((n log n)/ε²) entries and only give an approximation to any fixed A with high probability. We further show that any randomized algorithm must read at least Ω(n/ε²) entries to spectrally approximate general A to error εn, thus proving that these existing randomized algorithms are optimal up to logarithmic factors. We leverage our deterministic sparsification results to give the first deterministic algorithms for several problems, including singular value and singular vector approximation and positive semidefiniteness testing, that run in faster than matrix multiplication time. This partially addresses a significant gap between randomized and deterministic algorithms for fast linear algebraic computation. Finally, if A ∈ {-1,0,1}^{n × n} is PSD, we show that a spectral approximation à with ‖A-Ã‖₂ ≤ εn can be obtained by deterministically reading Õ(n/ε) entries of A. This improves the 1/ε dependence of our result for general PSD matrices by a quadratic factor and is information-theoretically optimal up to a logarithmic factor.
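
A hedged numerical illustration of the ‖A-A∘S‖₂ guarantee, using the scaled adjacency matrix of a random d-regular graph as a convenient stand-in for the Ramanujan expander in the deterministic construction (random regular graphs are only near-Ramanujan with high probability, and all parameters below are illustrative).

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, d = 500, 20

# Sparse mask S: scaled adjacency matrix of a random d-regular graph,
# i.e., entries n/d on edges and 0 elsewhere (d nonzeros per row).
G = nx.random_regular_graph(d, n, seed=0)
S = nx.to_numpy_array(G) * (n / d)

# A PSD matrix with entries bounded in magnitude by 1.
X = rng.standard_normal((n, n))
A = X @ X.T / n
A = A / np.abs(A).max()

err_ones = np.linalg.norm(np.ones((n, n)) - S, 2) / n   # ||1 - S||_2 / n
err_A = np.linalg.norm(A - A * S, 2) / n                # ||A - A∘S||_2 / n

print(f"||1 - S||_2 / n   = {err_ones:.3f}")
print(f"||A - A∘S||_2 / n = {err_A:.3f}   ({d} nonzeros per row of S)")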

Cite as

Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff. Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 13:1-13:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bhattacharjee_et_al:LIPIcs.ITCS.2024.13,
  author =	{Bhattacharjee, Rajarshi and Dexter, Gregory and Musco, Cameron and Ray, Archan and Sachdeva, Sushant and Woodruff, David P.},
  title =	{{Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{13:1--13:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.13},
  URN =		{urn:nbn:de:0030-drops-195415},
  doi =		{10.4230/LIPIcs.ITCS.2024.13},
  annote =	{Keywords: sublinear algorithms, randomized linear algebra, spectral sparsification, expanders}
}
Document
Efficient Block Approximate Matrix Multiplication

Authors: Chuhan Yang and Christopher Musco

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Randomized matrix algorithms have had significant recent impact on numerical linear algebra. One especially powerful class of methods is algorithms for approximate matrix multiplication based on sampling. Such methods typically sample individual matrix rows and columns using carefully chosen importance sampling probabilities. However, due to practical considerations like memory locality and the preservation of matrix structure, it is often preferable to sample contiguous blocks of rows and columns all together. Recently, (Wu, 2018) addressed this setting by developing an approximate matrix multiplication method based on block sampling. However, the method is inefficient, as it requires knowledge of optimal importance sampling probabilities that are expensive to compute. We address this issue by showing that the method of Wu can be accelerated through the use of a randomized implicit trace estimation method. Doing so allows us to provably reduce the cost of sampling to near-linear in the size of the matrices being multiplied, without impacting the accuracy of the final approximate matrix multiplication. Overall, this yields a fast practical algorithm, which we test on a number of synthetic and real-world data sets. We complement our algorithmic contribution with the first extensive empirical comparison of block algorithms for randomized matrix multiplication. Our method offers a significant runtime advantage over the method of (Wu, 2018) and also outperforms basic uniform sampling of blocks. However, we find that another recent method of (Charalambides, 2021), which uses sub-optimal but efficiently computable sampling probabilities, often (but not always) offers the best trade-off between speed and accuracy.
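
A hedged sketch of block-sampling approximate matrix multiplication with importance probabilities proportional to ‖A_i‖_F·‖B_i‖_F (the classical row/column sampling rule applied block-wise); it conveys the flavor of the block methods compared in the paper but is not the specific algorithm of (Wu, 2018) or its trace-estimation acceleration, and the matrices and block sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, d, block, samples = 1000, 1000, 50, 8

# Matrices whose "mass" concentrates on early blocks (columns/rows scaled down).
scale = np.exp(-np.arange(n) / 200)
A = rng.standard_normal((d, n)) * scale
B = rng.standard_normal((n, d)) * scale[:, None]

num_blocks = n // block
A_blocks = [A[:, i * block:(i + 1) * block] for i in range(num_blocks)]
B_blocks = [B[i * block:(i + 1) * block, :] for i in range(num_blocks)]

# Importance sampling probabilities proportional to ||A_i||_F * ||B_i||_F.
weights = np.array([np.linalg.norm(Ab) * np.linalg.norm(Bb)
                    for Ab, Bb in zip(A_blocks, B_blocks)])
probs = weights / weights.sum()

# Sample blocks with replacement and form the standard unbiased estimator.
idx = rng.choice(num_blocks, size=samples, p=probs)
approx = sum(A_blocks[i] @ B_blocks[i] / (samples * probs[i]) for i in idx)

rel_err = np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B)
print(f"relative Frobenius error using {samples}/{num_blocks} blocks: {rel_err:.3f}")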

Cite as

Chuhan Yang and Christopher Musco. Efficient Block Approximate Matrix Multiplication. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 103:1-103:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{yang_et_al:LIPIcs.ESA.2023.103,
  author =	{Yang, Chuhan and Musco, Christopher},
  title =	{{Efficient Block Approximate Matrix Multiplication}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{103:1--103:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.103},
  URN =		{urn:nbn:de:0030-drops-187562},
  doi =		{10.4230/LIPIcs.ESA.2023.103},
  annote =	{Keywords: Approximate matrix multiplication, randomized numerical linear algebra, trace estimation}
}
Document
Track A: Algorithms, Complexity and Games
Sublinear Time Eigenvalue Approximation via Random Sampling

Authors: Rajarshi Bhattacharjee, Gregory Dexter, Petros Drineas, Cameron Musco, and Archan Ray

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
We study the problem of approximating the eigenspectrum of a symmetric matrix A ∈ ℝ^{n×n} with bounded entries (i.e., ‖A‖_∞ ≤ 1). We present a simple sublinear time algorithm that approximates all eigenvalues of A up to additive error ±εn using those of a randomly sampled Õ((log³ n)/ε³)×Õ((log³ n)/ε³) principal submatrix. Our result can be viewed as a concentration bound on the complete eigenspectrum of a random submatrix, significantly extending known bounds on just the singular values (the magnitudes of the eigenvalues). We give improved error bounds of ± ε √{nnz(A)} and ±ε‖A‖_F when the rows of A can be sampled with probabilities proportional to their sparsities or their squared 𝓁₂ norms respectively. Here nnz(A) is the number of non-zero entries in A and ‖A‖_F is its Frobenius norm. Even for the strictly easier problems of approximating the singular values or testing the existence of large negative eigenvalues (Bakshi, Chepurko, and Jayaram, FOCS '20), our results are the first that take advantage of non-uniform sampling to give improved error bounds. From a technical perspective, our results require several new eigenvalue concentration and perturbation bounds for matrices with bounded entries. Our non-uniform sampling bounds require a new algorithmic approach, which judiciously zeroes out entries of a randomly sampled submatrix to reduce variance, before computing the eigenvalues of that submatrix as estimates for those of A. We complement our theoretical results with numerical simulations, which demonstrate the effectiveness of our algorithms in practice.
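
A hedged illustration of the uniform-sampling estimator: take a random principal submatrix of a bounded-entry symmetric matrix, rescale its eigenvalues by n/s, and compare the extremes to the full spectrum. The variance-reducing zeroing step described in the abstract is omitted, and the planted matrix and sample size are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, s = 2000, 400

# Bounded-entry symmetric matrix: a planted ±1 spike plus symmetric noise
# (all entries lie in [-1, 1]; the top eigenvalue is about 0.5 * n).
u = rng.choice([-1.0, 1.0], size=n)
W = rng.choice([-1.0, 1.0], size=(n, n))
A = 0.5 * np.outer(u, u) + 0.25 * (W + W.T)

# Uniformly sample a principal submatrix and rescale its eigenvalues by n/s.
idx = rng.choice(n, size=s, replace=False)
sub_eigs = (n / s) * np.linalg.eigvalsh(A[np.ix_(idx, idx)])
full_eigs = np.linalg.eigvalsh(A)

print("largest eigenvalue : full =", round(full_eigs[-1], 1), " estimate =", round(sub_eigs[-1], 1))
print("smallest eigenvalue: full =", round(full_eigs[0], 1), " estimate =", round(sub_eigs[0], 1))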

Cite as

Rajarshi Bhattacharjee, Gregory Dexter, Petros Drineas, Cameron Musco, and Archan Ray. Sublinear Time Eigenvalue Approximation via Random Sampling. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{bhattacharjee_et_al:LIPIcs.ICALP.2023.21,
  author =	{Bhattacharjee, Rajarshi and Dexter, Gregory and Drineas, Petros and Musco, Cameron and Ray, Archan},
  title =	{{Sublinear Time Eigenvalue Approximation via Random Sampling}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.21},
  URN =		{urn:nbn:de:0030-drops-180733},
  doi =		{10.4230/LIPIcs.ICALP.2023.21},
  annote =	{Keywords: sublinear algorithms, eigenvalue approximation, randomized linear algebra}
}
Document
A Local Search Algorithm for the Min-Sum Submodular Cover Problem

Authors: Lisa Hellerstein, Thomas Lidbetter, and R. Teal Witter

Published in: LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)


Abstract
We consider the problem of solving the Min-Sum Submodular Cover problem using local search. The Min-Sum Submodular Cover problem generalizes the NP-complete Min-Sum Set Cover problem, replacing the input set cover instance with a monotone submodular set function. A simple greedy algorithm achieves an approximation factor of 4, which is tight unless P=NP [Streeter and Golovin, NeurIPS, 2008]. We complement the greedy algorithm with an analysis of a local search algorithm. Building on work of Munagala et al. [ICDT, 2005], we show that, using simple initialization, a straightforward local search algorithm achieves a (4+ε)-approximate solution in time O(n³log(n/ε)), provided that the monotone submodular set function is also second-order supermodular. Second-order supermodularity has been shown to hold for a number of submodular functions of practical interest, including functions associated with set cover, matching, and facility location. We present experiments on two special cases of Min-Sum Submodular Cover and find that the local search algorithm can outperform the greedy algorithm on small data sets.
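
For the Min-Sum Set Cover special case, the 4-approximate greedy algorithm mentioned above is short to state: repeatedly pick the set covering the most uncovered elements, and charge each element the step at which it is first covered. The sketch below implements only this set-cover special case on random data (not the general submodular version or the local search algorithm analyzed in the paper).

import random

random.seed(0)

def greedy_min_sum_set_cover(universe, sets):
    """Greedy for Min-Sum Set Cover: the cost is the sum, over elements,
    of the (1-indexed) step at which each element is first covered.
    Elements never covered by any set are simply skipped in this toy version."""
    uncovered = set(universe)
    remaining = list(sets)
    order, cost, step = [], 0, 0
    while uncovered and remaining:
        step += 1
        best = max(remaining, key=lambda s: len(s & uncovered))
        newly = best & uncovered
        cost += step * len(newly)
        uncovered -= newly
        order.append(best)
        remaining.remove(best)
    return order, cost

universe = range(30)
sets = [set(random.sample(range(30), random.randint(3, 10))) for _ in range(12)]
_, cost = greedy_min_sum_set_cover(universe, sets)
print("greedy min-sum cover cost:", cost)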

Cite as

Lisa Hellerstein, Thomas Lidbetter, and R. Teal Witter. A Local Search Algorithm for the Min-Sum Submodular Cover Problem. In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, pp. 3:1-3:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{hellerstein_et_al:LIPIcs.ISAAC.2022.3,
  author =	{Hellerstein, Lisa and Lidbetter, Thomas and Witter, R. Teal},
  title =	{{A Local Search Algorithm for the Min-Sum Submodular Cover Problem}},
  booktitle =	{33rd International Symposium on Algorithms and Computation (ISAAC 2022)},
  pages =	{3:1--3:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-258-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{248},
  editor =	{Bae, Sang Won and Park, Heejin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.3},
  URN =		{urn:nbn:de:0030-drops-172880},
  doi =		{10.4230/LIPIcs.ISAAC.2022.3},
  annote =	{Keywords: Local search, submodularity, second-order supermodularity, min-sum set cover}
}
Document
Finding an Approximate Mode of a Kernel Density Estimate

Authors: Jasper C.H. Lee, Jerry Li, Christopher Musco, Jeff M. Phillips, and Wai Ming Tai

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
Given points P = {p₁,...,p_n} ⊂ ℝ^d, how do we find a point x which approximately maximizes the function 1/n ∑_{p_i ∈ P} e^{-‖p_i-x‖²}? In other words, how do we find an approximate mode of a Gaussian kernel density estimate (KDE) of P? Given the power of KDEs in representing probability distributions and other continuous functions, the basic mode finding problem is widely applicable. However, it is poorly understood algorithmically. We provide fast and provably accurate approximation algorithms for mode finding in both the low and high dimensional settings. For low (constant) dimension, our main contribution is a reduction to solving systems of polynomial inequalities. For high dimension, we prove the first dimensionality reduction result for KDE mode finding. The latter result leverages Johnson-Lindenstrauss projection, Kirszbraun’s classic extension theorem, and perhaps surprisingly, the mean-shift heuristic for mode finding. For a constant approximation factor, these algorithms run in O(n (log n)^{O(d)}) and O(nd + (log n)^{O(log³ n)}) time, respectively; more precisely, both results are stated as (1+ε)-approximation guarantees. Furthermore, for the special case of d = 2, we give a combinatorial algorithm running in O(n log² n) time. We empirically demonstrate that the random projection approach and the 2-dimensional algorithm improve over the state-of-the-art mode-finding heuristics.
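
The mean-shift heuristic referenced above has a one-line update for the Gaussian kernel: move x to the weighted average of the data with weights e^{-‖p_i-x‖²}. The sketch below runs that heuristic starting from the best data point; it is only the heuristic, not the paper's provable algorithms, and the synthetic data and iteration counts are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5

# Two Gaussian clusters; the KDE mode sits near the denser cluster's center.
P = np.vstack([rng.normal(0.0, 0.5, (700, d)),
               rng.normal(3.0, 0.5, (300, d))])

def kde(x, P):
    """Gaussian KDE value (1/n) sum_i exp(-||p_i - x||^2)."""
    return float(np.mean(np.exp(-np.sum((P - x) ** 2, axis=1))))

def mean_shift(P, x0, iters=50):
    """Mean-shift: x <- sum_i w_i p_i / sum_i w_i with w_i = exp(-||p_i - x||^2)."""
    x = x0.copy()
    for _ in range(iters):
        w = np.exp(-np.sum((P - x) ** 2, axis=1))
        x = (w[:, None] * P).sum(axis=0) / w.sum()
    return x

best_idx = max(range(n), key=lambda i: kde(P[i], P))   # best data point as a start
x_hat = mean_shift(P, P[best_idx])

print("KDE at best data point:", round(kde(P[best_idx], P), 4))
print("KDE after mean-shift  :", round(kde(x_hat, P), 4))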

Cite as

Jasper C.H. Lee, Jerry Li, Christopher Musco, Jeff M. Phillips, and Wai Ming Tai. Finding an Approximate Mode of a Kernel Density Estimate. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 61:1-61:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{lee_et_al:LIPIcs.ESA.2021.61,
  author =	{Lee, Jasper C.H. and Li, Jerry and Musco, Christopher and Phillips, Jeff M. and Tai, Wai Ming},
  title =	{{Finding an Approximate Mode of a Kernel Density Estimate}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{61:1--61:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.61},
  URN =		{urn:nbn:de:0030-drops-146428},
  doi =		{10.4230/LIPIcs.ESA.2021.61},
  annote =	{Keywords: Kernel density estimation, Dimensionality reduction, Coresets, Means-shift}
}
Document
Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation

Authors: Cameron Musco, Christopher Musco, and David P. Woodruff

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
In the masked low-rank approximation problem, one is given a data matrix A ∈ ℝ^{n × n} and a binary mask matrix W ∈ {0,1}^{n × n}. The goal is to find a rank-k matrix L for which: cost(L) := ∑_{i=1}^n ∑_{j=1}^n W_{i,j} ⋅ (A_{i,j} - L_{i,j})² ≤ OPT + ε ‖A‖_F², where OPT = min_{rank-k L̂} cost(L̂) and ε is a given error parameter. Depending on the choice of W, the above problem captures factor analysis, low-rank plus diagonal decomposition, robust PCA, low-rank matrix completion, low-rank plus block matrix approximation, low-rank recovery from monotone missing data, and a number of other important problems. Many of these problems are NP-hard, and while algorithms with provable guarantees are known in some cases, they either 1) run in time n^Ω(k²/ε) or 2) make strong assumptions, for example, that A is incoherent or that the entries in W are chosen independently and uniformly at random. In this work, we show that a common polynomial time heuristic, which simply sets A to 0 where W is 0, and then finds a standard low-rank approximation, yields bicriteria approximation guarantees for this problem. In particular, for rank k' > k depending on the public coin partition number of W, the heuristic outputs rank-k' L with cost(L) ≤ OPT + ε ‖A‖_F². This partition number is in turn bounded by the randomized communication complexity of W, when interpreted as a two-player communication matrix. For many important cases, including all those listed above, this yields bicriteria approximation guarantees with rank k' = k ⋅ poly(log n/ε). Beyond this result, we show that different notions of communication complexity yield bicriteria algorithms for natural variants of masked low-rank approximation. For example, multi-player number-in-hand communication complexity connects to masked tensor decomposition and non-deterministic communication complexity to masked Boolean low-rank factorization.
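
The heuristic analyzed above is concrete enough to run in a few lines: zero out A where W is 0, take a standard truncated SVD of the result at a somewhat higher rank k' > k, and measure the masked cost. The sketch below does exactly that on a synthetic low-rank-plus-noise instance with a random mask; the instance, the choice of k', and the rank-k reference are illustrative and do not reproduce the paper's bicriteria bound.

import numpy as np

rng = np.random.default_rng(0)
n, k, k_prime = 300, 5, 15

# Ground truth: a rank-k signal plus noise, observed under a random binary mask W.
L_true = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
A = L_true + 0.5 * rng.standard_normal((n, n))
W = (rng.random((n, n)) < 0.7).astype(float)

def masked_cost(L, A, W):
    """cost(L) = sum_{i,j} W_ij * (A_ij - L_ij)^2."""
    return float(np.sum(W * (A - L) ** 2))

def truncated_svd(M, r):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# The heuristic: a rank-k' approximation of W ∘ A (A zeroed out where W = 0).
L_heur = truncated_svd(W * A, k_prime)

print("masked cost of heuristic output (rank k'):", round(masked_cost(L_heur, A, W), 1))
print("masked cost of the true rank-k signal    :", round(masked_cost(L_true, A, W), 1))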

Cite as

Cameron Musco, Christopher Musco, and David P. Woodruff. Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{musco_et_al:LIPIcs.ITCS.2021.6,
  author =	{Musco, Cameron and Musco, Christopher and Woodruff, David P.},
  title =	{{Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.6},
  URN =		{urn:nbn:de:0030-drops-135452},
  doi =		{10.4230/LIPIcs.ITCS.2021.6},
  annote =	{Keywords: low-rank approximation, communication complexity, weighted low-rank approximation, bicriteria approximation algorithms}
}
Document
Eigenvector Computation and Community Detection in Asynchronous Gossip Models

Authors: Frederik Mallmann-Trenn, Cameron Musco, and Christopher Musco

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We give a simple distributed algorithm for computing adjacency matrix eigenvectors for the communication graph in an asynchronous gossip model. We show how to use this algorithm to give state-of-the-art asynchronous community detection algorithms when the communication graph is drawn from the well-studied stochastic block model. Our methods also apply to a natural alternative model of randomized communication, where nodes within a community communicate more frequently than nodes in different communities. Our analysis simplifies and generalizes prior work by forging a connection between asynchronous eigenvector computation and Oja's algorithm for streaming principal component analysis. We hope that our work serves as a starting point for building further connections between the analysis of stochastic iterative methods, like Oja's algorithm, and work on asynchronous and gossip-type algorithms for distributed computation.
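
The connection to Oja's algorithm can be made concrete: Oja's rule updates an eigenvector estimate as x ← normalize(x + η_t · y (y^⊤x)) for each sample y, which is the kind of stochastic iteration an asynchronous interaction can simulate. The sketch below is a hedged single-machine version of Oja's algorithm on synthetic data; the gossip/asynchronous machinery and the community-detection application are not modeled, and the step-size schedule is illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, steps = 100, 20000

# Samples y with covariance I + 9 u u^T, whose top eigenvector is u.
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
def sample():
    return rng.standard_normal(n) + 3.0 * u * rng.standard_normal()

# Oja's rule: stochastic iteration toward the top eigenvector of E[y y^T].
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for t in range(steps):
    y = sample()
    x += (1.0 / (t + 100)) * y * (y @ x)    # decaying step size (illustrative)
    x /= np.linalg.norm(x)

print("alignment |<x, u>| with the true top eigenvector:", round(abs(x @ u), 3))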

Cite as

Frederik Mallmann-Trenn, Cameron Musco, and Christopher Musco. Eigenvector Computation and Community Detection in Asynchronous Gossip Models. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 159:1-159:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{mallmanntrenn_et_al:LIPIcs.ICALP.2018.159,
  author =	{Mallmann-Trenn, Frederik and Musco, Cameron and Musco, Christopher},
  title =	{{Eigenvector Computation and Community Detection in Asynchronous Gossip Models}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{159:1--159:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.159},
  URN =		{urn:nbn:de:0030-drops-91639},
  doi =		{10.4230/LIPIcs.ICALP.2018.159},
  annote =	{Keywords: block model, community detection, distributed clustering, eigenvector computation, gossip algorithms, population protocols}
}
  • Refine by Type
  • 15 Document/PDF
  • 7 Document/HTML

  • Refine by Publication Year
  • 7 2025
  • 2 2024
  • 2 2023
  • 1 2022
  • 2 2021
  • Show More...

  • Refine by Author
  • 5 Woodruff, David P.
  • 4 Musco, Cameron
  • 4 Musco, Christopher
  • 3 Vakilian, Ali
  • 2 Bhattacharjee, Rajarshi
  • Show More...

  • Refine by Series/Journal
  • 15 LIPIcs

  • Refine by Classification
  • 4 Theory of computation → Sketching and sampling
  • 3 Theory of computation → Streaming, sublinear and near linear time algorithms
  • 2 Mathematics of computing → Computations on matrices
  • 1 Computing methodologies → Distributed algorithms
  • 1 Mathematics of computing → Mathematical analysis
  • Show More...

  • Refine by Keyword
  • 2 randomized linear algebra
  • 2 sublinear algorithms
  • 1 Approximate matrix multiplication
  • 1 Coresets
  • 1 Dimensionality reduction
  • Show More...
