185 Search Results for "Woodruff, David P."


Volume

LIPIcs, Volume 229

49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)

ICALP 2022, July 4-8, 2022, Paris, France

Editors: Mikołaj Bojańczyk, Emanuela Merelli, and David P. Woodruff

Document
RANDOM
Faster Algorithms for Schatten-p Low Rank Approximation

Authors: Praneeth Kacham and David P. Woodruff

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study algorithms for the Schatten-p Low Rank Approximation (LRA) problem. First, we show that by using fast rectangular matrix multiplication algorithms and different block sizes, we can improve the running time of the algorithms in the recent work of Bakshi, Clarkson, and Woodruff (STOC 2022). We then show that by carefully combining our new algorithm with the algorithm of Li and Woodruff (ICML 2020), we can obtain even faster algorithms for Schatten-p LRA. While the block-based algorithms are fast in the real number model, we do not have a stability analysis showing that they work when implemented on a machine with polylogarithmic bits of precision. We show that the LazySVD algorithm of Allen-Zhu and Li (NeurIPS 2016) can be implemented on a floating point machine with a number of bits of precision that is only logarithmic in the input parameters. As far as we are aware, this is the first stability analysis of any algorithm that uses O((k/√ε)·poly(log n)) matrix-vector products with the matrix A to output a (1+ε)-approximate solution for the rank-k Schatten-p LRA problem.
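
For intuition, the objects in this abstract are easy to compute directly in small dimensions: the Schatten-p norm is the 𝓁_p norm of the singular values, and by the Eckart-Young-Mirsky theorem the truncated SVD is the best rank-k approximation under every Schatten-p norm. A minimal numpy sketch of these quantities on random data (illustrative only, not the algorithms of the paper):

import numpy as np

def schatten_p_norm(A, p):
    # Schatten-p norm: the l_p norm of the singular values of A.
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

def best_rank_k(A, k):
    # Truncated SVD: best rank-k approximation under every Schatten-p norm.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 40))
A_k = best_rank_k(A, k=5)
print(f"rank-5 Schatten-1.5 error: {schatten_p_norm(A - A_k, p=1.5):.4f}")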

Cite as

Praneeth Kacham and David P. Woodruff. Faster Algorithms for Schatten-p Low Rank Approximation. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 55:1-55:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kacham_et_al:LIPIcs.APPROX/RANDOM.2024.55,
  author =	{Kacham, Praneeth and Woodruff, David P.},
  title =	{{Faster Algorithms for Schatten-p Low Rank Approximation}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{55:1--55:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.55},
  URN =		{urn:nbn:de:0030-drops-210488},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.55},
  annote =	{Keywords: Low Rank Approximation, Schatten Norm, Rectangular Matrix Multiplication, Stability Analysis}
}
Document
Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra

Authors: Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Let S ∈ ℝ^{n × n} be any matrix satisfying ‖1-S‖₂ ≤ εn, where 1 is the all-ones matrix and ‖⋅‖₂ is the spectral norm. It is well known that there exists such an S with just O(n/ε²) non-zero entries: we can let S be the scaled adjacency matrix of a Ramanujan expander graph. We show that, beyond giving a sparse approximation to the all-ones matrix, S yields a universal sparsifier for any positive semidefinite (PSD) matrix. In particular, for any PSD A ∈ ℝ^{n×n} normalized so that its entries are bounded in magnitude by 1, we show that ‖A-A∘S‖₂ ≤ εn, where ∘ denotes the entrywise (Hadamard) product. Our techniques also yield universal sparsifiers for non-PSD matrices. In this case, we show that if S satisfies ‖1-S‖₂ ≤ (ε²n)/(c log²(1/ε)) for a sufficiently large constant c, then ‖A-A∘S‖₂ ≤ ε⋅max(n, ‖A‖₁), where ‖A‖₁ is the nuclear norm. Again letting S be a scaled Ramanujan graph adjacency matrix, this yields a sparsifier with Õ(n/ε⁴) entries. We prove that the above universal sparsification bounds for both PSD and non-PSD matrices are tight up to logarithmic factors. Since A∘S can be constructed deterministically without reading all of A, our result for PSD matrices derandomizes and improves upon established results for randomized matrix sparsification, which require sampling a random subset of O((n log n)/ε²) entries and only give an approximation to any fixed A with high probability. We further show that any randomized algorithm must read at least Ω(n/ε²) entries to spectrally approximate general A to error εn, proving that these existing randomized algorithms are optimal up to logarithmic factors. We leverage our deterministic sparsification results to give the first deterministic algorithms for several problems, including singular value and singular vector approximation and positive semidefiniteness testing, that run in faster than matrix multiplication time. This partially addresses a significant gap between randomized and deterministic algorithms for fast linear algebraic computation. Finally, if A ∈ {-1,0,1}^{n × n} is PSD, we show that a spectral approximation Ã with ‖A-Ã‖₂ ≤ εn can be obtained by deterministically reading Õ(n/ε) entries of A. This improves the ε dependence of our result for general PSD matrices by a quadratic factor and is information-theoretically optimal up to a logarithmic factor.
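
To make the guarantee concrete, the sketch below forms A∘S and measures the spectral error ‖A-A∘S‖₂ for a PSD matrix with entries bounded in magnitude by 1. It substitutes a random rescaled sampling pattern for S (a stand-in assumption; the paper's point is that a fixed expander-based S achieves this deterministically):

import numpy as np

rng = np.random.default_rng(0)
n, q = 400, 0.1  # keep each entry with probability q

# PSD test matrix with entries bounded by 1: Gram matrix of unit vectors.
X = rng.standard_normal((n, 20))
X /= np.linalg.norm(X, axis=1, keepdims=True)
A = X @ X.T

# Unbiased random sparsification pattern: E[S] is the all-ones matrix.
S = (rng.random((n, n)) < q) / q
err = np.linalg.norm(A - A * S, 2)  # spectral norm of the Hadamard-product error
print(f"||A - A∘S||_2 = {err:.1f}  vs  n = {n}")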

Cite as

Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff. Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 13:1-13:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bhattacharjee_et_al:LIPIcs.ITCS.2024.13,
  author =	{Bhattacharjee, Rajarshi and Dexter, Gregory and Musco, Cameron and Ray, Archan and Sachdeva, Sushant and Woodruff, David P.},
  title =	{{Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{13:1--13:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.13},
  URN =		{urn:nbn:de:0030-drops-195415},
  doi =		{10.4230/LIPIcs.ITCS.2024.13},
  annote =	{Keywords: sublinear algorithms, randomized linear algebra, spectral sparsification, expanders}
}
Document
Space-Optimal Profile Estimation in Data Streams with Applications to Symmetric Functions

Authors: Justin Y. Chen, Piotr Indyk, and David P. Woodruff

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We revisit the problem of estimating the profile (also known as the rarity) in the data stream model. Given a sequence of m elements from a universe of size n, its profile is a vector ϕ whose i-th entry ϕ_i represents the number of distinct elements that appear in the stream exactly i times. A classic paper by Datar and Muthukrishnan from 2002 gave an algorithm which estimates any entry ϕ_i up to an additive error of ±εD using O(1/ε² (log n + log m)) bits of space, where D is the number of distinct elements in the stream. In this paper, we considerably improve on this result by designing an algorithm which simultaneously estimates many coordinates of the profile vector ϕ up to small overall error. We give an algorithm which, with constant probability, produces an estimated profile ϕ̂ with the following guarantees in terms of space and estimation error: (i) for any constant τ, with O(1/ε² + log n) bits of space, ∑_{i=1}^τ |ϕ_i - ϕ̂_i| ≤ εD; (ii) with O((1/ε²)log(1/ε) + log n + log log m) bits of space, ∑_{i=1}^m |ϕ_i - ϕ̂_i| ≤ εm. In addition to bounding the error across multiple coordinates, our space bounds separate the terms that depend on 1/ε from those that depend on n and m. We prove matching lower bounds on space in both regimes. Applying our profile estimation algorithm gives estimates within error ±εD of several symmetric functions of frequencies in O(1/ε² + log n) bits. This generalizes space-optimal algorithms for the distinct elements problem to other problems, including estimating the Huber and Tukey losses as well as frequency cap statistics.
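
For reference, the profile is straightforward to compute exactly when the whole stream fits in memory, which makes clear what the streaming algorithm approximates. A short sketch on hypothetical sample data:

from collections import Counter

def profile(stream):
    # phi[i] = number of distinct elements appearing exactly i times.
    freq = Counter(stream)         # element -> frequency
    return Counter(freq.values())  # frequency -> number of such elements

stream = ["a", "b", "a", "c", "b", "a", "d"]
print(dict(profile(stream)))  # {3: 1, 2: 1, 1: 2}: 'a' 3x, 'b' 2x, 'c' and 'd' once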

Cite as

Justin Y. Chen, Piotr Indyk, and David P. Woodruff. Space-Optimal Profile Estimation in Data Streams with Applications to Symmetric Functions. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 32:1-32:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2024.32,
  author =	{Chen, Justin Y. and Indyk, Piotr and Woodruff, David P.},
  title =	{{Space-Optimal Profile Estimation in Data Streams with Applications to Symmetric Functions}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{32:1--32:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.32},
  URN =		{urn:nbn:de:0030-drops-195605},
  doi =		{10.4230/LIPIcs.ITCS.2024.32},
  annote =	{Keywords: Streaming and Sketching Algorithms, Sublinear Algorithms}
}
Document
Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions

Authors: Arvind V. Mahankali, David P. Woodruff, and Ziyu Zhang

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We study low rank approximation of tensors, focusing on the Tensor Train and Tucker decompositions, as well as approximations with tree tensor networks and general tensor networks. As suggested by hardness results also shown in this work, efficiently obtaining (1+ε)-approximation algorithms for rank-k tensor train and Tucker decompositions may be computationally intractable. We therefore propose bicriteria algorithms: algorithms that meet some of the objectives exactly while violating the others within a bounded factor. On the one hand, for rank-k tensor train decomposition of tensors with q modes, we give a (1+ε)-approximation algorithm with a small bicriteria rank (O(qk/ε) up to logarithmic factors) and O(q ⋅ nnz(A)) running time, up to lower order terms. Here nnz(A) denotes the number of non-zero entries in the input tensor A. We also show how to convert the algorithm of [Huber et al., 2017] into a relative error approximation algorithm, but their algorithm necessarily has a running time of O(qr² ⋅ nnz(A)) + n ⋅ poly(qk/ε) when converted to a (1+ε)-approximation algorithm with bicriteria rank r. Thus, the running time of our algorithm is better by at least a k² factor. To the best of our knowledge, our work is the first to achieve a near-input-sparsity time relative error approximation algorithm for tensor train decomposition. Our key technique is a method for efficiently obtaining subspace embeddings for a matrix which is the flattening of a tensor train of q tensors: the number of rows in the subspace embeddings is polynomial in q, thus avoiding the curse of dimensionality. We extend our algorithm to tree tensor networks and tensor networks on arbitrary graphs. Another way of coping with intractability is to look for fixed-parameter tractable (FPT) algorithms. We give FPT algorithms for the tensor train, Tucker, and Canonical Polyadic (CP) decompositions, which are simpler than the FPT algorithms of [Song et al., 2019], since our algorithms do not make use of polynomial system solvers. Our technique of using an exponential number of Gaussian subspace embeddings with exactly k rows (and thus exponentially small success probability) may be of independent interest.
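
As background, the classical way to compute a tensor train decomposition is by sequential truncated SVDs of flattenings (TT-SVD); near-input-sparsity algorithms aim to beat this baseline. A minimal TT-SVD sketch for a 3-mode tensor (an illustrative baseline on random data, not the paper's algorithm):

import numpy as np

def tt_svd(T, k):
    # Rank-(k,k) tensor train of a 3-mode tensor via sequential truncated SVDs.
    n1, n2, n3 = T.shape
    U, s, Vt = np.linalg.svd(T.reshape(n1, n2 * n3), full_matrices=False)
    G1 = U[:, :k]                                   # first core: n1 x k
    M = (s[:k, None] * Vt[:k]).reshape(k * n2, n3)  # carry the remainder forward
    U2, s2, Vt2 = np.linalg.svd(M, full_matrices=False)
    G2 = U2[:, :k].reshape(k, n2, k)                # middle core: k x n2 x k
    G3 = s2[:k, None] * Vt2[:k]                     # last core: k x n3
    return G1, G2, G3

rng = np.random.default_rng(0)
T = rng.standard_normal((10, 12, 14))
G1, G2, G3 = tt_svd(T, k=4)
T_hat = np.einsum("ia,ajb,bk->ijk", G1, G2, G3)
print(f"relative error: {np.linalg.norm(T - T_hat) / np.linalg.norm(T):.3f}")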

Cite as

Arvind V. Mahankali, David P. Woodruff, and Ziyu Zhang. Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 79:1-79:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mahankali_et_al:LIPIcs.ITCS.2024.79,
  author =	{Mahankali, Arvind V. and Woodruff, David P. and Zhang, Ziyu},
  title =	{{Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{79:1--79:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.79},
  URN =		{urn:nbn:de:0030-drops-196078},
  doi =		{10.4230/LIPIcs.ITCS.2024.79},
  annote =	{Keywords: Low rank approximation, Sketching algorithms, Tensor decomposition}
}
Document
Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time

Authors: Zhao Song, Lichen Zhang, and Ruizhe Zhang

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We consider the problem of training a multi-layer over-parametrized neural network to minimize the empirical risk induced by a loss function. In the typical setting of over-parametrization, the network width m is much larger than the data dimension d and the number of training samples n (m = poly(n,d)), which induces a prohibitively large weight matrix W ∈ ℝ^{m× m} per layer. Naively, one has to pay O(m²) time to read the weight matrix and evaluate the neural network function in both forward and backward computation. In this work, we show how to reduce the training cost per iteration. Specifically, we propose a framework that pays the m² cost only in the initialization phase and achieves a truly subquadratic cost per iteration in terms of m, i.e., m^{2-Ω(1)} per iteration. Our result has implications beyond standard over-parametrization theory, as it can be viewed as designing an efficient data structure on top of a pre-trained large model to further speed up the fine-tuning process, a core procedure in deploying large language models (LLMs).
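
To fix the cost model: a single dense layer with W ∈ ℝ^{m×m} already costs Θ(m²) work per forward evaluation, and this is the per-iteration cost the framework drives down to m^{2-Ω(1)}. A toy numpy sketch of the naive cost being avoided (not the paper's data structure):

import time
import numpy as np

m = 4000
rng = np.random.default_rng(0)
W = rng.standard_normal((m, m))  # one over-parametrized layer: Theta(m^2) memory
x = rng.standard_normal(m)

t0 = time.perf_counter()
y = np.maximum(W @ x, 0.0)       # naive forward pass: Theta(m^2) work
print(f"dense forward over {m * m:,} weights: {time.perf_counter() - t0:.3f}s")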

Cite as

Zhao Song, Lichen Zhang, and Ruizhe Zhang. Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 93:1-93:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{song_et_al:LIPIcs.ITCS.2024.93,
  author =	{Song, Zhao and Zhang, Lichen and Zhang, Ruizhe},
  title =	{{Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{93:1--93:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.93},
  URN =		{urn:nbn:de:0030-drops-196212},
  doi =		{10.4230/LIPIcs.ITCS.2024.93},
  annote =	{Keywords: Deep learning theory, Nonconvex optimization}
}
Document
APPROX
Facility Location in the Sublinear Geometric Model

Authors: Morteza Monemizadeh

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
In the sublinear geometric model, we are provided with oracle access to a point set P of n points in a bounded discrete space [Δ]², where Δ = n^{O(1)} is polynomially bounded in n. That is, we do not have direct access to the points, but we can make certain types of queries, and there is an oracle that responds to them. The queries we assume we can make in this paper are range counting queries, where the ranges are axis-aligned rectangles (basic primitives in databases [Srikanta Tirthapura and David P. Woodruff, 2012; Bentley, 1975; Mark de Berg et al., 2008], computational geometry [Pankaj K. Agarwal, 2004; Pankaj K. Agarwal et al., 1996; Boris Aronov et al., 2010; Boris Aronov et al., 2009], and machine learning [Menachem Sadigurschi and Uri Stemmer, 2021; Long and Tan, 1998; Michael J. Kearns and Umesh V. Vazirani, 1995; Michael J. Kearns and Umesh V. Vazirani, 1994]). The oracle answers such a query by returning the number of points in the queried range. Let Alg be an algorithm that (exactly or approximately) solves a problem 𝒫 in the sublinear geometric model. The query complexity of Alg is measured by the number of queries that Alg makes to solve 𝒫. In this paper, we study the complexity of the (uniform) Euclidean facility location problem in the sublinear geometric model. We develop a randomized sublinear algorithm that, with high probability, (1+ε)-approximates the cost of the Euclidean facility location problem of the point set P using Õ(√n) range counting queries. We complement this result by showing that approximating the cost of the Euclidean facility location problem to within an o(log n) factor in the sublinear geometric model, using the sampling strategy that we propose for our sublinear algorithm, requires Ω̃(n^{1/4}) RangeCount queries. We leave it as an open problem whether such a polynomial lower bound on the number of RangeCount queries holds for every randomized sublinear algorithm that approximates the cost of the facility location problem within a constant factor.
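
To make the query model concrete, here is a tiny sketch of a RangeCount oracle over a hidden point set; an algorithm in this model sees only the returned counts, and its query complexity is the number of calls it makes. The class and data below are hypothetical illustrations:

import numpy as np

class RangeCountOracle:
    # Answers axis-aligned rectangle counting queries over a hidden point set.
    def __init__(self, points):
        self._pts = np.asarray(points)  # hidden from the sublinear algorithm
        self.queries = 0                # query-complexity counter

    def count(self, x_lo, x_hi, y_lo, y_hi):
        self.queries += 1
        p = self._pts
        inside = ((p[:, 0] >= x_lo) & (p[:, 0] <= x_hi)
                  & (p[:, 1] >= y_lo) & (p[:, 1] <= y_hi))
        return int(inside.sum())

rng = np.random.default_rng(0)
delta = 1024  # side length of the discrete space [Delta]^2
oracle = RangeCountOracle(rng.integers(0, delta, size=(500, 2)))
print(oracle.count(0, delta // 2, 0, delta // 2), "points in the lower-left quadrant")
print("RangeCount queries used:", oracle.queries)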

Cite as

Morteza Monemizadeh. Facility Location in the Sublinear Geometric Model. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 6:1-6:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{monemizadeh:LIPIcs.APPROX/RANDOM.2023.6,
  author =	{Monemizadeh, Morteza},
  title =	{{Facility Location in the Sublinear Geometric Model}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{6:1--6:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.6},
  URN =		{urn:nbn:de:0030-drops-188315},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.6},
  annote =	{Keywords: Facility Location, Sublinear Geometric Model, Range Counting Queries}
}
Document
Recovery from Non-Decomposable Distance Oracles

Authors: Zhuangfei Hu, Xinda Li, David P. Woodruff, Hongyang Zhang, and Shufan Zhang

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
A line of work has looked at the problem of recovering an input from distance queries. In this setting, there is an unknown sequence s ∈ {0,1}^{≤ n}, and one chooses a set of queries y ∈ {0,1}^{O(n)} and receives d(s,y) for a distance function d. The goal is to make as few queries as possible to recover s. Although this problem is well studied for decomposable distances, i.e., distances of the form d(s,y) = ∑_{i=1}^n f(s_i, y_i) for some function f, which include the important cases of Hamming distance, 𝓁_p-norms, and M-estimators, to the best of our knowledge it has not been studied for non-decomposable distances, which include important special cases such as edit distance, dynamic time warping (DTW), Fréchet distance, and earth mover's distance. We initiate the study and develop a general framework for such distances. Interestingly, for some distances such as DTW or Fréchet, exact recovery of the sequence s is provably impossible, and we show that by allowing the characters in y to be drawn from a slightly larger alphabet, recovery becomes possible. In a number of cases we obtain optimal or near-optimal query complexity. We also study the role of adaptivity for a number of different distance functions. One motivation for understanding non-adaptivity is that the query sequence can be fixed in advance, and the distances of the input to the queries then provide a non-linear embedding of the input, which can be used in downstream applications involving, e.g., neural networks for natural language processing.
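
For contrast, the decomposable case is easy: with Hamming distance, n+1 non-adaptive queries suffice, since flipping one coordinate of the all-zeros query shifts the distance by -1 or +1 depending on the corresponding bit of s. A minimal sketch of this baseline:

def hamming(s, y):
    return sum(a != b for a, b in zip(s, y))

def recover_from_hamming_oracle(n, dist):
    # Recover s in {0,1}^n with n+1 non-adaptive Hamming-distance queries.
    base = dist([0] * n)  # d(s, 00...0) = number of ones in s
    s = []
    for i in range(n):
        e_i = [0] * n
        e_i[i] = 1
        # Flipping coordinate i moves the distance by -1 iff s_i = 1.
        s.append(1 if dist(e_i) == base - 1 else 0)
    return s

secret = [1, 0, 1, 1, 0, 0, 1]
guess = recover_from_hamming_oracle(len(secret), lambda y: hamming(secret, y))
assert guess == secret
print("recovered:", guess)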

Cite as

Zhuangfei Hu, Xinda Li, David P. Woodruff, Hongyang Zhang, and Shufan Zhang. Recovery from Non-Decomposable Distance Oracles. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 73:1-73:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hu_et_al:LIPIcs.ITCS.2023.73,
  author =	{Hu, Zhuangfei and Li, Xinda and Woodruff, David P. and Zhang, Hongyang and Zhang, Shufan},
  title =	{{Recovery from Non-Decomposable Distance Oracles}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{73:1--73:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.73},
  URN =		{urn:nbn:de:0030-drops-175767},
  doi =		{10.4230/LIPIcs.ITCS.2023.73},
  annote =	{Keywords: Sequence Recovery, Edit Distance, DTW Distance, Fr\'{e}chet Distance}
}
Document
RANDOM
Streaming Algorithms with Large Approximation Factors

Authors: Yi Li, Honghao Lin, David P. Woodruff, and Yuheng Zhang

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
We initiate a broad study of classical problems in the streaming model with insertions and deletions in the setting where the approximation factor α is allowed to be much larger than 1. Such algorithms can use significantly less memory than the usual setting, for which α = 1+ε with ε ∈ (0,1). We study large approximations for a number of problems in sketching and streaming, assuming that the underlying n-dimensional vector has all coordinates bounded by M throughout the data stream: 1) For the 𝓁_p norm/quasi-norm, 0 < p ≤ 2, we show that obtaining a poly(n)-approximation requires the same amount of memory as obtaining an O(1)-approximation for any M = n^Θ(1), which holds even for randomly ordered streams or for streams in the bounded deletion model. 2) For estimating the 𝓁_p norm, p > 2, we show an upper bound of O(n^{1-2/p} (log n log M)/α²) bits for an α-approximation, and give a matching lower bound for linear sketches. 3) For the 𝓁₂-heavy hitters problem, we show that the known lower bound of Ω(k log n log M) bits for identifying (1/k)-heavy hitters holds even if we are allowed to output items that are 1/(αk)-heavy, provided the algorithm succeeds with probability 1-O(1/n). We also obtain a lower bound for linear sketches that is tight even for constant failure probability algorithms. 4) For estimating the number 𝓁₀ of distinct elements, we give an n^{1/t}-approximation algorithm using O(t log log M) bits of space, as well as a lower bound of Ω(t) bits, both excluding the storage of random bits, where n is the dimension of the underlying frequency vector and M is an upper bound on the magnitude of its coordinates. 5) For α-approximation to the Schatten-p norm, we give a near-optimal Õ(n^{2-4/p}/α⁴) sketching dimension for every even integer p and every α ≥ 1, while for p not an even integer we obtain a near-optimal sketching dimension once α = Ω(n^{1/q-1/p}), where q is the largest even integer less than p. The latter is surprising, as it is unknown what the complexity of Schatten-p norm estimation is for constant approximation; we show that once the approximation factor is at least n^{1/q-1/p}, we can obtain near-optimal sketching bounds.
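
As a point of reference for the usual α = 1+ε regime that this work relaxes, the classical AMS linear sketch estimates ‖f‖₂² of the frequency vector under insertions and deletions from random sign combinations. A minimal sketch of that baseline (not the large-α algorithms of the paper):

import numpy as np

def ams_l2_squared(stream, n, reps=200, seed=0):
    # AMS sketch: z_j = <sigma_j, f> for random sign vectors sigma_j, so
    # E[z_j^2] = ||f||_2^2; median-of-means over the repetitions concentrates.
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(reps, n))
    z = np.zeros(reps)
    for item, delta in stream:          # turnstile updates (item, +1 or -1)
        z += delta * signs[:, item]
    return np.median((z ** 2).reshape(10, -1).mean(axis=1))

stream = [(0, +1), (1, +1), (0, +1), (2, +1), (1, -1)]  # f = (2, 0, 1)
print(f"estimated ||f||_2^2: {ams_l2_squared(stream, n=3):.2f}  (true value: 5)")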

Cite as

Yi Li, Honghao Lin, David P. Woodruff, and Yuheng Zhang. Streaming Algorithms with Large Approximation Factors. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 13:1-13:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{li_et_al:LIPIcs.APPROX/RANDOM.2022.13,
  author =	{Li, Yi and Lin, Honghao and Woodruff, David P. and Zhang, Yuheng},
  title =	{{Streaming Algorithms with Large Approximation Factors}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{13:1--13:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.13},
  URN =		{urn:nbn:de:0030-drops-171354},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.13},
  annote =	{Keywords: streaming algorithms, 𝓁\_p norm, heavy hitters, distinct elements}
}
Document
RANDOM
Adaptive Sketches for Robust Regression with Importance Sampling

Authors: Sepideh Mahabadi, David P. Woodruff, and Samson Zhou

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
We introduce data structures for solving robust regression through stochastic gradient descent (SGD) by sampling gradients with probability proportional to their norm, i.e., importance sampling. Although SGD is widely used for large-scale machine learning, it is well known to suffer possibly slow convergence rates due to the high variance from uniform sampling. On the other hand, importance sampling can significantly decrease the variance, but it is usually difficult to implement because computing the sampling probabilities requires additional passes over the data, in which case standard gradient descent (GD) could be used instead. In this paper, we introduce an algorithm that approximately samples T gradients of dimension d from nearly the optimal importance sampling distribution for a robust regression problem over n rows. Thus our algorithm effectively runs T steps of SGD with importance sampling while using sublinear space and making just a single pass over the data. Our techniques also extend to performing importance sampling for second-order optimization.
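
To illustrate the sampling scheme itself (with exact per-row gradient norms recomputed at every step, which is precisely the extra pass over the data that the paper's sketches avoid), here is a minimal importance-sampled SGD loop for least squares regression on synthetic data:

import numpy as np

def importance_sgd(A, b, steps=500, lr=0.1, seed=0):
    # Sample row i with probability proportional to its gradient norm and
    # reweight by 1/(n * p_i) so the update is an unbiased gradient estimate.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(steps):
        g = A * (A @ x - b)[:, None]           # per-row gradients, n x d
        norms = np.linalg.norm(g, axis=1) + 1e-12
        p = norms / norms.sum()                # importance distribution
        i = rng.choice(n, p=p)
        x -= lr * g[i] / (n * p[i])
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
print(f"parameter error: {np.linalg.norm(importance_sgd(A, b) - x_true):.4f}")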

Cite as

Sepideh Mahabadi, David P. Woodruff, and Samson Zhou. Adaptive Sketches for Robust Regression with Importance Sampling. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 31:1-31:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{mahabadi_et_al:LIPIcs.APPROX/RANDOM.2022.31,
  author =	{Mahabadi, Sepideh and Woodruff, David P. and Zhou, Samson},
  title =	{{Adaptive Sketches for Robust Regression with Importance Sampling}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{31:1--31:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.31},
  URN =		{urn:nbn:de:0030-drops-171531},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.31},
  annote =	{Keywords: Streaming algorithms, stochastic optimization, importance sampling}
}
Document
Complete Volume
LIPIcs, Volume 229, ICALP 2022, Complete Volume

Authors: Mikołaj Bojańczyk, Emanuela Merelli, and David P. Woodruff

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
LIPIcs, Volume 229, ICALP 2022, Complete Volume

Cite as

LIPIcs, Volume 229, ICALP 2022, Complete Volume. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 1-2516, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Proceedings{bojanczyk_et_al:LIPIcs.ICALP.2022,
  title =	{{LIPIcs, Volume 229, ICALP 2022, Complete Volume}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{1--2516},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022},
  URN =		{urn:nbn:de:0030-drops-163400},
  doi =		{10.4230/LIPIcs.ICALP.2022},
  annote =	{Keywords: LIPIcs, Volume 229, ICALP 2022, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Mikołaj Bojańczyk, Emanuela Merelli, and David P. Woodruff

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

Mikołaj Bojańczyk, Emanuela Merelli, and David P. Woodruff. Front Matter, Table of Contents, Preface, Conference Organization. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 0:i-0:xxxvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bojanczyk_et_al:LIPIcs.ICALP.2022.0,
  author =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{0:i--0:xxxvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.0},
  URN =		{urn:nbn:de:0030-drops-163417},
  doi =		{10.4230/LIPIcs.ICALP.2022.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Towards a Theory of Algorithmic Proof Complexity (Invited Talk)

Authors: Albert Atserias

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
A possibly unexpected by-product of the mathematical study of the lengths of proofs, as is done in the field of propositional proof complexity, is, I claim, that it may lead to new polynomial-time algorithms. To explain this, I will first recall the origins of proof complexity as a field, and then explain why some of the recent progress in it could lead to some new algorithms.

Cite as

Albert Atserias. Towards a Theory of Algorithmic Proof Complexity (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 1:1-1:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{atserias:LIPIcs.ICALP.2022.1,
  author =	{Atserias, Albert},
  title =	{{Towards a Theory of Algorithmic Proof Complexity}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{1:1--1:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.1},
  URN =		{urn:nbn:de:0030-drops-163423},
  doi =		{10.4230/LIPIcs.ICALP.2022.1},
  annote =	{Keywords: proof complexity, logic, computational complexity}
}
Document
Invited Talk
Equilibrium Computation, Deep Learning, and Multi-Agent Reinforcement Learning (Invited Talk)

Authors: Constantinos Daskalakis

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
Machine Learning has recently made significant advances in challenges such as speech and image recognition, automatic translation, and text generation, much of that progress being fueled by the success of gradient descent-based optimization methods in computing local optima of non-convex objectives. From robustifying machine learning models against adversarial attacks to causal inference, training generative models, multi-robot interactions, and learning in strategic environments, many outstanding challenges in Machine Learning lie at its interface with Game Theory. On this front, however, gradient-descent based optimization methods have been less successful. Here, the role of single-objective optimization is played by equilibrium computation, but gradient-descent based methods commonly fail to find equilibria, and even computing local approximate equilibria has remained daunting. We shed light on these challenges through a combination of learning-theoretic, complexity-theoretic, game-theoretic and topological techniques, presenting obstacles and opportunities for Machine Learning and Game Theory going forward. I will assume no Deep Learning background for this talk and present results from joint works with S. Skoulakis and M. Zampetakis [Daskalakis et al., 2021] as well as with N. Golowich and K. Zhang [Daskalakis et al., 2022].

Cite as

Constantinos Daskalakis. Equilibrium Computation, Deep Learning, and Multi-Agent Reinforcement Learning (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{daskalakis:LIPIcs.ICALP.2022.2,
  author =	{Daskalakis, Constantinos},
  title =	{{Equilibrium Computation, Deep Learning, and Multi-Agent Reinforcement Learning}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.2},
  URN =		{urn:nbn:de:0030-drops-163431},
  doi =		{10.4230/LIPIcs.ICALP.2022.2},
  annote =	{Keywords: Deep Learning, Multi-Agent (Reinforcement) Learning, Game Theory, Nonconvex Optimization, PPAD}
}
Document
Invited Talk
Some New (And Old) Results on Contention Resolution (Invited Talk)

Authors: Leslie Ann Goldberg

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
This is an extended abstract of my talk at ICALP 2022, based on joint work with John Lapinskas.

Cite as

Leslie Ann Goldberg. Some New (And Old) Results on Contention Resolution (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 3:1-3:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{goldberg:LIPIcs.ICALP.2022.3,
  author =	{Goldberg, Leslie Ann},
  title =	{{Some New (And Old) Results on Contention Resolution}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{3:1--3:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.3},
  URN =		{urn:nbn:de:0030-drops-163444},
  doi =		{10.4230/LIPIcs.ICALP.2022.3},
  annote =	{Keywords: contention resolution, multiple access channel, randomised algorithms}
}
