Search Results

Documents authored by Cohen, Michael B.


Found 2 Possible Name Variants:

Cohen, Michael B.

Document
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration

Authors: Michael B. Cohen, Aaron Sidford, and Kevin Tian

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We show that standard extragradient methods (i.e., mirror prox [Arkadi Nemirovski, 2004] and dual extrapolation [Yurii Nesterov, 2007]) recover optimal accelerated rates for first-order minimization of smooth convex functions. To obtain this result, we provide a fine-grained characterization of the convergence rates of extragradient methods for solving monotone variational inequalities in terms of a natural condition we call relative Lipschitzness. We further generalize this framework to handle local and randomized notions of relative Lipschitzness, and thereby recover rates for box-constrained 𝓁_∞ regression based on area convexity [Jonah Sherman, 2017] and complexity bounds achieved by accelerated (randomized) coordinate descent [Zeyuan Allen-Zhu et al., 2016; Yurii Nesterov and Sebastian U. Stich, 2017] for smooth convex function minimization.
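
As a concrete reference point, the following is a minimal Euclidean sketch of the classical extragradient step discussed in the abstract; the bilinear test problem, the step size, and all constants are illustrative assumptions, not the paper's general setup.

import numpy as np

def extragradient(g, z0, eta, iters=20000):
    # Classical extragradient for a monotone operator g: take an
    # extrapolation step to z_half, then re-step from z using the
    # operator evaluated at z_half.
    z = z0.copy()
    for _ in range(iters):
        z_half = z - eta * g(z)   # prediction (extrapolation) step
        z = z - eta * g(z_half)   # correction step, operator evaluated at z_half
    return z

# Illustrative test: the bilinear saddle point min_x max_y x^T A y has the
# monotone gradient operator g(x, y) = (A y, -A^T x). Plain gradient
# descent-ascent spirals away from the saddle point on this problem, while
# extragradient contracts toward (0, 0).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
def g(z):
    x, y = z[:5], z[5:]
    return np.concatenate([A @ y, -A.T @ x])

z = extragradient(g, rng.standard_normal(10), eta=0.5 / np.linalg.norm(A, 2))
print(np.linalg.norm(z))  # distance to the saddle point; shrinks as iters grows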

Cite as

Michael B. Cohen, Aaron Sidford, and Kevin Tian. Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 62:1-62:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{cohen_et_al:LIPIcs.ITCS.2021.62,
  author =	{Cohen, Michael B. and Sidford, Aaron and Tian, Kevin},
  title =	{{Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{62:1--62:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.62},
  URN =		{urn:nbn:de:0030-drops-136011},
  doi =		{10.4230/LIPIcs.ITCS.2021.62},
  annote =	{Keywords: Variational inequalities, minimax optimization, acceleration, 𝓁\_∞ regression}
}
Document
Simple Analyses of the Sparse Johnson-Lindenstrauss Transform

Authors: Michael B. Cohen, T.S. Jayram, and Jelani Nelson

Published in: OASIcs, Volume 61, 1st Symposium on Simplicity in Algorithms (SOSA 2018)


Abstract
For every n-point subset X of Euclidean space and target distortion 1+ε for 0<ε<1, the Sparse Johnson-Lindenstrauss Transform (SJLT) of (Kane, Nelson, J. ACM 2014) provides a linear dimensionality-reducing map f: X → ℓ_2^m, where f(x) = Ax for a matrix A with m rows such that (1) m = O((log n)/ε^2), and (2) each column of A is sparse, having only O(εm) non-zero entries. Though the constructions given for such A in (Kane, Nelson, J. ACM 2014) are simple, the analyses are not, employing intricate combinatorial arguments. We here give two simple alternative proofs of their main result, involving no delicate combinatorics. One of these proofs has already been tested pedagogically: covering all details in a blackboard course lecture took the third author slightly under forty minutes at a casual pace.
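
For concreteness, here is a minimal sketch of one standard SJLT-style construction (s random rows with random signs in each column); the constants in m and s are illustrative assumptions, not the tuned values from (Kane, Nelson, J. ACM 2014).

import numpy as np

def sjlt(n_points, d, eps, rng):
    # Draw a sparse JL matrix A (m x d) with m = O((log n)/eps^2) rows and
    # s = O(eps m) nonzeros per column: each column places random signs,
    # scaled by 1/sqrt(s), in s distinct random rows.
    m = int(np.ceil(8 * np.log(n_points) / eps**2))   # constant 8 is illustrative
    s = max(1, int(np.ceil(eps * m)))
    A = np.zeros((m, d))
    for j in range(d):
        rows = rng.choice(m, size=s, replace=False)
        A[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return A

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2000))   # 100 points in R^2000
A = sjlt(100, 2000, eps=0.25, rng=rng)
Y = X @ A.T                            # embed each point: f(x) = A x
ratio = np.linalg.norm(Y[0] - Y[1]) / np.linalg.norm(X[0] - X[1])
print(ratio)                           # should lie in [1 - eps, 1 + eps] w.h.p.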

Cite as

Michael B. Cohen, T.S. Jayram, and Jelani Nelson. Simple Analyses of the Sparse Johnson-Lindenstrauss Transform. In 1st Symposium on Simplicity in Algorithms (SOSA 2018). Open Access Series in Informatics (OASIcs), Volume 61, pp. 15:1-15:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{cohen_et_al:OASIcs.SOSA.2018.15,
  author =	{Cohen, Michael B. and Jayram, T.S. and Nelson, Jelani},
  title =	{{Simple Analyses of the Sparse Johnson-Lindenstrauss Transform}},
  booktitle =	{1st Symposium on Simplicity in Algorithms (SOSA 2018)},
  pages =	{15:1--15:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-064-4},
  ISSN =	{2190-6807},
  year =	{2018},
  volume =	{61},
  editor =	{Seidel, Raimund},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2018.15},
  URN =		{urn:nbn:de:0030-drops-83056},
  doi =		{10.4230/OASIcs.SOSA.2018.15},
  annote =	{Keywords: dimensionality reduction, Johnson-Lindenstrauss, Sparse Johnson-Lindenstrauss Transform}
}
Document
Online Row Sampling

Authors: Michael B. Cohen, Cameron Musco, and Jakub Pachocki

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
Finding a small spectral approximation for a tall n × d matrix A is a fundamental numerical primitive. For a number of reasons, one often seeks an approximation whose rows are sampled from those of A. Row sampling improves interpretability, saves space when A is sparse, and preserves row structure, which is especially important, for example, when A represents a graph. However, correctly sampling rows from A can be costly when the matrix is large and cannot be stored and processed in memory. Hence, a number of recent publications focus on row sampling in the streaming setting, using little more space than what is required to store the output approximation [Kelner, Levin 2013; Kapralov et al. 2014]. Inspired by a growing body of work on online algorithms for machine learning and data analysis, we extend this work to a more restrictive online setting: we read rows of A one by one and immediately decide whether each row should be kept in the spectral approximation or discarded, without ever retracting these decisions. We present an extremely simple algorithm that approximates A up to multiplicative error ε and additive error δ using O(d log d log(ε‖A‖_2^2/δ)/ε^2) online samples, with memory overhead proportional to the cost of storing the spectral approximation. We also present an algorithm that uses O(d^2) memory but requires only O(d log(ε‖A‖_2^2/δ)/ε^2) samples, which we show is optimal. Our methods are clean and intuitive, allow for lower memory usage than prior work, and expose new theoretical properties of leverage-score-based matrix approximation.
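
To make the online setting concrete, below is a minimal sketch of online leverage-score sampling in the spirit of the abstract; the regularization δ in the score, the oversampling constant c, and the exact sampling probability are illustrative assumptions rather than the paper's precise algorithm.

import numpy as np

def online_row_sample(A, eps, delta, c=1.0, rng=None):
    # Read rows one at a time; keep each row with probability proportional
    # to its online ridge leverage score, measured against the rows kept so
    # far, and never revisit a decision. Kept rows are rescaled by 1/sqrt(p)
    # so that B^T B stays an unbiased estimate of A^T A.
    rng = rng or np.random.default_rng()
    d = A.shape[1]
    M = delta * np.eye(d)                             # running B^T B + delta * I
    kept = []
    for a in A:
        score = a @ np.linalg.solve(M, a)             # online ridge leverage score
        p = min(1.0, c * score * np.log(d) / eps**2)  # c = 1 is an illustrative constant
        if rng.random() < p:
            row = a / np.sqrt(p)
            kept.append(row)
            M += np.outer(row, row)
    return np.array(kept)

rng = np.random.default_rng(2)
A = rng.standard_normal((10000, 20))
B = online_row_sample(A, eps=0.5, delta=1e-3, rng=rng)
err = np.linalg.norm(A.T @ A - B.T @ B, 2) / np.linalg.norm(A.T @ A, 2)
print(B.shape[0], "of", A.shape[0], "rows kept; relative spectral error:", err)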

Cite as

Michael B. Cohen, Cameron Musco, and Jakub Pachocki. Online Row Sampling. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{cohen_et_al:LIPIcs.APPROX-RANDOM.2016.7,
  author =	{Cohen, Michael B. and Musco, Cameron and Pachocki, Jakub},
  title =	{{Online Row Sampling}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.7},
  URN =		{urn:nbn:de:0030-drops-66304},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.7},
  annote =	{Keywords: spectral sparsification, leverage score sampling, online sparsification}
}
Document
Optimal Approximate Matrix Product in Terms of Stable Rank

Authors: Michael B. Cohen, Jelani Nelson, and David P. Woodruff

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
We prove, using the subspace embedding guarantee in a black-box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map having m = O(r̃/ε^2) rows. Here r̃ is the maximum stable rank, i.e., the squared ratio of Frobenius and operator norms, of the two matrices being multiplied. This is a quantitative improvement over previous work of [Magen and Zouzias, SODA, 2011] and [Kyrillidis et al., arXiv, 2014], and is also optimal for any oblivious dimensionality-reducing map. Furthermore, due to the black-box reliance on the subspace embedding property in our proofs, our theorem can be applied to a much more general class of sketching matrices than was known before, in addition to achieving better bounds. For example, one can apply our theorem to efficient subspace embeddings such as the Subsampled Randomized Hadamard Transform or sparse subspace embeddings, or even to subspace embedding constructions that may be developed in the future. Our main theorem, via connections with spectral error matrix multiplication proven in previous work, implies quantitative improvements for approximate least squares regression and low-rank approximation, and implies faster low-rank approximation for popular kernels in machine learning such as the Gaussian and Sobolev kernels. Our main result has also already been applied to improve dimensionality-reduction guarantees for k-means clustering, and also implies new results for nonparametric regression. Lastly, we point out that the proof of the "BSS" deterministic row-sampling result of [Batson et al., SICOMP, 2012] can be modified to obtain deterministic row-sampling for approximate matrix product in terms of the stable rank of the matrices. The original "BSS" proof was in terms of the rank rather than the stable rank.
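
To illustrate the shape of the guarantee, the sketch below checks the spectral-norm error of an approximate matrix product computed from a Gaussian sketch (one example of a subspace embedding); the test matrices and the constant in m are illustrative assumptions.

import numpy as np

def stable_rank(M):
    # Squared ratio of the Frobenius and operator norms.
    return np.linalg.norm(M, "fro") ** 2 / np.linalg.norm(M, 2) ** 2

rng = np.random.default_rng(3)
n, d = 2000, 50
D = np.diag(np.linspace(1.0, 0.05, d))        # decaying spectrum => small stable rank
A = rng.standard_normal((n, d)) @ D
B = rng.standard_normal((n, d)) @ D

eps = 0.5
r = max(stable_rank(A), stable_rank(B))       # maximum stable rank of the two inputs
m = int(np.ceil(4 * r / eps**2))              # m = O(r/eps^2); constant 4 is illustrative
S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sketch

exact = A.T @ B
approx = (S @ A).T @ (S @ B)                  # approximate matrix product from sketches
err = np.linalg.norm(exact - approx, 2)
print(m, err / (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)))  # should be O(eps)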

Cite as

Michael B. Cohen, Jelani Nelson, and David P. Woodruff. Optimal Approximate Matrix Product in Terms of Stable Rank. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{cohen_et_al:LIPIcs.ICALP.2016.11,
  author =	{Cohen, Michael B. and Nelson, Jelani and Woodruff, David P.},
  title =	{{Optimal Approximate Matrix Product in Terms of Stable Rank}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.11},
  URN =		{urn:nbn:de:0030-drops-62788},
  doi =		{10.4230/LIPIcs.ICALP.2016.11},
  annote =	{Keywords: subspace embeddings, approximate matrix multiplication, stable rank, regression, low rank approximation}
}

Cohen, Michael

Document
Image Synthesis and Interactive 3D Graphics (Dagstuhl Seminar 00251)

Authors: Michael Cohen, Heinrich Müller, Claude Puech, and Hans-Peter Seidel

Published in: Dagstuhl Seminar Reports, Volume 1 (2021)


Cite as

Michael Cohen, Heinrich Müller, Claude Puech, and Hans-Peter Seidel. Image Synthesis and Interactive 3D Graphics (Dagstuhl Seminar 00251). Dagstuhl Seminar Report 278, pp. 1-34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2001)


BibTeX

@TechReport{cohen_et_al:DagSemRep.278,
  author =	{Cohen, Michael and M\"{u}ller, Heinrich and Puech, Claude and Seidel, Hans-Peter},
  title =	{{Image Synthesis and Interactive 3D Graphics (Dagstuhl Seminar 00251)}},
  pages =	{1--34},
  ISSN =	{1619-0203},
  year =	{2001},
  type = 	{Dagstuhl Seminar Report},
  number =	{278},
  institution =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemRep.278},
  URN =		{urn:nbn:de:0030-drops-151620},
  doi =		{10.4230/DagSemRep.278},
}