Streaming Complexity of SVMs

Authors: Alexandr Andoni, Collin Burns, Yi Li, Sepideh Mahabadi, David P. Woodruff




File

LIPIcs.APPROX-RANDOM.2020.50.pdf
  • Filesize: 0.56 MB
  • 22 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.APPROX/RANDOM.2020.50

Author Details

Alexandr Andoni
  • Columbia University, New York, NY, USA
Collin Burns
  • Columbia University, New York, NY, USA
Yi Li
  • Nanyang Technological University, Singapore, Singapore
Sepideh Mahabadi
  • Toyota Technological Institute at Chicago, IL, USA
David P. Woodruff
  • Carnegie Mellon University, Pittsburgh, PA, USA

Cite As

Alexandr Andoni, Collin Burns, Yi Li, Sepideh Mahabadi, and David P. Woodruff. Streaming Complexity of SVMs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 50:1-50:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020) https://doi.org/10.4230/LIPIcs.APPROX/RANDOM.2020.50

Abstract

We study the space complexity of solving the bias-regularized SVM problem in the streaming model. In particular, given a data set {(x_i, y_i)}_{i=1}^n ⊂ ℝ^d × {-1,+1}, the objective function is F_λ(θ,b) = (λ/2)‖(θ,b)‖₂² + (1/n) ∑_{i=1}^n max{0, 1 - y_i(θ^T x_i + b)}, and the goal is to find parameters that (approximately) minimize this objective. This is a classic supervised learning problem that has drawn considerable attention, including work on fast algorithms for solving the problem approximately, i.e., for finding (θ,b) such that F_λ(θ,b) ≤ min_{(θ',b')} F_λ(θ',b') + ε.
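For concreteness, the objective above can be evaluated directly when the data set fits in memory. The following is a minimal sketch in Python; the use of NumPy and the names X, y, lam are illustrative and not part of the paper.

    import numpy as np

    def svm_objective(theta, b, X, y, lam):
        """Bias-regularized SVM objective F_lambda(theta, b):
        lam/2 * ||(theta, b)||_2^2 + (1/n) * sum_i max(0, 1 - y_i*(theta^T x_i + b))."""
        reg = 0.5 * lam * (np.dot(theta, theta) + b * b)    # lam/2 * ||(theta, b)||_2^2
        margins = y * (X @ theta + b)                       # y_i * (theta^T x_i + b)
        hinge = np.maximum(0.0, 1.0 - margins).mean()       # average hinge loss
        return reg + hinge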
One of the most widely used algorithms for approximately optimizing the SVM objective is Stochastic Gradient Descent (SGD), which requires only O(1/(λε)) random samples and immediately yields a streaming algorithm that uses O(d/(λε)) space. For related problems, better streaming algorithms are known only for smooth objectives, unlike the non-smooth SVM objective we focus on in this work.
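As a point of reference, this SGD baseline can be run in a single pass over the stream while storing only the current iterate (θ, b), i.e., O(d) words. The sketch below assumes the standard Pegasos-style step size η_t = 1/(λt) and treats the bias as regularized, matching the objective above; the function and variable names are illustrative.

    import numpy as np

    def streaming_sgd(stream, d, lam):
        """One-pass SGD on the bias-regularized SVM objective.
        `stream` yields (x, y) pairs with x a vector in R^d and y in {-1, +1}."""
        theta = np.zeros(d)
        b = 0.0
        for t, (x, y) in enumerate(stream, start=1):
            eta = 1.0 / (lam * t)                  # standard step size for strong convexity
            margin = y * (np.dot(theta, x) + b)
            # Gradient step on the regularizer lam/2 * ||(theta, b)||_2^2 ...
            theta *= (1.0 - eta * lam)
            b *= (1.0 - eta * lam)
            # ... plus a subgradient step on the hinge loss if the margin is violated.
            if margin < 1.0:
                theta += eta * y * x
                b += eta * y
        return theta, b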
We initiate an investigation of the space complexity both of finding an approximate optimum of this objective and of the related "point estimation" problem of sketching the data set so as to evaluate the function value F_λ on any query (θ, b). We show that, for both problems, for dimensions d = 1,2, one can obtain streaming algorithms with space polynomially smaller than 1/(λε), which is the complexity of SGD for strongly convex functions like the bias-regularized SVM [Shalev-Shwartz et al., 2007] and which is known to be tight in general, even for d = 1 [Agarwal et al., 2009]. We also prove polynomial lower bounds for both point estimation and optimization. In particular, for point estimation we obtain a tight bound of Θ(1/√ε) for d = 1 and a nearly tight lower bound of Ω̃(d/ε²) for d = Ω(log(1/ε)). Finally, for optimization, we prove an Ω(1/√ε) lower bound for d = Ω(log(1/ε)), and show similar bounds when d is constant.
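To illustrate the point-estimation interface only (this is not the sketch achieving the bounds above), a naive streaming summary keeps a uniform reservoir sample and answers any query (θ, b) from it; standard concentration arguments require roughly 1/ε² samples for additive error ε on bounded losses, which is the kind of dependence the bounds above improve upon in low dimension. All names below are illustrative.

    import random
    import numpy as np

    class ReservoirPointEstimator:
        """Naive baseline: summarize the stream by a uniform sample of size m,
        then estimate F_lambda(theta, b) for any query from the sample."""

        def __init__(self, m, lam, seed=0):
            self.m, self.lam = m, lam
            self.sample = []               # reservoir of (x, y) pairs
            self.n = 0                     # number of stream items seen so far
            self.rng = random.Random(seed)

        def update(self, x, y):
            """Standard reservoir sampling: each item is kept with probability m/n."""
            self.n += 1
            if len(self.sample) < self.m:
                self.sample.append((x, y))
            else:
                j = self.rng.randrange(self.n)
                if j < self.m:
                    self.sample[j] = (x, y)

        def query(self, theta, b):
            """Estimate F_lambda(theta, b) from the sampled points."""
            reg = 0.5 * self.lam * (np.dot(theta, theta) + b * b)
            hinge = np.mean([max(0.0, 1.0 - y * (np.dot(theta, x) + b))
                             for x, y in self.sample])
            return reg + hinge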

Subject Classification

ACM Subject Classification
  • Theory of computation → Randomness, geometry and discrete structures
  • Theory of computation → Streaming, sublinear and near linear time algorithms
  • Theory of computation → Machine learning theory
  • Theory of computation → Lower bounds and information complexity
Keywords
  • support vector machine
  • streaming algorithm
  • space lower bound
  • sketching algorithm
  • point estimation

References

  1. Alekh Agarwal, Peter Bartlett, Pradeep Ravikumar, and Martin J. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization. International Conference on Neural Information Processing Systems (NIPS), 2009.
  2. Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. Symposium on Theory of Computing (STOC), 2017.
  3. Arturs Backurs, Piotr Indyk, and Ludwig Schmidt. On the fine-grained complexity of empirical risk minimization: Kernel methods and neural networks. International Conference on Neural Information Processing Systems (NIPS), 2017.
  4. Kenneth L. Clarkson and David P. Woodruff. Numerical linear algebra in the streaming model. Symposium on Theory of Computing (STOC), 2009.
  5. J. H. Huggins, R. P. Adams, and T. Broderick. PASS-GLM: Polynomial approximate sufficient statistics for scalable Bayesian GLM inference. International Conference on Neural Information Processing Systems (NIPS), 2017.
  6. T. S. Jayram and David P. Woodruff. Optimal bounds for Johnson-Lindenstrauss transforms and streaming problems with subconstant error. ACM Transactions on Algorithms, 2013.
  7. Yi Li, Ruosong Wang, and David P. Woodruff. Tight bounds for the subspace sketch problem with applications. Symposium on Discrete Algorithms (SODA), 2020.
  8. Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. International Conference on Neural Information Processing Systems (NIPS), 2007.
  9. Piyush Rai, Hal Daumé III, and Suresh Venkatasubramanian. Streamed learning: One-pass SVMs. International Joint Conference on Artificial Intelligence (IJCAI), 2009.
  10. Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 2017.
  11. Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
  12. Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. International Conference on Machine Learning (ICML), 2007.
  13. Ivor W. Tsang, James T. Kwok, and Pak-Ming Cheung. Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research (JMLR), 2005.