Streaming Complexity of SVMs
We study the space complexity of solving the bias-regularized SVM problem in the streaming model. In particular, given a data set (x_i, y_i) ∈ ℝ^d × {−1, +1}, i = 1, …, n, the objective function is F_λ(θ, b) = (λ/2)‖(θ, b)‖₂² + (1/n)∑_{i=1}^n max{0, 1 − y_i(θ^T x_i + b)}, and the goal is to find parameters that (approximately) minimize this objective. This is a classic supervised learning problem that has attracted significant attention, including the development of fast algorithms for solving it approximately: i.e., for finding (θ, b) such that F_λ(θ, b) ≤ min_{(θ', b')} F_λ(θ', b') + ε.
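As a concrete illustration of the objective above, the following is a minimal sketch (not from the paper) that evaluates F_λ(θ, b) on a data set with numpy; the function name and toy data are hypothetical:

```python
import numpy as np

def svm_objective(theta, b, X, y, lam):
    """Bias-regularized SVM objective F_lambda(theta, b):
    (lam/2) * ||(theta, b)||^2 + mean hinge loss over the data."""
    margins = y * (X @ theta + b)          # y_i * (theta^T x_i + b)
    hinge = np.maximum(0.0, 1.0 - margins) # hinge loss per example
    reg = 0.5 * lam * (np.dot(theta, theta) + b * b)
    return reg + hinge.mean()
```

For example, at (θ, b) = (0, 0) every margin is 0, so each hinge term equals 1 and the objective value is exactly 1 regardless of λ.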
One of the most widely used algorithms for approximately optimizing the SVM objective is Stochastic Gradient Descent (SGD), which requires only O(1/λε) random samples and immediately yields a streaming algorithm that uses O(d/λε) space. For related problems, better streaming algorithms are known only for smooth functions, unlike the (non-smooth) SVM objective that we focus on in this work.
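The SGD approach referred to above can be sketched as follows. This is a hedged, Pegasos-style illustration (one sampled example per step, step size 1/(λt), O(d) working space); the function name and details are illustrative, not the paper's algorithm:

```python
import numpy as np

def sgd_svm(X, y, lam, T, rng=None):
    """Pegasos-style SGD on the bias-regularized SVM objective.

    Each step samples one example, takes a subgradient step with
    learning rate 1/(lam * t), and keeps only O(d) state.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    theta, b = np.zeros(d), 0.0
    for t in range(1, T + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)
        # Subgradient of the regularizer; add the hinge term if the
        # sampled example violates the margin.
        g_theta, g_b = lam * theta, lam * b
        if y[i] * (X[i] @ theta + b) < 1:
            g_theta = g_theta - y[i] * X[i]
            g_b = g_b - y[i]
        theta -= eta * g_theta
        b -= eta * g_b
    return theta, b
```

Since each iteration touches one example and the O(d)-size iterate, processing a stream of O(1/λε) samples this way gives the O(d/λε)-space baseline that the paper's algorithms improve upon in low dimensions.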
We initiate an investigation of the space complexity both of finding an approximate optimum of this objective, and of the related "point estimation" problem of sketching the data set so as to evaluate the function value F_λ on any query (θ, b). We show that, for both problems, for dimensions d = 1, 2, one can obtain streaming algorithms with space polynomially smaller than 1/λε, which is the complexity of SGD for strongly convex functions like the bias-regularized SVM [Shalev-Shwartz et al., 2007], and which is known to be tight in general, even for d = 1 [Agarwal et al., 2009]. We also prove polynomial lower bounds for both point estimation and optimization. In particular, for point estimation we obtain a tight bound of Θ(1/√ε) for d = 1 and a nearly tight lower bound of Ω̃(d/ε²) for d = Ω(log(1/ε)). Finally, for optimization, we prove an Ω(1/√ε) lower bound for d = Ω(log(1/ε)), and show similar bounds when d is constant.
support vector machine
streaming algorithm
space lower bound
sketching algorithm
point estimation
Theory of computation~Randomness, geometry and discrete structures
Theory of computation~Streaming, sublinear and near linear time algorithms
Theory of computation~Machine learning theory
Theory of computation~Lower bounds and information complexity
50:1-50:22
APPROX
https://arxiv.org/abs/2007.03633
Alexandr
Andoni
Alexandr Andoni
Columbia University, New York, NY, USA
Supported in part by Simons Foundation (#491119) and NSF (CCF-1617955, CCF-1740833).
Collin
Burns
Collin Burns
Columbia University, New York, NY, USA
Yi
Li
Yi Li
Nanyang Technological University, Singapore, Singapore
Supported in part by Singapore Ministry of Education (AcRF) Tier 2 grant MOE2018-T2-1-013.
Sepideh
Mahabadi
Sepideh Mahabadi
Toyota Technological Institute at Chicago, IL, USA
David P.
Woodruff
David P. Woodruff
Carnegie Mellon University, Pittsburgh, PA, USA
Supported by the National Science Foundation under Grant No. CCF-1815840.
10.4230/LIPIcs.APPROX/RANDOM.2020.50
Alekh Agarwal, Peter Bartlett, Pradeep Ravikumar, and Martin J. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization. International Conference on Neural Information Processing Systems (NIPS), 2009.
Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. Symposium on Theory of Computing (STOC), 2017.
Arturs Backurs, Piotr Indyk, and Ludwig Schmidt. On the fine-grained complexity of empirical risk minimization: Kernel methods and neural networks. Advances in Neural Information Processing Systems (NIPS), 2017.
Kenneth L. Clarkson and David P. Woodruff. Numerical linear algebra in the streaming model. Symposium on Theory of Computing (STOC), 2009.
J. H. Huggins, R. P. Adams, and T. Broderick. PASS-GLM: Polynomial approximate sufficient statistics for scalable Bayesian GLM inference. International Conference on Neural Information Processing Systems (NIPS), 2017.
T. S. Jayram and David P. Woodruff. Optimal bounds for Johnson-Lindenstrauss transforms and streaming problems with subconstant error. ACM Transactions on Algorithms, 2013.
Yi Li, Ruosong Wang, and David P. Woodruff. Tight bounds for the subspace sketch problem with applications. Symposium on Discrete Algorithms (SODA), 2020.
Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. International Conference on Neural Information Processing Systems (NIPS), 2007.
Piyush Rai, Hal Daumé III, and Suresh Venkatasubramanian. Streamed learning: One-pass SVMs. International Joint Conference on Artificial Intelligence (IJCAI), 2009.
Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 2017.
Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.
Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. International Conference on Machine Learning (ICML), 2007.
Ivor W. Tsang, James T. Kwok, and Pak-Ming Cheung. Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research (JMLR), 2005.
Alexandr Andoni, Collin Burns, Yi Li, Sepideh Mahabadi, and David P. Woodruff
Creative Commons Attribution 3.0 Unported license
https://creativecommons.org/licenses/by/3.0/legalcode