Simple Analysis of Sparse, Sign-Consistent JL

Author: Meena Jagadeesan

  • 20 pages

Author Details

Meena Jagadeesan
  • Harvard University, Cambridge, Massachusetts, USA


I would like to thank Prof. Jelani Nelson for advising this project.

Cite As

Meena Jagadeesan. Simple Analysis of Sparse, Sign-Consistent JL. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 61:1-61:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Allen-Zhu, Gelashvili, Micali, and Shavit construct a sparse, sign-consistent Johnson-Lindenstrauss distribution, and prove that this distribution yields an essentially optimal dimension for the correct choice of sparsity. However, their upper bound on the dimension and sparsity requires a complicated combinatorial graph-based argument similar to Kane and Nelson's analysis of sparse JL. We present a simple, combinatorics-free analysis of sparse, sign-consistent JL that yields the same dimension and sparsity upper bounds as the original analysis. Our analysis also yields dimension/sparsity tradeoffs, which were not previously known. As with previous proofs in this area, our analysis is based on applying Markov's inequality to the pth moment of an error term that can be expressed as a quadratic form of Rademacher variables. Interestingly, we show that, unlike in previous work in the area, the traditionally used Hanson-Wright bound is not strong enough to yield our desired result. Indeed, although the Hanson-Wright bound is known to be optimal for degree-2 Gaussian chaos, it was already shown to be suboptimal for Rademacher variables. Surprisingly, we are able to show a simple moment bound for quadratic forms of Rademacher variables that is sufficiently tight to achieve our desired result; given the ubiquity of moment and tail bounds in theoretical computer science, this bound is likely to be of broader interest.
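To make the object under study concrete: a sparse, sign-consistent JL matrix has exactly s nonzero entries per column, each of magnitude 1/√s, with all nonzero entries in a column sharing a single random sign (the sign-consistency constraint, motivated by the fact that a neuron's outgoing synapses are all excitatory or all inhibitory). The following Python sketch samples such a matrix; the function name, parameter values, and sampling details are my own illustrative choices, not taken from the paper.

```python
import numpy as np

def sparse_sign_consistent_jl(n, m, s, rng):
    """Sample an m x n sparse, sign-consistent JL matrix.

    Each column has exactly s nonzero entries, placed in s uniformly
    random rows, each equal to sigma_j / sqrt(s), where sigma_j is a
    single Rademacher sign shared by the entire column.
    """
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)  # s distinct rows
        sign = rng.choice([-1.0, 1.0])               # one sign per column
        A[rows, j] = sign / np.sqrt(s)
    return A

rng = np.random.default_rng(0)
n, m, s = 1000, 200, 10
A = sparse_sign_consistent_jl(n, m, s, rng)

# Empirically check norm preservation on a random unit vector:
# E[||Ax||^2] = ||x||^2 for any fixed x.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
print(abs(np.linalg.norm(A @ x) ** 2 - 1.0))
```

Unlike standard sparse JL, the signs within a column are not independent, and this correlation is exactly what makes the moment analysis of the error term harder.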

Subject Classification

ACM Subject Classification
  • Theory of computation → Random projections and metric embeddings

Keywords and Phrases
  • Dimensionality reduction
  • Random projections
  • Johnson-Lindenstrauss distribution
  • Hanson-Wright bound
  • Neuroscience-based constraints




References

  1. D. Achlioptas. Database-friendly Random Projections: Johnson-Lindenstrauss with Binary Coins. J. Comput. Syst. Sci., 66(4):671-687, June 2003.
  2. Z. Allen-Zhu, R. Gelashvili, S. Micali, and N. Shavit. Sparse sign-consistent Johnson-Lindenstrauss matrices: Compression with neuroscience-based constraints. Proceedings of the National Academy of Sciences, 111:16872-16876, 2014.
  3. M. B. Cohen. Nearly tight oblivious subspace embeddings by trace inequalities. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 278-287, 2016.
  4. M. B. Cohen, T. S. Jayram, and J. Nelson. Simple Analyses of the Sparse Johnson-Lindenstrauss Transform. In Proceedings of the 1st Symposium on Simplicity in Algorithms (SOSA), pages 1-9, 2018.
  5. S. Dahlgaard, M. Knudsen, and M. Thorup. Practical Hash Functions for Similarity Estimation and Dimensionality Reduction. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS), pages 6618-6628, 2017.
  6. A. Dasgupta, R. Kumar, and T. Sarlos. A Sparse Johnson-Lindenstrauss Transform. In Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), pages 341-350, 2010.
  7. C. Freksen, L. Kamma, and K. G. Larsen. Fully Understanding the Hashing Trick. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NeurIPS), pages 5394-5404, 2018.
  8. Z. Füredi and J. Komlós. The eigenvalues of random symmetric matrices. Combinatorica, 1(3):233-241, 1981.
  9. S. Ganguli and H. Sompolinsky. Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annual Review of Neuroscience, 35:485-508, 2012.
  10. R. T. Gray and P. A. Robinson. Stability and structural constraints of random brain networks with excitatory and inhibitory neural populations. Journal of Computational Neuroscience, 27(1):81-101, 2009.
  11. D. L. Hanson and F. T. Wright. A bound on tail probabilities for quadratic forms in independent random variables. Annals of Mathematical Statistics, 42(3):1079-1083, 1971.
  12. P. Hitczenko. Domination inequality for martingale transforms of Rademacher sequence. Israel Journal of Mathematics, 84:161-178, 1993.
  13. M. Jagadeesan. Understanding Sparse JL for Feature Hashing. CoRR, abs/1903.03605, 2019.
  14. T. S. Jayram and D. P. Woodruff. Optimal bounds for Johnson-Lindenstrauss transforms and streaming problems with subconstant error. ACM Transactions on Algorithms (TALG), Special Issue on SODA'11, 9:1-26, 2013.
  15. W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26:189-206, 1984.
  16. D. M. Kane, R. Meka, and J. Nelson. Almost optimal explicit Johnson-Lindenstrauss families. In Proceedings of the 14th International Workshop and 15th International Conference on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques (RANDOM), pages 628-639, 2011.
  17. D. M. Kane and J. Nelson. A Derandomized Sparse Johnson-Lindenstrauss Transform. CoRR, abs/1006.3585, 2010.
  18. D. M. Kane and J. Nelson. Sparser Johnson-Lindenstrauss transforms. In Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2012.
  19. R. Kiani, H. Esteky, K. Mirpour, and K. Tanaka. Object category structure in response patterns of neuronal population in monkey inferior temporal cortex. Journal of Neurophysiology, 97:4296-4309, 2007.
  20. R. Latała. Estimation of moments of sums of independent real random variables. Annals of Probability, 25(3):1502-1513, 1997.
  21. R. Latała. Tail and moment estimates for some types of chaos. Studia Mathematica, 135(1):39-53, 1999.
  22. S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1, 2005.
  23. K. Rajan and L. F. Abbott. Eigenvalue spectra of random matrices for neural networks. Physical Review Letters, 97:188104, 2006.
  24. T. Rogers and J. McClelland. Semantic Cognition: A Parallel Distributed Processing Approach. MIT Press, 2004.
  25. D. Spielman and N. Srivastava. Graph sparsification by effective resistances. SIAM Journal on Computing (SICOMP), 40:1913-1926, 2011.
  26. R. Vershynin. High-Dimensional Probability. Cambridge University Press, 2018.
  27. K. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. Feature Hashing for Large Scale Multitask Learning. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML), pages 1113-1120, 2009.
  28. E. P. Wigner. Characteristic vectors of bordered matrices with infinite dimensions. Annals of Mathematics, 62:548-564, 1955.
  29. D. P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10:1-157, 2014.