Neuro-RAM Unit with Applications to Similarity Testing and Compression in Spiking Neural Networks

Authors: Nancy Lynch, Cameron Musco, Merav Parter





Cite As

Nancy Lynch, Cameron Musco, and Merav Parter. Neuro-RAM Unit with Applications to Similarity Testing and Compression in Spiking Neural Networks. In 31st International Symposium on Distributed Computing (DISC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 91, pp. 33:1-33:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017) https://doi.org/10.4230/LIPIcs.DISC.2017.33

Abstract

We study distributed algorithms implemented in a simplified biologically inspired model for stochastic spiking neural networks. We focus on tradeoffs between computation time and network complexity, along with the role of noise and randomness in efficient neural computation.

It is widely accepted that neural spike responses, and neural computation in general, are inherently stochastic. In recent work, we explored how this stochasticity could be leveraged to solve the 'winner-take-all' leader election task. Here, we focus on using randomness in neural algorithms for similarity testing and compression. In the most basic setting, given two n-length patterns of firing neurons, we wish to distinguish whether the patterns are equal or epsilon-far from equal.
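Outside any neural model, the sampling idea behind such randomized similarity testing is simple: checking roughly log(1/delta)/epsilon uniformly random positions distinguishes equal patterns from epsilon-far ones with probability at least 1 - delta. A minimal plain-Python sketch (the function name and the failure parameter delta are illustrative, not from the paper):

```python
import math
import random

def similarity_test(x, y, eps, delta=0.01):
    """Sampling-based similarity test (illustrative, not the neural circuit).

    Returns True whenever x == y. If x and y differ in at least eps * n
    positions, returns False with probability at least 1 - delta."""
    assert len(x) == len(y)
    n = len(x)
    # Each sample misses a differing position with probability <= 1 - eps,
    # so ceil(ln(1/delta) / eps) independent samples suffice.
    k = math.ceil(math.log(1 / delta) / eps)
    for _ in range(k):
        i = random.randrange(n)
        if x[i] != y[i]:
            return False
    return True
```

With eps = 0.1 and delta = 0.01 this inspects only 47 positions regardless of n, mirroring the sublinear flavor of the network construction (which must additionally pay for random access to the sampled positions, hence the neuro-RAM below).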

Randomization allows us to solve this task with a very compact network, using O((sqrt(n) log n)/epsilon) auxiliary neurons, which is sublinear in the input size. At the heart of our solution is the design of a t-round neural random access memory, or indexing network, which we call a neuro-RAM. This module can be implemented with O(n/t) auxiliary neurons and is useful in many applications beyond similarity testing - e.g., we discuss its application to compression via random projection.
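For reference, the function a neuro-RAM computes is the standard indexing (random access) function: given an n-bit input pattern and a log2(n)-bit address, output the addressed input bit. A plain-Python specification of this target function (the specification only, not the spiking-network implementation):

```python
def indexing(x, addr):
    """The indexing (random access) function: x is an n-bit pattern with
    n a power of two, addr is a list of log2(n) address bits (most
    significant first); returns the addressed bit of x."""
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    assert len(addr) == n.bit_length() - 1
    i = 0
    for b in addr:  # interpret the address bits as an integer index
        i = (i << 1) | b
    return x[i]
```

A similarity tester can then sample an address at random and feed it to two neuro-RAMs, one per input pattern, comparing the two retrieved bits.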

Using a VC dimension-based argument, we show that the tradeoff between runtime and network size in our neuro-RAM is nearly optimal. To the best of our knowledge, we are the first to apply these techniques to stochastic spiking networks. Our result has several implications. Since our neuro-RAM can be implemented with deterministic threshold gates, it demonstrates that, in contrast to similarity testing, randomness does not provide significant computational advantages for indexing. It also establishes a separation between our networks, which spike with a sigmoidal probability function, and well-studied deterministic sigmoidal networks, whose gates output real values and which can implement a neuro-RAM much more efficiently.
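That the indexing function is computable by deterministic threshold gates can be illustrated with the textbook depth-2 multiplexer construction: one selector gate per input position plus an OR gate, about n gates in total. (The paper's contribution concerns much finer size/time tradeoffs; this sketch only shows computability.)

```python
def threshold_gate(weights, inputs, bias):
    """Deterministic threshold gate: outputs 1 iff the weighted sum of
    its binary inputs reaches the bias."""
    return 1 if sum(w * v for w, v in zip(weights, inputs)) >= bias else 0

def mux_indexing(x, addr):
    """Indexing via a depth-2 threshold circuit (a multiplexer).

    For each position i, a selector gate fires iff addr spells out i in
    binary and x[i] = 1; an OR gate combines the n selector outputs."""
    n, d = len(x), len(addr)  # d = log2(n) address bits
    selectors = []
    for i in range(n):
        pattern = [(i >> (d - 1 - j)) & 1 for j in range(d)]
        # Feed each address bit, or its negation, so that a matching
        # address contributes exactly d to the weighted sum.
        literals = [a if b else 1 - a for a, b in zip(addr, pattern)]
        selectors.append(threshold_gate([1] * (d + 1), literals + [x[i]], d + 1))
    return threshold_gate([1] * n, selectors, 1)  # OR of the selectors
```

This circuit agrees with the indexing function on every input; the question studied in the paper is how few gates (or spiking neurons) and rounds suffice, not whether the function is computable at all.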


Keywords
  • spiking neural networks
  • biological distributed algorithms
  • circuit design
