Random Sketching, Clustering, and Short-Term Memory in Spiking Neural Networks

Authors: Yael Hitron, Nancy Lynch, Cameron Musco, Merav Parter



File
  • LIPIcs.ITCS.2020.23.pdf (1.72 MB, 31 pages)

Author Details

Yael Hitron
  • Weizmann Institute of Science, Rehovot, Israel
Nancy Lynch
  • Massachusetts Institute of Technology, Cambridge, MA, USA
Cameron Musco
  • University of Massachusetts, Amherst, MA, USA
Merav Parter
  • Weizmann Institute of Science, Rehovot, Israel

Cite As

Yael Hitron, Nancy Lynch, Cameron Musco, and Merav Parter. Random Sketching, Clustering, and Short-Term Memory in Spiking Neural Networks. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 23:1-23:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020) https://doi.org/10.4230/LIPIcs.ITCS.2020.23

Abstract

We study input compression in a biologically inspired model of neural computation. We demonstrate that a network consisting of a random projection step (implemented via random synaptic connectivity) followed by a sparsification step (implemented via winner-take-all competition) can reduce well-separated high-dimensional input vectors to well-separated low-dimensional vectors. By augmenting our network with a third module, we can efficiently map each input (along with any small perturbations of the input) to a unique representative neuron, solving a neural clustering problem.
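
To make the two-stage pipeline concrete, here is a minimal NumPy sketch of the random-projection-then-winner-take-all idea. This illustrates the general technique only, not the paper's spiking-network construction; all constants (n, m, k, the 0.05 connection probability, the 0.1 input density) are arbitrary assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions -- none of these constants come from the paper.
n = 10_000  # input dimension (potentially large)
m = 200     # compressed dimension
k = 10      # number of winners kept by the winner-take-all step

# Random projection step: random synaptic connectivity, modeled here as a
# random 0/1 matrix with a small connection probability.
W = (rng.random((m, n)) < 0.05).astype(int)

def compress(x):
    """Project a binary input down to m dimensions, then sparsify by
    keeping only the k most strongly activated output neurons."""
    potentials = W @ x
    y = np.zeros(m, dtype=int)
    y[np.argsort(potentials)[-k:]] = 1  # winner-take-all competition
    return y

# Two random (hence, w.h.p., well-separated) inputs get distinct sparse codes.
x1 = (rng.random(n) < 0.1).astype(int)
x2 = (rng.random(n) < 0.1).astype(int)
print("code distance:", int(np.sum(compress(x1) != compress(x2))))
```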
Both the size of our network and its processing time, i.e., the time it takes the network to compute the compressed output given a presented input, are independent of the (potentially large) dimension of the input patterns and depend only on the number of distinct inputs that the network must encode and the pairwise relative Hamming distance between these inputs. The first two steps of our construction mirror known biological networks, for example, in the fruit fly olfactory system [Caron et al., 2013; Lin et al., 2014; Dasgupta et al., 2017]. Our analysis helps provide a theoretical understanding of these networks and lay a foundation for how random compression and input memorization may be implemented in biological neural networks.
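
For concreteness, the separation parameter mentioned above is the pairwise relative Hamming distance, i.e., the fraction of coordinates on which two patterns differ. The small helper below, which is illustrative and not part of the paper's construction, checks this quantity for a set of candidate binary patterns (the pattern count, dimension, and density are assumed values):

```python
import numpy as np

def relative_hamming(a, b):
    """Fraction of coordinates on which two binary patterns differ."""
    return float(np.mean(a != b))

# Five random binary patterns over 10,000 coordinates (assumed inputs).
patterns = (np.random.default_rng(1).random((5, 10_000)) < 0.1).astype(int)
separation = min(relative_hamming(patterns[i], patterns[j])
                 for i in range(len(patterns))
                 for j in range(i + 1, len(patterns)))
print(f"minimum pairwise relative Hamming distance: {separation:.3f}")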
A key technical contribution of our network design is the implementation of a short-term memory. Our network can be given a desired memory time t_m as an input parameter and satisfies the following with high probability: any pattern presented several times within a time window of t_m rounds will be mapped to a single representative output neuron. However, a pattern not presented for c⋅t_m rounds, for some constant c > 1, will be "forgotten", and its representative output neuron will be released to accommodate newly introduced patterns.
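
This memory guarantee can be pictured with a simple timestamp-based cache. The sketch below models only the high-level behavior (a pattern repeated within t_m rounds keeps its representative; a pattern unseen for more than c⋅t_m rounds is released), not the spiking implementation; the values t_m = 100 and c = 2 are arbitrary assumptions:

```python
T_M = 100  # desired memory time t_m (an input parameter in the paper)
C = 2      # forgetting constant c > 1 (illustrative value)

memory = {}  # pattern -> round in which it was last presented

def present(pattern, current_round):
    """Map a pattern to a representative slot, evicting stale patterns."""
    # Release representatives of patterns unseen for more than C * T_M rounds.
    for p, last_seen in list(memory.items()):
        if current_round - last_seen > C * T_M:
            del memory[p]
    memory[pattern] = current_round  # a repeated pattern keeps its slot

present("odor_A", current_round=0)
present("odor_A", current_round=50)    # within t_m: same representative
present("odor_B", current_round=300)   # by now odor_A has been forgotten
print("odor_A" in memory)              # False: its slot was released
```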

Subject Classification

ACM Subject Classification
  • Theory of computation → Design and analysis of algorithms
Keywords
  • biological distributed computing
  • spiking neural networks
  • compressed sensing
  • clustering
  • random projection
  • dimensionality reduction
  • winner-take-all


References

  1. Jayadev Acharya, Arnab Bhattacharyya, and Pritish Kamath. Improved bounds for universal one-bit compressive sensing. In 2017 IEEE International Symposium on Information Theory (ISIT), pages 2353-2357, 2017.
  2. Zeyuan Allen-Zhu, Rati Gelashvili, Silvio Micali, and Nir Shavit. Sparse sign-consistent Johnson-Lindenstrauss matrices: Compression with neuroscience-based constraints. PNAS, 2014.
  3. Cornelia I Bargmann and Eve Marder. From the connectome to brain function. Nature Methods, 10(6):483-490, 2013.
  4. Ella Bingham and Heikki Mannila. Random projection in dimensionality reduction: applications to image and text data. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 245-250. ACM, 2001.
  5. Burton H Bloom. Space/time trade-offs in hash coding with allowable errors. Communications of the ACM, 13(7):422-426, 1970.
  6. Petros T Boufounos and Richard G Baraniuk. 1-bit compressive sensing. In 42nd Annual Conference on Information Sciences and Systems (CISS 2008), pages 16-21. IEEE, 2008.
  7. Christos Boutsidis, Anastasios Zouzias, and Petros Drineas. Random projections for k-means clustering. In Proceedings of NIPS 2010.
  8. Emmanuel J Candes and Terence Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406-5425, 2006.
  9. Sophie JC Caron, Vanessa Ruta, LF Abbott, and Richard Axel. Random convergence of olfactory inputs in the Drosophila mushroom body. Nature, 497(7447):113, 2013.
  10. Chi-Ning Chou, Kai-Min Chung, and Chi-Jen Lu. On the algorithmic power of spiking neural networks. arXiv preprint, 2018. URL: http://arxiv.org/abs/1803.10375.
  11. Kenneth L Clarkson and David P Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of STOC 2013.
  12. Michael B Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina Persu. Dimensionality reduction for k-means clustering and low rank approximation. In Proceedings of STOC 2015.
  13. Graham Cormode and Shan Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. Journal of Algorithms, 55(1):58-75, 2005.
  14. Robert Coultrip, Richard Granger, and Gary Lynch. A cortical model of winner-take-all competition via lateral inhibition. Neural Networks, 5(1):47-54, 1992.
  15. Anirban Dasgupta, Ravi Kumar, and Tamás Sarlós. Fast locality-sensitive hashing. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1073-1081. ACM, 2011.
  16. Sanjoy Dasgupta, Timothy C Sheehan, Charles F Stevens, and Saket Navlakha. A neural data structure for novelty detection. Proceedings of the National Academy of Sciences, 115(51):13093-13098, 2018.
  17. Sanjoy Dasgupta, Charles F Stevens, and Saket Navlakha. A neural algorithm for a fundamental computing problem. Science, 358(6364):793-796, 2017.
  18. Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the twentieth annual symposium on Computational geometry, pages 253-262. ACM, 2004.
  19. David L Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.
  20. Surya Ganguli and Haim Sompolinsky. Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annual Review of Neuroscience, 2012.
  21. Simon S Haykin. Neural networks and learning machines, volume 3. Pearson, 2009.
  22. Yael Hitron and Merav Parter. Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons. In Proceedings of ESA 2019. URL: http://arxiv.org/abs/1902.10369.
  23. John J Hopfield and David W Tank. Computing with neural circuits: a model. Science, 233(4764):625-633, 1986.
  24. Laurent Jacques, Jason N Laska, Petros T Boufounos, and Richard G Baraniuk. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Transactions on Information Theory, 59(4):2082-2102, 2013.
  25. Robert T Knight. Contribution of human hippocampal region to novelty detection. Nature, 383(6597):256, 1996.
  26. Christof Koch and Shimon Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. In Matters of Intelligence, pages 115-141. Springer, 1987.
  27. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 2015.
  28. Robert A Legenstein, Wolfgang Maass, Christos H Papadimitriou, and Santosh Srinivas Vempala. Long Term Memory and the Densest K-Subgraph Problem. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018), pages 57:1-57:15, 2018.
  29. Andrew C Lin, Alexei M Bygrave, Alix De Calignon, Tzumin Lee, and Gero Miesenböck. Sparse, decorrelated odor coding in the mushroom body enhances learned odor discrimination. Nature Neuroscience, 17(4):559, 2014.
  30. Adi Livnat and Christos Papadimitriou. Evolution and learning: used together, fused together. A response to Watson and Szathmáry. Trends in Ecology & Evolution, 31(12):894-896, 2016.
  31. Nikos K Logothetis. What we can do and what we cannot do with fMRI. Nature, 453(7197):869, 2008.
  32. Nancy Lynch and Cameron Musco. A Basic Compositional Model for Spiking Neural Networks. arXiv preprint, 2018. URL: http://arxiv.org/abs/1808.03884.
  33. Nancy Lynch, Cameron Musco, and Merav Parter. Computational Tradeoffs in Biological Neural Networks: Self-Stabilizing Winner-Take-All Networks. In Proceedings of ITCS 2017.
  34. Nancy Lynch, Cameron Musco, and Merav Parter. Neuro-RAM Unit with Applications to Similarity Testing and Compression in Spiking Neural Networks. In Proceedings of DISC 2017.
  35. Nancy Lynch, Cameron Musco, and Merav Parter. Spiking Neural Networks: An Algorithmic Perspective. In 5th Workshop on Biological Distributed Algorithms (BDA 2017), July 2017.
  36. Wolfgang Maass. Networks of spiking neurons: the third generation of neural network models. Neural Networks, 10(9):1659-1671, 1997.
  37. Wolfgang Maass. On the computational power of winner-take-all. Neural Computation, 12(11):2519-2535, 2000.
  38. Christos H Papadimitriou and Santosh S Vempala. Cortical learning via prediction. In Conference on Learning Theory, pages 1402-1422, 2015.
  39. Christos H Papadimitriou and Santosh S Vempala. Random Projection in the Brain and Computation with Assemblies of Neurons. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2018.
  40. Narender Ramnani and Adrian M Owen. Anterior prefrontal cortex: insights into function from anatomy and neuroimaging. Nature Reviews Neuroscience, 5(3):184, 2004.
  41. Charan Ranganath and Gregor Rainer. Cognitive neuroscience: Neural mechanisms for detecting and remembering novel events. Nature Reviews Neuroscience, 4(3):193, 2003.
  42. Tamas Sarlos. Improved approximation algorithms for large matrices via random projections. In Proceedings of FOCS 2006.
  43. Olaf Sporns, Giulio Tononi, and Rolf Kötter. The human connectome: a structural description of the human brain. PLoS Computational Biology, 1(4):e42, 2005.
  44. Leslie G Valiant. Circuits of the Mind. Oxford University Press, 2000.
  45. Leslie G Valiant. Memorization and association on a realistic neural model. Neural Computation, 17(3):527-555, 2005.
  46. Leslie G Valiant. Capacity of Neural Networks for Lifelong Learning of Composable Tasks. In Proceedings of FOCS 2017, pages 367-378.
  47. Santosh S Vempala. The Random Projection Method, volume 65. American Mathematical Society, 2005.