Spiking Neural Networks Through the Lens of Streaming Algorithms

Authors: Yael Hitron, Cameron Musco, Merav Parter




File

LIPIcs.DISC.2020.10.pdf
  • Filesize: 492 kB
  • 18 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.DISC.2020.10

Author Details

Yael Hitron
  • Weizmann Institute of Science, Rehovot, Israel
Cameron Musco
  • University of Massachusetts, Amherst, MA, USA
Merav Parter
  • Weizmann Institute of Science, Rehovot, Israel

Acknowledgements

We are very grateful to Eylon Yogev for various discussions on pseudorandom generators, and for pointing out Lemma 27 to us.

Cite As

Yael Hitron, Cameron Musco, and Merav Parter. Spiking Neural Networks Through the Lens of Streaming Algorithms. In 34th International Symposium on Distributed Computing (DISC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 179, pp. 10:1-10:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020). https://doi.org/10.4230/LIPIcs.DISC.2020.10

Abstract

We initiate the study of biologically-inspired spiking neural networks from the perspective of streaming algorithms. Like computers, human brains face memory limitations, which pose a significant obstacle when processing large-scale and dynamically changing data. In computer science, these challenges are captured by the well-known streaming model, which can be traced back to Munro and Paterson '78 and has had significant impact in theory and beyond. In the classical streaming setting, one must compute a function f of a stream of updates 𝒮 = {u₁,…,u_m}, given restricted single-pass access to the stream. The primary complexity measure is the space used by the algorithm.
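
To make the model concrete, the following minimal Python sketch processes a stream in a single pass using small space, estimating the number of distinct elements via the k-minimum-values idea (in the spirit of Bar-Yossef et al. [2]). It is an illustrative sketch under simplifying assumptions (Python's built-in hash composed with a random linear map stands in for a fully random hash function); it is not the construction from the paper.

import random

class KMVDistinctElements:
    """Single-pass estimate of the number of distinct items, using O(k) space."""

    def __init__(self, k=64, seed=0):
        rng = random.Random(seed)
        self.k = k
        self.p = (1 << 61) - 1             # Mersenne prime modulus
        self.a = rng.randrange(1, self.p)  # random linear hash h(x) = (a*x + b) mod p
        self.b = rng.randrange(self.p)
        self.mins = []                     # k smallest hash values seen so far

    def _hash(self, x):
        # Map x to a (pseudo)uniform value in [0, 1); illustrative, not fully random.
        return ((self.a * hash(x) + self.b) % self.p) / self.p

    def update(self, x):
        h = self._hash(x)
        if h in self.mins:
            return                         # repeated item: the sketch is unchanged
        if len(self.mins) < self.k:
            self.mins.append(h)
            self.mins.sort()
        elif h < self.mins[-1]:
            self.mins[-1] = h              # replace the largest of the k minima
            self.mins.sort()

    def estimate(self):
        if len(self.mins) < self.k:
            return len(self.mins)          # fewer than k distinct items seen: exact
        return int((self.k - 1) / self.mins[-1])  # standard KMV estimator

# Usage: one pass over the stream; space depends on k, not on the stream length.
stream = [random.randrange(500) for _ in range(10_000)]
sketch = KMVDistinctElements(k=64)
for u in stream:
    sketch.update(u)
print(sketch.estimate(), len(set(stream)))  # estimate vs. ground truth
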
In contrast to the large body of work on streaming algorithms, relatively little is known about the computational aspects of data processing in spiking neural networks. In this work, we seek to connect these two models, leveraging techniques developed for streaming algorithms to better understand neural computation. Our primary goal is to design networks for various computational tasks using as few auxiliary (non-input or output) neurons as possible. The number of auxiliary neurons can be thought of as the "space" required by the network. 
Previous algorithmic work on spiking neural networks shares many similarities with streaming algorithms. However, the connection between these two space-limited models has not been formally addressed. We take the first steps towards understanding this connection. On the upper bound side, we design neural algorithms based on known streaming algorithms for fundamental tasks, including distinct elements, approximate median, and heavy hitters. The number of neurons in our solutions almost matches the space bounds of the corresponding streaming algorithms. As a general algorithmic primitive, we show how to implement the important streaming technique of linear sketching efficiently in spiking neural networks. On the lower bound side, we give a generic reduction, showing that any space-efficient spiking neural network can be simulated by a space-efficient streaming algorithm. This reduction lets us translate streaming-space lower bounds into nearly matching neural-space lower bounds, establishing a close connection between the two models.
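
As a concrete instance of linear sketching, here is a minimal Python version of the Count-Min sketch of Cormode and Muthukrishnan [10], a standard linear sketch for frequency estimation and heavy hitters. The sketch is a linear function of the stream's frequency vector, so every update simply adds into a small table; this additivity is what makes the primitive natural to target in other space-limited models. The parameters and hash construction below are illustrative choices, not the paper's neural implementation.

import random

class CountMinSketch:
    """Linear sketch for frequency estimation and heavy hitters [10]."""

    def __init__(self, width=272, depth=5, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.p = (1 << 61) - 1  # Mersenne prime modulus
        # One pairwise-independent hash per row, Carter-Wegman style [4].
        self.rows = [(rng.randrange(1, self.p), rng.randrange(self.p))
                     for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _bucket(self, r, x):
        a, b = self.rows[r]
        return ((a * hash(x) + b) % self.p) % self.width

    def update(self, x, delta=1):
        # Linearity: an update (x, delta) adds delta to one counter per row,
        # so sketches of two streams can be combined by coordinate-wise addition.
        for r in range(self.depth):
            self.table[r][self._bucket(r, x)] += delta

    def query(self, x):
        # Never underestimates; taking the min over rows bounds the overcount
        # (roughly eps*m with width ~ e/eps and depth ~ ln(1/delta)).
        return min(self.table[r][self._bucket(r, x)] for r in range(self.depth))

# Usage: estimate item frequencies from a single pass, then check candidates
# (here the small known alphabet) for heavy hitters.
stream = [random.choice("aaaabbbccd") for _ in range(10_000)]
cms = CountMinSketch()
for u in stream:
    cms.update(u)
print({x: cms.query(x) for x in "abcd"})  # approximate counts per item
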

Subject Classification

ACM Subject Classification
  • Networks → Network algorithms
Keywords
  • Biological distributed algorithms
  • Spiking neural networks
  • Streaming algorithms


References

  1. Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. Journal of Computer and System Sciences, 58(1):137-147, 1999.
  2. Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, D. Sivakumar, and Luca Trevisan. Counting distinct elements in a data stream. In Randomization and Approximation Techniques, 6th International Workshop, RANDOM 2002, Cambridge, MA, USA, September 13-15, 2002, Proceedings, pages 1-10, 2002.
  3. Jaroslaw Blasiok. Optimal streaming and tracking distinct elements with high probability. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 2432-2448, 2018. URL: https://doi.org/10.1137/1.9781611975031.156.
  4. J. Lawrence Carter and Mark N. Wegman. Universal classes of hash functions. Journal of Computer and System Sciences, 18(2):143-154, 1979.
  5. Moses Charikar, Kevin Chen, and Martin Farach-Colton. Finding frequent items in data streams. In International Colloquium on Automata, Languages, and Programming, pages 693-703. Springer, 2002.
  6. Philippe Chassaing and Lucas Gerin. Efficient estimation of the cardinality of large data sets. arXiv preprint math/0701347, 2007.
  7. Zhiwei Chen and Aoqian Zhang. A survey of approximate quantile computation on large-scale data. IEEE Access, 8:34585-34597, 2020.
  8. Chi-Ning Chou, Kai-Min Chung, and Chi-Jen Lu. On the algorithmic power of spiking neural networks. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2018.
  9. Graham Cormode, Mayur Datar, Piotr Indyk, and Shanmugavelayutham Muthukrishnan. Comparing data streams using Hamming norms (how to zero in). IEEE Transactions on Knowledge and Data Engineering, 15(3):529-540, 2003.
  10. Graham Cormode and Shan Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. Journal of Algorithms, 55(1):58-75, 2005.
  11. Sanjoy Dasgupta, Charles F. Stevens, and Saket Navlakha. A neural algorithm for a fundamental computing problem. Science, 358(6364):793-796, 2017.
  12. Marianne Durand and Philippe Flajolet. Loglog counting of large cardinalities (extended abstract). In Algorithms - ESA 2003, 11th Annual European Symposium, Budapest, Hungary, September 16-19, 2003, Proceedings, pages 605-617, 2003.
  13. Philippe Flajolet, Éric Fusy, Olivier Gandouet, and Frédéric Meunier. HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm. In DMTCS Proceedings vol. AH, 2007 Conference on Analysis of Algorithms (AofA 07), 2007. URL: https://dmtcs.episciences.org/3545.
  14. Yael Hitron, Nancy A. Lynch, Cameron Musco, and Merav Parter. Random sketching, clustering, and short-term memory in spiking neural networks. In 11th Innovations in Theoretical Computer Science Conference, ITCS 2020, January 12-14, 2020, Seattle, Washington, USA, pages 23:1-23:31, 2020.
  15. Yael Hitron and Merav Parter. Counting to ten with two fingers: Compressed counting with spiking neurons. In 27th Annual European Symposium on Algorithms, ESA 2019, September 9-11, 2019, Munich/Garching, Germany, pages 57:1-57:17, 2019.
  16. Yael Hitron, Merav Parter, and Gur Perri. The computational cost of asynchronous neural communication. In 11th Innovations in Theoretical Computer Science Conference, ITCS 2020, January 12-14, 2020, Seattle, Washington, USA, pages 48:1-48:47, 2020.
  17. Russell Impagliazzo and Avi Wigderson. P = BPP if E requires exponential circuits: Derandomizing the XOR lemma. In Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, El Paso, Texas, USA, May 4-6, 1997, pages 220-229, 1997.
  18. Piotr Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. Journal of the ACM (JACM), 53(3):307-323, 2006.
  19. Piotr Indyk and Eric Price. K-median clustering, model-based compressive sensing, and sparse recovery for earth mover distance. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, pages 627-636, 2011.
  20. Piotr Indyk and David P. Woodruff. Tight lower bounds for the distinct elements problem. In 44th Symposium on Foundations of Computer Science (FOCS 2003), 11-14 October 2003, Cambridge, MA, USA, Proceedings, pages 283-288, 2003.
  21. John Kallaugher and Eric Price. Separations and equivalences between turnstile streaming and linear sketching. In Symposium on Theory of Computing, STOC 2020, 2020.
  22. Daniel M. Kane, Jelani Nelson, and David P. Woodruff. An optimal algorithm for the distinct elements problem. In Proceedings of the Twenty-Ninth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS 2010, June 6-11, 2010, Indianapolis, Indiana, USA, pages 41-52, 2010. URL: https://doi.org/10.1145/1807085.1807094.
  23. Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, Aaron Sidford, and Jakab Tardos. Fast and space efficient spectral sparsification in dynamic streams. In Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1814-1833. SIAM, 2020.
  24. Zohar Karnin, Kevin Lang, and Edo Liberty. Optimal quantile approximation in streams. In 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 71-78. IEEE, 2016.
  25. Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer. Training deep spiking neural networks using backpropagation. Frontiers in Neuroscience, 10:508, 2016.
  26. Robert A. Legenstein, Wolfgang Maass, Christos H. Papadimitriou, and Santosh S. Vempala. Long term memory and the densest k-subgraph problem. In 9th Innovations in Theoretical Computer Science Conference, ITCS 2018, January 11-14, 2018, Cambridge, MA, USA, pages 57:1-57:15, 2018.
  27. Yi Li, Huy L. Nguyen, and David P. Woodruff. Turnstile streaming algorithms might as well be linear sketches. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 174-183, 2014. URL: https://doi.org/10.1145/2591796.2591812.
  28. Nancy Lynch, Cameron Musco, and Merav Parter. Computational tradeoffs in biological neural networks: Self-stabilizing winner-take-all networks. Innovations in Theoretical Computer Science, 2017.
  29. Nancy Lynch, Cameron Musco, and Merav Parter. Spiking neural networks: An algorithmic perspective. In 5th Workshop on Biological Distributed Algorithms (BDA 2017), 2017.
  30. Nancy Lynch and Mien Brabeeba Wang. Integrating temporal information to spatial information in a neural circuit. arXiv preprint arXiv:1903.01217, 2019.
  31. Nancy A. Lynch, Cameron Musco, and Merav Parter. Neuro-RAM unit with applications to similarity testing and compression in spiking neural networks. In 31st International Symposium on Distributed Computing, DISC 2017, October 16-20, 2017, Vienna, Austria, pages 33:1-33:16, 2017.
  32. Wolfgang Maass. On the computational power of noisy spiking neurons. In Advances in Neural Information Processing Systems, pages 211-217, 1996.
  33. Wolfgang Maass. Networks of spiking neurons: the third generation of neural network models. Neural Networks, 10(9):1659-1671, 1997.
  34. Wolfgang Maass. On the computational power of winner-take-all. Neural Computation, 12(11):2519-2535, 2000.
  35. Wolfgang Maass, Christos H. Papadimitriou, Santosh S. Vempala, and Robert A. Legenstein. Brain computation: A computer science perspective. In Computing and Software Science - State of the Art and Perspectives, pages 184-199. Springer, 2019. URL: https://doi.org/10.1007/978-3-319-91908-9_11.
  36. Gurmeet Singh Manku, Sridhar Rajagopalan, and Bruce G. Lindsay. Approximate medians and other quantiles in one pass and with limited memory. ACM SIGMOD Record, 27(2):426-435, 1998.
  37. J. Ian Munro and Mike S. Paterson. Selection and sorting with limited storage. Theoretical Computer Science, 12(3):315-323, 1980.
  38. Shanmugavelayutham Muthukrishnan. Data Streams: Algorithms and Applications. Now Publishers Inc., 2005.
  39. Noam Nisan and Avi Wigderson. Hardness vs randomness. Journal of Computer and System Sciences, 49(2):149-167, 1994.
  40. Christos H. Papadimitriou and Santosh S. Vempala. Random projection in the brain and computation with assemblies of neurons. In 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10-12, 2019, San Diego, California, USA, pages 57:1-57:19, 2019.
  41. Lili Su, Chia-Jung Chang, and Nancy Lynch. Spike-based winner-take-all computation: Fundamental limits and order-optimal circuits. Neural Computation, 31(12):2523-2561, 2019.
  42. Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, and Anthony Maida. Deep learning in spiking neural networks. Neural Networks, 111:47-63, 2019.
  43. Leslie G. Valiant. Capacity of neural networks for lifelong learning of composable tasks. In 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15-17, 2017, pages 367-378, 2017.
  44. David P. Woodruff. Optimal space lower bounds for all frequency moments. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2004, New Orleans, Louisiana, USA, January 11-14, 2004, pages 167-175, 2004.