The Computational Cost of Asynchronous Neural Communication

Authors: Yael Hitron, Merav Parter, Gur Perri



File

LIPIcs.ITCS.2020.48.pdf
  • Filesize: 2.15 MB
  • 47 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ITCS.2020.48

Author Details

Yael Hitron
  • Weizmann Institute of Science, Rehovot, Israel
Merav Parter
  • Weizmann Institute of Science, Rehovot, Israel
Gur Perri
  • Weizmann Institute of Science, Rehovot, Israel

Acknowledgements

We are thankful to Roei Tell and Gil Cohen for helpful discussions on Boolean Circuits. We also thank Yoram Moses for referring us to related work on asynchronous digital circuits and discussing the connections between the two settings.

Cite As

Yael Hitron, Merav Parter, and Gur Perri. The Computational Cost of Asynchronous Neural Communication. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 48:1-48:47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)
https://doi.org/10.4230/LIPIcs.ITCS.2020.48

Abstract

Biological neural computation is inherently asynchronous due to large variations in neuronal spike timing and transmission delays. So far, most theoretical work on neural networks has assumed the synchronous setting, where neurons fire simultaneously in discrete rounds. In this work we aim at understanding the barriers of asynchronous neural computation from an algorithmic perspective. We consider an extension of the widely studied model of synchronized spiking neurons [Maass, Neural Networks 97] to the asynchronous setting by taking into account edge and node delays.

- Edge Delays: We define an asynchronous model for spiking neurons in which the latency values (i.e., transmission delays) of non-self-loop edges vary adversarially over time. This extends the recent work of [Hitron and Parter, ESA'19], in which the latency values are restricted to be fixed over time. Our first contribution is an impossibility result implying that the assumption that self-loop edges have no delays (as assumed by Hitron and Parter) is indeed necessary. Interestingly, in real biological networks self-loop edges (a.k.a. autapses) are indeed free of delays, and neuroscientists have noted that this property is crucial for network synchronization. To capture the computational challenges in this setting, we first consider the implementation of a single NOT gate. This simple function already captures the fundamental difficulties of the asynchronous setting. Our key technical results are space and time upper and lower bounds for the NOT function; our time bounds are tight. In the spirit of distributed synchronizers [Awerbuch and Peleg, FOCS'90], and following [Hitron and Parter, ESA'19], we then provide a general synchronizer machinery. Our construction is very modular and is based on an efficient circuit implementation of threshold gates. The complexity of our scheme is measured by the overhead in the number of neurons and in the computation time, both of which are shown to be polynomial in the largest latency value and in the largest incoming degree Δ of the original network.

- Node Delays: We introduce the study of asynchronous communication due to variations in the response rates of the neurons in the network. In real brain networks, the round duration varies between different neurons. Our key result is a simulation methodology that transforms the above-mentioned synchronized solution under edge delays into a synchronized solution under node delays, while incurring a small overhead in space and time.
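The edge-delay setting and the difficulty of even a single NOT gate can be illustrated with a small simulation. The following Python sketch is not the paper's construction: it is a minimal, simplified reading of the model in which self-loop edges keep a fixed one-round latency while every other edge receives an arbitrary latency between 1 and a bound L (drawn at random here in place of a true adversary). The names AsyncSNN, run, and the three-neuron example are illustrative only.

```python
import random
from collections import defaultdict

class AsyncSNN:
    """Toy simulator: threshold spiking neurons whose non-self-loop edges suffer
    time-varying latencies in [1, L], while self-loop edges (autapses) keep the
    nominal one-round latency (no added delay)."""

    def __init__(self, thresholds, max_latency):
        self.thresholds = thresholds      # neuron name -> firing threshold
        self.max_latency = max_latency    # upper bound L on edge latencies
        self.edges = []                   # list of (src, dst, weight)
        self.pending = defaultdict(list)  # arrival time -> list of (dst, weight)

    def add_edge(self, src, dst, weight):
        self.edges.append((src, dst, weight))

    def run(self, initial_fired, steps):
        fired = set(initial_fired)
        history = [fired]
        for t in range(steps):
            # Schedule spike deliveries from the neurons firing at time t.
            for src, dst, w in self.edges:
                if src in fired:
                    # Autapse: fixed one-round latency; other edges: latency in [1, L],
                    # chosen here at random to stand in for the adversary.
                    delay = 1 if src == dst else random.randint(1, self.max_latency)
                    self.pending[t + delay].append((dst, w))
            # A neuron fires at time t+1 if the spikes arriving then reach its threshold.
            potential = defaultdict(float)
            for dst, w in self.pending.pop(t + 1, []):
                potential[dst] += w
            fired = {v for v, p in potential.items() if p >= self.thresholds[v]}
            history.append(fired)
        return history

# A naive NOT-gate attempt: a self-sustaining bias neuron b excites the output z,
# and the input x inhibits it. When the excitatory and inhibitory spikes arrive
# in different rounds, z may fire even though x fired, i.e., the gate glitches.
net = AsyncSNN(thresholds={"x": 1, "b": 1, "z": 1}, max_latency=3)
net.add_edge("b", "b", 1)    # autapse keeps the bias neuron firing every round
net.add_edge("b", "z", 1)    # excitatory edge toward the output
net.add_edge("x", "z", -1)   # inhibitory edge from the input
print(net.run(initial_fired={"b", "x"}, steps=6))
```

Running the example a few times typically shows rounds in which z fires despite x having fired, because the inhibitory spike from x and the excitatory spike from b arrive in different rounds. This is the kind of race condition that the paper's NOT-gate bounds and synchronizer machinery are meant to address.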

Subject Classification

ACM Subject Classification
  • Theory of computation → Design and analysis of algorithms
Keywords
  • asynchronous communication
  • asynchronous computation
  • spiking neurons
  • synchronizers

References

  1. Douglas B Armstrong, Arthur D Friedman, and Premachandran R Menon. Design of asynchronous circuits assuming unbounded gate delays. IEEE Transactions on Computers, 100(12):1110-1120, 1969. Google Scholar
  2. Baruch Awerbuch and David Peleg. Network Synchronization with Polylogarithmic Overhead. In 31st Annual Symposium on Foundations of Computer Science, St. Louis, Missouri, USA, October 22-24, 1990, Volume II, pages 514-522, 1990. Google Scholar
  3. Michael J Berry II and Markus Meister. Refractoriness and neural precision. In Advances in Neural Information Processing Systems, pages 110-116, 1998. Google Scholar
  4. Tobias Bjerregaard and Shankar Mahadevan. A survey of research and practices of network-on-chip. ACM Computing Surveys (CSUR), 38(1):1, 2006. Google Scholar
  5. Sami Boudkkazi, Edmond Carlier, Norbert Ankri, Olivier Caillard, Pierre Giraud, Laure Fronzaroli-Molinieres, and Dominique Debanne. Release-dependent variations in synaptic latency: a putative code for short-and long-term synaptic dynamics. Neuron, 56(6):1048-1060, 2007. Google Scholar
  6. Chi-Ning Chou, Kai-Min Chung, and Chi-Jen Lu. On the Algorithmic Power of Spiking Neural Networks. In 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10-12, 2019, San Diego, California, USA, pages 26:1-26:20, 2019. Google Scholar
  7. RE Lee DeVille and Charles S Peskin. Synchrony and asynchrony in a fully stochastic neural network. Bulletin of mathematical biology, 70(6):1608-1633, 2008. Google Scholar
  8. Huawei Fan, Yafeng Wang, Hengtong Wang, Ying-Cheng Lai, and Xingang Wang. Autapses promote synchronization in neuronal networks. Scientific reports, 8(1):580, 2018. Google Scholar
  9. Martin Fürer. Faster integer multiplication. SIAM Journal on Computing, 39(3):979-1005, 2009. Google Scholar
  10. Johan Håstad. On the Size of Weights for Threshold Gates. SIAM J. Discrete Math., 7(3):484-492, 1994. URL: https://doi.org/10.1137/S0895480192235878.
  11. Scott Hauck. Asynchronous design methodologies: An overview. Proceedings of the IEEE, 83(1):69-93, 1995. Google Scholar
  12. Yael Hitron and Merav Parter. Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons. In 27th Annual European Symposium on Algorithms, ESA 2019, 2019. Google Scholar
  13. Yael Hitron and Merav Parter. Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons. CoRR, abs/1902.10369, 2019. URL: http://arxiv.org/abs/1902.10369.
  14. Kaori Ikeda and John M Bekkers. Autapses. Current Biology, 16(9):R308, 2006. Google Scholar
  15. Fabian Kuhn, Joel Spencer, Konstantinos Panagiotou, and Angelika Steger. Synchrony and asynchrony in neural networks. In Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete algorithms, pages 949-964. SIAM, 2010. Google Scholar
  16. Robert A. Legenstein, Wolfgang Maass, Christos H. Papadimitriou, and Santosh Srinivas Vempala. Long Term Memory and the Densest K-Subgraph Problem. In 9th Innovations in Theoretical Computer Science Conference, ITCS 2018, January 11-14, 2018, Cambridge, MA, USA, pages 57:1-57:15, 2018. Google Scholar
  17. Nancy Lynch and Cameron Musco. A Basic Compositional Model for Spiking Neural Networks. arXiv preprint, 2018. URL: http://arxiv.org/abs/1808.03884.
  18. Nancy Lynch, Cameron Musco, and Merav Parter. Computational Tradeoffs in Biological Neural Networks: Self-Stabilizing Winner-Take-All Networks. In 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, 2017. Google Scholar
  19. Nancy Lynch, Cameron Musco, and Merav Parter. Spiking Neural Networks: An Algorithmic Perspective. In 5th Workshop on Biological Distributed Algorithms (BDA 2017), July 2017. Google Scholar
  20. Nancy A. Lynch, Cameron Musco, and Merav Parter. Neuro-RAM Unit with Applications to Similarity Testing and Compression in Spiking Neural Networks. In 31st International Symposium on Distributed Computing, DISC 2017, October 16-20, 2017, Vienna, Austria, pages 33:1-33:16, 2017. Google Scholar
  21. Jun Ma, Xinlin Song, Wuyin Jin, and Chuni Wang. Autapse-induced synchronization in a coupled neuronal network. Chaos, Solitons & Fractals, 80:31-38, 2015. Google Scholar
  22. Wolfgang Maass. Lower Bounds for the Computational Power of Networks of Spiking Neurons. Electronic Colloquium on Computational Complexity (ECCC), 1(19), 1994. URL: http://eccc.hpi-web.de/eccc-reports/1994/TR94-019/index.html.
  23. Wolfgang Maass. On the computational power of noisy spiking neurons. In Advances in Neural Information Processing Systems (NIPS 1995), 1996. Google Scholar
  24. Wolfgang Maass. Networks of spiking neurons: the third generation of neural network models. Neural Networks, 10(9):1659-1671, 1997. Google Scholar
  25. Wolfgang Maass. Paradigms for computing with spiking neurons. In Models of Neural Networks IV, pages 373-402. Springer, 2002. Google Scholar
  26. Rajit Manohar and Yoram Moses. Analyzing Isochronic Forks with Potential Causality. In 21st IEEE International Symposium on Asynchronous Circuits and Systems, ASYNC 2015, Mountain View, CA, USA, May 4-6, 2015, pages 69-76, 2015. URL: https://doi.org/10.1109/ASYNC.2015.19.
  27. Rajit Manohar and Yoram Moses. The eventual C-element theorem for delay-insensitive asynchronous circuits. In 2017 23rd IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC), pages 102-109. IEEE, 2017. Google Scholar
  28. Alain J Martin. The limitations to delay-insensitivity in asynchronous circuits. In Beauty is our business, pages 302-311. Springer, 1990. Google Scholar
  29. Robert Miller. Time and the brain. CRC Press, 2000. Google Scholar
  30. Saburo Muroga. Threshold logic and its applications. Wiley, 1971. Google Scholar
  31. Yu P Ofman. On the algorithmic complexity of discrete functions. In Doklady Akademii Nauk, volume 145, pages 48-51. Russian Academy of Sciences, 1962. Google Scholar
  32. Christos H. Papadimitriou and Santosh S. Vempala. Random Projection in the Brain and Computation with Assemblies of Neurons. In 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10-12, 2019, San Diego, California, USA, pages 57:1-57:19, 2019. URL: https://doi.org/10.4230/LIPIcs.ITCS.2019.57.
  33. Huixin Qin, Jun Ma, Chunni Wang, and Ying Wu. Autapse-induced spiral wave in network of neurons under noise. PloS one, 9(6):e100849, 2014. Google Scholar
  34. Alexa Riehle, Sonja Grün, Markus Diesmann, and Ad Aertsen. Spike synchronization and rate modulation differentially involved in motor cortical function. Science, 278(5345):1950-1953, 1997. Google Scholar
  35. BL Sabatini and WG Regehr. Timing of synaptic transmission. Annual review of physiology, 61(1):521-542, 1999. Google Scholar
  36. Jens Sparsø. Asynchronous circuit design: a tutorial. Chapters 1-8 of "Principles of Asynchronous Circuit Design: A Systems Perspective". Kluwer Academic Publishers, 2001. Google Scholar
  37. Lili Su, Chia-Jung Chang, and Nancy Lynch. Spike-Based Winner-Take-All Computation: Fundamental Limits and Order-Optimal Circuits. Neural Computation, 2019. Google Scholar
  38. Christopher S. Wallace. A Suggestion for a Fast Multiplier. IEEE Trans. Electronic Computers, 13(1):14-17, 1964. URL: https://doi.org/10.1109/PGEC.1964.263830.
  39. Barbeeba Wang and Nancy Lynch. Integrating Temporal Information to Spatial Information in a Neural Circuit. arXiv preprint, 2019. URL: http://arxiv.org/abs/1903.01217.
  40. Ergin Yilmaz, Mahmut Ozer, Veli Baysal, and Matjaž Perc. Autapse-induced multiple coherence resonance in single neurons and neuronal networks. Scientific Reports, 6:30914, 2016. Google Scholar