Computational Tradeoffs in Biological Neural Networks: Self-Stabilizing Winner-Take-All Networks

Authors: Nancy Lynch, Cameron Musco, Merav Parter



File: LIPIcs.ITCS.2017.15.pdf (1.27 MB, 44 pages)


Cite As

Nancy Lynch, Cameron Musco, and Merav Parter. Computational Tradeoffs in Biological Neural Networks: Self-Stabilizing Winner-Take-All Networks. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 15:1-15:44, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017). https://doi.org/10.4230/LIPIcs.ITCS.2017.15

Abstract

We initiate a line of investigation into biological neural networks from an algorithmic perspective. We develop a simplified but biologically plausible model for distributed computation in stochastic spiking neural networks and study tradeoffs between computation time and network complexity in this model. Our aim is to abstract real neural networks in a way that, while not capturing all interesting features, preserves high-level behavior and allows us to make biologically relevant conclusions.

In this paper, we focus on the important 'winner-take-all' (WTA) problem, which is analogous to a neural leader election unit: a network consisting of $n$ input neurons and $n$ corresponding output neurons must converge to a state in which a single output corresponding to a firing input (the 'winner') fires, while all other outputs remain silent. Neural circuits for WTA rely on inhibitory neurons, which suppress the activity of competing outputs and drive the network towards a converged state with a single firing winner. We attempt to understand how the number of inhibitors used affects network convergence time.
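
To make the problem statement concrete, here is a toy Python simulation of a stochastic spiking WTA network. It is loosely inspired by the idea of a small number of inhibitors with different activation thresholds: one inhibitor is active whenever any output fires, a second whenever two or more fire, and a self-loop lets a tentative winner withstand one inhibitor but not both. The specific two-inhibitor scheme, all weights, the sigmoid firing rule, and the convergence check below are assumptions for illustration, not the paper's actual construction.

    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def step(inputs, fired, inh1, inh2, rng):
        """One synchronous round. For simplicity, the inhibitors respond
        instantaneously to this round's output spikes: inh1 fires when at
        least one output fired, inh2 when at least two fired."""
        W_IN, W_SELF, W_INH, BIAS = 6.0, 12.0, 8.0, 2.0
        new = []
        for x, f in zip(inputs, fired):
            # Under one inhibitor, a firing output with a firing input sits at
            # potential 8 (fires w.h.p.); under both it sits at 0, a fair coin
            # flip that breaks symmetry among competing outputs.
            potential = W_IN * x + W_SELF * f - W_INH * (inh1 + inh2) - BIAS
            new.append(1 if rng.random() < sigmoid(potential) else 0)
        k = sum(new)
        return new, (1 if k >= 1 else 0), (1 if k >= 2 else 0)

    def simulate(inputs, stable_rounds=5, max_rounds=100_000, seed=1):
        rng = random.Random(seed)
        fired, inh1, inh2 = [0] * len(inputs), 0, 0
        streak = 0
        for t in range(1, max_rounds + 1):
            fired, inh1, inh2 = step(inputs, fired, inh1, inh2, rng)
            # Declare convergence once exactly one output fires, it matches a
            # firing input, and this persists for several consecutive rounds.
            ok = sum(fired) == 1 and inputs[fired.index(1)] == 1
            streak = streak + 1 if ok else 0
            if streak == stable_rounds:
                return t, fired.index(1)
        return None, None

    rounds, winner = simulate([1, 0, 1, 1, 0, 1, 1, 0])
    print(f"winner = output {winner}, declared after round {rounds}")

Because competing outputs extinguish independently with constant probability each round, the expected number of survivors shrinks geometrically, roughly the flavor of the constant-inhibitor, $O(\log n)$-round regime below; and if all outputs happen to fall silent, inhibition switches off and the firing inputs simply restart the competition.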

We show that it is possible to significantly outperform naive WTA constructions through a more refined use of inhibition, solving the problem in $O(\theta)$ rounds in expectation with just $O(\log^{1/\theta} n)$ inhibitors for any $\theta$. An alternative construction gives convergence in $O(\log^{1/\theta} n)$ rounds with $O(\theta)$ inhibitors. We complement these upper bounds with our main technical contribution, a nearly matching lower bound for networks using $\ge \log \log n$ inhibitors. Our lower bound uses familiar indistinguishability and locality arguments from distributed computing theory, applied to the neural setting. It lets us derive a number of interesting conclusions about the structure of any network solving WTA with good probability, and about the use of randomness and inhibition within such a network.
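
For a rough sense of the first tradeoff's scale, the snippet below evaluates $\log^{1/\theta} n$ for a hypothetical $n = 10^6$; it ignores the constants hidden in the O-notation, and the logarithm base (unspecified above) is arbitrarily taken as natural.

    import math

    # Back-of-the-envelope only: constants hidden in the O-notation are
    # ignored, and the natural logarithm is an arbitrary choice of base.
    n = 10**6
    for theta in (1, 2, 3):
        inhibitors = math.log(n) ** (1.0 / theta)
        print(f"theta={theta}: O({theta}) expected rounds, "
              f"~{inhibitors:.1f} inhibitors")

Already at $\theta = 3$ the inhibitor count drops below three while the expected round count stays constant, illustrating how sharply refined inhibition can trade network complexity against convergence time.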

Keywords
  • biological distributed algorithms
  • neural networks
  • distributed lower bounds
  • winner-take-all networks

