Research Article · DOI: 10.1145/3583131.3590460

Learning to Act through Evolution of Neural Diversity in Random Neural Networks

Published: 12 July 2023

ABSTRACT

Biological nervous systems consist of networks of diverse, sophisticated information processors in the form of neurons of different classes. In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared among all neurons within a layer or even the whole network, and training focuses on optimizing synaptic weights. In this paper, we instead propose optimizing neuro-centric parameters to attain a set of diverse neurons that can perform complex computations. Demonstrating the promise of the approach, we show that evolving neural parameters alone allows agents to solve various reinforcement learning tasks without optimizing any synaptic weights. While the approach does not aim to be an accurate biological model, parameterizing neurons to a larger degree than is currently common practice allows us to ask questions about the computational abilities afforded by neural diversity in random neural networks. The presented results open up interesting future research directions, such as combining evolved neural diversity with activity-dependent plasticity.
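To make the core idea concrete, here is a minimal sketch of what evolving neuro-centric parameters in a random network can look like: the synaptic weights are drawn once and frozen, and only per-neuron activation parameters are optimized with a simple evolution strategy (in the spirit of Salimans et al., 2017). This is not the paper's implementation; the specific parametric activation (a per-neuron gain, bias, and tanh/identity mixing coefficient), the hyperparameters, and the CartPole environment are all assumptions made for illustration.

```python
import numpy as np

# Sketch only: a fixed random-weight network in which nothing but
# per-neuron activation parameters is evolved. The parametric
# activation below is a hypothetical stand-in, not the paper's.
rng = np.random.default_rng(0)

OBS, HID, ACT = 4, 32, 2                          # CartPole-sized dimensions
W1 = rng.normal(0, 1 / np.sqrt(OBS), (HID, OBS))  # random weights, never trained
W2 = rng.normal(0, 1 / np.sqrt(HID), (ACT, HID))
N_PARAMS = 3 * HID                                # gain, bias, shape per hidden neuron

def forward(theta, obs):
    """Per-neuron parameters shape each unit's activation function."""
    gain, bias, shape = theta.reshape(3, HID)
    pre = W1 @ obs + bias
    mix = 1.0 / (1.0 + np.exp(-shape))            # per-neuron blend in (0, 1)
    h = gain * (mix * np.tanh(pre) + (1.0 - mix) * pre)
    return W2 @ h

def episode_return(theta, env):
    """Total reward of one greedy rollout (gymnasium API)."""
    obs, _ = env.reset()
    total, done = 0.0, False
    while not done:
        obs, r, term, trunc, _ = env.step(int(np.argmax(forward(theta, obs))))
        total, done = total + r, term or trunc
    return total

def evolve(env, iters=200, pop=64, sigma=0.1, lr=0.03):
    """Plain evolution strategy over the neuron parameters only."""
    theta = np.zeros(N_PARAMS)
    for _ in range(iters):
        eps = rng.normal(size=(pop, N_PARAMS))
        rets = np.array([episode_return(theta + sigma * e, env) for e in eps])
        adv = (rets - rets.mean()) / (rets.std() + 1e-8)
        theta += lr / (pop * sigma) * eps.T @ adv  # ES gradient estimate
    return theta

# Usage (requires gymnasium; the environment choice is illustrative):
#   import gymnasium as gym
#   theta = evolve(gym.make("CartPole-v1"))
```

Note that the weight matrices never enter the search space: the optimizer above touches only 3 × 32 = 96 neuron parameters, which is what makes it meaningful to ask how much computation a random synaptic substrate affords once its neurons are diverse.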

Published in

GECCO '23: Proceedings of the Genetic and Evolutionary Computation Conference
July 2023, 1667 pages
ISBN: 9798400701191
DOI: 10.1145/3583131
Copyright © 2023 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Overall Acceptance Rate: 1,669 of 4,410 submissions, 38%
