Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes

  • Monday, 25. November 2024, 11:00
  • INF 205, 4/414
  • Prof. Sören Christensen (Uni Kiel)

The fundamental differences between how biological and artificial neural networks learn have been a subject of intense research. While artificial networks rely heavily on optimization techniques of the Stochastic Gradient Descent (SGD) type, biological learning has often been assumed to operate solely on local information, which would seem to make SGD inapplicable.

This talk challenges this line of argument by studying a stochastic process model for supervised learning in biological neural networks. Our results show that a dynamic approximating a continuous gradient step emerges through the accumulation of many local updates triggered by each learning opportunity. This suggests that SGD-like optimization may be a fundamental mechanism underlying learning in biological brains.
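The core idea, that many purely local updates can accumulate into an approximate gradient step, can be illustrated with a toy simulation. The sketch below is an illustrative assumption of this author's, not the stochastic process model from the talk: a single linear neuron applies many tiny random weight perturbations and reinforces each one by the scalar loss change it causes (a local, perturbation-based signal, with no access to the gradient). Averaged over many such micro-updates, the accumulated change aligns with the negative gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the model from the talk):
# one linear neuron with squared loss on a single example.
w = np.array([0.5, -0.3])   # synaptic weights
x = np.array([1.0, 2.0])    # input
y = 1.5                     # target

def loss(w):
    return 0.5 * (w @ x - y) ** 2

grad = (w @ x - y) * x      # exact gradient, for comparison only

# Local rule: perturb the weights slightly, observe the resulting
# loss change (a scalar feedback signal), and reinforce the
# perturbation accordingly. Each micro-update is purely local.
sigma, n_updates = 1e-3, 200_000
base = loss(w)
accumulated = np.zeros_like(w)
for _ in range(n_updates):
    xi = rng.normal(0.0, sigma, size=w.shape)  # tiny perturbation
    delta = loss(w + xi) - base                # scalar feedback
    accumulated += -(delta / sigma**2) * xi / n_updates

# The accumulated local updates approximate a negative gradient step:
# E[(loss(w + xi) - loss(w)) * xi] / sigma^2 ≈ grad.
print("exact gradient:      ", grad)
print("accumulated estimate:", -accumulated)
```

Because the perturbations are isotropic with variance `sigma**2`, the expectation of `delta * xi / sigma**2` equals the gradient to first order, so the sum of micro-updates behaves like one SGD step. The names and constants here (`sigma`, `n_updates`) are arbitrary choices for the demo.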