
Dr. Guillaume Bellec's talk will take place at https://meet.google.com/dde-bczh-jfe.

Babies, and humans more generally, appear to learn from very little data compared with machines. Towards a functional model of representation learning in the visual cortices, we derive a brain plasticity model using self-supervised learning theory from AI. The resulting learning theory models learning in babies on at least two levels: at the cognitive level, the self-supervision signals come from spontaneous gaze changes (saccades); at the mechanistic level, all the terms required to compute the parameter gradients can be mapped onto data-grounded mechanisms (the learning rule is layer-wise local and only requires pre- and post-synaptic activity, top-down dendritic input, or a global neuromodulator-like factor). Despite this realism, we demonstrate that this algorithm builds useful hierarchical representations by testing it on machine learning benchmarks. Going further, we will sketch how to extend this theory to other sensory pathways with time-varying inputs (such as the auditory pathway), towards a general theory of brain plasticity inspired by AI theory.
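
To make the abstract's key ingredients concrete, here is a minimal, hypothetical NumPy sketch of a layer-wise local, self-supervised update driven by two "saccade" views of the same input. The two-layer ReLU encoder, the noisy-glimpse views, and the agreement-based scalar modulator are all illustrative assumptions of this sketch, not the learning rule presented in the talk; it only mirrors the stated constraints (each layer updates from its own pre/post activity plus one global, neuromodulator-like signal, with no backpropagated gradients).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer encoder; each layer is updated with purely local quantities
# (its own pre- and post-synaptic activity) plus one scalar, neuromodulator-like
# agreement signal shared across layers. Illustrative sketch only.
dims = [64, 32, 16]                      # hypothetical layer sizes
weights = [rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
           for n_in, n_out in zip(dims[:-1], dims[1:])]

def forward(x):
    """Return all layer activities, so each layer's pre/post pair is available."""
    acts = [x]
    for W in weights:
        acts.append(np.maximum(0.0, W @ acts[-1]))  # ReLU
    return acts

def local_update(image, lr=1e-2, noise=0.1):
    # Two noisy "glimpses" of the same image stand in for pre- and post-saccade
    # views; the self-supervision target is that their representations agree.
    view_a = image + noise * rng.normal(size=image.shape)
    view_b = image + noise * rng.normal(size=image.shape)
    acts_a, acts_b = forward(view_a), forward(view_b)

    # Scalar "neuromodulator": large when the top-level representations of the
    # two glimpses disagree, small when they already agree.
    top_a, top_b = acts_a[-1], acts_b[-1]
    modulator = np.linalg.norm(top_a - top_b) / (np.linalg.norm(top_a) + 1e-8)

    # Layer-wise local rule: nudge each layer so that glimpse A's post-synaptic
    # activity moves toward glimpse B's (the layer's own activity at the other
    # glimpse), using only that layer's pre/post activity and the global factor.
    for l, W in enumerate(weights):
        pre, post_a, post_b = acts_a[l], acts_a[l + 1], acts_b[l + 1]
        W += lr * modulator * np.outer(post_b - post_a, pre)
    return modulator

# Usage: drive the toy rule with random inputs and report the final disagreement.
for step in range(200):
    disagreement = local_update(rng.normal(size=dims[0]))
print(f"final disagreement: {disagreement:.3f}")
```

The design choice to reuse the same layer's activity at the other glimpse as the target keeps every term locally available at that layer, which is the property the abstract emphasizes; how the actual model avoids representational collapse and incorporates top-down dendritic input is part of the talk itself.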


  • Date: 23/02/2023, 01:00 PM
  • Location: Online Event

Description

Dr. Bellec develops computational theories of brains and intelligent machines. His work is best known for showing that competitive artificial intelligence can emerge from simple mathematical models of biologically realistic neural networks.