Machine Learning group

The University of Sheffield

ABOUT
We are the Machine Learning group at the University of Sheffield. This webpage is dedicated to our seminar series, which is open to the public.


  •  19/09/2024 03:00 PM
  •   Regent Court, Sheffield City Centre, Sheffield, UK

Title: Merging insights from artificial and biological neural networks for neuromorphic edge intelligence

Google Meet link: meet.google.com/qvz-mtfi-rta

Abstract: The development of efficient bio-inspired algorithms and hardware is currently missing a clear framework. Should we start from the brain's computational primitives and figure out how to apply them to real-world problems (bottom-up approach), or should we build on working AI solutions and fine-tune them to increase their biological plausibility (top-down approach)? We will see why biological plausibility and hardware efficiency are often two sides of the same coin, and how neuroscience- and AI-driven insights can cross-feed each other toward neuromorphic edge intelligence.

Bio: Charlotte Frenkel is an Assistant Professor at Delft University of Technology, The Netherlands. She received her Ph.D. from the Université catholique de Louvain in 2020 and was a postdoctoral researcher at the Institute of Neuroinformatics, UZH and ETH Zürich, Switzerland. Her research aims at bridging the bottom-up (bio-inspired) and top-down (engineering-driven) design approaches toward neuromorphic intelligence, with a focus on digital neuromorphic processor design, embedded machine learning, and brain-inspired on-device learning. Dr. Frenkel received a best paper award at the IEEE International Symposium on Circuits and Systems (ISCAS) 2020 conference, and her Ph.D. thesis was awarded the FNRS / Nokia Bell Scientific Award 2021 and the FNRS / IBM Innovation Award 2021. In 2023, she was awarded the prestigious AiNed Fellowship and Veni grants from the Dutch Research Council (NWO). She has served as a program co-chair of the NICE conference and of the tinyML Research Symposium, as a TPC member of IEEE ESSERC, and as an associate editor for the IEEE Transactions on Biomedical Circuits and Systems.

  •  12/02/2024 01:00 PM
  •   Regent Court, Sheffield City Centre, Sheffield, UK

Training deep resistive networks with equilibrium propagation. We present a mathematical framework of learning called "equilibrium propagation" (EP). The EP framework is compatible with gradient-descent optimization, the workhorse of deep learning, but in EP, inference and gradient computation are achieved using the same physical laws, and the learning rule for each weight (trainable parameter) is local, thus opening a path for energy-efficient deep learning. We show that EP can be used to train electrical circuits composed of voltage sources, variable resistors and diodes, a class of networks that we dub "deep resistive networks" (DRNs). We show that DRNs are universal function approximators: they can implement or approximate arbitrary input-output functions. We then present a fast algorithm to simulate DRNs (on classical computers) as well as simulations of DRNs trained by EP on MNIST. We argue that DRNs are closely related to deep Hopfield networks (DHNs), and we present simulations of DHNs trained by EP on CIFAR-10, CIFAR-100 and ImageNet 32x32. Altogether, we contend that DRNs and EP can guide the development of efficient processors for AI.
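The two-phase, local character of EP described above can be illustrated with a deliberately tiny example. This is a hypothetical scalar "network" with a hand-picked quadratic energy, not the speaker's deep resistive networks: the free phase minimises the energy alone, the nudged phase minimises the energy plus a small multiple of the cost, and contrasting a local quantity between the two equilibria recovers the cost gradient.

```python
import numpy as np

# Toy equilibrium propagation sketch (illustrative, not the talk's DRNs).
# Energy E(s; w, x) = 0.5*s**2 - w*x*s, so the free equilibrium is s* = w*x.
# Cost C(s) = 0.5*(s - y)**2.
w, x, y = 0.7, 1.5, 2.0
beta = 1e-4  # nudging strength

# Free phase: minimise E alone -> s_free = w*x (closed form for this quadratic)
s_free = w * x
# Nudged phase: minimise E + beta*C -> s_nudged = (w*x + beta*y) / (1 + beta)
s_nudged = (w * x + beta * y) / (1.0 + beta)

# EP estimate: contrast dE/dw = -x*s between the two equilibria. The rule is
# local: it only needs the presynaptic input x and the unit's own state s.
ep_grad = (1.0 / beta) * ((-x * s_nudged) - (-x * s_free))

# Exact gradient of the cost w.r.t. w at the free equilibrium, for comparison.
true_grad = (s_free - y) * x
print(ep_grad, true_grad)
```

As beta shrinks, the EP estimate converges to the true gradient, which is the sense in which EP is "compatible with gradient-descent optimization" while only ever using equilibrium states.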

  •  24/01/2024 08:44 PM
  •   Regent Court, Sheffield City Centre, Sheffield, UK

Uncertainty-modulated prediction errors in cortical microcircuits. To make contextually appropriate predictions in a stochastic environment, the brain needs to take uncertainty into account. Prediction error neurons have been identified in layer 2/3 of diverse brain areas. How uncertainty modulates prediction error activity, and hence learning, is, however, unclear. Here, we use a normative approach to derive how prediction errors should be modulated by uncertainty and postulate that such uncertainty-weighted prediction errors (UPE) are represented by layer 2/3 pyramidal neurons. We further hypothesise that the layer 2/3 circuit calculates the UPE through subtractive and divisive inhibition by different inhibitory cell types. We ascribe different roles to somatostatin-positive (SST) and parvalbumin-positive (PV) interneurons. By implementing the calculation of UPEs in a microcircuit model, we show that different cell types in cortical circuits can compute means, variances and UPEs with local activity-dependent plasticity rules. Finally, we show that the resulting UPEs enable adaptive learning rates.
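The normative computation sketched in the abstract can be written down compactly. The sketch below assumes the uncertainty-weighted error takes the form UPE = (stimulus - mean) / variance, with the mean supplying the subtractive term and the variance the divisive term; the update rules and learning rate here are illustrative stand-ins, not the paper's microcircuit model.

```python
import numpy as np

# Running estimates of stimulus mean and variance with simple local,
# activity-dependent updates (illustrative, not the talk's plasticity rules).
rng = np.random.default_rng(0)
eta = 0.01            # plasticity rate
mu, var = 0.0, 1.0    # initial estimates

for s in rng.normal(5.0, 2.0, size=50_000):   # stochastic stimuli
    delta = s - mu
    mu += eta * delta                 # mean estimate
    var += eta * (delta**2 - var)     # variance estimate

# Subtractive inhibition supplies -mu; divisive inhibition scales by var.
# Positive and negative errors would be carried by separate (rectified)
# populations, so both branches are rectified here.
s = 9.0
upe_pos = max(s - mu, 0.0) / var
upe_neg = max(mu - s, 0.0) / var
print(mu, var, upe_pos, upe_neg)
```

For stimuli drawn from a distribution with mean 5 and variance 4, the estimates settle near those values, and a stimulus of 9 yields a positive UPE of roughly (9 - 5) / 4: the same raw error produces a smaller modulated error when the environment is more uncertain, which is what enables the adaptive learning rates mentioned above.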

  •  19/10/2023 02:00 PM
  •   Regent Court, Sheffield City Centre, Sheffield, UK

Despite Alan Turing’s fear in 1951 that ‘once the machine thinking method has started at some stage we will have to expect the machine will take control’, or Norbert Wiener’s prediction in the 1960s, ‘we know that for a long time everything we do will be nothing more than the jumping off point for those who have the advantage of already being aware of our ultimate results’, strong “Artificial Intelligence” has only recently triggered a polarised debate in several domains of science, together with the inevitable seduction of the general public’s imagination. In particular, the algorithmic dance on oligopolistic social media and e-commerce platforms has been at the forefront of “hacking democracy” assertions: responsible for fake news and disinformation architectures; symptomatic populism, radicalism and violent extremism; reproducing gender, race, class and other bias in employment, health and education; as well as digital labour and gig-economy problematics on the future of work and intensification, hence giving birth to new concerns of data justice, tech-giant whistle-blowing, digital rights, data inequality and the environmental impact of computation, and even resistance movements to any Intelligent Machines whatsoever. Cutting through this hyperbolic, yet partially justified fog, my talk will engage with the problem of loss and seduction: learning from machines comes with a sense of loss, not because of losing the uniqueness of being human, but because internal human temporalisation accelerates in sync with machines in ways humanity cannot yet understand. Moreover, we still understand little about how babies and children learn, and we have made little progress on how we experience human consciousness.
To evidence this argument, my talk will rely on empirical snapshots to argue for a de-translation and re-translation of what we might think we are learning from machines, and what we might want to be intervening and correcting in the process of machines learning, by ‘becoming more certain of the purpose with which we desire’ (Wiener 1961), in order to invent an AI future that includes everyone.

  •  28/09/2023 11:00 PM
  •   Regent Court, Sheffield City Centre, Sheffield, UK

The talk will address the problems of fine-grained video understanding and generation, which present new challenges compared to conventional scenarios and offer wide-ranging applications in sports, cooking, entertainment, and beyond. The presentation will commence with an overview of our work on instructional video analysis, including the COIN dataset and a novel condensed action space learning method for procedure planning in instructional videos. Next, we will introduce an uncertainty-aware score distribution learning method and a group-aware attention method for assessing action quality. Lastly, we will discuss how we leverage multimodal information (such as language and music) to enhance the performance of referring segmentation and dance generation.

  •  11/07/2023 10:00 AM
  •   Regent Court, Sheffield City Centre, Sheffield, UK

Fully supervised Deep Learning methods are very effective in providing high performance. However, this approach has two major drawbacks. Firstly, fully supervised methods require tedious expert annotations. Secondly, an AI trained on expert labels cannot surpass the standard of the expert, and AI performance therefore degrades as the labels become noisy. We propose an indirect way of training our AI models using a weakly supervised, multi-instance, multi-task learning paradigm that avoids detailed annotations. With this indirect way of training, we demonstrate that our learning paradigm can elucidate instance-level features very well. This work also provides mathematical guarantees on some properties of our weakly supervised method.

  •  12/06/2023 11:00 AM
  •   Regent Court, Sheffield City Centre, Sheffield, UK

The brain is one of the most energy-intensive organs. Some of this energy is used for neural information processing; however, fruit-fly experiments have shown that learning itself is also metabolically costly. First, we will present estimates of this cost, introduce a general model of it, and compare it to costs in computers. Next, we turn to a supervised artificial network setting and explore a number of strategies that can reduce the energy needed for plasticity, either by modifying the cost function, by restricting plasticity, or by using less costly transient forms of plasticity. Finally, we will discuss adaptive strategies and possible relevance for computer hardware.

  •  10/03/2023 03:00 PM
  • Online Event

Prof. Claudia Clopath's talk at https://meet.google.com/zns-usxs-gdy. Animals use afferent feedback to rapidly correct ongoing movements in the presence of a perturbation. Repeated exposure to a predictable perturbation leads to behavioural adaptation that counteracts its effects. Primary motor cortex (M1) is intimately involved in both processes, integrating inputs from various sensorimotor brain regions to update the motor output. Here, we investigate whether feedback-based motor control and motor adaptation may share a common implementation in M1 circuits. We trained a recurrent neural network to control its own output through an error feedback signal, which allowed it to recover rapidly from external perturbations. Implementing a biologically plausible plasticity rule based on this same feedback signal also enabled the network to learn to counteract persistent perturbations through a trial-by-trial process, in a manner that reproduced several key aspects of human adaptation. Moreover, the resultant network activity changes were also present in neural population recordings from monkey M1. Online movement correction and longer-term motor adaptation may thus share a common implementation in neural circuits.

  •  23/02/2023 01:00 PM
  • Online Event

Dr. Guillaume Bellec's talk at https://meet.google.com/dde-bczh-jfe. Babies and humans appear to learn from very little data in comparison with machines. Towards a functional model of representation learning in visual cortices, we derive a brain plasticity model using self-supervised learning theory from AI. The resulting learning theory models learning in babies on at least two levels: at the cognitive level, self-supervision signals come from spontaneous gaze changes (saccades), and at the mechanistic level, all the terms required to compute the parameter gradients can be mapped with data-grounded mechanisms (the learning rule is layer-wise local and only requires pre-post activity, top-down dendritic input or global neuromodulator-like factor). Despite this realism, we demonstrate that this algorithm builds useful hierarchical representations in visual cortices by testing it on machine learning benchmarks. Going further, we will sketch how to extend this theory in other sensory pathways with time-varying inputs (like the auditory pathway) towards a general theory of brain plasticity inspired by AI theory.

  •  22/07/2022 04:00 PM
  •   Regent Court, Sheffield City Centre, Sheffield, UK

Neuromorphic processors comprise hybrid analog/digital circuits that implement hardware models of biological systems, using computational principles analogous to those used by nervous systems. These neuromorphic devices exhibit very slow, biologically plausible time constants that are well matched to the signals they are designed to process, such that they are inherently synchronized with the real-world signals they sense and act on. This leads to the advantage of ultra-low-power processing of natural sensory signals, which is particularly important in biomedical and prosthetic applications. In addition, neuromorphic technology offers the possibility of processing data directly on the sensor side, at the "edge", in real time, making these devices ideal for wearable solutions. In this presentation, a general concept of neuromorphic engineering is introduced, together with some practical use cases.
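The "biologically plausible time constants" mentioned above are of the order of tens of milliseconds. A minimal sketch, assuming a plain leaky integrator with a 20 ms time constant (a generic stand-in, not any specific neuromorphic circuit), shows how such dynamics smoothly track an input on behavioural timescales:

```python
# Leaky integrator with a biologically plausible time constant
# (illustrative sketch; parameter values are arbitrary choices).
tau = 0.020   # 20 ms time constant
dt = 0.0001   # 0.1 ms Euler step
I = 1.0       # constant input drive
v = 0.0       # state (e.g. a membrane-like voltage)

for _ in range(int(0.2 / dt)):    # simulate 200 ms, i.e. 10 time constants
    v += dt * (-v + I) / tau      # dv/dt = (-v + I) / tau

print(v)  # relaxes toward the steady state I
```

Because the circuit's own relaxation happens at the same timescale as natural sensory signals, no fast clock or buffering is needed, which is where the ultra-low-power argument comes from.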

  •  30/11/2021 05:00 PM
  • Online Event

Prof. Ivan Tyukin's long-term research interests revolve around the theories, methods, and algorithms underpinning the creation and understanding of machine intelligence, adaptation, and learning, as well as around revealing their fundamental limitations.

  •  19/08/2021 05:00 PM
  •   2 Leavygreave Rd, Broomhall, Sheffield, UK

Modern machine learning methods have driven significant advances in artificial intelligence, with notable examples coming from Deep Learning, enabling super-human performance in the game of Go and highly accurate prediction of protein folding (e.g. AlphaFold). In this talk we look at deep learning from the perspective of Gaussian processes. Deep Gaussian processes extend the notion of deep learning to propagate uncertainty alongside function values. We’ll explain why this is important and show some simple examples.
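The uncertainty that deep Gaussian processes propagate through their layers already appears in a single-layer GP. The sketch below is a minimal GP regression example with an RBF kernel and the standard predictive equations; the data and hyperparameters are arbitrary choices for illustration, not from the talk.

```python
import numpy as np

# Minimal single-layer GP regression sketch (RBF kernel).
def rbf(a, b, lengthscale=1.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # training inputs
y = np.sin(X)                                # training targets
noise = 1e-4                                 # observation noise variance

K = rbf(X, X) + noise * np.eye(len(X))
Xs = np.array([0.0, 10.0])                   # one test point near the data, one far away
Ks = rbf(X, Xs)
Kss = rbf(Xs, Xs)

# Standard GP predictive mean and covariance
mean = Ks.T @ np.linalg.solve(K, y)
cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
var = np.diag(cov)
print(mean, var)
```

The predictive variance is tiny at the training point and reverts to the prior variance far from the data. A deep GP feeds such a predictive distribution, uncertainty included, into the next layer instead of a single function value, which is the extension the talk describes.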

Paper Title (link)
Suggested by
Time
Place
Notes
Neural SDEs as Infinite-Dimensional GANs (link)
Luca Manneschi
12 noon, Thursday, 5th October, 2023 (rescheduled from 28th September).
Ada Lovelace Room,
Regent Court

BayesFlow: Learning complex stochastic models with invertible neural networks (link)
Mike Smith
12 noon, Thursday, 2nd November, 2023.
Ada Lovelace Room,
Regent Court

Causal Dynamics Learning for Task-Independent State Abstraction (link)
Chao Han
12 noon, Thursday,
16th November, 2023.
Ada Lovelace Room,
Regent Court

Deep physical neural networks trained with backpropagation (link)
Ian Vidamour
12 noon, Thursday,
30th November, 2023.
Ada Lovelace Room,
Regent Court

- cancelled -
- cancelled -
12 noon, Thursday
11th January, 2024.
- cancelled -

Variational inference for infinitely deep neural networks (link)
Chris Noroozi
12 noon, Thursday
25th January, 2024.
Ada Lovelace Room,
Regent Court

Global response sensitivity analysis using probability distance measures and generalization of Sobol's analysis (link)
Mariya Mamajiwala
12 noon, Thursday,
8th February, 2024.
Ada Lovelace Room,
Regent Court

Deep Gaussian process emulation using stochastic imputation (link)
Mike Smith
12 noon, Thursday, 18th April, 2024 (rescheduled from 7th March and 21st March).
Ada Lovelace Room,
Regent Court


Deep Neural Networks as Gaussian Processes (link)
Mike Smith
12 noon, Thursday 13th June, 2024.
Ada Lovelace Room,
Regent Court

Neural Processes (link)
Richard Wilkinson
12 noon, Thursday 27th June, 2024.
Ada Lovelace Room,
Regent Court

- summer break -

Stochastic physics-informed neural ordinary differential equations (link)
Mariya Mamajiwala
12 noon, Thursday, 3rd October, 2024.
Ada Lovelace Room,
Regent Court

Attention is All you Need (link)
Luca Manneschi
12 noon, Thursday 17th October, 2024.
Ada Lovelace Room,
Regent Court

TBC
TBC
12 noon, 31st October, 2024.
Ada Lovelace Room,
Regent Court
