Artificial Intelligence
Biomimetics
21 Feb 2024

Biologically-inspired algorithms for continuous onboard learning

The recent advances in artificial intelligence (AI) have rekindled the debate about the types of algorithms that might describe learning in the brain. A variety of algorithms have been proposed in recent years, with, arguably, two types at the forefront: error-driven learning rules that attempt to demonstrate how gradient-based learning (similar to the error backpropagation algorithm in AI) could be realised in the brain [1-4], and Hebbian-based methods that optimise synaptic strengths without a signal representing the gradient of a performance-dependent cost function (i.e., an error) [5-7]. Both approaches share the ambition of deriving algorithms that obey the constraints found in biology, such as locality (neurons and synapses can only access information available at their local connections), the ability to operate in real time, and compatibility with spiking neural networks, while still achieving state-of-the-art performance on machine learning benchmark tasks.
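To make the distinction concrete, here is a minimal, illustrative sketch (not taken from the cited works) of a purely Hebbian, local update: Oja's rule. Each synapse changes its weight using only quantities available at that synapse, namely the presynaptic activity, the postsynaptic activity, and its own current weight, with no error or gradient signal anywhere in the loop.

```python
import numpy as np

# Minimal sketch of a local Hebbian learning rule (Oja's rule).
# Every update uses only information local to the synapse: presynaptic
# activity x_i, postsynaptic activity y, and the weight w_i itself.

rng = np.random.default_rng(0)

# Toy data stream whose variance is concentrated along a fixed direction.
direction = np.array([0.6, 0.8])
x_stream = rng.normal(size=(5000, 2)) * 0.1 + rng.normal(size=(5000, 1)) * direction

w = rng.normal(size=2)            # synaptic weights of a single linear neuron
eta = 0.01                        # learning rate

for x in x_stream:
    y = w @ x                     # postsynaptic activity
    w += eta * y * (x - y * w)    # Hebbian term y*x plus a local decay term

print("learned direction:", w / np.linalg.norm(w))
print("data direction:   ", direction)  # w converges to this (up to sign)
```

Despite never seeing an error signal, the weight vector converges (up to sign) to the direction of maximal variance in the data stream, i.e., the first principal component, showing how purely local updates can still extract useful structure.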

Owing to these properties, such learning rules are especially interesting for neuromorphic devices and have the potential to enable fully decentralised, continuously learning neural networks on highly energy-efficient hardware substrates. This is of particular interest for onboard applications that require not only energy-cheap inference but also continuous retraining of neural networks. Hence, neuromorphic devices equipped with such decentralised, biologically inspired learning algorithms might enable a future generation of autonomous spacecraft capable of (re-)learning on the fly.
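As a purely illustrative sketch of what continuous retraining from a data stream could look like (the scenario, drift model, and all names below are hypothetical and not part of the study), the following example uses the delta rule, an error-driven update that is local to each synapse of a single output layer, to track a slowly drifting target function sample by sample, with no stored dataset and no separate retraining phase:

```python
import numpy as np

# Hypothetical sketch of continuous, sample-by-sample retraining.
# A single-layer model tracks a slowly drifting target function using the
# delta rule: each weight is updated from its presynaptic input and a shared
# scalar error, with no stored data and no offline retraining phase.

rng = np.random.default_rng(1)
w = np.zeros(3)                       # onboard model weights
eta = 0.01                            # learning rate

def target(x, t):
    # Ground truth that drifts over time (e.g., simulated sensor drift).
    w_true = np.array([1.0, -2.0, 0.5]) + 1e-4 * t * np.array([1.0, 1.0, 0.0])
    return w_true @ x

for t in range(20001):
    x = rng.normal(size=3)            # new measurement arrives
    y = w @ x                         # onboard prediction
    err = target(x, t) - y            # scalar teaching signal
    w += eta * err * x                # delta rule: local per-synapse update

    if t % 5000 == 0:
        print(f"t={t:6d}  |error| = {abs(err):.3f}")
```

Because each update touches only the current sample, memory and compute per step stay constant, which is the property that makes this style of learning attractive for power-constrained onboard hardware.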


Project overview

So far, such novel, local learning algorithms have never been considered for application on board a spacecraft. This study aims to close this gap by transferring these methods to space-relevant applications and demonstrating their suitability and potential benefits for enabling continuous and power-efficient onboard learning.


References:

[1] Richards, Blake A., et al. "A deep learning framework for neuroscience." Nature Neuroscience 22.11 (2019): 1761-1770.

[2] Whittington, James C. R., and Rafal Bogacz. "Theories of error back-propagation in the brain." Trends in Cognitive Sciences 23.3 (2019): 235-250.

[3] Sacramento, João, et al. "Dendritic cortical microcircuits approximate the backpropagation algorithm." Advances in Neural Information Processing Systems 31 (2018).

[4] Payeur, Alexandre, et al. "Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits." Nature Neuroscience 24.7 (2021): 1010-1019.

[5] Illing, Bernd, Wulfram Gerstner, and Johanni Brea. "Biologically plausible deep learning—but how far can we go with shallow networks?" Neural Networks 118 (2019): 90-101.

[6] Illing, Bernd, et al. "Local plasticity rules can learn deep representations using self-supervised contrastive predictions." Advances in Neural Information Processing Systems 34 (2021): 30365-30379.

[7] Krotov, Dmitry, and John J. Hopfield. "Unsupervised learning by competing hidden units." Proceedings of the National Academy of Sciences 116.16 (2019): 7723-7731.
