A bio-inspired technique to mitigate catastrophic forgetting in binarized neural networks

Deep neural networks have achieved highly promising results on several tasks, including image and text classification. Nonetheless, many of these computational methods are prone to what is known as catastrophic forgetting: when they are trained on a new task, they rapidly forget how to complete tasks they learned in the past. Binarized neural networks, a highly simplified form of deep neural networks, the flagship method of modern artificial intelligence, can perform complex tasks with reduced memory requirements and energy consumption.


Neuroscience studies suggest that the ability of nerve cells to adapt to experiences is what ultimately allows the human brain to avoid ‘catastrophic forgetting’ and remember how to complete a given task even after tackling a new one. Most artificial intelligence (AI) agents, however, forget previously learned tasks very rapidly after learning new ones.

In both binarized neural networks and the Fusi model of metaplasticity, the strength of a synapse can take only two values, yet the training process involves a ‘hidden’ parameter. This is how the researchers got the idea that binarized neural networks could provide a way to alleviate catastrophic forgetting in AI.
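This shared ingredient is easy to picture in code. The following minimal NumPy sketch (our illustration, not the authors' implementation) assumes a simple sign binarization and uses a random stand-in gradient: the weight used by the network is only ever +1 or -1, while training updates the hidden real-valued parameter behind it.

```python
import numpy as np

rng = np.random.default_rng(0)
w_hidden = rng.normal(size=5)   # hidden real-valued parameters, updated by training
w_binary = np.sign(w_hidden)    # weights the network actually uses: only +1 or -1

# An ordinary (non-metaplastic) update moves the hidden parameter;
# the binary weight flips only when the hidden value crosses zero.
grad = rng.normal(size=5)       # stand-in gradient, for illustration only
lr = 0.1
w_hidden -= lr * grad
w_binary = np.sign(w_hidden)
```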

“The most notable findings of our study are, firstly, that the new consolidation mechanism we introduced effectively reduces forgetting, and it does so based only on the local internal state of the synapse, without the need to change the metric optimized by the network between tasks, in contrast with other approaches in the literature,” Axel Laborieux, a first-year Ph.D. student who carried out the study, told TechXplore. “This feature is especially appealing for the design of low-power hardware, since one must avoid the overhead of data movement and computation.”
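To make the locality of this consolidation concrete, here is a hedged NumPy sketch of the flavor of rule described: updates that would push a hidden weight toward zero, and so risk erasing the binary weight, are attenuated more strongly the larger the hidden weight's magnitude has grown. The attenuation function 1 − tanh²(m·|w_h|) and the constants lr and m reflect our reading of the paper's description and should be treated as assumptions, not the authors' exact code. Note that everything depends only on each synapse's own internal state.

```python
import numpy as np

def metaplastic_update(w_hidden, grad, lr=0.1, m=1.5):
    """Sketch of a local, metaplasticity-inspired consolidation rule."""
    step = -lr * grad
    # True where the step shrinks |w_hidden|, i.e. threatens a sign flip.
    weakening = np.sign(step) != np.sign(w_hidden)
    # Attenuation factor in (0, 1]; decays as |w_hidden| grows,
    # so long-consolidated weights become harder to erase.
    f_meta = 1.0 - np.tanh(m * np.abs(w_hidden)) ** 2
    step = np.where(weakening, f_meta * step, step)
    return w_hidden + step

# Usage: hidden weights drift freely when strengthened, but resist erasure.
rng = np.random.default_rng(1)
w = metaplastic_update(rng.normal(size=4), rng.normal(size=4))
```

Because the rule never touches the loss function itself, the network can keep optimizing the same objective from task to task, which is the property Laborieux highlights for low-power hardware.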

The findings gathered by this team of researchers could have important implications for the development of AI agents and deep neural networks. The consolidation mechanism introduced in the recent paper could help to mitigate catastrophic forgetting in binarized neural networks, enabling the development of AI agents that perform well across a variety of tasks. Overall, the study by Querlioz, Laborieux and their colleagues Maxence Ernoult and Tifenn Hirtzlin also highlights the value of drawing inspiration from neuroscience theory when trying to develop better-performing AI agents.

Axel Laborieux et al, Synaptic metaplasticity in binarized neural networks, Nature Communications (2021). DOI: 10.1038/s41467-021-22768-y. www.nature.com/articles/s41467-021-22768-y
