Follow Mint Lounge


Could artificial neural networks combat learning disabilities?

Artificial neural networks (ANNs) could help scientists understand how the human brain retains information, says a study

The research could develop training strategies for people with memory problems from aging or those with brain damage (Photo by Milad Fakurian, Unsplash)


A discovery about how algorithms can retain information more efficiently offers potential insight into the brain's ability to absorb new knowledge. The findings could aid in combating cognitive impairments and improving technology.

The scientists focused on artificial neural networks, known as ANNs, which are algorithms designed to emulate the behaviour of brain neurons. Like human minds, ANNs can absorb and classify vast quantities of information. Unlike our brains, however, ANNs tend to forget what they already know when fresh knowledge is introduced too fast, a phenomenon known as catastrophic forgetting.
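Catastrophic forgetting can be seen even in a toy model. The following Python sketch is a hypothetical illustration, not the study's actual networks: a one-parameter model is fit to one task, then retrained on a conflicting task with no rehearsal of the old data, and its error on the first task balloons.

```python
import numpy as np

def train(w, xs, ys, lr=0.1, epochs=100):
    """Per-sample gradient descent on squared error for the model y = w * x."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w -= lr * (w * x - y) * x  # step against the squared-error gradient
    return w

xs = np.array([1.0, 2.0, 3.0])
task_a = 2.0 * xs    # task A: y = 2x
task_b = -1.0 * xs   # task B: y = -x (conflicts with task A)

w = train(0.0, xs, task_a)
err_a_before = np.mean((w * xs - task_a) ** 2)  # near zero: task A learned

w = train(w, xs, task_b)                        # new task, no rehearsal
err_a_after = np.mean((w * xs - task_a) ** 2)   # large: task A forgotten
```

Because the second round of training overwrites the single weight entirely, the model ends up fitting task B perfectly while its task A error grows by orders of magnitude, a one-parameter caricature of what happens across millions of weights in a real ANN.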


Researchers have long theorized that our ability to learn new concepts stems from the interplay between the brain's hippocampus and the neocortex. The hippocampus captures fresh information and replays it during rest and sleep. The neocortex grabs the new material and reviews its existing knowledge so it can interleave, or layer, the fresh material into similar categories developed from the past.

However, there has been some question about this process, given the excessive amount of time it would take the brain to sort through the whole trove of information it has gathered during a lifetime. This pitfall could explain why ANNs lose long-term knowledge when absorbing new data too quickly.

Traditionally, the solution used in deep machine learning has been to retrain the network on the entire set of past data, whether or not it was closely related to the new information, a very time-consuming process. The scientists at the University of California, Irvine (UCI) decided to examine the issue in greater depth and made a notable discovery.

"We found that when ANNs interleaved a much smaller subset of old information, including mainly items that were similar to the new knowledge they were acquiring, they learned it without forgetting what they already knew," said graduate student Rajat Saxena, the paper's first author. Saxena spearheaded the project with assistance from Justin Shobe, an assistant project scientist. Both are members of the laboratory of Bruce McNaughton, Distinguished Professor of neurobiology & behavior.


"It allowed ANNs to take in fresh information very efficiently, without having to review everything they had previously acquired," Saxena said. "These findings suggest a brain mechanism for why experts at something can learn new things in that area much faster than non-experts. If the brain already has a cognitive framework related to the new information, the new material can be absorbed more quickly because changes are only needed in the part of the brain's network that encodes the expert knowledge."
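The rehearsal strategy the researchers describe, interleaving only those old items that resemble the incoming material, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the cosine-similarity selection rule, the function names, and the item-vector representation are all assumptions for the sketch.

```python
import numpy as np

def similar_subset(old_items, new_items, k):
    """Pick the k stored items most similar (by cosine) to the new batch's mean.

    old_items, new_items: 2-D arrays of item feature vectors, one row per item.
    """
    target = new_items.mean(axis=0)
    target = target / np.linalg.norm(target)          # unit vector for the new batch
    sims = old_items @ target / np.linalg.norm(old_items, axis=1)
    top = np.argsort(sims)[::-1][:k]                  # indices of the k best matches
    return old_items[top]

def interleave(old_subset, new_items):
    """Alternate rehearsed old items with new items into one training stream."""
    stream = []
    for old, new in zip(old_subset, new_items):
        stream.extend([old, new])
    return np.array(stream)
```

Training on `interleave(similar_subset(old, new, k), new)` rather than on the full archive is the intuition behind the finding: the network revisits only the slice of its history that overlaps with the incoming material, instead of replaying everything it has ever seen.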

The discovery holds potential for tackling cognitive issues, according to McNaughton. "Understanding the mechanisms behind learning is essential for making progress," he said. "It gives us insights into what's going on when brains don't work the way they are supposed to. We could develop training strategies for people with memory problems from aging or those with brain damage. It could also lead to the ability to manipulate brain circuits so people can overcome these deficits."

The findings also offer possibilities for making the algorithms in machines such as medical diagnostic equipment and autonomous cars more precise and efficient.
