Computer models that mimic the structure and performance of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.

In the largest study yet of deep neural networks trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of the representations seen in the human brain when people hear the same sounds.

The study also offers insight into how best to train this type of model: The researchers found that models trained on auditory input that includes background noise more closely mimic the activation patterns of the human auditory cortex.

“What makes this study special is that it is the most comprehensive comparison of these kinds of models with the auditory system to date. The study suggests that models derived from machine learning are a step in the right direction, and it gives us some clues about what tends to make them better models of the brain,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.

MIT graduate student Greta Tuckute and Jenelle Feather PhD ’22 are the lead authors of the open-access paper, appearing today in PLOS Biology.

Models of hearing

Deep neural networks are computational models consisting of many layers of information-processing units that can be trained on huge amounts of data to perform specific tasks. These kinds of models are widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.
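As a rough illustration of that layered structure (this is not the architecture used in the study, and every name and dimension below is chosen arbitrarily for the sketch), a small deep network for an auditory classification task could look something like this:

```python
# A minimal, hypothetical sketch of a deep neural network for an auditory task,
# e.g. classifying a short sound clip into one of several word categories.
# Purely illustrative; not the models trained in the MIT study.
import torch
import torch.nn as nn

num_classes = 10  # hypothetical number of word categories

model = nn.Sequential(
    # Input: a batch of single-channel spectrograms, shape (batch, 1, freq, time)
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((4, 4)),        # collapse to a fixed spatial size
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, num_classes),  # one output score per category
)

# Forward pass on a random "spectrogram" just to show the shapes work out.
dummy_input = torch.randn(8, 1, 128, 200)  # (batch, channel, freq bins, time frames)
logits = model(dummy_input)
print(logits.shape)  # torch.Size([8, 10])
```

Each layer in such a stack transforms the output of the one before it, which is what gives the model its sequence of internal representations.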

“These models built with machine learning are able to mediate behaviors on a scale that really wasn’t possible with previous types of models, and that has led to interest in whether the representations in the models might capture things that are happening in the brain,” says Tuckute.

When a neural network performs a task, its processing units generate activation patterns in response to each audio input it receives, such as a word or other type of sound. These model representations of the inputs can be compared with the activation patterns seen in fMRI brain scans of people hearing the same inputs.
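Studies of this kind often quantify the similarity by fitting a regularized linear mapping from a model layer’s activations to each voxel’s measured response, then scoring how well the mapping predicts responses to held-out sounds. Here is a minimal sketch of that idea, with random arrays standing in for real data; it is not the study’s exact analysis pipeline.

```python
# Hypothetical sketch: ridge-regress fMRI voxel responses onto model
# activations and evaluate the prediction on held-out sounds.
import numpy as np

rng = np.random.default_rng(0)
n_sounds, n_units, n_voxels = 165, 512, 1000

X = rng.standard_normal((n_sounds, n_units))   # model activations per sound
Y = rng.standard_normal((n_sounds, n_voxels))  # fMRI responses per sound and voxel

train = np.arange(0, 120)         # sounds used to fit the mapping
test = np.arange(120, n_sounds)   # held-out sounds used to evaluate it

lam = 10.0  # ridge penalty (normally chosen by cross-validation)
XtX = X[train].T @ X[train] + lam * np.eye(n_units)
W = np.linalg.solve(XtX, X[train].T @ Y[train])  # ridge regression weights

Y_pred = X[test] @ W

# Pearson correlation between predicted and measured responses, per voxel.
yc = Y[test] - Y[test].mean(axis=0)
pc = Y_pred - Y_pred.mean(axis=0)
r = (yc * pc).sum(axis=0) / (
    np.sqrt((yc ** 2).sum(axis=0)) * np.sqrt((pc ** 2).sum(axis=0))
)
print("median prediction correlation across voxels:", np.median(r))
```

In analyses like this, the model layers whose activations best predict the held-out voxel responses are typically treated as the most brain-like representation of that brain region.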

In 2018, McDermott and then-graduate student Alexander Kell reported that when they trained a neural network to perform auditory tasks (such as recognizing words from an audio signal), the internal representations generated by the model showed similarities to those seen in fMRI scans of people listening to the same sounds.

Since then, these types of models have become widely used, so McDermott’s research group set out to evaluate a larger set of models to find out whether the ability to approximate the neural representations seen in the human brain is a general feature of these models.

For this study, the researchers analyzed nine publicly available deep neural network models trained to perform auditory tasks, and they also created 14 models of their own, based on two different architectures. Most of these models were trained to perform a single task (recognizing words, identifying the speaker, recognizing environmental sounds, or identifying musical genre), while two of them were trained to perform multiple tasks.

When the researchers presented these models with natural sounds that had been used as stimuli in human fMRI experiments, they found that the internal model representations tended to resemble those produced by the human brain. The models whose representations were most similar to those seen in the brain were models that had been trained on more than one task and had been trained on auditory input that included background noise.

“If you train models in noise, they give better brain predictions than if you don’t, which intuitively makes sense because a lot of real-world hearing involves hearing in noise, and that’s plausibly something the auditory system is adapted to,” says Feather.

Hierarchical processing

The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of a model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.

Additionally, the researchers found that models trained on different tasks were better at replicating different aspects of audition. For example, models trained on a language-related task more closely resembled the brain’s language-selective areas.

“Even when a model has seen the exact same training data and the architecture is the same, when you optimize it for one particular task, you can see that it selectively explains specific tuning properties in the brain,” says Tuckute.

McDermott’s lab now plans to use these findings to develop models that are even more successful at reproducing the human brain’s responses. Such models would not only help scientists learn more about how the brain is organized, but could also be used to develop better hearing aids, cochlear implants, and brain-machine interfaces.

“A goal of our field is to develop a computer model that can predict brain responses and behavior. We think that if we can achieve that goal, it will open many doors,” says McDermott.

The research was funded by the National Institutes of Health, an Amazon Science Hub Fellowship, an American Association of University Women International Doctoral Fellowship, an MIT Friends of the McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.

This article was originally published at news.mit.edu