A new brain-inspired algorithm developed by US researchers could help hearing aids tune out interference and isolate single talkers in a crowd of voices.
In testing, researchers found it could improve word recognition accuracy by 40 percentage points relative to current hearing aid algorithms.
Kamal Sen is the algorithm’s developer and a Boston University (BU) College of Engineering associate professor of biomedical engineering.
Sen said: “We were extremely surprised and excited by the magnitude of the improvement in performance—it’s pretty rare to find such big improvements.”
By 2050, around 2.5 billion people globally are expected to have some form of hearing loss, according to the World Health Organization.
Virginia Best is a BU Sargent College of Health & Rehabilitation Sciences research associate professor of speech, language, and hearing sciences.
Best said: “The primary complaint of people with hearing loss is that they have trouble communicating in noisy environments.
“These environments are very common in daily life and they tend to be really important to people—think about dinner table conversations, social gatherings, workplace meetings.
“So, solutions that can enhance communication in noisy places have the potential for a huge impact.”
As part of the research, the team also tested the ability of current hearing aid algorithms to cope with the cacophony of cocktail parties.
Many hearing aids already include noise reduction algorithms and directional microphones, or beamformers, designed to emphasise sounds coming from the front.
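For readers unfamiliar with the technique, a delay-and-sum beamformer simply time-aligns the microphone channels for a chosen look direction and averages them, so sound from that direction adds up coherently while off-axis sound partially cancels. The following Python sketch is a generic illustration of that idea; the microphone setup, delays, and signals are made-up toy values, not any manufacturer’s implementation:

```python
import numpy as np

def delay_and_sum(mic_signals, steering_delays):
    """Generic delay-and-sum beamformer.

    mic_signals: array of shape (n_mics, n_samples)
    steering_delays: per-channel delay in samples for the look direction
    """
    n_mics = mic_signals.shape[0]
    aligned = np.zeros_like(mic_signals[0], dtype=float)
    for channel, delay in zip(mic_signals, steering_delays):
        aligned += np.roll(channel, -delay)  # advance channel to line it up
    return aligned / n_mics  # coherent average favours the look direction

# Toy demo: a frontal 440 Hz target reaches both mics at the same time;
# an off-axis 950 Hz interferer reaches the second mic 8 samples later.
fs = 16_000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)
interferer = np.sin(2 * np.pi * 950 * t)
mics = np.stack([target + interferer,
                 target + np.roll(interferer, 8)])
output = delay_and_sum(mics, steering_delays=[0, 0])  # steer to the front
```

In this toy example the 8-sample lag puts the interferer roughly half a cycle out of phase at 950 Hz, so averaging the channels largely cancels it while the frontal target sums in phase.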
Sen said: “We decided to benchmark against the industry standard algorithm that’s currently in hearing aids.”
That existing algorithm, he found, “doesn’t improve performance at all; if anything, it makes it slightly worse.
“Now we have data showing what’s been known anecdotally from people with hearing aids.”
Sen has patented the new algorithm—known as BOSSA, which stands for biologically oriented sound segregation algorithm—and is hoping to connect with companies interested in licensing the technology.
He says that with Apple jumping into the hearing aid market (its latest AirPods Pro 2 earbuds are advertised as having a clinical-grade hearing aid function), the BU team’s breakthrough is timely.
Sen said: “If hearing aid companies don’t start innovating fast, they’re going to get wiped out, because Apple and other start-ups are entering the market.”
For the past 20 years, Sen has been studying how the brain encodes and decodes sounds, looking for the circuits involved in managing the cocktail party effect.
With researchers in his Natural Sounds & Neural Coding Laboratory, he’s plotted how sound waves are processed at different stages of the auditory pathway, tracking their journey from the ear to translation by the brain.
One key mechanism: inhibitory neurons, brain cells that help suppress unwanted sounds.
Sen said: “You can think of it as a form of internal noise cancellation.
“If there’s a sound at a particular location, these inhibitory neurons get activated.” According to Sen, different neurons are tuned to different locations and frequencies.
The brain’s approach is the inspiration for the new algorithm, which uses spatial cues like the volume and timing of a sound to tune into or tune out of it, sharpening or muffling a speaker’s words as needed.
Sen said: “It’s basically a computational model that mimics what the brain does, and actually segregates sound sources based on sound input.”
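The published model is built from neurally inspired spatial channels, and the details are in the team’s paper. As a loose, simplified illustration of the general principle (estimate an interaural cue per time-frequency bin, keep the bins consistent with the target direction, and suppress the rest, much like the inhibitory gating Sen describes), here is a minimal Python sketch. The level-difference-only cue, the 3 dB threshold, and the STFT settings are illustrative assumptions, not the BOSSA design:

```python
import numpy as np
from scipy.signal import stft, istft

def spatial_mask(left, right, fs, max_ild_db=3.0):
    """Crude spatial segregation by interaural level difference (ILD).

    Keeps time-frequency bins whose ILD says 'roughly in front' and
    suppresses the rest. A stand-in for the inhibitory gating the
    article describes, not the published BOSSA model.
    """
    _, _, L = stft(left, fs, nperseg=512)
    _, _, R = stft(right, fs, nperseg=512)
    # Frontal sources reach both ears at similar levels, so ILD is near 0 dB.
    ild = 20 * np.log10((np.abs(L) + 1e-9) / (np.abs(R) + 1e-9))
    mask = (np.abs(ild) < max_ild_db).astype(float)  # 1 = keep, 0 = suppress
    _, out = istft(0.5 * (L + R) * mask, fs)
    return out
```

A real binaural front end would also use interaural timing cues, smoother masks, and calibrated thresholds; this sketch only shows the gating principle of keeping on-target bins and muting the rest.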
Best added: “Ultimately, the only way to know if a benefit will translate to the listener is via behavioural studies and that requires scientists and clinicians who understand the target population.”
Best helped design a study using a group of young adults with sensorineural hearing loss, typically caused by genetic factors or childhood diseases.
In a lab, participants wore headphones that simulated people talking from different nearby locations.
Their ability to pick out selected speakers was tested under three conditions: with the new algorithm, with the current standard algorithm, and with no algorithm at all.
Reporting their findings, the researchers wrote that the “biologically inspired algorithm led to robust intelligibility gains under conditions in which a standard beamforming approach failed.
“The results provide compelling support for the potential benefits of biologically inspired algorithms for assisting individuals with hearing loss in ‘cocktail party’ situations.”
They’re now in the early stages of testing an upgraded version that incorporates eye tracking technology to allow users to better direct their listening attention.
The science powering the algorithm might have implications beyond hearing loss too.
Sen said: “The [neural] circuits we are studying are much more general purpose and much more fundamental.
“It ultimately has to do with attention, where you want to focus—that’s what the circuit was really built for.
“In the long term, we’re hoping to take this to other populations, like people with ADHD or autism, who also really struggle when there’s multiple things happening.”