By Matthew Hutson
Somehow, even in a room full of hectic conversations, our brains can focus on a single voice, a phenomenon called the cocktail party effect. But it gets harder as we age. Now, researchers have figured out how to replicate the feat with a machine-learning technique called the "cone of silence."
Computer scientists trained neural networks, which roughly mimic the brain's wiring, to detect and isolate the voices of many people speaking in a room. Part of the network did this by measuring how long sounds took to reach each microphone in a cluster placed at the center of the room.
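The idea behind the arrival-time measurement is that a sound from one side of the room reaches the nearer microphone a fraction of a millisecond before the farther one, and that delay pins down the direction. The following Python sketch is a rough illustration of that geometry only, not the authors' actual network: it recovers a source's bearing from the lag between two hypothetical microphones using plain cross-correlation, with the microphone spacing and sample rate chosen as assumed example values.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, in air at room temperature
MIC_SPACING = 0.05      # meters between two mics (assumed example value)
SAMPLE_RATE = 16000     # Hz (assumed example value)

def estimate_angle(sig_a, sig_b):
    """Estimate a source's bearing (degrees from the array's broadside)
    from the arrival-time difference between two microphones."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # samples by which sig_a trails sig_b
    delay = lag / SAMPLE_RATE                  # seconds
    # Far-field geometry: delay = MIC_SPACING * sin(theta) / c
    sin_theta = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Simulate a short chirp arriving at mic B one sample later than at mic A,
# as if the source sat slightly off to mic A's side of the pair.
t = np.linspace(0, 0.02, 320)
source = np.sin(2 * np.pi * 440 * t * (1 + 20 * t))
mic_a = np.pad(source, (0, 1))
mic_b = np.pad(source, (1, 0))   # same signal, delayed by one sample
print(estimate_angle(mic_a, mic_b))  # negative angle: source on mic A's side
```

A real system uses more than two microphones and searches many candidate directions at once, but the same time-difference cue is what makes the localization possible.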
When the researchers tested their setup on recordings with loud background noise, they found that the cone of silence could pick out two simultaneous voices and localize their sources, they reported this month at the online Neural Information Processing Systems conference. The previous state-of-the-art technology managed an angular accuracy of only 11.5°. When the researchers trained their system on additional voices, it managed the same trick with up to eight voices, operating at roughly 7° accuracy, even though it had never heard more than four at once.
Such a system could one day be used in hearing aids, surveillance setups, speakerphones, or laptops. The new technology, which can also track moving sounds, could make your Zoom calls easier by isolating and muting background noise, from vacuum cleaners to rambunctious children.