What if the next advances in augmented reality came not from visual rendering but from more immersive sound? That is the bet Facebook is making with its Facebook Reality Labs Research team, led by Michael Abrash, which is responsible for developing the company's augmented reality glasses. The team shared its audio progress with the US press and detailed the work in a blog post.
Its mission is twofold: "Create virtual sounds indistinguishable from reality and redefine human hearing". Nothing less. The team is focusing on two areas. The first is audio presence: during an online video chat, for example, the other person's voice could reach such quality that you would feel they were standing next to you.
An AI capable of sorting sounds
The second and more ambitious area is what Facebook calls "perceptual superpowers": the system's ability to reduce unwanted background noise while boosting the volume of the audio source you want to hear. This requires an artificial intelligence capable of isolating a single person's speech amid the hubbub of a cafeteria. The recording below gives an idea of what this would sound like:
This technology relies on several microphones built into the glasses. It also uses head and eye movements to determine which sound the wearer is targeting, a process that is completely transparent to the user and thus anticipates their wishes. In a restaurant, the glasses would be able to detect and identify the hum of the air conditioning, the clatter of dishes, or people chatting nearby. The system would then sort these sounds, suppressing some and highlighting the ones that interest you.
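To make the idea concrete, here is a toy sketch of the final remixing step described above. It assumes the hard parts (separating the cafeteria mix into individual sources, and inferring the target from head and eye movements) have already been done; the function names, gain values, and signals are illustrative, not Facebook's actual method.

```python
import numpy as np

def remix_sources(sources, target_index, boost=2.0, suppress=0.1):
    """Re-mix separated audio sources: amplify the gaze-selected
    target source and attenuate everything else."""
    gains = np.full(len(sources), suppress)
    gains[target_index] = boost
    return sum(g * s for g, s in zip(gains, sources))

# Toy one-second signals at 16 kHz standing in for separated sources.
rng = np.random.default_rng(0)
voice = rng.standard_normal(16000)   # the person you are looking at
dishes = rng.standard_normal(16000)  # clattering dishes
hvac = rng.standard_normal(16000)    # air-conditioning hum

# Suppose gaze tracking selected the voice (index 0) as the target.
mix = remix_sources([voice, dishes, hvac], target_index=0)
```

The per-source gains are where the "perceptual superpower" lives: the real system would have to compute them continuously as the wearer's attention shifts, rather than using fixed constants as in this sketch.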
For the moment, this technology runs on a bulky prototype, but it could eventually be incorporated into Oculus virtual reality headsets. The ultimate goal, of course, remains Facebook's long-promised augmented reality glasses.
An almost unsettling feat: in the future, glasses could choose for us what we get to hear.