Malek Itani – 91̽News

AI headphones let wearer listen to a single person in a crowd, by looking at them just once /news/2024/05/23/ai-headphones-noise-cancelling-target-speech-hearing/ Thu, 23 May 2024 16:36:42 +0000

Noise-canceling headphones have gotten very good at creating an auditory blank slate. But allowing certain sounds from a wearer’s environment through the erasure still challenges researchers. The latest edition of Apple’s AirPods Pro, for instance, automatically adjusts sound levels for wearers — sensing when they’re in conversation — but the user has little control over whom to listen to or when this happens.

A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to “enroll” them. The system, called “Target Speech Hearing,” then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time even as the listener moves around in noisy places and no longer faces the speaker.

The team presented its findings May 14 in Honolulu at the ACM CHI Conference on Human Factors in Computing Systems. The code for the proof-of-concept device is available for others to build on. The system is not commercially available.

“We tend to think of AI now as web-based chatbots that answer questions,” said senior author Shyam Gollakota, a 91̽professor in the Paul G. Allen School of Computer Science & Engineering. “But in this project, we develop AI to modify the auditory perception of anyone wearing headphones, given their preferences. With our devices you can now hear a single speaker clearly even if you are in a noisy environment with lots of other people talking.”

To use the system, a person wearing off-the-shelf headphones fitted with microphones taps a button while directing their head at someone talking. The sound waves from that speaker’s voice then should reach the microphones on both sides of the headset simultaneously; there’s a 16-degree margin of error. The headphones send that signal to an on-board embedded computer, where the team’s machine learning software learns the desired speaker’s vocal patterns. The system latches onto that speaker’s voice and continues to play it back to the listener, even as the pair moves around. The system’s ability to focus on the enrolled voice improves as the speaker keeps talking, giving the system more training data.
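
The “simultaneously” requirement is what the 16-degree margin refers to: a voice directly ahead reaches both sides of the headset with essentially no delay. As a minimal sketch only (not the team’s code; the 48 kHz sample rate, roughly 15 cm microphone spacing and plain cross-correlation are assumptions), a pre-enrollment direction check might look like this:

```python
# Hedged sketch of the "look to enroll" direction check described above.
# Assumed values: 48 kHz audio and ~15 cm between the left and right headset
# microphones; the published system's actual processing is learning-based.
import numpy as np

SAMPLE_RATE = 48_000      # Hz (assumption)
MIC_SPACING = 0.15        # meters between left/right mics (assumption)
SPEED_OF_SOUND = 343.0    # m/s
MAX_ANGLE_DEG = 16.0      # margin of error reported for enrollment

def max_lag_samples() -> int:
    """Largest left/right delay consistent with a source within ±16 degrees."""
    max_delay_s = MIC_SPACING * np.sin(np.radians(MAX_ANGLE_DEG)) / SPEED_OF_SOUND
    return int(np.ceil(max_delay_s * SAMPLE_RATE))

def speaker_is_in_front(left: np.ndarray, right: np.ndarray) -> bool:
    """True if the dominant voice arrives at both microphones nearly simultaneously."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # samples by which left leads right
    return abs(lag) <= max_lag_samples()
```

Once a snippet passes a check like this, the enrolled example is handed to the machine learning model, which learns the voice and then keeps extracting it from whatever mixture the microphones pick up.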

The team tested its system on 21 subjects, who rated the clarity of the enrolled speaker’s voice nearly twice as high as the unfiltered audio on average.

This work builds on the team’s previous “semantic hearing” research, which allowed users to select specific sound classes — such as birds or voices — that they wanted to hear and canceled other sounds in the environment.

Currently the TSH system can enroll only one speaker at a time, and it’s only able to enroll a speaker when there is not another loud voice coming from the same direction as the target speaker’s voice. If a user isn’t happy with the sound quality, they can run another enrollment on the speaker to improve the clarity.

The team is working to expand the system to earbuds and hearing aids in the future.

Additional co-authors on the paper were Bandhav Veluri, Malek Itani and Tuochao Chen, 91̽doctoral students in the Allen School, and Takuya Yoshioka, director of research at AssemblyAI. This research was funded in part by a Moore Inventor Fellow award.

For more information, contact tsh@cs.washington.edu.

91̽team’s shape-changing smart speaker lets users mute different areas of a room /news/2023/09/21/shape-changing-smart-speaker-ai-noise-canceling-alexa-robot/ Thu, 21 Sep 2023 15:19:43 +0000

Four people have separate conversations in a meeting room.
A team led by researchers at the University of Washington has developed a shape-changing smart speaker, which uses self-deploying microphones to divide rooms into speech zones and track the positions of individual speakers. Here 91̽doctoral students Tuochao Chen (foreground), Mengyi Shan, Malek Itani, and Bandhav Veluri — all in the Paul G. Allen School of Computer Science & Engineering — demonstrate the system in a meeting room. Photo: April Hong/University of Washington

In virtual meetings, it’s easy to keep people from talking over each other. Someone just hits mute. But for the most part, this ability doesn’t translate easily to recording in-person gatherings. In a bustling cafe, there are no buttons to silence the table beside you.

The ability to locate and control sound — isolating one person talking from a specific location in a crowded room, for instance — has challenged researchers, especially without visual cues from cameras.

A team led by researchers at the University of Washington has developed a shape-changing smart speaker, which uses self-deploying microphones to divide rooms into speech zones and track the positions of individual speakers. With the help of the team’s deep-learning algorithms, the system lets users mute certain areas or separate simultaneous conversations, even if two adjacent people have similar voices. Like a fleet of Roombas, each about an inch in diameter, the microphones automatically deploy from, and then return to, a charging station. This allows the system to be moved between environments and set up automatically. In a conference room meeting, for instance, such a system might be deployed instead of a central microphone, allowing better control of in-room audio.

The team published its findings Sept. 21 in Nature Communications.

“If I close my eyes and there are 10 people talking in a room, I have no idea who’s saying what and where they are in the room exactly. That’s extremely hard for the human brain to process. Until now, it’s also been difficult for technology,” said co-lead author Malek Itani, a 91̽doctoral student in the Paul G. Allen School of Computer Science & Engineering. “For the first time, using what we’re calling a robotic ‘acoustic swarm,’ we’re able to track the positions of multiple people talking in a room and separate their speech.”

Previous research on robot swarms has required using overhead or on-device cameras, projectors or special surfaces. The 91̽team’s system is the first to accurately distribute a robot swarm using only sound.

The team’s prototype consists of seven small robots that spread themselves across tables of various sizes. As they move from their charger, each robot emits a high-frequency sound, like a bat navigating, using this frequency and other sensors to avoid obstacles and move around without falling off the table. The automatic deployment allows the robots to place themselves for maximum accuracy, permitting greater sound control than if a person set them. The robots disperse as far from each other as possible since greater distances make differentiating and locating people speaking easier. Today’s consumer smart speakers have multiple microphones, but clustered on the same device, they’re too close to allow for this system’s mute and active zones.
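
To give a feel for the “spread out as far as possible” behavior, here is a toy greedy farthest-point placement for seven microphones on a rectangular table. It is only an illustration under assumed numbers (table size, grid step, a charger at one corner); the real robots plan their positions with acoustic ranging and obstacle sensing rather than this rule.

```python
# Toy illustration of dispersing microphones as far apart as possible:
# greedy farthest-point placement on a rectangular table. Table dimensions,
# grid resolution and the charger position are illustrative assumptions.
import math

def place_mics(table_w=1.2, table_h=0.8, n_mics=7, step=0.05):
    """Greedily pick grid positions that maximize distance to already-placed mics."""
    candidates = [(i * step, j * step)
                  for i in range(int(table_w / step) + 1)
                  for j in range(int(table_h / step) + 1)]
    placed = [(0.0, 0.0)]                      # first robot stops near the charger corner
    while len(placed) < n_mics:
        best = max(candidates,
                   key=lambda c: min(math.dist(c, p) for p in placed))
        placed.append(best)
    return placed

if __name__ == "__main__":
    for i, (x, y) in enumerate(place_mics()):
        print(f"mic {i}: ({x:.2f} m, {y:.2f} m)")
```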

A small robot sits on a table beside a coffee cup.
The tiny individual microphones are able to navigate around clutter and position themselves using only sound. Photo: April Hong/University of Washington

“If I have one microphone a foot away from me, and another microphone two feet away, my voice will arrive at the microphone that’s a foot away first. If someone else is closer to the microphone that’s two feet away, their voice will arrive there first,” said co-lead author Tuochao Chen, a 91̽doctoral student in the Allen School. “We developed neural networks that use these time-delayed signals to separate what each person is saying and track their positions in a space. So you can have four people having two conversations and isolate any of the four voices and locate each of the voices in a room.”
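
The arrival-time differences Chen describes can also be estimated with standard signal processing. As a rough, hedged illustration (not the team’s neural networks; the GCC-PHAT method and the sample rate here are assumptions), the delay between two microphones’ recordings of the same voice can be computed like this:

```python
# Illustrative estimate of the arrival-time difference described above, using
# GCC-PHAT, a standard technique. The published system instead feeds such
# time-delayed signals to neural networks; this sketch is not that model.
import numpy as np

def gcc_phat_delay(sig_a: np.ndarray, sig_b: np.ndarray, fs: int = 16_000) -> float:
    """Return the delay (seconds) of sig_b relative to sig_a via GCC-PHAT."""
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12              # phase transform: keep phase, drop magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift   # samples by which sig_b lags sig_a
    return shift / fs
```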

The team tested the robots in offices, living rooms and kitchens with groups of three to five people speaking. Across all these environments, the system could discern different voices within 1.6 feet (50 centimeters) of each other 90% of the time, without prior information about the number of speakers. The system was able to process three seconds of audio in 1.82 seconds on average — fast enough for live streaming, though a bit too long for real-time communications such as video calls.

As the technology progresses, researchers say, acoustic swarms might be deployed in smart homes to better differentiate people talking with smart speakers. That could potentially allow only people sitting on a couch, in an “active zone,” to vocally control a TV, for example.

The seven robotic microphones sit in their charging station.
To charge, the microphones automatically return to their charging station. Photo: April Hong/University of Washington

Researchers plan to eventually make microphone robots that can move around rooms, instead of being limited to tables. The team is also investigating whether the speakers can emit sounds that allow for real-world mute and active zones, so people in different parts of a room can hear different audio. The current study is another step toward science fiction technologies, such as the “cone of silence” in “Get Smart” and “Dune,” the authors write.


Of course, any technology that evokes comparison to fictional spy tools will raise questions of privacy. The researchers acknowledge the potential for misuse and have built in safeguards: The microphones navigate with sound, not an onboard camera like other similar systems. The robots are easily visible, and their lights blink when they’re active. Instead of processing the audio in the cloud, as most smart speakers do, the acoustic swarms process all the audio locally, as a privacy constraint. And even though some people’s first thoughts may be about surveillance, the system can be used for the opposite, the team says.

“It has the potential to actually benefit privacy, beyond what current smart speakers allow,” Itani said. “I can say, ‘Don’t record anything around my desk,’ and our system will create a bubble 3 feet around me. Nothing in this bubble would be recorded. Or if two groups are speaking beside each other and one group is having a private conversation, while the other group is recording, one conversation can be in a mute zone, and it will remain private.”
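
Conceptually, once the swarm has separated each voice and estimated where it came from, a mute zone is just a filter over those positions. The sketch below is a toy illustration under assumed data structures (the SpeechStream type and the 3-foot, roughly 0.91 m, radius are hypothetical), not the team’s implementation:

```python
# Conceptual sketch of a "mute bubble": drop any separated speech stream whose
# estimated source position falls within a protected radius. The data layout
# and the 3-foot (0.91 m) default radius are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class SpeechStream:
    audio: list[float]                 # separated samples for one talker
    position: tuple[float, float]      # estimated (x, y) position, meters

def apply_mute_zone(streams, center, radius_m=0.91):
    """Keep only streams whose estimated source lies outside the mute bubble."""
    return [s for s in streams if math.dist(s.position, center) > radius_m]
```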

Takuya Yoshioka, formerly a principal research manager at Microsoft, is a co-author on this paper, and Shyam Gollakota, a professor in the Allen School, is the senior author. The research was funded by a Moore Inventor Fellow award.

For more information, contact acousticswarm@cs.washington.edu.
