Look Once to Hear: Target Speech Hearing with Noisy Examples 🏆

Bandhav Veluri*, Malek Itani*, Tuochao Chen, Takuya Yoshioka, Shyam Gollakota

Paul G. Allen School of Computer Science & Engineering, University of Washington, USA
AssemblyAI, San Francisco, CA, USA

* Equal contribution

CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems

🏆 This work received an honorable mention.


Main Video

Demo 1

Demo 2

In crowded settings, the human brain can focus on speech from a target speaker, given prior knowledge of how they sound. We introduce a novel intelligent hearable system that achieves this capability, enabling target speech hearing: the wearer hears the target speaker while all interfering speech and noise are suppressed. A naïve approach would require a clean speech example to enroll the target speaker. However, this is not well aligned with the hearable application domain, since obtaining a clean example is challenging in real-world scenarios, creating a unique user-interface problem. We present the first enrollment interface in which the wearer looks at the target speaker for a few seconds to capture a single, short, highly noisy, binaural example of that speaker's voice. This noisy example is used for enrollment and for subsequent speech extraction in the presence of interfering speakers and noise. Our system achieves a signal quality improvement of 7.01 dB using less than 5 seconds of noisy enrollment audio and can process an 8 ms audio chunk in 6.24 ms on an embedded CPU. Our user studies demonstrate generalization to real-world static and mobile speakers in previously unseen indoor and outdoor multipath environments. Finally, our enrollment interface with noisy examples causes no performance degradation compared to clean examples, while being convenient and user-friendly. Taking a step back, this paper takes an important step toward enhancing human auditory perception with artificial intelligence.
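The real-time claim above can be stated as a real-time factor (RTF): per-chunk processing time divided by chunk duration, where RTF < 1 means the system keeps up with the incoming audio stream. A minimal sketch using only the figures quoted above (the variable names are illustrative, not from the paper's codebase):

```python
# Real-time factor (RTF) for streaming audio processing.
# Figures from the paper: an 8 ms audio chunk is processed
# in 6.24 ms on an embedded CPU. RTF < 1 means the model
# keeps pace with the live audio stream.

chunk_duration_ms = 8.0      # length of each audio chunk fed to the model
processing_time_ms = 6.24    # reported per-chunk processing time

rtf = processing_time_ms / chunk_duration_ms
print(f"Real-time factor: {rtf:.2f}")  # 0.78 -> runs faster than real time
```

Since each 8 ms chunk finishes with 1.76 ms to spare, the pipeline sustains continuous streaming rather than falling behind the input.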

[Paper] [Code]