
Technology Update Session

Session 3C
Multi-Modal Sensor Integration to Predict User Intent and Steer Advanced Hearing-Aid Features
Sébastien Santurette, Federica Bianchi, Kasper Eskelund, Valentina Zapata-Rodríguez, Raul Sanchez-Lopez, Pernille Aaby Gade, Thomas Behrens, Elaine Hoi Ning Ng
Centre for Applied Audiology Research, Oticon A/S, Smørum, Denmark

When fitting hearing aids (HAs), the user's hearing health information and the audiologist's assessment are combined to adjust advanced settings so that they meet the user's needs in various situations. Premium hearing aids then adapt their processing automatically based only on changes in the acoustic environment. Currently, for a sound scene of given acoustic complexity, users with the same advanced settings receive the same level of support in terms of the balance between speech and surrounding sounds. However, users' listening intentions can vary within the same sound scene. In a busy restaurant, a guest conversing with a friend and a waiter moving between tables face the same acoustic complexity, but their listening needs are different. The guest needs support to understand their friend, while the waiter needs enhanced awareness of the surrounding environment. Traditional HAs are not personalized to respond to these different listening intentions.

Recent research has suggested that head and body movements are key to understanding communication intent. When engaged in a conversation, we usually orient our head and body towards the person we're talking to. In complex situations, we may lean forward, move closer, or turn our head slightly to hear better. In group conversations, we typically move our heads more, switching between the people we're engaging with. When walking or running, awareness of our surroundings becomes important for safe movement. Multi-modal sensor integration (MMSI) in HAs can now leverage this knowledge to predict user intent and provide different levels of listening support based on the detection of head movements, body movements, and conversation activity, on top of acoustic complexity. Here, we introduce how MMSI can enhance deep-neural-network (DNN) based noise management systems and steer the balance between speech and ambient sounds from real-time estimations of user intent.

Technical and clinical studies compared the performance of a HA with MMSI to the latest available premium technology. Technical analyses of HA output showed that MMSI allowed a wide range of support adaptation within the same acoustic environment, depending on the predicted user intent. Electroencephalography recordings in HA users indicated that this support adaptation was mirrored in the brain: attention to ambient sounds varied significantly with listening intentions, while attention to speech always remained strong. Speech comprehension in a realistic audiovisual multi-talker scene was improved with the use of MMSI when users focused on a target talker after naturally orienting within the sound scene. Access to speech was preserved regardless of whether the target talker was in front of or beside the user. Finally, users reported that the combination of MMSI with DNN-based noise suppression significantly enhanced sound quality.

These findings illustrate the additional benefits provided by MMSI across situations. They also suggest that the traditional communication tactic of always looking directly at the conversation partner may be suboptimal, as slight head movements will not hinder understanding of speech from multiple conversation partners. Overall, MMSI can offer personalized support to users based on their listening intentions to help them engage more successfully in daily life.
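
To picture how such steering could work, the following minimal Python sketch is purely illustrative and not the manufacturer's implementation. It assumes hypothetical sensor features normalized to [0, 1] (head-turn rate, body motion, own-voice activity, and acoustic complexity) and shows how a simple intent estimate might scale the level of ambient sound relative to a DNN-enhanced speech estimate.

import numpy as np

def estimate_intent(head_turn_rate, body_motion, own_voice_activity):
    """Toy heuristic returning a focus score in [0, 1]:
    1 ~ focus on a conversation partner, 0 ~ monitor the surroundings.
    All inputs are assumed to be sensor features normalized to [0, 1]."""
    # Sustained walking/running motion is evidence for environmental awareness.
    awareness_evidence = body_motion
    # Own-voice activity and moderate head turns (switching between talkers)
    # are evidence for conversational engagement.
    conversation_evidence = 0.7 * own_voice_activity + 0.3 * min(head_turn_rate, 0.5) / 0.5
    return float(np.clip(0.5 + conversation_evidence - 0.6 * awareness_evidence, 0.0, 1.0))

def steer_balance(speech_estimate, ambient_estimate, focus, acoustic_complexity):
    """Mix a (DNN-enhanced) speech estimate with ambient sound.
    Higher focus in a more complex scene gives stronger attenuation of ambient
    sounds relative to speech; low focus keeps the surroundings audible."""
    ambient_gain = 1.0 - focus * acoustic_complexity
    return speech_estimate + ambient_gain * ambient_estimate

# Restaurant example: a conversing guest versus a waiter moving between tables.
guest_focus = estimate_intent(head_turn_rate=0.2, body_motion=0.1, own_voice_activity=0.9)
waiter_focus = estimate_intent(head_turn_rate=0.6, body_motion=0.9, own_voice_activity=0.1)
print(f"guest focus: {guest_focus:.2f}, waiter focus: {waiter_focus:.2f}")

In this toy version, the guest receives near-maximal attenuation of ambient sounds, while the waiter keeps most of the surroundings audible, mirroring the different listening needs described above.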


Sébastien Santurette is Principal Researcher at Oticon's Centre for Applied Audiology Research. He has held associate professorship positions in clinical audiology and hearing rehabilitation at the Technical University of Denmark and Copenhagen University Hospital. His research interests include psychoacoustics, effects of hearing loss on sound perception, and audiology. He is an engineering graduate of Ecole Centrale Paris and holds an MSc degree in Engineering Acoustics (2005) and a PhD in Electronics and Communication from the Technical University of Denmark (2011).