
Technology Update Session

Session 3C
Clinical Benefits of a New Approach to Adaptive Directionality with Phonak Lumity
Kevin D. Seitz-Paquette, AuD, Sonova

Since the transition from analog to digital hearing instruments, hearing aids have offered increasingly sophisticated features.  Among these are automatic classification and steering systems that apply signal processing features only in scenarios where patients are expected to benefit.  Such automated behavior must make assumptions about the listening intent of the user.  For example, automatic engagement of directional microphones assumes that the user wants to focus on speech from the front.  In situations where the speech of interest is to the side of or behind the user, however, the hearing instrument's automatic behavior and the user's listening intent are mismatched.  Historically, hearing care providers have trained their patients to adapt their own behavior to align with the hearing instrument's (e.g., by instructing patients to always sit facing the talker of interest).

Phonak's latest product platform, Lumity, takes an important step toward solving this problem.  Lumity incorporates a new feature, SpeechSensor, into the automatic directionality system.  SpeechSensor identifies whether the primary speech is located in front of, to the side of, or behind the user.  When speech is in front of the user and all other activation criteria are met, the hearing instruments engage StereoZoom (a binaural beamformer) to create a narrow beam in front of the user.  When speech is to the side, StereoZoom disengages and the beam broadens to a fixed directional pattern.  When speech is behind the user, Real Ear Sound (which mimics the natural directionality of the pinna) is applied.  By opening up the directional pattern when speech is off-axis, Lumity allows the user to train their attention on the talker of interest, rather than excluding off-axis talkers by applying a less-than-ideal directional response.

Clinical investigation of this approach to directionality included measures of speech intelligibility in noise (Oldenburger Satztest [OLSA]) and listening effort (Adaptive Categorical Listening Effort Scaling [ACALES]) for off-axis speech.  Participants (n = 22) were seated in a 12-loudspeaker array in a sound booth.  Babble noise was presented from eleven of the loudspeakers at an overall level of 71 dB(A).  The remaining loudspeaker, located at 90 or 180 degrees depending on the spatial configuration under test, presented the speech at an adaptive level.  With SpeechSensor's directional behavior, speech intelligibility improved by an average of 1.55 dB SRT (p < 0.001) relative to legacy behavior when speech was presented from the side or the back.  For listening effort, participants showed an average benefit of 1.37 dB (p < 0.001) with SpeechSensor compared to legacy behavior for speech presented from the side or the back.

Clinical investigation of Lumity and SpeechSensor demonstrates a benefit for listening to off-axis speech.  This innovation can support patients in complex and challenging listening environments, especially when the patient cannot direct his or her visual attention toward the talker.
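The steering behavior described above can be summarized as a simple decision rule.  The following Python sketch is illustrative only: the function and type names are hypothetical, the remaining StereoZoom activation criteria are collapsed into a single boolean, and the actual Lumity classifier is considerably more involved.

    from enum import Enum, auto

    class SpeechDirection(Enum):
        FRONT = auto()
        SIDE = auto()
        BACK = auto()

    class DirectionalMode(Enum):
        STEREOZOOM = auto()          # narrow binaural beamformer
        FIXED_DIRECTIONAL = auto()   # broader fixed directional pattern
        REAL_EAR_SOUND = auto()      # mimics natural pinna directionality

    def select_directional_mode(speech_direction, stereozoom_criteria_met):
        """Map the estimated speech direction to a directional response."""
        if speech_direction is SpeechDirection.FRONT and stereozoom_criteria_met:
            # Speech from the front: engage the narrow binaural beam.
            return DirectionalMode.STEREOZOOM
        if speech_direction is SpeechDirection.BACK:
            # Speech behind the user: apply the pinna-like response.
            return DirectionalMode.REAL_EAR_SOUND
        # Speech to the side (or front speech without the remaining
        # activation criteria, an assumption made here for illustration):
        # use the broader fixed directional pattern.
        return DirectionalMode.FIXED_DIRECTIONAL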
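The speech in the study was presented at an adaptive level, but the abstract does not specify the tracking rule.  The sketch below assumes a simple one-up/one-down rule with a hypothetical 2 dB step and a 50%-words-correct criterion, purely to illustrate how such a procedure converges on the speech reception threshold (SRT); the OLSA adaptive procedure uses its own scoring and step rules.

    def next_speech_level(current_level_db, proportion_words_correct, step_db=2.0):
        """One step of a simple adaptive speech-level rule (illustrative only).

        If the listener scored better than 50% of words correct on the last
        sentence, the speech level is lowered; otherwise it is raised.
        Repeating this converges on the level at which 50% of words are
        understood, i.e., the speech reception threshold (SRT).
        """
        if proportion_words_correct > 0.5:
            return current_level_db - step_db  # too easy: lower the speech level
        return current_level_db + step_db      # too hard: raise the speech level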


Kevin Seitz-Paquette is the Director of the Phonak Audiology Research Center (PARC) in Aurora, IL.  PARC conducts clinical investigations and technical analysis of Phonak products, ensuring they provide benefit to the patient and that hearing care providers have the evidence needed to use them effectively.  Before joining Sonova in 2020, Dr. Seitz-Paquette held roles in Product Management and Clinical Research elsewhere in the hearing aid industry.  He earned his AuD from Northwestern University and an MA in Linguistics from Indiana University.