Special Session: Artificial Intelligence and Hearing Healthcare

Potentials of and Barriers to the Use of Artificial Intelligence in Hearing Aids
Brian C.J. Moore, PhD
Cambridge Hearing Group, Department of Psychology, University of Cambridge, UK 

Objectives: To review potential benefits of the use of artificial intelligence (AI) in hearing aids and to consider barriers to the implementation of AI and how those barriers might be overcome.

Design: Literature review and personal experience with hearing aids.

Results: AI is currently being applied in hearing aids in two main ways: (1) to determine the nature of the sound scene (e.g. speech in quiet, music, speech in a restaurant) and to select appropriate signal processing based on the identified scene; (2) to process the incoming signal directly so as to attenuate some sounds (e.g. background noise) while preserving the signal of interest (usually speech); an illustrative sketch of this second approach follows the list below. AI in the form of deep neural networks has shown great promise in laboratory studies for enhancing speech in background noise and reverberation. However, there are several barriers to the effective implementation of AI in hearing aids, including:

(1)  Many of the most effective AI schemes require processing power and memory capacity exceeding what is currently available in hearing aids (a back-of-envelope compute estimate follows this list). This barrier is likely to be overcome by technical advances, but the timescale is uncertain.

(2)  Some AI schemes introduce a time delay exceeding the maximum acceptable delay of 10-20 ms (see the latency arithmetic after this list). Further development of low-latency AI schemes is needed.

(3)  Many AI schemes have been trained with only a limited number of types of background sounds. Training with a wide variety of background types is needed to ensure appropriate generalization (a sketch of such an augmentation scheme follows this list).

(4)  A common and important listening situation is when several people are talking at once. While AI can be quite effective at separating two or more talkers, adequate methods for selecting the talker that the person wants to listen to are not available. Such methods are needed but will be difficult to develop. They will probably require combining information from several sources (using AI), for example evoked electrical potentials, eye gaze, head movements, and information from the auditory and visual scene (a hypothetical fusion sketch follows this list). This will require multiple sensors, making the whole system “clunky” and potentially reducing acceptability to the user.

(5)  To get the benefits of AI in situations with high sound levels, the ear canal needs to be sealed; the commonly used open fitting would allow unprocessed sound to leak to the eardrum, reducing or eliminating any potential benefits of AI signal processing (the final sketch after this list quantifies this). Closed fittings create problems with occlusion (the person’s own voice sounding loud and boomy) for hearing-impaired people who have reasonably good hearing at low frequencies, which is common. Possible solutions are “active vents” or active own-voice cancellation.
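The sketch below illustrates approach (2) from the Results: a small recurrent network estimates a time-frequency gain (a “mask”) that is applied to the noisy signal before resynthesis. It is a minimal sketch assuming PyTorch is available; the network size, the 512-point STFT, and the mask-based formulation are illustrative choices, not any specific hearing-aid design.

```python
# Minimal mask-based speech enhancement sketch (approach 2 above).
# Sizes and the mask formulation are illustrative assumptions.
import torch
import torch.nn as nn

N_FFT, HOP = 512, 128          # 32 ms window, 8 ms hop at 16 kHz
N_BINS = N_FFT // 2 + 1

class MaskNet(nn.Module):
    """Estimates a [0, 1] gain per time-frequency bin from the noisy magnitude."""
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(N_BINS, hidden, batch_first=True)
        self.out = nn.Linear(hidden, N_BINS)

    def forward(self, mag):                # mag: (batch, frames, bins)
        h, _ = self.rnn(mag)
        return torch.sigmoid(self.out(h))  # mask in [0, 1]

def enhance(noisy, model):
    """Apply the estimated mask to the noisy STFT and resynthesise."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(noisy, N_FFT, HOP, window=window, return_complex=True)
    mag = spec.abs().transpose(0, 1).unsqueeze(0)  # (1, frames, bins)
    mask = model(mag).squeeze(0).transpose(0, 1)   # (bins, frames)
    return torch.istft(spec * mask, N_FFT, HOP, window=window)

model = MaskNet()                 # untrained: structure only, for illustration
noisy = torch.randn(16000)        # 1 s of stand-in "noisy speech" at 16 kHz
clean_estimate = enhance(noisy, model)
```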
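Regarding point (1), a back-of-envelope estimate of the compute required even by the small network sketched above is given below. The arithmetic is standard, but actual hearing-aid processing budgets are vendor-specific, so the comparison is indicative only.

```python
# Rough multiply-accumulate (MAC) count for the small MaskNet above.
hidden, bins = 256, 257
frames_per_s = 16000 / 128                          # one frame per 8 ms hop
gru_macs = 3 * (bins * hidden + hidden * hidden)    # 3 gates, approx., per frame
out_macs = hidden * bins
macs_per_s = (gru_macs + out_macs) * frames_per_s
print(f"{macs_per_s / 1e6:.0f} M multiply-accumulates per second")  # ~57 M
# Many published enhancement networks are orders of magnitude larger,
# which is the mismatch with a hearing aid's milliwatt power budget.
```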
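Point (2) follows from frame-based processing: the algorithmic delay is at least one analysis window plus any look-ahead. A minimal illustration (the window sizes are arbitrary examples):

```python
# Illustrative latency arithmetic for frame-based (STFT-style) processing.
FS = 16000  # sampling rate in Hz

def algorithmic_delay_ms(n_fft, lookahead_frames, hop):
    """Lower bound: one analysis window plus any look-ahead frames."""
    return 1000 * (n_fft + lookahead_frames * hop) / FS

print(algorithmic_delay_ms(512, 0, 128))  # 32 ms: already over the 10-20 ms limit
print(algorithmic_delay_ms(64, 0, 32))    # 4 ms: fits, but such short windows give
                                          # poor frequency resolution, hence the
                                          # need for specialised low-latency designs
```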
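Point (3) corresponds, in training-pipeline terms, to augmentation of the kind sketched below: each clean utterance is mixed with a noise drawn from a broad pool of background categories at a random signal-to-noise ratio. The pools, SNR range, and helper names are placeholders, not a prescription.

```python
# Sketch of noise-diversity augmentation for training an enhancement network.
import random
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the mixture has the requested speech-to-noise ratio."""
    noise = np.resize(noise, speech.shape)        # loop/trim to match length
    p_s = np.mean(speech ** 2)
    p_n = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return speech + gain * noise

def training_example(speech_pool, noise_pools):
    """Draw speech, a random noise *category*, then a random file and SNR."""
    speech = random.choice(speech_pool)
    category = random.choice(list(noise_pools))   # babble, traffic, music, ...
    noise = random.choice(noise_pools[category])
    snr_db = random.uniform(-5.0, 20.0)
    return mix_at_snr(speech, noise, snr_db), speech  # (input, target) pair

# Stand-in data: in practice these would be large, varied recorded corpora.
rng = np.random.default_rng(0)
speech_pool = [rng.standard_normal(16000) for _ in range(4)]
noise_pools = {"babble": [rng.standard_normal(16000)],
               "traffic": [rng.standard_normal(16000)]}
noisy, clean = training_example(speech_pool, noise_pools)
```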
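For point (4), the sketch below is purely hypothetical: it shows one way multiple weak cues (e.g. EEG-based attention decoding, eye gaze, head orientation) might be normalised and fused into a per-talker score. As noted above, no adequate validated method of this kind yet exists; the cue names, weights, and fusion rule are invented for illustration.

```python
# Hypothetical fusion of several sensor cues into a per-talker attention score.
import numpy as np

def select_talker(cue_scores, weights):
    """cue_scores: dict cue -> per-talker scores; returns (choice, confidence)."""
    n_talkers = len(next(iter(cue_scores.values())))
    fused = np.zeros(n_talkers)
    for cue, scores in cue_scores.items():
        s = np.asarray(scores, dtype=float)
        s = (s - s.mean()) / (s.std() + 1e-9)     # normalise each cue
        fused += weights[cue] * s
    probs = np.exp(fused) / np.exp(fused).sum()   # soft confidence per talker
    return int(np.argmax(probs)), probs

cues = {"eeg_envelope_corr": [0.10, 0.30],   # auditory attention decoding
        "eye_gaze":          [0.80, 0.20],   # fraction of time gazing at talker
        "head_orientation":  [0.60, 0.40]}
weights = {"eeg_envelope_corr": 1.0, "eye_gaze": 0.7, "head_orientation": 0.3}
choice, confidence = select_talker(cues, weights)
```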
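Finally, point (5) can be quantified with a simple power sum: the noise reaching the eardrum is the processed-path noise plus the unprocessed sound leaking through the vent, so the leak caps the achievable benefit. The leak level used below is an arbitrary example.

```python
# Power-sum illustration of how vent leakage caps AI noise-reduction benefit.
import numpy as np

def effective_attenuation_db(processing_attenuation_db, leak_rel_db):
    """Noise at the eardrum = processed path + leaked (unprocessed) path."""
    processed = 10 ** (-processing_attenuation_db / 10)  # relative power
    leaked = 10 ** (leak_rel_db / 10)                    # re. unprocessed noise
    return -10 * np.log10(processed + leaked)

# Even if the AI removes 20 dB of noise, a leak only 6 dB below the
# unprocessed level limits the net benefit at the eardrum to ~6 dB:
print(effective_attenuation_db(20, -6))   # ~5.8 dB
```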

Conclusions: Much research on potential applications of AI to hearing aids has been done in laboratory studies, considering only the effectiveness of the signal processing using sounds delivered via headphones or loudspeakers. A more holistic approach is needed, in which the barriers described above are considered and addressed.

Brian Moore is Emeritus Professor of Auditory Perception in the University of Cambridge. His research focuses on the perception of sound by people with normal and impaired hearing, and on the design and fitting of hearing aids. He is a Fellow of the Royal Society, the Academy of Medical Sciences, the Acoustical Society of America, the British Society of Audiology, and the Audio Engineering Society. He has received the Silver and Gold medals from the Acoustical Society of America, and awards from the American Academy of Audiology, the Association for Research in Otolaryngology, and the American Auditory Society. He has an Honorary Doctorate from Adam Mickiewicz University, Poland. He has published 22 books and more than 650 refereed journal articles.