Translational Research II - The Killion Lecture

New Models of Human Hearing Via Machine Learning
Josh McDermott, PhD
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA

Humans derive an enormous amount of information about the world from sound. This talk will describe our recent efforts to leverage contemporary machine learning to build neural network models of our auditory abilities and their instantiation in the brain. Such models have enabled a qualitative step forward in our ability to account for real-world auditory behavior and illuminate function within auditory cortex. They also open the door to new approaches for designing auditory prosthetics and understanding their effect on behavioral abilities.
Josh McDermott is a perceptual scientist studying sound and hearing in the Department of Brain and Cognitive Sciences at MIT. His research addresses human and machine audition using tools from experimental psychology, engineering, and neuroscience. He is particularly interested in using the gap between human and machine competence to both better understand biological hearing and design better algorithms to aid human hearing.