
Sound Futures PhD – Better Personalization of Deep Learning-Enhanced Hearing Devices

About the project

Hearing loss affects over 5% of the world’s population, making it a major public health concern. Hearing aids are the most commonly prescribed treatment, but many users report that they perform poorly for speech in noisy situations. Breakthroughs in deep learning and low-power chip design are driving the next generation of hearing devices and wearables, with the potential to revolutionize speech understanding in challenging listening environments. For example, Apple’s AirPods Pro have gained FDA approval as hearing aids for mild to moderate hearing loss, and Phonak has introduced devices equipped with deep neural networks that dynamically enhance speech clarity in noisy environments. However, training these approaches to work in general settings and to suit individual preferences remains a critical challenge.

To improve deep learning-enhanced hearing aids, we require metrics that predict how well a given hearing aid algorithm will perform for a specific user in a particular acoustic environment. Existing approaches often rely on oversimplified assumptions about listener preferences, which are captured using basic metrics. For example, it is often assumed there is a well-defined target speaker and that processing should maximise noise suppression while preserving quality. These simple metrics do little to capture users’ needs in more complex settings, such as trying to engage in multiparty conversations in a busy restaurant.
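To make the limitation concrete, the kind of "basic metric" described above can be illustrated with a minimal sketch (not part of the project itself): a signal-to-noise ratio (SNR) computed against a clean reference, where any deviation from the reference counts as noise. The toy signals and the 5-tap smoothing "enhancer" below are purely illustrative assumptions; real evaluations would use recorded speech and actual enhancement algorithms.

```python
import numpy as np

def snr_db(clean, processed):
    """SNR in dB, treating (processed - clean) as the residual noise."""
    noise = processed - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)   # 1 s at 16 kHz
clean = np.sin(2 * np.pi * 440 * t)            # stand-in for the target speech
noisy = clean + 0.5 * rng.standard_normal(t.size)

# Crude "enhancement": a 5-tap moving average suppresses wideband noise
# while leaving the low-frequency target largely intact.
enhanced = np.convolve(noisy, np.ones(5) / 5, mode="same")

print(f"SNR before: {snr_db(clean, noisy):.1f} dB")
print(f"SNR after:  {snr_db(clean, enhanced):.1f} dB")
```

A metric like this rewards maximal suppression of everything except one predefined target, which is precisely why it breaks down in multiparty settings where the "target" changes from moment to moment and users may want to retain some background awareness.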

The project will explore a variety of methods for understanding hearing device users' preferences in more complex settings, including leveraging virtual reality (VR) to simulate diverse acoustic environments and hearing aid algorithms. VR offers the advantage of creating immersive and controlled scenarios in which users can directly experience and evaluate different algorithmic configurations. This approach allows the systematic measurement of user preferences across a wide range of conditions, ensuring both ecological validity and experimental rigour. From this understanding, new algorithm quality metrics will be derived for optimising existing deep-learning enhancement approaches in a more user-dependent manner.

The project will be based at the University of Sheffield and co-supervised by experts from both Sheffield and the University of Salford, collaborators on the ongoing EPSRC-funded Clarity Project. The Clarity Project focuses on improving speech-in-noise understanding, making it a natural foundation for this work. The Royal National Institute for Deaf People (RNID) will act as a key partner, offering additional expertise and a crucial end-user perspective.

Supervisors

PhD Funding

  • The PhD is fully funded, providing a tax-free stipend (£20,780, increasing annually), full tuition fees (home fee rate), and a generous budget for research training support. When the CDT begins in 2025, the costs of your involvement in the cohort activities will be fully funded by the CDT.
Posted on 4th March 2025 in Careers in acoustics