About the Project
Project Outline:
Hearing loss affects over 5% of the world’s population, making it a major public health concern. Hearing aids are the most commonly prescribed treatment, but many users report that they perform poorly for speech in noisy situations. Breakthroughs in deep learning and low-power chip design are driving the next generation of hearing devices and wearables, with the potential to revolutionise speech understanding in challenging listening environments. For example, Apple’s AirPods Pro have gained FDA approval as hearing aids for mild to moderate hearing loss, and Phonak has introduced devices equipped with deep neural networks that dynamically enhance speech clarity in noisy environments. However, training these approaches to work in general settings and to suit individual preferences remains a critical challenge.
To improve deep learning-enhanced hearing aids, we require metrics that predict how well a given hearing aid algorithm will perform for a specific user in a particular acoustic environment. Existing approaches often rely on oversimplified assumptions about listener preferences, captured using basic metrics. For example, it is often assumed that there is a single well-defined target speaker and that processing should maximise noise suppression while preserving speech quality. Such simple metrics do little to capture users’ needs in more complex settings, such as engaging in multiparty conversations in a busy restaurant.
The project will explore a variety of methods to understand hearing device user preferences in more complex settings, including leveraging virtual reality (VR) to simulate diverse acoustic environments and hearing aid algorithms. VR offers the advantage of creating immersive and controlled scenarios in which users can directly experience and evaluate different algorithmic configurations. This approach allows systematic measurement of user preferences across a wide range of conditions, ensuring both ecological validity and experimental rigour. From this understanding, new algorithm quality metrics will be derived for optimising existing deep-learning enhancement approaches in a more user-dependent manner.
The project will be based at the University of Sheffield and co-supervised by experts from the Universities of Sheffield and Salford, collaborators on the ongoing EPSRC-funded Clarity Project. The Clarity Project focuses on improving speech-in-noise understanding, making it a natural foundation for this work. The Royal National Institute for Deaf People (RNID) will act as a key partner, offering additional expertise and a crucial end-user perspective.
Apply for this project: https://soundfutures.salford.ac.uk/sheffield-phd-opportunity/
CDT programme
This University of Sheffield PhD project is part of the EPSRC Centre for Doctoral Training in Sustainable Sound Futures programme. Further details about the CDT and the programme can be found on the CDT website: https://soundfutures.salford.ac.uk/
Funding Notes
The project offers a fully funded studentship including a tax-free stipend (£20,780, increasing annually), full tuition fees (at the UK home rate) and a generous budget to support your research training. The successful candidate will also have access to funded, cohort-based training from September 2025 as a member of the Centre for Doctoral Training in Sustainable Sound Futures.