Abstract : Sound plays an important role in immersion when consuming content in eXtended reality (AR/VR). Modeling the acoustics of the virtual or real space one occupies is a key element of engagement and immersion. Multichannel spatial content must be adapted to mimic the acoustics of a space and then rendered binaurally over headphones. This talk will focus on modeling late reverberation for AR/VR and discuss the optimal techniques for different scenarios, with a focus on low-latency, real-time methods that provide perceptual plausibility. Particular attention will be given to delay-network-based parametric reverberators that can be tuned to emulate a space and run in real time.
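As a rough illustration of the kind of delay-network reverberator the abstract refers to, the sketch below implements a minimal feedback delay network (FDN): several mutually prime delay lines mixed through a lossless Householder feedback matrix, with a scalar feedback gain controlling the decay. All parameter values (delay lengths, gain) are illustrative assumptions, not anything described by the speaker.

```python
import numpy as np

def fdn_reverb(x, delays=(1009, 1307, 1553, 1801), g=0.85):
    """Minimal feedback delay network (FDN) late-reverb sketch.

    delays : delay-line lengths in samples (mutually prime, to spread echoes)
    g      : feedback gain per loop pass (< 1), sets the decay time
    (Illustrative parameter values; not from the talk.)
    """
    N = len(delays)
    # Householder matrix: an orthogonal (lossless) mixing of the N delay outputs
    A = np.eye(N) - (2.0 / N) * np.ones((N, N))
    bufs = [np.zeros(d) for d in delays]   # circular delay-line buffers
    idx = [0] * N                          # read/write positions
    y = np.zeros(len(x))
    for n in range(len(x)):
        outs = np.array([bufs[i][idx[i]] for i in range(N)])
        y[n] = outs.sum()                  # sum of delay-line outputs
        fb = g * (A @ outs)                # mixed, attenuated feedback
        for i in range(N):
            bufs[i][idx[i]] = x[n] + fb[i] # write input + feedback
            idx[i] = (idx[i] + 1) % delays[i]
    return y

# Impulse response of the network: a dense, exponentially decaying tail
ir = fdn_reverb(np.r_[1.0, np.zeros(47999)])
```

A sample-by-sample Python loop like this is far too slow for real-time use; practical implementations run the same structure in a block-based C/C++ audio callback, and parametric versions add per-delay-line attenuation filters so the decay time can be set per frequency band to match a measured or simulated room.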
Bio : Orchisama Das is a Senior Audio Research Scientist at Sonos Inc., based in London. She received her PhD from the Center for Computer Research in Music and Acoustics at Stanford University, during which she interned at Tesla and Meta Reality Labs; she later did a postdoc at the Institute of Sound Recording at the University of Surrey. Her research focuses on artificial reverberation and room acoustics modeling.