In this talk we summarize our work on acoustic sensing from a multi-rotor drone, which is challenged by the strong ego-noise from the rotating rotors and propellers. We proposed a time-frequency spatial filtering framework for sound enhancement and source localization with a microphone array mounted on the drone. The method works well in the presence of strong ego-noise by exploiting the time-frequency sparsity of the acoustic signals. We further proposed an audio-visual joint processing framework that additionally uses a camera mounted on the drone to detect and localize potential target sound sources and to provide guidance for steering the time-frequency spatial filter. This framework also addresses the audio-visual calibration problem, which aims to align the audio and visual streams temporally and geometrically. In addition, we collected an audio-visual dataset, which has been made publicly available to promote research in flying robot audition.
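To give a flavour of the idea, here is a minimal sketch (not the authors' actual algorithm) of time-frequency spatial filtering: only the time-frequency bins whose spatial signature agrees with an assumed steering vector toward the target are kept, while ego-noise-dominated bins are suppressed. All function names, the coherence measure, and the `threshold` parameter are illustrative assumptions.

```python
import numpy as np

def stft(x, nfft=256, hop=128):
    # Frame and window the signal, then FFT each frame -> (frames, bins)
    frames = [x[i:i + nfft] * np.hanning(nfft)
              for i in range(0, len(x) - nfft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def tf_mask_enhance(X_mics, steering, threshold=0.7):
    """Illustrative time-frequency spatial filter (hypothetical parameters).

    X_mics: (mics, frames, bins) multichannel STFT.
    steering: (mics, bins) assumed steering vector for the look direction.
    Bins where the channels are spatially coherent after alignment are
    assumed target-dominated; the rest are masked out. This relies on the
    time-frequency sparsity of the target source.
    """
    # Align each channel to the look direction, then average (delay-and-sum)
    aligned = X_mics * np.conj(steering)[:, None, :]
    y = aligned.mean(axis=0)
    # Spatial coherence per TF bin: in [0, 1] by the Cauchy-Schwarz inequality
    num = np.abs(aligned.sum(axis=0)) ** 2
    den = X_mics.shape[0] * (np.abs(aligned) ** 2).sum(axis=0) + 1e-12
    coherence = num / den
    # Hard binary mask; soft masks are a common alternative
    mask = (coherence > threshold).astype(float)
    return y * mask, mask

# Toy usage: a tone common to both channels plus independent noise
rng = np.random.default_rng(0)
tone = np.cos(2 * np.pi * 0.1 * np.arange(1024))
X = np.stack([stft(tone + 0.01 * rng.standard_normal(1024))
              for _ in range(2)])            # (2, frames, bins)
steer = np.ones((2, X.shape[2]), dtype=complex)  # broadside look direction
y, mask = tf_mask_enhance(X, steer)
```

In practice the steering information could come from the camera-based guidance described in the talk, rather than being fixed as above.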
This webinar will be delivered via Zoom. Register via this Eventbrite site so that we can email the Zoom link to you.
Registration ends at 23:30 on 23 March 2021.