
Underwater Acoustics Data Challenge Workshop 2025: Thales


Context

In this challenge, you are tasked with designing an algorithm to track and classify contacts in underwater acoustic data. You will be provided with synthetic passive sonar data from a line array; this data increases in complexity over time. Your algorithm must be able to cope with this increasing complexity, which may include crossing contacts, quiet contacts, multiple tonals, interference from sea mammals or other vessels in the water, and active sonar pulses. The data you will be provided with is beam data (bearing vs time). You will also be provided with ground truth data to enable measurement of the performance of your algorithm or to train any machine learning solutions.

You must decide on an appropriate method of visualisation to present the results of your tracking and classification algorithms to the sonar operator. Sonar operators are continuously presented with bearing-time records (BTRs), which display the intensity of the signal output by the beamformer at each bearing over time. Visualisations provided to the sonar operator are displayed via a human-computer interface (HCI). An example of a BTR containing two crossing targets is provided below:
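
In practice a BTR is simply a two-dimensional array of beamformer intensities indexed by time and bearing. A minimal sketch of how such a record might be displayed, assuming purely for illustration that the beam data are available as a NumPy array with one row per time step and one column per steered bearing, is:

# Minimal sketch of rendering a bearing-time record (BTR) with matplotlib.
# Assumes `beam_data` is a 2-D NumPy array of beamformer output intensity,
# shaped (n_time_steps, n_bearings); the names and values are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
beam_data = rng.random((600, 181))          # placeholder intensities
bearings = np.linspace(0, 180, 181)         # steered bearings in degrees
times = np.arange(beam_data.shape[0])       # time steps (e.g. seconds)

plt.imshow(beam_data, aspect="auto", origin="lower",
           extent=[bearings[0], bearings[-1], times[0], times[-1]],
           cmap="viridis")
plt.xlabel("Bearing (degrees)")
plt.ylabel("Time (s)")
plt.colorbar(label="Beamformer output intensity")
plt.title("Bearing-time record (BTR)")
plt.show()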

Track
Applying a simple contact follower, which selects the point of greatest intensity, to the contact initially at a bearing of 150 degrees gives the following result, where the track is given in red:

Note that the contact follower loses the correct track when the two contacts cross, incorrectly selecting the target that remains at broadside. This exemplifies the issues encountered with crossing targets in acoustic data.
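
A minimal sketch of the kind of simple contact follower described above, assuming the same (time, bearing) beam-data layout as before; the array names and the search-window width are illustrative, not part of the provided data set:

# Minimal sketch of a simple contact follower: at each time step it picks the
# bearing of greatest intensity within a small window around the previous
# estimate. The window width and data layout are assumptions.
import numpy as np

def follow_contact(beam_data, bearings, start_bearing_deg, window_deg=5.0):
    """Return one bearing estimate (degrees) per time step."""
    track = []
    current = start_bearing_deg
    for row in beam_data:                       # one row per time step
        in_window = np.abs(bearings - current) <= window_deg
        idx = np.where(in_window)[0]
        best = idx[np.argmax(row[idx])]         # loudest beam near the track
        current = bearings[best]
        track.append(current)
    return np.array(track)

# Example: follow the contact that starts near 150 degrees.
# track = follow_contact(beam_data, bearings, start_bearing_deg=150.0)

A follower of this kind has no model of contact dynamics, which is precisely why it can jump onto the louder contact when two tracks cross.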

Classify
Classification of contacts is undertaken through inspection of the frequency content in a signal, which can be visualised using a spectrogram. Observe the time series of a signal below and listen to how it sounds:

The corresponding spectrogram is:

This signal is simulated to represent interference from a propeller, which contains multiple frequency tonals. It is one example of interference that can mask contacts in the water that your signal processing algorithm must identify. Operators can use this frequency information to classify a vehicle; in the example above the sound has a peak at 3 Hz, corresponding to the propeller rate of the source.
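
A minimal sketch of how the frequency content of a beam time series might be inspected with SciPy, and the strongest low-frequency tonal located; the sample rate and the synthetic 3 Hz test signal are assumptions for illustration only:

# Minimal sketch of inspecting frequency content with a spectrogram and
# locating the strongest tonal (e.g. a propeller rate). `signal_ts` and `fs`
# are placeholders standing in for the supplied data.
import numpy as np
from scipy import signal

fs = 1000.0                                   # assumed sample rate (Hz)
t = np.arange(0, 30.0, 1.0 / fs)
rng = np.random.default_rng(1)
signal_ts = np.sin(2 * np.pi * 3.0 * t) + 0.5 * rng.standard_normal(t.size)

f, tt, Sxx = signal.spectrogram(signal_ts, fs=fs, nperseg=4096, noverlap=2048)

mean_spectrum = Sxx.mean(axis=1)              # average power over time
peak_hz = f[np.argmax(mean_spectrum)]
print(f"Strongest tonal at roughly {peak_hz:.1f} Hz")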

Challenge Goals

Your goal in this challenge is to ensure that the performance of your algorithm is maintained across increasingly complex data.

Your initial task will be to track a single contact in a short block of data, providing a visualisation of your output appropriate for presentation to the operator along with a measure of the error between your track and the ground truth. Making use of the tonal catalogue that will be provided, you will then be required to design a classification algorithm.
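
One possible error measure, sketched below, is the root-mean-square bearing error between your track and the ground truth, with differences wrapped into [-180, 180) degrees; the array names are assumptions about how you store the two tracks:

# Minimal sketch of a track-vs-truth error metric: RMS bearing error with
# angle wrapping. Input arrays are assumed to be aligned in time.
import numpy as np

def bearing_rmse(track_deg, truth_deg):
    diff = (np.asarray(track_deg) - np.asarray(truth_deg) + 180.0) % 360.0 - 180.0
    return np.sqrt(np.mean(diff ** 2))

# Example with a constant 2-degree offset:
print(bearing_rmse([150.0, 151.0, 152.0], [148.0, 149.0, 150.0]))  # -> 2.0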

As you move through the data in time, the complexity in the data increases. Adjust your algorithm to account for the complexities mentioned in the context section above, while maintaining performance for the simpler scenario.

Choose one element of the processing chain that you would like to focus your attention on for further development, optimising the output of your algorithm. You will be provided with any necessary supplementary material to support the development of your chosen function, e.g. track data. You may wish to improve the design of your tracker, explore the frequency content of the interfering signals in the data set, or test the impact of varying integration time on the visualisation presented to the operator, designing an HCI that improves the operator's perception.
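
As an illustration of the integration option mentioned above, the sketch below averages the BTR over blocks of consecutive time steps before display; longer integration smooths noise but blurs fast bearing changes. The data layout and the show_btr helper are hypothetical:

# Minimal sketch of varying the integration time: average the BTR over
# non-overlapping blocks of consecutive time steps before display.
import numpy as np

def integrate_btr(beam_data, block_len):
    """Average a (time, bearing) BTR over blocks of `block_len` time steps."""
    n_blocks = beam_data.shape[0] // block_len
    trimmed = beam_data[:n_blocks * block_len]
    return trimmed.reshape(n_blocks, block_len, -1).mean(axis=1)

# e.g. compare 1 s, 5 s and 20 s integration (assuming one row per second):
# for block in (1, 5, 20):
#     show_btr(integrate_btr(beam_data, block))   # show_btr is hypothetical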

The method you use for designing your algorithm is free for you to choose, whether this be signal processing, machine learning or an alternative mechanism you deem appropriate. In addition to the core data set, you will be provided with a data set for testing.

Pre-requisites

You will need to bring a laptop with MATLAB, Python or a similar alternative installed, together with any relevant libraries/toolboxes, such as NumPy, Matplotlib, SciPy, PyTorch or TensorFlow.

Background Material

Sonar data
Hydrophones detect changes in pressure induced by the propagation of sound in water and convert them into electrical representations of the acoustic signal. The processing of these electrical representations enables the detection, tracking and classification of contacts in the water.

Beamforming
Beamforming is fundamental to sonar processing and is one of the first steps in the signal processing chain. It allows us to listen to acoustic signals arriving from different directions and facilitates the extraction of directional (bearing) information from hydrophone data. Your data has been beamformed using a delay-and-sum (or conventional) beamformer, which operates in the time domain.

Given a line array, acoustic signals that are not incident at broadside are received at different times at each point on the array. To maximise the intensity of the received signal, the delay-and-sum beamformer applies a time delay to the data received at each hydrophone and sums the outputs across all hydrophones. At each time step, the beamformer repeats this process for every bearing (azimuth angle) to which the beam is steered, and the summed response across hydrophones is output for each bearing at each time. An equivalent algorithm exists in the frequency domain, where a phase shift is applied instead of a time delay. Adaptive beamformers are an alternative to conventional beamformers; they steer a null towards an interfering signal to reduce its effect on the beamformer output.
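
A minimal sketch of a time-domain delay-and-sum beamformer for a uniform line array, following the description above; the hydrophone spacing, sample rate and sound speed are illustrative assumptions, and delays are rounded to whole samples for simplicity:

# Minimal sketch of a time-domain delay-and-sum beamformer for a uniform line
# array. All parameter values are illustrative assumptions.
import numpy as np

def delay_and_sum(hydro_data, fs, spacing_m, angle_deg, c=1500.0):
    """hydro_data: (n_hydrophones, n_samples); angle measured from broadside."""
    n_hydro, n_samples = hydro_data.shape
    positions = np.arange(n_hydro) * spacing_m
    # Plane-wave arrival-time difference per hydrophone for this steer angle.
    delays_s = positions * np.sin(np.radians(angle_deg)) / c
    delays_smp = np.round(delays_s * fs).astype(int)
    delays_smp -= delays_smp.min()               # keep all shifts non-negative
    out = np.zeros(n_samples)
    for ch, d in enumerate(delays_smp):
        out[: n_samples - d] += hydro_data[ch, d:]   # align, then sum
    return out / n_hydro

# Sweep the steer angle to build one time step of a BTR:
# bearings = np.arange(-90, 91)
# btr_row = [np.mean(delay_and_sum(block, fs, 1.5, b) ** 2) for b in bearings]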

Useful references

A. D. Waite. 2002. Sonar for Practising Engineers, 3rd Edition