MSc project topics with me

I'm an academic working on AI and Biodiversity - my research is described here. If you're an MSc student in the Tilburg University CSAI department (or elsewhere), you could do your thesis project with me. In most cases you will need some deep learning skills, and in most cases you'll be working with natural sound data, for example from wildlife sound monitoring. Here are some specific topics of interest right now that you could study:

  • Novelty detection for biodiversity images/audio (with an industry partner)
    • We use images and sound recordings to detect birds, insects and plants all across the Netherlands. But what if we receive an upload with a species we don’t know about, or an unusual file that needs expert attention? In this project you will work with machine learning methods for anomaly detection, and apply them to biodiversity data. You will help to improve automatic monitoring of biodiversity, in the Netherlands and beyond.
  • Detecting birdsong on-device
    • Automatic detection of sounds is useful for "wake-up" functionality in a smartphone; it's already used for keyword-spotting in mobile devices. Can we use it to automatically detect a particular birdsong? In this project we would like to develop a phone app that can listen continuously and react when a particular bird species is detected. We know this is possible, and you can solve it using standard deep learning toolkits - but the big challenge will be to run it on the phone itself (Android, iOS), creating a smart but low-power algorithm that can run on-device. In this project you might use existing "keyword spotting" tools, or use toolkits such as TFLite to port an algorithm onto the device.
  • Classifying rare fish from photos (with an industry partner)
    • Our partner runs an app for anglers, who submit photos of the fish they catch for automatic species ID. Can we use this unique dataset to help monitor the fish biodiversity of the Netherlands? Organisations such as the Waterschaps are obliged to make biodiversity reports to the EU, so they would benefit from automatic monitoring technology. However, there is an interesting research question: photos of fish taken out of water are very different from those on automatic underwater cameras. This is an extreme example of "domain adaptation" in machine learning. You could also investigate how to classify rare fish species for which we have only one or two live images – can image synthesis, or collections of drawings, help?
  • Build a better bird classifier (with an industry partner)
    • Warblr is a mobile app for automatic birdsong classification. The company uses machine learning: it has its own training dataset, but it now also has many thousands of unlabelled, user-contributed sound recordings. Can you use these data to train a better classifier for the app? The classifier must also fit within the constraints of the running service: no more than 10 million parameters, and no more than 4 seconds to produce a decision. To work on this problem, you might use methods such as self-supervised learning, semi-supervised learning, pretraining, model distillation, or model pruning.
  • Birdsong automatic transcription
    • For images we have "object detection", and the equivalent for audio is "sound event detection" (SED). But can we successfully detect all the sound events in a dawn chorus, when many birds are singing early in the morning? We have a dataset of annotated birdsong recordings. SED has been studied using deep learning, but we don't know if it works well enough for dense sound scenes of multiple birds. If we could get this working, we could understand animal behaviour (for example, whether the birds take turns) and also improve biodiversity monitoring.
  • Perceiver as a model of bat echolocation foraging
    • Some bats can recognise their favourite flowers by the sound of the echolocation reflected back to them. We replicated this process using a neural network in a recent research paper. In the paper, we commented that there may be better ways to handle the sequential analysis of multiple echoes. A recent neural network architecture called Perceiver seems to be appropriate for this. Can you use it to analyse spectrograms of reflected bat echolocation calls?
  • Optimal updating of a deep learning classifier service
    • We deploy deep learning (DL) classifiers, often using convolutional neural networks (CNNs), to recognise animal images and sounds. We also continuously receive new data – some of it labelled, some of it unlabelled. What’s the optimal way to “update” our classifier for best results? We could train it again from scratch; we could "fine tune" it using the new data; we could keep part of the model "frozen" and re-train part of it. And how would we verify that the model was not worse than the previous one? In this project you will design and validate an approach for updating a classifier for use in a live deployed web service, considering theoretical and practical aspects of how to maintain and improve the quality of service.
  • Detecting animal sounds to improve animal welfare (with an industry partner)
    • We work with a small company that creates a device placed in farms to monitor the health and wellbeing of animals. They already monitor climate conditions and visual changes - but sound could also be a key indicator of animal health issues. Coughs, screams, or "alarm calls" could be detected (in cows, pigs, chickens, and more) so that any welfare problems can be flagged rapidly. In this project you will train an AI algorithm to detect specific animal sounds, using data from a real commercial product, and implementing the algorithm to run efficiently on a device (Raspberry Pi-based). You will validate the performance of the detector and try out improvements - and there is potential to deploy the system in a live setting, if the results are good.
  • Birdsong classification with spectrogram patches
    • A recent deep learning paper found that using “patch embeddings” was a powerful method for image classification. This has some similarities with older methods that used patches of spectrograms for sound classification. In this project you will adapt the recent image-classification work for birdsong classification, and find out whether you can create the next generation of powerful birdsong classifiers.
  • Sound event detection using transformers/perceivers
    • There has been recent interest in the "Transformer" and "Perceiver" neural network architectures, originally used for text data but now also applied to images and audio. Can they be used for sound event detection/classification? In this project you will run an audio detection (or classification) task and compare their performance against standard CNN models.
  • A simulated dataset for wildlife Sound Event Localisation and Detection
    • It is very useful to automatically localise and detect sounds – for example, multiple people speaking in a room. It would also be highly useful for outdoor wildlife monitoring – but there’s a problem: in order to train machine learning, we need a well-annotated training dataset, but it’s very hard to do this for outdoor natural sound recordings, because the sounds are complex and hard to annotate exactly. In this project, you will follow a recent method for creating synthetic SELD datasets but adapt it for outdoor sound. The challenges will be to obtain good sound source material, as well as “impulse responses” for natural reverberation, and to evaluate the naturalness of the synthesised sound recordings. Your work will enable the next generation of intelligent wildlife monitoring.
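For the novelty-detection topic above, one common baseline is a distance-based anomaly score: embed each upload, and flag it for expert attention if it lies far from everything seen in training. A minimal sketch in plain Python - the embeddings and the threshold are invented placeholders, standing in for features from a real deep model:

```python
import math

# Hypothetical embeddings of known-species training examples. In practice
# these would come from a deep feature extractor applied to images or audio.
train_embeddings = [
    [0.10, 0.20], [0.20, 0.10], [0.15, 0.25], [0.90, 0.80], [0.85, 0.90],
]

def novelty_score(embedding, train_set):
    """Distance to the nearest training embedding: large = unusual upload."""
    return min(math.dist(embedding, t) for t in train_set)

def is_novel(embedding, train_set, threshold=0.5):
    """Flag an upload for expert review if nothing in training is nearby."""
    return novelty_score(embedding, train_set) > threshold

print(is_novel([0.12, 0.22], train_embeddings))  # near the training data: False
print(is_novel([3.00, -2.0], train_embeddings))  # far from anything seen: True
```

A real project would replace the nearest-neighbour score with stronger methods (for example density estimation or reconstruction error), but the deploy-time logic can stay this simple.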
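The "better bird classifier" topic comes with hard service constraints, including the 10-million-parameter budget. A minimal sketch, with made-up layer shapes, of checking that budget and of magnitude pruning - zeroing the smallest weights to shrink a model:

```python
PARAM_BUDGET = 10_000_000  # the service constraint stated in the topic

# Hypothetical conv-layer weight shapes: (out_channels, in_channels, kh, kw).
layers = [(32, 1, 3, 3), (64, 32, 3, 3), (128, 64, 3, 3)]

def count_params(shapes):
    """Total number of weights across all layers."""
    total = 0
    for shape in shapes:
        n = 1
        for dim in shape:
            n *= dim
        total += n
    return total

def magnitude_prune(weights, fraction=0.5):
    """Zero out roughly the smallest-magnitude fraction of a flat weight list."""
    k = int(len(weights) * fraction)
    if k == 0:
        return list(weights)
    cutoff = sorted(abs(w) for w in weights)[k]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

print(count_params(layers), count_params(layers) <= PARAM_BUDGET)
print(magnitude_prune([0.9, -0.05, 0.4, -0.8, 0.01, 0.3]))
```

In a real pruning pipeline the model would then be fine-tuned to recover any accuracy lost; this sketch only shows the budget accounting.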
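Sound event detection, as in the birdsong transcription topic, is usually framed as turning per-frame class probabilities from a neural network into (onset, offset) event times. A minimal sketch with invented probabilities for a single species:

```python
def probs_to_events(frame_probs, threshold=0.5, hop_s=0.1):
    """Convert per-frame detection probabilities into (onset, offset) times in seconds."""
    events, onset = [], None
    for i, p in enumerate(frame_probs):
        if p >= threshold and onset is None:
            onset = i * hop_s                      # event starts here
        elif p < threshold and onset is not None:
            events.append((onset, i * hop_s))      # event ends here
            onset = None
    if onset is not None:                          # event ran to the end
        events.append((onset, len(frame_probs) * hop_s))
    return events

# Invented per-frame probabilities for one species (0.1 s per frame).
probs = [0.1, 0.2, 0.8, 0.9, 0.7, 0.2, 0.1, 0.6, 0.8, 0.3]
print(probs_to_events(probs))  # two events: roughly 0.2-0.5 s and 0.7-0.9 s
```

The hard part in a dense dawn chorus is overlap between many species; this per-class thresholding handles that by running once per species, but whether it works well enough there is exactly the research question.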
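The classifier-updating topic asks how to verify that a retrained model is not worse than the deployed one. One practical answer is a deployment gate on a held-back validation set; a minimal sketch, with invented accuracy numbers:

```python
def should_deploy(new_accuracy, old_accuracy, margin=0.01):
    """Accept the updated model only if it is not meaningfully worse
    than the currently deployed one on the same validation set."""
    return new_accuracy >= old_accuracy - margin

print(should_deploy(new_accuracy=0.87, old_accuracy=0.86))  # improved: True
print(should_deploy(new_accuracy=0.80, old_accuracy=0.86))  # regressed: False
```

A real service would also check per-class performance and latency, since aggregate accuracy can hide regressions on rare species.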

Also we have INTERNSHIP ideas:

  • Internship: “Human perception of bird sounds”
    • We have conducted a research study in which we played bird song “syllables” to birds, and asked the birds which sounds were similar to each other. But what do humans think? Would they make the same decisions as the birds? In this study you will reproduce our bird song comparison study with human volunteers. More specifically, volunteers hear 3 sounds – let’s call them A/X/B – and are asked: does X sound more similar to A or to B? From this study, we will be able to explore how similar or different human sound perception is to birds’ sound perception.
  • Internship: "Adding annotation ability to the world’s largest open birdsong database"
    • Xeno Canto is the world’s biggest database of birdsong, with 60,000 recordings submitted each year. For data analysis and animal behaviour research, it’s useful to have annotations of the recordings (“who sang when”), but the site doesn’t have a facility for that. In this project you will add this feature. You’ll work with the existing PHP/MySQL website, adding a database table to upload+validate+store CSV annotations, and perhaps also visualise these annotations or give users direct editing ability.
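The A/X/B procedure in the perception internship reduces to one question per trial, so scoring the study is only a few lines. A sketch with invented responses - real data would come from the human volunteers and the earlier bird study:

```python
def abx_agreement(human_choices, bird_choices):
    """Fraction of A/X/B trials on which humans and birds chose the same sound."""
    matches = sum(h == b for h, b in zip(human_choices, bird_choices))
    return matches / len(human_choices)

# Which of A/B each listener said X resembled, on the same five trials.
human = ["A", "B", "A", "A", "B"]
birds = ["A", "B", "B", "A", "B"]
print(abx_agreement(human, birds))  # agree on 4 of 5 trials: 0.8
```

Per-trial agreement like this is the simplest summary; a fuller analysis would also compare the similarity structure (which pairs of syllables each species confuses).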

Here are some PAST projects I've supervised:

  • 2021:
    • Voice anonymisation in wildlife sound recordings
    • Insect sound classification using deep learning
    • Detecting animal sounds to improve animal welfare
    • Wildlife sound source separation using deep learning
  • 2019:
    • Efficient bird sound detection on the Bela embedded system [Paper]
    • Short-term Prediction of Power Generated from Photovoltaic Systems using Gaussian Process Regression [Paper]
    • Listen like a bat: plant classification using echolocation and deep learning
    • Evaluating the impact of Full Spectrum Temporal Convolutional Neural Networks on Bird Species Sound Classification
  • 2018:
    • Detecting and classifying animal calls
  • 2017:
    • Estimating & Mitigating the Impact of Acoustic Environments on Digital Audio Signalling [Paper]

Check out the published papers to see some details from the kind of work we do!

Get in touch with me by email, info here. You're welcome to suggest a topic of your own, though to work with me it should concern new deep learning methods and/or animal sounds.
