
New journal article: Automatic acoustic identification of individuals in multiple species

New journal article from us!

"Automatic acoustic identification of individuals in multiple species: improving identification across recording conditions" - a collaboration published in the Journal of the Royal Society Interface.

For machine learning, the main takeaway is that data augmentation is not just a way to create bigger training sets: used judiciously, it can mitigate the effect of confounds in the training data. It can also be used at test time to check a classifier's robustness.
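To make the test-time idea concrete, here's a minimal sketch of augmentation as a robustness probe - purely illustrative, not the paper's actual pipeline. `clf` and `featurize` are hypothetical names: any trained classifier with a `.predict()` method, and any function mapping an audio clip to a 1-D feature vector.

```python
# A sketch of test-time augmentation as a robustness probe (illustrative, not
# the paper's actual pipeline). `clf` is any trained classifier with .predict(),
# `featurize` maps an audio clip to a 1-D feature vector - both hypothetical.
import numpy as np

def augment(audio, rng, noise_level=0.005):
    """Return a mildly perturbed copy of an audio clip: random gain plus noise."""
    gain = rng.uniform(0.8, 1.2)
    noise = rng.normal(0.0, noise_level, size=audio.shape)
    return gain * audio + noise

def robustness_check(clf, featurize, test_clips, test_labels, n_rounds=10, seed=0):
    """Fraction of (clip, perturbation) pairs for which the prediction stays correct."""
    rng = np.random.default_rng(seed)
    hits = []
    for clip, label in zip(test_clips, test_labels):
        for _ in range(n_rounds):
            pred = clf.predict(featurize(augment(clip, rng))[None, :])[0]
            hits.append(pred == label)
    return float(np.mean(hits))
```

A classifier whose predictions flip under such mild perturbations is likely leaning on recording conditions rather than the vocal signature itself.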

For bioacoustics, the main takeaway is that previous studies of automatic acoustic individual ID may have been overconfident in their claimed accuracy, due to dataset confounds - and we provide methods to quantify such issues, even without gathering new data.

This journal article is the output of a nice collaboration we've been working on, to try and bring machine learning closer to solving the problems zoologists really need solved. It's been very pleasant working on these ideas with Pavel Linhart and Tereza Petrusková (I didn't actually meet Martin Šálek!). The problem of detecting individual animals' vocal signatures is not yet solved, but I hope this paper nudges us part of the way there, and helps the field get there more efficiently through careful use of audio datasets.

| science |

Suggested reading: getting going with deep learning

Based on a conversation we had in the Machine Listening Lab last week, here are some blogs and other things you can read when you're - say - a new PhD student who wants to get started with applying/understanding deep learning. We can recommend plenty of textbooks too, but here it's mainly blogs and other informal introductions. Our recommended reading:

PRACTICAL:

ADVANCED:

| science |

Academics who fly less - my story

I've been struggling with the tension between academia and flying for a long time. The vast majority of my holidays I've done by train and the occasional boat - for example the train from London to southern Germany is a lovely ride, as is London to Edinburgh or Glasgow. But in academia the big issue is conferences and invited seminars - much of the time you don't get to choose where they are, and much of the time there are specific conferences that you "must" be publishing at, or your students "must" be at for their career, or you're invited to give a talk.

What can you do? Well, you can't give up. So here's what I've done, for the past five years at least:

  • I've declined various opportunities to fly (e.g. to North America and Australia - I'm Europe based). Sometimes this hurts - there are great meetings that you'd like to be at. In general, though, you usually find there are similar opportunities nearer by. You'll probably meet most of the people at one of those events anyway. In the big picture, it's probably better for academia to be structured as an overlapping patchwork network, rather than around a single point of groupthink.
  • I've taken the train to many conferences and meetings. From the UK I've taken the train to France, Spain, Germany, Netherlands, and I'm happy to go further. If you haven't done long train journeys for work then maybe you don't realise: with a laptop, many long-distance train journeys are ideal peaceful office days, with a reserved seat and beautiful views scrolling past. (UK folks: ask The Man In Seat 61 for the best train trips.) If your concern is making time for the journey, don't worry! You'll be much more productive than when you fly!
  • When invited to fly somewhere, I always discuss lower-carbon ways of doing it. Rome2Rio is a handy site to compare how to get anywhere by different means. If flying is the only way and I'm tempted to accept the invitation, I ask the inviters to pay for carbon offsetting too.
    Many university administrations don't want to pay for carbon offsets - why? This needs to change. If they're paying for flights they should be paying for the negative externalities of them. I'm not worried here about whether carbon offsetting is a good excuse or not - I'm concerned about research being more aware of its responsibilities.
  • If travelling somewhere (even by train), always try to make the most of the journey by finding other opportunities while out there - e.g. a new research group, company or NGO to say hello to (even if just for a cuppa). It's good to make face-to-face contact because that makes it much easier to do remote collaboration or coordination with the same people at other times, reducing the need for extra trips.
    (Talking to my German colleagues, I learn that German finance rules mean you have to travel home as soon as possible after the event, i.e. you can't roll multiple things into one trip - that's an unfortunate rule, and we should change it.)
  • And of course I've done plenty of video-conferencing and audio-conferencing. It doesn't replace face-to-face meetings and we should be realistic about that, but it's a tool to use.

There's a cost implication which I haven't mentioned: flights are unfortunately often cheaper than trains and stopovers. This needs to change, of course - and it can be a bit tricky when you're invited to speak somewhere and the cost ends up more than the organisers expected. However, I've been managing a funded research project for the past five years and I've noticed that in fact I've spent much less money on travel than I had projected. Why? Well, back when I wrote the budget I costed in international flights and so on. But my adapted approach to travel means I take fewer big long-distance trips and get more out of them, because I combine things into one trip, and I've skipped certain distant meetings in favour of ones closer to home - all of which means the cost is less than it would have been.

By the way, this handy flight CO2 calculator can help to work out the impact of specific trips, including multi-stop trips, so you can calculate whether combining flights into a round-trip is sensible.

None of these are absolute rules. We can't carry all the burden solo, and we have to make compromises between different priorities. But if we all make some changes we can adapt academia to current realities. We can do this together - which is why I've signed my name on No Fly Climate Sci, a place for academics collectively to pledge to fly less. As I said, you don't have to be absolute about this, and the No Fly Climate Sci pledge acknowledges that. Join me?

| science |

ICEI 2018 special session "Analysis of ecoacoustic recordings: detection, segmentation and classification" - full programme

I'm really pleased about the selection of presentations we have for our special session at ICEI 2018 in Jena (Germany), 24th-28th September. The session is chaired by Jérôme Sueur and me, and is titled "Analysis of ecoacoustic recordings: detection, segmentation and classification".

Our session is special session S1.2 in the programme and here's a list of the accepted talks:

  • AUREAS: a tool for recognition of anuran vocalizations
    William E. Gómez, Claudia V. Isaza, Sergio Gómez, Juan M. Daza and Carol Bedoya
  • Content description of very-long-duration recordings of the environment
    Michael Towsey, Aniek Roelofs, Yvonne Phillips, Anthony Truskinger and Paul Roe
  • What male humpback whale song chorusing can and cannot tell us about their ecology: strengths and limitations of passive acoustic monitoring of a vocally active baleen whale
    Anke Kügler and Marc Lammers
  • Improving acoustic monitoring of biodiversity using deep learning-based source separation algorithms
    Mao-Ning Tuanmu, Tzu-Hao Lin, Joe Chun-Chia Huang, Yu Tsao and Chia-Yun Lee
  • Acoustic sensor networks and machine learning: scalable ecological data to advance evidence-based conservation
    Matthew McKown and David Klein
  • Extracting information on bat activities from long-term ultrasonic recordings through sound separation
    Chia-Yun Lee, Tzu-Hao Lin and Mao-Ning Tuanmu
  • Information retrieval from marine soundscape by using machine learning-based source separation
    Tzu-Hao Lin, Tomonari Akamatsu, Yu Tsao and Katsunori Fujikura
  • A Novel Set of Acoustic Features for the Categorization of Stridulatory Sounds in Beetles
    Carol Bedoya, Eckehard Brockerhoff, Michael Hayes, Richard Hofstetter, Daniel Miller and Ximena Nelson
  • Noise robust 2D bird localization via sound using microphone arrays
    Daniel Gabriel, Ryosuke Kojima, Kotaro Hoshiba, Katsutoshi Itoyama, Kenji Nishida and Kazuhiro Nakadai
  • Fine-scale observations of spatiotemporal dynamics and vocalization type of birdsongs using microphone arrays and unsupervised feature mapping
    Reiji Suzuki, Shinji Sumitani, Naoaki Chiba, Shiho Matsubayashi, Takaya Arita, Kazuhiro Nakadai and Hiroshi Okuno
  • Articulating citizen science, automatic classification and free web services for long-term acoustic monitoring: examples from bat monitoring schemes in France and UK
    Yves Bas, Kevin Barre, Christian Kerbiriou, Jean-Francois Julien and Stuart Newson

We also have poster presentations on related topics:

  • Towards truly automatic bird audio detection: an international challenge
    Dan Stowell
  • Assessing Ecosystem Change using Soundscape Analysis
    Diana C. Duque-Montoya, Claudia Isaza and Juan M. Daza
  • MatlabHTK: a simple interface for bioacoustic analyses using hidden Markov models
    Louis Ranjard
  • MAAD, a rational unsupervised method to estimate diversity in ecoacoustic recordings
    Juan Sebastian Ulloa, Thierry Aubin, Sylvain Haupert, Chloé Huetz, Diego Llusia, Charles Bouveyron and Jerome Sueur
  • Underwater acoustic habitats: towards a toolkit to assess acoustic habitat quality
    Irene Roca and Ilse Van Opzeeland
  • Focus on geophony: what weather sounds can tell
    Roberta Righini and Gianni Pavan
  • Reverse Wavelet Interference Algorithm for Detection of Avian Species and Characterization of Biodiversity
    Sajeev C Rajan, Athira K and Jaishanker R
  • Automatic Bird Sound Detection: Logistic Regression Based Acoustic Occupancy Model
    Yi-Chin Tseng, Bianca Eskelson and Kathy Martin
  • A software detector for monitoring endangered common spadefoot toad populations
    Guillaume Dutilleux and Charlotte Curé
  • PylotWhale a python package for automatically annotating bioacoustic recordings
    Maria Florencia Noriega Romero Vargas, Heike Vester and Marc Timme

You can register for the conference here - early discount until 15th Sep. See you there!

| science |

Notes from LVA-ICA conference 2018

This week we've been at the LVA-ICA 2018 conference, at the University of Surrey. A lot of the papers presented were on source separation. Here are some notes:

  • Evrim Acar gave a great tutorial on tensor factorisation. Slides here
  • Hiroshi Sawada described a nice extension of "joint diagonalisation", applying it in synchronised fashion across all frequency bands at once. He also illustrated well how this method reduces to some existing well-known methods, in certain limiting cases.
  • Ryan Corey showed his work on helping smart-speaker devices (such as Alexa or whatever) to estimate the relative transfer function which helps with multi-microphone sound processing. He made use of the wake-up keywords that are used for such devices ("Hi Marvin" etc), taking advantage of the known content to estimate the RTF for "free" i.e. with no extra interaction. He DTW-aligned the spoken keyword against a dictionary, then used that to mask the recorded sound and estimate the RTF.
  • Stefan Uhlich presented their (Sony group's) strongly-performing SiSEC sound separation method. Interestingly, they use a variant of DenseNet, as well as a BLSTM, to estimate a time-frequency mask. Stefan also said that once the estimates have been made, a crucial improvement was to re-estimate them by putting the estimated masks together through a multichannel Wiener filtering stage.
  • Ama Marina Kreme presented her new task of "phase inpainting" and methods to solve it - estimating a missing portion of phases in a spectrogram, when all of the magnitudes and some of the phases are known. I can see this being useful in post-processing of source separation outputs, though her application was in engine noise analysis with an industrial collaborator.
  • Lucas Rencker presented some very nice ideas in "consistent dictionary learning" for signal declipping. Here, "consistent" means that the reconstructed signal should fill in the missing regions in a way that matches the clipping - if some part of the signal was clipped at a maximum of X, then its reconstruction should take values greater than or equal to X there (see the sketch after this list). Here's his Python code of the declipping method. Apparently the state of the art in this task is a method called "A-SPADE" by Kitic (2015). Pavel Zaviska presented an analysis of A-SPADE and S-SPADE, improving the latter but not beating A-SPADE.
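To illustrate that consistency constraint, here's my own small numpy sketch (not Lucas's code): projecting an estimated signal onto the set of signals consistent with an observation clipped at level `theta`.

```python
# My own numpy illustration of the clipping-consistency constraint (not Lucas
# Rencker's code): given an observation y_clipped that was clipped at +/- theta,
# project an estimate x_est onto the set of signals consistent with it.
import numpy as np

def project_consistent(x_est, y_clipped, theta):
    x = x_est.copy()
    unclipped = np.abs(y_clipped) < theta
    x[unclipped] = y_clipped[unclipped]        # unclipped samples are known exactly
    pos = y_clipped >= theta
    x[pos] = np.maximum(x[pos], theta)         # clipped high: reconstruction >= theta
    neg = y_clipped <= -theta
    x[neg] = np.minimum(x[neg], -theta)        # clipped low: reconstruction <= -theta
    return x
```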

An interesting feature of the week was the "SiSEC" Signal Separation Evaluation Challenge. We saw posters of some of the methods used to separate musical recordings into their component stems, but even better, we were used as guinea-pigs, doing a quick listening test to see which methods we thought were giving the best results. In most SiSEC work this is evaluated using computational measures such as signal-to-distortion ratio (SDR), but there's quite a lot of dissatisfaction with these "objective" measures since there's plenty that they get wrong. At the end of LVA-ICA the organisers announced the results of the listening test: surprisingly or not, they broadly showed a strong correlation with the SDR measures, though there were some tracks for which this didn't hold. More analysis of the data to come, apparently.

From our gang, my students Will and Delia presented their posters and both went really well. Here's the photographic evidence:

  • Delia Fano Yela's poster about source separation using graph theory and Kernel Additive Modelling - read the preprint here
  • Will Wilkinson's poster "A Generative Model for Natural Sounds Based on Latent Force Modelling" - read the preprint here

Also from our research group (though not working with me), Daniel Stoller presented a poster as well as a talk, getting plenty of interest for his deep learning methods for source separation - read the preprint here.

| science |

Thinning distance between point process realisations

The paper "Wasserstein Learning of Deep Generative Point Process Models" published at the NIPS 2017 conference has some interesting ideas in it, connecting generative deep learning - which is mostly used for dense data such as pixels - together with point processes, which are useful for "spiky" timestamp events.

They use the Wasserstein distance (aka the "earth-mover's distance") to compare sequences of spikes, and they do acknowledge that this has advantages and disadvantages. It's all about pushing things around until they match up - e.g. move a spike a few seconds earlier in one sequence, so that it lines up with a spike in the other sequence. It doesn't nicely account for insertions or deletions, which is tricky because it's quite common to have "missing" spikes or added "clutter" in data coming from detectors, for example. It'd be better if this method could incorporate more general "edit distances", though that's non-trivial.

So I was thinking about distances between point processes. More reading to be done. But a classic idea, and a good way to think about insertions/deletions, is called "thinning". It's where you take some data from a point process and randomly delete some of the events, to create a new event sequence. If you're using Poisson processes then thinning can be used for example to sample from a non-stationary Poisson process, essentially by "rejection sampling" from a stationary one.
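For the curious, here's a minimal sketch of that classic thinning recipe (often attributed to Lewis and Shedler). The names `rate_fn` and `rate_max` are mine, for illustration; we assume `rate_fn(t) <= rate_max` everywhere on the interval.

```python
# A minimal sketch of sampling a non-stationary Poisson process by thinning a
# stationary one. `rate_fn` and `rate_max` are illustrative names; we assume
# rate_fn(t) <= rate_max for all t in [0, duration].
import numpy as np

def sample_inhomogeneous_poisson(rate_fn, rate_max, duration, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Sample a stationary "parent" process at the maximum rate.
    n = rng.poisson(rate_max * duration)
    parent = np.sort(rng.uniform(0.0, duration, size=n))
    # 2. Thin: keep each event with probability rate_fn(t) / rate_max.
    probs = np.array([rate_fn(t) for t in parent]) / rate_max
    keep = rng.uniform(size=parent.size) < probs
    return parent[keep]
```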

Thinning is a probabilistic procedure: in the simplest case, take each event, flip a coin, and keep the event only if the coin says heads. So if we are given one event sequence, and a specification of the thinning procedure, we can define the likelihood that this would have produced any given "thinned" subset of events. Thus, if we take two arbitrary event sequences, we can imagine their union was the "parent" from which they were both derived, and calculate a likelihood that the two were generated from it. (Does it matter if the parent process actually generated this union list, or if there were unseen "extra" parent events that were actually deleted from both? In simple models where the thinning is independent for each event, no: the deletion process can happen in any order, and so we can assume those common deletions happened first to take us to some "common ancestor". However, this does make it tricky to compare distances across different datasets, because the unseen deletions are constant multiplicative factors on the true likelihood.)

We can thus define a "thinning distance" between two point process realisations as the negative log-likelihood under this thinning model. Clearly, the distance depends entirely on the number of events the two sequences have in common, and the numbers of events that are unique to each - the actual time positions of the events have no effect in this simple model; it's just whether they line up or not. It's one of the simplest comparisons we can make. It's complementary to the Wasserstein distance, which is all about time-position and not about insertions/deletions.

This distance boils down to:

NLL = -( n1 * log(n1/nu)  +  n2 * log(n2/nu)  +  (nu-n1) * log(1 - n1/nu)  +  (nu-n2) * log(1 - n2/nu) )

where "n1" is the number of events in seq 1, "n2" in seq 2, and "nu" in their union.

Does this distance measure work? Yes, at least in limited toy cases. I generated two "parent" sequences (using the same rate for each) and separately thinned each one ten times. I then measured the thinning distance between all pairs of the child sequences, and there's a clear separation between related and unrelated sequences:

Distances between distinct children of same process:
Min 75.2, Mean 93.3, Median 93.2, Max 106.4
Distances between children of different processes:
Min 117.3, Mean 137.7, Median 138.0, Max 167.3

[Python example script here]

This is nice because it's easy to calculate. To be able to do work like in the paper I cited above, we'd need to be able to optimise against something like this, and even better, to combine it into a full edit distance, one which we can parameterise according to the situation (e.g. to balance the relative cost of moves vs. deletions).

This idea of distance based on how often the spikes coincide relates to "co-occurrence metrics" previously described in the literature. So far, I haven't found a co-occurrence metric that takes this form. To relax the strict requirement of events hitting at the exact same time, there's often some sort of quantisation or binning involved in practice, and I'm sure that'd help for direct application to data. Ideally we'd generalise over the possible quantisations, or use a jitter model to allow for the fact that spikes might move.

| science |

My PhD research students

I'm lucky to be working with a great set of PhD students on a whole range of exciting topics about sound and computation. (We're based in C4DM and the Machine Listening Lab.) Let me give you a quick snapshot of what my students are up to!

I'm primary supervisor for …

| science |

IBAC 2017 India - bioacoustics research

I'm just flying back from the International Bioacoustics Congress 2017, held in Haridwar in the north of India. It was a really interesting time. I'm glad that IBAC was successfully brought to India, i.e. to a developing country with a more fragmented bioacoustics community (I think!) than in the west …

| science |

Review of "Living Together: Mind and Machine Intelligence" by Neil Lawrence

In the early twentieth century when the equations of quantum physics were born, physicists found themselves in a difficult position. They needed to interpret what the quantum equations meant in terms of their real-world consequences, and yet they were faced with paradoxes such as wave-particle duality and "spooky action at …

| science |

Sneak preview: papers in special sessions on bioacoustics and machine listening

This season, I'm lead organiser for two special conference sessions on machine listening for bird/animal sound: EUSIPCO 2017 in Kos, Greece, and IBAC 2017 in Haridwar, India. I'm very happy to see the diverse selection of work that has been accepted for presentation - the diversity of the research itself …

| science |
