Vegan mozzarella cheese for pizza

Excited today to get a delivery of the new mail-order vegan cheese from my friend's new London cheezmakery, Black Arts Vegan! It came beautifully packed, see:


Their first cheese is a vegan mozzarella. We unpacked the cheese and had a taste - yes, a good clear taste like standard mozzarella. But they've worked on getting it right so that it goes melty and gooey, and browns nicely in the oven. So let's try it on a pizza!


It really does come into its own on the pizza - the lovely warm melted mozzarella consistency is great, and it's easy to forget that it's plant-based and not dairy. Magic :)

| food |

Chickpea chana curry with tamarind and baby aubergine

Tamarind is ace. It imparts a deep, rich and sweet flavour to curries. Buy a block and put it in your fridge - it keeps for months - and you can hack a piece off and chuck it in your curry just like that. That's what I did in this lovely chana (chickpea) curry.

Note that the block sort-of dissolves as it cooks, and leaves behind inedible pips. If you prefer not to spit out pips then you could put the tamarind in a paper teabag perhaps, so you can fish it out afterwards.

You can change the veg choices in here - the red pepper is a nice bright contrasting flavour - but in particular the baby aubergines do this great thing of going gooey and helping to create the sauce. Full-sized aubergines don't seem to do that, in my experience. It's the tamarind and the aubergine that add body to the sauce, I think - I don't add any tomato or anything like that, and yet the sauce is flavoursome and thickened.

  • 1 tbsp veg oil
  • 1 tsp mustard seeds
  • 1 tsp cumin seeds
  • 2 cloves
  • 1 onion, chopped fine-ish
  • 1 red chilli, sliced (reduce amount if you want less heat)
  • 1 tsp cumin powder
  • 1 tsp coriander powder
  • 1 tsp turmeric powder
  • 1 red pepper, chopped into slices/dices
  • 4 baby aubergines, chopped into 2cm chunks
  • 1 400g tin chickpeas, drained and rinsed
  • 1 packet of cooked beetroot, drained and quartered (you can add the drained beetroot juices to the pot later)
  • A piece of tamarind block, about a 2cm cube
  • Black pepper
  • 1 bunch coriander leaves, rinsed and roughly chopped

Heat the oil in a largeish deep pan which has a lid, on quite a hot frying heat. Add the spice seeds and the cloves - you might like to put the lid half-on at this point because as the seeds fry and pop they'll jump around and may jump out at you.

After 30 secs or so with the seeds, add the onion, then the chilli and the powdered spices. Give it a good stir round. Let the onion fry for a minute or two before adding the red pepper and the aubergines. Fry this all for another couple of minutes, stirring occasionally.

Add the chickpeas, the beetroot with its juices, the tamarind block, and maybe 1 cup of boiling water (don't add too much water - not enough to cover the mixture). Give this a good stir, then put the lid on, turn the heat down to its lowest, and let it bubble for 30 minutes or so. It can be longer or shorter, I'd say 20 minutes is an absolute minimum. No need to stir now, you can go and do something else, as long as you're sure it's not going to bubble over!

When the curry is nearly ready, take the lid off, turn the heat up to thicken the liquid if needed, and give it all a stir.

Give it a good twist of black pepper, then serve it up in bowls, with coriander leaf sprinkled on top. Serve it with bread (e.g. naan or roti).

| recipes |

Birds heard through bells

I'm very happy to publish a video of this installation piece that Sarah Angliss and I collaborated on a couple of years ago. We used computational methods to transcribe a dawn chorus birdsong recording into music for Sarah's robot carillon:

We presented this at Soundcamp in 2016. We'd also done a preview of it at an indoor event, but on this lush spring morning, with the very active birds all around in the park, it slotted in just perfectly.

If you listen you'll find that, obviously, the bells don't directly sound like birds singing. How could they! Ever since I started my research on birdsong, I've been fascinated by the rhythms of birdsong and how strongly they differ from human rhythms, and what I love about this piece is the way the bells take on that non-human patterning and re-present it in a way that makes it completely unfamiliar (yet still pleasing). We humans are so used to birdsong as background sound that we fail to notice what's so otherworldly about it. The piece has a lovely ebb and flow, and is full of little gestures and structures. None of that was composed by us - it all comes directly from an automatic transcription of a dawn chorus. (We did of course make creative decisions about how the automatic transcription was mapped - for example, the pitch range: we transposed it to get the best alignment between the birds' and the bells' singing ranges.) And in context, with the ongoing atmosphere of the park, the birdsong and the children, it works really well.

| art |

Thinning distance between point process realisations

The paper "Wasserstein Learning of Deep Generative Point Process Models" published at the NIPS 2017 conference has some interesting ideas in it, connecting generative deep learning - which is mostly used for dense data such as pixels - together with point processes, which are useful for "spiky" timestamp events.

They use the Wasserstein distance (aka the "earth-mover's distance") to compare sequences of spikes, and they do acknowledge that this has advantages and disadvantages. It's all about pushing things around until they match up - e.g. move a spike a few seconds earlier in one sequence, so that it lines up with a spike in the other sequence. It doesn't nicely account for insertions or deletions, which is tricky because it's quite common to have "missing" spikes or added "clutter" in data coming from detectors, for example. It'd be better if this method could incorporate more general "edit distances", though that's non-trivial.
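
To make the transport idea concrete, here's a minimal sketch (my own illustration, not code from the paper): for two spike sequences with the same number of events on a timeline, the earth-mover's distance is found by sorting both and matching the event times pairwise.

```python
import numpy as np

def spike_emd(times_a, times_b):
    """Earth-mover's distance between two 1-D spike sequences
    with equal event counts: sort each sequence, then sum the
    absolute differences of the paired event times."""
    a = np.sort(np.asarray(times_a, dtype=float))
    b = np.sort(np.asarray(times_b, dtype=float))
    if len(a) != len(b):
        raise ValueError("this simple form needs equal event counts")
    return float(np.sum(np.abs(a - b)))
```

Nudging one spike slightly changes this distance smoothly, but there's no natural way to express "delete this spike" - which is exactly the limitation just mentioned.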

So I was thinking about distances between point processes. More reading to be done. But a classic idea, and a good way to think about insertions/deletions, is called "thinning". It's where you take some data from a point process and randomly delete some of the events, to create a new event sequence. If you're using Poisson processes then thinning can be used for example to sample from a non-stationary Poisson process, essentially by "rejection sampling" from a stationary one.
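
As a sketch of that sampling trick (the standard Lewis-Shedler construction; the function name and parameters here are my own): simulate a stationary Poisson process at a rate that upper-bounds the target rate function, then keep each candidate event with probability rate(t)/rate_max.

```python
import random

def sample_nonstationary_poisson(rate_fn, rate_max, duration, rng=random):
    """Sample a non-stationary Poisson process on [0, duration] by
    thinning: draw candidates from a stationary process at rate_max,
    then keep each candidate t with probability rate_fn(t) / rate_max."""
    events = []
    t = 0.0
    while True:
        t += rng.expovariate(rate_max)  # inter-arrival time of next candidate
        if t > duration:
            break
        if rng.random() < rate_fn(t) / rate_max:
            events.append(t)  # candidate survives the thinning
    return events
```

Note that rate_fn must never exceed rate_max, otherwise the keep-probability gets silently clipped at 1 and the sample is biased.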

Thinning is a probabilistic procedure: in the simplest case, take each event, flip a coin, and keep the event only if the coin says heads. So if we are given one event sequence, and a specification of the thinning procedure, we can define the likelihood that this would have produced any given "thinned" subset of events. Thus, if we take two arbitrary event sequences, we can imagine their union was the "parent" from which they were both derived, and calculate a likelihood that the two were generated from it. (Does it matter if the parent process actually generated this union list, or if there were unseen "extra" parent events that were actually deleted from both? In simple models where the thinning is independent for each event, no: the deletion process can happen in any order, and so we can assume those common deletions happened first to take us to some "common ancestor". However, this does make it tricky to compare distances across different datasets, because the unseen deletions are constant multiplicative factors on the true likelihood.)

We can thus define a "thinning distance" between two point process realisations as the negative log-likelihood under this thinning model. Clearly, the distance depends entirely on the number of events the two sequences have in common, and the numbers of events that are unique to each - the actual time positions of the events have no effect in this simple model; it's just whether they line up or not. It's one of the simplest comparisons we can make. It's complementary to the Wasserstein distance, which is all about time-position and not about insertions/deletions.

This distance boils down to:

NLL = -( n1 * log(n1/nu)  +  n2 * log(n2/nu)  +  (nu-n1) * log(1 - n1/nu)  +  (nu-n2) * log(1 - n2/nu) )

where "n1" is the number of events in seq 1, "n2" in seq 2, and "nu" in their union.
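
Here's a direct implementation of that formula (a sketch under my own assumptions: events are matched by exact equality of their timestamps, and the function name is mine):

```python
import math

def thinning_distance(seq1, seq2):
    """Negative log-likelihood that seq1 and seq2 were both produced
    by independently thinning their union."""
    s1, s2 = set(seq1), set(seq2)
    nu = len(s1 | s2)           # number of events in the union
    nll = 0.0
    for n in (len(s1), len(s2)):
        p = n / nu              # estimated keep-probability for this child
        if p > 0:
            nll -= n * math.log(p)              # kept events
        if p < 1:
            nll -= (nu - n) * math.log(1 - p)   # deleted events
    return nll
```

Identical sequences give a distance of zero, and the distance grows as the overlap shrinks, as you'd hope.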

Does this distance measure work? Yes, at least in limited toy cases. I generated two "parent" sequences (using the same rate for each) and separately thinned each one ten times. I then measured the thinning distance between all pairs of the child sequences, and there's a clear separation between related and unrelated sequences:

Distances between distinct children of same process:
Min 75.2, Mean 93.3, Median 93.2, Max 106.4
Distances between children of different processes:
Min 117.3, Mean 137.7, Median 138.0, Max 167.3

[Python example script here]

This is nice because it's easy to calculate. To be able to do work like in the paper I cited above, we'd need to be able to optimise against something like this - and even better, to combine it into a full edit distance, one which we can parameterise according to the situation (e.g. to balance the relative cost of moves vs. deletions).

This idea of distance based on how often the spikes coincide relates to "co-occurrence metrics" previously described in the literature. So far, I haven't found a co-occurrence metric that takes this form. To relax the strict requirement of events hitting at the exact same time, there's often some sort of quantisation or binning involved in practice, and I'm sure that'd help for direct application to data. Ideally we'd generalise over the possible quantisations, or use a jitter model to allow for the fact that spikes might move.

| science |

My PhD research students

I'm lucky to be working with a great set of PhD students on a whole range of exciting topics about sound and computation. (We're based in C4DM and the Machine Listening Lab.) Let me give you a quick snapshot of what my students are up to!

I'm primary supervisor for Veronica and Pablo:

  • Veronica is working on deep learning techniques for jointly identifying the species and the time-location of bird sounds in audio recordings. A particular challenge is the relatively small amount of labelled data available for each species, which forces us to pay attention to how the network architecture can make best use of the data and the problem structure.

  • Pablo is using a mathematical framework called Gaussian processes as a new paradigm for automatic music transcription - the idea is that it can perform high-resolution transcription and source separation at the same time, while also making use of some sophisticated "priors" i.e. information about the structure of the problem domain. A big challenge here is how to scale it up to run over large datasets.

I'm joint-primary supervisor for Will and Delia:

  • Will is developing a general framework for analysing sounds and generating new sounds, combining subband/sinusoidal analysis with probabilistic generative modelling. The aim is that the same model can be used for sound types as diverse as footsteps, cymbals, dog barks...

  • Delia is working on source separation and audio enhancement, using a lightweight framework based on nonlocal median filtering, which works without the need for large training datasets or long computation times. The challenge is to adapt and configure this so it makes best use of the structure of the data that's implicitly there within an audio recording.

I'm secondary supervisor for Jiajie and Sophie:

  • Jiajie is studying how singers' pitch tuning is affected when they sing together. She has designed and conducted experiments with two or four singers, in which sometimes they can all hear each other, sometimes only one can hear the other, etc. Many singers or choir conductors have their own folk-theories about what affects people's tuning, but Jiajie's experiments are making scientific measurements of it.

  • Sophie is exploring how to enhance a sense of community (e.g. for a group of people living together in a housing estate) through technological interventions that provide a kind of mediated community awareness. Should inhabitants gather around the village square or around a Facebook group? Those aren't the only two ways! (A paper is coming later!)

| science |

My climatarian diet

I was shocked - and frankly, really sceptical - to realise that eating meat was one of the biggest climate impacts I was having. On the flip side, that's a good thing, because it's one of the easiest things to change on our own, without upending society. Easier than rerouting the air travel industry! I've been doing it for a couple of years now and hey - it's actually been surprisingly easy and enjoyable.

Yes, meat-eating really is an important factor in climate change: see e.g. this recent letter from scientists to the world - but see also the great book "How Bad Are Bananas" for a nice readable intro. For even more detail, this 2014 paper and this 2013 paper both quantify the emissions of different diets.

The great thing is you don't have to be totally vegetarian, and you don't have to be an absolutist. Don't set yourself a goal that's way too far out of reach. Don't set yourself up for failure.

So, my climatarian diet. Here's how it works:

  1. Don't eat any more beef or lamb, or other ruminants. Those are by far the most climate-changing animals to eat (basically because of all the methane they produce... on top of the impacts of all the crops you need to feed them up, etc).

Actually, that's the only rule I follow absolutely. Avoid other meat as far as possible, but don't worry too much if you end up having some chicken/pork/etc now and again. The impact of chicken/pork/etc is not to be ignored, but it's much less than beef's, and I find I've not really wanted much meat since I shifted my eating habits a bit.

More tips:

  1. Seafood and various fish are good CO2-friendly sources of protein and vitamins for someone who isn't eating meat, so do go for those - especially shellfish, and oily fish. (Though it's hard to be sure which fish are better or worse - see e.g. this article about how much fuel it takes to get different fish out of the sea. Farmed fish must be easier, right?)
  2. Try not to eat too much cheese, but again don't worry too much.
  3. When people ask, it's easiest just to say "vegetarian" - they usually know how to feed you then :)

I never thought I'd be able to give up on beef (steaks, roasts, burgers) but the weirdest thing is, within a couple of months I just had no inclination towards it. Funny how these seemingly unchangeable things can change.

If CO2 is what you care about, then you might end up preferring battery-farmed animals to free-range: battery farming is all about efficiency, and typically uses fewer resources per animal (it also restricts animal movements). I'm saying this not to justify it, but to point out that maybe you don't want to have just a one-track mind.

Vegans have an even better CO2 footprint than vegetarians or almost-ish-vegetarians like me. Vegan food is getting better and better but I don't think I'm going to be ready to set the bar that high for myself, not for the foreseeable. Still, sampling vegan now and again is also worth doing.

Is this plan the ideal one? Of course not. The biggest problem, CO2-wise, is cheese, since personally I just don't know how to cut that out when I'm already cutting out meat etc - it's just a step too far for me. The book "How Bad Are Bananas" points out that a kilogram of cheese often has a CO2 footprint higher than that of some meats. Yet vegetarians who eat cheese still contribute one-third less CO2 than meat-eaters [source] - because of course they don't eat a steak-sized portion of cheese every day!

You don't have to be perfect - you just have to be a bit better. Give it a go?

| food |

How to package SuperCollider for Debian and Ubuntu Linux

SuperCollider works on Linux just great. I've been responsible for one specific part of that in recent years, which is that when a new release of SuperCollider is available, I put it into the Debian official package repository - which involves a few obscure admin processes - and then this means ...

| supercollider |

IBAC 2017 India - bioacoustics research

I'm just flying back from the International Bioacoustics Congress 2017, held in Haridwar in the north of India. It was a really interesting time. I'm glad that IBAC was successfully brought to India, i.e. to a developing country with a more fragmented bioacoustics community (I think!) than in ...

| science |

Food I found in India

I just spent a couple of weeks in northern India (to attend IBAC). Did I have some great food? Of course I did.

No fancy gastro stuff, just tried the local cuisine. Funny thing is, as an English person living in East London for more than a decade, I ...

| food |

A pea soup

A nice fresh pea soup can be great sometimes, and also a good thing to do with leftovers. This worked well for me when I had some leftover spring onions, creme fraiche and wasabi. You can of course leave out the wasabi, or swap the creme fraiche for cream or ...

| recipes |