
ICEI 2018 special session "Analysis of ecoacoustic recordings: detection, segmentation and classification" - full programme

I'm really pleased about the selection of presentations we have for our special session at ICEI2018 in Jena (Germany) 24th-28th September. The session is chaired by Jérôme Sueur and me, and is titled "Analysis of ecoacoustic recordings: detection, segmentation and classification".

Our session is special session S1.2 in the programme and here's a list of the accepted talks:

  • AUREAS: a tool for recognition of anuran vocalizations
    William E. Gómez, Claudia V. Isaza, Sergio Gómez, Juan M. Daza and Carol Bedoya
  • Content description of very-long-duration recordings of the environment
    Michael Towsey, Aniek Roelofs, Yvonne Phillips, Anthony Truskinger and Paul Roe
  • What male humpback whale song chorusing can and cannot tell us about their ecology: strengths and limitations of passive acoustic monitoring of a vocally active baleen whale
    Anke Kügler and Marc Lammers
  • Improving acoustic monitoring of biodiversity using deep learning-based source separation algorithms
    Mao-Ning Tuanmu, Tzu-Hao Lin, Joe Chun-Chia Huang, Yu Tsao and Chia-Yun Lee
  • Acoustic sensor networks and machine learning: scalable ecological data to advance evidence-based conservation
    Matthew McKown and David Klein
  • Extracting information on bat activities from long-term ultrasonic recordings through sound separation
    Chia-Yun Lee, Tzu-Hao Lin and Mao-Ning Tuanmu
  • Information retrieval from marine soundscape by using machine learning-based source separation
    Tzu-Hao Lin, Tomonari Akamatsu, Yu Tsao and Katsunori Fujikura
  • A Novel Set of Acoustic Features for the Categorization of Stridulatory Sounds in Beetles
    Carol Bedoya, Eckehard Brockerhoff, Michael Hayes, Richard Hofstetter, Daniel Miller and Ximena Nelson
  • Noise robust 2D bird localization via sound using microphone arrays
    Daniel Gabriel, Ryosuke Kojima, Kotaro Hoshiba, Katsutoshi Itoyama, Kenji Nishida and Kazuhiro Nakadai
  • Fine-scale observations of spatiotemporal dynamics and vocalization type of birdsongs using microphone arrays and unsupervised feature mapping
    Reiji Suzuki, Shinji Sumitani, Naoaki Chiba, Shiho Matsubayashi, Takaya Arita, Kazuhiro Nakadai and Hiroshi Okuno
  • Articulating citizen science, automatic classification and free web services for long-term acoustic monitoring: examples from bat monitoring schemes in France and UK
    Yves Bas, Kevin Barre, Christian Kerbiriou, Jean-Francois Julien and Stuart Newson

We also have poster presentations on related topics:

  • Towards truly automatic bird audio detection: an international challenge
    Dan Stowell
  • Assessing Ecosystem Change using Soundscape Analysis
    Diana C. Duque-Montoya, Claudia Isaza and Juan M. Daza
  • MatlabHTK: a simple interface for bioacoustic analyses using hidden Markov models
    Louis Ranjard
  • MAAD, a rational unsupervised method to estimate diversity in ecoacoustic recordings
    Juan Sebastian Ulloa, Thierry Aubin, Sylvain Haupert, Chloé Huetz, Diego Llusia, Charles Bouveyron and Jerome Sueur
  • Underwater acoustic habitats: towards a toolkit to assess acoustic habitat quality
    Irene Roca and Ilse Van Opzeeland
  • Focus on geophony: what weather sounds can tell
    Roberta Righini and Gianni Pavan
  • Reverse Wavelet Interference Algorithm for Detection of Avian Species and Characterization of Biodiversity
    Sajeev C Rajan, Athira K and Jaishanker R
  • Automatic Bird Sound Detection: Logistic Regression Based Acoustic Occupancy Model
    Yi-Chin Tseng, Bianca Eskelson and Kathy Martin
  • A software detector for monitoring endangered common spadefoot toad populations
    Guillaume Dutilleux and Charlotte Curé
  • PylotWhale a python package for automatically annotating bioacoustic recordings
    Maria Florencia Noriega Romero Vargas, Heike Vester and Marc Timme

You can register for the conference here - early discount until 15th Sep. See you there!

| science |

Notes from LVA-ICA conference 2018

This week we've been at the LVA-ICA 2018 conference, at the University of Surrey. A lot of the papers presented were on source separation. Here are some notes:

  • Evrim Acar gave a great tutorial on tensor factorisation. Slides here
  • Hiroshi Sawada described a nice extension of "joint diagonalisation", applying it in synchronised fashion across all frequency bands at once. He also illustrated well how this method reduces to some existing well-known methods, in certain limiting cases.
  • Ryan Corey showed his work on helping smart-speaker devices (such as Alexa or whatever) to estimate the relative transfer function which helps with multi-microphone sound processing. He made use of the wake-up keywords that are used for such devices ("Hi Marvin" etc), taking advantage of the known content to estimate the RTF for "free" i.e. with no extra interaction. He DTW-aligned the spoken keyword against a dictionary, then used that to mask the recorded sound and estimate the RTF.
  • Stefan Uhlich presented their (Sony group's) strongly-performing SiSEC sound separation method. Interestingly, they use a variant of DenseNet, as well as a BLSTM, to estimate a time-frequency mask. Stefan also said that once the estimates have been made, a crucial improvement was to re-estimate them by putting the estimated masks together through a multichannel Wiener filtering stage.
  • Ama Marina Kreme presented her new task of "phase inpainting" and methods to solve it - estimating a missing portion of phases in a spectrogram, when all of the magnitudes and some of the phases are known. I can see this being useful in post-processing of source separation outputs, though her application was in engine noise analysis with an industrial collaborator.
  • Lucas Rencker presented some very nice ideas in "consistent dictionary learning" for signal declipping. Here, "consistent" means that the reconstructed signal should fill in the missing regions in a way that's consistent with the clipping - if some part of the signal was clipped at a maximum of X, then its reconstruction should take values greater than or equal to X. Here's his Python code of the declipping method. Apparently also the state-of-the-art in this task is a method called "A-SPADE" by Kitic (2015). Pavel Zaviska presented an analysis of A-SPADE and S-SPADE, improving the latter but not beating A-SPADE.

An interesting feature of the week was the "SiSEC" Signal Separation Evaluation Challenge. We saw posters of some of the methods used to separate musical recordings into their component stems, but even better, we were used as guinea-pigs, doing a quick listening test to see which methods we thought were giving the best results. In most SiSEC work this is evaluated using computational measures such as signal-to-distortion ratio (SDR), but there's quite a lot of dissatisfaction with these "objective" measures since there's plenty that they get wrong. At the end of LVA-ICA the organisers announced the results of the listening test: surprisingly or not, they broadly showed a strong correlation with the SDR measures, though there were some tracks for which this didn't hold. More analysis of the data to come, apparently.
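For context, the basic SDR is just an energy ratio between the reference source and the estimation error, in decibels - the full BSS Eval metric additionally projects out certain "allowed" distortions before computing the ratio. A minimal sketch of the basic version:

```python
import numpy as np

def sdr(reference, estimate):
    """Basic signal-to-distortion ratio in dB (no allowed-distortion
    projection as in the full BSS Eval toolkit)."""
    error = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / (np.sum(error ** 2) + 1e-12))
```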

From our gang, my students Will and Delia presented their posters and both went really well. Here's the photographic evidence:

  • Delia Fano Yela's poster about source separation using graph theory and Kernel Additive Modelling - read the preprint here
  • Will Wilkinson's poster "A Generative Model for Natural Sounds Based on Latent Force Modelling" - read the preprint here

Also from our research group (though not working with me) Daniel Stoller presented a poster as well as a talk, getting plenty of interest for his deep learning methods for source separation - preprint here.

| science |

Thinning distance between point process realisations

The paper "Wasserstein Learning of Deep Generative Point Process Models" published at the NIPS 2017 conference has some interesting ideas in it, connecting generative deep learning - which is mostly used for dense data such as pixels - together with point processes, which are useful for "spiky" timestamp events.

They use the Wasserstein distance (aka the "earth-mover's distance") to compare sequences of spikes, and they do acknowledge that this has advantages and disadvantages. It's all about pushing things around until they match up - e.g. move a spike a few seconds earlier in one sequence, so that it lines up with a spike in the other sequence. It doesn't nicely account for insertions or deletions, which is tricky because it's quite common to have "missing" spikes or added "clutter" in data coming from detectors, for example. It'd be better if this method could incorporate more general "edit distances", though that's non-trivial.
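If you want a quick feel for the earth-mover comparison of two spike trains, scipy's one-dimensional Wasserstein distance treats the timestamps as empirical distributions. (This is just an illustrative sketch, not the method from the paper.)

```python
import numpy as np
from scipy.stats import wasserstein_distance

# two toy spike-time sequences (in seconds)
spikes_a = np.array([0.5, 2.1, 4.0, 7.3])
spikes_b = np.array([0.7, 2.0, 4.4, 7.2, 9.9])

# earth-mover's distance between the timestamps, treated as empirical distributions
print(wasserstein_distance(spikes_a, spikes_b))
```

Note that it normalises away the difference in event counts, so an extra or missing spike gets absorbed rather than explicitly penalised - which is exactly the insertion/deletion issue mentioned above.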

So I was thinking about distances between point processes. More reading to be done. But a classic idea, and a good way to think about insertions/deletions, is called "thinning". It's where you take some data from a point process and randomly delete some of the events, to create a new event sequence. If you're using Poisson processes then thinning can be used for example to sample from a non-stationary Poisson process, essentially by "rejection sampling" from a stationary one.
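As an aside, here's a minimal sketch of that rejection-sampling construction (the classic Lewis-Shedler "thinning" algorithm), assuming we know an upper bound rate_max on the rate function:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_inhomogeneous_poisson(rate_fn, rate_max, t_end):
    """Sample a non-stationary Poisson process on [0, t_end] by thinning
    a stationary "parent" process of rate rate_max."""
    n_candidates = rng.poisson(rate_max * t_end)           # parent process
    candidates = np.sort(rng.uniform(0, t_end, n_candidates))
    keep = rng.random(n_candidates) < rate_fn(candidates) / rate_max
    return candidates[keep]

# e.g. a rate that rises and falls over a 100-second window
events = sample_inhomogeneous_poisson(lambda t: 2 + 1.5 * np.sin(t / 10), 3.5, 100)
```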

Thinning is a probabilistic procedure: in the simplest case, take each event, flip a coin, and keep the event only if the coin says heads. So if we are given one event sequence, and a specification of the thinning procedure, we can define the likelihood that this would have produced any given "thinned" subset of events. Thus, if we take two arbitrary event sequences, we can imagine their union was the "parent" from which they were both derived, and calculate a likelihood that the two were generated from it. (Does it matter if the parent process actually generated this union list, or if there were unseen "extra" parent events that were actually deleted from both? In simple models where the thinning is independent for each event, no: the deletion process can happen in any order, and so we can assume those common deletions happened first to take us to some "common ancestor". However, this does make it tricky to compare distances across different datasets, because the unseen deletions are constant multiplicative factors on the true likelihood.)

We can thus define a "thinning distance" between two point process realisations as the negative log-likelihood under this thinning model. Clearly, the distance depends entirely on the number of events the two sequences have in common, and the numbers of events that are unique to them - the actual time positions of the events have no effect, in this simple model, it's just whether they line up or not. It's one of the simplest comparisons we can make. It's complementary to the Wasserstein distance, which is all about time-position and not about insertions/deletions.

This distance boils down to:

NLL = -( n1 * log(n1/nu)  +  n2 * log(n2/nu)  +  (nu-n1) * log(1 - n1/nu)  +  (nu-n2) * log(1 - n2/nu) )

where "n1" is the number of events in seq 1, "n2" in seq 2, and "nu" in their union.

Does this distance measure work? Yes, at least in limited toy cases. I generated two "parent" sequences (using the same rate for each) and separately thinned each one ten times. I then measured the thinning distance between all pairs of the child sequences, and there's a clear separation between related and unrelated sequences:

Distances between distinct children of same process:
Min 75.2, Mean 93.3, Median 93.2, Max 106.4
Distances between children of different processes:
Min 117.3, Mean 137.7, Median 138.0, Max 167.3

[Python example script here]

This is nice because it's easy to calculate, etc. To be able to do work like in the paper I cited above, we'd need to be able to optimise against something like this, and even better, to be able to combine it into a full edit distance, one which we can parameterise according to situation (e.g. to balance the relative cost of moves vs. deletions).

This idea of distance based on how often the spikes coincide relates to "co-occurrence metrics" previously described in the literature. So far, I haven't found a co-occurrence metric that takes this form. To relax the strict requirement of events hitting at the exact same time, there's often some sort of quantisation or binning involved in practice, and I'm sure that'd help for direct application to data. Ideally we'd generalise over the possible quantisations, or use a jitter model to allow for the fact that spikes might move.

| science |

My PhD research students

I'm lucky to be working with a great set of PhD students on a whole range of exciting topics about sound and computation. (We're based in C4DM and the Machine Listening Lab.) Let me give you a quick snapshot of what my students are up to!

I'm primary supervisor for Veronica and Pablo:

  • Veronica is working on deep learning techniques for jointly identifying the species and the time-location of bird sounds in audio recordings. A particular challenge is the relatively small amount of labelled data available for each species, which forces us to pay attention to how the network architecture can make best use of the data and the problem structure.

  • Pablo is using a mathematical framework called Gaussian processes as a new paradigm for automatic music transcription - the idea is that it can perform high-resolution transcription and source separation at the same time, while also making use of some sophisticated "priors" i.e. information about the structure of the problem domain. A big challenge here is how to scale it up to run over large datasets.

I'm joint-primary supervisor for Will and Delia:

  • Will is developing a general framework for analysing sounds and generating new sounds, combining subband/sinusoidal analysis with probabilistic generative modelling. The aim is that the same model can be used for sound types as diverse as footsteps, cymbals, dog barks...

  • Delia is working on source separation and audio enhancement, using a lightweight framework based on nonlocal median filtering, which works without the need for large training datasets or long computation times. The challenge is to adapt and configure this so it makes best use of the structure of the data that's implicitly there within an audio recording.

I'm secondary supervisor for Jiajie and Sophie:

  • Jiajie is studying how singers' pitch tuning is affected when they sing together. She has designed and conducted experiments with two or four singers, in which sometimes they can all hear each other, sometimes only one can hear the other, etc. Many singers or choir conductors have their own folk-theories about what affects people's tuning, but Jiajie's experiments are making scientific measurements of it.

  • Sophie is exploring how to enhance a sense of community (e.g. for a group of people living together in a housing estate) through technological interventions that provide a kind of mediated community awareness. Should inhabitants gather around the village square or around a Facebook group? Those aren't the only two ways!

| science |

IBAC 2017 India - bioacoustics research

I'm just flying back from the International Bioacoustics Congress 2017, held in Haridwar in the north of India. It was a really interesting time. I'm glad that IBAC was successfully brought to India, i.e. to a developing country with a more fragmented bioacoustics community (I think!) than in the west. For me, getting to know some of the Indian countryside, the people, and the food was ace. Let me make a few notes about research themes that were salient to me:

  • "The music is not in the notes, but in the silence between" - this Mozart quote which Anshul Thakur used is a lovely byline for his studies - as well as some studies by others - on using the durations of the gaps between units in a birdsong, for the purposes of classification/analysis. Here's who investigated gaps:
    • Anshul Thakur showed how he used the gap duration as an auxiliary feature, alongside the more standard acoustic classification, to improve quality.
    • Florencia Noriega discussed her own use of gap durations, in which she fitted a Gaussian mixture model to the histogram of log-gap-durations between parrot vocalisation units, and then used this to look for outliers (there's a rough sketch of that kind of analysis below, after this group of bullet points). One use that she suggested was that this could be a good way to look for unusual vocalisation sequences that could be checked out in more detail by further fieldwork.
    • (I have to note here, although I didn't present it at IBAC, that I've also been advocating the use of gap durations. The clearest example is in our 2013 paper in JMLR in which we used them to disentangle sequences of bird sounds.)
    • Tomoko Mizuhara presented evidence from a perceptual study in zebra finches, that the duration of the gap preceding a syllable plays some role in the perception of syllable identity. The gap before? Why? - Well, one connection is that the gap before an event might relate to the time the bird takes to breathe in, and thus there's an empirical correlation with the upcoming syllable, whether the bird is exploiting that purely empirically or for some more innate reason.
  • Machine learning methods in bioacoustics - this is the session that I organised, and I think it went well - I hope people found it useful. I won't go into loads of detail here since I'm mostly making notes about things that are new to me. One notable thing though was Vincent Lostanlen announcing a dataset "BirdVox-70k" (flight calls of birds recorded in the USA, annotated with the time and frequency of occurrence) - I always like it when a dataset that might be useful for bird sound analysis is published under an open licence! No link yet - I think that's to come soon. (They've also done other things such as this neat in-browser audio annotation tool.)
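
As promised above, here's a rough sketch of the gap-duration outlier idea: fit a mixture model to log gap durations and flag the least likely gaps for follow-up. This is my own illustration on synthetic data, not Florencia Noriega's code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic stand-in for real inter-unit gap durations (seconds)
gaps = np.concatenate([rng.lognormal(-1.5, 0.3, 400),   # short within-phrase gaps
                       rng.lognormal(0.5, 0.4, 100),    # longer between-phrase gaps
                       [8.0, 12.0]])                    # a couple of oddities
log_gaps = np.log(gaps).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(log_gaps)
scores = gmm.score_samples(log_gaps)        # per-gap log-likelihood under the GMM
threshold = np.percentile(scores, 1)        # flag the least likely 1% of gaps
unusual_gaps = gaps[scores < threshold]
```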

  • Software platforms for bioacoustics. When I do my research I'm often coding my own Python scripts or suchlike, but that's not a vernacular that most bioacousticians speak. It's tricky to think what the ideal platform for bioacoustics would be, since there are quite some demands to meet: for example ideally it could handle ten seconds of audio as well as one year of audio, yet also provide an interface suitable for non-programmers. A few items on that theme:

    • Phil Eichinski (standing in for Paul Roe) presented QUT's Eco-Sounds platform. They've put effort into making it work for big audio data, managing terabytes of audio and optimising whether to analyse the sound proactively or on-demand. The false-colour "long duration spectrograms" developed by Michael Towsey et al are used to visualise long audio recordings. (I'll say a bit more about that below.)
    • Yves Bas presented his Tadarida toolbox for detection and classification.
    • Ed Baker presented his BioAcoustica platform for archiving and analysing sounds, with a focus on connecting deposits to museum specimens and doing audio query-by-example.
    • Anton Gradisek, in our machine-learning session, presented "JSI Sound: a machine learning tool in Orange for classification of diverse biosounds" - this is a kind of "machine-learning as a service" idea.
    • Then a few others that might or might not be described as full-blown "platforms":
      • Tomas Honaiser wasn't describing a new platform, but his monitoring work - I noted that he was using the existing AMIBIO project to host and analyse his recordings.
      • Sandor Zsebok presented his Ficedula matlab toolbox which he's used for segmenting and clustering etc to look at cultural evolution in the Collared flycatcher.
      • Julie Elie mentioned her lab's SoundSig Python tools for audio analysis.
    • Oh by the way, what infrastructure should these projects be built upon? The QUT platform is built using Ruby, which is great for web developers but strikes me as an odd choice because very few people in bioacoustics or signal processing have even heard of it - so how is the team / the community going to find the people to maintain it in future? (EDIT: here's a blog article with background information that the QUT team wrote in response to this question.) Yves Bas's toolbox is in C++ and R, which makes sense for R users (fairly common in this field). BioAcoustica - not sure if it's open-source but there's an R package that connects to it. --- I'm not an R user, I much prefer Python, because of its good language design, its really wide user base, and its big range of uses, though I recognise that it doesn't have the solid stats base that R does. People will debate the merits of these tools for ever onwards - we're not going to all come down on one decision - but it's a question that I often come back to: how best to build software tools to ensure they're usable and maintainable and solid.

So about those false-colour "long duration spectrograms". I've been advocating this visualisation method ever since I saw Michael Towsey present it (I think at the Ecoacoustics meeting in Paris). Just a couple of months ago I was at a workshop at the University of Sussex and Alice Eldridge and colleagues had been playing around with it too. At IBAC this week, ecologist Liz Znidersic talked really interestingly about how she had used them to detect a cryptic (i.e. hard-to-find) bird species. It shows that the tool helps with "needle in a haystack" problems, including those where you might not have a good idea of what needle you're looking for.

In Liz's case she looked at the long-duration spectrograms manually, to spot calling activity patterns. We could imagine automating this, i.e. using the long-duration spectrogram as a "feature set" to make inferences about diurnal activity. But even without automation it's still really neat.
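For anyone wondering how those images are built: very roughly, each minute of audio is summarised by a few per-frequency-bin acoustic indices, and three of those indices become the red, green and blue channels of an image whose x-axis spans the whole recording. Here's a loose sketch with simplified stand-in indices (not the exact indices from Towsey et al's papers):

```python
import numpy as np

def false_colour_ldspec(minute_spectrograms):
    """Rough sketch of a false-colour long-duration spectrogram.
    minute_spectrograms: list of magnitude spectrograms, one per minute,
    each shaped [n_freq_bins, n_frames]. Returns an RGB image of shape
    [n_freq_bins, n_minutes, 3]."""
    columns = []
    for spec in minute_spectrograms:
        p = spec / (spec.sum(axis=1, keepdims=True) + 1e-12)
        entropy = -(p * np.log2(p + 1e-12)).sum(axis=1) / np.log2(spec.shape[1])
        aci = np.abs(np.diff(spec, axis=1)).sum(axis=1) / (spec.sum(axis=1) + 1e-12)
        energy = 10 * np.log10(spec.mean(axis=1) + 1e-12)
        columns.append(np.stack([aci, 1 - entropy, energy]))   # [3, n_bins]
    img = np.stack(columns, axis=-1)                           # [3, n_bins, n_minutes]
    mn = img.min(axis=(1, 2), keepdims=True)                   # per-channel rescaling
    mx = img.max(axis=(1, 2), keepdims=True)
    return np.transpose((img - mn) / (mx - mn + 1e-12), (1, 2, 0))
```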

Anyway back to the thematic listings...

  • Trills in bird sounds are fascinating. These rapidly frequency-modulated sounds are often difficult and energetically costly to produce, and this seems to lead to them being used for specific functions.
    • Tereza Petruskova presented a poster of her work on tree pipits, arguing for different roles for the "loud trill" and the "soft trill" in their song.
    • Christina Masco spoke about trills in splendid fairywrens (cute-looking birds those!). They can be used as a call but can also be included at the end of a song, which raises the question of why they are getting "co-opted" in this way. Christina argued that the good propagation properties of the trill could be a reason - there was some discussion about differential propagation and the "ranging hypothesis" etc.
    • Ole Larsen gave a nice opening talk about signal design for private vs public messages. It was all well-founded, though I quibbled with his comment that strongly frequency-modulated sounds would be for "private" communication because if they cross multiple critical bands they might not accumulate enough energy in a "temporal integration window" to trigger detection. This seems intuitively wrong to me (e.g.: sirens!) but I need to find some good literature to work this one through.
  • Hybridisation zones are interesting for studying birdsong, since they're zones where two species coexist and individuals of one species might or might not breed with individuals of the other. For birds, song recognition might play a part in whether this happens. It's quite a "strong" concept of perceptual similarity, to ask the question "Is that song similar enough to breed with?"!
    • Alex Kirschel showed evidence from a suboscine (and so not a vocal learner) which in some parts of Africa seems to hybridise and in some parts seems not to - and there could be some interplay with the similarity of the two species' song in that region.
    • Irina Marova also talked about hybridisation, in songbirds, but I failed to make a note of what she said!
  • Duetting in birdsong was discussed by a few people, including Pedro Diniz and Tomasz Osiejuk. Michal Budka argued that his playback studies with Chubb's cisticola showed they use duet for territory defence and signalling commitment but not for "mate-guarding".
    • Oh and before the conference, I was really taken by the duetting song of the grey treepie, a bird we heard up in the Himalayan hills. Check it out if you can!

As usual, my apologies to anyone I've misrepresented. IBAC has long days and lots of short talks (often 15 minutes), so it can all be a bit of a whirlwind! Also of course this is just a terribly partial list.

(PS: from the archives, here's my previous blog about IBAC 2015 in Murnau, Germany.)

| science |

Review of "Living Together: Mind and Machine Intelligence" by Neil Lawrence

In the early twentieth century when the equations of quantum physics were born, physicists found themselves in a difficult position. They needed to interpret what the quantum equations meant in terms of their real-world consequences, and yet they were faced with paradoxes such as wave-particle duality and "spooky action at a distance". They turned to philosophy and developed new metaphysics of their own. Thought-experiments such as Schrodinger's cat, originally intended to highlight the absurdity of the standard "Copenhagen interpretation", became standard teaching examples.

In the twenty-first century, researchers in artificial intelligence (AI) and machine learning (ML) find themselves in a roughly analogous position. There has been a sudden step-change in the abilities of machine learning systems, and the dream of AI (which had been put on ice after the initial enthusiasm of the 1960s turned out to be premature) has been reinvigorated - while at the same time, the deep and widespread industrial application of ML means that whatever advances are made, their effects will be felt. There's a new urgency to long-standing philosophical questions about minds, machines and society.

So I was glad to see that Neil Lawrence, an accomplished research leader in ML, published an article on these social implications. The article is "Living Together: Mind and Machine Intelligence". Lawrence makes a noble attempt to provide an objective basis for considering the differences between human and machine intelligences, and what those differences imply for the future place of machine intelligence in society.

In case you're not familiar with the arXiv website I should point out that articles there are un-refereed, they haven't been through the peer-review process that guards the gate of standard scientific journals. And let me cut to the chase - in this paper, I'm not sure which journal he was targeting, but if I was a reviewer I wouldn't have recommended acceptance. Lawrence's computer science is excellent, but here I find his philosophical arguments are disappointing. Here's my review:

Embodiment? No: containment

A key difference between humans and machines, notes Lawrence, is that we humans - considered for the moment as abstract computational agents - have high computational capacity but a very limited bandwidth to communicate. We speak (or type) our thoughts, but really we're sharing the tiniest shard of the information we have computed, whereas modern computers can calculate quite a lot (not as much as a brain does) but can communicate with such high bandwidth that the results are essentially not "trapped" in the computer. For Lawrence this is a key difference, making the boundaries between machine intelligences much less pertinent than the boundaries between natural intelligences, and suggesting that future AI might not act as a lot of "agents" but as a unified subconscious.

Lawrence quantifies this difference as the numerical ratio between computational capacity and communicative bandwidth. Embarrassingly, he then names this ratio the "embodiment factor". The embodiment of cognition is an important idea in much modern thought-about-thought: essentially, "embodiment" is the rejection of the idea that my cognition can really be considered as an abstract computational process separate from my body. There are many ways we can see this: my cognition is non-trivially affected by whether or not I have hay-fever symptoms today; it's affected by the limited amount of energy I have, and the fact I must find food and shelter to keep that energy topped up; it's affected by whether I've written the letter "g" on my hand (or is it a "9"? oh well); it's affected by whether I have an abacus to hand; it's affected by whether or not I can fly, and thus whether in my experience it's useful to think about geography as two-dimensional or three-dimensional. (For a recent perspective on extended cognition in animals see the thoughts of a spiderweb.) I don't claim to be an expert on embodied cognition. But given the rich cognitive affordances that embodiment clearly offers, it's terribly embarrassing and a little revealing that Lawrence chooses to reduce it to the mere notion of being "locked in" (his phrase) with constraints on our ability to communicate.

Lawrence's ratio could perhaps be useful, so to defuse the unfortunate trivial reduction of embodiment, I would like to rename it "containment factor". He uses it to argue that while humans can be considered as individual intelligent agents, for computer intelligences the boundaries dissolve and they can be considered more as a single mass. But it's clear that containment is far from sufficient in itself: natural intelligences are not the only things whose computation is not matched by their communication. Otherwise we would have to consider an air-gapped laptop as an intelligent agent, but not an ordinary laptop.

Agents have their own goals and ambitions

The argument that the boundaries between AI agents dissolve also rests on another problem. In discussing communication Lawrence focusses too heavily on 'altruistic' or 'honest' communication: transparent communication between agents that are collaborating to mutually improve their picture of the world. This focus leads him to neglect the fact that communicating entities often have differing goals, and often have reason to be biased or even deceitful in the information shared.

The tension between communication and individual aims has been analysed in a long line of thought in evolutionary biology under the name of signalling theory - for example, the conditions under which "honest signalling" is beneficial to the signaller. It's important to remember that the different agents each have their own contexts, their own internal states/traits (maybe one is low on energy reserves, and another is not) which affect communicative goals even if the overall avowed aim is common.

In Lawrence's description the focus on honest communication leads him to claim that "if an entity's ability to communicate is high [...] then that entity is arguably no longer distinct from those which it is sharing with" (p3). This is a direct consequence of Lawrence's elision: it can only be "no longer distinct" if it has no distinct internal traits, states, or goals. The elision of this aspect recurs throughout, e.g. "communication reduces to a reconciliation of plot lines among us" (p5).

Unfortunately the implausible unification of AI into a single morass is a key plank of the ontology that Lawrence wants to develop, and also key to the societal consequences he draws.

There is no System Zero

Lawrence considers some notions of human cognition including the idea of "system 1 and system 2" thinking, and proposes that the mass of machine intelligence potentially forms a new "System Zero" whose essentially unconscious reasoning forms a new stratum of our cognition. The argument goes that this stratum has a strong influence on our thought and behaviour, and that the implications of this on society could be dramatic. This concept has an appeal of neatness but it falls down too easily. There is no System Zero, and Lawrence's conceptual starting-point in communication bandwidth shows us why:

  • Firstly, the morass of machine intelligence has no high-bandwidth connection to System 1 or to System 2. The reason we talk of "System 1 and System 2" coexisting in the same agent is that they're deeply and richly connected in our cognition. (BTW I don't attribute any special status to "System 1 and System 2", they're just heuristics for thinking about thinking - that doesn't really matter here.) Lawrence's own argument about the poverty of communication channels such as speech also goes for our reception of information. However intelligent, unified or indeed devious AI becomes, it communicates with humans through narrow channels such as adverts, notifications on your smartphone, or selecting items to show to you. The "wall" between ourselves as agents and AI will be significant for a long time.
    • Direct brain-computer interfacing is a potential counterargument here, and if that technology were to develop significantly then it is true that our cognition could gain a high-bandwidth interface. I remain sceptical that such potential will be non-trivially realised in my lifetime. And if such interfaces do come to pass, they would dissolve human-human bottlenecks as much as human-computer bottlenecks, so in either case Lawrence's ontology does not stand.
  • Secondly AI/ML technologies are not unified. There's no one entity connecting them all together, endowing them with the same objective. Do you really think that Google and Facebook, Europe and China, will pool their machine intelligences together, allowing unbounded and unguarded communication? No. And so irrespective of how high the bandwidth is within and between these silos, they each act as corporate agents, with some degrees of collusion and mutual inference, sure, but they do not unify into an underlying substrate of our intelligence. This disunification highlights the true ontology: these agents sit relative to us as agents - powerful, information-rich and potentially dangerous agents, but then so are some humans.

Disturbingly, Lawrence claims "System Zero is already aligned with our goals". This starts from a useful observation - that many commercial processes such as personalised advertising work because they attempt to align with our subconscious desires and biases. But again it elides too much. In reality, such processes are aligned not with our goals but with the goals of powerful social elites, large companies etc, and if they are aligned with our "system 1" goals then that is a contingent matter.

Importantly, the control of these processes is largely not democratic but controlled commercially or via might-makes-right. Therefore even if AI/ML does align with some people's desires, it will preferentially align with the desires of those with cash to spend.

We need models; machines might not

On a positive note: Lawrence argues that our limited communication bandwidth shapes our intelligence in a particular way: it makes it crucial for us to maintain "models" of others, so that we can infer their internal state (as well as our own) from their behaviour and their signalling. He argues that conversely, many ML systems do not need such structured models - they simply crunch on enough data and they are able to predict our behaviour pretty well. This distinction seems to me to mark a genuine difference between natural intelligence and AI, at least according to the current state of the art in ML.

He does go a little too far in this as well, though. He argues that our reliance on a "model" of our own behaviour implies that we need to believe that our modelled self is in control - in Freudian terms, we could say he is arguing that the existence of the ego necessitates its own illusion that it controls the id. The argument goes that if the self-model knew it was not in control,

"when asked to suggest how events might pan out, the self model would always answer with "I don't know, it would depend on the precise circumstances"."

This argument is shockingly shallow coming from a scientist with a rich history of probabilistic machine learning, who knows perfectly well how machines and natural agents can make informed predictions in uncertain circumstances!

I also find unsatisfactory the eagerness with which various dualisms are mapped onto one another. The most awkward is the mapping of "self-model vs self" onto Cartesian dualism (mind vs body); this mapping is a strong claim and needs to be argued for rather than asserted. It would also need to account for why such mind-body dualism is not a universal, across history nor across cultures.

However, Lawrence is correct to argue that "sentience" of AI/ML is not the overriding concern in its role in our society; rather, its alignment or otherwise with our personal and collective goals, and its potential to undermine human democratic agency, is the prime issue of concern. This is a philosophical and a political issue, and one on which our discussion should continue.

| science |

Sneak preview: papers in special sessions on bioacoustics and machine listening

This season, I'm lead organiser for two special conference sessions on machine listening for bird/animal sound: EUSIPCO 2017 in Kos, Greece, and IBAC 2017 in Haridwar, India. I'm very happy to see the diverse selection of work that has been accepted for presentation - the diversity of the research itself …

| science |

Another failed attempt to discredit the vegans

People love to take the vegans down a peg or two. I guess they must unconsciously agree that the vegans are basically correct and doing the right thing, hence the defensive mud-slinging.

There's a bullshit article "Being vegan isn’t as good for humanity as you think". Like many bullshit …

| science |

On the validity of looking at spectrograms

People who do technical work with sound use spectrograms a heck of a lot. This standard way of visualising sound becomes second nature to us.

As you can see from these photos, I like to point at spectrograms all the time:

(Our research group even makes some really nice software …

| science |

Modelling vocal interactions

Last year I took part in the Dagstuhl seminar on Vocal Interactivity in-and-between Humans, Animals and Robots (VIHAR). Many fascinating discussions with phoneticians, roboticists, and animal behaviourists (ethologists).

One surprisingly difficult topic was to come up with a basic data model for describing multi-party interactions. It was so easy to …

| science |
