
The H-index cannot be relied on for recruitment

The H-index is one of my favourite publication statistics. It's really simple to define: a person's H-index is the largest number H such that H of their publications have been cited at least H times each. It's robust to outliers: if you have a million publications with no citations, or one publication with a million citations, this doesn't influence the outcome - it's the "core" of your H most-cited publications that matters. This makes it quite a nice heuristic for the academic impact of a body of work. A common source of the H-index is Google Scholar, which automatically calculates it for each scholar who has an account, and influential academics with long publication records typically have a high H-index.
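To make the definition concrete, here's a minimal Python sketch of the calculation (my own illustration, not Google Scholar's code) - note how the outlier cases above come out:

```python
def h_index(citation_counts):
    """Largest h such that h publications have >= h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

print(h_index([1000000]))         # one huge outlier paper -> 1
print(h_index([0, 0, 0, 0, 0]))   # many uncited papers -> 0
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```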

However, the H-index should not be used as a primary measure for evaluating academics, e.g. for recruitment or promotion.

Why?

The main reason is that it's straightforward, in fact almost trivial, to manipulate your own H-index: you can make it artificially high.

Google Scholar doesn't exclude self-citations from its counting. It even counts self-citations in preprints, so the citations might not even be peer-reviewed: you could chuck a handful of hastily-written preprints into arXiv just before you apply for a job. (Should Google exclude self-citations? Yes, in my opinion: it's trivially easy given that they have ground truth of which academic "owns" which paper. However, that wouldn't remove the vulnerability, because pairs of authors could go one step further and conspire to cross-cite each other, etc.) Self-citations are often valid things to do, but they're also often used by academics to promote their own previous papers, so it's a grey area.

Google Scholar often automatically adds papers to a person's profile, using text matching to guess whether the author matches. I've seen real examples in which an academic's profile included extremely highly-cited papers... that were not by them. In fact they were from completely different research topics! Google's text-matching isn't perfect, and like most text-matching it often has trouble working out which names actually refer to the same author.

You can further manipulate your H-index by choosing how to publish: you can divide research outputs into multiple smaller publications rather than single integrated papers.

Or you can do that after the fact, by tweaking your options in Google about whether two particular publications should be merged into one record or not. (Google has this option, since it often picks up two slightly-different versions of the same publication.)

Most of the vulnerabilities I've listed relate to Google's chosen way of implementing the H-index; however, at least some of them will apply however it is counted.

The H-index is a heuristic. It's OK to look at it as a quick superficial statistic, or even to use it as part of a general assessment making use of other stats and other evidence. But I'm increasingly seeing academic job adverts that say "please submit your Google Scholar H-index". This should not be done: it sends a public signal that this number is considered potentially decisive for recruitment (which it shouldn't be), creating a strong incentive to game the value. It also enforces a new monopoly position for a private company, demanding that academics create Google accounts in order to be eligible for a job. Academia is too important to have single points of failure centred on single companies (witness the recent debates around Elsevier!).

When trying to sift a large pile of applications, people like to have simple heuristics to help them make a start. That's understandable. It's naive to think that one's opinion isn't influenced by the first-pass heuristics - and so it's vital that you use heuristics that aren't so trivially gameable.

| science |

New journal article: Automatic acoustic identification of individuals in multiple species

New journal article from us!

"Automatic acoustic identification of individuals in multiple species: improving identification across recording conditions" - a collaboration published in the Journal of the Royal Society Interface.

For machine learning, the main takeaway is that data augmentation is not just a way to create bigger training sets: used judiciously, it can mitigate the effect of confounds in the training data. It can also be used at test time to check a classifier's robustness.
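As a rough illustration of the test-time idea - a sketch of the general technique, not the exact procedure from the paper; `classifier` and `augment` here are hypothetical stand-ins:

```python
import numpy as np

def prediction_stability(classifier, clip, augment, n_trials=10):
    """Fraction of perturbed copies of `clip` that keep the
    classifier's original predicted label."""
    baseline = classifier(clip)
    kept = sum(classifier(augment(clip)) == baseline
               for _ in range(n_trials))
    return kept / n_trials

def add_noise(clip, scale=0.05):
    """One possible perturbation: low-level additive noise."""
    return clip + scale * np.random.randn(*clip.shape)
```

If the predicted label flips under perturbations that shouldn't matter (background noise, gain changes), the classifier may be leaning on confounds rather than the signal of interest.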

For bioacoustics, the main takeaway is that previous automatic acoustic individual ID studies may have been overconfident in their claimed accuracy, due to dataset confounds - and we provide methods to try to quantify such issues, even without gathering new data.

This journal article is the output of a nice collaboration we've been working on, trying to bring machine learning closer to solving the problems zoologists really need solved. It's been very pleasant working on these ideas with Pavel Linhart and Tereza Petrusková (I didn't actually meet Martin Šálek!). The problem of detecting individual animals' vocal signatures is not yet solved, but I hope this paper helps nudge us part of the way there, and helps the field get there more efficiently through careful use of audio datasets.

| science |

Suggested reading: getting going with deep learning

Based on a conversation we had in the Machine Listening Lab last week, here are some blogs and other things you can read when you're - say - a new PhD student who wants to get started with applying/understanding deep learning. We can recommend plenty of textbooks too, but here it's mainly blogs and other informal introductions. Our recommended reading:

PRACTICAL:

ADVANCED:

| science |

Academics who fly less - my story

I've been struggling with the tension between academia and flying for a long time. The vast majority of my holidays I've done by train and the occasional boat - for example the train from London to southern Germany is a lovely ride, as is London to Edinburgh or Glasgow. But in academia the big issue is conferences and invited seminars - much of the time you don't get to choose where they are, and much of the time there are specific conferences that you "must" be publishing at, or your students "must" be at for their career, or you're invited to give a talk.

What can you do? Well, you can't give up. So here's what I've done, for the past five years at least:

  • I've declined various opportunities to fly (e.g. to North America and Australia - I'm Europe based). Sometimes this hurts - there are great meetings that you'd like to be at. In general, though, you usually find there are similar opportunities nearer by. You'll probably meet most of the people in one of those events anyway. In the big picture, it's probably better for academia to be structured as an overlapping patchwork network, rather than having single-point-of-groupthink.
  • I've taken the train to many conferences and meetings. From the UK I've taken the train to France, Spain, Germany, Netherlands, and I'm happy to go further. If you haven't done long train journeys for work then maybe you don't realise: with a laptop, many long-distance train journeys are ideal peaceful office days, with a reserved seat and beautiful views scrolling past. (UK folks: ask The Man In Seat 61 for the best train trips.) If your concern is making time for the journey, don't worry! You'll be much more productive than when you fly!
  • When invited to fly somewhere, I always discuss lower-carbon ways of doing it. Rome2Rio is a handy site to compare how to get anywhere by different means. If flying is the only way and I'm tempted to accept the invitation, I ask the inviters to pay for carbon offsetting too.
    Many university administrations don't want to pay for carbon offsets - why? This needs to change. If they're paying for flights they should be paying for the negative externalities of them. I'm not worried here about whether carbon offsetting is a good excuse or not - I'm concerned about research being more aware of its responsibilities.
  • If travelling somewhere (even by train), always try to make the most of the journey by finding other opportunities while out there - e.g. a new research group to say hello to (even if just a cuppa), a company or NGO. It's good to make face-to-face contact because that makes it much easier to do remote collaboration or coordination at other times (with the same people, I mean), reducing the need for extra trips.
    (Talking to my German colleagues, I learn that the German finance rules mean you have to travel home as soon as possible after the event, i.e. not roll multiple things into one trip - that's an unfortunate rule, we should change that.)
  • And of course I've done plenty of video-conferencing and audio-conferencing. It doesn't replace face-to-face meetings and we should be realistic about that, but it's a tool to use.

There's a cost implication which I haven't mentioned: flights are unfortunately often cheaper than trains and stopovers. This needs to change, of course - and can be a bit tricky when you're invited to speak somewhere and the cost ends up more than the organisers expected. However, I've been managing a funded research project for the past five years and I've noticed that in fact I've spent much less money on travel than I had projected. Why? Well back when I wrote the budget I costed for international flights and so on. But my adapted approach to travel means I take fewer big long-distance trips, but I get more out of them because I combine things into one trip, and I've skipped certain distant meetings in favour of ones closer to home - all of which means the cost is less than it would have been.

By the way, this handy flight CO2 calculator can help to work out the impact of specific trips, including multi-stop trips, so you can calculate whether combining flights into a round-trip is sensible.

None of these are absolute rules. We can't carry all the burden solo, and we have to make compromises between different priorities. But if we all make some changes we can adapt academia to current realities. We can do this together - which is why I've signed my name on No Fly Climate Sci, a place for academics collectively to pledge to fly less. As I said, you don't have to be absolute about this, and the No Fly Climate Sci pledge acknowledges that. Join me?

| science |

ICEI 2018 special session "Analysis of ecoacoustic recordings: detection, segmentation and classification" - full programme

I'm really pleased about the selection of presentations we have for our special session at ICEI 2018 in Jena (Germany), 24th-28th September. The session is chaired by Jérôme Sueur and me, and is titled "Analysis of ecoacoustic recordings: detection, segmentation and classification".

Our session is special session S1.2 in the programme and here's a list of the accepted talks:

  • AUREAS: a tool for recognition of anuran vocalizations
    William E. Gómez, Claudia V. Isaza, Sergio Gómez, Juan M. Daza and Carol Bedoya
  • Content description of very-long-duration recordings of the environment
    Michael Towsey, Aniek Roelofs, Yvonne Phillips, Anthony Truskinger and Paul Roe
  • What male humpback whale song chorusing can and cannot tell us about their ecology: strengths and limitations of passive acoustic monitoring of a vocally active baleen whale
    Anke Kügler and Marc Lammers
  • Improving acoustic monitoring of biodiversity using deep learning-based source separation algorithms
    Mao-Ning Tuanmu, Tzu-Hao Lin, Joe Chun-Chia Huang, Yu Tsao and Chia-Yun Lee
  • Acoustic sensor networks and machine learning: scalable ecological data to advance evidence-based conservation
    Matthew McKown and David Klein
  • Extracting information on bat activities from long-term ultrasonic recordings through sound separation
    Chia-Yun Lee, Tzu-Hao Lin and Mao-Ning Tuanmu
  • Information retrieval from marine soundscape by using machine learning-based source separation
    Tzu-Hao Lin, Tomonari Akamatsu, Yu Tsao and Katsunori Fujikura
  • A Novel Set of Acoustic Features for the Categorization of Stridulatory Sounds in Beetles
    Carol Bedoya, Eckehard Brockerhoff, Michael Hayes, Richard Hofstetter, Daniel Miller and Ximena Nelson
  • Noise robust 2D bird localization via sound using microphone arrays
    Daniel Gabriel, Ryosuke Kojima, Kotaro Hoshiba, Katsutoshi Itoyama, Kenji Nishida and Kazuhiro Nakadai
  • Fine-scale observations of spatiotemporal dynamics and vocalization type of birdsongs using microphone arrays and unsupervised feature mapping
    Reiji Suzuki, Shinji Sumitani, Naoaki Chiba, Shiho Matsubayashi, Takaya Arita, Kazuhiro Nakadai and Hiroshi Okuno
  • Articulating citizen science, automatic classification and free web services for long-term acoustic monitoring: examples from bat monitoring schemes in France and UK
    Yves Bas, Kevin Barre, Christian Kerbiriou, Jean-Francois Julien and Stuart Newson

We also have poster presentations on related topics:

  • Towards truly automatic bird audio detection: an international challenge
    Dan Stowell
  • Assessing Ecosystem Change using Soundscape Analysis
    Diana C. Duque-Montoya, Claudia Isaza and Juan M. Daza
  • MatlabHTK: a simple interface for bioacoustic analyses using hidden Markov models
    Louis Ranjard
  • MAAD, a rational unsupervised method to estimate diversity in ecoacoustic recordings
    Juan Sebastian Ulloa, Thierry Aubin, Sylvain Haupert, Chloé Huetz, Diego Llusia, Charles Bouveyron and Jerome Sueur
  • Underwater acoustic habitats: towards a toolkit to assess acoustic habitat quality
    Irene Roca and Ilse Van Opzeeland
  • Focus on geophony: what weather sounds can tell
    Roberta Righini and Gianni Pavan
  • Reverse Wavelet Interference Algorithm for Detection of Avian Species and Characterization of Biodiversity
    Sajeev C Rajan, Athira K and Jaishanker R
  • Automatic Bird Sound Detection: Logistic Regression Based Acoustic Occupancy Model
    Yi-Chin Tseng, Bianca Eskelson and Kathy Martin
  • A software detector for monitoring endangered common spadefoot toad populations
    Guillaume Dutilleux and Charlotte Curé
  • PylotWhale a python package for automatically annotating bioacoustic recordings
    Maria Florencia Noriega Romero Vargas, Heike Vester and Marc Timme

You can register for the conference here - early discount until 15th Sep. See you there!

| science |

Notes from LVA-ICA conference 2018

This week we've been at the LVA-ICA 2018 conference, at the University of Surrey, where a lot of the papers presented were on source separation. Here are some notes:

  • Evrim Acar gave a great tutorial on tensor factorisation. Slides here
  • Hiroshi Sawada described a nice extension of "joint diagonalisation", applying it in synchronised fashion across all frequency bands at once. He also illustrated well how this method reduces to some existing well-known methods, in certain limiting cases.
  • Ryan Corey showed his work on helping smart-speaker devices (such as Alexa or whatever) to estimate the relative transfer function which helps with multi-microphone sound processing. He made use of the wake-up keywords that are used for such devices ("Hi Marvin" etc), taking advantage of the known content to estimate the RTF for "free" i.e. with no extra interaction. He DTW-aligned the spoken keyword against a dictionary, then used that to mask the recorded sound and estimate the RTF.
  • Stefan Uhlich presented their (Sony group's) strongly-performing SiSEC sound separation method. Interestingly, they use a variant of DenseNet, as well as a BLSTM, to estimate a time-frequency mask. Stefan also said that once the estimates have been made, a crucial improvement was to re-estimate them by putting the estimated masks through a multichannel Wiener filtering stage.
  • Ama Marina Kreme presented her new task of "phase inpainting" and methods to solve it - estimating a missing portion of phases in a spectrogram, when all of the magnitudes and some of the phases are known. I can see this being useful in post-processing of source separation outputs, though her application was in engine noise analysis with an industrial collaborator.
  • Lucas Rencker presented some very nice ideas in "consistent dictionary learning" for signal declipping. Here, "consistent" means that the reconstruction should fill in the missing regions in a way that matches the clipping: if some part of the signal was clipped at a maximum of X, then its reconstruction should take values greater than or equal to X there (see the sketch just after this list). Here's his Python code of the declipping method. Apparently the state of the art in this task is a method called "A-SPADE" by Kitic (2015). Pavel Zaviska presented an analysis of A-SPADE and S-SPADE, improving the latter but not beating A-SPADE.
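To spell out that consistency constraint, here's a tiny sketch of the check (my own illustration of the idea as described, not Lucas's code):

```python
import numpy as np

def is_consistent(reconstruction, observed, clip_level):
    """Declipping consistency: wherever the observed signal sits at
    the positive clipping rail, the reconstruction must reach at
    least that level; mirror-image for the negative rail."""
    pos = observed >= clip_level   # samples clipped at the positive rail
    neg = observed <= -clip_level  # samples clipped at the negative rail
    return bool(np.all(reconstruction[pos] >= clip_level)
                and np.all(reconstruction[neg] <= -clip_level))
```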

An interesting feature of the week was the "SiSEC" Signal Separation Evaluation Challenge. We saw posters of some of the methods used to separate musical recordings into their component stems, but even better, we were used as guinea-pigs, doing a quick listening test to see which methods we thought were giving the best results. In most SiSEC work this is evaluated using computational measures such as signal-to-distortion ratio (SDR), but there's quite a lot of dissatisfaction with these "objective" measures since there's plenty that they get wrong. At the end of LVA-ICA the organisers announced the results: surprisingly or not, the listening-test rankings broadly correlated strongly with the SDR measures, though there were some tracks for which this didn't hold. More analysis of the data to come, apparently.
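For reference, the plain energy-ratio form of SDR is simple to state - a sketch below; note that the official BSS Eval SDR used in SiSEC first decomposes the estimation error into interference and artifact components, which this sketch skips:

```python
import numpy as np

def sdr(reference, estimate):
    """Plain signal-to-distortion ratio in dB: target energy over
    energy of the estimation error (no BSS Eval decomposition)."""
    error = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(error ** 2))
```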

From our gang, my students Will and Delia presented their posters and both went really well. Here's the photographic evidence:

  • Delia Fano Yela's poster about source separation using graph theory and Kernel Additive Modelling - read the preprint here
  • Will Wilkinson's poster "A Generative Model for Natural Sounds Based on Latent Force Modelling" - read the preprint here

Also from our research group (though not working with me), Daniel Stoller presented a poster as well as a talk, getting plenty of interest in his deep learning methods for source separation - preprint here.

| science |

Thinning distance between point process realisations

The paper "Wasserstein Learning of Deep Generative Point Process Models" published at the NIPS 2017 conference has some interesting ideas in it, connecting generative deep learning - which is mostly used for dense data such as pixels - together with point processes, which are useful for "spiky" timestamp events.

They use the …

| science |

My PhD research students

I'm lucky to be working with a great set of PhD students on a whole range of exciting topics about sound and computation. (We're based in C4DM and the Machine Listening Lab.) Let me give you a quick snapshot of what my students are up to!

I'm primary supervisor for …

| science |

IBAC 2017 India - bioacoustics research

I'm just flying back from the International Bioacoustics Congress 2017, held in Haridwar in the north of India. It was a really interesting time. I'm glad that IBAC was successfully brought to India, i.e. to a developing country with a more fragmented bioacoustics community (I think!) than in the west …

| science |

Review of "Living Together: Mind and Machine Intelligence" by Neil Lawrence

In the early twentieth century when the equations of quantum physics were born, physicists found themselves in a difficult position. They needed to interpret what the quantum equations meant in terms of their real-world consequences, and yet they were faced with paradoxes such as wave-particle duality and "spooky action at …

| science |
