
How to prepare yourself for sonic weapons at public protests?

I was concerned to see news of sonic weapons apparently used against the public attending a peaceful vigil in Belgrade. These are powerful tools that could ruin many people's hearing and their lives. I don't know the details of the protest, but I don't like the idea of governments dabbling in how to use these tools against civilians.

I don't expect this to happen in my country. However, in these increasingly turbulent times, we should prepare for such things to happen - at least to someone, somewhere. There's not much public info about sonic weapons, so an average innocent citizen has almost no way to know what the dangers are or how to protect themselves against harm.

I study sound, so I believe I can offer some thoughts. I don't study anything remotely like these weapons' hardware, so please consider me only a "half-expert": I have studied sound analysis for around 20 years, BUT primarily recorded/digital sound, not the physical acoustics of devices or their effects on people. Please feel free to find better expert advice - and to tell me about it! Collectively, we should build up some public knowledge on this topic.

Background: LRAD and vortex cannon

I am actually referring to two different technologies that have recently been acquired by police forces and militaries:

  1. LRAD ("long range acoustic device") or "sound cannon": a device that emits a narrow beam of sound energy, unlike a standard loudspeaker which radiates energy widely. The narrow focussed beam means that sound can be projected much, much farther and with higher loudness than you might expect. They can be used simply to project audible sound such as a spoken message, but can also be used at high energy to hurt or disorient you.

    (The "LRAD" is a specific commercial product which uses a technique to encode the sound into ultrasound, which then becomes audible when it hits a target. In this article I'm using the term "LRAD" quite generically.)

  2. Vortex cannon or vortex ring: a device that creates the same phenomenon as the "smoke ring" that cigar smokers used to blow... but with higher energy. The physics of a vortex ring means that the energy doesn't dissipate much, and the invisible ring (invisible because there's usually no smoke) can travel a long way through a street or into a crowd, with a loud and disorienting "whooshing" sound. I believe that this would have more physical force than an LRAD - you might actually feel it pushing against you as a sudden wave of wind.

These are two very different technologies. But what they have in common is the ability to project energy "invisibly" along a straight line, and a risk of hearing damage. For technical details about how both of these things work, see this article by Black Mountain Analysis.

I don't have any special knowledge of the events in Belgrade on March 15th 2025, but from this analysis by Netzpolitik and the audio analysis shown in the first article I linked, it seems there was at least one LRAD mounted on a jeep, BUT in fact the mysterious loud sound which scattered the crowd is more likely to have been a vortex cannon. (Though I didn't see any footage that would show the device that produced it.)

Differences between LRAD and vortex cannon

The vortex cannon is different from the LRAD in how it operates, but also in its effect on you.

The LRAD can project almost any sound, e.g. voices or tones, and I don't believe it's restricted to short bursts. If one of these was fired at you, I think you would hear that sound, coming mysteriously from an unknown direction. It would also be a harsh distorted version of the sound (not a "hi-fi" version).

The vortex cannon, however, is all about projecting a short burst. Looking at the videos from Belgrade, reading the eyewitness accounts, and listening to the sound recordings, you get a consistent message: people heard a very loud roaring/whooshing sound, like a jet engine or a military vehicle suddenly roaring at them at very close range. The effect on the crowd is notable, especially that people all seem to run in a unified direction, so suddenly that they can't all just be following each other. It must be caused by a strong perception that "the jet engine is behind you" from one particular direction, which will be the "middle" of the line of fire.

Some thoughts on how to be prepared

PREPARE IN ADVANCE

  • Listen to some examples of what these devices sound like. This way, if it happens to you, you're likely to know what's going on.
  • Look at some of the videos (e.g. from Belgrade) to see how crowds tend to react to these devices. Think about how you could be safe in that situation.

ON THE DAY

  • Take some hearing protection with you. Don't rely on active-noise-reduction ("noise-cancelling", ANR) headphones, because I don't think they would respond well to high-energy, high-bandwidth bursts. I don't believe they would protect you. ANR can also make it harder to understand ordinary sound events in a busy situation. Instead, use proper physical hearing protection: ear defenders would be best, but of course they're quite bulky and would mean that you can't hear much at all. You could also use the kind of earplugs that people use for sleeping, made of foam or gel or wax for example -- those would be better than the ones people use for listening to music, but much less protection than ear defenders.
  • Maybe you don't want to be wearing ear defenders on every public protest. Indeed you shouldn't have to! You could consider wearing an earplug in one ear when you're in a low-risk situation but want to be a little bit careful.
  • Know how to spot the acoustic devices. Often mounted on a van or jeep, they're not very big, but they do also need a non-trivial electric supply.

REACT WELL TO A SONIC WEAPON

  • The sonic weapon seems to cause a sudden "flow" of the crowd. Be aware of this - you'll probably be safest if you move with the crowd a little, but try not to panic, try to stay standing, and please protect anyone who falls over.
  • Put your hands over your ears, if they're un-protected.
  • Get out of the direct line of the shot. Normal sound goes around corners pretty well, but these devices use "coherent" phenomena whose effects mostly fall apart when dissipated around obstructions. So, hiding behind a car or some other big bulky object would help.
  • Since these devices are currently so new, many people will not know what's happened. You should tell them. Shout "sonic weapon" or some phrase that will quickly convey the situation.
  • The forces that shot the sonic weapon: what will they do next? There are a lot of unknowns. Maybe nothing. But it occurs to me that the effect seen in the Belgrade videos was to clear a channel down the middle of the "beam". I wouldn't be surprised if some forces use this to clear a channel, and then send personnel rushing in through that cleared channel.
  • Will there be another shot immediately after? I don't know how quickly these things can be re-charged. A vortex cannon uses fuel, which has a finite supply. I wouldn't expect a lot of rapid shots.

AFTER CARE

The negative effects of these weapons are broadly the same as "ordinary" loud noise trauma. Short-term hearing problems which might or might not become long-term. Disorientation and nausea.

  • Follow the standard advice for looking after your hearing after traumatic loud noise. I don't have that advice, but you'll find it online. Give your hearing some time to recover. Visit your doctor.

...

That's all I have for now. I hope that there are better experts than me out there who can document more precisely what is known and what can be expected of these things. And I hope governments and other forces will act with restraint and care.

| sound |

Inviting you all to a reveil

Tomorrow (Saturday) is the 2020 edition of Soundcamp's Reveil. I've been attending it for the past few years and I heartily recommend it as a 24-hour immersion into the sound of dawn chorus bird song from around the world.

The core of the Reveil is a 24-hour live audio stream, which is actually made of live streams piped from dozens of locations around the world. Over the course of 24 hours, from 5am Saturday to 5am Sunday, we hear the birds waking up and singing in amazing and very different soundscapes, familiar and unfamiliar.

We're really lucky that the livestream Reveil is something that works perfectly even without us all meeting up in person. I'll be tuning in (you can listen via the website, or via Resonance Extra, or even via a UK telephone number - listening info is here) and I'll also be joining in the text chat where people will be discussing the soundscapes, as well as the art pieces and talks scheduled to run throughout Saturday. Do join!

| sound |

Soundcamp 2016

This weekend I was invited to take part in an event called Soundcamp. Let me quote their own description:

Soundcamp is a series of outdoor listening events on International Dawn Chorus Day, linked by Reveil: a 24 hour broadcast that tracks the sounds of daybreak, travelling West from microphone to microphone on sounds transmitted live by audio streamers around the globe.

Soundcamp / Reveil will be at Stave Hill Ecological Park in Rotherhithe from the 30th of April to the 1st of May 2016, and at soundcamps elsewhere in the UK and beyond.

So as an experience, it's two things: a worldwide live-streamed radio event that you can tune into online, and also if you're there in person it's a 24-hour outdoor event with camping, talks and workshops, with a focus on listening and on birdsong.

(Photos here are by @lemon_disco)

There was a great group of people involved. I was very happy to be on the bill with Geoff Sample the excellent bird recordist, and with Sarah Angliss the always-entertaining musician/roboticist/historian. We each spoke about bird sound from our own different angles and I think it was a really good mix of perspectives. There was also Jez Riley French the field recordist, who led a workshop on ultrasonic underwater sound, and Iain Bolton who took us on a bat walk: the immersive sound of multiple bat detectors clicking and squeaking away around the pond at dusk was much more of a sonic experience than I expected, quite memorable.

For myself - well, I talked about our work on automatic bird sound recognition, in particular our app Warblr: how it works, and how it has been used by people. But more than that, it was a great opportunity to think about how we listen to sound. Trying to get computers to make sense of sound is a good way to emphasise what's so strange and amazing about our own powers of listening.

It was also a perfect setting for the little collaboration Sarah Angliss and I put together last year. Sarah has built a robot carillon, a set of automated bells, and we worked together to transcribe birdsong automatically into musical scores for the bells. These bells, singing away in the corner of the park, with the warm spring weather and the real birdsong all around, were right at home.

At dawn on the Sunday we took a dawn chorus walk. It was an interesting thing to do, and the star of the show was undoubtedly when we reached the end of the walk, almost ready to go back, and a grasshopper warbler sang out loud and long and strange - an unfamiliar sound to me, and apparently the first time anyone had heard one around there! Is it an insect, a bird, a piece of machinery...?

The main Soundcamp organisers - Grant Smith, Maria Papadomanolaki, Dawn Scarfe - brought their own great and really thoughtful approaches to listening too. Grant and his son led a workshop on making a soundscape streaming device, really quite simply with a Raspberry Pi and a couple of microphones (based on Primo EM172 capsules). I've been really impressed by the quality of the sound field they get from a pair of mics stuck in a section of poster tube.

Here's Maria mixing the radio stream, in the temporary on-site studio:

There are more photos here from Dawn Scarfe.

| sound |

Audio phenomenon: Schroeder-phase complexes

Here's an audio phenomenon you should know about: Schroeder-phase complexes.

These are harmonic series which are designed so that their amplitude envelope is maximally flat. When you synthesise a harmonic series of partials, you know what frequencies you should use for the component sinewaves: F, 2F, 3F, 4F, etc. But what phase should you use?

Often we stick with a simple default, such as starting every partial at zero phase. There's a drawback to that, though, which can cause trouble in perceptual tests: the amplitude envelope, within one pitch period, is quite bumpy, because there are moments when the component phases all line up to produce a strong amplitude peak. Sometimes this bumpiness leads to experimental confounds.

One thing you could do to work around this is use random phases, but adding this extra randomness into an experiment is usually not that desirable.

In 1970 Schroeder published a formula for choosing the phases so that the resulting waveform has a minimal crest factor, i.e. no big amplitude peaks. The formula is pretty simple but my blog doesn't render equations yet so see e.g. this paper.

Let me prove this to you directly: here I've synthesised the same harmonic sound with five different choices of phase. The top row, "sine-phase" and "cosine-phase" correspond to two versions of the default phase-aligned choice, and look how spiky they are:

Waveforms

In the middle is random phase, and at the bottom are two plots from Schroeder-phase. Please note that the y-axis has different scales in each plot - the waveforms each have the same energy, and the same Fourier-transform magnitudes, despite looking very different!

The reason that there are TWO Schroeder plots is because we have an option to flip the sign (time-reverse the waveform) while preserving the waveform characteristics. The shorthand label that people sometimes use is that one of these is "Schroeder-plus" and one is "Schroeder-minus".
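To make the recipe concrete, here's a minimal Python sketch of the idea (my own illustration, not the SuperCollider implementation I link at the end): it builds an equal-amplitude harmonic complex twice, once with all-zero phases and once with Schroeder phases chosen as sign * pi * n * (n - 1) / N, and compares the crest factors.

```python
import numpy as np

def schroeder_phases(n_partials, sign=-1):
    # Schroeder (1970): phi_n = sign * pi * n * (n - 1) / N.
    # sign=-1 gives "Schroeder-minus", sign=+1 gives "Schroeder-plus".
    n = np.arange(1, n_partials + 1)
    return sign * np.pi * n * (n - 1) / n_partials

def harmonic_complex(f0, n_partials, phases, sr=48000, dur=0.5):
    # Equal-amplitude partials at F, 2F, 3F, ... with the given start phases.
    t = np.arange(int(sr * dur)) / sr
    n = np.arange(1, n_partials + 1)[:, None]
    return np.sum(np.sin(2 * np.pi * n * f0 * t + phases[:, None]), axis=0)

def crest_factor(x):
    # Peak amplitude relative to RMS; lower means a flatter envelope.
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

N = 30
sine_phase = harmonic_complex(100, N, np.zeros(N))
schroeder = harmonic_complex(100, N, schroeder_phases(N))
# Both signals have identical magnitude spectra and identical RMS energy,
# but the Schroeder version has a much lower crest factor.
```

The two output signals differ only in phase, yet the zero-phase one is spiky within each pitch period while the Schroeder one stays flat - exactly the difference shown in the waveform plots above.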

BUT WAIT there's one weird thing I haven't shown you yet, and it pops out when you listen to the examples. These stimuli can be used to find frequency thresholds - at low frequency we can tell the difference, but at high frequency they sound identical. And the weirdest thing is that when you listen to them at very low frequencies, they don't sound like static harmonic complexes at all (even though that's definitely how we generated them): they sound like otherworldly down- or up-chirps.

Listen to this audio file where I play a plus and a minus, at different frequencies. First at 300 Hz, then at 65 Hz, then 16 Hz, then 2 Hz. At first you'll hear two essentially identical tones, but then the differences become noticeable, and then overwhelming:

Download the wave file

It's a nice demonstration of the fact that any periodic signal can be conceived as a sum of stationary sinusoids - as in Fourier analysis. Here we synthesised a chirpy nonstationary-sounding (but periodic) signal, starting from scratch from the sinusoids.

My implementation is here as SuperCollider code, inspired by this paper: Phase effects on the perceived elevation of complex tones.

| sound |

You're not testing a bird recognition app if you're not testing it with birds

So, our Warblr bird sound recognition app has been out for almost a month, and we've had many thousands of people using it and submitting bird audio recordings (thanks!). We've also had lots of great reviews in the consumer press. (Listen to this evocative piece on BBC Radio Scotland, fast-forward to 1hr 43.)

One thing which we knew was going to happen was that some people would demo it by playing back sound recordings into the mic, rather than recording actual birds. After all, sound recordings are easier to grab... What I didn't realise, from my own perspective, is that people would think this was a good way to test the app.

Playing back recordings is usually a really bad way to test the app, or any sound recognition app really, because recorded sounds differ in many many ways:

  • Often people test it with low-quality audio recordings (encoded badly or squished as MP3s or YouTube videos). There are lots of recordings out there on the web which are noticeably distorted or over-filtered.
  • Usually people use low-quality speakers to play back (laptops, phones) which miss out some of the audio content, or again distort it.
  • Usually the audio environment around the playback is inappropriate (e.g. a chiffchaff in the kitchen!) which means the sound contains misleading information.

All of these things make the audio drastically different from a genuine direct recording, even though our human ears are clever enough to understand the correspondence. Yes, ideally a system would be as clever as our human ears, but that's for the future. (Note the difference from a product like Shazam, which recognises recordings but does not recognise the real live musician... interesting eh!)

Plus there's yet another aspect to consider: we make use of your location to help determine what kind of bird is likely. This is thanks to the BTO whose amazing crowdsourced bird data helps us know which birds to expect where and when. So, if you're playing a sound file that isn't native to where you are, our system is doubtful that the bird is there... and quite rightly doubtful, perhaps.
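As a toy illustration of that last point (purely illustrative names and numbers, not our production code), re-weighting an acoustic classifier's scores by location/date occurrence priors looks something like this:

```python
# Toy sketch: re-weight acoustic species scores by occurrence priors
# for the user's location and date, then re-normalise.
def rerank(acoustic_scores, occurrence_priors):
    """Both arguments: dicts mapping species name -> probability-like score."""
    weighted = {sp: acoustic_scores[sp] * occurrence_priors.get(sp, 1e-6)
                for sp in acoustic_scores}
    total = sum(weighted.values())
    return {sp: w / total for sp, w in weighted.items()}

# A bird that sounds ambiguous but is common here gets boosted:
rerank({"robin": 0.5, "wren": 0.5}, {"robin": 0.9, "wren": 0.1})
# -> robin ends up at roughly 0.9, wren at roughly 0.1
```

A species that is acoustically plausible but essentially absent from your area at that time of year ends up with a tiny posterior score, which is why playing foreign recordings at your phone confuses things further.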

I can't emphasise enough that playing back recorded sounds is not the best way to test. We can't prevent people from doing this, of course! That's fine, but always bear in mind that you didn't test it in proper field conditions, only at your desk. You're not testing a bird recognition app if you're not testing it against real wild birds...

| sound |

How to analyse pan position per frequency of your sound files

Someone on the Linux Audio Users list asked how they could analyse a load of FLAC files to work out whether it was true for their music collection that bass frequencies (below about 150 Hz, say) tended to be centre-panned. Here's my answer.

First of all, coincidentally I know that Pedro Pestana published a nice analysis of exactly this phenomenon, at the AES 53rd conference recently. He actually looked at hundreds of number-one singles to determine the relationship between panning and frequency in the habits of producers/engineers for popular tracks. The paper isn't open access unfortunately but there you go.

So anyway here's a Python script I just wrote: script to analyse your audio files and plot the distribution of panning per frequency. And here's how it looks when I analyse the excellent Rumour Cubes album:

(Just to stress, this is a simple analysis. It simply looks at the spectral representation of the complete mix, it doesn't infer anything clever about the component parts of the mix.)
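For the curious, the core measurement can be sketched in a few lines (a simplified standalone version for illustration, not the exact linked script): take short-time Fourier transforms of the two channels and compare magnitudes per frequency bin.

```python
import numpy as np

def pan_per_bin(left, right, sr, n_fft=2048):
    """Average pan index per frequency bin for a stereo pair of signals.

    Pan index = (|R| - |L|) / (|L| + |R|) per STFT bin:
    -1 = hard left, 0 = centre, +1 = hard right.
    """
    hop = n_fft // 2
    win = np.hanning(n_fft)
    pans = []
    for start in range(0, len(left) - n_fft, hop):
        L = np.abs(np.fft.rfft(win * left[start:start + n_fft]))
        R = np.abs(np.fft.rfft(win * right[start:start + n_fft]))
        pans.append((R - L) / (L + R + 1e-12))  # avoid divide-by-zero
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    return freqs, np.mean(pans, axis=0)
```

Feed it the two channels of a decoded FLAC (e.g. via the soundfile library) and you can histogram or plot the pan index against frequency, which is essentially what the plots here show.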

See any patterns? The pattern I was looking for is a bit subtle, but it's right down at the bottom below 100 Hz (i.e. 0.1 kHz on the scale): the bass tends to "pinch in" and not get panned around so much as the other stuff.

This analysis of Lotus Flower by Radiohead (by Daniel Jones) shows the effect more clearly.

This is what's generally observed, and widely known in mixing engineer "folklore": pan your bass to the centre, do what you like with the rest. Not everyone agrees on the reasons: some people say it's because the bass can cause the needle to skip out of vinyl records if it's off-centre, some people say it's because we can't really perceive the spatialisation very well at low frequencies, some people say it's just to maximise the energy in the mix. I have no comment on what the reasons might be, but it's certainly folk wisdom for various audio people, and empirically you can test it for yourself by analysing some of your music collection.

NOTE: Code and image updated 2014-02-08, thanks to Daniel Jones (see comments below) for spotting an issue.

| sound |

Embedded acoustic environments (Barry Truax)

This weekend I was at the Symposium on Acoustic Ecology. Interesting event, but here I just want to note one specific thing from Barry Truax, who gave a keynote as well as a new composition.

Truax has a pretty nice way of talking about acoustic structure at different scales. As …

| sound |
