Reverse-engineering the rave hoover

One of the classic rave synth sounds is the "hoover", a sort of slurry chorussy synth line like the classic Dominator by Human Resource. I decided to recreate that sound from scratch, by reverse engineering the sound and recreating it in SuperCollider. Here's how to do it:

Step 1: Listening

Analytical listening doesn't come naturally but it gets easier with practice. Listen to the target sound again and again, and try and get a feel for the different technical aspects of it. Does it sound bright or dull? Is there vibrato, tremolo, chorusing?

When I listened to the opening sound in Dominator I made the following notes:

  • There's a main sound (the slightly fuzzy chorussy synth) plus some added bass underneath. They could be part of the same sound-maker, but to me they sound like separate elements, although the pitch of the two is clearly locked together. The bass might be produced by a separate bass oscillator, or by boosting the bass frequencies in the main synth, or maybe the main synth just has some octave-doubling in it.
  • It's probably created by playing notes on a piano-style keyboard but adding a massive slur (a "portamento") to the pitch values so that they blur into each other rather than staying separate and steady. The underlying pattern sounds like it could be a three-note descending line.
  • What sort of oscillator (sine, square, triangle, saw) might be the basis of the main synth line? It's a buzzy sound and fairly classic analogue synth-sounding so it's probably one of those at base. My first guess was "square" but when I implemented it (described later) this wasn't sounding right, and I settled on "saw" as being much closer to the original sound. (But see the "Update" at the bottom of this article - it turns out the original synth used a hybrid of saw+square!)
  • You can hear some kind of pulsation in the sound, even when it reaches the "stable" bit of the line. The pulsation has a period of about 3 cycles per second (3 Hz). I feel like there is vibrato happening as well as chorusing. Vibrato means directly wobbling the pitch at a slowish rate. A chorus effect (in a synth like this) causes the slow-rate pulsations by blending oscillators at slightly different frequencies: if you play an oscillator at 100 Hz and one at 103 Hz, you get a sound with a 3 Hz pulsation (==103-100) as the oscillators drift in and out of phase with each other.
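
To hear that beating effect in isolation, here's a minimal sketch of my own (not part of the original patch): two sine oscillators 3 Hz apart, which produce a 3 Hz pulsation as they drift in and out of phase.

```supercollider
// Two sines 3 Hz apart beat at 3 Hz (== 103 - 100)
x = { SinOsc.ar(100, 0, 0.1) + SinOsc.ar(103, 0, 0.1) }.play;
```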

Step 2: Visualise pitch, spectrum, waveform

There are various programs that can visualise the contents of audio. I like Sonic Visualiser. Just download it, use it to open an audio file, and then you have lots of nice tools like spectrograms, chromagrams, spectrums, pitch trackers....

So that's what I did. By visualising the original ("time-domain") signal and its spectrogram, I could first estimate the durations of things. The vibrato in the sound was indeed about 3 Hz: 0.335 seconds per cycle in my measurement. The duration of the main line was 1.929 secs, or 0.2412 secs per note if you divide that time into 8 equal steps. (I decided that the line was probably sequenced from a series of 8 notes, although it's hard to tell.)
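
You can sanity-check that arithmetic straight in SuperCollider:

```supercollider
0.335.reciprocal;  // ≈ 2.99, i.e. the pulsation is roughly 3 Hz
1.929 / 8;         // ≈ 0.2412 seconds per note
```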

If you find a pitch detector in the Transform menu you can get a computerised estimate of the fundamental frequency ("f0") and how it changes over time. In the following I've used two different pitch trackers (Yin, and Yin-with-fft) and completely coincidentally, the one marked in green seems to be finding the bass note while the one marked in purple seems to be finding the main synth note (with a few errors):

OK, so it's quite likely that our bass notes are around 70--80 Hz and our synth notes are around 280--300 Hz. This fits with my expectations after having some experience in this, but try synthesising specific tones for yourself if you're unsure:

  x = {SinOsc.ar(290, 0, 0.1)}.play

In the picture above, notice how both curves seem to run closely in parallel, and also, in between the repetitions they seem to do a "scoop" right down to some very low pitch indeed.

OK. Next let's look at harmonics. If you choose Pane > Melodic Range Spectrogram you get to see this frequency analysis:

The lowest, thickest, yellowest band at the bottom is the fundamental of the bassline. It matches up with the curve we got from the pitch-tracker.

If the bassline were a pure sinewave then it would have no harmonics stacked above it, but that's not the case here. The next line above is fainter, and the one above it fainter still, and these are the next two harmonics of the bass tone (at f0 * 2 and f0 * 3 respectively).

Just above there we see the fundamental of the main synth sound, at about 290 Hz, which has plenty of harmonics stacked above it too.

How do we know which is a fundamental, and which harmonics belong with it? I had a bit of luck with the pitch-trackers, but it's not always that easy. The fundamental is usually the lowest and strongest component, and each harmonic has a frequency that is an integer multiple of the fundamental. In this case it becomes a little more complicated, because I've asserted that there are two different synth tones (the bass does sound separate, to me). The trace at about 290 Hz is pretty strong, making it a good candidate for being the fundamental of our main mid-range synth.

If you look at the 290ish trace you can also see that the strength of that note seems to be wibbling in and out in a very stable pattern (about 3 Hz) - in other words it looks like a pattern of blobs rather than a steady yellow curve. So there must be some kind of modulation happening. From a spectrogram I wouldn't be sure if this was due to tremolo, chorusing or vibrato. But in combination with listening, I feel pretty confident that the chorusing is doing most of that.

So what do we know so far? Looks pretty clear that we have a main synth playing notes around the 290-300 Hz mark, with chorusing and probably some vibrato, and plenty of harmonics. We also have a bass synth whose fundamental is one-quarter of that frequency, i.e. two octaves below, and which also has some harmonics.
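
As a quick check of those numbers, here's a bare-bones sketch of my own (no chorusing or portamento yet): a saw at roughly 290 Hz plus a sine two octaves below.

```supercollider
x = {
  var freq = 290;
  // Main saw plus a simple sine bass at one-quarter of the frequency
  Saw.ar(freq, 0.1) + SinOsc.ar(freq * 0.25, 0, 0.2)
}.play;
```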

Step 3: Trying to reconstruct

1: A Saw wave with chorusing

We want a Saw wave with chorusing at 3.87 Hz. So how do we do that? Simply take the main frequency and add/subtract multiples of 3.87 Hz to create a set of slightly different frequencies. The following simple SuperCollider patch does that, as well as printing the frequencies for you to see:

x = {
  var freq = 400;
  freq = freq + (3.87 * [-1, 0, 1]);
  "Frequencies: %".format(freq).postln;
  Saw.ar(freq).mean * 0.1
}.play;

Listen out for the regular pulsing sound, which should be happening 3.87 times per second.

In the above we created 3 Saw oscillators, using the array expansion starting from [-1, 0, 1]. Try using [-2, -1, 0, 1, 2] to increase it to 5 different oscillators, and see how the sound changes.
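
For example, the five-oscillator version of the patch would look like this:

```supercollider
x = {
  var freq = 400;
  // Five detuned copies, spaced 3.87 Hz apart
  freq = freq + (3.87 * [-2, -1, 0, 1, 2]);
  "Frequencies: %".format(freq).postln;
  Saw.ar(freq).mean * 0.1
}.play;
```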

This chorussy sound is going to be the basis of the main synth but it'll need some tweaking before it sounds like the Dominator...

2: Vibrato

At first I was sure there was some vibrato in there. You can add vibrato easily in the above patch, by inserting this line just before the Saw line:

  freq = freq + LFPar.kr(1, 0, 10); // strong vibrato @ 1 Hz

...but I'm actually not sure if I was right, or if it's actually just chorusing that causes the wobbliness in the sound. Anyway, we will add a little bit and see how we go.
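
Putting the vibrato and chorusing together, the patch so far might look like this (the 10 Hz depth is deliberately exaggerated so you can hear it):

```supercollider
x = {
  var freq = 400;
  freq = freq + LFPar.kr(1, 0, 10);   // vibrato: wobble the pitch at 1 Hz
  freq = freq + (3.87 * [-1, 0, 1]);  // chorus: three detuned copies
  Saw.ar(freq).mean * 0.1
}.play;
```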

3: Portamento

The pitch needs to slide around so that when we change midinote (e.g. hit a different note on a midi keyboard), it converges to that note within 0.1 seconds or so instead of changing instantly. It also needs to slide down to zero and back up again in that characteristic way which contributes so strongly to the overall feel of the synth line. (Sliding down to zero is probably what happens when you release the key on the original synth?)

In SuperCollider there are plenty of ways of doing portamento but I haven't quite hit the perfect one yet for this sound. The standard way is to use the "lag" message, e.g.

  freq = freq.lag(0.1, 0.2);

which would apply an exponential lag so that the frequency takes 0.1 seconds to reach a new target if we're increasing the frequency, or 0.2 seconds if we're decreasing the frequency. There's also Ramp.kr() which can apply a linear lag rather than exponential. But so far, I haven't quite found a lag shape/time that quite seems to match what the original is doing. However, either of these two ways works fine and certainly gets us very close.

Here's a simple example of portamento:

x = {
  var freq = Duty.kr(0.3, 0, Dseq([50, 55, 56, 52, 56, 28].midicps, inf));
  //freq = freq.lag(0.4, 0.3);
  Saw.ar(freq) * 0.1
}.play;

Play it once, then un-comment that middle line to activate the portamento and play it again.

4: How do we do the bass?

I wasn't sure if the bassy part was made with a simple siney bass oscillator, or with a saw wave similar to the main synth, so I had to try both out.

As a simple example, here's a saw wave with a bassline added one octave below (freq * 0.5) using a second Saw:

x = {
  var freq = Duty.kr(0.3, 0, Dseq([50, 55, 56, 52, 56, 28].midicps, inf));
  (Saw.ar(freq) + Saw.ar(freq * 0.5)) * 0.1
}.play;

Now, change the bass oscillator from Saw to SinOsc and see what difference it makes to the sound. To my ears, one of these options kind of merges the two sounds while one of them produces parallel sounds with two different characters.
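
For comparison, here's the variant with the bass switched to a sine oscillator:

```supercollider
x = {
  var freq = Duty.kr(0.3, 0, Dseq([50, 55, 56, 52, 56, 28].midicps, inf));
  // Sine bass one octave below the main saw
  (Saw.ar(freq) + SinOsc.ar(freq * 0.5)) * 0.1
}.play;
```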

5: What notes do we play?

The pitch-tracker stuff from earlier can be useful in deciding which notes to play. If we have a frequency like 290 Hz, we can use that directly in SuperCollider, or we can convert it to a midi note number using 290.cpsmidi and we get an answer of about 62.
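
You can check that conversion directly in SuperCollider:

```supercollider
290.cpsmidi;  // ≈ 61.78, so roughly midi note 62
62.midicps;   // ≈ 293.66 Hz, the exact frequency of that note
```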

Another nice feature in Sonic Visualiser is the Chromagram, which takes spectrogram data and warps+wraps it so that you can see how much energy there is at frequencies corresponding to traditional western musical notes:

From this image we can be quite confident that D is the note that our sustained sound reaches, during the phrase. And this matches up with what we guessed from the pitch - midi note 62 is the note named "D3".

So, with a lot of guesswork I ended up with the following list of 8 midi notes:

  [40, 67, 64, 62, 62, 62, 62, 62]

The "40" is an almost-arbitrary low note which drags the pitch down so it can slur back up again. Then we slur up to 67 ("G3"), then 64 ("E3") before our held note at 62.

Step 4: Compare your version against the original

If you put all the above components together you can create something approximating the sound at the start of the Dominator track. I did this, then I recorded the result to an audio file and loaded that into Sonic Visualiser as well. Then I could visually compare the results: Do the pitch traces look similar and curve in the same way? Do the harmonic strengths on the spectrogram seem to match up, or are some too strong? Is the overall "spectral slope" (relative strength of low vs high frequencies) OK? Does the chromagram show that I'm producing the same notes?

Step 5: Tweak, test, iterate

Of course you can then iterate this. Take your results and improve your model. The main tweaks I had to make, on top of my first attempt, were:

1: Bass not loud enough. I hadn't realised how much the bass amplitude almost dwarfs that of the main synth. Probably this is because the main synth captures my attention more easily, so in my listening I give it prominence despite it being relatively weak.

2: Choice of bass oscillator. Generating the bass sound via the Saw oscillator (the same as the main synth) just sounds totally wrong. Simply synthesising using a sine-oscillator, plus a couple of quieter sine-oscillators for the harmonics which I saw on the spectrogram, gets it much better. (This is the classic additive synthesis approach, BTW: whatever you want, add sinewaves together until you've got it...)
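
A minimal additive-synthesis sketch of that bass idea (the fundamental of about 73 Hz and the harmonic amplitudes here are my own rough guesses from the spectrogram):

```supercollider
// Sine fundamental plus two quieter sines for the harmonics
x = {
  var freq = 73;  // roughly the bass fundamental measured earlier
  SinOsc.ar(freq * [1, 2, 3], 0, [0.2, 0.06, 0.04]).sum
}.play;
```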

3: EQ. The EQ balance is slightly different - my version needs more mid-range boost. Of course, most records have EQ added to their sound during production, they don't just use whatever comes straight out of the synth, so a little bit of this tweaking was likely to be needed.

4: Vibrato too soon, too much. The vibrato was a bit heavy, so I lowered the depth of the vibrato. In particular, it sounds good on the "held" note but when applied to the changing note it just sends the pitch wibbling around in an out-of-tune-sounding way, so I needed the vibrato to only come in after the main pitch has been held steady for a while.

5: Portamento. I haven't managed to get the portamento just right yet. As described above, the pitch curve seems to happen slightly differently in the original, compared against my version. There must be some simple combination of parameters that gets it, but I haven't quite hit it yet.

6: Goes bad when goes deep. The sound was pretty good on the notes and the way it slurred downwards, but after it slurred right down it became a rough kind of sound that's completely absent from the original. How to fix? Well in the end I added a low-pass filter connected to the main frequency, such that as the frequency drops right down, this low-pass filter squashes everything until at the bottom of the trough there's no sound remaining.

The final sound:

It isn't perfect but it's got most of the elements there.

There's some difference in the onset which I haven't yet got right: when the note kicks in and the pitch slurs up from zero, the original sound has more punch to it, as if perhaps there's some resonance happening in the filters due to the freq sweep, or perhaps just some ADSR shaping or suchlike.

SynthDef(\dominator, { |freq=440, amp=0.1, gate=1|
    var midfreqs, son, vibamount;

    // Portamento:
    freq = freq.lag(0.2, 0.6);
    // you could alternatively try:
    //  freq = Ramp.kr(freq, 0.2);

    // vibrato doesn't fade in until note is held:
    vibamount = EnvGen.kr(Env([0,0,1],[0.0,0.4], loopNode:1), HPZ1.kr(freq).abs).poll;
    // Vibrato (slightly complicated to allow it to fade in):
    freq = LinXFade2.kr(freq, freq * LFPar.kr(3).exprange(0.98, 1.02), vibamount * 2 - 1);

    // We want to chorus the frequencies to have a period of 0.258 seconds
    // ie freq difference is 0.258.reciprocal == 3.87
    midfreqs = freq + (3.87 * (-2 .. 2));

    // Add some drift to the frequencies so they don't sound so digitally locked in phase:
    midfreqs = midfreqs.collect{|f| f + (LFNoise1.kr(2) * 3) };

    // Now we generate the main sound via Saw oscs:
    son = Saw.ar(midfreqs).sum 
        // also add the subharmonic, the pitch-locked bass:
        + SinOsc.ar(freq * [0.25, 0.5, 0.75], 0, [1, 0.3, 0.2] * 2).sum;

    // As the pitch scoops away, we low-pass filter it to allow the sound to stop without simply gating it
    son = RLPF.ar(son, freq * if(freq < 100, 1, 32).lag(0.01));

    // Add a bit more mid-frequency emphasis to the sound
    son = son + BPF.ar(son, 1000, mul: 0.5) + BPF.ar(son, 3000, mul: 0.3);

    // This envelope mainly exists to allow the synth to free when needed:
    son = son * EnvGen.ar(Env.asr, gate, doneAction:2);

    Out.ar(0, Pan2.ar(son * amp))
}).add;

// This plays the opening sound:
p = Pmono(\dominator, \dur, 0.24, \midinote, Pseq([40, 67, 64, 62, 62, 62, 62, 62], inf)).play;

// And this plays the main synth line in the track:
p = Pmono(\dominator, \dur, 0.24, \midinote, Pseq([55, 52, 67, 55, 40, 55, 53, 52], inf)).play;

If you don't have SuperCollider... well why don't you have SuperCollider? Anyway I've uploaded a recording of the reconstructed Dominator patch. See if you can improve it, or recreate some other synth.


This article triggered various discussions online, and Wouter Snoei produced a version which sounds even nicer. He found the manual for the actual synth used, which described the unusual "pwm'ed sawtooth" waveform used in the synth (a Roland Alpha Juno 2). See the article More Dominator Deconstruction for Wouter's synth, and a graph of the unusual wave shape.
