Making Reverb with the Web Audio API

“Making Reverb with Web Audio” is part of a series of blog posts on how to do sound design in Web Audio. Let’s start with one of the most useful effects, Reverb.

What is Reverb?

Reverberation, or reverb, is the continuation of a sound through countless reflections after the original sound stops. Almost every room you’ve ever been in has this effect. Your brain uses the reverberated sound to construct a 3D impression of the space.

To get rid of reverb, scientists build specialized rooms called anechoic chambers. Recording engineers do something similar: they build small padded rooms meant to “deaden” the sound while maintaining some of the natural reverb (because a room without any sounds strange).

To combine all the sounds into a single space during mixing, sound engineers use audio effects that simulate reverberations.

We refer to these effects as “Reverbs.”

Creating the Effect

Making a reverb with the Web Audio API is quite simple: you fill a convolution node with some decaying noise, and boom. Reverb. It sounds nice!

To create the tail, we render decaying noise into the convolver’s buffer using an OfflineAudioContext. Why render it instead of generating the noise directly? Because rendering lets us use filters, equalizers, and other effects to shape the frequency response of the reverb.

I’ve added a library of classes to simplify the explanation. Feel free to grab it and use it in your projects.

/// ... SimpleReverb Class
renderTail () {
    // Render decaying noise in an offline context, then hand the result
    // to the ConvolverNode as its impulse response.
    const tailContext = new OfflineAudioContext(
      2, this.context.sampleRate * this.reverbTime, this.context.sampleRate
    );
    const tailOsc = new Noise(tailContext, 1);

    tailOsc.init();
    tailOsc.connect(tailContext.destination);
    tailOsc.attack = this.attack;
    tailOsc.decay = this.decay;
    tailOsc.release = this.release;

    setTimeout(() => {
      tailContext.startRendering().then((buffer) => {
        this.effect.buffer = buffer;
      });

      // Trigger the noise envelope so it is captured in the render.
      tailOsc.on({frequency: 500, velocity: 127});
      tailOsc.off();
    }, 20);
}

What if we want more control over our reverb?

Anatomy of a Reverb

Reverb is a temporal event: a series of changes that happen to a single sound over time. Like waves in a pool, it doesn’t really have discrete “parts.” We can, however, identify patterns that exist and write code to simulate them.

Wet vs Dry

We need to understand a few terms to shape our sound. When I say “Wet,” I mean the affected signal; when I say “Dry,” I mean the unaffected signal. The ratio of wet to dry signal lets us control the perception of how far away a sound is.
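
As a quick sketch (not part of the library — `source`, `reverb`, and `context` here are placeholders for your own nodes), a wet/dry mix is just two gain nodes running in parallel:

// Wet/Dry mix sketch — `source` and `reverb` are stand-ins for your own nodes.
const dry = context.createGain();
const wet = context.createGain();

source.connect(dry);       // the unaffected path
source.connect(reverb);    // the affected path
reverb.connect(wet);

dry.connect(context.destination);
wet.connect(context.destination);

// More wet than dry pushes the sound deeper into the space.
dry.gain.value = 0.6;
wet.gain.value = 0.4;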

Early Reflections

As sound waves travel toward your ear, they are absorbed and reflected by the objects and surfaces they hit. Hard surfaces redirect portions of the sound towards your ear. Due to the speed of sound, these reflections arrive after the initial sound. In reverbs, these are called early reflections.

To simulate early reflections, we use a multi-tap delay. Each “tap” of the delay effect is a simulation of a surface reflecting the sound.

Early reflections allow you to control the illusion of how close or far away a sound is.
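
As an illustration, each tap is a DelayNode with its own arrival time and level. (This sketch wires the taps in parallel for clarity; the AdvancedReverb class below chains its taps in series. Again, `source` and `context` are placeholders.)

// Multi-tap sketch — each tap is one simulated reflection.
const taps = [0.019, 0.027, 0.041].map((time, i) => {
  const delay = context.createDelay(1);
  const gain  = context.createGain();
  delay.delayTime.value = time;      // when this reflection arrives
  gain.gain.value = 0.5 / (i + 1);   // later reflections are quieter
  source.connect(delay);
  delay.connect(gain);
  gain.connect(context.destination);
  return delay;
});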

Pre-Delay

The speed of sound is pretty slow. Depending on the size of the room you are simulating, it may take a moment for the collected reverberations to reach you. Pre-Delay adds a few milliseconds to the start of the reverb. Combined with the dry signal, this places the sound in the room.
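
In code, pre-delay is nothing more than a DelayNode in front of the reverb. Sound travels roughly 34 cm per millisecond, so the delay time loosely maps to room size (a sketch, with `source` and `reverb` as placeholders):

// Pre-delay sketch — a single DelayNode before the reverb input.
const preDelay = context.createDelay(1);
preDelay.delayTime.value = 0.03; // ~30 ms reads as a fairly large room

source.connect(preDelay);
preDelay.connect(reverb);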

Diffused Sound

Once a sound has bounced around for a while, it becomes “diffused.” With each bounce off a surface, energy is lost to absorption. Eventually, the sound turns into noise that no longer resembles the original. This noise is the reverb “tail.”
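
If you want to see what that tail looks like without the library’s Noise class, here is one hand-rolled way to build it (an illustrative sketch, not the article’s code): fill a buffer with white noise that decays over its length, and hand it to the convolver.

// Hand-rolled tail sketch — decaying white noise as an impulse response.
function makeTail(context, seconds = 2) {
  const rate = context.sampleRate;
  const buffer = context.createBuffer(2, rate * seconds, rate);
  for (let ch = 0; ch < buffer.numberOfChannels; ch++) {
    const data = buffer.getChannelData(ch);
    for (let i = 0; i < data.length; i++) {
      // Random noise, fading as the simulated energy is absorbed.
      data[i] = (Math.random() * 2 - 1) * Math.pow(1 - i / data.length, 3);
    }
  }
  return buffer;
}

// convolver.buffer = makeTail(context);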

To simulate these phenomena we’re going to use:

  • A Delay node for Pre-Delay
  • A multitap delay node for early reflections
  • A convolution node for the diffused sound (exactly as in the basic reverb)
  • Filtered noise for the convolution buffer.
  • Gain nodes to help us control the balance between the different pieces.

AdvancedReverb Setup

We’re going to add a few delay nodes for pre-delay and multitap.
One thing you might notice is that the multitap delay nodes bypass the convolver. The multitap simulates early reflections, so those sounds don’t need to pass through the reverberated signal as well.

// Advanced Reverb Setup
setup(reverbTime = 1, preDelay = 0.03) { // parameters were implicit in the original excerpt
    this.effect = this.context.createConvolver();

    this.reverbTime = reverbTime;

    this.attack = 0;
    this.decay = 0.0;
    this.release = reverbTime / 3;

    // Pre-delay: a single DelayNode in front of the convolver.
    this.preDelay = this.context.createDelay(reverbTime);
    this.preDelay.delayTime.setValueAtTime(preDelay, this.context.currentTime);

    // Early reflections: two delay taps chained in series.
    this.multitap = [];
    for (let i = 2; i > 0; i--) {
      this.multitap.push(this.context.createDelay(reverbTime));
    }
    this.multitap.forEach((t, i) => {
      if (this.multitap[i + 1]) {
        t.connect(this.multitap[i + 1]);
      }
      t.delayTime.setValueAtTime(0.001 + (i * (preDelay / 2)), this.context.currentTime);
    });

    this.multitapGain = this.context.createGain();
    this.multitap[this.multitap.length - 1].connect(this.multitapGain);
    this.multitapGain.gain.value = 0.2;

    this.multitapGain.connect(this.output);
    this.wet = this.context.createGain();

    // Note that the multitap path goes straight to the output,
    // bypassing the convolver.
    this.input.connect(this.wet);
    this.wet.connect(this.preDelay);
    this.wet.connect(this.multitap[0]);
    this.preDelay.connect(this.effect);
    this.effect.connect(this.output);

    this.renderTail();
}

Let’s take a look at the AdvancedReverb renderTail() function.

//...AdvancedReverb Class
renderTail () {
    const tailContext = new OfflineAudioContext(
      2, this.context.sampleRate * this.reverbTime, this.context.sampleRate
    );
    const tailOsc = new Noise(tailContext, 1);
    // Shape the tail's frequency response before it becomes the impulse response.
    const tailLPFilter = new Filter(tailContext, "lowpass", 5000, 1);
    const tailHPFilter = new Filter(tailContext, "highpass", 500, 1);

    tailOsc.init();
    tailOsc.connect(tailHPFilter.input);
    tailHPFilter.connect(tailLPFilter.input);
    tailLPFilter.connect(tailContext.destination);
    tailOsc.attack = this.attack;
    tailOsc.decay = this.decay;
    tailOsc.release = this.release;

    setTimeout(() => {
      tailContext.startRendering().then((buffer) => {
        this.effect.buffer = buffer;
      });

      tailOsc.on({frequency: 500, velocity: 127});
      tailOsc.off();
    }, 20);
}

The extra filters, pre-delay, and multi-tap allow us to shape smaller rooms. The SimpleReverb class creates a simple echoed space; AdvancedReverb simulates more of the phenomena that occur inside that space.
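
Wiring it up looks something like this. (A usage sketch: the constructor and setup() call here are my assumptions — check the library for the real signatures. The `input` and `output` gain nodes are the ones referenced in setup() above.)

// Usage sketch — constructor arguments are assumed, not from the library.
const context = new AudioContext();
const reverb = new AdvancedReverb(context);
reverb.setup(2, 0.03); // reverbTime and preDelay, both in seconds

const source = context.createBufferSource();
// source.buffer = ...; // load or synthesize something here

source.connect(reverb.input);
reverb.output.connect(context.destination);
// source.start();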

Playing Around with Space

Now that you have your reverb, you can begin putting sounds into space. By changing the length of the pre-delay, the number of multi-taps, the frequencies of the filtered noise, and the ratio of dry signal to wet signal, you can drastically alter the space.

A quick rundown of how to apply each setting:

Wet/Dry Ratio:
The less dry signal, the further into the reverberated space your sound is.

Pre-delay:
The larger the pre-delay, the further away the walls feel.

Multi-tap:
The more multi-taps, the smaller, boxier, and less acoustically treated the room feels.

Filtered noise:
Removing high frequencies can make the reverb sound more “dead,” and, combined with low dry settings, can make the reverb sound like it is coming through a wall.

Removing low frequencies sounds unnatural, but it can make the reverberated sound clearer. If you’re adding reverb to bass sounds, I suggest filtering the low end out of the reverb.
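
That last suggestion might look like this (a sketch — `bassSource`, `reverb`, and `context` are placeholders): high-pass the send so the dry bass stays full-range while its reverb loses the low end.

// Bass-friendly send sketch — filter the low end out of the wet path.
const sendFilter = context.createBiquadFilter();
sendFilter.type = "highpass";
sendFilter.frequency.value = 200; // keep lows out of the reverb

bassSource.connect(context.destination); // dry, full-range
bassSource.connect(sendFilter);          // wet path, low end removed
sendFilter.connect(reverb.input);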

Listen!

There are no hard and fast rules. You need to tweak the settings and listen to the results.

Grab the examples from CodePen and play around with the values. I deliberately left the examples free of controls so you have to tweak them manually. See what kind of spaces you can make!


Matthew Willox

Artist masquerading as a developer in a designer’s world.
