Making Reverb with the Web Audio API

“Making Reverb with Web Audio” is part of a series of blog posts on how to do sound design with the Web Audio API. Let’s start with one of the most useful effects: reverb.

What is Reverb?

Reverberation, or reverb, is the persistence of a sound through echoes after the original sound has stopped. Almost every room you’ve ever been in has this effect. Your brain uses the reverberated sound to construct a 3D impression of the space.

To get rid of reverb, scientists build specialized rooms called anechoic chambers. Recording engineers do something similar: they build small padded rooms meant to “deaden” the sound while maintaining some of the natural reverb (because it sounds weird without it).

To combine all the sounds into a single space during mixing, sound engineers use audio effects that simulate reverberations.

We refer to these effects as “Reverbs.”

Creating the effect

Making a reverb with the Web Audio API is quite simple: you fill a convolution node with some decaying noise, and boom. Reverb. It sounds nice!
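
If you want the idea in its rawest form, here’s a minimal sketch that fills a ConvolverNode’s buffer with exponentially decaying white noise. The duration and decay values are placeholders to tune by ear:

// Minimal decaying-noise reverb: fill a ConvolverNode with fading white noise.
function basicReverb(context, duration = 2, decay = 2) {
  const length = context.sampleRate * duration;
  const impulse = context.createBuffer(2, length, context.sampleRate);
  for (let channel = 0; channel < impulse.numberOfChannels; channel++) {
    const data = impulse.getChannelData(channel);
    for (let i = 0; i < length; i++) {
      // Random noise, scaled by an exponential fade-out.
      data[i] = (Math.random() * 2 - 1) * Math.pow(1 - i / length, decay);
    }
  }
  const convolver = context.createConvolver();
  convolver.buffer = impulse;
  return convolver;
}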

To create the tail, we render noise into the convolver’s buffer using an OfflineAudioContext. Why? Because rendering the reverb buffer offline lets us use filters, equalizers, and other effects to shape the frequency response of the reverb.

I’ve added a library of classes to simplify the explanation. Feel free to grab it and use the classes in your own projects.

/// ... SimpleReverb Class
renderTail () {
    // Render the tail into its own buffer using an OfflineAudioContext.
    const tailContext = new OfflineAudioContext(2, this.context.sampleRate * this.reverbTime, this.context.sampleRate);
    const tailOsc = new Noise(tailContext, 1);
    tailOsc.init();
    tailOsc.connect(tailContext.destination);
    tailOsc.attack = this.attack;
    tailOsc.decay = this.decay;
    tailOsc.release = this.release;

    setTimeout(() => {
      // Once rendering finishes, the result becomes the convolver's impulse response.
      tailContext.startRendering().then((buffer) => {
        this.effect.buffer = buffer;
      });

      tailOsc.on({frequency: 500, velocity: 127});
      tailOsc.off();
    }, 20);
}

What if we want more control over our reverb?

Anatomy of a Reverb

Reverb is a temporal event. It’s a series of changes that happen to a single sound over time. Like waves in a pool, we cannot point to a wave’s “parts.” We can, however, identify patterns in how it behaves and write code to simulate them.

Wet vs Dry

We need to understand a few terms to shape our sound. When I say “wet,” I mean the affected signal; when I say “dry,” I mean the unaffected signal. The ratio of wet to dry signal lets us control the perception of how far away a sound is.
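
As a sketch, a wet/dry control is nothing more than two GainNodes running in parallel. The mix parameter here is invented, running from 0 (all dry) to 1 (all wet):

// Split a source into parallel dry and wet paths, balanced by `mix` (0–1).
function wetDryMix(context, source, effect, mix = 0.5) {
  const dry = context.createGain();
  const wet = context.createGain();
  dry.gain.value = 1 - mix;  // unaffected signal
  wet.gain.value = mix;      // affected signal
  source.connect(dry);
  source.connect(effect);
  effect.connect(wet);
  dry.connect(context.destination);
  wet.connect(context.destination);
}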

Early Reflections

As sound waves travel toward your ear, they are absorbed and reflected by the objects and surfaces they hit. Hard surfaces redirect portions of the sound towards your ear. Because sound travels relatively slowly, these reflections arrive after the initial sound. In reverbs, these are called early reflections.

To simulate early reflections, we use a multi-tap delay. Each “tap” of the delay effect is a simulation of a surface reflecting the sound.

Early reflections allow you to control the illusion of how close or far away a sound is.
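
Here is one way to sketch a multi-tap delay with plain Web Audio nodes. The tap times and gains are arbitrary stand-ins for surfaces at different distances:

// Each tap is a DelayNode + GainNode pair: one simulated reflecting surface.
function earlyReflections(context, input, output, taps = [0.019, 0.027, 0.041]) {
  taps.forEach((time, i) => {
    const delay = context.createDelay(1);
    const gain = context.createGain();
    delay.delayTime.value = time;
    gain.gain.value = 0.5 / (i + 1); // later reflections arrive quieter
    input.connect(delay);
    delay.connect(gain);
    gain.connect(output);
  });
}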

Pre-Delay

The speed of sound is pretty slow. Depending on the size of the room you are simulating, it may take a moment for the collected reverberations to reach you. Pre-Delay adds a few milliseconds to the start of the reverb. Combined with the dry signal, this places the sound in the room.
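
In node terms, pre-delay is just a DelayNode placed in front of the convolver. A tiny fragment, assuming the context, input, and convolver from the sketches above, with 30 milliseconds as an arbitrary starting value:

// Hold the wet signal back slightly before it reaches the reverb.
const preDelayNode = context.createDelay(1); // allow up to 1 second of delay
preDelayNode.delayTime.value = 0.03;         // 30ms — an arbitrary starting point
input.connect(preDelayNode);
preDelayNode.connect(convolver);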

Diffused sound

Once a sound has bounced around for a while, it becomes “diffused.” With each bounce off of a surface, energy is lost and absorbed. Eventually, it turns into noise that no longer resembles the original sound. This noise is the reverb “tail.”

To simulate these phenomena we’re going to use:

  • A Delay node for Pre-Delay
  • A multitap delay node for early reflections
  • A convolution node for the diffused sound (precisely like the basic reverb)
  • Filtered noise for the convolution buffer.
  • Gain nodes to help us control the balance between the different pieces.

AdvancedReverb Setup

We’re going to add a few delay nodes for the pre-delay and the multitap.
One thing you might notice is that the multitap delay nodes bypass the reverb effect. The multitap is simulating early reflections, so we don’t need to add those sounds to the reverberated signal.

// Advanced Reverb Setup
// reverbTime and preDelay are parameters here (the defaults are placeholder values).
setup(reverbTime = 1, preDelay = 0.03) {
    this.effect = this.context.createConvolver();

    this.reverbTime = reverbTime;

    // Envelope settings used when rendering the noise tail.
    this.attack = 0;
    this.decay = 0.0;
    this.release = reverbTime / 3;

    // Pre-delay: holds the wet signal back before it enters the convolver.
    this.preDelay = this.context.createDelay(reverbTime);
    this.preDelay.delayTime.setValueAtTime(preDelay, this.context.currentTime);

    // Multitap: a short chain of delays simulating early reflections.
    this.multitap = [];
    for (let i = 2; i > 0; i--) {
      this.multitap.push(this.context.createDelay(reverbTime));
    }
    this.multitap.forEach((t, i) => {
      if (this.multitap[i + 1]) {
        t.connect(this.multitap[i + 1]);
      }
      t.delayTime.setValueAtTime(0.001 + (i * (preDelay / 2)), this.context.currentTime);
    });

    this.multitapGain = this.context.createGain();
    this.multitap[this.multitap.length - 1].connect(this.multitapGain);
    this.multitapGain.gain.value = 0.2;

    // The multitap path goes straight to the output, bypassing the convolver.
    this.multitapGain.connect(this.output);
    this.wet = this.context.createGain();

    this.input.connect(this.wet);
    this.wet.connect(this.preDelay);
    this.wet.connect(this.multitap[0]);
    this.preDelay.connect(this.effect);
    this.effect.connect(this.output);

    this.renderTail();
}
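
To hear it, you might wire the reverb up something like this sketch. The constructor call is an assumption (the full class isn’t shown here), and input and output are the GainNodes that setup() connects to:

// Hypothetical usage — the constructor signature is assumed, not shown in this post.
const context = new AudioContext();
const reverb = new AdvancedReverb(context);
reverb.setup(3, 0.03); // reverbTime and preDelay, in seconds

const osc = context.createOscillator();
osc.connect(reverb.input);
reverb.output.connect(context.destination);
osc.start();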

Let’s take a look at the AdvancedReverb renderTail() function.

//...AdvancedReverb Class
renderTail () {
    // Render the tail offline, shaping the noise with high- and lowpass filters.
    const tailContext = new OfflineAudioContext(2, this.context.sampleRate * this.reverbTime, this.context.sampleRate);
    const tailOsc = new Noise(tailContext, 1);
    const tailLPFilter = new Filter(tailContext, "lowpass", 5000, 1);
    const tailHPFilter = new Filter(tailContext, "highpass", 500, 1);

    tailOsc.init();
    tailOsc.connect(tailHPFilter.input);
    tailHPFilter.connect(tailLPFilter.input);
    tailLPFilter.connect(tailContext.destination);
    tailOsc.attack = this.attack;
    tailOsc.decay = this.decay;
    tailOsc.release = this.release;

    setTimeout(() => {
      // As in SimpleReverb: the rendered buffer becomes the impulse response.
      tailContext.startRendering().then((buffer) => {
        this.effect.buffer = buffer;
      });

      tailOsc.on({frequency: 500, velocity: 127});
      tailOsc.off();
    }, 20);
}

The extra filters, pre-delay, and multi-tap allow us to shape smaller rooms. The SimpleReverb class creates a simple echoed space. AdvancedReverb simulates more phenomena that occur inside of that echoed space.

Playing Around with Space

Now that you have your reverb, you can begin placing sounds in space. By changing the size of the pre-delay, the number of multi-taps, the frequencies of the filtered noise, and the ratio of dry to wet signal, you can drastically alter the space.

A quick rundown of how to apply each setting:

Wet/Dry Ratio:
The less dry signal, the further into the reverberated space your sound is.

Pre-delay:
The larger the pre-delay, the further away the walls feel.

Multi-tap:
The more multi-taps, the smaller, squarer, and less acoustically treated the room feels.

Filtered noise:
Removing high frequencies can make the reverb sound more “dead” and, with low dry settings, can make the reverb sound like it is coming through a wall.

Removing low frequencies sounds unnatural, but it can make the reverberated sound clearer. If you’re adding reverb to bass sounds, I suggest filtering the bass out of the reverb.
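
One way to do that, as a sketch: highpass the send feeding the reverb so the low end stays dry. The 200 Hz cutoff and the bassSource and reverb names are assumptions to adapt:

// Keep bass out of the wet path: only frequencies above the cutoff reach the reverb.
const reverbSend = context.createBiquadFilter();
reverbSend.type = 'highpass';
reverbSend.frequency.value = 200; // cutoff in Hz — tune by ear
bassSource.connect(context.destination); // dry bass stays full-range
bassSource.connect(reverbSend);
reverbSend.connect(reverb.input);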

Listen!

There are no hard and fast rules. You need to tweak the settings and listen to the results.

Grab the examples from CodePen and play around with the values. I deliberately left the examples free of controls so you have to tweak them manually. See what kind of spaces you can make!

Inspiration to Reality: Exploring Generative Art

You know that sense of awe when you see inspiring work? The kind of feeling that makes you say “wow, I wish I could do that.” Seeing Ash Thorp, GMUNK, and Joey Camacho’s CG work makes me feel that all the time. There’s so much thought that goes into their compositions, and I wanted to see if I could emulate some of that using Blender. The following is a summary of what I’ve learned through experimenting with CG. You can also see the final results of what I made here. I hope this encourages you to have fun making your own crazy creations!


5 Times When You Absolutely Must Write Tests For Your Code

It’s my opinion that you should always write tests for your source code. Tests force you to code better. Tests allow you to write dependable code, create better architecture, and help you live longer*. They also help you spot fussy APIs, opportunities for reuse, and redundancies.

That said, you don’t always have the time (or budget) to test everything to death! Not everyone sees the value in all those little green checkmarks. Life isn’t pedantic and heavy-handed. Life is a pack of wild horses, and sometimes you need to be the cowboy.

So, when do you push back? When do you say, “No! We must write tests!”?

1. When you have the time.

There is no reason to skip out on writing tests if you have the time to write them. Why would you opt out of better code? Taking the extra time to make your code testable will turn average code into dependable code. Code that you know actually works is almost always better than new code.

2. When you’re making a data structure.

You cannot make a data structure without writing tests for it. Why would anyone trust a data structure that cannot prove it works?

Data structures must be tested. I don’t even know how you’d code a data structure without setting up a test harness first. You don’t know how your code will be used, so knowing that every little piece works as expected is necessary.

With tests, you’ll see the logic in breaking code into small pieces. Tests will make it easier to spot problems in your architecture.

Writing data structures against a test suite is the only way to do it right.
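
For example, a first pass might use nothing more than Node’s built-in assert; the Stack here is a hypothetical structure under test:

// test/stack.test.js — a minimal sketch using Node's built-in assert.
const assert = require('assert');
const Stack = require('../stack'); // hypothetical data structure under test

const stack = new Stack();
assert.strictEqual(stack.size(), 0, 'a new stack starts empty');

stack.push('a');
stack.push('b');
assert.strictEqual(stack.pop(), 'b', 'pop returns the last item pushed');
assert.strictEqual(stack.size(), 1, 'pop removes the item');

console.log('all stack tests passed');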

3. When you want community contributions.

Tests are the backbone of any open source project. They make sure that community contributions do not break the codebase. This allows fixes, changes, and optimizations to be made with certainty.

Your test suite becomes the hurdle that any contributor must clear. It’s not too much to ask for contributions that prove they work.

4. When you’re designing an API.

Starting with a test suite is a great way to design an API.

This allows you to work backward from your code interface instead of coding to it. This will let you design an API from the user’s perspective first.
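
As a sketch, that can mean writing the calls you wish existed before implementing them; parseColor below is purely hypothetical:

// Written before parseColor exists: the test pins down the API we want.
const assert = require('assert');
const parseColor = require('../parse-color'); // not implemented yet

assert.deepStrictEqual(parseColor('#ff0000'), { r: 255, g: 0, b: 0 });
assert.deepStrictEqual(parseColor('rgb(0, 128, 0)'), { r: 0, g: 128, b: 0 });
assert.throws(() => parseColor('not a color'));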

5. When it is a dependency.

Point blank. If other code needs to use this code, you must write a test for it.

The testable code will become part of an ever-expanding toolbox. Dependable toy soldiers who can be summoned to fight for you. Go! Test the world!

Here are a few resources to help you start writing tests for your code:

  • Mocha
  • Node.JS Assert
  • Writing good tests

*There is no scientific data that shows writing tests will help you live longer.


Embracing the Bottom of the Learning Curve

I’ve always had a complicated relationship with learning as a designer. It’s satisfying to gain new skills, but staying in my comfort zone feels so much easier. I want to push myself and get awesome results, but there’s an intimidating hurdle of not knowing how to start. The bottom of the learning curve is a scary hurdle to confront. That hurdle kept stalling my progress in 3D design. Dipping my toes into 3D modelling and quitting after a week was a common occurrence for years. There are dozens of abandoned attempts sitting on my old hard drives. Something always prevented me from wanting to continue. Normals, modifiers, rendering — 3D felt too overwhelming and vast. I felt stumped. How do you get started learning something when you don’t even know what you don’t know?