“Making Reverb with Web Audio” is part of a series of blog posts on how to do sound design in Web Audio. Let’s start with one of the most useful effects, Reverb.
What is Reverb?
Reverberation, or reverb, is a continuation of a sound through echoes. Almost every room you’ve ever been in has this effect. Your brain uses the reverberated sound to construct a 3D impression of the space.
To get rid of reverb, scientists build specialized rooms called anechoic chambers. Recording engineers do something similar: they build small padded rooms meant to “deaden” the sound while keeping some of the natural reverb (because a completely dry room sounds strange).
To combine all the sounds into a single space during mixing, sound engineers use audio effects that simulate reverberations.
We refer to these effects as “Reverbs.”
Creating the effect
Making a reverb with the Web Audio API is quite simple. You fill a convolution node with some decaying noise, and boom: reverb. It sounds nice!
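Before the library version, here is a minimal sketch of the idea in plain Web Audio. The two-second length and the cubic decay curve are arbitrary choices of mine, not values from the library:

// Minimal sketch (my own, not the library code): fill a ConvolverNode with decaying white noise.
const context = new AudioContext();
const convolver = context.createConvolver();
const seconds = 2;                           // assumed tail length
const length = context.sampleRate * seconds;
const impulse = context.createBuffer(2, length, context.sampleRate);
for (let channel = 0; channel < impulse.numberOfChannels; channel++) {
  const data = impulse.getChannelData(channel);
  for (let i = 0; i < length; i++) {
    // white noise scaled by a decaying envelope
    data[i] = (Math.random() * 2 - 1) * Math.pow(1 - i / length, 3);
  }
}
convolver.buffer = impulse;
const source = context.createOscillator();   // any source node works here
source.connect(convolver);
convolver.connect(context.destination);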
Rather than filling the buffer by hand, though, we create the tail by rendering noise into the convolver’s buffer with an OfflineAudioContext. Why? Because rendering the reverb buffer this way lets us use filters, equalizers, and other effects to shape the frequency response of the reverb.
I’ve written a small library of classes to simplify the explanation. Feel free to grab it and use it in your projects.
/// ... SimpleReverb Class
renderTail () {
  // Render the impulse response offline: 2 channels, reverbTime seconds long.
  const tailContext = new OfflineAudioContext(2, this.context.sampleRate * this.reverbTime, this.context.sampleRate);
  // The tail is a burst of noise shaped by an attack/decay/release envelope.
  const tailOsc = new Noise(tailContext, 1);
  tailOsc.init();
  tailOsc.connect(tailContext.destination);
  tailOsc.attack = this.attack;
  tailOsc.decay = this.decay;
  tailOsc.release = this.release;
  setTimeout(() => {
    // Render the enveloped noise and use the result as the convolver's impulse response.
    tailContext.startRendering().then((buffer) => {
      this.effect.buffer = buffer;
    });
    tailOsc.on({ frequency: 500, velocity: 127 });
    tailOsc.off();
  }, 20);
}
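For context, hooking the effect into a patch might look roughly like this. The constructor arguments and the .input/.output properties are my assumptions based on how the rest of the library is shaped, so check the CodePen for the real signatures:

// Hypothetical usage sketch; constructor arguments and .input/.output are assumed.
const context = new AudioContext();
const reverb = new SimpleReverb(context);    // assumed signature
const osc = context.createOscillator();
osc.connect(reverb.input);                   // dry signal into the effect
reverb.output.connect(context.destination);
osc.start();
osc.stop(context.currentTime + 0.2);         // a short blip so the tail is audible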
What if we want more control over our reverb?
Anatomy of a Reverb
Reverb is a temporal event. It’s a series of changes that happen to a single sound over time. Like a wave in a pool, it doesn’t really have “parts.” We can, however, identify patterns that occur and write code to simulate them.
Wet vs Dry
We need to understand a few terms to shape our sound. When I say “Wet,” I mean the affected signal. When I say “Dry,” I mean the unaffected signal. The ratio of wet to dry signal lets us control the perception of how far away a sound is.
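As a rough sketch, continuing the plain Web Audio example from earlier (the 80/20 split below is arbitrary), a wet/dry control is just two gain nodes feeding the same destination:

// Sketch: split the source into a dry path and a wet (reverb) path.
const dryGain = context.createGain();
const wetGain = context.createGain();
source.connect(dryGain);                // unaffected signal
source.connect(wetGain);                // signal headed for the convolver
wetGain.connect(convolver);
dryGain.connect(context.destination);
convolver.connect(context.destination);
dryGain.gain.value = 0.8;               // mostly dry: the sound feels close
wetGain.gain.value = 0.2;               // raise this to push the sound further away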
Early Reflections
As sound waves travel toward your ear, they are absorbed and reflected by the objects and surfaces they hit. Hard surfaces redirect portions of the sound towards your ear. Due to the speed of sound, these reflections arrive after the initial sound. In reverbs, these are called early reflections.
To simulate early reflections, we use a multi-tap delay. Each “tap” of the delay effect is a simulation of a surface reflecting the sound.
Early reflections allow you to control the illusion of how close or far away a sound is.
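A bare-bones multi-tap can be sketched with plain DelayNodes; the tap times and gain below are numbers I picked for illustration, not values from the library:

// Sketch: each tap stands in for one reflecting surface.
const tapTimes = [0.023, 0.047, 0.061]; // seconds; longer times suggest a bigger room
tapTimes.forEach((time) => {
  const tap = context.createDelay(1);
  tap.delayTime.value = time;
  const tapGain = context.createGain();
  tapGain.gain.value = 0.3;             // reflections are quieter than the direct sound
  source.connect(tap);
  tap.connect(tapGain);
  tapGain.connect(context.destination);
});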
Pre-Delay
The speed of sound is pretty slow (roughly 343 meters per second in air). Depending on the size of the room you are simulating, it may take a moment for the collected reverberations to reach you. Pre-delay adds a few milliseconds of silence to the start of the reverb. Combined with the dry signal, this places the sound in the room.
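In Web Audio terms this is just another DelayNode on the wet path. Continuing the sketch above, with an arbitrary 30 ms value (roughly ten meters of air):

// Sketch: delay only the wet path before it reaches the convolver.
const preDelay = context.createDelay(1);
preDelay.delayTime.setValueAtTime(0.03, context.currentTime);
wetGain.connect(preDelay);   // instead of wetGain.connect(convolver)
preDelay.connect(convolver);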
Diffused sound
Once a sound has bounced around for a while, it becomes “diffused.” With each bounce off of a surface, energy is lost and absorbed. Eventually, it turns into noise that no longer resembles the original sound. This noise is the reverb “tail.”
To simulate these phenomena we’re going to use:
- A Delay node for Pre-Delay
- A multitap delay node for early reflections
- A convolution node for the diffused sound (just like in the basic reverb)
- Filtered noise for the convolution buffer.
- Gain nodes to help us control the balance between the different pieces.
AdvancedReverb Setup
We’re going to add a few delay nodes for pre-delay and multitap.
One thing you might notice is that the multitap delay nodes bypass the reverb effect. The multitap is simulating Early Reflections, so we don’t need to add those sounds to the reverberated sound.
// Advanced Reverb Setup
setup(reverbTime = 1, preDelay = 0.03) {
  // Note: the original snippet uses reverbTime and preDelay without declaring them;
  // here they are taken as parameters (the default values are my own guesses).
  this.effect = this.context.createConvolver();

  this.reverbTime = reverbTime;
  this.attack = 0;
  this.decay = 0.0;
  this.release = reverbTime / 3;

  // Pre-delay: holds the wet signal back briefly before it reaches the convolver.
  this.preDelay = this.context.createDelay(reverbTime);
  this.preDelay.delayTime.setValueAtTime(preDelay, this.context.currentTime);

  // Multitap delays: a short chain of taps that simulate early reflections.
  this.multitap = [];
  for (let i = 2; i > 0; i--) {
    this.multitap.push(this.context.createDelay(reverbTime));
  }
  this.multitap.forEach((t, i) => {
    if (this.multitap[i + 1]) {
      t.connect(this.multitap[i + 1]);
    }
    t.delayTime.setValueAtTime(0.001 + (i * (preDelay / 2)), this.context.currentTime);
  });

  this.multitapGain = this.context.createGain();
  this.multitap[this.multitap.length - 1].connect(this.multitapGain);
  this.multitapGain.gain.value = 0.2;
  this.multitapGain.connect(this.output);

  // Wet path: input -> pre-delay -> convolver -> output.
  // The multitap path bypasses the convolver entirely.
  this.wet = this.context.createGain();
  this.input.connect(this.wet);
  this.wet.connect(this.preDelay);
  this.wet.connect(this.multitap[0]);
  this.preDelay.connect(this.effect);
  this.effect.connect(this.output);

  this.renderTail();
}
Let’s take a look at the AdvancedReverb renderTail() function.
//...AdvancedReverb Class
renderTail () {
  // Same offline-rendering trick as SimpleReverb, but the noise passes through
  // filters on its way to the destination, which shapes the tail's frequency response.
  const tailContext = new OfflineAudioContext(2, this.context.sampleRate * this.reverbTime, this.context.sampleRate);
  const tailOsc = new Noise(tailContext, 1);
  const tailLPFilter = new Filter(tailContext, "lowpass", 5000, 1);
  const tailHPFilter = new Filter(tailContext, "highpass", 500, 1);
  tailOsc.init();
  // Noise -> highpass -> lowpass -> destination.
  tailOsc.connect(tailHPFilter.input);
  tailHPFilter.connect(tailLPFilter.input);
  tailLPFilter.connect(tailContext.destination);
  tailOsc.attack = this.attack;
  tailOsc.decay = this.decay;
  tailOsc.release = this.release;
  setTimeout(() => {
    // Render the filtered noise and hand the result to the convolver.
    tailContext.startRendering().then((buffer) => {
      this.effect.buffer = buffer;
    });
    tailOsc.on({ frequency: 500, velocity: 127 });
    tailOsc.off();
  }, 20);
}
The extra filters, pre-delay, and multi-tap allow us to shape smaller rooms. The SimpleReverb class creates a simple echoed space. AdvancedReverb simulates more phenomena that occur inside of that echoed space.
Playing Around with Space
Now that you have your reverb, you can begin placing sounds in a space. By changing the length of the pre-delay, the number of multi-taps, the frequencies of the filtered noise, and the ratio of dry to wet signal, you can drastically alter the space.
A quick rundown of how to apply each setting:
Wet/Dry Ratio:
The less dry signal, the further into the reverberated space your sound is.
Pre-delay:
The larger the pre-delay, the further away the walls feel.
Multi-tap:
The more multi-taps, the smaller, boxier, and less acoustically treated the room feels.
Filtered noise:
Removing high frequencies can make the reverb sound more “dead,” and, with low Dry settings, can make the sound seem like it is coming through a wall.
Removing low frequencies sounds unnatural, but it can make the reverberated sound clearer. If you’re adding reverb to bass sounds, I suggest filtering the bass out of the reverb; a quick sketch follows.
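One way to do that, sketched with an assumed bassSource node and an arbitrary 250 Hz cutoff, is to high-pass only the signal feeding the reverb:

// Sketch: keep low frequencies out of the reverb send.
const reverbSendFilter = context.createBiquadFilter();
reverbSendFilter.type = "highpass";
reverbSendFilter.frequency.value = 250;      // starting point; tune by ear
bassSource.connect(context.destination);     // dry bass stays full range
bassSource.connect(reverbSendFilter);        // only the reverb send is filtered
reverbSendFilter.connect(convolver);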
Listen!
There are no hard and fast rules. You need to tweak the settings and listen to the results.
Grab the examples from CodePen and play around with the values. I deliberately left the examples free of controls so you have to tweak the values by hand. See what kind of spaces you can make!