Not Your Typical Walk Cycle

Warning: There is a GIF at the bottom of the post with flashing images.

Like many of you, I was inspired by the impressive visuals of Spider-Man: Into the Spider-Verse (which, by the way, has a website built using gskinner’s CreateJS libraries) and wanted to try applying some of that style to one of my animations. On top of that, I’ve had the idea of making an astronaut walk cycle for a while and decided it was time to make it.

Join Us!

Reasons to Work at gskinner

A few years ago, we shared some awesome reasons why you should come work with us. We are in a brand new office and hiring smart developers and designers, so we thought we would update the list.

We Have the Coolest Clients & Projects

We’re privileged to collaborate with notable organizations like Adobe, Microsoft, Google, Disney, the National Film Board, and FITC (to name a few). We’re tasked with proving concepts, exploring new functionality, and creating flagship experiences for announcements. We design and build world-class apps, experiences, installations, and games on a wide variety of platforms. This breadth of work offers constant engagement and opportunities to learn and grow.


Think More, Draw Less: Tips for Stronger Images

It’s funny how the more you learn, the more you realize you don’t know as much as you thought. I feel that way a lot when it comes to digital painting, which is why I enjoy reviewing fundamentals so much. There’s always some overlooked piece of knowledge that reveals itself if you go back to look for it. So I spent the past month focusing on digital painting techniques and process, and I came away with some nuggets of wisdom that can be applied to your creative process.


How to level up client deliveries and get stakeholders more involved by doing less

At gskinner, I seldom get face-to-face time with clients. Most of our clients are remote, which can be challenging when presenting design work. Luckily, screen sharing and InVision help us demonstrate our designs over video conference calls.

The real problem occurs when other stakeholders, not involved in those design presentations, start voicing their opinions after the call has ended.

The Client Shows Your Design

Let’s say you’ve worked with your client over two weeks to design a solution to a complex feature in their app. Everyone involved is happy with how it’s turned out, and it’s ready to be handed off to developers. In passing, your client shares the designs with their CEO (or team, neighbours, kids, etc.) who hasn’t been involved in the process. Without realizing it, your client is now selling your design for you.

This can leave your client vulnerable, having to answer questions or defend ideas they haven’t prepared for.

We Have Some Feedback

In a perfect world, you will have prepared your client for this moment. You’ve walked them through your rationale and provided tour points on the prototype, and your PM has included call summaries documenting decisions and pivots. But in the moment, your client has to remember all of this and walk the talk. Any lack of certainty or clarity around the details won’t get projected onto your client; it will get projected onto you. External stakeholders, sensing this uncertainty, begin trying to fix the problems by offering feedback and suggestions.

During your next meeting (or worse: in an email) your client suddenly unloads a pile of new feedback.

It all comes down to one thing: you weren’t there to sell that stakeholder on the design decisions. You gave your client confidence in the design, but nothing to back it up with.


Making It Easier For Your Client To Sell Design With Prototypes

Recently we transitioned away from creating large-scale InVision prototypes (100+ screens). They had become more like presentation decks than manageable prototypes. Moving to smaller InVision prototypes kept our clients focused on what to give feedback on, and let us keep features distinct between prototypes. This was the start of us offering our clients better artifacts so they could more easily talk about design.

We hoped this would make it easier for our clients to bring the designs to their stakeholders.

Smaller prototypes are easier to digest than larger, more complex ones, but they can still be a barrier to external stakeholders.

Assumptions About Stakeholders

Ideally, the higher up in an organization you go, the more involved people want to be. The problem is that people higher up in an organization lack time. Think about trying to get time with your CEO, or even your client’s CEO. Getting these people ad hoc, or even into a calendared event, can be a struggle.

We saw that even a small-scale prototype could be a barrier to entry for someone not intimately connected to the project. Imagine if someone asked you to download a new app on your phone and give feedback. It’s a similar feeling.

If only there was a way we could clone our designers and send them along with the design to present it to everyone, wherever it went.


Making It Even Easier For Your Client To Sell Design With Video

A walkthrough of the process of screen recording prototypes and preparing them to share with your clients and stakeholders

This is where we started creating screen captures of our prototypes. Using ScreenFlow and InVision, we took the same prototype our clients were familiar with and walked through our rationale, what we had accomplished that sprint, and the feedback we had incorporated from our client.

The argument could be made for doing this directly in Sketch (or another design tool), but we wanted to keep continuity with our client deliveries. If you present a different way than we do, keep it familiar; it gives your client something else to show stakeholders who want to dig deeper.

We also only created these videos after we presented to our client, and incorporated feedback that we talked through with them. The worst thing you can do is make a video only to have it be out of date or missing requests.

A Video Is Only As Good As Its Voice

Video delivery example using a sample prototype from Sketch

An important note: we also did voice-overs for these screen captures. Our first screen capture didn’t include voice because we felt it was straightforward enough, but it still left too much guesswork to the viewer. Have you ever tried to watch a tutorial on YouTube without any sound? It’s very frustrating.

Adding voice-over allowed us to clarify all the details. It also meant we were selling our design even when we weren’t in the room. Our client didn’t need to remember everything we had told them, and now they look prepared when asked for updates.

Remember how we started doing smaller prototypes? Smaller prototypes meant smaller videos; most of the videos we delivered ran 3-5 minutes. Because ScreenFlow lets you record your screen and voice at the same time, I would often sit down and do both in one take, although you can always re-record the voice afterwards for a better result.

These videos always had two other components: the project title with the sprint/feature, and our logo at the end of the video. We do this so that when our clients share a video internally, viewers know its context and who it came from.


Client Feedback On Process Matters

This was a significant change to our design process and added time to every delivery, but our clients have loved getting these videos. We know because we asked them. They also told us about a few scenarios where they shared the videos with their teams. Remember to talk to your clients!

They were happy to be reminded of our design decisions outside of our meetings, and the videos gave them a better way to sell design to their team on critical features with many stakeholders involved.

Less Stress Means A Better Delivery

Stronger deliveries like these are more work, and that can be stressful. We gave ourselves a few restrictions to make sure they could be delivered without adding unnecessary stress for designers.

  1. Set expectations on how long these should take to produce
  2. Minor voice flubs are OK
  3. Provide support for anyone not wanting to do VO
  4. Give designers screen recording software (ScreenFlow) for screen and voice recording
  5. Only screen record InVision prototypes
  6. Record VO at the same time as the screen to cut down on production time
  7. Use prefab Adobe Premiere templates for title & logo bumpers
  8. Share videos directly on Basecamp instead of YouTube/Vimeo

Deliveries like these will help your client know more, sell the design for you, and reduce feedback churn. You might even attend fewer meetings.

Making Reverb with the Web Audio API

“Making Reverb with Web Audio” is part of a series of blog posts on how to do sound design in Web Audio. Let’s start with one of the most useful effects: reverb.

What is Reverb?

Reverberation, or reverb, is a continuation of a sound through echoes. Almost every room you’ve ever been in has this effect. Your brain uses the reverberated sound to construct a 3D impression of the space.

To get rid of reverb, scientists build specialized rooms called anechoic chambers. Recording engineers do something similar; they build small padded rooms meant to “deaden” the sound while maintaining some of the natural reverb (because it sounds weird without it).

To combine all the sounds into a single space during mixing, sound engineers use audio effects that simulate reverberations.

We refer to these effects as “Reverbs.”

Creating the Effect

Making a reverb with the Web Audio API is quite simple: you fill a convolution node with some decaying noise, and boom. Reverb. It sounds nice!
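
In graph form, that’s a single convolver node sitting between your source and the destination. A minimal sketch, assuming `context` is an AudioContext and `source` is any audio node (both names are mine, not from the library):

// Minimal convolution reverb sketch.
const convolver = context.createConvolver();
source.connect(convolver);
convolver.connect(context.destination);
// `convolver.buffer` still needs an impulse response:
// the decaying noise we render next.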

To create the tail, we render noise into this buffer using an OfflineAudioContext. Why? Because by rendering the reverb buffer offline, we can use filters and equalizers (and other effects) to shape the frequency response of the reverb.

I’ve added a library of classes to simplify the explanation. Feel free to grab it and use the classes in your projects.

/// ... SimpleReverb Class
renderTail () {
    // Render the tail offline: 2 channels, reverbTime seconds long,
    // at the main context's sample rate.
    const tailContext = new OfflineAudioContext(2, this.context.sampleRate * this.reverbTime, this.context.sampleRate);

    // A decaying noise burst that becomes the convolver's impulse response.
    const tailOsc = new Noise(tailContext, 1);
    tailOsc.init();
    tailOsc.connect(tailContext.destination);
    tailOsc.attack = this.attack;
    tailOsc.decay = this.decay;
    tailOsc.release = this.release;

    setTimeout(() => {
      // Render the noise, then load it into the convolution node.
      tailContext.startRendering().then((buffer) => {
        this.effect.buffer = buffer;
      });

      tailOsc.on({frequency: 500, velocity: 127});
      tailOsc.off();
    }, 20);
}

What if we want more control over our reverb?

Anatomy of a Reverb

Reverb is a temporal event: a series of changes that happen to a single sound over time. Like a wave in a pool, it doesn’t really have “parts.” We can, however, identify patterns that exist and write code to simulate them.

Wet vs Dry

We need to understand a few terms to shape our sound. When I say “wet,” I mean the affected signal; when I say “dry,” I mean the unaffected signal. The ratio of wet to dry signal lets us control the perception of how far away a sound is.
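
Here’s one way that mix might be wired up with gain nodes; `context`, `input`, `output`, and a `reverb` node are assumed to exist, and the gain values are just illustrative:

// Wet/dry mix sketch: split the input, attenuate each branch with a GainNode.
const dry = context.createGain();
const wet = context.createGain();
dry.gain.value = 0.7; // unaffected signal
wet.gain.value = 0.3; // signal sent through the reverb
input.connect(dry);
dry.connect(output);
input.connect(wet);
wet.connect(reverb);
reverb.connect(output);
// Lowering `dry` relative to `wet` pushes the sound deeper into the space.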

Early Reflections

As sound waves travel toward your ear, they are absorbed and reflected by the objects and surfaces they hit. Hard surfaces redirect portions of the sound towards your ear. Due to the speed of sound, these reflections arrive after the initial sound. In reverbs, these are called early reflections.

To simulate early reflections, we use a multi-tap delay. Each “tap” of the delay effect is a simulation of a surface reflecting the sound.
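
A rough sketch of a multi-tap built from plain DelayNodes; the tap times and gains below are illustrative rather than tuned values, and `context`, `input`, and `output` are assumed:

// Early reflections sketch: each tap is one simulated surface.
const taps = [0.011, 0.023, 0.041].map((time) => {
  const delay = context.createDelay(1);
  delay.delayTime.value = time;  // arrival time of this reflection
  const gain = context.createGain();
  gain.gain.value = 0.3;         // energy lost before reaching the ear
  input.connect(delay);
  delay.connect(gain);
  gain.connect(output);
  return delay;
});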

Early reflections allow you to control the illusion of how close or far away a sound is.

Pre-Delay

The speed of sound is pretty slow. Depending on the size of the room you are simulating, it may take a moment for the collected reverberations to reach you. Pre-Delay adds a few milliseconds to the start of the reverb. Combined with the dry signal, this places the sound in the room.
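
In code, pre-delay is just a DelayNode in front of the convolver. A sketch, assuming `context`, `input`, and `convolver` exist, with an illustrative 30 ms value:

// Pre-delay sketch: hold the reverb input back to simulate room size.
const preDelay = context.createDelay(1);
preDelay.delayTime.setValueAtTime(0.03, context.currentTime);
input.connect(preDelay);
preDelay.connect(convolver); // only the reverb path is delayed, not the dry signal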

Diffused Sound

Once a sound has bounced around for a while, it becomes “diffused.” With each bounce off of a surface, energy is lost and absorbed. Eventually, it turns into noise that no longer resembles the original sound. This noise is the reverb “tail.”

To simulate these phenomena we’re going to use:

  • A Delay node for Pre-Delay
  • A multitap delay node for early reflections
  • A convolution node for the diffused sound (precisely like the basic reverb)
  • Filtered noise for the convolution buffer.
  • Gain nodes to help us control the balance between the different pieces.

AdvancedReverb Setup

We’re going to add a few delay nodes for pre-delay and multitap.
One thing you might notice is that the multitap delay nodes bypass the reverb effect. The multitap is simulating Early Reflections, so we don’t need to add those sounds to the reverberated sound.

// Advanced Reverb Setup
setup() {
    // `reverbTime` and `preDelay` are assumed to be in scope from the constructor;
    // `this.input` and `this.output` are provided by the surrounding effect class.
    this.effect = this.context.createConvolver();

    this.reverbTime = reverbTime;

    // Envelope for the rendered noise tail.
    this.attack = 0;
    this.decay = 0.0;
    this.release = reverbTime / 3;

    // Pre-delay holds the reverb back to simulate distance from the walls.
    this.preDelay = this.context.createDelay(reverbTime);
    this.preDelay.delayTime.setValueAtTime(preDelay, this.context.currentTime);

    // A short chain of delays simulating early reflections.
    this.multitap = [];
    for (let i = 2; i > 0; i--) {
      this.multitap.push(this.context.createDelay(reverbTime));
    }
    this.multitap.forEach((t, i) => {
      if (this.multitap[i + 1]) {
        t.connect(this.multitap[i + 1]);
      }
      t.delayTime.setValueAtTime(0.001 + (i * (preDelay / 2)), this.context.currentTime);
    });

    this.multitapGain = this.context.createGain();
    this.multitap[this.multitap.length - 1].connect(this.multitapGain);
    this.multitapGain.gain.value = 0.2;

    // Early reflections bypass the convolver and go straight to the output.
    this.multitapGain.connect(this.output);
    this.wet = this.context.createGain();

    this.input.connect(this.wet);
    this.wet.connect(this.preDelay);
    this.wet.connect(this.multitap[0]);
    this.preDelay.connect(this.effect);
    this.effect.connect(this.output);

    this.renderTail();
}

Let’s take a look at the AdvancedReverb renderTail() function.

//...AdvancedReverb Class
renderTail () {
    const tailContext = new OfflineAudioContext(2, this.context.sampleRate * this.reverbTime, this.context.sampleRate);

    // Same decaying noise as SimpleReverb, but shaped by highpass and
    // lowpass filters before it becomes the impulse response.
    const tailOsc = new Noise(tailContext, 1);
    const tailLPFilter = new Filter(tailContext, "lowpass", 5000, 1);
    const tailHPFilter = new Filter(tailContext, "highpass", 500, 1);

    tailOsc.init();
    tailOsc.connect(tailHPFilter.input);
    tailHPFilter.connect(tailLPFilter.input);
    tailLPFilter.connect(tailContext.destination);
    tailOsc.attack = this.attack;
    tailOsc.decay = this.decay;
    tailOsc.release = this.release;

    setTimeout(() => {
      // Render the filtered noise, then load it into the convolution node.
      tailContext.startRendering().then((buffer) => {
        this.effect.buffer = buffer;
      });

      tailOsc.on({frequency: 500, velocity: 127});
      tailOsc.off();
    }, 20);
}

The extra filters, pre-delay, and multi-tap allow us to shape smaller rooms. The SimpleReverb class creates a simple echoed space. AdvancedReverb simulates more phenomena that occur inside of that echoed space.

Playing Around with Space

Now that you have your reverb, you can begin placing sounds in a space. By changing the length of the pre-delay, the number of multi-taps, the frequencies of the filtered noise, and the ratio of dry to wet signal, you can drastically alter the space.
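
For example, two very different rooms; note the constructor signature here is a hypothetical one, so check how the actual class takes its options:

// Hypothetical usage: (context, reverbTime, preDelay) is an assumed signature.
const smallRoom = new AdvancedReverb(context, 0.6, 0.01); // short tail, close walls
const bigHall   = new AdvancedReverb(context, 4.0, 0.08); // long tail, distant walls
source.connect(bigHall.input);
bigHall.output.connect(context.destination);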

A quick rundown of how to apply each setting:

Wet/Dry Ratio:
The less dry signal, the further into the reverberated space your sound is.

Pre-delay:
The larger the pre-delay, the further away the walls feel.

Multi-tap:
The more multi-taps, the smaller, squarer, and less acoustically treated the room feels.

Filtered noise:
Removing high frequencies can make the reverb sound more “dead,” and, combined with a low dry setting, can make the reverb sound like it is coming through a wall.

Removing low frequencies sounds unnatural, but it can make the reverberated sound clearer. If you’re adding reverb to bass sounds, I suggest filtering the bass out of the reverb.
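
One way to do that is a highpass filter on the reverb send; this sketch assumes `context`, `convolver`, and a `bassSource` node, and the cutoff is illustrative:

// Keep low frequencies out of the reverb while leaving the dry bass intact.
const reverbSend = context.createBiquadFilter();
reverbSend.type = "highpass";
reverbSend.frequency.value = 250;        // illustrative cutoff
bassSource.connect(context.destination); // dry path, full range
bassSource.connect(reverbSend);          // wet path, bass filtered out
reverbSend.connect(convolver);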

Listen!

There are no hard and fast rules. You need to tweak the settings and listen to the results.

Grab the examples from CodePen and play around with the values. I deliberately left the examples free of controls so you have to tweak them manually. See what kind of spaces you can make!