Chris Kohanik

Designer at gskinner. AR, VR, 3D, cartoons, and pizza are what I'm all about.



How to level up client deliveries and get stakeholders more involved by doing less

At gskinner, I seldom get face-to-face time with clients. Most of our clients are remote, which can make presenting design challenging. Luckily, screen sharing and InVision help us demonstrate our designs over video conference calls.

The real problem occurs when other stakeholders, not involved in those design presentations, start voicing their opinion after the call has ended.

The Client Shows Your Design

Let’s say you’ve worked with your client over 2 weeks to design a solution to a complex feature in their app. Everyone involved is happy with how it’s turned out. It’s ready to be handed off to developers. In passing, your client shares the designs with their CEO (or team, neighbours, kids, etc.) who haven’t been involved in the process. Without realizing it, your client is now selling your design for you.

This can leave your client vulnerable, having to answer questions or defend ideas they may not have prepared for.

We Have Some Feedback

In a perfect world, you will have prepared your client for this moment. You’ve walked them through your rationale, provided tour points on the prototype, and your PM has included call summaries of decisions and pivots. But, in the moment, your client has to remember all of this and walk the talk. Any lack of certainty or clarity in the details probably won’t get projected onto your client; it will get projected onto you. External stakeholders, sensing this uncertainty, begin to try to fix the problems by offering feedback and suggestions.

During your next meeting (or worse: in an email) your client unloads a bunch of new feedback out of the blue.

It all comes back to one thing: you were not there to sell that stakeholder on the design decisions. You gave your client all the confidence in the design, but nothing to back it up with.

Making It Easier For Your Client To Sell Design With Prototypes

Recently we transitioned away from creating large-scale InVision prototypes (100+ screens). They had become more presentation deck than prototype, and unmanageable at that size. Moving to smaller InVision prototypes kept our clients focused on what to give feedback on, and let us keep features distinct between prototypes. This was the start of us offering our clients better artifacts so they could more easily talk about design.

We had hoped this would make it easier for clients to bring the designs to their stakeholders.

Smaller prototypes are easier to digest than larger, more complex ones, but they could still be a barrier to external stakeholders.

Assumptions About Stakeholders

Ideally, the higher up in an organization you go, the more involved people want to be. The problem is that people higher up in an organization lack time. Think about trying to get time with your CEO, or even your client’s CEO. Getting these people ad hoc, or even into a calendared event, can be a struggle.

We saw that even a small scale prototype could be a barrier for entry for someone not intimately connected to the project. Imagine if someone asked you to download a new app for your phone and give feedback. It’s a similar feeling.

If only there was a way we could clone our designers and send them along with the design to present it to everyone, wherever it went.

Making It Even Easier For Your Client To Sell Design With Video

A walkthrough of the process of screen recording prototypes and preparing them to share with your clients/stakeholders

This is where we started creating screen captures of our prototypes. Using ScreenFlow and InVision, we used the same prototype our clients were familiar with and walked through our rationale, what we had accomplished that sprint, and the feedback we had incorporated from our client.

The argument could be made for doing this directly in Sketch (or another design tool), but we wanted to keep continuity with our client deliveries. If you present a different way than we do, keep it familiar; it gives your client something else to show a stakeholder who wants to dig deeper.

We also only created these videos after we presented to our client, and incorporated feedback that we talked through with them. The worst thing you can do is make a video only to have it be out of date or missing requests.

A Video Is Only As Good As Its Voice

Video delivery example using an example prototype from Sketch

An important note is that we also did voice-overs for these screen captures. Our first screen capture didn’t include voice because we felt it was straightforward enough, but there was still so much guesswork to be done by the viewer. Have you ever tried to watch a tutorial on YouTube without any sound? It’s very frustrating.

Adding voice-over allowed us to clarify all the details. It also meant that we were selling our design when not in the room. Our client didn’t need to remember everything we had told them. Now they look prepared when asked for updates.

Remember how we started doing smaller prototypes? Smaller prototypes meant smaller videos. Most of the videos we delivered ran 3-5 minutes. Because ScreenFlow lets you record your screen and voice at the same time, I would often sit down and do both together, though you can always re-record the voice afterwards for a better take.

These videos always had two other components: the project title with the sprint/feature, and our logo at the end of the video. That way, when our clients share a video internally, viewers know its context and who it came from.

Client Feedback On Process Matters

This was a significant change to our design process and added time to every delivery, but our clients have loved getting these videos. We know because we asked them. They also told us about a few scenarios where they shared them with their team. Remember to talk to your clients!

They were happy to be reminded of our design decisions outside of our meetings, and the videos gave them a better way to sell design to their team on critical features with many stakeholders involved.

Less Stress Means A Better Delivery

Stronger deliveries like these are more work, so they can be stressful. We gave ourselves a few restrictions to make sure they could be delivered without adding unnecessary stress for designers.

  1. Set expectations on how long these should take to produce
  2. Minor voice flubs are ok
  3. Provide support for anyone not wanting to do VO
  4. Designers are given screen recording software (ScreenFlow) for screen and voice recording
  5. We only screen record InVision prototypes
  6. VO is recorded at the same time to cut down on production time
  7. Prefab Adobe Premiere templates for title & logo bumpers
  8. Shared video directly on Basecamp instead of using YouTube/Vimeo

Deliveries like these will help your client know more, sell the design for you, and reduce feedback churn. You might even attend fewer meetings.

Making Reverb with the Web Audio API

“Making Reverb with Web Audio” is part of a series of blog posts on how to do sound design in Web Audio. Let’s start with one of the most useful effects, Reverb.

What is Reverb?

Reverberation, or reverb, is a continuation of a sound through echoes. Almost every room you’ve ever been in has this effect. Your brain uses the reverberated sound to construct a 3D impression of the space.

To get rid of reverb, scientists build specialized rooms called Anechoic Chambers. Recording Engineers do something similar; they form small padded rooms meant to “deaden” the sound while maintaining some of the natural reverb (because it sounds weird without it).

To combine all the sounds into a single space during mixing, sound engineers use audio effects that simulate reverberations.

We refer to these effects as “Reverbs.”

Creating the effect

Making a reverb with the Web Audio API is quite simple. You fill a convolution node with some decaying noise, and boom. Reverb. It sounds nice!
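Here is a minimal sketch of that idea, assuming a browser `AudioContext` named `ctx` (the function names are mine, not from the article's library): fill an AudioBuffer with noise that fades out, and hand it to a ConvolverNode.

```javascript
// Fade from 1 down to 0 across the buffer; a higher `decay` dies off faster.
function decayEnvelope(i, length, decay) {
  return Math.pow(1 - i / length, decay);
}

// Build a convolution reverb whose impulse response is decaying noise.
function makeReverb(ctx, seconds = 2, decay = 2) {
  const rate = ctx.sampleRate;
  const length = rate * seconds;
  const impulse = ctx.createBuffer(2, length, rate);
  for (let channel = 0; channel < 2; channel++) {
    const data = impulse.getChannelData(channel);
    for (let i = 0; i < length; i++) {
      // Random noise shaped by the decaying envelope.
      data[i] = (Math.random() * 2 - 1) * decayEnvelope(i, length, decay);
    }
  }
  const convolver = ctx.createConvolver();
  convolver.buffer = impulse;
  return convolver; // connect a source into this, and this to ctx.destination
}
```

Connect a source into the returned node and the node to `ctx.destination`, and you have the basic effect.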

To create the tail, we render noise into the convolution buffer using an OfflineAudioContext. Why? By rendering the reverb buffer offline, filters and equalizers (and other effects) can be used to shape the frequency response of the reverb.

I’ve added a library of classes to simplify the explanation. Feel free to grab them and use them in your projects.

/// ... SimpleReverb Class
renderTail () {
    const tailContext = new OfflineAudioContext(2, this.context.sampleRate * this.reverbTime, this.context.sampleRate);
    const tailOsc = new Noise(tailContext, 1);

    // Route the noise into the offline context
    // (the connect call assumes the helper library's Noise API).
    tailOsc.connect(tailContext.destination);
    tailOsc.attack = this.attack;
    tailOsc.decay = this.decay;
    tailOsc.release = this.release;

    // Trigger the noise, then render the result into the convolver's buffer.
    setTimeout(() => {
      tailContext.startRendering().then((buffer) => {
        this.effect.buffer = buffer;
      });
      tailOsc.on({frequency: 500, velocity: 127});
    }, 20);
}

What if we want more control over our reverb?

Anatomy of a Reverb

Reverb is a temporal event. It’s a series of changes that happen to a single sound over time. Like waves in a pool, we cannot say that a wave has “parts.” We can, however, identify patterns that exist, and write code to simulate them.

Wet vs Dry

We need to understand a few terms to shape our sound. When I say “Wet” I mean the affected signal. When I say “Dry” I mean the unaffected signal. The ratio of wet to dry signal allows us to control the perception of how far away a sound is.
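As a sketch of how that ratio is wired up: the source feeds two gain nodes, one straight to the output (dry) and one through the reverb (wet). This assumes `ctx` is an AudioContext and `reverb` a ConvolverNode; the function names are mine, not the article's.

```javascript
// Simple linear crossfade between unaffected and affected signal.
function wetDryGains(wetLevel) {
  return { dry: 1 - wetLevel, wet: wetLevel };
}

// Split the source into a dry path and a wet (reverb) path, summed at the output.
function wetDryMix(ctx, source, reverb, wetLevel = 0.3) {
  const { dry, wet } = wetDryGains(wetLevel);
  const dryGain = ctx.createGain();
  const wetGain = ctx.createGain();
  dryGain.gain.value = dry;
  wetGain.gain.value = wet;
  source.connect(dryGain).connect(ctx.destination);
  source.connect(reverb).connect(wetGain).connect(ctx.destination);
  return { dryGain, wetGain };
}
```

Raising `wetLevel` pushes the sound further back into the simulated space.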

Early Reflections

As sound waves travel toward your ear, they are absorbed and reflected by the objects and surfaces they hit. Hard surfaces redirect portions of the sound towards your ear. Due to the speed of sound, these reflections arrive after the initial sound. In reverbs, these are called early reflections.

To simulate early reflections, we use a multi-tap delay. Each “tap” of the delay effect is a simulation of a surface reflecting the sound.

Early reflections allow you to control the illusion of how close or far away a sound is.
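A minimal sketch of a multi-tap delay, assuming `ctx` is an AudioContext; the function, tap times, and gains here are illustrative guesses of mine rather than values from the article. Each tap stands in for one reflecting surface, arriving a little later and a little quieter.

```javascript
// Each later reflection is quieter than the one before it.
function tapGain(i) {
  return 0.5 / (i + 1);
}

// Build one delay + gain pair per "tap" and sum them into a single output node.
function earlyReflections(ctx, source, tapTimes = [0.007, 0.013, 0.021]) {
  const out = ctx.createGain();
  tapTimes.forEach((time, i) => {
    const delay = ctx.createDelay(1); // max delay of 1 second
    delay.delayTime.value = time;
    const gain = ctx.createGain();
    gain.gain.value = tapGain(i);
    source.connect(delay).connect(gain).connect(out);
  });
  return out; // connect this to the destination alongside the dry signal
}
```

More taps at shorter intervals read as a smaller, harder room.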


Pre-Delay

The speed of sound is pretty slow. Depending on the size of the room you are simulating, it may take a moment for the collected reverberations to reach you. Pre-delay adds a few milliseconds to the start of the reverb. Combined with the dry signal, this places the sound in the room.
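As a back-of-the-envelope way to pick a pre-delay value, you can work from the distance to the nearest reflecting wall; this helper is a hypothetical one of mine, not from the article.

```javascript
// Sound travels roughly 343 m/s in air; the first reflection has to go
// out to the wall and back, so double the distance.
function preDelaySeconds(wallDistanceMetres, speedOfSound = 343) {
  return (2 * wallDistanceMetres) / speedOfSound;
}
```

A wall about 5 m away works out to roughly 29 ms, a value you could then assign to a DelayNode's `delayTime`.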

Diffused sound

Once a sound has bounced around for a while, it becomes “diffused.” With each bounce off of a surface, energy is lost and absorbed. Eventually, it turns into noise that no longer resembles the original sound. This noise is the reverb “tail.”

To simulate these phenomena we’re going to use:

  • A Delay node for Pre-Delay
  • A multitap delay node for early reflections
  • A convolution node for the diffused sound (precisely like the basic reverb)
  • Filtered noise for the convolution buffer.
  • Gain nodes to help us control the balance between the different pieces.

AdvancedReverb Setup

We’re going to add a few delay nodes for pre-delay and multitap.
One thing you might notice is that the multitap delay nodes bypass the reverb effect. The multitap is simulating Early Reflections, so we don’t need to add those sounds to the reverberated sound.

// Advanced Reverb Setup
// (reconstructed: reverbTime and preDelay are assumed to be passed in;
//  in the full class they may come from the constructor instead)
setup (reverbTime = 1, preDelay = 0.03) {
    this.effect = this.context.createConvolver();

    this.reverbTime = reverbTime;

    this.attack = 0;
    this.decay = 0.0;
    this.release = reverbTime / 3;

    // Pre-delay holds the wet signal back before it reaches the convolver.
    this.preDelay = this.context.createDelay(reverbTime);
    this.preDelay.delayTime.setValueAtTime(preDelay, this.context.currentTime);

    // A short chain of delays simulates the early reflections.
    this.multitap = [];
    for (let i = 2; i > 0; i--) {
      this.multitap.push(this.context.createDelay(reverbTime));
    }
    this.multitap.forEach((t, i) => {
      if (this.multitap[i + 1]) {
        t.connect(this.multitap[i + 1]);
      }
      t.delayTime.setValueAtTime(0.001 + (i * (preDelay / 2)), this.context.currentTime);
    });

    this.multitapGain = this.context.createGain();
    this.multitap[this.multitap.length - 1].connect(this.multitapGain);
    this.multitapGain.gain.value = 0.2;

    this.wet = this.context.createGain();
    // ... remaining dry/wet routing continues in the full class
}

Let’s take a look at the AdvancedReverb renderTail() function.

//...AdvancedReverb Class
renderTail () {
    const tailContext = new OfflineAudioContext(2, this.context.sampleRate * this.reverbTime, this.context.sampleRate);
    const tailOsc = new Noise(tailContext, 1);
    const tailLPFilter = new Filter(tailContext, "lowpass", 5000, 1);
    const tailHPFilter = new Filter(tailContext, "highpass", 500, 1);

    // Shape the noise before rendering: noise -> highpass -> lowpass -> output
    // (the connect calls assume the helper library's Noise/Filter API).
    tailOsc.connect(tailHPFilter.input);
    tailHPFilter.connect(tailLPFilter.input);
    tailLPFilter.connect(tailContext.destination);

    tailOsc.attack = this.attack;
    tailOsc.decay = this.decay;
    tailOsc.release = this.release;

    setTimeout(() => {
      tailContext.startRendering().then((buffer) => {
        this.effect.buffer = buffer;
      });
      tailOsc.on({frequency: 500, velocity: 127});
    }, 20);
}

The extra filters, pre-delay, and multi-tap allow us to shape smaller rooms. The SimpleReverb class creates a simple echoed space. AdvancedReverb simulates more phenomena that occur inside of that echoed space.

Playing Around with Space

Now that you have your reverb, you can begin putting sounds into space. By changing the size of the pre-delay, the number of multi-taps, the frequencies of the filtered noise, and the ratio of dry signal to wet signal, you can drastically alter the space.

A quick rundown of how to apply each setting:

Wet/Dry Ratio:
The less dry signal, the further into the reverberated space your sound is.

Pre-Delay:
The larger the pre-delay, the further away the walls feel.

Multi-taps:
The more multi-taps, the smaller, squarer, and more untreated the room feels.

Filtered noise:
Removing high frequencies can make the reverb sound more “dead” and, with low dry settings, can make the reverb sound like it is coming through a wall.

Removing low frequencies sounds unnatural, but it can make the reverberated sound clearer. If you’re adding reverb to bass sounds, I suggest filtering the bass out of the reverb.
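Keeping bass out of the reverb can be done by putting a highpass filter on the send into the convolver. A minimal sketch, assuming `ctx` is an AudioContext and `reverb` a ConvolverNode; the function name and the 250 Hz default are my own illustrative choices, not values from the article.

```javascript
// Insert a highpass filter between the source and the reverb send,
// so low frequencies reach the listener dry but never get reverberated.
function bassFreeReverbSend(ctx, source, reverb, cutoffHz = 250) {
  const hp = ctx.createBiquadFilter();
  hp.type = "highpass";
  hp.frequency.value = cutoffHz;
  source.connect(hp).connect(reverb);
  return hp;
}
```

The dry path stays untouched; only the signal feeding the reverb is filtered.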


There are no hard and fast rules. You need to tweak the settings and listen to the results.

Grab the examples from CodePen and play around with the values. I deliberately left the examples free of controls so you have to tweak them manually. See what kind of spaces you can make!


Embracing the Bottom of the Learning Curve

I’ve always had a complicated relationship with learning as a designer. It’s satisfying to gain new skills, but staying in my comfort zone feels so much easier. I want to push myself and get awesome results, but there’s an intimidating hurdle of not knowing how to start. The bottom of the learning curve is a scary place to confront, and for years it stalled my progress in 3D design. Dipping my toes into 3D modelling and quitting after a week was a common occurrence. There are dozens of abandoned attempts sitting on my old hard drives. Something always prevented me from wanting to continue. Normals, modifiers, rendering — 3D felt too overwhelming and vast. I felt stumped. How do you get started learning something when you don’t even know what you don’t know?

RegExr v2: Build, Test, & Learn RegEx

RegExr is exactly six years old today. Built in Flex and AS3, it was a largely accidental outcome of exploring a few technical concepts I was interested in at the time (tokenizers/lexers, advanced text interactions, regular expressions).

RegExr v1 circa 2008

I thought the end result might be useful to others struggling to learn or work with RegEx, so I released it online. Its popularity took me by surprise, with around 10M hits and 150K patterns saved to date. This is despite being essentially abandoned since 2008.

I’m happy to announce that the neglect is finally ending, with today’s release of RegExr v2. Rebuilt from scratch in HTML/JS, and (hopefully) improved in every way. I’d like to believe that RegExr v2 is the best way to learn, build and test RegEx online today.

RegExr v2

Key features:

  • clean, modern design
  • video tutorial
  • expression syntax coloring
  • underlines expression errors in red
  • contextual help for all regex tokens and errors on rollover
  • updates matches as you type
  • support for testing substitution/replace
  • full reference of all JS RegEx tokens, with loadable examples
  • searchable database of community submitted patterns
  • drag and drop text files to load their content
  • save and share patterns with others via direct links
  • undo/redo
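Everything RegExr tests maps directly onto JavaScript's built-in RegExp, so patterns built in the tool drop straight into code. A quick illustrative sketch of match and substitution (the pattern and strings are my own, not from the post):

```javascript
// A global pattern with two capture groups, like one you might build in RegExr.
const pattern = /(\w+)@(\w+)\.com/g;
const text = "ada@example.com, alan@test.com";

// Matching: find every occurrence of the pattern.
const matches = text.match(pattern); // ["ada@example.com", "alan@test.com"]

// Substitution: $1/$2 refer back to the capture groups.
const swapped = text.replace(pattern, "$2: $1"); // "example: ada, test: alan"
```

RegExr's substitution panel exercises exactly this `replace` behavior, including the `$n` backreference syntax.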

I also dug through over 240 comments on the original blog post, and implemented a ton of suggestions:

  • larger monospaced text and support for browser zoom (my eyes are older, my monitors are larger, and 10pt font just doesn’t seem so cool now)
  • vastly improved tokenizer that is (hopefully) 100% accurate to JS RegExp standards
  • improved documentation, now with examples
  • support for pasting full expressions (including flags)
  • save includes your sample and substitution text

Now that it’s released, we’re going to try not to let it stagnate again. The first order of business is to clean up the code and commit it to the RegExr GitHub repo, so that it becomes a living project with community support.

We’re also going to try to clean up the existing community patterns – likely scrubbing any that now have errors (due to differences in AS3 and JS for example).

Following that, I’m going to be taking a look at different options for wrapping it in a desktop installer, so you can run it offline and save your favourites locally (input on this is welcome). I’d also love to make it usable on mobile devices, not because I think there’s a huge demand for testing regular expressions on mobile phones, but as a challenge to see if it can be done well – I think the “click to insert” feature of the reference library could work really well.

I’m also planning to write up a blog post exploring some of the technical challenges and decisions that we made while building this.

If you enjoy using RegExr, you can help out by tweeting, facebooking, gPlussing, blogging, or otherwise sharing/linking to it so others can find it. Version 1 disappeared almost completely from Google a few months ago (I believe they downgraded pages with only Flash content), and I’d really like it to recover in the rankings.

As always, I’d love to hear what you think of the new version of RegExr, and any feedback on how to make it even better.

WebGL Support in EaselJS + Mozilla Sponsors CreateJS

We’re absolutely thrilled to welcome Mozilla, who joins Adobe, Microsoft, and AOL on the roster of CreateJS sponsors!

We’ve been working with the Firefox OS team to ensure our libraries are well-supported and valuable tools for app and game creation on Firefox OS.

The first big announcement resulting from this collaboration is WebGL support for EaselJS (currently in public beta on GitHub), which works in both the browser and application contexts of Firefox OS (as well as other WebGL-enabled browsers). In our tests, we’ve managed to draw a subset of 2D content up to 50x faster than is currently possible with the Canvas 2D context. You can learn more about our WebGL implementation on the Mozilla Hacks or CreateJS blogs.

Be sure to let us know what you think in the EaselJS Community forum.

Welcome Mozilla, and a huge thank-you to all our amazing sponsors!

Bardbarian Launches on iOS!

I’m very excited that our first big TreeFortress game, Bardbarian, has launched on the iOS App Store! Bardbarian offers a unique gameplay style (think tower defence merged with a top down shooter), and some really fantastic artwork.


Check out screenshots & gameplay videos on the Bardbarian game site, or grab it on the App Store.

Bardbarian has also been greenlit on Steam, and will be coming to PC, Android, and other platforms soon.