Luca Fusi Sound Design | Implementation


You Can’t Kill the Hallway

I don't know where to start writing this or who it's for, but I need to do it.

Last week (it was 'yesterday' when I started this), PopCap Seattle fired some 40% of its staff as it leans up and refocuses.

There's an official statement you can read on it:

A message re: the 05/2017 PopCap layoffs.

It's a financially sensible move whose inevitability, now realized, is nonetheless totally heartbreaking. Even handled with care, layoffs are a capricious and violent force. They disrupt, bringing what was in motion to a sudden stop; or they accelerate a course of action that was just barely getting off the ground.

What they bring universally is change. Change, that steadfast guarantee that nothing ever stays the way it is. Change is at once life's most awe-inspiring principle and its most terrifying. Change is growth, discovery, curiosity, the intense and unexpected--but it's also uncertainty, the passing of what is into what won't be anymore. Change marks time and reminds us that even life's greatest moments don't last forever. That there are only so many moments we ever get. Dramatic change is worth a dramatic marking.

I want to talk about PopCap's audio team, for as much as I've gotten to know it, and one of the most remarkable chapters of my life so far. It's a chapter I never went looking for.

This is how it started.

This is how it started.

This first part is going to be really heavily about me, so if you're as annoyed by that thought as I am, you can skip it. But it's good context.

How did I get here?

Change is at once life's most awe-inspiring principle and its most terrifying.

If you know the PopCap audio team, you know that the smallest unit of measure our science observes is the leap of faith that is the decision to work with the PopCap audio team. They are a diverse band of visionaries spanning decades and specialties, trailing stardust ribbons of renown behind everything they do alone; together, they are almost intergalactic. Their work touches millions and the hearts of those millions.

But in 2014, when I got this message from my friend, RJ, I thought I was doing pretty fucking good. I had learned lots on Project Spark and met my best friend, but had left it burnt out and feeling very post-in-house after just one gig. I am known for my resilience. Just angry, Matthew McConaughey-style rants on the great helplessness of it all and how bullshit the machinations of game development become at the point where someone's answerable to shareholders. Stability is an illusion.

I had the chance of a lifetime to roll out of that and into collaboration / wide-eyed servitude with some incredible talent, remotely working on the sorts of titles I knew I wanted to. It was a lottery moment, and after a few months in the relationship, I was where I thought I wanted to be: able to work full-time on 2016's favorite puzzle game in my freaking pajamas.

(I didn't actually own pajamas then, but I do now. They say "PopCap" on them.)

And so I leapt from Microsoft not really knowing if all of this would totally come through on the other side, because it's scary to think about going independent, even as you're really always independent as a contractor, just temporarily sheltered. It seems very obvious now that this scenario was going to work. It was who I thought I should be and I would try it on for a while. Before I got too deep into it, though, I got that message from RJ.

I wasn't playing mobile titles. I wasn't looking to do something like this, or really, anything different at all, because pajamas. Putting myself in the headspace of that time, you can see why PopCap could seem like a good but still lateral move. Certainly not THE move. But, I had done my thing where I give a soft yes, and--because of something my therapist could probably point to--I hate disappointing people more than anything and I was now in it to at least talk to Becky. I talked to her, and she was wonderful.

I agreed to take a short design test for them the following weekend.

If I have to..


I really did not want to take a design test. I hate them the way you hate running and love them the way you love being done with running. I was suddenly sitting here looking at my already way-more-than-OK scenario and thinking, fuck. I have a design test this weekend. Why? Can I bail out of it?

(I almost bailed out of it but, as happens every time, my guardian angel talked me off the ledge. I have bailed out of design tests and I don't recommend doing that. We are all works-in-progress.)

But I ran hot on it until 6AM Monday morning and had an incredible time trying to find the aesthetic involved. I was so stuck in Witness mode that I used Paulstretch on several design elements that definitely didn't need Paulstretch. The sounds I made were trying to be powerful and really textured, really literal, for the most part. Some of them were smudgy and terrible in ways I don't want to admit to now. Most of my design was very grey in sharp contrast to the colorful vibe of the test's cartoon aesthetic.

Nothing was working until I basically gave up and started pulling from the library. These things were lighthearted, funny. Forget making the best sounding leaves. What were funny sounds? I broke a previously unverbalized rule and started pulling from the really old stuff, the classic cartoon sounds you say you'll never use.

Dropping the first of those onto the track was the moment. Classing up old clunkers from the likes of Hanna-Barbera with some higher frequency layers and sculpting them all together in WHOOSH was my first foray into a realm I'd go on to speak about almost a year later. I didn't know if they'd go for it, but I was having fun, just like Becky'd asked.

They did go for it. They called me back a few weeks later, and just about two and a half years ago, I grabbed coffee with Becky and Guy at the Uptown Espresso across the street from PopCap. It was the day I was headed home for Christmas.

This was my first facetime with Becky, and I'd only ever seen Guy from the middle audience row at his GDC talk on the music of Peggle 2. But the two of them felt like old friends as we spoke, mostly about sound design and what I'd done, but also about what we'd all like to see next in game audio. No hints of what I'd be working on, just a pure character assessment.

They took me upstairs to see the rest of the space. We rounded the 7th floor, the area where Bejeweled Stars was being built, and I was offered a drink from the friendly, Arduino-driven drinkbot occupying the middle of the room. That day was the Christmas party and there were dozens of folks having a great time with each other in their terrible sweaters. Easily a few dozen more than I'd seen around the corridors of my last gig, and several dozen more than I was likely to run into in my pajamas. I was catching a glimpse of a culture that I'd never really thought about but was quickly becoming an unignorable factor.

We rounded the corner to the hallway. Damian was there. Damian, maybe the first voice of game audio I'd ever heard; in the days before my time at VFS, I'd pace lonely around rainy Vancouver while drowning my anxiety in the Game Audio Podcast. If I've posted this to my site, you can probably read all about it. RJ was there, too, excitedly showing off the way they were using RTPCs in Wwise to push MIDI around. I asked a lot of questions through the drink and hung out until it was time for everyone to get back to work.

I'm not sure if I left convinced, because I had a plane to catch that night and lots on my mind. But I left on the line. I was giving this place a chance, and it was hanging in. There was something to it. But I mostly put it out of my mind until a few weeks later.

I worked a bit through the holidays and returned to Seattle ready to settle in to the next several months' time freelancing, barring anything crazy coming up. A month later, I got the call from Becky--and the offer. I thought about it for a few days. Looking back at how everything's played out, the internal dialogue is still incredibly relevant; things would've worked out either way. But the idea of working in-person alongside that team, on that floor, moving into that tiny bit of culture I'd glimpsed.. I wanted to try it.

So, I jumped from my independent, post-in-house dream to a subsidiary of EA that developed casual titles for a platform I didn't play games on.

The easy choices aren't choices at all. It's the hard ones that really shape us.

This is not my beautiful team

And it's funny how that worked out. If RJ hadn't messaged me that day with the soft sell, I don't know if I'd ever have strayed into the orbit of this all-star team.

I knew I'd made the right choice so goddamn fast. We were throwing sounds into a prototype build within my first two weeks and off to GDC just days after that.


GDC 2015 was just a few weeks after my start date; we all went down. Guy / Jaclyn / RJ were speaking, Becky was paneling, Damian was suffusing. It was my first GDC as part of a big ol' audio team, and the company name on my badge still fit me like a hand-me-down. Even as I'd spend time with the team, I had one foot planted firmly outside, in the past. As my solo act. So I feel like I was still able to watch this team in action as anyone else in the audience would. I could not believe what I had stepped into. Everyone was so.. brilliant. They won a fucking GANG award! That talk was one of the best talks I've ever seen, and those three people work two doors down from me. They're my fucking coworkers. How the hell am I going to do this?

I kind of went all in after we got back. I had this major chip on my shoulder about how amazing everyone else on the team was and I really wanted to be a part of it; I also totally acknowledged that I wasn't yet. I saw this as: let me do something cool for me, and hope they accept me in.

This attitude didn't do me any favors. I had all my armor up from previous projects and attacked meetings, tasks and all sorts of management outside my scope because I was used to it not getting done. I think I tried to do Becky's job, a little bit. I was kinda mean and we had disagreements. In my mind, it was probably pretty unclear to everyone whether I really fit in or not just yet.

Slowly, the positivity of the hallway started to wear me down. Things got done without calamity; interfacing with production wasn't so bad. I was coming in to benefit from all the developer goodwill that'd been built up by Team Audio in the years before me. That positivity wasn't to placate, or to make empty promises--it was the real deal. It's what made PopCap run like it did.

Also, weirdly, everyone in the audio hallway seemed to know the name of everyone else at the company. How does that happen?

I started to find the place where my voice crossed the PopCap vision, and that confidence let me move myself closer to that warm emotional center of the hallway. Over weeks and months, these people became my friends. They'd be in my office making silly voices, they were eyeing up my salads with concern, they got me to buy a car off fucking eBay in the middle of a voiceover session.

And the days go by

Later that year, we hit our first real crunch period on Heroes. Becky and I, in for weekends and weekends in a row. My grandfather got sick. My Mom got terrifyingly depressed, and it caught. I started going to therapy for anxiety; when it didn't work well enough, I saw my first psych. I kind of had to talk about it, even at work. It turns out that it was totally okay to talk about it.

I want to say that it's around here that I started really to fall in love with this group of people. When I realized that I could be my messiest self, whether I was trying to work on it or not, and have my entire department there for me as family. I say, 'the entire department,' but I feel like it was Becky who came first, and everyone else in time. Fair's fair. It is really something special to have a work environment where you can feel safe like that. You set it out there, you get the support back, you feel so unbelievably lucky and you find a foothold to push off and start making silly, joyful sounds again. You really can't make those when you're coming from a bad place.

Anyways, it was always good, but this is when it was the most good. As time went on, PopCap'd try something, not get it quite right and come back a little less confident in itself. Around the time I left my first contract, the world outside the hallway was changing. Bejeweled Stars, one of the most emotional and beautiful-sounding games I've ever heard--a fucking Match-3, no less--launched to incredible acclaim, but couldn't stay there. I came back from three months' furlough to frantically join the GDC pitch session, where between us all we must've thrown seven or eight proposals at the committee. It'd been two years since the last talk and we needed to share! I was so goddamn thrilled to be inside the walls and able to participate this time around. I belonged. The Hail Mary'd be a six-person panel on Team Culture, and how cool would it be if we could all just sit up there and try to share the warmth of working together and get some of that out into the world of game audio.

We finished off PvZ: Heroes, flying down to LA to post-mortem it at GameSoundCon before it'd even launched. It, too, hit hard out of the gate, but we needed it to hit harder. I would say it was around the launch of Heroes when we knew that things were sliding down. It was sad for a while. I guess, in games, you kind of get inured to it after a few years, but it still never feels good. People outside the hallway started leaving and, in January, we lost Jaclyn.

By now, the hallway was a warm huddle. Hugs happened constantly and without warning. We all came together in one final sprint to finish a huge wave of content together, moving one last wave of Post-Its across the whiteboard. I can't believe that was just last month.

Same as it ever was

I'm sorry that this is so light on useful information. History first. Maybe it's a bunch of in-club kinda personal grandstanding that no one will read, but maybe it'll click with you. Maybe it'll make you press for it in your next team.

And anyways it's kind of redundant: I realized after starting this love letter that we sort of already had our moment just a few months back, at this year's game development conference. A last reckoning, some lessons for the time capsule. It wasn't quite over then.

I didn't get to writing up a GDC post this year, but if you were at it, you probably saw us. I'm not sure what happened--probably the sweatshirts--but there was something about knowing the end was coming that drew us so tightly together as to start a fusion reaction. I felt like, and I say this with all due humility, PopCap was kind of a big deal this year. We all felt so responsible to spread whatever enthusiasm, whatever knowledge, whatever love we'd learned from one another over the years that we might as well have glowed. I'm just a guy who makes sounds for a mobile game that didn't hit all that well, but this year, I felt like I had to be someone better than myself. Or maybe, to be my best self. Like we all owed the community that. I'm not sure? Am I crazy? You can't respond inline?

It's funny, re-watching the GDC 2017 panel now. A lot of talk about change, impermanence, evolution--Guy Whitmore's words coming out of my mouth. Boy, you'd think we all spent a lot of time together. What my indoctrination really says to me is that we'd all known this moment was coming for a while, and, I think, suspected that our best days within the hallway were behind us. And though the Actual Moment of it hurts, in the time between last week's send-off and right now, I really haven't seen so many tears from Team Audio. I think we were weirdly prepared. It was worse at Jaclyn's departure, the first crack in the ice shelf. The rest of it was just a matter of time.

If you are looking for takeaways, you want to watch that panel talk--they're in there, along with a lot more stuff like this. Right now, it's still locked behind Vault access, but I can't imagine that will last long.

I'll just add some personal ones here:

  • Seek out values like this in your next team. Gender balance, work-life balance, folks you think you could be friends with, that you'll be happy to see when things get really stressful. Teams over projects.
  • Be somewhere you can be yourself. Trust your coworkers to catch you on the other side of that.
  • Don't be afraid of working with friends. Friendship and commitment to a vision + an audience can coexist, and they completely support one another.

PopCap's taught me a lot, but its final lesson is a really universal one: that when you get moments like these, you need to squeeze them as tight as you can, because this shit doesn't last. For anyone.

Just as there was life before these years, there'll be life after it. The beautiful thing about change is that we'll just have to wait and see.

Okay, last lap. If you're reading this, you probably care a bit about game audio; you had probably heard of PopCap beforehand. I truly hope we helped you along your journey in some small, measurable way. If what we were doing hit you with even a fraction of what PopCap gave me, you are running inspired, rainbow batteries full of ideas and joy and enthusiasm enough to last you the next few projects. Grab the torch Guy, Becky, Damian, Jaclyn, RJ and I carried for those couple of years and run it all the way across the finish line. There may never be a six-person, two-in-house-composer mobile audio team again, so this is on all of us now!

Uhhhh final takeaways: Think big about thinking small! MIDI belongs in Middleware. Make joy your North Star. Runtime sound design is funtime sound design. Use your mouth, filter everything, take the real out. Always be composing. Respect your audience.

Hug a lot.

And, as Becky told me all those years ago when she passed along the design test.. HAVE FUN!


Thanks, Team Audio. I love you.






Thank You

And finally, Damian's Storified experience of the community response to his layoff news. The outpouring is real.


GDC 2016 – Inside Ourselves

A Game That Listens — The Audio of INSIDE

Speakers: Martin Stig Andersen (Independent)


"I now really enjoy dying."

Martin Stig Andersen is a human being who breathes air, eats food and drinks water, just like the rest of us--but unlike you and I, he was responsible for the audio of LIMBO.

Martin was in tow at this year's Game Developers Conference to give an advance look at some of the sound design and systems behind Playdead's upcoming title, INSIDE, as well as spend some time at the Audiokinetic booth on the show floor showing off all the knobs and wires.

And look, I don't want to oversell his talents too much, because there's a nonzero chance that he'll read this at some point and it could be really weird when I run into him down the line. It already is, to an extent. For a lot of people. After a few years of realizing that the people behind your favorite game experiences are just people, you cool it with the crazy sycophantic fandom stuff. Mostly. But Martin's got a really, really sharp mind for sound design and interactive audio, and I wouldn't be the first person to tell him so.

I kinda feel bad for him, actually; there was, even in the rushed fifteen-minutes-on-a-Friday time I hung around the booth, this revolving Who's Who cast of sound designers that'd never hire me, asking questions and pointing at the screen and talking coolly to Martin about their thoughts. Satisfied, they would end this exchange in this self-aware explosion of hurried fandom on how incredible his work is and how important LIMBO was to them and gosh, I dunno, you're just a genius, and thank you so much for doing everything you do, Martin, before shaking his hand and dashing off.

Maybe there's another article in me sometime about how damn weird it must be to be on that other side, or on the etiquette of admitting prior knowledge of a person and how much you love them when you're finally face to face on the conference floor. I do think about it a bunch, and usually just tell people to chill.

For now I should get to brass tacks on what Martin and Playdead are up to and how you might learn from it.

A Brief History

First off, get familiar: if you haven't played LIMBO, what the fuck are you doing here?

You can either play it now, or after this article makes you realize you should, but it'd probably inform the read a little bit.

Martin's work on LIMBO set a lot of firsts for game audio, or at least, went public with them in a way that let their influence really start to bear on this generation of sound designers. On the design front, LIMBO's soundscape is a perfect match for the stark, blurred-out monochrome of the game's visuals. Sounds move from muffled, wire recorder-processed bits of texture to full-frequency moments of danger that leap to the fore. The similarly blurred line between what's sound / what's music is trodden almost invisibly through the game thanks to Martin's nearly musique concrète approach of tuning everything to fit together. There's a talk on this from GDC 2011--watch it.

Beyond its aesthetic accomplishments, LIMBO was the poster child of How You Should Do Things In Wwise for many years. The game released in 2010; the version of the program Martin used to stitch all this logic together, I bet most of us would barely recognize. But he sure as heck made it work. Credit where it's due, Playdead seems to have given him a really wonderful suite of debug tools with which to draw bounds and place events for determining mix states throughout the game's stages. This mix, the glue that holds it all together, is consistently auto-attenuating, resetting, evolving as the player adventures onwards. These tools were shown in Martin's 2012 lecture on implementation--watch the talks I link at the bottom.

So yeah man, this game was a big deal.

Shortly after completing LIMBO, Playdead scooped up audio programmer Jakob Schmid in a really smart first-round draft. Playdead's lead gameplay designer, Jeppe Carlsen, set off to explore the Unity game engine and try his hand at a different sort of platformer, bringing along Schmid, as well as motion graphics designers Niels Fyrst Lausdahl and Andreas Arnild Peitersen. They'd create what eventually became 140, a visually mesmerizing platformer in which each level's elements (blocks, elevators, etc.) were choreographed by the game's soundtrack, knitting it all together in a really cohesive and trippy package. See for yourself!

Years later, with these audio engine-directed learnings in place, INSIDE's in full production, and we get to where we are now. Headline! Inside INSIDE: Playdead Comes Alive.



The Talk

Get It Right The Second Time

Nothing's ever perfect in the mind of its creator. Martin revealed that one of his biggest regrets with LIMBO was the way the death / respawn cycle interrupted the flow of the game's soundscape. If you weren't particularly bothered by this, it's probably because this is the way we expect games to work. When we fade to black, things go quiet; when we fade up, all is new again. Games flush caches and reload variables across moments like this, and lots of what the audio engine knows or is doing goes along for the ride. It's bog standard.

Thing is, in LIMBO, you die a lot. And Martin had very specific goals for its sound experience--an ever-evolving blend of literal and acousmatic sound effects, textures and pads--that clashed with the audio engine's impersistence. Imagine the Boy's deaths and the player's experience of the world as one unbroken timeline, and you can start to see Martin's perspective: you intend for some sound design, or a particular mix state, to play just at the player's first encounter with an area, and these serve as passage into some new part of the overall composition. You'd rather not back out of it until the player's advanced, and every death before the next checkpoint is part of that same section of the piece. Retriggering these cues "because Memory Management" goes against artistic intent and makes things feel much more "game"y than they need to.

This time around, they set the sound engine to start at the beginning of the game and never, ever unload until the game has finished. Everything the audio engine knows, and everything the game knows about it, rolls all the way through, untouched until the game is shut off. This means voices don't need to cut off, playback position can be remembered, and mix states and other game object-specific variables can be adjusted with the long view in mind.
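To make the idea concrete, here's a minimal Python sketch of what "never unloading" buys you. All the names are hypothetical and this models the pattern, not Playdead's actual code:

```python
# Hypothetical model of an always-on sound engine: state parked here survives
# every death/respawn, even as the game objects it describes are destroyed.

class PersistentSoundEngine:
    def __init__(self):
        self.variables = {}      # mix states, RTPC-style values, one-shot flags
        self.playback_pos = 0.0  # position in the long-running music event

    def remember(self, key, value):
        self.variables[key] = value

    def recall(self, key, default=None):
        return self.variables.get(key, default)

def enter_area(engine):
    """Fire the area's intro cue only on the player's first encounter."""
    if not engine.recall("hotel_intro_played", False):
        engine.remember("hotel_intro_played", True)
        return "play_intro_cue"
    return "music_continues_uninterrupted"

engine = PersistentSoundEngine()   # initialized once, never torn down
print(enter_area(engine))          # first visit: the cue fires
print(enter_area(engine))          # after a death and reload: it doesn't
```

Because the flag lives in the engine rather than on the destroyed-and-recreated area objects, a first-encounter cue can play exactly once no matter how many times the player dies on the way through.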

We do something similar at PopCap with our music system, firing off one event at sound engine initialization and letting it roll until the session's done or interrupted. The game's music evolves through State changes over time. Mostly, we do this because guaranteeing the music's on lets us steer it mid-flight with Events we've created ourselves, instead of relying on developers to trigger those cues. Also, cycling things in and out of memory costs CPU, which is itself a precious commodity on mobile.
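Our PopCap pattern reduces to something like this toy model (hypothetical names; in Wwise terms, the one event posted at init plus State changes thereafter):

```python
# Toy model of a single long-running music event steered by State changes.
# The event is posted once at sound engine init and never stopped; States
# transition the music mid-flight without retriggering anything.

class MusicSession:
    def __init__(self, initial_state="menu"):
        self.playing = True              # the one event, fired at init
        self.state = initial_state
        self.history = [initial_state]   # every state the music has passed through

    def set_state(self, new_state):
        if new_state != self.state:      # redundant sets are ignored
            self.state = new_state
            self.history.append(new_state)

session = MusicSession()
session.set_state("gameplay")
session.set_state("gameplay")  # no-op
session.set_state("boss")
print(session.history)         # the music never stopped between these
```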

So this idea of keeping the audio engine alive forever is not new--nor are a lot of the bullet points that make up the technical considerations of INSIDE.

Yet, there's something magical in the way they're all brought together.

How This Sounded

To illustrate the shortcomings he saw in LIMBO, Martin presented a couple of gameplay clips of the boy dying and respawning with their original audio in place--and then a few clips of the same sequence with their audio replaced to allow the music to play through.

The Befores and Afters here are subtle. Music cues which fired off just as a player first encountered a challenge (hopping across the electrified HOTEL sign) happened just once, the first time, and the rest of the piece continued through every fade to black as the boy died a handful of times in trying to cross this section. It lent a really nice continuity to the segment that wasn't there when everything had to restart on each reload.

A practical example of this on INSIDE takes place in one of the game's many puzzles. Martin demonstrated a section in which the Boy's got to flip a breaker switch that turns on a few dangerous, rotating sentry lights that will kill him if crossed, but whose light he nonetheless needs to proceed.

Those lights sounded dangerous as all get-out; really great, full-spectrum stuff. Activating them drives a railroad spike through what's been a cautiously filtered mix up until this point. They seize and hold your attention, and they are meant to: Don't fuck with these lights.

[pullquote align="left"]Martin quipped that these were "such small changes that the player [was] unlikely to notice them," but that's a very knowing statement.[/pullquote]

But the focus shifts thereafter to the Boy and how you'll help him through this. So, in a persistent mix decision that leverages this always-on audio engine, the game begins to attenuate those lights on subsequent deaths and reloads. Even though the game objects associated with those sounds are trashed in the interim, those variables are kept alive in the sound engine, to which the game looks for its cues.
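As a sketch, that persistent mix decision might reduce to a function of death count. The numbers here are invented for illustration:

```python
# Hypothetical numbers: each death pulls the sentry lights down a step,
# clamped at a floor so they never vanish entirely. Because the death count
# lives in the always-on sound engine, the attenuation survives the reload
# of the light objects themselves.

ATTENUATION_PER_DEATH_DB = -1.5
ATTENUATION_FLOOR_DB = -9.0

def lights_volume_db(death_count):
    """Volume offset for the sentry lights after `death_count` deaths."""
    return max(ATTENUATION_FLOOR_DB, death_count * ATTENUATION_PER_DEATH_DB)

print(lights_volume_db(0))   # first encounter: full railroad-spike intensity
print(lights_volume_db(4))   # a few deaths in, the lights recede
print(lights_volume_db(10))  # clamped at the floor
```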

We saw this same phenomenon in LIMBO, with an incredibly layered attenuation system that slowly swung the volume of the boy's footsteps up and down within a 15dB window, irrespective of camera distance and entirely for this effect.

What happened there, and what'll happen with INSIDE, is that all of these subtle changes sum up to larger changes you feel, even if you can't quite put your finger on them. This is the mix as a conversation with the player, rather than some unweighted diegetic reality, broadcast out.

Again, there's no one showstopper here, it's the concert of everything working together. You start to see what five years' tuning will get you.

Listening to What?

That sounds all well and good, but the name of the talk--what exactly is the game listening to?

INSIDE's been structured so that many of its events and logic are fired and evaluated totally within the context of the audio engine, which has some really rad creative consequences I'll think out loud about in a bit. For starters, this top-down audio-driven vision required that "game time"--which is usually measured in frames, and can swing variably up and down depending on system performance--has instead been locked to Wwise's metronome.

A boy runs across a dangerous plank in a screenshot of Playdead's INSIDE.

None of this is gonna bob to the rhythm. No, you'll have to listen.

The BPM of INSIDE's soundtrack (and I use the term loosely, as none of it seems so obviously musical) is the actual beating heart of the game. You can easily envision how this would work in something like Rock Band, or 140, but INSIDE is very different from those games. It's bleak, desaturated and looks very much like it rose from the bones of LIMBO. That's another way of saying, it's not flashing note charts and neon elevators in your face--but the all-powerful thrum of its soundscape is just as much in control.
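Locking game time to the music clock is simple to model. With an invented tempo (120 BPM is my number, not INSIDE's), the conversion looks like:

```python
# Sketch of game logic reading the audio clock instead of the frame counter.
# The tempo here is invented for illustration.

BPM = 120.0
SECONDS_PER_BEAT = 60.0 / BPM  # 0.5s per beat at 120 BPM

def beats_elapsed(audio_time_seconds):
    """What beat is it? Ask the audio clock, not the frame counter."""
    return audio_time_seconds / SECONDS_PER_BEAT

def on_beat(audio_time_seconds, tolerance=0.05):
    """True when we're within `tolerance` (in beats) of a beat boundary."""
    phase = beats_elapsed(audio_time_seconds) % 1.0
    return phase < tolerance or phase > 1.0 - tolerance

print(beats_elapsed(3.0))  # 6.0 beats in at 120 BPM
print(on_beat(3.0))        # exactly on the beat
print(on_beat(3.2))        # between beats
```

However crummy the frame rate gets, this clock doesn't drift: the game and the audio engine always agree on what beat it is.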

Robot Rock

Here's an example.

In one scene, the Boy is in a factory, sneaking between the rank and file of a bunch of deactivated automata. There's a conspicuous space between two of them, and as you step into it, a track-mounted security drone snaps to life from just above, latching you with a spotlight. The steady pulse of the factory swells into a rhythm; the line of machines begins to march. It's The Sorcerer's Apprentice meets Pink Floyd's "The Wall," and the player's got to keep lockstep with the beat to avoid being snicked by the drone.

[pullquote align="right"]There are no scrolling meters or anything to let you know that this is how this section needs to be played--by now, the game has taught you to listen.[/pullquote]

Four bars of a march, four bars of silence. The drones stop, you stop. The drones move, you move. Just like that.

There are no scrolling meters or anything to let you know that this is how this section needs to be played--by now, the game has taught you to listen.

(Actually, I'm wondering how the hearing-impaired are ever supposed to play this game. Or even those without some extended low-end going on in their living room, because GODDAMN this game is about the well-crafted sub.)

Here, as everywhere, the soundscape rolls straight through your horrifying understated death. If you were to start tapping your foot to the march of this level, you could keep it up until you finished. Probably beyond, actually. I think Martin mentioned that they wanted music to transition between areas as well. (Of course...)

"But, it takes a few seconds to pull the curtains back up and drop you into that robot conga line for another try. What if I respawn right in the middle of a marching part?"

Glad you asked, simulated reader!

At the point when you die, the game engine's checking the playback head over the march's Music Playlist container to evaluate how much time will have passed by the time the player's back in control after being revived.

// I don't actually code and this is why
if (DontMoveStart < (Music.PlaybackLocation + RespawnTime) < DontMoveEnd)


That looked a lot cooler when it existed only in my brain. Basically though, if the game's going to respawn you during a march, it instead segues into a two bar transition of more Don't Move soundscape that buys you some time to react. Bear in mind again that the game can know these timings intimately, because game logic time has been slaved to music time; no matter how crummy your system, the game and Wwise are in constant deterministic communication. "Hey, garbage collection's done--ready to reload. Where are we?"
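Fleshed out into something runnable, following the prose above (a respawn that would land mid-march buys time with a Don't Move transition), the check might look like this. Section boundaries and respawn time are invented; the real values come from the playlist's annotations:

```python
# Hypothetical timings, in beats: if a respawn would drop the player back in
# mid-march, segue into two extra bars of Don't Move soundscape first.

MARCH_START = 16.0        # beat where the marching section begins
MARCH_END = 32.0          # beat where it ends
RESPAWN_TIME_BEATS = 2.0  # beats that pass while the screen fades back up

def respawn_plan(playback_beat):
    """Decide whether to respawn directly or buy the player time to react."""
    respawn_at = playback_beat + RESPAWN_TIME_BEATS
    if MARCH_START < respawn_at < MARCH_END:
        return "insert_two_bar_dont_move_transition"
    return "respawn_normally"

print(respawn_plan(10.0))  # lands at beat 12, before the march: go ahead
print(respawn_plan(20.0))  # would land at beat 22, mid-march: buy time
```

Because game time is slaved to the music clock, that respawn duration is deterministic in beats; the engine can do this arithmetic and trust it.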

The Shockwave

Here's another cool one.

There's a part of the game in which a distant wave cannon is firing into the foreground, directly towards the screen. Like the way you'd throw guys in Turtles in Time.

As each pulse lands, things splinter and smash, sheet metal flies away, the screen shakes. You've got to run the Boy left to right, up and down, from cover to cover within the merciful intervals in which this weapon reloads.

No one called this thing a "wave cannon," but in starting to write this, it occurred to me that I don't know what sort of blast this thing's actually firing. And that's because there's no visual cue that shows you each of these pulses coming in, only the aftermath their impacts leave on the environment.

Here again, you're forced to listen to survive. All of the logic surrounding whether the Boy lives or dies is slaved to the metronome of the level.

The metronome, in this case, is quite literally the entire envelope of this terrifying weapon sound, whose variations are implemented as a Playlist within the Interactive Music Hierarchy.

Just like with the march, the playlist container of the wave cannon envelope's dotted with annotations that send information out to the engine: trigger blast effects, shake the screen, check to see if the Boy's behind cover or not. The last one's key, because the game is actually checking to see if you're behind cover or not at the peak of the sound but before all the visuals of the sound rippling through the foreground fire off.

I think this delay was baked in to allow a little grace window for the player to scramble, but it means that you could be behind cover at the moment of this check, duck out into the open right afterwards and be found safe, even as you stand amidst all the flying wreckage.
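As a sketch of how those annotations might drive the engine--marker names and offsets here are entirely made up--the pattern is just "fire every callback whose timestamp has passed":

```python
# Hypothetical markers annotated onto the cannon sound's playlist entry:
# (seconds_from_blast_start, action). The cover check lands at the sound's
# peak, before the visual wreckage fires off.
CANNON_MARKERS = [
    (0.0, "trigger_blast_fx"),
    (1.2, "check_player_cover"),  # survival is decided here
    (1.4, "shake_screen"),
    (1.5, "spawn_debris"),
]

def advance(prev_time, now, on_marker):
    """Fire every marker whose timestamp falls in (prev_time, now]."""
    for t, action in CANNON_MARKERS:
        if prev_time < t <= now:
            on_marker(action)
```

Because the cover check fires at 1.2s and the debris at 1.5s, a player behind cover at the check who steps out right afterwards is still "found safe" amid the flying wreckage--exactly the grace-window disconnect described above.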

Disconnect // Net Effect

In a lot of games, that scenario'd seem like a bug. But, I find this moment of potential disconnect between sound and visuals super interesting, because it actually serves to strengthen the connection between sound and survival.

Surviving despite the visuals telling you you shouldn't have lets the player know: It's not the impact that kills you. It's the sound.

The sentry doesn't kill you--the march does.

I haven't played a minute of INSIDE, and I've only seen what Martin's been willing to show, but I would bet you that the game's full of tiny little asynchronicities that cement this relationship over the course of the game. Moments that cut out the middleman and teach the player that sound is the gentle jury and dispassionate executioner.

Sound doesn't point to the reward, the feeling that you're safe--it quite literally means those things.

A positive music transition didn't happen because you passed a checkpoint, it IS the checkpoint.

Maybe? Could be that I'm giving the guy too much credit, but I kinda stand with @mattesque:

Matt Marteinsson reacts to INSIDE.



The Expo Floor

Martin, Jakob and a few curious minds surround the INSIDE demo station at the Audiokinetic booth at GDC 2016.

The shepherds and their flock.

On the last day of GDC, I finally made time for the Expo Floor, and just enough to get to the Audiokinetic booth. It happened to be during one of Martin's demo slots. It was, as the rhapsodic quality of this article has likely made clear, one of the highlights of my conference--not because of what it contained, but because of how it made me feel about myself afterwards.

I don't have a great extended metaphor to wrap around what it's like to check out a new Wwise project from the creator of LIMBO; suffice to say, a cloned copy of LIMBO's Wwise project, dirty production laundry and hacks and all, is a checkbox you can elect to grab when you're installing a new build of Wwise. Like, the way you'd download a Hendrix guitar simulator or how you page through Richard Devine presets in every fucking sound design tool you use, that's where Martin is with Wwise. It takes a certain mind.

Anyways, I expected to see some super next-level shit and come away feeling like I needed to get a thankless job aboard some Alaskan fishing boat, so destroyed was my confidence.

But actually? I kinda understood everything that was going on. I think you will, too.

INSIDE's debug tools in a live-running session at the Audiokinetic booth at GDC 2016.

I don't even see the code.

Even having seen the wiring in the walls, the thing that struck me about INSIDE's implementation wasn't its complexity--it was its elegance.

You know when you build a standalone Wwise demo for a reel, and because you don't have all the live inputs a game would actually have, you have to rig up all sorts of little tricks to simulate the Soundcaster session feeling like the real thing might? (Shameless plug.) Well, the INSIDE Wwise project felt kinda like Martin did that, and then everyone else just made the game work around it.

Or like a track layout after you've gone in and pruned out all those garbage layers that weren't helping anything. Clean, purposeful, only what needed to be there.

Some cool stuff that's going on:

  • Physics Is It: Animations are not tagged. Besides footsteps, anyways. Player limb physics velocities and angles are measured and sent in as RTPCs, which flip switches on constantly-running events that trigger bits of movement coming through when thresholds are crossed. Bet you this matrixes with the material system in place to make sure the right sound plays every time. Several games are doing this and have done it already! Just a reminder that if you can, you should, too. Where is the line between "footsteps" and "foot-based physics system impacts" drawn, anyways?
  • Beat Your Feet: Player breath events are fired according to the player's run speed: the game measures the frequency of the Boy's footfalls and generates a rough BPM out of that, firing breath requests off at the intervals that naturally emerge. This keeps the breaths consistently in sync with the player's movement, as well as lets things settle into the same rhythm we do when we're running and breathing out in the wild.
  • It's the Little Things: Running consistently puts the player into various states of exhaustion, which switch breath audio into more exasperated-sounding sample sets, as well as probably adjust the trigger rate to some degree. There's a cooldown period thereafter as the player recovers. (It wouldn't shock me if they're driving the Boy's exhausted animations from these RTPCs, rather than the other way around. But I didn't ask.)
  • What They're Meant For: Blend Containers everywhere. I'll be honest, I haven't used one of these things for more than "play a bunch of stuff at once" holders for a long time, now, but it doesn't shock me that they're aplenty in place in INSIDE, and used exactly to effect. Take water movement: Martin had the Boy leap onto a rope above a pool of water, shimmying down it until he could swing back and forth across the water's surface. A bunch of light, lapping swishes play as you skim the shallows. Descend further, and those movements blend into a sample set of deeper water movements without skipping a beat.
  • Little Things, Vol. 2: The Boy's wetness is tracked after he leaves the water, contributing to a sweetener layer on footsteps and movements (material-specific, of course) until, after enough movement, that's gone down to zero. I bet you there are even blend containers there for various states of wet-dry balance.
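The "Beat Your Feet" idea above is simple enough to sketch: estimate a BPM from recent footfall timestamps, then schedule breath requests on that grid. This is my own reconstruction of the mechanics, not the game's code:

```python
def footfall_bpm(footfall_times):
    """Rough BPM from the average interval between recent footfalls."""
    if len(footfall_times) < 2:
        return 0.0
    intervals = [b - a for a, b in zip(footfall_times, footfall_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def next_breath_time(last_breath, bpm, beats_per_breath=2.0):
    """Fire the next breath request a fixed number of beats later."""
    return last_breath + beats_per_breath * (60.0 / bpm)
```

The exhaustion states would then slot in on top: switch the breath sample set, and maybe shrink `beats_per_breath` as the Boy tires.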

One More Thing

If someone asks you about the audio of INSIDE and you wanna get 'em excited about it really quick, drop this bomb:

"[Its] sound design's in the Interactive Music Hierarchy, and its music lives in the Actor-Mixer Hierarchy."

Okay, not all of it is. Like basically all the movement sounds I just listed up there are probably not.

But as the natural counterpart to that wave cannon's spot in a Music Playlist, much of the game's ambient music is fired off within the Actor-Mixer Hierarchy. Annotations on the looping playlists of the game's 'music' (read: its sounds) are sent out to the engine and back in as Play Events that trip random containers full of variably trigger-rated pads and bits of texture. INSIDE's music, basically, is set up like you and I would set up unseen ambience emitters. Neat trick!
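If you rebuilt that ambience-emitter-style music setup by hand, it might look something like this--the pad names and trigger-rate numbers are invented for illustration:

```python
import random

# One random container's worth of pads and textures -- names made up.
PADS = ["pad_low_drone", "pad_airy", "pad_metallic", "texture_scrape"]

def on_playlist_annotation(rng):
    """An annotation on the looping 'music' playlist round-trips through
    the engine as a Play Event and trips the random container."""
    return rng.choice(PADS)

def next_trigger_delay(rng, min_s=4.0, max_s=12.0):
    """'Variable trigger rate': wait a random interval before the next pad."""
    return rng.uniform(min_s, max_s)
```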




I had a few questions for Martin that I wasn't able to ask during the talk, because there were lots of others who went before me. So, I had an extra day or two to work them over in my head before I got the chance to pose them. By Friday, I felt pretty confident about what his answers would be, especially so after having toured the Wwise project--which was laid out so sensibly, refined down to such a point that all the cool things it was doing felt pretty self-evident.

Just the same, I had to know a couple of things:

Q: What do you do with five years' worth of time on a game? What takes the longest--the asset creation, establishment of the style, systems and implementation?

A: "Yes, [the implementation]."

This squares with my experience as well. Yes, sound design and palette establishment can take a while, ditto the core engine setup, but I've worked on a couple of Wwise projects, and you don't go straight from concept to getting them looking like that. Everything was so spartan and deliberate; you could tell that this was a project where they got to go back after all the messy experimentation and do exactly what they needed to from the ground up.

It creates the appearance of unbroken genius from start to finish, but the only thing unbroken about it's the intent, really.

We may never have the time to fully refactor all of our experiments into project-wide elegance, but it's a reminder to upkeep as you go. As projects stretch onwards, hacks bite back.

And the big one.

Q: In a game where everything's looking to the audio to make its judgment calls, the sound designer wields a tremendous amount of power: you shorten one of those wave cannon loops or move an annotation, and suddenly, the level becomes impossible to pass.

This top-down causality makes you kind of a Game Designer on INSIDE.

How was that balancing act with the rest of the team? Were there a lot of interdisciplinary power struggles and overstepped boundaries? Or did you all just kind of work it out amicably for the sake of the game?

A: "Yes, [we just worked it out]."

I mean, there you have it.

It strikes me that on a larger team, or in a more egoistic environment, this kind of thing wouldn't fly. People can get precious about their work, and this subversion of the trickle-down flow where audio's usually the last to know and the most affected--well, it'd take a certain somebody to ease that paradigm shift through. Martin strikes me as that kind of guy. If you've listened in to any of this year's Sightglass discussions, the hearsay on how audio managed to work out such a sweet deal on this game came down to a

"If Martin wants it, Martin gets it"

... sort of vibe within the team.

But, I don't think it needs to be that way.

Take this, from this year's talk on the Music and Sound Design of Ori and the Blind Forest--

That's to say, if you've got a team that's willing to have the hard discussions, battle it out, put the game before all else--all of these little struggles eventually become scraps of forgotten noise.

I don't doubt that Playdead works much like this. But who can say!



The Takeaways

I'm still not sure what to say here. There's no one big secret-weapon trick I've taken away from my time with INSIDE. If I had to stretch for some global ones:

  • If you weren't convinced that Proactive Audio was gonna be a thing yet, well. Even Playdead's doing it!
  • Animation tagging sucks; it'd slipped my mind for a while that physics-based movement is a much better way to go


As a personal takeaway, though?

The entire experience of this talk, where Martin laid out the showstopper points of what he's doing--and the Wwise project, where you got to see all the less glamorous finishing touches that nonetheless adorn the game--left me feeling, well, strangely empowered. It's been five years since I last saw Martin speak, and I left that talk with my head in a million pieces. Maybe I've learned a few things since then, because I was able to keep it well together this time around. It's a credit to the way he thinks about sound, and to the accessible degree everything's been refined, that I listened to and looked at all this stuff and just felt like, "I get it."

Made me feel like maybe I could do something down the line that'd make someone else feel the same way.

As a parting shot, then, maybe this is a reminder to acknowledge the way you'll walk away from this article having understood a few things--and to give yourself credit for that.

None of us is Martin, but even Martin's got to brush his teeth in the morning.

We'll all make a thing like this someday if we just stick with it.



Get Yer Stig On

As a coda to all this, here's a solid collection of Stig-related resources for you to geek over.

And finally, a Designing Sound interview with the man himself:

“Limbo” – Exclusive Interview with Martin Stig Andersen


GDC 2016

If you count on this site for nothing else, it should be for starry-eyed reflection on whatever year's Game Developers Conference. So, here we are again.

Most of what can be said about #GameAudioGDC 2016's been said already, and I'll leave the rest of it to voices more capable, or rarely heard. But I was there, again. Through the coffee--the too damned early coffee--through the exhaustion; I showed up as often as I could. Through the talks, hallways, through the barely-controlled screaming that passes for conversational volume at the Death Star, I spoke and listened. I drank. I slept, barely. I went to Denny's.

It's an unceasing read / write cycle on your soft tissue flash memory, and you end the week completely full up on ideas you fear you're about to lose.

As a completely selfish mnemonic act, here's what I thought and felt coming out of this year's GDC.

Observations on Year Five

GDC 2016 morning roundtable at Sightglass, Saturday edition.

The least packed of days.

*Six, technically, but five since I really started this sound thing.

Skip the talks. Skip the pass, even. My first piece of advice to game audio hopefuls and veterans alike has ever been "follow @lostlab," and this year, he did an interesting thing: he didn't attend a single talk. I mean, besides the ones he paneled on. No, this year, he went Full Sightglass, passing on most of the conference proper for meetings, time on the Expo floor and the morning conversations around the coffee table. Which, incidentally, was packed as fuck. Someday soon, we're gonna need a bigger boat, but that upstairs ain't taking on water just yet.

It's a strategy. The deeper you get into game audio, the more you've seen and done, the lesser the odds of That One Talk (or Any Talk) that's gonna crack your brain open. No one I'd talked to knew what they were planning on attending before the conference which, I think, speaks to the way your priorities shift as time goes on. The greatest experiences of GDC happen outside Moscone, and the really good ones that don't? Well, they tend to wind their way back to Sightglass each morning. Smart play, Damian.


If you're new to the show, or to the field, I can still totally recommend the Audio Pass. But the next time you're back, or the time after, you'll find it makes sense to shift tracks:

  • If you can speak (and you can!); if you've got a company that'll buy you a pass (you probably don't), you can go All Access. Extra-disciplinary talks are the best, 'cause you have to fight to make them relevant, extrapolate some meaning out of them that you can bring back to audio. Every year, all of my best ideas are borne out of that mental reframing, where I sit in the middle of a talk on AI or Art or Monetization and ask, "Why the heck am I here? What can I learn from this?" Try it. You'll be surprised!
  • If neither of the above apply, or you're burnt? Just go Expo Pass. Or nothing. Take the Damian approach, and reap 99% of the good stuff you'd get out of attending.


We Are Legion

GDC2016's Monday night VGM mixer at Terroir, San Francisco.

The VGM Mixer at Terroir. So this x3 was roughly Brewcade.

As @mattesque put it on this week's Bleeps and Bloops, "at some point game audio got to the point where people felt like they could win without other people losing."

My Gosh, but there are a lot of game audio professionals out there. Day Zero's Designing Sound Brewcade meetup had something like 260 RSVPs. 260!

The above quote is lifted from last year's recap, and it still rings. Because while we've got a fuckton of work to do, I think the game audio community's the best it's ever been. We've got the Slack, we've got Twitter, we've got a million and one write-ups exactly like mine but also completely different and they're all running in this glorious space where people are finding work and staying hopeful.

That's how I feel, anyways. I can't pretend to know how goddamn hard it is to get this career going from behind the ball of gender discrimination, racism, politics, terrorism and ignorance. But in the admittedly limited landscape I can survey from my point of privilege, things feel pretty welcoming. I'll do whatever I can to push that feeling out there and pull more of us together going forwards, and I know lots of others who will, too.

On to the non-touchy feely details.


There's very little you could read about VR here that everyone hasn't said already. It's shit hot right now.

I couldn't find an X-Files Movie poster with this tagline on it, but trust me, this was super clever when I thought of it on the bus.

I do.

I'm no expert on VR, but I played one at GDC. Like many of us! But this is a safe place, and we can all be honest about it.

There's a raw, hopeful enthusiasm coursing through game development right now as we strip mine all the cruft away from that Golden Buddha that is VR As It Should Be. Who knows how many fumbly, derivative minigames and point-to-teleporters it'll take until we're there, but I think the experience we're chasing is clear: it's more of what you feel when you first tour the demos. The way your heart catches the first time you're on the edge of a building, staring off, right before you notice the lack of wind. Or face-to-face with a gentle giant, whose majesty strikes your lizard brain like a tuning fork. You--me, and the majority who've tried it, I hope--have undone those velcro straps and thought, "I wanna go back in."

VR game development seems nothing if not self-aware. We know these early products suck, or at least, that they will suck inevitably compared to where things'll be down the line. We've seen this before: films that wanted to be books, games that wanted to be films, the Internet that wanted to be TV. Given time, these media found their way (or have started to) and flowered into the full bloom of their craft. We've done this enough that we know where we stand.

I feel the marketing's more or less lining up the same way. (RemindMe! Christmas 2016) For all the hype around VR, there's a reassuring sense of restraint. The hype around it doesn't feel like that "4K, 3D, buy-now-you-need-it" kind of hype, but more of a, "it's there, and if you're the type of person that likes buying New Things and Taking Risks, here it is" thing. It feels to me like no one's in a race to burn this thing out via overexposure--that if it and the tidal wave of investment behind it can hold out just long enough, we'll get to that killer application in time for things to latch.

So--what'll that killer app be like? And what does it mean for audio?

My favorite universal takeaway from the handful of VR talks I went to is that we don't know.

Three panels on Monday all kinda concluded with a final slide that said, "That was sixty minutes of research and best practices--and here are half a dozen things you should try that would prove them all wrong."

And frankly, that's fucking thrilling. Because really, once we've got our wishlist of realtime geometry-based fourth-order Ambisonics reflections and occlusion and dead-on XYZ positioning and cheap A-format microphones and all of that decoding running on tiny Android devices that everyone's got Cardboards for, well. What next? We've modeled a very realistic way to localize sound. Film wasn't a solved problem with the invention of 5.1. It's the stuff we'll do next that's exciting; it's the stories we'll tell that count.

One nice thing about VR is that it makes the case for telling those stories through sound a much easier sell. 'Presence' doesn't happen without immersive audio. That's not a bullet point on my agency's pitch sheet, that's something that even the sound-blind layperson's picking up on as soon as they strap in. That inescapable link between quality audio and believing you're there (kind of the entire selling point of VR) should make fighting the cause for quality game audio a whole lot easier.

My VR experience at GDC has, if nothing else, got me flipping lots of perspectives and asking lots of questions about how sound should work, put a fire in me to try some new shit out just because it seems cool. I needed that. For however many Vives end up next to Rock Band guitars in a couple of years, those lessons are mine, and I'm glad to have them.


  • The tech's getting really cool, and we're on the eve of consumer availability, but I wouldn't worry about jumping in right now. It's still a ways before this stuff takes off en masse. Potentially a long ways. If you're smart, communicative, can make good sounds and respect the art of telling a story, you'll be able to roll over and figure out how to bolt on an Ambisonics decoder some years down the line with little workflow interruption.
  • But if you do jump in now? Try everything.

Proactive Audio

Not the first, and not the last.

More like this pls

That label we were looking for for a while, for games that weren't rhythm games, but felt beat-driven, connected deeply to music: we has it now.

It isn't a new concept, but it's one that feels like it's about to go wide. Eric Robinson's a sound designer, audio programmer and developer of Koreographer, a bolt-on suite of Unity Audio functionality that makes it easy for devs to get their games taking cues from the sound. He gave a couple of presentations on the case for audio-driven gameplay, standalone and at this year's Audio Boot Camp, which were full of naturally extensible examples like snapping footstep rhythms to the beat.

The demo occasionally went a little far afield--tree branches that pulse and swell to the rhythm are kind of a tough sell unless you're working on some Aldous Huxley tribute--but was super compelling for something so simple. And that's great, because it's the kind of thing you can show around. Eric's doing a lot of good work to push this case forward for the rest of us, and I'm stoked to have a bit of a movement going.
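The footstep example boils down to beat quantization--snapping an event's timestamp onto the musical grid. A minimal version of the idea (my own sketch, not Koreographer's actual API):

```python
def quantize_to_beat(event_time, bpm, subdivision=1):
    """Snap a timestamp to the nearest beat (or fraction of a beat)."""
    grid = 60.0 / bpm / subdivision
    return round(event_time / grid) * grid
```

At 120 BPM, a footstep request at 1.1 seconds snaps to the beat at 1.0; pass a subdivision of 2 and it snaps to the nearest eighth note instead.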


There was another talk. Martin Stig Andersen and the team at Playdead are making a thing after LIMBO, and are finally beginning to pull the curtains back on what it is.

I'll save that for another article, but suffice to say that the stuff Martin showed hit like a warhead, and it's squarely within this wave of games whose logic is slaved to audio for the sake of storytelling and impact. I can't wait for it to land. For all of these to, really, because every great new experience that's got audio at the foreground makes it easier to make the case for audio support the next time around.


  • Koreographer's a set of tools to lash your Unity Audio-driven game to the masthead of sounds and music. Structures for lining things up, stretching a rhythm through as many corners of the game as you can dream up. If you've thought this is a thing you wanted to do, you have some tools for it, now, and you also have a name for what those tools do.
  • If you're on Wwise, heads up for a set of similarly useful music and audio callbacks coming sometime in 2016.
  • At a wider dev culture level--start thinking about how you can have this conversation with the rest of your team. What if you made it easy, even natural, for everyone to do what they were doing while acknowledging the underlying meter of the game? A plug-in for Unity that subdivides the animation timeline into beats instead of frames. What kinds of systems might you get rolling subtly on rhythm to glue it all together? Coming off GDC, people are usually bristling with inspiration and ready to dream. Tap into that and start the dialogue up.
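That beats-instead-of-frames timeline is really just a unit conversion. A hypothetical helper pair, to make the arithmetic concrete:

```python
def frames_per_beat(fps, bpm):
    """How many animation frames fit in one musical beat."""
    return fps * 60.0 / bpm

def frame_to_beat(frame, fps, bpm):
    """Relabel a frame index as a (possibly fractional) beat index."""
    return frame / frames_per_beat(fps, bpm)
```

At 30fps and 120 BPM, a beat is 15 frames long, so frame 45 is beat 3--the kind of relabeling an animator-facing plug-in could do without anyone changing how they work.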


The Feels

I thought I was done with this stuff but had to make a detour back.

Each year, there's always one or two GDC talks that get to that universal, subsurface human stuff that goes too often unaddressed in the daily churn towards making games. The #1ReasonToBe talks are perennial winners, here. There was Manveer's 2014 talk on the under-addressed stereotyping our game narratives support; there's Brenda Romero's meditative "Jiro Dreams of Game Design".

"Everyone In This Room Is a Fraud" was that this year for me. And maybe it's just because I'm uniquely in this place in life when self-worth battles and therapy and weird empty thirty-something-What Now?-ness have become constant players on life's stage, but I really needed to hear all of these incredible people speak openly about the things they did. Life is hard; creativity is hard. When you peg so much of how you feel about yourself to your craft, a couple of bad days in the DAW can leave you feeling like a total zero. When you're really good at locking into that self-hatred spiral, bad days can turn into bad weeks. To know that even those you most admire deal with this on the daily--and to really feel it in this roundtable format where you just wanted to jump in and start talking like you were out to dinner with friends, these stories were so familiar--I'm so glad I got to experience that.

I hope they un-Vault this thing soon so we can all group hug about it.

This talk flipped a switch that sent me gushing downriver all over the Twitters, so I decided to Storify that:


As you'll see there, and will have seen above, part of me still feels like I don't even get to feel these things, because I have been so, so lucky in life. (Holy shit. Am I imposter syndromeing right now, or can you tell this is genuine? Words suck.)

So with all that necessary privilege-disclaiming out of the way, I want to say that this year, it feels like there's finally a movement afoot--or at least that I've caught many more open discussions on inclusion, diversification and the need for more voices in game audio than in years past. It seems better. I hope it's getting better. Shit like this still happens, but at the least, I wanna put a stake down and say that this year's been a personal turning point in how I see my role in all of this.

I feel like I don't think about the issue of gender and racial diversity in game audio often enough because from where I'm sitting, there's never felt like a reason to. That sounds horrible. And it is! But, everyone I've worked with has been great, and what they look like or where they're from hasn't factored into that at all. I work with people; I'm a person. This fear of wading into the minefield that is even addressing this stuff is, I think, a lot of what keeps otherwise nice and even-keeled folks who have had nothing but great experiences with audio teams of Human Beings of all sorts from saying anything at all.

But inaction is in and of itself a harmful act. Playing the middle, being a nice guy, staying inoffensive and quiet--all of that doesn't help, and what doesn't help, hurts.

This stuff came up pre-GDC when a bunch of us were out at coffee a few weeks back, and I explained this feeling to my boss, who happens to be a woman in game audio: how tough it is to find a place to help, or talk about gender representation in game audio, because you don't really know how to say anything without pissing someone off--even when you know in your soul you want to do well. That I wanted to advance the conversation, but was looking for some sort of acknowledgement that I wasn't a bad guy.

Well, I didn't get it. Instead, she simply put it back to me: that paralyzing mental juggling I do when I want to say anything about women in game audio? Imagine going through that with nearly everything you want to say, about anything, for every day of your career. And that's how the other side has it.

I was quiet for a long while after that.

So here's to hoping that all of us, whether vocal aggressors, hatemongers, clueless dolts or well-meaning passive observers in the shitty way things have come to be... here's hoping that we can start acknowledging the ways in which we've fucked up and keep the conversation moving forwards however we can. To see opportunities where we don't need to be heard--where we shouldn't be heard--and to Step Back, Shut Up and Listen.

What's gonna be great is when it doesn't need to be a movement at all, anymore, and simply is. Until then, though, here's your impersonal Internet-delivered reminder to know when to surrender your spot to those who need it most, and do the right thing for all of our industry.

Man, I have a lot of reading to do.


  • Nope. Read the whole thing.


Thanks to everyone who made this year's GDC the best yet. I'll see you all at the Carousel before too long.



I've been to GDC a couple of times in the past, always returning to reality overflowing with ideas, batteries recharged.

But this year's was something special.

Some lucky confluence of my own experiences, where my head's at, the people in attendance, the conference itself--I don't know--came together to give me a GDC I'll never forget. That sounds like something out of a John Hughes film, but I'm serious.

Funny thing is that this year, it wasn't even the talks. I attended half a dozen or so over the week in between my volunteer responsibilities, and they were great. I left them all with lots of takeaways for my current gig at PopCap, some of which I'm putting into practice already. But my God, the conversations. Sunup to sundown, with restless hostel sleep that kept my brain from processing it all before it was time to do it again.

Each day began with a familiar, early-rising ritual: the 7AM walk through the urine-soaked, scaffold-strewn jungle that is SoMa to Sightglass and a morning roundtable with the Game Audio Podcast. (Side note: how the hell is that poverty line so distinctly drawn and right below a park, two shopping centers and the Yerba Buena Center for the Arts?) This initiative was fully on the rails last year, packed each morning and full of amazing discussion. This year's, though. I mean, they may as well start selling Sightglass Track passes to the conference for all the folks that were showing up daily. I want to say we were running around 40-50 bleary-eyed sound designers, programmers and audio-curious every single morning. And yet, the whole thing--this interacting with the broader game audio community--still feels intimate, like it's really ours. The companion carousel lunch hour's the same way: a free-flowing group of game audio all-stars swapping cards and stories every day. No "X years AAA development" nor shipped titles necessary.

I don't think I'm the only one who felt this way. A scrape of last week's activity on the #GameAudioGDC hashtag reveals a lot of sentimentality and gratitude, held up as maybe the most important takeaway from this year's conference.

So I'll get to digging into some of my talks, thoughts and learnings in a later post, but wanted to start the snowball rolling with a final round of Internet thanks towards the entire game audio community for being so fucking rad. Not four years ago, this site was a desperate chronicling of me and my student exercises. I wanted very badly to belong to the conversation but was sharply aware of how little I knew, how I hadn't "broken in" yet. I wore this attitude on my sleeve a lot of the time, and it's made my small steps into this career really taxing at times. Turns out I needn't have felt that way. We're all in it--from the hobbyist, the still-enrolled and the just-getting-started to the AAA sound designer and audio director. There's room around the Sightglass table for all sorts. We're fortunate to have that.

As @mattesque put it on this week's Bleeps and Bloops, "at some point game audio got to the point where people felt like they could win without other people losing."

Cheers to that.



Hello! It's been a minute. Lots to catch up on--it's probably best to just jump into present day and go from there.

Another Game Developers Conference has come and gone, and I wanted to make sense of the whole experience and commit it to print before the day-to-day sinks back in. Let's take it point for point.

The People

If I've said it once...

The best thing about the game industry is the people within it. This is my second year as a semi-credentialed, guess-I-belong-here attendee of GDC, going by that AAA name on my conference pass--but the people of game audio have been welcoming for as long as I've had intent to join them. They're humble, kind and--thanks to the tireless #GameAudioGDC banner-flying of @lostlab--extremely visible at the conference itself.

Something I saw this year was a lot of folks going Expo Pass only, saving some scratch and eschewing the body of the conference for the networking fringe: hallway meetups and late-night business idea shares over overpriced drinks. When you've got a group as organized as game audio, it works. Each morning's Game Audio Podcast meetup at Sightglass was an informal chance to mull over the day's talks and go all wide-eyed about the future alongside all manner of rookies and vets. It's so fucking cool that the group's that close-knit, and I really need to thank Damian and Anton for setting that stuff up every morning.

My heart goes out to all the underrepresented disciplines who don't have that same social leadership, as hanging with these guys is always the best part of the conference.

The Talks

Of course, there was a lot to watch and hear that you could only get to with a badge. Everyone I spoke with agrees that GDC2014's talks were a notch up: ferociously technical and full of stuff you wanted to run back and put into practice. I've outlined two specific favorites below.

Two of the most talked-about presentations on the Audio Track were delivered one after another on Wednesday morning--and both by audio programmers. Tools, systems and implementation hooks are sexy, and a development team whose culture supports these things is one of the surest components of a great-sounding game.

Aural Immersion: Audio Technology in The Last of Us

Jonathan Lanier's an audio programmer at Naughty Dog (do they have more than one? The luxury!) who spoke on the systems that went into the incredible sound of The Last of Us. That one was my game of the year--in an age when I'm spoiled for choice and spend far too much time considering, but not actively engaging with, my Steam catalog, TLoU had me running home from work to fire up the console and running my mouth around the coffee machine every morning with stories of the last night's play. Lanier outlined the Creative and Audio Directors' early pre-production talks, which set audio up for development support and eventual success, before digging into the technical ins and outs.

The audio team was able to ground their audio in the gritty realism of the world by hitching a ride on Naughty Dog's tried and tested raycast engine. This let them throw lines and cones around every crumbling environment, bringing back useful information that let them filter, verb out and otherwise treat their sound. In a game where you spend so much time crouching and listening, the sum of all these subtle treatments made for some incredibly tense pre-combat situations: planning my approach as unseen Clickers shambled and squealed somewhere off in the dark, or straining just a little bit to hear Ellie and realizing I'd jogged too far ahead.
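To make that raycast idea concrete, here's a minimal sketch of how hit counts from a line-of-sight test might drive filtering and reverb sends. The 2D intersection test, cutoff curve and wet/dry mapping are entirely my own invention for illustration, not Naughty Dog's actual system.

```python
# Hedged sketch: raycast-driven occlusion. Geometry test and the
# cutoff/wet mappings below are toy values, not TLoU's real numbers.

def _side(a, b, c):
    """Signed area: which side of segment a-b the point c falls on."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses(p1, p2, q1, q2):
    """True if segment p1-p2 strictly intersects segment q1-q2 (2D)."""
    return (_side(q1, q2, p1) * _side(q1, q2, p2) < 0 and
            _side(p1, p2, q1) * _side(p1, p2, q2) < 0)

def occlusion(listener, source, walls):
    """Map the number of walls blocking the listener->source ray
    to a low-pass cutoff and a reverb send level."""
    hits = sum(crosses(listener, source, a, b) for a, b in walls)
    cutoff_hz = 20000.0 / (1 + 2 * hits)   # each wall darkens the sound
    wet = min(1.0, 0.2 + 0.3 * hits)       # and pushes it into the verb
    return cutoff_hz, wet
```

With one wall between you and a Clicker, `occlusion((0, 0), (10, 0), [((5, -1), (5, 1))])` yields a darker, wetter treatment than the unobstructed case.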

What's important is that the team never became slaves to their own systems, putting the technique above the telling. They tried out HDR--the silver-bullet audio solution of 2013--and found it didn't fit the type of perspective they were trying to put you in. So they rolled their own dynamic mixing solution. They liked the way enemy chatter faded out over distance, but that same falloff curve meant some key dialogue with Ellie could go unintelligible. So they sent enemy and friendly NPC dialogue through separately adjustable wet/dry treatments and reverb buses.

TLoU's audio tech is impressive, but nothing any AAA studio couldn't have dreamed up themselves. It's the fact that they got so much of it into the game--and had a studio that believed in audio; that gave them the resources to do all of that--that turned it into the greatest sounding game of the year.

The only shitty thing about this talk is that it was double-scheduled alongside A Context-Aware Character Dialog System, so you had to pick one or the other--not both. One to watch on the Vault later on.

The Sound of Grand Theft Auto V

This was the Audio Track talk that blindsided everyone this year: Alastair MacGregor is an audio programmer at Rockstar who brought with him an overview of what it took to accomplish the sound of Grand Theft Auto V. I feel like Rockstar doesn't often go public about their methods and techniques--as Anton said on the podcast, Alastair's name on the program felt like "someone from Rockstar being let outdoors"--but I don't think anyone expected them to reveal what they ended up showing.

GTAV features over 90 hours of recorded dialogue, heaps of licensed music and sound design in what is almost certainly the audio budget record-breaker of last generation. All of this was powered by Rockstar's internal audio toolset, RAGE. It's maintained and developed by a team of audio programmers and sound designers staffed there independent of any specific game project, i.e. a dedicated team. They've been iterating and improving upon RAGE since around the time of Grand Theft Auto IV, making RAGE--now versioned 3.0--at least five years in the making.

RAGE is insanely comprehensive in what it facilitates; it reads like a game audio Christmas list fulfilled. Thankfully, volunteers and event management were on hand to scrape flying chunks of blown mind off the walls as Alastair touched upon feature after feature. Here are a few highlights; you'll want to try to catch the talk or someone else's summary for more, because there was more.

GTAV didn't even ship on PS4, ergo: there is and will be more.

How RAGE Wins Everything

Synchronicity 3.0
When the team started running up against the wall of lining up microfragments of weapon audio and trigger timings, the RAGE team responded. The engine allows for sub-frame (i.e. more often than once per 1/30th of a second--more frequently than most of the game is ever making a call), synchronous, sample-accurate triggering of multiple assets in different formats. Designers could stack one gun layer in uncompressed PCM and another wrapped in XMA--which would need a little decoding--and the engine accounts for this, keeping everything locked together. Did I mention that GTAV was so filled to capacity that the architects had to load audio into the PS3's video RAM to hit their goals? They did, and RAGE buffers for the transfer time out of video memory and still keeps things locked.
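The scheduling trick can be sketched in a few lines: back each layer's start time up by its own decode/transfer latency so every layer lands on the exact same sample. The latency figures and layer names below are invented, not RAGE's.

```python
# Hedged sketch of latency-compensated, sample-accurate triggering.
# Each layer's playback is kicked off early by however long its codec
# decode (and, say, a DMA out of video RAM) takes.

SAMPLE_RATE = 48000

def schedule(trigger_sample, layers):
    """layers: (name, latency_samples) pairs. Returns the sample at
    which each layer must start so all become audible together."""
    return [(name, trigger_sample - latency) for name, latency in layers]

# A gunshot at t = 1.0 s, mixing raw PCM with an XMA layer that needs
# 512 samples of decode lead time (a made-up figure):
starts = schedule(int(1.0 * SAMPLE_RATE),
                  [("mechanics_pcm", 0), ("body_xma", 512)])
```

The XMA layer is issued 512 samples early, so both hit the speakers on the same sample.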

Better Engines, Cheaper
GTAV's cars sound much better than its predecessor's. (I don't know this for sure. Haven't played GTAV yet! But I'm taking Alastair's word for it.) Beyond simple loops, each instance of a car in GTAV is kitted out with not one but two granular synthesizers--one for processing engine sounds, another for exhaust--that split source recordings into tiny, reassemblable grains at runtime, stretching the audio further and reducing memory usage. Naturally, RAGE features a nice graphical interface for the audio designers to tune these synths in, offering fine control over, e.g., which sections of a specific sample to granulate and how to blend between these areas to create convincing idle transitions (which, as steady, non-pitching sounds, are typically poor candidates for granulation). They're even able to specify what percentage of grains to use from each section to get really gritty about memory usage: get the sound believable, then start paring the complexity back and ride that fine line. Thoughtful options like this mean these synthesizers run with brutal efficiency, so even the CPU load of two instances per car--and the game features a lot of cars--makes for an effective tradeoff vs. loading fatter loops into memory. GTAV's programmers are seventh-dan masters of the Cell processor architecture.
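The core of that technique--drawing small windowed grains from a designer-chosen slice of a recording and overlap-adding them at runtime--can be sketched like this. Section boundaries, grain sizes and the windowing choice are my own toy values, not RAGE's.

```python
import math, random

# Hedged sketch of granular reassembly for an engine sound: stitch
# Hann-windowed grains from a slice of the source instead of streaming
# a long loop. All parameters are illustrative.

def granulate(source, section, grain_len=256, hop=128, out_len=2048, seed=0):
    """Overlap-add grains drawn at random from source[section[0]:section[1]]."""
    rng = random.Random(seed)
    lo, hi = section
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
           for i in range(grain_len)]          # Hann window per grain
    out = [0.0] * out_len
    pos = 0
    while pos + grain_len <= out_len:
        start = rng.randrange(lo, hi - grain_len)   # random grain start
        for i in range(grain_len):
            out[pos + i] += source[start + i] * win[i]
        pos += hop                                  # 50% overlap
    return out

# Stretch a 0.1 s slice of an "idle recording" into ~2048 output samples:
idle = [math.sin(2 * math.pi * 110 * n / 48000) for n in range(4800)]
engine = granulate(idle, section=(0, 4800))
```

The memory win is that only the source slice lives in RAM; output of any length is assembled on the fly.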

Like Promethean Fire
There's lots of talk about procedural audio these days: sounds spun up entirely out of oscillators and code, costing very little memory at the expense of some CPU usage. The idea is that at its best, procedural sound can free up valuable memory for larger, necessarily manmade assets like voiceover and orchestral music by covering all the little bits that don't need to sound 100% realistic. Footsteps, physics sounds, etc. At least, that's where most of us have been setting the near-term bar, because even making those sorts of sounds out of thin air is really freaking hard to do. The general consensus has been that procedural audio is coming, but isn't quite ready just yet.

Except that fully 30% of the sound effects in GTAV were created using RAGE's procedural audio editor.

Fucking 30%. Of a game that large. That shipped on the last generation.

Alastair spent some time demonstrating RAGE's modular synth-like interface that helped make this possible. It allows their audio designers to craft and tinker towards a procedural sound asset before exporting that synthesizer configuration as an asset that can run in-game. He auditioned a few that might as well have come from a microphone; apparently, Rockstar's sound designers are pretty much all Neo. This part of the talk thrust me through the full ten stages of denial and I eventually came around to stunned bewilderment.
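The "export the patch, not the wave file" idea can be sketched in miniature: ship a tiny parameter blob and render it at runtime. The patch format and this wind-ish noise recipe are mine, not RAGE's.

```python
import math, random

# Hedged sketch of a "procedural asset": a few dozen bytes of patch
# description rendered to audio at runtime, instead of a stored wave.

PATCH = {"cutoff_hz": 400.0, "attack_s": 0.2, "release_s": 0.5}

def render(patch, dur_s=1.0, sr=8000, seed=0):
    """Filtered white noise under a linear attack/release envelope."""
    rng = random.Random(seed)
    n = int(dur_s * sr)
    a = math.exp(-2 * math.pi * patch["cutoff_hz"] / sr)  # one-pole LP coeff
    atk = int(patch["attack_s"] * sr)
    rel = int(patch["release_s"] * sr)
    y, out = 0.0, []
    for i in range(n):
        y = a * y + (1 - a) * rng.uniform(-1.0, 1.0)      # low-passed noise
        env = max(0.0, min(1.0, i / atk, (n - i) / rel))  # fade in/out
        out.append(y * env)
    return out

gust = render(PATCH)   # ~1 s of wind-ish noise from a tiny "asset"
```

A real modular-synth editor is a graph of many such nodes, but the economics are the same: the asset is the configuration, not the audio.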

tl;dr Rockstar's audio tech is years ahead of everyone and we all had no idea.

Everything Else

Gosh, there's still so much to go over. FMOD, Wwise and Fabric battling down to become the de facto indie audio solution of the future, just as Unity spools up its own tech. Unreal Engine coming down from its status as a AAA platform to meet the little guys with a cheapish subscription model, and throwing back the curtain on Blueprint, its new visual scripting tool for quickly creating awesome looking stuff.

It was a week of ingestion whose digestion continues. I'll likely have more to say once the whole of the conference hits the online Vault. The plan is to kick back and nerd it up with some coworkers, catch all the stuff we missed from the Audio Track and beyond. I'm sure there's lots in there that'd equally inspire.

For now, it's time to cool my spending, crack into a side project or two and thank everyone who made last week so amazing.

#GameAudioGDC is a truly happy place.


Game Audio – Movement Soundscape

The third term of my time at VFS has finished, which means we're 6 months from reality. It's incredible to think about how much I've learned since starting here last October; more so to try to imagine all the things I've still got to learn. There's enough for several lifetimes, and seeing the work my classmates, our teachers and the wide world of sound on the Internet put out every day is super inspiring and challenging. It's time to start looking back through my most recent batch of work...

Here's an end-of-term project for our Game Audio II course:

Game Audio II

Reconstructing Movement in Game Audio:

Environmental Sound

The aim of this project is to record source material, and then to re-construct a simulated environment for the user/listener to establish more than one ambient game audio context. The project should take the user on an auditory journey transitioning the user through more than one ambient ‘zone’, as if the character is moving through the game world on a “spline”.

After hearing all the examples of past students' work we were shown in class, I was pretty impressed by some of the ambient shifts these former all-stars had come up with. Some really cool stuff, like underwater to above-water, nice examples of occlusion through shutting a car door in the face of a horde of zombies, etc.

But while they all had lots of imagination and detail, none of them really felt like they were moving to me so much as the environment was changing over time.

I wanted to do better here: to get movement across at all costs, even if it meant keeping my concept a little more grounded in reality. From jump, I thought this would be a great chance to learn a bit about binaural recording (the project was going to be played back and evaluated on headphones) and to try some in-the-field source recording techniques--to stay away from the very close-mic'd, dry recordings we'd normally do in our Foley rooms.

The goal was to keep the environments simple and just let the movement speak for itself. And in that respect, I think I did pretty well.

What is Binaural Recording?

Brief aside on what all this "binaural" stuff means:

Binaural recording is a method of recording sound that uses a special microphone arrangement and is intended for replay using headphones.
-- Wikipedia, "Binaural recording"

Sound waves don't just dive straight into our brains via some kind of "line in" jack: they have to move through air to get there. And as they do that, these waves bounce and reflect off your shoulders and chest, wrap around your skull, move through your hair, filter through your clothing, etc., all the while undergoing subtle changes in frequency, volume, and arrival time (e.g. hit your left ear just before your right ear), to name a few.

Obviously, our hearing receptors don't sit in an open, microphone-shaped capsule in front of us, they're buried deep in our ear canals. So by recording via two tiny, omnidirectional microphones positioned roughly where your ears are, you are going to record the sound not "as it sounds" in some pure, idealized sense, but as it sounds to you.

When played back, all of those subtle frequency/delay/arrival time changes you captured by recording this are interpreted by your brain in a way that makes you instinctively feel as though those objects are coming from where they were when you originally heard them. We don't hear "slight loss of frequencies around the 8,000Hz range because those tiny waveforms were absorbed by my haircut," we hear "behind." As an incorrect but illustrative example.

By recording this way we can capture much more detail than just "left" or "right" - we can get above or below, in front or behind, etc. And you can turn two channels of simple stereo into sounding like a whole 360-degree sphere of surround.
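To put a rough number on the "arrives at your left ear just before your right" cue: Woodworth's spherical-head approximation estimates the interaural time difference from the source's azimuth. The head radius below is a typical textbook value, not a measurement.

```python
import math

# Back-of-envelope interaural time difference (ITD) using Woodworth's
# spherical-head model. Values are illustrative, not measured.

HEAD_RADIUS_M = 0.0875     # typical adult head radius
SPEED_OF_SOUND_MS = 343.0  # m/s in air at ~20 C

def itd_seconds(azimuth_deg):
    """ITD for a source at azimuth_deg (0 = straight ahead,
    90 = hard to one side)."""
    th = math.radians(azimuth_deg)
    return HEAD_RADIUS_M * (th + math.sin(th)) / SPEED_OF_SOUND_MS

lag = itd_seconds(90)   # hard to one side: the far ear lags by well under a millisecond
```

That fraction-of-a-millisecond lag, plus the frequency shading from your head and shoulders, is most of what your brain decodes as "left", "right" or "behind".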

And yes, people build lifelike human mannequin heads, complete with density-matched materials to simulate the hard reflections of our skull vs. the soft, absorptive tissue of our outer ears, embed microphones in them and use these devices to do these sorts of recordings. At the fanatical level.


This is a popular binaural recording example that might blow your mind - wear headphones.

There's a lot more to say on this than I can teach, so if you're interested, look around. Our ears are amazing.

Deciding on a Scenario

First, I brainstormed a few scenarios - I wanted something simple and real, with the chance to show off a lot of movement and cool perspective changes. The most interesting to me was that concept of a Metal Gear Solid-style infiltration, with some close spaces, hearing people filtered through a vent shaft, and eventual running/scuffling.

I "prototyped" some of my recording techniques by squirming around in a broken vent we have in our props room and adding a little bit of reverb to the signal, and got some really nice, claustrophobic results. I also affixed two omnidirectional clip mics to a pair of classes, hooked myself into a portable recorder and walked around our campus a bit to see if it'd work as a poor man's binaural setup. The results weren't 100% interesting all the time, but a couple of cool moments convinced me that this was an avenue to chase.

Planning and Recording

Vent decided on, I sketched out the rest of the soundscape and the environments it'd take place in--again, sticking to stuff that was close to campus so that I could use a lot of my own recordings. I had to figure out roughly how much time I wanted the character to spend in each environment, while making sure I was hitting the assignment requirements in terms of "signals" (in-your-face, attention-calling sounds like guard footsteps and key jingles), "keynotes" (middle-ground emitting sounds like vending machines, vents, a radio) and "ambiences" (persistent background sounds that give some subtle information about where you are).

With only 90 seconds to work with, I wanted to make sure you knew where you were as fast as possible whenever the environment changes. If you were in a vent, you needed to know within a couple of seconds. So I put a lot of emphasis on making those binaural recordings the "bed" of the piece, cranking them in the final mix. Normally, your ears tune out "room tone" as you hang around an environment, but I thought that if I were too subtle with these recordings, the cool perspectivey effects wouldn't come through.

Some things I recorded this way were all of the different room tones I wanted, the character's footsteps, and the stairwell door opens and closes - this let me put all of that easily within "his" perspective without having to do any extra mixing. I also spent a bunch of time tilting myself around like an idiot outside in our school's courtyard to get some really dizzying panning on the nearby traffic. That, with some of the low- and high-end rolled off to give a sense of distance, became the ambience for the final fight on the rooftop.

For the main character's breaths, I used a single lav mic and just panned it directly in the center (when little else was) to give it a bunch of presence. Simple but effective!

All other effects were recorded in mono with the phenomenal Sennheiser 416. I love love love this microphone for what it's good for, and it feels great to finally be discovering favorites. It's extremely directional, which means I was able to walk around campus and point it at the tiniest little emitting spots on machines, coolers and various electronics and pick up their distinctive buzz without capturing too much of the rest of the environmental noise. That plus our Foley rooms' new preamps allowed me to grab some extremely clean recordings of all the other stuff I needed: guard boots, cloth punches, key jingles, mug and paper tosses, etc.

These recordings were also almost totally free of our floor's soul-crushing, inescapable 120Hz hum. As a result, I barely needed to treat these recordings with EQ etc. before I started cutting the final piece. Saved me a ton of time.

Finally, the brilliant guard dialogue was written by yours truly in a lame attempt to explain why these two guards just wouldn't shoot the guy on sight, plus give a little extra depth and a sense of threat to the whole piece.

I wanted to experiment a bit more with having my classmates (the two guards) read their lines a bit farther back from the mic and even "off-axis" (not facing the direction that the mic picks up sound in) to try to have that sense of people moving around and not talking directly towards the main character's ear, but stuck to just recording it normally and positioning it in the mix later on. The one "hey!" in the stairwell was recorded a few floors up and in the stairwell, though.

Editing and Mixing

My major discovery for this part of the project was Wave Arts' Panorama plug-in, which I managed to convince our IT department to install a trial of on my machine. It's a 3D panner that uses some of those same binaural recording principles to subtly adjust certain frequencies, volumes and delay times in a recording, letting you position objects wherever you want within a stereo sound field. And as an RTAS plug-in, everything in it was totally automatable... which basically let me "draw" the path of all the objects in my soundscape according to where I thought my main user should be. Tooooooo freaking cool.

The Panorama interface.

The only shortcoming I found with it was that it doesn't respond well to really quick pans/head turns, as that's just too much math too fast for the plug-in to crunch. You end up getting a lot of phasey, washy sounds when you do it, so I had to keep the 3D panning stuff slow and gradual. Small price to pay! Being able to place the guards' voices below and increasingly behind the character as he moved forward was amazing, and I'm especially happy with how the quick front-to-back pans on the office objects (mug of pens, paper) worked in the middle of the chase. That plus the whirl-around as the guards open the door on the roof.
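A stripped-down version of what a panner like this automates under the hood: a constant-power gain split plus a small extra delay on the far ear. Real plug-ins like Panorama add frequency-dependent filtering (the HRTF part) on top; the 0.6 ms figure and the mapping here are toy values of my own.

```python
import math

# Hedged sketch of binaural-style panning: per-ear gains plus a small
# interaural delay on the far ear. No HRTF filtering; values are toy.

SR = 48000

def pan(mono, azimuth_deg, sr=SR):
    """Place a mono signal at azimuth_deg (-90 = hard left, +90 = hard
    right). Returns equal-length (left, right) sample lists."""
    t = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)   # map to [0, pi/2]
    gl, gr = math.cos(t), math.sin(t)                  # constant-power gains
    delay = round(0.0006 * sr * abs(math.sin(math.radians(azimuth_deg))))
    left = [s * gl for s in mono]
    right = [s * gr for s in mono]
    far, near = (left, right) if azimuth_deg > 0 else (right, left)
    far[:0] = [0.0] * delay          # the far ear hears it a touch later
    near.extend([0.0] * delay)       # pad so both channels stay equal length
    return left, right

left, right = pan([1.0] * 10, 90)    # hard right: right loud, left delayed
```

Automating `azimuth_deg` over time is essentially what "drawing" an object's path in the plug-in does, which also hints at why very fast pans smear: the delay and gains are jumping every few samples.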

The next big victory in positioning everything in this environment came from messing around with multiple reverbs. When I started, I was using a standard low-pass filter (that "muted" effect) to try to make the guards' voices sound as if they were being blocked by the walls of the vent, but it felt forced. Sending their voices into the same claustrophobic, metallic-sounding reverb I was using for the main character's vent squirming, while keeping their dry voice signal down, worked way better. Favorite examples of verb usage in the soundscape are as the main character pans his head back up and one guard's voice seems to "move" into the vent, as well as the way I treated the alarm as the character bursts into the stairwell.

Final Result

Separated into stems because we had to turn them in that way, and it might give you a better look at what's happening in each part of the whole.

Ambiences Only:

Keynotes Only:

Signals Only:

Final Mix:

What Went Right / What I Learned

  • You shouldn't be scared of recording *in* a location when you want something to sound like it happened there. It seems obvious, I know, but up until this project, I really felt like the safest way to do things was to record sounds clean and close in a soundproof room, then control the way everything was positioned/echoing in a mix later on. For an assignment like this, I saved a ton of time by *not* doing it that way, and even managed to create some more realistic results. I couldn't have made that stairwell running happen as realistically any other way but recording myself jogging up the stairs with two tiny mics clamped to my glasses.
  • A little noise isn't a big deal. Seriously. When recording room tones etc. with our tiny omnidirectional microphones, I had to turn the gain way up in order to get anything to record at all. Doing this added a ton of hiss and white noise into the recordings - the mics weren't really meant to be turned up that high - but in the final mix, once all the ambiences were in and with everything else going on, you couldn't hear that shit at all. If I had spent any time cleaning those recordings up with X-Noise etc. before mixing them, it would have been totally wasted.
  • The Sennheiser 416T is awesome. So is Panorama.

What Went Wrong

  • In retrospect, the whole project took me way longer than it should have due to my own meticulousness - if I had gotten my shit straightened out with recording planning and done that all in one day, things could have happened faster, but because of this term's schedule, all my recording sessions were happening on and off through the end of the term. As usual, "organization will set you free."
  • I wouldn't have done the fall from the vent/ring-out as dramatically, if I could go back and do it again. It's a bit over-the-top and Gears of War-y, but it helps cover up the fact that there's not much going on in that section of the piece besides some footstep scuffles. A straight ring with a less dramatic low-pass might've done it.
  • Not totally sold on the fall from the roof either - I wish I had some closer-sounding traffic sources to gradually fade in to give a better sense of proximity. Struggled a lot with this last section!

Please leave any questions/thoughts/criticism in the comments!