Last week, I set the flag down and declared I'd be learning C# while I wait out this strange inter-contract abeyance.
I haven't made as much progress as I'd hoped to. Typically, what I'd do is beat myself up over that: Why hasn't this thing taken root? Why have you wasted time doing XYZ when you know deeply that this is the best thing for you--that your mind craves these challenges, actually, on the days you've elected to do something else?--and other such self-eroding lines of thought.
But that's a useless way of looking at things. There are lessons to be learned in our failures (probably even more than in our successes!) if we can find the clarity to wall off the sting of it, step back and analyze why things have gone wrong. Maybe you'll see some of yourself in this post and in the upcoming days' work logs. I don't mean for this site to turn into a self-help resource, but I don't mind exposing my flaws if it might help someone else.
The truth is that aligning my mind and body to want to work at this thing every day, in a world full of easy, alluring--ultimately unsatisfying--escapist options, is really difficult. And as darkly reassuring as it would be to think that I'm alone in that, I suspect I'm not. Success on this front means building up good habits, like keeping the whole Luca unit running well on meditation, exercise, and a healthy diet, three things I've found to provide a lot of stability and satisfaction. Reflecting on how those things help has built some momentum towards keeping them going, and their upkeep becomes easier. As an aside, I've found that those three together are critically intertwined, and I can't really skip out on any of them without the others falling apart. So that's something to troubleshoot.
I think it also means giving in to the bad habits, though, and seeing where they lead you. By not forbidding them, you naturally start to see that they don't provide you with the same satisfaction that all of your good habits do, or that their effects are totally impermanent. You naturally start to trend away from them, even as you've given yourself total permission to explore them in the face of your less sexy productive options.
This is what I'm finding in the face of some clarity today. As long as I'm learning--be it C# or how to short-circuit my natural tendencies towards more consistent progress--it's all good.
I wanted to share this song with someone after rediscovering it this morning.
I find this track irrepressibly beautiful and hopeful in a way I can't easily describe. And I've felt this way ever since stumbling across it three years ago.
Every little element comes together in just the right way to serve the meaning I've ascribed to it. The thin, unpracticed vocals and kitchen-sink percussion; that impossibly low bass line and how it wavers on the edge of breaking some oscillator or amp. The one, single variation on that chip sound when it bends up and down on the final chorus. And the terrible amount of reverb that glues it all together.
A friend of mine put it well: "There are a lot of songs that sound like this, actually...but this one is special."
What I think it is is that this track feels like being young as you experienced it--not as someone looking back.
Thanks for reading!
I've spent the last few months on contract break / forced sabbatical from my time at Microsoft. And through the professional void, it's been personally fruitful. Thanks to living like an antisocial monk for most of 2013, I'd put away enough to take a long trip into Southeast Asia and wander about for a month.
(That deserves its own post--which it may or may not get--but you can view my efforts at photojournaling the whole thing over on my Instagram. It starts here, and I wish there were an easier way to reverse-chronologically browse this thing.)
Travel led into more travel: I got to take a trip to the Italian homeland with my dad and brother for a week's skiing, eating and pacing around downtown Rome. Then GDC. Then, a few weeks later, the annual VALVE Hawaii trip, which I'd been invited along to as a guest. I'm really blessed to have been able to live out this downtime as I have.
But amidst all the vacationing, the overactive brain wanders. You gotta feed it or it dies.
I've thought for a while that a real safe heading for game audio is the career path of the audio programmer. From my last year's experience on Spark, I can tell you that their time is an incredibly precious commodity. If you, the intrepid Sound Designer and Implementer, are the dreamer of big things, they are the ones who turn those dreams into executable reality. I don't care how good you are with Wwise or Unity or whatever: on any game of sufficient scope, if you're trying to do anything that'd stand out against the forward-rushing edge of game audio, you will need a programmer's help. Sometimes, though, you won't get it.
What do you do then?
As preparation for a hopeful and glorious return to pay-stubbed game audio--and because I have a little game I'd like to make someday--I'll endeavor to decode some of this low-level magic that these guys do. And, because I want to keep myself on rails and give you all something to read, I'll be documenting what I find, showing my work, demystifying everything I can.
The simplest of sandboxes seems like a ready-made project where I can poke into some Wwise-Unity integration and figure out exactly what's going on. I know Wwise well enough and there's documentation on that particular spot where the middleware hits the engine.
Here's a mission statement of sorts:
I want to hook a Wwise project directly to a game engine, preferably Unity. This means taking a Wwise project with in-built RTPCs, Switches etc. and creating brand new hooks to them within the game code, compiling and experiencing the audio moving about.
- Can I do this via an already built Unity game simply integrating a Wwise project into it?
- What languages would I need to learn to do it?
I really don't know anything about programming beyond some basic batch scripting stuff and a well-rusted primer on Python, courtesy of my time at VFS. So, expect a lot of frustration, doing things without really understanding how they work and, hopefully, lightbulbs coming on.
Step 1's checking out the Wwise-Unity integration package and seeing what the deal with it is.
Hello! It's been a minute. Lots to catch up on--it's probably best to just jump into present day and go from there.
Another Game Developer's Conference has come and gone, and I wanted to make sense of the whole experience, commit it to print before the day-to-day sinks back in. Let's take it point for point.
If I've said it once...
The best thing about the game industry is the people within it. This is my second year as a semi-credentialed, guess-I-belong-here attendee of GDC, going by that AAA name on my conference pass--but the people of game audio have been welcoming for as long as I've had intent to join them. They're humble, kind and--thanks to the tireless #GameAudioGDC banner-flying of @lostlab--extremely visible at the conference itself.
Something I saw this year was a lot of folks going Expo Pass only, saving some scratch and eschewing the body of the conference for the networking fringe: hallway meetups and late-night business idea shares over overpriced drinks. When you've got a group as organized as game audio, it works. Each morning's Game Audio Podcast meetup at Sightglass was an informal chance to mull over the day's talks and go all wide-eyed about the future alongside all manner of rookies and vets. It's so fucking cool that the group's that close-knit, and I really need to thank Damian and Anton for setting that stuff up every morning.
My heart goes out to all the underrepresented disciplines who don't have that same social leadership, as hanging with these guys is always the best part of the conference.
Of course, there was a lot to watch and hear that you could only get to with a badge. Everyone I spoke with agrees that GDC2014's talks were a notch up: ferociously technical and full of stuff you wanted to run back and put into practice. I've outlined two specific favorites below.
Two of the most talked-about presentations on the Audio Track were delivered one after another on Wednesday morning--and both by audio programmers. Tools, systems and implementation hooks are sexy, and a development team whose culture supports these things is one of the surest components of a great sounding game.
Jonathan Lanier's an audio programmer at Naughty Dog (do they have more than one? The luxury!) who spoke on the systems that went into the incredible sound of The Last of Us. That one was my game of the year--in an age when I'm spoiled for choice and spend far too much time considering, but not actively engaging with, my Steam catalog, TLoU had me running home from work to fire up the console and running my mouth around the coffee machine every morning with stories of the last night's play. Lanier outlined the Creative and Audio Directors' early pre-production talks, which set audio up for development support and eventual success, before digging into the technical ins and outs.
The audio team was able to ground their audio in the gritty realism of the world by hitching a ride on Naughty Dog's tried and tested raycast engine. This let them throw lines and cones around every crumbling environment, bringing back useful information that let them filter, verb out and otherwise treat their sound. In a game where you spend so much time crouching and listening, the sum of all these subtle treatments made for some incredibly tense pre-combat situations: planning my approach as unseen Clickers shambled and squealed somewhere off in the dark, or straining just a little bit to hear Ellie and realizing I'd jogged too far ahead.
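The raycast-into-treatment idea can be sketched in miniature. The following is a toy model with made-up parameter curves and function names--nothing here reflects Naughty Dog's actual engine--but it shows the core loop: cast a line from the listener to a source, count the geometry in the way, and turn that into filter and reverb settings.

```python
# Toy sketch of raycast-driven audio treatment: cast a line from the
# listener to each sound source, count wall hits, and map occlusion to
# a low-pass cutoff and reverb send. All names and mappings here are
# invented for illustration.

def segments_intersect(p1, p2, p3, p4):
    """Return True if segment p1-p2 properly crosses segment p3-p4."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def occlusion_treatment(listener, source, walls):
    """Count walls between listener and source; derive filter/reverb params."""
    hits = sum(1 for w in walls if segments_intersect(listener, source, *w))
    # Each intervening wall pulls the low-pass cutoff down and pushes the
    # signal wetter -- purely illustrative curve shapes.
    cutoff_hz = max(500.0, 20000.0 * (0.4 ** hits))
    reverb_send = min(1.0, 0.2 + 0.25 * hits)
    return hits, cutoff_hz, reverb_send

# A source heard through one wall:
listener = (0.0, 0.0)
source = (10.0, 0.0)
walls = [((5.0, -3.0), (5.0, 3.0))]
print(occlusion_treatment(listener, source, walls))
```

A real engine would cast cones as well as lines and feed far more than two parameters, but the shape of the idea--geometry queries in, DSP settings out, every frame--is the same.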
What's important is that the team never became slaves to their own systems, putting the technique above the telling. They tried out HDR--the silver bullet audio solution of 2013--and found it didn't fit the type of perspective they were trying to put you in. So they rolled their own dynamic mixing solution. They liked the way enemy chatter faded out over distance, but that same falloff curve meant some key dialogue with Ellie could go unintelligible. So they sent enemy and friendly NPC dialogue through separately adjustable wet/dry treatments and reverb buses.
TLoU's audio tech is impressive, but nothing any AAA studio couldn't have dreamed up themselves. It's the fact that they got so much of it into the game--and had a studio that believed in audio; that gave them the resources to do all of that--that turned it into the greatest sounding game of the year.
The only shitty thing about this talk is that it was double-scheduled alongside A Context-Aware Character Dialog System, so you had to pick one or the other--but not both. One to watch on the Vault later on.
This was the Audio Track talk that blindsided everyone this year: Alastair MacGregor's an audio programmer from Rockstar who brought with him an overview of what it took to accomplish the sound of Grand Theft Auto V. I feel Rockstar doesn't often go public about their methods and techniques--as Anton said in the podcast, Alastair's name on the program felt like "someone from Rockstar being let outdoors"--but I don't think anyone expected them to reveal what they ended up showing.
GTAV features around 90 hours of recorded dialogue, heaps of licensed music and sound design in what is almost certainly the audio budget record-breaker of last generation. All of this was powered by Rockstar's internal audio toolset, RAGE. It's maintained and developed by a team of audio programmers and sound designers staffed there independent of any specific game project--i.e. a dedicated team. They've been iterating and improving upon RAGE since around the time of Grand Theft Auto IV, making RAGE--now versioned 3.0--at least five years in the making.
RAGE is insanely comprehensive in what it facilitates; it reads like a game audio Christmas list fulfilled. Thankfully, volunteers and event management were on hand to scrape flying chunks of blown mind off the walls as Alastair touched upon feature after feature. Here are a few highlights; you'll want to try to catch the talk or someone else's summary for more, because there was more.
GTAV didn't even ship on PS4, ergo: there is and will be more.
How RAGE Wins Everything
When the team started running up against the wall of lining up microfragments of weapon audio and trigger timings, the RAGE team responded. The engine allows for sub-frame (i.e. more than once per 1/30th of a second--more often than most of the game's systems ever make a call), synchronous, sample-accurate triggering of multiple assets in different formats. Designers could stack one gun layer in uncompressed PCM and another wrapped in XMA--which would need a little decoding--and the engine accounts for this, keeping everything locked together. Did I mention that GTA was so filled to capacity that the architects had to load audio into the PS3's video RAM to hit their goals? They did, and RAGE buffers for the transfer time out of video memory and still keeps things locked.
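To make that concrete, here's a toy model of the scheduling problem. The format names and latency figures are invented for illustration--Rockstar's real numbers weren't shared--but the trick is the same: pre-roll each voice by its codec's decode lead time so every layer's first audible sample lands on exactly the same output sample.

```python
# Toy model of sample-accurate, mixed-format layered triggering.
# DECODE_LATENCY values are hypothetical: the point is only that the
# mixer starts each voice early by its own decode lead so the layers
# stay phase-locked at the output.

SAMPLE_RATE = 48000

# Hypothetical per-format decode lead time, in samples.
DECODE_LATENCY = {"pcm": 0, "xma": 128, "vram_pcm": 64}

def schedule_layers(trigger_sample, layers):
    """Return (format, decode_start_sample) pairs so all layers align."""
    plan = []
    for fmt in layers:
        lead = DECODE_LATENCY[fmt]
        # Begin decoding early by the codec's latency so the audible
        # output of every layer starts at trigger_sample.
        plan.append((fmt, trigger_sample - lead))
    return plan

# Trigger a three-layer gunshot at output sample 96000 (2.0 s in):
plan = schedule_layers(96000, ["pcm", "xma", "vram_pcm"])
for fmt, start in plan:
    print(f"{fmt}: begin decode at sample {start}")
```

The same bookkeeping extends to the video-RAM case: the transfer time out of VRAM is just one more lead term folded into the start offset.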
Better Engines, Cheaper
GTAV's cars sound much better than its predecessor's. (I don't know this for sure. Haven't played GTAV yet! But I'm taking Alastair's word for it.) Beyond simple loops, each instance of a car in GTAV is kitted out with not one but two granular synthesizers--one for processing engine sounds, another for exhaust--that split source recordings into tiny, reassemble-able grains at runtime, stretching their audio further and reducing memory usage. Naturally, RAGE features a nice graphical interface for the audio designers to tune these synths, offering fine control over, say, which sections of a specific sample to granulate and how to blend between those areas to create convincing idle transitions (idles, as steady, non-pitching sounds, are typically poor candidates for granulation). They're even able to specify a percentage of grains to use from each section to get really gritty about memory usage: get the sound believable, then start paring the complexity back and ride that fine line. Thoughtful options like this mean these synthesizers run with brutal efficiency, so even the CPU load of two instances per car--and the game features a lot of cars--makes for an effective tradeoff vs. loading fatter loops into memory. GTAV's programmers are seventh-dan masters of the Cell processor architecture.
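If granular synthesis is new to you, the core mechanic fits in a few lines. This is a bare-bones sketch with invented names and parameters--nothing to do with RAGE's actual design--but it shows the two moves described above: chop a recording into grains, then reassemble a stream from a kept subset of them (the `keep_fraction` knob standing in for that discard-grains-to-save-memory option).

```python
import math, random

# Minimal granular resynthesis sketch. A source recording is chopped
# into short fixed-length grains; playback rebuilds a stream from
# randomly chosen grains, each with a linear fade-in/out so the joins
# don't click. keep_fraction mimics throwing grains away for memory.

def make_grains(samples, grain_len):
    """Chop a sample list into fixed-length grains."""
    return [samples[i:i + grain_len]
            for i in range(0, len(samples) - grain_len + 1, grain_len)]

def granular_stream(grains, n_grains_out, keep_fraction=1.0, seed=0):
    """Reassemble n_grains_out grains drawn from a kept subset."""
    rng = random.Random(seed)
    kept = grains[:max(1, int(len(grains) * keep_fraction))]
    out = []
    for _ in range(n_grains_out):
        g = rng.choice(kept)
        n = len(g)
        # Triangular envelope: fade each grain in and out.
        out.extend(s * min(i, n - 1 - i) / (n / 2) for i, s in enumerate(g))
    return out

# Fake "engine" source: one second of a 100 Hz sine at 8 kHz.
src = [math.sin(2 * math.pi * 100 * t / 8000) for t in range(8000)]
grains = make_grains(src, grain_len=400)
stream = granular_stream(grains, n_grains_out=10, keep_fraction=0.5)
print(len(grains), len(stream))
```

A real engine-sound granulator adds pitch tracking, overlapping windows and RPM-driven grain selection on top of this skeleton, which is where the tuning interface Alastair showed comes in.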
Like Promethean Fire
There's lots of talk about procedural audio these days: sounds spun up entirely out of oscillators and code, costing very little memory at the expense of some CPU usage. The idea is that, at its best, procedural sound can free up valuable memory for larger, necessarily manmade assets like voiceover and orchestral music by covering all the little bits that maybe don't need to sound 100% realistic--footsteps, physics sounds, etc. At least, that's where most of us have been setting the near-term bar, because even making those sorts of sounds out of thin air is really freaking hard to do. The general consensus has been that procedural audio is coming, but isn't quite ready just yet.
Except that fully 30% of the sound effects in GTAV were created using RAGE's procedural audio editor.
Fucking 30%. Of a game that large. That shipped on the last generation.
Alastair spent some time demonstrating RAGE's modular synth-like interface that helped make this possible. It allows their audio designers to craft and tinker towards a procedural sound asset before exporting that synthesizer configuration as an asset that can run in-game. He auditioned a few that might as well have come from a microphone; apparently, Rockstar's sound designers are pretty much all Neo. This part of the talk thrust me through the full ten stages of denial and I eventually came around to stunned bewilderment.
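For anyone who's never seen "sound from thin air," here's about the simplest possible example of the category: a procedural footstep built from nothing but a random number generator, a crude one-pole low-pass and a decay envelope. Every name and constant here is invented for illustration; real procedural footstep models (let alone Rockstar's) are far more involved.

```python
import math, random

# Toy procedural footstep: a short noise burst, darkened by a one-pole
# low-pass filter and shaped by an exponential decay envelope. No
# recorded asset involved -- the whole sound exists only as this recipe.

def procedural_footstep(sr=8000, dur=0.08, cutoff=0.3, seed=1):
    """Return a list of samples: decaying, low-passed noise."""
    rng = random.Random(seed)
    n = int(sr * dur)
    out, lp = [], 0.0
    for i in range(n):
        noise = rng.uniform(-1.0, 1.0)
        # One-pole low-pass: higher cutoff -> brighter thud.
        lp += cutoff * (noise - lp)
        env = math.exp(-8.0 * i / n)  # fast exponential decay
        out.append(lp * env)
    return out

step = procedural_footstep()
print(len(step))
```

The gap between this thud and a believable boot-on-gravel is exactly why a designer-friendly patching interface like RAGE's matters: the hard part isn't generating samples, it's giving sound designers enough handles to sculpt them.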
tl;dr Rockstar's audio tech is years ahead of everyone and we all had no idea.
Gosh, there's still so much to go over. FMOD, Wwise and Fabric battling down to become the de facto indie audio solution of the future, just as Unity spools up its own tech. Unreal Engine coming down from its status as a AAA platform to meet the little guys with a cheapish subscription model, and throwing back the curtain on Blueprint, its new visual scripting tool for quickly creating awesome looking stuff.
It was a week of ingestion whose digestion continues. I'll likely have more to say once the whole of the conference hits the online Vault. The plan is to kick back and nerd it up with some coworkers, catch all the stuff we missed from the Audio Track and beyond. I'm sure there's lots in there that'd equally inspire.
For now, it's time to cool my spending, crack into a side project or two and thank everyone who made last week so amazing.
#GameAudioGDC is a truly happy place.
I thought it might be a good idea to cut together several different short reels for different purposes -- not everyone's going to be interested in my sound design or implementation if they're looking for help with a radio spot or cleanup on a speech-only PSA. Sometimes, they may only want to see a selection of what I do.
The latest of these reels is meant to highlight some of the shorts I've done Dialog Editing or Mixing on, and it's below.
I found it pretty tough to find good cuts from my projects that would 'showcase' dialog work, as usually it's the type of thing that slips into the background when done well. It's not really meant to be noticed. And so while the general public is still semi-clueless as to what goes on in post-production audio, I think there's a special lack of understanding around dialog work. If you tell someone that you cut sound effects on Transformers, they might stop and realize that Optimus Prime, as a CG construct, didn't make all those crazy transforming noises by himself. But tell them that you smoothed tone transitions between cuts on the production track or cleaned up unwanted movement/mouth noise, and you'd probably get a blank stare.
I think that the examples I chose for this reel and the way I structured it may help to show even the unfamiliar viewer what a Dialog Editor / Mixer typically does -- and to those of you already familiar with it, I hope you like what you hear.
As always, thoughts and comments are welcome!
I was approached a few weeks ago by one of the team from ArtSchoolReviews.ca, who asked me if I'd write up a review of VFS' Sound Design for Visual Media program for their site -- which is about to undergo some dramatic design changes.
As of a couple of days ago, that review is online, and you can read it here:
I was originally planning to host a review on this site as well, but as their formatting is just so much nicer, I don't think there's a need for that right now.
Many of you have written to me with questions about VFS over the past year or so and I've been happy to help. And while I hope this review serves as a good starting point for a few new generations of prospective VFS SD students, you can still e-mail me directly if there's anything you'd like to know.
New Year's Eve is always a time for reflection. I certainly have my share of sound resolutions for 2012 -- tools and techniques I'd like to learn, types of projects I'd like to work on, ways I'd like my professional life to shape up -- but more than anything else, I'd like to give myself permission to enjoy the ride a little bit. I'm traditionally very critical of myself and my own work, but as I stand here at the start of the rest of my professional life, I'm really hopeful about the way this new year's going to turn out. Here's what's new.
Linear Audio Reel
I've cut together a Linear Audio Reel of some of my work from the past 12 months, put together with a day's worth of learning in Adobe Premiere. Hacking it all together has already given me a lot more respect for (and interest in) the work of the picture editor, and Adobe's dynamic link technology -- which allows you to import and work with entire video timelines between applications, edit them live and see the updates reflected across the board without any re-rendering -- is really amazing stuff.
This represents pretty much the full spectrum of post disciplines I was exposed to this past year, and I think it plays pretty well. But I'd love to hear your feedback.
- Finished post-production on Red Rabbit a month or so ago, earning my first IMDb credit. I provided dialogue editing/mixing, ADR and Walla recording/editing/mixing, music editing/mixing, Foley recording/editing, and co-designed the sound effects for the film's animated fight sequence. Red Rabbit has been sent off to the Tribeca Film Festival, among others. Very pleased with the end result and have my fingers crossed.
- Finished post-production on Portrait, a short horror slash paranormal film for VFS. Provided dialogue editing/mixing, ADR recording/editing/mixing and Foley recording/editing.
- Have begun work on an unannounced independent platformer for XBLA. Really excited to be able to show some stuff from this one eventually.
This site will be getting a bit of a makeover in 2012 as well, as I rebrand and reform for easier independent contracting.
I'll be relocating to Seattle in the early part of the New Year to pound pavement and continue to collaborate on independent game sound and short films; watch this space for news as always.
And have a great New Year!
So what's next?
After finishing up Deus Ex: Human Revolution - one of the carrots that got me through the last two months - it's back onto some great, detailed Foley and SFX work for Red Rabbit, a student film we're polishing up for festival submission at the end of the month. Red Rabbit is a Tarantino-esque slice of cinema that tracks badass bounty hunter Babs Eaden through a small New Mexican town on her run to the border. I'm pretty psyched to get back into cutting blades and guns despite just coming out of the trailer redesign; there's always more to learn. I've been handling the dialogue and music edits/mixes so far and it's going well. The final cut should be done by the end of the month.
I'll be doing some production sound work on the side and chasing leads wherever I can. Most importantly, I'm laying out a couple independent sound projects/Thing-A-Week type deals to keep the skills sharp - some thoughts:
- Ambiences - I love backgrounds, but after a DX:HR playthrough I am newly re-sold on their power to captivate and immerse. The EIDOS Montreal sound team killed it with the ambient beds in the Detroit and Hengsha hubs, and the way everything's mixed, it's difficult to know where the beat of the city ends and the game's own music begins.
- Whooshes/Trailer Impacts - These two things are part of the bedrock of Hollywood sound design, and I feel like they're a minute to learn, a lifetime to master. Fresh off that DX:HR trailer redesign, I'm really eager to push past what I was able to do there, generate some new, personalized stuff and find my own favorite methods for these.
- Synthesis/Zebra - I had such a great time with Zebra and Massive for creating all of the music and some of the inorganic sounds of my final, but have really barely scratched the surfaces of these two powerful synths. The nice thing with synthesis is that so much of it is based on the same fundamentals, so diving crazy deep into a particular synth is still pretty transferable onto the next big thing, the Absynth or Reaktor of the 2010s. Lots to explore here.
Abstract sound design is another idea. I've got an animation lined up that's much less literal than my most recent trailer and it'd be nice to just cut loose, generate a ton of crazy source and explore some avenues I haven't yet. There's creatures, hard effects, neat vocal processing techniques, tons out there - I'm still many thousand hours away from my personal 10,000.
I'll be back around New York City for the coming holidays and am looking forward to spending some time with my family, most of whom I haven't seen for the whole year. The plan after that is to head back out West, put a few roots in Seattle and its great game development scene. Of course, a great work offer would change everything - so you never know.
Last week, I graduated from VFS' Sound Design for Visual Media program, to an audience of colleagues, parents, instructors and friends. What a ride it's been.
A year ago, that night before I left New York, it felt like I was throwing everything away on a total gamble. I was moving away from friends I'd had for years, my family, my contacts and what have no doubt been some incredible times with Wanderfly to move to a coast I'd never known and drown myself in student debt. All to chase the dream, to shut up that little voice that kept telling me I needed to make a change. It's now a year later, and I'm really glad I did. Not in some blanket, everything-has-been-amazing way, but in a very flawed, wonderfully imperfect kinda way. I've changed. I've learned things about myself I would never have known without taking that leap. My eyes and ears have been opened thanks to the wisdom of some incredible teachers. And that far-flung bunch of sound junkies called SD49 have been there the whole time, going from classmates to confidants to lifelong friends.
This was the gist (I think) of my graduation speech as class rep, and while I was worried it all waxed a little too emotional at times, it went over well and it's how I truly feel. It's been an amazing ride, but I can't wait for what comes next.
I owe this place a proper write-up about my time at VFS and things I'd do differently - ex-producer can't help but post-mortem - but for now, a quick look at my final project:
Deus Ex - Human Revolution "Revenge" Trailer Redesign
Everything you're hearing in there, I had a hand in, and as a major Deus Ex fan, it was a total labor of love. I really have to thank Square-Enix and EIDOS Montreal for their permission.
I'll talk a little more about the sounds that went into that trailer when I get some spare time. In the meantime, a bunch of samples of my work have moved to this Work Examples page.
November is film work, job applications and all the sound experimentation I can find time for.
Months and months later, here we are. I'm nearing graduation from VFS' Sound Design for Visual Media program and will be tidying this place up quite a bit. It's been a whirlwind year with a lot of lessons learned, so in addition to throwing up all of my latest coursework (at least some of which has to be good enough to show around) I'll have a few closing thoughts on the program. In the meantime, things will likely change and move around. Don't worry, it'll all be back in order before October rolls through.
Cheers to all the prospective students who have found my blog while searching for information on the Sound Design for Visual Media program; you can always contact me via e-mail if you have any in-depth questions.
And a big thanks to everyone who's been reading!
We three amigos were feeling pretty confident about our workload for Term 3 when we were approached by a graduating student from VFS' Classical Animation department about doing the sound for her final, the story of a day in the life of a young girl at play, and the magical backpack that protects her from harm throughout.
While we probably would have turned down just any old work to focus instead on our assignments, this animation was completely legit -- there was no way we could pass up a chance to get our sound on such a beautiful piece.
So we said yes, and threw ourselves into it amidst the rest of our coursework.
Going in reverse for this one. Here's the end result, and below that is some thinking/work that went into it.
Our first look at "Behind" was a draft copy of the piece with temp music dropped in -- nothing locked yet, but close enough to completion that we got what the animator had in mind. The whole piece had a very Miyazaki/Spirited Away sort of vibe to it, and we wanted that to come through unaffected in the final clip. Our job was really clear: support the visual and the mood just enough to elevate the movie, then get the hell out of the way.
Some early thoughts we had:
- Less is more. Both the delicate visual style of BB's animation and the tone of the piece called for a really light touch from the sound department. We've been used to crazy psychic battles, fight scenes and explosions, so trying to create "soft" sounds was totally new for us. We decided early on to keep the sound effects and ambiences from being "over-real" and leave out a lot of the detail we'd otherwise put in.
- Cede to the music. It's weird to say this, as sound people are usually jostling with the music guys to get their sounds played loudest in the final mix - but again, with a really delicate piece like this one, we had to concede that music could say a lot more than simple effects could to create the vibe we wanted. We were prepared to have our stuff played down.
- If we can't get the crying working, we'll drop the dialogue entirely. It was a coin toss as to whether the girl was really vocalizing or not as she skips along, but we thought it'd be a great experience to get in touch with our acting campus and cast talent for the role, get some extra dialogue editing practice, etc. -- so we committed to putting dialogue in the piece, with the caveat: if it wasn't believable, if the crying felt false or got in the way, we'd strip it all out.
In all the student works (in-progress and final) I've seen so far in my time here, I've observed that dialogue is kind of a high-risk proposition. When it's good, or just average, it goes unnoticed and we simply absorb it, freeing up our brains to concentrate on the visuals/rest of the sound design that's gone into a piece. When it's bad, it hijacks our attention and basically spoils any chances of walking away from the piece thinking that it had good sound...
Recording and Editing
... Fortunately, we managed to find a couple of voice actors who fit the character we were looking for, and had one of those "Oh my God"/mutual turn-to-each-other moments after our session with Arielle Tuliao, who did the voice of the girl. That was a huge turning point in the process; after editing her stuff into sync and laying it on with our Foley, everything started feeling like it was really going to work out. Everyone else did a great job as well, with Brendan's voice (one of the kids) also providing some neat source material for the bag roar when pitched down.
Apart from the music, the great majority of the sound that got played up in the final mix was our Foley -- all of it was captured with the Rode NTG-3 running through an FP33 mic pre into Pro Tools. Would've loved to have used the 416 on all that stuff, but didn't discover it until a few weeks later!
The sessions took us the better part of two days to fully record and edit. Notably "cool" Foley solutions were flicking our fingers with masking tape on them for the butterfly flaps, ripping pieces of cloth underneath a dirt pit for root tears/plucks, and a nice layer of fresh-picked green onions on our surface for some squeaky meadowy footstep details.
My buddy Gwen handled the BGs and SPFX, both of which I thought really had that soft sonic texture the piece needed, and were rich and full without being really showy.
Manuel took the animator's original music, a simple piano melody, and really filled it out to score the entire piece -- unfortunately, a lot of those changes didn't make it into the final mix, in favor of the director's starting piece. You gotta expect it.
Final mixes were done by a later-term student and VFS' Matthew Thomas.
Hope you enjoyed! Leave any questions or feedback in the comments.