Luca Fusi Sound Design | Implementation



Hello! It's been a minute. Lots to catch up on--it's probably best to just jump into present day and go from there.

Another Game Developer's Conference has come and gone, and I wanted to make sense of the whole experience, commit it to print before the day-to-day sinks back in. Let's take it point for point.

The People

If I've said it once...

The best thing about the game industry is the people within it. This is my second year as a semi-credentialed, guess-I-belong-here attendee of GDC, going by that AAA name on my conference pass--but the people of game audio have been welcoming for as long as I've had intent to join them. They're humble, kind and--thanks to the tireless #GameAudioGDC banner-flying of @lostlab--extremely visible at the conference itself.

Something I saw this year was a lot of folks going Expo Pass only, saving some scratch and eschewing the body of the conference for the networking fringe: hallway meetups and late-night business idea shares over overpriced drinks. When you've got a group as organized as game audio, it works. Each morning's Game Audio Podcast meetup at Sightglass was an informal chance to mull over the day's talks and go all wide-eyed about the future alongside all manner of rookies and vets. It's so fucking cool that the group's that close-knit, and I really need to thank Damian and Anton for setting that stuff up every morning.

My heart goes out to all the underrepresented disciplines who don't have that same social leadership, as hanging with these guys is always the best part of the conference.

The Talks

Of course, there was a lot to watch and hear that you could only get to with a badge. Everyone I spoke with agrees that GDC2014's talks were a notch up: ferociously technical and full of stuff you wanted to run back and put into practice. I've outlined two specific favorites below.

Two of the most talked-about presentations on the Audio Track were delivered one after another on Wednesday morning--and both by audio programmers. Tools, systems and implementation hooks are sexy, and a development team whose culture supports these things is one of the surest components of a great-sounding game.

Aural Immersion: Audio Technology in The Last of Us

Jonathan Lanier's an audio programmer at Naughty Dog (do they have more than one? The luxury!) who spoke on the systems that went into the incredible sound of The Last of Us. That one was my game of the year--in an age when I'm spoiled for choice and spend far too much time considering, but not actively engaging with, my Steam catalog, TLoU had me running home from work to fire up the console and running my mouth around the coffee machine every morning with stories of the last night's play. Lanier outlined the Creative and Audio Directors' early pre-production talks, which set audio up for development support and eventual success, before digging into the technical ins and outs.

The audio team was able to ground their audio in the gritty realism of the world by hitching a ride on Naughty Dog's tried and tested raycast engine. This let them throw lines and cones around every crumbling environment, bringing back useful information that let them filter, verb out and otherwise treat their sound. In a game where you spend so much time crouching and listening, the sum of all these subtle treatments made for some incredibly tense pre-combat situations: planning my approach as unseen Clickers shambled and squealed somewhere off in the dark, or straining just a little bit to hear Ellie and realizing I'd jogged too far ahead.
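The basic shape of that raycast trick is easy to sketch. Everything below is my own guesswork, not Naughty Dog's actual code or numbers: jitter a handful of rays toward the source, and map the blocked fraction onto a low-pass cutoff and a reverb send.

```python
import random

def occlusion_filter(listener, source, is_blocked, n_rays=8, spread=0.5):
    """Estimate occlusion by casting jittered rays from listener to source.

    is_blocked(a, b) -> True if the segment a->b hits geometry.
    Returns a hypothetical (low-pass cutoff in Hz, reverb send 0..1).
    """
    hits = 0
    for _ in range(n_rays):
        # Jitter the target so partial cover (a doorway, a half wall)
        # reads as partial occlusion rather than all-or-nothing.
        target = tuple(c + random.uniform(-spread, spread) for c in source)
        if is_blocked(listener, target):
            hits += 1
    occlusion = hits / n_rays                      # 0.0 clear .. 1.0 blocked
    cutoff_hz = 20000.0 * (1.0 - 0.9 * occlusion)  # darker when occluded
    reverb_send = 0.2 + 0.6 * occlusion            # wetter when occluded
    return cutoff_hz, reverb_send
```

The mapping curves here are invented; the point is only that a few cheap rays per frame buy you continuous, per-source filter and send values instead of a binary occluded/not-occluded switch.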

What's important is that the team never became slaves to their own systems, putting the technique above the telling. They tried out HDR--the silver bullet audio solution of 2013--and found it didn't fit the type of perspective they were trying to put you in. So they rolled their own dynamic mixing solution. They liked the way enemy chatter faded out over distance, but that same falloff curve meant some key dialogue with Ellie could go unintelligible. So they sent enemy and friendly NPC dialogue through separately adjustable wet/dry treatments and reverb buses.
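That split is really just two falloff curves instead of one. A minimal sketch, with made-up ranges (none of this reflects the actual TLoU tuning): enemy chatter fades fast for tension, while friendly/story lines stay intelligible much further out.

```python
def npc_dialogue_gain(distance, category):
    """Hypothetical per-category distance falloff for NPC dialogue.

    Returns a linear gain 0..1; each category gets its own bus curve.
    """
    curves = {
        "enemy":    {"near": 2.0, "far": 25.0},   # fades quickly
        "friendly": {"near": 4.0, "far": 60.0},   # stays intelligible longer
    }
    c = curves[category]
    if distance <= c["near"]:
        return 1.0
    if distance >= c["far"]:
        return 0.0
    # Linear falloff between the near and far distances.
    return 1.0 - (distance - c["near"]) / (c["far"] - c["near"])
```

In a real engine the wet/dry balance and reverb send would get the same per-category treatment; the gain curve is just the easiest piece to show.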

TLoU's audio tech is impressive, but nothing any AAA studio couldn't have dreamed up themselves. It's the fact that they got so much of it into the game--and had a studio that believed in audio and gave them the resources to do all of it--that turned it into the greatest sounding game of the year.

The only shitty thing about this talk is that it was double-scheduled alongside A Context-Aware Character Dialog System. So, you had to pick one or the other--but not both. One to watch on the Vault later on.

The Sound of Grand Theft Auto V

This was the Audio Track talk that sidelined everyone this year: Alastair MacGregor's an audio programmer from Rockstar who brought with him an overview of what it took to accomplish the sound of Grand Theft Auto V. I feel Rockstar doesn't often go public about their methods and techniques--as Anton said in the podcast, Alastair's name on the program felt like "someone from Rockstar being let outdoors"--but I don't think anyone expected them to reveal what they ended up showing.

GTAV features around 90 hours of recorded dialogue, heaps of licensed music and sound design in what is almost certainly the audio budget record-breaker of last generation. All of this was powered by Rockstar's internal audio toolset, RAGE. It's maintained and developed by a team of audio programmers and sound designers staffed independently of any specific game project--i.e., a dedicated team. They've been iterating and improving upon RAGE since around the time of Grand Theft Auto IV, making RAGE--now versioned 3.0--at least five years in the making.

RAGE is insanely comprehensive in what it facilitates; it reads like a game audio Christmas list fulfilled. Thankfully, volunteers and event management were on hand to scrape flying chunks of blown mind off the walls as Alastair touched upon feature after feature. Here are a few highlights; you'll want to try to catch the talk or someone else's summary for more, because there was more.

GTAV didn't even ship on PS4, ergo: there is and will be more.

How RAGE Wins Everything

Synchronicity 3.0
When the team started running up against the wall of lining up microfragments of weapon audio and trigger timings, the RAGE team responded. The engine allows for sub-frame (i.e. more than once per 1/30th of a second--more frequently than most stuff in the game is ever making a call), synchronous, sample-accurate triggering of multiple assets in different formats. Designers could stack one gun layer in uncompressed PCM, another wrapped in XMA--which would need a little decoding--and the engine accounts for this, keeping everything locked together. Did I mention that GTA was so filled to capacity that the architects had to load audio into the PS3's video RAM to hit their goals? They did, and RAGE buffers for the transfer time out of video memory and still keeps things locked.
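The scheduling problem underneath all that is simple to state: every layer must hit the output on the same sample, so layers that need decoding have to start early. A toy sketch of the idea (the format names and latency figures are invented, not Rockstar's):

```python
SAMPLE_RATE = 48000

# Hypothetical per-format decode latencies, in samples. Compressed
# formats need a head start so every layer lands together.
DECODE_LATENCY = {"pcm": 0, "xma": 512}

def schedule_layers(trigger_sample, layers):
    """Given a target output sample and (name, format) layers, return
    the sample at which each layer's decode must begin so that all of
    them are audible at exactly trigger_sample."""
    return {
        name: trigger_sample - DECODE_LATENCY[fmt]
        for name, fmt in layers
    }

starts = schedule_layers(96000, [("crack", "pcm"), ("body", "xma")])
# The XMA layer starts decoding 512 samples early; both layers are
# audible at exactly sample 96000.
```

Streaming out of video RAM just adds another constant to the same subtraction, which is presumably why RAGE can "buffer for the transfer time" and stay locked.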

Better Engines, Cheaper
GTAV's cars sound much better than its predecessor's. (I don't know this for sure. Haven't played GTAV yet! But I'm taking Alastair's word for it.) Beyond simple loops, each instance of a car in GTAV is kitted out with not one but two granular synthesizers--one for processing engine sounds, another for exhaust--that split source recordings into tiny, reassemble-able grains at runtime, stretching the audio further and reducing memory usage. Naturally, RAGE features a nice graphical interface for the audio designers to tune these synths, offering fine control over things like which sections of a specific sample to granulate and how to blend between those areas to create convincing idle transitions (steady, non-pitching sounds are typically poor candidates for granulation). They're even able to specify a percentage of grains to use from each section to get really gritty about memory usage: get the sound believable, then start paring the complexity back and ride that fine line. Thoughtful options like this mean that these synthesizers run with brutal efficiency, so even the CPU load of two instances per car--and the game features a lot of cars--makes for an effective tradeoff vs. loading fatter loops into memory. GTAV's programmers are seventh-dan masters of the Cell processor architecture.
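That per-section grain budget is the part worth sketching. Assuming a workflow like the one described (section boundaries and percentages chosen by a designer; none of this is RAGE's real data model), a build step might keep only a fraction of each section's grains:

```python
import random

def pick_grains(sample_len, sections, grain_len=2400):
    """Keep only a percentage of each designer-tagged section's grains,
    trading a little realism for memory.

    sections: list of (start, end, keep_pct) tuples, in samples.
    Returns a list of (start, end) grain windows to retain.
    """
    grains = []
    for start, end, keep_pct in sections:
        # Candidate grain start points, back-to-back within the section.
        candidates = list(range(start, end - grain_len, grain_len))
        keep = max(1, int(len(candidates) * keep_pct / 100))
        for g in sorted(random.sample(candidates, keep)):
            grains.append((g, g + grain_len))
    return grains
```

A designer would then audition the result, nudge the percentages up where the engine sounds thin, and pare back everywhere else--exactly the "ride that fine line" loop Alastair described.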

Like Promethean Fire
There's lots of talk about procedural audio these days: sounds spun up entirely out of oscillators and code, costing very little memory at the expense of some CPU usage. The idea is that at its best, procedural sound can free up valuable memory for larger, necessarily manmade assets like voiceover and orchestral music by covering all the little bits that don't need to sound 100% realistic. Footsteps, physics sounds, etc. At least, that's where most of us have been setting the near-term bar, because even making those sorts of sounds out of thin air is really freaking hard to do. The general consensus has been that procedural audio is coming, but isn't quite ready just yet.

Except that fully 30% of the sound effects in GTAV were created using RAGE's procedural audio editor.

Fucking 30%. Of a game that large. That shipped on the last generation.

Alastair spent some time demonstrating RAGE's modular synth-like interface that helped make this possible. It allows their audio designers to craft and tinker towards a procedural sound asset before exporting that synthesizer configuration as an asset that can run in-game. He auditioned a few that might as well have come from a microphone; apparently, Rockstar's sound designers are pretty much all Neo. This part of the talk thrust me through the full ten stages of denial and I eventually came around to stunned bewilderment.
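To make "sounds spun up entirely out of oscillators and code" concrete, here's about the smallest possible procedural patch I can think of--nothing to do with RAGE's actual editor, just the principle: noise through a slowly swept one-pole low-pass gives a gusty, wind-like bed with zero stored samples.

```python
import math
import random

def procedural_wind(duration_s, sample_rate=24000, seed=7):
    """Tiny procedural-audio sketch: white noise through a one-pole
    low-pass whose cutoff is swept by a slow LFO. No recordings,
    no memory cost beyond the output buffer."""
    random.seed(seed)
    out, y = [], 0.0
    for n in range(int(duration_s * sample_rate)):
        # 0.3 Hz LFO drives the "gust" cycle.
        lfo = 0.5 + 0.5 * math.sin(2 * math.pi * 0.3 * n / sample_rate)
        alpha = 0.01 + 0.1 * lfo          # filter coefficient follows the gust
        x = random.uniform(-1.0, 1.0)     # the noise source
        y += alpha * (x - y)              # one-pole low-pass filter
        out.append(0.8 * y)
    return out
```

A real patch graph stacks dozens of these primitives--oscillators, filters, envelopes--and the exported configuration, not a waveform, is what ships. That's how 30% of a game's effects can weigh almost nothing on disk.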

tl;dr Rockstar's audio tech is years ahead of everyone and we all had no idea.

Everything Else

Gosh, there's still so much to go over. FMOD, Wwise and Fabric battling down to become the de facto indie audio solution of the future, just as Unity spools up its own tech. Unreal Engine coming down from its status as a AAA platform to meet the little guys with a cheapish subscription model, and throwing back the curtain on Blueprint, its new visual scripting tool for quickly creating awesome looking stuff.

It was a week of ingestion whose digestion continues. I'll likely have more to say once the whole of the conference hits the online Vault. The plan is to kick back and nerd it up with some coworkers, catch all the stuff we missed from the Audio Track and beyond. I'm sure there's lots in there that'd equally inspire.

For now, it's time to cool my spending, crack into a side project or two and thank everyone who made last week so amazing.

#GameAudioGDC is a truly happy place.


Dialog Reel

I thought it might be a good idea to cut together several different short reels for different purposes -- not everyone's going to be interested in my sound design or implementation if they're looking for help with a radio spot or cleanup on a speech-only PSA. Sometimes, they may only want to see a selection of what I do.

The latest of these reels is meant to highlight some of the shorts I've done Dialog Editing or Mixing on, and it's below.

I found it pretty tough to find good cuts from my projects that would 'showcase' dialog work, as it's usually the type of thing that slips into the background when done well. It's not really meant to be noticed. And so while the general public is still semi-clueless as to what goes on in post-production audio, I think there's a special lack of understanding around dialog work. If you tell someone that you cut sound effects on Transformers, they might stop and realize that Optimus Prime, as a CG construct, didn't make all those crazy transforming noises by himself. But tell them that you smoothed tone transitions between cuts on the production track or cleaned up unwanted movement/mouth noise, and you'd probably get a blank stare.

I think that the examples I chose for this reel and the way I structured it may help to show even the unfamiliar viewer what a Dialog Editor / Mixer typically does -- and to those of you already familiar with it, I hope you like what you hear.

As always, thoughts and comments are welcome!


VFS Sound Design Program Review

I was approached a few weeks ago by one of the team from the site, who asked me if I'd write up a review of VFS' Sound Design for Visual Media program for their site -- which is about to undergo some dramatic design changes.

As of a couple of days ago, that review is online, and you can read it here:

The Hardest Year - Sound Design for Visual Media at Vancouver Film School

I was originally planning to host a review on this site as well, but as their formatting is just so much nicer, I don't think there's a need for it right now.

Many of you have written to me with questions about VFS over the past year or so and I've been happy to help. And while I hope this review serves as a good starting point for a few new generations of prospective VFS SD students, you can still e-mail me directly if there's anything you'd like to know.


A New Year

New Year's Eve is always a time for reflection. I certainly have my share of sound resolutions for 2012 -- tools and techniques I'd like to learn, types of projects I'd like to work on, ways I'd like my professional life to shape up -- but more than anything else, I'd like to give myself permission to enjoy the ride a little bit. I'm traditionally very critical of myself and my own work, but as I stand here at the start of the rest of my professional life, I'm really hopeful about the way this new year's going to turn out. Here's what's new.

Linear Audio Reel

I've cut together a Linear Audio Reel of some of my work from the past 12 months, put together with a day's worth of learning in Adobe Premiere. Hacking it all together has already given me a lot more respect for (and interest in) the work of the picture editor, and Adobe's dynamic link technology -- which allows you to import and work with entire video timelines between applications, edit them live and see the updates reflected across the board without any re-rendering -- is really amazing stuff.

This represents pretty much the full spectrum of post disciplines I was exposed to this past year, and I think it plays pretty well. But I'd love to hear your feedback.

What's New

  • Finished post-production on Red Rabbit a month or so ago, earning my first IMDb credit. I provided dialogue editing/mixing, ADR and Walla recording/editing/mixing, music editing/mixing, Foley recording/editing, and co-designed the sound effects for the film's animated fight sequence. Red Rabbit has been sent off to the Tribeca Film Festival, among others. Very pleased with the end result and have my fingers crossed.
  • Finished post-production on Portrait, a short horror slash paranormal film for VFS. Provided dialogue editing/mixing, ADR recording/editing/mixing and Foley recording/editing.
  • Have begun work on an unannounced independent platformer for XBLA. Really excited to be able to show some stuff from this one eventually.

This site will be getting a bit of a makeover in 2012 as well, as I rebrand and reform for easier independent contracting.

I'll be relocating to Seattle in the early part of the New Year to pound pavement and continue to collaborate on independent game sound and short films; watch this space for news as always.

And have a great New Year!



So what's next?

Red Rabbit Logo

After finishing up Deus Ex: Human Revolution - one of the carrots that got me through the last two months - it's back onto some great, detailed Foley and SFX work for Red Rabbit, a student film we're polishing up for festival submission at the end of the month. Red Rabbit is a Tarantino-esque slice of cinema that tracks badass bounty hunter Babs Eaden through a small New Mexican town on her run to the border. I'm pretty psyched to get back into cutting blades and guns despite just coming out of the trailer redesign; there's always more to learn. I've been handling the dialogue and music edits/mixes so far and it's going well. The final cut should be done by the end of the month.

I'll be doing some production sound work on the side and chasing leads wherever I can. Most importantly, I'm laying out a couple independent sound projects/Thing-A-Week type deals to keep the skills sharp - some thoughts:

  • Ambiences - I love backgrounds, but after a DX:HR playthrough I am newly re-sold on their power to captivate and immerse. The EIDOS Montreal sound team killed it with the ambient beds in the Detroit and Hengsha hubs, and the way everything's mixed, it's difficult to know where the beat of the city ends and the game's own music begins.
  • Whooshes/Trailer Impacts - These two things are part of the bedrock of Hollywood sound design, and I feel like they're a minute to learn, a lifetime to master. Fresh off that DX:HR trailer redesign, I'm really eager to push past what I was able to do there, generate some new, personalized stuff and find my own favorite methods for these.
  • Synthesis/Zebra - I had such a great time with Zebra and Massive for creating all of the music and some of the inorganic sounds of my final, but have really barely scratched the surfaces of these two powerful synths. The nice thing with synthesis is that so much of it is based on the same fundamentals, so diving crazy deep into a particular synth is still pretty transferable onto the next big thing, the Absynth or Reaktor of the 2010s. Lots to explore here.

Abstract sound design is another idea. I've got an animation lined up that's much less literal than my most recent trailer and it'd be nice to just cut loose, generate a ton of crazy source and explore some avenues I haven't yet. There's creatures, hard effects, neat vocal processing techniques, tons out there - I'm still many thousand hours away from my personal 10,000.

I'll be back around New York City for the coming holidays and am looking forward to spending some time with my family, most of whom I haven't seen for the whole year. The plan after that is to head back out West, put a few roots in Seattle and its great game development scene. Of course, a great work offer would change everything - so you never know.


Final Project and Update

Last week, I graduated from VFS' Sound Design for Visual Media program, to an audience of colleagues, parents, instructors and friends. What a ride it's been.

A year ago, that night before I left New York, it felt like I was throwing everything away on a total gamble. I was moving away from friends I'd had for years, my family, my contacts and what have no doubt been some incredible times with Wanderfly to move to a coast I'd never known and drown myself in student debt. All to chase the dream, to shut up that little voice that kept telling me I needed to make a change. It's now a year later, and I'm really glad I did. Not in some blanket, everything-has-been-amazing way, but in a very flawed, wonderfully imperfect kinda way. I've changed. I've learned things about myself I would never have known without taking that leap. My eyes and ears have been opened thanks to the wisdom of some incredible teachers. And that far-flung bunch of sound junkies called SD49 have been there the whole time, going from classmates to confidants to lifelong friends.

This was the gist (I think) of my graduation speech as class rep, and while I was worried it all waxed a little too emotional at times, it went over well and it's how I truly feel. It's been an amazing ride, but I can't wait for what comes next.

I owe this place a proper write-up about my time at VFS and things I'd do differently - an ex-producer can't help but post-mortem - but for now, a quick look at my final project:

Deus Ex - Human Revolution "Revenge" Trailer Redesign

Everything you're hearing in there, I had a hand in, and as a major Deus Ex fan, it was a total labor of love. I really have to thank Square-Enix and EIDOS Montreal for their permission.

I'll talk a little more about the sounds that went into that trailer when I get some spare time. In the meantime, a bunch of samples of my work have moved to this Work Examples page.

November is film work, job applications and all the sound experimentation I can find time for.


Nearly There

Months and months later, here we are. I'm nearing graduation from VFS' Sound Design for Visual Media program and will be tidying this place up quite a bit. It's been a whirlwind year with a lot of lessons learned, so in addition to throwing up all of my latest coursework (at least some of which has to be good enough to show around), I'll have a few closing thoughts on the program. In the meantime, things will likely change and move around. Don't worry, it'll all be back in order before October rolls through.

Cheers to all the prospective students who have found my blog while searching for information on the Sound Design for Visual Media program, you can always contact me via e-mail if you have any in-depth questions.

And a big thanks to everyone who's been reading!



Behind – Post Sound

A quick write-up on a post project I took on with a few friends last term.

We three amigos were feeling pretty confident about our workload for Term 3 when we were approached by a graduating student from VFS' Classical Animation department about doing the sound for her final, the story of a day in the life of a young girl at play, and the magical backpack that protects her from harm throughout.

While we probably would have turned down just any old work to focus instead on our assignments, this animation was completely legit -- there was no way we could pass up a chance to get our sound on such a beautiful piece.

So we said yes, and threw ourselves into it amidst the rest of our coursework.

Going in reverse for this one. Here's the end result, and below that is some thinking/work that went into it.

Final Result

Early Planning

Our first look at "Behind" was a draft copy of the piece with temp music dropped in -- nothing locked yet, but close enough to completion that we got what the animator had in mind. The whole piece had a very Miyazaki-esque/Spirited Away sort of vibe to it, and we wanted that to come through unaffected in the final clip. Our job was clearly to support the visual and the mood just enough to elevate the movie, then get the hell out of the way.

Some early thoughts we had:

  • Less is more. Both the delicate visual style of BB's animation and the tone of the piece called for a really light touch from the sound department. We've been used to crazy psychic battles, fight scenes and explosions, so trying to create "soft" sounds was totally new for us. We decided early on to keep the sound effects and ambiences from being "over-real" and leave out a lot of the detail we'd otherwise put in.
  • Cede to the music. It's weird to say this, as sound people are usually jostling with the music guys to get their sounds played loudest in the final mix - but again, with a really delicate piece like this one, we had to concede that music could say a lot more than simple effects could to create the vibe we wanted. We were prepared to have our stuff played down.
  • If we can't get the crying working, we'll drop the dialogue entirely. It was a coin toss as to whether the girl was really vocalizing or not as she skips along, but we thought it'd be a great experience to get in touch with our acting campus and cast talent for the role, get some extra dialogue editing practice, etc. -- so we committed to putting dialogue in the piece, with the caveat: if it wasn't believable, if the crying felt false or got in the way, we'd strip it all out.

In all the student works (in-progress and final) I've seen so far in my time here, I've observed that dialogue is kind of a high-risk proposition. When it's good, or just average, it goes unnoticed and we simply absorb it, freeing up our brains to concentrate on the visuals/rest of the sound design that's gone into a piece. When it's bad, it hijacks our attention and basically spoils any chances of walking away from the piece thinking that it had good sound...

Recording and Editing

... Fortunately, we managed to find a couple of voice actors that fit the character we were looking for, and had one of those "Oh my God"/mutual turn-to-each-other moments after our session with Arielle Tuliao, who did the voice of the girl. That was a huge turning point in the process; after editing her stuff into sync and laying it on with our Foley, everything started feeling like it was really going to work out. Everyone else did a great job as well, with Brendan's voice (one of the kids) also providing some neat source material for the bag roar when pitched down.

Apart from the music, the great majority of the sound that got played up in the final mix was our Foley -- all of it was captured with the Rode NTG-3 running through an FP33 mic pre into Pro Tools. Would've loved to have used the 416 on all that stuff, but didn't discover it until a few weeks later!

The sessions took us the better part of two days to fully record and edit. Notably "cool" Foley solutions were flicking our fingers with masking tape on them for the butterfly flaps, ripping pieces of cloth underneath a dirt pit for root tears/plucks, and a nice layer of fresh-picked green onions on our surface for some squeaky meadowy footstep details.

My buddy Gwen handled the BGs and SPFX, both of which I thought really had that soft sonic texture the piece needed, and were rich and full without being really showy.

Manuel took the animator's original music, a simple piano melody, and really filled it out into a score for the entire piece -- unfortunately, a lot of those changes didn't make it into the final mix, in favor of the director's starting piece. You gotta expect it.

Final mixes were done by a later-term student and VFS' Matthew Thomas.

Hope you enjoyed! Leave any questions or feedback in the comments.


Find That You

I just wanted to share this, quickly - a post from my friend at

He's been posting a bunch of bite-sized fiction exercises based around a Google Images search result, and this one, probably because of our shared experiences and talks all the time in high school, really resonated. The sentiment is beautiful.

My hands shake as I reach the podium. My palm slips greasily into that of the principal. I remember him from my time here. I hated him. The microphone protrudes accusingly, and I go over my notes. Tear them up. Throw them to one side...
--, "My hands shake as I reach the podium."

Hope all is well with you!


Say Hello to Kyma

The Kyma timeline environment. Not mine!


In balancing out such a crazy term, a few things were going to have to suffer. I present to you:

Sound Designing 2

Kyma Project - Scanner Duel

Part A
Using a microphone, record a vocal temp into Pro Tools to sketch out the sound design for the given QuickTime (Scanners duel scene). Use as many tracks as is necessary and any plugin effects or AudioSuite transformations that will make your design fit the scene.
Part B
Using any of the processes available in Kyma and any appropriate sound sources, create a sound design for the same scene, using the vocal temp that you created in part A as a template or blueprint.

Kyma Kyma Kyma. Such a wonderful and alluring word! I knew next to nothing about what this thing was or what it did, just that it was a device we had somewhere, buried in one of our back rooms, and that with the right level of arcane mastery, it could be used to create these incredibly techy, otherworldly textures and sounds that you couldn't get anywhere else. I knew we'd spent almost no real class time on it, and that however much each student chose to master the Kyma in the time given to him was solely on him. I was ready.

What is a "Kyma?"

This should start things off:

Kyma is a visual programming language for sound design used by musicians, researchers, and sound designers. In Kyma, a user programs a multiprocessor DSP by graphically connecting modules on the screen of a Macintosh or Windows computer.
-- Wikipedia, "Kyma (sound design language)"

Essentially, the Kyma is a very expensive, very powerful and -- in my limited experience -- very niche sound design tool. It comes in two parts:

  1. a sort of programming environment where one can set up large processing chains with which to mutilate your choice of source sounds, from loaded-in samples to stuff you synthesize yourself
  2. and the external Paca Rana box that crunches all the 1s and 0s that execute that processing chain. The box is essentially a tiny computer whose every cycle is devoted to rendering sound and nothing else.

All of the components within the Paca itself are of really high quality, and that dedicated processing power means that no matter how much you want to stretch out, granulate and murder that sound, it's still rendered very smoothly and naturally - and without messing with everything else your system has to do on its own, like run Pro Tools and whatever other plug-ins you have set up. It seems like it'd play very well in even crowded sessions.

Not as cute as the real thing.


The Kyma environment as a programming language is custom-designed for sound and sound alone, and genius founder Carla Scaletti has been at it for decades. It teases you in with seemingly infinite possibilities, a playful, hippie-ish sort of branding message and some very cool credits to its name.

If you've seen WALL-E, you've heard Ben Burtt's work on the Kyma in the title robot's adorably lilting voice. Video below, jump to a little past the 6min mark to see some of what the system's capable of in terms of voice construction.

That box, unfortunately, will also run you upwards of $3000 or $4000. And while getting started in that programming environment and making some very out-there sounds with it is pretty quick, it's not something you can jump into and use to go from a pre-planned Point A to Point B. Especially not in the one day timeframe I had left myself for this last assignment of the term.

Start to Finish in a Day



As I said, I was ready to kill this one. I'd heard what this box could do in the work of our badass late-term student, Javier, and I was ready to bend this thing to my will. But the film collaboration happened, the soundscape happened, the days rolled by without me learning much more than how to get the Kyma up and running.

Finally, I found myself at the end of the term, one day between myself and the deadline, and without a plan. Always a great place to be; I love working under pressure!

If you've been reading me, you know I can't work without a little planning. Here's what went through my head before I dove in on this one:

I made the decision to phone in the vocal temp. Yep, it was part of the overall assignment and grade, but I had missed the day's class we had been given to record and clean it up (ironically, because I had been up late learning the Kyma the night before and slept in). With 24 hours to go, it didn't make sense to spend the little time I had left polishing up the blueprint instead of the actual final piece. For better or worse, this needed to spring straight from my head to canvas in one go. Because of this, you're not hearing my vocal temp. At least not without buying me a drink.

I watched the scene. Many, many times. Sounds obvious, but if you don't digest the hell out of the clip, even when you're in a rush, you're going to miss the visual cues and anchors you need to hit to create a really impactful sound design. This scene needed a lot of review here because of its extreme length and really slow pacing; there are no bullets fired, no punches or broken panes of glass or anything like that. A lot of it is just two dudes staring intensely at one another. I needed to find the moments where the stare intensified and hit them hard.

After watching and rewatching, and staying deliberately in the dark about the overall arc of the real movie, I decided:

  • This battle needed to feel dangerous, the wounds terrifying. I'd never seen Scanners, didn't know what led up to the point of these two guys clashing, but any time you're facing down a foe so strong that he can get into your head and make you tear your own face off, the audience should fucking fear a guy like that.
  • The psychic battle component should be synth-heavy, pushing me a bit into the realm of synthetic sound design, which I (still) know almost nothing about. All my experimenting was getting done on this front, and I was counting on the Kyma to give me some great, unique source material that sounded like nothing I'd ever heard but still resembled an epic battle of the minds.
  • Each character's powers needed to have a distinct voice, so that you instinctively felt when the battle was turning one way or another. Of course, the bad guy's should sound more powerful, a little darker, a little more focused; the good guy's more benevolent, but full enough to take him down.

I'd leave the recorder running and just hope I got something randomly brilliant out of the Kyma. Sad times, but with so little time left I didn't have the luxury of setting up an elaborate custom processing chain to get exactly the sound I needed. I ran what I thought was some cool source material (ice cracks, resonant metals) through some intro-level Kyma patches until I stumbled upon something workable, bounced it, then edited from there. The Kyma's contribution ended up being mostly its really nice granulation functions, which I used (and cranked up) on a slowed-down oxygen mask sample that I kept running throughout the scene as a bed for the whole encounter. You'll also hear some of these effects on the wounds the bad guy sustains early on, though I can't remember why.
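The Kyma's granulation is its own (much fancier) beast, but the core idea behind that slowed-down oxygen mask bed is simple enough to sketch: chop a sample into short, windowed grains and overlap-add them at a slower rate than you read them, stretching the material out without pitching it down. Here's a minimal Python/numpy sketch of that idea; the function name and parameters are mine, not anything from the Kyma:

```python
import numpy as np

def granulate(x, sr, stretch=4.0, grain_ms=80, density=0.5):
    """Naive granular time-stretch: overlap-add Hann-windowed grains.

    Grains are read from the source every `hop_in` samples but written
    to the output every `hop_in * stretch` samples, so the result is
    roughly `stretch` times longer at the same pitch.
    """
    grain = int(sr * grain_ms / 1000)     # grain length in samples
    hop_in = int(grain * density)         # read hop in the source
    hop_out = int(hop_in * stretch)       # write hop in the output
    win = np.hanning(grain)               # smooth grain edges
    n_grains = max(1, (len(x) - grain) // hop_in)
    out = np.zeros(n_grains * hop_out + grain)
    for i in range(n_grains):
        g = x[i * hop_in : i * hop_in + grain] * win
        out[i * hop_out : i * hop_out + grain] += g
    peak = np.max(np.abs(out))            # normalize to avoid clipping
    return out / peak if peak > 0 else out
```

Real granular engines randomize grain position, pitch and pan per grain, which is where the texture gets interesting; this is just the skeleton.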

In the end, though, nearly all the sounds for this scene came from the Massive synth and a few tweaked presets. I found a few representative patches (bad guy powers, good guy powers, general sub bass, general "scanner duel" atmosphere) and made multiple passes through the scene, scoring the individual layers "live" with my fingers on a bunch of parameter knobs. I recorded it all as MIDI, bounced it out to audio, and left it to edit the next morning. This got me a lot of good material that was already pretty in sync with the pace of the scene, which made editing it pretty easy.

Final Result

That's about all there is to say on this one - editing took it the rest of the way. The plug-ins used here included WaveArts' Panorama for stereo movement, some ring modulation, the excellent Waves Doubler and MaxxBass, and a suite of reverbs, EQs and delays. If you have any specific questions, leave them in the comments and I'd be happy to answer!
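For the curious, ring modulation is one of the simplest effects in that list: you just multiply the signal by a carrier wave, which replaces each input frequency with sum-and-difference sidebands and gets you those metallic, inharmonic timbres. A minimal sketch (nothing to do with any particular plug-in's implementation):

```python
import numpy as np

def ring_mod(x, sr, carrier_hz=440.0):
    """Ring modulation: multiply the input by a sine carrier.

    An input partial at f becomes sidebands at f + carrier_hz and
    f - carrier_hz, with the original partial suppressed.
    """
    t = np.arange(len(x)) / sr
    return x * np.sin(2 * np.pi * carrier_hz * t)
```

Feed it a 100 Hz sine with a 30 Hz carrier and you get energy at 70 Hz and 130 Hz instead of 100 Hz, which is why even a clean tone comes out sounding alien.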

What Went Right / What I Learned

  • Voice distinction. I feel like I nailed down a sort of personality for each character based on the sound of their psychic attacks; that evil boy sounds evil, and the good guy sounds a little purer.
  • Pain! Was pleased with some of the burning/searing accents on the bad guy's points of focus. This was done mostly with some drastic volume automation on the sizzles combined with pitch shifts/etc. on the sound of his powers right when his expression changed.

What Went Wrong

  • Frequency content. Working with synth presets can be tricky, because they bring a ton of broadband noise with them and sometimes fill up a lot more of the sound spectrum than you want. I didn't have a lot of time to mix this assignment before submission, which means there are some points where that synth noise really, really builds up and gets harsh. Maybe that serves the scene, but I think it'd need to be tamed a bit before this could go into the mix with BGs, dialogue, sound effects, etc. It's a bit of a spectrum hog right now.
  • Not enough time to really use the Kyma. That device is capable of much, much more than I give it credit for in this assignment, and with a little more time I would have liked to make it do what I wanted, instead of just hitching a ride on the stuff it accidentally did.
  • Injury Foley. I had the brilliant idea to use a pan full of semi-congealed oatmeal and rake my hand across it to simulate the sound of the good guy pulling his face off - but that idea came a week after I'd finished the project. That sound could have used some love.
  • Flat Fire. With a little more time, I would've stylized the fire sounds a bit more - they were left pretty dry as is, and could have fit better with the texture of the rest of the mix.

The Sound Design community online seems to agree that the Kyma is a very powerful tool - maybe the best - for very specific tasks like granulation and cool vocal morphing, but that if you don't have a lifetime to spend mastering it, there are quicker ways to get to the sound you want. I also don't really see anything it can do that Max/MSP couldn't do better, and for thousands of dollars less, if you're willing to sink the time into mastering a sound design-oriented programming language one way or the other.

All that said, I'm looking forward to spending some quality time with the Paca as the year rolls on, and maybe, finally, getting it to do what I plan (and not simply something cool but unexpected) by the end of the year. I may never see one again, right?

Please leave any questions/thoughts/criticism in the comments!