
WUIS Worklog THU 5/8

Today I Learned:

  • About scope and access. This does a lot to explain some of the compiler errors I’ve been getting when trying to build. I’m used to straight-ahead scripting stuff like .BAT files, which involves neither concept: variables set anywhere can pretty much be accessed anywhere else (an exception being within code blocks like IF statements). Turns out that public variables display in the Unity Inspector of the object their script is attached to, which lets you tweak them at runtime. Useful for debugging and tuning RTPC curve calculations and stuff like that, I guess! (There’s a quick sketch of this after the list.)
  • Read more into classes after seeing the Unity tutorial guy whizz through a bunch of stuff on them without stopping. Basic CompSci stuff that I just don’t know, and seems worth reviewing. The differences between classes, structs (though I haven’t run into these yet), functions, constructors, and instances of the above are still pretty murky right now. I will have to take an actual class on this stuff at some point, if only for the sake of having a teacher to ping with questions…
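
Here’s a minimal sketch of scope and access as I currently understand it–the class and variable names are mine, not from any tutorial:

    using UnityEngine;

    public class CoffeeKiosk : MonoBehaviour
    {
        // Public field: other scripts can see it, and Unity draws it in the
        // Inspector so it can be tweaked while the game runs.
        public float brewTemperature = 85.0f;

        // Private field: only code inside this class can touch it, and the
        // Inspector hides it by default.
        private int cupsServed = 0;

        void Update()
        {
            // Both fields are in scope anywhere inside the class...
            if (brewTemperature > 80.0f)
            {
                int refills = 2; // ...but this variable exists only inside the if-block.
                cupsServed += refills;
            }
            // Referencing 'refills' down here would be a compile error: out of scope.
        }
    }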

Debating switching the default editor back to MonoDevelop because of some intelligent stuff it does. Since it knows what scripts are loaded into the project, what’s public, what’s private, etc. it can autocomplete function references that Sublime can’t. So, using MonoDevelop would prevent me from fucking this stuff up. On the other hand, sticking with Sublime would force me to improve my knowledge of scope and access, since I’d have to be more careful.

Up Tomorrow:

  • Continuing with the tutorials! I’m thinking I’ll probably hit my Text Fighter v1.0 project after a few more. Some days, you just wanna stay on target.

WUIS Worklog WED 5/7

So at this point, this venture’s got me straight-up learning about Object-Oriented Programming, assorted computer science fundamentals, and C# and its syntax. This stuff translates readily into understanding more about Unity, since C# is one of its development languages (and these tutorials feed me easy examples of attaching scripts to objects, etc.)–but it doesn’t really have much to do with Wwise, sound, or what I set out to do. The whole of “learning to program” is a rabbit hole into which I could tumble endlessly, so I think I should take a moment to update my daily and overall goals.

The end goal, for now, is still to figure out how to cook up a Wwise project and its banks, then create a Unity project which can invoke sound calls, RTPC updates and the like from those banks, turning them into really simple game functions like, “When I push space, this sound plays” or “When I move the mouse on this axis, this RTPC increases.” (Something like the sketch below.)
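
In code, I imagine those hooks looking roughly like this–a guess at this point, since I haven’t actually written against the integration yet. The AkSoundEngine calls come from the Wwise-Unity integration package, and “Play_Blip” and “Mouse_X” are hypothetical names that would have to exist in the Wwise project’s generated banks:

    using UnityEngine;

    public class WwiseHooks : MonoBehaviour
    {
        private float rtpcValue = 0.0f;

        void Update()
        {
            // "When I push space, this sound plays."
            if (Input.GetKeyDown(KeyCode.Space))
            {
                AkSoundEngine.PostEvent("Play_Blip", gameObject);
            }

            // "When I move the mouse on this axis, this RTPC increases."
            rtpcValue += Input.GetAxis("Mouse X");
            AkSoundEngine.SetRTPCValue("Mouse_X", rtpcValue, gameObject);
        }
    }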

To hit that point, I’ll finish these tutorials, and each day:

  • List the stuff I’ve “learned” how to do (“learned,” as in, “have experienced in one or another tutorial”–but won’t necessarily have under my skin until I’ve really ventured off the rails and experimented with it).
  • Any particularly cool points or definitions I feel like I’ve covered, I’ll go into more detail on. And maybe list a few examples of how I can see that working in a simple project I’ll define as I go along.

By finishing these tutorials and daily drilling in these reminders of what I’ve learned about C#, I expect that I’ll be able to crack open the Wwise-Unity Integration Demo classes and at least understand what’s going on at a structural level. It’ll still be Greek, but I’ll be able to recognize it as such, instead of a big text file full of scary.

Today I Learned:

  • About inheritance, derived classes and base classes. Specifying a base class lets a class inherit certain members, like instance variables and functions. When you create a new Unity script, that first public class and open braces it creates specify “MonoBehaviour” as the base class. This is like some default framework class that Unity has which contains a bunch of useful functions. Makes sense. Diving deeper, it seems like there are other ways to do this kind of inheritance stuff with “delegation” and “subtyping”–but I don’t know what the deal with those paradigms is or why I’d elect to use them. Yet. (There’s a sketch of this right after the list.)
  • About different value types–int and float, for starters. I don’t know exactly why one of my example scripts has to explicitly define a floating-point value as such twice. That is, in the statement “float coffeeTemperature = 85.0f;“, why do we have to both say ‘float’ and put the ‘f’ suffix after the initialization value? (The second sketch after this list answers this.) Anyway, I’m not expecting this type of stuff to become a problem until I’m converting between data types. An example of why an audio-related function might do that: turning a super fine-tuned, precise value like mouse coordinates into an integer that falls somewhere specific on an RTPC graph.
  • About constructors, which are like functions that are used to create brand-new objects (like GUI boxes) at runtime. Decided to make some semi-interactive controls to accompany the coffee temperature test tutorial on IF statements, and ran into some compilation errors until I looked this up.
  • About loops–while, do-while, for and foreach–and when you’d use each one. Messed around a bit with trying to combine some examples.
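
On the inheritance point, the template Unity generates for a new script already shows it off. Here’s a minimal annotated version–the class name and log line are mine:

    using UnityEngine;

    // ': MonoBehaviour' names the base class, so this class inherits all of
    // its members without declaring them here.
    public class CoffeeScript : MonoBehaviour
    {
        void Start()
        {
            // 'name' isn't defined anywhere in this file; it's inherited down
            // the chain of base classes behind MonoBehaviour.
            Debug.Log(name + " is awake and inheriting.");
        }
    }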
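
As for the float question, the answer turns out to be simple: a bare literal like 85.0 is typed as a double in C#, and the ‘f’ suffix is what marks it as a float literal–leave it off and the compiler refuses the implicit double-to-float narrowing. A second sketch tying that together with a constructor (all names mine again):

    using UnityEngine;

    public class Coffee
    {
        public float temperature;

        // Constructor: same name as the class, no return type; it runs when
        // 'new' builds the object.
        public Coffee(float startingTemperature)
        {
            temperature = startingTemperature;
        }
    }

    public class CoffeeTest : MonoBehaviour
    {
        void Start()
        {
            // 85.0 alone would be a double; the 'f' makes it a float literal.
            // Without it: error CS0664, "Literal of type double cannot be
            // implicitly converted to type 'float'".
            Coffee cup = new Coffee(85.0f);
            Debug.Log(cup.temperature);
        }
    }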

Up Tomorrow:

Thinking I’ll go off the rails a little bit. As a simple but creative test of what I’ve been learning, I have this crazy idea that I want to make a button-navigable, text-based RPG fighter that does some really easy damage calculation. From what I’ve seen, tweaking the GUI may be the hardest part.

Much respect for UI programmers.

One Week Later, Thoughts

Last week, I planted the flag and declared I’d be learning C# while I wait out this strange inter-contract abeyance.

I haven’t made as much progress as I’d hoped to. And typically, what I’d do is beat myself up over that. Why hasn’t this thing taken root? Why have you wasted time doing XYZ when you know deep down that this is the best thing for you–that your mind craves these challenges, actually, on the days you’ve elected to do something else?–and other such self-eroding lines of thought.

But that’s a useless way of looking at things. There are lessons to be learned in our failures (probably even more than in our successes!) if we can find the clarity to wall off the sting of it, step back and analyze why things have gone wrong. Maybe you’ll see some of yourself in this post and in the days’ upcoming work logs. I don’t mean for this site to turn into a self-help resource, but I don’t mind exposing my flaws if it might help someone else.

The truth is that aligning my mind and body to want to work at this thing every day, in a world full of easy, alluring–ultimately unsatisfying–escapist options, is really difficult. And as darkly reassuring as it would be to think that I’m alone in that, I suspect I’m not. Success on this front means building up good habits: keeping the whole Luca unit running well on meditation, exercise, and a healthy diet, three things I’ve found to provide a lot of stability and satisfaction. Reflecting on how those things help has built some momentum towards keeping them going, and their upkeep becomes easier. As an aside, I’ve found that those three are critically intertwined, and I can’t really skimp on any of them without the others falling apart. So that’s something to troubleshoot.

I think it also means giving in to the bad habits, though, and seeing where they lead you. By not forbidding them, you naturally start to see that they don’t provide you with the same satisfaction that all of your good habits do, or that their effects are totally impermanent. You naturally start to trend away from them, even as you’ve given yourself total permission to explore them in the face of your less sexy productive options.

This is what I’m finding in the face of some clarity today. Long as I’m learning–be it C# or how to short-circuit my natural tendencies towards more consistent progress–it’s all good.


I wanted to share this song with someone after rediscovering it this morning.


I find this track irrepressibly beautiful and hopeful in a way I can’t easily describe. And I’ve felt this way ever since stumbling across it three years ago.

Every little element comes together in just the right way to serve the meaning I’ve ascribed it. The thin, unpracticed vocals and kitchen sink percussion; that impossibly low bass line and how it wavers on the edge of breaking some oscillator or amp. The one, single variation on that chip sound when it bends up and down on the final chorus. And the terrible amount of reverb that glues it all together.

A friend of mine put it well: “There are a lot of songs that sound like this, actually…but this one is special.”

What I think it is, is that this track feels like being young as you experienced it–not as someone looking back.

Thanks for reading!

WUIS Worklog WED 4/30

No response to my Audiokinetic support post just yet re: being unable to preview audio within the Unity player due to an apparently missing .dll. So, I’m forging ahead with exploring Unity as a standalone component, divorced of all audio. I’ll loop back to trying to create a script that plays a sound on something as I gain a little experience. Frightened that I’ll probably have to start taking C# courses at some point.

  • Read through some basic Unity help stuff to familiarize myself with the various work areas / parts of the workspace.
  • Decided to check out one of Unity’s recommended beginner projects–the ‘Stealth’ game–as a way to get me on rails. Realized from the comments that a) the tutorial apparently doesn’t teach very well and b) it requires Unity Pro, which I’m not paying for yet. Started looking for another option.
  • Started in on an example Scripting tutorial from the Unity website: http://unity3d.com/learn/tutorials/modules/beginner/scripting/scripts-as-behaviour-components
  • Realized I already hated MonoDevelop (the included, default .cs editor for Unity) and endeavored to make Sublime Text 3 the new default. Turns out, there’s a guide for that. Went ahead and followed all the instructions there, including configuring a .sublime-project file that keeps stuff Sublime can’t really edit (like images, materials, etc.) out of the left-hand panel tree.
  • Worked through the scripting tutorials until a venture into uncharted territory led me into a wall.
  • After doing the Variables and Functions tutorial, I decided to try to branch out and figure shit out. I then hit a stumbling block while trying to make a function that could accept two parameters:

    Assets/Scripts/VariablesAndFuntions_Luca.cs(28,25): error CS0236: A field initializer cannot reference the nonstatic field, method, or property `VariablesAndFuntions_Luca.MultiplyVars(params int[])'

  • Looking that up took me to this Stack Exchange discussion, which actually didn’t help me out of the jam. I’m thinking this is basic C# syntax stuff and I probably shouldn’t be venturing into this territory just yet. (A reconstruction of what I think was going on, and the usual fix, is below.)
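
For the record, the issue turned out to be reproducible in a few lines. From what I can tell, C# initializes fields before the object instance fully exists, so a field initializer isn’t allowed to call one of the instance’s own methods–hence CS0236. A reconstruction (the real script was messier, and the names here are cleaned up):

    using UnityEngine;

    public class VariablesAndFunctions_Redux : MonoBehaviour
    {
        // What I was effectively doing: calling an instance method from a
        // field initializer. 'this' isn't usable yet at that point, so the
        // compiler stops you with error CS0236.
        // int product = MultiplyVars(6, 7);   // <-- won't compile

        int product;

        void Start()
        {
            // The usual fix: make the call once the object exists, e.g. here.
            product = MultiplyVars(6, 7);
            Debug.Log(product);
        }

        int MultiplyVars(params int[] values)
        {
            int result = 1;
            foreach (int value in values)
            {
                result *= value;
            }
            return result;
        }
    }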

It felt good to actually make stuff. I’ve got a little cube that drops onto the floor!

I’ve realized it’d be neat to be able to embed the results of each day’s work within a Unity Web Player applet on here, so I’ve got little bits of ‘playable’ progress I can look back on now and again.

Work will probably resume Friday, as I’ve got to help a friend move tomorrow.

WUIS Worklog MON 4/28

  • Installed Wwise, Unity.
  • Read through the Wwise Unity Integration Demo documentation and downloaded any listed prerequisites.
  • Created two blank Unity projects–one for sandboxing around and testing out the integration (WwiseUnityIntegration_v2013.2.5_Windows), one for specifically working with the provided demo (WwiseUnityIntegrationDemo_v2013.2.5_Windows).
  • Watched Craig Deskins’ helpful video on Wwise-Unity integration to get a better sense of how all this stuff links together–what folder structures Unity expects, what terminology’s being thrown around (“Scenes,” “Assets,” etc.), things like that. Video is possibly a little outdated, as Audiokinetic seems to have stepped its game up with the provided integration package in the last few years. But, it was still really useful!
  • Stepped through the Demo documentation until I hit my first stumbling block–a .dll missing when I attempt to play audio in the Unity player. Posted a question to the Audiokinetic support forums to try to unblock myself here. Compiling the game into an executable and then running that seems to work fine, but it’d be great to have audio working within the Unity IDE proper.

I’m realizing that I need a better sense of how Unity works on its own, apart from Wwise. And, that I’ll likely have to make a very, very simple game to stretch my legs.

So, I’ve decided to dive into Unity’s own projects and tutorials for a better look under the hood, while I wait to see if anyone’s got a solution to my Wwise audio not working. I’m thinking I missed some key redistributable / bundle of libraries somewhere in the setup process (running Windows 8.1), but who knows.

Wwise, Unity and Starting Something

I’ve spent the last few months on contract break / forced sabbatical from my time at Microsoft. And though it’s been a professional void, it’s been personally fruitful. Thanks to living like an antisocial monk for most of 2013, I’d put away enough to take a long trip into Southeast Asia and wander about for a month.

(That deserves its own post–which it may or may not get–but you can view my efforts at photojournaling the whole thing over on my Instagram. It starts here, and I wish there were an easier way to reverse-chronologically browse this thing.)

Travel led into more travel: I got to take a trip to the Italian homeland with my dad and brother for a week’s skiing, eating and pacing around downtown Rome. Then GDC. Then, a few weeks later, the annual VALVE Hawaii trip, which I’d been invited along to as a guest. I’m really blessed to have been able to live out this downtime as I have.

But amidst all the vacationing, the overactive brain wanders. You gotta feed it or it dies.

I’ve thought for a while that a real safe heading for game audio is the career path of the audio programmer. From my last year’s experience on Spark, I can tell you that their time is an incredibly precious commodity. If you, the intrepid Sound Designer and Implementer, are the dreamer of big things, they are the ones who turn those dreams into executable reality. I don’t care how good you are with Wwise or Unity or whatever: on any game of sufficient scope, if you’re trying to do anything that’d stand out against the forward-rushing edge of game audio, you will need a programmer’s help. Sometimes, though, you won’t get it.

What do you do then?

As preparation for a hopeful and glorious return to pay-stubbed game audio–and because I have a little game I’d like to make someday–I’ll endeavor to decode some of this low-level magic that these guys do. And, jointly because I want to keep myself on rails and give you all something to read about, I’ll be documenting what I find, showing my work, demystifying everything I can.

The simplest of sandboxes seems like a ready-made project where I can poke into some Wwise-Unity integration and figure out exactly what’s going on. I know Wwise well enough and there’s documentation on that particular spot where the middleware hits the engine.

Here’s a mission statement of sorts:

I want to hook a Wwise project directly to a game engine, preferably Unity. This means taking a Wwise project with in-built RTPCs, Switches etc. and creating brand new hooks to them within the game code, compiling and experiencing the audio moving about.

Starter questions:

  • Can I do this by taking an already-built Unity game and simply integrating a Wwise project into it?
  • What languages would I need to learn to do it?

I really don’t know anything about programming beyond some basic batch scripting stuff and a well-rusted primer on Python, courtesy of my time at VFS. So, expect a lot of frustration, doing things without really understanding how they work and, hopefully, lightbulbs coming on.

Step 1’s checking out the Wwise-Unity integration package and seeing what the deal with it is.

GDC2014

Hello! It’s been a minute. Lots to catch up on–it’s probably best to just jump into present day and go from there.

Another Game Developer’s Conference has come and gone, and I wanted to make sense of the whole experience, commit it to print before the day-to-day sinks back in. Let’s take it point for point.

The People

If I’ve said it once…

The best thing about the game industry is the people within it. This is my second year as a semi-credentialed, guess-I-belong-here attendee of GDC, going by that AAA name on my conference pass–but the people of game audio have been welcoming for as long as I’ve had intent to join them. They’re humble, kind and–thanks to the tireless #GameAudioGDC banner-flying of @lostlab–extremely visible at the conference itself.

Something I saw this year was a lot of folks going Expo Pass only, saving some scratch and eschewing the body of the conference for the networking fringe: hallway meetups and late-night business-idea sharing over overpriced drinks. When you’ve got a group as organized as game audio, it works. Each morning’s Game Audio Podcast meetup at Sightglass was an informal chance to mull over the day’s talks and go all wide-eyed about the future alongside all manner of rookies and vets. It’s so fucking cool that the group’s that close-knit, and I really need to thank Damian and Anton for setting that stuff up every morning.

My heart goes out to all the underrepresented disciplines who don’t have that same social leadership, as hanging with these guys is always the best part of the conference.

The Talks

Of course, there was a lot to watch and hear that you could only get to with a badge. Everyone I spoke with agrees that GDC2014’s talks were a notch up: ferociously technical and full of stuff you wanted to run back and put into practice. I’ve outlined two specific favorites below.

Two of the most talked-about presentations on the Audio Track were delivered one after another on Wednesday morning–and both by audio programmers. Tools, systems and implementation hooks are sexy, and a development team whose culture supports these things is one of the surest components of a great-sounding game.

Aural Immersion: Audio Technology in The Last of Us

Jonathan Lanier’s an audio programmer at Naughty Dog (do they have more than one? The luxury!) who spoke on the systems that went into the incredible sound of The Last of Us. That one was my game of the year: in an age when I’m spoiled for choice and spend far too much time considering, but not actively engaging with, my Steam catalog, TLoU had me running home from work to fire up the console, and running my mouth around the coffee machine every morning with stories of the previous night’s play. Lanier outlined the Creative and Audio Directors’ early pre-production talks, which set audio up for development support and eventual success, before digging into the technical ins and outs.

The audio team was able to ground their audio in the gritty realism of the world by hitching a ride on Naughty Dog’s tried and tested raycast engine. This let them throw lines and cones around every crumbling environment, bringing back useful information that let them filter, verb out and otherwise treat their sound. In a game where you spend so much time crouching and listening, the sum of all these subtle treatments made for some incredibly tense pre-combat situations: planning my approach as unseen Clickers shambled and squealed somewhere off in the dark, or straining just a little bit to hear Ellie and realizing I’d jogged too far ahead.
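
None of us outside Naughty Dog have their raycast engine, of course, but the core idea translates to any engine with physics queries. A toy Unity version–nothing like their actual system–where geometry sitting between the listener and a source darkens that source:

    using UnityEngine;

    // Toy raycast obstruction: muffle this AudioSource whenever a collider
    // blocks the straight line to the listener.
    [RequireComponent(typeof(AudioSource))]
    [RequireComponent(typeof(AudioLowPassFilter))]
    public class SimpleObstruction : MonoBehaviour
    {
        public Transform listener;          // drag the AudioListener's transform in
        public float openCutoff = 22000.0f; // Hz when the path is clear
        public float blockedCutoff = 800.0f;

        private AudioLowPassFilter filter;

        void Start()
        {
            filter = GetComponent<AudioLowPassFilter>();
        }

        void Update()
        {
            Vector3 toListener = listener.position - transform.position;
            bool blocked = Physics.Raycast(transform.position,
                                           toListener.normalized,
                                           toListener.magnitude);
            // Snap between states for simplicity; a real system would smooth
            // the cutoff over time and use many rays/cones, not one.
            filter.cutoffFrequency = blocked ? blockedCutoff : openCutoff;
        }
    }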

What’s important is that the team never became slaves to their own systems, putting the technique above the telling. They tried out HDR–the silver-bullet audio solution of 2013–and found it didn’t fit the type of perspective they were trying to put you in. So they rolled their own dynamic mixing solution. They liked the way enemy chatter faded out over distance, but that same falloff curve meant some key dialogue with Ellie could go unintelligible. So they sent enemy and friendly NPC dialogue through separately adjustable wet/dry treatments and reverb buses.

TLoU’s audio tech is impressive, but nothing any AAA studio couldn’t have dreamed up themselves. It’s the fact that they got so much of it into the game–and had a studio that believed in audio and gave them the resources to do all of that–which turned it into the greatest-sounding game of the year.

The only shitty thing about this talk is that it was double-scheduled alongside A Context-Aware Character Dialog System, so you had to pick one or the other–but not both. One to watch on the Vault later on.

The Sound of Grand Theft Auto V

This was the Audio Track talk that sidelined everyone this year: Alastair MacGregor’s an audio programmer from Rockstar who brought with him an overview of what it took to accomplish the sound of Grand Theft Auto V. I feel Rockstar doesn’t often go public about their methods and techniques–as Anton said in the podcast, Alastair’s name on the program felt like “someone from Rockstar being let outdoors”–but I don’t think anyone expected them to reveal what they ended up showing.

GTAV features some 90+ hours of recorded dialogue, heaps of licensed music and sound design in what is almost certainly the audio-budget record-breaker of the last generation. All of this was powered by Rockstar’s internal audio toolset, RAGE. It’s maintained and developed by a team of audio programmers and sound designers who seem to be staffed there independent of any specific game project, i.e. they’re a dedicated team. They’ve been iterating and improving upon RAGE since around the time of Grand Theft Auto IV, making RAGE–now versioned 3.0–at least five years in the making.

RAGE is insanely comprehensive in what it facilitates; it reads like a game audio Christmas list fulfilled. Thankfully, volunteers and event management were on hand to scrape flying chunks of blown mind off the walls as Alastair touched upon feature after feature. Here are a few highlights; you’ll want to try to catch the talk or someone else’s summary for more, because there was more.

GTAV didn’t even ship on PS4, ergo: there is and will be more.

How RAGE Wins Everything

Synchronicity 3.0
When the team started running up against the wall of lining up microfragments of weapon audio and trigger timings, the RAGE team responded. The engine allows for sub-frame (i.e. more than once per 1/30th of a second–more frequently than most stuff in the game ever makes a call), synchronous, sample-accurate triggering of multiple assets in different formats. Designers could stack one gun layer in uncompressed PCM and another wrapped in XMA–which would need a little decoding–and the engine accounts for this, keeping everything locked together. Did I mention that GTAV was so filled to capacity that the architects had to load audio into the PS3’s video RAM to hit their goals? They did, and RAGE buffers for the transfer time out of video memory and still keeps things sample-locked.

Better Engines, Cheaper
GTAV’s cars sound much better than its predecessor’s. (I don’t know this for sure. Haven’t played GTAV yet! But, I’m taking Alastair’s word for it.) Beyond simple loops, each instance of a car in GTAV is kitted out with not one but two granular synthesizers–one for processing engine sounds, another for exhaust–that split source recordings into tiny, reassemblable grains at runtime, stretching the audio further and reducing memory usage. Naturally, RAGE features a nice graphical interface for the audio designers to tune these synths in, offering fine control: what sections of a specific sample to granulate, how to blend between those areas to create convincing idle transitions (which, as steady, non-pitching sounds, are typically poor candidates for granulation). They’re even able to specify the percentage of grains to use from each section to get really gritty about memory usage–get the sound believable, then start paring the complexity back and ride that fine line. Thoughtful options like this mean these synthesizers run with brutal efficiency, so that even the CPU load of two instances per car–and the game features a lot of cars–makes for an effective tradeoff versus loading fatter loops into memory. GTAV’s programmers are seventh-dan masters of the Cell processor architecture.

Like Promethean Fire
There’s lots of talk about procedural audio these days: sounds spun up entirely out of oscillators and code, costing very little memory at the expense of some CPU usage. The idea is that, at its best, procedural sound can free up valuable memory for larger, necessarily manmade assets like voiceover and orchestral music by covering all the little bits that maybe don’t need to sound 100% realistic–footsteps, physics sounds, etc. At least, that’s where most of us have been setting the near-term bar, because even making those sorts of sounds out of thin air is really freaking hard to do. The general consensus has been that procedural audio is coming, but isn’t quite ready just yet.

Except that fully 30% of the sound effects in GTAV were created using RAGE’s procedural audio editor.

Fucking 30%. Of a game that large. That shipped on the last generation.

Alastair spent some time demonstrating RAGE’s modular synth-like interface that helped make this possible. It allows their audio designers to craft and tinker towards a procedural sound asset before exporting that synthesizer configuration as an asset that can run in-game. He auditioned a few that might as well have come from a microphone; apparently, Rockstar’s sound designers are pretty much all Neo. This part of the talk thrust me through the full ten stages of denial and I eventually came around to stunned bewilderment.
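
Rockstar’s editor is obviously proprietary, but to ground what “a synthesizer configuration running in-game” even means, here’s the hello-world of procedural audio in Unity terms–a pure tone generated from code, no audio file anywhere (and nothing to do with RAGE):

    using UnityEngine;

    // Attach next to an AudioSource (or the AudioListener); Unity then calls
    // OnAudioFilterRead on the audio thread and we fill the buffer from math.
    public class SineVoice : MonoBehaviour
    {
        public float frequency = 440.0f; // A4
        private float phase;
        private float sampleRate;

        void Start()
        {
            sampleRate = AudioSettings.outputSampleRate;
        }

        void OnAudioFilterRead(float[] data, int channels)
        {
            float increment = frequency * 2.0f * Mathf.PI / sampleRate;
            for (int i = 0; i < data.Length; i += channels)
            {
                phase += increment;
                if (phase > 2.0f * Mathf.PI) phase -= 2.0f * Mathf.PI;
                float sample = 0.25f * Mathf.Sin(phase); // keep the level sane
                for (int c = 0; c < channels; c++)
                {
                    data[i + c] = sample;
                }
            }
        }
    }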

tl;dr Rockstar’s audio tech is years ahead of everyone and we all had no idea.

Everything Else

Gosh, there’s still so much to go over. FMOD, Wwise and Fabric battling it out to become the de facto indie audio solution of the future, just as Unity spools up its own tech. Unreal Engine coming down from its status as a AAA platform to meet the little guys with a cheapish subscription model, and throwing back the curtain on Blueprint, its new visual scripting tool for quickly creating awesome-looking stuff.

It was a week of ingestion whose digestion continues. I’ll likely have more to say once the whole of the conference hits the online Vault. The plan is to kick back and nerd it up with some coworkers, catch all the stuff we missed from the Audio Track and beyond. I’m sure there’s lots in there that’d equally inspire.

For now, it’s time to cool my spending, crack into a side project or two and thank everyone who made last week so amazing.

#GameAudioGDC is a truly happy place.

Dialog Reel

I thought it might be a good idea to cut together several different short reels for different purposes — not everyone’s going to be interested in my sound design or implementation if they’re looking for help with a radio spot or cleanup on a speech-only PSA. Sometimes, they may only want to see a selection of what I do.

The latest of these reels is meant to highlight some of the shorts I’ve done Dialog Editing or Mixing on, and it’s below.

I found it pretty tough to find good cuts from my projects that would ‘showcase’ dialog work, as it’s usually the type of thing that slips into the background when done well. It’s not really meant to be noticed. And so while the general public is still semi-clueless as to what goes on in post-production audio, I think there’s a special lack of understanding around dialog work. If you tell someone that you cut sound effects on Transformers, they might stop and realize that Optimus Prime, as a CG construct, didn’t make all those crazy transforming noises by himself. But tell them that you smoothed tone transitions between cuts on the production track or cleaned up unwanted movement and mouth noise, and you’d probably get a blank stare.

I think that the examples I chose for this reel and the way I structured it may help to show even the unfamiliar viewer what a Dialog Editor / Mixer typically does — and to those of you already familiar with it, I hope you like what you hear.

As always, thoughts and comments are welcome!