Similarities Between Different Audio Disciplines Part 2


Part one covered the differences and similarities between dialogue editing and podcast mixing and between sound reinforcement for musical theater and themed entertainment. Here we will be comparing two other surprisingly related audio disciplines: game audio and sound design for themed entertainment.

Collaboration

Every medium benefits from collaboration. Video games and themed entertainment can’t be made without it! Collaboration manifests in many ways throughout the design process, so I’ll focus on communicating with the art and programming departments. Collaboration between those departments and audio designers looks remarkably similar when we compare game audio processes with themed entertainment workflows.
Decisions the art department makes affect the sound design. In game audio, it is very hard to put sound effects to an animation that does not exist yet, or even to create ambience and music when you do not know what the world looks like. It is wise for sound designers to check in early in the process. Storyboards and renderings can tell a lot about the world of a game before the game itself is even built. Incomplete or temporary animations are often more than enough to get started on sounds for a character. The sound designer combines these resources with clarifying questions: What is this character wearing? Are the walls in this room wood or stone? And so on.
Reaching out to the art department early on informs the sound design for themed entertainment attractions as well. Working off of a script and renderings is a great start, just like with game audio. Set designers for live events will also provide draftings of the scenery, such as ground plans and elevations. This paperwork informs speaker placement, acoustics, backstage tech rooms, and audience pathways. It is wise to have conversations with the art department early on about where you need to put speakers. Directors and set designers will typically want audio equipment to be hidden, and it is the sound designer’s job to make sure that there is a speaker for every special effect and that there are no dead zones. Revealing your system to the rest of the team only after installation will frustrate not just other departments but also the technicians who ran all the cable and hung all the speakers, and who may now have to redo that work. You may also end up without the best speaker placement. Communicating from the start will empower you to advocate for the ideal placement of equipment and to ideate with the set designer on ways of hiding it.

Then there are the programmers. Programmers implement sound effects and music and integrate all the art and lighting assets; in video games, they also code game mechanics. Establishing an ongoing process ensures that sounds are played when and where you want them to be, and the programming team might even have some cool ideas! In both mediums, the most obvious way to relay necessary information is by keeping an asset list. The asset list should say what the sound is, where and when it plays, how it is triggered, whether it loops, the file name, the file length, the sample rate, the number of channels, and any short creative notes. It is also wise to meet with the programmers early and often, so they can flag any limitations on their end. They are implementing your work, so a positive relationship is a must.
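To make that concrete, here is a minimal sketch of what a single asset-list entry might capture. The field names and values are hypothetical (most teams keep this in a shared spreadsheet rather than in code), but the information mirrors the list above.

```python
# Hypothetical asset-list entry; the fields mirror what programmers need to implement a sound.
footstep_snow = {
    "file_name": "fs_snow_walk_01.wav",
    "description": "Footsteps, walking on packed snow",
    "plays_where": "Level 2, mountain pass",
    "trigger": "animation event on each footfall",
    "loops": False,
    "length_seconds": 0.4,
    "sample_rate_hz": 48000,
    "channels": 1,
    "notes": "Randomize pitch slightly so repeats do not get boring",
}
```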
These examples of interdepartmental collaboration are just the most common ones, and the most similar between video games and live attractions. More on programming audio later.

Non-Linear Processes

When I say that the process is non-linear for both disciplines, I do not just mean that the sound design is iterated on until the desired outcome is achieved. Both types of experiences have to be designed around conditions, not a linear timeline.
In video games, sound (or a change in sound) is triggered when the player enters a new room, area, or level; when they encounter, attack, or kill an enemy; when their health status changes; when a button is pressed; and so on. In themed entertainment, sound plays or changes when the audience enters a new room; when an actor decides to jump at them (hitting a button to activate a startling sound); when a guest interacts with an object in the world; and much more.
Notice how all of these things may occur at a moment in time, but they are triggered by conditions determined by choice and interaction, or by other conditions that have previously been met. Before the product is launched, sound designers need to be able to work on any part of the project at a given time or make adjustments to one part without affecting the rest of the experience. In addition to having a cohesive design in mind, designers in both sectors need to plan their sounds and programming so change can be implemented without a terrible domino effect on other parts of the experience.
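To make “designed based on conditions” concrete, here is a rough, engine-agnostic sketch in Python. The function names are hypothetical stand-ins for whatever middleware or show-control calls a real project would use; the point is that audio reacts to state and events, not to a timeline.

```python
# Hypothetical sketch of condition-based (adaptive) audio. play_sound() and
# set_music_state() stand in for real middleware or show-control calls.

def play_sound(name, loop=False):
    print(f"play {name} (loop={loop})")

def set_music_state(name):
    print(f"music state -> {name}")

def update_audio(state, events):
    if events.get("entered_area"):
        # Entering a new room/area/level swaps ambience and music.
        set_music_state(f"music_{state['area']}")
        play_sound(f"amb_{state['area']}", loop=True)
    if events.get("scare_button_pressed"):
        # An actor or enemy trigger fires a one-shot startle sound.
        play_sound("stinger_jump_scare")
    if state["health"] < 0.25:
        # A condition (low health) changes the music, not a point on a timeline.
        set_music_state("music_low_health")

update_audio({"area": "crypt", "health": 0.2}, {"entered_area": True})
```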

Spatialization and Acoustics

In games and themed entertainment, the goal is to immerse the audience in a realistic, three-dimensional space. For this reason, both video games and live experiences involve realistic localization of sound. Google Dictionary defines localization as, “the process of making something local in character or restricting it to a particular place.” Music, user interface sounds, and ambiences are often stereo. However, all other sounds are assigned to an object or character, and thus are usually mono. In real life, sound comes from specific sources in a physical space, and games and attractions emulate that through object-based immersive audio. Sound sources are attached to objects in the game engine. Sound designers in themed entertainment use precise speaker placement and acoustics to trick the audience into hearing a sound from a particular source.
Both mediums require knowledge of and a good ear for acoustics. In game audio, sound designers program virtual room acoustics as part of creating realistic environments. They have to understand how the sound of voices and objects is affected by the room they are in and by the distance from the player. Themed entertainment deals with real-life acoustics, which rely on the same principles to achieve immersion. Knowing how sound will bounce off of or get absorbed by objects will inform speaker placement, how the audience perceives the sound, and how the sound designer can work with the set designer to hide the speakers.
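One concrete piece of that shared acoustics knowledge is distance attenuation. As a simplified free-field sketch (real rooms add reflections and absorption on top of this), a point source drops roughly 6 dB for every doubling of distance, which is what both game-engine attenuation curves and speaker-coverage planning approximate:

```python
import math

def relative_level_db(distance_m, reference_m=1.0):
    # Inverse-square law: level of a point source relative to the reference distance.
    return -20 * math.log10(distance_m / reference_m)

for d in (1, 2, 4, 8):
    print(f"{d} m: {relative_level_db(d):.1f} dB")  # 0, -6, -12, -18: about 6 dB per doubling
```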

Both mediums implement audio via specific sound sources and 3D audio environments, and audio designers have to understand acoustics to create realistic immersion.

Audio Implementation for Adaptive Audio

Earlier we talked about condition-based sound design, where sound is triggered when conditions are met. Possible conditions include entering a new room, encountering an enemy, or pressing a button. The actual term for this is “adaptive audio.” Games and attractions may have linear elements, but both use adaptive audio principles. So how do those sounds get from your DAW into the game engine (video games) or out through the speakers (themed entertainment)? There is another step in between: implementation.
Games use something called middleware. Sound files are brought into the middleware, where they are mixed and programmed. Sound designers can even connect to a build of the game and rehearse their mixes. Common middleware programs are Wwise, FMOD, and Unreal Engine’s Blueprints. Some game studios have their own proprietary middleware. Developers will then integrate metadata from the middleware into the game. On a very small team, the sound designer will also program in the game engine. On larger teams, there is a separate technical sound designer role that handles programming. No matter the team size, game audio designers implement audio via middleware.
Themed entertainment attractions, on the other hand, use something called show control software. Show control software mixes and routes audio signal and patches inputs and outputs (alongside other DSP). Show control software is also where triggers for all the technical elements of the experience are programmed. Triggers can include, but are not limited to, a “go” button (at the most basic, the spacebar of a computer), a contact closure, OSC commands, or MIDI commands. Sound, lights, automation, and video can all be triggered from show control software. Think of it as a combination of audio middleware and a game engine. Examples of show control software are QLab, Q-SYS, and WinScript. On a very small team, the sound designer will create content as well as program the audio and all the show control for the experience. As teams get larger, there are more roles. Sound content may be a separate role from the programmer, and there may even be a programmer, a sound designer, a mixer, and someone dedicated to installing and troubleshooting IP networks.
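As a tiny illustration of how a trigger can reach show control, the sketch below sends an OSC command from Python using the python-osc library. The address and port are assumptions based on QLab’s documented defaults (it listens for OSC on port 53000, and “/cue/1/start” fires cue 1); always check your own software’s OSC dictionary before wiring anything up.

```python
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

# Assumed defaults: QLab listens for OSC on port 53000 on the show machine.
client = SimpleUDPClient("127.0.0.1", 53000)

# Fire cue 1 - roughly the same as pressing "go" on that cue.
client.send_message("/cue/1/start", [])
```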
Both video games and themed entertainment require some knowledge of audio implementation. Even if the sound designer is focused solely on content, they need to have an idea of how audio gets into the experience so they can communicate effectively with the programmer and build content appropriate for the sound system.

Agility

As mentioned in part one, many audio disciplines have more in common than most audio professionals realize. The past few years have shown us that being agile can sustain us in times of unpredictable circumstances such as labor strikes, economic uncertainty, and pandemics. Opening up our minds to applications of similar skills across mediums can also open up new job possibilities.

 

5 Sound Design Sketches 

Sound design is as much of an art form as painting, sculpting, acting, dancing… insert any of the visual or performing arts here. One could argue that sound design is totally a performing art! Effects, ambience, and music all have a huge impact on how a  story lands for an audience. It is the sound designer’s job to manipulate all the parts so they interact with each other appropriately to have a profound emotional impact.

In the same way that actors will do readings or visual artists will sketch for practice, we sound designers also need to practice our craft independently. This article contains ideas to make sound design a consistent, self-motivated practice. Rather than going into technical how-tos, the following ideas assume the reader knows the basics of audio engineering and thus focus on creativity and inspiration instead.

Record everything.

This is perhaps the most accessible exercise you can undertake, as well as the most practically useful. Through a consistent practice of recording, we sound designers tune in to the world around us while building our sound effect libraries. Handheld recorders are not that expensive, or in the worst-case scenario, you can even use the voice memo app on your phone. Ideally, you have a handheld recorder and are capturing in stereo at a minimum.

Take your recorder everywhere. Start by grabbing ambiences. Really tune in and listen to what is happening in the world around you while you are recording, and do not stop until there is a lull in the action. (For example, it would be unfortunate to stop a recording of a street in the middle of a car passing by.) When recording ambiences that stay pretty consistent, such as walla, some nature ambiences, or even room tone, grab no less than one minute – and that may even be too short sometimes. Capture any less and future loops will be tricky to edit or will sound boring.

Tune in to the world around you for singular things that sound interesting close up too. Maybe a crosswalk signal makes a unique sound or you have a loud washing machine; get right up to it and record it.

Go beyond spontaneous recording and also make time to create noise. Breaking celery can make a great sound for breaking bones; a suitcase latch can be a safety mechanism on a pistol; squeezing cooked mac and cheese in your hands could be a basis for gore sounds. There is so much magic in recording an everyday object and making it sound like something else.

Do a sound design for a video clip.

The only way to get better at sound design is to do it as much as possible. Going back to our visual artist analogy, sound redesigns are to sound designers as pencil sketches are to painters. It’s our artistic exercise when we are not actively working on a paid project.

A video clip can be from anything that excites you, such as a movie or a playthrough of a video game. Try to find something that is not too current, as people who see this clip may have expectations for how it sounds based on their recent memory. The cool thing about this exercise is that the end goal can be whatever you want it to be. Once in a while, you should do a thirty-second to one-minute clip that is a full sound design as a portfolio piece; but the real practice comes in taking even just a five-second clip and digging into one element, whether you focus on weapons, foley, sci-fi, vehicles… When you focus on one element of a video clip, it is important to record, process, and otherwise create all the sound effects from scratch. Do not throw in sounds from a library unless there are elements that are impossible to create on your own or they are being used as a layer within a sound effect; the point of the exercise is to practice creating sounds from the ground up.

To level this exercise up, create redesigns of the same clip in different genres. What do the objects in the video sound like if they are in a rom-com? Then, what if you took the same clip and designed it as a psychological thriller?

Sit down and play with a new plugin or piece of gear for 30 minutes.

To put it bluntly: there is no point in having five analog synths or fifty reverb plugins if you are not familiar with any of them. You can learn a lot by penciling in a set amount of time to learn something specific. Your work becomes so much stronger when you adequately know your tools, and it is so much more effective to have a few plugins that you know well than a bunch that you do not know how to use!

Create a sound design prompt with ChatGPT.

Type in, “sound design prompt” – literally. ChatGPT will spit out a suggestion that has a ton of detail, or maybe not much at all. Hit “generate” until it gives you something inspiring yet challenging. Then, time-block this exercise too. Give yourself between two and four hours and go nuts. Approach it how you want; work in a new DAW, or don’t; record sounds from scratch, or don’t. Really treat this like a “sketch” and make something detailed and unique. And see what you can come up with in just a few hours. Give yourself an added bonus by purposefully integrating something new, as discussed in the previous “sketch.” Perhaps you aim to use a feature or keystroke in your DAW, dig into a plugin, or try a new recording technique.

Listen.

An exercise many new sound designers take for granted, and perhaps one everybody fails to do in our fast-paced existence. Really listen. Tune in and take mental or written notes. Here are some considerations, and this is by no means an exhaustive list.

In real life: What do you hear around you at any given moment? Where are different sounds coming from?

When watching movies or television: How are different elements balanced in the mix? Can you identify the low, mid, and high-frequency layers of sound effects and how do you think they were made? What do you think the editors used for different effects?  How is sound used as a thematic and storytelling tool?

When playing video games: all the same things as for movies, and even more. How does music change when you enter a new room, have low health, etc.? How are things spatialized? How is sound utilized to give players feedback? How did the sound designers keep events that play over and over from sounding boring?

When listening to podcasts: When is scoring used? How are scenes built with sound? How is sound used to transition from one place to another?

When listening to music: Can you pick out all the instruments? When are individual instruments not playing, and when do they come back? How are instruments panned? What effects are used? Zone into one instrument and pay attention to what it does for the whole song. And – how does the song end?

Take any one of these ideas and do a little bit every day. You would be surprised how many projects you complete and how much you improve over time! As with anything, the hardest part is getting started. Remember that the Mona Lisa was not painted in a day, and each “Star Wars” movie took months or years to sound design and mix. Leonardo da Vinci and Ben Burtt practiced their art their whole lives leading up to and past those accomplishments. Start now, and give yourself permission to just practice without expectations, experiment, and have fun along the way.

Essentials & Creativity of Location Sound

Sound designers for films and podcasts have access to many amazing tools to match and enhance the audio recorded in the field. There are multitudes of audio repair options, as well as EQs, reverbs, preamp simulators, saturation plugins, stereo field wideners, and a ton of sound libraries. Yet, the technology available to us can still only do so much. We can make our projects sing by recording more quality options on location. This is a guide on how to capture audio in the field and why it matters for post-production. Although I use film terminology throughout this article, these recommendations can apply to any medium.

Gear Recommendations

First, a note on best practices specifically for recording dialogue: I’ve worked on films and documentaries where the only audio I had was from a lavalier mic. Lav mics often sound chesty and unnatural, so it takes a long time in post to get the dialogue sounding crisp and clear. Clothing rustle and other movement noise from lavs take a long time to repair as well. Ideally, dialogue is captured on set with a shotgun mic, with lavs as backup options. Shotgun mics are also handy to have in case the wireless catches interference. Booms usually can not be used during wide shots, but you can point the shotgun elsewhere and record ambience. (Even though that would be a mono recording. Still good to have options!)

Go beyond capturing dialogue when planning out gear. Spec out a kit that can get stereo recordings, especially outside. When editing and mixing the final product, extra environmental recordings can serve as a bed under dialogue and help create smooth transitions into and out of scenes. Stereo backgrounds set a more immersive and natural-sounding environment and are a satisfying, yet basic, sound design method. Stereo audio can easily be captured with the mid/side technique, but if that option is not available to you, grab a stereo field recorder and record the environment before or after the interview or shoot.
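If mid/side is new to you, the decode back to left/right is simple sum-and-difference arithmetic, which is part of why the technique is so flexible in post. Here is a minimal numpy sketch, assuming the mid and side tracks are already aligned and at matching gain:

```python
import numpy as np

def ms_to_lr(mid, side):
    # Standard mid/side decode: left = mid + side, right = mid - side.
    # Raising or lowering the side level before decoding widens or narrows the image.
    left = 0.5 * (mid + side)
    right = 0.5 * (mid - side)
    return left, right

# Dummy one-second example at 48 kHz.
sr = 48000
mid = 0.1 * np.random.randn(sr)
side = 0.05 * np.random.randn(sr)
left, right = ms_to_lr(mid, side)
```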

If you have the inputs available on your mixer/recorder, arrive on location with an extra mic or two that you can plant to capture other audio. Is there a babbling brook nearby? It might be cool and interesting to record that on its own channel during the gig, so the sound designer can layer it in. Same if there is a sidewalk with foot traffic in the background – hide a mic behind a trash can or in a bush (out of the shot, for film), and boom! You have environmental audio that is synced with the rest of the audio in the scene. For something like this, where you are not capturing anything specific, you could probably use an omnidirectional mic. But I say it is fine to use whatever you can get your hands on. It is far better to have audio recorded than a missed opportunity because you couldn’t get the perfect microphone.

To sum it up, here’s your list of gear: 1-2 wireless receivers/transmitters and lavalier mics, 1 shotgun mic and boom pole, 1 handheld field recorder, and/or a mid/side setup (a bidirectional mic, a hyper-cardioid or omnidirectional mic, and a blimp and pistol grip), and of course your trusty mixer/recorder such as a Zoom F8 or Sound Devices MixPre-10. And hopefully other random microphones!

Best Practices

These are blue sky recommendations, so your projects may not allow you the time for all of these. If you can go back to a location and get purely environmental recordings, I highly recommend it. Some of these ideas are things you should advocate for in a production meeting before you step foot on location.

On the note of boom operation – ask the producers when they plan to do a site visit. Site visits are essential to figuring out wireless solutions, power, and possible sources of unwanted noise. In more run-and-gun situations, they are helpful to gain familiarity with the terrain before the shoot. Camera operators get the assistance of a spotter – location sound mixers/boom operators do not. Understanding the terrain beforehand will enable you to keep your boom steady and out of the shot, and reduce the risk of you tripping and getting hurt.

Try to carve out time to get extra audio of the environment or the room. In the post-production phase, it is helpful to have options to create smooth transitions into and out of scenes, with the added benefit of having audio to build an immersive scene through sound design. For indoor scenes, a minute of audio per room is usually fine. Advocate for a “meditation minute” where no one moves or talks on set. Since there is more variability in the environment outdoors, three minutes is usually best. It may be more practical to go back and get that audio, or to stick around after the gig.

If you can swing it, try to grab other recordings of cars passing, planes, etc. If you need to stop recording, or a cut is made in post while one of those is occurring in the background, it is unnatural and jarring to hear that element suddenly drop out. And background sounds can not always be removed.

Then there are the things you should try to avoid recording while capturing dialogue: heavy traffic, airplanes, HVAC, fans, unwanted conversation, etc. Discuss sources of unwanted noise with your director/producer during the site visit so they are aware and can hopefully make plans. And if a plane flies overhead or a car passes by, or there are any issues at all, tell production to hold for it. As audio people, we are generally encouraged to keep our heads down, so it can be hard to adjust to speaking up more. But in these situations, you will get so much more respect by courteously speaking up and advocating for getting good sound. (Though holding for planes only works for scripted shoots. In interviews and documentaries, there is no stopping once you’re rolling.)

Everything discussed here may or may not be possible for every project due to time and budget. I can not emphasize enough the importance of collaborating early to figure out what is possible. The end goal is to serve the project and immerse the audience. Vocalize your suggestions through the lens of bringing the story to life. Every department is there for a common goal – to make the story.

iZotope RX 101

There are many audio repair tools on the market, and arguably the most common one is iZotope RX. And no wonder – it gives the user very fine control over audio clean-up. I have come across questions from new users in several internet groups, so I thought it was about time that I shared everything I have learned about RX.

We will cover the basics: the interface, some user preferences, and the order of operations. This article will be heavily geared towards film, radio, and podcasts, but the software is also a workhorse in the music industry – I just personally can not speak to how it is used in music. Lastly, I own RX 9 Advanced, so I am giving advice from that perspective. Take the advice in this article and apply what you can to your version of RX. RX Elements and RX Standard simply have fewer modules, so some of this will be extra advice. Older versions will have slightly different algorithms, but much of this advice will still stand.

I do want to mention that I am in no way sponsored by, or being paid by iZotope, and in writing this I am not necessarily endorsing a single product. I just consider myself pretty good at using RX and want to share the wealth.

The User Interface

This diagram is simply an overview. Hover over any of these tools in the software’s interface to get their full names and uses.

The Spectrogram View

The spectrogram is the “heat map” behind the waveform (when the waveform/spectrogram slider is set to center). It gives a detailed visual of the time, frequency, and amplitude of your audio, all in one graph. The loudest frequencies appear in the “hot” colors. The spectrogram helps to visually isolate audio problems like plosives, hums, clicks, buzzes, and intermittent noises like a cough, a phone ringing, sirens, et cetera.
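If you are curious what that view is computing under the hood, the same kind of time/frequency/amplitude picture can be generated from any audio file with scipy and matplotlib. This is only a generic illustration (the file name is a placeholder, and RX’s rendering is its own), but it shows why louder frequencies show up as hotter areas:

```python
# Generic spectrogram illustration, not RX itself. Requires: numpy, scipy, matplotlib.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, data = wavfile.read("dialogue_take.wav")   # placeholder file name
if data.ndim > 1:
    data = data[:, 0]                            # keep one channel for the plot

f, t, Sxx = spectrogram(data.astype(float), fs=rate, nperseg=2048)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")  # level in dB
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="dB")
plt.show()
```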

There are some settings that need tweaking for maximum efficiency, and to suit your personal preferences. Starting with the scales on the right-hand side:

The amplitude scale should be set to dB. The other options are normalized 16-bit, and percent. Set the view from the dropdown menu that becomes available when you hover over the scale and right-click.

Right-click the frequency scale and select extended log. This is a zoomed-in view that allows you to see more detail in each frequency range of the spectrogram.

The magnitude scale should be set to decibel. The settings for the amplitude and magnitude scales ensure the accuracy of decibel readings across frequencies in the spectrogram.

Once the scales are set, open Spectrogram Settings either by right-clicking any scale on the right-hand side and selecting Spectrogram Settings from the drop-down menu, or by going to View > Spectrogram Settings in the menu.

You can save different presets depending on all the parameters in this window. But I want to draw attention to the color options. The default is cyan to orange. But I recommend blue to pink. The contrast is the most obvious when using this color combination, although it does come down to personal preference. (If you want casual onlookers to think you are dealing with paranormal activity, go with the green and white color map!)

Beware the Dangers of Overprocessing!
  
Overprocessing occurs when the user runs too many modules, or runs modules at heavy settings. It sounds like added digital artifacts, squashed dynamics, alterations to the original sound, or dropouts in the audio. I recommend three strategies to avoid overprocessing:

Only run the modules you have to. I have some processes I use all the time, but I might break my own rules if I feel like something needs a lot of one type of module. (For example, maybe I skip Spectral De-Noise if the bigger issue is too much room reverb, and I think I may need to run De-Reverb more than once.)

Run the lightest possible settings. Dial in what you think you’ll need, then back off a bit, then render the module. It is better to run one module twice with light settings than once with really heavy settings.

Check your work by clicking back in your history window. If any version of the processed file makes the audio sound worse instead of better, undo everything up until that point. After taking a listening break, you might come back and realize that everything you have done sounds worse than the original audio. That is fine. It doesn’t make you bad at audio repair. Take a breath, and start over with fresh ears.

Ultimately, your goal is to keep it natural. Bring out the speech, and do not remove the environmental ambience altogether. You can try to remove broadband noise so long as you can do so without affecting the audio you want to keep. Even some background noise is preferable to the distraction of noise cutting in and out in the background. Use RX as a tool to make voices intelligible and to assist in blending audio together seamlessly. And note: audio repair tools are not a substitute for a good recording.

Order of Operations

iZotope published this flowchart on their blog back in the days of RX 7. I more or less adhere to it and have managed to avoid creating digital artifacts for a long time.  

Here is my current order of operations when repairing audio for podcasts and films. My process is inspired by iZotope’s recommendations, but I have tailored it. All these steps I run in the RX standalone application. I have a slightly different workflow when I connect RX to Pro Tools and will go over that in a future article.

Mixing module: I run this first, and only if the audio is out of phase.

De-hum: Only if there is a hum. Also, the HPF on it is really nice; I tend to apply a 50 Hz HPF and a 60 Hz reduction. But sometimes I might not run this if I think the audio will need a lot of processing.

Mouth de-click – I use this on just about everything. Set to “output clicks only.” Dial in until I hear bits of words, then back off, and back off some more. Uncheck “output clicks only” prior to rendering.

General denoising: a combination of any of the following, though I will not usually run them all.

Spectral Denoise: Meant to target broadband noise, like hiss, or tonal noise. It tends to be super heavy-handed. I use it only if I can grab a sample of the broadband noise so the algorithm can “learn” the noise profile. Set the threshold to taste. I usually do not use a reduction of more than 7 dB, but everyone has their own preference. I have had this module remove parts of the audio of a very dynamic talker, so less is more.

Dialogue Isolate: I use this a lot! Lowers background if not removing it altogether, depending on the content of the audio. Remember to keep it natural. Try to just enhance speech over the background.

Voice Denoise: Can also learn a noise profile from a sample. More gentle than Spectral Denoise. Sometimes I use this as a finishing touch to make vocals “pop.”

Then more manual things like painting out unwanted background sounds, plosives, and clicks that mouth de-click did not catch.

EQ is my last step if I am applying EQ in iZotope.

This should be a solid start if you are just getting started with iZotope RX. Remember to keep the audio natural; your goal is to enhance the audio quality. There is also creative potential in all of these tools, but a foundational understanding will empower you to take them to that creative level. In future blogs, we will cover removing unwanted noises by hand and connecting RX to your digital audio workstation.

Demystifying Loudness Standards

               
Every sound engineer refers to some kind of meter to aid the judgments we make with our ears. Sometimes it is a meter on tracks in a DAW or that session’s master output meter; other times it is LEDs lighting up our consoles like a Christmas tree, a handheld sound level meter, or a VU meter. All of those meters measure audio signal using different scales, but they all use the decibel as a unit of measurement. There is also a way to measure the levels of mixes that is designed to represent the human perception of sound: loudness!

Our job as audio engineers and sound designers is to deliver a seamless aural experience. Loudness standards are a set of guides, measured by particular algorithms, to ensure that everyone who is mixing audio is delivering a product that sounds similar in volume across a streaming service, website, and radio or television station. The less work our audiences have to do, the better we have done our jobs. Loudness is one of the many tools that help us ensure that we are delivering the best experience possible.

History           

A big reason we started mixing to loudness standards was to achieve consistent volume, from program to program as well as within shows. Listeners and viewers used to complain to the FCC and BBC TV about jumps in volume between programs, and volume ranges within programs being too wide. Listeners had to perpetually make volume adjustments on their end when their radio or television suddenly got loud, or to hear what was being said if a moment was mixed too quietly compared to the rest of the program.

In 2007, the International Telecommunication Union (ITU) released the ITU-R BS.1770 standard: a set of algorithms to measure audio program loudness and true-peak level. (Cheuks’ Blog.) The European Broadcasting Union (EBU) then began to work with the ITU standard, and modified it when they discovered that gaps of silence could bring a loud program down to specification. The result was a standard called EBU R-128, which adds a gate: levels more than 8 LU below the ungated measurement do not count towards the integrated loudness level, which means that the quiet parts can not skew the measurement of the whole program. The ITU standard is still used internationally.

Even after all of this standardization, television viewers were still being blasted by painfully loud commercials. So Congress passed the Commercial Advertisement Loudness Mitigation (CALM) Act, and the FCC’s implementing rules took effect on December 13th, 2012. From the FCC website: “Specifically, the CALM Act directs the Commission to establish rules that require TV stations, cable operators, satellite TV providers or other multichannel video program distributors to apply the ATSC A/85 Recommended Practice to commercial advertisements they transmit to viewers. The ATSC A/85 RP is a set of methods to measure and control the audio loudness of digital programming, including commercials. This standard can be used by all broadcast television stations and pay-TV providers.” And yup, listeners can file complaints with the FCC if a commercial is too loud. The CALM Act only regulates the loudness of commercials.

Other countries have their own loudness standards, all derived from the global ITU-R BS.1770. China’s standard for television broadcast is GY/T 282-2014; Japan’s is ARIB TR-B32; Australia’s and New Zealand’s is OP-59. Many European and South American countries, along with South Africa, use the EBU R-128 standard. There is a more comprehensive list linked at the end of this article, in the resources section.

Most clients you mix for will expect you, the sound designer or sound mixer, to abide by one of these standards, depending on who is distributing the final product (Apple, Spotify, Netflix, YouTube, broadcast, etc.).

The Science Behind Loudness Measurements

Loudness is a measurement of human perception. If you have not experienced mixing with a loudness meter, you are (hopefully) paying attention to RMS, peak, or VU meters in your DAW or on your hardware. RMS (average level) and peak (loudest level) meters measure levels in decibels relative to full scale (dBFS). The numbers on those meters are based on the amplitude of the audio signal itself. VU meters use a VU scale (where 0 VU is equal to +4 dBu) and, like RMS and peak meters, they measure the signal level, not how loud it is perceived to be.
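For reference, here is a minimal sketch of what sample-peak and RMS meters are actually reporting for floating-point audio (values between -1 and 1); note that there is no perceptual weighting anywhere in these numbers:

```python
import numpy as np

def peak_dbfs(x):
    # Sample peak relative to digital full scale.
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

def rms_dbfs(x):
    # RMS (average) level relative to digital full scale.
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

# A 440 Hz tone at half of full scale: about -6.0 dBFS peak and -9.0 dBFS RMS.
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
print(f"peak: {peak_dbfs(tone):.1f} dBFS, rms: {rms_dbfs(tone):.1f} dBFS")
```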
Those measurements would work to measure loudness – if humans heard all frequencies in the audio spectrum at equal volume levels. But we don’t! Get familiar with the Fletcher-Munson Curve. It is a chart that shows, on average, how sensitive humans are to different frequencies. (Technically speaking, we all hear slightly differently from each other, but this is a solid basis.)

Humans need low frequencies to be cranked up in order to perceive them as the same volume as higher frequencies. And sound coming from behind us is also weighted as louder than sound in front of us. Perhaps it is an instinct that evolved with early humans. As animals, we are still on the lookout for predators sneaking up on us from behind.

Instead of measuring loudness in decibels (dB), we measure it in loudness units full scale (LUFS, or interchangeably, LKFS). LUFS measurements account for humans being less sensitive to low frequencies but more sensitive to sounds coming from behind them.
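For the mathematically curious, the core of the BS.1770 measurement (paraphrased here, not quoted from the standard) is: run each channel through a “K” pre-filter, take the mean square of each filtered channel over the measurement window, and sum the channels with per-channel weights before converting to a log scale:

```latex
L_K = -0.691 + 10 \log_{10}\!\left( \sum_{i} G_i \, z_i \right) \quad \text{(in LUFS)}
```

Here z_i is the mean square of the K-weighted audio in channel i, and G_i is that channel’s weight (1.0 for left, right, and center; roughly +1.5 dB for the surround channels; the LFE channel is excluded). The surround weighting is the “sounds from behind count a little more” idea described above, and the K filter is what bakes in our reduced sensitivity to low frequencies.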

There are a couple more interesting things to know about how loudness meters work. We already mentioned how the EBU standard gates anything more than 8 LU below the ungated measurement so the really quiet or silent parts do not skew the measurement of the whole mix (which would allow the loudest parts to be way too loud). Loudness standards also dictate the allowed loudness range of a program. This is important so your audience does not have to tweak the volume to hear people during very quiet scenes, and it saves their ears from getting blasted by a World War Two bomb squadron or a kaiju if they had their stereo turned way up to hear a quiet conversation. (Though every sound designer and mixer knows that there will always be more sensitive listeners who will complain about a loud scene anyway.)

Terms

Here is a list of terms you will see on all loudness meters.

LUFS/LKFS – Loudness Units relative to Full Scale (LKFS is the K-weighted name; they are effectively the same thing).

Weighting standards – When you mix to a loudness spec in LUFS, also know which standard you should use! The following are the most commonly used standards.

True Peak Max: A bit of an explanation here. When you play audio in your DAW, you are hearing an analog reconstruction of digital audio data. Depending on how that audio data is decoded, the analog reconstruction might peak beyond the digital waveform. Those peaks are called inter-sample peaks. Inter-sample peaks will not be detected by a limiter or sample peak meter, but a true peak meter on a loudness meter will catch them. True peak is measured in dBTP. (See the sketch after this list of terms.)

Momentary loudness: Loudness at any given moment, for measuring the loudness of a section.

Long-term/ Integrated loudness: This is the average loudness of your mix.

Target Levels: What measurement in LUFS the mix should reach.

Range/LRA: Dynamic range, but measured in loudness units (LU).
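Here is a rough sketch of the oversampling idea behind true-peak metering: resample the audio to a higher rate so that inter-sample peaks land on actual samples, then take the peak. This is only an approximation of what a compliant dBTP meter does, but it shows why a file that looks fine on a sample-peak meter can still overshoot between samples.

```python
import numpy as np
from scipy.signal import resample_poly

def sample_peak_dbfs(x):
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

def approx_true_peak_dbtp(x, oversample=4):
    # Rough true-peak estimate: oversample so inter-sample peaks become real samples.
    return sample_peak_dbfs(resample_poly(x, oversample, 1))

# Classic example: a full-scale tone at a quarter of the sample rate, phase-shifted
# so that no sample lands on the waveform's actual peak.
sr = 48000
n = np.arange(sr)
tone = np.sin(2 * np.pi * (sr / 4) * n / sr + np.pi / 4)
print(f"sample peak: {sample_peak_dbfs(tone):.2f} dBFS")            # about -3 dBFS
print(f"approx true peak: {approx_true_peak_dbtp(tone):.2f} dBTP")  # close to 0 dBTP
```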

How To Mix To Loudness Standards

Okay, you know the history, you are armed with the terminology…now what? First, let us talk about the consequences of not mixing to spec.

For every client, there are different devices at the distribution stage that decode your audio and play it out to the airwaves. Those devices have different specifications. The distributor will turn a mix up or down to normalize the audio to their standards if the mix does not meet specifications. A couple of things happen as a result. One, you lose dynamic range, and the quietest parts are still too quiet. Two, any parts that are too loud will sound distorted and crushed due to the compressed waveforms. The end result is a quiet mix with no dynamics and with distortion.

To put mixing to loudness into practice, first start with your ears. Mix what sounds good. Aim for intelligibility and consistency. Keep an eye on your RMS, peak, or VU meters, but do not worry about LUFS yet.

Your second pass is when you mix to your target LUFS levels. Keep an eye on your loudness meter. I watch the momentary loudness reading, because if I am consistently in the ballpark with momentary loudness, I will have a reliable integrated loudness reading and a dynamic range that is not too wide. Limiters can also be used to your advantage.

Then, bounce your mix. Bring the bounce into your session, select the clip, then open your loudness plugin and analyze the bounce. Your loudness plugin will give you a reading with the current specs for your bounce. (Caveat: I am using Pro Tools terminology. Check if your DAW has a feature similar to AudioSuite.) This also works great for analyzing sections of audio at a time while you are mixing.
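If you ever want to double-check a bounce outside of your DAW, open-source meters implement the same BS.1770 measurement. The sketch below uses the pyloudnorm and soundfile libraries (the file name is a placeholder); a compliant loudness plugin should report essentially the same integrated number.

```python
# pip install soundfile pyloudnorm
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix_bounce.wav")   # placeholder file name
meter = pyln.Meter(rate)                       # BS.1770-style meter
loudness = meter.integrated_loudness(data)     # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")
```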

Speaking of plugins, here are a few of the most used loudness meters. Insert one of these on your master track to measure your loudness.

Youlean Loudness Meter
This one is top of the list because it is FREE! It also has a cool feature where it shows a linear history of the loudness readings.

iZotope Insight
Insight is really cool. There are a lot of different views, including history and sound field views, and a spectrogram so you can see how different frequencies are being weighted. This plugin measures momentary loudness fast.




Waves WLM Meter

The Waves option may not have a bunch of flashy features like its iZotope competitor, but it does measure everything accurately and comes with an adjustable trim feature. The short-term loudness is accurate but does not bounce around as fast as Insight’s, which I actually prefer.

TC Electronic LMN Meter
I have not personally used this meter, but it looks like a great option for those of us mixing for 5.1 systems. And the radar display is pretty cool!

Wrapping Up: Making Art with Science

The science and history may be a little dry to research, but loudness mixing is an art form in itself, because if listeners have to constantly adjust the volume, we are failing at our job of creating a distraction- and hassle-free experience for our audience. Loudness standards go beyond a set of rules; they are an opportunity for audio engineers to use our scientific prowess to develop our work into a unifying experience.

Resources

First, big thanks to my editors (and fellow audio engineers) Jay Czys and Andie Huether.

The Loudness Standards (Measurement) – LUFS (Cheuks’ Blog)
https://cheuksblog.wordpress.com/2018/04/02/the-loudness-standards-measurement-lufs/

Loudness: Everything You Need to Know (Production Expert)
https://www.pro-tools-expert.com/production-expert-1/loudness-everything-you-need-to-know

Loud Commercials (The Federal Communications Commission)
https://www.fcc.gov/media/policy/loud-commercials

Loudness vs. True Peak: A Beginner’s Guide (NUGEN Audio)
https://nugenaudio.com/loudness-true-peak/

Worldwide Loudness Standards
https://www.rtw.com/en/blog/worldwide-loudness-delivery-standards.html

Stay Passionate

 Resources to Get Started with a Life-Long Practice of Professional Development

Do you remember your first audio project? Do you remember how excited (or scared) you were about it? For the vast majority of folks working in audio production, we fell into this industry out of passion. It’s a labor of love; long hours and thankless sessions can happen, but we are there to answer the call because we know that we can make things sound the very best they possibly can.

That initial spark of emotion when we start out drives us all to be better engineers and artists; you have got to keep that spark! In an industry that is always evolving, it is crucial to keep learning and figuring out what is next on the horizon. Our field is really exciting, and actually digging into the various resources available can keep you pumped about your job.

Over the years I have gathered a ton of resources. This is by no means an exhaustive list, but it can get you started on your own professional development journey.

Organizations

Joining an organization not only provides a curated array of resources but is also a way into a community. One of the best ways to learn is from others in your field. Most of the organizations below have membership fees (though some are free), and there are student and early career options available.

SoundGirls 

Obviously! Becoming a member is free. https://soundgirls.org/membership/ 

WAM (Women’s Audio Mission)

Based in San Francisco, the Women’s Audio Mission holds classes for marginalized genders in audio. Some are in person, and they have remote options as well. They also provide career counseling and work experience.
https://womensaudiomission.org/get-involved/become-a-member/

OmniSound Project

OmniSound Project provides a ton of courses. I took their “Approaching a Mix” intensive a few months ago. They have fantastic workshops as well, and they also do 1:1 lessons. Membership is free to people who belong to marginalized genders.
https://www.omnisoundproject.com/membership.html

TSDCA (Theatrical Sound Designers and Composers Association)

The TSDCA was founded in response to the Tonys removing Sound Design as a category in 2014. Although the Tonys have since reinstated the awards for Sound Design, the TSDCA continues to be a resource for those working in theatrical sound design, composing, and audio engineering.

https://tsdca.org/application/

AES (Audio Engineering Society)

The Audio Engineering Society is the largest community of audio experts, created by the industry, for the industry, to inspire and educate on the technology and practice of audio. Becoming a member gives you access to 20,000+ research papers and discounts on their conferences — a must for keeping up with industry technology and standards!
https://aes2.org/aes-membership-overview/

MPSE (Motion Picture Sound Editors)

The premier organization for sound editing professionals. It is dedicated to educating the public as well as the entertainment industry about the artistic merit of sound editing.

https://www.mpse.org/join-us

GANG (Game Audio Network Guild)  

An organization for those working in game audio. https://www.audiogang.org/why-join/

TEA (Themed Entertainment Association) 

A place for professionals working in and students of themed entertainment to connect. Think theme parks, exhibits, immersive theater, experiential pop-ups.
https://www.teaconnect.org/Members/Join-TEA/index.cfm

Conferences

SoundGirls 

SoundGirls will be hosting their first virtual conference on December 4th and 5th, 2021. There will be a wide array of panels that cover all the different fields of audio.
https://soundgirls.org/event/soundgirls-virtual-conference/

AES (Audio Engineering Society)

I am always blown away by the wide range of panels at AES conferences. I will say that there are often a lot of panels that cover the music industry.
https://aes2.org/events-calendar/aes-fall-online-2021/

NAB (National Association of Broadcasters)

A must if you work in Radio, Television, Streaming, Sports, or Podcasts.
https://nabshow.com/2022/

GameSoundCon

Takes place every year in Los Angeles. It is a great way to learn about Game Audio, see some products at vendors’ tables and meet people working in the field.
https://www.gamesoundcon.com/

GDC  (Game Developer’s Conference)

If you want to work in games, try meeting non-audio people. Those are the folks who will hire you! GDC takes place every year in San Francisco.
https://gdconf.com/

SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques)

A conference about computer graphics and interactive techniques. From what I have heard, there are often VR projects being presented. https://www.siggraph.org/

LDI (Live Design International)

LDI is a lighting convention, but they usually have a small section with audio vendors and demo speaker systems. Besides, it is good to meet folks working in other parts of the industry.  https://www.ldishow.com/

NAMM (National Association of Music Merchants)

NAMM is a great place to start for anyone working in any part of audio; however, it is geared mostly towards the music industry. Held every year in Anaheim, CA. https://www.namm.org/

CES (Consumer Electronics Show)

A great way to learn and get updated on consumer technology trends and the interests of our audiences. https://www.ces.tech/

ComicCon

Another option for meeting people working in other fields who could potentially hire you. And it is essential to consume media and have an understanding of storytelling so that you can best support stories with sound. No links here, because there is one in most major U.S. cities. (The OG ComicCon is held annually in San Diego.) Look for the one closest to you!

Indiecade

Held annually in Los Angeles, Indiecade is THE gathering for independent game developers. There are board games and LARPs too! https://www.indiecade.com/
On the video game note: also check out Meetups, Global Game Jam, and search for hackathons in your area. Hackathons typically take place over a weekend, and the goal is to build a game. Global Game Jam is pretty much an epic hackathon that takes place annually in multiple cities at the same time. It is a great way to practice sound design while meeting other people. Search for a chapter near you: https://globalgamejam.org/

Blogs 

SoundGirls
So many topics! https://soundgirls.org/contributors/

A Sound Effect 

A wealth of resources about how sound has been made for many different movies, games, shows, and attractions, as well as a place to buy a lot of sound effects.

https://www.asoundeffect.com/

iZotope

A great resource to learn about iZotope products, as well as mixing tips.

https://www.izotope.com/en/learn.html

Pro Tools Expert 

A blog for Pro Tools users. https://www.pro-tools-expert.com/

TheaterArtLife

Blogs about all of the departments in theatre. https://www.theatreartlife.com/

April Tucker’s Blog (Post Production Sound)

April Tucker’s awesome blog about post-production. Send the blogs for filmmakers along to your director and editor friends too! https://apriltucker.com/blog/

Podcasts

Available wherever you listen to podcasts.

SoundGirls 

Interviews with kick*** women in audio.

A Sound Effect

A wide variety of topics from how sounds were made for certain films, television shows and games, to hot tips about working in audio (like protecting your ears).

Twenty Thousand Hertz

A podcast about how everyday sounds were made, from washing machines to UI sounds to car sounds. And so much more.

Sound Business

Akash Thakkar’s podcast where he interviews people making a killer living in music and sound.

Tonebenders 

One of my favorites. Interviews with people working in post-production and game audio and how they tackled sound design for certain projects.

Courses 

LinkedIn Learning

I could list what courses are good— but this list would be 20 pages long. Just look up what you want to learn and LinkedIn Learning probably has it. (Hot tip: Many public libraries have a LinkedIn Learning account.)

OmniSound Project

As mentioned before, OmniSound Project holds intensives, workshops, and 1:1s on a wide array of topics, and they have a very welcoming community. The website is linked above, but I highly recommend following them on Instagram to keep up to date with their class offerings: https://www.instagram.com/omnisoundproject/?hl=en

The Production Academy

Offers courses in wireless audio, mixing fundamentals, show power, and stage.

https://www.theproductionacademy.com/courses

Sound Design Live 

Courses about all things regarding live sound, from system optimization to mixing to RF coordination. https://school.sounddesignlive.com/

Ear Training

SoundGym https://www.soundgym.co/

iZotope Pro Audio Essentials https://pae.izotope.com/

Forums 

Production Expert
Saved me many times. https://premium.production-expert.com/

Reddit
Too many options to link. Whatever part of audio you work in, there is a Reddit Forum for it.

Facebook
Same deal as Reddit — if you have an interest in a specific realm of audio, there is a Facebook group for it. Also search for local chapters (e.g., LA Sound Mixers). Start with the SoundGirls and Hey Audio Student Facebook groups.

Certifications

WWISE

Middleware for game audio. (How you get audio into a game engine.) Common at AAA studios.

https://www.audiokinetic.com/products/wwise/

DANTE
Audio networking protocol widely used in live sound, with three different certification levels. Levels 1 and 2 are great even just to begin learning IT technology.

https://www.audinate.com/learning/training-certification/dante-certification-program

QSYS

Show control software for installations, attractions, and even places like airports, restaurants, and conference rooms. It can do a LOT. Note that it is only for Windows.
https://training.qsc.com/

Shure’s RF Certification Course 

Master RF coordination so you have the knowledge to handle any wireless microphone situation that comes your way with this three-course certification: https://www.shure.com/en-US/support/shure-audio-institute/certification/rf-certification

Wrapping up

I hope this list is motivating! Beyond staying on top of the technology and process, constant professional development can motivate you and make you an awesome person to work with — because you will feel excited and intellectually stimulated! It is worth the investment of time and money to keep the spark ignited and to stay on top of your game.

Design Thinking Strategies for Sound Designers


A few years ago, I attended a user experience design boot camp. That course taught me that UX is so much more than designing visuals for apps and websites. UX designers conduct a lot of user research to determine how an app should function, implementing what they call a “human-centered approach” to their decision making; that is, an approach that ensures the final product serves the user.

Since then, I have been meaning to write about the similarities between sound designers and user experience (UX) designers. Sound designers use design thinking strategies all of the time! Through careful analysis and experimentation, we consider the end-user product. For us, that’s usually a film, play, video game, podcast, concert, etc. Even though the tools are very different, the process is very similar. This article will examine the crossover between design thinking and the sound design process through the five phases of design thinking.

Phase 1: Empathize with the user

The first thing user experience designers do is evaluate and research user needs through a “discovery phase.” They will conduct interviews with users about their specific needs and desires for a product. They may also send out surveys or observe users’ nonverbal interactions. What they are looking for is a problem to solve. This first stage is really systematic: although researchers have a specific topic to evaluate, they do not go into the discovery phase with a pre-determined issue. They find it through interacting with users. This makes for an unbiased approach, because the research is being conducted objectively and no one is making assumptions about end users’ desires and needs. This academic approach allows for discovering users’ needs so that the end product will actually serve them.

If phase one for the UX designer is about gaining an understanding of the user, phase one for the sound designer is about gaining an understanding of the message, environment, and characters within an experience. The sound designer’s discovery phase involves reading the script and talking to the director about their intentions for the story’s message. They may also begin to look at the work the visual team has done to gain an understanding of the environment. Before talking to the team, the sound designer should have already read the script and begun to think about the message. However, they do not make any sure-fire decisions about how the experience should sound until after talking to the director and the team. Even if they have ideas, the sound designer keeps an open mind and conducts objective research.

In this sound design/UX design analogy, the director is the user, at least in this first phase. Much like UX designers, the sound designer first asks non-leading questions to understand what the experience needs, goes in with an unbiased approach, and is ready to pivot if their initial interpretations of the script are not in line with the director’s vision.

Phase 2: Defining the User’s Needs

The user experience designer has a bunch of quantitative and qualitative data from user interviews and tests — now what? The next step is laying out all of the information in a way where the data can be synthesized into findings. This is usually a very hands-on approach. A common technique UX designers will use is called affinity mapping. Every answer or observation is written on a post-it note, then “like” things are grouped together. The groups with the most post-its will inform the UX team about users’ most common and important needs and expectations. Then, they will begin to write up a problem statement, which is usually phrased as a question: “How might we [accomplish X thing that users need]?” Keeping it focused on the issue at hand keeps the approach unbiased and user-centered. The problem statement is a goal, not a sentence that is proposing a solution. The problem has not been solved yet; it has just been defined.

In the same way that UX designers define the problem statement, a sound designer’s second phase involves defining the message: the overall feelings and thoughts that the audience should take away from the experience. They may combine notes from their initial script reading and the conversations they have had. They may also go through the script for sound effects that are mentioned, if they did not do that during the first read-through. This is where they define the world and mood of the experience. Some sound designers might even write down their own version of a problem statement, which is the goal or message of the experience. Sometimes in my work, I have found that it is helpful to have the goal of an experience written down so I can keep referring to it and checking that my work is in line with the tone of the piece.

In both roles, keeping a main goal or statement keeps the process about the end user or audience. While a designer in either role might end up lending their own artist’s voice to a project, maintaining an unbiased approach (starting with a problem statement or message) keeps everything that is designed centered on the characters and the story.

Phase 3: Ideating

After user experience designers have spent all this time gathering data, they get to start brainstorming features! A human-centered approach is systematic: to create a meaningful and relevant product, designers cannot get here without the first two phases. Every proposed feature is based on user research.

Similarly, the sound designer has defined the director’s expectations and the message, mood, and physical environment in the first two phases. The ideation phase is usually about watching and listening to reference material and beginning to gather and record audio. Much like user experience designers may not implement all of the features they think of, a sound designer might gather sounds that they do not end up using at all.

For both roles, this is when people are referring to their research and brainstorming ideas just to see what sticks. During the third phase, user experience designers are constantly referring to the research and problem statement, and sound designers are referring to their script and notes.

Phase 4 & 5: Building & Prototyping, and Testing

This is where things begin to heat up! All that data starts to become a real, tangible experience. At this point, the user experience designer has developed a few prototypes. They can exist as paper prototypes or digital mock-ups, and there may be a couple of versions in order to conduct usability tests and see what is most relevant and meaningful to users. Designers will build prototypes, test them, get feedback, build a new version, and test again. A cycle exists between these phases: whatever is discovered in phase five will influence a new phase four prototype, then it is on to phase five again for more feedback… Rinse and repeat until the design is cohesive (or the project runs out of time or money). Testing and getting feedback are very important to make sure the work continues to serve the users or audience.

A sound designer’s prototype is often the first pass at a full design. For theater, it can be cues they send to be played in rehearsals; for other mediums, it is about inserting all the audio elements and taking notes from the director. Then, they implement different effects for a few iterations until they reach approval from the director and producers. In the sound designer’s case, the director is akin to a beta tester in UX research.

During testing, user experience designers and sound designers have similar considerations to evaluate.

Phase 6: Iterate

Design thinking strategies are far from linear. Throughout the process, a user experience designer or sound designer refers to their initial research and notes to keep their decisions focused on the audience. They will prototype features (UX) or effects (sound), test them out, take feedback, redo, and test again.

Conclusion

A great sound design, while influenced by the artist’s voice, is unbiased and serves the story. A solid product design does the same thing, because at the end of the day, a user’s journey with a product is a story. Consciously implementing design thinking strategies also makes our approach as sound designers human-centered, resulting in stories that have a huge impact on the audience. A solid, well-researched, and thought-through design can bring a project to another level completely; by touching our audiences and end users in deeply emotional ways, we provide a meaningful and relevant experience to their lives.

Back That Sh*t Up: Success and Horror Stories


Audio recordings were once recorded and played back on reel-to-reel tape. Then DAT tape made an appearance. Compact discs were the next format for recording and distributing audio. Now, aside from rare exceptions, sound designers and audio engineers work with digital audio files. Modern lighting and sound consoles also store digital files. The luxury of saving shows to a file empowers us to switch from one band’s settings to another faster than you can say, “Check 1, 2.”

There is a downside to this (not so new) digital landscape. Intangible work can make you forgetful about keeping backup copies. I must confess that when I was still very green in my career, digital files seemed safer. I learned my lesson the hard way when I was designing sound for a theater production ten years ago. Someone broke into my friend’s car, where my laptop was, and stole my computer with all of the sound files on it! I scraped together what files I could from email attachments exchanged with my director, but a lot of effects and music had to be recut. Not my proudest moment. Ever since then, everything gets backed up to a drive while I am working, and that drive gets backed up to a cloud. Like many sound professionals, I operate under the convention that if I have only one copy of a file, then the file does not exist.

As my experience above illustrates, a file-redundancy workflow often develops through the hard lesson of losing work. So I am here to (hopefully) spare some readers that pain later.

I asked a few colleagues for stories about backup lessons they learned the hard way. Here is what they shamelessly shared with me.

“I was designing sound for a play. My sessions were all synced to Dropbox. I sat down for tech and opened my laptop, which proceeded to make that awful ‘crunching’ sound of end-of-life. I ran to the Apple Store, bought a new laptop, downloaded Logic Pro, and had my show sessions up and running within an hour – in time for tech! Now, all of my document files live on Dropbox. Projects live on synced drives, but I’ll still push to Dropbox as an extra layer, namely if it’s something that would take me more than 5 minutes to redo.”

        — Veronika Vorel, sound designer
   
“While I was studying Music Production and Technology at the Hartt School of Music, I had a project recording a band. I asked my musician friends and completed recording the whole song, including all instruments and vocals. Then, I tried to back up the work at the end of the day because ‘one file means the file doesn’t exist.’ So I backed it up. BUT, I found out that instead of copying from the new folder to the old one, I did it the opposite way… I replaced the entire new folder with the older folder, so I lost all my new work. I even used programs to mirror the folder; it wasn’t just drag and drop. I tried to find a way to save the files, but it was all gone… I had to redo all the work the next day. Thank you to all my friends who came back the next day. Ever since then, I have become extra careful about the backup process.”
       
        — Gahyae Ryu, sound designer

“These days, I have gotten into the good habit of backing up projects after every session, both cloud and external. Technology constantly evolves, but technology can also fail, without explanation and at the worst time. But let’s say, for example, a theatre director decides they prefer the sound cue sequence from a previous rehearsal day; I can easily pull that from the archive of multiple backups and save precious production time.

    I save every 5-10 minutes because you never know when you can all of a sudden lose power. A worst-case scenario occurred when building cues for a particularly complex sequence in a play. The computer froze, and upon reboot, the progress was not saved. Then the next thing you know, everyone is waiting on you as you redo the building process all over again. You definitely don’t want to find yourself doing that while working on a play like The Curious Incident Of The Dog In The Night-Time.”

        — Jess Mandapat, sound designer & composer

“I have a weird story: I had a backup of my laptop and console files, and then had my backpack stolen at a gig, with my laptop, my backup hard drive, and all my console thumb drives in it. I literally lost everything in one shot. So I learned the hard way. Now I carry a thumb drive that stays with my console, keep a hard drive at home, and keep my backpack with me everywhere I go!”

        — Beckie Campbell, front-of-house engineer and owner of B4Media Production.
   
    “Never delete a project until you’ve verified it’s backed up to one (if not two) places. I once deleted a session from my laptop because I assumed it was backed up. Turns out, my nightly backup failed, and I didn’t have it on another computer like I thought I did. Luckily it was just a personal project and I only lost a couple of afternoons of work, but that was nearly 10 years ago, and it still bothers me.”

        — April Tucker, re-recording mixer

    “I have a good story about when I blanked a console moments AFTER soundcheck, and someone showed me the history feature and bailed me out!” 

        — Becca Kessin, theatrical sound designer & educator

“There was the time I was designing a play, and the night before tech my FX drive decided to pine for the fjords. No worries…I’ll grab the backup drive. Which…had an empty folder called “FX Backup” where the backup had failed to sync.

    I spent that tech texting categories of effects to a generous friend who would quickly copy that category from his library to his webserver to let me download them as needed.”

        —  Andy Leviss, audio engineer & sound designer

“I was backing up a ten-hour day’s work of vocal comping and tuning. Fired up the backup drive to make my safety copy when the power went out, flashed on, back off, and then on again five minutes later. A drunk driver had hit a pole in the neighborhood. The power flashed while my drive was spinning and ended up wiping the main drive. Fortunately, the backup was okay, but I lost the entire day’s work. I called the producer, explained what happened, and told him I was going to have a drink and would re-do my work the following day. Bought an uninterruptible power supply first thing the next morning.”

— Josh Newell, audio engineer

“My story about not backing up involves recording to only one medium, rather than failing to back up to a laptop or hard drive. A little over 2 years ago, I’d use my Sound Devices MixPre-6 to record sound for smaller jobs. It’s a great little mixer, which can definitely handle 1 boom and 3 lav mics. The downside of this recorder is that it records to only one medium (an SD card). On one shoot, we downloaded mid-day. The DIT tells me that there are no files on my SD card, which I thought was strange because I specifically remembered recording the episodes. Somehow when he put the SD card into the computer, it formatted the card. We both panicked a little. Luckily someone had a program that recovers deleted files off of media, which found all of my original files. In that moment, I realized how important it was to record on more than one medium. If I use a MixPre, then I like to send the audio out to an external recorder.”

        —  Kally Williams, production sound mixer

“Recently, I had a project I was working on that required multiple session files that were slightly different from each other. My backup procedure is usually to keep every file I create in at least three separate places: two are stored on hard drives, and the last is a cloud backup (I currently use Backblaze). After a pretty exhausting workday where I put a lot of time and energy into a particular project, I went to do my standard end-of-the-day backup, where I drag the session onto the extra drive. Except there was apparently another folder with the same name that I had not changed, and since it was a large session, I copied it over, hit okay, and walked away. When I came back a little while later, I realized that I had just completely overwritten the folder I had worked on all day. Not to worry, right? I could just use the backup from my cloud storage. Only, I had forgotten that I had temporarily disabled it a few days prior as I switched hard drives. Long story short, I lost everything: a full day’s work, and I was on a deadline. Let this be a reminder: always, always check your cloud backup before you start working, and make sure to back up your data, for reasons just like this one.”

        —  Christa Giammettei, freelance post-production audio engineer

“I have a few stories where backups have saved the day for me. Console backups have been the most needed – twice because of water damage to the console, one of them on the day of the event for commencement at UCLA, where they ran the sprinklers even though we were assured they were off. Hollywood Sound arrived with a new console two hours before the event, and we were back up and running in about 20 minutes after testing everything. At the Hollywood Bowl during the 2014 production of Hair that Phil Allen designed, we got rained on at a Saturday show and had to replace the console for the Sunday show (which also got rained on a little). It was a quick switch-out of cables, load the file, test, and ready to go.

So mostly I have had positive experiences with being glad for backups. I do think there is a generational difference in thinking about backups. Maybe it comes from more experience (time) and having seen things go down across different media (tapes, MDs, CDs, etc., and writing down console settings from an analog desk). But I see many current students who live their lives with no backups whatsoever, and when you bring it up, they say things like ‘I would die if I lost my computer right now’ yet continue without a backup plan. It’s so easy to back up these days, there really isn’t an excuse. Drives are cheap, Dropbox and Google Drive are also relatively cheap, and the cloning software (CCC or SuperDuper) works elegantly and is rock solid for recovery. It is just a state of mind to get into to start.”

        — Jonathan Burke, sound designer

A recap of all of these lessons:

Take your pick of external hard drives, thumb drives, and cloud services. There are many ways to store and back up your work that are not too expensive. Even starting with a small hard drive, a couple of thumb drives, and a free Google Drive account is better than nothing. Invest in even a small file storage system now to save hours (and lots of headache and trust issues) later!
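If you want to automate the habit, here is a minimal sketch of the kind of end-of-day copy several of these stories point toward. The folder paths are placeholders, not anyone’s actual setup; the important idea is that each backup lands in its own date-stamped folder, so an older copy can never silently overwrite a newer one (the exact mistake a couple of contributors described).

```python
# backup_session.py -- minimal end-of-day backup sketch (illustrative only).
# Copies the working session folder into a date-stamped folder on an external
# drive; the paths below are placeholders, so adjust them for your own machine.

import shutil
from datetime import date
from pathlib import Path

SESSION = Path("~/Projects/MyShow").expanduser()     # working session folder (example)
BACKUP_ROOT = Path("/Volumes/BackupDrive/MyShow")    # external drive (example mount point)

def backup_session() -> Path:
    """Copy SESSION into a new, date-stamped folder under BACKUP_ROOT."""
    destination = BACKUP_ROOT / f"{SESSION.name}_{date.today().isoformat()}"
    if destination.exists():
        # Never overwrite an existing backup -- make a new one tomorrow instead.
        raise SystemExit(f"{destination} already exists; refusing to overwrite a backup.")
    shutil.copytree(SESSION, destination)
    return destination

if __name__ == "__main__":
    print(f"Backed up to {backup_session()}")
```

Pair something like this with a cloud sync and you have two of the three copies most of the engineers above now insist on.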

Sound System Design for Immersive Spaces

I have always been excited by sound design and its potential for storytelling as well as the evolving technology of the industry. At the start of my career, I was mainly a theatrical sound designer and engineer. Then I got a gig designing sound for Halloween Horror Nights at Universal Studios. I had never considered theme park attractions! Since then, I have kept getting sound design work for immersive theater and themed events. There are many types of live, immersive storytelling events out there: immersive theater, theme parks, art exhibits, and experiential marketing pop-ups.

When sound designers work on live immersive projects, they must understand the story as well as how to implement the technology. An appreciation of the story helps the sound designer make creative and system design decisions that will not break the audience’s suspension of disbelief. It can be harder to pull off the suspension of disbelief in immersive settings than in more traditional venues, but the payoff is extremely rewarding!

I would love to discuss sound reinforcement of mic’d performers and instruments as well, but this article is already very long. The science there does not really change, though perhaps it gets more complicated! So in the interest of length, I will cover very general speaker placement, creative choices, and the collaboration process.

Sound Systems for Traditional Venues vs. Immersive Spaces

Let us begin by discussing the fundamentals of sound systems for proscenium spaces as well as immersive spaces, and the differences between them. Understanding the components of sound systems for more traditional venues informs much of the decision-making behind building immersive sound systems.

Proscenium stages have three output channels as the core of their systems: left, right, and center. Focused and tuned correctly, those three speakers (or speaker arrays) deliver a sound image where the audience perceives sound as coming from the stage, rather than from any one speaker. The optimal place to sit to get the best sound image and mix is referred to as the “sweet spot.” Three channels make for good coverage of an audience, with most people sitting in that sweet spot. They also provide more bussing opportunities for a good, intelligible mix: music is sent to the left and right channels, with vocals in the center channel so they do not compete as much with the music. Add subwoofers to deliver the low frequencies, and you have a full mix.

If you look closely, you can see the center cluster, and then speakers on the left and right sides of the stage. Photographer Mike Hume. (Source: Ahmanson Theatre: https://losangelestheatres.blogspot.com.)

Larger venues with proscenium stages generally also have surround and delay speakers so that sound can reach seats out of the sweet spot (and let’s be honest, they are also for cool panning effects). And then there is usually some type of monitoring on stage for the performers, separate from the sound system for the audience. Two other characteristics of traditional venues: the audience is seated in one place the whole time, and the room is typically designed and acoustically treated for live performance.

As with traditional stages, the immersive sound designer can and should buss music, sound effects, and ambience to different speakers for an optimal mix if the budget allows. Truthfully, because of where speakers end up being placed and because of budget restrictions, they may have no choice but to put multiple sound elements through the same output.

Unlike proscenium stages, immersive events are typically installed in found spaces, and the performance happens on the same plane as the audience. Actors usually mingle and talk to audience members. This could mean that speakers for the audience are also used as actor monitors, which can present acoustical problems if your performer is wearing a microphone. Even in experiences where the audio is all pre-recorded and played back, there are acoustical challenges in immersive spaces since they were not designed for live performance. However, you can use acoustics to your advantage and have a lot of fun!

The sound designer’s job is to trick the audience into believing that they are in the same world as the actors. Immersive experiences are even less forgiving of visible speakers because the sound is supposed to feel as if it is generated within the world and not through a sound system. This challenge sounds like a real bummer, but I implore you to embrace it. So how do we do that?

Collaborate Early and Often

Before we can talk about system design for immersive spaces, we need to talk about what kind of information you need before you can make those decisions. Immersive events are highly collaborative, and it is important to make sure everyone is on the same page.

First, you will receive a client deck or presentation. All the vendors (sound, video, lighting, costumes, set, props, special effects…) will have a meeting with the client or director to get a rundown of their vision. Everyone should receive a paper version of the client deck. In a theatrical setting, this meeting is called a kickoff, first production meeting, or designer meeting. Whatever paperwork you receive during that meeting, consider it your show bible and keep it handy. It may even answer the next few questions covered here.

Next, schedule a site visit. Inquire about power capabilities, since that will determine most of your sound system. Ask where power is being drawn from. Many immersive productions rent a generator; some buildings might have the means to use in-house power. If that is the case, ask how many circuits they have. You might not be able to have quite as many speakers as you would like, and you need to determine your compromises early on so you can let your director know about limitations if the production does not have the budget or facility to support something they have asked for. Find out who is in charge of power (usually the best boy or master electrician) and have a discussion as soon as possible about having circuits separate from lighting, and about where you need to plug in. You will be dealing with enough unique issues without having to troubleshoot a ground hum.
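As a rough illustration of why the circuit question matters, here is a back-of-the-envelope power budget. It assumes 120 V, 20 A circuits and the common rule of thumb of loading a circuit to only 80% of its rating for continuous use; the wattage figures are made-up examples, so substitute real draw figures from your equipment’s spec sheets and defer to the electrician on site.

```python
# circuit_check.py -- rough speaker-power budgeting sketch (illustrative only).
# Assumes 120 V, 20 A circuits and the common 80% continuous-load rule of thumb;
# the draw figures below are made-up examples, not a spec for any real rig.

import math

CIRCUIT_VOLTS = 120
CIRCUIT_AMPS = 20
USABLE_WATTS = CIRCUIT_VOLTS * CIRCUIT_AMPS * 0.8   # 1920 W continuous per circuit

gear_draw_watts = {
    "powered main L": 300,
    "powered main R": 300,
    "subwoofer": 500,
    "amps for small fills": 400,
    "rack (console, playback, DSP)": 350,
}

total = sum(gear_draw_watts.values())
circuits_needed = math.ceil(total / USABLE_WATTS)
print(f"Total draw: {total} W -> at least {circuits_needed} dedicated 20 A circuit(s)")
```

Even this toy example shows how quickly a modest rig eats a circuit, which is exactly why you want that conversation before lighting claims every outlet in the building.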

Venues made for performance have the infrastructure for running cables and hanging speakers; site-specific performance spaces do not. At the site visit, begin asking how you can hang speakers and run cable. Ask if you can drill into the walls, and what they are made of. Can you hang rigging points from the ceiling? Talk with the lighting, show set, props, and technical direction departments about their plans for running cable so your runs do not end up under theirs. Inquire about whether anything will need to be struck between shows. These are all considerations that will influence your system design. Expect to have these conversations throughout the design phase as every department moves closer to install.

Ask your production manager, director, or set designer about backstage areas (and the traffic going through them) right away. Once you spec your console, show computer and other rack equipment, send your production team exact measurements and rack elevations with power, front and rear access space, and air conditioning/airflow requirements. Real estate is often tight in backstage areas, and your “front of house” area might need to be shared with lighting, video, and even actors in standby for a scene. Put rack and equipment dimensions on your sound plot and perhaps even map out cable runs so everyone on the team has an idea of available real estate in these backstage spaces.

Make note of acoustics, and ask about audience pathways. Where the audience travels will affect where you put your speakers and how you run your cables. Also, ask about audience capacity and flow. If one audience group enters an experience while a previous group is in another room further ahead, you will need to know that to consider sound bleed, which could affect creative choices.

This article makes this point throughout, but I will say it again: it is in your best interest to collaborate early and often with the art department. Reach out to the set designer and ask for ground plans and elevations at this stage so that you can draw up a speaker plot and begin to have conversations about hiding speakers.

Finally, ask about emergency procedures. Traditional venues have obvious exits and a voice-of-god mic. Immersive events might need to use actors to guide the audience out of an experience, and the team should talk about whether sound is cut entirely when an emergency happens. (Generally speaking, it should be.) Does someone get on a mic and make a live announcement, or is there a pre-recorded cue? Any number of emergencies could happen, whether it is a technical failure, the weather, or a situation where the audience and/or cast are at risk. Cover all of the possibilities. Your director should decide at what point an emergency is serious enough to trigger a show stop, what the procedure for a show stop is, and how the show is resumed. The team should determine all of this together, and as the sound designer you need to know what the emergency procedures are so you can program a show-stop cue. The emergency system itself should be provided by another vendor, because you as the sound designer are probably not current on things like local fire safety codes and emergency services. You may provide an emergency paging system separate from the show system if you are asked, but have a conversation with your producers about how that is outside the scope of sound design and will need a separate budget.

The Speaker Plot – Ambience

Speakers for an immersive system can serve any of five purposes: ambience, music, spot effects, voiceover, or live reinforcement. You can separate out what goes where, but you often end up sending multiple elements through shared outputs. This is because there are many constraints in designing a plot — budget, scenic design, placement of lighting instruments, and of course how the sound waves from the speakers will interact with the space and with other speakers.

Speakers for ambience, music, narrative voiceovers, and emergency announcements work best above the audience. The distance makes for good coverage because listeners are in the widest part of the speaker’s throw. Additionally, placing speakers low means sending your acoustical energy into the legs of your audience, which means losing a lot of energy needlessly since the sound is not aimed at their ears! If your speakers are going to be visible to the audience even above them, put them behind the audience path. This does not work for every application — if you have a staged area with mic’d performers, this is not the solution because the sound image has to be where the performers are staged — but it works much of the time.
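To see why height buys coverage, here is a hedged back-of-the-envelope sketch. It assumes a speaker with a nominal 90-degree conical coverage pattern aimed straight down; real dispersion narrows at high frequencies, so treat the numbers as a starting point for the plot, not a guarantee.

```python
# coverage_sketch.py -- rough coverage-vs-height estimate (illustrative only).
# Assumes a down-firing speaker with a nominal 90-degree conical pattern;
# real dispersion varies with frequency, so these are planning numbers only.

import math

COVERAGE_ANGLE_DEG = 90   # assumed nominal dispersion
EAR_HEIGHT_M = 1.6        # rough figure for a standing listener

def coverage_diameter(speaker_height_m: float) -> float:
    """Diameter of the coverage circle at ear height for a down-firing speaker."""
    drop = speaker_height_m - EAR_HEIGHT_M
    return 2 * drop * math.tan(math.radians(COVERAGE_ANGLE_DEG / 2))

for h in (2.5, 3.5, 4.5):
    print(f"speaker at {h} m -> roughly {coverage_diameter(h):.1f} m of coverage at ear height")
```

The higher the box, the wider the circle of even coverage at ear height, which is exactly why overhead placement does so much work for ambience and announcements.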

I prefer to use a lot of little speakers with sound pushed through them at a quieter volume. (I really like Meyer MM-4XPs.) This makes for more consistent coverage and a believable environment. However, the budget does not always allow for a ton of tiny speakers. In that case, you can compromise with one or two big speakers. Always prioritize coverage — it can really take an audience out of the world if they walk through a dead spot. Place and focus larger speakers in such a way that they cover the whole room, and send music, ambience, and voiceover through them. Of course, mix all those elements so that they are balanced and you don’t blow the speaker!
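Here is the rough math behind why a grid of small speakers covers more evenly. The sketch uses the free-field inverse-square approximation (about 6 dB of drop per doubling of distance); real rooms add reflections, but the comparison holds: one distant box forces a big level difference between the closest and farthest listeners, while many nearby fills keep that spread small.

```python
# level_consistency.py -- why many close speakers cover more evenly (illustrative only).
# Uses the free-field inverse-square approximation (6 dB drop per doubling of
# distance); real rooms reflect and absorb, so the numbers are only indicative.

import math

def spl_drop_db(near_m: float, far_m: float) -> float:
    """Level difference between the nearest and farthest listener positions."""
    return 20 * math.log10(far_m / near_m)

# One big speaker across a 10 m room: listeners range from 1 m to 10 m away.
print(f"single speaker:    {spl_drop_db(1, 10):.1f} dB difference across the room")

# A grid of small overhead speakers: every listener is 1 to 2 m from the nearest one.
print(f"distributed fills: {spl_drop_db(1, 2):.1f} dB difference across the room")
```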

Subwoofer placement is definitely challenging because they are often too big to hide! If you can place a subwoofer outside of a room against the exterior wall, it should do the trick. I have also hidden them behind set pieces. More about subwoofers later.

A note about music in stereo, because the question comes up with some artists. Yes, having a stereo image for the music is really important. If you can get away with having two speakers in a room for left and right channels of music, do that. But it also depends greatly on audience path, room size, budget, and physics. A stereo mix requires that the left and right channels arrive at the listener at the same time. To accomplish that, the left and right speakers need to be equidistant from the listener, who needs to stand in the sweet spot between them. In an immersive setting where the audience is moving, it might not be possible to place speakers in such a way that a good stereo image is delivered. In most immersive settings, instead of stereo or 5.1 surround (which are valid where your audience is static), you will often have a massive distributed audio system. This means that, more often than not, mono audio files are preferred so you can place each sound exactly where you want it without worrying about how it is tied to something else. Essentially, imagine building not a 5.1 system but a 32.10 system or larger. Again, have stereo speaker pairs if you can swing it, but be aware that this is another potential compromise. Know the science so you can explain your decisions.
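For anyone who wants the numbers behind that argument, here is a small sketch with made-up speaker and listener positions. It computes how much earlier one channel arrives once a listener wanders off the center line; once the offset grows past roughly a millisecond, the precedence effect pulls the image toward the nearer speaker and the stereo field collapses.

```python
# stereo_offset.py -- why a walking audience breaks the stereo image (illustrative only).
# Computes the arrival-time difference between left and right speakers for a
# listener off the center line. Positions are made-up examples in meters.

import math

SPEED_OF_SOUND = 343.0   # m/s, roughly, at room temperature

def arrival_offset_ms(listener_x: float, listener_y: float,
                      left=(-2.0, 0.0), right=(2.0, 0.0)) -> float:
    """Arrival-time difference in ms (negative = right speaker arrives first)."""
    d_left = math.dist((listener_x, listener_y), left)
    d_right = math.dist((listener_x, listener_y), right)
    return (d_right - d_left) / SPEED_OF_SOUND * 1000

print(f"centered listener:     {arrival_offset_ms(0.0, 4.0):+.2f} ms")
print(f"listener 2 m off-axis: {arrival_offset_ms(2.0, 4.0):+.2f} ms")
```

Two meters off-axis already produces an offset of several milliseconds, which is why mono sources placed exactly where you want them usually serve a wandering audience better than a nominal stereo pair.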

If you are working with a composer, talk to them about giving you stems so you can put individual parts of the music wherever you both want. It is incredibly useful and efficient to mix the music as needed in the room, to hear how it reacts acoustically.

The Speaker Plot – Point Sources

Another consideration: what are the specific sound sources in a room? Things like telephone rings and radios differ from ambience and music in that the audience needs to perceive the sound as coming from the source itself. (There are other technical hacks you can use to make a phone ring on cue, but this article is specifically about speaker placement.)

When choosing a speaker for a point source, consider what is going to play through it and where you are going to hide it. Is it a phone ring, and that’s it? Then it doesn’t have to be big. Or is the effect a loud car horn that requires a bigger transducer to push adequate volume? Also, have a conversation with the set designer about what props and set pieces are around the sound source. The ability to hide a speaker, and how and where it gets mounted, will also influence which speaker you choose. Another fun note about point source speakers: as you attend tech rehearsals, you might realize that one point source speaker does not have the volume or throw necessary for the whole audience to hear it clearly once you get bodies in the room. In these cases, you can use the ambient speakers as fills. Dial in a little bit of the sound effect to fill the room, but just enough that the point source still reads as the main source.

One of the most challenging and rewarding things about system design for immersive spaces is hiding speakers so the audience does not see them. Send a plot to your set designer early and expect to change it several times. Include a key with speaker dimensions. Talk to the set designer and technical director about how you are going to mount speakers and get their input on the best materials to use to do so. If a point source speaker has to be behind something, talk about potentially using an acoustically transparent material. (Yes, I have had to explain that velour curtains will muffle high frequencies. It happens!) Or, be open to the challenge at hand and problem-solve creatively. Maybe the muffle will actually help the purpose of the effect. Or, can you point the speaker upward so it is not shooting directly into props in front of it? Or mount it under a table? This kind of out-of-the-box thinking is really satisfying!

Also, be sure to consider which effects you need to fight for. Often lighting and scenic design have very specific requirements, which means sound tends to be the design discipline that moves or changes to accommodate them. But sometimes the way a certain element is described in the script or client deck means there are specific sound design requirements. In those situations, it is imperative to put your foot down with the other design disciplines. If an effect calls for an atomic bomb to go off, for example, then you will need a subwoofer, and the scenic team will need to accommodate space for that subwoofer in their design. Be a positive collaborator, but be firm, because you cannot change physics.

Acoustics & Bleed

When designing for immersive events, you often have to figure out how to cope with bleed from the outside world and even from other rooms within the experience. Many immersive experiences are pulsed attractions, meaning one audience group starts while another is halfway through the experience, or scenes run simultaneously. This makes bleed a really important consideration. True isolation is expensive, and I have yet to see an immersive show try to make rooms acoustically isolated. Sound’s needs are often communicated after the set design, budget, and production timeline have been determined, and many found spaces, such as reverberant warehouses, are unforgiving anyway. The following techniques cover what to do when bleed remains after you have done all you can with acoustic treatment and good speaker placement, tuning, and focus.

One tactic is to embrace the bleed! Evaluate whether it can actually help your sound design. A horror attraction can be made much scarier when people in one room can hear screams coming from another. The next technique is to compromise on which sound effects and music you use. If you have a cheesy piece of music that works for a comedic scene in one room but bleeds into a serious scene in another room, you might need to lower the cheesy music quite a bit, notch out its most present frequencies, or pick different music altogether.
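In signal terms, “notching out” a present frequency just means a narrow cut at the offending band. You would normally do this on the console or playback EQ rather than in code, but here is a hedged sketch of the idea using SciPy; the 1 kHz center frequency and Q are arbitrary example values, and the “music” is a stand-in test signal rather than a real cue.

```python
# notch_bleed.py -- sketch of notching a problem frequency out of a cue (illustrative only).
# The 1 kHz target and Q are made-up examples; in practice you would find the
# offending band by ear or with an analyzer, then apply the cut on your EQ.

import numpy as np
from scipy.signal import iirnotch, lfilter

SAMPLE_RATE = 48_000
TARGET_HZ = 1_000      # frequency that carries into the next room (example value)
Q = 8                  # higher Q = narrower notch

# Stand-in signal: the "cheesy music" as a 1 kHz tone plus broadband noise.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
cue = np.sin(2 * np.pi * TARGET_HZ * t) + 0.1 * np.random.randn(len(t))

b, a = iirnotch(TARGET_HZ, Q, fs=SAMPLE_RATE)
filtered = lfilter(b, a, cue)

print(f"RMS before notch: {np.sqrt(np.mean(cue**2)):.3f}")
print(f"RMS after notch:  {np.sqrt(np.mean(filtered**2)):.3f}")
```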

The outside world can also be a consideration. I have really enjoyed productions where the outside world is actually part of the immersive experience. Once I saw a theater production that took place in a graveyard, and the natural nighttime atmosphere blurred the lines between the real world and the world of the play. (Super cool!) However, many immersive attractions exist independently of the real world, and in that case you cannot do much about outside noise. Many attractions get around it by making the sound and music really loud. As you get more bodies in a space, less of the outside world will be heard, and audience members are generally too captivated by the production to notice it anyway!

Regardless of the issue, keep discussing these discoveries with your director as you make them. Do a site visit early, anticipate these issues early on, and talk about them as a team.

Inspiration Tips

You can gain knowledge and inspiration without working on an immersive project. Everything you apply in sound design for immersive spaces falls under the scientific principles within Acoustic Ecology. As with any type of sound design, start by paying attention to the world around you. What do you hear and where is it coming from? How do things sound different from close up or far away? Keen awareness of the real world can influence creative choices as well as mixing decisions.

Learn all the sound science. Start by looking up the Doppler effect (the apparent shift in pitch as a sound source approaches and then passes, like an ambulance siren), occlusion (something blocking a sound), phase cancellation, and literally everything about room acoustics. Research psychoacoustics and how people respond to different frequencies. In a similar vein, learn about loudness metering, because it is weighted by how humans perceive sound. To hear examples of an immersive mix without going to an event, play video games with headphones on. Larger AAA games (and some indie titles, too) implement all of these psychoacoustic principles.
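As a quick worked example of the Doppler effect, here is the standard stationary-listener formula with made-up numbers for the siren frequency and vehicle speed:

```python
# doppler_sketch.py -- perceived pitch of a moving source (illustrative only).
# Classic stationary-listener Doppler formula: f' = f * c / (c -/+ v_source).
# The siren frequency and vehicle speed are made-up example values.

SPEED_OF_SOUND = 343.0   # m/s
SIREN_HZ = 700.0         # example source frequency
VEHICLE_MS = 20.0        # about 72 km/h, example speed

approaching = SIREN_HZ * SPEED_OF_SOUND / (SPEED_OF_SOUND - VEHICLE_MS)
receding = SIREN_HZ * SPEED_OF_SOUND / (SPEED_OF_SOUND + VEHICLE_MS)

print(f"approaching: {approaching:.1f} Hz, receding: {receding:.1f} Hz")
```

Running it shows the pitch sitting noticeably above the source frequency on approach and dropping below it as the vehicle passes, which is the slide you hear from a real siren.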

Live immersive events are a very fulfilling frontier for those of us with theatrical backgrounds. The process and application are quite different and stay in flux throughout, with a ton of collaboration. Understanding the science, forging positive relationships with other departments, and a lot of creative problem-solving are the keys to pulling off the suspension of disbelief, and they will level up the sound design for your future immersive projects!

Thanks to my editors for reading through this beast and providing feedback: Julien Elstob (lighting designer), Fionnegan Murphy (A/V Integration Engineer), Stephen Ptacek (sound designer).

 
