Empowering the Next Generation of Women in Audio

More Than Line-by-Line

 

Going Beyond the Basics of Mixing

When I started mixing shows in high school—and I use the term “mixing” loosely—I had no idea what I was doing. Which is normal for anyone’s first foray into a new subject, but the problem was that no one else knew either. My training was our TD basically saying, “here’s the board, plug this cable in here, and that’s the mute button,” before he had to rush off to put out another fire somewhere else.

Back then, there were no YouTube videos showing how other people mixed. No articles describing what a mixer’s job entailed. (Even if there were, I wouldn’t have known what terms to put in a Google search to find them!) So I muddled through show by show, and they sounded good enough that I kept going. From high school to a theme park, college shows to local community theatres, and finally eight years on tour, I’ve picked up a new tip, trick, or philosophy every step along the way. After over a decade of trial and error, I’m hoping this post can be a jump start for someone else staring down the faders of a console wondering, “okay, now what?”

Every sound design and system has a general set of goals for a musical: all the lines and music are clear, and the level is high enough to be audible without being painfully loud. These parameters make up a basic mix.

For Broadway-style musicals, we do what’s called “line-by-line” mixing. This means when someone is talking, her fader comes up and, when she’s done, her fader goes back down, effectively muting her. For example: if actresses A and B are talking, A’s fader is up for her line, then just before B is about to begin her line, B’s fader comes up and A’s fader goes down (once the first line is finished). So the mixer is constantly working throughout the show, bringing faders up and taking them out as actors start and stop talking. Each of these is called a “pickup” and there will be several hundred of them in most shows. Having only the mics open that are necessary for the immediate dialogue helps to eliminate excess noise from the system and prevent audio waves from multiple mics combining (creating phase cancellation or comb filtering which impairs clarity).
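To put a number on that comb-filtering problem, here is a minimal sketch (Python with NumPy; the 0.5 m path difference between the two open mics and the 48 kHz sample rate are illustrative assumptions, not values from any particular rig) of what happens when a voice sums with a delayed copy of itself:

```python
import numpy as np

# Two open mics on one voice: the farther mic hears a delayed copy,
# and summing both creates a comb filter.
fs = 48_000                    # sample rate, Hz
delay_s = 0.5 / 343            # 0.5 m extra path at ~343 m/s -> ~1.46 ms
delay_n = round(delay_s * fs)  # delay in samples (70 here)

# Magnitude of 1 + e^(-j*2*pi*f*delay): peaks at 2x, nulls at 0.
freqs = np.fft.rfftfreq(fs, d=1 / fs)  # 0 Hz to 24 kHz in 1 Hz steps
combined = np.abs(1 + np.exp(-2j * np.pi * freqs * delay_n / fs))

# Nulls land at odd multiples of fs / (2 * delay_n).
first_null = fs / (2 * delay_n)
print(f"first cancellation null near {first_null:.0f} Hz")  # → ~343 Hz
```

The nulls repeat all the way up the spectrum (hence “comb”), which is why the damage can’t be fixed with EQ: closing the mic that isn’t needed removes the second arrival entirely.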

You may have noticed that I’ve only talked about using faders so far, and not mute buttons. Faders give you more control over the mix, because “mixing” with mute buttons assumes that an actor will deliver every line in the show at the same level, which is not realistic. From belting to whispering and everything in between, actors have a dynamic vocal range, and faders are far better suited than mute buttons to making detailed adjustments in the moment. However, when mixing with faders, you have to make sure that your movements are clean and precise. Constantly doing a slow slide into pickups sounds sloppy and may lose the first part of a line, so faders should be brought up and down quickly. (Unless a slow push is an effect or there is a specific reason for it; yes, there are always exceptions.)

So, throughout the show, the mixer is bringing faders up and down for lines, making small adjustments within lines to make sure that the sound of the show is consistent with the design. Yet, that’s only one part of a musical. The other is, obviously, the music. Here the same rules apply. Usually, the band or orchestra is assigned to VCAs or grouped so it’s controlled by one or two faders. When they’re not playing, the faders should be down, and when they are, the mixer is making adjustments with the faders to make sure they stay at the correct level.

The thing to remember at this point is that all these things are happening at the same time. You’re mixing line by line, balancing actor levels with the music, making sure everything stays in an audible, but not eardrum-ripping range. This is the point where you’ve achieved the basic mechanics and can produce an adequate mix. When put into action, it looks something like this:

 

 

A clip from a mix training video for the 2019 National Touring Company of Miss Saigon.

 

But we want more than just an adequate mix, and with a solid foundation under your belt, you can start to focus on the details and subtleties that will continue to improve those skills. Now, full disclosure, I was a complete nerd when I was young (I say that like I’m not now…) and I spent the better part of my childhood reading any book I could get my hands on. As an adult, that has translated into one of my greatest strengths as a mixer: I get stories. Understanding the narrative and emotions of a scene is what helps me make intelligent choices about how to manipulate the sound of a show to best convey the story.

Sometimes it’s leaving an actress’s mic up for an ad-lib that has become a routine, or conversely, taking a mic out quicker because that ad-lib pulls your attention from more important information. It could be fading in or out a mic so that an entrance or exit sounds more natural or giving a punchline just a bit of a push to make sure that the audience hears it clearly.

Throughout the entire show, you are using your judgment to shape the sound. Paying attention to what’s going on and the choices the actors are making will help you match the emotion of a scene. Ominous fury and unadulterated rage are both anger. A low chuckle and an earsplitting cackle are both laughs. However, each one sounds completely different. As the mixer, you can give the orchestra an extra push as they swell into an emotional moment, or support an actress enough so that her whisper is audible through the entire house but doesn’t lose its intimacy.

Currently, I’m touring with Mean Girls, and towards the end of the show, Ms. Norbury (the Tina Fey character for those familiar with the movie) gets to cut loose and belt out a solo. Usually, this gets some appreciative cheers from the audience because it’s Norbury’s first time singing and she gets to just GO for it. As the mixer, I help her along by giving her an extra nudge on the fader, but I also give some assistance beforehand. The main character, Cady, sings right before her in a softer, contemplative moment and I keep her mic back just a bit. You can still hear her clearly, but she’s on the quieter side, which gives Norbury an additional edge when she comes in, contrasting Cady’s lyrics with a powerful belt.

Another of my favorite mixing moments is from the Les Mis tour I was on a couple of years ago. During “Empty Chairs at Empty Tables,” Marius is surrounded by the ghosts of his friends who toast him with flickering candles while he mourns their seemingly pointless deaths. The song’s climax comes on the line “Oh my friends, my friends, don’t ask me—” where three things happen at once: the orchestra hits the crest of their crescendo, Marius bites out the sibilant “sk” of “don’t aSK me,” and the student revolutionaries blow out their candles, turning to leave him for good. It’s a stunning visual on its own, but with a little help from the mixer to push into both the orchestral and vocal build, it’s a powerful aural moment as well.

The final and most important part of any mix is: listening. It’s ironic—but maybe unsurprising—that we constantly have to remind ourselves to do the most basic aspect of our job amidst the chaos of all the mechanics. A mix can be technically perfect and still lack heart. It can catch every detail and, in doing so, lose the original story in a sea of noise. It’s a fine line to walk and everyone (and I mean everyone) has an opinion about sound. So, as you hit every pickup, balance everything together, and facilitate the emotions of a scene, make sure you listen to how everything comes together. Pull back the trumpet that decided to go too loud and proud today and is sticking out of the mix. Give the actress who’s getting buried a little push to get her out over the orchestra. When the song reaches its last note and there’s nothing you need to do to help it along, step back and let it resolve.

Combining all these elements should give you a head start on a mix that not only achieves the basic goals of sound design but goes above and beyond to help tell the story. Trust your ears, listen to your designer, and have fun mixing!

Recording in Two Days

For this first blog of 2020, I’m going to be talking about a current recording project I am working on. A few days before the end of the year, I had two full days of recording a great punk band, and I am now in the process of mixing their album! This month I’ll cover the setup I used for recording them; on my next blog, I’ll cover mixing the album.

To start, I want to talk a little about the album. It’s a 16-song garage/punk record. We only had two days to track those 16 songs, and guess what… WE DID IT! Everyone did a great job of executing their parts and staying focused (including myself). Since we had two days to record 16 songs, you’re probably thinking that we tracked the album live, and you’re right! Personally, that’s my favorite way to record, but I know not all music really calls for that *particular* recording process.

On the first day of recording, we captured drums and bass. We kept some of the guitars we tracked and doubled them on the second day, but we’ll get to that later. For drums, I’ve lately been straying away from the *less is more* mentality and close-miking more of the kit. I ran into trouble on a couple of my own projects by choosing not to mic certain things and trying to use the overheads or the room to capture them instead; the mixing engineer then didn’t have proper control over the things I chose not to mic. On that note, if you’re also straying from the less-is-more mindset, CHECK PHASE! The more mics you have up, the greater the likelihood of phase problems. Anyway, we got GREAT drum tones on that first day of tracking. The bass was very simple: I just captured a DI and put one dynamic microphone on the isolated bass amp. It was beefy, yet clean, and now I have two great tones to work with in mixing.
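As an aside on what “check phase” can look like in practice, here is a hedged sketch (Python with NumPy; the signals are synthetic noise standing in for a snare hit, and the 120-sample offset is an assumed value, not a measurement from this session) of using cross-correlation to see how much later a distant mic receives a close-miked source:

```python
import numpy as np

# Estimate the arrival-time offset between a close mic and the overheads.
fs = 48_000
rng = np.random.default_rng(0)
snare_close = rng.standard_normal(fs // 10)  # 100 ms stand-in for a snare hit

# Pretend the overheads pick up the same hit 120 samples (2.5 ms) later.
offset = 120
overhead = np.concatenate([np.zeros(offset), snare_close])[: len(snare_close)]

# Cross-correlate: the lag of the peak is the measured offset.
corr = np.correlate(overhead, snare_close, mode="full")
lag = int(np.argmax(corr)) - (len(snare_close) - 1)  # positive = overhead later
print(f"overhead arrives {lag} samples ({lag / fs * 1e3:.2f} ms) after the close mic")
```

Once you know the offset, you can nudge the close mic’s channel delay (or flip polarity and listen) and decide by ear which relationship sounds right.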

On the second day of recording, we knocked out guitars, vocals, and harmonies. We worked the whole day through. The first thing we started on was guitars. We re-recorded some of the songs from the day before, and on the rest we doubled the guitars (for a wider stereo image) and added more layers of tone. For the guitars, I used one dynamic and one condenser (an SM7B and a Mojave MA-201fet), placing each mic on a different speaker in the cab. We got a great tone out of this setup.

For vocals, it was very simple! I used a microphone I wouldn’t typically use for recording vocals, but it worked for this style: an SM7B, through a Universal Audio 610 preamp. We doubled the choruses and, depending on the song, sometimes the whole thing. After recording the main vocals, the bass player recorded harmonies. For the harmonies, I used a CM7. I wanted these vocals to sound airier and lighter than the main vocals; the main vocals have grit, these counteract it, and I believe we achieved that pretty well.

As I’m going into the mixing process for this album- I’ll be taking notes along the way of techniques, plug-ins, and other things I’d like to share with you on my next blog. Until then everyone!

There Really Is No Such Thing As A Free Lunch

Using The Scientific Method in Assessment of System Optimization

A couple of years ago, I took my first class from Jamie Anderson at Rational Acoustics, and he said something that has stuck with me ever since, something to the effect of: our job as system engineers is to make it sound the same everywhere; it is the mix engineer’s job to make it sound “good” or “bad.”

The reality in the world of live sound is that there are many variables stacked against us: a scenic element in the way of speaker coverage, a client that does not want to see a speaker in the first place, a speaker that has done one too many gigs and decides that today is the day for a driver to die during load-in, or any of a myriad other things that can stand in the way of the ultimate goal: a verified, calibrated sound system.

The Challenges Of Reality

 

One distinction must be made before beginning the discussion of system optimization, so let’s make all intentions clear: what is our role at this gig? Are you performing only the tasks of the systems engineer? Are you the systems engineer and the FOH mix engineer? Are you also the tour manager, working directly with the artist’s management? Why does this matter, you may ask? When it comes down to making final evaluations on the system, there will be executive decisions to make, especially in moments of triage. Having clearly defined your role at the gig will help you make those decisions when the clock is ticking.

So in this context, we are going to discuss the decisions of system optimization from the point of view of the systems engineer. We have decided that the most important task of our gig is to make sure that everyone in the audience is having the same show as the person mixing at front-of-house. I’ve always thought of this as a comparison between a painter and a blank canvas: it is the mix engineer’s job to paint the picture for the audience to hear, and it is our job as system engineers to provide the same blank canvas every day so the painting sounds the same.

The scientific method teaches the concept of control over variables. We have an objective that we wish to achieve; we assess our variables in each scenario to come up with a hypothesis of what we believe will happen. Then we execute a procedure, controlling the variables we can, and analyze the results with the tools at hand to draw conclusions and determine whether we have achieved our objective. Recall that an independent variable is a factor you deliberately manipulate, while a dependent variable is the outcome you observe in response; anything you cannot change becomes a fixed constraint you must design around. In the production world, these terms can have a variety of implications. It is an unfortunate, commonly held belief that system optimization starts at the EQ stage, when really there are so many steps before that. If there is a column in front of a hang of speakers, no EQ in the world is going to make them sound like they are not shadowed behind a column.

Now everybody take a deep breath in and say, “EQ is not the solution to a mechanical problem.” And breathe out…

Let’s start with preproduction. It is time to assess our first round of variables. What are the limitations of the venue? Trim height? Rigging limitations? What are the limitations imposed by the client? Maybe another element of the show necessitates the PA being placed in one position over another; maybe the client doesn’t want to see speakers at all. In each scenario we must ask, with our technical brains and our career paths in mind: what can we change and what can we not change? Note that it will not always be the same in every circumstance. In one scenario, we may be able to convince the client to let us put the PA anywhere we want, making placement a variable under our control. In another situation, for the sake of our gig, we must accept that the PA will not move, or that the low steel of the roof is a bleak 35 feet in the air, and thus we face a fixed constraint.

The many steps of system optimization that lie before EQ

 

After assessing these first sets of variables, we can move into the next phase and look at our system design. Again, say it with me: “EQ is not the solution to a mechanical problem.” We must assess our variables again in this next phase of the optimization process. Perhaps we have been given the technical rider of the venue and, due to budgetary restraints, we cannot change the PA: a fixed constraint. Perhaps we are carrying our own PA and thus have control over the design, within limitations from the venue: a variable we control, but with caveats. Let’s look deeper into this particular scenario and ask ourselves: as engineers building our design, what do we have control over now?

The first step lies in what speaker we choose for the job. Given the ultimate design-control scenario, where we have the luxury of picking the loudspeakers used in our design, different directivity designs lend themselves better to one scenario than another. A point source has just as much validity as a line array, depending on the situation. For a small audience of 150 people with a jazz band, a point source speaker on a sub may be more appropriate than showing up with a 12-box line array that necessitates a rigging call to fly from the ceiling. But even in this scenario, there are caveats in our delicate weighing of variables. Where are those 150 people going to be? Are we in a ballroom or a theater? Even the choice of what box to use in a design is as varied as deciding what type of canvas we wish to provide for the mix engineer’s painting.

So let’s create a scenario: let’s say we are doing an arena show and the design has been established with a set number of boxes for daily deployment with an agreed-upon design by the production team. Even the design is pretty much cut and paste in terms of rigging points, but we have varying limitations to trim height due to high and low steel of the venue. What variables do we now have control over? We still have a decent amount of control over trim height up to a (literal) limit of the motor, but we also have control over the vertical directivity of our (let’s make the design decision for the purpose of discussion) line array. There is a hidden assumption here that is often under-represented when talking about system designs.

A friend and colleague of mine, Sully (Chris) Sullivan, once pointed out to me that a hidden design assumption we often make as system engineers, but don’t necessarily acknowledge, is that the loudspeaker manufacturer has actually achieved the horizontal coverage dictated by the technical specifications. This made me reconsider the things I take for granted in a given system. Say our design uses Manufacturer X’s 120-degree line source element. Their technical specs establish a measurable point at 60 degrees off-axis (120-degree total coverage) where the polar response drops 6 dB. We can take our measurement microphone and check that the response is what we think it is, but if it isn’t, what really are our options? Perhaps we have a manufacturing defect or a blown driver somewhere, but unless we change the physical parameters of the loudspeaker, this is a variable we entrust to the manufacturer. So what do we have control over? He pointed out to me that our decision choices lie in the manipulation of the vertical.

Entire books and papers can and have been written about how we can control the vertical coverage of our loudspeaker arrays, but certain factors remain consistent throughout. Inter-element angles, or splay angles, let us control the summation of elements within an array. Site angle and trim height let us control the geometric relationship of the source to the audience and thus affect the spread of SPL over distance. Azimuth also gives us geometric control of the directivity pattern of the entire array along a horizontal dispersion pattern. Note that this is a distinction from the horizontal pattern control of the frequency response radiating from the enclosure, of which we have handed responsibility over to the manufacturer. Fortunately, the myriad of loudspeaker prediction software available from modern manufacturers has given the modern system engineer an unprecedented level of ability to assess these parameters before a single speaker goes up into the air.
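As a back-of-the-envelope illustration of why trim height belongs on that list of geometric controls, here is a sketch (Python; it simplifies the hang to a single point source obeying the inverse-square law, and every distance is a made-up example number, not data from a real venue) comparing front-to-back level variance at two trim heights:

```python
import math

def level_change_db(d_ref: float, d: float) -> float:
    """Inverse-square-law level change in dB going from d_ref to d (metres)."""
    return -20 * math.log10(d / d_ref)

front, back = 5.0, 40.0  # horizontal distance to first and last row, metres

for trim in (4.0, 9.0):  # two hypothetical trim heights
    d_front = math.hypot(front, trim)  # straight-line path to the front row
    d_back = math.hypot(back, trim)    # straight-line path to the back row
    variance = level_change_db(d_front, d_back)
    print(f"trim {trim:4.1f} m: front row to back row = {variance:+.1f} dB")
```

Flying the source higher evens out the two path lengths, so the level variance across the audience shrinks (here from roughly -16 dB to -12 dB) before any EQ or delay ever enters the picture.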

At this point, we have made a lot of decisions on the design of our system and weighed the variables along every step of the way to draw out our procedure for the system deployment. It is now time to analyze our results and verify whether what we thought was going to happen actually happened. Here we introduce our tools to verify our procedure in a two-step process of mechanical, then acoustical, verification. First, we use tools such as protractors and laser inclinometers to collect data and assess whether we have achieved our mechanical design goal. For example, if our model says we need a site angle of 2 degrees to achieve this result, we verify with the laser inclinometer that we got there. Once we have confirmed that we met our design’s mechanical goals, we must analyze the acoustical results.

Laser inclinometers are just one example of a tool we can use to verify the mechanical actualization of a design.

It is only at this stage that we finally introduce measurement software to analyze the response of our system. After examining our role at the gig, the criteria involved in pre-production, choosing design elements appropriate for the task, and verifying their deployment, only now can we move into the realm of analysis software to see if all those goals were met. We can utilize dual-channel measurement software to take transfer functions at different stages of the input and output of our system to verify that our design goals have been met, and, more importantly, to see where they have not been met and why. This is where our ability to critically interpret the data comes into play. By evaluating impulse response data, dual-channel FFT (Fast Fourier Transform) measurements, and the coherence of our gathered data, we can assess how well our design has been realized in the acoustical and electronic realms.
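The core of that dual-channel idea fits in a few lines. Here is a minimal sketch (Python with NumPy; the “system” under test is synthetic, just attenuating the reference by 6 dB and delaying it by 1 ms, and real-world practice adds averaging, windowing, and coherence weighting on top of this):

```python
import numpy as np

fs = 48_000
rng = np.random.default_rng(1)
reference = rng.standard_normal(fs)  # 1 s of test noise into the system

# Synthetic system under test: -6 dB of gain and 1 ms of delay.
delay, gain = 48, 0.5
measured = gain * np.roll(reference, delay)

# Transfer function: measured spectrum divided by reference spectrum.
H = np.fft.rfft(measured) / np.fft.rfft(reference)
mag_db = 20 * np.log10(np.abs(H))  # flat at about -6 dB
phase = np.unwrap(np.angle(H))     # linear slope set by the 1 ms delay

print(f"mean magnitude: {mag_db.mean():.1f} dB")  # → -6.0 dB
```

Because both channels see the same stimulus, the division cancels the source material out; the same math works with a music signal instead of noise, which is what lets these measurements run during a show.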

What’s interesting to me is that the discussion of system optimization often starts here. In fact, as we have seen, the process begins as early as the pre-production stage, when talking with different departments and the client, and even when asking ourselves what our role is at the gig. The final analysis of any design comes down to the tool that we always carry with us: our ears. Our ears are the final arbiters after our evaluation of acoustical and mechanical variables, and they are used along every step of our design path, along with our trusty “common sense.” In the end, our careful assessment of variables leads us to utilize the power of the scientific method to make educated decisions toward our end goal: the blank canvas, ready to be painted.

Big thanks to the following for letting me reference them in this article: Jamie Anderson at Rational Acoustics, Sully (Chris) Sullivan, and Alignarray (www.alignarray.com)

New Decade – New Year

The New Year and a new decade have begun! Thoughts of reinvention and feelings of excitement fill the air. This time of year can often feel overwhelming. I am between apartments and jobs, I just finished my bachelor’s, and I am headed into a master’s program. Life has been a roller coaster.

It is no secret that the audio and music industry can be challenging, but as a young woman, I have definitely been feeling the pressure to find work and be successful. Something I am sure we all feel. There has also been a lot of talk amongst my friends and peers about depression and seasonal depression. It seems to flourish in the cold, dark months. I myself have been struggling with it. I wanted to do something proactive to combat these negative thoughts and emotions and to welcome in the New Year in a positive way. So I looked to my community for support and ideas.

Now, one thing you may not know about me is that I am the founder of the Michigan Technological University SoundGirls chapter. It is something I am incredibly proud of and sad to have moved away from. It’s okay though, I left it in good hands.

A few weeks ago, I emailed some of the members of this organization and asked them to fill out a short set of interview questions. I consider many of these young women friends, if not family. One response in particular brought me not only joy, but hope for a brighter future for all of us in this amazing, yet challenging, industry.

Izzy Waldie is a first-year Audio Production major and the newly elected secretary of the Michigan Tech SoundGirls chapter. She is not only incredibly creative but also very good at STEM classes, which, I have to admit, I am not. Because she is still in her first year, I asked Izzy a few icebreaker questions.

Sarah: “So Izzy, what are you excited for or looking forward to in your time here at Michigan Tech in the visual performing arts department?”

Izzy: “I’m just really looking forward to doing more projects with people, and making stuff I’m really proud of.”

When asked about the university chapter specifically, she responded:

Izzy: “Next semester I really want to do some creative projects with SoundGirls. We will be finishing up our movie project which will be really cool, but I want to do more projects. I was thinking of maybe just us recording a song. Nothing fancy, it could be just for fun, and we could do it with all the musicians in the organization. Now that I’m on the management board I really want to help head up some of these projects.”

When I was in my first year at Michigan Tech, I was one of two female students out of the two audio programs. Now, those numbers have been multiplied by at least five. The fact that there is an organization where students can go, create things together, learn and refine their skills, all while being supportive of each other, makes my heart melt. It reminds me that life isn’t always a challenge. Their excitement makes me excited.

Sarah: “Recently you said you don’t know what you are doing and I wanted to talk to you a little bit more about that. It is my opinion that you don’t need to know exactly what you are doing and it is more important to know what you don’t want to do. By exploring different areas and avenues, you are figuring out what you are doing or at least what you want to do. What are some things that you are exploring, interested in, or new things you might want to try out?”

Izzy: “I’ve definitely realized that what you already know isn’t as important as how willing you are to learn. I still don’t know what I’m doing, no one ever knows 100% what they’re doing, but I definitely have learned a lot this semester. I’ve seen this the most working at the Rozsa (the Rozsa Performing Arts Center). I had basically no experience at first, but now I’m working the sound and lighting boards pretty confidently. One thing I really want to get more into is recording. I’ve helped out with some other people’s projects and would like to work more creatively with it.”

Izzy made an observation that most students might overlook: what you already know isn’t as important as how willing you are to learn. Not only was she a student in the organization I was president of, but I was also a teaching assistant for one of her classes. This statement is a testament to how she approaches learning, and it is an excellent characteristic for the industry she is headed into.

I was feeling revitalized by the end of our conversation. I had received new hope, excitement, and appreciation from talking with Izzy. To finish out the conversation I asked her something a little more personal.

Sarah: “Tell me something good that happened this semester in our department that you will remember for a while, that makes you smile?”

Izzy: “This semester was awesome. Never did I think I’d be so involved as a first-semester student. One of the best parts for me was working on the haunted mine (a project that the visual performing arts department collaborates on with the Quincy Mine owner every Halloween). We were down there for a really long time, but the idea was really cool and so were the people. I also really liked working in the different groups on the audio movie project. I made a lot of friends while working on it, and our Dr. Seuss The Lorax audio movie ended up being pretty fun to make. I remember one time, at like midnight, a bunch of us were in Walker Film Studio working on one of the audio movies while passing around a 2-liter of Dr. Pepper.”

Izzy’s responses were wholesome and honest. To me, she has a perspective that exceeds her age. It was a nice reminder when faced with the daunting challenges of moving to a new area, finding work, and starting anew; a reminder of why I chose this career field. I chose it for the exciting new projects, learning new things, and working late into the night with people you hardly know but who will soon feel like family.

Though our conversation had ended, I was feeling myself again and that was because of the connections and relationships I had made through our little SoundGirls chapter. At the core of SoundGirls, you will find this kind of understanding from its members. We are here to listen to one another, remind each other of why we are here and doing what we love, and create an environment that welcomes all who are seeking opportunities and support. I wish you all a prosperous and happy New Year.

 

The Hive: Cleaning Microphones

From the Hive:  Recommendations and Advice from our Community Through Social Media Discussions.

Topic: Cleaning Microphones

Someone recently asked: what is the best way to clean microphones, now that it’s cold season? Of course, it’s great to clean your mics after someone who’s sick has used them, but it’s also good to clean them frequently no matter what.

Here’s how our community responded, along with some additional resources on cleaning microphones.

The Quick Clean:

Many folks recommended disinfecting wipes to quickly wipe down the grille of the microphone after use. Also consider using a non-flavored Listerine.

You could also use Purell or another kind of hand sanitizer. I recommend avoiding any with added fragrances just to make sure that it doesn’t impede someone’s use of the mic. Many people have fragrance sensitivities.

Microfoam was also suggested; this is a foaming sanitizer or deodorizer, much like the gel sanitizers we see all the time.

There are also industry-specific cleaners such as Thomann microphone cleaner or the Microphome Cleaning Kit. Hosa also sells a whole line of cleaner sprays for items we encounter in our industry; for mics, they sell Goby Labs’ microphone sanitizer.

 

The Deep Clean:

Remove the grilles and foam. Wash the foam with isopropyl alcohol, antibacterial hand soap, or Dawn detergent. The grilles can be washed with the same items, using a toothbrush to get a good scrub, or thrown in a dishwasher for a deep clean; just make sure both are completely dry before use.

Others suggested an ultrasonic jewelry cleaner. At around $40, these cleaners can wash more than microphone parts. Small in size and needing only limited amounts of cleaner and soap, this tool could be especially useful if you’re on tour.

When things get really bad, you can also replace the grille and foam on a majority of microphones, but hopefully none of us have to deal with something that bad! Keep in mind that antibacterial soaps and isopropyl alcohol won’t kill some viruses; bleach solutions, hydrogen peroxide, or replacement are your best options to stop the spread of tough viruses.

Bonus response!

For windscreens or pop filters, soak them in a 10% bleach solution and rinse in cold water. This eliminates germs, viruses, and odor.

Thank you to Jennalyn Alonzo for posing the question and thank you to all of our community members for their great responses!

 

Gain Without the Pain

 

Gain Structure for Live Sound Part 1

Gain structure and gain staging are terms that get thrown around a lot, but they often get skimmed over as obvious without ever being fully explained. The way some people talk about it (and mock other people’s), you’d think proper gain structure was some special secret skill, known only to the most talented engineers. It’s actually pretty straightforward, but knowing how to do it well will save you a lot of headaches down the line. All it really is is setting your channels’ gain levels high enough that you get plenty of signal to work with, without risking distortion. It often gets discussed in studio circles because it’s incredibly important to the tone and quality of a recording, but in a live setting we have other things to consider on top of that.

So, what exactly is gain?

It seems like the most basic question in sound, but the term is often misunderstood. Gain is not simply the same as volume. It’s a term from electronics that refers to the increase in amplitude of a signal between a circuit’s input and its output. In our case, it’s how much we change our input’s amplitude by turning the gain knob. In analogue desks, that means engaging more circuits in the preamp to increase the gain as you turn (have you ever used an old desk where you needed just a bit more level, so you slowly and smoothly turned the gain knob and it made barely any difference… nothing… nothing… then suddenly it was much louder? That was probably it crossing the threshold to the next circuit being engaged).

Digital desks do something similar using digital signal processing. It is often called trim instead of gain, especially if no actual preamp is involved. For example, many desks won’t show you a gain knob if you plug something into a local input on the back, because their only preamps are in the stagebox; you will see a knob labelled trim instead (I do know these knobs are technically rotary encoders because they don’t have a defined end point, but they are commonly referred to as knobs. Please don’t email in). Trim can also refer to finer adjustments in the input’s signal level, but as a rule of thumb, it’s pretty much the same as gain. Gain is measured as the difference between the signal level when it arrives at the desk and when it leaves the preamp at the top of the channel strip, so it makes sense that it’s measured in decibels (dB), which is a measurement of ratios.
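Since decibels express ratios, the arithmetic is easy to sketch. This little example is mine, not from any desk’s manual, and the function name is made up; it just shows how a voltage ratio becomes a dB figure:

```python
import math

# Gain in dB is twenty times the log of the output/input voltage
# ratio -- decibels describe a ratio, not an absolute level.
def gain_db(v_in: float, v_out: float) -> float:
    return 20 * math.log10(v_out / v_in)

# Doubling the voltage adds roughly 6 dB of gain;
# ten times the voltage adds 20 dB.
print(round(gain_db(1.0, 2.0), 1))   # 6.0
print(round(gain_db(1.0, 10.0), 1))  # 20.0
```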

The volume of the channel’s signal once it’s gone through the rest of the channel strip and any outboard is controlled by the fader. You can think of the gain knob as controlling input, and the fader as controlling output (let’s ignore desks with a gain on fader feature. They make it easier for the user to visualise the gain but the work is still being done at the top of the channel strip).

Now, how do you structure it?

For studio recording, the main concern is getting a good amount of signal over the noise floor of all the equipment being used in the signal chain. Unless you’re purposefully going for a lo-fi, old-school sound, you don’t want a lot of background hiss all over your tracks. A nice big signal-to-noise ratio, without distortion, is the goal. In live settings, we can view other instruments or stray noises in the room as part of that noise floor, and we also have to avoid feedback at the other end of the scale. There are two main approaches to setting gains:

Gain first: With the fader all the way down, you dial the gain in until it’s tickling the yellow or orange LEDs on your channel or PFL while the signal is at its loudest, but not quite going into the red or ‘peak’ LEDs (of course, if it’s hitting the red without any gain, you can stick a pad in. You might find a switch on the microphone, instrument or DI box, and the desk. If the mic is being overwhelmed by the sound source it’s best to use its internal pad if it has one, so it can handle it better and deliver a distortion-free signal to the desk). You then bring the fader up until the channel is at the required level. This method gives you a nice, strong signal. It also gives that to anyone sharing the preamps with you, for example, monitors sharing the stagebox or multitrack recording. However, because faders are measured in dBs, which are logarithmic, it can cause some issues. If you look at a fader strip, you’ll see the numbers get closer together the further down they go. So if you have a channel where the fader is near the bottom, and you want to change the volume by 1dB, you’d have to move it about a millimetre. Anything other than a tiny change could make the channel blaringly loud, or so quiet it gets lost in the mix.

Fader at 0: You set all your faders at 0 (or ‘unity’), then bring the gain up to the desired level. This gives you more control over those small volume changes, while still leaving you headroom at the top of the fader’s travel. It’s also easier to see if a fader has been knocked, or to know where to return a fader after boosting it for a solo, for example. However, it can leave anyone sharing gains with weak or uneven signals. If you’re working with an act you’re unfamiliar with, or one that is particularly dynamic, having the faders at zero might not leave you enough headroom for quieter sections, forcing you to increase the gain mid-show. This is far from ideal, especially if you are running monitors, because you’re changing everyone’s mix without being able to hear those changes in real time, and increasing the gain increases the likelihood of feedback. In these cases, it might be beneficial to set all your faders at -5, for example, just in case.

In researching this blog, I found some people set their faders as a visual representation of their mix levels, then adjust their gains accordingly. It isn’t a technique I’ve seen in real life, but if you know the act well and it makes sense to your workflow, it could be worth trying. Once you’ve set your gates, compressors, EQ, and effects, and added the volume of all the channels together, you’ll probably need to go back and adjust your gains or faders again, but these approaches will get you in the right ballpark very quickly.

All these methods have their pros and cons, and you may want to choose between them for different situations. I learned sound using the first method, but I now prefer the second method, especially for monitors. It’s clear where all the faders should sit even though the sends to auxes might be completely different, and change song to song. Despite what some people might say, there is no gospel for gain structure that must be followed. In part 2 I’ll discuss a few approaches for different situations, and how to get the best signal-to-noise ratio in those circumstances. Gain structure isn’t some esoteric mystery, but it is important to get right. If you know the underlying concepts you can make informed decisions to get the best out of each channel, which is the foundation for every great mix.

 

Music of the Decade

 

As 2019 comes to a close, not only is a new year approaching but a new decade as well.

So, what better way to honour the last 10 years than to look back at some of the most defining moments in music?

January 2010

Touring is one of the biggest income streams for many artists and employs a number of roles behind the scenes from sound and lighting techs to stage designers and many more. So it was of major significance in 2010 when Live Nation and Ticketmaster merged.

January 2011
2011 saw the release of Adele’s record-breaking album ‘21’. Not only was this album successful in its release year, but it has become the biggest-selling album of the decade.

July 2012
This year saw the release of Psy’s ‘Gangnam Style’, the first video on YouTube to reach one billion views.

December 2013
Beyoncé changed the game, dropping a visual album out of nowhere on December 13th with no promotion.

2014
Taylor Swift released ‘1989’, a huge step in securing her status as a pop star. Another notable release was Ed Sheeran’s hit song ‘Thinking Out Loud’.

October 2015
Adele released ‘Hello’ after a short hiatus.

April 2016
In April, Drake released ‘One Dance’. This was also the same month we lost the extremely talented Prince. Beyoncé also released her visual album ‘Lemonade’.

2017
Kendrick Lamar released ‘Humble’.

2018
2018 brought us albums like Cardi B’s ‘Invasion of Privacy’, Drake’s ‘Scorpion’, and Kacey Musgraves’ ‘Golden Hour’.

2019
Billie Eilish released ‘When We All Fall Asleep, Where Do We Go?’

Looking back, music has changed quite considerably in genre and style over this decade, and it’s wonderful to know that we are entering a new one led by a wide variety of artists. I’m sure we are all looking forward to what is to come in the next decade.

 

Shadowing FOH/TM Tim Harding – Queensrÿche

Tim Harding has invited SoundGirls Members to shadow him on Queensrÿche, where he is the FOH Engineer and Tour Manager. The day will consist of the following schedule:

Show Available

With more than 25 years in the trenches, Tim’s career has taken him from a lowly guitar and keyboard tech for Kenny G to tour managing and front of house mixing for acts including Metal Church, Michael Schenker, and Winger. His background in everything from metal fests to corporates keeps him busy in the Seattle area where he is most often found working as an A1 and A2 at various stadiums, arenas, and theaters throughout the region. When not making stuff loud, he can be found at Harding Acres getting his hands dirty on the property.

Whose Job is It? When Plug-in Effects are Sound Design vs. Mix Choices.

We’ve reached out to our blog readership several times to ask for blog post suggestions.  And surprisingly, this blog suggestion has come up every single time. It seems that there’s a lot of confusion about who should be processing what.  So, I’m going to attempt to break it down for you.  Keep in mind that these are my thoughts on the subject as someone with 12 years of experience as a sound effects editor and supervising sound editor.  In writing this, I’m hoping to clarify the general thought process behind making the distinction between who should process what.  However, if you ever have a specific question on this topic, I would highly encourage you to reach out to your mixer.

Before we get into the specifics of who should process what, I think the first step to understanding this issue is understanding the role of mixer versus sound designer.

UNDERSTANDING THE ROLES

THE MIXER

If we overly simplify the role of the re-recording mixer, I would say that they have three main objectives when it comes to mixing sound effects.  First, they must balance all of the elements together so that everything is clear and the narrative is dynamic.  Second, they must place everything into the stereo or surround space by panning the elements appropriately.  Third, they must place everything into the acoustic space shown on screen by adding reverb, delay, and EQ.

Obviously, there are many other things accomplished in a mix, but these are the absolute bullet points and the most important for you to understand in this particular scenario.

THE SOUND DESIGNER

The sound designer’s job is to create, edit, and sync sound effects to the picture.


BREAKING IT DOWN

EQ

It is the mixer’s job to EQ effects if they are coming from behind a door, are on a television screen, etc.  Basically, anything where all elements should be futzed for any reason.  If this is the case, do your mixer a favor and ask ahead of time if he/she would like you to split those FX out onto “Futz FX” tracks. You’ll totally win brownie points just for asking.  It is important not to do the actual processing in the SFX editorial, as the mixer may want to alter the amount of “futz” that is applied to achieve maximum clarity, depending on what is happening in the rest of the mix.

It is the sound designer’s job to EQ SFX if any particular elements have too much/too little of any frequency to be appropriate for what’s happening on screen.  Do not ever assume that your mixer is going to listen to every single element you cut in a build, and then individually EQ them to make them sound better.  That’s your job!  Or, better yet, don’t choose crappy SFX in the first place!

REVERB/DELAY

It is the mixer’s job to add reverb or delay to all sound effects when appropriate in order to help them to sit within the physical space shown on screen.  For example, he or she may add a bit of reverb to all sound effects which occur while the characters on screen are walking through an underground cave.  Or, he or she may add a bit of reverb and delay to all sound effects when we’re in a narrow but tall canyon.  The mixer would probably choose not to add reverb or delay to any sound effects that occur while a scene plays out in a small closet.

As a sound designer, you should be extremely wary of adding reverb to almost any sound effect.  If you are doing so to help sell that it is occurring in the physical space, check with your mixer first.  Chances are, he or she would rather have full control by adding the reverb themselves.

Sound designers should also use delay fairly sparingly.  This is only a good choice if it is truly a design choice, not a spatial one.  For example, if you are designing a futuristic laser gun blast, you may want to add a very short delay to the sound you’re designing purely for design purposes.

When deciding whether or not to add reverb or delay, always ask yourself whether it is a design choice or a spatial choice.  As long as the reverb/delay has absolutely nothing to do with where the sound effect is occurring, you’re probably in the clear.  But you may still want to supply a muted version without the effect in the track below, just in case your mixer finds that the processed one does not play well in the mix.

COMPRESSORS/LIMITERS

Adding compressors or limiters should be the mixer’s job 99% of the time.

The only instance in which I have ever used dynamics processing in my editorial was when a client asked to trigger a pulsing sound effect whenever a particular character spoke (there was a visual pulsing to match).  I used a side chain and gate to do this, but first I had an extensive conversation with my mixer about if he would rather I did this and gave him the tracks, or if he would prefer to set it up himself.  If you are gating any sound effects purely to clean them up, then my recommendation would be to just find a better sound.

PITCH SHIFTING

A mixer does not often pitch shift sound effects unless a client specifically asks that he or she do so.

Thus, pitch shifting almost always falls on the shoulders of the sound designer.  This is because when it comes to sound effects, changing the pitch is almost always a design choice rather than a balance/spatial choice.

MODULATION

A mixer will sometimes use modulation effects when processing dialogue, but it is very uncommon for them to dig into sound effects with this type of processing.

Most often this type of processing is done purely for design purposes, and thus lands in the wheelhouse of the sound designer.  You should never design something with unprocessed elements, assuming that your mixer will go in and process everything so that it sounds cooler.  It’s the designer’s job to make all of the elements as appropriate as possible to what is on the screen.  So, go ahead and modulate away!

NOISE REDUCTION

Mixers will often employ noise reduction plugins to clean up noisy sounds.  But, this should never be the case with sound effects, since you should be cutting pristine SFX in the first place.

In short, neither of you should be using noise reduction plugins.  If you find yourself reaching for RX while editing sound effects, you should instead reach for a better sound! If you’re dead set on using something that, say, you recorded yourself and is just too perfect to pass up but incredibly noisy, then by all means process it with noise reduction software.  Never assume that your mixer will do this for you.  There’s a much better chance that the offending sound effect will simply be muted in the mix.


ADDITIONAL NOTES

INSERTS VS AUDIOSUITE

I have one final note about inserts versus AudioSuite plug-in use.  Summed up, it’s this: don’t use inserts as an FX editor/sound designer.  Always assume that your mixer is going to grab all of the regions from your tracks and drag them into his or her own tracks within the mix template.  There’s a great chance that your mixer will never even notice that you added an insert.  If you want an effect to play in the mix, then make sure that it’s been printed to your sound files.

AUTOMATION AS EFFECTS

In the same vein, it’s a risky business to create audio effects with automation, such as zany panning or square-wave volume automation.  These may sound really cool, but always give your mixer a heads up ahead of time if you plan to do something like this.  Some mixers automatically delete all of your automation so that they can start fresh.  If there’s any automation that you believe is crucial to the design of a sound, then make sure to mention it before your work gets dragged into the mix template.
