Empowering the Next Generation of Women in Audio

Join Us

Teaching the Next Generation of Audio Engineers

Life hardly ever takes the simplest route. Many in the field of audio embody this sentiment. When I first moved to the Nashville area from the West Coast (I live in neither location now), I joined the Nashville Chapter of the Audio Engineering Society on Facebook, and as soon as my request to join was approved I received a friend request. I am no social butterfly and was surprised by the notification. Audio instructor and engineer Jill Courtney had noticed that another female sound engineer had joined her beat, and that was reason enough for her to connect. However, in getting to know Jill, I learned that it is natural for her to mentor and support her sisters in arms. In the spirit of #breakingtheglassfader, I thought I would get Jill to share some of her secrets of teaching the next generation of SoundGirls.

What is your current job title?

Audio/Video Producer/Educator of JCreative Multimedia at www.jillcourtney.com

What subjects and grade levels have you taught? Any preference?

I have taught K-12 and college, and I think I prefer college, with middle school being a close second. High school would be third, and elementary fourth. I love all my students, though. I just relate to college and middle school students the most. Both college and middle school are defining periods in a student’s life.

What got you into teaching audio?

My first truly entrepreneurial adventure was a partnership called Sharkbait Studios, which originated in NYC. When my partner and I relocated the company to Nashville, we were networking among the local universities, of which there are many. During a networking meeting, the Chair of the Music and Performing Arts department at Tennessee State University asked for my resume, and I happened to have it handy. Right then and there, he asked if I would teach TSU’s audio production classes, and he even utilized me as an applied voice instructor and the Director of the vocal jazz ensemble.

I was newly out of my Master’s program at New York University and had only taught music, voice, Spanish, and other K-12 topics. After teaching at TSU for a year, I was in demand as an adjunct (ha!). I ended up working for Belmont University, The Art Institute of Tennessee-Nashville, and Nashville Film Institute, along with a two-year out-of-state residency at Lamar State College-Port Arthur, where I taught Commercial Applied Voice, Songwriting, Piano, Music Theory, etc. Once back in Nashville, Sharkbait Studios was closed, and JCreative Multimedia, my sole/soul venture, was established.

What skills (both audio and life skills) do you focus on in your classroom?

I teach my students to listen to details in the music at hand, to build the sound from the ground up if that is their task, to edit when the materials are flawed, and to polish a song into a finished product for online, CD, or video applications. I teach them to keep the end goal in mind from the start, to over-plan, to protect the quality of sound at every stage, and to be life-long learners without ego. Once you think you are a badass, you are finished. The most revered artists are the ones who are never good enough for their own standards and strive to be better than their former selves with every new project. I believe each new project should reflect an evolution of growth, and personally, I don’t believe in stagnating. So my skill set is constantly being added to or refined. A growth mindset is where it is at, and I hope that comes across to my students.

Equally important, I teach them that they must be prepared, punctual, professional, persistent, and passionate about their work. If one of those elements slips, then the commitment won’t be present enough to find continued success over time, unless they luck upon a hit or a really fortunate employment scenario. I teach them to be twice as good and half as difficult as their competitors. In addition, I think it is important to paint a picture of the reality of the job for them, because I would be doing my students a great disservice if I made it seem easy or glamorous; it takes a lot of years of hard work for the ease and glamour to show up, if ever they do.

I also teach my students the importance of beating deadlines. If a song or project is due on Tuesday, have it done completely by Sunday to allow for tech glitches, tweaks, or buffer time for life to mess with you. Inevitably, life WILL mess with you, so the peace of mind and a happy client are worth the extra effort. I love to under-promise and over-deliver with my clients. Often, the only pat on the back I get is a return client and a recommendation, and that is how I know I am doing well. This business isn’t for those who need verbal praise.

My students hear me preach about the importance of knowing the business side of the industry (music or film) as much as the technical/creative side. This is how you can be forever employable and indispensable to a company, team, or client.

What is generally your first lesson?

All lessons begin with the ear. Anyone who knows me will tell you that I am an excellent talker. However, it is my listening ability that keeps me working. Learning about the craft of sound in relation to spaces, the tools you utilize, and the subjects you wish to preserve through video and/or sound is crucial. But learning how to listen and reiterate what a client is seeking is perhaps equally crucial. If they come to you for a country track and leave with one that is a little too rock-like, they may love it, but they will feel unheard or manipulated on some level. If the client makes those decisions along the way, that is one thing. But the client needs to steer that ship, and as professionals, it is our job to facilitate that vision and only inject our creativity or opinions if requested. There is such nuance in human communication, especially these days. Being a great listener and an effective communicator is arguably as important as having well-refined artistry.

What have you learned from teaching?

Teaching has refined my own skill set immensely. I wouldn’t know my craft as well if I didn’t have the pressure of being on top of my game so I don’t make a fool of myself in front of a room full of students. Especially in audio, being a minority, I have to know my subject well, or inevitably, it will become a reason why a student is disrespectful or dismisses my authority or knowledge. Teaching has also highlighted where my strengths and weaknesses exist. It has afforded me a second chance at fully learning the parts in which I was deficient, so I can parlay that effectively, and has given me practice and a platform for showing off my strengths, conveying my secrets to success with a true joy for teaching and helping others. It has given me as much as I have given to the world over the last 20 years of teaching. I have also connected with the next generations in a way that I never would have otherwise. I love kids and young adults, but I never wanted to be a mom. In this way, I get to leave a legacy in the minds of the masses, which better serves humanity, in my opinion and circumstance.

Why is it important to include Arts (and STEM) in the general curriculum?

Funny you should ask this, as it is so very timely. My current research for my graduate Ed.S. program in Educational Leadership through Lipscomb University is focused on this very subject. The title of my research is “Promoting gender equity in audio and other STEM subjects.” I think that audio fields, specifically, are a perfect merger of Science, Technology, Engineering, and Math, and with the new STEAM initiatives that are trending, the Arts portion is also covered. STEM skills allow students to be versatile, and ultimately more successful in their future adult endeavors, which translates into more economic security, which translates into less hardship.
The more skills you have, the less you will starve. I am a walking advertisement for that fact, because this body hasn’t missed a meal in 42 years. Ha!

What makes audio a unique subject to teach?

Audio/sound is a trade or skill that at its root seems simple in nature. However, as you peel back the layers, the variables around it become more vital. The space, the tools chosen, the subject, the mood, the health of the individual… there are so many variables that can distinguish one moment in time from another. And yet, it is also an art form. Some pay little attention to the process with a sole focus on the product, but audio must treat both as equally crucial. Looking further, the business side, legalities, personal relationships, niche markets, and self-concept/limitations can all play into the final scope of one’s career in audio. It is unpredictable, beautiful, and an immense challenge. It can be a dream come true or a total nightmare, and everything in between. Teaching this subject is as subjective as each individual in the class. It is constant differentiation.
Not everyone has an equally musical ear. Not everyone has gumption. Not everyone is healthy. Not everyone is intrinsically motivated. The best I can do is find out more about my students (by listening) and then cater to their strengths and address any identified weaknesses or gaps in knowledge, provided that they are open to allowing me to do so.

I know you started a girls’ club in one of your schools. What was your goal in implementing it? How did your students respond? Do you have an interesting story from the group?

First, the Nashville Audio Women Facebook group is an online place of connection for the few women who are studying and/or working in audio/sound in Nashville. The other was a girls’ club that I began at the middle school where I was teaching last year. My main mission for this club was girls’ empowerment, because the middle school years are perhaps the most crucial for a growing girl in so many ways. Many of my students from that school, a Title 1 school, don’t have strong role models in their lives. Many had never met a woman like me.

Developing relationships based upon trust and respect was my primary goal. My secondary goal was to allow them to observe me as a strong female in this world, and once they knew me, they paid attention to how I interacted with the world. Another goal was to give them a person whom they could come to with all the questions they might have about being a female, kind of like an open-minded big sister. I think in this way, I was able to act as a role model that is unique – one that teaches because she digs helping kids, but also goes out and makes movies and recordings, sings with a rock band, pursues more academic degrees, and obsesses over animal photography in her non-teaching time. I wanted them to see that all of this is possible so that they might internalize it. My bottom line with my girls was to instill confidence and parlay life lessons. While they picked up on all of these things, they also wanted a space (in my classroom) where they could play touch football without boys. They wanted a place to paint their nails and video fake-fights for Snapchat. They wanted to ask me about boys and about periods and how to handle dramas with “haters.” I provided the space for all of that, as much as I could. I let the students direct how their club would go, largely, because I wanted them to build their own capacity as leaders and give them an honored voice.

How is mentorship important for young audio students?

I have many mentors. Many have been men, but I have found some incredible women too in more recent years. I believe mentors are crucial to growth and can help guide your career and provide you with a reality check and advice as you navigate the workplace. Mine are nothing short of lifesavers. For young audio students, one thing they may not realize is that if all goes well, the audio instructor will eventually become a colleague, so the relationship they build with teachers is all the more crucial. Teachers can help them find employment, write recommendation letters, and help them create lifelong connections. The reach of the teacher is often the potential reach of the student, if the student proves her/himself to be worthy of such extensions of help and resources.

Any advice for the next generation?

Oh my, where do I begin? Well, I would highly suggest that any interested potential audio/sound student be as crystal clear as possible on the economic realities of this industry right now. I would encourage them to build an arsenal of skills that they can utilize in a variety of related industries. I would recommend focusing on the parts of the industry with the most jobs and the most welcoming atmospheres, and being open-minded to all styles of music and sound jobs. I would also advise that they interview as many people as they possibly can along the way so that they can make informed decisions about how they want to paint their lives. Education has been key for me in remaining relevant and employed, both in the industry and beyond, and while I probably take it to an extreme that not everyone can handle, I recommend being a life-long learner. I am like a sponge with information and am wholly unafraid to admit that I am uneducated in certain areas, but these are often the areas that arouse my curiosity. My former boss called me the ‘Swiss Army Knife’ of the department, which is flattering and probably a bit accurate. I want my students to be similar so that they can survive in their chosen industry for as long as it makes sense for them.

In addition, I always thought my career would be linear. We are always taught this as we grow up. But my career has taken the most unpredictable zigzags, and I have finally come to understand that in some cases, this is the norm. This business rarely sees someone graduate with a college degree, go into the field, stay at the same company, and retire with a pension. So creative thinking, a diverse skill set, and a willingness to change it up when necessary are crucial for forward motion.


 

Live digital audio in plain English part 2

Never mind the bit clocks… it’s a word clock primer

My last blog dealt with translating audio into a digital signal. The next step is keeping that signal in time when it’s being captured, processed, and sent to different parts of the system. This is where the fabled word clock comes in. If anything weird ever happens with a digital setup, like odd clicks or pops over the PA, you can seem wise beyond your years by nodding sagely, saying “Hmm, it sounds like a clocking issue”, then making your excuses and leaving before any further questions can be asked. However, you can become a rare and very valuable member of your audio team by actually learning what word clocks are, how they work, and how to fix the most common problems they can cause. They might seem strange and complicated, but they are of course not black magic. It’s all about crystals.

So… what is a word clock?

Any device receiving audio sees a string of 1s and 0s. How does it know whether 0000011100001011 is two samples, reading 00000111 (= 7) and 00001011 (= 11), or the second half of a sample, a full sample (01110000 = 112), and the first half of the next one? As you can see, the resulting values can be very different, so it’s essential to get it right to the exact bit.
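Here’s that ambiguity as a minimal Python sketch (the same 16 bits as above, parsed with two hypothetical framings):

```python
# The same bitstream parsed two ways, depending on where the
# receiver believes each 8-bit word starts.
stream = "0000011100001011"

# Framing A: word boundaries at bits 0 and 8.
aligned = [int(stream[i:i + 8], 2) for i in range(0, 16, 8)]
print(aligned)  # [7, 11]

# Framing B: boundaries shifted by 4 bits - the middle 8 bits form a
# complete word (112); the edges belong to neighbouring samples.
print(int(stream[4:12], 2))  # 112
```

Without a clock to mark where each word starts, both readings are equally valid.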
A word clock is a signal that is sent at a very accurate frequency of one square wave per sample (the bits in each sample make up a ‘word’). This signal is produced by passing an electrical current through a small crystal inside a word clock generator. The rising edge of the resulting wave means 1; the falling edge means 0. The clock runs alongside the audio signal, with 1 usually meaning “this is the start of the sample” and 0 meaning “this is the end.” Different shapes and sizes of crystal resonate at different frequencies, and more subtle changes are controlled by variations in the voltage running through the circuit and by temperature. Some clock generators even keep their crystals in tiny ‘ovens’ to keep the temperature constant.

What the clock?

Clocks are necessary for a few different stages in the signal path. AD converters might take a fixed number of samples per second, but they still need to make sure those samples are evenly spaced. If they aren’t, the waveform will end up deformed when reproduced by something that is in time. Thinking back to the video analogy from the last post, it’s like film taken on old hand-cranked cameras: uneven capturing of the signal leads to weird inconsistencies when it’s played back. In audio, this is referred to as jitter. It can also happen when an accurately-captured signal gets reconverted with an unreliable clock, like a film being played on a clunky projector (see figure 1). Clocks used to trigger the capturing of the signal are often called sample clocks. There are also bit clocks, which produce one cycle per bit. These days they are only used for signal transport within devices, for example from one PCB to another. You’re very unlikely to encounter a problem with a bit clock, and if you do, there isn’t much you can do except send the device back for repair. You might also hear people referring to word clock as sync clock, signal clock, or simply clock.
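To get a feel for jitter, here’s a toy Python simulation (made-up numbers, not modelled on any real converter): it samples a sine wave at slightly uneven times, then compares those samples with what a perfectly even clock would have captured.

```python
import math
import random

fs = 48_000     # nominal sample rate (Hz)
f = 1_000       # test tone (Hz)
jitter = 2e-6   # up to 2 microseconds of timing error per sample

random.seed(0)
for n in range(8):
    ideal_t = n / fs
    actual_t = ideal_t + random.uniform(-jitter, jitter)
    # The converter grabs the wave at the *actual* (jittered) moment...
    captured = math.sin(2 * math.pi * f * actual_t)
    # ...but playback assumes the sample was taken at the *ideal* moment,
    # so the reproduced waveform is slightly deformed.
    intended = math.sin(2 * math.pi * f * ideal_t)
    print(f"sample {n}: amplitude error = {captured - intended:+.6f}")
```

The errors look tiny, but they grow with signal frequency and show up as noise and distortion.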

Figure 1: An AD converter with a stable word clock (represented by the square wave at the top) captures an accurate waveform (left), but if it’s converted back to analogue through a DA converter with an unstable clock, the waveform will become deformed (right). Source: Apogee Knowledge Base http://www.apogeedigital.com/knowledgebase/fundamentals-of-digital-audio/what-is-jitter/

 

One clock to rule them all

Figure 2: A stable clock compared to a jittery one, compared to one whose frequency has drifted. Jitter is caused by a varying clock frequency, whereas a clock that has drifted has a pretty stable frequency. It’s just the wrong one. Source: Apogee Knowledge Base http://www.apogeedigital.com/knowledgebase/fundamentals-of-digital-audio/word-clock-whats-the-difference-between-jitter-and-frequency-stability/

What we are really interested in for live audio is using word clock to keep multiple devices, e.g., the front of house desk, monitor desk, and system processors, in sync. Think of it like keeping a band in time: most digital devices on the market have their own internal clock, so it’s like each member of the band having their own click (or metronome if you’re that way inclined). If it’s a solo artist, there’s no problem. Even if the click wanders a bit, it probably won’t be noticeable, because there’s nothing to compare it to. However, when there are several members, they need to stay in tempo. Neither clocks nor clicks are perfect, and even if everyone starts off together, they will eventually fall out of sync (known as frequency drift. See figure 2). It makes sense to choose one person to keep the beat for everyone else, like the drummer. Much in the same way, you need to designate one device in your system to be the master clock, and the other devices are slaves who sync their clocks to the master. Sometimes, it can be even better to get a separate device whose only job is to keep time, i.e., an external word clock generator. This is like hiring a professional conductor for the band. Much like a conductor though, they can be very expensive and for the most part aren’t necessary as long as you have a good enough band/set up.

Mirga Gražinytė-Tyla conducting the City of Birmingham Symphony Orchestra at the Snape Maltings Concert Hall during the Aldeburgh Festival, 2017, by Matt Jolly. https://en.m.wikipedia.org/wiki/File:Mirga_Gra-inyt–Tyla_conducts_the_CBSO,_Aldeburgh_Voices_and_Aldeburgh_Music_Club_at_Aldeburgh_Festival-crop.jpg

Each device still uses its own clock when following the master. It constantly monitors what phase in the cycle the incoming clock signal is at and compares it to its own. If the two fall out of time, the device can adjust its clock (usually by varying the voltage running through the crystal) until it’s locked in sync. The circuit that does this is called a phase-locked loop. It’s like a band member nudging the speed of their click or metronome until it matches the conductor. However, some common sense is needed. You don’t want to constantly adjust for every tiny discrepancy, nor do you want everyone to follow when the conductor is obviously wrong, like if they sneeze or fall over. A phase-locked loop’s sensitivity can be adjusted, so it ignores fleeting differences and remains locked to the last signal it received if the master clock outputs major errors or drops out of the system. The device will then continue at that speed until the master gets reinstated or replaced, but will slowly drift if this doesn’t happen. The sensitivity can also be adjusted depending on how good the device is compared to the master. If your conductor isn’t the best, it might be better to listen to your own click when in doubt (or invest in a better conductor). A toy sketch of this idea follows below. In the next post, I’ll discuss how all this relates to our real-life setups.
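The toy sketch (illustrative numbers only; a real phase-locked loop is an analogue circuit and far more refined): a device repeatedly compares its phase to the master’s and trims its clock frequency in proportion to the error.

```python
master_freq = 48_000.0   # the master clock (Hz)
local_freq = 48_010.0    # our crystal runs slightly fast
gain = 0.1               # "sensitivity": how strongly we correct

master_phase = local_phase = 0.0
dt = 1e-4                # compare phases every 0.1 ms

for _ in range(200):
    master_phase += master_freq * dt
    local_phase += local_freq * dt
    error = master_phase - local_phase   # positive = we're lagging
    # Trim the crystal a little (in hardware: vary its control voltage)
    # in proportion to the phase error, rather than jumping straight to it.
    local_freq += gain * error / dt

print(f"local clock settled near {local_freq:.1f} Hz")
```

Lower the gain and the device shrugs off brief discrepancies; raise it and it follows the master’s every hiccup.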

Shadow Beth O’Leary ME Tech on Kylie Minogue

SoundGirls Members who are actively pursuing a career in Live Sound or Concert Production are invited to shadow monitor system tech Beth O’Leary on Kylie Minogue.

Beth O’Leary is a freelance live sound engineer and tech based in Sheffield, England. Over the last ten years, she has done everything from pushing boxes in tiny clubs to touring arenas, and she spends a lot of her life in muddy fields working on most of the major festivals in the UK. She has a particular passion for flying PA, the black magic that is RF, travel, and good coffee.

Read Beth’s Blog

The experience will focus on the monitor system set up and Beth’s responsibilities. This is open to SoundGirls members ages 18 and over. There is one spot available for each show. Call times are TBD. Unfortunately, members will not be able to stay for the show (unless you have a ticket).

Kylie Minogue – European Dates

September:
18th: Metro Radio Arena Newcastle
20th: Motorpoint Arena Nottingham
21st: Genting Arena Birmingham
22nd: Bournemouth International Centre, Bournemouth
24th: Motorpoint Arena Cardiff
26th, 27th, 28th: O2 Arena London
30th: SSE Hydro Arena Glasgow

October:
1st: Manchester Arena Manchester
3rd: Echo Arena Liverpool
4th: First Direct Arena Leeds
7th: 3Arena Dublin
8th: SSE Arena Belfast

Please fill out this application and send a resume to soundgirls@soundgirls.org with Beth in the subject line. If you are selected to attend, information will be emailed to you.

 

Shadowing Opportunity w/ FOH Engineer Edgardo “Verta” Vertanessian

SoundGirls Members who are actively pursuing a career in Live Sound or Concert Production are invited to shadow FOH Engineer Edgardo “Verta” Vertanessian.

Verta is currently FOH engineer for Vance Joy. He has over 22 years of experience, having mixed and system teched for a wide range of musical genres in venues ranging from clubs to stadiums. He has been FOH engineer for Vance Joy, Juanes, Lil Wayne (ME), and more, and has been the system tech/crew chief on tours for Taylor Swift, The Who, Rihanna, Jay-Z & Kanye West, and more.

The experience will focus on FOH Mixing. This is open to SoundGirls members ages 18 and over. There is one spot available for each show. Call times are TBD and members will most likely be invited to stay for the show (TBD).

Vance Joy – European Dates

Vance Joy – Australian Dates

Additional Dates

Please fill out this application and send a resume to soundgirls@soundgirls.org with Verta in the subject line. If you are selected to attend, information will be emailed to you.

 

Shadowing Opportunity w/ FOH Engineer Sean “Sully” Sullivan

SoundGirls Members who are actively pursuing a career in Live Sound or Concert Production are invited to shadow FOH Engineer Sean “Sully” Sullivan.

Sully has mixed FOH for everyone from Sheryl Crow, Thom Yorke, and Beck to Justin Timberlake, Rihanna, and Red Hot Chili Peppers. He is currently mixing Shania Twain.

The experience will focus on FOH Mixing. This is open to SoundGirls members ages 18 and over. There is one spot available for each show. Call times are TBD and members will most likely be invited to stay for the show (TBD).

Shania Twain – European Dates

Shania Twain – Australian and New Zealand Dates

Please fill out this application and send a resume to soundgirls@soundgirls.org with Sully in the subject line. If you are selected to attend, information will be emailed to you.

 

Shadowing Opportunity w/ FOH Engineer Kevin Madigan

SoundGirls Members who are actively pursuing a career in Live Sound or Concert Production are invited to shadow FOH Engineer Kevin Madigan.

The experience will focus on FOH Mixing. This is open to SoundGirls members ages 18 and over. There is one spot available for each show. Call times are TBD and members will most likely be invited to stay for the show (TBD).

Graham Nash

David Crosby

Please fill out this application and send a resume to soundgirls@soundgirls.org with Kevin Madigan in the subject line. If you are selected to attend, information will be emailed to you.

 

Performance Anxiety

I think pretty much everyone has, at least once in their lifetime, experienced anxiety in one way or another. Personally, my anxiety is a good old friend I have had with me for years. It is something I have always struggled with, and there are different reasons why that is, but the reasons that stand out the most are these: I am a perfectionist, and I am not best friends with failure.

For a lot of people, I think it is hard to admit that you suffer from anxiety and the impact it may have on your life. I used to be like that because I felt like I was overreacting.

In my previous blog post, ‘A lesson about fun & failure,’ I briefly touched on the subject of failure. My anxiety, like that of a lot of people, is linked to the fear of failure.

I have studied music for many years; I began playing classical piano at the age of 11. I love playing the piano, and I learned sight-reading from an early age. I played Mozart, Beethoven, and Bach, and to this day I absolutely love their compositions. But what I could not get my head around was that I could not play those pieces perfectly every time. I got so angry with myself for messing up that I stopped enjoying playing the piano, because I felt like I was failing.

Throughout college, I had to go through plenty of live performances, all of which gave me terrible anxiety attacks. I simply did not want to be on stage; I could not deal with the pressure and the possibility of failing. The pressure came from myself, not anybody else, as I’ve realised later in life.

This is one of the main reasons I chose to work behind the scenes, and it is what makes me love and care so much about live performances. For me, it is so important that artists feel comfortable whilst on stage, because I know what it feels like when you don’t.

Performance anxiety is so important to acknowledge and deal with in all aspects of life and in all careers. We put so much pressure on ourselves, from such an early age, that it affects our mental health severely. It’s good to be ambitious, but when is it too much? At what point do we tell ourselves, ‘Hey, it’s getting a bit too much now’? The music industry especially is very fast-paced, and you’re expected to be multi-talented from a young age.

Sometimes it is not about overcoming your anxiety; sometimes it is merely about becoming friends with it. Nowadays, I handle it by giving myself some time and space. I analyse what is going on in my life; usually, my anxiety flares up when I’ve got too many things going on at the same time and really should’ve said no to a couple of jobs. I get terrible anxiety when I am new to things, especially jobs, to the point where I feel nauseous and overthink every possible scenario that might happen. But when this happens, I tell myself that everything will be ok, one way or another.

We are only human at the end of the day, and as I have learned along the way, it is perfectly normal to feel anxious sometimes. However, if you feel like you need help with your anxiety and mental health, do not hesitate to get in touch with your GP. There are also great apps to manage and improve your mental health here: https://apps.beta.nhs.uk/category/mental_health/.

 

Live Digital Audio in Plain English Part 1

Digitizing the audio

Digital audio is nothing new, but there is still a lot of misunderstanding and confusion about how it really works, and how to fix it when things go wrong. If you’ve ever tried to find out more about digital audio topics, you will know that there are a lot of dry, complicated, and frankly, boring articles out there, seemingly written by automatons. I’m going to spend the next few posts tackling the fundamental ideas, specifically as they relate to live audio (rather than recording, which seems to have been covered a lot more), in plain English. For the sake of clarity and brevity, some things may be oversimplified or a bit wrong. If unsure, consult the internet, your local library, or a pedantic friend.

So, how does audio become digital in the first place? The analogue signal travels from the source (e.g., a mic) into the desk or its stagebox, where it gets turned into a series of 1s and 0s by an analogue-digital converter (AD converter or ADC). AD converters work by taking lots of snapshots (called samples) of the waveform in very quick succession to build up a digital reconstruction of it: a method known as pulse-code modulation (PCM. Don’t worry about remembering all these terms; it’s just useful to understand the whole process. In over ten years of live gigs, I’ve never heard anyone discuss PCM, and I’ve heard some pretty nerdy conversations). Two factors control how accurate that reconstruction will be: sample rate and bit depth.

Sample rate is the rate at which the samples are taken! Not surprisingly, the more samples per second, the smaller the gap between them (the sample interval) and the less information that is lost. Think of it like frame rate in film – a low sample rate is like a jerky, stop-motion video, while a high sample rate is like fancy 48-frames-per-second Peter Jackson stuff.

Bit depth is the number of bits (a bit is a piece of information encoded in binary for electronic use – so a 0 or a 1) in each sample. 8 bits make a byte, and samples are set to capture the same number of bytes each time. They record the amplitude of the signal – more bits mean more discrete amplitude values that the signal can be recorded as (see figure 1), so the resolution of the soundwave becomes clearer. Bits are like pixels on a screen – low bit depth is similar to blocky, unclear footage, while high bit depth is like high definition where you can see every detail. Back in the early days of computer games, there wasn’t much available memory in the cartridges, so all the sound was recorded in 8-bit. The low-resolution audio matched the pixelated video.

Figure 1: Bit depth vs. sample rate. Time is represented on the x-axis, amplitude on the y-axis. Source: https://www.horusmusic.global/music-formats-explained/ Original source unknown.
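To tie the two factors together, here’s a minimal Python sketch of PCM capture (illustrative values, not how any particular converter is built): sample a sine wave at a chosen rate, then round each sample to the nearest of the 2^bits available amplitude steps.

```python
import math

def pcm_capture(freq_hz, sample_rate, bit_depth, n_samples):
    """Sample a sine wave and quantise it, returning integer codes."""
    levels = 2 ** bit_depth        # e.g. 16 bits -> 65,536 amplitude steps
    max_code = levels // 2 - 1     # signed range, e.g. -32,768..32,767
    samples = []
    for n in range(n_samples):
        t = n / sample_rate                          # one sample interval apart
        amplitude = math.sin(2 * math.pi * freq_hz * t)
        samples.append(round(amplitude * max_code))  # the quantisation step
    return samples

# A few samples of a 1 kHz tone at CD-style settings (44.1 kHz, 16 bit):
print(pcm_capture(1_000, 44_100, 16, 8))
```

Raising the sample rate packs the samples closer together; raising the bit depth gives each one more possible amplitude values.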

Looking at figure 1, it’s clear that the greater the bit depth and the higher the sample rate, the closer you can get to the original waveform. Realistically, you can’t take an infinite number of infinitely detailed samples every second – even very high values of each produce an unmanageable amount of data to process and cost too much to be practical. The Nyquist-Shannon theorem states that to reproduce a waveform accurately for a given bandwidth, you need to take more than twice as many samples per second as the highest frequency that you are converting. If you take fewer samples than the highest frequency, an entire wavelength could happen between samples but wouldn’t be recorded. With between as many and twice as many, you still wouldn’t collect enough data about that waveform to differentiate it from other, lower frequencies, as is shown in figure 2.

Figure 2: Aliasing. If a waveform isn’t sampled often enough, it can be confused with other, lower frequency, ones. Source: Eboomer84 via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Aliasing.JPG
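You can verify the idea in figure 2 with a quick numeric experiment (made-up frequencies): sample one tone inside the bandwidth and another at the sample rate plus that frequency, and the captured values come out indistinguishable.

```python
import math

fs = 8_000            # sample rate (Hz), deliberately low for the demo
f_low = 1_000         # an in-band tone
f_high = fs + f_low   # 9 kHz: far above the 4 kHz Nyquist limit

for n in range(5):
    t = n / fs
    a = math.sin(2 * math.pi * f_low * t)
    b = math.sin(2 * math.pi * f_high * t)
    # The two tones produce (near-)identical samples - from the samples
    # alone, the converter cannot tell them apart.
    print(f"{a:+.4f}  {b:+.4f}")
```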

For music, we usually assume the bandwidth is the range of human hearing: roughly 20Hz-20kHz. Twice that range is 40kHz, but the Sony Corporation figured out that 44.1kHz synced up nicely with the video recording equipment they already had while leaving a nice margin for error, so it became the standard for CDs. Later, 48kHz was adopted because it worked well with new digital video recording gear and could reproduce even higher frequencies. Most digital mixing desks work at 48kHz or 96kHz.

Moiré patterns like this, or the weird lines when you take a photo of a screen, can be caused by the visual equivalent of aliasing. We have more in common with the video department than we might like to admit. Credit: “angry aliasing in a webgl fragment shader” by Adam Smith on flickr. https://creativecommons.org/licenses/

Why bother with 96kHz? No one can hear 48kHz, so what’s the point in sampling enough to cover it? It isn’t strictly necessary, but there are a few reasons to do it anyway. Firstly, there’s the argument that, much like when choosing a speaker’s frequency range, frequencies above the limit of human hearing can still affect the overall waveform, and so ignoring them can change the resulting sound. Secondly, in digital sampling, higher frequencies can have a real and detrimental effect called aliasing. In figure 2 you can see that the AD converter would not be able to tell whether the points it’s recorded belong to a very high-frequency waveform or a lower one. It has been told what bandwidth to expect, so it will assume the waveform is the lower one, within the defined bandwidth. This causes a phantom low frequency to be artificially added to the digital audio, making it sound… just not quite right. AD converters use low pass filters, called anti-aliasing filters, to get rid of these high frequencies, but they aren’t perfect; they aren’t like a brick wall stopping everything above 20kHz (or wherever they’re set) from getting through. They have a sloping response just like other filters. Increasing the sample rate can clarify which waveform is which and take the pressure off the anti-aliasing filter, moving the highest frequency that can be accurately recognised above that slope. Thirdly, converters use complex mathematical formulae to take an educated guess at filling in the blanks between samples when reconstructing the waveform. The more samples you have, the smaller the blanks that need to be filled and the more accurate that reconstruction can be.
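If you want to predict where an unfiltered rogue tone will land, a handy rule of thumb (a sketch assuming an ideal sampler with no anti-aliasing filter at all) is to fold the frequency back around the sample rate:

```python
def alias_of(freq_hz, sample_rate):
    """Apparent frequency of an unfiltered tone after sampling."""
    folded = freq_hz % sample_rate
    # Anything above half the sample rate reflects back downwards.
    return min(folded, sample_rate - folded)

# A stray 30 kHz tone hitting a 48 kHz converter shows up at 18 kHz:
print(alias_of(30_000, 48_000))   # 18000
# At 96 kHz, the same tone is captured where it really is:
print(alias_of(30_000, 96_000))   # 30000
```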

Increasing the bit depth also greatly reduces quantisation errors. Quantisation is basically rounding to the nearest amplitude point to smooth off the ‘pixelated’ waveform – more bits mean more options to find as close a point to the real value as possible. When this process is inaccurate, the guesswork introduces noise that isn’t present in the original signal. Increasing the bit depth reduces that guesswork, increasing the ‘signal to quantisation noise ratio.’ 24 bit, which is common in live digital audio, can give you over 120dB of dynamic range because it significantly lowers that quantisation noise floor, and so can give your true signal more space and reduce the likelihood of it clipping.
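The back-of-envelope formula behind those numbers gives roughly 6dB of signal-to-quantisation-noise ratio per bit (6.02 × bits + 1.76dB for a full-scale sine wave, before any real-world losses):

```python
def ideal_sqnr_db(bits):
    """Theoretical signal-to-quantisation-noise ratio, full-scale sine."""
    return 6.02 * bits + 1.76

print(f"16 bit: {ideal_sqnr_db(16):.1f} dB")  # ~98 dB (CD territory)
print(f"24 bit: {ideal_sqnr_db(24):.1f} dB")  # ~146 dB on paper; real
# converters land lower, hence the 'over 120 dB' figure above.
```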

As ever, your sound will only be as good as the weakest link in the chain. You might never notice the differences between these options in a live setting as a lot of live gear is not sensitive enough to show them. This might be why there is so much more discussion about them in relation to studios. However, it helps to know what processes are at work, especially when it comes to troubleshooting, which I’ll cover in a future post.


Beth O’Leary is a freelance live sound engineer and tech based in Sheffield, England. While studying for her degree in zoology, she got distracted working for her university’s volunteer entertainments society and ended up in the music industry instead of wildlife conservation. Over the last ten years, she has done everything from pushing boxes in tiny clubs to touring arenas, and she spends a lot of her life in muddy fields working on most of the major festivals in the UK. She has a particular passion for flying PA, the black magic that is RF, travel, and good coffee.

Read Beth’s Blog

Film Score Mixing with a Team

I was recently at the Banff Centre for Arts and Creativity in Canada to supervise the film score mix of a three-part documentary series (by filmmaker Niobe Thompson, with music by composer Darren Fung). We needed to mix over 100 minutes of music – nearly 200 tracks of audio – in about a week. Luckily, we had a large crew available (over ten people and three mix rooms), so we decided to work in an unusual fashion: mixing all three episodes at the same time.

Normally, you have one mixer doing the whole score, working in the same mix room. Even if he/she mixes on different days (or has assistants doing some of the work), chances are the sound will be pretty similar. It’s a challenge when you have ten mixers with different tastes and ears working in different rooms with different monitors, consoles, control surfaces, etc. What we decided to do was work together for part of the mix to get our general sound, then let each group finish independently.

The tracks included orchestra, choir, organ, Taiko drums, percussion, miscellaneous overdubbed instruments and electronic/synth elements. It was recorded/overdubbed the week prior at the Winspear Centre in Edmonton, Alberta. The Pro Tools session came to us mostly edited, so the best performances were already selected, and wrong notes/unwanted noises were edited out (as much as possible). Our first task was to take the edited session and prepare it to be a film score mix session.

When mixing a film score, the final music mix is delivered to a mix stage with tracks summed into groups (called “stems”). For this project, we had stems for orchestra, choir, organ, taiko, percussion, and a couple of others. Each stem needs its own auxes/routing, reverb (isolated from other stems), and record tracks (to sum each of the stems to a new file). I talk about working with stems more in this blog: Why We Don’t Use Buss Compression.
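Conceptually, the stem routing boils down to something like this sketch (hypothetical track and stem names, with numpy arrays standing in for the console’s summing busses):

```python
import numpy as np

# Hypothetical mono tracks, already edited upstream (1 second at 48 kHz).
n = 48_000
track_names = ["violins", "brass", "soprani", "bassi", "taiko_lo", "taiko_hi"]
tracks = {name: np.random.randn(n) * 0.01 for name in track_names}

# Each stem sums only its own tracks (and would get its own isolated reverb).
stem_assignments = {
    "orchestra": ["violins", "brass"],
    "choir": ["soprani", "bassi"],
    "taiko": ["taiko_lo", "taiko_hi"],
}
stems = {stem: sum(tracks[t] for t in members)
         for stem, members in stem_assignments.items()}

# The stems are printed to their own record tracks and delivered separately;
# the dub stage sums them into the final mix.
full_mix = sum(stems.values())
```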

Once the routing and tech were set, we worked on the basic mix. We balanced each of the mics (tackling a group at a time – orchestra, choir, organ, etc.), and set pans, reverbs, and sends to the subwoofer (since it’s a 5.1 mix for film). In film score mixing, it’s important to keep the center channel as clear as possible. Some TV networks don’t want the center channel used for music at all (if you’re not sure, ask the re-recording mixer who’s doing the final mix). From there, our strategy was to polish a couple of cues that could be used as a reference for mixing the rest. Once our composer gave notes and approved those cues, we made multiple copies of the session file – one for each team to focus on their assigned portion of the music.

Every project has its unique challenges, even if it’s recorded really well. When you’re on a tight time schedule, it helps to identify early on what will take extra time or what problems need to be solved. Some parts needed more editing to tighten up against the orchestra (which is very normal when you have overdubs). When the brass played, it bled into most of the orchestra mics (a very common occurrence with orchestral recording). There are usually some spot mics that are problematic – placed too close or too far, picking up unwanted instrument noise, or picking up too much bleed from neighboring instruments. Most of the time you can work around it (masking it with other mics), but it may take more time to mix if you need to feature that mic at some point.

What really makes a film score mix effective is bringing out important musical lines. So, the bulk of the mix work is focused on balance. I think of it like giving an instrument a chance to be the soloist and then go back to blending with the ensemble when the solo line is done. Sometimes it’s as easy as bringing a spot mic up a few dB (like a solo part within the orchestra). Sometimes it takes panning the instrument closer to the center or adding a bit of reverb (to make it feel like a soloist in front of the orchestra). Mix choices are more exaggerated in a film score mix because ultimately the score isn’t going to be played alone. There’s dialog, sound fx, Foley, and voice-over all competing in the final mix. On top of everything else, it has to work with the picture.

Film score mixing is sort of like mixing an instrumental of a song. The dialog is the equivalent of a lead vocal. I encourage listening in context, because what sounds balanced when listening to the score alone may sound different when you listen to your mix turned down 10 dB and with dialog. Some instruments are going to stick out too much or conflict with dialog. Other instruments disappear underneath sound fx. Sometimes the re-recording mixer can send you a temp mix to work with, but often all you have is a guide track with rough mics or temp voice-over. Even with that, you can get a general idea of how your mix is going to sound and can adjust accordingly.

One unique part of this project was the mix crew was composed of 50% women! Our composer, Darren Fung, put it well when he said, “This is amazing – but it should just be normal.”

Equus: Story of the Horse will debut in Canada in September 2018 on CBC TV “The Nature of Things.” In the US, Equus will air on PBS “Nature” and “Nova” in February 2019. It will also air worldwide in early 2019.

Score Mixers: Matthew Manifould, Alex Bohn, Joaquin Gomez, Esther Gadd, Kseniya Degtyareva, Mariana Hutten, Luisa Pinzon, Jonathan Kaspy, Aleksandra Landsmann, Lilita Dunska

Supervising mixers: James Clemens-Seely and April Tucker
