Grow Your Ears for Music

Imagine if, on the first day of school, your teacher had stood up and said “Look, we’re going to try this thing called reading. It isn’t for everyone. Some of you will just have an eye for words, and some of you won’t. If you find you don’t have the knack, you might as well just leave it.” I like to think that would be greeted with a bunch of toddlers falling over laughing, but you would expect that questions would be asked about the teacher’s career choice at the very least. It is absolutely ridiculous to think that the ability to read is predetermined and cut and dried, so why do we listen to people who say only those with an ear for music can become great sound engineers?

The jury is still out on whether there is such a thing as an innate, genetic talent for hearing and music. Even if there is, the thing about genes is they very, very rarely account for the whole spectrum of differences amongst the population. A gene might give you a head start, but the environment in which you grow up can influence the development of that skill just as much, and often more. Even for child prodigies, an initial flair gets nurtured (or perhaps smothered) by parental encouragement and hours upon hours of daily practice. It is much the same with sound engineering. Some people might take to it quicker than others, but everyone benefits from practice and study. A skill being hard-earned does not negate its value; otherwise, why would we bother going to school? When I started out I was in awe of what my more experienced colleagues could pick out in a mix, and how quickly they could not only detect but identify the cause of a problem. I didn’t think I’d ever be able to do it. I’m still far from perfect, but there are plenty of sounds I don’t even have to think about how to fix now; I’ve heard them so many times I automatically know what to do. I’m still discovering new aspects of my favourite songs that I’ve listened to since I was a teenager. Fancier professional earphones can only partly explain that!

So where has this belief that only the golden-eared chosen few can make it in the music industry come from? I suspect it’s people who have been told all their lives that they have an ear for music. When people do well, they like to find logical reasons for that success. The special gifts that they are born with, combined with what they feel was hard work, mean they deserve everything they have earned. Of course, they often do, but too few people acknowledge the roles that the help of others and luck play in a field as fickle and competitive as ours. Similarly, if you don’t make it, it is easy to say that you simply weren’t cut out for it, that you didn’t have a good enough ear. Only successful people want to believe that they live in a meritocracy. In reality, it takes the support and advice of countless colleagues and a big chunk of luck, in addition to skill and determination, to get your break. However, this doesn’t mean you should give up now. You can work to improve your knowledge and skillset and grab as many opportunities as you can. Put yourself in the path of luck as often as possible and be ready when it hits.

Anyone who knows me knows I’m not one for baseless positive thinking. I don’t think we can all become astronauts just by believing in ourselves: there aren’t enough shuttles, and someone has to do all the other less exciting jobs. However, someone does have to be an astronaut. Someone has to mix that fantastic up-and-coming band. Someone has to system engineer that stadium tour. Someone has to do all those myriad jobs that don’t get as much attention but can be just as satisfying (and often better paid!) like RF tech, comms tech, or installation engineer. Who gets to decide? Your school music teacher? That lighting guy? Some blogger? What do they know? Even if an ear for music is encoded in your chromosomes, are they suddenly geneticists? How did they get a sample of your DNA anyway? Don’t be put off by other engineers telling you that you don’t have what it takes either. However subconsciously, they are reassuring themselves that they deserve to be where they are and are trying to protect themselves from the competition.

In research on geniuses, one of the most important factors is their passion for their subject, known as the ‘rage to master.’ They study and practice so intensely not just because they’ve been made to, but because they want to, because they must. They don’t feel right if they aren’t working on their “thing.” The author Hunter S. Thompson once wrote a brilliant letter when asked for life advice, in which he advocates finding a lifestyle you enjoy and creating a career around it, rather than the other way round: “The goal is absolutely secondary: it is the functioning toward the goal which is important.” Let’s be honest, sound engineering is competitive, but you don’t need to be a genius. If sound is what you love, don’t wait for some authority to tell you that you have what it takes, to give you permission to do it. Decide now that you are one of those special people, and just do it. The Department of Who Does and Doesn’t Have an Ear for Music will never know. Maybe you won’t make a living out of it, but the only way to find out is to put yourself out there, learn, practice and improve. Even if you never get a gig bigger than the local bar, if no one hears your mixes, if no one subscribes to your podcasts, the important thing is that you enjoyed the process, and so the net positivity of the whole world goes up.


Missed this Week’s Top Stories? Read our Quick Round-up!

It’s easy to miss the SoundGirls news and blogs, so we have put together a round-up of the blogs, articles, and news from the past week. You can keep up to date and read more at SoundGirls.org

June Feature Profile

The Road from Montreal to Louisville – Anne Gauthier

The Blogs

FOH Amanda Davis – Lifting Up Aspiring Engineers

Keeping it Real Section 3 – Mixing IEMs in 3D

Keeping it Real – Section 2

Keeping It Real

The Magic of Records

Miranda Hull – Customer Care at Harman PRO


SoundGirls News

SoundGirls – Gaston-Bird Travel Fund

Shadowing Opportunity w/Guit Tech Claire Murphy

Shadowing Opportunity w/ FOH Engineer Kevin Madigan

Shadowing Opportunity w/ ME Aaron Foye

Letter for Trades and Manufacturers

https://soundgirls.org/scholarships-18/

Shadowing Opportunities

https://soundgirls.org/event/colorado-soundgirls-ice-cream-social/?instance_id=1313

SoundGirls Expo 2018 at Full Sail University

https://soundgirls.org/event/bay-area-soundgirls-smaart-overview/?instance_id=1316

https://soundgirls.org/event/bay-area-soundgirls-sept-meeting/?instance_id=1317

Round Up From the Internet

Interview with Kelly Kramarik on How to Get Started

2019 She Rocks Awards Nominations Now Open

SoundGirls Resources

Directory of Women in Professional Audio and Production

This directory provides a listing of women in disciplines industry-wide for networking and hiring. It’s free – add your name, upload your resume, and share with your colleagues across the industry.


Women-Owned Businesses

Member Benefits

Events

Sexual Harassment

https://soundgirls.org/about-us/soundgirls-chapters/

Jobs and Internships

Women in Professional Audio

Keeping it Real Section 3 – Mixing IEMs in 3D

Section 1

Section 2

Until now, the physical constraints of IEMs – sound being delivered direct to our eardrums – have given us no way to experience the nuances of sound localisation. The fact that our moulds are in the ear means that we miss out on the out-of-body arrival of sounds and the information we glean from the travel of those sound waves around our heads and bodies.

Until now.

I recently had the pleasure of road-testing a stunning 3D in-ear monitoring system from German company Klang. My experience has convinced me that this is the next great leap forward for in-ears, almost as much of a game-changer as the 1990s introduction of IEMs in the first place, or the evolution from analogue to digital desks.

Think of a standard, high-quality stereo in-ear mix. You perceive the mix elements panned in varying degrees from dead centre all the way out to the peripheries of your ears. Maybe you’ve created some sense of depth with the different levels and EQ of those elements, maybe some atmosphere with reverbs, but that’s about as much as you can do.

Now imagine that you could take your ear moulds out and hear all of those elements placed around you acoustically in three dimensions. The relative volumes are the same, but all of a sudden there’s a sense of space and freedom as you liberate yourself from cramming all of those mix elements into the limited confines of the space between your ears. The detail in the sound of each instrument suddenly becomes a high-definition experience as inputs in similar frequency ranges no longer battle for space; some sounds feel as though they’re high in the air; others close to the ground; some are behind you; whilst others are at distances far beyond your arm’s reach.

That’s what it feels like to switch from a stereo mix to Klang 3D.

(Incidentally, going back the other way feels a bit like flying business and then returning to economy. Honestly, these guys have ruined stereo for me for life!)

Klang has used vast amounts of binaural hearing data to emulate what happens at a listener’s ear when the source is coming from outside the body. This data, gathered in lengthy experiments involving dummy heads with tiny microphones placed at the entrance to the ear to ‘hear’ sounds from different places, has enabled them to create an incredibly realistic 3D experience for in-ear monitoring. It is like virtual reality for the ears, but it’s more than that – it’s an ideal-world natural stage sound.

The Klang model combines all that we know about the nature of sound localisation – inter-aural time differences, inter-aural level differences, comb-filtering – with the subtle changes that we experience in frequency perception according to a sound’s location, to allow the monitor engineer to ‘place’ different inputs in various areas around the listener’s head in a 3D spectrum. The incredibly user-friendly interface depicts (on a laptop or, more easily still, the touch-screen of an iPad) two different views of the listener’s environment: a bird’s eye view of the top of the head, where instruments appear to be on a virtual ring around your head, allowing you to place them not only to the left and the right but also in front of and behind your head; and a landscape view which allows you to move them vertically – above and below your head.

As you move inputs around using the touch screen, you feel as though they are indeed coming from a different three-dimensional location, due to the way the Klang unit subtly alters the sound using binaural hearing data.
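
If you’re curious about the maths behind that sense of placement, the sketch below fakes the two biggest horizontal localisation cues – the interaural time and level differences – in a few lines of Python. To be clear, this is a toy illustration under assumed values (an average head radius, a crude sine-law level difference), not Klang’s actual algorithm, which is built on measured binaural data.

```python
import numpy as np

SAMPLE_RATE = 48_000
HEAD_RADIUS = 0.0875     # assumed average head radius in metres
SPEED_OF_SOUND = 343.0   # metres per second in air

def place_source(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Pan a mono signal to an azimuth (0 = front, +90 = hard right)
    by delaying and attenuating the signal at the far ear."""
    az = np.radians(abs(azimuth_deg))
    # Woodworth's approximation of the interaural time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))
    delay = int(round(itd * SAMPLE_RATE))
    # Crude frequency-independent level difference (real ILDs vary with frequency)
    ild_db = 10.0 * np.sin(az)
    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono]) * 10 ** (-ild_db / 20)
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)  # (samples, 2) stereo buffer

# Example: a one-second 440 Hz tone placed 45 degrees to the listener's right
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = place_source(tone, azimuth_deg=45.0)
```

Even this crude version gives a surprisingly convincing left/right externalisation on headphones; what a full binaural renderer adds is the measured spectral filtering of real heads and ears, which is what makes front/back and height placement possible.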

So with all this newfound space, you can now place instruments wherever you like. While it seems obvious at first to place instruments on the orbit where you actually see them on stage, this is only one possible placement method.

Our brains determine the importance of a sound according to where it is coming from. A sound right in front of you, elevated slightly higher than your own head, is perceived as being of paramount importance, so it makes sense to put the listener’s own instrument and/or vocal here. Interestingly, I found that a critical sound positioned here didn’t require as much volume as the same sound centre-panned in a stereo mix – great news for anyone who requires some elements very loud, such as a drummer and their click.

We perceive sounds from slightly behind us with a wide left/right span as being less important, but still worth paying attention to; so for a singer I found this a good place to put keys and synth sounds, as well as a stereo electric guitar. Strings worked really well placed high and wide for an airy, slightly ethereal feel; and bass and kick felt good placed lower and directly behind me. Sources of pitching information, such as backing vocals and piano, seemed most natural and effective panned evenly to the front, but narrower and lower than the strings.

The Klang Fabrik takes up to 56 inputs, and it was interesting to note that I could be even more flexible with my mixing by leaving some inputs (such as talk mics, which call for no special artistic treatment) out of the Klang domain. I simply brought the Klang outputs back into my console where I subbed them into an aux buss, to which I then added the talk mics and anything I didn’t need in the 3D arena. This retained all of the fantastic space and detail of the 3D mix, whilst allowing total freedom in the number of utility inputs.

The Klang app is free to download and comes with a demo track – all you have to do is plug your in-ears or headphones in and you can move the track inputs around and experience 3D sound for yourself. I highly recommend starting by listening in stereo (the app gives you the choice) and then switching to 3D for an A/B test – the difference really is astounding, akin to throwing open the shutters in a dark room!

I’m extremely excited to be taking a Klang system out on my next tour, and I know that the artist and band are going to be delighted by the whole new in-ear experience this offers. The detail, space and musicality make for a truly transformative mix. The only drawback is that they, too, will find themselves ruined for stereo for life!


Miranda Hull – Customer Care at Harman PRO

Since 2011, Miranda Hull has worked for Harman PRO brands, leading what was the Regional Sales Office and is now called Customer Care. In the space of less than a year, Miranda had a son and was diagnosed with stage 2A Hodgkin’s Lymphoma. Compounded with lupus and company-wide changes, Miranda quickly learned the value of work/life balance in a tech-focused industry.

Miranda and I start the conversation with small talk about the weather. With Miranda in Texas and me in Kansas, we quickly jump into her life at Harman…

After Harman acquired AMX, it was decided that AMX’s customer care team would be absorbed and retrained to assist in areas outside of Video & Control. “[I had this team] reporting to me, and basically I had to train them all on Harman process and audio brands. Which was, you know, kind of a big task. We have a bunch of brands, and they were very familiar with one brand. For them, it was more of a relaxed day and our day [at Harman] is not like that at all. By that I mean we are crazy busy most of the time.” In September of 2017, Harman PRO had a company-wide restructuring. Just a few months prior, Miranda had moved from Indiana to Texas to lead Customer Care for the west coast. Unfortunately, this restructure included the closing of the Indiana branch of Customer Care. “It was bad and good. You know, there’s always good things about [restructuring], but it was hard with my friends, the people I know, the seasonal employees.” Miranda goes on to explain, “I basically had AMX employees who I was training on audio, but there weren’t anywhere near…they just didn’t have the experience with the audio side of things which is a large majority of our business.” Miranda was told that she would now be managing the Customer Care team, a step up from the supervisory role she had been holding. Since September, Miranda has been on a hiring frenzy, trying to hire and train people to take care of customers for all of Harman Audio in the US.

I add in, “Hiring two people and trying to train them can be a lot of work.”

Miranda has 20.

“I’m glad that, you know, that the change happened. I’ve got so much more experience now. And that wouldn’t have happened if I had gotten let go.”

“Is it safe to say that the biggest hurdle you’ve had at Harman so far has been the restructure and the hiring of a lot of people?” I ask.

We both chuckle and Miranda answers, “When you’re in a different role where you’re not in charge of things, life is a little easier. Definitely, the restructure was a huge change, and a lot of things fell into my lap. Even now, it kind of goes into another question you had asked…but being in a different state without friends and family…my time is very precious to me because I have a four-year-old child. I’ve gone through some stuff in my life where work isn’t number one in my life. It’s just not. My life is more than that.”

In an endless flow of emails, Miranda makes a great effort to disconnect from work when necessary. Job security is often a double-edged sword: you will likely always have a job, but it becomes difficult for a team or department to run in your absence. Miranda speaks on the balance of work and life:

“Because I have so much responsibility, I walk a very fine line of knowing when to put the computer away, put the phone down, and go home. That’s been hard for me because I feel a lot of pressure (not from any one person in particular) but just knowing there are a ton of people out there [17-20] that need me.” The day before, her son had become ill and Miranda needed to work from home. “[Whenever my son is sick] I make sure that I’m online all day. If [my team] has any questions they know to call me, text me, Jabber me, email me, and I will respond to them. And, I feel a little guilty about that, and I don’t want to feel guilty about that. My son is sick, and he’s my priority. For me, that is a fine line, and I need to be cognizant of that.”

I tell Miranda my thoughts on work culture in the US: “American culture, particularly since the financial crash, has made a very dramatic shift to working way too much and not valuing time at home. I know that I struggle with it. I’m unsure if it’s because I’m a woman and I feel the need to overcompensate by always being there, always being available, always being clocked-in ready to go, and on-call 24/7. I can actually feel other parts of my life wither. I’ve put so much of my spirit and energy into work, which on one hand is super fun, but it is so easy to let other parts suffer as a consequence.”

Miranda responds, “I’ve seen colleagues and team members answer emails at crazy hours. I think the unspoken rule is that you are always available. And I don’t want that. Especially since my cancer diagnosis.” We jokingly talk about the Do Not Disturb function on our phones as our only true escape from digital information.

Three years ago, Miranda was diagnosed with cancer just nine months after having her son. If having a child wasn’t enough to reframe her thinking, a cancer diagnosis would certainly do the trick. “I had stage 2A Hodgkins Lymphoma. My oncologist told me it was a curable cancer and that stage 2 was a good stage to have, and if you were gonna have cancer Hodgkins was a better option.” Miranda underwent seven months of chemotherapy, then returned to work in her position (at the time) of team lead. Harman clearly has the backs of its teams, supporting Miranda with a benefit concert as well as allowing her time off not only after the birth of her son but also throughout chemotherapy.

Miranda is an incredible woman with an incredible journey. With so much change in such little time, she has truly been able to shine. Between the work/life balance she maintains and her love for her position at Harman, it is easy to see how she is able to have it all, as they say. She spends her days training and teaching new hires all about audio troubleshooting and support, and her evenings with her family. If we as an industry could take a little piece of everything Miranda has learned over the last five years, we would all be better for it.

How to Mix Using Multiple Reference Monitors

And not drive yourself crazy

When I first started mixing, it sometimes felt like I was redoing my work over and over until I hit my deadline and was forced to stop. My mix process back then was to mix through my main speakers (full-range), then switch to small speakers for a pass. Then, I’d switch back to my main speakers and find a totally different set of problems. I’d do a pass through a third set of speakers, and it’d open up another can of worms.

It was very hard to trust my mix decisions. I didn’t trust the rooms I was working in. I didn’t trust my speakers. I sometimes questioned my ears or ability. When there’s that much doubt, how are you ever able to make a decision? You can’t. Constantly questioning what is “right” slows down the mix process severely.

From a mixing perspective, nearly every room is flawed in some way. There are room resonances, bass management issues, less-than-ideal speaker placement, noise, reflections, or phase issues. Even a room that’s been tuned by a great acoustician and considered flat can have a 6 dB variance or more! The only way to trust a room (or monitors) is to accept it for what it is.

First and foremost, it helps to reduce as many changing variables as possible. Mix as much as you can in the same room using one set of reference monitors. Think of it as your “home base.” The goal is to have a setup that you trust – not because it sounds amazing but because you know its quirks, flaws, and strengths.

As you mix, make a mental note of things you notice: what frequencies are you always EQing? When you pan, is the imaging clear or muddy? Critical listening is about observation without judgment. Once you make judgments (especially that a mix sounds better or worse depending on the environment, plugin, etc.) it can turn into a psychological game. This is when you start questioning your speakers, your room, and yourself.

Some of the best advice I’ve ever received about mixing is “mix however makes you comfortable.” Auratone speakers (a standard found in many post-production mix rooms) make my ears ring, so I don’t use them. If I mix through a television set, I listen at the same level I listen to TV at home. I quit mixing full-range at 82 dB (which I sometimes find uncomfortably loud) and now mix closer to 78 dB, or even lower on occasion. What I gain in confidence by listening at a comfortable level far outweighs what I lose sonically (by not mixing at the nominal calibrated level for a mix room).

Working in different rooms and monitoring situations can be used to your advantage. When I’m working on a film, I sometimes prefer to edit on headphones (especially to treat pops, clicks, and unwanted noises). I like to do my detailed EQ work and noise reduction in a room with near-field monitors (like a home studio). This allows you to hear detail that might be lost working on a theatrical mix stage. If I can work on a theatrical stage, that’s the best place to deal with bass management (like mixing to the subwoofer) and mixing in 5.1.

In post-production, we don’t just change monitors; we sometimes change rooms completely. On top of that, the final mix might be going to a movie theater, television (Blu-ray, Video on Demand), and eventually online (to laptop or cell phone listeners). We’ve got 5.1 and stereo to consider (or even 3D immersive audio). Many projects don’t have the budget for separate mixes, so sometimes you have to make decisions that are good for one listening environment and bad for another. I find as a mixer I’m happier if I do one mix that I am really happy with versus trying to find a middle ground. I tend to cater to the audience that will have the most views.

It’s good to ask yourself, “What am I trying to achieve by changing monitors?” I don’t change monitors anymore unless there’s a specific reason.

There’s definitely value in changing how you listen. I change my listening level a lot when I’m mixing film scores, to hear how the mix sounds in context against dialog. If I’m mixing in 5.1, I might switch to stereo to see how something I’ve mixed translates that way. I might listen through a TV or my phone if there’s a specific question or need for it.

A big part of learning to mix well is learning how to mix poorly, too. How often do you go back to an old mix and think, “that really sucked!” when at the time you thought it was great? We do what sounds “right” until we find something new that sounds right. There are times you have to accept that your mix is the best you’re going to do that day. Tomorrow is a new day, a new mix, and a chance to do something different.


Keeping It Real

Using psychoacoustics in IEM mixing and the technology that takes it to the next level

Section 1

All monitor engineers know that there are many soft skills required in our job – building a trusting relationship with bands and artists is vital for them to feel supported so they can forget about monitoring and concentrate on their job of giving a great performance. But what do you know about how the brain and ears work together to create the auditory response, and how can you make use of it in your mixes?

Hearing is not simply a mechanical phenomenon of sound waves travelling into the ear canal and being converted into electrical impulses by the nerve cells of the inner ear; it’s also a perceptual experience. The ears and brain join forces to translate pressure waves into an informative event that tells us where a sound is coming from, how close it is, whether it’s stationary or moving, how much attention to give to it and whether to be alarmed or relaxed in response. Whilst additional elements of cognitive psychology are also at play – an individual’s personal expectations, prejudices and predispositions, which we cannot compensate for – we monitor engineers can certainly make use of psychoacoustics to enhance our mixing chops. Over my next three posts, we’ll look at the different phenomena relevant to what we do, and how to make use of them for better monitor mixes.

What A Feeling

Music is unusual in that it activates all areas of the brain. Our motor responses are stimulated when we hear a compelling rhythm and we feel the urge to tap our feet or dance; the emotional reactions of the limbic system are triggered by a melody and we feel our mood shift to one of joy or melancholy; and we’re instantly transported back in time upon hearing the opening bars of a familiar song as the memory centres are activated. Studies have shown that memories can be unlocked in severely brain-damaged people and dementia patients by playing them music they have loved throughout their lives.

The auditory cortex of the brain releases the reward chemical dopamine in response to music – the same potentially addictive chemical that is also released in response to sex, Facebook ‘likes’, chocolate and even cocaine… making music one of the healthier ways of getting your high. DJs and producers use this release to great effect when creating a build-up to a chorus or the drop in a dance track; in a phenomenon called the anticipatory listening phase, our brains actually get hyped up waiting for that dopamine release when the music ‘resolves’, and it’s the manipulation of this pattern of tension and release which creates that Friday night feeling in your head.

Missing Fundamentals

Our brains are good at anticipating what’s coming next and filling in the gaps, and a phenomenon known as ‘missing fundamentals’ demonstrates a trick which our brains play on our audio perception. Sounds that are not a pure tone (i.e., a single-frequency sine wave) have harmonics. These harmonics are linear in nature: that is, a sound with a root note of 100 Hz will have harmonics at 200, 300, 400, 500 Hz and so on. However, our ears don’t actually need to receive all of these frequencies in order to correctly perceive the pitch. If you play those harmonic frequencies and then remove the root frequency (in this case 100 Hz), your brain will fill in the gap and you’ll still perceive the note in its entirety – you’ll still hear 100 Hz even though it’s no longer there. You experience this every time you speak on the phone with a man – the root note of the average male voice is 150 Hz, but most phones cannot reproduce below 300 Hz. No matter – your brain fills in the gaps and tells you that you’re hearing exactly what you’d expect to hear. So whilst the tiny drivers of an in-ear mould may not physically be able to reproduce the very low fundamental notes of some bass guitars or kick drums, you’ll still hear them as long as the harmonics are in place.
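
If you’d like to hear the effect rather than take my word for it, this short sketch (plain NumPy, assuming a 44.1 kHz sample rate) builds the harmonic stack described above with and without its 100 Hz root – write the arrays to a WAV file with your favourite audio library and compare:

```python
import numpy as np

SAMPLE_RATE = 44_100
t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE   # two seconds of samples

def harmonic_stack(f0: float, include_fundamental: bool) -> np.ndarray:
    """Sum harmonics 1..5 of f0, optionally dropping the fundamental."""
    start = 1 if include_fundamental else 2
    partials = [np.sin(2 * np.pi * f0 * n * t) for n in range(start, 6)]
    sig = np.sum(partials, axis=0)
    return 0.3 * sig / np.max(np.abs(sig))     # normalise to a safe level

with_root = harmonic_stack(100.0, include_fundamental=True)     # 100..500 Hz
without_root = harmonic_stack(100.0, include_fundamental=False)  # 200..500 Hz only
# Played back to back, both stacks are heard with the same 100 Hz pitch,
# even though the second contains no energy at 100 Hz at all.
```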

A Biased System

Human hearing is not linear – our ear canals and brains have evolved to give greater weight to the frequencies where speech intelligibility occurs. This is represented in the famous Fletcher-Munson equal-loudness curves, and it’s where the concept of A-weighting for measuring noise levels originated. The curves show, for example, that we perceive a 62.5 Hz tone to be equal in loudness to a 1 kHz tone when the 1 kHz tone is actually 30 dB SPL quieter.
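
For the curious, the A-weighting curve that grew out of that equal-loudness work has a closed-form definition (the standard IEC 61672 formula), which you can evaluate at any frequency – a small sketch:

```python
import math

def a_weighting_db(f: float) -> float:
    """Gain in dB applied by the A-weighting curve at frequency f (Hz)."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00   # normalised to 0 dB at 1 kHz

print(round(a_weighting_db(1000), 1))   # ~0.0 dB at the 1 kHz reference
print(round(a_weighting_db(62.5), 1))   # roughly -26 dB: the low end is heavily discounted
```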

Similarly, the volume threshold at which we first perceive a sound varies according to frequency. The area of the lowest absolute threshold of hearing is between 1 and 5 kHz; that is, we can detect a whisper of human speech at far lower levels than we detect a frequency outside that window. However, if another sound of a similar frequency is also audible at the same time, we may experience the phenomenon known as auditory masking.

This can be illustrated by the experience of talking with a friend on a train station platform, and then having a train speed by. Because the noise of the train encompasses the same frequencies occupied by speech, suddenly we can no longer clearly hear what our friend is saying, and they have to either shout to be heard or wait for the train to pass: the train noise is masking the signal of the speech. The degree to which the masking effect is experienced is dependent on the individual – some people would still be able to make out what their friend was saying if they only slightly raised their voice, whilst others would need them to shout loudly in order to carry on the conversation.

Masking also occurs in a subtler way. When two sounds of different frequencies are played at the same time, as long as they are sufficiently far apart in frequency, two separate sounds can be heard. However, if the two sounds are close in frequency they are said to occupy the same critical bandwidth, and the louder of the two sounds will render the quieter one inaudible. For example, if we were to play a 1 kHz tone so that we could easily hear it, and then add a second tone of 1.1 kHz a few dB louder, the 1 kHz tone would seem to disappear. When we mute the second tone, we confirm that the original tone is still there and was there all along; it was simply masked. If we then re-add the 1.1 kHz tone so the original tone vanishes again, and slowly sweep it up the frequency spectrum, we will hear the 1 kHz tone gradually ‘re-appear’: the further away the second tone gets from the original one, the better we will hear them as distinct sounds.
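
You can recreate that masking demonstration in a few lines – a sketch assuming NumPy and a 44.1 kHz sample rate, with a quiet 1 kHz target and a masker roughly 10 dB louder sweeping upwards:

```python
import numpy as np

SAMPLE_RATE = 44_100
DURATION = 10.0
t = np.arange(int(DURATION * SAMPLE_RATE)) / SAMPLE_RATE

target = 0.1 * np.sin(2 * np.pi * 1000 * t)      # quiet, steady 1 kHz tone
sweep_f = np.linspace(1100, 4000, t.size)        # masker sweeps 1.1 kHz -> 4 kHz
# Integrate instantaneous frequency to get the sweep's phase
masker_phase = 2 * np.pi * np.cumsum(sweep_f) / SAMPLE_RATE
masker = 0.3 * np.sin(masker_phase)              # ~10 dB louder than the target

demo = (target + masker)
demo /= np.max(np.abs(demo))                     # normalise before writing to WAV
# Early on, the two tones share a critical band and fuse into one sound;
# within a few seconds of the sweep the 1 kHz tone becomes clearly audible again.
```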

This ability to hear frequencies distinctly is known as frequency resolution, a type of filtering that takes place in the basilar membrane of the cochlea. When two sounds are very close in frequency, we cannot distinguish between them and they are heard as a single signal. Someone with hearing loss due to cochlear damage will typically struggle to differentiate between consonants in speech.

This is an important phenomenon to be aware of when mixing. The frequency range to which our hearing is most attuned, 500 Hz to 5 kHz, is where many of our musical inputs such as guitars, keyboards, strings, brass and vocals reside; and when we over-populate this prime audio real estate, things can start to get messy. This is where judicious EQing becomes very useful in cleaning up a mix – for example, although a kick drum mic will pick up frequencies in that mid-range region, that’s not where the information for that instrument is. The ‘boom’ and ‘thwack’ which characterise a good kick sound sit lower and higher than that envelope, so by creating a deep EQ scoop in that mid-region, we can clear out some much-needed real estate and un-muddy the mix. Incidentally, because of the non-linear frequency response of our hearing, this also tricks the brain into thinking the sound is louder and more powerful than it is. The reverse is also true: rolling off the highs and lows of a signal creates a sense of front-to-back depth and distance.
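
As a concrete example of that scoop, here is a sketch of a standard peaking filter (the widely used ‘RBJ cookbook’ biquad) cutting 8 dB around 500 Hz; the centre frequency, depth and Q are illustrative choices for a kick drum channel, not a recipe, and it assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs: float, f0: float, gain_db: float, q: float):
    """Return (b, a) coefficients for an RBJ peaking EQ biquad."""
    a_lin = 10 ** (gain_db / 40)             # cookbook amplitude term
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin]
    return np.array(b) / a[0], np.array(a) / a[0]

# An 8 dB scoop centred at 500 Hz, fairly wide (Q of 1), at 48 kHz
b, a = peaking_eq(48_000, 500, -8.0, 1.0)
kick = np.random.randn(48_000)               # stand-in for a kick drum channel
scooped = lfilter(b, a, kick)                # mid-range cleared for other inputs
```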

It’s also worth considering whether all external track inputs are necessary for a monitor mix – frequently pads and effects occupy this territory, and whilst they may add to the overall picture on a large PA, are they helping or hindering when it comes to creating a musical yet informative IEM mix?

Next time: In the second part of this psychoacoustics series we’ll examine the Acoustic Reflex Threshold, the Haas effect, and how our brains and ears work together to determine where a sound is coming from; and we’ll explore what it all means for IEM mixes.

Playing With Voices

When I went to the Acoustical Society of America’s meeting a few years ago, I did not know what to expect. I was presenting an undergraduate research paper on signal processing and was expecting individuals with similar backgrounds. Instead, there were presentations on marine wildlife, tinnitus, acoustic invisibility and the speech patterns of endangered languages. One individual I met there was Colette Feehan, a linguistics doctoral student at Indiana University. I gravitated to her upbeat personality and affinity for collecting awesome trivia. When she mentioned in passing her interest in voice acting, I thought I should follow up and pick her brain on the nuances of voice acting.

Colette Feehan

What is voice acting?

Voice acting is providing vocalizations for various kinds of animated characters and objects. This can be speech, grunts, screams, musical instruments, animal vocalizations, and a whole array of other sounds. When watching an animated TV show or movie, every sound you hear has to come from either someone’s mouth or some creative use of props. Often voice acting draws from generalizations about language that both the actor and the audience hold. In a way, some might think of voice acting as acting with a handicap. You’re not just acting with one arm tied behind your back; you’re acting without the help of any of your body language, facial expressions, etc. You need to convey all that information using just your voice. It’s honestly quite fascinating.

What got you interested in voice acting?

As a kid, I would always imitate sounds from baby elephants to musical instruments to voicing children younger than me. I can’t think of one specific moment that made me interested in voice acting, but I can certainly say it has always been a part of my life.

Who are your favorite voice actors?

I have too many to count. Some classic voice actors are Daws Butler (Yogi Bear, Elroy Jetson, Cap’n Crunch) and June Foray (Rocky the Flying Squirrel, Cindy Lou Who, Mulan’s Grandmother). There is also Charlie Adler (Cow, Chicken, and the Red Guy from Cow and Chicken, Mr. and Mrs. Big Head in Rocko’s Modern Life), Frank Welker (Fred Jones from Scooby-Doo, Nibbler from Futurama), Rob Paulsen (Yakko Warner, Carl Wheezer, Pinky), Grey DeLisle (Mandy from The Grim Adventures of Billy and Mandy and Azula in Avatar), Tara Strong (Timmy Turner, Bubbles from Powerpuff Girls, Dil Pickles), and Dee Bradley Baker (Momo and Appa from Avatar, Olmec in Legends of the Hidden Temple, Perry the Platypus).

What are your favorite voices to do?

First, I think it’s important to mention that I study the linguistics, phonetics, and acoustics of voice actors MUCH more than I actually do voices myself, though I have lent my voice to some improv, plays, friends’ animated projects, etc.

I’m a bit of a one-trick pony when it comes to voices, though. I can do teenagers and little kids, but not much else.

Any favorite tricks or sounds?

In contrast, I can do loads of weird sounds: kazoo, trumpet, electric guitar, mourning dove, cats (meow and purr), dogs.

Does voice acting have a specific lingo, and if so what terms should directors learn for more efficient directing?

It does! I’ve actually considered starting an informal dictionary of terms while working with voice actors on the linguistics of voice acting. Most of the lingo I’ve really paid attention to involves linguistics concepts: what linguists call “dark L”, some voice actors call “lazy L”, and “breathy voice” in linguistics is called “smokey voice” by voice actors. The one that is really interesting is what Rebecca Starr (2015) calls “sweet voice”: an EXTREMELY specialized kind of breathy voice found in anime that indexes a very specific character archetype.

I have heard that you are doing some research on voice actors, could you tell me a little about that?

In the Speech Production Lab at Indiana University, I am using a special 3D/4D ultrasound setup to look at the articulatory phonetics of adult voice actors who produce child voices for TV and film. A lot of people either don’t know or don’t think about the fact that when we listen to child characters, particularly in animated TV, those voices are often being produced by an adult. The big question I am asking with my dissertation is: what are adults doing with their vocal tract anatomy in order to sound like a child?

So if anyone doesn’t know a lot about how ultrasound works, here is a quick and dirty description:

Ultrasound works by emitting high-frequency sound waves and timing how long it takes for those waves to bounce back. We place an ultrasound probe (like what you use to see a baby) under the participant’s chin and record ultrasound data of their speech in real time. What we can see using ultrasound is an outline of the surface of the tongue. The sound waves travel through the tissues of the face and tongue, which are a fairly dense medium. When the waves come into contact with the air along the surface of the tongue, a much lower-density medium, they show up on the ultrasound as a bright line, which we can trace to create static images and dynamic video of the tongue movement. So what does 3D/4D mean? We have a fancy ultrasound probe that records in three planes: sagittal, coronal, and transverse. We take all these static 2D images, trace them, then compile them into one 3D representation of the tongue. Then we can sync this with a recording of the speech, creating our fourth dimension: time. So we can create videos of what a 3D representation of the tongue is doing while speaking, and we can hear what it was doing at that moment. It is really cool.
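
The distance arithmetic behind that echo timing is simple time-of-flight. As a back-of-the-envelope sketch (assuming the textbook figure of roughly 1540 m/s for the speed of sound in soft tissue; a real probe applies its own calibration):

```python
SPEED_IN_TISSUE = 1540.0   # assumed average speed of sound in soft tissue, m/s

def echo_depth_mm(round_trip_us: float) -> float:
    """Depth of a reflecting surface given the echo's round-trip time."""
    round_trip_s = round_trip_us * 1e-6
    # Halve the travel distance: the pulse goes there and back
    return (SPEED_IN_TISSUE * round_trip_s / 2) * 1000

# An echo returning after ~65 microseconds puts the reflecting tongue
# surface roughly 50 mm from the probe face.
print(round(echo_depth_mm(65.0), 1))   # ~50.1 mm
```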

So back to voice actors. With my dissertation research, I am imaging a few voice actors in two conditions: 1) doing their regular, adult voices and 2) doing their child voices. Then I compare what changes across those two conditions and what doesn’t.

So things I am looking for are: What is the hyoid bone doing (the bone in your neck near where your neck meets your head)? Does the place where the tongue touches the roof of the mouth for different consonants change? Are general tongue shapes and movements different across the two conditions? How do the acoustics change (how does the sound change)? Are those changes in acoustics changes that we would predict based on what the anatomy is doing?

How balanced is diversity in the voice actor industry?

Voice acting has a bit of a double-edged sword in that you don’t have to *look* the part to get the role. It’s just your voice! So someone who might not be your size-6, blonde-haired, wide-eyed beauty can still get the opportunity to play that character. Where this becomes negative, however, is with actors of color. Because you don’t have to look the part, I think a lot of white actors get roles that otherwise would have HAD to go to an actor of color. I do know the field has recently been trying to address this issue, but we can certainly do better.

So what is your opinion on vocal fry?

I love creaky voice (I’m going to use this term instead). It can mean so many different things, socially. Is the speaker a man or a woman? Are they in their 20s? Are they using uptalk? Are they just running out of air at the end of their utterance?

Why is there the focus on women’s vocal fry?

I can’t say I’ve studied why specifically women’s creaky voice has blown up so much recently. Creak is really common in deeper voices, so men do it all the time, but we don’t seem to notice. Maybe when women started doing it more, people unconsciously associated it with being manly and reacted negatively to it. Or maybe it’s that creak is often paired with uptalk, so it became stigmatized really quickly.

How are men’s and women’s voices different?

Again, I’m not sure I’m the most qualified to talk about this, but I can say that men’s and women’s voices differ in many categories. First, there is simply anatomy; men have an Adam’s apple, which increases the area for resonance in the larynx. They also tend to be bigger, have bigger lungs, etc., making their voices different. Then there are a lot of social ways in which men’s and women’s voices differ. Taking creak as an example again: the same “thing” performed by a woman can be interpreted quite differently than when a man does it. Humans are fascinating.
