Empowering the Next Generation of Women in Audio

Join Us

Keeping it Real Section 3 – Mixing IEMs in 3D

Section 1

Section 2

Until now, the physical constraints of IEMs – sound being delivered directly to our eardrums – have given us no way to experience the nuances of sound localisation. Because our moulds sit in the ear, we miss out on the out-of-body arrival of sounds and the information we glean from the travel of those sound waves around our heads and bodies.

Until now.

I recently had the pleasure of road-testing a stunning 3D in-ear monitoring system from German company Klang. My experience has convinced me that this is the next great leap forward for in-ears, almost as much of a game-changer as the 1990s introduction of IEMs in the first place, or the evolution from analogue to digital desks.

Think of a standard, high-quality stereo in-ear mix. You perceive the mix elements panned in varying degrees from dead centre all the way out to the peripheries of your ears. Maybe you’ve created some sense of depth with the different levels and EQ of those elements, maybe some atmosphere with reverbs, but that’s about as much as you can do.

Now imagine that you could take your ear moulds out and hear all of those elements placed around you acoustically in three dimensions. The relative volumes are the same, but all of a sudden there’s a sense of space and freedom as you liberate yourself from cramming all of those mix elements into the limited confines of the space between your ears. The detail in the sound of each instrument suddenly becomes a high-definition experience as inputs in similar frequency ranges no longer battle for space; some sounds feel as though they’re high in the air; others close to the ground; some are behind you; whilst others are at distances far beyond your arm’s reach.

That’s what it feels like to switch from a stereo mix to Klang 3D.

(Incidentally, going back the other way feels a bit like flying business and then returning to economy. Honestly, these guys have ruined stereo for me for life!)

Klang has used vast amounts of binaural hearing data to emulate what happens at a listener’s ear when the source is coming from outside the body. This data, gathered in lengthy experiments involving dummy heads with tiny microphones placed at the entrance to the ear to ‘hear’ sounds from different places, has enabled them to create an incredibly realistic 3D experience for in-ear monitoring. It is like virtual reality for the ears, but it’s more than that – it’s an ideal-world natural stage sound.

The Klang model combines all that we know about the nature of sound localisation – inter-aural time differences, inter-aural level differences, comb-filtering – with the subtle changes in frequency perception that occur according to a sound’s location, allowing the monitor engineer to ‘place’ different inputs in various areas around the listener’s head in a 3D spectrum. The incredibly user-friendly interface (on a laptop or, more easily still, the touch-screen of an iPad) depicts two different views of the listener’s environment: a bird’s-eye view of the top of the head, where instruments appear on a virtual ring around you, allowing you to place them not only to the left and right but also in front of and behind your head; and a landscape view which allows you to move them vertically, above and below your head. As you move inputs around using the touch screen, they genuinely feel as though they are coming from a different three-dimensional location, thanks to the way the Klang unit subtly alters the sound using binaural hearing data.
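The localisation cues mentioned above can be sketched in code. The following is a toy illustration – emphatically not Klang's actual algorithm – of placing a mono signal using just interaural time and level differences. The head radius, the Woodworth ITD formula, and the 6 dB maximum ILD are assumed round figures, and a real system would also apply direction-dependent filtering (HRTFs):

```python
import numpy as np

FS = 48_000           # sample rate in Hz
HEAD_RADIUS = 0.0875  # average head radius in metres (assumed)
SPEED_OF_SOUND = 343.0

def place_source(mono, azimuth_deg, fs=FS):
    """Pan a mono signal binaurally using only ITD and ILD cues.

    azimuth_deg: 0 = straight ahead, +90 = hard right, -90 = hard left.
    Returns (left, right) as equal-length arrays.
    """
    az = np.radians(azimuth_deg)
    # Interaural time difference (Woodworth's spherical-head approximation)
    itd_s = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + abs(np.sin(az)))
    delay = int(round(itd_s * fs))
    # Interaural level difference: up to ~6 dB skewed toward the near ear
    ild_db = 6.0 * abs(np.sin(az))
    near = mono * 10 ** (+ild_db / 40)   # half the ILD applied per ear
    far = mono * 10 ** (-ild_db / 40)
    near = np.concatenate([near, np.zeros(delay)])  # pad to equal length
    far = np.concatenate([np.zeros(delay), far])    # far ear hears it late
    return (far, near) if azimuth_deg >= 0 else (near, far)

# A source at +90 degrees reaches the right ear first, ~6 dB louder there:
left, right = place_source(np.ones(100), 90.0)
```

Even this crude two-cue model produces a convincing sense of lateral placement on headphones; the front/back and elevation placement Klang offers requires the full frequency-dependent filtering that binaural measurement data provides.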

So with all this newfound space, you can now place instruments wherever you like. While it seems obvious at first to place instruments on the orbit where you actually see them on stage, this is only one possible placement method.

Our brains determine the importance of a sound according to where it is coming from. A sound positioned right in front of you, elevated slightly higher than your own head, is perceived as being of paramount importance, so it makes sense to put the listener’s own instrument and/or vocal here. Interestingly, I found that a critical sound positioned here didn’t require as much volume as the same sound centre-panned in a stereo mix – great news for anyone who requires some elements very loud, such as a drummer and their click.

We perceive sounds from slightly behind us with a wide left/right span as being less important, but still worth paying attention to; so for a singer, I found this a good place to put keys and synth sounds, as well as a stereo electric guitar. Strings worked really well placed high and wide for an airy, slightly ethereal feel, and bass and kick felt good placed lower and directly behind me. Sources carrying pitching information, such as backing vocals and piano, seemed most natural and effective panned evenly to the front, but narrower and lower than the strings.

The Klang Fabrik takes up to 56 inputs, and it was interesting to note that I could be even more flexible with my mixing by leaving some inputs (such as talk mics, which call for no special artistic treatment) out of the Klang domain. I simply brought the Klang outputs back into my console where I subbed them into an aux buss, to which I then added the talk mics and anything I didn’t need in the 3D arena. This retained all of the fantastic space and detail of the 3D mix, whilst allowing total freedom in the number of utility inputs.

The Klang app is free to download and comes with a demo track – all you have to do is plug your in-ears or headphones in and you can move the track inputs around and experience 3D sound for yourself. I highly recommend starting by listening in stereo (the app gives you the choice) and then switching to 3D for an A/B test – the difference really is astounding, akin to throwing open the shutters in a dark room!

I’m extremely excited to be taking a Klang system out on my next tour, and I know that the artist and band are going to be delighted by the whole new in-ear experience it offers. The detail, space and musicality make for a truly transformative mix. The only drawback is that they, too, will find themselves ruined for stereo for life!

The Magic of Records

I love discovering fresh and exciting new music. But I often find myself fatigued by the search for it and end up putting on something older—usually Louis Armstrong or Gary Davis. After years of studying and trying my hand at music production and songwriting, my brain and ears are easily distracted, dissecting the parts of new music. If nothing in a record really “grabs” me, I’m unable to listen passively. Instead, I’m listening for ideas and inspiration. I imagine that people working in film and TV have very similar experiences when watching movies and television.

The reason older music doesn’t distract me as much isn’t that I think it’s better. Rather, it’s that the production is simple, and there is not much to dissect. Using audio technology to create records with complex auditory experiences has not always been the goal of record-makers, i.e., producers. The earliest recording we know of is an 1860 phonautograph recording of “Au Clair de la Lune.” The record is one barely audible voice. At that point, audio recordings were literally a form of preservation—a record-keeping device.

Musical preservation has existed in many forms (including the folk revival of the 1960s and the many, many attempts made by Western anthropologists to “understand” African music), but the least retrospective of these were probably the blues recordings made in the 1920s and ’30s. At this time in America, there was a huge effort to preserve the songs of the Mississippi Delta, Appalachia, and other song-heavy regions, as one generation of musicians and storytellers died out and a new era of recording technology became the norm. After blues and folk came jazz recordings, which eventually led to bebop, and then (by no small force of culture, storytelling, and talent) rock came shortly after that.

Until rock, there wasn’t much anyone could do as a recording “engineer” beyond capturing the beauty of the music. There are stories about New Orleans big bands bunching together and taking turns getting closer to the single microphone for their solos during recording sessions. For all intents and purposes, this process is a form of production, but it is simple compared to what was to come a short time later.

Music production can only be as complex as the technology available at the time. Thus, we see music production shift as audio technology shifts and, like technology, exponentially. Reverb and other time-based effects, multi-tracking, amp distortion, compression as a creative tool, the speed and efficacy of computers in music production—in this short list we have traveled from the 1950s to today!

In trying to pinpoint the moment I started hearing production in music, the earliest memory I can find is hearing Neutral Milk Hotel’s In The Aeroplane Over The Sea. At the time I was playing guitar and singing in a band that used instruments similar to those on the album, including an accordion and a saw. As a 15-year-old at-home dabbler in GarageBand, I had spent a little time recording in a small studio outside of my small town. The engineer, his assistant and I were re-recording four of my home demos (my guitar teacher had entered my recordings into a contest the studio was having, and I had unwittingly won). I noticed how much time and effort it took to achieve a desired sound in the studio. We need to record the guitar part; are we plugging it directly into the computer? (Regarding guitars, the answer is almost always no.) Are we going to mic an amp in the big live room? Are we going to mic an amp in the isolation room? What amp are we going to use? What guitar are we going to use? How do we capture all the stuff we like about the demo, but somehow also make it better? And on and on for every sound.

In The Aeroplane Over The Sea cover art

The production played no small role in In The Aeroplane Over The Sea’s staying power. In the 21st century, there is a big difference between putting a microphone in a room and recording a band bunched up around it, and using multiple tracks, compression, vocal doubling, and arranging found sound noise to create an atmosphere that is reminiscent of a time and place, but isn’t literally a time or place (it’s a record). In The Aeroplane Over The Sea blends folk, noise and rock music and maintains a lo-fi quality, but is never messy or unprofessional. Also, it was not expected to be as popular as it was. The magic of this record is that the listener can experience the grittiness that songwriter and bandleader Jeff Mangum exhibited throughout all of his work and life, in the format of a record that sounds good to our ears.

The magic of records is that our ears are part of our culture, too. Even though most listeners of music are not trained in music production, their ears are discerning. They want a new perspective. They want something real. They want something fresh that can tell us a story about our world and lives.

So, producers: let’s make some magic records.

Editor’s Note: Folklorist Alan Lomax spent his career documenting folk music traditions from around the world. Now thousands of the songs and interviews he recorded are available for free online, many for the first time. It’s part of what Lomax envisioned for the collection — long before the age of the Internet.

Missed this Week’s Top Stories? Read our Quick Round-up!

It’s easy to miss the SoundGirls news and blogs, so we have put together a round-up of the blogs, articles, and news from the past week. You can keep up to date and read more at SoundGirls.org

June Feature Profile

The Road from Montreal to Louisville – Anne Gauthier

The Blogs

Keeping It Real

Keeping it Real – Section 2

How to Mix Using Multiple Reference Monitors

Ser bilingüe no siempre funciona

Being Bilingual Does Not Always Work


SoundGirls News

SoundGirls – Gaston-Bird Travel Fund

Shadowing Opportunity w/Guit Tech Claire Murphy

Shadowing Opportunity w/ FOH Engineer Kevin Madigan

Shadowing Opportunity w/ ME Aaron Foye

Letter for Trades and Manufacturers

https://soundgirls.org/scholarships-18/

Accepting Applications for Ladybug Music Festival

https://soundgirls.org/event/glasgow-soundgirls-meet-greet/?instance_id=1272

Shadowing Opportunities

Telefunken Tour & Workshop

https://soundgirls.org/event/colorado-soundgirls-ice-cream-social/?instance_id=1313

SoundGirls Expo 2018 at Full Sail University

https://soundgirls.org/event/bay-area-soundgirls-smaart-overview/?instance_id=1316

https://soundgirls.org/event/bay-area-soundgirls-sept-meeting/?instance_id=1317

Round Up From the Internet

Interview with Kelly Kramarik on How to Get Started

2019 She Rocks Awards Nominations Now Open

SoundGirls Resources

Directory of Women in Professional Audio and Production

This directory provides a listing of women in disciplines industry-wide for networking and hiring. It’s free – add your name, upload your resume, and share with your colleagues across the industry.


Women-Owned Businesses

Member Benefits

Events

Sexual Harassment

https://soundgirls.org/about-us/soundgirls-chapters/

Jobs and Internships

Women in Professional Audio

Miranda Hull Customer Care at Harman PRO

Since 2011, Miranda Hull has worked for Harman PRO brands, leading what was the Regional Sales Office, now called Customer Care. In the span of less than a year, Miranda had a son and was diagnosed with stage 2A Hodgkin’s Lymphoma. Compounded with lupus and company-wide changes, Miranda quickly learned the value of work/life balance in a tech-focused industry.

Miranda and I start the conversation with small talk about the weather. With Miranda in Texas and myself in Kansas, we quickly jump into her life at Harman…

After Harman acquired AMX, it was decided that AMX’s customer care team would be absorbed and retrained to assist in areas outside of video and control. “[I had this team] reporting to me, and basically I had to train them all on Harman processes and audio brands. Which was, you know, kind of a big task. We have a bunch of brands, and they were very familiar with one brand. For them, it was more of a relaxed day, and our day [at Harman] is not like that at all. By that I mean we are crazy busy most of the time.”

In September of 2017, Harman PRO had a company-wide restructuring. Just a few months prior to this restructure, Miranda had moved from Indiana to Texas to lead Customer Care for the west coast. Unfortunately, the restructure included the closing of the Indiana branch of Customer Care. “It was bad and good. You know, there’s always good things about [restructuring], but it was hard with my friends, the people I know, the seasonal employees.” Miranda goes on to explain, “I basically had AMX employees who I was training on audio, but there weren’t anywhere near…they just didn’t have the experience with the audio side of things, which is a large majority of our business.”

Miranda was told that she would now be managing the Customer Care team, a step up from the supervisory role she had been holding. Since September, Miranda has been on a hiring frenzy, trying to hire and train people to take care of customers for all of Harman Audio here in the US.

I add in, “Hiring two people and trying to train them can be a lot of work.”

Miranda has 20.

“I’m glad that, you know, that the change happened. I’ve got so much more experience now. And that wouldn’t have happened if I had gotten let go.”

“Is it safe to say that the biggest hurdle you’ve had at Harman so far has been the restructure and the hiring of a lot of people?” I ask.

We both chuckle and Miranda answers, “When you’re in a different role where you’re not in charge of things, life is a little easier. Definitely, the restructure was a huge change, and a lot of things fell into my lap. Even now, it kind of goes into another question you had asked…but being in a different state without friends and family…my time is very precious to me because I have a four-year-old child. I’ve gone through some stuff in my life where work isn’t number one in my life. It’s just not. My life is more than that.”

In an endless flow of emails, Miranda makes a great effort to disconnect from work when necessary. Job security is often a double-edged sword: you will likely always have a job, but it becomes difficult for a team or department to run in your absence. Miranda speaks on the balance of work and life:

“Because I have so much responsibility, I walk a very fine line of knowing when to put the computer away, put the phone down, and go home. That’s been hard for me because I feel a lot of pressure (not from any one person in particular) but just knowing there are a ton of people out there [17-20] that need me.” The day before, her son had become ill, and Miranda needed to work from home. “[Whenever my son is sick] I make sure that I’m online all day. If [my team] has any questions they know to call me, text me, Jabber me, email me, and I will respond to them. And, I feel a little guilty about that, and I don’t want to feel guilty about that. My son is sick, and he’s my priority. For me, that is a fine line, and I need to be cognizant of that.”

I tell Miranda about my thoughts on work culture here in the US: “Our culture in America, particularly since the financial crash, has made a very dramatic shift to working way too much and not valuing the home time. I know that I struggle with it. I’m unsure if it’s because I’m a woman and I feel the need to overcompensate by always being there, always being available, always being clocked-in ready to go, and on-call 24/7. I can actually feel other parts of my life wither. I’ve put so much of my spirit and energy into work, which on one hand is super fun, but it’s so easy to let other parts suffer as consequences.”

Miranda responds, “I’ve seen colleagues and team members answer emails at crazy hours. I think the unspoken rule is that you are always available. And I don’t want that. Especially since my cancer diagnosis.” We jokingly talk about the Do Not Disturb function on our phones as our only true escape from digital information.

Three years ago, Miranda was diagnosed with cancer just nine months after having her son. If having a child wasn’t enough to reframe her thought process, the idea of cancer would certainly do the trick. “I had stage 2A Hodgkin’s Lymphoma. My oncologist told me it was a curable cancer and that stage 2 was a good stage to have, and if you were gonna have cancer, Hodgkin’s was a better option.” Miranda underwent seven months of chemotherapy, then returned to work in her position (at the time) of team lead. Harman clearly has the backs of their teams, supporting Miranda with a benefit concert as well as allowing her time off not only after the birth of her son but also during chemotherapy.

Miranda is an incredible woman with an incredible journey. With so much change in such little time, she has truly been able to shine. Between the work/life balance she maintains and her love for her position at Harman, it is easy to see how she is able to have it all, as they say. She spends her days training and teaching new hires all about audio troubleshooting and support, and her evenings with her family. If we as an industry could take a little piece of everything Miranda has learned over the last five years, we would all be better for it.

SoundGirls – Gaston-Bird Travel Fund

The SoundGirls – Gaston-Bird Travel fund has been established to increase the presence of women and those that identify as women at trade conferences. Women who have been invited to speak, or sit on panels at audio-related trade conferences are welcome to apply.

Fill out the application

Make a Donation

Scope:
This fund is meant to increase the presence of women and those identifying as women at audio-related trade conferences and conventions.

Rationale:

Many audio-related conferences and conventions are male-dominated. In various posts on social media, women have written, “I’d love to be part of this panel, but I don’t have the funds to go.” All-male panels at audio-related trade shows are currently the norm.

Criteria:

Types of costs covered:

Types of costs not covered:

Process

Applicants who have been accepted to a conference fill out the application.

Applications are peer-reviewed by a committee of professional women working in audio.

Applicants are notified of their acceptance by email, usually within two weeks of submitting the application and dependent on available funding.

Funds will be disbursed via PayPal.

Sponsors

The funds will be hosted by SoundGirls and sponsors will be sought who can earmark funds for the travel fund.

Sponsors: you can make a tax-deductible donation here or contact soundgirls@soundgirls.org for more information.

Examples of Disciplines:

Examples of Conferences:

FAQs/Items to consider:

Are there funds available for women who simply want to attend?

No, but SoundGirls provides scholarships for continuing education. Have a look at SoundGirls scholarships.

What if my paper or workshop hasn’t been accepted yet?

We can only offer funds to papers that have been accepted, but you are still welcome to apply. Simply check the box indicating approval for your paper is pending. If your paper is not accepted, your application will be withdrawn.

I’m a sponsor who would like to support SoundGirls and the travel fund.

You can support both. Please get in touch with us at soundgirls@soundgirls.org.

How to Mix Using Multiple Reference Monitors

And not drive yourself crazy

When I first started mixing, it sometimes felt like I was redoing my work over and over until I hit my deadline and was forced to stop. My mix process back then was to mix through my main speakers (full-range), then switch to small speakers for a pass. Then, I’d switch back to my main speakers and find a totally different set of problems. I’d do a pass through a third set of speakers, and it’d open up another can of worms.

It was very hard to trust my mix decisions. I didn’t trust the rooms I was working in. I didn’t trust my speakers. I sometimes questioned my ears or ability. When there’s that much doubt how are you ever able to make a decision? You can’t. Constantly questioning what is “right” slows down the mix process severely.

From a mixing perspective, nearly every room is flawed in some way. There are room resonances, bass management issues, less-than-ideal speaker placement, noise, reflections, or phase issues. Even a room that’s been tuned by a great acoustician and considered flat can have a 6 dB variance or more! The only way to trust a room (or monitors) is to accept the room for what it is.

First and foremost, it helps to reduce as many changing variables as possible. Mix as much as you can in the same room using one set of reference monitors. Think of it as your “home base.” The goal is to have a setup that you trust – not because it sounds amazing but because you know its quirks, flaws, and strengths.

As you mix, make a mental note of what you notice: what frequencies are you always EQing? When you pan, is the imaging clear or muddy? Critical listening is about observation without judgment. Once you make judgments (especially that a mix sounds better or worse depending on the environment, plugin, etc.), it can turn into a psychological game. This is when you start questioning your speakers, your room, and yourself.

Some of the best advice I’ve ever received about mixing is “mix however makes you comfortable.” Auratone speakers (a standard found in many post-production mix rooms) make my ears ring, so I don’t use them. If I mix through a television set, I listen at the same level I listen to TV at home. I quit mixing full-range at 82 dB (which I sometimes find uncomfortably loud) and now work closer to 78 dB, or even lower on occasion. What I gain in confidence by listening at a comfortable level far outweighs what I lose sonically (by not mixing at the nominal calibrated level for a mix room).

Working in different rooms and monitoring situations can be used to your advantage. When I’m working on a film, I sometimes prefer to edit on headphones (especially to treat pops, clicks, and unwanted noises). I like to do my detailed EQ work and noise reduction in a room with near-field monitors (like a home studio), which lets me hear detail that might be lost on a theatrical mix stage. If I can work on a theatrical stage, that’s the best place to deal with bass management (like mixing to the subwoofer) and mixing in 5.1.

In post-production, we don’t just change monitors; we sometimes change rooms completely. On top of that, the final mix might be going to a movie theater, television (Blu-ray, video on demand), and eventually online (to laptop or cell phone listeners). We’ve got 5.1 and stereo to consider (or even deeper into 3D immersive audio). Many projects don’t have the budget for separate mixes, so sometimes you have to make decisions that are good for one listening environment and bad for another. As a mixer, I’m happier doing one mix I’m really happy with than trying to find a middle ground. I tend to cater to the audience that will have the most views.

It’s good to ask yourself, “What am I trying to achieve by changing monitors?” I don’t change monitors anymore unless there’s a specific reason.

There’s definitely value in changing how you listen. I change my listening level a lot when I’m mixing film scores to hear how the mix sounds in context against dialog. If I’m mixing in 5.1, I might switch to stereo to see how something I’ve mixed translates that way. I might listen through a TV or my phone if there’s a specific question or need for it.

A big part of learning to mix well is learning how to mix poorly, too. How often do you go back to an old mix and think, “that really sucked!” when at the time you thought it was great? We do what sounds “right” until we find something new that sounds right. There are times you have to accept that your mix is the best you’re going to do that day. Tomorrow is a new day, a new mix, and a chance to do something different.

Ser bilingüe no siempre funciona

Por Andrea Arenas / Colaboración Vanessa Montilla

Es posible que hayas hecho varios cursos de idiomas. Sin embargo nada te prepara para trabajar el día a día como ingeniero de sonido, si estás de gira en un país donde se habla un idioma diferente a tu idioma materno. Es probable que por más cursos que hagas, en ninguno te hayan enseñado como le dicen a “peinar los cables”, y así a muchas palabras del argot técnico e inclusive del cotidiano.

Es por eso que he decidido hacer un pequeño glosario de objetos utilizados comúnmente en el audio pero que posiblemente no encontrarás en ningún libro de diseño de sistemas o de técnicas de grabación, y que por lo tanto no estás acostumbrado a utilizar en un idioma diferente al tuyo. Espero les sea útil y que además podamos completarlo entre todos en diferentes idiomas.


Cables


Conectores/Connectors


Audio


Electricidad / Electrics


Herramientas / Tools / Gadgets


Artículos de oficina / Office supplies


Acciones / Actions


Instrumentos musicales / Musical instruments


Medidas / Measurements

1.5 m ≈ 5 feet
3 m ≈ 10 feet
7.6 m ≈ 25 feet
15 m ≈ 50 feet
30 m ≈ 100 feet
50 m ≈ 165 feet
100 m ≈ 330 feet

Being Bilingual Does Not Always Work

By Andrea Arenas / Collaborated by Vanessa Montilla

It is possible that you have taken several language courses. However, nothing prepares you for day-to-day work as a sound engineer if you are on tour in a country where a language other than your native one is spoken. No matter how many courses you take, it is likely that none of them taught you how to say “comb the cables” (“peinar los cables”, Spanish slang for tidying the cables), and the same goes for many words of technical and even everyday jargon.

That is why I have decided to make a small glossary of objects commonly used in audio that you may not find in any book on system design or recording techniques, and that you are therefore not accustomed to using in a language other than your own. I hope you find it useful and that, together, we can complete it in different languages.


Cables


Conectores/Connectors


Audio


Electricidad / Electrics


Herramientas / Tools / Gadgets


Artículos de oficina / Office supplies


Acciones / Actions


Instrumentos musicales / Musical instruments


Medidas / Measurements

1.5 m ≈ 5 feet
3 m ≈ 10 feet
7.6 m ≈ 25 feet
15 m ≈ 50 feet
30 m ≈ 100 feet
50 m ≈ 165 feet
100 m ≈ 330 feet
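The table above pairs metric cable lengths with the imperial stock sizes they roughly correspond to. For reference, 1 metre is exactly 3.28084 feet; a quick sketch comparing the exact conversion with the nominal sizes listed:

```python
def metres_to_feet(m):
    """Exact metric-to-imperial conversion: 1 m = 3.28084 ft."""
    return m * 3.28084

# (metres, nominal imperial cable length) pairs from the table above
table = [(1.5, 5), (3, 10), (7.6, 25), (15, 50),
         (30, 100), (50, 165), (100, 330)]

for metres, nominal in table:
    exact = metres_to_feet(metres)
    print(f"{metres:>5} m = {exact:6.1f} ft (nominal {nominal} ft)")
```

As the output shows, the nominal imperial sizes are rounded for convenience – close enough for grabbing the right cable off the shelf, but worth remembering when a run is tight.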

Keeping it Real – Section 2

This is Section 2 of Becky Pell’s three-section article on using psychoacoustics in IEM mixing and the technology that takes it to the next level. Section 1

Acoustic Reflex Threshold

Have you ever noticed how you and the band can take a break from rehearsing, come back half an hour later, and when you put your ears back in, everything feels louder? And then how, after a few moments, it settles down and feels normal again? It’s because of a reflex action of the stapedius muscle in the middle ear. When this little muscle contracts, it pulls the stapes or ‘stirrup bone’ slightly away from the oval window of the cochlea, against which it normally vibrates to transmit pressure waves to be converted into nerve impulses. This action, which is a response to sounds of between 70 and 100 dB SPL, effectively creates a compression effect resulting in a 20 dB reduction in what you hear. However, the muscle can’t stay fully contracted for long periods, so after a few seconds the tension drops to around 50% of the maximum. Whilst the initial reaction, at 150 milliseconds, is not fast enough to fully protect the ear against very loud and sudden transient sounds, it helps in reducing hearing fatigue over longer periods.

Interestingly, this reflex also occurs when a person vocalises, which helps to explain why a singer’s in-ear mix of the band might sound loud enough in isolation, but when they start singing they find they need more instrumentation. This happens in conjunction with the fact that they are hearing themselves not only via the mix but also through the bone conductivity of their skull. It’s well worth trying to sing along to an IEM mix that you’ve prepared for a singer to experience what this feels like for them, because it’s a very different sensation from simply shouting down the mic to EQ it.
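As a back-of-the-envelope illustration, the figures above (roughly 150 ms to full contraction, up to 20 dB of reduction, relaxing to about half tension after a few seconds) can be sketched as a crude attenuation curve. Only those three figures come from the text; the linear ramp and the exponential relaxation time constant are assumptions, not physiology:

```python
import math

def reflex_attenuation_db(t_s):
    """Rough attenuation (dB) applied by the stapedius reflex t_s seconds
    after a loud (70-100 dB SPL) sound begins. Toy model, not physiology."""
    if t_s <= 0.15:
        return 20.0 * (t_s / 0.15)            # ramps up over ~150 ms
    # tension then relaxes toward ~50% of maximum (time constant assumed)
    return 10.0 + 10.0 * math.exp(-(t_s - 0.15) / 2.0)

print(round(reflex_attenuation_db(0.15), 1))  # full 20 dB at 150 ms
print(round(reflex_attenuation_db(10.0), 1))  # settles near 10 dB
```

The shape of that curve is why a mix feels loud for the first few moments after a break and then "settles": the reflex takes a beat to engage, and its protection fades as the muscle tires.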

The acoustic reflex threshold also means that transients appear quieter than sustained sounds of the same level, and it’s the thinking behind a compression trick that is often used in studios and film production. When you compress the decay of a short sound such as a drum hit, it fools the brain into thinking the drum hit as a whole is significantly louder and punchier than it is, although the peak level – the transient – has not changed. Personally, I’d advocate caution if you’re going to try this in a monitor mix – the drummer needs to hear what their drums ACTUALLY sound like, and getting things such as drum tuning and mic placement correct at source are vital – but it’s an interesting thing to be aware of.

All in the timing

Our ability to perceive sounds as separate events depends not only on there being sufficient difference between them in frequency, but also on timing. This phenomenon is known as the ‘precedence effect’ or the ‘Haas effect.’

These effects describe how, when two identical sounds are presented in quick succession, they are heard as a single sound. This perception occurs when the delay between the two sounds is between 1 and 5 ms for single click sounds, but up to 40 ms for more complex sounds such as piano music. When the lag is longer, the second sound is heard as an echo. A single reflection arriving within 5 to 30 ms can be up to 10 dB louder than the direct sound without being perceived as a distinct event. In 1951, Helmut Haas examined how the perception of speech is affected in the presence of a single reflection. He discovered that a reflection arriving later than 1 ms after the direct sound increases the perceived level and spaciousness (more precisely, the perceived width of the sound source) without being heard as a separate sound. This holds true up to around 20 ms, at which point the sounds become distinguishable.

This can be an interesting experiment to try with a vocal mic and your IEMs. Split the vocal mic down two channels, delay one input by somewhere between 1 and 20 ms, and see what you notice. Then try panning one input hard left and the other hard right, and hear how the vocal sounds thicker, creating a sense of width and space. Play with the delay time, and you’ll find that if it’s too short the signal starts to phase; too long and you lose the illusion. This game does make the signal susceptible to comb-filtering if you sum the inputs back to mono, especially at shorter delay times, so be aware of that.
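To put a number on that phasing, here’s a quick Python sketch. Summing a signal with a copy of itself delayed by τ creates a comb filter whose first cancellation notch sits at 1/(2τ) – the formula below is standard textbook theory, not tied to any particular desk.

```python
import math

def comb_magnitude(f_hz, delay_ms):
    """Magnitude response of a signal summed with a copy delayed by
    delay_ms: |1 + e^(-j*2*pi*f*tau)| = 2*|cos(pi*f*tau)|."""
    tau = delay_ms / 1000.0
    return 2.0 * abs(math.cos(math.pi * f_hz * tau))

def first_null_hz(delay_ms):
    """Frequency of the first cancellation notch: f = 1 / (2 * tau)."""
    return 1000.0 / (2.0 * delay_ms)

for d in (1, 5, 20):
    print(f"{d:>2} ms delay: first null at {first_null_hz(d):.0f} Hz")
```

A 1 ms delay puts the first notch at 500 Hz, right in the midrange where it’s painfully obvious; at 20 ms the first notch sits down at 25 Hz and the notches are so densely packed that the effect reads more as ambience than as phasing.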

Once again I would advocate extreme caution if you intend to use this in a monitor mix, as ‘tricking’ a singer in this way can backfire! However, it’s a useful principle to be aware of if you have the opportunity to get creative with other sounds, and I use it a lot when adding pre-delay to a reverb – try it for yourself. No pre-delay creates a feeling of immediacy to the effect, but just 5-10 ms creates a slight sense of space. If you’re after a little more breathiness and drama – ‘vampires swirling’ as I once heard it described – try increasing the pre-delay to 20 ms and feel how it changes.

The Haas effect is also something to be very aware of in IEM mixing when it comes to digital latency. Every time we take a signal out of the console and send it somewhere else in the digital domain, a small time delay known as latency is introduced. Different processing devices introduce different amounts of latency, and obviously the less, the better. The more devices we add, the more the latency stacks up. Whilst a few milliseconds of latency may be totally imperceptible for, say, a guitarist, it’s a different matter when it comes to vocals. A singer will often be able to perceive that something is not quite right, without being able to put their finger on it, because when we vocalise and have that signal returned to our ears, the discrepancy between what we hear at the moment of making the sound and the moment of it returning becomes heightened in our awareness. It’s something to be vigilant about whenever you use digital outboard such as plug-ins on a singer’s channels.
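Because latency stacks, it’s worth totting up the whole signal path rather than judging each device in isolation. Here’s a back-of-envelope sketch in Python; every figure in it is a made-up placeholder, so substitute the real numbers from your own console and outboard.

```python
sample_rate = 48_000  # Hz

# Hypothetical per-device latencies, in samples -- illustrative only.
chain_samples = {
    "AD conversion": 40,
    "console processing": 96,
    "plug-in round trip": 160,
    "DA conversion": 40,
}

total_samples = sum(chain_samples.values())
total_ms = total_samples / sample_rate * 1000

for name, s in chain_samples.items():
    print(f"{name:<20} {s:>4} samples = {s / sample_rate * 1000:.2f} ms")
print(f"{'TOTAL':<20} {total_samples:>4} samples = {total_ms:.2f} ms")
```

In this made-up example the chain adds up to 7 ms – tolerable on many sources, but on a vocal it’s approaching the territory where a singer may start to feel that something is off.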

Location Services

The Haas effect also affects where we perceive a sound to be coming from – the apparent location of the source is determined by whichever sound arrives first, even though the two sounds may come from different physical locations. This holds true until the second sound is around 15 dB louder than the first, at which point the perception of direction changes.

Sound localisation is a very complex mechanism performed by the human brain. It’s not only dependent on the directional cues received by the ears; it is also intertwined with the other senses, especially vision and proprioception. Our ability to determine a sound’s location and distance is called binaural hearing, and in addition to all the psychoacoustic effects discussed so far, it is heavily influenced by the physical shape of our heads, ears and even torsos. The outer ear or ‘pinna’ functions as a directional sound collector which funnels sound waves into the ear canal. The head and the topography of our face and torso influence how sounds from any position other than a 0° angle are heard, as they create an acoustic ‘shadow.’ Our brains process the differences between the information that our two ears collect, and interpret the results to determine where a sound is coming from, how far away it is, and whether it’s still or moving.

At lower frequencies, below about 2 kHz, this is mostly determined by the inter-aural time difference; that is, the discrepancy in time between when the sound reaches each ear. Above 2 kHz the information comes instead from the inter-aural level difference; that is, the discrepancy in volume between the sound that each ear hears. This clever evolutionary adaptation is down to the relative lengths of sound waves at different frequencies. For frequencies below about 800 Hz, the dimensions of the head are smaller than half the wavelength of the sound, so the brain can determine phase delays between the ears unambiguously.

However, for frequencies above about 1600 Hz the dimensions of the head are greater than the wavelength of the sound, so determining direction from phase alone is no longer possible; instead, we rely on the level difference between the two ears. This pair of binaural cues is known as ‘duplex theory’ and plays an important role in sound localisation in the horizontal plane.

(As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound’s lateral source because the phase difference between the ears becomes too small for a directional evaluation, hence the experience of sub-bass frequencies being omnidirectional.)
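The numbers behind duplex theory are easy to sanity-check yourself. Here’s a quick Python sketch using a speed of sound of 343 m/s and a rough 17.5 cm ear-to-ear distance; both are approximations, and in reality the crossover between the two cues is a gradual blend rather than a hard switch.

```python
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
HEAD_WIDTH = 0.175       # metres, approximate adult ear-to-ear distance

def half_wavelength_m(f_hz):
    """Half the wavelength of a sound wave at the given frequency."""
    return SPEED_OF_SOUND / f_hz / 2.0

for f in (80, 800, 1600, 8000):
    hw = half_wavelength_m(f)
    cue = "time/phase (ITD)" if hw > HEAD_WIDTH else "level (ILD)"
    print(f"{f:>5} Hz: half-wavelength {hw:.3f} m -> dominant cue: {cue}")
```

At 800 Hz the half-wavelength (about 21 cm) is still wider than the head, so phase differences are unambiguous; by 1600 Hz it has shrunk to about 11 cm and level differences take over. And at 80 Hz the half-wavelength is over two metres, making the inter-aural phase difference vanishingly small – which is why sub-bass feels omnidirectional.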

Whilst this phenomenon makes it easy to sense which side a sound is coming from, it’s harder to determine direction in the up/down and front/back planes, because our ears sit at the same horizontal level as each other. Some types of owl have their ears placed at different heights, allowing greater efficiency in finding prey when hunting at night, but humans have no such facility. This can result in ‘cones of confusion’, where we are unsure of the elevation of a sound source because all sounds that lie in the mid-sagittal plane produce similar inter-aural differences. Once again, though, the shapes of our bodies help us out. Imagine a sound source right in front of you. The reflection off your torso takes a slightly longer path to your ears than the direct sound, and that small delay between direct sound and reflection creates a subtle comb-filter pattern. If the source is elevated, or moved behind you, the path of the torso reflection changes and so does the comb filtering, and our brains process these discrepancies to help us locate the source.

Next time: In the third and final section of this series on using psychoacoustics to enhance your monitor mixing, we’ll discover a ground-breaking new technology that takes IEMs to a whole new dimension.
