Zoe Thrall – Love of Gear, Recording, and Music Makers

Zoe Thrall is a groundbreaker and a legend with 40+ years in the music industry. She spent years working as an engineer and studio manager for Power Station Studios and Hit Factory Studios in NYC, then toured with Steven Van Zandt and his band, The Disciples of Soul. In 2005, she relocated to Las Vegas, taking over the reins at The Palms Studio until it was shuttered due to COVID. Zoe has since moved to The Hideout, where artists from Carlos Santana to Kendrick Lamar have recorded, as its Director of Studio Operations. Zoe is an artist and engineer and is well versed in studio management.

Zoe was introduced to audio as a career path as a college freshman at the State University of New York at Fredonia, where a friend of hers was majoring in audio engineering. She applied to the music department and then transferred to audio. Although she attended all four years, she was offered a job in her fourth year and never finished her last eight credits.

Zoe was always interested in audio; she remembers, as a kid, “tinkering with my cassette machines and my records, taking two tape machines and recording from one to the other.” Her parents loved music, and she was exposed to all kinds of it growing up, from pop standards to Broadway. At age eight, Zoe says, “I tried to learn any instrument I could get my hands on. Turns out I was best on woodwind instruments and pursued learning them more seriously.” As we will learn, woodwind instruments led her to record with Steven Van Zandt.

Working with Steven Van Zandt

Zoe was working as an assistant engineer at a studio where Steven was working on several albums he was producing, as well as his first solo album. Zoe remembers that he was looking for a specific sound, and his guitar tech mentioned that she played oboe; she ended up on the record. After the record was finished, Steven asked her to go on the road. She was 22 years old and says, “that was not something I ever considered.” Zoe would continue to work with Steven for eleven years, playing on and engineering several albums. Zoe says, “I learned everything about the business from Steven, about music production and contracts and publishing. Steven was extremely politically active, and so I also got involved in a number of social and political organizations, mostly in human rights. I got to see that side of the world and meet Nelson Mandela. It was a whirlwind of 11 years and something I never dreamed of doing in terms of touring and being a member of a band.”

“Having a mentor like Steven was absolutely critical in my professional growth. He would push me to do things that I never thought I could do, but he trusted I could, and that gave me the confidence to try. There were so many invaluable lessons. He would push me as a musician (playing keyboards on a Peter Gabriel track), as an engineer (building a home studio and recording his projects there), as a manager (rehearsing, hiring/firing band members), and even in the political arena, where I was least comfortable. One time he sent me to meet with Archbishop Desmond Tutu as the representative of our foundation, The Solidarity Foundation. I was scared to death. But I was able to discuss some of the programs we had instituted in the anti-apartheid movement. These are just a few examples of what could get thrown at me at any given time.”

Zoe has been recognized for both her work and her humanitarian efforts, including planning and co-organizing a fundraiser for Nelson Mandela, receiving a commendation from the United Nations for work done in the anti-apartheid movement, and serving three times as co-chair of Audio Engineering Society Conventions (2005, 2006, and 2021).

Career Start

How did your early internships or jobs help build a foundation for where you are now?  

The internship was essential to my growth and my future. It introduced me to some extremely talented engineers and producers who were my early mentors. That specific internship led to every other door that opened for me. Eleven years later, I was back as that studio’s manager.

What did you learn interning or on your early gigs? 

Keep your mouth shut and your ears open. Lend a helping hand anywhere you can. Put in as much time as you can, and someone will notice. Be honest; don’t try to do something you don’t know how to do (then learn how to do it later). Be willing to do anything and everything asked of you (to a degree). Don’t count the hours.

Did you have a mentor or someone that really helped you?  

Initially, as I stated above, I was fortunate to have been around some pretty talented (and tolerant) people from day one, like Bob Clearmountain, Neil Dorfsman, James Farber, and Tony Bongiovi. But really my main mentor was Steven Van Zandt, with whom I eventually worked for 11 years. Everything I know about the music/recording industry I learned from him.

Career Now

What is a typical day like?  

You have to wear a lot of hats managing a commercial recording studio. I’m the first one in each morning because I like to check the rooms and the rest of the facility before anyone gets here. Then I make sure we have everything we need for the sessions coming in. I keep an eye on when the staff is arriving to make sure they get here on time for their sessions. I book studio time and negotiate the deals with the clients. I review the sessions from the previous day and do the billing. As the day goes on, I will check with the clients to see how their sessions are running. Then, mid-day, I will look at what the next few days are bringing us, to be sure we are prepared. There are many phone calls, plus overseeing staff, vendors, etc.

How do you stay organized and focused?  

I write everything down. People make fun of me for it, but if I write it down, I won’t forget it. So many details come at you during the day that I couldn’t possibly remember everything.

What do you enjoy the most about your job?  

Even though I no longer engineer, I still love gear and the recording process. I love music makers. I love creativity.

What do you like least?  

Clients who expect to sound like Drake in three hours; their expectations are not realistic. Also, the 24-hours-a-day, 7-days-a-week aspect of it.

If you tour, what do you like best?  

I did tour when I was younger. It’s really hard but exhilarating at the same time. It’s an easy way to see the world, and I loved learning about different cultures. The feeling you get just before you step on stage is something I’ve never felt doing anything else, whether it was for an audience of 200 or 100,000.

Zoe Thrall on The SoundGirls Podcast


I Always Cry On A Sunday

As you can tell from the title, and the use of the first-person singular pronoun, this blog is personal (but aren’t they always?) and maybe a little confessional too.

It’s the end of February and we’re looking forward to March with its winds: March winds followed by April showers, or at least that’s what we say in the UK. In fact, they’ve been a bit early and a little ferocious this last week, with ‘Storm Eunice’ gusting at over 100 mph. I mention this because maybe March will turn out better than expected.

After a long wait, the Giuseppe Verdi Conservatorio in Turin finally has its new electroacoustic composition teacher, and my first lesson is on March 1st. So… I’d better get moving. I was persuaded to scrap, or at least set aside for now, my previous idea of an electroacoustic piece with cello melodies based on the bodily curves of my girlfriend, the Mexican artist who eventually broke my heart. That was eight months ago, and as my title suggests, these days I still occasionally cry over lost love and what might have been, but always on a Sunday. The melodies had already been written: transcriptions of her side, shoulder, and neck seen from behind, with counter-melodies taken from the shadows and lines on her back, including a tattoo. I had already sketched in some ideas for granular, dusty, effervescent electronics to represent the shadows: where they are positioned, their density, and the amount of space they occupy.


I had also begun to think about extending the sketch into a three-movement piece, with the tattoo representing the struggle between the first movement’s depiction of voluptuousness and seduction and the redemption of the last movement, symbolizing serenity and peace, with granular drones containing fragments of spoken text softly woven into them.


Have I just persuaded myself to take it on again?  Will the redemption of the last movement be enough to release me from the pain of heartbreak and leave me at peace with myself?  Maybe yes, maybe no!  But, as I mentioned in a previous blog, I value authenticity in both the writing and the performance of a piece of music, and this would most certainly be authentic, there being a good reason for this art to exist.

Would it stop me crying on a Sunday?

I’ll tell you what made me cry on Sunday the 13th of February: my father dying of complications from Covid-19. He was 97, admittedly, but still in good shape, and he had had his vaccinations. Complications set in nonetheless: a minor stroke, then the ICU, and in the end he was put into an induced coma and taken off life support to allow him to go peacefully. This was in Minneapolis while I was here in Turin, but I was kept up to date by my younger brother and sister while they managed things there. He was absent for much of my childhood, but I can still thank him for the gift of music he passed on to us. He was a tenor, with that typically English sound, and I still remember, as a young child, sitting under the table listening to him practice arias from opera and Neapolitan songs… that was ‘the gift’.

So, from when he was taken ill until Sunday the 13th, and on up to the funeral the following Friday, I was pretty useless, with a total lack of drive and enthusiasm for anything; music was the furthest thing from my mind.

Did I cry last Sunday? No, actually! I didn’t have time; I was too busy with Non Una di Meno, Torino (NUDM, originally formed by feminist groups in Chile and now found across the continents), preparing for International Women’s Day, or as I like to call it: ‘Move over, patriarchy! You’ve been pretty useless for these last few thousand years, so step aside and let us take over; we have both the intelligence and the empathy.’ In Italy we’re planning a 24-hour national strike and have been in touch with the unions for their support; and we have a few grievances:


We deplore gender-based violence: over 100 women were murdered by men in 2021.

We also deplore the bias and inaction of the Italian judicial system, which routinely pushes the blame for rape onto the victim. In one case I personally know of, when a young woman went to the police to report violence at the hands of her then-boyfriend, she was told that it was not a crime since it had taken place within a relationship.

Women were the first to lose work due to the pandemic and are not treated equally with their male counterparts.

Health care for women who suffer from endometriosis, vulvodynia, etc. (basically anything to do with a woman’s womb or genitalia) means at least six months of waiting, unless you can afford to pay for private care.

If only we could persuade all stay-at-home women to strike on that day; that would certainly put the male world of work, and its master, Capitalism, under some strain, even if only for 24 hours. As Carla Lonzi (the Italian feminist) wrote in her ‘Manifesto di Rivolta Femminile’ of 1970:

“We recognize in the unpaid domestic work of women the service which allows both state and private capitalism to exist.”


I keep this picture of a young woman of Non Una di Meno on my cellphone to remind me that there are young women willing to stand up and be counted, particularly for the women who don’t have a voice. We have many migrants from North Africa in all the major cities of Italy, and the women, in particular, need advocacy. One of our LGBTQIA+ groups held drop-in sessions for gay and lesbian refugees, who would have been tortured, imprisoned, or killed if sent back to their own countries. I helped two Nigerian lesbian refugees prepare their evidence for the commission that would decide whether they were given permission to stay on refugee status; they were successful. As Covid restrictions ease, these drop-ins might restart.

So I’ve been personal, confessional, and political; put more simply, the last couple of weeks have been a bit meh!

Thus, finally, I arrive at the music part of the blog: the first song of my song cycle.

This first song of a projected song cycle (settings of poems by lesbian writers) is La divinité inconnue, written by Renée Vivien, who lived on the Left Bank in Paris and, at some point, in the same block as Colette. She was dubbed the new Sappho and translated some of Sappho’s poems into French. I mentioned authenticity in an earlier blog, and Renée’s poems are indeed authentic: they are dark, using lugubrious imagery, many of them expressing her love for women and, in particular, her intense love affair with the American heiress Natalie Barney. However, Natalie was very much against the idea of monogamous relationships: she desired that her lovers be sincere with her, but only while her passion lasted. So, after a bit more than two years together, Renée broke off the relationship. It is thought that this breakup led to her early death at 32, through anorexia compounded by alcohol and drug abuse. The poem below is probably the most extreme example of the pain and hurt she felt at Natalie’s infidelity: I shall later be setting it in the original French as part of the song cycle, but for the time being, I shall begin with La divinité inconnue.

I hate she who I love, and I love she who I hate.

I would love to torture most skilfully the wounded limbs of she who I love,

I would like to drink the sighs of her pain and the lamentations of her agony,

I would slowly suffocate the breath from her breast,

I would wish that a merciless dagger pierce her to the heart,

And I would be happy to see the blood weeping, drop by drop from her veins.

I would love to see her death on the bed of our caresses…

I love she who I hate.

When I spy her in a crowd, inside of me I feel an incurable burning desire to hold her tight in front of everyone and to possess her in the light of day.

The words of bitterness on my lips change to become sweet sighs of desire.

I push her away with all my anger, and yet I call to her with all my sensuality.

She is both ruthless and cowardly, and yet her body is passionate and fresh – a flame dissolving in the dew

I cannot look, without feeling breathless and without regrets, at the perfidy in her stare or the falsehood on her lips…..

I hate she who I love, and I love she who I hate.

‘Translation of a Polish Song’, Renée Vivien (1903)

So that’s the poet. What about the music and the setting? I’ve projected the song for soprano: light and angelic against the darkness of the text and the instrumentation, which is cello, B-flat clarinet, harp, and electronics. I’m aiming to keep the instrumentation light and translucent, so there are very few tutti passages, and those there are will be sparse. The final mixing will be key. The practical considerations: find a soprano who can sing in French. For the instruments, for the time being, I will use orchestral samples from Spitfire Audio or the UVI IRCAM Solo Instruments 2, which has quite a few innovative instrumental techniques; performance versions will obviously use live voice and instruments with live electronics.

What am I using to create the music? Keyboard and pencil for setting the words and melodies, and here I need to collaborate with the soprano, since I’d like to build in some space for improvisation. For the instruments, I’m using MIDI input going into Logic Pro – the generous 90-day trial gives me time to get to know the ins and outs. I’m still tempted to put together the individual clips, loops, drones, etc. in Audition, since I find it easy to manipulate them and balance and pan the individual tracks. Final mixing, however, is not so good, since Audition doesn’t seem to have faders like Logic Pro or Reaper. So this is telling me to record the MIDI in Logic Pro and tweak as necessary, then bounce the audio tracks to Audition, though I still have to learn to do that. However, if I make the effort and become more familiar with Logic Pro before the trial is up, then this may be ‘the one’, especially since it has enough built-in sounds, instruments, and loops to keep me amused.

One more bit of software: Max/MSP. Now, I have a love/hate relationship with software, and this is one that I could really hate, since it requires a kind of proto-programming. “Ha!” I hear those of you in the know and slick with SuperCollider say. My point is that I’m an artist, not a scientist or mathematician, though I can take an interest when I fancy. No, if I were a violinist, I would no more consider building my own Stradivarius than I would consider programming music. So, for that reason, I’m using the BEAP modules in Max, which feel like using the EMS 100 analog synthesizer in the late 70s: I patch one module into another to control frequencies, modulations, envelope shapes, etc. I may occasionally write in a small piece of code that will change my patches, but one step at a time. Right at this moment, I am experimenting with various modules which allow me to treat and shape synthesized sounds as well as recorded sounds. In the last few years, I have created thousands of clips while processing recorded sounds, which I have used to create loops and drones, for example. I can therefore take various individual stages of these sounds and repurpose them into something new: run them through a granulator, spectral filter, sampler, sequencer, or random note generator. I can treat them in so many ways and mix them with other sounds by multitracking; all I have to do is find them. This is my ‘experimental’!

So far, I have been experimenting with two sine wave oscillators to create some simple frequency modulation, then adding a third, then using low-frequency oscillators to control the oscillation of the sound-producing oscillators. And then adding a soft underlay of a recording of an espresso machine at work, put through a granulator and controlled with another LFO, which causes it to sample the clip at various points and produce a kaleidoscope of sounds.
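For the curious, here is a rough Python/NumPy sketch of that kind of patch: a carrier and a modulator in a simple FM pair, with a slow LFO sweeping the modulation depth. The frequencies and depths are my own illustrative guesses, not the settings from the actual BEAP patch.

```python
# Two-oscillator FM with an LFO on the modulation depth (illustrative values).
import numpy as np
from scipy.io import wavfile

sr = 44100                               # sample rate, Hz
t = np.arange(sr * 5) / sr               # five seconds of time

carrier_hz = 220.0                       # sound-producing oscillator
mod_hz = 110.0                           # modulating oscillator
lfo = np.sin(2 * np.pi * 0.25 * t)       # 0.25 Hz low-frequency oscillator
index = 5.0 + 4.0 * lfo                  # modulation index swept by the LFO

phase_mod = index * np.sin(2 * np.pi * mod_hz * t)
signal = np.sin(2 * np.pi * carrier_hz * t + phase_mod)

wavfile.write("fm_sketch.wav", sr, (0.5 * signal).astype(np.float32))
```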

How’s it going so far? Not very far… For the reasons spoken about earlier, other things have occupied my ADHD-addled brain and together added a bit of confusion and depression. But we’re coming out of that now, desperate to get composing again. Besides, I need to take something to the Prof on March 1st. So, having finished this, I’ll add something to the soprano melody while still experimenting with the electronics.

I listen a lot, because music is the same as me; and if I am not myself, I am nobody. This is one song that has inspired me for its beauty and its authenticity. It is magical from the start. Then at 59” something almost ecstatic happens… this is the kind of inspiration that will lead me through my piece, plus, of course, Renée’s poetry and the sadness of her life, which kind of chimes with me. But now I only cry on a Sunday.

Love from Frà in Torino

‘She lost her voice. That’s how we knew.’ Music by Frances White, sung by soprano Kristin Norderval, libretto by Valeria Vasilevski

https://open.spotify.com/track/6CnP4XJbPYB9CHFYPq9bFT?si=8cd81879067d47f6


Demystifying Loudness Standards

Every sound engineer refers to some kind of meter to aid the judgments we make with our ears. Sometimes it is a meter on the tracks in a DAW or that session’s master output meter; other times it is LEDs lighting up our consoles like a Christmas tree, a handheld sound level meter, a VU meter, etc. All of those meters measure audio signals using different scales, but they all use the decibel as a unit of measurement. There is also a way to measure the levels of mixes that is designed to represent human perception of sound: loudness!

Our job as audio engineers and sound designers is to deliver a seamless aural experience. Loudness standards are a set of guides, measured by particular algorithms, to ensure that everyone who is mixing audio delivers a product that sounds similar in volume across streaming services, websites, and radio or television stations. The less work our audiences have to do, the better we have done our jobs. Loudness is one of the many tools that help us ensure we are delivering the best experience possible.

History           

A big reason we started mixing to loudness standards was to achieve consistent volume, from program to program as well as within shows. Listeners and viewers used to complain to the FCC and BBC TV about jumps in volume between programs, and volume ranges within programs being too wide. Listeners had to perpetually make volume adjustments on their end when their radio or television suddenly got loud, or to hear what was being said if a moment was mixed too quietly compared to the rest of the program.

In 2007, the International Telecommunication Union (ITU) released the ITU-R BS.1770 standard, a set of algorithms to measure audio program loudness and true-peak level (Cheuks’ Blog). The European Broadcasting Union (EBU) then built on the ITU standard, modifying it when they discovered that gaps of silence could bring an otherwise loud program down to spec, and released the result as EBU R 128. Under its gate, levels more than 8 LU below the ungated measurement do not count towards the integrated loudness level, which means that the quiet parts cannot skew the measurement of the whole program. The ITU standard is still used internationally.

Even after all of this standardization, television viewers were still being blasted by painfully loud commercials. So the FCC’s rules under the Commercial Advertisement Loudness Mitigation (CALM) Act went into effect on December 13th, 2012. From the FCC website: “Specifically, the CALM Act directs the Commission to establish rules that require TV stations, cable operators, satellite TV providers or other multichannel video program distributors to apply the ATSC A/85 Recommended Practice to commercial advertisements they transmit to viewers. The ATSC A/85 RP is a set of methods to measure and control the audio loudness of digital programming, including commercials. This standard can be used by all broadcast television stations and pay-TV providers.” And yup, listeners can file complaints with the FCC if a commercial is too loud. The CALM Act only regulates the loudness of commercials.

Countries outside Europe have their own loudness standards, derived from the global ITU-R BS.1770. China’s standard for television broadcast is GY/T 282-2014; Japan’s is ARIB TR-B32; Australia’s and New Zealand’s is OP-59. Many European and South American countries, along with South Africa, use the EBU R 128 standard. There is a link to a more comprehensive list at the end of this article, in the resources section.

Most clients you mix for will expect you, the sound designer or sound mixer, to abide by one of these standards, depending on who is distributing the work (Apple, Spotify, Netflix, YouTube, broadcast, etc.).

The Science Behind Loudness Measurements

Loudness is a measurement of human perception. If you have not experienced mixing with a loudness meter, you are (hopefully) paying attention to RMS, peak, or VU meters in your DAW or on your hardware. RMS (average level) and peak (loudest level) meters measure levels in decibels relative to full scale (dBFS). The numbers on those meters are based on the voltage of an audio signal. VU meters use a VU scale (where 0 VU is equal to +4 dBu) and, like RMS and peak meters, measure the voltage of an audio signal.
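As a quick aside, the arithmetic behind those analog scales is simple. Here is a tiny Python sketch using the standard dBu convention, where 0 dBu is referenced to 0.7746 V RMS:

```python
# Voltage-to-dBu conversion behind analog meter scales.
# 0 dBu = 0.7746 V RMS, so 0 VU (= +4 dBu) corresponds to about 1.228 V RMS.
import math

def dbu(volts_rms):
    return 20 * math.log10(volts_rms / 0.7746)

print(f"{dbu(1.228):.1f} dBu")  # ~ +4.0 dBu, i.e., 0 VU
```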
Those measurements would work for loudness if humans heard all frequencies in the audio spectrum at equal volume. But we don’t! Get familiar with the Fletcher-Munson curves: a chart that shows, on average, how sensitive humans are to different frequencies. (Technically speaking, we all hear slightly differently from each other, but it is a solid basis.)

Humans need low frequencies to be cranked up in order to perceive them at the same volume as higher frequencies. And sound coming from behind us is weighted as louder than sound in front of us; perhaps it is an instinct that evolved with early humans. As animals, we are still on the lookout for predators sneaking up on us from behind.

Instead of measuring loudness in decibels (dB), we measure it in loudness units relative to full scale (LUFS or, interchangeably, LKFS). LUFS measurements account for humans being less sensitive to low frequencies but more sensitive to sounds coming from behind them.
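If you want to try this outside a DAW, here is a minimal sketch using the open-source pyloudnorm package, which implements the BS.1770 K-weighting and gating; the file name is hypothetical.

```python
# Measure the integrated loudness (LUFS) of a file with pyloudnorm.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")            # "mix.wav" is a hypothetical file
meter = pyln.Meter(rate)                   # K-weighting + gating per BS.1770
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")
```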

There are a couple more interesting things to know about how loudness meters work. We already mentioned how the EBU standard gates out anything more than 8 LU below the ungated measurement, so the really quiet or silent parts do not skew the measurement of the whole mix (which would otherwise let the loudest parts end up way too loud). Loudness standards also dictate the allowed dynamic range of a program (its loudness range, measured in LU). This matters so your audience does not have to tweak the volume to hear people during very quiet scenes, and it saves their ears from getting blasted by a World War Two bomb squadron or a kaiju when they have the stereo turned way up to hear a quiet conversation. (Though every sound designer and mixer knows there will always be more sensitive listeners who complain about a loud scene anyway.) A toy sketch of the gating logic follows.
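This sketch applies a two-stage gate to a list of per-block loudness values. Real meters average mean-square energy rather than dB values, so treat this purely as an illustration of the two gates; the -70 LUFS absolute gate is the figure used by BS.1770-based meters, and the block values are invented.

```python
# Toy two-stage gate over short-term block loudnesses (values in LUFS).
def gated_loudness(blocks, absolute_gate=-70.0, relative_offset=8.0):
    # 1) Drop blocks below the absolute gate (near-silence).
    audible = [b for b in blocks if b > absolute_gate]
    # 2) Average, then drop blocks too far below that ungated average.
    ungated = sum(audible) / len(audible)
    gated = [b for b in audible if b > ungated - relative_offset]
    # NOTE: real meters average energy, not dB; this is a simplification.
    return sum(gated) / len(gated)

# Invented values: the -60 and -80 blocks should not drag the result down.
print(gated_loudness([-23.0, -22.5, -60.0, -24.0, -80.0]))  # ~ -23.2
```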

Terms

Here is a list of terms you will see on all loudness meters.

LUFS/LKFS – Loudness Units relative to Full Scale (LKFS stands for Loudness, K-weighted, relative to Full Scale; the two are effectively the same thing).

Weighting standards – When you mix to a loudness spec in LUFS, also know which standard you should use (EBU R 128, ATSC A/85, etc.)!

True Peak Max: This needs a bit of an explanation. When you play audio in your DAW, you are hearing an analog reconstruction of digital audio data. Depending on how that audio data is decoded, the analog reconstruction might peak beyond the digital waveform. Those peaks are called inter-sample peaks. Inter-sample peaks will not be detected by a limiter or a sample-peak meter, but the true-peak meter on a loudness meter will catch them. True peak is measured in dBTP.
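To see why a plain sample-peak meter misses these, here is a sketch that approximates true peak by 4x oversampling with scipy. A full-scale sine whose crests land exactly between samples reads about -3 dBFS on sample peak but close to 0 dBTP once oversampled.

```python
# Approximate true peak by oversampling and re-measuring the peak.
import numpy as np
from scipy.signal import resample_poly

sr = 48000
t = np.arange(sr) / sr
# A sine at sr/4 with a 45-degree phase offset: every stored sample is
# +/-0.707, but the reconstructed waveform still swings to +/-1.0.
x = np.sin(2 * np.pi * (sr / 4) * t + np.pi / 4)

sample_peak = np.max(np.abs(x))                              # ~0.707
true_peak = np.max(np.abs(resample_poly(x, up=4, down=1)))   # ~1.0

print(f"sample peak: {20 * np.log10(sample_peak):.2f} dBFS")  # ~ -3.01
print(f"true peak:   {20 * np.log10(true_peak):.2f} dBTP")    # ~  0.00
```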

Momentary loudness: Loudness at any given moment, for measuring the loudness of a section.

Long-term/ Integrated loudness: This is the average loudness of your mix.

Target Levels: What measurement in LUFS the mix should reach.

Range/LRA: Loudness range, i.e., the program’s dynamic range, measured in loudness units (LU).

How To Mix To Loudness Standards

Okay, you know the history, you are armed with the terminology…now what? First, let us talk about the consequences of not mixing to spec.

For every client, there are different devices at the distribution stage that decode your audio and play it out to the airwaves, and those devices have different specifications. The distributor will turn a mix up or down to normalize the audio to their standards if the mix does not meet specifications. A couple of things happen as a result: you lose dynamic range, and the quietest parts are still too quiet. If there are parts that are too loud, those parts will sound distorted and crushed due to compressed waveforms. The end result is a quiet mix, with no dynamics, and with distortion.

To put mixing to loudness into practice, first, start with your ears. Mix what sounds good. Aim for intelligibility and consistency. Keep an eye on your RMS, peak, or VU meters, but do not worry about LUFS yet.

Your second pass is when you mix to your target LUFS levels. Keep an eye on your loudness meter. I watch the momentary loudness reading, because if I am consistently in the ballpark with momentary loudness, I will end up with a reliable integrated loudness reading and a dynamic range that is not too wide. Limiters can also be used to your advantage. And if the integrated reading still lands off target, the correction is simple arithmetic, as the sketch below shows.
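The gain change in dB is just the target minus the measurement, applied as static gain, provided the resulting true peak stays under the delivery ceiling. A sketch with invented numbers:

```python
# Static gain needed to hit a target integrated loudness (invented numbers).
target_lufs = -24.0          # e.g., a broadcast-style target
measured_lufs = -21.3        # reading from your loudness meter
gain_db = target_lufs - measured_lufs
print(f"apply {gain_db:+.1f} dB of gain")        # apply -2.7 dB of gain

# Sanity-check the true peak against the delivery ceiling first.
measured_true_peak = -1.0    # dBTP, from the meter
ceiling_dbtp = -2.0          # e.g., a -2 dBTP delivery ceiling
if measured_true_peak + gain_db > ceiling_dbtp:
    print("true peak would exceed the ceiling; reach for a limiter instead")
```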

Then, bounce your mix. Bring the bounce into your session, select the clip, then open your loudness plugin and analyze the bounce. Your loudness plugin will give you a reading with the current specs for your bounce. (Caveat: I am using ProTools terminology; check whether your DAW has a feature similar to AudioSuite.) This also works great for analyzing sections of audio at a time while you are mixing.

Speaking of plugins, here are a few of the most used loudness meters. Insert one of these on your master track to measure your loudness.

Youlean Loudness Meter
This one is top of the list because it is FREE! It also has a cool feature where it shows a linear history of the loudness readings.

iZotope Insight
Insight is really cool. There are a lot of different views, including history and sound field views, and a spectrogram so you can see how different frequencies are being weighted. This plugin measures momentary loudness fast.

Waves WLM Meter
The Waves option may not have a bunch of flashy features like its iZotope competitor, but it measures everything accurately and comes with an adjustable trim feature. Its short-term loudness reading is accurate but does not bounce around as fast as Insight’s, which I actually prefer.

TC Electronic LMN Meter
I have not personally used this meter, but it looks like a great option for those of us mixing for 5.1 systems. And the radar display is pretty cool!

Wrapping Up: Making Art with Science

The science and history may be a little dry to research, but loudness mixing is an art form in itself, because if listeners have to constantly adjust the volume, we are failing at our job of creating a distraction- and hassle-free experience for our audience. Loudness standards go beyond a set of rules; they are an opportunity for audio engineers to use our scientific prowess to shape our work into a unifying experience.

Resources

First, big thanks to my editors (and fellow audio engineers) Jay Czys and Andie Huether.

The Loudness Standards (Measurement) – LUFS (Cheuks’ Blog)
https://cheuksblog.wordpress.com/2018/04/02/the-loudness-standards-measurement-lufs/

Loudness: Everything You Need to Know (Production Expert)
https://www.pro-tools-expert.com/production-expert-1/loudness-everything-you-need-to-know

Loud Commercials (The Federal Communications Commission)
https://www.fcc.gov/media/policy/loud-commercials

Loudness vs. True Peak: A Beginner’s Guide (NUGEN Audio)
https://nugenaudio.com/loudness-true-peak/

Worldwide Loudness Standards
https://www.rtw.com/en/blog/worldwide-loudness-delivery-standards.html

Home Recording with Kids in the House

Banished. That’s where my kids would be when I recorded at home pre-pandemic. Not banished to their knowledge, just hanging with their grandparents for at least an entire day, at most overnight. The point is they would be gone, and I’d have sent our dog with them, too, if I’d had my way.

During the pandemic, I found myself itching to record and determined to avoid exposing vulnerable elders to a virus. Short of going into debt building a proper studio at home, I applied my knowledge and resources to develop these strategies:

Directional microphones are your friend. In an ideal recording situation, a large-diaphragm condenser microphone, like my Lauten 220, would be my preference for recording acoustic guitar and vocals. In my loud house, however, noise rejection was key. My kids could be shrieking away in the other room, but with my SM7B pointed away from them, I could sing my heart out with only minimal bleed. Plus, I could point my SM57 or Sennheiser e609 at my amp, again facing away from the children, to get some electric guitar underway.

Sound treatment is your friend. Believe me, I wish my home were properly sound-treated, for so many reasons beyond recording. In a pinch, it helps to hang quilts, order some shipping blankets, or hide out in a closet; it all works to cut down on noise. However, be sure to avoid the cheap foam options that attenuate only higher frequencies while doing little for the low end.

Record directly if you can. If you have a decent DI to connect your instrument to your interface, you won’t have to worry about household noise at all. You can always add effects later, and you may even come up with something innovative and fun with that dry signal. Many interfaces even allow for this without a DI.

Do many takes. While you have everything set up, run through the miked stuff a few times, keeping the same distance from the microphone when you sing or play for consistency’s sake. The little ones won’t scream at the same exact part each time you sing the song unless they’re especially cruel and deliberate about it. You can stitch together the best takes later.

Communicate. Let the kids know you’re going to be doing something special and need quiet for a short while. Talk about what they can do to entertain themselves in the meantime. Set boundaries accordingly beforehand, and there should be fewer interruptions. Just be prepared to keep your promises if you make any (e.g., ice cream following the recording session).

It’s never going to be perfect, and of course, it requires flexibility, but it’s completely possible to record at home with your kids around. Breaks may be necessary and you may not get the best sound of your life, but what’s the alternative? Not doing it? Make those perfectly imperfect recordings at home. Lead by example and show the young ones in your life that there is always room for creativity so that they can learn to prioritize their own as they move beyond childhood. And if all else fails, scrap it and try again when they’re asleep.

Designing with Meyer Constellation

Using an array of ambient sensing microphones, digital signal processing, and world-class speakers, Constellation modifies the reverberant characteristics of a venue and redistributes sound throughout the space, ensuring a natural acoustic experience. I am very fortunate to have had the opportunity to design with this system. The Krannert Center for the Performing Arts recently had one installed in its Colwell Playhouse Theatre. In this article, I will go over how I designed this system for the 2021 November Dance show, how I utilized the 100+ speakers, and how I shaped the environment of each dance piece.

I began the design process by grouping my outputs into zones where they could each fulfill a certain purpose. In CueStation, Meyer’s software interface for their D-Mitri systems, these groups are called buses. I utilized a total of ten buses and over 80 speakers of the original 127. The paperwork, and making sure things were clear for my engineer, was a new challenge. This system is large, and I found that color coding and adding legends with further notes really helped represent not just the system I needed, but the system that would become the world for the show, the audience, and the art the dancers were bringing into the space.

These zones allowed me to create a truly immersive experience with the sound. I was consistently using the House Left and Right, Rears, and Ceiling buses. However, what I loved the most was the Sub bus. Rather than using the onstage subs with the arrays, I opted for the installed flown subs. What I have found in previous designs is that I prefer the encompassing blanket of sound that subs give when they are flown at a distance; I really didn’t want to localize them to the stage. I did, however, use the Center and Front Fills buses to draw more attention to the stage and dancers. I found that I preferred this balance of sound and the image it creates for an audience member.

I also found that the color coding, legends, and graphics really helped me keep track of this system. It felt daunting at first, but this breakdown allowed me to easily manage all of my outputs. The dance productions here don’t get a ton of time for the tech process, so this setup helped me adjust levels quickly and not get bogged down. I hadn’t worked with this software on a show before, and it comes with a learning curve, yet I needed to stay productive throughout the entire rehearsal process.

Playback also works differently in Meyer’s CueStation. It is typically triggered and played back in Wildtracks, which uses decks – virtual decks, that is. It felt reminiscent of my dad’s tape deck from when I was growing up. Even though the tech process for this production added several more decks and cues to my original paperwork, I will show you the initial documents and how I set up my playback.

Originally, each dance piece had its own deck. You can also see that each dance had a varying number of CueStation inputs. These are the Wildtracks inputs that I then assigned to my buses of speaker zones. For Anna’s and Jakki’s pieces, I received stereo files. Though this was less than ideal, I still sent the music to the buses and crafted a great sound for each piece. Meanwhile, I was the sound designer for Harbored Weight, so there I had more opportunity to work with stems and individual tracks to send and pan around the room.

This is the kind of world I like to think and live in as a designer. There was a fourth dance, titled Love, that used only live music: a single cellist mic’d on stage. Harbored Weight also had a live pianist accompanying the dancers. With CueStation, I was able to take the mic’d signal from these instruments and send it to my buses as well, whether for onstage monitoring for the dancers or artistically in the house. What I discovered, though, was that I could achieve a beautiful presence in the house with the other half of this design: Constellation.

I sculpted a unique Constellation setting for each dance piece. This information was saved within each CueStation cue and thus recalled at the top of each dance by the stage manager. Most of the choreographers really wanted a large-sounding reverb; one, in particular, asked for something as close to cave-like as possible. I love these kinds of design requests.

Not only was I able to start from a base setting like ‘large hall’, I was also able to adjust parameters like early reflections, which really helped create a huge, immersive-sounding space. I was up against a learning curve, though. I realized that if the Constellation cue stayed active, the audience’s applause at the end of each dance would be accentuated and echoed around the theatre. I found this cool-sounding, but obnoxious. This meant programming more cues and using more Wildtracks decks to turn off Constellation at the end of each dance.

Then there are the designated microphones that capture the sound, which make Constellation processing what it is. For Donald Byrd’s piece Love, I was able to put that already beautiful cello sound through the processing system and hug the audience with its warmth. This really helped for a few reasons. The dance was set to several Benjamin Britten pieces, and it was just the cellist and dancers on stage; one cellist can sound small in a large theatre, and the choreographer really wanted a big, full sound. I mic’d the cello with a DPA 4099, but I also used the ambient microphones to capture the instrument and send the signal through the Constellation processing and the unique patch I had created. I designed a really warm and enveloping sound that was still localized to the stage and gave the illusion of a full orchestra.

My design for the 2021 November Dance did not incorporate Meyer’s Spacemap side of Constellation. I was able to do everything artistically that I wanted and that the choreographers needed without using Spacemap. I do look forward to using it in future designs though. If this article intrigues you, I would highly recommend looking into Spacemap as well as Spacemap GO.

I love that I can find ways to be a designer and be artistic outside the typical realm of what it means to be a sound designer. I challenge the idea that crafting a sound system that shapes the sound we play through it isn’t artistic; I think this article shows that this way of thinking is in fact art. Dance often defaults to left-right mains with onstage monitors and side fills, but contemporary dance is pushing that envelope. Sound designers and other artistic minds need to be there to receive those pushes and birth a new way of making art, much like how Meyer continues to develop innovative tools that help us be better artists and better storytellers.

Photo credit goes to Natalie Foil. All other images within this article are from my personal paperwork for the 2021 November Dance production.


Becoming a Member of Recording Academy® / Grammys®

SoundGirls! Interested in joining the Recording Academy® / Grammys®?

Join us on Thursday, Feb 8 at 1 pm PST to learn more.

Hosted by

Amanda Garcia Davenport – Membership Manager and

Catharine Wood – Producer, Engineer, and Studio Owner

REGISTER HERE


Four Portfolio Reel Tips

Some Facebook groups I found in the LA area.

I’d like to note that a reel typically consists of a compilation of clips of live-action or animated TV shows, films, or even video games where the sound is replaced with your own edit. The material you choose can come from released media, where you can use the existing sound as a guide for your edit. However, it’s also a great opportunity to collaborate with up-and-coming filmmakers in your creative community and put together the sound design from scratch. This was particularly common while I was in Boston, where college students majoring in film and audio post-production could easily work together to complete a project. While it’s certainly not necessary for a great reel, I recommend using Facebook groups to connect with filmmakers, creatives, and other sound editors in your area.

KEEP IT SHORT

If you’ve been searching the internet for portfolio reel tips, this is probably the most common one you’ve seen. While a “short” reel may be defined differently by different editors, it’s important to consider the attention span of the person viewing your reel, as well as its variety. A good rule of thumb is to keep your reel between two and four minutes long. How you break down those minutes, though, can make a big difference, which leads me to my next point…

TAILOR TO YOUR DESIRED POSITION

Just like any other resume, your portfolio reel should be tweaked and adjusted based on the position you’re applying for. It’s important to research the places where you want to work or whose work interests you. For example, Boom Box Post specializes in post-production audio for animation, while Pinewood Studios focuses on live action. A larger studio like Skywalker Sound spans media, but many of their releases involve heavy live-action fighting sequences. Now, think about how to break down your reel based on the kinds of post-production studios you want to join. A portfolio reel for an animation-focused studio might include three one-minute clips covering different types of animation, while a reel for a large-scale live-action production studio could have two two-minute clips with long, dynamic fight sequences.

HAVE AN ORGANIZED DELIVERY METHOD

Your portfolio reel will most likely come in the form of a compiled video with a sharable link. Sometimes (though less commonly) employers may ask to see the full ProTools session instead of, or along with, a link to a reel. If this is the case, they are evaluating your organizational skills, so it’s essential to have all tracks labeled, clips organized, and a smooth signal flow in your session that makes it easy for them to see what’s happening and listen without any problems. We have a great blog on keeping your ProTools sessions organized, which you can read here. You can also check out this blog we have on solid file naming, which will make a great impression if you’re sending more than just a link to employers.

Example of Vimeo platform.

ProTools EQ-III.

If you’re sending a sharable link, there are a lot of great viewing options that are easy to use and easy for others to watch, including Vimeo, YouTube, and Squarespace. Once you’ve compiled your work in a ProTools session and bounced a QuickTime video of it, you can upload that video to any of these platforms and include text to describe the work you did on each clip, breaking down dialogue, Foley, and sound effects.

CONSIDER EVERY ASPECT OF THE PROJECT

While you may be applying specifically to a sound editing position, you still have a chance to show off your understanding of the big picture. This can include recording your own sound effects, Foley, and dialogue, and putting together a basic mix for your reel. Adjusting levels and panning, and using stock ProTools plug-ins like EQ-III to balance out unwanted frequencies, is a great way to show your understanding of how your effects relate to each other.

Sometimes it is easier to record some of your own sounds than to find effects in libraries. While Soundly and Splice both offer a limited number of free sound effects, other general library platforms like Pro Sound Effects can be very expensive. Recording your own Foley or vocal effects offers more flexibility, and you can also put together your own sound effects libraries to show employers, simply by collecting those sounds and creating playlists on SoundCloud.

Ultimately, your portfolio reel should be a concise demonstration of your skills as an editor; it should highlight the style or genre of the studios that interest you, and it should be easy to access and navigate. Portfolio reels come with a lot of opportunities to show off organization and resourcefulness, so be on the lookout for more ways to impress potential employers when you start building your reel.


The Perfect Ear

Miles once said, “When I think of those who have died, I get furious, so I don’t think about it. But their spirits are still wandering within me, so they are still here; they keep passing it on to others. It’s a spiritual thing, and they’re part of who I am today. It’s all in me, all the things they taught me to do. Music is about the spirit and the spiritual, and also about feelings.” I think his music is still present, somewhere, you understand? The things we touched together have to keep floating somewhere, because we expelled them with our breath, and the result was something magical, something spiritual. All of that is still with us. All of us who saw our lives transformed by Miles can still hear the unmistakable voice of his trumpet and still feel it directly stuck in the heart.

 

Quincy Troupe (Miles and Me)

A man of extravagant personality, Miles Davis had, among his many skills, the ability to recognize the musical notes in any sound: what is called absolute pitch.

What is absolute pitch?

Also known as perfect pitch, it is the ability to identify the frequency of an isolated auditory stimulus without the help of a reference stimulus; it can be defined as the ability to read sounds.
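As a loose illustration of the identification task, here is a small Python sketch that maps a frequency to the nearest equal-tempered note name, referenced to A4 = 440 Hz (the helper name is mine, purely for illustration):

```python
# Name the nearest equal-tempered note for a frequency (A4 = 440 Hz).
import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))  # nearest MIDI number
    return f"{NAMES[midi % 12]}{midi // 12 - 1}"        # MIDI 60 -> "C4"

print(nearest_note(261.63))  # C4 (middle C)
print(nearest_note(466.16))  # A#4
```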

Studies have suggested that this ability may be under the control of DNA. Researchers have compared the structure and activity of the auditory cortex, that is, the region of the cerebral cortex that receives auditory information, in people with and without the ability.

The historical beginnings of sound analysis

Since the nineteenth century, science has investigated how musical tones are understood and measured. Because pitch references were only precisely defined at that time, perfect pitch identification could only be studied from the nineteenth century onward. The perfect ear was thus described together with the musical ear and an absolute awareness of tone.

The differences and distinctions, though, lie in the details. Only about one in 10,000 people has absolute pitch and is therefore able to sing every note perfectly. On a physical and functional level, there is no difference in the auditory system of someone with this condition; rather, it is the expression of a rare ability to analyze tones accurately. This form of pitch identification is an act of cognition, whereby the frequencies heard are recognized as specific pure tones.

From a scientific point of view, this is a learning process that can also be applied to sounds or colors in general. However, since tones are distinguished much more finely than common color categories, this skill is much rarer.

Effects on language and perception

Influences around the perfect ear can also be described from a cultural perspective. Above all, the adoption of the C major scale generated new reference points that significantly simplify the identification of specific tones. Mostly, the tones played correspond to prior musical experiences, which then sound coherent or biased from the listener’s point of view. Absolute pitch has also led to some changes in terms of language and perception.

Language

Many languages and dialects depend on variations in tone, and accentuation is of elementary importance to classical speech processes.

Perception

Not only the reproduction of perfect tones but, above all, their perception plays a decisive role in perfect pitch. The distinction here is defined not by a better sense of hearing, but exclusively by recognition and categorization; this, again, comes down to the mechanisms of the brain.

Differences from relative pitch

Relative pitch, unlike absolute pitch, depends on a reference note. With perfect pitch, you can identify a note exactly, without any concrete example. With relative pitch, you simply compare the note with another note and classify it that way. While relative pitch makes it much easier for musicians to find their way musically, it is a far cry from absolute pitch. An absolute ear is an absolute rarity, even among musicians.

These skills are the foundation of the perfect ear

Absolute pitch is a very rare phenomenon. But what, specifically, can someone who has it do? The following forms the basis of this extremely rare skill and helps identify an absolute ear:

The special feature of people with an accurate understanding of tone is that they can call on the skill again and again. So if you get lucky and name a chord correctly, you do not necessarily have a perfect ear. The real skill is consistently repeatable: it allows you to demonstrate your understanding of tone and clearly identify any note you can think of, or any note played at random.

A good basis for musical sensibility

An absolute ear is by no means a prerequisite for a music career, and very few people get to enjoy this skill. Although having it can help with your musical sensibility, someone without an absolute ear can also become a great musician. This is evident from how few examples can be given of outstanding musicians with a perfect sense of tone.

Musicians with Perfect Pitch

Among musicians, absolute pitch is a real rarity. From the recent past, for example, Ella Fitzgerald, Charlie Puth, Bing Crosby, and Michael Jackson are names with perfect pitch. Judging by the many famous personalities missing from such a list, though, plenty of well-known musicians do not have the skill and are celebrated all the same.

Charlie Puth was even bullied by his classmates during childhood because of his special ability, yet he was able to assert himself in the music industry with his self-confidence and, of course, his perfect pitch. Today he is one of the most famous young artists, and he is predicted to have a steep career ahead of him. It turns out, however, that the demands lie not so much in the actual singing as in composition, where musical sensitivity and exact knowledge are absolutely necessary.

It is therefore not surprising that many famous composers of the past had perfect pitch, including Mozart, Handel, Chopin, and Beethoven. It made it much easier to find the right part for each instrument and create a harmonious effect. Even then, absolute pitch was a universal and important skill for a musician.

How do you achieve perfect pitch?

In most cases, this kind of musical understanding is innate. Even though you can learn a great deal about music theory and perception, that has nothing to do with perfect pitch. This, at least, is in line with the view of Brady, Levitin, and Rogers, who are critical of the idea that perfect pitch can be trained.

However, there is now some evidence that absolute pitch can develop without an innate gift. The University of Chicago conducted a study in which several students with different levels of musical experience were tested. Immediately after the initial training sessions, the students showed significant improvement in tone recognition, which allowed them to build on their experience.

Any of us can take the opportunity to sharpen our knowledge of the right tones; with a lot of practice, you can achieve it.

As a sound engineer, I have had the opportunity to collaborate with various musicians who possess this skill, and it has been a very enriching experience.

I think that people who have this ability can make very interesting contributions to musical projects, and can therefore develop a different approach to music and sound.

As Charly Garcia mentions,

“I know it’s genetic, that my great-grandfather also had absolute hearing and that sometimes when I pass by or I’m preparing a concert when I don’t sleep, that’s when things get a little difficult. There the degree of sound sensitivity is filtered to the other four senses and it is the sound that takes you and drags you. You see the sound and you play the sound, and you smell the sound and you like the sound.”

Absolute pitch can be a blessing and a curse at the same time, but it is a distinctive trait of García, another artist who possesses this ability.

Carolina Anton is an internationally recognized leader in the field of live sound mixing, systems design, and sound reinforcement optimization. Over more than 15 years, Carolina has built a career collaborating with distinguished artists and productions.

Carolina is co-founder of 3BH, a company that develops technology integration, design, and speaker calibration projects for post-production and music studios in Mexico and Latin America. She is also part of GoroGoro Immersive Labs, a mixing and creative studio specializing in immersive formats such as Dolby Atmos and Ambisonics, among others.

In 2016, she began representing SoundGirls in Mexico, supporting women pursuing professional careers in the entertainment industry.

 

