Commercial Music Publishing

Have you ever wondered what you can do with that hard drive full of compositions that you didn’t use on a show or project?  A few years ago, I discovered a fun and easy way to get some of my music published.  AudioSparx is a music library and stock audio website where users can license music and sound effects for commercial productions such as film, television, and corporate projects.  AudioSparx also operates a commercial music streaming site called RadioSparx, which provides background music for businesses such as spas, hotels, and retail shops.

To submit your music as a vendor, there are just a few easy steps, which you can read more about at https://www.audiosparx.com/sa/module/alliance/default.cfm.  On this page, AudioSparx posts the styles and genres of music they are currently looking for.  At the time of writing this blog, some of these styles included:  Pop Vocals, Contemporary Brit Pop, Hip Hop Dance, K-Pop, and Future Soul.  If you’re submitting your music for the first time, sending in something within one of these “need now” genres can definitely give you a leg up on getting published.

AudioSparx offers two kinds of license agreements:  AudioSparx (perpetual, non-exclusive) and RadioSparx-only (non-perpetual, non-exclusive).

With the AudioSparx license, artists can participate in all three of their websites, with the option to join multiple additional distribution and monetization programs.  This license is a perpetual commitment, which means that AudioSparx can license your music forever, but it is non-exclusive, which means that you are free to license the same tracks to third parties at any time.  The major benefit of this license type is that you earn money whenever your music is used in commercial projects or productions.  You can also earn residual performance royalties when your music is used in broadcast productions, as well as for commercial background music use when your tracks are included in subscription-based playback.  Even if your music is used in YouTube videos, you earn residual money.

With the RadioSparx-only license, your music is not licensed forever, and you are not bound exclusively to RadioSparx.  Your music will be added to the RadioSparx website, where users can license it as background music for their commercial businesses.  Users pay RadioSparx directly, and artists are then paid through direct licensing.  Residual income is also possible through subscription-based playback of your music.

You need a minimum of only three tracks to submit an artist application, and you can add tracks at any time, but the process becomes a lot more lucrative if you have at least 20 tracks available in your catalog at all times.  You can organize your tracks individually, or group them into a mini-library when they share a common factor like style or instrumentation.

Your earnings are also easy to track.  You can run reports at any time to see where your music is getting played and how well your tracks are doing.  After the end of each calendar quarter, AudioSparx will email you a commission report, and 45 days after each quarter, they send payment via PayPal if you have earned commissions of $25 or more.

Ok, so that all sounds easy enough.  Here’s the hard part: if you really want to make money this way, you have to stay on top of it, and it’s a lot of work to do it right.  With every submission, you are prompted to add lots of details about the music, and the more details you add, the more successful you will be at getting picked up.  You will also be more successful if you keep collections in multiple genres on your profile.  The more you have to offer, the more distribution is possible, and the more money you make.  In full disclosure, it took me several years to earn $9.64 from the 3 songs I have uploaded, but seeing anything at all gives me hope that if I put a lot more effort into this process, I might see more favorable returns.

Whether you are looking for a good side hustle, or just dabbling in music licensing, AudioSparx is a great and safe way to dip your toe in the water.

Making a Radio Documentary During Lockdown

A few weeks ago I got a nice surprise in the post: my finalist certificate from the New York Festivals Radio Awards for a documentary I made called Lennon: 40 Years On.

The documentary was broadcast in December 2020 to mark the 40th anniversary of John Lennon’s senseless murder in New York. It was, therefore, eligible for entry in the 2021 awards and ended up placing as a finalist in the Music Documentary category. I was absolutely thrilled. For radio makers, it doesn’t get much better than having your work recognised by some of the world’s most respected industry professionals. For me personally, it was also an incredible honour to be considered alongside some major broadcasters — especially for a programme that was made entirely remotely, by myself, during lockdown.

It wasn’t supposed to be that way. As a huge Beatles fan, I first came up with the idea for the documentary one day in 2019, while musing that the following year would mark the 40th anniversary of Lennon’s death. I already knew at that stage what I didn’t want to make: a profile of Lennon’s life and career (there are plenty of those already), or an in-depth account of the murder itself. Instead, I wanted to find a way to reflect on how that tragic event had influenced his legacy, and how fans understand him today. I decided to look particularly at how he has been remembered in his birthplace of Liverpool and his adopted city of New York. The plan was to visit both cities to record interviews on location.

But then COVID happened.

When working from home became a requirement, I had to decide whether or not to carry on with the idea. I’d done so much research that it seemed crazy to give up. I spent the next few months contacting contributors and arranging interviews, then recorded all of them remotely. Zencastr was my lifesaver. I was able to set up a free account with eight hours of recording time per month, safe in the knowledge that each audio file was being recorded locally without the pain of internet connection dropouts and poor quality. Sure, you still have to hope your guest will have a decent microphone, but this is all stuff you can talk through with them beforehand.

Then came the process of listening through to hours’ worth of audio and highlighting the parts of each interview I would potentially use. Once that was done, I scripted and recorded my narration and began editing everything together. There was quite a bit of archive audio to work with, as well as music. Mixing took several weeks. As everything had been done remotely, with no access to studios or different locations, I wanted to devote plenty of time to getting it right and making sure it was as close to broadcast standard as possible. There were times during that six-month period when I didn’t think it would all come together. But it did, and by the time the 40th anniversary arrived, it was ready to air.

Fast forward 15 months and people I don’t know are still contacting me to tell me they’ve listened and enjoyed it. I’m not normally one to pat myself on the back for a job well done (though I’m trying to get out of the habit of being too hard on myself), but in this case, I don’t mind saying I’m incredibly proud of what I achieved with this documentary in spite of the challenges. I’m also beyond grateful to everyone who contributed, or who simply offered words of encouragement along the way. In retrospect, the pandemic probably forced me to be twice as productive. Being at home all day long instead of commuting to and from work meant I had more time to focus and get things done.

It was a massive undertaking, but I’m so glad I did it.

Link to the documentary: https://www.todayfm.com/podcasts/the-paul-mcloone-show/lennon-40-years-on

Pros and Cons of Formal Audio Education 

I remember seeing a tweet a couple of years ago from Grammy-winning producer Finneas O’Connell about going to school for music production. He believed that it wasn’t necessary to succeed in the music industry. While his own career proves his theory, my first instinct when I read this was to defend my own education. At the time, I was studying audio engineering at Berklee College of Music, and I knew it was one of the most valuable programs for audio education in the country. Now that I’ve stepped out of academia and into the professional world of audio post-production, I’ve thought about O’Connell’s tweet again and about how my perspective on his opinion has evolved.

I chose to study audio engineering and sound design at the undergraduate level for a number of reasons. First of all, it was my goal to earn my undergraduate degree, which isn’t particularly common where I went to school. Berklee’s latest statistics show a 67% graduation rate, mainly because most students start working in the music industry before they graduate. Nonetheless, I found an academic space to study music production and audio engineering to be really beneficial for my style of learning and my level of experience. I started my first semester with no background in audio technology, recording techniques, sound design, or music production. I only knew how to write songs and record in GarageBand. Going to classes to learn these fundamentals, with assignments and deadlines, served exactly what I needed as a student. I also knew that I wanted to take the time to absorb all the information, so I didn’t feel rushed to enter the industry immediately.

Being part of a music production or audio education program provides step-by-step guidance and access to a huge amount of resources. I had the chance to connect with professors who specialized in my interests and with other students who wanted to practice the concepts we tackled in class or plan future networking opportunities. Having access to equipment and studio facilities meant that I didn’t have to buy my own until I graduated. By the time I did purchase my own gear, I had formed opinions about which equipment I liked most and which I didn’t need. Furthermore, the variety of classes gave me insight into different fields, histories, and techniques, which led me into post-production sound editing, even though I started the program wanting to focus on producing my own music.

When I moved to Los Angeles, I submitted many job applications and received some interviews, but ultimately the job search was long and grueling. It made me think about how the process would have changed if I hadn’t pursued a bachelor’s degree, and what kind of cons balance out the pros. The first and most considerable disadvantage of studying audio at the undergraduate level is the enormous financial commitment it entails. Not everyone has the financial support to complete a degree, especially when audio engineering and music production already involve purchasing expensive gear: software such as DAWs, synthesizers, and plug-ins, plus an audio interface, headphones or monitors, a microphone for recording, and makeshift room treatment, to name some valuable home-recording equipment. Paying tuition or student loans on top of all of this equipment is overwhelming, and will most likely affect which expensive items or programs you prioritize.

Also, for some producers or engineers, learning on the job can work better than lectures, homework, projects, and quizzes. Not all entry-level positions at recording studios require a college education, and starting out earlier in the music industry, in the right city where your interests align, is a great way to build momentum. Even though I like to learn by viewing lectures and reading manuals, many people are stronger kinesthetic learners who will pick up recording consoles and signal flow by working through the physical motions of setting up a recording session at a studio. Furthermore, like any other field, improvement in music production or audio engineering comes with practice. In a college program, however, practice is assigned in the form of homework and projects. While a syllabus can cover concepts of interest, having the freedom to choose what you practice in your own home setup lets you focus on the specific skills that serve your own goals in the music production industry.

From what I’ve learned since graduating from college, it doesn’t really matter how you acquire your experience and abilities as an audio engineer or music producer. What does matter is that you choose the process that best suits your style of learning and your own goals, and that you can see improvement as you practice and continue to work on recordings, sound edits, or MIDI programming. There is no pressure to follow anyone’s path to education but your own, because the right method will serve your needs as you step into the industry. I don’t think Finneas O’Connell is wrong to say that formal audio education is unnecessary. However, I do think it’s too narrow a belief for the diverse, creative minds that want to begin a career in music production.

Zoe Thrall – Love of Gear, Recording, and Music Makers

Zoe Thrall is a groundbreaker and a legend, with 40+ years in the music industry. She spent years working as an engineer and studio manager for Power Station Studios and Hit Factory Studios in NYC, then toured with Steven Van Zandt and his band, The Disciples of Soul. In 2005, she relocated to Las Vegas, taking over the reins at The Palms Studio until it was shuttered due to COVID. Zoe has since moved to The Hideout as Director of Studio Operations, where artists from Carlos Santana to Kendrick Lamar have recorded. Zoe is an artist and engineer, and is well versed in studio management.

Zoe was introduced to audio as a career path as a freshman in college (State University of New York at Fredonia), where she had a friend majoring in audio engineering. She applied to the music department and then transferred to audio. Though she attended all four years, she was offered a job in her fourth year and never finished her last eight credits.

Zoe was always interested in audio; she remembers as a kid “tinkering with my cassette machines and my records, taking two tape machines and recording from one to the other.” Her parents loved music, and she was exposed to all kinds of music growing up, from pop standards to Broadway. At age eight, Zoe says, “I tried to learn any instrument I could get my hands on.  Turns out I was best on woodwind instruments and pursued learning them more seriously.” As we will learn, woodwind instruments led her to record with Steven Van Zandt.

Working with Steven Van Zandt

Zoe was working as an assistant engineer at a studio where Steven was working on several albums he was producing, as well as his first solo album. Zoe remembers that he was looking for a specific sound; his guitar tech mentioned that she played oboe, and she ended up on the record. After the record was finished, Steven asked her to go on the road. She was 22 years old and says, “that was not something I ever considered.” Zoe would continue to work with Steven for eleven years, playing on and engineering several albums. Zoe says, “I learned everything about the business from Steven, about music production and contracts and publishing. Steven was extremely politically active, and so I also got involved in a number of social and political organizations, mostly in human rights.  I got to see that side of the world and meet Nelson Mandela. It was a whirlwind of 11 years and something I never dreamed of doing in terms of touring and being a member of a band.”

“Having a mentor like Steven was absolutely critical in my professional growth.  He would push me to do things that I never thought I could do, but he trusted I could, and that gave me the confidence to try.  There were so many invaluable lessons.  He would push me as a musician (playing keyboards on a Peter Gabriel track), as an engineer (building a home studio and recording his projects there), as a manager (rehearsing, hiring/firing band members), and even in the political arena, where I was least comfortable.  One time he sent me to meet with Archbishop Desmond Tutu as the representative of our foundation, The Solidarity Foundation.  I was scared to death.  But I was able to discuss some of the programs we had instituted in the anti-apartheid movement.  These are just a few examples of what could get thrown at me at any given time.”

Zoe has been recognized for both her work and her humanitarian efforts, including planning and co-organizing a fundraiser for Nelson Mandela, receiving a commendation from the United Nations for work done in the anti-apartheid movement, and serving three times as co-chair of the Audio Engineering Society Conventions (2005, 2006, and 2021).

Career Start

How did your early internships or jobs help build a foundation for where you are now?  

The internship was essential to my growth and my future.  It introduced me to some extremely talented engineers and producers who were my early mentors.  That specific internship led to every other door that opened for me.  Eleven years later, I was back as that studio’s manager.

What did you learn interning or on your early gigs? 

Keep your mouth shut and your ears open.  Lend a helping hand anywhere you can.  Put in as much time as you can and someone will notice.  Be honest; don’t try to do something you don’t know how to do (then learn how to do it later).  Be willing to do anything and everything asked of you (to a degree). Don’t count the hours.

Did you have a mentor or someone that really helped you?  

Initially, as I stated above, I was fortunate to have been around some pretty talented (and tolerant) people from day one, like Bob Clearmountain, Neil Dorfsman, James Farber, and Tony Bongiovi.  But really my main mentor is Steven Van Zandt, with whom I eventually worked for 11 years.  Everything I know about the music/recording industry I learned from him.

Career Now

What is a typical day like?  

You have to wear a lot of hats managing a commercial recording studio.  I’m the first one to come in the morning because I like to check the rooms and the rest of the facility before anyone gets here.  Then I make sure we have everything we need for the sessions coming in.  I keep an eye on when the staff is arriving to make sure they get here on time for their sessions.  I book studio time and negotiate the deals with the clients. I review the sessions from the previous day and do the billing.  As the day goes on I will check with the clients to see how their sessions are running.  Then mid-day I will look to see what the next few days are bringing us to be sure we are prepared for them.  There are many phone calls, overseeing staff, vendors, etc.

How do you stay organized and focused?  

I write everything down.  People make fun of me for it but if I write it down I won’t forget something.  There are so many details that come at you during the day I couldn’t possibly remember everything.

What do you enjoy the most about your job?  

Even though I no longer engineer I still love gear and the recording process.  I love music makers.  I love creativity.

What do you like least?  

Clients that expect to sound like Drake in three hours.  Their expectations are not realistic. Also, the 24-hours-a-day, 7-days-a-week aspect of it.

If you tour, what do you like best?  

I did tour when I was younger.  It’s really hard but exhilarating at the same time.  It’s an easy way to see the world.  I loved learning about different cultures. The feeling you get just before you step on the stage is something I’ve never felt doing anything else, whether it was to an audience of 200 or 100,000.

Zoe Thrall on The SoundGirls Podcast

I Always Cry On A Sunday

As you can tell from the title, and the use of the first-person singular pronoun, this blog is personal – but aren’t they always? – and maybe a little confessional too.

It’s the end of February and we’re looking forward to March with its winds: March winds followed by April showers, at least that’s what we say in the UK.  In fact, they’ve been a bit early and a little ferocious this last week, with ‘Storm Eunice’ gusting at over 100 mph.  I mention this since March might just turn out better than expected.

After a long wait, the Giuseppe Verdi Conservatorio in Turin finally has its new electroacoustic composition teacher, and my first lesson is on March 1st.  So… I’d better get moving.  I was persuaded to scrap, or at least set aside for now, my previous idea of an electroacoustic piece with cello melodies based on the bodily curves of my girlfriend, the Mexican artist who eventually broke my heart.  That was eight months ago and, as my title suggests, these days I still occasionally cry over lost love and what might have been, but always on a Sunday. The melodies had already been written: transcriptions of her side, shoulder, and neck seen from behind, with counter-melodies taken from shadows and lines on her back, including a tattoo. I had already sketched in some ideas for granular, dusty, effervescent electronics to represent the shadows: where they are positioned, their density, and the amount of space they occupy.

I had also begun to think about extending the sketch into a three-movement piece, with the tattoo representing the struggle between the first movement’s depiction of voluptuousness and seduction and the redemption of the last movement, symbolizing serenity and peace, with granular drones containing fragments of spoken text softly woven into them.

Have I just persuaded myself to take it on again?  Will the redemption of the last movement be enough to release me from the pain of heartbreak and leave me at peace with myself?  Maybe yes, maybe no!  But, as I mentioned in a previous blog, I value authenticity in both the writing and the performance of a piece of music, and this would most certainly be authentic, there being a good reason for this art to exist.

Would it stop me crying on a Sunday?

I’ll tell you what made me cry on Sunday the 13th of February: my father dying of complications from Covid-19.  He was 97, admittedly, but still in good shape, and he had had his vaccinations; but complications set in – a minor stroke, a move into the ICU – and in the end, he was put into an induced coma and life support was removed to allow him to go peacefully.  This was in Minneapolis while I was here in Turin, but I was kept up to date by my younger brother and sister while they managed things there.  He was absent for much of my childhood, but I can still thank him for the gift of music he passed on to us.  He was a tenor, with that typically English sound, and I still remember, as a young child, sitting under the table listening to him practicing arias from opera and Neapolitan songs… that was ‘the gift’.

So, from when he was taken ill until Sunday the 13th, and up until the funeral the following Friday, I was pretty useless, with a total lack of drive and enthusiasm for anything; music was the furthest thing from my mind.

Did I cry last Sunday?  No, actually!  I didn’t have time; I was too busy with Non Una di Meno, Torino (NUDM, originally formed by feminist groups in Chile and now found across the continents), preparing for International Women’s Day, or as I like to call it: ‘Move over, patriarchy!  You’ve been pretty useless for these last few thousand years, so step aside and let us take over; we have both the intelligence and the empathy.’  In Italy we’re planning a 24-hour national strike and have been in touch with the unions for their support; and we have a few grievances:

We deplore gender-based violence: Over 100 women murdered by men in 2021.

We also deplore the bias and inaction of the Italian judicial system which routinely pushes the blame for rape onto the victim.  And in one case I personally know of, when a young woman went to the police to report violence at the hands of her then-boyfriend, she was told that it was not a crime since it had taken place within a relationship.

Women have been the first to lose work due to the pandemic and are not treated equally with their male counterparts.

Health care for women who suffer from endometriosis, vulvodynia, etc. – basically anything to do with a woman’s womb or genitalia – means at least six months of waiting or, if you can afford it, paying for private health care.

If only we could persuade all stay-at-home women to strike on that day; that would certainly put the male world of work, and its master, Capitalism, under some strain, even if only for 24 hours. As Carla Lonzi (Italian feminist) wrote in her ‘Manifesto di Rivolta Femminile’ of 1970:

“We recognize in the unpaid domestic work of women, the service which allows both state and private capitalism to exist.”

I keep this picture of a young woman of Non Una di Meno on my cellphone to remind me that there are young women willing to stand up and be counted, particularly for the women who don’t have a voice.  We have many migrants from North Africa in all major cities of Italy and the women, in particular, need advocacy.  One of our LGBTQIA+ groups held drop-in sessions for gay and lesbian refugees, who would have been tortured, imprisoned, or killed if sent back to their own country.  I helped two Nigerian lesbian refugees prepare their evidence for the commission that would decide if they were given permission to stay based on refugee status; they were successful.  As Covid restrictions ease these drop-ins might restart.

So I’ve been personal, confessional, and political; put more simply, the last couple of weeks have been a bit meh!

Thus, finally, I arrive at the music part of the blog: the first song of my song cycle.

This first song of the projected cycle – settings of poems by lesbian writers – is La divinité inconnue, written by Renée Vivien, who lived on the Left Bank in Paris and, at some point, in the same block as Colette.  She was dubbed the new Sappho and translated some of Sappho’s poems into French.  I mentioned authenticity in an earlier blog, and Renée’s poems are indeed authentic: they are dark, using lugubrious imagery, and many express her love for women and, in particular, her intense love affair with the American heiress Natalie Barney.  However, Natalie was very much against the idea of monogamous relationships: she desired that her lovers be sincere with her, but only while her passion lasted.  So, after a bit more than two years together, Renée broke off the relationship.  It is thought that this breakup led to her early death at 32, through anorexia caused by alcohol and drug abuse.  The poem below is probably the most extreme example of the pain and hurt she felt at Natalie’s infidelity; I shall later set it in the original French as part of the song cycle, but for the time being, I shall begin with La divinité inconnue.

I hate she who I love, and I love she who I hate.

I would love to torture most skilfully the wounded limbs of she who I love,

I would like to drink the sighs of her pain and the lamentations of her agony,

I would slowly suffocate the breath from her breast,

I would wish that a merciless dagger pierce her to the heart,

And I would be happy to see the blood weeping, drop by drop from her veins.

I would love to see her death on the bed of our caresses…

I love she who I hate.

When I spy her in a crowd, inside of me I feel an incurable burning desire to hold her tight in front of everyone and to possess her in the light of day.

The words of bitterness on my lips change to become sweet sighs of desire.

I push her away with all my anger, and yet I call to her with all my sensuality.

She is both ruthless and cowardly, and yet her body is passionate and fresh – a flame dissolving in the dew

I cannot look, without feeling breathless and without regrets, at the perfidy in her stare or the falsehood on her lips…..

I hate she who I love, and I love she who I hate.

‘Translation of a Polish Song’ – Renée Vivien (1903)

So that’s the poet.  What about the music and the setting?  I’ve projected the song for soprano – light and angelic against the darkness of the text – and the instrumentation, which is cello, B-flat clarinet, harp, and electronics.  I’m aiming to keep the instrumentation light and translucent, so that there are very few tutti passages, and those that remain will be sparse.  The final mixing will be key.  The practical considerations: find a soprano who can sing in French.  For the instruments, for the time being, I will use orchestral samples from Spitfire Audio or the UVI IRCAM Solo Instruments 2, which offers quite a few innovative instrumental techniques; performance versions will, of course, use live voice and instruments with live electronics.

What am I using to create the music?  Keyboard and pencil for setting the words and melodies, and here I need to collaborate with the soprano, since I’d like to build in some space for improvisation.  For the instruments, I’m using MIDI inputs going into Logic Pro – the generous 90-day trial gives me time to get to know the ins and outs.  I’m still tempted to put together the individual clips, loops, drones, etc. in Audition, since I find it easy to manipulate them there and to balance and pan the individual tracks.  Final mixing, however, is not so good, since Audition doesn’t seem to have faders like Logic Pro or Reaper.  So this is telling me to record the MIDI in Logic Pro and tweak as necessary, then bounce the audio tracks to Audition, though I still have to learn to do that. However, if I make the effort and become more familiar with Logic Pro before the trial is up, then this may be ‘the one’, especially since it has enough built-in sounds, instruments, and loops to keep me amused.

One more bit of software: Max/MSP.  Now, I have a love/hate relationship with software, and this is one that I could really hate, since it requires a kind of proto-programming.  “Ha!” I hear those of you in the know and slick with SuperCollider say.  My point is that I’m an artist, not a scientist or mathematician, though I can take an interest when I fancy.  No, if I were a violinist, I would no more consider building my own Stradivarius than I would programming music.  So, for that reason, I’m using the BEAP modules in Max, which feel like using the EMS 100 analog synthesizer in the late 70s: I patch one module into another to control frequencies, modulations, envelope shapes, etc.  I may occasionally write in a small piece of programming that changes my patches, but one step at a time.  Right at this moment, I am experimenting with various modules which allow me to treat and shape synthesized sounds as well as recorded sounds.  In the last few years, I have created thousands of clips while processing recorded sounds, which I have used to create loops and drones, for example.  I can therefore take various individual stages of these sounds and repurpose them into something new; run them through a granulator, spectral filter, sampler, sequencer, or random note generator.  I can treat them in so many ways and mix them with other sounds by multitracking; all I have to do is find them.  This is my ‘experimental’!

So far, I have been experimenting with two sine-wave oscillators to create some simple frequency modulation, then adding a third, and using low-frequency oscillators to control the sound-producing oscillators.  On top of that sits a soft underlay: a recording of an espresso machine at work, put through a granulator and controlled by another LFO, which samples the clip at various points and produces a kaleidoscope of sounds.
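For readers who don’t patch in Max, here is the same oscillator experiment sketched in Python with NumPy – not the BEAP patch itself, just an illustration of the idea, with all frequencies and the modulation depth chosen arbitrarily:

    # A rough numpy equivalent of the patch described above (illustrative
    # values throughout; a sketch, not the actual Max/MSP patch).
    import numpy as np

    sr = 44100                       # sample rate in Hz
    t = np.arange(0, 5.0, 1.0 / sr)  # five seconds of time

    carrier_hz = 220.0               # sound-producing oscillator
    mod_hz = 110.0                   # modulating oscillator
    lfo_hz = 0.25                    # slow LFO sweeping the FM depth

    # The LFO moves the modulation index between 0 and 3 over its cycle
    index = 3.0 * (0.5 + 0.5 * np.sin(2 * np.pi * lfo_hz * t))
    modulator = np.sin(2 * np.pi * mod_hz * t)
    signal = np.sin(2 * np.pi * carrier_hz * t + index * modulator)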

How’s it going so far?  Not very far… For the reasons mentioned earlier, there are other things that have occupied my ADHD-addled brain, and together they added a bit of confusion and depression.  But I’m coming out of that now and am desperate to get composing again.  Besides, I need to take something to the Prof on March 1st.  So, having finished this, I’ll add something to the soprano melody while still experimenting with the electronics.

I listen a lot, because music is the same as me; and if I am not myself, I am nobody.  This is one song that has inspired me with its beauty and its authenticity. It is magical from the start.  Then at 59” something almost ecstatic happens… this is the kind of inspiration that will lead me through my piece, plus, of course, Renée’s poetry and the sadness of her life, which kind of chimes with me; but now I only cry on a Sunday.

Love from Frà in Torino

She lost her voice, that’s how we knew. Music by Frances White, sung by soprano Kristin Norderval, libretto by Valeria Vasilevski

https://open.spotify.com/track/6CnP4XJbPYB9CHFYPq9bFT?si=8cd81879067d47f6

Demystifying Loudness Standards

Every sound engineer refers to some kind of meter to aid the judgments we make with our ears. Sometimes it is a meter on the tracks in a DAW or on that session’s master output, other times it is LEDs lighting up a console like a Christmas tree, a handheld sound level meter, or a VU meter. All of those meters measure audio signals using different scales, but they all use the decibel as a unit of measurement. There is also a measurement of mix levels designed to represent the human perception of sound: loudness!

Our job as audio engineers and sound designers is to deliver a seamless aural experience. Loudness standards are a set of guides, measured by particular algorithms, to ensure that everyone who is mixing audio delivers a product that sounds similar in volume across a given streaming service, website, or radio or television station. The less work our audiences have to do, the better we have done our jobs. Loudness is one of the many tools that help us deliver the best experience possible.

History           

A big reason we started mixing to loudness standards was to achieve consistent volume, from program to program as well as within shows. Listeners and viewers used to complain to the FCC and BBC TV about jumps in volume between programs, and volume ranges within programs being too wide. Listeners had to perpetually make volume adjustments on their end when their radio or television suddenly got loud, or to hear what was being said if a moment was mixed too quietly compared to the rest of the program.

In 2007, the International Telecommunication Union (ITU) released the ITU-R BS.1770 standard: a set of algorithms to measure audio program loudness and true-peak level (Cheuks’ Blog).  The European Broadcasting Union (EBU) then began working with the ITU standard, and modified it when they discovered that gaps of silence could bring a loud program down to specification. The result was a standard called EBU R 128: levels more than 8 LU below the ungated measurement do not count towards the integrated loudness level, which means that the quiet parts cannot skew the measurement of the whole program. The ITU standard is still used internationally.

Even after all of this standardization, television viewers were still being blasted by painfully loud commercials.  So, on December 13th, 2012, the FCC’s rules implementing the Commercial Advertisement Loudness Mitigation (CALM) Act went into effect. From the FCC website: “Specifically, the CALM Act directs the Commission to establish rules that require TV stations, cable operators, satellite TV providers or other multichannel video program distributors to apply the ATSC A/85 Recommended Practice to commercial advertisements they transmit to viewers. The ATSC A/85 RP is a set of methods to measure and control the audio loudness of digital programming, including commercials.  This standard can be used by all broadcast television stations and pay-TV providers.”  And yup, listeners can file complaints with the FCC if a commercial is too loud. The CALM Act regulates only the loudness of commercials.

Countries outside Europe have their own loudness standards, derived from the global ITU-R BS.1770. China’s standard for television broadcast is GY/T 282-2014; Japan’s is ARIB TR-B32; Australia’s and New Zealand’s is OP-59. Many European and South American countries, along with South Africa, use the EBU R 128 standard. There is a more comprehensive list linked at the end of this article, in the resources section.

Most clients you mix for will expect you, the sound designer or sound mixer, to abide by one of these standards, depending on who is distributing the work (Apple, Spotify, Netflix, YouTube, broadcast, etc.).

The Science Behind Loudness Measurements

Loudness is a measurement of human perception. If you have not experienced mixing with a loudness meter, you are (hopefully) paying attention to RMS, peak, or VU meters in your DAW or on your hardware. RMS (average level) and peak (loudest level) meters measure levels in decibels relative to full scale (dBFS). The numbers on those meters are based on the voltage of an audio signal. VU meters use a VU scale (where 0 VU is equal to +4 dBu) and, like RMS and peak meters, measure the voltage of an audio signal.

Those measurements would work for loudness – if humans heard all frequencies in the audio spectrum at equal volume levels. But we don’t! Get familiar with the Fletcher-Munson curves. They chart, on average, how sensitive humans are to different frequencies. (Technically speaking, we all hear slightly differently from each other, but this is a solid basis.)

Humans need low frequencies to be cranked up in order to perceive them at the same volume as higher frequencies. Sound coming from behind us is also weighted louder than sound in front of us. Perhaps it is an instinct that evolved with early humans: as animals, we are still on the lookout for predators sneaking up on us from behind.

Instead of measuring loudness in decibels (dB), we measure it in loudness units full scale (LUFS, or interchangeably, LKFS). LUFS measurements account for humans being less sensitive to low frequencies but more sensitive to sounds coming from behind them.
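To make that concrete, here is a minimal sketch of the final step of the BS.1770 loudness calculation in Python. The K-weighting filter stage is omitted, and the channel names and example numbers are illustrative only:

    import math

    # Channel weights from ITU-R BS.1770: 1.0 for left/right/center,
    # 1.41 for the surround channels; the LFE channel is excluded.
    G = {"L": 1.0, "R": 1.0, "C": 1.0, "Ls": 1.41, "Rs": 1.41}

    def loudness_lkfs(z):
        # z maps channel name -> mean square of that channel's K-weighted samples
        return -0.691 + 10 * math.log10(sum(G[ch] * z[ch] for ch in z))

    # A stereo mix whose K-weighted channels each have a mean square of 0.01
    print(loudness_lkfs({"L": 0.01, "R": 0.01}))  # about -17.7 LUFS

Note how the surround weights are what make sound from behind count as louder in the measurement.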

There are a couple more interesting things to know about how loudness meters work. We already mentioned how the EBU standard gates anything more than 8 LU below the ungated measurement, so the really quiet or silent parts do not skew the measurement of the whole mix (which would otherwise allow the loudest parts to be way too loud). Loudness standards also dictate the allowed dynamic range of a program (measured in loudness units). This is important so your audience does not have to tweak the volume to hear people during very quiet scenes, and it saves their ears from getting blasted by a World War Two bomb squadron or a kaiju when they have the stereo turned way up to hear a quiet conversation. (Though every sound designer and mixer knows that there will always be more sensitive listeners who will complain about a loud scene anyway.)

Terms

Here is a list of terms you will see on all loudness meters.

LUFS/LKFS – Loudness Units relative to Full Scale (LKFS is the K-weighted name for the same unit; they are effectively the same thing).

Weighting standards – When you mix to a loudness spec in LUFS, also know which delivery standard you should use! (EBU R 128, ATSC A/85, GY/T 282-2014, ARIB TR-B32, and OP-59 are among the most commonly used.)

True Peak Max – A bit of explanation here: when you play audio in your DAW, you are hearing an analog reconstruction of digital audio data. Depending on how that audio data is decoded, the analog reconstruction might peak beyond the digital waveform. Those peaks are called inter-sample peaks. Inter-sample peaks will not be detected by a limiter or sample peak meter, but a true peak meter on a loudness meter will catch them. True peak is measured in dBTP.

Momentary loudness – Loudness at any given moment, for measuring the loudness of a section.

Long-term/Integrated loudness – The average loudness of your whole mix.

Target Levels – The measurement in LUFS that the mix should reach.

Range/LRA – Dynamic range, but measured in loudness units (LU).

How To Mix To Loudness Standards

Okay, you know the history, you are armed with the terminology…now what? First, let us talk about the consequences of not mixing to spec.

For every client, there are different devices at the distribution stage that decode your audio and play it out to the airwaves. Those devices have different specifications. If the mix does not meet specifications, the distributor will turn it up or down to normalize the audio to their standards. A couple of things happen as a result. One: loss of dynamic range. The quietest parts are still too quiet, and if there are parts that are too loud, those parts will sound distorted and crushed due to the compressed waveforms. The end result is a quiet mix with no dynamics and added distortion.
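The arithmetic behind that turn-down is simple. Here is a sketch with hypothetical numbers (the -24 LUFS figure is just one example of a broadcast target):

    # Hypothetical numbers: a distributor normalizing a too-hot mix.
    measured_lufs = -10.0                  # integrated loudness as delivered
    target_lufs = -24.0                    # e.g., an ATSC A/85 broadcast target
    gain_db = target_lufs - measured_lufs  # -14 dB turn-down
    gain_linear = 10 ** (gain_db / 20)     # about 0.2x amplitude
    # out_sample = in_sample * gain_linear: the whole mix comes down,
    # but any limiting or distortion baked in while chasing loudness
    # stays in the now-quieter result.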

To put loudness mixing into practice, start with your ears. Mix what sounds good. Aim for intelligibility and consistency. Keep an eye on your RMS, peak, or VU meters, but do not worry about LUFS yet.

Your second pass is when you mix to your target LUFS levels. Keep an eye on your loudness meter. I watch the momentary loudness reading, because if I am consistently in the ballpark with momentary loudness, I will end up with a reliable integrated loudness reading and a dynamic range that is not too wide. Limiters can also be used to your advantage.

Then, bounce your mix. Bring the bounce into your session, select the clip, then open your loudness plugin and analyze the bounce. Your loudness plugin will give you a reading with the current specs for your bounce. (Caveat: I am using Pro Tools terminology. Check whether your DAW has a feature similar to AudioSuite.) This also works great for analyzing sections of audio at a time while you are mixing.
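If you ever want to sanity-check a bounce outside your DAW, the same analysis can be scripted. Here is a minimal sketch assuming the open-source pyloudnorm and soundfile Python packages; the filename is a placeholder:

    # pip install pyloudnorm soundfile
    import soundfile as sf
    import pyloudnorm as pyln

    data, rate = sf.read("bounce.wav")          # load the bounced mix
    meter = pyln.Meter(rate)                    # BS.1770 meter (K-weighting + gating)
    loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
    print(f"Integrated loudness: {loudness:.1f} LUFS")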

Speaking of plugins, here are a few of the most used loudness meters. Insert one of these on your master track to measure your loudness.

Youlean Loudness Meter
This one is top of the list because it is FREE! It also has a cool feature where it shows a linear history of the loudness readings.

iZotope Insight
Insight is really cool. There are a lot of different views, including history and sound field views, and a spectrogram so you can see how different frequencies are being weighted. This plugin measures momentary loudness fast.

Waves WLM Meter
The Waves option may not have a bunch of flashy features like its iZotope competitor, but it measures everything accurately and comes with an adjustable trim feature. The short-term loudness reading is accurate but does not bounce around as fast as Insight’s, which I actually prefer.

TC Electronic LMN Meter
I have not personally used this meter, but it looks like a great option for those of us mixing for 5.1 systems. And the radar display is pretty cool!

Wrapping Up: Making Art with Science

The science and history may be a little dry to research, but loudness mixing is an art form in itself, because if listeners have to constantly adjust the volume, we are failing at our job of creating a distraction-free, hassle-free experience for our audience. Loudness standards go beyond a set of rules; they are an opportunity for audio engineers to use our scientific prowess to shape our work into a unifying experience.

Resources

First, big thanks to my editors (and fellow audio engineers) Jay Czys and Andie Huether.

The Loudness Standards (Measurement) – LUFS (Cheuks’ Blog)
https://cheuksblog.wordpress.com/2018/04/02/the-loudness-standards-measurement-lufs/

Loudness: Everything You Need to Know (Production Expert)
https://www.pro-tools-expert.com/production-expert-1/loudness-everything-you-need-to-know

Loud Commercials (The Federal Communications Commission)
https://www.fcc.gov/media/policy/loud-commercials

Loudness vs. True Peak: A Beginner’s Guide (NUGEN Audio)
https://nugenaudio.com/loudness-true-peak/

Worldwide Loudness Standards
https://www.rtw.com/en/blog/worldwide-loudness-delivery-standards.html

Home Recording with Kids in the House

Banished. That’s where my kids would be when I recorded at home pre-pandemic. Not banished to their knowledge, just hanging with their grandparents for at least an entire day, at most overnight. The point is they would be gone, and I’d have sent our dog with them, too, if I’d had my way.

During the pandemic, I found myself itching to record and determined to avoid exposing vulnerable elders to a virus. Short of going into debt building a proper studio at home, I applied my knowledge and resources to develop these strategies:

Directional microphones are your friend. In an ideal recording situation, a large-diaphragm condenser microphone, like my Lauten 220, would be my preference for recording acoustic guitar and vocals. In my loud house, however, noise rejection was key. My kids could be shrieking away in the other room, but with my SM7B pointed away from them, I could sing my heart out with only minimal bleed. Plus, I could point my SM57 or Sennheiser e609 at my amp, again facing away from the children, to get some electric guitar underway.

Sound treatment is your friend. Believe me, I wish my home were properly sound treated – for so many reasons, not just recording purposes. In a pinch, it helps to hang quilts, order some shipping blankets, or hide out in a closet. It all works to cut down on noise. However, be sure to avoid cheap foam options that attenuate only higher frequencies while doing little for the low end.

Record directly if you can. If you have a decent DI to connect your instrument to your interface, you won’t have to worry about home interference at all. You can always add effects later, and you may even come up with something innovative and fun with that dry signal. Many interfaces even allow for this without a DI.

Do many takes. While you have everything set up, run through the miked stuff a few times, keeping the same distance from the microphone when you sing or play for consistency’s sake. The little ones won’t scream at the same exact part each time you sing the song unless they’re especially cruel and deliberate about it. You can stitch together the best takes later.

Communicate. Let the kids know you’re going to be doing something special and need quiet for a short while. Talk about what they can do to entertain themselves in the meantime. Set boundaries beforehand and there should be fewer interruptions. Just be prepared to keep your promises if you make any (e.g., ice cream following the recording session).

It’s never going to be perfect, and of course, it requires flexibility, but it’s completely possible to record at home with your kids around. Breaks may be necessary and you may not get the best sound of your life, but what’s the alternative? Not doing it? Make those perfectly imperfect recordings at home. Lead by example and show the young ones in your life that there is always room for creativity so that they can learn to prioritize their own as they move beyond childhood. And if all else fails, scrap it and try again when they’re asleep.

Designing with Meyer Constellation

Using an array of ambient sensing microphones, digital signal processing, and world-class speakers, Constellation modifies the reverberant characteristics of a venue and redistributes sound throughout the space, ensuring a natural acoustic experience. I am very fortunate to have had the opportunity to design with this system. The Krannert Center for the Performing Arts recently had one installed in its Colwell Playhouse Theatre. In this article, I will go over how I designed this system for the 2021 November Dance show, how I utilized the 100+ speakers, and how I shaped the environment of each dance piece.

I began the design process by grouping my outputs into zones where they could fulfill a certain purpose. In CueStation, Meyer’s software interface for its D-Mitri systems, these groups are called buses. I utilized a total of ten buses and over 80 speakers out of the original 127. The paperwork, and making sure things were clear for my engineer, was a new challenge. This system is large, and I found that color coding and adding legends with further notes really helped represent the system I needed – the system that would become the world for the show, the audience, and the art the dancers were bringing into the space.

These zones allowed me to create a truly immersive experience with the sound. I was consistently using the House Left and Right sides, Rears, and Ceiling buses. However, what I loved the most was the Sub bus. Rather than using the onstage subs with the arrays, I opted to use the installed flown subs. What I have experienced in previous designs is that I prefer the encompassing blanket of sound that subs give when they are flown from a distance. I really didn’t want to localize them to the stage. I did, however, use the Center and Front Fills buses to draw more attention to the stage and dancers. I found that I preferred this balance of sound and the image that is created as an audience member.

I also found that the color coding, legends, and graphics really helped me keep track of this system. It felt daunting at first, but this breakdown allowed me to easily manage all of my outputs. The dance productions here don’t get a ton of time for the tech process, so this setup really helped me adjust levels quickly and not get bogged down. I hadn’t worked with this software on a show before, and it comes with a learning curve, so I needed to stay productive throughout the entire rehearsal process.

Playback also works differently in Meyer’s CueStation. It is often triggered and played back in Wildtracks, which uses decks – virtual decks, that is. It felt reminiscent of my dad’s tape deck from when I was growing up. Even though the tech process for this production added several more decks and cues to my original paperwork, I will show you the initial documents and how I set up my playback.

Originally, each dance piece had its own deck. You can also see that each dance had a varying number of CueStation inputs. These are the Wildtracks inputs that I then assigned to my buses of speaker zones. For Anna’s and Jakki’s pieces, I received stereo files. Though this was less than ideal, I still sent the music to the buses and crafted a great sound for each piece. I was also the designer for Harbored Weight, so there I had more opportunity to work with stems and individual tracks to send and pan around the room.

This is the kind of world I like to think and live in as a designer. There was a fourth dance that used only live music: it was titled Love and had just a cellist mic’d on stage. Harbored Weight also had a live pianist accompanying the dancers. With CueStation, I was able to take the mic’d signal from these instruments and send it to my buses as well, whether for onstage monitoring for the dancers or artistically in the house. What I discovered, though, was that I could achieve a beautiful presence in the house with the other half of this design – the half that involves Constellation.

I sculpted a unique Constellation setting for each dance piece. This information was saved within each CueStation cue and thus recalled at the top of each dance by the stage manager. Most of the choreographers really wanted a large-sounding reverb. One, in particular, asked for something as close to cave-like as possible. I love these kinds of design requests.

Not only was I able to start with a base setting like ‘large hall’, but I was also able to adjust parameters like early reflections, which really helped create a huge, immersive-sounding space. I was up against a learning curve, though. I realized that with the Constellation cue still active, the audience’s applause at the end of each dance would be accentuated and echoed around the theatre. I found this cool-sounding, but obnoxious. As a result, I had to program more cues and use more Wildtracks decks to turn off Constellation at the end of each dance.

Then there are the designated microphones that capture the sound – they make Constellation processing what it is. For Donald Byrd’s piece Love, I was able to put an already beautiful cello sound through the processing system and hug the audience with its warmth. This really helped for a few reasons. The dance was set to several Benjamin Britten pieces, and it was just the cellist and dancers on stage. One cellist can sound small in a large theatre, and the choreographer really wanted a big, full sound. I mic’d the cello with a DPA 4099, but also used the ambient microphones to capture the instrument and send the signal through the Constellation processing and the unique patch I had created. I designed a really warm and enveloping sound that was still localized to the stage and gave the illusion of a full orchestra.

My design for the 2021 November Dance did not incorporate Meyer’s Spacemap side of Constellation. I was able to do everything artistically that I wanted and that the choreographers needed without using Spacemap. I do look forward to using it in future designs though. If this article intrigues you, I would highly recommend looking into Spacemap as well as Spacemap GO.

I love that I can find ways to be a designer and be artistic outside of the typical realm of what it means to be a sound designer. I challenge the idea that crafting a sound system that shapes the sound we play through it isn’t artistic; I think this article shows that this way of thinking is in fact art. Dance often defaults to left-right mains with onstage monitors and side fills, but contemporary dance is pushing the envelope. Sound designers and other artistic minds need to be there to receive those pushes and birth a new way of making art – much like how Meyer continues to develop innovative tools that help us be better artists and better storytellers.

Photo credit goes to Natalie Foil. All other images within this article are from my personal paperwork for the 2021 November Dance production.

Becoming a Member of Recording Academy® / Grammys®

SoundGirls! Interested in joining the Recording Academy® / Grammys®?

Join us on Thursday, Feb 8 at 1 pm PST to learn more.

Hosted by

Amanda Garcia Davenport – Membership Manager and

Catharine Wood – Producer, Engineer, and Studio Owner

REGISTER HERE
