Empowering the Next Generation of Women in Audio

Join Us

Ready to start your own Recording and Music Production Studio?

Are you thinking about starting your own business as a recording studio or music producer?

Have you recently finished audio school or interning? Have you simply been recording and producing on your own for a while, but are hesitant to go into business for yourself?

It can be intimidating or outright terrifying to think about putting up your virtual “open for business” sign as a freelance audio engineer or music producer. I totally understand! I had been teaching voice and songwriting lessons for 15 years but had only been “dabbling” in recording and production for a few years. I was terrified when I moved to a new city with no contacts and publicly announced that I was a recording engineer and music producer. At that moment in time, it was sink or swim. I had to buckle down and do it or I was going to have to go find a regular day job.

Now that my production business has been thriving for about 12 years, I've learned a few things! I came up with 8 tips that should help you get started today.

Create a business entity.

The easiest way to establish a business in the US is to start a sole proprietorship. Check your local city and state requirements, but it should be very simple using your social security number and home address. There are other entity options if you think you may have a more complicated situation, so be sure to check with your tax accountant to figure out what is best for you. In most cases, however, keep it simple and set it up as a Sole Proprietor and establish a “Doing Business As” or DBA. If you have a studio name or producer name you’ve been dying to use, make it official!

Establish your brand around your strengths and talents.

There are a lot of recording studios and a lot of producers. What sets you apart? What areas do you feel really confident in? Focus on those areas and build your brand around them. Since I was a voice teacher when I started to learn how to record and produce, I started working with my students on their songwriting and creating demos to present to their producers. We did mock recording sessions to prepare them for their real recording sessions in the studio. After doing this for a number of years, I began recording the vocals for their final projects, eventually learning to edit them, mix them, and do all of the vocal production. It was a process that took me several years, but I was proficient with vocal production long before I knew how to mix a drum kit. What could be your niche? Are you a guitar player, so you really know how to dial in tones? Are you an expert at miking up a drum kit? Create your niche around what you do best while you continue to build your knowledge in the areas where you are less confident. As soon as you feel confident in other areas, shift your messaging and your brand to include them.

Create your client avatar around the niche you’ve established.

Now that you know what your niche will be in the recording and/or music production business, figure out your client avatar: what is their age and gender, what are their insecurities, and what problems of theirs can you solve? Will they all be remote, all local, or a combination? All of your messaging and marketing, from your website copy to your photos, should appeal to this client avatar.

Take yourself seriously.

You'll be tempted to charge the lowest rate possible, work the craziest late-night hours, and bend over backward to please clients who are never happy, just to bring in work. Knowing your value, in whatever niche you decide to focus on, and presenting yourself that way will attract people who are willing to pay what you are worth and respect your time and talent. Keep your rate competitive but shine in other areas, such as attention to detail, turnaround time, professionalism, or just being fun.

Make your studio a comfortable, professional space with a vibe that makes you happy.

Do your best to present your space as professional and comfortable. Especially if you have clients coming to your home studio, make sure that it's clean and presentable and as disconnected from "family living" as possible. I understand it's not always possible to make a home studio feel like it's not in your home. Believe me, I've had a variety of home studios, and some situations were more ideal than others. Two studios ago, clients had to walk through the living room, kitchen, and family room to get down to the studio. Ugh! I hated it, but it was the only option at the time. I always kept the house as clean as possible (with teenagers, it wasn't always easy!). And remember, this will be your workspace, which is why you want to create a space and vibe where you are happy. Whether it's adding plants, lava lamps, LED lights, or whatever else, do it a little at a time and make it your "happy place".

Set up your website with testimonials and portfolio.

Marketing 101 advice is to have your own website, because platforms such as Facebook and Instagram are just rentals. You do not have a direct connection with your clients or potential clients on social media. Every business should have a "home base" where people can come and get a clear picture that you are legit. Register a domain with your business name using a registrar such as GoDaddy; the annual fee to own a domain is relatively cheap. As soon as you have even just a few songs that demonstrate your abilities and a few happy clients, create a website (use a simple website-building platform such as Wix or Squarespace). Remember to keep the website simple. It could even be one page: a simple statement about who you are and what you can do for your client avatar, a professional picture, a few testimonials, and a playlist widget featuring your work.

Start your email list.

This is how you connect with your audience and potential clients. Use a free email marketing program such as Mailchimp to add a "subscribe" widget to your website. Begin building this list and send updates once a month. These don't need to be time-consuming or extravagant. This builds your authority and lets people who stop by your website know, "Oh, this person is serious." Provide value to this audience and nurture it. Ask them to reply to questions so you can better understand them. Be real in your messages so that they feel like they can trust you.

Conduct yourself like a professional in all aspects of your life.

Keep the angry rants at your mother or ex off of social media. If your branding is political, keep it professional and “kind”. Go to networking events and shake people’s hands. If you “cold call/message” potential clients, do not spam them with copy/paste messages. If a client decides not to work with you or isn’t happy with your work, be humble and understanding. If given the opportunity, ask them what they were unhappy with and listen, rather than get defensive. Do not gossip or talk badly about others in your field.

That’s it! See, it’s not really that hard at all. I hope this was helpful and that you can ROCK your own Recording Studio/Music Production Business!

Creating Your Mission Statement as a Creative Entrepreneur

As a creative person, it can be a pretty big shift to think like a business owner. Entrepreneurial instincts aren't exactly natural for all of us, just as musicality isn't as natural for some as it is for others. As I've worked with artists, songwriters, musicians, and creatives of all kinds, I've found that creating a "mission statement" of sorts can get the ball rolling toward a disciplined music business that is an authentic reflection of who you are.

"Authentic" is a word that's a bit overused these days; however, finding and embracing your authentic self as a creative entrepreneur in today's world is perhaps the most important part of your journey to finding success. As an artist, it is what will draw fans to you and keep them there. As a music producer or audio engineer, authenticity builds trust and loyalty with your clients. As a songwriter, telling stories from a place of authenticity will keep your music fresh and relatable.

In the “authentic only” environment we have today, posers or fakers are relentlessly called out, and then inevitably, virtually crucified. That being said, the driving force behind finding your authenticity shouldn’t be fear, but a desire to find your place in this musical landscape and to find the people who feel they belong there with you; to create your own world and invite your “people” to join you. This is the very foundation of being a successful creative today.

Yes, there is still room for showmanship and even gimmicks, as long as it's an extension of who you really are. I just had an interesting conversation with an artist about this. I was convinced he was making a choice with his branding that was confusing and off-putting. By the end of our conversation, I "got it". I could clearly see that what wouldn't work for most artists was perfect for him, as it reflected his rebellious spirit and a virtual finger to the establishment. He sold me on it because his feet were so solidly planted in his "authentic" self that I could see without a doubt that he wasn't simply being stubborn, but was completely confident in who he was and had a clear vision of how he wanted people to experience his brand.

Finding who we are can be a process of digging, questioning, discovering, and peeling back layers. It should always start with these four questions:

Take about 20 or 30 minutes to sit with these questions without distraction. Brain dump your answers with no filter on a piece of paper or note app.

Now, shape the answers to these questions into your official mission statement. Your mission statement should only be a paragraph long, not a full-page essay. If writing isn't your skill set, ask for some help. Also bear in mind that it doesn't have to be perfectly written; the reader should simply have no questions about who you are, what you stand for, and what your "mission" with your music is.

Going forward, every move you make (on social media, in your fan newsletter, in your YouTube engagement, or wherever) as a creative should align with your mission statement. If you contradict yourself one too many times, your fans will detect this lack of authenticity and lose interest. They may even question why they liked you in the first place. This should make it easy for you! You never have to worry about what someone else is doing or what the current trends are. Just be YOU!

 

Eight Tips for Getting Started Mixing in Any DAW

If you've spent any time recording in your DAW, you are certainly aware by now how hard it can be to get things to actually sound good. As soon as you do a little YouTube search to get some help, you'll find hours and hours of tutorials ranging from the very basic steps to master-level mixing. Where does one even start figuring it out?

I decided to write up my own basic mixing tips for anyone who knows how to record but just can’t seem to get things to sound good yet. These are easy, baby steps that work in any DAW. I hope it’s helpful!

Bring the level of every track down to at least -10 dB to -15 dB. Your Master Bus should remain at 0 dB. This is part of what is called "gain staging", and it basically means that you need to always be conscious of not crowding the ceiling of your mix. If every track is at zero (the loudest), then you will only hear a crowded, jumbled, even distorted mess once all the tracks are there. What often happens is that the first track that is recorded stays at zero. Let's say that the first track is an acoustic guitar track. Now you've recorded a vocal. You want the vocal a little louder than the acoustic, so you turn up the vocal 2 dB. Before you know it, you are running out of headroom fast. So, if you start out by bringing everything down at least -10 to -15 dB, you'll give yourself the headroom you need to turn things up or down as you build out the production.
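
To make that headroom arithmetic concrete, here is a rough Python sketch of the worst case, assuming eight hypothetical tracks whose peaks all line up at the same instant (the numbers are purely illustrative, not a rule):

```python
import numpy as np

def db_to_gain(db):
    """Convert a dB value to a linear amplitude multiplier."""
    return 10.0 ** (db / 20.0)

tracks = 8                  # hypothetical number of recorded tracks
peak_each = db_to_gain(0)   # every track peaking at 0 dBFS

# Worst case: every track's peak hits at the same instant.
worst_sum = tracks * peak_each
print(f"All faders at 0 dB  : worst-case bus peak = {20 * np.log10(worst_sum):+.1f} dBFS")

# Trim every fader by 15 dB and that whole worst case drops by the same 15 dB;
# that difference is the headroom you have just bought yourself.
trimmed_sum = tracks * peak_each * db_to_gain(-15)
print(f"All faders at -15 dB: worst-case bus peak = {20 * np.log10(trimmed_sum):+.1f} dBFS")
```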

Organize your tracks into folders. Categorize them into groups, such as lead vocals, backing vocals, drums and percussion, electric guitars, pads, keys, etc. Staying organized will allow you to focus on the more technical aspects of mixing.

Create sub-mixes or buses. These should be grouped according to how you want them to be mixed. I usually have a submix for all lead vocals; a few different categories of backing vocals (stacks, gang, texture, etc.); and then drums and percussion, electric guitars, acoustic guitars, bass, and so on. The sub-mixes should be instruments that belong together that you want to mix as a group. I will mix electric guitars and acoustic guitars differently, for example, so even though they are both guitars, I will create a submix for each. I approach backing vocals the same way.

Try some basic panning. Panning is the “left to right” spacing of sounds in the stereo field (my own definition, probably not textbook!) Generally speaking, your lead vocal, snare drum and kick drum, and bass are all straight up the middle, in the center. Everything else is fair game! Play around with spacing individual tracks throughout the stereo spectrum. You’ll be amazed at the difference in sound you’ll get by doubling (not cloning or duplicating the track but recording a second pass) certain instruments and panning one hard left and the other hard right. Try this with electric and acoustic guitars as well as with backing vocals.
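
If you are curious what the pan knob is actually doing, here is a minimal Python sketch of one common approach, a constant-power (sine/cosine) pan law. Your DAW may well use a different law, so treat this purely as an illustration:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Place a mono signal in the stereo field.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    A constant-power (sine/cosine) law keeps the perceived level
    roughly even as the sound moves across the field.
    """
    angle = (pan + 1.0) * np.pi / 4.0          # map -1..+1 onto 0..pi/2
    return np.stack([mono * np.cos(angle),     # left channel
                     mono * np.sin(angle)],    # right channel
                    axis=-1)

# Two different passes of the same part (not clones), panned hard left/right.
sr = 44100
t = np.arange(sr) / sr
take_1 = 0.3 * np.sin(2 * np.pi * 196.0 * t)   # stand-ins for real recordings
take_2 = 0.3 * np.sin(2 * np.pi * 196.5 * t)   # second pass, slightly detuned
stereo = constant_power_pan(take_1, -1.0) + constant_power_pan(take_2, +1.0)
```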

Don’t be afraid to use presets. In the mixing world, presets are frowned upon. Apparently, it’s only for noobs. If you are a noob, then use them! When you are learning how to mix, presets can be incredibly helpful as a starting point. They can also help your ears hear the difference between different settings. I still use presets as my starting point on a few things. I tweak from there until I dial it in. Eventually, your ear will be trained enough to dial in your own settings from scratch, if you want. But if the preset provides you with a great starting point, why not use it? You can also save your own presets, so once you do get comfortable dialing in your own EQ settings on a lead vocal, for example, you can save your settings as your own preset!

Use EQ instead of the volume knob/fader. If something is too loud or too soft, the volume knob may not be the solution. Try using EQ (even starting from a preset) and see if it helps an instrument pop out more or not stick out as much.

Use sends for reverb (make sure the reverb plugin itself is completely wet), then dial the send level up or down. You can do this on your buses as well, which helps add cohesiveness to a group of instruments. It helps all of your instruments sound like they are living in the same space.

Focus on learning one mixing tool at a time. There is a lot to learn, and it all takes time and practice. The fundamentals are EQ, compression, reverb, saturation, and chorus. Each of these fundamentals has a string of other tools and techniques attached: de-essing vocals, parallel compression, side-chaining, and so on. It's easy to get overwhelmed once you dive into even one of these fundamentals. Pick one at a time, take some courses or find tutorials for that specific tool, and move on once you feel confident.

Learning to mix is much like learning a new instrument. If you approach it like learning an instrument, then you understand and respect the amount of dedication it takes to improve. Start with these basics and I promise, you’ll start to gain confidence and your mixes will start to sound legit.

 

Boosting Women’s Voices: Cutting Through The Noise

When it comes to editing voices, it's a job filled with variety, constantly reacting to what hits the ears. While an initial setup of EQ templates might be a starting point for some, every voice is unique. Women's voices have tones and timbres that vary wildly from person to person, and editing seems to be an area that's often hit-and-miss across music and the spoken word. The NCBI Library of Medicine states that the male speaking voice averages around 60 – 180 Hz, while the female voice generally sits around 160 – 300 Hz, roughly an octave's difference in pitch. Despite this, there seems to be a wild disparity in how women's voices are treated in general. Perhaps the most common problem can be summarised as cutting too much in the lower areas, and boosting too much in the higher areas, when women's voices are in the mix.

Spoken word

With the podcast industry booming, it's interesting to observe the difference in the editing of women's voices compared to men's. The lack of de-esser treatment and the copious boosting of high-end frequencies often lead to distraction with every 't' and 's' sound that occurs. Sibilance and harshness can abound and pull us away from what women are actually saying.

Diagram of the Fletcher-Munson Curve

The Fletcher-Munson Curve measures how we perceive loudness. It is also often referred to as the "equal loudness contour". Created by Harvey Fletcher and Wilden A. Munson in the 1930s, the pair demonstrated how the perception of loudness varies across frequencies, and where we perceive (or feel) these pitches and volumes as unpleasant. The most sensitive of these frequency areas, the one that most easily offends the ears, lies between 3 – 5kHz, which is the danger zone for sibilance.

Business titan Barbara Corcoran is a fantastic speaker and all-around inspirational career woman. Her voice naturally leans to the high end in pitch and tone and has a propensity for sibilance. When I'd previously watched her on the television show Shark Tank, it was clear that this was her vocal sound, yet when I recently listened to her as a guest on a podcast, I was saddened to hear that the edit of Barbara's voice was jarring in the high-end and desperately needed a de-esser. I was curious to see how closely my perception of the sound aligned with what was measurably coming out, so I decided to analyse the podcast against another recording. I used a spectral analysis tool, capturing a snapshot of a word with an 's' sound in each recording to compare the two as fairly as possible, and listened through the same speaker for both.
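
For anyone who wants to try a similar comparison at home, a rough Python version of that snapshot could look like the sketch below. The file names and time stamps are placeholders, and this is not the exact tool used for the analysis described here; it is just one way to average the spectrum of a short excerpt from each recording:

```python
import numpy as np
import soundfile as sf              # assumed available for reading audio files
from scipy.signal import welch

def average_spectrum(path, start_s, dur_s):
    """Return (frequencies, dB magnitudes) for a short excerpt of an audio file."""
    audio, sr = sf.read(path)
    if audio.ndim > 1:              # collapse to mono for a fair comparison
        audio = audio.mean(axis=1)
    clip = audio[int(start_s * sr): int((start_s + dur_s) * sr)]
    freqs, power = welch(clip, fs=sr, nperseg=4096)
    return freqs, 10 * np.log10(power + 1e-12)

# Hypothetical files and time stamps: one 's' word captured from each recording.
freqs, spec_tedx = average_spectrum("tedx_talk.wav", start_s=12.3, dur_s=0.5)
_, spec_podcast = average_spectrum("podcast.wav", start_s=45.0, dur_s=0.5)

# Compare the energy in the 3-5 kHz sibilance zone of each clip.
band = (freqs >= 3000) & (freqs <= 5000)
print(f"TEDx    3-5 kHz average: {spec_tedx[band].mean():.1f} dB")
print(f"Podcast 3-5 kHz average: {spec_podcast[band].mean():.1f} dB")
```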

Barbara speaking at a TEDx Talk

 

I first measured Barbara speaking at a TEDx Talk. There was definitely a slight peak in the 3 – 5 kHz range when measuring Barbara's talk; however, the peak was only a little above the others, notably its neighbour around 2 kHz, and again only a little above the 500 Hz peak. Audibly, the voice still sounds high and naturally sibilant, yet there is a softness to the 's' sound that does not detract from the talk.

In the bottom graph, the peak is marked around the 3 – 5kHz range and stands alone above the peaks in lower ranges, which demonstrates that this problem area is in fact considerably louder than the other frequencies, and not just perceived to be louder and distracting by the ear.

 

Diagram: Barbara Corcoran's voice in the TEDx Talk (top image) versus as a podcast guest (bottom image).


 

Music

In music, the same problems surround women singers. Often, in striving to add ‘air’ or ‘brightness’ or ‘clarity’ to a vocal, women’s voices succumb to the harshness in the 3 – 5kHz range. In boosting above 2kHz a little too liberally, and adding reverb or other effects that can further highlight the high-end, women’s voices can end up sounding thin, jarring, and full of squeaky ‘s’ sounds. So how do the experts celebrate the richness and full tonal spectrum of strong women’s vocals, and do it so well?

In a 2011 interview talking about the making of Adele’s album 21, producer Paul Epworth and mix engineer Tom Elmhirst gave a run-down of their process. The pair have worked with some formidable women’s voices, from Florence + The Machine and Amy Winehouse to Adele. On the song Rolling In The Deep, Elmhirst used the Waves Q6 EQ on the chorus vocal, pulling out certain frequencies “very, very heavily”:

“I had the Q6 on the chorus vocal, notching out 930, 1634, and 3175 Hz very, very heavily: -18dB, -18dB, and -12.1dB respectively, with very narrow Q. I also had the EQIII on the lead-vocal sub, notching something out again. Something obviously needed to be taken out. The vocal is the most important thing in the track, and taking those frequencies out allowed me to keep it upfront in the mix, particularly in the chorus. Regarding the outboard, I had the Pultec EQ, Urei 1176, and the Tube-Tech CL1B on the lead vocal sub-insert. The Pultec boosted around 100Hz and 12k. It’s colourful, but not drastic. There was not a lot of gain.” 
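
Out of curiosity, here is a minimal Python sketch of what narrow cuts at those quoted frequencies might look like, using a standard RBJ "cookbook" peaking-EQ biquad. The Q value is only a guess at "very narrow" (the interview doesn't give one), and the noise signal is just a stand-in for a real vocal; this is not Elmhirst's actual chain:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, fs):
    """RBJ audio-EQ-cookbook peaking filter; returns (b, a) biquad coefficients."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
vocal = np.random.randn(fs)   # placeholder signal standing in for a vocal track

# Narrow cuts at the frequencies quoted in the interview (Q of 8 is an assumption).
for freq, cut_db in [(930, -18.0), (1634, -18.0), (3175, -12.1)]:
    b, a = peaking_eq(freq, cut_db, q=8.0, fs=fs)
    vocal = lfilter(b, a, vocal)
```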

 

Diagram of Adele Vocal EQ

 

When it comes to de-essers, Elmhirst likes to add several for precision. On Rolling In The Deep, he explained:

“I did use two Waves De-essers, one taking out at 5449Hz and the other at 11004Hz. Rather than use one to try to cover all the sibilance I used two. I do that quite often.”

While on Someone Like You, he went even further, summarising his EQ and De-esser decisions on the piano-vocal track:

“I had three de-essers on the lead vocal in this case, working at 4185, 7413 and 7712 Hz, and I did some notching on the Waves Q10, taking out 537, 2973, and 10899 Hz, with maximum Q in all cases. The Sonnox Oxford EQ simply takes out everything below 100Hz, and it adds a little around 8k.”
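
A de-esser is essentially a compressor listening to that sibilance band and attenuating only when the band gets hot. As a very rough illustration of the idea (nothing like Elmhirst's setup, and far cruder than any commercial plugin), here is a toy Python sketch that ducks the signal block by block whenever the detection band crosses a threshold:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def toy_deesser(vocal, fs, center_hz=5500, width_hz=2000,
                threshold=0.05, reduction_db=-6.0, block=512):
    """Crude block-based de-esser: duck the whole signal slightly whenever the
    sibilance band is hot. Real de-essers react per sample and usually only
    attenuate the detected band, not the full signal."""
    band = [center_hz - width_hz / 2, center_hz + width_hz / 2]
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    sibilance = sosfilt(sos, vocal)            # detection signal

    out = vocal.copy()
    gain = 10.0 ** (reduction_db / 20.0)
    for start in range(0, len(vocal) - block, block):
        seg = sibilance[start:start + block]
        if np.sqrt(np.mean(seg ** 2)) > threshold:   # band RMS over threshold?
            out[start:start + block] *= gain
    return out

fs = 44100
vocal = 0.1 * np.random.randn(fs)   # placeholder standing in for a real vocal
treated = toy_deesser(vocal, fs)
```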

Boosting women’s voices

It's interesting to compare and contrast the rich tapestry of content that is available to us these days, as well as the amount of guidance that is out there. Considering women's speaking voices sit around 160 – 300 Hz, it's staggering how many guides and training materials recommend a high-pass filter cutting everything up to 200 Hz – where the voice actually is – and boosting from 4 kHz and up – where madness lies. Every voice needs something different, whether softly spoken, cutting through in an arrangement, or leading a band at a show.

Delving Into De-mix Technology

Since Peter Jackson's Get Back documentary about The Beatles, the use of De-mix technology has become more prominent in the public realm, and it is a truly intriguing technique. Even the term "De-mix" is a fascinating one, mentally evoking a challenge similar to "un-baking" a cake.

In fact, the process of De-mixing has been used by Abbey Road Studios for some time, and the technique was developed in partnership with mastermind technical analyst James Clarke, who recalled that the idea first came to him back in 2009. The first project Clarke created was in 2011, with the reimagining of The Beatles Live At The Hollywood Bowl, with many classic Beatles records subsequently following, including A Hard Day’s Night (the movie), 1+, parts from Sgt Pepper’s Lonely Hearts Club Band 50th Anniversary and The White Album. Aside from The Beatles, David Bowie’s Life on Mars, Rush 2112 – The Live Concert, and material from Cliff Richard and The Shadows, as well as Yusuf/Cat Stevens, have been similarly reworked with the De-mix technology.

What is De-mix technology and how does it work?

Abbey Road Studios explains on their website that in its simplest form, the software enhances the original vocals and helps to amplify the bass, which is something that mixes in the late 1960s were often unable to do.

“Using algorithms that are trained on target instruments, De-mix can extract these components to enhance or reduce targeted EQ or isolation. Not only can De-mix be used to adjust the levels of musical elements within a mix, it can also make vocal isolation or removal a reality.

The new process unlocks mono recordings or those where full multi-tracks do not exist, allowing our engineers to adjust the balance and levels of instruments and vocals within a track to rebuild, rebalance and remix the recording. For remastering projects, De-mix allows our engineers to perform targeted EQ balancing. For example, the engineer can adjust and EQ a bass guitar without any impact on the vocals or drums.”

Abbey Road engineer Lewis Jones talked about working on vintage tapes by The Rolling Stones back in 2018, likening the De-mix process to remastering – he drew similarities between taking an initial stereo track, and then making a multi-track of that stereo in order to edit the parts more individually, and enhance them. In the case of these older tracks, however, the source is more often a mono track, which was commonplace in the 1960s.

The comparison to the remastering process makes the technology a little easier to digest. Delving deeper into the science of how De-mix works, Clarke explains:

“The process is that you create spectrogram models of the target instrumentation you’re looking for, so vocals, guitars, bass, drums, stuff like that. And then the software starts to look for patterns within the mixed version that matches those models. It then creates what are called masks, which effectively, think of them like a specific sieve, you just drop the audio through it and the mask catches the bits it wants to keep and lets everything go through. It then does the same for all the other instruments and eventually, it works out that this bit of audio belongs to the drum, or the vocal or bass guitar.”
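
To make the "sieve" analogy a little more tangible, here is a toy Python sketch of time-frequency masking, the general idea behind this kind of separation. It is in no way Abbey Road's actual software: the "models" below are random placeholders where the real system uses trained spectrogram models of each target instrument:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 44100
mix = np.random.randn(fs * 2)            # placeholder for a mono master recording

# Spectrogram of the mixed recording.
f, t, Z = stft(mix, fs=fs, nperseg=2048)

# Placeholder "models": in the real system these are trained spectrogram
# templates for the target instrument (say, vocals) and for everything else.
vocal_model = np.random.rand(*Z.shape)
rest_model = np.random.rand(*Z.shape)

# The mask is the sieve: for each time-frequency bin, the share of energy
# we believe belongs to the target instrument.
mask = vocal_model / (vocal_model + rest_model + 1e-12)

# Drop the audio through the sieve and resynthesise what it caught.
_, vocal_estimate = istft(Z * mask, fs=fs, nperseg=2048)
```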

Clarke also explained that if engineers were having issues while working with the De-mix software, he could tweak the code and the models to assist the process. And looking to the future, Clarke says he is currently moving into a deep learning approach that uses the same concept of generating these masks to un-mix the audio, however, the masks are learnt rather than derived and can be applied to any song. He states that “It’s producing some stunning results at the moment”.

What could be the impact of De-mix technology?

There appears to be only positive potential in the use of De-mix technology, the most notable being the restorative nature of its application – old, forgotten, or bootlegged tracks can benefit hugely from these techniques, and become resurrected to live a second life.

Abbey Road Studios already offer the De-mix service to clients as a remix or remastering option, and the possibilities for the future usage, licensing, or commercialisation of this technique look promising; should Clarke’s deep learning approach continue to create new versions of De-mix, it seems feasible that the technology could one day become widely available to producers and creators. If it can eventually be used as an adaptive preset (as Clarke described in his description of the technology’s deep learning potential), the impact would be huge. Ultimately, taking the innovation and quality of the Abbey Road techniques, and making the software available to use on records everywhere, is a very exciting prospect.

Energy Conservation in Pop Music

I began using the law of conservation of energy to visualize the evolution of Pop music when I was in high school. I was studying higher-level chemistry and music history as part of my International Baccalaureate degree, and while it felt like two unrelated subjects, I was eager to make a connection between them. In general terms, the law of conservation of energy states that energy in a system can be neither created nor destroyed; it just transfers from one form of energy to another. I started visualizing this idea in musical expression while diving into Western European art music and the evolution of Jazz music in America. I noticed how often others recognized these vast musical genres as being more complex than Pop music, primarily because of how they used more intricate harmonies and orchestration. I thought to myself how unfair it was to label the Pop music I was growing up with as simple. I knew it wasn't any less than the music that came before it, but I couldn't articulate why. If everything else in the world around me was advancing, then how could music become more basic?

Imagine that Bach’s “Fugue in G minor” and Taylor Swift’s “Out of the Woods” are both their own “isolated systems” with energy, or in this case, musical expression that transfers between different musical elements in each system. In my opinion, both pieces are complex and full of musical expression, and they hold similar amounts of kinetic energy and potential energy. However, the energy is active in different elements of the songs, and for “Out of the Woods,” that element comes from a technological revolution that Bach had no access to. Bach’s fugue holds most of its musical energy in the counterpoint: the melodic composition and modulation drive the expression of the piece. Meanwhile, most of the musical expression in “Out of the Woods” comes from the interesting sonics of the music production, which is true for a lot of Pop music today. Many Pop songs have simplistic chord progressions, which I think is okay because now the energy resides in sound design, music technology, how something is mixed, or how a vocal is processed. I believe that what we’ve experienced as music evolves is a transfer of energy from composition to production because we have the means to do so.

Let's look at some excerpts of the sheet music from both pieces stated above. Clearly, one melody is more varied and ornamented than the other. Most of Swift's song is a single note with little to no melodic contour and a simple I-V-vi-IV chord progression, while Bach's composition highlights an intricate subject and countermelody with more advanced modulations. Now let's imagine what the Pro Tools sessions for both songs might look like. Oh right, Bach didn't have Pro Tools! The earliest known recordings come from the late 19th century, well past his lifetime, so he likely didn't even consider the kind of microphone he could use, or how he could compress the organ with a Teletronix LA-3A or create an entirely new sound on a synthesizer for the fugue. The energy of the piece is most active where Bach's capabilities and resources existed: his understanding of advanced harmony and his performance technique. Had Taylor Swift been composing at a time when music production wasn't really a thing, she might have written songs with eclectic modulations and contrapuntal bass parts. However, with Jack Antonoff's exciting vocal arrangement and sound design for electronic drums and synths, there's already so much energy in the song that the harmony doesn't need to work as hard. Ultimately, I experience both Bach's fugue and Swift's single as having the same amount of musical energy, but the energy is utilized in different parts of both systems.

I know this argument all seems convoluted, but this concept has really helped me in my critical listening. When I listen to any recording, I ask myself, “Where has the energy transferred in this piece, where is it being held, and how is it serving the song?” Sometimes the answer is not the same within one genre or even within one artist. If an element in a song feels simple, we can break it down to its core elements to find where the energy is. It can be in the rhythm, the performance, the sampling technique, or the lyricism to name a few. When I write and produce, I approach a new song with this mentality too. Where do I want the energy to be, where can I simplify the song to let that element shine, and how does it work with the narrative of my song?

An Introduction to Classical Music Production

Many classical musicians have been dedicated to their craft since childhood: they’ve spent thousands of hours perfecting their playing technique, studying theory and harmony and history of music, taking lessons with awe-inspiring (and occasionally fear-inducing) professors, and developing a richness of sound that can fluctuate deftly between dramatic passion and subtle nuance, to make even the most hardened of hearts shed a tear of emotion at such sonic beauty! How do we capture in audio the complex compositions of classical music and the natural resonance of these acoustic instruments, and do justice to the sound that classical musicians and singers have worked so hard to create? Goodbye overdubs and hardcore effects processing: classical music recording and production is generally all about finding the most flattering acoustic space to record in, and capturing the musical instrument or voice in a way that best brings out its natural sonic qualities.

Recording session with chamber music ensemble Duo Otero.

Pre-production

One of the most important aspects when planning a classical music recording is finding a space with acoustics that will cradle the music in a glorious bath of reverb – not too much and not too little. When recording many other genres, we’re often striving for a dry-as-a-bone, deadened studio acoustic that will give us the most control over the sound so we can shape it later. Classical music, on the other hand, doesn’t require overdubbing, and so it’s in our best interest to record it in a nice-sounding space. For example, when listening to a live choral performance, isn’t the experience made so much better by those epic cathedral echoes? We also need to find a quiet place without too much external noise – there’s nothing more annoying than having to stop recording five times because five fire trucks have decided to pass by just at that moment! It’s important to do some research on the instruments and the music to be recorded, to be able to prepare an appropriate recording setup. Whether it’s a solo instrumentalist or a full opera production with orchestra will affect our choice and placement of the microphones, and the number of inputs needed.

Recording

Our aim is to capture a performer or ensemble playing together in a great-sounding acoustic – so the workflow is more linear or horizontal than it is vertical. We’re not overdubbing and layering new sounds on top, but we can capture several takes of the same music and then join the best takes together until we have the whole piece, so it sounds like one performance. Because of this way of working, it’s essential that the performer is as well-prepared as they can be, as we can’t make detailed corrections of pitch or timing as we can in other genre recordings (autotune is a no-no!). As we’re recording natural acoustic sounds that can’t be “fixed in the mix” (did I mention no autotune?), it’s important to choose microphones and pre-amps that will do an excellent job of capturing that audio faithfully without colouring the sound too strongly. When placing microphones, we should think about how and where the sound of an instrument is generated, and how it resonates in the acoustic space. A common basic technique is to use a stereo pair of microphones to capture a musician or a whole ensemble within its acoustic, and then to add “spot mics” – microphones placed closer to individual instruments – to capture more details. If there’s the luxury of an abundance of microphones, we might sometimes add an extra pair of microphones even further away from the sound source to capture more of the acoustic space, and then we can blend all of these microphones together, to taste.

Post-production

Mixing classical music usually involves finding a pleasing balance between the recorded channels (for example, the stereo pair, spot mics, and ambient mics), applying suitable panning, noise reduction, light EQ, and limiting as necessary (perhaps compression for overly-excited percussion or other highly dynamic instruments). If it's a large ensemble recording, we might use automation to bring up solo parts if they are shy and need a little help, or to highlight interesting musical details and textures. Often a touch of digital reverb can add a smooth and satisfying sheen, and if a perfect-sounding recording space just isn't available (it happens often), some epic digital reverb can help to glow up a flat and boring-sounding space.

Aside from live concert recordings, a lot of classical music post-production lies in the editing: often there’ll be several takes of the same material, and the challenge is to select the best performances and stitch it all together in a seamless way so that the transitions can’t be heard – while maintaining the original energy and pacing of the performance, and not going overboard on crazily detailed editing, as that’s kind of cheating (see TwoSetViolin’s hilarious video 1% Violin Skills 99% Editing Skills)! It is an advantage – and probably essential in some situations – to be able to read music scores. It’s really helpful to follow the score as the musicians are playing, to write notes on the best (and worst) takes, to guide them and suggest what they might like to repeat, change or improve, and to make sure that all parts of a piece have been recorded.

In summary

The world of classical music production is an exciting space where audio engineers, producers, and musicians collaborate closely together to immortalise wonderful compositions in audio so that a wider audience can hear and enjoy them. If you’d like to get into classical music production, there’s no better way than to learn by jumping in and practising doing lots of recordings – try different mics and positions in different acoustic spaces, listen to lots of classical music recordings, read up on the different instruments, and use your ears as your most important tool. You’ll soon be Bach for more!

 

 

An Introduction to Sync Licensing

As a musician, you have most likely become aware of the word “sync”. Perhaps you have researched and feel you have a pretty good understanding of the basics. Maybe you’ve even had sync placements. This blog is going to cover the basics for those who are just hearing the term, but more importantly, I want to help you figure out if sync is for you. In my opinion, there are two clear pathways for a musician to take when it comes to seeking out sync licensing opportunities. Hopefully, this will help you determine if one of those paths is right for you.

First, some definitions

Sync is short for "synchronization" or "synchronization licensing", which refers to the license music creators grant to anyone who wants to "synchronize" video of any kind with recorded music.

A Music Supervisor is the person who chooses music for every moment of a film or show. Sometimes the composer and the supe are the same person (usually on lower-budget films).

A Music Library is a searchable catalog of music. People searching for music can filter the database by various features, such as mood, tempo, genre, female vocal, male vocal, instrumental, and so on.

A Sync Agent is a person (independent or working for an agency) who acts as the go-between for music supes and musicians. They will often take a cut of the sync fee and might also work in a percentage of the master use.

Production Music is the common term for "background music", though it may have vocals. Music libraries will often compile "albums" of production music by theme, a specific mood or genre, etc. The licensing is already handled with the creator, which makes it much easier for music supes to quickly select a song without having to wait for agreements, approvals, etc.

What are “songs” that “work for sync”?

When music supervisors are looking for music, they are looking for a certain type of energy, a mood, a feeling. Surprisingly, a good sync song may not necessarily be a “hit” song and a “hit” song may not necessarily work for sync. Once in a while, a hit song is also a great sync song, but that is not the norm. Either way, when the perfect song is found for a particular scene or ad, magic can happen.

The best way to really understand what sync is all about is to do a little observation exercise. You are simply going to observe your normal day of Netflix watching, or however you watch your shows. Only today, pay attention to the music being played in conjunction with whatever you are watching. Whether it's a movie, a documentary, a reality show, or a TV show from the '80s, pay attention to the music. How many snippets of songs do you hear in each episode? Do any of the songs sound like "radio" songs? How many are instrumentals? Now, what about the ads? I don't watch regular TV anymore, but I do have a few shows that I love to watch on YouTube. So I still see ads quite regularly. How about you? What kind of music are you hearing in the ads?

Every piece of music you heard was composed, written, and performed by a person or people. Each piece of music supposedly has a proper license. A cue sheet was also submitted to a performing rights organization so that the songwriters and publishers can be paid a royalty. The value of that piece of music varies from a penny to hundreds of thousands of dollars and everything in between. The amount paid is based on numerous factors: is it background or under dialogue, is it playing on a radio or jukebox on screen, is it with or without vocals, how much of the song is played, where in the film does it appear (opening credits, a montage scene, etc.), and is it a well-known song or major artist or an indie? Sooooo many factors play into the "value" of that placement. Some songs are paid an upfront sync fee in addition to the songwriting/publishing royalties. Some are not. Some of the songs (especially background, instrumental music) are composed by someone who might work directly for the company creating the show/movie, or the composer may work for the publisher or library that licensed the music. It's a complex biz.

So, what about the two paths?

I landed my first sync placement back in 2006-ish, and it was sort of a fluke. Long story short: a co-writer/co-producer and I wrote the song specifically for a small, independent film after reading the film synopsis. The song made it into the movie, which aired on ABC Family (and is still streamed regularly on a variety of platforms). Then we shopped it to some music libraries and a music publisher. One of the libraries secured multiple placements for that song, plus several other songs we had already written. In recent years, I've tried to dive deeper into creating specifically for sync but seem to have no time for that; I've become crazy busy as a full-time music producer for artists. This has helped me clearly see the two paths.

Path one: you are a creator of hundreds of songs, beats, and tracks, and you are pitching almost as much as you are creating. On this path, it is a numbers game. It's all about quantity. The more "content" you have, the better your chances of getting a sync placement. This scenario is ideal for you if you:

On this path, you can start out by pitching to music libraries but the ultimate goal will be to network to the point where you are receiving briefs from sync agents and production companies directly. This path can take a lot of time before you begin to see the fruits of your labor. Time is needed to make connections, to find good collaborators, and to earn the trust of sync agents and libraries.

Path two: you are an artist who is focusing on building your artistic brand, creating songs that connect with your fan base, creating music that you love and intend on performing. OR you are like me and love producing with and for artists to help them build their career as artists. This path is ideal for you if you:

If this is the path for you, pitching to a sync agent or a manager or producer who has connections to sync agents or music supervisors may be your best bet. If your genre is very current, it may have a short shelf life so get going on that pitching asap. This path requires that you focus on the main goal (building your music business as an artist and/or producer) and perhaps spend a few hours a week on pitching, emails, metadata, and contracts.

If you are on path two, you can try your hand at creating a song or two that are “you” as an artist and could be released on an album or as a single but would also work for sync. There’s nothing wrong with that approach! How do you know if your song would work for sync? Remember the statement above about songs for sync needing to capture a mood, etc? This is very important. What is also important is that there are no lyrics that are about a specific time, location, person, etc. Once in a while, a song with specific lyrics can work perfectly for a scene but it’s better to keep the lyrics “generic” enough to increase your chances for placement. Generic doesn’t mean boring! This is the actual struggle! Writing lyrics that are genuine, interesting, engaging but not specific is actually the hardest part.

Important Companies, Contacts and Resources:

If you are wondering if your songs are “sync ready”, I’m happy to give them a listen and throw my opinion at you. 😉 Send me up to 3 songs and put “Sync Songs?” in the subject line, to becky@voxfoxproductions.com

 

Demystifying Loudness Standards

               
Every sound engineer refers to some kind of meter to aid the judgments we make with our ears. Sometimes it is a meter on tracks in a DAW or that session's master output meter, other times it is LEDs lighting up our consoles like a Christmas tree, sometimes it is a handheld sound level meter, other times a VU meter, and so on. All of those meters measure the audio signal using different scales, but they all use the decibel as a unit of measurement. There is also a way to measure the levels of mixes that is designed to represent the human perception of sound: loudness!

Our job as audio engineers and sound designers is to deliver a seamless aural experience. Loudness standards are a set of guides, measured by particular algorithms, to ensure that everyone who is mixing audio is delivering a product that sounds similar in volume across a streaming service, website, and radio or television station. The less work our audiences have to do, the better we have done our jobs. Loudness is one of the many tools that help us ensure that we are delivering the best experience possible.

History           

A big reason we started mixing to loudness standards was to achieve consistent volume, from program to program as well as within shows. Listeners and viewers used to complain to the FCC and BBC TV about jumps in volume between programs, and volume ranges within programs being too wide. Listeners had to perpetually make volume adjustments on their end when their radio or television suddenly got loud, or to hear what was being said if a moment was mixed too quietly compared to the rest of the program.

In 2007, the International Telecommunication Union (ITU) released the ITU-R BS.1770 standard: a set of algorithms to measure audio program loudness and true-peak level (Cheuks' Blog). The European Broadcasting Union (EBU) then began working with the ITU standard, and modified it when they discovered that gaps of silence could bring a loud program down to within specifications. So they released a standard called EBU R 128, in which levels more than 8 LU below the ungated measurement do not count towards the integrated loudness level, meaning the quiet parts cannot skew the measurement of the whole program. The ITU standard is still used internationally.

Even after all of this standardization, television viewers were still being blasted by painfully loud commercials. So, on December 13th, 2012, the FCC's rules implementing the Commercial Advertisement Loudness Mitigation (CALM) Act went into effect. From the FCC website: “Specifically, the CALM Act directs the Commission to establish rules that require TV stations, cable operators, satellite TV providers or other multichannel video program distributors to apply the ATSC A/85 Recommended Practice to commercial advertisements they transmit to viewers. The ATSC A/85 RP is a set of methods to measure and control the audio loudness of digital programming, including commercials. This standard can be used by all broadcast television stations and pay-TV providers.” And yup, listeners can file complaints with the FCC if a commercial is too loud. The CALM Act only regulates the loudness of commercials.

Countries outside Europe have their own loudness standards, derived from the global ITU-R BS.1770. China's standard for television broadcast is GY/T 282-2014; Japan's is ARIB TR-B32; Australia's and New Zealand's is OP-59. Many European and South American countries, along with South Africa, use the EBU R 128 standard. There's a link to a more comprehensive list at the end of this article, in the resources section.

Most clients you mix for will expect you, the sound designer or sound mixer, to abide by one of these standards, depending on who is distributing the work (Apple, Spotify, Netflix, YouTube, broadcast, etc.).

The Science Behind Loudness Measurements

Loudness is a measurement of human perception. If you have not experienced mixing with a loudness meter, you are (hopefully) paying attention to RMS, peak, or VU meters in your DAW or on your hardware. RMS (average level) and peak (loudest level) meters measure levels in decibels relative to full scale (dBFS). The numbers on those meters are based on the voltage of an audio signal. VU meters use a VU scale (where 0 VU is equal to +4 dBu), and like RMS and peak meters, are measuring the voltage of an audio signal.
Those measurements would work to measure loudness – if humans heard all frequencies in the audio spectrum at equal volume levels. But we don’t! Get familiar with the Fletcher-Munson Curve. It is a chart that shows, on average, how sensitive humans are to different frequencies. (Technically speaking, we all hear slightly differently from each other, but this is a solid basis.)

Humans need low frequencies to be cranked up in order to perceive them as being at the same volume as higher frequencies. And sound coming from behind us is also weighted as louder than sound in front of us. Perhaps it is an instinct that evolved with early humans; as animals, we are still on the lookout for predators sneaking up on us from behind.

Instead of measuring loudness in decibels (dB), we measure it in loudness units full scale (LUFS, or interchangeably, LKFS). LUFS measurements account for humans being less sensitive to low frequencies but more sensitive to sounds coming from behind them.
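
If you want to see a K-weighted measurement outside of a meter plugin, one open-source option is the pyloudnorm Python library, which implements the ITU-R BS.1770 measurement. A minimal sketch (the file name is just a placeholder):

```python
import soundfile as sf
import pyloudnorm as pyln      # open-source ITU-R BS.1770 / K-weighted meter

data, rate = sf.read("my_mix.wav")          # placeholder bounce of your mix

meter = pyln.Meter(rate)                    # K-weighting is the default
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")
```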

There are a couple more interesting things to know about how loudness meters work. We already mentioned how the EBU standard gates anything more than 8 LU below the ungated measurement so the really quiet or silent parts do not skew the measurement of the whole mix (which would allow the loudest parts to be way too loud). Loudness standards also dictate the allowed dynamic range of a program (the loudness range, measured in LU). This is important so your audience does not have to tweak the volume to hear people during very quiet scenes, and it saves their ears from getting blasted by a World War Two bomber squadron or a kaiju if they had the stereo turned way up to hear a quiet conversation. (Though every sound designer and mixer knows that there will always be more sensitive listeners who will complain about a loud scene anyway.)

Terms

Here is a list of terms you will see on all loudness meters.

LUFS/LKFS – Loudness Units relative to Full Scale (LKFS is the K-weighted label, but the two are effectively the same thing).

Weighting standards – When you mix to a loudness spec in LUFS, also know which standard you should use! The following are the most commonly used standards.

True Peak Max: A bit of an explanation here. When you play audio in your DAW, you are hearing an analog reconstruction of digital audio data. Depending on how that audio data is decoded, the analog reconstruction might peak beyond the digital waveform. Those peaks are called inter-sample peaks. Inter-sample peaks will not be detected by a limiter or sample peak meter, but a True Peak meter on a loudness meter will catch them. True peak is measured in dBTP. (There's a short sketch after this list of terms that shows the difference.)

Momentary loudness: Loudness at any given moment, for measuring the loudness of a section.

Long-term/ Integrated loudness: This is the average loudness of your mix.

Target Levels: What measurement in LUFS the mix should reach.

Range/LRA: Loudness range, i.e. the program's dynamic range, measured in LU.
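
Here is the sketch promised above: a quick way to see how inter-sample peaks escape a plain sample-peak reading is to oversample the signal and measure again, which is roughly what a true-peak meter does. This assumes scipy is available, and uses a tone whose stored samples deliberately miss the waveform's crests:

```python
import numpy as np
from scipy.signal import resample_poly

def sample_peak_vs_true_peak(audio, oversample=4):
    """Compare the plain sample peak with an oversampled (approximate true) peak."""
    sample_peak = 20 * np.log10(np.max(np.abs(audio)) + 1e-12)
    upsampled = resample_poly(audio, oversample, 1)   # reconstruct between samples
    true_peak = 20 * np.log10(np.max(np.abs(upsampled)) + 1e-12)
    return sample_peak, true_peak

# An 11.025 kHz sine at 44.1 kHz whose samples straddle the actual crests, so the
# reconstructed waveform peaks higher than any stored sample value.
fs = 44100
t = np.arange(fs) / fs
tone = 0.99 * np.sin(2 * np.pi * 11025 * t + np.pi / 4)

sample_db, true_db = sample_peak_vs_true_peak(tone)
print(f"Sample peak: {sample_db:+.1f} dBFS   Approx. true peak: {true_db:+.1f} dBTP")
```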

How To Mix To Loudness Standards

Okay, you know the history, you are armed with the terminology…now what? First, let us talk about the consequences of not mixing to spec.

For every client, there are different devices at the distribution stage that decode your audio and play it out to the airwaves. Those devices have different specifications. If the mix does not meet specifications, the distributor will turn the mix up or down to normalize the audio to their standards. A couple of things happen as a result: you lose dynamic range, and the quietest parts are still too quiet. If there are parts that are too loud, those parts will sound distorted and crushed due to compressed waveforms. The end result is a quiet mix with no dynamics and added distortion.

To put mixing to loudness into practice, first start with your ears. Mix what sounds good. Aim for intelligibility and consistency. Keep an eye on your RMS, peak, or VU meters, but do not worry about LUFS yet.

Your second pass is when you mix to target LUFS levels. Keep an eye on your loudness meter. I watch the momentary loudness reading, because if I am consistently in the ballpark with momentary loudness, I will have a reliable integrated loudness reading and a dynamic range that is not too wide. Limiters can also be used to your advantage.

Then, bounce your mix. Bring the bounce into your session, select the clip, then open your loudness plugin and analyze the bounce. Your loudness plugin will give you a reading with the current specs for your bounce. (Caveat: I am using Pro Tools terminology. Check if your DAW has a feature similar to AudioSuite.) This also works great for analyzing sections of audio at a time while you are mixing.
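
Outside of the DAW, that same analyze-the-bounce step can also be scripted. Here is a small sketch using the open-source pyloudnorm library again, with a hypothetical -24 LUFS target; use whatever target your client or distributor actually specifies:

```python
import soundfile as sf
import pyloudnorm as pyln

# Analyze a bounced mix offline, then nudge it toward a hypothetical target.
data, rate = sf.read("mix_bounce.wav")      # placeholder bounce file
meter = pyln.Meter(rate)
measured = meter.integrated_loudness(data)
print(f"Bounce measures {measured:.1f} LUFS integrated")

target = -24.0   # example broadcast-style target, not a universal rule
normalized = pyln.normalize.loudness(data, measured, target)
sf.write("mix_bounce_normalized.wav", normalized, rate)
```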

Speaking of plugins, here are a few of the most used loudness meters. Insert one of these on your master track to measure your loudness.

Youlean Loudness Meter
This one is top of the list because it is FREE! It also has a cool feature where it shows a linear history of the loudness readings.

iZotope Insight
Insight is really cool. There are a lot of different views, including history and sound field views, and a spectrogram so you can see how different frequencies are being weighted. This plugin measures momentary loudness fast.

Waves WLM Meter

The Waves option may not have a bunch of flashy features like its iZotope competitor, but it does measure everything accurately and comes with an adjustable trim feature. The short-term loudness is accurate but does not bounce around as fast as Insight’s, which I actually prefer.

TC Electronic LMN Meter
I have not personally used this meter, but it looks like a great option for those of us mixing for 5.1 systems. And the radar display is pretty cool!

Wrapping Up: Making Art with Science

The science and history may be a little dry to research, but loudness mixing is an art form in itself, because if listeners have to constantly adjust the volume, we are failing at our job of creating a distraction- and hassle-free experience for our audience. Loudness standards go beyond a set of rules; they are an opportunity for audio engineers to use our scientific prowess to develop our work into a unifying experience.

Resources

First, big thanks to my editors (and fellow audio engineers) Jay Czys and Andie Huether.

The Loudness Standards (Measurement) – LUFS (Cheuks’ Blog)
https://cheuksblog.wordpress.com/2018/04/02/the-loudness-standards-measurement-lufs/#:~:text=Around%202007%2C%20an%20organization%20named,a%20value%20for%20the%20audio.

Loudness: Everything You Need to Know (Production Expert)
https://www.pro-tools-expert.com/production-expert-1/loudness-everything-you-need-to-know

Loud Commercials (The Federal Communications Commission)
https://www.fcc.gov/media/policy/loud-commercials

Loudness vs. True Peak: A Beginner’s Guide (NUGEN Audio)
https://nugenaudio.com/loudness-true-peak/

Worldwide Loudness Standards
https://www.rtw.com/en/blog/worldwide-loudness-delivery-standards.html
