Empowering the Next Generation of Women in Audio

Join Us

Energy Conservation in Pop Music

I began using the law of conservation of energy to visualize the evolution of Pop music when I was in high school. I was studying higher-level chemistry and music history as part of my International Baccalaureate degree, and while they felt like two unrelated subjects, I was eager to make a connection between them. In general terms, the law of conservation of energy states that energy in a system can’t be created or destroyed; it only transfers from one form to another. I started visualizing this idea in musical expression while diving into Western European art music and the evolution of Jazz music in America. I noticed how often others recognized these vast musical genres as being more complex than Pop music, primarily because of their more intricate harmonies and orchestration. I thought to myself how unfair it was to label the Pop music I was growing up with as simple. I knew it wasn’t any less than the music that came before it, but I couldn’t articulate why. If everything else in the world around me was advancing, then how could music become more basic?

Imagine that Bach’s “Fugue in G minor” and Taylor Swift’s “Out of the Woods” are both their own “isolated systems” with energy, or in this case, musical expression that transfers between different musical elements in each system. In my opinion, both pieces are complex and full of musical expression, and they hold similar amounts of kinetic energy and potential energy. However, the energy is active in different elements of the songs, and for “Out of the Woods,” that element comes from a technological revolution that Bach had no access to. Bach’s fugue holds most of its musical energy in the counterpoint: the melodic composition and modulation drive the expression of the piece. Meanwhile, most of the musical expression in “Out of the Woods” comes from the interesting sonics of the music production, which is true for a lot of Pop music today. Many Pop songs have simplistic chord progressions, which I think is okay because now the energy resides in sound design, music technology, how something is mixed, or how a vocal is processed. I believe that what we’ve experienced as music evolves is a transfer of energy from composition to production because we have the means to do so.

Let’s look at some excerpts of the sheet music from both pieces mentioned above. Clearly, one melody is more varied and ornamented than the other. Most of Swift’s song is a single note with little to no melodic contour and a simple I-V-vi-IV chord progression, while Bach’s composition highlights an intricate subject and countermelody with more advanced modulations. Now let’s imagine what the Pro Tools sessions for both songs might look like. Oh right, Bach didn’t have Pro Tools! The earliest known recordings come from the late 19th century, well past his lifetime, so he likely never considered what kind of microphone he could use, how he could compress the organ with a Teletronix LA-3A, or how he might create an entirely new sound on a synthesizer for the fugue. The energy of the piece is most active where Bach’s capabilities and resources existed: his understanding of advanced harmony and his performance technique. Had Taylor Swift been composing at a time when music production wasn’t really a thing, she might have songs with eclectic modulations and contrapuntal bass parts. However, with Jack Antonoff’s exciting vocal arrangement and sound design for electronic drums and synths, there’s already so much energy in the song that the harmony doesn’t need to work as hard. Ultimately, I experience both Bach’s fugue and Swift’s single as having the same amount of musical energy, but the energy is utilized in different parts of both systems.

I know this argument all seems convoluted, but this concept has really helped me in my critical listening. When I listen to any recording, I ask myself, “Where has the energy transferred in this piece, where is it being held, and how is it serving the song?” Sometimes the answer is not the same within one genre or even within one artist. If an element in a song feels simple, we can break it down to its core elements to find where the energy is. It can be in the rhythm, the performance, the sampling technique, or the lyricism to name a few. When I write and produce, I approach a new song with this mentality too. Where do I want the energy to be, where can I simplify the song to let that element shine, and how does it work with the narrative of my song?

An Introduction to Classical Music Production

Many classical musicians have been dedicated to their craft since childhood: they’ve spent thousands of hours perfecting their playing technique, studying theory and harmony and history of music, taking lessons with awe-inspiring (and occasionally fear-inducing) professors, and developing a richness of sound that can fluctuate deftly between dramatic passion and subtle nuance, to make even the most hardened of hearts shed a tear of emotion at such sonic beauty! How do we capture in audio the complex compositions of classical music and the natural resonance of these acoustic instruments, and do justice to the sound that classical musicians and singers have worked so hard to create? Goodbye overdubs and hardcore effects processing: classical music recording and production is generally all about finding the most flattering acoustic space to record in, and capturing the musical instrument or voice in a way that best brings out its natural sonic qualities.

Recording session with chamber music ensemble Duo Otero.

Pre-production

One of the most important aspects when planning a classical music recording is finding a space with acoustics that will cradle the music in a glorious bath of reverb – not too much and not too little. When recording many other genres, we’re often striving for a dry-as-a-bone, deadened studio acoustic that will give us the most control over the sound so we can shape it later. Classical music, on the other hand, doesn’t require overdubbing, and so it’s in our best interest to record it in a nice-sounding space. For example, when listening to a live choral performance, isn’t the experience made so much better by those epic cathedral echoes? We also need to find a quiet place without too much external noise – there’s nothing more annoying than having to stop recording five times because five fire trucks have decided to pass by just at that moment! It’s important to do some research on the instruments and the music to be recorded, to be able to prepare an appropriate recording setup. Whether it’s a solo instrumentalist or a full opera production with orchestra will affect our choice and placement of the microphones, and the number of inputs needed.

Recording

Our aim is to capture a performer or ensemble playing together in a great-sounding acoustic – so the workflow is more linear or horizontal than it is vertical. We’re not overdubbing and layering new sounds on top, but we can capture several takes of the same music and then join the best takes together until we have the whole piece, so it sounds like one performance. Because of this way of working, it’s essential that the performer is as well-prepared as they can be, as we can’t make detailed corrections of pitch or timing as we can in other genre recordings (autotune is a no-no!). As we’re recording natural acoustic sounds that can’t be “fixed in the mix” (did I mention no autotune?), it’s important to choose microphones and pre-amps that will do an excellent job of capturing that audio faithfully without colouring the sound too strongly. When placing microphones, we should think about how and where the sound of an instrument is generated, and how it resonates in the acoustic space. A common basic technique is to use a stereo pair of microphones to capture a musician or a whole ensemble within its acoustic, and then to add “spot mics” – microphones placed closer to individual instruments – to capture more details. If there’s the luxury of an abundance of microphones, we might sometimes add an extra pair of microphones even further away from the sound source to capture more of the acoustic space, and then we can blend all of these microphones together, to taste.

Post-production

Mixing classical music usually involves finding a pleasing balance between the recorded channels (for example, the stereo pair, spot mics, and ambient mics), applying suitable panning, noise reduction, and light EQ, and limiting as necessary (perhaps compression for overly-excited percussion or other highly dynamic instruments). If it’s a large ensemble recording, we might use automation to bring up solo parts if they are shy and need a little help, or to highlight interesting musical details and textures. A touch of digital reverb can often add a smooth and satisfying sheen, especially if a perfect-sounding recording space just isn’t available (it often happens): some epic digital reverb can help to glow up a flat and boring-sounding space.

Aside from live concert recordings, a lot of classical music post-production lies in the editing: often there’ll be several takes of the same material, and the challenge is to select the best performances and stitch it all together in a seamless way so that the transitions can’t be heard – while maintaining the original energy and pacing of the performance, and not going overboard on crazily detailed editing, as that’s kind of cheating (see TwoSetViolin’s hilarious video 1% Violin Skills 99% Editing Skills)! It is an advantage – and probably essential in some situations – to be able to read music scores. It’s really helpful to follow the score as the musicians are playing, to write notes on the best (and worst) takes, to guide them and suggest what they might like to repeat, change or improve, and to make sure that all parts of a piece have been recorded.

In summary

The world of classical music production is an exciting space where audio engineers, producers, and musicians collaborate closely together to immortalise wonderful compositions in audio so that a wider audience can hear and enjoy them. If you’d like to get into classical music production, there’s no better way than to learn by jumping in and practising doing lots of recordings – try different mics and positions in different acoustic spaces, listen to lots of classical music recordings, read up on the different instruments, and use your ears as your most important tool. You’ll soon be Bach for more!

An Introduction to Sync Licensing

As a musician, you have most likely become aware of the word “sync”. Perhaps you have researched and feel you have a pretty good understanding of the basics. Maybe you’ve even had sync placements. This blog is going to cover the basics for those who are just hearing the term, but more importantly, I want to help you figure out if sync is for you. In my opinion, there are two clear pathways for a musician to take when it comes to seeking out sync licensing opportunities. Hopefully, this will help you determine if one of those paths is right for you.

First, some definitions

Sync is short for “synchronization” or “synchronization licensing,” which refers to the license music creators grant to anyone who wants to “synchronize” video of any kind with recorded music.

Music Supervisor is the person who chooses music for every moment of a film or show. Sometimes the composer and the supe are the same person (usually on lower-budget films).

A Music Library is a searchable catalog of songs and tracks. People looking for music can search the database, filtering by various features such as mood, tempo, genre, female vocal, male vocal, instrumental, and so on.

A Sync Agent is a person (independent or working for an agency) who acts as a go-between for music supes and musicians. They will often take a cut of the sync fee and might also take a percentage of the master use.

Production Music is the common term for “background music,” though it may have vocals. Music libraries will often compile “albums” of production music by theme, a specific mood or genre, etc. The licensing is already handled with the creator, which makes it much easier for music supes to quickly select a song without having to wait for agreements, approvals, etc.

What are “songs” that “work for sync”?

When music supervisors are looking for music, they are looking for a certain type of energy, a mood, a feeling. Surprisingly, a good sync song may not necessarily be a “hit” song and a “hit” song may not necessarily work for sync. Once in a while, a hit song is also a great sync song, but that is not the norm. Either way, when the perfect song is found for a particular scene or ad, magic can happen.

The best way to really understand what sync is all about is to do a little observation exercise. You are simply going to observe your normal day of Netflix watching or whatever way you watch your shows. Only today, pay attention to the music being played in conjunction with whatever you are watching. Whether it be a movie, a documentary, a reality show, a TV show from the ’80s, pay attention to the music. How many snippets of songs do you hear in each episode? Do any of the songs sound like “radio” songs? How many are instrumentals? Now, what about the ads? I don’t watch regular TV anymore but I do have a few shows that I love to watch on YouTube. So I still see ads quite regularly. How about you? What kind of music are you hearing in the ads?

Every piece of music you heard was composed, written, and performed by a person or people. Each piece of music should have a proper license, and a cue sheet should have been submitted to a performing rights organization so that the songwriters and publishers can be paid a royalty. The value of that piece of music varies from a penny to hundreds of thousands of dollars and everything in between. The amount paid is based on numerous factors: is it background or under dialog; is it playing on a radio or jukebox on screen; is it with or without vocals; how much of the song is played; where does it appear in the film (opening credits, montage scene, etc.); and is it a well-known song or major artist, or an indie? Sooooo many factors play into the “value” of that placement. Some songs are paid an upfront sync fee in addition to the songwriting/publishing royalties. Some are not. Some of the songs (especially background, instrumental music) are composed by someone who might work directly for the company creating the show or movie, or the composer may work for the publisher or library that licensed the music. It’s a complex biz.

So, what about the two paths?

I landed my first sync placement back in 2006-ish, and it was sort of a fluke. Long story short: a co-writer/co-producer and I wrote the song specifically for a small, independent film after reading the film synopsis. The song made it into the movie, which aired on ABC Family (and is still streamed regularly on a variety of platforms). Then we shopped it to some music libraries and a music publisher. One of the libraries secured multiple placements for that song, plus several other songs we had already written. In recent years, I’ve tried to dive deeper into creating specifically for sync but seem to have no time for that. I’ve become crazy busy as a full-time music producer for artists. This has helped me clearly see the two paths.

Path one: you are a creator of hundreds of songs, beats, and tracks, and you are pitching almost as much as you are creating. On this path, it is a numbers game. It’s all about quantity. The more “content” you have, the better your chances of getting a sync placement. This scenario is ideal for you if you:

On this path, you can start out by pitching to music libraries but the ultimate goal will be to network to the point where you are receiving briefs from sync agents and production companies directly. This path can take a lot of time before you begin to see the fruits of your labor. Time is needed to make connections, to find good collaborators, and to earn the trust of sync agents and libraries.

Path two: you are an artist who is focusing on building your artistic brand, creating songs that connect with your fan base, creating music that you love and intend on performing. OR you are like me and love producing with and for artists to help them build their career as artists. This path is ideal for you if you:

If this is the path for you, pitching to a sync agent or a manager or producer who has connections to sync agents or music supervisors may be your best bet. If your genre is very current, it may have a short shelf life so get going on that pitching asap. This path requires that you focus on the main goal (building your music business as an artist and/or producer) and perhaps spend a few hours a week on pitching, emails, metadata, and contracts.

If you are on path two, you can try your hand at creating a song or two that are “you” as an artist and could be released on an album or as a single but would also work for sync. There’s nothing wrong with that approach! How do you know if your song would work for sync? Remember the statement above about songs for sync needing to capture a mood, etc? This is very important. What is also important is that there are no lyrics that are about a specific time, location, person, etc. Once in a while, a song with specific lyrics can work perfectly for a scene but it’s better to keep the lyrics “generic” enough to increase your chances for placement. Generic doesn’t mean boring! This is the actual struggle! Writing lyrics that are genuine, interesting, engaging but not specific is actually the hardest part.

Important Companies, Contacts and Resources:

If you are wondering if your songs are “sync ready”, I’m happy to give them a listen and throw my opinion at you. 😉 Send me up to 3 songs and put “Sync Songs?” in the subject line, to becky@voxfoxproductions.com


Demystifying Loudness Standards

Every sound engineer refers to some kind of meter to aid the judgments we make with our ears. Sometimes it is a meter on tracks in a DAW or that session’s master output meter, other times it is LEDs lighting up our consoles like a Christmas tree, sometimes it is a handheld sound level meter, other times a VU meter, etc. All of those meters measure audio signal using different scales, but they all use the decibel as a unit of measurement. There is also a way to measure the levels of mixes that is designed to represent human perception of sound: loudness!

Our job as audio engineers and sound designers is to deliver a seamless aural experience. Loudness standards are a set of guides, measured by particular algorithms, to ensure that everyone mixing audio delivers a product that sounds similar in volume across streaming services, websites, and radio or television stations. The less work our audiences have to do, the better we have done our jobs. Loudness is one of the many tools that help us ensure that we are delivering the best experience possible.

History           

A big reason we started mixing to loudness standards was to achieve consistent volume, from program to program as well as within shows. Listeners and viewers used to complain to the FCC and BBC TV about jumps in volume between programs, and volume ranges within programs being too wide. Listeners had to perpetually make volume adjustments on their end when their radio or television suddenly got loud, or to hear what was being said if a moment was mixed too quietly compared to the rest of the program.

In 2007, the International Telecommunication Union (ITU) released the ITU-R BS.1770 standard: a set of algorithms to measure audio program loudness and true-peak level (Cheuks’ Blog). The European Broadcast Union (EBU) then began working with the ITU standard, and modified it when they discovered that gaps of silence could bring a loud program’s measurement down to within their specifications. The result was a standard called EBU R 128, which adds gating: levels more than 8 LU below the ungated measurement do not count towards the integrated loudness level, which means that the quiet parts cannot skew the measurement of the whole program. The ITU standard is still used internationally.

Even after all of this standardization, television viewers were still being blasted by painfully loud commercials. So, on December 13th, 2012, the FCC’s rules implementing the Commercial Advertisement Loudness Mitigation (CALM) Act went into effect. From the FCC website: “Specifically, the CALM Act directs the Commission to establish rules that require TV stations, cable operators, satellite TV providers or other multichannel video program distributors to apply the ATSC A/85 Recommended Practice to commercial advertisements they transmit to viewers. The ATSC A/85 RP is a set of methods to measure and control the audio loudness of digital programming, including commercials. This standard can be used by all broadcast television stations and pay-TV providers.” And yup, listeners can file complaints to the FCC if a commercial is too loud. The CALM Act only regulates the loudness of commercials.

Countries outside of Europe have their own loudness standards, derived from the global ITU-R BS.1770. China’s standard for television broadcast is GY/T 282-2014; Japan’s is ARIB TR-B32; Australia’s and New Zealand’s is OP-59. Many European and South American countries, along with South Africa, use the EBU R 128 standard. There’s a link to a more comprehensive list at the end of this article, in the resources section.

Most clients you mix for will expect you, the sound designer or sound mixer, to abide by one of these standards, depending on who is distributing the work (Apple, Spotify, Netflix, YouTube, broadcast, etc.).

The Science Behind Loudness Measurements

Loudness is a measurement of human perception. If you have not experienced mixing with a loudness meter, you are (hopefully) paying attention to RMS, peak, or VU meters in your DAW or on your hardware. RMS (average level) and peak (loudest level) meters measure levels in decibels relative to full scale (dBFS). The numbers on those meters are based on the voltage of an audio signal. VU meters use a VU scale (where 0 VU is equal to +4 dBu) and, like RMS and peak meters, measure the voltage of an audio signal.
Those measurements would work for measuring loudness – if humans heard all frequencies in the audio spectrum at equal volume levels. But we don’t! Get familiar with the Fletcher-Munson curves (also known as equal-loudness contours). They chart, on average, how sensitive humans are to different frequencies. (Technically speaking, we all hear slightly differently from each other, but this is a solid basis.)
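To make those meter readings concrete, here is a minimal, hypothetical Python sketch (standard library only, no audio frameworks) of how sample peak and RMS levels map onto dBFS, where a full-scale sample value of 1.0 sits at 0 dBFS:

```python
import math

def peak_dbfs(samples):
    """Sample peak level in dBFS (0 dBFS = a full-scale sample of 1.0)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_dbfs(samples):
    """RMS (average) level in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# One second of a full-scale 440 Hz sine at 48 kHz:
# its peak sits at 0 dBFS while its RMS sits about 3 dB lower
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(peak_dbfs(sine), 2))  # ≈ 0.0
print(round(rms_dbfs(sine), 2))   # ≈ -3.01
```

Notice that neither number knows anything about frequency content: a 50 Hz tone and a 5 kHz tone with the same RMS read identically, even though we hear the 5 kHz tone as much louder – which is exactly the gap loudness measurement fills.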

Humans need low frequencies to be cranked up in order to perceive them as being at the same volume as higher frequencies. Sound coming from behind us is also weighted as louder than sound in front of us. Perhaps it is an instinct that evolved with early humans; as animals, we are still on the lookout for predators sneaking up on us from behind.

Instead of measuring loudness in decibels (dB), we measure it in loudness units relative to full scale (LUFS, or interchangeably, LKFS). LUFS measurements account for humans being less sensitive to low frequencies but more sensitive to sounds coming from behind them.

There are a couple more interesting things to know about how loudness meters work. We already mentioned how the EBU standard gates out anything more than 8 LU below the ungated measurement so the really quiet or silent parts do not skew the measurement of the whole mix (which would allow the loudest parts to be way too loud). Loudness standards also dictate the allowed dynamic range of a program (in loudness units). This is important so your audience does not have to tweak the volume to hear people during very quiet scenes, and it saves their ears from getting blasted by a World War Two bomber squadron or a kaiju if they had the stereo turned way up to hear a quiet conversation. (Though every sound designer and mixer knows that there will always be more sensitive listeners who will complain about a loud scene anyway.)
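As a rough illustration of that two-stage gating, here is a simplified, hypothetical Python sketch. It skips the K-weighting filter and the overlapping 400 ms measurement blocks that real BS.1770-style meters use, and instead starts from ready-made per-block loudness readings, applying an absolute gate at -70 LUFS and the relative gate described above:

```python
import math

def integrated_loudness(block_lufs, relative_gate_lu=8.0):
    """Two-stage gated average of per-block loudness readings (in LUFS),
    mirroring the shape of the EBU R 128 gating scheme described above.
    Simplified sketch: real meters first K-weight the audio and measure
    overlapping 400 ms blocks; here we start from ready-made readings."""
    def mean_lufs(blocks):
        # Average in the linear (mean-square) domain, then convert back
        return 10 * math.log10(sum(10 ** (l / 10) for l in blocks) / len(blocks))

    # Stage 1: an absolute gate at -70 LUFS drops silence entirely
    audible = [l for l in block_lufs if l > -70.0]
    ungated = mean_lufs(audible)
    # Stage 2: drop blocks more than 8 LU below the ungated level
    kept = [l for l in audible if l > ungated - relative_gate_lu]
    return mean_lufs(kept)

# 90 blocks of dialogue at -23 LUFS, a few whispers, a few silences:
# the gates keep the quiet material from dragging the reading down
blocks = [-23.0] * 90 + [-60.0] * 5 + [-100.0] * 5
print(round(integrated_loudness(blocks), 1))  # → -23.0
```

Without the gates, the silent and whispered blocks would pull the average well below -23 LUFS, and normalizing to that misleading number would make the dialogue come out too loud.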

Terms

Here is a list of terms you will see on all loudness meters.

LUFS/LKFS – Loudness Units relative to Full Scale (LKFS stands for “Loudness, K-weighted, relative to Full Scale”; the two are effectively the same thing).

Weighting standards – When you mix to a loudness spec in LUFS, also know which standard you should use! The following are the most commonly used standards.

True Peak Max: This one needs a bit of an explanation. When you play audio in your DAW, you are hearing an analog reconstruction of digital audio data. Depending on how that audio data is decoded, the analog reconstruction might peak beyond the digital samples. Those peaks are called inter-sample peaks. Inter-sample peaks will not be detected by a limiter or sample peak meter, but a true peak meter on a loudness meter will catch them. True peak is measured in dBTP.
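To see why inter-sample peaks slip past a sample peak meter, here is a brute-force, hypothetical Python sketch that reconstructs the waveform between samples with sinc interpolation (real true peak meters use an efficient oversampling filter, but the idea is the same):

```python
import math

def _sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sample_peak(samples):
    """The peak a plain sample peak meter would report."""
    return max(abs(s) for s in samples)

def true_peak(samples, oversample=4):
    """Estimate the true (inter-sample) peak by evaluating the bandlimited
    reconstruction between samples at 4x resolution -- a brute-force
    stand-in for a true peak meter's oversampling filter."""
    peak = 0.0
    for i in range(len(samples) * oversample):
        t = i / oversample  # fractional sample position
        # Whittaker-Shannon reconstruction of the analog waveform at time t
        x = sum(s * _sinc(t - k) for k, s in enumerate(samples))
        peak = max(peak, abs(x))
    return peak

def db(x):
    return 20 * math.log10(x)

# A 0.99-amplitude sine at fs/4 with a 45-degree phase offset: every sample
# lands at about +/-0.70, but the waveform between samples swings up to 0.99
fs = 48000
sig = [0.99 * math.sin(2 * math.pi * (fs / 4) * n / fs + math.pi / 4)
       for n in range(64)]
print(round(db(sample_peak(sig)), 1))  # ≈ -3.1 dBFS on a sample peak meter
print(round(db(true_peak(sig)), 1))    # much closer to 0 dBTP
```

The sample peak meter sees a signal comfortably 3 dB under full scale, while the reconstructed waveform nearly clips – exactly the gap a true peak reading exposes.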

Momentary loudness: The loudness at any given moment, measured over a very short (400 ms) window – useful for checking the loudness of a section as it plays.

Long-term/ Integrated loudness: This is the average loudness of your mix.

Target Levels: What measurement in LUFS the mix should reach.

Range/LRA: Loudness range – the dynamic range of the program, measured in loudness units (LU).

How To Mix To Loudness Standards

Okay, you know the history, you are armed with the terminology…now what? First, let us talk about the consequences of not mixing to spec.

For every client, there are different devices at the distribution stage that decode your audio and play it out to the airwaves, and those devices have different specifications. If the mix does not meet those specifications, the distributor will turn the whole mix up or down to normalize the audio to their standards. A couple of things happen as a result: you lose dynamic range, the quietest parts may still be too quiet, and any parts that are too loud will sound distorted and crushed due to compressed waveforms. The end result is a quiet mix with no dynamics and added distortion.

To put mixing to loudness in practice, first, start with your ears. Mix what sounds good. Aim for intelligibility and consistency. Keep an eye on your RMS, Peak, or VU meters, but do not worry about LUFS yet.

Your second pass is when you mix to your target LUFS levels. Keep an eye on your loudness meter. I watch the momentary loudness reading, because if I am consistently in the ballpark with momentary loudness, I will end up with a reliable integrated loudness reading and a dynamic range that is not too wide. Limiters can also be used to your advantage.

Then, bounce your mix. Bring the bounce into your session, select the clip, then open your loudness plugin and analyze the bounce. Your loudness plugin will give you a reading with the current specs for your bounce. (Caveat: I am using Pro Tools terminology. Check if your DAW has a feature similar to AudioSuite.) This also works great for analyzing sections of audio at a time while you are mixing.

Speaking of plugins, here are a few of the most used loudness meters. Insert one of these on your master track to measure your loudness.

Youlean Loudness Meter
This one is top of the list because it is FREE! It also has a cool feature where it shows a linear history of the loudness readings.

iZotope Insight
Insight is really cool. There are a lot of different views, including history and sound field views, and a spectrogram so you can see how different frequencies are being weighted. This plugin measures momentary loudness fast.


Waves WLM Meter
The Waves option may not have a bunch of flashy features like its iZotope competitor, but it does measure everything accurately and comes with an adjustable trim feature. The short-term loudness is accurate but does not bounce around as fast as Insight’s, which I actually prefer.

TC Electronic LMN Meter
I have not personally used this meter, but it looks like a great option for those of us mixing for 5.1 systems. And the radar display is pretty cool!

Wrapping Up: Making Art with Science

The science and history may be a little dry to research, but loudness mixing is an art form in itself, because if listeners have to constantly adjust the volume, we are failing at our job of creating a distraction-free, hassle-free experience for our audience. Loudness standards go beyond a set of rules; they are an opportunity for audio engineers to use our scientific prowess to shape our work into a unifying experience.

Resources

First, big thanks to my editors (and fellow audio engineers) Jay Czys and Andie Huether.

The Loudness Standards (Measurement) – LUFS (Cheuks’ Blog)
https://cheuksblog.wordpress.com/2018/04/02/the-loudness-standards-measurement-lufs/

Loudness: Everything You Need to Know (Production Expert)
https://www.pro-tools-expert.com/production-expert-1/loudness-everything-you-need-to-know

Loud Commercials (The Federal Communications Commission)
https://www.fcc.gov/media/policy/loud-commercials

Loudness vs. True Peak: A Beginner’s Guide (NUGEN Audio)
https://nugenaudio.com/loudness-true-peak/

Worldwide Loudness Standards
https://www.rtw.com/en/blog/worldwide-loudness-delivery-standards.html

Home Recording with Kids in the House

Banished. That’s where my kids would be when I recorded at home pre-pandemic. Not banished to their knowledge, just hanging with their grandparents for at least an entire day, at most overnight. The point is they would be gone, and I’d have sent our dog with them, too, if I’d had my way.

During the pandemic, I found myself itching to record and determined to avoid exposing vulnerable elders to a virus. Short of going into debt building a proper studio at home, I applied my knowledge and resources to develop these strategies:

Directional microphones are your friend. In an ideal recording situation, a large-diaphragm condenser microphone, like my Lauten 220, would be my preference for recording acoustic guitar and vocals. In my loud house, however, noise reduction was key. My kids could be shrieking away in the other room, but with my SM7B pointed away from them, I could sing my heart out with only minimal bleed. Plus, I could point my SM57 or Sennheiser e609 at my amp, again facing away from the children, to get some electric guitar underway.

Sound treatment is your friend. Believe me, I wish my home were properly sound treated – for so many reasons, not just recording purposes. In a pinch, it helps to hang quilts, order some shipping blankets, or hide out in a closet. It all works to cut down on noise. However, be sure to avoid cheap foam options that attenuate only the higher frequencies while doing little for the low end.

Record directly if you can. If you have a decent DI to connect your instrument to your interface, you won’t have to worry about home interference at all. You can always add effects later, and you may even come up with something innovative and fun with that dry signal. Many interfaces even allow for this without a DI.

Do many takes. While you have everything set up, run through the miked stuff a few times, keeping the same distance from the microphone when you sing or play for consistency’s sake. The little ones won’t scream at the same exact part each time you sing the song unless they’re especially cruel and deliberate about it. You can stitch together the best takes later.

Communicate. Let the kids know you’re going to be doing something special and need quiet for a short while. Talk about what they can be doing to entertain themselves in the meantime. Set boundaries accordingly beforehand and there should be fewer interruptions. Just be prepared to keep your promises if you make any (i.e. ice cream following the recording session, etc.)

It’s never going to be perfect, and of course, it requires flexibility, but it’s completely possible to record at home with your kids around. Breaks may be necessary and you may not get the best sound of your life, but what’s the alternative? Not doing it? Make those perfectly imperfect recordings at home. Lead by example and show the young ones in your life that there is always room for creativity so that they can learn to prioritize their own as they move beyond childhood. And if all else fails, scrap it and try again when they’re asleep.

On Pressing the Button

There are two songs that I remember having written at age six: a rock n’ roll wonder called “Thunder and Lightning” (thunder and lightning/ yeah yeah/ thunder and lightning/ oh yeah) and a narrative style ballad about a little cat that emerged from a mysterious magical flowerbed to become my pet.

I remember them because I recorded them.

My dad had a boombox, probably intended for demos of his own. I don’t know what kind it was. All I knew was when I pressed the key with the red circle on it and sang, my voice and songs would come back to me whenever I wanted them to. And I recorded more than those two improvised tunes at age six — I completely commandeered that boombox, interrupted and ruined countless demos of my dad’s. Sometimes I’d prank my younger brother and record the squealing result. Later, in my best robotic voice, I’d make a tape-recorded introduction to my bedroom, to be played whenever somebody rang my “doorbell,” AKA the soundbox from my stuffed tiger, ripped out of its belly and mounted beneath a hand-scrawled sign. I’d even go on to record a faux radio show with my neighborhood friends comprising comedy bits, original music — vocals only, sung in unison — and, yes, pranking my brother.

Eventually, the boombox either moved on from me, broken or reclaimed by its previous owner, or I moved on from it. I didn’t record anything for a long time, even though I formed other vocals-only bands with friends and continued to write and develop as a songwriter.

Had the little red circle become scary? Was I just a kid, moving from interest to interest, finding myself? Probably the latter. But for some reason, as a young songwriter, I moved from bedroom to bathroom studio, from garage to basement studio, jumping at the chance whenever some dude friend with gear and a designated space offered to get my songs down in some form. Sometimes it was lovely. Other times boundaries were broken and long-term friendships combusted. I persisted because I believed that I needed the help, that I couldn’t record on my own.

Years ago I had a nightmare: I had died without having recorded my music. From a bus full of fellow ghosts with unfinished business, I desperately sang my songs down to the living, hoping someone would catch one, foster it, and let it live. In the early days of the pandemic, this nightmare haunted me. That red circle called to me.

Let’s press that record button, yeah? On whatever we’ve got that has one. I’ve had my songs tracked in spaces sanctioned as “studios” by confident men, so why not my own spare room? Why not the record button on my own laptop screen? I’m setting an intention, for myself and for you. When I think about what I wish to provide for you as a SoundGirls blogger, it is this: the permission to record yourself on your own terms, wherever you are in your journey. You are valid.

Reverb Hacks to Make Your Tracks Sparkle

Reverb is a great tool to help bring a bit of life and presence into any track or sound. But why not make it sound even more interesting by applying a plug-in such as an EQ or a compressor to the reverb? It can often give your sound a unique spin, and it’s quite fun just to play around with the different sounds that you can achieve.

EQ

The first EQ trick helps with applying reverb to vocals. Have you ever bussed a vocal to a reverb track but still felt like it sounds a bit muddy? Well, try adding an EQ before the reverb on your bus track. Sculpt out the low and high end until you have a rainbow curve. Play around with how much you take out and find what sounds great for your vocals. I often find that by doing this you can improve the clarity of the lyrics as well as achieve a deep, well-echoed sound. This tip also helps if you’re a bit like me and can’t get enough reverb!
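
If you like to tinker, here is a tiny sketch of the idea in plain Python — not any particular plug-in, and the filter coefficients and function names (`one_pole_lowpass`, `rainbow_send`) are my own illustrative choices. It carves the lows and highs out of the send signal before it would hit the reverb, exactly the “rainbow curve” shape described above:

```python
def one_pole_lowpass(samples, alpha):
    """One-pole low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out, acc = [], 0.0
    for s in samples:
        acc += alpha * (s - acc)
        out.append(acc)
    return out

def rainbow_send(samples, low_cut_alpha=0.05, high_cut_alpha=0.6):
    """Carve out lows and highs ('rainbow curve') before the reverb.

    High-pass = the signal minus its low-passed copy (removes mud),
    then a gentler low-pass tames the harsh top end.
    """
    lows = one_pole_lowpass(samples, low_cut_alpha)
    no_lows = [s - l for s, l in zip(samples, lows)]  # high-passed vocal send
    return one_pole_lowpass(no_lows, high_cut_alpha)  # then roll off the top
```

The filtered signal is what would feed the reverb; the dry vocal stays untouched, which is why the lyrics stay clear even with lots of reverb.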

Creating a Pad Sound

If you’re interested in making ambient or classical music, or even pop music that features a soft piano, you might be interested in creating a pad effect. What this does is essentially elongate the sound and sustain it so it gives this nice ambient drone throughout the track.

You can achieve this by creating a bus track and sending your instrument to it. Then open your reverb plugin, making sure it is set to 100% wet. You can then play around with setting the decay to around 8.00s to 15.00s. Then send about 60% of your dry instrument track to this bus, adjusting it if it sounds like too much. Play around with these settings until you achieve a sound that you like.
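
As a rough sketch of what that bus is doing under the hood, the following Python (numpy only; `pad_reverb_bus` and its constants are mine, standing in for a real reverb plugin) convolves the 60% send with a long decaying-noise impulse response — a cheap stand-in for a 100%-wet reverb with an 8-15 second tail:

```python
import numpy as np

def pad_reverb_bus(dry, send=0.6, decay_s=10.0, sr=44100, seed=0):
    """100%-wet 'reverb' bus: convolve the send with a decaying-noise
    impulse response whose tail lasts roughly decay_s seconds."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(sr * decay_s)) / sr
    ir = rng.standard_normal(t.size) * np.exp(-3.0 * t / decay_s)  # long tail
    wet = np.convolve(np.asarray(dry) * send, ir)  # bus receives 60% of the dry
    return wet / (np.max(np.abs(wet)) + 1e-12)     # normalized wet return only
```

The returned signal is wet-only, just like the bus in your DAW; you’d mix it under the untouched dry track to taste to get the sustained drone.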

In conclusion, reverb is one of my favourite plugins to play around with and alter. It offers an incredible amount of versatility and can be used in conjunction with many other plugins to create unique and interesting sounds. It can be used on a wide variety of different music genres and comes in handy when you want to add a bit of sparkle to a track.

7 Steps to Making a Demo with Your Phone

The internet is full of songwriters asking the question: how good does my demo have to be? The answer is always, “it depends.” Demos generally have one purpose: to accurately display the lyrics and melody of a song. There are varying types of demos and demo requirements, but for this blog, that one purpose is our focus!

*(see the end of this blog for situations where you will want to have your song fully produced for pitching purposes)

If you are a

Demos for these purposes can be recorded on your phone. If you have recording software (otherwise known as a DAW: Digital Audio Workstation), you can use that too. The steps are the same. But for those who don’t have a recording setup and have no interest in diving into that world, your phone and a variety of phone apps make it super easy.

Figure out the tempo

The “beats per minute,” or BPM, is a critical component of the momentum and energy of a song. Pretty much every novice singer/songwriter has a tendency to write their songs in various tempos. The verse starts off at a certain groove, and then by the time the first chorus comes in, the tempo has gradually increased to a new BPM. Then it goes back down during the soft bridge, then back up to an even faster tempo at the end.

None of us were born with an internal metronome, so don’t beat yourself up about it. However, most mainstream music that we hear today is going to be in a set tempo for the majority of the song. There may be tempo changes, depending on what the song calls for but, generally speaking, most songs do not change tempo. You and your producer can decide if a song needs tempo changes or if it is the kind of song that should be played “freely”, with no metronome at all.

Start by playing your song, and imagine yourself walking to the beat of your song. Is it a brisk walk? Or a slow, sluggish walk? A brisk walk is about 120 beats per minute. Pull up your metronome and pick a starting BPM, based on how brisk (or un-brisk) the imaginary walk feels. Set that tempo and then play along to it. If it’s feeling good, keep playing through until you’ve played every song section (verse, chorus, bridge) at that tempo. If it stops feeling right at some point, adjust accordingly. Ideally, you’ll find that happy BPM that is perfect for the song.
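
The arithmetic behind that imaginary walk is simple enough to sketch (the helper names here are mine): 60 divided by the BPM gives the seconds between clicks, and averaging the gaps between taps gives you the BPM back — the same math a tap-tempo app does:

```python
def beat_seconds(bpm):
    """Seconds between metronome clicks at a given tempo."""
    return 60.0 / bpm

def tap_tempo(tap_times):
    """Estimate BPM from a list of tap timestamps in seconds
    (e.g. tapping along with a stopwatch while you play)."""
    gaps = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return 60.0 / (sum(gaps) / len(gaps))

print(beat_seconds(120))                 # -> 0.5, one beat every half second
print(tap_tempo([0.0, 0.5, 1.0, 1.5]))   # -> 120.0
```

So a brisk 120 BPM walk means a step every half second; a sluggish 80 BPM walk means a step every three-quarters of a second.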

Type up a lyric sheet: I have artists put these lyric sheets on Google Drive and share them with me so that we are always working off of the same lyric sheet as changes are made.

Mark tempo changes on the lyric sheet: mark specific tempo changes if there are any. Mark a ritard (ritard means to slow down) where it needs to be as well. If there is going to be a ritard, it is usually in the outro.

Check the key: Do you accidentally change keys in different sections? Just like the case of tempo changes, beginner singer/songwriters, especially if they’ve written the lyrics and melody a cappella (without accompaniment), can easily change keys without knowing it. If you don’t play an instrument, that’s ok! Have a musician friend or teacher help you. Your producer can also help you with this, as long as that is included in the scope of their work. Ask beforehand. If you do know the key and have determined the chords, include those in your lyric sheet.

Can you sing it: Have you sung it full out with a voice teacher in the key you’ve written it in? Singing it quietly in your room in a way that won’t disturb your roommates might not be the way you want to sing it in the recording studio.

Record the song: Record the song with the metronome clicking out loud if you aren’t using an app (you may need two devices: one to play the metronome and one to record). There are apps available where you can record yourself while listening to the click track through earbuds; then when you listen back to the recording, you won’t hear the click track. The point is that you sang it in time. One app I’m aware of where you can do this is Cakewalk by Bandlab. There are many!

Share the file: Make sure you can share the audio recording in a file format they can play. MP3s are the most common compressed audio files and can easily be emailed, but most of our phones don’t automatically turn our voice memos into MP3s. As a matter of fact, some phones will squash an audio file into some weird file type that sounds like crap (I have a Samsung and it does this!).
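
Circling back to the recording step: if you’d rather not juggle two devices, you can also render a click track to a WAV file and play it in your earbuds while you record. Here’s a small sketch using only Python’s standard library — the function name and the 20 ms sine-burst click are my own choices, not any particular app’s:

```python
import math
import struct
import wave

def write_click_track(path, bpm=120, bars=4, beats_per_bar=4, sr=44100):
    """Write a mono 16-bit WAV of short sine-burst clicks at the given BPM,
    with a higher-pitched click accenting the downbeat of each bar."""
    beat = int(sr * 60 / bpm)        # samples between clicks
    click_len = int(sr * 0.02)       # each click lasts 20 ms
    total = beat * bars * beats_per_bar
    samples = [0] * total
    for b in range(bars * beats_per_bar):
        start = b * beat
        freq = 1500 if b % beats_per_bar == 0 else 1000  # accent beat one
        for n in range(click_len):
            samples[start + n] = int(20000 * math.sin(2 * math.pi * freq * n / sr))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)            # mono
        w.setsampwidth(2)            # 16-bit
        w.setframerate(sr)
        w.writeframes(struct.pack("<%dh" % total, *samples))
```

Calling `write_click_track("click_120.wav", bpm=120)` gives you four bars of 4/4 at 120 BPM; loop the file for as long as the song runs.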

The most important steps for creating a demo for the above-mentioned purposes are making sure you have fine-tuned lyrics, melody, and song structure in a (mostly) set tempo. Following all of these steps will make you a dream client for your producer!

*If you want to pitch a song for use in film or TV (licensing/sync) then it needs to be a fully produced song. Do NOT submit demos to music libraries or music supervisors. They need finished products.

If you want to pitch your song to a music publisher, who in turn will pitch your song to artists, they will want full production in most cases. The artist may have it entirely reproduced but you have to “sell” them the song. You want to shine it in the best light possible. A demo would be needed for the creative team (producer, studio musicians, etc.) who will create your produced version for pitching. 


Hybrid Recording: The Art of the Details 

Good art lies in the details. Oscar Wilde said of writing poetry: “I spent all morning taking out a comma and all afternoon putting it back in again,” and while facetious (like everything Wilde said), it rings true for all forms of creation. Once you get through the bulk, the initial drive for content, then it’s time for the details. A single comma can make all the difference. It can change the meaning, change the interpretation, change the flow, change the visual aesthetic of the poem. It is a decision as important as the change is small. It’s when you begin to dig in and to polish that art really takes off.

This is no different for music.

You can get down your chords and melody, layer your instruments, get broad strokes mixing, and get it sounding pretty (damn) good with ease, but getting that last little push, that takes talent, and it takes time. How much time you have to make a record depends on your budget, and that graph is a smiley face. You start out home recording: no budget, just your time, and you can spend as long as you want creating and fine-tuning to make the art that you want to make. Eventually, though, you may get the itch to do something more. Maybe you hate engineering or can’t wrap your head around mixing; maybe you want a producer to help you bring your musical vision to life. Maybe you just want to experience going into a studio. So you save and save, and when you’re ready, you realize that studios and engineers and producers are expensive. Thousands. At best you can do a couple of days. You’re now losing time. Now, instead of having forever, you have hardly any time, and the details get left behind. Of course, as your budget gets larger you can afford more time, until you’re working with so much money that you’re back where you started, able to spend as long as it takes (unless you have a deadline) to get your art the way you want it.

Hoping for a million-dollar budget is probably out of the realm of possibility for you, as it is for most artists, but that doesn’t mean you can’t put the level of creation into your art that you want. Enter what I call hybrid recording (not to be confused with recording through analog gear into a DAW, as the term used to mean. These days, that’s just called “recording.”). You do your initial recording in a studio, getting your live tracking and as many parts as possible done (especially drums), and then go back to home recording for the rest of it. That’ll give you that studio sound and that studio experience (since, you know, you’re recording in the studio), while also giving you the freedom and flexibility not only to create the art you want but to do so without time and money pressure forcing your inspiration. That can yield amazing results, especially with a talented producer who can mitigate the anxiety of the clock (it is the producer’s and engineer’s job to keep track of time, btw, not the artist’s), but sometimes what you really need is a late-night cup of coffee and a pair of headphones in the place you’re most comfortable.

Because of the pandemic, I’ve been doing a lot of distance/email producing, and I will say that it’s actually working out pretty well. My artist will record their song, with all the time they need to construct arrangements and get the right takes, and then email me a mix. I’ll listen through, make notes that I send back, and we’ll do it again for the next version until we’re both happy with the arrangement and the performances and feel the song is ready for mixing. It takes a lot less time on my part than actually going into the studio to do overdubs, and it gives the artist that flexibility and freedom. Yes, it does mean that I can’t be there to coach performances and to stoke inspiration in the moment, which are all really important roles for a producer (probably the most important), but that doesn’t mean we can’t get amazing results, because we do. It just takes a little while longer, a little more back and forth, and a little more guesswork (depending on the rough mixes).

So how does a recording session like this work?

For a 10-12 song LP, you’re going to be looking at three days in the studio. The first day will involve a lot of setup (4-6 hours at the low end, possibly more), and at least the first two days will be over ten hours. If your engineer or studio won’t work that long, then you’ll have to add on another day. Be upfront with your budget, and try to have some padding in case things go over. Your producer/engineer should be able to adjust things to work with your needs. Maybe you skip using all that fancy outboard gear that takes a bunch of time to set up, so you can get an extra song or two done on day one. Still, you shouldn’t push or rush your way through everything in less than two days. I’ve done 6-7 song days before and they suck, and the results sound like it. In order to prepare to go in, though, you’ve got to practice. I tell my artists to practice two hours a day for two weeks leading up to their recording session. And why wouldn’t you? You’re spending a lot of money and you want to make the most out of it; ample practice not only gets you the best results but saves you money.

Editing is a vital part of making music, and good editing can often be the difference between sounding indie and sounding pro. Even with the practicing, recording isn’t going to be a one-and-done kind of thing. You need to do multiple takes, not only because you need material in case you notice things you want to change later on, but because if the first take is great, the second take can always be greater. And there are always parts of each take that are better than the same parts in others. Even that killer magic take. Basic take comping will take up a good chunk of your recording time, and you don’t want to also have to spend that time having the individual parts edited. That sucks. It sucks for the band because it costs money, and it sucks for the engineer because editing is boring. I try to put off editing until after the recording session to save on studio fees. In a perfect world, I’d spend about 1-1.5 hours editing a song. In reality, I’ve spent 4-6 before, because the bands don’t listen to my practicing requirements. If I have to spend four hours per song on editing, for our 12-song LP that’s over a full week of work, versus less than two days. How much money do you think you’ll save by practicing?
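
The arithmetic behind that claim is worth spelling out. A trivial sketch, using the per-song figures from the paragraph above (the helper name is mine):

```python
def editing_hours(songs, hours_per_song):
    """Total editing time for a record."""
    return songs * hours_per_song

# 12-song LP, well-rehearsed vs. under-rehearsed band:
print(editing_hours(12, 1.5))  # -> 18.0 hours: roughly two long working days
print(editing_hours(12, 4.0))  # -> 48.0 hours: more than a 40-hour work week
```

Multiply the difference by whatever your engineer charges per hour and the two weeks of practice start to look very cheap.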

Once we get our studio tracking done, and our editing, it’s time for the actual fun and the whole point of this article. I’ll send you rough mix stems, so you can adjust your drum and bass and guitar and whatever levels, and then you’ll do what you do best: make magic. We do our back and forth, and once everyone is satisfied, we move on to mixing.

Mixing is a vital part of making music. It’s when we make all the different tracks we recorded sound good together, take care of that EQ and compression we skipped to save on studio fees, and handle the final creative touches to really make something special. Mixing can take a lot of time (the first song certainly does) and might end up costing as much as or more than your tracking session. You can get mix engineers for cheap, and you can get those who are expensive. You can even get

Finally, we get to mastering. Mastering is the final polish that gets the songs ready for release. This is typically done by a different engineer to get a different pair of ears on the production. And after that, we’re done!

What I’ve outlined here is not making records on the cheap. There are oodles of ways to get your budgets down to next to nothing. Make friends with someone who has a home studio, invest a few hundred bucks into your own equipment, find someone fresh out of engineering school looking to build a discography. But if you want the next step, something more, working with an established pro in a proper professional studio, then this is a way to go about it that can give you that sound and experience, and make something that would otherwise be unaffordable just about within the realm of possibility.

But seriously, practice.

Lillian Blair is a producer, engineer, and audio educator working out of the Seattle area. She is currently a staff engineer at The Vera Project Studios, where she chairs the Audio Committee, teaches studio recording and audio mixing and mastering. She is also co-founder of the new Audio Engineering Certificate Program at North Seattle College.
