Seeing DAWs Introspectively

A common question amongst engineers, producers, and music makers alike is, “which digital audio workstation (DAW) do you use?” The first time someone asked me this question, I was a new student at Berklee aspiring to join the music production and engineering major. The question felt more like an investigation into my qualification for the program rather than innocent curiosity. For a while, I felt ashamed to share with anyone that I started producing music on GarageBand for fear that it reflected my lacking skill set. I even curated my own impressions of people based on the DAW they chose, assuming others were doing the same about my colleagues and me. I put Pro Tools on a pedestal and believed it was the only “correct” DAW to fulfill recording, editing, and mixing needs. Truthfully, I was stuck in this bubble for a while.

During the pandemic, I had the space to change my perspective on a lot of the opinions and ideas I picked up while at music school, and DAW choice was one such opinion. When I think of the same question now, it seems more equivalent to asking someone about their zodiac sign. I believe that DAWs have their own personalities that reflect the kind of creator that uses them. I experienced this through the development of my own production as I migrated from using Pro Tools to using Ableton Live.

I learned Pro Tools as a means of serving the musicians I was working with when I began focusing on engineering recording sessions and mixing. I knew that it was the main DAW used in professional recording studios for tracking live instrumentation, and I was intrigued by the technicality of it all. In a way, this took my mind off of the competitive environment of a hyper-talented musical community and gave me the chance to shine somewhere else. I saw Pro Tools as a very manually controlled piece of software that encouraged me to take control of the intricate details of every recording session. Setting up the session parameters, I/O template, and playback engine and ensuring a smoothly run session was the ultimate expression of my technical competence.

While I still love to use Pro Tools for recording vocals, vocal tuning, and time-based editing and mixing, I recently recognized that Pro Tools wasn’t serving my needs as a creator. During the pandemic, I shifted gears to focus on my own music again, but I felt stuck in a pattern of using Pro Tools like it wasn’t meant for me. In order to form a healthier relationship with the DAW, I needed to step away from it and dive deeper into my own artistic desires.

While I tried using Logic Pro in the past for my music production, I struggled to break away from software instrument presets (I am a strong advocate for creating with presets, but I felt that I often trapped myself into a different sonic message than I intended for the song). I still felt like Logic Pro was making a lot of choices for me like EQing, routing, and time-based effects, and I even felt less in control with the playlist comping. This isn’t to say that Logic Pro is a bad DAW, although I might’ve assumed that a few years ago. There are loads of excellent songwriters and producers that work seamlessly in Logic Pro and make incredible music. I never needed to label Logic Pro a “bad DAW” just because it was bad for me. I only needed to recognize why Logic Pro wasn’t working for me, which stemmed from developing unhelpful habits that stifled any progress in producing a song.

I used Ableton Live lightly for some of my electronic production classes, but I never took the time to learn how I could curate the program to suit my songs. This was mostly because, during my education, I was purposefully distracting myself from discovering how a DAW like Ableton Live could serve me, so I didn’t have to confront the vulnerable desire within myself to use my production skills for me. With the space of the pandemic, I saw the chance to teach myself how a DAW untarnished by any of my own bias or insecurity could function as a vessel for my artistic evolution. Ableton Live struck just the right balance between suggested presets and easy-access controls on one hand and technical options to exercise the engineering part of my brain on the other. I had ideas for how I wanted the electronic elements in my recorded music and performances to sound, and I had a much easier time bringing them to life and enhancing them in Ableton Live. I continue to learn more in Ableton now by practicing patience with the techniques I’m finding in it and by piecing my original music together in a calm and kind manner.

DAWs are less like a uniform you have to wear and more like an eclectic wardrobe that fits you perfectly. I used to mindlessly pass judgment on the tools that others in my field worked with, but through my own experiences, I’ve changed my mindset to accept that there is room in this world for everyone and everything. There is, in fact, space for all kinds of creators and musicians with unique ideas and messages, and various software to support that reality. This is a dramatic way of saying: use the DAW you love and not the one someone told you to use.

The Importance of Being a Good Networker

I’m sure you’ve heard it before. You’ve got to network. You’ve got to get involved. You’ve got to meet the right people. Well, here it is again. It’s really important to network. And get involved. And meet the right people. Because while your skill will only take you so far, knowing the right people will take you farther.

This might seem intimidating at first, especially if you’re a bit of an introvert. But you can be the most amazing engineer on the planet, and if nobody knows it, you will continue to be the most amazing engineer on the planet without any leads. This is not to be confused with shouting from the rooftops that you are the bomb; this is about going out and making healthy human connections (which, let’s face it, often happen virtually these days).

When I first started, I would go up and introduce myself to ANYONE – and I was kind of shy, so that took a lot. I would think of any question – and I mean any question, even one I knew the answer to – and I would go ask. If I went to a concert, I would locate the sound person. And if there was a free moment, I would go ask my question. If I loved an album, I checked the credits, found the engineer, and wrote to them on Myspace (I just dated myself with that, but anyway).

But the biggest and most important place to network and meet people is through professional organizations. Going to conferences and signing up for meetings and workshops – i.e., getting involved in your community – is the best way to get ahead. Meeting people will not replace skill, so it’s a delicate balance. You have to meet people AND hone your craft, because you have to be ready for opportunity to knock on your door. But those connections you make are the things that will bring those opportunities knocking. Remember when I said it’s not so much who you know, but who knows you? Well, that’s going to require getting out there and meeting people.

I know this looks a lot different these days since a lot of this happens virtually. But it’s not to say you can’t pop questions in the chat, or reach out after the fact. Some of the conferences I’ve seen have smaller breakout rooms that allow you to have closer one-on-one chats. And from what I’ve seen most people on panels are open to people reaching out and often give their social media.

Don’t be discouraged if nothing happens immediately. I remember one year I must have applied for what felt like hundreds of jobs, and I got no after no, or was just ghosted. I sat down with my therapist (therapy is important), and she said it’s ok, you’re planting seeds. At the time, I was angry, because I was like, whatever, I want change – I want a job now. But I’ve never forgotten that, because you eventually do see results. See, the interview process is, in a way, networking itself. I had tons of emails and direct contacts from interviews I had been on, which resulted in jobs later on. Because instead of going through the internet, if I saw the job posted a year later, guess what – I had the email of the person who interviewed me a year ago, so I just wrote them directly. Also, if you make a good impression on the recruiters/employers, it’s not to say they won’t contact you later.

This is why it’s so important to be cordial. Be careful how you speak and treat people because you never know when or where those people may pop back up in your future. You don’t want to burn any bridges. How you speak and treat people will follow you everywhere you go. There may be a lot of people across the globe, but the audio community is in fact very small.

They say that success happens when opportunity meets preparation. So make sure you’re ready, and get out there (in person or virtually) and make some connections.

iZotope RX 101

There are many audio repair tools on the market, and arguably the most common one is iZotope RX. And no wonder – it gives the user very fine control over audio clean-up. I have come across questions from new users in several internet groups, so I thought it was about time that I shared everything I have learned about RX.

We will cover the basics: the interface, some user preferences, and the order of operations. This article will be heavily geared towards film, radio, and podcasts, but the software is also a workhorse in the music industry – I just personally cannot speak to how it is used in music. Lastly, I own RX 9 Advanced, so I am giving advice from that perspective. Take the advice in this article and apply what you can to your version of RX. RX Elements and RX Standard simply have fewer modules, so some of this will just be extra advice. Older versions will have slightly different algorithms, but much of this advice will still stand.

I do want to mention that I am in no way sponsored or paid by iZotope, and in writing this I am not necessarily endorsing a single product. I just consider myself pretty good at using RX and want to share the wealth.

The User Interface

This diagram is simply an overview. Hover over any of these tools in the software’s interface to see their full names and uses.

The Spectrogram View

The spectrogram is the “heat map” behind the waveform (when the waveform/spectrogram slider is set to center). It gives a detailed visual of the time, frequencies, and amplitude of your audio all in one graph. The loudest frequencies show up in the “hot” colors. The spectrogram helps to visually isolate audio problems like plosives, hums, clicks, buzzes, and intermittent noises like a cough, a phone ringing, sirens, et cetera.
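
If you are curious about what is happening behind that heat map, here is a minimal sketch of the general idea in Python using scipy and matplotlib. This is not iZotope’s code, and the file name is just a placeholder – it only shows the generic concept: short windows of audio are analyzed one after another, and the level of each frequency in each window becomes one column of color.

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("dialogue.wav")   # placeholder file name
if audio.ndim > 1:                           # fold stereo to mono for display
    audio = audio.mean(axis=1)

freqs, times, power = spectrogram(audio, fs=rate, nperseg=2048, noverlap=1536)
power_db = 10 * np.log10(power + 1e-12)      # show level in decibels, as recommended above

plt.pcolormesh(times, freqs, power_db, shading="auto", cmap="magma")
plt.yscale("symlog", linthresh=100)          # roughly mimics an extended-log frequency view
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Level (dB)")
plt.title("Spectrogram: time vs. frequency, level as color")
plt.show()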

There are some settings that need tweaking for maximum efficiency, and to suit your personal preferences. Starting with the scales on the right-hand side:

The amplitude scale should be set to dB. The other options are normalized 16-bit, and percent. Set the view from the dropdown menu that becomes available when you hover over the scale and right-click.

Right-click the frequency scale and select extended log. This is a zoomed-in view that allows you to see more details of each frequency range in the spectrogram.

The magnitude scale should be set to decibel. The settings for the amplitude and magnitude scales ensure the accuracy of decibel readings across frequencies in the spectrogram.

Once the scales are set, open Spectrogram Settings, either by right-clicking any scale on the right-hand side and selecting Spectrogram Settings from the drop-down menu, or by going to View > Spectrogram Settings in the menu.

You can save different presets depending on all the parameters in this window. But I want to draw attention to the color options. The default is cyan to orange. But I recommend blue to pink. The contrast is the most obvious when using this color combination, although it does come down to personal preference. (If you want casual onlookers to think you are dealing with paranormal activity, go with the green and white color map!)

Beware the Dangers of Overprocessing!
  
Overprocessing occurs when the user runs too many modules, or runs modules at heavy settings. It sounds like added digital artifacts, squashed dynamics, alterations to the original sound, or dropouts in the audio. I recommend three strategies to avoid overprocessing:

Only run the modules you have to. I have some processes I use all the time, but I might break my own rules if I feel like something needs a lot of one type of module. (e.g., maybe I skip Spectral De-noise if the bigger issue is too much room reverb, and I think I may run De-reverb more than once.)

Run the lightest possible settings. Dial in what you think you’ll need, then back off a bit, then render the module. It is better to run one module twice with light settings than once with really heavy settings.

Check your work by clicking back in your history window. If any version of the processed file makes the audio sound worse instead of better, undo everything up until that point. After taking a listening break, you might come back and realize that everything you have done sounds worse than the original audio. That is fine. It doesn’t make you bad at audio repair. Take a breath, and start over with fresh ears.

Ultimately, your goal is to keep it natural. Bring out the speech, and do not remove the environmental ambiance altogether. You can try to remove broadband noise so long as you can do so without affecting the audio you want to keep. Even some background noise is preferable to the distraction of noise cutting in and out in the background. Use RX as a tool to make the voice intelligible and to assist in blending audio together seamlessly. And note: audio repair tools are not a substitute for a good recording.

Order of Operations

iZotope published this flowchart on their blog back in the days of RX 7. I more or less adhere to it and have managed to avoid creating digital artifacts for a long time.  

Here is my current order of operations when repairing audio for podcasts and films. My process is inspired by iZotope’s recommendations, but I have tailored it. All these steps I run in the RX standalone application. I have a slightly different workflow when I connect RX to Pro Tools and will go over that in a future article.

Mixing module: I run this first, and only if the audio is out of phase.

De-hum: Only if there is a hum – and also, the HPF on it is really nice. I tend to apply a 50 Hz HPF and a 60 Hz reduction. But sometimes I might not run this if I think the audio will need a lot of processing.

Mouth De-click: I use this on just about everything. Set it to “output clicks only,” dial in until I hear bits of words, then back off, and back off some more. Uncheck “output clicks only” prior to rendering.

General denoising: a combination of any of the following, though I will not usually run them all:

Spectral De-noise: Meant to target broadband noise, like hiss, or tonal noise. It tends to be super heavy-handed. I use it only if I can grab a sample of the broadband noise so the algorithm can “learn” the noise profile (see the generic sketch after this list for what that “learning” step means conceptually). Set the threshold to taste. I usually do not use a reduction of more than 7 dB, but everyone has their own preference. I have had this module remove parts of the audio of a very dynamic talker, so less is more.

Dialogue Isolate: I use this a lot! It lowers the background, if not removing it altogether, depending on the content of the audio. Remember to keep it natural – try to just enhance speech over the background.

Voice Denoise: Can also learn a noise profile from a sample. More gentle than Spectral Denoise. Sometimes I use this as a finishing touch to make vocals “pop.”

Then come the more manual tasks, like painting out unwanted background sounds, plosives, and clicks that Mouth De-click did not catch.

EQ: my last step, if I am applying EQ in iZotope at all.
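
As an aside, for anyone wondering what “learning” a noise profile means conceptually (as mentioned under Spectral De-noise above), here is a toy spectral-gating sketch in Python with scipy. This is emphatically not how RX works internally – it is only a generic illustration of the idea: estimate the noise spectrum from a noise-only sample, then gently attenuate the parts of each frame that never rise above that profile. The file names and the assumption that the first second is pure noise are placeholders for the example.

import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, audio = wavfile.read("noisy_dialogue.wav")   # placeholder file name
audio = audio.astype(np.float64)
if audio.ndim > 1:
    audio = audio.mean(axis=1)                     # fold stereo to mono for simplicity

noise_sample = audio[:rate]                        # assume the first second is noise only

# "Learn" the noise profile: average magnitude of each frequency bin in the noise sample
_, _, noise_spec = stft(noise_sample, fs=rate, nperseg=2048)
noise_profile = np.abs(noise_spec).mean(axis=1, keepdims=True)

# Analyze the whole file and gently attenuate bins that stay near the noise floor
freqs, times, spec = stft(audio, fs=rate, nperseg=2048)
magnitude = np.abs(spec)
threshold = noise_profile * 2.0                    # bins below ~2x the noise floor count as noise
reduction_db = 7.0                                 # mirrors the "no more than ~7 dB" guideline above
gain = np.where(magnitude < threshold, 10 ** (-reduction_db / 20.0), 1.0)

_, cleaned = istft(spec * gain, fs=rate, nperseg=2048)
wavfile.write("cleaned_dialogue.wav", rate, cleaned.astype(np.float32))

In RX itself you do all of this from the module’s interface, of course – the point is only that the quality of the result depends heavily on how representative that noise sample is, which is why grabbing a clean sample matters.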

This should give you a solid start if you are just getting started with iZotope RX. Remember to keep the audio natural – your goal is to enhance the audio quality. There is also creative potential in all of these tools, but a foundational understanding will empower you to take them to that creative level. In future blogs, we will cover removing unwanted noises by hand and connecting RX to your digital audio workstation.

Becoming a Ham: Venturing into the World of Amateur Radio

 

Late last year I saw a post calling for individuals to sign up for a class to become an Amateur Radio operator. In the back of my mind flashed the opening scenes of Contact, starring Jodie Foster. Long story short, I signed up. The Make Amateur Radio Easier (MORE) Project was started by Dr. Rebecca Mercuri as an Amateur Radio outreach effort to attract underrepresented demographics of radio operators, and it is backed by Amateur Radio Digital Communications and the Institute of Electrical and Electronics Engineers (IEEE). Through MORE, my classmates and I will receive a hand-held two-way radio and training, and (fingers crossed) pass the Technician exam to be able to Get On The Air (GOTA).

Dr. Mercuri

Amateur Radio, or Ham as it is affectionately known, is communication by radio waves for non-commercial purposes. Radio operators use frequencies within a designated band to broadcast text, data, voice, and even images. They identify themselves with callsigns, combinations of letters and numbers assigned by their home country’s broadcast governing body. While the home country is in charge of licensing, a radio operator with the right radio and antenna can broadcast all over the world. Referencing Jodie Foster’s character in Contact again: she contacts Australia from her midwestern home, and later she is part of the search for extraterrestrial life.

Amateurs began broadcasting shortly after the advent of wireless telegraphy. Women were part of telecommunications even earlier, working as landline telegraph operators from the 1840s. As maritime radio gained steam, cultural ideas about the fragility of women in emergency situations led to the ousting of women from professions in telecommunications. When amateur radio rose at the beginning of the twentieth century, the hobby attracted fans indiscriminately. Mrs. M.J. Glass and Olive Hearberg were two of the first women in the hobby, registering in the 1910s. In the lingo of Ham radio, male operators called each other “OM,” or “old man,” in addition to their call signs. Starting in the 1920s, women used “YL,” or “young lady.” The Young Ladies Radio League (YLRL) was founded in 1939 by Ethel Smith, who, after seeing an ad in the membership journal of the National Association for Amateur Radio, became curious about how many women were Hams and wanted to reach out to them. After writing a letter of her own to the journal, she created the YLRL, which exists to this day, offering scholarships and networking opportunities of all kinds.

Owning an amateur radio involves more than just knowledge of antennas and equipment; there is a whole language and etiquette involved. Morse code is no longer required for United States licenses; however, many Hams still use it. The International Phonetic Alphabet is useful for intelligibility, especially when many phrases are shortened to acronyms. In turn, the acronyms are useful when a broadcast signal is full of distortion. Local Ham radio clubs often offer mentorships to encourage new Hams to keep broadcasting. Other advantages of clubs include access to more advanced equipment and opportunities to broadcast from unique locations like lighthouses.

I am excited to participate in the MORE course and find another way to marry my electrical and audio worlds. To be fair, I am also excited to emulate Jodie Foster in some way. Well, I need to jump off to study for my Technician exam – wish me luck. I don’t have a call sign yet, so this is Nicole, final clear.

For MORE information: n2re.org/m-o-r-e-project

Dr. Rebecca Mercuri, Grant Administrator, rtmercuri@ieee.org

Dr. Rebecca Mercuri interviewed by SoundGirls here: https://www.youtube.com/watch?v=VAOEL-VR6yc

Website for Young Ladies Radio League:  https://ylrl.net/

Music Reading for Drama Technicians

This month’s blog will go over some basic music theory concepts that I have found useful in my work as a musical theatre mixer. Full credit for the title goes to Professor Thomas W. Douglas of Carnegie Mellon University, who taught a class by that name when I was an undergrad. I know that not everyone working in theatrical sound has a formal music education (and I am not suggesting that it’s a requirement), but I think that being able to understand what is going on in a score, follow along in the music, and, in some cases, mix line-by-line from the score are good skills for anyone in this field to have.

 

Part 1: From the Top

 

Here’s a full-size cheat sheet of music theory 101! Courtesy of Thomas W. Douglas.

As with any piece of writing, the most important information about a score is at the top of the page. This first set of symbols gives you a roadmap for what the song should sound like and how it should feel when played. Some of that basic information includes:

Tempo: the “speed” of a song. Sometimes delineated in Italian terms ranging from the slowest (largo) to fastest (prestissimo). Often in modern shows, and especially new musicals, you will see more descriptive tempo terms such as “steady rock beat” or “upbeat.” Some of the tempo descriptions for the new musical I am currently mixing include “bluesy protest song,” “Dylanesque,” “pop 4,” “feverish,” and my personal favorite, “Tempo di ‘Four Seasons.’” Also common in modern and new musicals is a specific bpm marking, e.g., “quarter note = 120.” This is often included even on songs that aren’t played to a click, just to give a specific sense of how the tune should feel.

Time signature: the “meter” of the song. Shown as two stacked numbers, with the top number representing the number of beats in a measure (or bar) of music, and the bottom one showing what note counts as 1 beat. So, in 4/4 time, 4 quarter notes, or any other combination of notes adding up to 4 quarter notes (such as 2 half notes), makes 1 bar of music. Since 4/4 is overwhelmingly the most common time signature, it is often abbreviated by just writing a “C” for “common time.” Additionally, time changes within the same song are more common in show tunes than in pop music, as they can be helpful ways to revisit motifs from previous songs or highlight a shift in plot, mood, or tension.

Key signature: what “scale” the piece is in (or at least, much like tempo and time signature, what key the song starts in.) A good way to learn key signatures is by studying the “Circle of Fifths” (https://en.wikipedia.org/wiki/Circle_of_fifths), and learning the shortcuts to analyzing sharps and flats to quickly discern a key. The “signs” section in the graphic above shows the symbols for sharp, flat, and natural.

 

 

Clefs: what note range this part is written in. Most vocal parts for musical theatre are written in treble clef or G clef. A piano-vocal score (or PV) for a show will have the vocal lines in treble clef (sometimes with bass parts shown in treble clef 8vb, meaning that the notes are written in treble clef but should be sung down an octave), and then treble and bass clef lines for the piano part.

 

 

 

Part 2: Following Along in a Score

While plenty of music, both classical and pop, contains a common set of musical conventions, there are some things that I specifically look for when analyzing a musical theatre score. Some of those things are:

Repeats, Codas, Vamps, and Safeties

Repeats are exactly what they sound like: a section of music played through twice (or more times if indicated, but always a specific number of times). See the above glossary for a picture of the repeat symbols in music notation. Repeats can be useful when a song has a clear verse and chorus that are melodically identical; the copyist can then write the section into the music once (with both sets of lyrics under the vocal line) and delineate the first and second endings instead of writing the whole figure out twice.

Another thing that repeats allow for is Codas. A coda is the “tail” of a piece and is only played the last time through a repeated section. When a piece of music says “D.C. al Coda” (da capo al coda), it means “go back to the beginning and play the piece through again, but on this final pass, skip ahead to the Coda where the music indicates to do so.” Coda markings look kind of like a set of crosshairs and are often accompanied by the words “to coda” or “al coda.”

What about vamps? Romanbenedict.com defines a vamp as “a section of music that is repeated several times while dialogue or onstage action occurs. It is usually directed by the conductor’s cue, and as such can cope with the unpredictability of long stretches of dialogue or indeterminable theatrical machinations.” Vamps might be used when a song has a scene break in the middle of it because, while an 8-bar section of music always takes roughly the same amount of time to play, the pacing of the script (or the speed of a scenic transition) is not so precisely timed and may vary in length from night to night. The cue to move out of the vamp could be a certain line of dialogue or a scene change completing and will be clearly cued by the music director. It’s good to know where the vamps are in a musical number so that you can keep track of where you are in the song and not accidentally miss a pickup, band move, or a snapshot.

Safeties can be thought of as “optional” vamps, meaning that they could be played or skipped entirely based on timing variations from performance to performance.

Dynamics: Dynamics, as we learn in audio, are variations in loudness. Similarly, in music, dynamic descriptions tell us where this piece of music lands on the soft-to-loud, or in this case, “piano” to “forte” spectrum. In scores, you will find dynamics abbreviated using p for “piano” aka soft, f for “forte” aka loud, and m for “mezzo” or moderately (used in combination with p or f such as mp or mf).

Changes in dynamics: the Italian terms for these are crescendo and decrescendo. A crescendo is a gradual increase in volume and a decrescendo is a gradual decrease. They are written either as the abbreviation “cresc.” or, more commonly, by putting an elongated “<” or “>” symbol under the bars of music encompassing the duration of the dynamic shift. There may also be an indication of what dynamic you are moving to or from (such as p<f, meaning crescendo from piano to forte), but this is optional. Crescendo markings are one of my favorite shorthand symbols to use in my mix scripts, so rather than write out “fade band up to -8” I will simply write “B<-8”. I also often use crescendo markings at the end of songs to indicate a big band build, or decrescendo markings on the first lyric after the intro to indicate a small band decrease when the vocal starts.

Changes in tempo: there are a lot of Italian terms for slowing a tempo down; the most common one is ritardando, often abbreviated as “rit.” Other terms include rallentando (rall. for short), or “mosso,” which means motion and can have further elaboration such as più mosso (a little faster) or meno mosso (a little slower).

Key changes: also called modulations. These can be everywhere in musical theatre but are most common in the final verse of a song, where the music and action take a big emotional shift. You will know there is a key change because in the middle of the music there will be a new key signature that now supersedes the original key for the remainder of the song (until you get to the next key change).

Rubato: this means played freely, without a clear tempo.

Fermata: a long-held note, often at the end of a song as part of the “big finish.”

Button: Buttons aren’t necessarily explicitly defined in the music, but they’re hard to miss. A clean, 1-beat ending to a song. Here is a great thread from Lin-Manuel Miranda explaining the emotional intent of buttons and why some songs do or don’t have them: https://twitter.com/lin_manuel/status/951215051633037312

Pickup note(s): this is when a song begins with an incomplete measure of music. For an example of a pickup, we can revisit the opening of Les Misérables which I dissected in part 1. The song begins with an eighth-note pickup, such that melodically the music starts on the “and” of the 4th beat of the 0th measure.

 

The circled notes are the pickup and first beat.

Part 3: Putting It Together

Now armed with the tools to read a score more clearly, the next step is to apply your music theory in action as a mixer!

When should you opt to mix using a PV instead of a script? The answer is “it depends.” Also, the decision to mix from a score does not have to be universal but can be decided on a song-by-song basis.

There are many reasons to use a score or not, such as personal preference, designer preference, lack of access to an updated or well-formatted script, and many more. But basically, as always, it comes down to picking the best tool for the job – the job, in this case, being the mixing of a particular number in the musical.

So, for a real-world case study, here are some example PV pages for “Finale Ultimo” from my mix script for The Drowsy Chaperone, which I chose to mix from the score for clarity in making the pickups for the layered vocal parts that flow in and out as the main character, the Man in the Chair, sings the melody. This section of the PV matches up to approximately 0:30-1:39 in the recording from the cast album linked below.

 

 

 

 

I hope this blog has made you a little more musically “street-smart” and as always, feel free to reach out to me with any questions or suggestions for future blog topics!

 

Pick of the Best Budget Synthetic Instruments & Plugins

The current economic situation has meant that many creatives are experiencing uncertain and leaner times. Thankfully, one area that has been consistent throughout this difficult climate is the offering of reasonably priced, high-quality virtual instruments and plugins. Whether you’re unsure about making a big purchase or hesitant to commit to one library, there is an array of affordable sounds and tools out there, with many packages even available completely free.

Favourite free instrument sounds

Probably my favourite free instrument sounds of late are those in Spitfire LABS. This collection is extremely broad, ranging from realistic acoustic instruments to ambient and avant-garde sampled sounds and textures. The plugins are extremely intuitive and easy to use, with the huge bonus that they are compatible with any DAW, making them great for beginners and pros alike. The collection is updated regularly and is always completely free, making it an all-around fantastic resource.

The Spitfire Product Library is a professional-standard collection of instruments, many of which are regularly made in collaboration with the world’s biggest composers and creators. Spitfire often has package deals and offers on their products, and they also give students and educators 30% off all individual libraries. While a full professional orchestra library or an extensive synthesizer collection is pricey (though payment installment plans are available), many of the libraries and instruments are priced under $50, and some under $30 – a real bargain if you’re after a specific addition.

In a similar vein to Spitfire, many other companies have followed suit in offering free products alongside their bigger collections. My favourites include the acoustic instruments from the Orchestral Tools SINE factory and the eclectic collection of interesting sounds from Arturia, which also includes handy presets for easy variation and use.

Reasonably priced audio toolkit essentials

It’s worth signing up for company newsletters for offers and deals – this can be a lifesaver when there’s a particular product you’ve been saving for and waiting to upgrade. Promotions on iZotope products are featured regularly, with some free plugins always available, and smaller clean-up packages such as RX 8 Elements are currently priced at a very reasonable $29. Another one to watch is the Waves Plugins site, as the discounts on these products can make a huge difference – both to your collection and your wallet. With up to 80% off some items currently, there are also bundle deals and various offers to choose from. As Waves make such an array of products, being ready to pounce when sale time comes around can help you make a noticeable upgrade within a manageable budget.

How To Prep For Location Music Recording

A chance to get out of the studio, have a change of scenery, record in some exciting and different spaces, and explore a new acoustic – there are many reasons why recording music on location can be rewarding and great fun. Whether it’s a live concert or an album recording in a venue chosen specifically for its marvelous acoustics or unusual character, it has never been more important to be well-prepared. There’s nothing worse than driving for an hour, then arriving and unpacking all your equipment at the venue only to discover that you left behind that essential little piece of metal that connects two other essential pieces of metal – and without it, all your equipment is essentially useless!

Location recording is very common in genres such as classical music, where good acoustics are vital. More and more, artists of all genres want to capture their live performances in both audio and video. Here are some things to think about when planning a location recording, particularly if you’re working solo and bringing your own gear (larger-scale productions may have more variables, more equipment, a team of people, and more detailed planning). Here we’ll focus on stand-alone recordings rather than recording an amplified concert (of course, many live sound engineers also capture recordings to be mixed later, which requires a different set of equipment).

Pre-Production

Recording at a different location than your usual studio or workplace means you’ll need to be flexible and ready to deal with possible unpredictable factors or situations. Find out as much information about the production/concert and the venue as you can beforehand. Make sure you agree on a reasonable schedule that gives you enough time to comfortably set up and account for unplanned delays, such as traffic.

A gear checklist is essential, and we’ll go over this in more detail later on. If there’s a chance to scout out the venue beforehand, this can be extremely useful for testing out the acoustics, deciding how the musicians and instruments might be placed in the space, checking for traffic and other noises, and determining the quietest time of day to record, as well as for practical matters such as power outlet locations and figuring out an appropriate spot to set up your recording station.

Questions To Ask Beforehand

These are some things to think about, research, or ask the artists or venue directly:

Basic Gear Checklist

Additional useful bits and pieces: multi-tool, string, scissors, measuring tape, spare batteries, pen/pencil/highlighter, coloured tape for marking positions, torch/lamp, XLR and jack turnaround adaptors, headphone adaptors, mic stand thread adaptors.

Why Get Into Location Music Recording?

What makes recording music on location so enjoyable is the variety of projects and music you can work on, the thrill of capturing a live concert, the chance to explore new and interesting spaces, and the challenge of working out how to best capture music in an unfamiliar acoustic. You’ll learn how to problem-solve, you’ll likely never be bored from repetition, and you’ll have memorable recording sessions in beautiful, epic, and quirky spaces.

Learning a New Console

As I’ve started working more on the production side of things recently, and my home venue is replacing its beloved but falling-apart SC48s, I’ve found myself learning new consoles left and right. This month I thought I would lay out the process I use to get the hang of things when walking into a board I’ve never used before, although, of course, everyone will have their own method.

STEP ONE: SURFACE LEVEL

The first thing I do is open an existing file that is pre-routed to play around in. That way I don’t have to worry about the deeper settings and configuration yet. My goal is to get comfortable on the board at a surface level, so that I could theoretically walk into a room with someone else’s start file already up and mix a show on it.

I start with the simple: 

Can I pink the monitors or PA system?

Can I get music playing through the monitors or PA system?

Can I label and/or color-code my inputs?

Can I connect a mic and get my voice sent to the monitors and/or PA?

Can I put some basic EQ and compression on that mic?

Can I save, load, and transfer files easily?

Then I move on to some more complex things:

Can I route that mic through some reverb or other effects?

Can I link channels or make them stereo?

Can I change my patching efficiently?

If there’s a virtual soundcheck set up, how is that routed?

Can I build a mix relatively quickly?

STEP TWO: BACKEND

The next thing I do is load a default template file and try to build myself a start file. This way I can get familiar with all of the deeper functions of the console, see what settings exist, and configure and patch the file from scratch.

Can I configure my number of inputs, auxes, etc., and patch them correctly?

Can I route my matrices (for FOH) and/or auxes (for monitors)?

Can I configure my solo bus, talkback mic, and oscillator?

Can I set my customizable user keys?

Can I customize my fader banks and layers?

Can I set up and route effects?

Can I color-code my channel strips?

STEP THREE: BUILD A MOCK FILE

The last thing I do, if there’s time, is build a file from scratch. Starting completely from scratch (or from the start file I’ve already made, if it exists), I go through the entire process as if I were running a show for a specific band. I normally build a file for the artist I do sound for, because it’s an input list I know off the top of my head, and it gives me a starting point if we ever do a show on one of these consoles – but it doesn’t really matter whether you build a file for a specific artist or a generic rock show. The goal is to start from the ground up and do the entire process from start to finish: inputs, outputs, labels, arranging layers and locations, routing effects, talkback, monitoring, house music, and pink noise.

Digital is Dull

 

A few years ago I purchased a record player. The purchase shocked my baby boomer parents, who were confused about why their Gen Z daughter was ditching her iPhone and AirPods for old A and B sides. However, upon receiving the record player, I began to gather a collection of vinyl that spanned from Creedence Clearwater Revival to Taylor Swift. Seeing my parents’ shock, I began to show them that record players weren’t just for 20th-century melodies; vinyl was becoming a medium for consuming new and old music alike. Recently I surprised my 1980s-DC-punk-scene father by borrowing his cassette player to listen to a 2020 album I had bought on cassette tape. And I am not the only 2000s baby listening to my favorite artists on physical media – it is a growing trend, spanning 14-year-old Olivia Rodrigo fans to late-20s One Direction fans.

Cassette Tapes

A new addition has emerged in artists’ online merch shops: cassette tapes. From Dua Lipa to Harry Styles to Olivia Rodrigo, the rectangular boxes are the hip new collector’s item. Furthermore, the boxes are decorated with unique stickers and hued plastic to elevate the aesthetic appeal of the tapes. And while the convenience of an iPhone and headphones cannot be beaten, there are numerous websites selling portable cassette players. Stores frequented by the under-25 crowd, such as Urban Outfitters, are stocked with cassette players in numerous colors. The vast selection of cassette player colors and artists’ clear attention to cassette case design show just how much the aesthetics of the cassette tape matter.

Records

Last week, on April 23, crowds of patrons lined up outside record stores around the United States. Across the 50 states, reports began to emerge that a large chunk of the buyers were young people. This news came as no surprise to those who have been watching the upward trend in Gen Z record collectors. Most artists these days release vinyl copies of their albums. Artists like Maggie Rogers, Taylor Swift, Billie Eilish, and Lizzo have partnered with Target to make vinyl with exclusive colors or covers that are only sold through Target. Major corporations like Target actively promoting and selling out exclusive vinyl is one piece of evidence to support the claim that records are back for 21st-century Top 40.

So why is this happening? While there’s no clear answer, I have a few theories. Anti-vax controversy caused artists like Joni Mitchell and Neil Young to pull their music off the top streaming platform, Spotify. Furthermore, streaming platforms give the artist very little money per play. These two issues with digital platforms could be at the root of the turn to physical copies of music. When a consumer buys a cassette tape or record, they are buying straight from the artist, cutting out any streaming platform conflicts. In an age where money and corporate responsibility merge closer and closer, buying from the artist becomes a more promising avenue for music consumption. Beyond the financial and moral theory is the aesthetic theory. A quick look at social media trends will show that influencers have been promoting the aesthetic of records and cassette tapes. From room tours to outfit inspiration, the aesthetics of ’70s florals and ’90s mom jeans are back. Fitting right in with the popularization of the ’70s and ’90s is the return of those decades’ styles of music consumption.
