Empowering the Next Generation of Women in Audio


Sam Boone – Systems Engineer

 

Sam Boone has been working professionally in audio for just three years and is currently a freelance systems engineer, having completed her first tour, with Volbeat, in 2022. She discovered audio in middle school and spent considerable time working in churches through her teen years. Sam played oboe in the school band and then took up guitar when she decided she wanted to attend a school for music. She admits she was a terrible musician, but her love of music would lead her to live event production. She went on to attend Middle Tennessee State University as part of their recording and music program. At the same time, she was interning with a local production company, and when they offered her a full-time position, she dropped out of school.

Career Start

How did you get your start?

I got my start interning at a regional production company. I managed to land that internship by asking for an introduction from a family friend who was familiar with the company.

What did you learn interning or on your early gigs?

I learned several technical skills, primarily basics like cable management, show power, and troubleshooting and repairing gear. I also learned how to prep a tour from start to finish, line check, and build show files. More importantly, I began to see and learn how to interact with clients, how to ask questions, and observe.

Career Now

How did you discover System Engineering?

I discovered systems engineering during my internship while working in the shop, learning what a drive rack is and what it does. That led me to ask about the role of the person using that gear and everything systems engineering entailed.

Why were you drawn to System Engineering?

I was drawn to systems engineering because, unlike so many other aspects of live audio engineering, it’s as much a science as an art. For me, it’s taking the challenge of making the show sound the same in every seat into the context of a new venue daily. I enjoy that I can measure the system, see how well I’ve done, and see what I need to improve. It’s fascinating that I can see a lot of how something sounds on an analyzer. My work is a specific, measurable process, and nothing is random. It’s all a series of decisions with measurable effects, and I can go back to the data and say this is why I made these choices, and that, to me, is something I love.
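Her point about a measurable process is concrete: dual-channel analyzers (tools like Smaart or Meyer's SIM) estimate a system's transfer function by comparing the console's reference signal with what a measurement microphone hears. As a rough illustration only, not the internals of any particular analyzer, the averaged cross-spectrum method behind that idea can be sketched in Python with numpy (function and variable names here are my own):

```python
import numpy as np

def transfer_function(reference, measured, fs, nfft=4096):
    """Estimate H(f) = measured / reference from averaged cross- and
    auto-spectra, the basic idea behind dual-channel measurement."""
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    auto = np.zeros(len(freqs))            # Sxx: reference auto-spectrum
    cross = np.zeros(len(freqs), complex)  # Sxy: cross-spectrum
    window = np.hanning(nfft)
    # Average over 50%-overlapping windowed segments to smooth the estimate
    for start in range(0, len(reference) - nfft + 1, nfft // 2):
        x = np.fft.rfft(reference[start:start + nfft] * window)
        y = np.fft.rfft(measured[start:start + nfft] * window)
        auto += (x * np.conj(x)).real
        cross += y * np.conj(x)
    h = cross / auto
    return freqs, 20 * np.log10(np.abs(h))  # magnitude in dB
```

Feeding it a measured signal that is simply the reference at half level yields a flat trace at about -6 dB across the band, which is exactly the kind of sanity check that makes the work "a specific, measurable process."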

If someone wants to pursue this path, what advice do you have for them? Education and skills?

The advice I have for someone jumping into this specific role in the industry is not only to get a mentor but also to read a lot. Sound Systems: Design and Optimization by Bob McCarthy is a book I have learned a lot from. I recommend reading Between the Lines by Michael Lawrence as well.

What is a typical day like?

My typical day on tour begins by making a 3D model of our venue for the day (or verifying a pre-made model if I was given sufficient information in advance). Then I’ll design the PA and send the splay angles, trim heights, and all other necessary information to our fly techs. From there, I’ll build FOH, run snakes, and get our FOH engineer powered up and ready.

Once our FOH engineer completes the virtual sound check, we tune the PA, take a walk and listen to it. At that point, we will make any changes we see fit. Then we go onto line check and soundcheck with the band.

Additionally, I’ll usually sit with all the front-of-house engineers through their soundchecks and make any changes they ask for in the PA. I typically have some downtime from there to relax, and finally, we have a show.

During the show, I walk around the venue and listen to the PA. I will also make any changes asked for by the engineer or any specific changes needed to make all areas of coverage sound the same tonally across the venue. Last, we load out and do it again the next day.

How do you stay organized and focused?

I use several spreadsheets and keep notes on everything from the patch to show file changes.

What do you enjoy the most about your job?

I enjoy the challenge of making every seat sound the same every day, regardless of the venue we are in. Some days we play in clubs, while others are in arenas. No matter the venue, my goal is to have every seat at every show sound as close to the same as possible.

What do you like least?

While I love doing tours in Europe, what I dislike the most is the time change when I am there. On tour, it can be challenging to regulate and manage all aspects of your life, whether it be work, relationships, or simply trying to figure out how to have a functional schedule without burning yourself out. The time change simply adds another layer to the mix and makes talking to friends and family much more difficult.

If you tour, what do you like best?

I enjoy the people I meet and the travel.

What is your favorite day off activity? 

I go to the gym or run on days off to stay physically active. I also work on the next day’s gig, so I feel confident and prepared when I show up the following day.

What are your long-term goals?

Long term, I would love to become even better at my craft. I plan to eventually work on new technology or theory in research and development. I aim to contribute to the industry in a way that will outlast me. I plan to leave behind a better version of the industry than I found when I started.

What, if any, obstacles or barriers have you faced?

For me, the most challenging part of getting started was learning where to start asking questions. For a long time, I didn’t have enough knowledge to ask questions worth answering. Also, once I started learning about audio and its different aspects, there was a moment when it felt overwhelming to look at all the skills I needed to know.

How have you dealt with them?

I decided to deal with this by choosing one skill at a time to work on learning and then either further pursuing it if I was interested in it or moving on to the next one if I wasn’t. That’s how I gathered interest in systems engineering, leading me to my current job.

Advice you have for women who wish to enter the field?

My advice for young women joining this field would be not to be intimidated or deterred by the people around them. Some of the nicest people I have ever met, I’ve met on tour. We’re all figuring it out as we go, and we’re all constantly learning. If someone won’t answer your questions, it’s a sign you should be asking someone else.

Must have skills?

My must-have skills are troubleshooting, organization and communication.

Favorite gear?

My favorite piece of gear I’ve used this year is the Meyer Galaxy 816 processor. I’ll put one in front of any system, and it’s been a game changer to have access to U-shaping for tuning PAs.

You Can Find Sam on The Signal to Noise Podcast

 

Mechanics of Mixing

Mixing is an active experience

Anyone who’s watched me mix a show knows that I’m never standing still. I’m usually tapping my toes or bopping my head to the music while timing my fader throws. I’m constantly shifting my focus as I look up at the stage, down at my hands, or at the monitors on either side of me. I’m listening so my fingers can respond to the actors or musicians while keeping a thought on what’s coming up next. The actual mixing might happen in a small footprint, but there’s a lot going on. It helps to have a solid physical foundation to make your day-to-day life easier, especially as so much of our job requires repetitive motion, which can take a toll on our bodies.

The first thing to look at is how you stand or sit at the console

If you’re sitting, things are easier: you can adjust your chair to the right height every time and call it good. Personally, I prefer to stand: it keeps me more alert and focused, especially when I’m on a show for months or years. Also, I’m short, so it’s easier for me to reach the top of the other fader banks of the console if I’m standing, rather than having to get out of my chair or slide it any time I want to make an adjustment. If you prefer to stand as well, do yourself a favor and get an anti-fatigue mat. The floors at FOH can be anywhere from concrete to carpet to plywood, and it pays down the road to be nice to your knees now.

However, standing at the console can present a challenge if people mixing the same show are different heights. If you’re short, you can stand on a case lid or apple box. If you’re tall, you can lift the console with wooden blocks, or (if you know ahead of time, while you’re still in the shop) get taller racks that raise the board. Personally, I know that 16-space racks put the console at a good height for me to mix while standing.

In some cases, you might not be able to find a good solution, or the console is already set to someone else’s height (if you’re a sub or A2 and the console is set at a good height for the A1). In these cases, I end up using a chair, even though I’d rather stand. It’s far better to have a proper position and the minor inconvenience of having to get up to make an adjustment than to force yourself to mix in an uncomfortable position.

For me, a comfortable position means

A console or chair height where my elbows are bent at a relaxed, roughly 90˚ angle, so there’s an almost straight line from my elbow through my wrist when my hands are resting on the console, fingers on faders. If you’re too far above the console, your elbow ends up higher than your wrist, and you put extra pressure on your joints as you naturally press through your palm with the way the wrist bends. On the other hand, if you’re too far below, your shoulders have to rotate outward to get your hands on top of the console, and that puts pressure on your shoulders as well as your wrists.

Any rotation of a joint, even a small amount, can create problems over time. On Les Mis, I used my index and middle fingers to move the two orchestra faders, which is fairly common for most people. However, that rotated my wrist to an awkward angle, which put stress on it. Eventually, my forearm muscles started to tighten up from that strain, which made it uncomfortable to mix. Even in the mix videos for that show (recorded maybe 50-60 shows into the run), there are a couple of times where I have to find breaks to stretch out my hand or roll my wrist to relieve some of the tension. I went to physical therapy and got stretches and exercises to help (if something hurts, always see a professional in a timely manner), but what actually fixed it was realizing that I could use my middle and ring fingers for the band faders instead, which shifted my wrist to a better position. This eliminated the cause of the problem itself, and as a side benefit, it left my index finger free to make verb adjustments without having to move my hand off the band faders!

No one mixes the exact same way

So what works for me might not work for you, and that’s okay. I prefer to use my middle fingers as the primary for mixing dialogue, but some people use their index. It takes time and a willingness to experiment to develop what your mixing style looks like.

Here are a few things I’ve found that have helped me as a mixer

 

I use the heel of my hand as an anchor point while I’m mixing: as my hands have to move back and forth to different faders, that bone at the base of my palm always ends up resting on the same area of the console, just below the faders. From there, I have a general reference for where the fader is without having to look at my hands: I know based on how far my fingers are extended because my hand is always the same distance from the base of the fader. (With any rule, there are always exceptions: sometimes I’ll have to throw further than usual, so I’ll lift up the heel of my hand and use my pinky for additional stability, or a scene might have me jumping around more than usual so I’m not in one place long enough to truly anchor my hand. When it works, use it. If it doesn’t, find something that does.)

If my left hand (usually dialogue) is free, but my right hand (usually band, some vocals, and the button for sound effects, next scene, etc.) is in the middle of a band move when I need to take a cue, I’ll cross my left hand over my right to hit the GO button, similar to playing a piano. I’ve gotten skeptical looks from mixers when doing it while I’m training on shows, but it’s something that works for me. It takes a little trial and error to make sure it’s the right choice, and that I’m not taking my hand off a fader when I really shouldn’t, or that my right hand actually does have a moment to take the cue, but when it works, it helps to simplify my mix choreography.

I’ve spent a lot of time tweaking how my script works. While the script itself isn’t a mechanical part of mixing, how you integrate page turns definitely is. As I developed my system for marking and formatting, I made it my mission to condense the script to as few pages as possible and minimize how many times I had to reach up to flip a page. While that is a legitimate strategy, I found that it put my page turns at awkward points in the mix and had me scrambling at times. Over the course of several productions, I found that it worked far better for me to make sure that each page of the script ended on an easy (or as easy as possible) turn, whether that was a pause in the action or splitting a long line up over the end of one page and the beginning of another. This added a few page turns overall but put them at much easier places in my mix.

Something I need to continue to work on is my focus. Once I’ve been on a show for a while and I have the mix down, my mind will want to wander. Another mixer told me she uses yoga and meditation to help improve her concentration and her ability to bring herself back to the present and to the show. I’m slowly improving, but it’s another skill I need to hone, especially after I lost some of that ability while I didn’t have the chance to mix on a regular basis during the Covid hiatus.

However, consistency will help you as you develop better focus. While I obviously encourage being flexible, once you find what works, set a routine. That might be taking a cue on the same beat of a song, presetting the band on the same word even when you could do it anywhere in that sentence, or even taking a water break during the same line every show. Just like standing helps keep me focused when my show count ticks into triple digits, consistency builds a muscle memory that has saved me more than a few times when my concentration slips.

The most important thing is to listen to your body and your instincts. If something hurts or feels uncomfortable, find a way to change your process so you don’t have to do that. If you have an idea for something that might streamline things, try it. The worst thing that happens is you go back to what was working just fine before and try the next idea when it comes along.

Greta Stromquist: Dialogue Editor and Associate Producer

When I began blogging for SoundGirls in January of 2022, I had hoped to interview various audio professionals from marginalized genders, but none more so than Greta Stromquist. We met at WAMCon Los Angeles 2019. We were both early to the conference at Walt Disney Studios, struck up a conversation that morning, and reconnected throughout the day. Whereas I was new to the very idea of recording and mixing my own projects, Greta was established, having already developed a partnership with mentors to record audiobooks. We exchanged numbers and stayed in touch. And when I really needed help in the early months of the pandemic, she agreed to edit the episodes of a Wilco fan podcast I co-host with Mary MacLane Mellas. Without her, it may never have been released. And so it is with gratitude and admiration that I introduce you to Greta Stromquist in my last SoundGirls blogging venture for the foreseeable future. Cherish the friends you make in audio. Now meet one of mine.

You got started in audio through the support of mentors. Tell us about that. At the time, you were working as a barista, right?

Yes. Yeah, I was working in a coffee shop. I feel like I got into audio a little bit unconventionally. When I met my mentors, they sat me down and introduced me to the world of ProTools and post-production. Then I spent a few years working with them on audiobooks, recording people for audiobooks. I’d be working [at the coffee shop] then I would go to the studio and work with them. It was honestly kind of like going to school. It was a special time when I got to be creative and have the support to do it.

You always hear about people forming these relationships with regulars at their workplace, and this seems like the most notable example of that that I have ever heard, where it literally changed the direction of your whole life.

It truly did. I think about that often. They’re both super generous with their knowledge and continue to be incredibly supportive. I’m not sure what I would [have been] doing right now, but it definitely wouldn’t have been audio, because there’s a lot of gatekeeping. Unless you go to school or know somebody, it is something that you really don’t get access to.

How does your art background influence your craft as a dialogue editor?

I always have had so many different interests, whether it’s painting, drawing, taking pictures or editing videos. All your skills from everywhere, even if they seem unrelated, they do come together.

What are some of your favorite podcasts? And in what ways do they influence your own work?

Anything public radio storytelling. Like Code Switch. It’s a genius way of melding in the human experience with incredibly thoughtful sound design and scoring of the episode that just draws you in. You just come into your own little world. It’s something I grew up listening to that’s always been something I’ve really enjoyed.

Describe the arts community you belong to.

I think in LA it’s been hard for me to feel a part of any community, but I will say I’m endlessly inspired by the individuals I know who pave the way for themselves to make the art that is important to them. For me, it’s been hard to find community, group-wise, but the friends that I do have are incredibly creative, and I draw inspiration from that.

Which project has challenged you the most? And how did it alter your process moving forward?

For the past year and a half, I’ve worked on an audio-reality podcast series. It was my first time working on a large-scale project where we were dealing with hundreds of hours of tape and I had to keep everything organized. I also got to work a bit as a story editor, and it was one of those jobs that I didn’t think I was qualified to do. I was shocked to even get an interview. It’s very interesting being on the other side of it, thinking back [to] how anxious I was for the first few months and having constant impostor syndrome. But now I feel proud of the work I did, like I’ve [become] a better editor and walked away with excellent organization skills. I think the biggest challenge of it, though, was honestly just believing in myself. It’s really cheesy and stupid, but that was really the hard part.

What are your go-to tools for dialogue editing?

I carry with me what my mentors have taught me. I think a lot of it is just being okay with how the recording itself sounds. Sometimes less really is more. There are all the really cool plugins that serve their purpose, and I can make stuff sound really crisp and clean. But yeah, all the little things that give it life: that’s how it sounds. That’s how it is.

What advice would you give others who wish to become dialogue editors? And are you someone who would be interested in mentoring someone down the road?

Yeah! Imposter syndrome is like, “I can’t mentor someone, I don’t know enough,” but I actually do really enjoy teaching. Inevitably, when you’re teaching someone something, you’re learning, too.

“What advice…” If you’re interested in audio, or in the editing world, start small, recording something and bringing it into whatever DAW or NLE you have, playing with it, editing it, and trying plugins. Just go from there. Start small, then bug anyone and everyone you know. Reach out to anybody you want to talk to.

What goals do you have for yourself in the coming year?

I definitely want to keep working on projects that challenge me. I have enjoyed working in the podcast world, but I’m still drawn to film and TV. I would love to get my foot in the door. There’s [an] overlay of skills, for sure. I’ve had a taste of story editing and loved it, however, re-recording mixing and ADR is something I would love to explore. I am open and excited to new opportunities and to see where my skills will take me next.

Thank you, Greta, and all of you SoundGirls readers. Now go make some noise (and/or record some).

Jin, Jiyan, Azadi

Woman is not defined in relation to man.  On this understanding is founded our struggle for freedom.       

Carla Lonzi. Rivolta Femminile – Rome 1970

 

I hope you will forgive my overtly political opening, but we are SoundGirls, and we have the luxury of being able to stand up for our rights in this patriarchal society without the constant fear of being beaten, arrested, or even killed.  The story is very different in other parts of the world, and I don’t want to forget the bravery of Iranian women at this time.

As this is the last of the current series of blogs, it seems a fairly obvious step to review my year of anything and everything.  I write as an activist for human rights, which sounds grand but in reality is a series of small gestures for the oppressed, which for me means: women’s rights, the LGBTQIA+ community, and, most pressingly at the moment, freedom and self-determination for the women of Iran.  I know that there are men alongside the women in Iran, but symbolically this is a women’s struggle:

And because women bring their radicalism to the uprising, it can be said that a government can still hope to get away with it when only men are in the streets, but when women come out en masse, that government is finished.

Rossana Rossanda, from Le altre, Manifestolibri 2021

 

The reason Iran is important to me is that the women of Iran have already given so much in the struggle that they must win.  Their bravery is nothing short of inspirational, and of course, we know that this is important for all the women of this world.  Iran is not about hair, though haircutting is a beautiful symbol of the struggle.  In the final analysis, it’s about one struggle… to achieve one goal: the freedom of self-determination for every woman and gender-fluid person, to be free and equal in what is, at present, a man’s world: Liberty is the pathway to Equality.

As an active feminist and member of the LGBTQIA+ community, these things that are generally seen as outside of the arts are, for me, fundamental and are expressed through art.  Art that subverts and is often revolutionary, art that represents the struggle and oftentimes becomes a rallying cry or a hymn, seems to be alive and well in Latin America.  Focusing on the protest and revolutionary music of Latin America reminds me that, very early on in this cycle of blogs, I championed the virtues of authenticity in art.  I don’t have an authoritative definition of what constitutes authenticity in art, but I imagine it has something to do with the reasons for which it was created and why it exists in an artist’s oeuvre.  Music that is commercial can obviously fit into this paradigm of authenticity: Taylor Swift’s “Folklore” seems a good candidate in this respect.  An example by the rapper Ana Tijoux and Los Chikoz del Maiz found a place in my life today with their protest song and video, the moving images an important part of the song’s message.

This is the song that started me thinking about migration and racism: The Strange Journey by Ana Tijoux and Los Chikoz del Maiz https://youtu.be/3O9PWUvd3y8

Italy now has a fascist government which, after a short while in office, is falling into line with its ideology of racism: Italy for the Italians, etc.  There has been a standoff between Italy and the rest of Europe over refugees being banned from entry and kept at sea in insanitary conditions.  This is from this morning’s newspaper, La Stampa:

The headline reads “Italy Inhumane,” with the subheading, “Italy has been most inhumane and its authorities unprofessional in the face of the emergency.”

The bar chart on the right shows the actual numbers of migrants accepted; the last three countries, Italy, Hungary, and Poland, all have far-right governments.

I’ll come back to the question of authenticity in a short while, after I recap the timeline that has brought me to this delicious but scary point in my life.  I graduated from the University of East Anglia in 1978, having studied Music and Fine Arts, roughly 80/20.  My specialisms turned out to be Early music and Contemporary music (because you never know; they just happen sometimes); I have since filled in the missing Classical and Romantic periods.

In my first blog, I tried to establish a link from my experiences of electroacoustic music of the late seventies to the present, which has taken me a while of experimenting with the sonic possibilities of newer technologies.  So, after a year of experimenting with processing my recorded sounds, having learned to make use of synthesized sounds through Max/MSP modules and adapting sound samples from the Spitfire Audio library, I still mainly use my own recorded samples.  On a technical note, I use the Zoom H6 Handy Recorder and usually record at 96 kHz 24-bit; 32-bit floating point is not available on this recorder.  Though, as I have said in a previous blog, I also record on my iPhone since it is always with me.  I remember talking about ‘dirty recordings’: background noise, wind noise, accidental knocks, etc.  I can honestly say that I treasure my dirty recordings, which are processed in Adobe Audition as 96 kHz 32-bit wav stereo files; moreover, they remain ‘authentic’ since they represent a time, a place, and a sound experience.  They are original and individual, and the sound would not exist without the sound artist’s intervention in rescuing it and preserving its memory – thank God (though I am an atheist) for my iPhone.  The two links below demonstrate Zoom’s iQ6 and iQ7 microphones for iPhone or iPad; it may be worth carrying the iQ7, which has some interesting features and captures stereo via its dedicated app.

https://www.youtube.com/watch?v=–FVSsSTTeM&t=15s

https://www.youtube.com/watch?v=ikWgl2eLwqk
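On the bit-depth note above: a common reason to record at 24-bit fixed point but process as 32-bit float is the headroom floating point gives during editing. As a minimal sketch of that conversion (numpy assumed; the helper names are mine, and 2^23 is the full-scale value of signed 24-bit audio):

```python
import numpy as np

INT24_FULL_SCALE = 2 ** 23  # signed 24-bit samples span -8388608..8388607

def int24_to_float32(samples):
    """Scale signed 24-bit integer samples to 32-bit float in [-1.0, 1.0)."""
    return (np.asarray(samples, dtype=np.int32) / INT24_FULL_SCALE).astype(np.float32)

def float32_to_int24(samples):
    """Scale back, rounding and clipping to the legal 24-bit range."""
    scaled = np.round(np.asarray(samples, dtype=np.float64) * INT24_FULL_SCALE)
    return np.clip(scaled, -INT24_FULL_SCALE, INT24_FULL_SCALE - 1).astype(np.int32)
```

In float, intermediate processing can overshoot full scale without clipping, which is why a DAW like Audition works internally at 32-bit even when the source files are 24-bit.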

In February, I considered the term ‘Experimental music’ and came across these definitions which hold some truth and possibility:

Experimentalism is entirely separate from any musical form and focuses on discovery and playfulness without an underlying intention.

In other words: Experimental compositional practice is defined broadly by exploratory sensibilities radically opposed to, and questioning, institutionalized compositional, performing, and aesthetic conventions in music.

If I’m honest, I start off with an underlying intention, but the experimentation, failures, and adaptation mean that the underlying intention, for me, is much more fluid.  Experimenting is a key element in sound art, including my own works.  I suppose that what I do with my recorded samples satisfies most of the criteria cited.  However, in my most recent piece, Debris of a Night, I used feedback recorded with my Zoom H6 patched through my interface and recorded into Reaper (fig 1).  I recorded three tracks, and I got better at controlling the feedback with each take, though the chance element was high, which gives it its ‘chance’ credentials.  When I recorded the vocal track, I played track 3 at the same time so that I got a noisier version of the feedback alongside my vocal.  Then, at the creation and mixing stage, I put both tracks slightly out of sync for an echo effect, which is not always noticeable but drifts in and out as other sounds in the mix either mask or reveal it.  The tracks in Audition during composition and mixing are shown in fig 2.  My evolution as a Sound Artist, though this is not the whole story, has been one of getting away from a classically inspired approach in which the narrative thread of each piece gives me a framework on which to hang my vision.  I can exemplify what I mean.

fig 1

 

fig 2
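The slightly-out-of-sync echo described above is, at its core, a delay-and-sum of two near-identical tracks. A toy numpy sketch of the idea (the offset and gain values are illustrative, not the settings used in the piece):

```python
import numpy as np

def offset_echo(track, offset_samples, echo_gain=0.7):
    """Mix a track with a copy of itself nudged later by offset_samples,
    the delay-and-sum at the heart of the out-of-sync echo effect."""
    delayed = np.concatenate([np.zeros(offset_samples), track])
    padded = np.concatenate([track, np.zeros(offset_samples)])
    return padded + echo_gain * delayed
```

At 96 kHz, an offset of 4,800 samples corresponds to a 50 ms echo, long enough to hear as a distinct repeat; much smaller offsets read as thickening or comb-filter coloration rather than an echo.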

Looking at the works I created this year, Her Blacks Crackle and Drag, based on a work I built around the poet Sylvia Plath (another dream), is almost symphonic in its proportions, which is probably due to the narrative structure that underlies it and my still-classical ethos.  In five movements lasting 24 minutes, the piece had 63 tracks and two buses and used 98 separate sound files.

My next piece, Bamboo: the foolishness of things, is shorter at 15 minutes but still made use of 32 tracks and two buses.  Although it was based on a short piece of text from The Book of Tea, it is less noticeably a story and inhabits a more self-contained sound world.  It can be found at the link below.

https://soundcloud.com/francesca-caston/bamboo-the-foolishness-of-things

Debris of a Night, as I have already suggested, was mainly improvised, in the sense that the vocal track was improvised in both content and timing; I recorded the words to fit into the spaces and general feel of the three feedback tracks.  Not only were these four tracks improvised, but so were the instruments; after some rehearsal, they were recorded along with the voice and feedback.  The percussion track was added last: a MIDI file in Reaper was improvised alongside the vocal and then exported back to Audition for the final balancing mix.  This was not any easier technically, but it was only 11 tracks plus 6 buses, since each track can only be sent to one bus in Audition.  This piece has taken me closer to how Sound Art and Electroacoustic music have changed over the years and is also a transitional step towards working with live electronics and performing musicians.

For this piece, I made use of the new algorithmic reverb plugin from Baby Audio, Crystalline.  It came about after I got some feedback from CMMAS suggesting that I had overdone the reverb (EMT Rev Plate – 140).  I had originally added delay to confuse the text, which was already struggling amongst the feedback tracks, and the idea was that I would automate its gradual fade-out so that the voice became clearer, bearing in mind that it was recorded handheld within the feedback sounds and so was already uneven.

At this point, I have to explain my personal ethic as a sound artist, and here I invoke yet again the concept of authenticity.  This piece is based on a recurring dream I’ve had for years, which is probably telling me that I have an anxious attachment style and a fear of abandonment dating back to my childhood, but I know this anyway.  So, whilst not trying to create an aural equivalent of the dream, the sounds and the music suggested confusion at the opening, and the heavily processed voice of the original was me, gradually coming out of the oppression of the dream and gaining control over the situation.  However, recognizing that it is a piece of art that wishes to communicate something, I also have to be aware of my potential audience and the need to create an aesthetic around my means of expression.  My deepest, most personal, and intimate thoughts and feelings are presented as a thing of beauty, poetry, and metaphor, ready to elicit analogous sensations in those who witness the performance.  In other words, I feel a duty to be clear.

 

While trying to work out a way in which I can represent personal feelings through art but in a way that might be comprehensible to an audience, I sketched out a few ideas in which my metaphorical somnambulism might be represented in a way that does not expose my innermost feelings and yet is interesting enough for an audience to want to listen and try to understand.  Through an analogous process of transliteration, I reinterpret what is hidden within me into a thing of beauty that is ready to be understood. I use the word beauty in the aesthetic sense.

Taking account of these considerations, I took on the technical challenge of wanting reverb on the voice to establish being lost in a wilderness of emptiness, and yet also in an edifice that is both physical and an analogue of my mental state. In the end, I opted for Baby Audio's algorithmic reverb unit, which seemed to have a clean sound and a very user-friendly interface where the fine-tuning controls are presented as realities rather than just numbers. As an experimental composer, I find that twiddling and listening carefully is more natural to me than relying on visual numbers, notwithstanding the usefulness of numbers when I want to find a setting I like again.

Now, it's clear to me that reverb can easily get lost among three tracks of feedback, so I listened to the vocal track soloed and also in the company of various other tracks. As I'm sure you know, what sounds a bit too much solo can be just right in the mix, and this has been another part of my development after my 40-year absence from electroacoustic music: the bus tracks! I won't go into the differences between bussing in Reaper and Audition, except to say that I still find Audition clearer to work with. The following sound samples are based on a fragment of the solo percussion track, which was improvised live to complement the voice, though I had to do a small amount of splicing for precise entries. I'm using the percussion track rather than the voice to exemplify the reverb options I considered, since it is easier to distinguish the various phases of dry attack and wet reverb. You can see from the waveforms how the various reverb units have affected the sound. The SoundCloud link takes you to the sound samples: 1: dry; 2: Audition Surround reverb; 3: Arturia EMT Plate – 140; and 4: Baby Audio Crystalline.

https://soundcloud.com/francesca-caston/reverb-on-percussion

If you cannot access that link because it is private, the following one will work:

https://soundcloud.com/francesca-caston/reverb-on-percussion/s-FskkBr2KGv6?si=e3bc23fb003f405fbc611844c90b6ce4&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing

 

The link below takes you to a full explanation of the controls of the reverb unit, so I shan't overelaborate. In this example, I used a preset that I created, 'my voice,' which sounds better on the solo track than the percussion preset. However, in the mix I used the 'percussion' preset, given that it was competing with three tracks of noisy and unpredictable feedback. This preset highlighted the transients a bit more: shape/transients placed the emphasis on the attack, and the clean-up/damping controls highlighted the upper partials by adjusting the low-pass filter. This is the beauty of this plug-in for me: the icons on the controls change shape to represent visually what is being shaped. The left-side controls change the nature of the reverb and mimic the size of the space. The BABY AUDIO button at the top is a bypass, and the center panel has the familiar dry/wet control and a ducker (see the video for an explanation of the latter; I didn't use it). The controls I found really useful were start/end: start let me delay the onset of the reverb for a dry, cleaner attack, and end let me adjust the length of the reverb tail. I had a great time experimenting with these and felt that I could really shape my sounds. Despite my antipathy for too many numbers, these settings, expressed in milliseconds, were useful, especially if I wanted to repeat a certain setting. So, in summary, a user-friendly interface made controlling the reverb relatively easy. If you are interested in knowing more, the video linked below is a good introduction. And of course, the sound is good, although the reverb component can be heard differently according to the nature of the input. In these examples, the reverb changes from the first burst to the second because of the different instruments.

https://youtu.be/FquRvVSInZc

To hear the percussion within the piece Debris of a Night go to just after 5 minutes:

https://soundcloud.com/francesca-caston/debris-of-a-night

At this point in my artistic development, I am looking forward to my move to Mexico, with the main aim of working collaboratively with other artists at the Centro Mexicano para la Música y las Artes Sonoras (CMMAS) in Morelia, Michoacán, México, just a four-hour train ride from México City, where there is a strong presence of SoundGirls.org. I have already contacted one of the SoundGirls in Mexico City and look forward to getting to know them. There is also a thriving arts scene there. Mabe Fratti, a young Guatemalan cellist, vocalist, and experimental composer, is in that 'sweet spot' between experimental musician and contemporary performer. She has the voice of an angel but can shred through her pedal board and electronics. Here she is with Concepción Huerta in live performance with live electronics in a very homely setting. Oh, and I forgot to mention the way they look at each other at the end; real collaboration, and maybe a 'did we do OK?'

https://youtu.be/2hqgSxJsdKI

I thought it might be OK to mention two Latin American sound artists with whom I hope to collaborate. Rodrigo Sigal is the director of CMMAS and, as I discovered after reading his paper on the state of electroacoustic music in Latin America, did his Ph.D. with the same professor with whom I had studied in the late seventies. So apart from the coincidence itself, there might be some similarity in our approaches to sound art, since our shared teacher had studied in Paris with members of the French School: Pierre Schaeffer, Pierre Henry, and Bernard Parmegiani, among others; composers who spearheaded the musique concrète movement. The center has a wealth of talent and materials to share, including courses on TidalCycles, SuperCollider, and Max/MSP, as well as hybrid practices ranging from orchestral and instrumental to fully electronic music production. This piece by Rodrigo, Frictions of things in other places, is typical of a style that inhabits a 'sound world' and moves the sound around in a dynamic way.

https://youtu.be/Uyx43HjuzKA

Ana María Romano Gómez is a composer I met online through the Oslo-based arts and technology group at NOTAM. She is based in Bogotá, Colombia, and has almost single-handedly organized and represented a women's movement in contemporary music. I was delighted to discover that she is preparing a course for the center in Morelia, and again, we have a good deal in common, from the fact that we are both active feminists to our creative spirit, even though we are from different continents. We have been in touch, and I hope that we will find opportunities to collaborate. Her introduction to the audio-visual work created for Sound Perspectives 2022 at CMMAS states that:

“Ana María Romano Gómez is a Colombian interdisciplinary composer and sound artist, and her creativity questions the intersection between gender, sexualities, sound, and technology, and is traversed by listening, soundscape, space, body, and political dimensions in its creation. In all aspects of her life, she considers collective and collaborative work fundamental. Her works have been presented and published in Latin America, North America, Europe, and Asia. She has been an artist in residence at the Centro Mexicano para la Música y las Artes Sonoras (CMMAS). In 2019 she was nominated for the Classical:NEXT Innovation Award for her management of the En Tiempo Real Festival in bringing to prominence the work of women artists (I think I followed much of that online).

She has developed in-depth research on the composer Jacqueline Nova, a pioneer of electroacoustic music in Colombia. She currently teaches at the Universidad El Bosque, coordinates the Plataforma Feminista En Tiempo Real, and is a member of the network of Compositoras Latinoamericanas.”

In her introduction to the pieces, she suggests listening with headphones.

https://youtu.be/TtBaOe8cIkE

So, I'm counting the days until I'm in México. My plan is to be in Ciudad de México on 8 March, International Women's Day, and then begin thinking as a Mexican sound artist. By that I mean I will draw my inspiration from the vibe that surrounds me and collaborate with other artists on all kinds of projects: a song cycle for soprano, rapper, harp, bass clarinet, and live electronics is on my list, as well as working with a choreographer, and of course meeting some of the indigenous people, the Purépecha, and maybe being inspired by their music.

But in reality, who knows what future awaits me? I'll begin by just taking in the air of Michoacán, known as the soul of México, absorbing the musical vibe, and who knows… Here are two quotes I like from Mexican singer Natalia Lafourcade. Speaking of a song collection and video, she says: Un canto por México es una voz colectiva… (A song for Mexico is a collective voice), and naturally I would like to be part of that voice. She also says: viva el trabajo en comunidad (long live working in community); well, that reminds me of one of the main reasons I want to be there.

And, if you like, you can see exactly what she means.  These musicians are a collective voice, they are working as a team but, most importantly, having a whale of a time and making great music.

https://youtu.be/emTLbk7jd8E

So: con tanto amore e sorellanza a tuttə le mie sorelle SoundGirls (with so much love and sisterhood to all my SoundGirls sisters),

baci

 

 

The Future is Spatial

You read that title correctly: the future is spatial (and binaural) audio. Here's why!

Back in 2012, everyone was awaiting our Pixar-Disney princess, and despite mixed reviews of the movie Brave, one thing was undeniable: Atmos was here. Dolby Atmos had debuted, testing the limits of immersive audio in film. In 2022, you can find the technology in home theatres, gaming consoles such as the Xbox One, certain smartphones, AirPods, and even your car.

Why? Is stereo not enough? What exactly is so enthralling about it?

It's the feeling of being somewhere or doing something you wouldn't normally be able to do: that magic of watching a film, or closing your eyes when listening to a podcast like Ronstadt and feeling like you are in that space, doing what they are doing and experiencing what they are experiencing. Spatial audio is the experience of space and movement of sound in 360 degrees; in typical demonstrations you might find dome- or sphere-shaped rigs with speakers set up to surround the listener head to toe. Binaural audio is this same experience, only over headphones (and no, 12D is NOT a thing).

Abersonics, the DAD system, and Atmos are examples of established players. Now, unless you are Coca-Cola with the trade secret of its recipe, things normally bleed out to the public. This happened when synthesizers and drum machines became more accessible to the general public, and the same thing is happening to spatial audio programs, plugins, and equipment. Enter Max/MSP: why buy a spatializer when you can build one? Max is a programming language for tech-savvy music makers. Maybe in the future I can go more in-depth on building patches and programming in Max (and a shout-out to Pure Data), but we can hold off on that. My point in bringing them up is that with new technology comes new innovation, and it often comes from the small individual rather than the large corporation. What will spatial audio paired with projectionists do for the next generation of theatre makers? Live sound? I would argue that a lot can and will happen here within this decade; I'd bet on it too. As people use this tech to build and create, new jobs and avenues that have never existed before will emerge.
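To make the "build one yourself" idea concrete: the most basic building block of any spatializer is a pan law. Below is a minimal equal-power panner sketched in Python rather than Max (my own illustration; real spatial systems layer distance cues, delays, and head-related filtering on top of this).

```python
import math

def equal_power_pan(sample, azimuth):
    """Pan a mono sample between left and right.
    azimuth: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    The equal-power law keeps perceived loudness roughly constant
    as the sound moves across the stereo arc."""
    theta = (azimuth + 1.0) * math.pi / 4.0  # map -1..+1 to 0..pi/2
    return sample * math.cos(theta), sample * math.sin(theta)

# Dead center: both channels get ~0.707 (-3 dB) rather than 0.5,
# so left**2 + right**2 stays equal to 1 at every position.
left, right = equal_power_pan(1.0, 0.0)
```

Sweep `azimuth` over time and you have the crudest possible sound mover; a Max patch doing the same job is only a few objects.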

Something that has captivated me is the fast-paced progression of projection technology. Over the summer I was lucky to see Between the Lines. The show mainly took place with our protagonist, Delilah, looking at and talking to her male counterpart, Prince Oliver. Prince Oliver lives in the world inside a children's book, and to illustrate that, the crew used a combination of set design, lighting, and projection. The latter was unbelievably impressive to me: the projections surrounded Oliver to make him look and feel two-dimensional without taking away from the fact that you are indeed looking at an actual person moving and acting in a certain way. Broadway has been looking into projection design more and more; go to one of your favorite productions and prove me wrong. The current 2022 Les Misérables tour uses projection to create a sense of depth, and its visuals paired with the reverberant orchestra pull you in. I focus on Broadway here because I think that we as audio engineers and enthusiasts can help to bring in more involved audiences. I'm talking about immersive theatrical experiences with in-the-round and thrust configurations, full-range projection, and spatial audio composers. If more productions emulate Sleep No More's theatrical sass of having audiences become part of the show, we may cultivate a blurred line between performer and audience. But is that a bad thing? I say no; I say we welcome this inevitability with open and excited arms.

The future is in immersion, the future is spatial.

Tips For Indie Artists Outside Major Music Cities

I recently moved back to my hometown from Los Angeles to kickstart my music career, which I’m sure sounds counterintuitive. Aren’t you supposed to move to the major music city, not away? Before I left for college, I was so ready to leave my hometown and explore music scenes elsewhere. However, after I quit my full-time job this year to be an independent artist, I decided to go home to save up money and work in a space where my creativity can flourish. If you’re a developing independent artist who either by choice or by chance lives in a small town or outside the likes of Los Angeles, New York, or Nashville, I want to share with you some ideas I have about making the most of your musical environment from my own experience.

Connect with your local music community.

The main challenge I’m facing now that I’m outside of Los Angeles is remote networking. I miss attending my friends’ and colleagues’ performances and connecting with other independent artists who follow a path similar to mine. Even though the music scene in my hometown is different, there are still opportunities to network with other artists. Here, many restaurants and non-profit groups host large community-building events that often have live music, so I can attend these events and meet local musicians this way. Many gigs around me require musicians to play mostly covers for long periods, which can be really exhausting, especially if you are trying to share original music in a non-acoustic genre. Even if this style of gigging isn’t something you want to do, it’s really easy to use the Facebook Events tab, add your location, and find these gigs in your area to attend. I’ve found that supporting other musicians at gigs while I’m working on recording and producing at home keeps me inspired and reminds me of how loved the live music scene is in my hometown. I also feel that bonds with local musicians lead to a unique, lifelong support system.

Set up a remote rig

I think setting up a small home studio no matter the quality is essential, even if you’ve just got a USB microphone, your laptop, a DAW, and some headphones. If you don’t intend on producing, you can still keep track of new ideas you have and you can seamlessly send off recordings or demo tracks to producers or industry professionals to work with remotely. I recommend looking for good beginner bundles on Sweetwater to get you going in the right direction. I’m a firm believer in investing in long-term gear, so I think it’s best to find an affordable starting place and then build on your home setup if you want to. You can isolate your sound for recordings by using closets and blankets to reduce room noise. While I hope to work with mixers in the future, I’m currently a one-woman recording studio with my bedroom setup. I can easily record my vocals, arrange MIDI tracks in my DAW, mix on headphones and speakers, and send off my prints to a mastering engineer. Even though I’m home, I’m still putting out new singles on Spotify and other streaming platforms with my rig.

Get on TikTok

If you’re like me, then the idea of making a video of yourself makes you cringe. I’ve avoided posting myself, video content, and ultimately my music on social media for most of the time I’ve been making music. Something I’ve learned recently is that just like performing in front of a live audience, taking videos of myself for TikTok takes practice to build confidence. Something else I’ve learned in the past year is that confidence isn’t absorbed from others, it’s generated within yourself when you take risks and do the things that scare you. Posting on TikTok scares me, but it is the largest audience for musicians, producers, and artists of all kinds right now. As independent artists, it is vital for us to adapt to the changing industry. So I’ve followed some tips I’ve learned from other friends who post regularly on TikTok and am developing some consistency and some confidence! It’s not every day I can really get myself to make a video, so a few days throughout the week when I’m really grounded, I will make a few videos at a time to have multiple to post for the week. Besides clips of my music, I share insight on my songwriting, recording, and production process, and I like to keep the material as authentic as possible so I can engage with an audience that is similar to me.

When I first moved back home, despite my determination to start putting out music, I was fully expecting to feel isolated from the entire music industry for a while. With an open mind, I feel more akin to the music industry than I expected. I know that being in a small town and shooting for the stars can feel hard when it seems like all the stars are concentrated in a big city or on a different coastline. However, as independent artists, we have the power to use all the incredible resources around us and step into the spotlight.

Designing With Vocals: Part Two

Part One Here

I just released a new song this month called “This Time” and thought it would be a great opportunity to expand on my tips for sound designing with vocals. Similar to my last release, I recorded all the lead vocals and harmonies in Pro Tools with a temporary instrumental track and a click track for timing. I used iZotope RX 9 and Melodyne to clean up and tune the vocals, using AudioSuite and committing the Melodyne edits. I automated the lead vocals and adjusted the balance of the harmonies before exporting the tracks into Ableton. For this session, I exported sums of the harmonies and backing vocals in order to focus on the production elements of the song in Ableton and not obsess over the balance of the vocals. It also makes it easy to manipulate groups of harmonies together since I'm exporting from one DAW to the other.

The main sonic element of the breakdown of my song is a multilayered “ah” vocal that carries throughout the section and sounds like its own synth. I did most of my design work with this sum of vocals, starting with the use of iZotope’s Stutter Edit in the intro of my song. This was my first time using this plug-in, and it was a bit intimidating when I first opened it up. I focused on manipulating the “rate” and “step” parameters under the “stutter” section to get some interesting patterns to combine with an opening low-pass filter as the intro of my song. I followed a helpful YouTube tutorial in order to get started and found a great preset to work off of called Delay Filter Build. In the picture below you can see I kept the parameters simple but found a great effect with it that ties the intro of the song to the breakdown.

 

Further building on the breakdown of “This Time,” I wanted to incorporate the nostalgic feeling I get from 2010s house music, like some of Calvin Harris's earlier hits. I have all the synths and background vocals side-chained to a four-on-the-floor kick to give it this floating effect. Adding the previously mentioned “ah” vocal layer into the sidechain to make it more emotive and flowy was a much faster process for me, since I summed those vocals into their own stereo track when I exported out of Pro Tools. All I had to do was use the default compressor plug-in in Ableton and activate the sidechain. I made a separate muted track that followed the kick pattern so I could control when the sidechain was occurring throughout the song and isolate it to that section. I set this as the key for the sidechain on the vocals and synths and adjusted the attack and release times according to feel. Some people like to find the length of one beat at the particular tempo they are using and set attack and release times based on those calculations. I have tried this before but generally find that it doesn't always feel the way I want it to, so I just make sure I'm using the same sidechain parameters for all my tracks to keep it clean. In the image below you can see how I use this with an auto filter and a phaser to transition the vocal layers from the last chorus into the breakdown.
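For anyone curious about the beat-length math mentioned above, and about what a sidechain ducker does under the hood, here is a rough Python sketch. It is my own toy model, not Ableton's actual compressor algorithm: the key (kick) signal drives a gain envelope that ducks the target, smoothed by attack and release coefficients.

```python
def beat_ms(bpm):
    """Length of one beat in milliseconds: 60,000 / BPM."""
    return 60000.0 / bpm

def duck(target, key, threshold=0.5, depth=0.7, attack=0.2, release=0.05):
    """Duck `target` whenever `key` exceeds `threshold`.
    `depth` is how much gain is removed; attack/release here are
    one-pole smoothing coefficients (0..1), not milliseconds."""
    out, gain = [], 1.0
    for t, k in zip(target, key):
        wanted = 1.0 - depth if abs(k) > threshold else 1.0
        coeff = attack if wanted < gain else release
        gain += coeff * (wanted - gain)  # smooth toward the target gain
        out.append(t * gain)
    return out

# One beat at 128 BPM, a classic house tempo:
print(beat_ms(128))  # 468.75
```

In this sketch, the muted kick track described above plays the role of `key`, and `depth` stands in for the compressor's gain reduction.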

 

 

In my last blog on vocal designing, I used Simpler's classic option to create a sampled melody from one of the lyrics in that song. For this song, I created a sampled vocal melody again with Simpler, but this time I used the slice option for a more typical EDM-sounding sampled vocal. First, in Pro Tools, I took a chunk of the lead vocal and processed it with iZotope VocalSynth for autotune and formant-shifting effects. I used this processed vocal in the breakdown as is, and I also added it to a MIDI track with Simpler on it to create a new melody. With the slice option, I could map out the different notes of the existing melody, so I could control the rhythm and choppiness of those notes. I preferred this method far more than the one-shot method in my last song because I actually made a unique melody with it that diverged from the song's original melody. This technique was also really intuitive to navigate and utilize and (honestly) made me feel like a real producer for maybe the first time…

 

 

I love using vocals to add effects and elements to my productions, and I’ve found that I’m really developing my own skills as a producer as I search for more exciting ways to express my recorded vocals. I hope to share more tips and tricks with my future songs as I discover more.

Mid-Side: The Perfect Microphone Rig for Podcasts & Radio

Podcasts are a booming industry, and there is much room to increase production value even further. Recording the subject is always priority number one. Beyond that priority, recording rich audio will likely increase your listenership by giving the listener an immersive experience. For podcasts conducting interviews outside or on location rather than in a studio, there is a sophisticated yet simple recording setup: the mid-side rig!

Think of mid-side recordings as customizable stereo. You record two channels, bring them into your DAW, and work some encoding magic to create a file with adjustable stereo width. From a storytelling perspective, you get your subject, or interviewee, in the center plus an immersive stereo ambience. Long used in music and field recording, this versatile technique offers many opportunities to make podcast production shine.

Microphones & Accessories

The “mid” is a cardioid, hypercardioid, or supercardioid mic, and the “side” is a figure-eight mic. (Caveat: there are variations with omni mics as the mid, which generally make for a wider stereo field.) The mid mic captures your subject in your center channel, and the side mic, which is aimed 90 degrees from the source, captures the environment. Whether you choose a hypercardioid or supercardioid mic depends on what you are capturing. A supercardioid mic makes for a wider stereo image when you encode it later. And, despite hypercardioids being more directional, I personally tend to favor the supercardioid pattern for recording people in the field. A subject can afford to be a little more off-axis, ensuring that if the rig is not aimed 100% perfectly, there will still be a good capture of the subject.

The two sides of the figure-eight mic are 180 degrees out of phase, so a positive signal on one side of the mic's diaphragm creates an equal negative signal on the other side. The front of the mic (the positive side) is pointed to the left, while the rear (the negative side) is pointed to the right. In the past, I have used a Sennheiser MKH 50 for the mid and an MKH 30 for the side. They are sturdy and sound great! In live concert situations, I have seen engineers use AKG 414s with the polar patterns set accordingly. The important thing is to avoid phase issues by correctly lining up the mic capsules. In the pictures below, the MKH pair has the mid mic positioned so the top of the grill is behind the bottom of the side mic's capsule. The mid of the 414 pair faces “north-south” while the side is positioned “east-west.”

As far as physically rigging it up, I’ve personally used a Rycote pistol grip and blimp specifically made for mid-side recordings with the MKH mics. For interviews in the field, definitely use a pistol grip (or some kind of shock absorption) and wind protection that will fit your setup.

Examples of mid-side setups.

My recorder has a mid-side setting!

Don’t use it. Record each channel straight mono because you will encode the recording later! Leaving the work for your DAW keeps your stereo width customizable, which is the beauty behind mid-side recording. Record each mic to a single track.

Where the Science Happens

All pictures provided are of Pro Tools, but everything you need to do to master your recordings is a basic function of any DAW. After you import your recordings, make a third track. Copy the audio from the side mic to this track.

Flip the phase of the copied audio. Here, I’ve done it with the Trim plugin.

Pan the original side recording track left, and the copied one right. These two channels now represent what your side mic was hearing.
Your mid track is your mono center channel. Bring up the volume of the side channels and you start to introduce a stereo spread! The level of the side channels controls how wide the stereo image is, which is why I endearingly call mid-side recordings “customizable stereo.” The lower the side channels, the narrower the stereo image. The higher the side channels, the wider the stereo image.
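The routing described above boils down to simple arithmetic: left = mid + side and right = mid − side, with the side level setting the width. A minimal Python sketch (plain lists, made-up sample values, and my own function names, not any DAW's API):

```python
def ms_decode(mid, side, side_gain=1.0):
    """Decode mid/side tracks to left/right stereo.
    side_gain sets the stereo width: 0.0 is pure mono,
    higher values widen the image."""
    left, right = [], []
    for m, s in zip(mid, side):
        s *= side_gain
        left.append(m + s)   # original side track, panned left
        right.append(m - s)  # phase-flipped copy, panned right
    return left, right

mid = [0.5, 0.5, 0.5]
side = [0.2, -0.1, 0.0]
left, right = ms_decode(mid, side)

# Summing left and right to mono cancels the side entirely:
# (M + S) + (M - S) = 2M, so mono playback hears only the mid mic.
mono = [l + r for l, r in zip(left, right)]
```

That last line is exactly why mono compatibility comes for free with this technique: the side signal cancels itself in the mono sum.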

The Listener Benefit

There is a technical benefit to mid-side recordings beyond ear candy. On stereo systems such as headphones, listeners will hear the stereo ambience. On mono systems (such as a single Bluetooth speaker), because the phase of the side copy was flipped, the two side channels cancel each other out when the audio gets summed, leaving just the mid. The recording sums to mono automatically. Controlling the phase relationship gives you control over how the mix sounds on the distribution side.

Recording with the mid-side technique in the field is a serious consideration because it is an easy way to create immersion for your podcast. Try it on your next interview!

The Sound of “Silence”

 

Did you know that not all silence or room tones are made equal? While I would never advocate listening to things loudly, you do need to make sure you are listening loudly enough to hear certain issues in your room tone. This was a mistake I made when I first started. Part of my first job, archiving and restoring for the Metropolitan Opera with LongTail Audio (RIP), was to audition (listen to) the tapes as we transferred them. This had two purposes: one, to make sure all the music was there (which means we used a score), and two, to document any noises or grave issues with the sound (heavy use of markers).

Because I was a newbie at things like this, I was super paranoid about damaging my hearing. I knew I was going to be listening on headphones for eight hours or more a day, so naturally, I tried to make sure I didn't overdo it. But when you first start, everyone is watching your work (as they should be). And one of the main things I was missing was dropouts. Dropouts happen in analog tape for reasons ranging from tape damage to the age of the tapes to how they play back on the machine. This is what they look like if you view the spectral content.

 

By looking at it, you would think it's impossible not to hear this. (To be fair, this picture is probably a digital dropout, which means you lose everything, even if only for a few ms.) But a lot of the time, a dropout doesn't manifest as a loss of program material. Sometimes it's a momentary drop in tape hiss. Sometimes it actually sounds like a thud.

 

 

The good thing is there are ways to fix them if you have programs that can interpolate, like iZotope RX's Spectral Repair or CEDAR. But the main point of this blog is that you need to be able to hear them.

The engineer who trained me on this job was someone I really admired and looked up to, I-hua Tseng. She was an amazing engineer who left us too soon, and I'm happy that I had the opportunity to work with and learn from her. What she told me was to focus on the hiss. Most artifacts will jump out at you, but if you focus on the hiss, any momentary change or loss of signal will also jump out at you, since your ear becomes accustomed to the noise floor. Your ear will detect a change if there is a loss. Your ears are amazing, so make sure you use them to their full capacity!

This brings me to the next important piece of “silence” which is room tone.

Do you know that not all room tone sounds the same? We worked with an entire folder of different room tones to fix things when they were needed. We had mono room tone, stereo room tone, dark room tone, bright room tone, room tone from the '70s, '60s, '30s, '40s, Dolby encoded and not. (Feeling like Bubba Gump here, but you get the idea.) Anytime we ran into a good length of room tone, we would cut it, export it, and drop it in the folder for the future.

Why would you need room tone? Because you don't always go to digital black after something ends. Let's say you're in between movements of something, or the tape ends and the room tone cuts off abruptly, so you need a little more to create a nice fade-out. These are some of the reasons you would need room tone.

As I said, not all room tones are equal. The reason we had folders of room tone is that sometimes the program wouldn't contain anything you could work with. In that case, you would find the tone that matched best and crossfade it into the other. And listen, sometimes you couldn't find a perfect match, so instead of fading the existing room tone into another, less closely matching one, you would just replace it with the new one entirely. It's like trying to match navy and black; if you can't get them to match exactly, you will notice. So just stick with one.
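If you are curious what such a splice looks like numerically, here is a sketch of an equal-power crossfade between two room-tone clips in Python (plain lists and my own names; any DAW's crossfade tool is doing an equivalent job):

```python
import math

def crossfade(tail, head, overlap):
    """Splice two room-tone clips: `tail` fades out over its last
    `overlap` samples while `head` fades in over its first `overlap`.
    Equal-power (cos/sin) curves keep the loudness steady mid-fade."""
    out = list(tail[:-overlap])
    for i in range(overlap):
        t = i / (overlap - 1) if overlap > 1 else 1.0
        fade_out = math.cos(t * math.pi / 2)
        fade_in = math.sin(t * math.pi / 2)
        out.append(tail[len(tail) - overlap + i] * fade_out
                   + head[i] * fade_in)
    out.extend(head[overlap:])
    return out
```

With two clips of similar tone and a generous overlap, the join is inaudible; with very different tones, no fade length will save you, which is the navy-and-black point above.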

Did you know you also need room tone in podcasting? If you have a reporter who has done a lot of field recording, you also need room tone. Sometimes the interviews are done in less-than-ideal environments, so once that interview is edited, you'll need room tone so that the noise floor doesn't drop right away. This may seem tedious if there is a lot from one interview, but it does wonders when you are listening to a podcast and someone's quote doesn't just cut off because there is no nice, smooth fade. You can help your producers by asking them to ALWAYS record room tone any time they are out in the field reporting. This way you're not scrambling to fake and create things out of nothing.

This may seem like a no-brainer, and you're now questioning why I'm even bothering to write this blog, but you would be surprised how much sloppy room tone I've heard and/or received. Creating room tone that is unnoticeable to the listener is an art, an art many people in this industry take for granted because they think they should be doing more important things. But even something as small as room tone should be done with care.

 

Above, I said we would look for a good length of room tone and save it when we found it. That's because if you grab less than one second and loop it, and there is one tiny little bump, it will look like the above. And anyone will hear that. It sounds like a rattle or even a weird stutter sound effect (which may be cool in your pop track but not here). The fact that someone sent this to me to finalize says to me that this person was not listening at a level at which you could hear it, OR this person only listened on speakers. I know in our field people constantly tell you to listen on speakers and that “mixing on headphones is a no-no,” but critical listening really is better on headphones (IMHO). I *always* listen to my work and my mixes on headphones at some point, usually at consistent intervals, just for checks and balances.

You do learn to look (listen) out for these things, so nowadays, after having done this for 15 years, I can identify them quickly. But it's important to train your ears. Whether it's identifying anomalies or learning what 250 Hz sounds like, invest in your craft, and by invest I mean your time! Not everything has a price on it. The better you are at hearing things, the better an engineer you will be.

This was a great tool when I started: Golden Ears by Moulton Laboratories. It was a set of CDs (lol, CDs) with exercises to train you to identify different frequencies, EQs, and types of processing. (Someone also conveniently uploaded some to SoundCloud here, so get your listen on.)

Nowadays there are lots of AI-powered ways to create room tone. iZotope RX 10 has Ambience Match, which generates a matching noise floor. But make sure you listen to your room tone; don't settle because you're in a hurry. Attention to detail and seamless editing will set you apart from everyone else.
