Empowering the Next Generation of Women in Audio

Join Us

So You Want to Mix Podcasts

Happy New Year, SoundGirls. If this is the year you’re going to take the plunge into podcasting, here are a few tips I wish I’d known when I first started. Many of the engineers I know in podcasting came from music, theater, or production backgrounds, so it’s not unheard of to make the switch. You already know the basics of audio, and there’s tons of overlap, but there is quite a bit you need to pay attention to when working with the voice.

First, here are some of the most typical podcast types.

Two-ways are just interviews. A host and a guest. Maybe some music.

Narrative or long-form podcasts are usually documentary-style, meaning you’ll have a narrator telling the story along with voices from interview subjects. The interviewees’ soundbites are cut into what are called selects and usually edited to fit the narration. These are scripted and will normally have scoring as well as sound effects.

Non-narrated podcasts have an interview subject telling their own story, edited together with music. (Something like Song Exploder is a good example here.)

You will get sessions in all different ways. So the first thing you’ll want to do is get your session organized.


Once you get into a rhythm, and you’ve done a few of these, you’ll probably want to save the session as a template so you can just import your settings.

On each individual track, I usually have an EQ, a compressor, an expander, a de-esser, and iZotope’s Mouth De-click. I use LIGHT compression on each track – you want some compression, because otherwise you’re going through a ton of tiny bits of audio adjusting each level, and that will get tedious. But you don’t want to over-compress, because the voice will sound unnatural. Expanders help get rid of any unwanted studio noise, but again I use light settings here. And you’ll have to play with your de-esser to find the right settings for each voice.

On my VO aux, I have something similar: an EQ, a compressor, an expander. This allows me to set an overall EQ across ALL my VO together to give the podcast some uniformity, but it also allows me to manipulate the level of my VO in relation to the music and the sound effects. If you have a narrative podcast with something like 20 different tracks, having control of things with one aux is very helpful. I tend to err on light compression and expansion overall, but it depends on what kinds of voices I’m working with.

My music aux may or may not have light compression, but it does have EQ. Sometimes, I have a sidechain into my compressor that’s being fed by my VO aux, so the compressor kicks in when the voice comes in. This isn’t always necessary, but I’ve found I like it, since the music isn’t constantly being compressed. For me, the main goal with the music aux is just being able to manipulate the music as a whole. When you start getting into heavy automation within a podcast, it can get annoying to adjust a track by 1 dB and screw up your automation.
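If you ever want to prototype that sidechain idea outside a DAW, it’s simple enough to sketch in a few lines of Python. This is a toy illustration, not a plug-in: it follows the envelope of the voice and ducks the music while the voice is active. The threshold, duck amount, and release time here are made-up illustrative values, and numpy is assumed.

```python
import numpy as np

def duck_music(voice, music, rate, threshold=0.02, duck_db=-9.0, release_s=0.3):
    """Toy sidechain ducker: lower the music while the voice is active."""
    # Envelope follower: smoothed absolute level of the voice signal.
    window = max(1, int(rate * 0.05))
    env = np.convolve(np.abs(voice), np.ones(window) / window, mode="same")
    # Target gain: duck when the envelope crosses the threshold.
    target = np.where(env > threshold, 10 ** (duck_db / 20), 1.0)
    # One-pole smoothing so the gain moves gradually instead of jumping
    # (the same coefficient serves as both attack and release in this toy).
    alpha = np.exp(-1.0 / (release_s * rate))
    gain = np.empty_like(target)
    g = 1.0
    for i, t in enumerate(target):
        g = alpha * g + (1 - alpha) * t
        gain[i] = g
    return music * gain
```

A real compressor shapes the gain curve with a ratio and knee, but the behavior is the same in spirit: voice up, music down.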

SFX aux is similar, for me it’s mainly level control.

My master usually has a limiter and a meter of some sort. I use iZotope Insight for metering and am usually mixing to -16 LUFS, but it depends on who you’re mixing for; broadcast standards can be -24 LUFS.
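The arithmetic behind hitting a loudness spec is worth spelling out. Measuring integrated loudness (the ITU-R BS.1770 gating that meters like Insight implement) is the hard part; once you have a reading, the offset to the target is just a subtraction. A minimal sketch, with illustrative numbers:

```python
def gain_to_target(measured_lufs, target_lufs=-16.0):
    """Gain (in dB, and as a linear factor) needed to reach a loudness target."""
    gain_db = target_lufs - measured_lufs
    return gain_db, 10 ** (gain_db / 20)

# e.g. a mix metering at -19.2 LUFS needs about +3.2 dB to hit a -16 LUFS spec
```

Note this only computes makeup gain; it doesn’t replace the limiter, which still has to catch any peaks the added gain pushes too high.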

Because podcasting is all about the story being told, you want to make sure that the voice is the most prominent and important thing you’re hearing. This is why having auxes can be such a lifesaver.

You will want to take one initial pass through the edit sent to you, just to adjust levels and deal with any big issues: buzz, noises, plosives, and other random things that can happen. I don’t work for iZotope, but they are great and have some of the best denoising plugins around – Voice De-noise, Mouth De-click, EQ Match, De-plosive, De-wind – so many different things you can do to clean up the audio. During the pandemic, audio quality has suffered a lot, since people are recording anywhere and with anything. So, whether it’s iZotope or Waves denoising or whatever other tools you can get your hands on, get to know them, and know how to use them.

The most important thing with podcasting is paying attention to detail. Make sure you don’t cut off breaths, but do cut out stutters or any fumbles. Don’t listen too loudly, but make sure you listen at a level where you can hear if you cut off a breath! Pacing is important: make sure the VO isn’t going too fast or too slow. Enjoy the story you’re being told. Try to make sure the music is adding to that story and not detracting from it. And if you’re working with a script and something is off, check with your producer, because they may have missed something.

Once you feel you’ve gone through your mix a few times, and you feel confident it’s ready to go, a good rule of thumb after bouncing is to re-import to your session and analyze the audio to make sure you’re at the correct levels. Also, make sure you didn’t cut off the beginning or the end.

Always give your work a final listen – whether it’s right before you bounce or a quality-control pass after you bounce, just make sure you listen. Podcasts can be long and tedious, so it’s easy for things to slip through the cracks. A 45-minute podcast can take anywhere from 4-6 hours to mix (or more if the audio is particularly bad). That’s just for a two-way; narrative podcasts can take much more. But if you pay attention to the details and deliver good work, you can get A LOT of work.


Twi McCallum – Sound Designer

Twi McCallum works on sound design for theater, post-production, audiobooks, and commercials. She has been freelancing since 2018 for Broadway, off-Broadway, and for regional theatres. Twi recently started working full-time at Skywalker Sound in sound editorial, and she will be relocating from NYC to the Bay Area.

Twi grew up in Baltimore and worked throughout high school at the National Aquarium, where she learned ocean conservation and marine biology. During the summers, they created a play that was performed at local libraries. They would write the script and create the costumes, backdrops, props, and music. This was Twi’s introduction to theater. She would go on to attend Howard University, where she found a class called TECH and became a crew member working behind the scenes for student productions. Twi remembers her first production: “My first tech assignment was as a dresser for the musical Anything Goes, and there was a moment during invited dress when I was standing in the wings waiting for my actor to come offstage for a quick change. And I must have been standing in front of a speaker, because I suddenly felt a wash of sound effects and music cascade over my body, and although I knew nothing about speakers, mics, or engineering at that time, I knew that’s what I wanted to jump into.”

Twi was working towards a Theater Administration major, studying things like stage management, producing, and technical theater. “At the time, my focus was costume design, which is laughable now, but there were no sound design professors, and I failed my lighting and scenic design classes, which is why I dropped out of school and moved to New York.” Twi would eventually graduate from Yale School of Drama’s one-year sound program in May 2021, which was virtual due to covid.

Her first job in NYC was a technical apprenticeship at a dance company called New York Live Arts, which was the first time Twi learned the fundamentals of audio: how to stand on a ladder to hang a speaker, how to use a C-wrench, how to drop a file into QLab, what an XLR cable is, and the basics of a mixing console (a Yamaha DM1000). Twi says she knew she wanted to be a sound designer “because I was more moved by watching the dance performances than I was mixing them, and of course, getting yelled at as a mixer, because nobody talks to the sound person unless they need to scold.”

When the apprenticeship ended, Twi worked as a stagehand at the Manhattan School of Music while sending her resume to a bunch of theaters that Twi said she “was grossly underqualified for.” Her first design gigs were for Cape May Stage, TheatreSquared, and Kansas City Rep – all regional theaters that took a chance on her.

During covid, Twi took a post-production internship at a foley studio called Alchemy, and because of that opportunity, she was immediately hired as an apprentice sound editor on two scripted television shows for NBC and STARZ, which allowed her to join Motion Picture Editors Guild Local 700. Those jobs qualified her to be hired at Skywalker Sound.

What did you learn interning or on your early gigs?

My one quirk is that I write everything down… when I’m at work I’m constantly scribbling in a notepad. My first job in New York was a technical theater internship (although criminally underpaid and abusive) at a dance company called New York Live Arts. It was my first time learning the basics of audio, and I still have my notebook from 3 years ago. I wrote down everything I learned…what does this button on the Yamaha DM1000 do, this is how many pins an XLR cable has, this is what a cluster is vs what a line array is. There is nothing embarrassing about needing to take notes, and there were times that it saved me because someone on the staff would ask me a question about the system that nobody else could answer but there it was in my trusty notebook! Even when I transitioned into post-production last year, I began keeping a typed journal of things I learned every day. My first professional television gig was as a sound apprentice on STARZ’s The Girlfriend Experience season 3, and the first thing my sound supervisor taught me was the importance of making region groups in ProTools for every episode. A year later, I still refer to those instructions whether I’m working on a professional tv show or an indie film.

Did you have a mentor or someone that really helped you?

My first mentor in theater sound was Megumi Katayama. There was a time in my life 2-3 years ago when I didn’t know any sound designers and I was emailing as many of them as I could find to inquire about their process. Megumi was a recent Yale MFA graduate when we met, already making strides with sold-out productions. I told her that I wanted to apply to Yale, so she invited me to assist her on production at Long Wharf Theater, which allowed me to tour and interview at Yale for my application. To this day she is still the only designer I’ve ever assisted.

My other theater sound design mentor is Nevin Steinberg, a legend known for mega Broadway shows like Hamilton, Hadestown, and Dear Evan Hansen. When I emailed him as a fan with no major work experience, he called me on the phone the next day, to my surprise, and since then he and I have talked at least every few weeks for the past 2 years, sometimes just to make sure I’m emotionally okay.

In post-production, my biggest mentors are Bobbi Banks (ADR supervisor), Dann Fink (loop group coordinator), and Bryan Parker, a Supervising Sound Editor at Formosa Group who spent 6 months training me in sound effects and dialogue editorial. As I begin a new journey at Skywalker Sound, I admire Katy Wood, who I plan to work closely with over the next year.

I would be remiss if I did not mention that mentors also show up outside of my craft as a sound designer. The folks who always recommend me for big jobs, introduce me to directors, and take care of me in the workplace are costume, scenic, and lighting designers like Dede Ayite, Adam Honore, David Zinn, Clint Ramos, and Paul Tazewell. I advise any sound girl to reach out to other artists outside of audio to build a robust community.

Career Now

What is a typical day like?

In theater, I typically spend the two weeks prior to tech being hands-on in preparing for a production. This includes chats with the director, conceptual meetings with the scenic and lighting designers, group production meetings, and visiting rehearsals as often as possible. I also do a lot of paperwork, such as cue sheets, console files, gear lists, and ground plans. Tech is typically 1-2 weeks long, and thankfully the theater industry is progressing away from the brutal “10 out of 12” workdays and six-day work weeks. Tech means stepping through every page of the script with all of the actors fully encompassed in the design elements. Then, there are usually 1-2 weeks of previews, which means a short rehearsal during the day to fix notes and a public audience performance in the evening.

How do you stay organized and focused?

My calendar is the key to staying organized; Google Calendar works miracles. As lame as it sounds, I maintain a daily, weekly, monthly, and annual to-do list. An annual to-do list may feel overboard, but you’ll feel rewarded when the holiday season arrives and you realize you accomplished a long-term goal that you visualized 10 months prior. I am still learning to stay focused, while acknowledging that focus doesn’t need to look the same for everyone. When I’m working from home, I like sitting on my couch with my laptop and listening to my tv in the background so I don’t feel alone. The best advice about focus that I’ve gotten from artists: spend 15 minutes every day in complete silence (from a costume designer), and try spending the first 1-2 hours of every day without any technology (from a playwright). Reducing social media usage has become critical for me, especially the drama of Instagram and Facebook.

What do you enjoy the most about your job?

What I love the most about theater sound design is sitting in the audience watching my show and being swarmed with the real-time reactions of the audience. The laughter, claps, cries, and yells, especially if they’re the result of a perfectly timed sound effect, assure me that I’ve done a great job. In theater, you will hear lots of designers repeat this theory: “The design is good when you don’t notice it.” But I disagree, because there’s a line between noticing when your design is bad and noticing when your design is propelling the storytelling. I like to believe we go to the theater not only to notice the actors but to enjoy the physical world of the play (scenic and costumes) and the visceral world of the play (lighting and sound). I want the audience to notice my gunshots, earthquakes, music transitions, spaceship takeoffs, alarm clocks, etc., because they’re small yet inspiring parts of the bigger puzzle. For example, I designed a production of STEEL MAGNOLIAS at Everyman Theatre, and my director was adamant about the big gunshot moment, so I drove the point home and made it terrifying. I loved reading the performance reports via email from the stage manager every night that noted the audience jumping and holding each other at the surprise of the gunshot.

What do you like least?

In theater, I dislike the lack of budgeting of time and money from producers, production managers, directors, and other folks in power. Money is always used as an excuse for why designers, including sound designers, cannot be given the resources, staffing, and pay to properly do our jobs. There’s also a disregard for equitable scheduling of pre-production, rehearsal, and tech that impacts our personal lives.

What is your favorite day off activity? 

I play a lot of zombie video games (team PlayStation), plus I spend time with my Goldendoodle and pet snails as my happy places in my personal life. I’ve been watching some television shows, which are new to me because I’m more of a film lover. It took me a month to finish The Walking Dead but it was worth it, and I love Money Heist, You, Pose, Judge Judy, Top Chef, and Squid Game.

What are your long-term goals?

In 5-10 years, my heart is set on being a re-recording mixer and supervising sound editor for big-budget film, television, and video games. I’m leaving theater sound design behind to transition into theatrical producing, so I can focus more on my post-production career. Eventually, I would love to teach sound design at an HBCU.

What if any obstacles or barriers have you faced? How have you dealt with them?

“Making it” is hard. However, I like to believe many of us make it over that hump eventually. What I wished someone talked to me about 2-3 years ago is what happens AFTER “making it”. For me, the insecurities have not stopped. At 25 years old and well-accomplished for my age according to other people… I am still comparing myself to others, taking it really hard when I don’t get hired for a particular show, and constantly wondering if I will maintain a career of longevity. And as a woman of color, surrounded by men as well as white women who have consistent streaks of accomplishments, I feel this sense of failure more often than people imagine. There are days that I cry, I wonder if I should change careers, I question if I will ever outdo myself and my peers. It’s important that I’m real and honest about these things because I know I’m not the only woman of sound in the world to experience these growing pains. This is where making a self-care plan kicks in, often we discuss self-care regarding busy schedules and needing time off from work. But self-care is also needed as a reminder to love ourselves and balance the highs and lows of our careers, even the lows that we are embarrassed to talk to other people about.

Advice you have for other women and young women who wish to enter the field?

Try your hardest not to take underpaid jobs. Even when you are first starting, do not take a gig that does not pay at least the legal minimum wage. Money is important, despite being in a craft where we’re supposed to love what we do unconditionally. Women are already underpaid and under-hired in sound, which makes us even more valuable. Companies that thrive on underpaid labor should not exist. The only places you should “volunteer” your time are schools, mentorship programs, and community theaters, all with a grain of salt of course. If you ever need to weigh the tradeoffs of taking a certain gig, do not be ashamed to reach out to someone with experience to ask for advice.

Must have skills?

The most important skill, in theater and post-production, is being able to quickly learn software. This includes drafting software like Vectorworks and DAWs like ProTools. Once you learn the basics of the software you need for work, the next challenge is learning how to use it efficiently. Shortcuts become important in the workplace, especially in post-production, where knowing a keyboard hotkey can save you 60 seconds of labor compared to navigating a menu for the same function. These skills are not simple to learn, so be gentle with yourself on this learning journey. There are manuals and flashcards for all software, even ProTools keyboard covers to purchase!

Favorite gear?

In theater, I love Meyer Sound’s Spacemap Go. I implemented the software on my Broadway play CHICKEN & BISCUITS to help move music and atmospheric cues around the theater in 3D motion. In post-production, a similar asset is a plug-in called Waves Brauer Motion.


More on Twi

Twi McCallum on Hiring Black Designers and Creatives

https://open.spotify.com/episode/2aZE4fsUKT3pwFm3VAE4PE?si=FSn-cQGRQ8KxmN-z88xBeQ

On Pressing the Button

There are two songs that I remember having written at age six: a rock n’ roll wonder called “Thunder and Lightning” (thunder and lightning/ yeah yeah/ thunder and lightning/ oh yeah) and a narrative style ballad about a little cat that emerged from a mysterious magical flowerbed to become my pet.

I remember them because I recorded them

My dad had a boombox, probably intended for demos of his own. I don’t know what kind it was. All I knew was when I pressed the key with the red circle on it and sang, my voice and songs would come back to me whenever I wanted them to. And I recorded more than those two improvised tunes at age six — I completely commandeered that boombox, interrupted and ruined countless demos of my dad’s. Sometimes I’d prank my younger brother and record the squealing result. Later, in my best robotic voice, I’d make a tape-recorded introduction to my bedroom, to be played whenever somebody rang my “doorbell,” AKA the soundbox from my stuffed tiger, ripped out of its belly and mounted beneath a hand-scrawled sign. I’d even go on to record a faux radio show with my neighborhood friends comprised of comedy bits, original music — vocals only, sung in unison — and, yes, pranking my brother.

Eventually, the boombox either moved on from me, broken or reclaimed by its previous owner, or I moved on from it. I didn’t record anything for a long time, even though I formed other vocals-only bands with friends and continued to write and develop as a songwriter.

Had the little red circle become scary? Was I just a kid, moving from interest to interest, finding myself? Probably the latter. But for some reason, as a young songwriter, I moved from bedroom to bathroom studio, from garage to basement studio, jumping at the chance whenever some dude friend with gear and a designated space offered to get my songs down in some form. Sometimes it was lovely. Other times boundaries were broken and long-term friendships combusted. I persisted because I believed that I needed the help, that I couldn’t record on my own.

Years ago I had a nightmare: I had died without having recorded my music. From a bus full of fellow ghosts with unfinished business, I desperately sang my songs down to the living, hoping someone would catch one, foster it, and let it live. In the early days of the pandemic, this nightmare haunted me. That red circle called to me.

Let’s press that record button, yeah? On whatever we’ve got that has one. I’ve had my songs tracked in spaces sanctioned “studios” by confident men, so why not my own spare room? Why not the record button on my own laptop screen? I’m setting an intention, for myself and for you. When I think about what I wish to provide for you as a Soundgirls blogger, it is this: the permission to record yourself on your own terms, wherever you are in your journey. You are valid.

With Change

I hear the jingling from my coat pocket. The coins hit my plastic credit card and, lining the soft interior of the coat, bounce about, unaware that they are the last of my savings. These spare coins survived the move from New York to Philadelphia, renting a new apartment, groceries, and frankly a giant array of activities that I can only describe as a blur.

Yet here I am, still able to write to you from my new corner of the world. I was always told growing up that with change comes responsibility – a responsibility I did not understand until I left home.

I’d like to take some time to recognize this new start. I have a strong feeling I’m not the only one to jump headfirst into adulting without knowing what the h-double-hockey-sticks I’m doing.

What Exactly is “Adulting”?

Does anyone really know? Is it the responsibility of caring for a child, or of paying your bills, or of having credit, or of knowing what the hell a mortgage is? Does it mean being legally accountable for your shortcomings or your crimes? Does it mean you are exempt from certain privileges that children, or teenagers for that matter, may have? Or maybe adulting is just a word we use to describe our age.

If you are waiting for an answer or my thoughts, I don’t have any. Maybe that is the answer: I’m still growing, and I’m of the belief that so is everyone. There are people who have their lives together, just as there are people who don’t, people who are still figuring it out, and people who are somewhere in the middle.

Maybe adulting means growing

If that’s even a fraction of what adulting means, I think I’m doing a good job of it. This year, even if it’s small, come up with something to aspire to.

I know Covid still hanging around sucks, but life needs to continue – even if it means we have to dust off our work boots and rusty people skills. Next month we’ll get to the meat and potatoes of some tech, and a brief history that is sure to put a smile on the face of any musical theatre techie out there.

Until then I guess we’ll all have to keep adulting, with or without change.

Is It Ever Okay To Work For Free?

Anyone who has worked in a creative industry, including audio, has probably been asked at some point to work for free.

We’ve all seen the ads for unpaid internships that promise a wealth of experience, but with no guarantee of a permanent position at the end. Then there are the “jobs” that crop up on LinkedIn and seem perfectly fine until you get to the bottom of the listing and see the words:

“We can’t afford to pay anyone right now.”

Is it ever acceptable to expect someone to work for free?

When I was a student, I was eager to gain any bit of experience I could get my hands on. I’d spend each summer emailing radio stations and production companies, hoping for a chance to shadow for a day at the very least. At that early stage in my audio journey, I didn’t care what was involved as long as it meant getting a foot in the door. Immediately after graduating, when jobs were hard to come by, I was still open to the idea of unpaid work — within reason. There were opportunities I turned down because the cons outweighed the pros. Transport, accommodation, and the ability to feed yourself all have to be considered, and sometimes it’s just not worth the added stress.

I understand the desperation students and graduates often feel, because I’ve been there myself. I also understand that plenty of companies take on interns with a view to hiring them later. They offer people a chance to learn and grow, and to feel like a valued member of the team. But there are still too many out there who exploit graduates. They’re not interested in hiring someone; they just want free labour for as long as they can get it, before moving on to the next person. This kind of attitude usually tells you everything you need to know about the work culture at that company.

Internships are one thing; free labour masquerading as a full-time job is another. I’m not including volunteer work when I say this. People who get involved in community radio, for example, do so on the understanding that they’re volunteering, and that can be for a variety of reasons. But you should always be wary of anything that appears to be a 9-5 job with a detailed list of responsibilities, but no pay. I was browsing LinkedIn recently and came across a London-based production company looking for a podcast producer. The job looked great on the surface. Then came the kicker: “Unfortunately we have no budget right now but hope to be able to pay our employees in the future.” But are you even an employee if you’re not getting paid? I thought to myself, surely no one will apply for something that requires them to live in one of the most expensive cities in the world, with no time for other (paid) work, and therefore no means of paying rent or bills? I was wrong. The role had over 160 applications when I last checked.

The podcast world can be especially frustrating in this regard. More people than ever before are starting their own podcasts, and as many of them are hobbyists, they understandably don’t want to spend money on a professional editing service. But I am increasingly noticing professional podcasters who decide to take on an editor, yet are unwilling to pay them. Maybe it’s because they think it’s a quick and easy job — but if that were the case, they’d just do it themselves in the first place, right? No matter what the reason is, if they are earning money from it themselves, their editor should be too.

To sum up, there are circumstances where it’s okay to work for free — as long as you’re not being taken advantage of. If you’re just starting out in your career and you stand to learn something that will genuinely help you progress, that’s a good thing. So is returning the favour for a friend who may have previously helped you out, or volunteering your time and skills for an organisation or cause you care about (if you can afford to do so). But if you find yourself putting in long hours and a lot of effort for no reward, it’s probably best to reconsider your options.

More on Should You Work for Free

Should You Work For Free?

Should You Work a Gig for Free for Exposure?

Reverb Hacks to Make Your Tracks Sparkle

Reverb is a great tool to help bring a bit of life and presence into any track or sound. But why not make it sound even more interesting by applying a plug-in such as an EQ or a compressor to the reverb itself? It can give your sound a unique spin, and it’s quite fun just to play around with the different sounds you can achieve.

EQ

The first EQ trick helps with applying reverb to vocals. Have you ever bussed a vocal to a reverb track but still felt like it sounds a bit muddy? Well, try adding an EQ before the reverb on your bus track. Sculpt out the low and high end until you have a rainbow curve. Play around with how much you take out and find what sounds great for your vocals. I often find that by doing this you can improve the clarity of the lyrics as well as achieve a deep, well-echoed sound. This tip also helps if you’re a bit like me and can’t get enough reverb!
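That “rainbow curve” is really just a band-pass on the reverb send: a high-pass to cut the mud and a low-pass to tame the fizz. Here’s a minimal sketch of the idea using scipy’s Butterworth filters; the 200 Hz and 6 kHz corner frequencies are arbitrary starting points, not magic numbers.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def shape_reverb_send(dry, rate, low_hz=200.0, high_hz=6000.0):
    """Band-pass the signal feeding the reverb bus: roll off the lows
    (which read as mud in the tail) and the highs (which read as hiss)."""
    sos = butter(2, [low_hz, high_hz], btype="bandpass", fs=rate, output="sos")
    return sosfilt(sos, dry)
```

You’d feed this shaped signal into the reverb instead of the raw vocal; the dry vocal itself stays untouched on its own track.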

Creating a Pad Sound

If you’re interested in making ambient or classical music, or even pop music that features a soft piano, you might be interested in creating a pad effect. What this does is essentially elongate the sound and sustain it so it gives this nice ambient drone throughout the track.

You can achieve this by creating a bus track and sending your instrument to it. Then open your reverb plugin, making sure it is set to 100% wet. You can then play around with setting the decay to around 8.00s to 15.00s. Then send about 60% of your dry instrument track to this bus, adjusting it if it sounds like too much. Play around with these settings until you achieve a sound that you like.
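Under the hood, a pad like this is just the dry signal exciting a very long reverb tail. As a rough illustration of the numbers in the recipe (a 100% wet path, a decay around 10 seconds, a ~60% send), here’s a toy one-comb “reverb” in Python – nothing like a real plate or hall algorithm, but it shows how the decay-time and send settings interact. The delay length and decay constants are assumptions for the sketch.

```python
import numpy as np

def pad_from(dry, rate, decay_s=10.0, delay_s=0.05, send=0.6):
    """Toy pad: run a 60% send of the dry signal through a single feedback
    comb whose tail takes `decay_s` seconds to fall by 60 dB, then mix that
    100%-wet tail back under the untouched dry signal."""
    delay = int(rate * delay_s)
    # Feedback gain so repeated passes decay by 60 dB over decay_s seconds.
    fb = 10 ** (-3.0 * delay_s / decay_s)
    wet = np.zeros(len(dry) + int(rate * decay_s))
    wet[: len(dry)] = dry * send          # the send into the "reverb"
    for i in range(delay, len(wet)):      # the comb's feedback loop
        wet[i] += fb * wet[i - delay]
    mixed = wet.copy()
    mixed[: len(dry)] += dry              # dry track stays at full level
    return mixed
```

Raising `decay_s` pushes `fb` closer to 1, which is exactly the “longer decay = more drone” behavior the plugin’s decay knob gives you.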

In conclusion, reverb is one of my favourite plugins to play around with and alter. It offers an incredible amount of versatility and can be used in conjunction with many other plugins to create unique and interesting sounds. It works across a wide variety of music genres and comes in handy when you want to add a bit of sparkle to a track.

What is experimental about “Experimental Music?”

This month’s blog is kind of a pseudo-philosophical question. Does it matter what we call things?  Of course, it does; it’s not the name itself but what it connotes for us aesthetically, culturally, and any other …ally you can think of.  So, I may end up proving myself wrong on this, but at least I should have a better understanding…

On a personal level: is my music experimental?  I’ve proudly declared that it is and have been sometimes “snooty” about other terms.  So, I’m going in for a bit of Megahertz cleansing.  See!  I just wrote something, and I don’t even know what it means; so gawd knows what you, my dear reader, will make of it. However, I digress …

I can’t remember when or why I started referring to myself as an experimental composer.  I think the why was because it sounded cool and besides, any time I mentioned electronic music to friends, they immediately envisaged me in a basement club surrounded by flashing lights and entranced dancers (is that me being snooty?).  Also, I had seen it as a genre, for example, it can be searched for on the Bandcamp website, where they tell us that, …

The artists represented here aren’t interested in tradition. Whether it’s clattering avant-garde music, deafening drone, or wild improvisation, artists who define their music as “experimental” are all interested in the same thing: pushing the boundaries of what we consider “music,” and finding fascinating new song shapes and structures.

Apart from the rather ‘unkind’ adjectives, there is a sense in which artists will feel that that is what they are doing when they compose. Following a few of these tags on the Bandcamp site in a purely random fashion, I went from experimental to musique concrète, where I found an example from Eliane Radigue, ‘Feedback Works 1969 – 1970’. She does indeed come from the musique concrète era, having worked with Pierre Schaeffer and Pierre Henry. I then linked to drone music, where again I found a composition by Eliane Radigue, ‘Occam XXV’, which was written for organ in 2018. Is this experimental music? According to Bandcamp and the tags assigned, it is, but I believe it less so than her 1969 Feedback works; I’ll look at this later in relation to what John Cage considered to be experimental music. So, pressing on with randomly linked tags, it seemed that noise and, in particular, harsh noise might be representative of a kind of experimental music. In this category, ‘Human Butcher Shop’, also tagged as metal, was a kind of slash guitar with distorted feedback – the kind of thing Jimi Hendrix was doing in the late ’60s. So, to my mind, not really ‘pushing the boundaries…’, which makes me think that these kinds of criteria are not really helpful in trying to find a true home for experimental music – always assuming that the quest is a valid one.

I think, therefore, that the Bandcamp definition of experimental music is not really helpful.  Does it matter what labels we attach to music?  I would suggest that it does to the artists, in the sense that the label helps define us as artists in the eyes of our audiences.  I assume that as a search tag it might be useful for the consumer, even if there is still a lot of trawling to be done.  Before I leave Bandcamp, and as an example of how trawling and labels can lead to serendipitous moments, I decided to give ‘Dysfunctional Voiding’ by Piss Enema a quick listen.  The cover was ugly by any standards, but then I imagine that this was the intention of the artist wishing to occupy a certain genre as suggested by their tags:  experimental, death industrial, harsh noise, power electronics, etc.  However, the music was not dissimilar to other styles of musique concrète, even if less appealing, to my ears anyway.  So, as I had supposed, this adventure proved to be less than fruitful when one remembers that online tags create links and therefore visibility.  So, if you put your music on a music site, you would probably put as many related tags as possible to reach your intended audience.

NB: I have included links to Bandcamp tracks, and my understanding is that you can listen once to sample a song, but not repeatedly.

I mentioned John Cage earlier, so let’s see what he has to say about experimental music, a term he was using as early as 1955.  But first, some other attempts at definition and some of experimental music’s characteristics. According to Wikipedia, experimental music is not to be confused with avant-garde music, and this is qualified in this definition from the website ‘MasterClass’:

Though the terms “experimental” and “avant-garde” are sometimes used interchangeably, some music scholars and composers consider avant-garde music, which aims to innovate, as the furthest expression of an established musical form. Experimentalism is entirely separate from any musical form and focuses on discovery and playfulness without an underlying intention.

In other words: Experimental compositional practice is defined broadly by exploratory sensibilities radically opposed to, and questioning of, institutionalized compositional, performing, and aesthetic conventions in music.

So, if in my own work, I take a recorded sample and try to push it to create new sounds, or to become part of something greater, am I being experimental?  Do I have exploratory sensibilities?  Yes, I think I do.  Am I questioning the accepted conventions of music practice as they are?  Again, in my work, I think I am.  It seems to me that it all depends on the kind of music we are making.  If we use environmental sounds, as the Futurists in Milan had already done in the early 1900s, then that does perforce suggest deprecating musical convention.  It is interesting to note that Pierre Boulez, a composer of aleatoric music, could be quite conventional when conducting.  His recordings of Stravinsky’s The Rite of Spring and Debussy’s Trois Nocturnes are faithful to the scores.  So even the arch-modernist knew when to be radically opposed to musical convention and when not to be.

As we shall see, John Cage’s definition of experimental music includes elements of indeterminacy and chance, either in its composition or in its performance, in such a way that the outcomes of the music are unknown.  Indeterminate, or aleatoric, music uses ‘chance’ as a key component.  Cage’s “Music of Changes” of 1951 uses the I Ching, the Chinese divination text, to influence the sound and length of each performance.  Chance at the moment of listening and recording is present in Pauline Oliveros’s “Cave Water” of 1990, where the dripping of water is not under the slightest control of the composer.

https://paulineoliveros1.bandcamp.com/track/cave-water

I referred earlier to the two pieces by Eliane Radigue, who had worked in the 50s with one of the prime movers of experimentalism in Europe, Pierre Schaeffer; Cage would occupy a similar role in the US.  Radigue wrote Occam XXV for organ, to be performed by an organist with no room for improvisation, as far as I can tell. The Feedback Works 1969 – 1970 were composed in her home studio, which she used while bringing up her three children.  The equipment at her disposal consisted of three tape recorders, a mixing board, an amplifier, two loudspeakers, and a microphone.  With her children asleep, she often worked through the night in her basement home studio, holding a microphone and shifting it here and there by small increments, playing with the feedback.  Since so much depended on the microphone, the speakers, the limits of the magnetic tape, and the acoustics of the room, there is that element of the outcomes of the music being unknown.  This chance element in how the music will turn out when the compositional process is finished certainly gives this piece its experimental status.  By the way, Eliane’s last electronic composition was in 1998, L’Île re​-​sonante.  She continues to compose, but for live instruments. In fact, at the time of writing, she has a concert at the INA GRM Salle de Concerts in Paris this coming Wednesday, the 26th of January, alongside another of my favorite experimental composers, Félicia Atkinson.  Eliane’s first track, Stress Osaka, is the shortest at 11:35, and it’s pretty cool – a lot to listen to.

https://elianeradigue.bandcamp.com/album/feedback-works-1969-1970

This track, the hidden, from Félicia’s newest recording, especially in its second half, frames her elegantly in a long line of French composers of “musique expérimentale”, a term later changed by the same Pierre Schaeffer to “recherche musicale”.

https://shelterpress.bandcamp.com/track/the-hidden

Are any of the gifted women experimental composers actually experimental?  Can a ‘live’ work which might contain elements of indeterminacy, for example improvisation and/or ‘chance’, be considered experimental?  Would a studio recording of the same piece still be considered experimental?  That depends… if it was considered experimental at its inception, then the answer appears to be yes, given Cage’s assertion that the experimentalism can be contained in composition or performance.

Two artists who are well into indeterminacy at the composition phase are the London-based artist Klein and Claire Rousay from San Antonio, Texas.  When performing live, they may also experiment in performance.  Both artists have live performances coming up, by the way: Klein in Bristol, 26th January, and London, 30th January; Claire Rousay in Knoxville (TN), 25th March.

https://klein1997.bandcamp.com/track/needed-and-saved-2

https://clairerousay.bandcamp.com/track/stoned-gesture

https://clairerousay.bandcamp.com/track/a-kind-of-promise

Claire Rousay is particularly interesting, and this album, a softer focus, represents her at her most melodic, almost pop.  From Bandcamp: “claire rousay is based in San Antonio, Texas. Her music zeroes in on personal emotions and the minutiae of everyday life — voicemails, haptics, environmental recordings, stopwatches, whispers, and conversations — exploding their significance.”  The link below is to a short interview with Claire Rousay.  Me, I’m struck by her authenticity:

https://daily.bandcamp.com/features/claire-rousay-softer-focus-interview?utm_source=footer

Klein’s work has been described as “grainy pop collages,” using heavily manipulated audio samples, drones, and sonic artifacts induced by time-stretching and pitch-shifting. She assembles her tracks in the sound-editing program Audacity.

I’m beginning to think (well, actually, I began a while ago now) that the word experimental is not so important, especially since there are other definitions we could use for this kind of music.  But what kind of music is it? And what kind of music is my music?  So, the question now is: am I an experimental composer?  If you read my January blog, you will know that I’ve been picking up on music from forty years ago.  Yes, even then it had elements of indeterminacy, but the style then was to process sound sources until one arrived at the kind of sound we had been seeking. On the other hand, I remember using a random number generator on the EMS 100 synth to haphazardly scramble the input of my sound source, giving me a kind of bubbly granulated texture.  However, the finished piece had been crafted to a loose narrative structure and existed as a composition on ‘fixed media’.  The only variations that could be made in performance were in the diffusion of the sounds around a sound space through an array of loudspeakers. So, what is my music?  My course at the conservatoire is titled musica elettroacustica II.  Fair enough, since it uses recorded acoustic sounds and electronics to modify and add to the composition.  In performance, the music is called acousmatic music on fixed media.  In other words, it is a musical object whose performance does not necessarily require the composer’s presence; this is often discussed in terms of what our role at a concert is if we just sit in front of a laptop.  Obviously, other possibilities exist: the electronics can be combined with live performers, or the electronics can be controlled and distributed in an improvisatory way through the use of prepared loops and touchpads.  For example, Ableton Live has a feature for stage performances:

Note chance

Set the probability that a note or drum hit will occur and let Live generate surprising variations to your patterns that change over time.
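Conceptually, per-note probability is very simple. Here is a toy Python sketch (my own illustration, not Ableton’s implementation) of how a pattern of steps, each with its own chance value, yields a different variation on every pass through the loop:

```python
import random

def play_pattern(steps, rng=random):
    """Return the notes that actually trigger on this pass.

    Each step is a (note, chance) pair, where chance is the
    probability (0.0 to 1.0) that the note fires this time around.
    """
    return [note for note, chance in steps if rng.random() < chance]

# A hi-hat pattern: downbeats always play, ghost notes only sometimes.
pattern = [("hat", 1.0), ("ghost", 0.3), ("hat", 1.0), ("ghost", 0.3)]

# Each pass through the loop yields a slightly different variation.
for bar in range(4):
    print(play_pattern(pattern))
```

The musical point is the same one Cage and Oliveros were making with dice and dripping water: the composer fixes the probabilities, but the outcome of any given performance is unknown.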

This is all fine and dandy, but this is Frà’s blog, and she still hasn’t answered her own question: is her music experimental?  I come from a mixed musical background: Italian tenor arias in infancy while listening to my father practice; the rock and roll years as a teenager; as I moved into my twenties it was Jazz, but not any old Jazz.  It was the free-form stuff from late Coltrane, Yusef Lateef, Giuseppe Logan (who???).  Anyway, from them to Soft Machine in the 70s, and then, behind the curve as always, I was introduced to Stravinsky, Debussy, Messiaen, and off I went to Uni to study Music and Fine Art. So, this mixed evolutionary musical background and my ADHD (which has been a bit of a bother [we English gals are very good at euphemism; it’s more like manic]) are why this blog is particularly discursive. I am drawing to a close, I promise.

Anyway, where were we?  Ah, yes; so, I have a kind of quasi-classical background, which means experimentation and extemporization have been a part of my musical language: I performed Terry Riley’s “In C” at Uni alongside other fairly “free” pieces, but I think I’ve been too stuck in the musique concrète style up until now.  I’m realizing that maybe I’m not as experimental as I thought I was.  My next piece, “Aston Expressway” (which I am writing for my son – you’ll get the story when the piece is finished), will make more use of unexpected samples in the making, though I have a narrative for the concept. Indeed, at the moment, since I’m still vague about it, if I don’t compose it, it will remain uber experimental.

As I’ve been writing this, I’ve been listening to E.S.T.’s live Hamburg recording of Tuesday Wonderland, which is much longer than the studio recording and has live electronics and free improvisation, so it also fits the criteria for experimental.  However, since it already has a tag of Jazz, it probably doesn’t need to claim its experimental credentials, even if what is happening on stage and being recorded is in all likelihood experimental.

Just as a side note, Kind of Blue, arguably one of the most well-known jazz records of all time, was mostly improvised.  The photograph below is Cannonball Adderley’s music for Flamenco Sketches, a piece that lasts nine and a half minutes.  Since they were improvising within recognizable musical forms, it is hard to call it experimental, even if the second take is different from the first.  But who cares?  It’s still a great piece of music, and both takes are beautiful as works of art in their own right.  Coltrane’s entry is just out of this world, as are Cannonball Adderley’s and Bill Evans’s, of course; the rest of the band are delicately there… Sorry, but I haven’t listened to this in a while and I’m frozen to the spot, in a good way…

So, if I’ve been over-picky about a commonly used genre of music (I am a Virgo, after all), it’s not to deny anyone agency in their chosen field, but simply to reflect on what some of us, and me in particular, are trying to do with the music we create.  I come back again to this word authenticity, which is beginning to become a bit of a mantra for me.  Not all music can be authentic in its existence; it may just be a jingle selling a product (actually, if I could write a few, I might make some money), but what excites me is the art that connects one person to another.  It’s not always easy to recognize, but I do sense it in much of the work of the younger generation of experimental composers – there, I used the word. If I can connect through my art, I’ll be happy to be called simply a musician.

Pierre Boulez, the French composer and conductor, in his response to critics of the ‘New Music’, whom he referred to as ostriches, said that “There is no such thing as experimental music … but there is a very real distinction between sterility and invention”.

So, there you have it, it either doesn’t exist or it’s everything that’s inventive.

Invent, connect and be authentic.

Frà sends her love from Torino to SoundGirls everywhere.

The Psychoacoustics of Modulation

Modulation is still an impactful tool in Pop music, even though it has been around for centuries. There are a number of well-known key changes in successful Pop songs of recent decades. Modulation, like a lot of tonal harmony, involves tension and resolution: we take a few uneasy steps towards the new key and then we settle into it. I find that 21st-century modulation serves more as a production technique than the compositional technique it was in early Western European art music (this is a conversation for another day…).

 Example of modulation where the same chord exists in both keys with different functions.

 

Nowadays, it often occurs at the start of the final chorus of a song, placed to echo the golden-ratio proportions associated with the Fibonacci Sequence and to mark a dynamic transformation in the story of the song. Although more recent key changes can feel like a gimmick, they are still effective. However, instead of exploring modern modulation from the perspective of music theory, I want to look into two specific concepts in psychoacoustics, critical bands and auditory scene analysis, and how they work in two songs with memorable key changes: “Livin’ On A Prayer” by Bon Jovi and “Golden Lady” by Stevie Wonder.

Consonant and dissonant relationships in music are represented mathematically as integer ratios; however, we also experience consonance and dissonance as neurological sensations. To summarize: when a sound enters our inner ear, a structure called the basilar membrane responds by oscillating at different locations along the membrane. This mapping process, called tonotopicity, is maintained in the auditory nerve bundle and essentially helps us identify frequency information. The frequency information derived by the inner ear is organized through auditory filtering that works as a series of band-pass filters, forming critical bands that distinguish the relationships between simultaneous frequencies. In short, two frequencies that fall within the same critical band are experienced as “sensory dissonant,” while two frequencies in separate critical bands are experienced as “sensory consonant.” This is a very generalized version of the theory, but it essentially describes how frequencies in close intervals like minor seconds and tritones interfere with each other within the same critical band, causing frequency masking and roughness.

 

Depiction of two frequencies in the same critical bandwidth.

 

Let’s take a quick look at some important critical bands during the modulation in “Livin’ On A Prayer.” This song is in the key of G (392 Hz at G4) but changes at the final chorus to the key of Bb (466 Hz at Bb4). There are a few things to note in the lead sheet here. The key change is a difference of three semitones, and the tonic notes of both keys are in different critical bands, with G in band 4 (300-400 Hz) and Bb in band 5 (400-510 Hz). Additionally, the chord leading into the key change is D major (293 Hz at D4), with D4 in band 3 (200-300 Hz). Musically, D major’s strongest relationship to the key of Bb is that it is the dominant chord of G, the minor sixth in the key of Bb. Its placement makes sense because previously the chorus starts on the minor sixth in the key of G, which is E minor. Even though it has a weaker relationship to Bb major, which kicks off the last chorus, D4 and Bb4 are in different critical bands and, if played together, would function as a major third and create sensory consonance. Other notes in those chords are in the same critical band: F4 is 349 Hz and F#4 is 370 Hz, placing both frequencies in band 4; if played together they would function as a minor second and cause sensory roughness. There are a lot of perceptual changes in this modulation, and while breaking down critical bands doesn’t necessarily reveal what makes this key change so memorable, it does provide an interesting perspective.
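The band numbers quoted above follow Zwicker’s classic 24-band critical-band table. Real critical bandwidths vary continuously with frequency, so treat this as a rough sketch, but the lookups in the paragraph above can be checked with a few lines of Python:

```python
# Zwicker's classic critical-band (Bark) edges in Hz, bands 1-24.
BAND_EDGES = [20, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270,
              1480, 1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300,
              6400, 7700, 9500, 12000, 15500]

def critical_band(freq_hz):
    """Return the 1-indexed Bark band containing freq_hz."""
    for band, (lo, hi) in enumerate(zip(BAND_EDGES, BAND_EDGES[1:]), start=1):
        if lo <= freq_hz < hi:
            return band
    raise ValueError(f"{freq_hz} Hz is outside the tabulated Bark bands")

def same_band(f1, f2):
    """Rough 'sensory roughness' test: do the two tones share a band?"""
    return critical_band(f1) == critical_band(f2)

print(critical_band(392))   # G4  -> band 4
print(critical_band(466))   # Bb4 -> band 5
print(same_band(349, 370))  # F4 vs F#4 -> True  (same band: roughness)
print(same_band(293, 466))  # D4 vs Bb4 -> False (separate bands: consonance)
```

This only tests the fundamentals; a fuller roughness estimate would also compare every pair of harmonics of each tone.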

A key change is more than just consonant and dissonant relationships, though, and the context provided around the modulation gives us a lot of information about what to expect. This relates to another psychoacoustics concept called auditory scene analysis, which describes how we perceive auditory changes in our environment. There are a lot of different elements to auditory scene analysis, including attention feedback, localization of sound sources, and grouping by frequency proximity, that all contribute to how we respond to and understand acoustical cues. I’m focusing on the grouping aspect because it offers information on how we follow harmonic changes over time. Gestalt principles like proximity and good continuation help us group frequencies that are similar in tone, near each other, or that fit our expectations of what’s to come based on what has already happened. For example, when a sequence of alternating high and low notes is played at a slow tempo, we follow it as one coherent stream of tones. However, as the sequence speeds up, the grouping priority shifts from closeness in timing to closeness in pitch, and two separate streams, one of high pitches and one of low pitches, are heard.
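The fission effect can be caricatured in code. In classic streaming experiments (van Noorden’s “temporal coherence boundary”), faster presentation rates and larger pitch gaps both push an alternating sequence toward splitting into two streams. The threshold value below is an invented toy number for illustration, not measured data:

```python
def streams_heard(ioi_ms, pitch_gap_semitones, boundary=0.05):
    """Toy fission rule: segregate into two streams when the pitch gap
    is large relative to the time between notes (fast + far apart = split).

    ioi_ms is the inter-onset interval between successive notes;
    boundary is an arbitrary illustrative threshold.
    """
    return 2 if pitch_gap_semitones / ioi_ms > boundary else 1

print(streams_heard(ioi_ms=400, pitch_gap_semitones=7))  # slow -> 1 stream
print(streams_heard(ioi_ms=100, pitch_gap_semitones=7))  # fast -> 2 streams
```

A real model of auditory scene analysis is vastly more complex (timbre, loudness, attention, and spatial cues all matter), but the trade-off between temporal and pitch proximity is the core of the grouping idea.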

 Demonstration of “fission” of two streams of notes based on pitch and tempo.

 

Let’s look at these principles through the lens of “Golden Lady,” which has a lot of modulation at the end of the song. As the song repeats its refrain roughly every eight measures, the key changes upwards by a half-step, or semitone, to the next adjacent key. This occurs quite a few times, and each time the last chord in each key before the modulation is the parallel major seventh of the upcoming minor key. While the modulation moves upwards by half steps, however, the melody in the song generally moves downwards by half steps, opposing the direction of the key changes. Even though there are a lot of changes and competing movements happening at this point in the song, we’re able to follow along because we have eight measures to settle into each new key. The grouping priority is on the frequency proximity within the melody rather than the timing of the key changes, making it easier to follow. Furthermore, because there are multiple key changes, the principle of “good continuation” helps us anticipate the next modulation from the context of the song and the experience of the previous modulations. Again, auditory scene analysis doesn’t directly explain every reason for how modulation works in this song, but it gives us further insight into how we absorb the harmonic changes in the music.

Master the Art of Saving Your Live Show File

Total recall for a better workflow and to avoid embarrassment 

If you found this blog because your show file isn’t recalling scenes properly, skip to the “in case of emergency” section and come back to read the rest when you have time.

We learned as soon as we started using computers that we need to save our work as often as possible. We all know that sinking feeling when that essay or email we had worked so long and hard on, without backing up, suddenly became the victim of a spilled drink or blue screen of death. I’m sure more than a few of us also know this feeling from when we didn’t save our show file correctly, maybe even causing thousands of people to boo us because everything’s gone quiet all of a sudden. Digital desks are just computers with a fancy keyboard, but unlike writing a simple essay, there are many more ‘features’ in show files that can trip you up if you don’t fully understand them. Explaining the ins and outs of every desk’s save functions is beyond the scope of this article (pun intended), but learning the principles of how and why everything should be saved will help to make your workflow more efficient and reliable, and hopefully save you from an embarrassing ‘dog ate my show file’ moment.

The lingo

For some reason, desk manufacturers love to reinvent the wheel and so have their own words to describe the same thing. I have tried to include the different terms that I know of, but once you understand the underlying principles you should be able to recognise what is meant if you encounter other names for them. It really pays to read your desk’s manual, especially when it comes to show files. Brands have different approaches which might not always be intuitive, so getting familiar with them before you even start will help to avoid all your work going down the drain when you don’t tick the right box or press the right button.

Automation: This refers to the whole concept of having different settings for different parts of the performance. The term comes from studio post-production and is a little bit of a misnomer for live sound because most of the time it isn’t automatic as such; the engineer still needs to trigger the next setting, even though the desk takes care of the rest (if you’re really fancy, some desks can trigger scene changes off MIDI or timecode. It is modern-day magic, but you still need to be there to make sure things run smoothly and to justify your fee).

Show file/show/session: The parent file. This covers all the higher level desk settings, like how many busses you have and what type, your user preferences, EQ libraries, etc. It is the framework that the scenes build on, but also contains the scenes.

Scene/snapshot: Individual states within the show file, like documents within a folder. They store the current values for things like fader levels, mutes, pan, and effects settings. Every time you want things to change without having to make those adjustments by hand, you should have a new scene.

Scope/focus/filter: Defines which parameters get recalled (or stored; see next section) with the scene. For example, you might want everything except the mutes and fader levels to stay the same throughout the whole show, so they would be the only things in your scenes’ recall scope.

N.B.! Midas (and perhaps some other manufacturers) defines scope as what gets excluded from being recalled, and so it works the other way round (see figure 1). Be very sure you know which definition your desk is using! To avoid confusion, references to scope in this post mean what gets included.
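To make the inclusive definition concrete, here is a hypothetical sketch (my own illustration, not any real desk’s software) of what recalling a scene with a recall scope does internally: only parameters inside the scope are copied from the stored scene onto the live desk state, and everything else is left exactly where you set it by hand.

```python
def recall(scene, live_state, recall_scope):
    """Copy only in-scope parameters from the stored scene to the live state."""
    for param, value in scene.items():
        if param in recall_scope:
            live_state[param] = value
    return live_state

# Live desk state after some hand adjustments, and a stored scene.
live  = {"fader": -10.0, "mute": True,  "eq_hf": +2.0}
scene = {"fader": 0.0,   "mute": False, "eq_hf": 0.0}

# EQ is out of scope, so the +2 dB HF boost survives the scene change.
recall(scene, live, recall_scope={"fader", "mute"})
print(live)  # {'fader': 0.0, 'mute': False, 'eq_hf': 2.0}
```

Under the Midas-style exclusive definition, the same green-highlighted screen would mean the opposite: the listed parameters are the ones that do NOT get copied.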

Store vs. recall: Some desks, e.g. Midas, offer store scope as well as recall scope. This means you can control what gets saved as well as how much of that information later gets brought back to the surface. Much like the solo in place button, you need to be 100% sure of what you’re doing before you use this feature. It might seem like a good idea to take something you won’t want later, like the settings for a spare vocal mic when the MD uses it during rehearsals, out of the store scope. However, it’s much safer to just take it out of the recall scope instead. It’s better to have all the information at your disposal and choose what to use, rather than not having data you might later need. You also risk forgetting to reset the store scope when you need to record that parameter again, or setting the scope incorrectly. The worst-case scenario is accidentally taking everything out of the store scope (Midas even gives you a handy “all” button so you can do it with one click!): You can spend hours or even days diligently working on a show, getting all your scenes and recall scopes perfect, then have absolutely nothing to show for it at the end because nothing got saved in order to be recalled. Yes, this happens. It’s simply best to leave store scope alone.

Safe/hardware safe/iso (isolate): You can ‘safe’ things that you don’t want to be affected by scene changes, for example, the changeover DJ on a multi-band bill or an emergency announcement mic. Recall safes are applied globally so if you want to recall something for some scenes and not others, you should take it out of the relevant scenes’ recall scope instead.

Global: Applies to all scenes. What parameters you can and can’t assign or change globally varies according to manufacturer.

Absolute vs. relative: Some desks, e.g. SSLs, let you specify whether a change you make is absolute or relative. This applies when making changes to several scenes at once, either through the global or grouping options. For example, if you move a channel’s fader from -5 to 0, saving it as “absolute” would mean that that fader is at 0 in every scene you’re editing, but saving it as “relative” means the fader is raised by 5dB in every scene, compared to where it was already.
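The absolute/relative distinction is easy to see in a sketch. This hypothetical Python snippet (illustrative only, not an SSL API) applies one fader edit across a group of scenes both ways:

```python
def apply_fader_edit(scenes, channel, value_db, mode):
    """Apply a fader edit to every scene in a group edit.

    mode="absolute": the fader ends up AT value_db in every scene.
    mode="relative": the fader moves BY value_db from wherever it was.
    """
    for scene in scenes:
        if mode == "absolute":
            scene[channel] = value_db
        elif mode == "relative":
            scene[channel] += value_db
        else:
            raise ValueError(f"unknown mode: {mode}")
    return scenes

verse  = {"vox": -5.0}
chorus = {"vox": -2.0}

# Relative +5 dB: vox lands at 0 in the verse but +3 in the chorus,
# preserving the balance between scenes.
apply_fader_edit([verse, chorus], "vox", +5.0, mode="relative")
print(verse["vox"], chorus["vox"])  # 0.0 3.0
```

An absolute edit of the same channel would instead pin the vocal to one identical level in every scene, wiping out any scene-to-scene differences you had built in.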

Fade/transition/timing: Scene changes are instantaneous by default, but a lot of desks give you the option to dictate how gradually you change from one scene to another, how the crossfade works, and whether a scene automatically follows on from the one before it after a certain length of time. These can be useful for theatrical applications in particular.

The diagram from Digico’s S21 manual illustrating recall scope (top) and the Midas Pro2 manual’s diagram (bottom). Both show that if elements are highlighted green, they are in the recall scope. Unfortunately Digico defines scope as what does get recalled, while Midas defines it as what doesn’t. Very similar screens, identical wording, entirely opposite results. It was a bad day when I found that out the hard way.

Best practice

Keep it simple!: With so many different approaches to automation from different manufacturers and so many aspects of a show file to keep track of, it is easy to tie yourself in knots if you aren’t careful. There are many ways to undo or override your settings without even noticing. The order in which data filters are applied and what takes precedence can vary according to manufacturer (see figure 2 for an illustration of one). Keep your show file as simple as possible until you’re confident with how everything works, and always save everything and back it up to your USB stick before making any major change. It’s much easier to mix a bit more by hand than to try to fix a problem with the automation, especially one that reappears every time you change the scene!

Keep it tidy: As with any aspect of the job, keep your work neat and annotated. There are comment boxes for each show and scene where you can note down what changes you made, what stage you were at when you saved, or what the scene is even for. This is very useful when troubleshooting or if someone needs to cover you.

Be prepared: Show files can be fiddly and soundchecks can be rushed and chaotic. It’s a good idea to make a generic show file with your preferences and the settings you need to start off with for every show, then build individual show files from there. You can make your files with an offline editor and have several options ready so you can hit the ground running as soon as you get to the venue. If you aren’t sure how certain aspects of the automation work, test them out ahead of time.

Don’t rely on the USB: Never run your show straight from your USB stick if you can avoid it. Some desks don’t offer space to store your show file, but if yours does you should always copy your file into the desk straight away. Work on that copy, before saving onboard and then backing it up back to the USB stick. Some desks don’t handle accessing information on external drives in real-time well, so everything might seem fine until the DSP is stretched or something fails, and you can end up with errors right at a crucial part of the performance. Plus, just imagine if someone knocked it out of its socket mid-show! You should also invest in good quality drives because a lot of desks don’t recognise low-quality ones (including some of the ones that desk manufacturers themselves hand out!).

Where to start: It can be tempting to start with someone else’s show file and tweak it for your gig. If that person has kept a neat, clear file (and they’ve given you permission to use it!) it could work well, but keep in mind that there might be settings hidden in menus that you aren’t aware of or tricks they use that suit their workflow that will just trip you up. Check through the file thoroughly before you use it.

Most desks have some sort of template scene or scenes to get you started. Some are more useful than others, and you need to watch out for their little quirks. The Midas Pro2 had a notoriously sparse start scene when it first came out, with absolutely nothing patched, not even the headphones! You also need to be aware of your desk’s general default settings. Yamaha CL and QL series take head amp information from the “port” (stage box socket, Dante source, etc.) rather than the channel by default. That is the safest option for when you’re sharing the ports between multiple desks but is pretty useless if you aren’t and actively confusing if you’re moving your file between several setups, as you inherit the gains from each device you patch to.

Make it yours: It’s your show file, structure it in the way that’s best for you. The number of scenes you have will depend on how you like to work and the kind of show you’re doing. You might be happy to have one starting scene and do all the mixing as you go along. You might have a scene per band or per song. If you’re mixing a musical you might like to have a new scene every few lines, to deal with cast members coming on and off stage (see “further resources” for some more information about theatre’s approach to automation and line by line mixing). Find the settings and shortcuts that help you work most efficiently. Just keep everything clear and well-labeled for anyone who might need to step in. If you’re sharing mixing duties with others you will obviously need to work together to find a system that suits everyone.

Save early, save often: You should save each show file after soundcheck at the very least, even if nothing is going to change before the performance, as a backup. You should also save it after the show for when, or in case, you work with that act again. Apart from that, it’s good practice to save as often as you can, to make sure nothing gets lost. Some desks offer an autosave feature, but don’t rely on it to save everything, or to save it at the right point. Store each scene before you move on to the next one when possible. Remember each scene is a starting point, so if you make manual changes during the scene, reset them before saving.

Periodically save your show under a new name so you can roll back to a previous version if something goes wrong or the act changes their mind. You should save the current scene, then the show, then save it to two USB sticks which you store in different places in case you lose or damage one. It is a good idea to keep one with you and leave the other one either with the audio gear or with a trusted colleague, in case you can’t make it to the next show.

In case of emergency

If you find that your file isn’t recalling properly, all is not necessarily lost. First off, do not save anything until you’ve figured out the problem: you risk overwriting salvageable data with new or blank data.

Utility scenes

When you’re confident with your automation skills you can utilise scenes for more than just changing state during the show. Here are a few examples of how they can be used:

Master settings: As soon as you start adjusting the recall scope, you should have a “settings” scene where you store everything, including parameters you know won’t change during the performance. Then you can take those parameters out of the recall scope for the rest of the scenes so you don’t change them accidentally. It is very important that they are stored somewhere to begin with, though! As monitor engineer Dan Speed shared:

“Always have a snapshot where all parameters are within the recall scope and be sure to update it regularly so it’s relevant. I learnt this the hard way with a Midas when I recalled the safe scene [the desk’s “blank slate” scene] and lost a week’s worth of gain/EQ/dynamics settings 30 minutes before the band turned up to soundcheck!”

I would also personally recommend saving your gain in this scene only. Having gain stored in every scene causes a lot of hassle if you need to soft patch your inputs for any reason (e.g. you’re a guest engineer and the house can’t accommodate your channel list as is), or if you need to adjust the gain mid-gig because a mic has slipped. With gain in every scene, a change means making a block edit while the desk is live, “safing” the affected channel’s gain alone (and so losing any gain adjustments you had saved in subsequent scenes anyway), or re-adjusting the gain every time you change scene: all easy ways to make unnecessary mistakes. Some people disagree, but for most live music cases at least, if you consistently find that you can’t achieve the level changes a show needs from the faders and other tools on the desk, you should revisit your gain structure rather than put gain changes into automation. A notable exception is multi-band bills: if a few seconds of silence is acceptable, for example when you’re doing monitors, it is best to save each band as its own show file and switch over. If you need to keep the changeover music or announcement mics live instead, you can treat each set as a mini-show within the file, give each one a “master” starting scene, and take the gain out of every other scene.
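If recall scope is new to you, it can help to think of each scene as a set of stored values plus a filter that decides which of them actually get applied. Here is a minimal toy model in Python of that idea (purely illustrative, not any real console’s file format or API), showing why keeping gain out of a song scene’s scope means a mid-show gain tweak survives the next scene change:

```python
# Toy model of scene recall scope: a scene stores parameter values, and
# its "scope" set decides which of those values overwrite the live state
# on recall. (Illustrative only; parameter names are made up.)

def recall(live_state, scene):
    """Apply a scene, overwriting only the parameters in its recall scope."""
    for param, value in scene["params"].items():
        if param in scene["scope"]:
            live_state[param] = value
    return live_state

# "Settings" scene: everything in scope, so a full recall restores it all.
settings = {
    "params": {"gain_ch1": 30, "eq_ch1": "vocal", "fader_ch1": -5},
    "scope": {"gain_ch1", "eq_ch1", "fader_ch1"},
}

# Song scene: gain deliberately left OUT of scope.
song_1 = {
    "params": {"gain_ch1": 30, "eq_ch1": "vocal", "fader_ch1": 0},
    "scope": {"eq_ch1", "fader_ch1"},
}

live = {}
recall(live, settings)      # start of show: full recall from the master scene
live["gain_ch1"] = 33       # mic slipped, so the engineer adjusts gain live
recall(live, song_1)        # next song: fader and EQ recalled...
print(live["gain_ch1"])     # → 33: ...but the gain adjustment is kept
```

The same model shows the failure mode the other way round: if `gain_ch1` were in `song_1`’s scope, the recall would have silently undone the mid-show adjustment.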

Line system check: If you need to test that your whole line system is working, rather than line checking a particular setup, plug a phantom-powered mic into each channel and listen to it (phantom power checkers don’t pick up everything that might be wrong with a channel, so it’s best to check with your own ears). A scene where everything is flat, patched 1-1, and phantom power is sent to every channel makes this quick and easy, and easy to undo when you move on to the actual setup.

Multitrack playback: If you have a multitrack recording of your show but your desk doesn’t have a virtual playback option, you can make your own. Make two scenes with just input patching in their recall scope: one with the mics patched to the channels, and one with the multitrack patched instead. Take input patching out of every other scene’s recall scope. Now you can use the patch scenes to flip between live and playback, without affecting the rest of the show file. (Thanks to the awesome Michael Nunan for this tip!).
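The patch-flip trick above can be sketched with the same kind of toy scene model (again, this is illustrative Python, not a real console’s API): the two patch scenes carry nothing but input patching in their recall scope, so flipping between them never disturbs EQ, faders, or anything else in the show file.

```python
# Toy model of the patch-flip trick: two scenes whose recall scope
# contains only the input patching. (Names are made up for illustration.)

def recall(live_state, scene):
    """Overwrite only the parameters inside the scene's recall scope."""
    for param, value in scene["params"].items():
        if param in scene["scope"]:
            live_state[param] = value
    return live_state

live_patch = {"params": {"in_1": "stage_mic_1"}, "scope": {"in_1"}}
playback_patch = {"params": {"in_1": "multitrack_1"}, "scope": {"in_1"}}

state = {"in_1": "stage_mic_1", "eq_ch1": "vocal", "fader_ch1": -5}
recall(state, playback_patch)   # flip to multitrack playback
print(state["in_1"])            # → multitrack_1; EQ and fader untouched
recall(state, live_patch)       # flip back to the live mics
```

Taking input patching out of every other scene’s scope is what makes this safe: no matter which show scene you recall while in playback mode, the patch stays wherever you last flipped it.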

Despite the length of this post, I have only scratched the surface when it comes to the power of automation and what can be achieved with it. Unfortunately, it also has the power to ruin your gig, and maybe even lose your work. Truly understanding the principles of automation and building simple, clear show files will help your show run smoothly, and give you a solid foundation from which to build more complex ones when you need them.

Further resources:

Sound designer Kirsty Gillmore briefly outlines how automation can be approached for mixing musicals in part 2 of her Soundgirls blog on the topic:  https://soundgirls.org/mixing-for-musicals-2/

Sound designer Gareth Owen explains the rationale for line by line mixing in musical theatre and demonstrates how automation makes it possible in this interview about Bat Out of Hell: https://youtu.be/25-tUKYqcY0?t=477

Aleš Štefančič from Sound Design Live has tips for Digico users and their sessions: https://www.sounddesignlive.com/top-5-common-mistakes-when-using-a-digico-console/

Nathan Lively from Sound Design Live has lots of great advice and tips for workflow and snapshots in his ultimate guide to mixing on a Digico SD5: https://www.sounddesignlive.com/ultimate-guide-creative-mixing-digico-sd5-tutorial/
