Empowering the Next Generation of Women in Audio

Join Us

With Change

I hear the jingling from my coat pocket. The coins hit my plastic credit card and the soft lining of the coat, bouncing about, unaware that they are the last of my savings. These spare coins survived the move from New York to Philadelphia, renting a new apartment, groceries, and frankly a giant array of activities that I can only describe as a blur.

Yet here I am, still able to write to you from my new corner of the world. I was always told growing up that with change comes responsibility – a responsibility I did not understand until I left home.

I’d like to take some time to recognize this new start. I have a strong feeling I’m not the only one to jump headfirst into adulting and not know what the h-double-hockey-sticks I’m doing.

What Exactly Is “Adulting”?

Does anyone really know? Is it the responsibility of caring for a child, or of paying your bills, or of having credit, or of knowing what the hell a mortgage is? Does it mean being legally accountable for your shortcomings or your crimes? Doesn’t it mean being excluded from certain privileges that children, or teenagers for that matter, may have? Or maybe adulting is just a word we use to describe our age.

If you are waiting for an answer or my thoughts, I don’t have any. Maybe that is the answer: I’m still growing, and I believe everyone else is too. There are people who have their lives together, people who don’t, people who are still figuring it out, and people somewhere in the middle.

Maybe adulting means growing

If that’s even a fraction of what adulting means, I think I’m doing a good job of it. This year, come up with something to aspire to, even if it’s small.

I know Covid still hanging around sucks, but life needs to continue, even if it means we have to dust off work boots and rusty people skills. Next month we’ll get to the meat and potatoes of some tech, and a brief history that is sure to put a smile on any musical theatre techies out there.

Until then I guess we’ll all have to keep adulting, with or without change.

Is It Ever Okay To Work For Free?

Anyone who has worked in a creative industry, including audio, has probably been asked at some point to work for free.

We’ve all seen the ads for unpaid internships that promise a wealth of experience, but with no guarantee of a permanent position at the end. Then there are the “jobs” that crop up on LinkedIn and seem perfectly fine until you get to the bottom of the listing and see the words:

“We can’t afford to pay anyone right now.”

Is it ever acceptable to expect someone to work for free?

When I was a student, I was eager to gain any bit of experience I could get my hands on. I’d spend each summer emailing radio stations and production companies, hoping for a chance to shadow for a day at the very least. At that early stage in my audio journey, I didn’t care what was involved as long as it meant getting a foot in the door. Immediately after graduating, when jobs were hard to come by, I was still open to the idea of unpaid work — within reason. There were opportunities I turned down because the cons outweighed the pros. Transport, accommodation, and the ability to feed yourself all have to be considered, and sometimes it’s just not worth the added stress.

I understand the desperation students and graduates often feel, because I’ve been there myself. I also understand that plenty of companies take on interns with a view to hiring them later. They offer people a chance to learn and grow, and to feel like a valued member of the team. But there are still too many out there who exploit graduates. They’re not interested in hiring someone; they just want free labour for as long as they can get it, before moving on to the next person. This kind of attitude usually tells you everything you need to know about the work culture at that company.

Internships are one thing; free labour masquerading as a full-time job is another. I’m not including volunteer work when I say this. People who get involved in community radio, for example, do so on the understanding that they’re volunteering, and that can be for a variety of reasons. But you should always be wary of anything that appears to be a 9-5 job with a detailed list of responsibilities, but no pay. I was browsing LinkedIn recently and came across a London-based production company looking for a podcast producer. The job looked great on the surface. Then came the kicker: “Unfortunately we have no budget right now but hope to be able to pay our employees in the future.” But are you even an employee if you’re not getting paid? I thought to myself, surely no one will apply for something that requires them to live in one of the most expensive cities in the world, with no time for other (paid) work, and therefore no means of paying rent or bills? I was wrong. The role had over 160 applications when I last checked.

The podcast world can be especially frustrating in this regard. More people than ever before are starting their own podcasts, and as many of them are hobbyists, they understandably don’t want to spend money on a professional editing service. But I am increasingly noticing professional podcasters who decide to take on an editor, yet are unwilling to pay them. Maybe it’s because they think it’s a quick and easy job — but if that were the case, they’d just do it themselves in the first place, right? No matter what the reason is, if they are earning money from it themselves, their editor should be too.

To sum up, there are circumstances where it’s okay to work for free — as long as you’re not being taken advantage of. If you’re just starting out in your career and you stand to learn something that will genuinely help you progress, that’s a good thing. So is returning the favour for a friend who may have previously helped you out, or volunteering your time and skills for an organisation or cause you care about (if you can afford to do so). But if you find yourself putting in long hours and a lot of effort for no reward, it’s probably best to reconsider your options.

More on Should You Work for Free

Should You Work For Free?

Should You Work a Gig for Free for Exposure?

Reverb Hacks to Make Your Tracks Sparkle

Reverb is a great tool for bringing a bit of life and presence into any track or sound. But why not make it even more interesting by applying a plug-in such as an EQ or a compressor to the reverb? It can give your sound a unique spin, and it’s fun to play around with the different sounds you can achieve.

EQ

The first EQ trick helps with applying reverb to vocals. Have you ever bussed a vocal to a reverb track but still felt like it sounds a bit muddy? Try adding an EQ before the reverb on your bus track. Sculpt out the low and high end until you have a rainbow-shaped curve. Play around with how much you take out and find what sounds great for your vocals. I often find that by doing this you can improve the clarity of the lyrics as well as achieve a deep, well-echoed sound. This tip also helps if you’re a bit like me and can’t get enough reverb!
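
If you like experimenting outside your DAW, here is a minimal sketch of the same idea using Spotify’s open-source pedalboard library for Python. The file name vocal.wav, the cutoff frequencies, and the return level are placeholders to set by ear, not fixed values.

```python
# A rough sketch of the EQ-before-reverb bus using the pedalboard library.
# pip install pedalboard
from pedalboard import Pedalboard, HighpassFilter, LowpassFilter, Reverb
from pedalboard.io import AudioFile

# Load the dry vocal (hypothetical file name).
with AudioFile("vocal.wav") as f:
    vocal = f.read(f.frames)
    sample_rate = f.samplerate

# The "bus": carve out lows and highs before the reverb,
# leaving the rainbow-shaped curve described above.
reverb_bus = Pedalboard([
    HighpassFilter(cutoff_frequency_hz=300),   # remove the mud
    LowpassFilter(cutoff_frequency_hz=8000),   # tame the sizzle
    Reverb(room_size=0.6, wet_level=1.0, dry_level=0.0),  # 100% wet return
])

wet = reverb_bus(vocal, sample_rate)
mix = vocal + 0.5 * wet  # blend the return under the dry vocal to taste
```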

Creating a Pad Sound

If you’re interested in making ambient or classical music, or even pop music that features a soft piano, you might be interested in creating a pad effect. This essentially elongates and sustains the sound so that it gives a nice ambient drone throughout the track.

You can achieve this by creating a bus track and sending your instrument to it. Open your reverb plugin on the bus, making sure it is set to 100% wet. Play around with setting the decay to around 8.00 s to 15.00 s. Then send about 60% of your dry instrument track to this bus, adjusting if it sounds like too much. Play around with these settings until you achieve a sound that you like.
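
As a rough illustration, the same send-and-return idea can be sketched in Python with pedalboard. Note that pedalboard’s Reverb has no decay-time-in-seconds control, so a large room_size stands in for the long 8-15 s tail, and piano.wav is a hypothetical source file.

```python
# A sketch of the pad effect: a 100% wet reverb bus fed with ~60% of the dry signal.
from pedalboard import Pedalboard, Reverb
from pedalboard.io import AudioFile

with AudioFile("piano.wav") as f:        # hypothetical source track
    piano = f.read(f.frames)
    sample_rate = f.samplerate

pad_bus = Pedalboard([
    Reverb(room_size=0.95, wet_level=1.0, dry_level=0.0),  # fully wet, very long tail
])

send_level = 0.6                          # ~60% send from the dry track
pad = pad_bus(send_level * piano, sample_rate)
mix = piano + pad                         # dry track plus the pad underneath
```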

In conclusion, reverb is one of my favourite plugins to play around with and alter. It offers an incredible amount of versatility and can be used in conjunction with many other plugins to create unique and interesting sounds. It works across a wide variety of genres and comes in handy when you want to add a bit of sparkle to a track.

What is experimental about “Experimental Music”?

This month’s blog is kind of a pseudo-philosophical question. Does it matter what we call things?  Of course, it does; it’s not the name itself but what it connotes for us aesthetically, culturally, and any other …ally you can think of.  So, I may end up proving myself wrong on this, but at least I should have a better understanding…

On a personal level: is my music experimental?  I’ve proudly declared that it is and have been sometimes “snooty” about other terms.  So, I’m going in for a bit of Megahertz cleansing.  See!  I just wrote something, and I don’t even know what it means; so gawd knows what you, my dear reader, will make of it. However, I digress …

I can’t remember when or why I started referring to myself as an experimental composer.  I think the why was because it sounded cool and besides, any time I mentioned electronic music to friends, they immediately envisaged me in a basement club surrounded by flashing lights and entranced dancers (is that me being snooty?).  Also, I had seen it as a genre; for example, it can be searched for on the Bandcamp website, where they tell us that …

The artists represented here aren’t interested in tradition. Whether it’s clattering avant-garde music, deafening drone, or wild improvisation, artists who define their music as “experimental” are all interested in the same thing: pushing the boundaries of what we consider “music,” and finding fascinating new song shapes and structures.

Apart from the rather ‘unkind’ adjectives, there is a sense in which artists will feel that that is what they are doing when they compose.  Following a few of these tags on Bandcamp in a purely random fashion, I went from experimental to musique concrète, where I found an example from Eliane Radigue, ‘Feedback Works 1969 – 1970’. She does indeed come from the musique concrète era, having worked with Pierre Schaeffer and Pierre Henry.  I then linked to drone music, where again I found a composition by Eliane Radigue, ‘Occam XXV,’ which was written for organ in 2018.  Is this experimental music?  According to Bandcamp and the tags assigned, it is, but I believe it is less so than her 1969 feedback works; I’ll look at this later in relation to what John Cage considered to be experimental music.  So, pressing on with randomly linked tags, it seemed that noise and, in particular, harsh noise might be representative of a kind of experimental music.  In this category, ‘Human Butcher Shop’, also tagged as metal, was a kind of slash guitar with distorted feedback; the kind of thing Jimi Hendrix was doing in the late 60s.  So, to my mind, not really ‘pushing the boundaries…’, which makes me think that these kinds of criteria are not really helpful in trying to find a true home for experimental music; always assuming that the quest is a valid one.

I think, therefore, that the Bandcamp definition of experimental music is not really helpful.  Does it matter what labels we attach to music?  I would suggest that it does to the artists, in the sense that the label helps define us as artists in the eyes of our audiences.  I assume that as a search tag it might be useful for the consumer, even if there is still a lot of trawling to be done.  Before I leave Bandcamp, and as an example of how trawling and labels can lead to serendipitous moments, I decided to give ‘Dysfunctional Voiding’ by Piss Enema a quick listen.  The cover was ugly by any standards, but then I imagine that this was the intention of the artist wishing to occupy a certain genre as suggested by their tags:  experimental, death industrial, harsh noise, power electronics, etc.  However, the music was not dissimilar to other styles of musique concrète, even if less appealing, to my ears anyway.  So, as I had supposed, this adventure proved to be less than fruitful when one remembers that online tags create links and therefore visibility.  So, if you put your music on a music site, you would probably put as many related tags as possible to reach your intended audience.

NB: I have included links to Bandcamp tracks, and my understanding is that you can listen once to sample a song, but not repeatedly.

I mentioned John Cage earlier, so let’s see what he has to say about experimental music, a term he was using as early as 1955.  But first, some other attempts at definition and some of experimental music’s characteristics. According to Wikipedia, experimental music is not to be confused with avant-garde music, and this is qualified in this definition from the website ‘MasterClass’:

Though the terms “experimental” and “avant-garde” are sometimes used interchangeably, some music scholars and composers consider avant-garde music, which aims to innovate, as the furthest expression of an established musical form. Experimentalism is entirely separate from any musical form and focuses on discovery and playfulness without an underlying intention.

In other words: Experimental compositional practice is defined broadly by exploratory sensibilities radically opposed to, and questioning of, institutionalized compositional, performing, and aesthetic conventions in music.

So, if in my own work I take a recorded sample and try to push it to create new sounds, or to become a part of something greater, am I being experimental?  Do I have exploratory sensibilities?  Yes, I think I do.  Am I questioning the accepted conventions of music practice as they are?  Again, in my work, I think I am.  It seems to me that it all depends on the kind of music we are making.  If we use environmental sounds, as the Futurists in Milan had already done in the early part of the 1900s, then it does perforce suggest deprecating musical convention.  It is interesting to note that Pierre Boulez, a composer of aleatoric music, could be quite conventional when conducting.  His recordings of Stravinsky’s The Rite of Spring and Debussy’s Trois Nocturnes are faithful to the scores.  So even the arch-modernist knew when to be radically opposed to musical convention and when not to be.

As we shall see, John Cage’s definition of experimental music includes elements of indeterminacy and chance, either in its composition or its performance, such that the outcomes of the music are unknown.  Indeterminate, or aleatoric, music uses ‘chance’ as a key component.  Cage’s “Music of Changes” of 1951 uses the I Ching, the Chinese divination text, to influence the sound and length of each performance.  Chance at the moment of listening and recording is present in Pauline Oliveros’s “Cave Water” of 1990, where the dripping of water is not under the slightest control of the composer.

https://paulineoliveros1.bandcamp.com/track/cave-water

I earlier referred to the two pieces by Eliane Radigue, who had worked in the 50s with one of the prime movers of experimentalism in Europe, Pierre Schaeffer; Cage would occupy a similar role in the US.  Radigue wrote Occam XXV for organ, to be performed by an organist with no room for improvisation, as far as I can tell. The Feedback Works 1969 – 1970 were composed in her home studio, which she used while bringing up her three children.  She created the feedback works with the equipment at her disposal: three tape recorders, a mixing board, an amplifier, two loudspeakers, and a microphone.  With her children asleep, she often worked through the night in her basement home studio, holding a microphone and shifting it here and there by small increments, playing with the feedback.  Since so much depended on the microphone, the speakers, the limits of the magnetic tape, and the acoustics of the room, there is that element of the outcomes of the music being unknown.  This chance element of how the music will sound when the compositional process is finished certainly gives this piece its experimental status.  By the way, Eliane’s last electronic composition was L’île re-sonante, in 1998.  She continues to compose, but for live instruments. In fact, at the time of writing, she has a concert at the INA GRM Salle de Concerts in Paris this coming Wednesday, the 26th of January, alongside another of my favorite experimental composers, Félicia Atkinson.  Eliane’s first track, Stress Osaka, is the shortest at 11:35, and it’s pretty cool – a lot to listen to.

https://elianeradigue.bandcamp.com/album/feedback-works-1969-1970

This track, ‘the hidden’, from Félicia’s newest recording, especially the second half, frames her elegantly in a long line of French composers of “musique expérimentale”, a term later changed by the same Pierre Schaeffer to “recherche musicale”.

https://shelterpress.bandcamp.com/track/the-hidden

Are any of these gifted women experimental composers actually experimental?  Can a live work which might contain elements of indeterminacy, for example improvisation and/or ‘chance’, be considered experimental?  Would a studio recording of the same piece still be considered experimental?  That depends… if it is/was considered experimental at its inception, then the answer appears to be yes, given Cage’s assertion that the experimentalism can be contained in composition or performance.

Two artists who are well into indeterminacy at the composition phase are the London-based artist Klein and Claire Rousay from San Antonio, Texas.  When performing live, they may also experiment in performance. Both artists have live performances coming up, by the way: Klein in Bristol on 26th January and in London on 30th January, and Claire Rousay in Knoxville (TN) on 25th March.

https://klein1997.bandcamp.com/track/needed-and-saved-2

https://clairerousay.bandcamp.com/track/stoned-gesture

https://clairerousay.bandcamp.com/track/a-kind-of-promise

Claire Rousay is particularly interesting, and this album, a softer focus, represents her at her most melodic, almost pop.   From Bandcamp: claire rousay is based in San Antonio, Texas. Her music zeroes in on personal emotions and the minutiae of everyday life — voicemails, haptics, environmental recordings, stopwatches, whispers, and conversations — exploding their significance.  The link below is to a short interview with Claire Rousay.  Me, I’m struck by her authenticity:

https://daily.bandcamp.com/features/claire-rousay-softer-focus-interview?utm_source=footer

Klein’s work has been described as “grainy pop collages,” using heavily manipulated audio samples, drones, and sonic artifacts induced by time-stretching and pitch shifting. She assembles her tracks in the sound-editing program Audacity.

I’m beginning to think (well, actually, I began a while ago now) that the word experimental is not so important, especially since there are other definitions we could use for this kind of music.  But what kind of music is it? And what kind of music is my music?  So, the question is now: am I an experimental composer?  If you read my January blog, you will know that I’ve been picking up on music from forty years ago.  Yes, even then it had elements of indeterminacy, but the style then was to process sound sources until one arrived at the kind of sound we had been seeking. I remember using a random number generator on the EMS 100 synth to haphazardly scramble the input of my sound source, giving me a kind of bubbly granulated texture.  However, the finished piece had been crafted to a loose narrative structure and existed as a composition of ‘fixed media’.  The only variation that could be made in performance was the diffusion of the sounds around a sound space through an array of loudspeakers. So, what is my music?  My course at the conservatoire is titled musica elettroacustica II.  Fair enough, since it uses recorded acoustic sounds and electronics to modify and add to the composition.  In performance, the music is called acousmatic music on fixed media.  In other words, it is a musical object whose performance does not necessarily require the composer’s presence; this raises the often-discussed question of what our role at a concert is if we just sit in front of a laptop. Obviously, other possibilities exist: the electronics can be combined with live performers, or controlled and distributed in an improvisatory way through the use of prepared loops and touchpads.  For example, Ableton Live has a feature for stage performances:

Note chance

Set the probability that a note or drum hit will occur and let Live generate surprising variations to your patterns that change over time.
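
To make the concept concrete, here is a toy Python sketch of note probability (not Ableton’s actual implementation): a programmed 16-step hi-hat pattern where each hit only fires with a set chance, producing a different variation every bar.

```python
# A toy sketch of note probability: each step of a 16-step hi-hat
# pattern fires only with a set chance, so the pattern varies per bar.
import random

PATTERN = [1] * 16          # a hi-hat hit programmed on every 16th note
CHANCE = 0.75               # probability that any programmed hit actually plays

def play_bar(pattern, chance):
    return ["x" if step and random.random() < chance else "." for step in pattern]

for bar in range(4):        # four bars, each a different variation
    print("".join(play_bar(PATTERN, CHANCE)))
```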

This is all fine and dandy, but this is Frà’s blog, and she still hasn’t answered her own question: is her music experimental?  I come from a mixed musical background: Italian tenor arias in infancy while listening to my father practice; the rock and roll years as a teenager; then, as I moved into my twenties, it was jazz, but not any old jazz.  It was the free-form stuff from late Coltrane, Yusef Lateef, Giuseppe Logan (who???). Anyway, from them to Soft Machine in the 70s, and then, behind the curve as always, I was introduced to Stravinsky, Debussy, Messiaen, and off I went to uni to study music and fine art. This mixed evolutionary musical background and my ADHD (which has been a bit of a bother [we English gals are very good at euphemism; it’s more like manic]) is why this blog is particularly discursive – I am drawing to a close, I promise.

Anyway, where were we?  Ah, yes; so, I have a kind of quasi-classical background, which means experimentation and extemporization have been a part of my musical language: I performed Terry Riley’s “In C” at uni alongside other fairly “free” pieces, but I think I’ve been too stuck in the musique concrète style up until now.  I’m realizing that maybe I’m not as experimental as I thought I was.  My next piece, “Aston Expressway” (which I am writing for my son – you’ll get the story when the piece is finished), will make more use of unexpected samples in the making, though I have a narrative for the concept. Indeed, at the moment, since I’m still vague about it, if I don’t compose it, it will remain uber experimental.

As I’ve been writing this, I’ve been listening to EST’s live Hamburg recording of Tuesday Wonderland, which is much longer than the studio recording, has live electronics, and features free improvisation, so it also fits the criteria for experimental.  However, since it already has a tag of jazz, it probably doesn’t need to claim its experimental credentials, even if what is happening on stage and being recorded is in all likelihood experimental.

Just as a side note, Kind of Blue, arguably one of the most well-known jazz records of all time, was mostly improvised.  The photograph below is Cannonball Adderley’s music for Flamenco Sketches, a piece that lasts nine and a half minutes.  Since they were improvising within recognizable musical forms, it is hard to call it experimental, even if the second take is different from the first.  But who cares?  It’s still a great piece of music, and both takes are beautiful as works of art in their own right.  Coltrane’s entry is just out of this world, as is Cannonball Adderley’s, and Bill Evans’s of course; the rest of the band are delicately there… Sorry, but I haven’t listened to this in a while and I’m frozen to the spot, in a good way…

So, if I’ve been over-picky about a commonly used genre of music (I am a Virgo, after all), it’s not to deny anyone agency in their chosen field, but simply a reflection on what some of us, and me in particular, are trying to do with the music we create.  I come back again to this word authenticity, which is becoming a bit of a mantra for me. Not all music can be authentic in its existence; it may just be a jingle selling a product (actually, if I could write a few, I might make some money), but what excites me is the art that connects one person to another.  It’s not always easy to recognize, but I do sense it in much of the work of the younger generation of experimental composers – there, I used the word. If I can connect through my art, I’ll be happy to be called simply a musician.

Pierre Boulez, the French composer and conductor, in his response to critics of the ‘New Music’, whom he referred to as ostriches, said that “There is no such thing as experimental music … but there is a very real distinction between sterility and invention”.

So, there you have it, it either doesn’t exist or it’s everything that’s inventive.

Invent, connect and be authentic.

Frà sends her love from Torino to SoundGirls everywhere.

The Psychoacoustics of Modulation

Modulation is still an impactful tool in pop music, even though it has been around for centuries. There are a number of well-known key changes in successful pop songs of recent decades. Modulation, like a lot of tonal harmony, involves tension and resolution: we take a few uneasy steps towards the new key and then we settle into it. I find that 21st-century modulation serves more as a production technique than the compositional technique it was in early Western European art music (this is a conversation for another day…).

Example of modulation where the same chord exists in both keys with different functions.

Nowadays, it often occurs at the start of the final chorus of a song, reinforcing the song’s proportions (the oft-cited Fibonacci/golden-ratio placement) and marking a dynamic transformation in the story of the song. Although more recent key changes can feel like a gimmick, they are still effective. However, instead of exploring modern modulation from the perspective of music theory, I want to look into two specific concepts in psychoacoustics, critical bands and auditory scene analysis, and how they work in two songs with memorable key changes: “Livin’ On A Prayer” by Bon Jovi and “Golden Lady” by Stevie Wonder.

Consonant and dissonant relationships in music are represented mathematically as integer ratios; however, we also experience consonance and dissonance as neurological sensations. To summarize: when a sound enters our inner ear, a mechanism called the basilar membrane responds by oscillating at different locations along the membrane. This mapping process, called tonotopicity, is maintained in the auditory nerve bundle and essentially helps us identify frequency information. The frequency information devised by the inner ear is organized through auditory filtering that works as a series of band-pass filters, forming critical bands that distinguish the relationships between simultaneous frequencies. Two frequencies within the same critical band are experienced as “sensory dissonant,” while two frequencies in separate critical bands are experienced as “sensory consonant.” This is a very generalized version of the theory, but it essentially describes how frequencies in close intervals like minor seconds and tritones interfere with each other in the same critical band, causing frequency masking and roughness.

Depiction of two frequencies in the same critical bandwidth.

Let’s take a quick look at some important critical bands during the modulation in “Livin’ On A Prayer.” This song is in the key of G (392 Hz at G4) but changes at the final chorus to the key of Bb (466 Hz at Bb4). There are a few things to note in the lead sheet here. The key change is a difference of three semitones, and the tonic notes of both keys are in different critical bands, with G in band 4 (300-400 Hz) and Bb in band 5 (400-510 Hz). Additionally, the chord leading into the key change is D major (293 Hz at D4), with D4 in band 3 (200-300 Hz). Musically, D major’s strongest relationship to the key of Bb is that it is the dominant chord of G, the minor sixth in the key of Bb. Its placement makes sense because the chorus previously started on the minor sixth in the key of G, which is E minor. Even though it has a weaker relationship to Bb major, which kicks off the last chorus, D4 and Bb4 are in different critical bands, and played together they would function as a major third (as pitch classes) and create sensory consonance. Other notes in those chords are in the same critical band: F4 is 349 Hz and F#4 is 370 Hz, placing both frequencies in band 4; played together they would function as a minor second and cause sensory roughness. There are a lot of perceptual changes in this modulation, and while breaking down critical bands doesn’t necessarily reveal what makes this key change so memorable, it does provide an interesting perspective.
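
If you want to check band numbers like these yourself, here is a small Python sketch using Zwicker’s Bark-scale approximation (one of several published formulas; exact band edges vary slightly by source). Rounding the Bark value up reproduces the band numbers quoted above.

```python
# Estimating which critical band a frequency falls in, using Zwicker's
# Bark-scale approximation.
import math

def bark(f_hz: float) -> float:
    return 13 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500) ** 2)

def critical_band(f_hz: float) -> int:
    return math.ceil(bark(f_hz))  # band n spans Bark n-1 to n

for name, f in [("G4", 392), ("Bb4", 466), ("D4", 293), ("F4", 349), ("F#4", 370)]:
    print(f"{name}: {f} Hz -> band {critical_band(f)}")
# G4 -> 4, Bb4 -> 5, D4 -> 3, F4 and F#4 -> both band 4 (sensory roughness)
```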

A key change is more than just consonant and dissonant relationships, though, and the context provided around the modulation gives us a lot of information about what to expect. This relates to another psychoacoustics concept called auditory scene analysis, which describes how we perceive auditory changes in our environment. There are a lot of different elements to auditory scene analysis, including attention feedback, localization of sound sources, and grouping by frequency proximity, which all contribute to how we respond to and understand acoustical cues. I’m focusing on the grouping aspect because it offers information on how we follow harmonic changes over time. Many Gestalt principles, like proximity and good continuation, help us group frequencies that are similar in tone, near each other, or that fit our expectations of what’s to come based on what has already happened. For example, when a sequence of alternating high notes and low notes is played at a slow tempo, their proximity to each other in time is prioritized, and we hear one stream of tones. However, as this sequence speeds up, the grouping priority shifts from closeness in timing to closeness in pitch, and two separate streams, one of high pitches and one of low pitches, are heard.

Demonstration of “fission” of two streams of notes based on pitch and tempo.
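
This streaming effect is easy to hear for yourself. Below is a short, self-contained Python sketch (assuming only numpy) that renders the classic demo: the same alternating high/low tone sequence at a slow and a fast tempo, written to two WAV files. The frequencies and note lengths are arbitrary choices.

```python
# A classic demo of auditory stream "fission": alternating high/low tones.
# At a slow tempo they sound like one melody; sped up, they split into
# two separate streams grouped by pitch. Writes two WAV files to compare.
import wave
import numpy as np

SR = 44100

def tone(freq, dur):
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return 0.5 * np.sin(2 * np.pi * freq * t)

def sequence(note_dur):
    # Alternate a high (1000 Hz) and low (400 Hz) tone for 20 notes.
    notes = [tone(1000 if i % 2 == 0 else 400, note_dur) for i in range(20)]
    return np.concatenate(notes)

for name, dur in [("slow_one_stream.wav", 0.4), ("fast_two_streams.wav", 0.08)]:
    samples = (sequence(dur) * 32767).astype(np.int16)
    with wave.open(name, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes(samples.tobytes())
```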

Let’s look at these principles through the lens of “Golden Lady,” which has a lot of modulation at the end of the song. As the song refrains about every eight measures, the key changes upwards by a half step, or semitone, to the next adjacent key. This occurs quite a few times, and each time the last chord in each key before the modulation is the parallel major seventh of the upcoming minor key. While the modulation moves upwards by half steps, however, the melody generally moves downwards by half steps, opposing the direction of the key changes. Even though there are a lot of changes and competing movements happening at this point in the song, we’re able to follow along because we have eight measures to settle into each new key. The grouping priority is on the frequency proximity in the melody rather than the timing of the key changes, making it easier to follow. Furthermore, because there are multiple key changes, the principle of “good continuation” helps us anticipate the next modulation from the context of the song and the experience of the previous modulations. Again, auditory scene analysis doesn’t directly explain every aspect of how modulation works in this song, but it gives us additional insight into how we absorb the harmonic changes in the music.

Master the Art of Saving Your Live Show File

Total recall for a better workflow and to avoid embarrassment 

If you found this blog because your show file isn’t recalling scenes properly, skip to the “in case of emergency” section and come back to read the rest when you have time.

We learned as soon as we started using computers that we need to save our work as often as possible. We all know that sinking feeling when the essay or email we had worked on long and hard, without backing up, suddenly became the victim of a spilled drink or the blue screen of death. I’m sure more than a few of us also know this feeling from when we didn’t save our show file correctly, maybe even causing thousands of people to boo us because everything went quiet all of a sudden. Digital desks are just computers with a fancy keyboard, but unlike writing a simple essay, there are many more ‘features’ in show files that can trip you up if you don’t fully understand them. Explaining the ins and outs of every desk’s save functions is beyond the scope of this article (pun intended), but learning the principles of how and why everything should be saved will help to make your workflow more efficient and reliable, and hopefully save you from an embarrassing ‘dog ate my show file’ moment.

The lingo

For some reason, desk manufacturers love to reinvent the wheel and so have their own words to describe the same thing. I have tried to include the different terms that I know of, but once you understand the underlying principles you should be able to recognise what is meant if you encounter other names for them. It really pays to read your desk’s manual, especially when it comes to show files. Brands have different approaches which might not always be intuitive, so getting familiar with them before you even start will help to avoid all your work going down the drain when you don’t tick the right box or press the right button.

Automation: This refers to the whole concept of having different settings for different parts of the performance. The term comes from studio post-production and is a little bit of a misnomer for live sound because most of the time it isn’t automatic as such; the engineer still needs to trigger the next setting, even though the desk takes care of the rest. (If you’re really fancy, some desks can trigger scene changes off MIDI or timecode. It is modern-day magic, but you still need to be there to make sure things run smoothly and to justify your fee.)

Show file/show/session: The parent file. This covers all the higher level desk settings, like how many busses you have and what type, your user preferences, EQ libraries, etc. It is the framework that the scenes build on, but also contains the scenes.

Scene/snapshot: Individual states within the show file, like documents within a folder. They store the current values for things like fader levels, mutes, pan, and effects settings. Every time you want things to change without having to make those adjustments by hand, you should have a new scene.

Scope/focus/filter: Defines which parameters get recalled (or stored; see next section) with the scene. For example, you might want everything except the mutes and fader levels to stay the same throughout the whole show, so they would be the only things in your scenes’ recall scope.

N.B.! Midas (and perhaps some other manufacturers) defines scope as what gets excluded from being recalled, and so it works the other way round (see figure 1). Be very sure you know which definition your desk is using! To avoid confusion, references to scope in this post mean what gets included.
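
As a mental model, recall scope can be thought of as a simple filter between the stored scene and the desk’s current state. Here is a toy Python sketch (not any console’s actual implementation) using the “scope = what gets included” convention from this post:

```python
# A toy model of recall scope: only parameters inside the scope are
# pulled from the stored scene; everything else keeps its current value.

current = {"fader": -10.0, "mute": True, "eq_hi": +2.0, "pan": 0.0}
scene_3 = {"fader": 0.0, "mute": False, "eq_hi": -3.0, "pan": -0.5}

recall_scope = {"fader", "mute"}   # only faders and mutes change per scene

def recall(current_state, stored_scene, scope):
    return {param: (stored_scene[param] if param in scope else value)
            for param, value in current_state.items()}

print(recall(current, scene_3, recall_scope))
# {'fader': 0.0, 'mute': False, 'eq_hi': 2.0, 'pan': 0.0}
```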

Store vs. recall: Some desks, e.g. Midas, offer store scope as well as recall scope. This means you can control what gets saved as well as how much of that information later gets brought back to the surface. Much like the solo in place button, you need to be 100% sure of what you’re doing before you use this feature. It might seem like a good idea to take something you won’t want later, like the settings for a spare vocal mic when the MD uses it during rehearsals, out of the store scope. However, it’s much safer to just take it out of the recall scope instead. It’s better to have all the information at your disposal and choose what to use, rather than not having data you might later need. You also risk forgetting to reset the store scope when you need to record that parameter again, or setting the scope incorrectly. The worst-case scenario is accidentally taking everything out of the store scope (Midas even gives you a handy “all” button so you can do it with one click!): You can spend hours or even days diligently working on a show, getting all your scenes and recall scopes perfect, then have absolutely nothing to show for it at the end because nothing got saved in order to be recalled. Yes, this happens. It’s simply best to leave store scope alone.

Safe/hardware safe/iso (isolate): You can ‘safe’ things that you don’t want to be affected by scene changes, for example, the changeover DJ on a multi-band bill or an emergency announcement mic. Recall safes are applied globally so if you want to recall something for some scenes and not others, you should take it out of the relevant scenes’ recall scope instead.

Global: Applies to all scenes. What parameters you can and can’t assign or change globally varies according to manufacturer.

Absolute vs. relative: Some desks, e.g. SSLs, let you specify whether a change you make is absolute or relative. This applies when making changes to several scenes at once, either through the global or grouping options. For example, if you move a channel’s fader from -5 to 0, saving it as “absolute” would mean that that fader is at 0 in every scene you’re editing, but saving it as “relative” means the fader is raised by 5dB in every scene, compared to where it was already.
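
Here is the same fader example as a toy Python sketch, purely to illustrate the difference (no real console works this way internally):

```python
# Absolute vs. relative edits across several scenes: an absolute edit
# forces the same value everywhere; a relative edit offsets each scene's
# existing value by the same amount (here, +5 dB on one channel's fader).

scenes = {"verse": {"vox_fader": -5.0},
          "chorus": {"vox_fader": -2.0},
          "bridge": {"vox_fader": -8.0}}

def edit_fader(scenes, channel, value, relative):
    for scene in scenes.values():
        scene[channel] = scene[channel] + value if relative else value

edit_fader(scenes, "vox_fader", 5.0, relative=True)
print(scenes)   # verse: 0.0, chorus: 3.0, bridge: -3.0
# With relative=False, every scene's vox_fader would simply be 5.0.
```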

Fade/transition/timing: Scene changes are instantaneous by default, but a lot of desks give you the option to dictate how gradually you change from one scene to another, how the crossfade works, and whether a scene automatically follows on from the one before it after a certain length of time. These can be useful for theatrical applications in particular.

Figure 1: The diagram from Digico’s S21 manual illustrating recall scope (top) and the Midas Pro2 manual’s diagram (bottom). Both show that if elements are highlighted green, they are in the recall scope. Unfortunately Digico defines scope as what does get recalled, while Midas defines it as what doesn’t. Very similar screens, identical wording, entirely opposite results. It was a bad day when I found that out the hard way.

Best practice

Keep it simple!: With so many different approaches to automation from different manufacturers and so many aspects of a show file to keep track of, it is easy to tie yourself in knots if you aren’t careful. There are many ways to undo or override your settings without even noticing. The order in which data filters are applied and what takes precedence can vary according to manufacturer (see figure 2 for an illustration of one). Keep your show file as simple as possible until you’re confident with how everything works, and always save everything and back it up to your USB stick before making any major change. It’s much easier to mix a bit more by hand than to try to fix a problem with the automation, especially one that reappears every time you change the scene!

Keep it tidy: As with any aspect of the job, keep your work neat and annotated. There are comment boxes for each show and scene where you can note down what changes you made, what stage you were at when you saved, or what the scene is even for. This is very useful when troubleshooting or if someone needs to cover you.

Be prepared: Show files can be fiddly and soundchecks can be rushed and chaotic. It’s a good idea to make a generic show file with your preferences and the settings you need to start off with for every show, then build individual show files from there. You can make your files with an offline editor and have several options ready so you can hit the ground running as soon as you get to the venue. If you aren’t sure how certain aspects of the automation work, test them out ahead of time.

Don’t rely on the USB: Never run your show straight from your USB stick if you can avoid it. Some desks don’t offer space to store your show file, but if yours does you should always copy your file into the desk straight away. Work on that copy, before saving onboard and then backing it up back to the USB stick. Some desks don’t handle accessing information on external drives in real-time well, so everything might seem fine until the DSP is stretched or something fails, and you can end up with errors right at a crucial part of the performance. Plus, just imagine if someone knocked it out of its socket mid-show! You should also invest in good quality drives because a lot of desks don’t recognise low-quality ones (including some of the ones that desk manufacturers themselves hand out!).

Where to start: It can be tempting to start with someone else’s show file and tweak it for your gig. If that person has kept a neat, clear file (and they’ve given you permission to use it!) it could work well, but keep in mind that there might be settings hidden in menus that you aren’t aware of or tricks they use that suit their workflow that will just trip you up. Check through the file thoroughly before you use it.

Most desks have some sort of template scene or scenes to get you started. Some are more useful than others, and you need to watch out for their little quirks. The Midas Pro2 had a notoriously sparse start scene when it first came out, with absolutely nothing patched, not even the headphones! You also need to be aware of your desk’s general default settings. Yamaha CL and QL series take head amp information from the “port” (stage box socket, Dante source, etc.) rather than the channel by default. That is the safest option for when you’re sharing the ports between multiple desks but is pretty useless if you aren’t and actively confusing if you’re moving your file between several setups, as you inherit the gains from each device you patch to.

Make it yours: It’s your show file, structure it in the way that’s best for you. The number of scenes you have will depend on how you like to work and the kind of show you’re doing. You might be happy to have one starting scene and do all the mixing as you go along. You might have a scene per band or per song. If you’re mixing a musical you might like to have a new scene every few lines, to deal with cast members coming on and off stage (see “further resources” for some more information about theatre’s approach to automation and line by line mixing). Find the settings and shortcuts that help you work most efficiently. Just keep everything clear and well-labeled for anyone who might need to step in. If you’re sharing mixing duties with others you will obviously need to work together to find a system that suits everyone.

Save early, save often: You should save each show file after soundcheck at the very least, even if nothing is going to change before the performance, as a backup. You should also save it after the show for when, or in case, you work with that act again. Apart from that, it’s good practice to save as often as you can, to make sure nothing gets lost. Some desks offer an autosave feature but don’t rely on it to save everything, or to save it at the right point. Store each scene before you move on to the next one when possible. Remember each scene is a starting point, so if you make manual changes during the scene, reset them before saving.

Periodically save your show under a new name so you can roll back to a previous version if something goes wrong or the act changes their mind. You should save the current scene, then the show, then save it to two USB sticks which you store in different places in case you lose or damage one. It is a good idea to keep one with you and leave the other one either with the audio gear or with a trusted colleague, in case you can’t make it to the next show.

In case of emergency

If you find that your file isn’t recalling properly, all is not necessarily lost. First off, do not save anything until you’ve figured out the problem! You risk overwriting salvageable data with new/blank data.

Utility scenes

When you’re confident with your automation skills you can utilise scenes for more than just changing state during the show. Here are a few examples of how they can be used:

Master settings: As soon as you start adjusting the recall scope, you should have a “settings” scene where you store everything, including parameters you know won’t change during the performance. Then you can take those parameters out of the recall scope for the rest of the scenes so you don’t change them accidentally. It is very important that they are stored somewhere to begin with, though! As monitor engineer Dan Speed shared:

“Always have a snapshot where all parameters are within the recall scope and be sure to update it regularly so it’s relevant. I learnt this the hard way with a Midas when I recalled the safe scene [the desk’s “blank slate” scene] and lost a week’s worth of gain/EQ/dynamics settings 30 minutes before the band turned up to soundcheck!”

I would also personally recommend saving your gain in this scene only. Having gain stored in every scene can cause a lot of hassle if you need to soft patch your inputs for any reason (e.g. when you’re a guest engineer where they can’t accommodate your channel list as is) or you need to adjust the gain mid-gig because a mic has slipped, etc. If you need to change the gain you would then need to make a block edit while the desk is live, “safe” the affected channel’s gain alone (and so lose any gain adjustments you had saved in subsequent scenes anyway), or re-adjust the gain every time you change the scene: all ways to risk making unnecessary mistakes. Some people disagree, but for most live music cases at least, if you consistently find that you can’t achieve the level changes needed within a show from the faders and other tools on the desk, you should revisit your gain structure rather than include gain changes in automation. A notable exception to this would be for multi-band bills: If a few seconds of silence is acceptable, for example, if you’re doing monitors, it is best to save each band as their own show file and switch over. Otherwise, if you need to keep the changeover music or announcement mics live, you can treat each set as a mini-show within the file and have a “master” starting scene for each one, then take the gain out of any other scenes.

Line system check: If you need to test that your whole line system is working, rather than line checking a particular setup, you should plug a phantom-powered mic into each channel and listen to it (phantom power checkers don’t pick up everything that might be wrong with a channel. It’s best to check with your own ears while testing the line system). A scene where everything is flat, patched 1-1, and phantom is sent to every channel makes this quick and easy, and easy to undo when you move on to the actual setup.

Multitrack playback: If you have a multitrack recording of your show but your desk doesn’t have a virtual playback option, you can make your own. Make two scenes with just input patching in their recall scope: one with the mics patched to the channels, and one with the multitrack patched instead. Take input patching out of every other scene’s recall scope. Now you can use the patch scenes to flip between live and playback, without affecting the rest of the show file. (Thanks to the awesome Michael Nunan for this tip!).

Despite the length of this post, I have only scratched the surface when it comes to the power of automation and what can be achieved with it. Unfortunately, it also has the power to ruin your gig, and maybe even lose your work. Truly understanding the principles of automation and building simple, clear show files will help your show run smoothly, and give you a solid foundation from which to build more complex ones when you need them.

Further resources:

Sound designer Kirsty Gillmore briefly outlines how automation can be approached for mixing musicals in part 2 of her Soundgirls blog on the topic:  https://soundgirls.org/mixing-for-musicals-2/

Sound designer Gareth Owen explains the rationale for line by line mixing in musical theatre and demonstrates how automation makes it possible in this interview about Bat Out of Hell: https://youtu.be/25-tUKYqcY0?t=477

Aleš Štefančič from Sound Design Live has tips for Digico users and their sessions: https://www.sounddesignlive.com/top-5-common-mistakes-when-using-a-digico-console/

Nathan Lively from Sound Design Live has lots of great advice and tips for workflow and snapshots in his ultimate guide to mixing on a Digico SD5:

https://www.sounddesignlive.com/ultimate-guide-creative-mixing-digico-sd5-tutorial/

Review of Behind the Sound Cart

If you are looking for a master class in production sound, Behind the Sound Cart: A Veteran’s Guide to Sound on the Set by Patrushkha Mierzwa is just that.  From gear to career development, this book covers it all.  With her many years of experience as a Utility Sound Technician (UST), Mierzwa provides more than tips and tricks.  Packed into each chapter is a guide to best practices and the reasons behind them.

Behind the Sound Cart is divided into chapters based on topics, beginning with an overview of the UST’s duties.  Also known as the 2nd Assistant Sound, the UST works on everything sound-related not covered by the Mixer or the Boom Operator; even then, the UST might have to run a second boom or cover for the mixer.  In light of how flexible the UST must be, it makes sense to use the role as a focal point for a guidebook on production sound.  Mierzwa has the reader follow her footsteps through nearly every scenario a UST might face.  I cannot believe I ever set foot on a set without Behind the Sound Cart.

Mierzwa stresses the importance of safety in every chapter.  Current events show us that this emphasis is always necessary.  However, safety is not just protection from a dolly running you over: heat, stress, and fatigue can also be deadly.  Don’t skip the sections on first aid and COVID protocols either.  Gear cleaning and maintenance fall into this category as well.

From cover to cover, Mierzwa leads by example with professionalism and integrity.  Do not expect this book to be full of celebrity anecdotes; part of being a respected UST is respecting the cast. One might expect a book on the basics of production sound to be dry without juicy gossip, but there are plenty of stories and jokes peppered through each chapter.  Attached in the appendices are forms, paperwork, and other documents used throughout the film production process.  Those alone are worth the price of this book.  Refreshing is the way Mierzwa uses “she/her” as the default pronouns over “he/him.”  Sure, a more neutral pronoun like the singular “they” would be optimal, but her choice allows one to imagine a film crew that is more diverse than the “industry standard.”

I recommend Behind the Sound Cart to anyone looking to succeed in the film industry.  That includes early career professionals, as well as students and production assistants.  I would even recommend this book for fledgling directors and cinematographers.  Patrushkha Mierzwa has put a career’s worth of information into a manageable package, and it should be in every production sound engineer’s library.

Do Musicians Need to Know About Sound?

Music and Sound: Part 1

Modern and changing times have pushed people, especially musicians, to learn and use more and more technology. During the pandemic in particular, many musicians have needed to record, edit, and mix their own music.  Does this mean they now have to master a second career as sound engineers on top of being musicians?

I would say yes, but only if it is their true interest. Diving into a sound career implies a lot of technical terms to learn, gear to buy, and aptitudes to develop. And I would say no if you are not much of a technophile and don’t want to spend your instrument-practice time troubleshooting equipment or learning deep theoretical and technical aspects of sound.

That being said, my first and best advice would be to always hire a professional sound person to help you set up your home studio, teach you how to do your recordings and mixes, and give you professional advice. However, if you are still thinking of giving it a try, setting up your own home studio, mixing your own music, and doing it all by yourself, I may have some tips for you.

Technical aptitude is one of the important things to consider: computer skills and good problem-solving skills are basic abilities you’ll need in order to set up, use, and master your own music studio. Keep in mind that you might have to update or buy a computer that can meet the requirements of recording and music software. Most manufacturers now publish a specific list of technical requirements for their products, so take a look through their websites to make sure your computer is up to date. The main things that determine whether a computer can handle music and recording software are the processor type, operating system version, RAM size, disk space, and ports. If any of these terms are in a foreign language for you, you may also need help from a person who knows about computers.

Here is an example of Ableton Live Computer requirements for a Windows Computer:

Windows 10 (Build 1909 and later)

Intel® Core™ i5 processor or an AMD multi-core processor.

8 GB RAM

1366×768 display resolution

ASIO compatible audio hardware for Link support (also recommended for optimal audio performance)

Access to an internet connection for authorizing Live (for downloading additional content and updating Live, a fast internet connection is recommended)

Approximately 3 GB disk space on the system drive for the basic installation (8 GB free disk space recommended)

Up to 76 GB disk space for additionally available sound content

Digital Audio Workstations

The next thing you will need to consider is getting digital audio workstations (DAWs) and/or music creation software. DAWs are computer programs designed to record any sound into a computer, manipulate the audio, mix it, add effects and export it in multiple formats.

You will need to choose according to your needs and preferences among the many workstations available online, from free versions to monthly subscriptions or perpetual licenses. Some of the most popular DAWs among professional sound engineers are Pro Tools, Cubase, Logic Pro, Ableton Live, Reaper, Luna, and Studio One, but you can also find others for free or for less than USD $100.

To learn how to use any of these DAWs, you will find many resources online on the manufacturers’ websites, Google, or YouTube, such as training videos, workshops, live sessions, etc. Here is an example of a tutorial video for Pro Tools from Avid’s YouTube channel: Get Started Fast with Pro Tools | First — Episode 1: https://www.youtube.com/watch?v=9H–Q-fwJ1g

Some theoretical concepts will also come up when doing recordings and mixing, like stereo track, mono track, multitrack, bit depth, sample rate, phantom power, condenser mics, phase, plugin, gain, DI, etc. Multiple free online resources to learn about those concepts are available all over the internet. Just take your time to learn them.

You can read about educational resources at https://soundgirls.org/educational-resources/

Audio Interface

The next thing you are going to need is an Audio Interface, but why?

Audio interfaces are hardware units that allow you to connect microphones, instruments, midi controllers, studio monitors and headphones to your computer. They translate electric signals produced by soundwaves to a digital protocol (0s and 1s) so your computer can understand them.

Depending on your requirements as a musician, you may need to record one track at a time or several. For example, if you play drums you may need more than one mic, but if you are a singer one mic is probably enough. This means you will find audio interfaces with different numbers of inputs, and the price is usually tied to that: the greater the number of channels and preamps, the more money you’ll need. Audio interfaces also have different types of inputs: for microphones, for instruments (with a DI), or both (combo); make sure you choose the proper one for your needs. In particular, make sure it has built-in preamplifiers with phantom power if you are using condenser mics to record.

There are also microphones that you can plug directly into your computer or phone via USB, meaning no audio interface is needed (it’s built in). This type of mic can be helpful for podcasters, broadcasters, and video streamers. However, bear in mind that even if you try your best, these recordings may not match the results of a professional recording and mix.

Microphones

Learning about microphones and microphone techniques might take lots of blogs to read and videos to watch, so I will narrow it down: there are no straight formulas for sound or strict rules to follow regarding microphones. The mic you choose can vary depending on your budget, the type of instrument you play, and what you are using your microphone for. You will need to search and learn about types of mics by construction (dynamic, condenser, ribbon, etc.), types of polar pattern (cardioid, super-cardioid, omni, etc.), and some recommendations of mics based on the instruments you’ll record.

For example, you may find definitions for commonly-used terms for microphones and Audix products on their website: https://audixusa.com/glossary/. Or you can register for Sennheiser Sound Academy Seminars at https://en-ae.sennheiser.com/seminar-recordings.

If you want to read more about Stereo Microphone Techniques you can also check: https://www.andreaarenas.com/post/2017/11/06/stereo-microphone-techniques
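
To give one of those terms some intuition: a polar pattern describes how sensitive a mic is to sound arriving from different angles. Here is a toy Python sketch of an idealized cardioid (real mics vary with frequency, so treat this purely as illustration):

    import math

    # Idealized cardioid: full pickup on-axis (0 degrees), none at the rear (180)
    def cardioid_sensitivity(angle_deg):
        return 0.5 * (1 + math.cos(math.radians(angle_deg)))

    for angle in (0, 90, 180):
        print(angle, round(cardioid_sensitivity(angle), 2))   # 1.0, 0.5, 0.0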

MIDI Controllers

MIDI (Musical Instrument Digital Interface) controllers are mostly used to generate digital data that can trigger other equipment or software, meaning that they do not generate sound by themselves. A MIDI controller can be a keyboard, a drum pad-style device, or a combination of the two. You will need to learn how to program and map your MIDI controller to be able to use it creatively in your productions.
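
If you’re curious what that data actually looks like, here is a minimal Python sketch using the third-party mido library (one option among many, and an assumption on my part; you’d need it installed and a controller plugged in). Each key press arrives as a small message carrying a note number and velocity, not as audio:

    import mido

    # Show the MIDI input ports your system sees (names depend on your gear)
    print(mido.get_input_names())

    # Open the default input and print each incoming message. A key press
    # shows up as e.g. note_on note=60 velocity=90 -- data, not sound.
    with mido.open_input() as port:
        for msg in port:
            print(msg)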

You will also find many resources online that will help you learn about MIDI controllers, such as Ableton’s video on how to set up your MIDI gear: https://www.youtube.com/watch?v=CWOXblksDxE

Acoustics

The acoustics of the room are also important: a lack of acoustic treatment can make your recordings sound different, and usually in a bad way. Sound gets reflected and absorbed by every surface in a room, and noise can interact with your recordings too. If you are in an improvised room in your house and professional acoustic treatment isn’t possible, keep some basics in mind: avoid recording in rooms with parallel walls, square or rectangular layouts with square corners, and hard surfaces, and minimize reflected sound with carpets, soft couches, pillows, etc.
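
The parallel-walls advice isn’t superstition: sound bouncing between two parallel surfaces reinforces itself at predictable frequencies (standing waves, or “room modes”). A quick Python sketch of the standard axial-mode formula, f_n = n x c / (2 x L), shows where a given wall spacing piles up energy:

    SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 C

    # Axial mode frequencies between one pair of parallel walls L metres apart
    def axial_modes(length_m, count=4):
        return [round(n * SPEED_OF_SOUND / (2 * length_m), 1)
                for n in range(1, count + 1)]

    # Walls 4 m apart reinforce ~42.9, 85.8, 128.6, 171.5 Hz
    print(axial_modes(4.0))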

Once again, hiring a sound engineer as a consultant might be your best option if you are planning to take the next step as a musician and learn about sound engineering. It will save you time and money, and you’ll be employing a friend.

7 Steps to Making a Demo with Your Phone

The internet is full of songwriters asking the question: how good does my demo have to be? The answer is always, “it depends”. Demos generally have one purpose: to accurately present the lyrics and melody of a song. There are varying types of demos and demo requirements, but for this blog’s purposes, that one purpose is our focus!

*(see the end of this blog for situations where you will want to have your song fully produced for pitching purposes)

If you are a

Demos for these purposes can be recorded on your phone. If you have recording software (otherwise known as a DAW, or digital audio workstation), you can use that too. The steps are the same. But for those who don’t have a recording setup and have no interest in diving into that world, your phone and a variety of phone apps make it super easy.

Figure out the tempo

The “beats per minute”, or BPM, is a critical component of the momentum and energy of a song. Pretty much every novice singer/songwriter has a tendency to write their songs in various tempos. The verse starts off at a certain groove, and then by the time the first chorus comes in, the tempo has gradually increased to a new BPM. Then it goes back down during the soft bridge, then back up to an even faster tempo at the end.

None of us were born with an internal metronome, so don’t beat yourself up about it. However, most mainstream music that we hear today is going to be in a set tempo for the majority of the song. There may be tempo changes, depending on what the song calls for, but generally speaking, most songs do not change tempo. You and your producer can decide if a song needs tempo changes or if it is the kind of song that should be played “freely”, with no metronome at all.

Start by playing your song and imagining yourself walking to the beat. Is it a brisk walk? Or a slow, sluggish walk? A brisk walk is about 120 beats per minute. Pull up your metronome and pick a starting BPM based on how brisk (or un-brisk) the imaginary walk feels. Set that tempo and then play along to it. If it’s feeling good, keep playing until you’ve played every song section (verse, chorus, bridge) at that tempo. If it stops feeling right at some point, adjust accordingly. Ideally, you’ll find that happy BPM that is perfect for the song.
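
If you’d rather measure than guess, BPM is just 60 divided by the number of seconds between beats. Here is a tiny Python sketch (the function name and numbers are only illustrative) that averages the gaps between taps:

    # Estimate BPM from beat timestamps (in seconds), e.g. taps along the song
    def bpm_from_taps(tap_times):
        gaps = [b - a for a, b in zip(tap_times, tap_times[1:])]
        return 60.0 / (sum(gaps) / len(gaps))

    # Taps half a second apart work out to 120 BPM -- the brisk walk above
    print(bpm_from_taps([0.0, 0.5, 1.0, 1.5]))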

Type up a lyric sheet: I have artists put these lyric sheets on Google Drive and share them with me so that we are always working off of the same lyric sheet as changes are made.

Mark tempo changes on the lyric sheet: mark specific tempo changes if there are any. Mark ritards (a ritard means to slow down) where they need to be as well. If there is going to be a ritard, it is usually in the outro.

Check the key: Do you accidentally change keys in different sections? Just like with tempo changes, beginner singer/songwriters, especially if they’ve written the lyrics and melody a cappella (without accompaniment), can easily change keys without knowing it. If you don’t play an instrument, that’s OK! Have a musician friend or teacher help you. Your producer can also help you with this, as long as that is included in the scope of their work. Ask beforehand. If you do know the key and have determined the chords, include those in your lyric sheet.

Can you sing it: Have you sung it full out with a voice teacher in the key you’ve written it in? Singing it quietly in your room in a way that won’t disturb your roommates might not be the way you want to sing it in the recording studio.

Record the song: Record the song with the metronome clicking out loud if you aren’t using an app (you may need two devices: one to play the metronome and one to record). There are apps available where you can record yourself while listening to the click track through earbuds; when you listen back to the recording, you won’t hear the click track. The point is that you sang it in time. One app I’m aware of where you can do this is Cakewalk by BandLab. There are many!

Share the file: Make sure you can share the audio recording in a file format your producer can play. MP3s are the most common compressed audio files and can easily be emailed, but most of our phones don’t automatically turn our voice memos into MP3s. As a matter of fact, some phones will squash an audio file into some weird file type that sounds like crap (I have a Samsung, and it does this!).
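
If your phone hands you one of those awkward formats, converting it yourself is straightforward. Here is one way in Python using the third-party pydub library (an assumption on my part; it also needs ffmpeg installed, and the file names below are placeholders):

    from pydub import AudioSegment

    # Load the voice memo (pydub/ffmpeg work out the format from the file)
    memo = AudioSegment.from_file("voice_memo.m4a")   # placeholder file name

    # Export as a widely playable MP3
    memo.export("demo.mp3", format="mp3", bitrate="192k")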

The most important steps for creating a demo for the above-mentioned purposes are making sure you have fine-tuned lyrics, melody, and song structure in a (mostly) set tempo. Following all of these steps will make you a dream client for your producer!

*If you want to pitch a song for use in film or TV (licensing/sync) then it needs to be a fully produced song. Do NOT submit demos to music libraries or music supervisors. They need finished products.

If you want to pitch your song to a music publisher, who in turn will pitch your song to artists, they will want full production in most cases. The artist may have it entirely reproduced, but you have to “sell” them the song. You want to present it in the best light possible. A demo would be needed for the creative team (producer, studio musicians, etc.) who will create your produced version for pitching.