Gain Without the Pain

 

Gain Structure for Live Sound Part 1

Gain structure and gain staging are terms that get thrown about a lot, but they often get skimmed over as being obvious, without ever being fully explained. The way some people talk about it, and mock other engineers’ choices, you’d think proper gain structure was some secret skill known only to the most talented engineers. It’s actually pretty straightforward, but knowing how to do it well will save you a lot of headaches down the line. All it really means is setting your channels’ gain high enough that you have plenty of signal to work with, without risking distortion. It gets discussed a lot in studio circles because it’s incredibly important to the tone and quality of a recording, but in a live setting we have other things to consider on top of that.

So, what exactly is gain?

It seems like the most basic question in sound, but the term is often misunderstood. Gain is not simply the same as volume. It’s a term that comes from electronics, where it refers to the increase in amplitude of an incoming signal as it passes through an amplifier stage. In our case, it’s how much we change our input’s amplitude by turning the gain knob. On analogue desks, that means engaging more circuits in the preamp to increase the gain as you turn (have you ever used an old desk where you needed just a bit more level, so you slowly and smoothly turned the gain knob and it made barely any difference… nothing… nothing… then suddenly it was much louder? That was probably the point where the next circuit in the preamp was engaged).

Digital desks do something similar using digital signal processing. It is often called trim instead of gain, especially if no actual preamp is involved. For example, many desks won’t show you a gain knob if you plug something into a local input on the back, because their only preamps are in the stagebox; you will see a knob labelled trim instead (I know these knobs are technically rotary encoders because they don’t have a defined end point, but they are commonly referred to as knobs. Please don’t email in). Trim can also refer to finer adjustments of the input’s signal level, but as a rule of thumb, it’s pretty much the same as gain. Gain is measured as the difference between the signal level when it arrives at the desk and when it leaves the preamp at the top of the channel strip, so it makes sense that it’s expressed in decibels (dB), which are a measure of ratios.
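To make the ‘ratio’ idea concrete, here is a minimal Python sketch (the voltages are invented for illustration, not taken from any particular desk) showing how a level change becomes a gain figure in dB:

import math

# Hypothetical example: a signal arrives at the preamp at 5 mV and leaves it at 500 mV.
level_in_volts = 0.005
level_out_volts = 0.5

# Gain in dB is 20 * log10 of the output/input voltage ratio.
gain_db = 20 * math.log10(level_out_volts / level_in_volts)
print(f"Applied gain: {gain_db:.1f} dB")   # prints 40.0 dB

In other words, turning the knob until the preamp output is 100 times the input voltage is the same as applying 40 dB of gain.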

The volume of the channel’s signal once it’s gone through the rest of the channel strip and any outboard is controlled by the fader. You can think of the gain knob as controlling input, and the fader as controlling output (let’s ignore desks with a gain on fader feature. They make it easier for the user to visualise the gain but the work is still being done at the top of the channel strip).

Now, how do you structure it?

For studio recording, the main concern is getting a good amount of signal over the noise floor of all the equipment being used in the signal chain. Unless you’re purposefully going for a lo-fi, old-school sound, you don’t want a lot of background hiss all over your tracks. A nice big signal-to-noise ratio, without distortion, is the goal. In live settings, we can view other instruments or stray noises in the room as part of that noise floor, and we also have to avoid feedback at the other end of the scale. There are two main approaches to setting gains:

Gain first: With the fader all the way down, you dial the gain in until it’s tickling the yellow or orange LEDs on your channel or PFL meter while the signal is at its loudest, but not quite going into the red or ‘peak’ LEDs (of course, if it’s hitting the red without any gain applied, you can stick a pad in. You might find a switch on the microphone, instrument or DI box, and on the desk. If the mic is being overwhelmed by the sound source it’s best to use its internal pad if it has one, so it can cope with the level and deliver a distortion-free signal to the desk). You then bring the fader up until the channel is at the required level. This method gives you a nice, strong signal. It also gives a strong signal to anyone sharing the preamps with you, for example a monitor desk sharing the stagebox or a multitrack recording. However, because faders are marked in dB, which is a logarithmic scale, it can cause some issues. If you look at a fader strip, you’ll see the numbers get closer together the further down they go. So if you have a channel where the fader is near the bottom, and you want to change the volume by 1 dB, you’d have to move it about a millimetre. Anything other than a tiny change could make the channel blaringly loud, or so quiet it gets lost in the mix.

Fader at 0: You set all your faders at 0 (or ‘unity’), then bring the gain up to the desired level. This gives you more control over those small volume changes, while still leaving you headroom at the top of the fader’s travel. It’s easier to see if a fader has been knocked, and to know where to return a fader to after boosting for a solo, for example. However, it can leave anyone sharing your gains with weak or uneven signals. If you’re working with an act you are unfamiliar with, or one that is particularly dynamic, having the faders at zero might not leave you enough headroom for quieter sections, forcing you to increase the gain mid-show. This is far from ideal, especially if you are running monitors, because you’re changing everyone’s mix without being able to hear those changes in real-time, and increasing the gain increases the likelihood of feedback. In these cases, it might be beneficial to set all your faders at -5, for example, just in case.
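To see why both approaches can land at the same mix level while feeding very different signals to anyone sharing the preamps, here is a rough Python sketch. The numbers are invented for illustration, and it deliberately ignores metering, headroom, and distortion; it simply treats each dB stage as additive:

# Hypothetical channel: the same final level reached two different ways.
source_level_db = -50.0                        # level arriving at the preamp (made-up figure)

# 'Gain first': plenty of preamp gain, fader pulled well down.
gain_first = source_level_db + 45.0 + (-15.0)  # +45 dB of gain, fader at -15 dB

# 'Fader at 0': less preamp gain, fader sitting at unity.
fader_at_zero = source_level_db + 30.0 + 0.0   # +30 dB of gain, fader at 0 dB

print(gain_first, fader_at_zero)               # both -20.0 dB out of the channel

The level leaving the channel is the same either way; what changes is how hot the signal is at the preamp, which is exactly what a monitor desk or multitrack split sharing those preamps will notice.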

In researching this blog, I found some people set their faders as a visual representation of their mix levels, then adjust their gains accordingly. It isn’t a technique I’ve seen in real life, but if you know the act well and it makes sense to your workflow, it could be worth trying. Once you’ve set your gates, compressors, EQ, and effects, and heard all the channels playing together, you’ll probably need to go back and adjust your gains or faders again, but these approaches will get you in the right ballpark very quickly.

All these methods have their pros and cons, and you may want to choose between them for different situations. I learned sound using the first method, but I now prefer the second, especially for monitors. It’s clear where all the faders should sit even though the sends to auxes might be completely different and change from song to song. Despite what some people might say, there is no gospel for gain structure that must be followed. In part 2 I’ll discuss a few approaches for different situations, and how to get the best signal-to-noise ratio in those circumstances. Gain structure isn’t some esoteric mystery, but it is important to get right. If you know the underlying concepts you can make informed decisions to get the best out of each channel, which is the foundation for every great mix.

 

Whose Job is It? When Plug-in Effects are Sound Design vs. Mix Choices.

We’ve reached out to our blog readership several times to ask for blog post suggestions, and surprisingly, this topic has come up every single time. It seems that there’s a lot of confusion about who should be processing what. So, I’m going to attempt to break it down for you. Keep in mind that these are my thoughts on the subject as someone with 12 years of experience as a sound effects editor and supervising sound editor. In writing this, I’m hoping to clarify the general thought process behind making the distinction between who should process what. However, if you ever have a specific question on this topic, I would highly encourage you to reach out to your mixer.

Before we get into the specifics of who should process what, I think the first step to understanding this issue is understanding the role of mixer versus sound designer.

UNDERSTANDING THE ROLES

THE MIXER

If we overly simplify the role of the re-recording mixer, I would say that they have three main objectives when it comes to mixing sound effects.  First, they must balance all of the elements together so that everything is clear and the narrative is dynamic.  Second, they must place everything into the stereo or surround space by panning the elements appropriately.  Third, they must place everything into the acoustic space shown on screen by adding reverb, delay, and EQ.

Obviously, there are many other things accomplished in a mix, but these are the absolute bullet points and the most important for you to understand in this particular scenario.

THE SOUND DESIGNER

The sound designer’s job is to create, edit, and sync sound effects to the picture.


BREAKING IT DOWN

EQ

It is the mixer’s job to EQ effects if they are coming from behind a door, are on a television screen, etc.  Basically, anything where all elements should be futzed for any reason.  If this is the case, do your mixer a favor and ask ahead of time if he/she would like you to split those FX out onto “Futz FX” tracks. You’ll totally win brownie points just for asking.  It is important not to do the actual processing in the SFX editorial, as the mixer may want to alter the amount of “futz” that is applied to achieve maximum clarity, depending on what is happening in the rest of the mix.

It is the sound designer’s job to EQ SFX if any particular elements have too much/too little of any frequency to be appropriate for what’s happening on screen.  Do not ever assume that your mixer is going to listen to every single element you cut in a build, and then individually EQ them to make them sound better.  That’s your job!  Or, better yet, don’t choose crappy SFX in the first place!
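For anyone who hasn’t come across the term, ‘futzing’ just means filtering and degrading a sound so it reads as coming through a phone, TV, or other small speaker. As a very rough illustration only (real futz plug-ins add distortion and other colour on top, and as noted above this processing belongs to the mixer), here is a minimal Python/SciPy sketch that band-limits a sound to a telephone-style bandwidth:

import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000                    # assumed sample rate
sfx = np.random.randn(fs)     # placeholder audio; substitute your own mono array

# Keep only roughly 300 Hz to 3.4 kHz, a telephone-like band.
sos = butter(4, [300, 3400], btype="bandpass", fs=fs, output="sos")
futzed = sosfilt(sos, sfx)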

REVERB/DELAY

It is the mixer’s job to add reverb or delay to all sound effects when appropriate in order to help them to sit within the physical space shown on screen.  For example, he or she may add a bit of reverb to all sound effects which occur while the characters on screen are walking through an underground cave.  Or, he or she may add a bit of reverb and delay to all sound effects when we’re in a narrow but tall canyon.  The mixer would probably choose not to add reverb or delay to any sound effects that occur while a scene plays out in a small closet.

As a sound designer, you should be extremely wary of adding reverb to almost any sound effect.  If you are doing so to help sell that it is occurring in the physical space, check with your mixer first.  Chances are, he or she would rather have full control by adding the reverb themselves.

Sound designers should also use delay fairly sparingly.  This is only a good choice if it is truly a design choice, not a spatial one.  For example, if you are designing a futuristic laser gun blast, you may want to add a very short delay to the sound you’re designing purely for design purposes.

When deciding whether or not to add reverb or delay, always ask yourself whether it is a design choice or a spatial choice.  As long as the reverb/delay has absolutely nothing to do with where the sound effect is occurring, you’re probably in the clear.  But you may still want to supply a muted version without the effect in the track below, just in case your mixer finds that the effected one does not play well in the mix.
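As a side note on the mechanics, the spatial reverb a mixer adds is often a convolution with a room impulse response. Here is a heavily simplified Python sketch of that idea (a toy synthetic impulse response, not any real room or plug-in), which also shows why it is easy to keep an unaffected dry version alongside the wet one:

import numpy as np
from scipy.signal import fftconvolve

fs = 48000
dry = np.random.randn(fs)                                             # placeholder SFX; use your own mono array
ir = np.random.randn(fs // 2) * np.exp(-np.linspace(0, 8, fs // 2))   # toy decaying 'room' response

wet = fftconvolve(dry, ir)[: len(dry)]       # reverb-only version
wet = wet / np.max(np.abs(wet))              # crude level matching
mix = 0.8 * dry + 0.2 * wet                  # the dry signal stays untouched in its own track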

COMPRESSORS/LIMITERS

Adding compressors or limiters should be the mixer’s job 99% of the time.

The only instance in which I have ever used dynamics processing in my editorial was when a client asked to trigger a pulsing sound effect whenever a particular character spoke (there was a visual pulsing to match).  I used a side chain and gate to do this, but first I had an extensive conversation with my mixer about whether he would rather I did this and gave him the tracks, or if he would prefer to set it up himself.  If you are gating any sound effects purely to clean them up, then my recommendation would be to just find a better sound.
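For anyone curious what that side-chained gate is doing in signal terms, here is a minimal numpy sketch of the concept (assumed mono arrays at a shared sample rate; the actual setup described above was built with plug-ins in the session, not code):

import numpy as np

def sidechain_gate(sfx, key, fs, threshold=0.05, attack_ms=5.0, release_ms=80.0):
    """Open the gate on sfx only while the key signal (e.g. dialogue) is loud."""
    # One-pole envelope follower on the absolute value of the key signal.
    attack = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros(len(key))
    level = 0.0
    for i, x in enumerate(np.abs(key)):
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    gate = (env > threshold).astype(float)      # hard open/close; a real gate smooths this
    n = min(len(sfx), len(gate))
    return sfx[:n] * gate[:n]

# Made-up example: a pulsing effect gated by half a second of 'dialogue'.
fs = 48000
pulse = 0.1 * np.sin(2 * np.pi * 6 * np.linspace(0, 1, fs)) * np.random.randn(fs)
dialogue = np.concatenate([np.zeros(fs // 2), 0.3 * np.random.randn(fs // 2)])
gated = sidechain_gate(pulse, dialogue, fs)

The key signal (the character’s dialogue) drives an envelope follower, and the pulsing effect is only allowed through while that envelope sits above the threshold, which is the same behaviour a gate plug-in with an external side-chain input gives you.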

PITCH SHIFTING

A mixer does not often pitch shift sound effects unless a client specifically asks that he or she do so.

Thus, pitch shifting almost always falls on the shoulders of the sound designer.  This is because when it comes to sound effects, changing the pitch is almost always a design choice rather than a balance/spatial choice.

MODULATION

A mixer will sometimes use modulation effects when processing dialogue, but it is very uncommon for them to dig into sound effects with this type of processing.

Most often this type of processing is done purely for design purposes, and thus lands in the wheelhouse of the sound designer.  You should never design something with unprocessed elements, assuming that your mixer will go in and process everything so that it sounds cooler.  It’s the designer’s job to make all of the elements as appropriate as possible to what is on the screen.  So, go ahead and modulate away!

NOISE REDUCTION

Mixers will often employ noise reduction plugins to clean up noisy sounds.  But, this should never be the case with sound effects, since you should be cutting pristine SFX in the first place.

In short, neither of you should be using noise reduction plugins.  If you find yourself reaching for RX while editing sound effects, you should instead reach for a better sound! If you’re dead set on using something that, say, you recorded yourself and is just too perfect to pass up but incredibly noisy, then by all means process it with noise reduction software.  Never assume that your mixer will do this for you.  There’s a much better chance that the offending sound effect will simply be muted in the mix.


ADDITIONAL NOTES

INSERTS VS AUDIOSUITE

I have one final note about inserts versus AudioSuite plug-in use.  Summed up, it’s this: don’t use inserts as an FX editor/sound designer.  Always assume that your mixer is going to grab all of the regions from your tracks and drag them into his or her own tracks within the mix template.  There’s a great chance that your mixer will never even notice that you added an insert.  If you want an effect to play in the mix, then make sure that it’s been printed to your sound files.

AUTOMATION AS EFFECTS

In the same vein, it’s a risky business to create audio effects with automation, such as zany panning or square-wave volume automation.  These may sound really cool, but always give your mixer a heads up ahead of time if you plan to do something like this.  Some mixers automatically delete all of your automation so that they can start fresh.  If there’s any automation that you believe is crucial to the design of a sound, then make sure to mention it before your work gets dragged into the mix template.

Mix Messiah – Leslie Gaston-Bird

Leslie Gaston-Bird is a freelance re-recording mixer and sound editor, and owner of Mix Messiah Productions. She is currently based in Brighton, England, and is the author of the book “Women in Audio”. She is a voting member of The Recording Academy and sits on several AES committees: Board of Governors, Awards, Conference Policy, Convention Policy, Education, and Membership; she also co-chairs the Diversity & Inclusion Committee with Piper Payne. She was a tenured Associate Professor of Recording Arts at the University of Colorado Denver. Leslie is also Co-Director for the SoundGirls U.K. Chapter and SoundGirls Scholarships and Travel Grants. She has worked in the industry for over 30 years.

Leslie has done research into audio for planetariums, multichannel audio on Blu-ray, and a comparison of multichannel codecs that was published in the AES Journal (Gaston, L. and Sanders, R. (2008), “Evaluation of HE-AAC, AC-3, and E-AC-3 Codecs”, Journal of the Audio Engineering Society, 56(3)). She frequently presents at AES conferences and conventions.

She has been working in the industry for over 30 years: 12 years in public radio, 17 in sound for picture, and 13 years as an educator (some of these years overlap). Her interest in sound for film was sparked by seeing Leslie Ann Jones on the cover of Mix Magazine in the 1980s. She attended Indiana University Bloomington and graduated with an A.S. in Audio Technology and a B.A. in Telecommunications. While she was at Indiana University Bloomington, she signed up for a work-study job as a board operator at the campus radio station, WFIU-Bloomington. This gave her the skills she needed for her first job, which was at National Public Radio in Washington, D.C.

Leslie worked at NPR from 1991-1995 as their audio systems manager. She recorded and edited radio pieces and did a ton of remote recording and interviews on DAT tape.  (Who remembers DAT tape?) From NPR she went on to work for Colorado Public Radio as their Audio Systems Manager.

Although Leslie loved working for both NPR and Colorado Public Radio, her passion was sound for film, and it was not easy for her to get her foot in the door.  It took her over four years to find someone who would take a chance on her. Her gratitude for this opportunity goes to Patsy Butterfield, David Emrich, and Chuck Biddlecom at Post Modern Company in Denver.

Leslie still works as a freelancer in Film Sound and has currently been working on several horror films and thrillers.  “For some reason, I keep getting horror films to work on. I recently did the sound for Leap of Faith, a documentary about The Exorcist which has been selected for the Sundance Film Festival in 2020. Also coming out is A Feral World, a post-apocalyptic tale of survival about a young boy who befriends the mother of a missing girl. It’s not a horror film but there are a few violent scenes. I also did sound for Doc of the Dead, a documentary about zombies and zombie culture. The plot for the current film I’m working on, Rent-A-Pal, is one I’m not at liberty to disclose, but suffice it to say there’s a pattern here. However, I have also done some great documentaries focused on peace and harmony, too! Three Worlds, One Stage featured a woman directing/producing team (Jessica McGaugh and Roma Sur of Desert Girl Films) and told the story of three people from different cultures who moved to the United States and choreographed a dance together, and Enough White Teacups (directed by Michelle Carpenter) which explores the winners of the Index design awards which recognize innovations designed to improve the human condition. Michelle also did Klocked, a story of a mother-daughter-daughter motorcycle racing team. I’m proud to have worked on these woman-powered projects.”

While Leslie was working at Post Modern at night, she was also pursuing a master’s degree, and her professors encouraged her to apply for a teaching position.  She did, and ended up as a tenured professor at the University of Colorado Denver, where she taught until 2018, when she relocated to Brighton, England. She was also encouraged by her professors, the late Rich Sanders and Roy Pritts, to join the AES, where she became heavily involved.

“It has opened so many doors. I met Dave Malham at an AES convention in San Francisco and he ended up being my sponsor for a Fulbright Award at the University of York, England. I have done lots with AES, from being secretary of my local section to chair, then Western Region VP and Governor. In 2016 Piper Payne helped me to start the Diversity and Inclusion committee, which we co-chair. We have come a long way, most recently partnering with Dr. Amandine Pras at the University of Lethbridge for their “Microaggressions in the Studio” survey. I’m really proud of the changes we have made; the AES Convention in New York was proof of our impact, with high visibility of women and underrepresented groups on panels, presenting papers and workshops, and even on the exhibit floor. In my 15 years of attending conferences I’ve never seen anything like it, and we received so much positive feedback. We have more work to do, but we have every reason to be proud of these accomplishments.”

In 2018, Leslie and her family relocated to Brighton, England, to be closer to her husband’s family (he is British), and it looks like they will be there for the foreseeable future. In addition to running her own business, her work with the AES, and writing Women in Audio (did we mention she is starting a Ph.D.?), Leslie is the mother of two children.  She balances it all by being highly organized and managing her time well. She says, “Somewhere I read that mothers of siblings are more productive. I think it’s because you have to be focused when you work. I think to myself, ‘okay, I only have 3 hours to do x-y-z’ and I’m on it! No time to procrastinate! It’s not easy but in ways, it’s better because you learn the value of budgeting time and focusing on the task at hand.”

Leslie has a book coming out in December, Women in Audio, and she shares the experience of writing it and why it matters:

“More than anything, I hope this book is a testament to my commitment and indebtedness to the women who have trusted me with their stories. I must say, I have been nervous at times because the weight of these stories is truly immense; women whose stories might otherwise go untold are brought to light here. I have found so many pioneering women throughout history: inventors, record producers, acousticians. I’ve tried to cover every field of audio I could. Altogether there are around 100 profiles. It’s really a must-have for women and girls seeking inspiration; for schools who want to add diversity to their curriculum (I took care to seek out women from all over the globe); for professionals who may think they’re the only woman in their area of expertise. I also talk about role models, mentoring, and networking. I’m really looking forward to sharing it with everyone!”

With a career spanning over 30 years, working in several roles as Educator, Mixer, Musician/Talent, Production Sound Mixer/Sound Recordist, Recording Engineer, Re-Recording Mixer, Researcher, Sound Supervisor, and Author, you would think Leslie would be ready to rest on her laurels. But no: in 2020, at the age of 51, she will begin her Ph.D. at the University of Surrey.

What do you like best about working in Film Sound?

What I like most about working on films is the meditative rhythm of finding and selecting sounds, shaping the sounds, and giving the film a sense of realism.

What do you like least?

The thing I like least is computer crashes. It’s the rise of the machines – they are training us.

What is your favorite day off activity?

Hanging out with my kids.

What are your long term goals?

I have written a book on Women in Audio, which I hope to follow up with another volume. There are so many amazing women in all sorts of audio fields, and it is an honor to share their stories. I would also like to continue supporting women to travel to and attend conferences with the fund I set up with SoundGirls.

What if any obstacles or barriers have you faced?

Moving to England and leaving a tenured position at a university was equal parts confidence and insanity. I have always believed in risks, but at age 50 I still feel the need to prove myself. I’m planning to start a Ph.D., but I have a feeling that women – more than their male counterparts – feel the need to seek higher academic qualifications in order to compete in the job market. It’s something I hope will change.

How have you dealt with them?

Well, by applying for a Ph.D.  I’ve been accepted at the University of Surrey and will start in 2020, the year I turn 51.

The advice you have for other women and young women who wish to enter the field?

Stay versatile and stay connected.

Must have skills?

You can always train your ears and learn the equipment, but the most valuable skills are creativity, diplomacy and client service.

Favorite gear?

Loudspeakers: Genelec, PMC. Preamps: Grace, Neve 5012.

Parting Words:

I suppose one thing I’d like readers to know about is a moment I had recently, standing in my dining room, looking over some pictures that I had received from a man named Dana Burwell. The pictures were of Joan Lowe, a recording engineer who worked on some feminist albums in the 1970s (The Changer and the Changed, among others). Joan Lowe did not have family, and these pictures were entrusted to me for the purposes of writing the book, Women in Audio. The only reason Dana knew me was because I had reached out to Joan in November to interview her for the book. Joan had emailed me answers to my questions but passed away in February. If I hadn’t been in touch with Joan, I wonder what Dana would have done with those photos.

So there I was, standing in the living room, with pictures of a very friendly woman who I just met, who shared her story with me – and who trusted me with her story – and who passed away a short while later. I now had the duty to share her story.  It’s a responsibility I haven’t taken lightly. On that day it happened to be sunny. I looked up at the sky, and thanked Joan, with an expression on my face that was a combination of awestruck and joyful. I continued writing with a renewed passion that day. Something else in me changed, too, but I’ll leave that for another interview.  In the meantime, it’s an honor and a privilege to bring these stories to our audio community.


AI Composition Technology

 

It feels like technology is developing at an incredible rate with every year that passes, and in the music world, these changes continue to push the boundaries of what is possible for creators as we approach 2020. Several companies specialising in AI music creation have been targeting composers lately, headhunting and recruiting them to develop the technology behind artificial composition. So who are the AI companies, and what do they do?

AIVA

One company called ‘AIVA’ has been the most visible to me this year; they have reached out to recruit composers, stating they are ‘building a platform intended to help composers face the challenges of the creative process’.  Their system is based on preset algorithms, simplified and categorised by genre as a starting point.

I set up an account to experiment and found it to be quite different from what the demo on the landing page led me to believe. The demo video shows the user choosing a major or minor key, instrumentation, and song length to create a new track, and that is it – the piece is created! The playback has overtones of the keyboard demos of my youth in its overall vibe; however, I have to admit I am genuinely impressed with the functionality of the melody, harmony, and rhythms, as well as the piano-roll MIDI output, which is practical for importing into a DAW – it’s really not bad at all.

The magic happens while watching the rest of the demo and seeing how the composer modifies the melody so that it makes slightly more technical sense and sounds more thought-out and playable; they shift the voicing and instrumentation of the harmony and add their own contributions to the AI idea. I have to admit that I have similar methods for composing parts when inspiration is thin on the ground, but my methods are nowhere near as fast or slick, and I can completely see the appeal of AIVA being used as a tool for overcoming writer’s block or getting an initial idea to develop quickly.

On the argument against, I was pretty stunned by how little input was required from the user to generate the entire piece, which has fundamentally been created by someone else. The biggest musical stumbling block for me was that the melodies sounded obviously computer-generated and a little atonal, not always moving away from the diatonic in the most pleasing ways; it transported me back to my lecturing days, marking composition and music theory work from students learning the fundamentals.

In generating a piece in each of the genres on offer, I generally liked most of the chord progressions and felt this was a high point that would probably be the most useful to me for working speedily, arranging and re-voicing any unconvincing elements with relative ease. While I’m still not 100% sure where I stand morally on the whole thing, my first impressions are that the service is extremely usable, does what it claims to do, and ultimately has been created by composers for those who need help to compose.

Track 1 – https://soundcloud.com/michelle_s-1/aiva-modern-cinematic-eb-minor-strings-brass-110-bpm

Track 2 – https://soundcloud.com/michelle_s-1/aiva-tango-d-major-small-tango-band-90-bpm

Amper

‘Amper’ is a different yet interesting AI composition site that assists in the creation of music, and the company states that the technology has been taught music theory and how to recognise which music triggers which emotions. The nerd in me disagrees with this concept profusely (the major-key ukulele arrangement of ‘Somewhere Over the Rainbow’ by Israel Kamakawiwo’ole is just one example of why music is far more complex than assumptions about key and instrumentation). However, looking at the target market for Amper, it makes far more sense – they provide a service primarily aimed at non-musicians who otherwise face the prospect of trawling through reams of library music to support content such as a corporate video. In a similar vein to AIVA, Amper creates fully formed ideas to a brief of set parameters such as length and tempo, with the addition of being able to incorporate a video into the music creation stage, making this a really practical tool for those looking for supporting music. I loaded a piece from the given options and found it to be very usable and accessible to non-musicians. While the price tag to own and use the pieces seems steep, it’s also reassuring that the composers should have been paid a fair fee.

IBM

Similarly, IBM has created a compositional AI named ‘Watson Beat’, which its creator Janani Mukundan says has been taught how to compose. The website states:

“To teach the system, we broke the music down into its core elements, such as pitch, rhythm, chord progression and instrumentation. We fed a huge number of data points into the neural network and linked them with information on both emotions and musical genres. As a simple example, a ‘spooky’ piece of music will often use an octatonic scale. The idea was to give the system a set of structural reference points so that we would be able to define the kind of music we wanted to hear in natural-language terms. To use Watson Beat, you simply provide up to ten seconds of MIDI music—maybe by plugging in a keyboard and playing a basic melody or set of chords—and tell the system what kind of mood you want the output to sound like. The neural network understands music theory and how emotions are connected to different musical elements, and then it takes your basic ideas and creates something completely new.”

While this poses the same arguments to me as AIVA and Amper with its pros and cons, it’s clearly advertised as a tool to enhance the skills of composers rather than replace them, which is something I appreciated once again and I am curious to see where IBM takes this technology with their consumers in the coming years.
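As a small aside on the octatonic example in that quote, here is a quick Python sketch (my own illustration, not IBM’s code) showing how the scale is built by alternating whole and half steps:

# Build an octatonic (whole-half diminished) scale from a root note.
root = 60                           # MIDI note number for middle C
steps = [2, 1, 2, 1, 2, 1, 2, 1]    # whole, half, whole, half...

scale = [root]
for step in steps:
    scale.append(scale[-1] + step)

print(scale)   # [60, 62, 63, 65, 66, 68, 69, 71, 72]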

Humtap

The last piece of software I tried myself was an app downloaded onto my phone called ‘Humtap’, which takes a slightly different approach to AI for music composition. In a lot of ways, this was the least musical of all the software, yet conversely, it was the only one I tried that required something of a live performance – the app works by having you sing a melody into the phone and choose a genre. I hummed a simple two-bar melody and played around with the options for which instrument played it back and where the strong beats should fall in the rhythm. The app then creates a harmonic progression around the melody, plus a separate B section, and this can all loop indefinitely. It’s really easy to experiment, undo, redo, and intuitively create short tracks of electronic, diatonic-sounding music. This app by its nature seems to be aimed at young people, and I felt that was pretty positive – if Humtap works as a gateway app in getting youngsters interested in creating music using technology at home, then that’s a win from me.

There’s always a discussion to be had around the role of AI in music composition, and I suspect everyone will have a slightly different opinion on where they stand. Some fear the machines will take over and replace humans, others argue that this kind of technology will simply mean everybody has to work faster, and some fear it will open up the market to less able composers at the mid and lower end of the scale. On the other side, we have to accept that we all crave new, better sounds and sample libraries to work with, and that the development of technology within music has been responsible for much of the good we can all agree has happened over the last five decades. My lasting impression from researching and experimenting with some of these AI tools is that they are useful assets to composers, but they are simply not capable of the same things as a human composer. To me, emotion cannot be conveyed in the same way, because it needs to be felt by the creator, and ultimately, music composition is far more complex and meaningful than algorithms and convention.

The Basics of Sound

We all like to pretend that sound is a dark art that only a chosen few understand and practice. However, this dark art is not just for the chosen few; even if you do not want to practice it full time, it is useful to know about.

Sound is physics, we can all agree on that. But you do not have to be good at math or be a ‘techy person’ to understand the basics of sound. To understand sound, all you need is a bit of common sense. Being able to work out how A is connected to B, that is it!

What is sound?

Easy, right? Something vibrates and sends that vibration through the air to our ears; it really is not more complicated than that. Sound travels from A, the object making it, to B, our ears.

We like to think that things are more complicated than they actually are. But with all things tech, a human has designed and invented it. So if we stop ourselves for a minute and ask, ‘hang on, what would the most logical solution be?’, we will often find we already know the answer. All things tech have a signal flow, and that is what you need to figure out: how to connect A to B.

When we amplify sound, it works in a similar way. But rather than transmitting the sound over just air, we transmit it via microphones & cables, i.e., metal! We transmit the sound from the stage to the receiver, which will be the mixing desk. From the mixing desk, it goes out to the speakers, which transmit the sound to our ears in the audience. That is a simple signal flow.

Why is it good to know about the signal flow? If you regularly perform live or record at home or in studios, how many times have you encountered issues? I’d say that every session or live gig has technical issues that usually come down to signal flow. You’ll solve things quicker if you know what might cause the issue by tracing the signal flow.
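To make that concrete, here is a tiny Python sketch (the chain is just the example described above, not a universal template) that writes the signal path down in order, which is also a sensible order for checking where a fault might be hiding:

# The live chain described above, in order from source to listener.
signal_chain = [
    "instrument or voice on stage",
    "microphone or DI",
    "cable and stagebox",
    "mixing desk channel",
    "desk output",
    "amplifier and speakers",
    "audience's ears",
]

# No sound at the end? Walk the chain from the top and check each link in turn.
for step_number, stage in enumerate(signal_chain, start=1):
    print(f"{step_number}. Check: {stage}")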

What about me and/or my instrument sound?

It surprises me that a lot of the musicians and artists that come my way have very little knowledge about their sound and how it is being produced, but more importantly, how they want it to sound to other people.

The only instrument I know how to play is the piano. But I know how I want drums to sound, how to reskin them, and how to tune them. Perhaps that is the advantage of having worked with so many drum kits: I know what a good kit sounds like, but more importantly what a bad kit sounds like!

Listening is like breathing: we often forget that we are doing it. We just do! It is the same with actually tuning in to something. Pay attention at a gig: what does it sound like? What is a good sound?

What do I want to sound like?

Be curious! 

Ever thought about how something is done? Google it! Read and learn about it; knowledge is power!

As I mentioned with drum kits, I don’t play drums, but I was curious to know how it all works. What are the differences, why do they sound so different, why do they need so many cymbals, etc.

As passionate as I am about talking about sound, most full-time musicians will talk just as passionately about their instruments. They have perfected their skills and put so many hours into practice that they will be delighted to finally tell somebody about it! Ask away!

Communication:

It goes both ways: whether you are a sound technician or a musician, knowing what sound you like makes it easier to start the conversation with each other. We will always do better working as a team rather than as separate entities, and we need to be able to communicate with each other.

 

Info Hoarders

 

Many of us have worked in the live event or recording industry for years, and have no issues sharing our knowledge and experiences with others. The passion that surrounds this career is what keeps us motivated and creates incredible mentors and teachers.

There is another portion of the audio engineering industry that keeps its techniques to itself out of paranoia. They may refuse to share a technique or even explain to somebody what they’re doing because they’re afraid of that person taking their job. As an instructor, I have always been open with my students about my work, resources, and assets. If I create a show file or I show them a technique, I am doing it so that I can share knowledge with them, and then they take it and make it their own. I’m not worried that those students are going to take my job.

The competitiveness of our industry is ever-present and sometimes aggressive. Of course, you can find any number of people to fill a position who technically have the same skill set, but that does not make a person merely disposable. When a production company makes it known that it sees people that way, it creates a sense of urgency and paranoia about keeping your job. At times this has led me to feel replaceable or irrelevant to a show. That mindset is toxic on both sides and can become all-consuming. I have seen people intentionally building systems or show files that are impossible for anyone else to understand, forcing their own security in that position. They are hoarding information, possibly for reasons of self-preservation. A toxic work environment creates these situations, and being fired from one could be in your best interests in the long run. It sucks when it happens, though, especially when there’s no logical reason you were dismissed.

We are not seamlessly replaceable, especially when you look at your crew as humans rather than robots programmed to accomplish their tasks. I may not be special, but I’m certainly not dispensable. My ability to handle emergencies, solve problems intelligently, or even my willingness to help others are skills that others may not possess. More important than knowing the basics, or even being a very skilled engineer, is being a person who can work as part of the team. That is preferable to a condescending jerk who hovers over their work, refusing to collaborate and hoarding resources.

We are living in this amazing moment where almost everything is accessible and often free. Humanity seeks to make a connection with others, and when we’re passionate about a subject, we can’t wait to share it. Becoming a dragon-like being with a hidden cache of information and no intention of sharing it is greedy. The people who behave this way, and the people who make these creatures, should be held accountable for their toxicity. I’m not sure how to do this, other than being one of the helpful and supportive resources for my students and colleagues. Access to a network of supportive people is invaluable. We’re not meant to be on our own islands; this is a collaborative business. All of us at SoundGirls are forming these little alliances in support of the greater good. Connecting our islands through sharing information and mentorship is a huge step toward progress, and I am so happy to be part of this group.

 

Love for Chaos: Willa Snow, Live Sound Engineer

Willa Snow is an independent FOH engineer, monitor engineer, and system tech based in Austin, TX. While she has been working in live sound for just over three years, she is already filling up her resume.  She regularly works with Texas Performing Arts, Stage Alliance, and C3 Presents, amongst others. She works as a board op/system tech for Bass Concert Hall, as a monitor engineer at Historic Scoot Inn and Emo’s, and as a FOH/MON engineer for several other clubs in town. She toured with the Grammy-nominated choir Conspirare during the fall of 2018 for their piece, “Considering Matthew Shepard,” as an assistant stage manager and general audio tech.

Before Willa discovered the world of audio, she was pursuing a career as a singer/songwriter. She was playing coffee shops and small venue gigs at the age of 15, and she says “despite that I had no clue about the world of audio, all I knew was that I had to sing into a mic nice n loud.  I don’t recall ever having a monitor mix, or even an engineer introduce themselves.”

She would enroll in college with the intention of going into performance. This was until she was required to take a recording technology course for her major.  “That year I fell HARD for working in the studio. I loved how many variables there were to play with, and all the different directions that you could take a piece of music in. The creative process was suddenly busted wide open for me, and I couldn’t let that go, so I switched my focus to engineering. My decision to change solidified when I found out how few women there are on this side of the industry… less than 5% is just B.S! I became even more impassioned when I started working in live sound at 23 and discovered all the directions that you could take that path in, and all the wonderful types of music and performance that you’re exposed to! Since being a youngin singing acoustic pop-punk in run-down venues in Silicon Valley, my instrument has changed from a guitar and my voice to a console and mics. Each show that I work is a chance to explore and express my musicality alongside the incredible talent that I get to work with here in Austin, TX.”

Willa started out working in recording studios while in college as a ProTools op and audio engineer. She has a BA of Contemporary Music from Santa Fe University of Art and Design, where she was trained in various instruments, music theory, orchestration, advanced vocal techniques, western and world music history, and basic business management, as well as studio production. In contrast, all of her live sound knowledge has been developed on the job and through independent research on various subjects.

After graduating from college, she moved to Texas and ended up taking a job in live sound as an A2 for a small local production company, where she was taught how to build PAs and tune systems. While there, she soaked up everything she could learn, and said she “initially hated live sound! In comparison to the studio, it’s loud, chaotic, and terrifying. Everything’s happening all at once, and almost nothing goes according to the original plan. I must have developed Stockholm Syndrome because now I can’t get enough of it! I’ve learned to love the fluidity and chaos, and I’m constantly finding myself challenged to grow and inspired by the techs that I encounter and the artists that I get to work with.”

Like many of us, when Willa first started running sound, she was terrified of failing. She put a lot of pressure on herself and says she felt that, “as a woman, people were going to be looking at me as an example of all women engineers. If I wasn’t 100% absolutely perfect, then it would be reflected 100x worse on me than it would a male in my position, and it would be a stain on the reputation of women engineers the world over. I put all that pressure on myself, despite having only just begun my journey into live sound!”

Then  Willa started to notice something… “in my conversations with more experienced engineers and hearing their origin stories, they all said the same thing: they were TERRIBLE when they were starting out! I heard many tales of butchering mixes and struggling to make the broken gear work in dirty clubs. I finally realized that in order to grow and move past this mentality, I needed to give myself permission to fail. So, before every gig, I would have the following conversation with myself: “let’s go out there and SUCK! Let’s have the worst mix ever, and get shamed out of the club! The band is going to hate everything you do, and the gear’s going to catch fire, and it’s going to be GREAT!” And strangely, that worked for me. Giving myself the space to be an inexperienced failure allowed me to embrace that risk, and to go in with a clear head and tackle the show. At the end of the day, we’re all human, and humans mess up and make mistakes, and that’s okay; the key is how you recover from that mistake. Do you own it, fix what needs fixing, and learn from it? Or do you wallow? After a few months, I didn’t need that non-pep pep talk anymore. Now I just walk in with my shoulders back and a big, fat smile on my face.”

One of Willa’s Early Failures

Early on in my experience (I think it was my second gig), I had a show where All The Things Went Terribly. I was given an incorrect load-in time; I hooked up the mains wrong, my iPad mixer was futzing out, the stage sound was terrible, the FOH mix was REALLY bad… so bad in fact that when the singer of the band greeted the crowd and asked, ”how’s it sounding out there?” the audience responded with, “clap… clap… crickets…” An audience member standing near me even leaned over and asked me, “it doesn’t sound good, does it?” I could do nothing but admit that indeed it did not. Oh, it was so embarrassing!! Thankfully the band was very kind and even tipped me at the end of the night.

As soon as I got home, I called up one of my sound buddies and took him out for beers. I walked him through the entire gig, top to bottom, and asked him for some guidance on the mix, and for advice on how to do things better.

A few weeks later, I got the opportunity to mix the same band again. I made sure to get to the venue extra early, set up and rang out the stage as cleanly as I could, incorporated some suggestions my friend made into my mix and remembered exactly how the band set up the stage and where they needed lines. The band showed up, and this time, All The Things Went Smoothly. Stage and FOH sound were vastly improved, the band had a great time, the audience had a great time, they even gave me a ‘thank you’ shout out!

As Willa continues to learn and grow, her long-term goals are to become a touring FOH /Monitor Engineer and System Tech.

What do you like best about touring?

I like hearing how the sound of the music changes in different venues, and the constant momentum of traveling from place to place.

What do you like least?

I miss my loved ones and my own bed while I’m away.

What is your favorite day off activity?

My favorite day off activities are resting and taking care of my plant collection. It’s lovely to have a period of quiet and calm after the storm.

What are your long-term goals?

I have several interests that I’m avidly working towards, my main ones being touring as a FOH and/or MON engineer, and/or as a system/PA tech.

What, if any, obstacles or barriers have you faced?

I’ve been turned down for a tour because of my gender, and I deal all too often with unwarranted attention and sexist comments.

How have you dealt with them?

It depends on the situation. For the tour, I let it go and decided that wasn’t a tour I wanted to be involved with anyway. I turned to the SoundGirls forum for advice when going through that process, and deeply appreciated the support and words of encouragement that I received from the group. When dealing with sexist comments on the job, sometimes I’ll ignore them; other times I’ll confront them head-on and shoot something back (ex: if I get called honey, I’ll call them sweetie. Stops that sh** real fast.)

The advice you have for other women and young women who wish to enter the field?

Learn as much as you can from every situation and interaction, and ask as many questions as you can at appropriate times. Don’t be afraid to work hard, and allow your enthusiasm to drive you. Always keep an air of professionalism at every gig, no matter how big or small. Say yes to every challenge and opportunity possible. Be authentically who you are and embrace that; faking it until you make it is not a thing. It’s okay to stand up for yourself when you are being mistreated; no amount of abuse is worth your time or mental health.

Must have skills?

Have a running knowledge of basic signal flow, mic placement, gain structure and EQ techniques, and learn to embrace failure (how else are you going to learn?). Be kind and cool to those you interact with, and keep your connections positive as much as possible.

Favorite gear?

Work gloves, c-wrench, and my Shure SE846 IEMs. An Allen & Heath desk is always preferred.

 

 

 

Self-care: Develop a Routine That Works For You.

 

Self-care is a trending phrase and lifestyle choice that many people participate in, designed to create a healthy environment for yourself so you can deal with the various pressures in your life.

Personally, I think self-care is a healthy practice, but for people in our industry it may look drastically different than it does for others. Advice and health blogs suggest self-care steps such as sleeping when you are tired, meditating daily, meal prepping, exercising for an hour every day, eating right, and more.

All great ideas, but not always practical for people in our industry. How can we practice self-care when working extremely long hours, living on buses, jumping from show to show, meeting recording deadlines, and more?  Here are some ideas that you can fold into your daily self-care routine, or use to start building one.

Drink water – Start your day off with a large glass of water.  It’s easy to do wherever you are and a healthy way to get any day started on the right foot.

Bring your favorite snack – Already know your day is going to be long with limited breaks? Grab a few of your favorite snacks: they’ll keep you from getting hangry and give you something to look forward to in your busy day.

Exercise – It doesn’t have to be an hour; it can be 10 minutes. Challenge your coworkers to a plank contest. Develop a 15-minute routine you can do anywhere, consisting of push-ups, sit-ups, squats, and jumping jacks.

Wear one of your favorites – A favorite shirt, shoes, socks, or even your favorite necklace. Wear it. Frequently we wear black, and that’s ok, but no one says you can’t wear a cute pair of earrings with your black clothes. Wear something you enjoy and do it for you.

Journal – When your day is done, instead of scrolling social media until you fall asleep, write about your day. Journal your thoughts and feelings, let some of those bottled-up emotions out, leave it on paper, and then move forward.

Take a minute for yourself – It’s ok to take a minute for yourself even on an extremely hectic day. Step away, regain your thoughts, make an action plan, and move forward. In the long run, taking that moment can help you so much more than not. If you absolutely can’t do this, then find someone who can help you. Send them for your favorite drink or to grab a plate from catering for you. Take that moment to make the rest of the day better.

Speak positively to yourself – We tend to be hard on ourselves and even worse on tough days. Change your inner voice and speak positively to yourself. Work on developing a new perspective to notice positive things first, then address the negative things striving to make them positive.

Take a moment to permanently solve a problem – If you keep running into the same issue as you jump from show to show or session to session, instead of spending 10 minutes temporarily fixing it only to do it again tomorrow, take the hour to fix it permanently. This will save you frustration and annoyance each day, and that is self-care too. Finding permanent solutions to daily issues makes every day easier. It frees up time and energy for anything else that may pop up, or could actually allow you to take that deserved break.

If you find you cannot fit any, or enough, self-care steps into every day, then make sure to set aside a day or two for yourself each month. Take yourself on a movie date, shut off all electronics for a day, read something for fun, cook for yourself. Find something you enjoy that provides satisfaction and do it. Taking care of yourself means you will be able to continue taking care of everything and everyone else you encounter each day. Self-care will look and feel different for everyone. Find two or three things that work for you, so you can handle our crazy industry a little bit better every day.

 

Sonic Memories

At Boom Box Post, we try to take the time to meet with nearly everyone who asks, be it for an interview or to give career advice to a young editor.  Among the most inspiring parts of interacting with those who are new to the profession are the questions they pose that cause us to look at our job again with fresh eyes (and ears!).  One of these questions, posed to me by a recent audio school graduate, was, “What should I do to prepare myself to be an editor?”

My answer is, “Start listening.”

Unlike visuals, of which we take constant notice, sound is often an unnoticed undercurrent in our lives. Ask yourself: when you tell a story to a friend, do you describe what you saw or what you heard?  Most likely, you focus on the visuals.  Now think about how hearing a sound from your childhood can suddenly thrust you back to the emotions of that time in your life.  Sound can be an incredibly powerful storytelling device.

To give you an example, I’d like to share one of my favorite memories from childhood: going camping on an isolated lake in northern Wisconsin with my family. I’d like to tell the first part with visual descriptions and the second with sonic descriptions.  Think about which one you find yourself connecting to more.

THE VISUAL TAKE

When I was young, we often went camping at a lake in northern Wisconsin.  My father always said, “It’s not a vacation if I see anyone else.”  So we drove for hours to part of the north woods, parked in a remote lot, and then carried our gear and canoe along a path to a little piece of beach no wider than a child’s arm span and launched out into the lake.  From there, we paddled to our campsite which was accessible only by water.

Once we had settled in, we spent most of the days by ourselves.  My father wandered off amongst the trees to take photos of butterflies, mallards, or sometimes us.  My mother took care of the camp, cooking the meals and washing dishes, and my brother and I played in the forest.  Each evening, we shared a special moment together: a canoe ride at sunset.

THE SONIC TAKE

As the sun dipped lower in the sky and began to cast a shadow over the lake, the sound of the forest suddenly turned.  The lively birds and cicadas of the day ceased and a period of pure silence washed over us.  Our canoe scraped against the grit of the shore as we pushed it into the water, then only the sound of the tip of the bow cutting the water could be heard.  We paddled into the center of the lake to the steady beat of oars splashing into calm water, and then stopped and just sat, letting the silence envelop us.  After a while, we heard what we were waiting for:  a loon.  It skimmed across the water, letting loose its lonely cry, and we heard this solemn sound echoing off the banks and folding back on us like an origami bird.

SPINNING STORIES FROM SONIC MEMORIES

When sound enters the equation, don’t you feel not only a better understanding of the events of the story but also an emotional connection to it?  This is what I attempt to achieve in each project.  As sound editors, it is not just our job to look at the screen, and place the sound for the action we see (door open, door close, car ignition on, gear shift), but also to think about what emotional state the story asks of the viewer.  It is our job to connect our personal sonic memories to those emotions and use them to trigger the right feeling for the audience.  For example, whenever I’m faced with a scene that asks the audience to appreciate a lonely expanse of wilderness, I add in a loon.

THE LISTENING PROJECT

Now that you understand the importance of sound in storytelling and how to use it to make emotional connections for the viewer, there’s only one thing left:  start listening.  As you go about your daily life, start taking note of what you hear.  This will strengthen your ability to draw on these sounds as you edit.  Think about it the next time you go for a hike, enjoy dinner downtown, or attend a party with friends.

QUESTION: WHAT ARE YOUR FAVORITE SONIC MEMORIES FROM CHILDHOOD?  

Mine are: the loon from my story, the sound of a foghorn coming through my window on a hot summer night, and the perfect hollow pop that a tennis ball makes as it hits a racquet. 
