Empowering the Next Generation of Women in Audio

Join Us

What is a Sound Design Associate?

A Sound Design Associate works closely with the Sound Designer and Director, undertaking much of the day-to-day work. This can include finding music and sound effects specified by the Sound Designer and Director, maintaining the paperwork, and assisting the Sound Designer in cuing the show. The Sound Design Associate may also work with the Sound Board Operator, providing instruction and assistance in making changes to the cues during rehearsal.

Each designer has their own way of doing things, and being able to be the associate for more than one Sound Designer has been an invaluable education. It puts me in a unique and privileged position, as I get to see different techniques and how they are used by excellent designers. Did I mention I also get paid? It’s interesting to see how another designer programs a cue list, sets up a system, or interacts with the rest of the design team.

The role is very different depending on the designer I’m working with. Sometimes I handle all the paperwork and translate the designer’s ideas into a spec sheet for a hire company. Sometimes I take care of the SFX while the designer looks after the system and the band, or the reverse: I tune the system and work with the operator on the desk while the designer creates the soundscape.

I have recently been the Sound Design Associate for John Leonard. I’ve been John’s Sound Design Associate on more than one occasion, and it is always an excellent opportunity to learn from someone who is well respected and has been doing this a long time. 

My Approach to being a Sound Design Associate

I am usually hired as an associate when a Sound Designer I have worked with before has production periods that overlap, or when there is a big project that needs to be produced in a short time frame. Hiring an Associate lets a designer take on more than one production; the Associate acts as their representative and manages the designer’s interests in their absence.

There may be days of tech or previews that the Designer cannot attend, and I will represent the Designer. In this case, John needed someone to look after the show from Preview 1 to press night. I went to a couple of run-throughs and sat with John during tech to get a feel for his and Iqbal Khan’s (the director’s) vision for the production. I then took over the lead after the first preview.

As an Associate, I think it is important to remember this is not my show. I may have artistic input, and if the director asks for something, I will work hard to make it happen. But I always keep the designer aware of any changes I have made. When working with John, he always gives me a free hand, but I remember that I am representing the reputation of another designer as well as my own.

For the recent production of Macbeth, there were a lot of changes after the first preview. John trusted that I would make the necessary changes and also keep him in the loop, providing detailed notes. Although being an associate isn’t the lead role in the design process I find learning from and being exposed to different techniques a deeply satisfying experience.

More on the job duties of a Sound Design Associate

The Sound Design of Brideshead Revisited

Brideshead Revisited is a co-production between English Touring Theatre and York Theatre Royal. The play reopens York Theatre Royal after its refit and will then tour theatres around England.

Brideshead was adapted for the stage by Bryony Lavery and is based on the novel by Evelyn Waugh, first published in 1945. Brideshead Revisited is set around the life of an aristocratic family in England between World War I and World War II. The play is presented from the point of view of Charles Ryder, an army officer in World War II. It opens with Charles remembering the events around the country seat of Brideshead, and it is his memory of events that the play centers around.

Here are some things we worked into the sound design.

Memory is a major theme of the play; in design meetings we discussed how memories are triggered and what happens in your mind at the time. We talked about the language of memory portrayal in film, which often uses reverb and the sense that memories approach from a distance. I knew that would mean playing with a sound heavy with reverb that gets closer and dryer, landing a moment before the action on stage takes up the dialogue or sound in real time.

A lot of the creative team had memories from childhood that were attached to certain sounds, and birds seemed to dominate this. I grew up in the East End of London, and I have memories of lying in bed in the early morning listening to seagulls. (The sound of London birds is the sound of seagulls for me. I know they don’t often make it into the collective agreement of how London sounds, but if you are within a mile of the river then there are seagulls.) So I knew birds would feature in the sound design. Memory in relation to sound often revolves around phrases that we play to ourselves over and over in our heads. Doubling of dialogue was also something I thought we could work into the sound design.

We wanted the process of storytelling to be visible to the audience; the cast handles the scene changes on stage, setting up and changing the props. They also set microphones on stage and perform some on-stage Foley.

Alcohol is a big part of the first section of the play, and we worked on amplifying the sound of wine being poured to emphasize that point.


We decided to amplify the sound of a projector vs. working to silence it and cover it with a sound effect.

We used radio mics, but not every cast member had a dedicated mic. Ryder, who does much of the narrating/remembering in the play, wore a radio mic. His mic was used to change the tone of his narration and to put him in a different space for those parts of the play, rather than for amplification. I was using it differently than I would use a radio mic for musical theatre. If you can imagine a BBC radio drama announcer, that’s the kind of sound I was going for.

Some of the play took place in Venice in an old house. As this was a static talking head moment of the play, I used one of the two 414s on a stand to pick up the voices and send it to some gentle short reverb to help give the sense of being in a big stone house.

Scene changes were marked with music and soundscapes were woven together. The composer (Chris Madin) and I worked closely together to get the tone of these transitions right and to carve out or give room to the dialogue that surrounded the transitions.

The plot of Brideshead takes us to Oxford, London, a country house in Venice, Manhattan, and aboard a ship. The moments on board the ship were potentially challenging; there was a lot of dialogue in this scene as well as a big storm, and I had to make sure the storm sound effects left enough room for the dialogue.

There was a division in the way sound effects were reproduced compared to the music in the show. The SFX tended to come from onstage SFX speakers, and the FOH system was primarily reserved for music playback.

The pre-show playback was a selection of pre-recorded excerpts of dialogue from the cast. They had been asked to mull over lines of dialogue they thought were particularly representative of their characters. I used these lines in the pre-show to create a repeating, slowly building round of whispered memories. The pre-show builds and builds and culminates in a sudden cutoff that leaves Ryder at Brideshead at the end of World War II.

I was fortunate to work with the company during rehearsals. We were able to discover things about the play in a much more cohesive way than if I had joined the production just for technical rehearsals. It was great to be able to play sound and music in the rehearsal room; it helped the cast build a relationship with the soundscape and helped us integrate the use of microphones into the play. There were a few moments in the play of whispered conversations that the rest of the characters weren’t supposed to hear. They obviously needed to be heard by the audience, so these were mostly spoken into a couple of 414s and routed to FOH.

One of the best discussions I had in my early days as a sound designer was with a vocal coach. We used to discuss listening to the whole play rather than just the elements of the sound design. I found this useful on this production, where the amplified and un-amplified voices had to be woven together; although they needed to highlight different moments in the play, they all also needed to sound like they were part of the same world.

 

QLab: An Introduction

 

QLab is my software of choice for playback in musicals and plays. It is a Mac-based piece of software that I have found to be robust, flexible, and quick to program. If you need a playback engine for music tracks or sound effects and you have a Mac, then it’s absolutely worth looking at.

When you first open up QLab, you will see an untitled workspace (Fig. 1).

Fig. 1

In order to optimize things while you are programming, it’s wise to set a few preferences. These can be changed later, but I’ll share the way I set things up. At the bottom right-hand side of the workspace is the icon of a cogwheel. Click that and you get to play with the settings behind the screen. (Fig 2)

Fig. 2

If you click on the audio menu on the left-hand side, it will take you to the section where you can set the output device, label the outputs, and set the levels for any new sound cues. I set new cues to be silent with all the cross points in so that I can fade them up to set level rather than fade them down. Then I select the group menu and select the “Start all children simultaneously” option. Clicking “Done” will flip the screen back to the workspace.

Getting audio in is as simple as drag and drop. When you drop something in, a new set of tabs appears at the bottom of the workspace in the inspector. (Fig 3) The tabs I use the most are Device & Levels, Time & Loops, and Audio Effects. As this particular sound effect will run continuously under a scene, I’m going to want to loop it. I’ll also need to select the outputs it goes through, and I may want to add some EQ. That’s all available there in the software.

Fig. 3

Playing just traffic on its own could get a bit monotonous, so I want to add a few car horns, but I don’t want to trigger them all individually. To do this, I need to create a group cue. Click the square outline on the top menu bar (or select it from the Cues menu) and a box will appear in the workspace. As I’ve chosen to start all children simultaneously, it will be a green one. Drag in the audio files you want to include in this cue; they can now be treated together.

If I were to trigger this cue as it is, both WAVs would start playing at the same time, and the traffic I had looped would play forever. That’s not what I want to happen. To fix this, I’ll put a pre-wait on the car horn so it starts a bit later. At some point, I’m going to want the traffic to fade out, so I will need a fade cue. I can drag the faders icon onto the workspace and assign the cue I want to affect by drag and drop. The fade length can be changed, as can the fade curve. (Fig 4)

Fig. 4
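The pre-wait and fade behaviour described above is easy to reason about as simple arithmetic. This is not QLab code, just a toy Python model (the function names are mine) of what a pre-wait and a linear fade do:

```python
# Toy model of the cue timing described above -- NOT QLab code, just the
# arithmetic QLab performs for you. A child cue with a pre-wait starts that
# many seconds after the group is triggered, and a fade cue ramps a level
# toward its target over the fade length.

def start_time(go_time, pre_wait):
    """When a cue actually starts: the GO time plus its pre-wait."""
    return go_time + pre_wait

def fade_level(start_db, target_db, fade_length, t):
    """Level (in dB) t seconds into a linear fade lasting fade_length seconds."""
    if t <= 0:
        return start_db
    if t >= fade_length:
        return target_db
    return start_db + (target_db - start_db) * (t / fade_length)

# GO at t=0: the traffic loop starts immediately, the car horn waits 4 seconds.
print(start_time(0.0, 0.0), start_time(0.0, 4.0))  # 0.0 4.0
# Halfway through a 6-second fade from 0 dB down to -60 dB:
print(fade_level(0.0, -60.0, 6.0, 3.0))            # -30.0
```

QLab lets you reshape that fade curve in the inspector; the linear ramp here is just the simplest case.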

At the moment, you only have a cue list and not really a show file, so you need to bundle the workspace.  This will collect all the audio files you have dropped onto the workspace, make a copy of them and place them together in a folder with the workspace.  When you do this, you can transfer the whole folder to anywhere you want and take your show with you. Fig 5 is an example of what happens when you don’t bundle a workspace. The red crosses show that QLab can no longer find the audio files.

Fig. 5 shows the preshow of a show I recently designed. The show was set at the end of World War II and you can see there are lots of loops triggering. In the pre-wait column, you can see the delay time I put in for each of the audio files to trigger. They then looped at various lengths until the next cue, which stopped the preshow and the SFX that started the show were triggered.

Fig. 5

QLab is made by Figure 53. You can download a free version of it here: http://figure53.com/qlab/ The free version gives you two channels and doesn’t give you anything under the Audio Effects tab. You can still use it to create a show and then rent the software by the day from Figure 53. Or, if you can make do with only two outputs, you can use it to run a show.

With this information, you can create a basic cue list and get a show together. As you dig deeper, you will find you can vamp and de-vamp cues, trigger or be triggered by MIDI, and much more. QLab can also be used for video. The complexity of your cue list is up to you, and everyone will use it in a way that suits how they create a show.
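As an example of that remote triggering, QLab can also be controlled over OSC (in recent versions it listens on UDP port 53000; check the OSC section of the QLab manual for your version). OSC messages have a very simple wire format, so a minimal GO can be built with the standard library alone. This is a hedged sketch, not an official Figure 53 example:

```python
# Build a minimal OSC message (an address string and an empty type-tag
# string, each null-terminated and padded to a multiple of 4 bytes) and
# optionally fire it at QLab over UDP. Port 53000 and the "/go" address
# are assumptions based on QLab's documented OSC support -- verify them
# against the manual for the version you are running.
import socket

def osc_message(address, type_tags=","):
    """Encode an argument-free OSC message."""
    def pad(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)  # pad to 4-byte boundary
    return pad(address) + pad(type_tags)

packet = osc_message("/go")
print(packet)       # b'/go\x00,\x00\x00\x00'
print(len(packet))  # 8

# Fire-and-forget over UDP (uncomment with QLab running on this machine):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("127.0.0.1", 53000))
```

In practice an OSC library does this encoding for you; the point is only that a GO is nothing more than a small, well-defined packet, which is what makes show-control integration straightforward.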

Signal Flow

 

In Yvonne’s Top 52 Tips To Remember, signal flow was one of the things I flagged as important, so I thought it might be a good idea to cover that in more detail. You have a bunch of awesome equipment, you have awesome musicians, and you need to get the sound from the musician or SFX playback computer through all that awesome equipment and out into the world or recorded in some way. Once you understand signal flow, troubleshooting will become a whole lot easier.

No matter how big the system is, the same principles of signal flow apply. If you are responsible for that system going together or responsible for keeping it working, then it’s important you understand the signal flow of that system.

Signal flow in relation to fault finding

Signal flow in its most basic form can be expressed as Fig. 1


Fig. 1


Assume you have plugged mic one into line one on the stage box and line one is patched into channel one on the desk. Assume you have done the same thing for mic two – mic two to line two to channel two.

If you aren’t getting signal into the desk from mic two, swap the mics. If the problem doesn’t move, you know the fault isn’t with the microphone: the signal from mic one to the desk works, the signal from mic two to the desk doesn’t, so the fault is further up the signal chain than the microphone. If you then swap the XLRs between the mics and the stage box and the fault still doesn’t move, you know it’s further up the chain again. Work your way up the signal chain swapping equipment until the fault moves. When the fault moves, you will know which piece of equipment is faulty – or at least where in the signal chain the fault is likely to be. This is fault-finding in its most basic form. Sometimes a cable will start working again, though not for long, because you touched it and made the dry solder joint or loose connection make contact.
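That swap test is really a small algorithm, and it can be sketched in a few lines. This is just an illustrative model (the component names are made up), not anything to do with real desk software:

```python
# Sketch of the swap test described above. Each input chain is a list of
# (component, works?) pairs -- mic, cable, desk channel -- and signal
# arrives only if every component in the chain works. Swapping a component
# between chains shows whether the fault travels with it.

def signal_present(chain):
    return all(ok for _, ok in chain)

# Chain 1 is healthy; chain 2 has a broken cable.
chain1 = [("mic A", True), ("cable 1", True), ("channel 1", True)]
chain2 = [("mic B", True), ("cable 2", False), ("channel 2", True)]

print(signal_present(chain1), signal_present(chain2))  # True False

# Swap the mics: the fault stays on chain 2, so the mic is not to blame.
chain1[0], chain2[0] = chain2[0], chain1[0]
print(signal_present(chain1), signal_present(chain2))  # True False

# Swap the cables: the fault moves to chain 1 -- the bad cable is found.
chain1[1], chain2[1] = chain2[1], chain1[1]
print(signal_present(chain1), signal_present(chain2))  # False True
```

The moment the fault moves with a swapped component, that component (or that point in the chain) is your suspect.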


Fig. 2

Internal to the desk, the same principles of signal flow apply (Fig. 2). The signal flows from the input through the group or aux into the matrix and out of the desk. If you can follow the signal through the desk, then you should be able to find the fader that has been left down or channel that is muted, or where the fault is.

Signal flow in relation to monitoring when fed from the FOH desk.

Imagine you have a band with an Aviom or wireless in-ear system and a stage that you’ve put into different time zones. There is a DSM/show caller that needs to hear the vocals, a feed going off to archive, and the band and an offstage vocal booth who need to hear what’s happening on stage. Where in the signal flow do you tap off the vocal monitoring to feed the different needs of the listeners?

If you send the vocal feed from the radio mics pre-fader (i.e. before the fader in the signal flow), the person listening will hear the cast offstage and in their dressing rooms. But if you want to send the feed off to multitrack or a broadcast truck where it will be mixed later, then pre-fade may be the correct thing to do.

The band and the vocal booth aren’t going to want to hear the radio mics pre-fader; they will only want to hear the mics that are live to front of house. So you’ll want to send post-fade. But do you want to send it to the band/vocal booth direct from the channels or from the vocal group? If the vocal group has a changing delay time to allow for stage position, what would happen to the vocal booth if they were singing along to it? What would happen to the band if they had a feed that was time-delayed before it reached them?
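The pre/post-fade distinction above comes down to one multiplication: whether the channel fader sits in the send's signal path or not. A toy numeric sketch (linear gains rather than dB, and the figures are invented for illustration):

```python
# Pre-fade vs post-fade sends, in numbers. A pre-fade send taps the
# signal before the channel fader, so the fader has no effect on it;
# a post-fade send is scaled by the fader. Gains are linear, not dB.

def pre_fade_send(signal, fader_gain, send_gain):
    return signal * send_gain               # fader is not in the path

def post_fade_send(signal, fader_gain, send_gain):
    return signal * fader_gain * send_gain  # fader scales the send

signal = 1.0
# Fader pulled all the way down (mic "off" to front of house):
print(pre_fade_send(signal, 0.0, 0.8))   # 0.8 -> DSM still hears offstage chat
print(post_fade_send(signal, 0.0, 0.8))  # 0.0 -> band/vocal booth hear nothing
```

That is exactly why the DSM and archive feeds might be tapped pre-fade while the band and vocal booth are fed post-fade.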

Signal flow in system design

When using compression, where in the signal flow should it be: on an individual channel, on a group, or across the outputs? Do you want to EQ something that’s been compressed, or do you want to compress something that’s been EQ’d? The effect is different depending on which way round you do it.

Putting processing in different places in the signal flow can have very different results. If you needed to use EQ on a signal processed with reverb, should you EQ the aux send to the reverb, or should you EQ the return channels from the reverb back into the desk? There are no correct answers, other than what fits the situation at the time. But understanding the signal flow will enable you to make better decisions in order to achieve the results you want.
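The EQ-then-compress question above can be made concrete with toy numbers. Here the "EQ" is a flat +6 dB boost and the "compressor" is a brick-wall limiter at -10 dBFS; both are crude stand-ins I have invented for illustration, but they are enough to show that the two orders do not give the same result:

```python
# Why processing order matters, with toy numbers. The "EQ" is a flat
# +6 dB boost and the "compressor" is a hard limiter at -10 dBFS --
# deliberately crude models, just to show the two orders differ.

def eq_boost(level_db, boost_db=6.0):
    return level_db + boost_db

def compress(level_db, threshold_db=-10.0):
    return min(level_db, threshold_db)   # brick-wall limiting

peak = -12.0  # dBFS

eq_then_comp = compress(eq_boost(peak))  # -12 + 6 = -6, limited down to -10
comp_then_eq = eq_boost(compress(peak))  # -12 passes untouched, then +6 = -6
print(eq_then_comp, comp_then_eq)        # -10.0 -6.0
```

In the first order the EQ boost pushes the peak into the compressor, which pulls it back; in the second the boost happens after the dynamics stage and survives intact. Real compressors are gentler than this limiter, but the asymmetry is the same.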

A Month in the Life of a London SoundGirl

Let me first say, London rush hour is awful, and I do whatever I can to avoid it. Sometimes I cannot: early morning production meetings and tech rehearsals have me squeezing onto a London tube with a rucksack holding a laptop and my hard drive. Not fun. This last month I have found myself doing a lot of rush hour commuting.

First, I was working on two musicals. Both were small-scale productions running at the same theatre, but on alternating nights, and keeping track of both was a skill in itself. Scary schedules were issued with no time allotted to work on sound, which had to be dealt with. After pointing this out, we were able to get a few solid hours to set up the system, which had been rigged the week before. We set amp levels, and EQ’d and time-aligned the system. Then I started plotting the SFX. I discovered there would be no full runs of the show until I was in tech for the other show. I knew what the SFX would be and had met with the director, but I would not be able to see the context of the cues until we were in tech. It made for a couple of weeks of really long days with a commute of over an hour each way.

I went to visit Motown the Musical, which was fitting up in the West End. I knew most of the sound staff, as I had worked with them on Rock of Ages and other shows over the years. I needed to talk to the UK designer about a show he had asked me to be his associate on. The West End musical theatre scene can be a bit like a Victorian lady socialite’s: you visit other shows and say hello during the production and tech weeks. You poke around the sound system, ask questions, and exchange gossip. It’s a good way to remind people that you exist, and it’s an excellent opportunity to learn about a sound system that you haven’t had a hand in putting together.

I submitted my tax return, something that would be easier to do if I’d been organised through the year. Last year I promised myself I would take my receipts out of my pockets and my wallet at the end of every day and sort them. Maybe I’ll do it at the end of every week.

Chasing invoices is something I don’t enjoy doing, and I’ve had to do it twice this month. One was a genuine mistake, but the other was with a company that was refusing to pay me until I proved that I have self-employed status. I sent more information than I have ever done before to this company and still nothing. Thank goodness I’m still a member of a union. I had been considering not renewing my membership of BECTU. You get a good deal on public liability insurance from BECTU, but I had recently joined the Association of Sound Designers, who also offer insurance. I’ve never had anyone refuse to pay me before, though, and after many emails and phone calls it became apparent that having the weight of a union behind me was a useful thing. They managed to resolve the issue quickly with no more input from me.
I went to visit a theatre I may do a show with later in the year. It’s the Rose Playhouse in Southwark and it’s in the basement of an office block. The Rose is the first Tudor theatre built on Bankside and they are raising money to preserve the theatre. The sound equipment is limited and we will supplement it a bit. But it is Shakespeare and it will be atmospheric.

I occasionally mentor students at Royal Central School of Speech and Drama. I had been brought in to look after the operator for the musical they are running.  The students are in their third year and are of a high standard. They were doing Sweet Charity with loads of radio mics and a full band.

This is an example of a typical and varied month for me: a bit of tech, specing equipment, keeping on top of the business end, and production meetings. Mostly I love it, but not that commute.

 

Ghost the Musical at Guildford School of Acting

GHOST is a timeless fantasy about the power of love. Walking back to their apartment one night, Sam and Molly are mugged, leaving Sam murdered on a dark street. Sam is trapped as a ghost between this world and the next and unable to leave Molly who he learns is in grave danger. With the help of a phony storefront psychic, Oda Mae Brown, Sam tries to communicate with Molly in the hope of saving and protecting her.

I knew the musical Ghost would be fun to do. There would be loads to play with; the ghost battles and the deaths. Add a band and loads of comedy and I knew this would be a great show.

Sword Fights – Electricity – Demons – Trains- What’s Not to Like

Soundscape
I wanted the sound of the ghosts interacting with each other to be a strong sound. It needed to be full of energy and still have an element of impact within it. My first thought was to base the sound of the battles around sword fights but to add some other energy as well. Death and organisms seem very analogue to me and very elemental, so I thought I’d throw some electricity in there. The battles between the ghosts needed to be timed exactly so the SFX would match the live action on stage.

I decided to film the fight scenes during rehearsals. Stage fights are choreographed and well-rehearsed, and I was confident the scenes would be the same every night. By recording the scenes, I was able to make sound effects to fit and come to the technical rehearsal with the nuts and bolts or the spine of the soundscape in place.

Trains seem to feature heavily in the shows I have done recently, but this was the first time I needed to time the percussive sound of a train on tracks with the music. Finding a small section I could loop and time-stretch to match the tempo of the number did the trick. It meant the SFX, which was very loud and sudden within the show, would help move the number along rather than distract from it or break the spell.

The other big soundscape moments were the transitions from life to the afterlife. We didn’t want to lay on a thick moral interpretation, so we designed two versions of that transition. The first soundscape was dense: demonic, throbbing, growling, and animalistic. The second version used elements of glass and bells to convey a sense of air and space. The denser, more growling sound was used to signal the deaths that had some element of discord.

When the guy who killed Sam died, or when Sam’s friend, who orchestrated the whole plot, died, I used this soundscape. It helped give a sense of the discord within those characters and a hint of them being surrounded by something not pleasant. When Sam died, there was an element of the second soundscape there to give a hint of where he could go and to accent the choice he makes to stay with his girlfriend Molly. When Sam finally moves on to whatever comes next, I used a fuller version of the second soundscape to convey a sense of having resolved things.

I created a ghost reverb to use when characters died; it was not too long or too short. It just put the ghosts in a slightly different space from the characters who were still alive. This did prove problematic at first, as it became messy moving from a ghost-speaking reverb to a ghost-singing reverb during the show. Laura (the No. 1 on the show, who was programming the SD8) and I decided to use the ghost-speaking reverb for numbers as well, maybe with a little tweak if we needed something longer for a ballad.

The set was very open and lovely, and all of it was required as acting space. This meant the band would have to be remote. The band was a five-piece, plus brass and strings on tracks. The tracks were run by QLab and triggered by the Musical Director (MD). A band room was constructed in a cloth store on the side of the stage; the band relied on a video monitor to see what was happening on stage. The MD also had a camera that was broadcast to the vocal booth on the other side of the stage and to FOH. The cast and crew could see the MD at all times and were able to follow his upbeats, etc. Vocals and foldback of the band were fed to the room through an Aviom system.

Mixing tracks with a live band presents challenges; you want tracks that were produced in a different space to sound as close as possible to the band and the room. It’s a good idea to have as many stems from the tracks as the console and equipment will allow. Separating the string and brass tracks, etc., allows you to treat them differently. It helps if the tracks are as untreated as possible, so you can ride the faders and follow the dynamics of the show. Eliminating pre-recorded reverbs allows you to use the same reverb on similar types of instruments and will help the mix gel together. All of the vocals were live, although some were sung offstage in a vocal booth. Laura, the No. 1, did an excellent job of combining all of these elements into a cohesive mix.


I really enjoyed working on this production and with all the talented women on the sound team.
Laura – No1
Gemma –  Production Sound Engineer
Sarah and Olivia – Backstage and radio mics.

Guildford School of Acting uses a professional band and creative team to put shows on. The cast and the technicians are all students supported by the in-house professional technical team.

 

Yvonne’s Top 52 Tips to Remember

Or How to Be a Bad Ass Sound Engineer


It’s Panto Season

 

 

For those of you not in the British Isles: a Pantomime is a musical children’s show put on at Christmas. It has a long history that can be traced back hundreds of years. It is always based on a children’s story like Jack and the Beanstalk or Cinderella.

Life Long Learning

You weren’t born knowing anything; nobody was. Everything you do that isn’t an automated function such as breathing is something you had to learn, even walking. All the sound engineers you know had to learn and be taught things, and they never stop learning.
