Choosing Software

There are many ways to control show cues across various programmes, and exactly which programmes are used depends entirely on the show’s needs.

My upcoming show at RADA is proving to involve much more than a standard QLab setup and a few microphones; I’ll also be composing for the show, and the composition is very much in keeping with its almost experimental, ‘found sound’ element. It’s set simultaneously in 1882 and 2011, with a ‘Stomp’-esque soundtrack driven by sound, music, and choreography. This presents various challenges, and one of the first has been deciding what to run the show on. Naturally, I’ll be using QLab as the main brains of the show. However, Ableton Live will also be used, alongside live mixing.

QLab is incredibly versatile, and as I’ve mentioned in previous posts, it handles OSC and MIDI incredibly well. In terms of advanced programming, you can get super specific and create your own set of commands and macros that will do whatever you need, and quickly. Rich Walsh has a fantastic set of downloadable scripts and macros for QLab, all of which can be found on a QLab Google Group. Mic Pool has the most definitive QLab Cookbook, which can also be found online (as with OSC and MIDI, you will need a QLab Pro Audio licence to access these features, which can be purchased daily, monthly, or annually on the Figure 53 website).
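To give a flavour of what that OSC control looks like under the hood, here is a minimal, purely illustrative Python sketch that hand-encodes an OSC message and fires a cue over UDP. QLab listens for OSC on UDP port 53000 by default; the cue number and IP address below are placeholders, not from a real show file.

```python
import socket
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode a minimal OSC message with int/float/string arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        else:
            tags += "s"
            payload += osc_string(a)
    return osc_string(address) + osc_string(tags) + payload

# QLab listens for OSC on UDP port 53000 by default; IP is a placeholder.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/cue/3/start"), ("127.0.0.1", 53000))  # fire cue 3
```

In practice you would more likely use a ready-made OSC library or QLab’s own network cues, but seeing the raw bytes makes it clear how simple the protocol is.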

Getting QLab to talk to Ableton is relatively straightforward – again, it’s all MIDI, and specifically Control Change. MIDI is incredibly useful in that each channel gives us 128 controller numbers (each carrying a value from 0–127), and with up to 16 channels per port (and up to 8 MIDI output devices in QLab v3), channels can be partitioned off for separate destinations (i.e. Channel 1 might go to Ableton, Channel 2 might go to Lighting, 3 might be Video, and so on). Couple Control Change with Ableton’s MIDI Input Ports and its MIDI Map Mode, and you’re on your way to controlling Ableton via QLab. Things can get as specific as fading down over certain times, fading back up over certain times, stopping cues, starting loops, and generally controlling Ableton as if you were live mixing it yourself. The only thing to be wary of at this stage is to ensure that all levels in Ableton are set back to 0 dB with a separate MIDI cue once the desired fades, etc. are completed – Ableton will only be as intelligent as it needs to be!
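To make the Control Change idea concrete, here is a short Python sketch of how a CC message is actually built – just three bytes: a status byte carrying the channel, a controller number, and a value. Treating CC 7 as a volume fader is only an example mapping; in practice you assign whatever you like in Ableton’s MIDI Map Mode.

```python
def control_change(channel, controller, value):
    """Build a 3-byte MIDI Control Change message.

    channel: 0-15 (shown to users as channels 1-16),
    controller: 0-127, value: 0-127.
    """
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("out of range for MIDI")
    # 0xB0 is the Control Change status; the low nibble is the channel.
    return bytes([0xB0 | channel, controller, value])

# Example: a QLab MIDI cue on channel 1 fades an Ableton fader mapped to
# CC 7 down to silence, and a follow-on cue resets it to full afterwards.
fade_to_silence = control_change(0, 7, 0)     # channel 1 is index 0 here
reset_fader     = control_change(0, 7, 127)
```

QLab’s MIDI cues let you type these channel/controller/value triplets in directly (and even fade the value over time), so you never handle the raw bytes yourself – but knowing the layout helps when debugging a mapping that refuses to respond.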

Both macros/scripts and sending MIDI cues to Ableton deserve posts of their own to cover all of their features, so I’ll return to them separately.

So Ableton can do a lot when it comes to controlling a show, and it gives us flexibility in how we work, but artistically it also opens up a whole new world of opportunity. At RADA we are fortunate enough to own several Ableton Push 2s, and they’ve very quickly become my new favourite toy! Push is, at its core, a sampler, but it has so much flexibility that it will be incredibly helpful during this next show. I can create loops, edit timings, effects, and sample rates, and can load any plugin simply; for me, it’s completely changed the live theatre game. I can react in real time in the rehearsal room based on the choreography and can load new sounds from a whole suite of instruments and drum packs.

I’ll let Ableton themselves tell you more about the Push and what it can do – I’ve only recently started to use Ableton, so it’s as much a voyage of discovery for me as I’m sure it is for you! More can be read on their website.

I also primarily use Pro Tools for editing any SFX and dialogue; it’s a programme I’ve come to know very well and find dynamic enough for what I need to do. Again, I can load plugins quickly, it’s versatile, it can handle hundreds of tracks, and it talks to external hardware simply (such as the Avid Artist Control, which we have in RADA’s main recording studio).

I sometimes use Logic Pro as well, although only for music editing. This is because I prefer its handling of time signatures – it is elastic enough that whenever a new track is loaded, it quickly adapts to the time signature of the imported audio – and it comes pre-loaded with a vast amount of samples and plugins as standard.

With Ableton edging its way in, however, I might just have to choose a favourite soon, because for me Ableton can often provide more realistic sounds, greater flexibility with its drag-and-drop (wildly editable) plugins, auto-looping, and easy control in a live setting.

Often with software though, as with hardware, it’s more about what the sound designer or musician is comfortable with using and what the desired outcome is for the show.

Weapons Up: Explorations into Radio Drama

Over the past six years, my main areas of work have been as a sound designer, voice actor, and producer for commercial, gaming, and animation voice demos.

These disciplines often overlap and complement each other. I’ve provided voice-overs for plays that I’ve sound-designed, for example, and actors for whom I’ve produced voice demos have recommended me as a sound designer to directors. But sometimes an opportunity arises that is such a great combination of your skills and interests, you wonder why you didn’t think of exploring it earlier. My introduction to sound design for radio drama was this kind of opportunity.

Back in October last year, I received an email from a voice actor friend, who approached me and two other actor friends with the idea of creating a showcase for our voice acting skills in the form of an audio, or radio, drama. We would write a short script for four female actors, record and produce it, and then send it out to radio drama directors and producers who we thought might be interested in hiring us.

The UK has a long history of radio drama, mainly thanks to the British Broadcasting Corporation (BBC), which broadcasts hundreds of radio dramas every year. The creative possibilities of radio dramas appeal to me both as a sound designer and an actor. For me, it’s about learning how to tell stories without relying on what you can see.

Initially, I planned on acting in one or more roles in our fledgling radio drama and doing all the sound design and mixing. Then, in a flash of inspiration one afternoon, I drafted an initial synopsis for a dramatic science-fiction thriller, and after a few drafts it became apparent that while everybody loved the story, nobody else in the team was keen to take on the task of actually writing a script based on it. And so I found myself writing, sound designing, mixing, and acting in a sci-fi radio drama called The Converged.

Once the script was ready, we booked a studio that could accommodate four actors recording together, found an experienced director, and recorded three takes of the script in an hour-long session.

My first task after I got the recordings back to my studio was more an editorial role than sound design or mixing: I had to decide which take to use for each line. As our script was short and we weren’t under any pressure from a commercial publishing company, we had the luxury of two hours of rehearsal time with our director beforehand, and of recording three full takes plus pickups and efforts (grunts, groans, and other vocalisations). Rehearsals and multiple takes are pretty much unheard of in the commercial world of radio drama – most directors aim to record between 60 and 90 minutes of material per day, which leaves no time for rehearsals and limited time for multiple takes.

So it was a bonus to have three full takes of the script. The director had given me a few notes on his preferences, plus I had made notes during the recording session – one of the benefits of acting in the play as well. But I would still need to make the final choices of which take to use for each line.

Each character was recorded on a separate track during the recording session. The engineer had also kindly labelled the different takes, so I had three audio files per character, one for each take, which I could lay out in Pro Tools on separate tracks. I immediately discarded the first take as a quick way to reduce my options. That left takes two and three, which I could A/B to find the take for each line that I thought worked best. I did this for each character – fortunately, there are only five in total!

The next step was editing, made much easier by the high-quality recordings, which meant I didn’t have to do much cleanup, followed by processing on each vocal channel.

Then I left the voiceovers alone for a little while to concentrate on the next important piece in the process: planning the sound design.

My two main reasons for writing a sci-fi script were a mix of the creative and the purely practical. There is a lot of scope for creative sound work within the sci-fi genre, and I already have an extensive library of sci-fi sound effects. Normally, I like to create as many sounds from scratch as possible, but I knew I would only have a short window to design and mix the first episode, hence wanting to stay within the boundaries of what I could already accommodate.

I also knew that if we wanted to produce more episodes of the drama (and we do – there’s a cliffhanger at the end of Episode 1 for this very reason), I would need a plan for the overall tone and style of the design. Throwing in random sounds that sound impressive won’t work for an episodic drama, where the sonic world needs to be consistent enough to be recognisable from episode to episode, and adaptable enough to sonically create a variety of environments.

I divided my overall sound design plan for The Converged into categories: atmospheres and drones, interface beeps and noises, weapons, explosions, Foley (mainly footsteps, doors, and operating various tools), mechanical sounds, vocal processing, and miscellaneous.

Following the timeline of the script, I mapped out the important points for each category: where we needed to hear a change in the base environment (for an atmosphere or drone), where a character used equipment or a weapon, where the Foley happened, and where the vocal processing would change depending on the character – for example, when characters needed to sound like they were in space suits.
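For anyone curious how such a sound map might be organised in practice, here is a purely illustrative Python sketch – the cue points, categories, and notes are invented examples, not taken from the actual show:

```python
# A hypothetical sound map: each entry marks a point in the script
# (page/line), the design category it belongs to, and a note on what
# needs to happen there.
sound_map = [
    {"cue": "p1, l12", "category": "atmosphere", "note": "engine-room drone in"},
    {"cue": "p2, l03", "category": "vocal processing", "note": "suit comms on"},
    {"cue": "p2, l18", "category": "weapons", "note": "rifle handling + shot"},
]

def cues_for(category):
    """All mapped points for one design category, in script order."""
    return [entry for entry in sound_map if entry["category"] == category]
```

Whether the map lives in a spreadsheet, a marked-up script, or a structure like this, the point is the same: every category can be read off in script order before a single sound is placed on the timeline.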

The major choices I made about vocal processing were the sounds of the astronaut suits and the AI character. A plugin called Cosmonaut came to my rescue on the first, and I auditioned various modulation plugins until I found one that gave the detached, slightly jarring chorused quality that I wanted for the second.

Once I had my sound map, I started making decisions about the sounds themselves. How futuristic did I want the spaceship (the location for the episode) to sound? Ultra high-tech, or a bit more organic? In the end, I went for a combination of processed organic electronic sounds (bell-like chimes for interface noises) and recognisable mechanical Foley sounds, e.g. the sound of metal doors opening and closing on military ships, and rifle handling sounds.

After I had all my sounds in place, it was time for track-laying and mixing. Panning is particularly vital in a creative sense for audio drama. Without a picture to follow, it’s up to the sound designer to locate the action for the listener for each scene and make sure it makes sense with the script and the story. When you don’t have a picture as a reference point, it’s easy to forget that a character is a collection of sounds – footsteps, equipment beeps, clothing movement, gun movement – and not just a voice.
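As a brief aside on the maths behind panning: most DAWs use some variant of a constant-power pan law, which keeps a source’s perceived loudness roughly steady as it moves across the stereo image. This is a generic formula, not tied to Pro Tools or any particular tool – a minimal Python sketch:

```python
import math

def constant_power_pan(pan):
    """Constant-power pan law.

    pan: -1.0 (hard left) to +1.0 (hard right).
    Returns (left_gain, right_gain). At centre, both gains sit at
    about 0.707 (-3 dB), so left^2 + right^2 stays constant.
    """
    theta = (pan + 1.0) * math.pi / 4.0   # map pan to 0..pi/2
    return math.cos(theta), math.sin(theta)

left, right = constant_power_pan(0.0)   # centre: roughly -3 dB per side
```

A simple linear crossfade would dip in loudness at the centre; the cosine/sine pair avoids that, which is why it (or a close cousin) is the default in most mixers.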

The teaser for Episode 1 was released last week with the full episode due to be released within the next month.

I’ve learned a lot from my first foray into radio drama, and I already know there will be some changes to the sound of Episode 1 of The Converged, and probably to the sound of the following episodes as well.

It would be interesting to incorporate binaural sound, especially in sci-fi drama. I’d also like to experiment with the ideas explored in the film Gravity, of only hearing sound in space when conducted through touch. Possibly a step too far for a radio drama? After this introduction to its creative possibilities, I’m keen to continue my explorations.
