Ableton Show Control

For a show not so long ago in RADA (Scuttlers, written by Rona Munro), it was my intention to use Ableton Live for the playback of a variety of songs, beats, and rhythms which the cast would create and interact with throughout the show.

As I mentioned in my blog Choosing Software, I decided to use Ableton Live in shows because it gives me the freedom to create my own sound palettes, add in effects, and take them away again easily. Crucially, I can control all of this via MIDI from Qlab, which adds important stability for the show but still retains a wide range of filters and features that can be blended and mixed.

*I’m using a Mac for all of the following, coupled with Ableton Live 9 Suite and Qlab 3 with a Pro Audio licence.

First things first, you’ll need to go into your computer’s Audio MIDI Setup: go to Window in the menu bar and select Show MIDI Studio.

Show MIDI Studio in the Audio MIDI Setup window on the Mac Mini


Qlab Live will pop up as an IAC Driver, and you’ll need to double-click the Qlab Driver to show the Qlab Live Properties.

Qlab IAC Driver in the MIDI Studio


In this new window, you’ll need to add a second port, as below:

Creating a second bus under the Ports pane


These buses will be used to trigger Ableton from Qlab, and to let Ableton trigger itself internally.

This brings us to setting up Ableton’s MIDI. You’ll need to open a new Ableton file and open up the Preferences pane; from here, set up the internal MIDI ports to transmit and receive MIDI via the buses to Qlab that we previously created in the Mac Mini’s Audio MIDI Setup. It should look something like this:

Ableton’s MIDI Preferences

You can then open up Qlab, check the MIDI Port Routing in the MIDI preferences, and ensure that MIDI is being sent to Ableton via one of the ports, like so:

You’re probably going to want to leave at least one MIDI port before the Ableton bus free for a MIDI send to your sound desk, or even to Lighting or Video.

Once you’ve set up these initial steps, this is when it gets slightly more complicated. You’ll need to keep a strict record of the MIDI triggers that you’re sending, and indeed of all the values and channel numbers. These will eventually each perform different commands, so getting one value crossed with another could lead not only to a lot of confusion, but to cues triggering before they’re supposed to Go!

In your Ableton session, look to the top right-hand corner and you will see a small MIDI toggle button. This is your MIDI view button; when clicked, you’ll also be able to track your MIDI across your session and throughout the show. It stays the generic Ableton colour until you click it, when it turns pale blue:


A portion of the rest of your Ableton session will also be highlighted in blue, and the highlighted sections are all of the features available for MIDI control. These range from volume control on Ableton channels and changing the tempo to fading effects in and out and starting ‘scenes’ on the Master channel bank.

So I’m now dragging a sample into the first audio channel in Ableton

This is the first audio track that I’d like to MIDI, so I set up a new MIDI cue in Qlab and make sure that it’s a simple Note On MIDI command – Qlab will always default to Channel 1, Note Number 60, Velocity 64, but this can be changed depending on how you plan on tracking your commands. I’ll set this to Channel 4 (leaving the first three channels free for desk MIDI, LX, and maybe Video or a spare in case something needs re-working during tech). I’ve then set it to Note 1, with a Velocity of 104 (104 is a key number here: it works out at roughly 0dB within Ableton, so it’s handy to remember if MIDI’ing any level changes). Because all I’ve done here is send a simple ‘Go’ command to the audio track, however, the Velocity number is sort of irrelevant – the track is at 0dB anyway, so it will simply play at 0dB.
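As a side note, if you ever want to sanity-check that the IAC bus is passing MIDI without opening Qlab at all, a few lines of Python will fire the same message. This is purely an illustrative sketch using the mido library (not part of the show workflow), and the port name is an assumption – run mido.get_output_names() to see what your bus is actually called:

```python
# Minimal sketch: send the same Note On that the Qlab cue fires.
# Requires: pip install mido python-rtmidi
import mido

print(mido.get_output_names())  # confirm your IAC bus name first

# 'IAC Driver Qlab Live' is an assumed name -- use whatever the line above prints.
with mido.open_output('IAC Driver Qlab Live') as port:
    # mido counts channels from 0, so Qlab's "Channel 4" becomes channel=3.
    # Velocity 104 is the ~0dB value mentioned above.
    port.send(mido.Message('note_on', channel=3, note=1, velocity=104))
```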

I’ll then ensure that MIDI output is enabled in Qlab, open the MIDI window in Ableton (again, from the top right-hand corner), and select my track with my mouse (it might not necessarily be highlighted any more, but it will be selected). I’ll then jump back to Qlab and fire off the MIDI cue. Ableton will recognise this, and not only will the programmed MIDI show up in the MIDI Mappings side of the session, it will show up directly on top of the audio cue, like so:

So now we have an audio track playing and the action is happening on stage; you might even have fired through several other generic Qlab cues, but now you want to stop the music and start the scene. Hitting Escape in Qlab won’t stop Ableton, so Ableton is going to keep going until we programme some more MIDI cues. So I’m simply going to programme a fade down of the music, and then a stop.

What I’ve done is programme a MIDI fade which, as you can see in the picture, starts at the 0dB value of 104 and then fades down over 5 seconds to 0 – or minus infinity, in Ableton terms. You can also control the curve shape of the fade as usual in Qlab, and of course the fade time is completely adjustable.
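Under the hood, a MIDI fade like this is just the same message being re-sent with a steadily falling value, which Ableton’s mapped fader tracks downwards. Here’s a rough Python sketch of the idea, again with mido – the port name and note number are assumptions, and the curve is a plain linear ramp where Qlab’s is shapeable:

```python
# Approximation of a 5-second Qlab MIDI fade: re-send the mapped note
# with decreasing velocity so Ableton's fader rides down to silence.
import time
import mido

FADE_TIME = 5.0  # seconds, matching the Qlab cue
STEPS = 50       # resolution of the ramp

with mido.open_output('IAC Driver Qlab Live') as port:  # assumed port name
    for i in range(STEPS + 1):
        velocity = round(104 * (1 - i / STEPS))  # 104 (~0dB) down to 0
        # note=2 is a hypothetical note mapped to the track's volume fader
        port.send(mido.Message('note_on', channel=3, note=2, velocity=velocity))
        time.sleep(FADE_TIME / STEPS)
```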

Once I’ve programmed the fade and added in the stop, my MIDI window looks a bit like this:

Ableton has registered which ‘notes’ – or, in Qlab terms, which values – I’ve assigned to the different commands, and has also given me a description of what each one is doing. Something to note here is that the note that changes the volume, whether you’re fading up or down, will always be the same – it’s the value in the Qlab cue that changes.

So now that I’ve stopped the music, I might want to start it again in a separate scene if it was a motif for a character, for example. This programming can be part of the same cue:

Again, you’ll notice that the Ableton fader is resetting back to 0dB. Of course, this is just one channel and just one track within Ableton, and the more you add, the more complicated the programming can get. I’ve also added in a channel stop to make sure that, should we want to play something off a separate scene in Ableton, nothing else gets fired off with it (just in case).

In terms of MIDI’ing within Ableton, when in your MIDI pane, as a general rule anything that shows up in blue can receive and be altered by MIDI. This means that you can add in reverbs over a certain amount of time, take them away again, and alter any of the highlighted parameters completely to taste. You’ll then just need to go back and make sure that any fade-ins have matching fade-outs, and a reset.

This is a brief intro to having more control over Ableton during a show from within Qlab; of course, the more effects and cues get added, the more complicated the MIDI mapping becomes.

The great thing about using Ableton in a show is that certain parameters (also under MIDI control) can be changed, such as how long a track should run on after receiving a stop (one bar, half a bar, or a beat, for example), ensuring that the music always ends on the beat and makes sense to the listeners. For me, Ableton allows enough control over what it does while staying flexible.

Choosing Software

There are many ways to control show cues on various programmes, and exactly which programmes are used depends entirely on what the show needs.

My upcoming show at RADA is proving to need much more than a standard Qlab setup and a few microphones; I’ll also be composing for the show, with the composition very much in keeping with its almost experimental, ‘found sound’ element. It’s set simultaneously in 1882 and 2011, and there should be a ‘Stomp’-esque soundtrack driven by the sound, music, and choreography. This presents various challenges, and one of the first has been deciding what to run the show on. Naturally, I’ll be using Qlab as the main brains of the show; however, Ableton Live will be used as well, alongside live mixing.

Qlab is incredibly versatile, and as I’ve mentioned in previous posts, it deals with OSC and MIDI incredibly well. In terms of advanced programming, you can get super specific and create your own set of commands and macros that will do whatever you need, and quickly. Rich Walsh has a fantastic set of downloadable scripts and macros to use with Qlab that can all be found on the Qlab Google Group. Mic Pool has the most definitive Qlab Cookbook, which can also be found here (as with OSC and MIDI, you will need a Qlab Pro Audio licence to access these features, which can be purchased daily, monthly, or annually on the Figure 53 website).

Getting Qlab to talk to Ableton is relatively straightforward – again, it’s all MIDI, and specifically Control Change. MIDI is incredibly useful in that, per channel, we can achieve 128 commands, and each channel (spread across up to 8 output devices in Qlab V3) can be partitioned off for separate cues (i.e. Channel 1 might go to Ableton, Channel 2 might go to Lighting, 3 might be Video, and so on). Couple Control Change with Ableton’s MIDI input ports and its MIDI Map Mode, and you’re on your way to controlling Ableton via Qlab. Things can get as specific as fading down over a certain time, fading back up over another, stopping cues, starting loops, and generally controlling Ableton as if you were live mixing it yourself. The only thing to be wary of at this stage is to ensure that all levels in Ableton are set back to 0dB with a separate MIDI cue once the desired fades, etc. are completed – Ableton will only be as intelligent as it needs to be!
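To make that reset habit concrete, here’s a minimal sketch of the two messages involved – the end of a fade-out, then the separate cue that puts the fader back to unity. The port name, channel, and CC number are all placeholders for whatever you’ve assigned in Ableton’s MIDI Map Mode:

```python
# Sketch of the "reset to 0dB" habit using Control Change messages.
import mido

with mido.open_output('IAC Driver Qlab Live') as port:  # assumed port name
    # Final value of the fade-out, as the fade cue would leave it:
    port.send(mido.Message('control_change', channel=0, control=20, value=0))
    # ...clip stopped, scene finished...
    # Separate reset cue: value 104 is roughly unity gain on the mapped fader.
    port.send(mido.Message('control_change', channel=0, control=20, value=104))
```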

Using macros/scripts and sending MIDI cues to Ableton are features that I will cover in a separate post, only because they deserve their own space to do them justice.

So Ableton can do a lot in terms of controlling a show, and it gives us the flexibility to work, but artistically it also opens up a whole new world of opportunity. At RADA we are fortunate enough to own several Ableton Push 2s, and they’ve very quickly become my new favourite toy! Push is, at its core, a sampler, but there is so much flexibility there that it will be incredibly helpful during this next show. I can create loops, edit times, effects, and sample rates, and can load any plugins simply; for me, it’s completely changed the live theatre game. I can react in real-time in the rehearsal room based on the choreography, and can load new sounds from a whole suite of instruments and drum packs.


I’ll let Ableton themselves tell you more about the Push and what it can do – I’ve only recently started to use Ableton, so it’s as much a voyage of discovery for me as I’m sure it is for you! More can be read on their website.

I primarily use Pro Tools for editing any SFX and dialogue; it’s a programme I’ve come to know very well and find dynamic enough for what I need to do. Again, I can load plugins quickly, it’s versatile, it can handle hundreds of tracks, and it talks to external hardware simply (such as the Avid Artist Control, which we have in RADA’s main recording studio).

I sometimes use Logic Pro as well, although only for music editing. This is because I prefer how quickly it handles time signatures – it’s elastic enough that whenever a new track is loaded, it quickly adapts to the time signature of the imported audio – and it comes pre-loaded with a vast number of samples and plugins as standard.

With Ableton edging its way in, however, I might just have to choose a favourite soon, because for me Ableton can often provide more realistic sounds, greater flexibility through its drag-and-drop (and wildly editable) plugins and auto-looping, and it can be easily controlled in a live setting.

Often with software though, as with hardware, it’s more about what the sound designer or musician is comfortable with using and what the desired outcome is for the show.

Preconceptions in Human Hearing

As sound designers, we often have to fight against what something actually sounds like versus what audiences expect it to sound like. For example, an authentic phone ring might not necessarily fit the tone of the piece, and a phone from a different era might actually do a better job of creating urgency and tonality.

As a starter for ten, human hearing is fairly straightforward. Sound waves are transmitted through the cochlea and eventually reach the primary auditory cortex and the syntax-processing areas of the brain. These processing areas share the sound waves between them and do their best to find some rhythm and harmony in what we are hearing; this is down to our linguistic processing tendencies and our innate need for understanding and communication.

Our perception of sounds stems from our memories, and the human memory is typically untrustworthy. How many times have you shared a story and had someone remember a completely different version? We could argue that it’s the same premise for sound.

While it’s true that our echoic (auditory) memory lasts longer and has a quicker processing time than our iconic (visual) memory, and could therefore be described as more reliable, our echoic memory only gets to hear things once – and things once heard cannot be unheard.

This is also where our short- and long-term memories come into play. If you were sitting in a packed auditorium at front of house and heard an announcement (the quarter call, for instance), nine times out of ten you would hear the call, process it, and then completely forget about it. Should somebody ask you, five minutes later, what that call was, you might be at a loss, but you could probably remember the tone, the clarity, and more about the speaker’s voice than the actual message. There are a number of factors to blame here.

Upon recognising that there was no immediate danger, you would tune out the rest of the call and continue your own conversation. This is basic selective hearing, but what of the rest of the call? We attenuate the rest of the information and store it in case it becomes useful, but it’s not always remembered accurately. This is partly because our memories store a lot of information, whether long-term or short-term, and we intrinsically link memories to other memories to aid that storage. When talking about sound and sound effects, of course, it depends entirely on the context of how/when/where a listener has heard them before – no two natural sound effects will ever be the same, and nor will the memories they recall in individual human beings.

But what does all of this mean for sound design, and particularly sound design for theatre? If we are playing on audience perceptions of what sounds, atmospheres, or even conversations between actors should sound like, then it depends on the effect being sought. If we’re talking about a straight play, then a doorbell from 1911 should probably be true to the text – which means a bell on a pull.

On the other hand, I have absolutely used a recorded shop doorbell because it fitted the tone of the piece better. Due to its pitch, the bell sounded smaller than any of the real house bells we tried, which made it slightly lighter and therefore more whimsical. Of course, this steers us into the territory of scenes in a play and their overall tones (not to be confused with musical tones). A big old rusty house doorbell would often seem too clanky and boisterous for the entrance of the next-door neighbour (unless, of course, that is the exact effect you’re heading for).

Sound designers will rarely use just one sound effect to attain the overall effect they are seeking; it may be part of a sequence or even an underscore/atmosphere. As you can see below, from my recent show A Little Night Music, I used multiple tracks to create two car arrivals:

It’s often the textures of the sounds that I aim for when sound designing; sometimes they end up being true to what the authentic, real-life thing sounds like, but more often they do not – for the reasons stated above, or simply because the real thing doesn’t fit the set, tone, or overall direction of the piece.

This is where the overall direction, the sound design, and artistic licence come into play. We can, with the best intentions, want something to sound authentic; realistically, however, as designers and artists we will borrow from different genres and eras to make happen what we want to happen. This, again, comes back to our own personal memories and experiences of sounds and effects, and the ideas they give us about what we want to create.

Ideas fuel other ideas, as do our memories and creative minds, so the more that we feed into said ideas and the ethos of our creations, the more we contribute to the expectations of what things should, or could, sound like.

Practically Perfect

Recently, for a RADA show that I was sound designing, there was scope to make a practical radio. Practicals are some of the best fun in theatre – little bits of trickery happening without the audience ever knowing.

The show was Clybourne Park by Bruce Norris in RADA’s GBS Theatre, staged in-the-round, directed by Michael Fentiman and designed by James Turner. The story is told in two halves: the first act is set in 1959 in a suburban Chicago house, where we are introduced to a married couple. As the act goes on we learn that their son died, and that the parents are moving to escape neighbourhood gossip; what follows is a heated discussion as to who should be allowed to move into the house after they’ve gone. In Act 2 we have moved on 50 years, to 2009, in the same house. A group of people from the neighbourhood are discussing what should become of the house and exactly who should move into it (echoing Act 1). More arguments ensue, and the play ends on a flashback to 1959 and a conversation between the deceased son and his mother. It’s a politically charged play, full of dark humour and uncomfortable truths.

Here is the end product:

Clybourne Park 2016 – GBS Theatre, RADA


I found that 1950s-era replica in Deptford Market for a tenner, and it’s the best practical I’ve ever made (I must confess that it functioned as a real radio before I destroyed the inside of it). I’d decided to make the practical myself, by way of a challenge in between attending rehearsals and dealing with paperwork.

So I brought the radio back to the sound workshop, having ordered a mini-amp online to sit inside the radio along with an IEM receiver, hoping to hook it all up to the speaker that came with the radio itself. Quite happily, the mini-amp arrived that same day, so I could get started straight away.

One problem, however.

Now I don’t know what I was expecting, but I certainly didn’t expect that the amp would come in pieces and I’d have to solder it all myself.

So now I had to solder this thing, having never really paid attention to circuit boards before. I dug out some instructions (all two pages of them) from the company’s website and set to work.

That said, this is probably the most common way to create a practical in theatre, with the basic signal flow as follows: Qlab – Sound Desk – IEM Transmitter – IEM Receiver – Mini-Amp – Speaker

Most mini-speakers will simply be attached to the IEM receiver because they’ll be self-powered; my system, however, happened to need an amp because I was using just the bare cone that sits inside the radio. Below is my system diagram for the show, so that we can see where the practical sits in the larger scale of things (the relevant signal flow is highlighted).

System diagram for Clybourne Park

Practical radios are almost two a penny in theatre amongst other fan favourites such as doorbells, telephones, intercoms, etc., all of which would have their own tried-and-tested ways of being produced.

For instance, we’ve had a couple of shows that required practical mobile phones, and for these we use an app called StageCaller that works over MIDI/OSC – you’ll need an iPhone and Dropbox, and for the best results, a stable WiFi connection used solely for the practical phone.

To get the sounds onto the StageCaller app you’ll need to download them from Dropbox and import them into the app – all of the audio lives in the app, and all you’re doing in Qlab is sending OSC commands. From there you can trigger the sounds via OSC from your Qlab file (with the relevant IP addresses), and in the most recent version the app lets you set up ‘heartbeat’ pings so that it doesn’t become inactive and triggers precisely when you want it to. There are various other little tricks you can set up too, including the sound cutting out as the character lifts the phone to their ear, or no sound at all and just a text vibrate.
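For a flavour of what those Qlab OSC cues are doing, here’s a tiny Python sketch using the python-osc library. The IP address, port, and address pattern are all placeholders – the real OSC namespace is in the StageCaller documentation, and the audio itself already lives on the phone:

```python
# Sketch of an OSC trigger of the kind Qlab sends to StageCaller.
# Requires: pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('192.168.1.50', 53000)  # phone's IP and port (placeholders)

# Hypothetical address pattern -- check StageCaller's docs for the real one.
# The message only tells the app which sound to fire; no audio is sent.
client.send_message('/stagecaller/ring/start', 1)
```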

You can find out more about the functionality of StageCaller on the Figure 53 website.


The amp inside the radio hooked up to the internal speaker – IEM not pictured

Back to the radio: I powered through and soldered the entire circuit board, then tested it by plugging my phone into the mini-amp via a mini jack-to-mini jack cable and hooking up two other speakers (L/R) that also came free with the kit from the online shop. Miraculously – it worked!

So now I had a working system, and all that was left was to find a way to attach everything inside the radio (lots of glue and velcro were involved here – not my finest prop-making), plug up my IEM – for this I was using a Shure PSM300 system – and have a go at sending audio to it via Qlab.

It worked like a dream and was the most stable practical that I’ve used – of course, I had a backup in place just in case something went wrong with it, which is also quite common practice. The backup was simply an assigned key on my sound desk (a Yamaha 01V96i) which my operators could press, internally reassigning the audio being sent to the radio to a JBL Control 1 speaker rigged above the stage, which would hopefully not interrupt the action! (As it was, the ops never had to use the backup, but it’s very good practice to have something in place anyway.) My operators, who took turns opping the show every day, then had the task of looking after the practical radio and changing its batteries before every show.

Practicals to me are little bits of fun that we can add into a show to represent something that, a good few years ago, would have ended up being played from the nearest speaker or simply mimed. We’re quite fortunate that we now have such a wide range of technology to play with, and little tricks up our sleeves to truly create our own version of reality.

*all production photo credits belong to Linda Carter for RADA

Drama School, Darling

Knock, knock.

(Who’s there?)

The sound designer, because the practical doorbell doesn’t work.

(and that is the most wholesome joke that I could come up with – don’t let anyone tell you that people who work in Sound aren’t funny)

So anyway…

My name is Candice Weaver and I am a student at RADA, studying towards a postgraduate degree in Sound Design for Theatre.

Prior to my current degree, I completed an undergraduate Bachelor’s in Commercial Music at the University of Westminster, where I really discovered sound design and started working in theatre. Since then I have been fortunate enough to work with the English National Opera and Secret Cinema, and as a casual at the Royal Opera House, among others (Sleep? Never heard of it).

Having realised that I definitely didn’t yet possess the skills to really get into theatre, naturally I thought, ‘Well hey! Drama school sounds good!’ – but little did I know that it would be this exhausting, this time-consuming, and often just a little bit ridiculous.

It is also, however, ridiculously rewarding and without a doubt the best thing I have ever done.

At RADA (or the Royal Academy of Dramatic Art, as we are sometimes known), we really run as a mini-rep production house, and we have three theatres:

– the Jerwood Vanbrugh Theatre

– the George Bernard Shaw Theatre (The GBS for short)

– and the John Gielgud Theatre

They each vary in size and can be staged in any configuration. For instance, the Vanbrugh Theatre is traditionally a proscenium arch, but we have a musical opening this February that will be staged in-the-round.

Every six weeks we turn around three new shows, one in each of our theatres (except twice a year, when we do a Film/Radio production block), and we can easily get through hundreds of shows, productions, events, film screenings, and galas throughout the academic year. Every student coming into RADA has the opportunity to work on these shows, which are all staffed by students in every role – from third-year actors to sound/LX designers, scenic artists, construction, flymen/women, technical management, and stage management. They’re also directed by external directors and, for the majority, designed (costume/set) by external professionals, too. What’s better is that the public can come and see our productions (each runs for a couple of weeks after opening, after which we tear it down and start all over again).


I’ve now worked in all three of our theatres as both Production Sound Engineer and Sound Designer, and the next project is the musical A Little Night Music, staged in-the-round in our Vanbrugh Theatre. I will be the Associate Sound Designer for this production – for a musical, which we only stage once a year, we tend to bring in industry Sound/Lighting Designers, simply because the musical is usually quite a momentous task. This naturally still means that I’ll be dealing with the rig plans and budget, organising system diagrams, attending rehearsals, and passing on any relevant information to my PSEs and Sound No. 1s/No. 2s/Operators.

The show roles are generally given out based on what our next step of learning might be, as well as what our personal goals are – for instance, in my first year at RADA I only did a couple of sound designs because I needed to focus on my production sound and practical skills.


I’ve also just finished a Film block where I was the Sound Assistant/Boom Operator – we filmed three films across a few weeks, all on locations found by my fellow students. In my first year I completed a Radio block which also involved studio recordings of three plays in RADA’s main studio in the Sound Department, editing them together, adding sound design, and eventually taking them to be mastered in a professional studio.

I’ve certainly had plenty to keep me busy since starting RADA in September 2015, from production roles to projects, and I really am looking forward to getting our next shows up and running. It’s incredibly rewarding to have something for audiences to come and see, and to understand where sound design sits in the larger scale of productions.

I also look forward to sharing some of the things that I’ve been up to, and my experiences as I complete my final year of drama school (darling).

(I’ll definitely be bringing more jokes with me)

*Photo credits for 1/2/3/4/5/6/7 belong to Linda Carter for RADA


X