Empowering the Next Generation of Women in Audio

Join Us

The Creative Process of Illumination

Light does more than let us appreciate the stage or allow the audience to see the artist, whether an actor, musician, or dancer. Lighting is a complement to the show, as important as any other element. With light, we can create an entire concept: a unique atmosphere for each show, depending on the type of event.

Now, how can this be achieved? It is definitely not an easy task!

First of all, we must bear in mind that being a lighting designer requires knowledge of equipment, color theory, voltages, and more. We also need a lot of practice, since lighting is a profession like any other.

Second, there are some specific factors to consider before we start. Let’s begin by identifying the type of show we are going to light: it could be dance, opera, a concert, or television, and the type of show determines how we will light it. Once we identify it, we have to know the overall concept of the work, for example, whether our artist has a more theatrical concept, like La Castañeda, Triciclo Circus Band, Lady Gaga, or Michael Jackson, to name a few. Working with these types of artists is complicated, since they have a very particular way of visualizing their show (the music, the theatrical concept, the drama) and the way they seek to communicate with their audience is complex. I don’t mean that other artists aren’t challenging, but these examples are useful because their idea of a concert exceeds what we usually see in a show, which makes it unique and unforgettable for each spectator.

Now that we have the concept, the next step is to learn our artist’s tastes (favorite colors, whether they like strobes, etc.). This step is essential before starting to design, and we can do it without any sophisticated equipment or software. A good imagination, blank paper, and some colors are more than enough, although sophisticated tools do make the work considerably easier.

To design, you must first draw your light plot. This is the basis of your whole design: on it you place the equipment you are going to use and its location, according to the characteristics of the venue where the show will be presented. You must specify the type of luminaires you need (conventional, LED, moving heads, etc.), how many you need, the voltage they use, the wattage you require, and, if necessary, the filters, gels, or “Lucas” (whatever you call them), and even the console you want to use.

Once you have your light plot, you can start designing your cues or render. You can do this with downloadable software, either free or paid; there are many options on the market, and some of the best I have tested are Avolites Titan, MA Lighting’s grandMA, and ChamSys MagicQ, which are also free. But as I said, using them isn’t mandatory; you could also use a sheet of paper and colors. The important thing is that when you get to the venue, you have a clear idea of what you want to achieve.

The advantage of designing with software is that you can arrive with a pre-programmed show and spend your time on the console correcting the positions of the lights you will use at important moments, to highlight an action or the artist. In the entertainment world, not everyone always gets the console they want, but there will come a time when we can ask for a fixed rider, although for that you’ll need a lot of experience and patience.

In theater, we depend on the console the venue has. For example, at the Teatro Libanés, where I work, there is an ETC Element, and an ETC Ion Xe will arrive soon, while at the Teatro de la Ciudad Esperanza Iris there are a Road Hog console and an Avolites Pearl Expert (both venues are in Mexico City). This makes pre-programming a show more complicated, but once you have your render, you can arrive at the venue and program your show without any problem.

Finally, a good lighting design does not depend on the equipment available, but on each designer’s creativity and imagination. Constant preparation and practice, along with each operator’s trial and error, make your performance grow.


Mary J. Varher – Dancer, choreographer, teacher, and lighting designer. She started her career as a dancer in Guadalajara at age 14, and at 15 as an actress, which is where she had her first approach to lighting. Later she moved to Mexico City, where she studied a degree in classical dance teaching at the National School of Classical and Contemporary Dance. There she also took stage production classes with Jana Lara, her second approach to lighting, and was fascinated by what can be achieved with such an intangible element. She did her social service at the Raúl Flores Canelo Theater under the tutelage of Ivonne Flores. As her needs grew, she continued learning from great teachers such as Carlos Mendoza, Zanoni Blanco, and Mario Flores. She has worked on various projects, such as “Sound-body-image connection” under the direction of Antonio Isaac, and in the dance companies “Proyecto Bara,” “Spatio Ac Tempore,” and “México Espectacular,” among others. She has participated as a lighting engineer with artists such as Genocidas del Misterio and Descartes a Kant. She currently works as head of lighting at the Teatro Libanés, in the Centro Libanés in Mexico City.


 

SoundGirls and SoundGym

Collaborate to support women in the audio industry

SoundGym and SoundGirls are collaborating to encourage and support women and girls in the audio industry.

SoundGym members have been donating Pro subscriptions to support women in sound.

First, register for a free account at soundgym.co.
Then, in your settings, use our school code CP8I4084H89.
Finally, fill out this form and we will provide you with a year’s subscription.

Register Here


Career Paths in Film and Television Sound

Tour of The Bakery, Sony Scoring Sound Stage, Panel Discussion, Q&A, Networking and Mentoring Social.

You must register for this event to obtain a parking permit and reservation.

Register Here

Moderator: Anne Marie Slack – Executive of Organization Services Motion Picture Sound Editors (MPSE)

Panelists

Karen Baker – Supervising Sound Editor, Warner Brothers

Karen is a two-time Academy Award-winning sound editor. She has also won and been nominated for several Motion Picture Sound Editors awards, as well as winning the BAFTA Award for Best Sound. Her credits include Skyfall and the Bourne films.

Onnalee Blank, CAS – Re-Recording Mixer, Warner Brothers

Onnalee was a ballet dancer before getting into audio. Since then, Onnalee has won 3 Emmys and 5 Cinema Audio Society (CAS) awards for her work as dialog and music mixer on Game of Thrones.

Karol Urban, CAS, MPSE – Re-Recording Mixer

Karol has worked on Grey’s Anatomy, New Girl, Station 19, Band Aid, Breaking 2, and #Realityhigh. She has a diverse list of mix credits spanning feature films, TV (scripted and unscripted), TV movies, and documentaries over the last 18 years. She currently serves on the TV Academy’s Governors’ Mixing Peer Group as well as on the board of directors of the Cinema Audio Society, and is an editor of the CAS Quarterly publication.

Katy Wood – Supervising Sound Editor, Warner Brothers

Katy has worked as ADR supervisor on recent films such as Sicario: Day of the Soldado and Guardians of the Galaxy Vol. 2. Katy’s career in sound for film has spanned the last 20 years. She has worked extensively in the United States, New Zealand (including the Lord of the Rings series), Australia, and the United Kingdom.


Do you have a passion for sound? Music sound… movie sound… all audio sound? Then you should consider a career in audio post production. Audio post-production careers cover a lot of areas including television, web, movies, commercials, live events, scripted shows and movies, documentary/reality, sports, and more.

Join us for a panel discussion and Q&A featuring some talented women working and succeeding in the world of post-production audio. The evening will end with a casual mentoring and networking session.

Below are just a few of the exciting jobs in post production audio.


Sound assistants or machine room operators prep materials and offer tech support to sound editors, mixers, and engineers.

Dialog editors focus on spoken word. A dialog editor listens to all of the mics for quality, smooths out transitions, fixes technical problems, and removes unwanted sounds from dialog when possible.

Music editors are responsible for adjusting music edits and finessing placement for music in a scene. A music editor also coordinates with the composer on a project, delivers all the music to the re-recording mixer, and often attends mixes (as the representative of the music department).

Sound fx editors (sound designers) are the people responsible for non-language sounds. The sound designer has a sound effects library (a catalog of sounds) but also records specialized sounds when needed. He/she adds background ambience and will embellish sounds like explosions, car engines, or guns. The sound designer also has to build sounds from scratch for visual effects or creatures that don’t exist.

ADR Mixers are responsible for recording actors in a studio. The actor performs the line while watching it on a screen and the ADR engineer adjusts microphones and watches for sync (how well the new recorded audio matches their lip movements on-screen). In some cases, ADR is recorded without picture (some cartoons, for example).

Foley mixers are responsible for recording certain non-speaking sounds. The Foley engineer works in a studio with a Foley artist, who makes the sounds while the engineer records them. The Foley team covers sounds such as footsteps, cloth movement, eating, and touching or handling objects.

Supervising Sound Editors (or Sound Supervisors) oversee the sound crew working on a project (sort of like a manager). They communicate with directors, producers, and picture editors about sound, supervise ADR sessions, and attend the dub mix. Sometimes there are multiple sound supervisors on a project, split up by element; an ADR & Dialog Supervisor, for example, focuses only on those two elements.

Re-recording mixers combine all the sound elements (dialog, voice-over, sound fx, Foley, and music) into one project. The mixer adjusts the levels of those sounds together (similar to the job of a live sound mixer or a music mixer). Sound mixers may work alone or in teams with each person focusing on different elements. After the re-recording mixers adjust for balance (looking at it technically and creatively), there will be a review with the producer, director, picture editor, or other members of the film crew to listen, give notes, and make adjustments.

Basic Sound Circuit Glossary

Have you ever read the spec sheet on your favorite piece of gear and wondered what the terms mean?  Are you interested in modifying your gear, but are intimidated by the jargon? Now you can have a cheat sheet for those little components that work hard to make electricity into music.

Active device – A component that uses an outside electrical signal to control current.  Transistors are generally active devices.

Attenuator – A signal dampening device that is often in the form of a potentiometer (pot), a variable resistor like a volume knob or fader, but can be as simple as a single resistor.
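As a quick illustration of the simplest attenuator mentioned above, here is a small Python sketch (the resistor values are made up for the example) of how a plain two-resistor voltage divider dampens a signal:

```python
import math

def divider_attenuation_db(r_series_ohms: float, r_shunt_ohms: float) -> float:
    """Attenuation in dB of an unloaded resistive voltage divider:
    Vout/Vin = R_shunt / (R_series + R_shunt)."""
    ratio = r_shunt_ohms / (r_series_ohms + r_shunt_ohms)
    return 20 * math.log10(ratio)

# Equal resistors halve the voltage, which is roughly -6 dB.
print(round(divider_attenuation_db(10_000, 10_000), 2))  # → -6.02
```

A potentiometer works the same way, except the split between the two resistances moves as you turn the knob or slide the fader.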

Capacitor – A passive component that stores charge, and is often used in the circuit like a temporary battery.  It is also used to remove unwanted DC electricity from a circuit. When repairing circuits this is the little demon that can cause harm even when the power is off.  It also has a tendency to short, and is generally the first component to go bad.

Diode – A component that only allows current to pass one way.  It is used in voltage rectifying (turning AC into DC). Light emitting diodes (LED) are another common application for these components.

Inductor – A passive component that stores energy in a magnetic field and resists changes in current.  It can be used to block AC electricity while allowing DC to pass through.

Load – Any device that you plug into your designed circuit.  It is the catchall term, especially when the circuit is in the designing stage.

Operational Amplifier – A voltage amplifier that uses an external DC voltage to produce a high gain output.  It often takes the difference between two input signals and outputs a single amplified signal. They are a key component in analog circuits, and have a variety of useful functions when combined.
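To make the “difference between two inputs” idea concrete, here is a toy model in Python (idealised: real op-amps have finite gain and bandwidth), alongside the textbook gain formula for the classic non-inverting configuration:

```python
def ideal_diff_amp(v_plus: float, v_minus: float, gain: float) -> float:
    """An idealised op-amp amplifies the difference between its two inputs."""
    return gain * (v_plus - v_minus)

def noninverting_gain(r_feedback_ohms: float, r_ground_ohms: float) -> float:
    """Closed-loop gain of the non-inverting configuration: 1 + Rf/Rg."""
    return 1 + r_feedback_ohms / r_ground_ohms

# A 9k feedback / 1k ground resistor pair gives a gain of 10...
print(noninverting_gain(9_000, 1_000))  # → 10.0
# ...so a 0.25 V difference between the inputs becomes 2.5 V out.
print(ideal_diff_amp(0.5, 0.25, 10))    # → 2.5
```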

Oscillator – A circuit that creates a periodic signal, often sound when energized, usually by a DC signal.  There are a variety of ways to build an oscillation circuit, but many of them function on the principle of creating a feedback cycle that self-sustains.

Passive device – A component that does not control the electrical current by means of an outside electrical signal.

Resistor – The simplest passive component on your circuit board.  It attenuates or dampens the signal. Every circuit has a resistor in some form, and a circuit without any resistance is a short.

Transformer – A device that transfers electricity to another circuit using magnetically coupled inductors.  They can be used to step-up or step-down the voltage from one circuit to the next.

Transistor – An active device that can amplify or switch electric signals.  It is one of the key components in electronics. They are used in analog and digital circuits, and can be found in tube or chip form.

 

SoundGirls Expo in Orlando, Florida

The event was hosted by the Orlando SoundGirls Chapter and Full Sail University.

A while back, a few of us were talking at one of our monthly meet-ups, and I asked everyone, “What would you all like to see us do, and what do you want to learn?” The responses were varied, but everyone agreed they wanted a day of training and networking with other women working in the industry.

From those conversations, I started dreaming about what we could do. I made a few calls and sent some emails. I asked Karrie Keyes for advice, and she suggested I reach out to some of the local manufacturers that had shown interest in supporting SoundGirls, and I did just that. One of the emails I sent was to Full Sail University, to see if we could do some training there. Mark Johnson, head of the Show Production program at Full Sail University, asked me to come in for a meeting, and from there, things just started coming together.

One of the teachers in the entertainment business program, Monika Mason, said she was a member of SoundGirls and wanted to help any way she could. Mark suggested having a two-day Expo with manufacturers, and Monika suggested we also have panels and discussions surrounding women in audio. I’ll be honest: it was a big event to try to pull off, and I wasn’t even sure anyone would attend, as we are still a growing chapter that hasn’t even been up and running for a year. We had two more monthly meet-ups where we spread the word, and we all started talking about it on social media. Mark from Full Sail connected me with Chet Neal from Mainline Marketing, who has a ton of reps under his belt. He asked me for a lot of information and told me he wanted to see if he could get different manufacturers to come, have female reps, and promote what we do. I thought that was a nice touch.

We started planning in January and landed on a date in July. July is a slower month for us in Orlando, so that worked well for others in the industry to be able to attend. We continued planning, talking, and dreaming. Manufacturers like Shure jumped on board and said they would send Laura Davidson, Analog Way jumped on board and said they would send Chrissy Spurlock, Allen and Heath jumped on board and said they would send Willa Snow (who happens to be the Chapter Head for SoundGirls in Austin, TX), and local SoundGirls supporters Clear Tune Monitors jumped on board and said they would send Sandra Cardona and Castor Milano. This was all coming together! I started to get excited!

One of the greatest things I saw while putting this all together was how everyone was so willing to say “yes” and “what can I do to help?” My company, B4 Media Production, sponsored breakfast for the vendors and volunteers. Chet’s company, Mainline Marketing, sponsored lunch for the vendors and volunteers. Mark, Monika, and Full Sail got us a crew and a space on campus to hold the event, and also marketed it to the students.

I reached out to some other women who have been in sound for years to put together a panel discussion. Susan Williams, an audio engineer for a local theater who also teaches audio and video at Full Sail; Alexandria Perryman, a sound engineer for NASA; and I did the panel discussion, and we opened it up for any questions. That was my favorite part of the entire event. We had a real discussion about real topics for over an hour both days, covering everything from “How did you get your foot in the door?” to “What is a good freelance rate to quote someone?” All the manufacturers joined in, and attendees asked questions. We laughed, we were encouraged, and we learned so much from one another.

In addition to informative training sessions, and interactive gear displays, the event highlighted and supported the SoundGirls organizational mission, “to create a supportive community for women in audio and music production, providing the tools, knowledge, and support to further their careers.”

One of the SoundGirls I talked to this last weekend told me, “I got emotional seeing all the women in one place learning from other women on the consoles and the Shure system and the IEMs and so on.” She said she had always been one of the only girls in the field, and she was so encouraged to be surrounded by women running top-of-the-line gear in the real world. It was great hearing just how energized she was.

I still can’t get over how much fun we had and how inspiring it was. As an 18-year veteran of this industry, this is the first time I was ever part of something that helped raise us up as women in this field without it being a requirement or a political statement. Professional women just being professionals, helping and inspiring up-and-coming women and giving them a leg up on a ladder that took a lot of us a long time to climb.

While at the Expo, I spoke with two other veterans of the industry (who found us via the social media event pages): one who has been a broadcast engineer for 20 years and one who has been a FOH engineer for 42 years. Both women encouraged us to keep going and said, “If you do another one, we will come and bring our friends and contacts too.” One of them told me, “You know, a lot of our generation is getting ready to retire. It’s great to see the future of the industry is in such great hands, and I wanna help you ladies out!” The other said, “I wish when I was coming up we had something like SoundGirls. This is such an encouragement to me as a veteran, to see women working together, not backstabbing one another for the one spot available to women.” She shared how men have always helped her, and how great it is to see us come together and unite, with the men supporting us to raise us up, not tear us down. I said, “I too am encouraged by that!” She asked, “Are you doing another one next year?” I said, “I don’t see why we wouldn’t! This just proved to me that we need these kinds of events, as well as the monthly meet-ups, to be an encouragement to one another if nothing else.” She agreed and then said, “I’m going to reach out to all my contacts and help you make this even bigger next year.”

I would encourage all the SoundGirls chapters to try to hold some sort of training or expo where you can invite new people and open discussions where you can share with one another. It was one of the most amazing, productive things I have ever been a part of in our industry. We will definitely do this again next year! I am looking forward to what the future holds for us women in audio.


Beckie Campbell is the owner of B4MediaProduction, a growing production company supplying everything from small corporate setups to medium and large concert systems. Being versatile, Beckie also works as an independent contractor for several companies around the US. Beckie’s experience in the audio field is comprehensive: she works in production management, at FOH or monitors, and as a PA/system or monitor tech. Beckie is the chapter head of the SoundGirls Orlando Chapter. Read SoundGirls Profile on Beckie Campbell

SoundGirls and Girls Rock Camp

Orlando SoundGirls Susan Williams, Cristina Sigala, Roey Lee, and myself, Tzu-Wei Peng, were invited by Rachael and Jamie O’Berry from Girls Rock Camp St. Pete to give a workshop on basic live sound techniques and what to expect in production. We adjusted the content of the workshops for girls of different ages, covering everything from the positions behind a gig, signal flow, and what feedback is and how to avoid it, to production dress code, tools, and how to work as a team.

Susan and Cristina are both great teachers and professionals in the industry; it was great having them at the workshop!

Hands-on experience time.
Susan is teaching her how to patch a microphone and follow the signal flow.

 

Cable wrapping is a must-learn skill, whether you are a musician or part of the production crew. And Lilly is doing great at it!

 

Tzu-Wei is showing Audretta how to read the layout on a console and operate it.

The main objective of Girls Rock Camp is to promote self-confidence, creativity, and a sense of community amongst girls and young women through music.

This was our very first time getting involved with Girls Rock Camp in Florida, and we had a blast! The girls were eager to learn and paid attention during the class. We have heard that they are taking things they learned from the workshop and putting them to use in their band practice. They are the future kick-ass women of the music industry!

Beckie Campbell started the Orlando SoundGirls Chapter this year, and she has been organizing events and meet-ups to encourage more women in Florida who are interested in sound to network. We will keep this community growing and empower more women in sound, like we did with Girls Rock Camp!


Tzu Wei Peng works in Live Sound Production as a sound engineer and backline tech. She also works as a recording engineer and is an active member of SoundGirls.

Live Digital Audio in Plain English Part 1

Digitizing the audio

Digital audio is nothing new, but there is still a lot of misunderstanding and confusion about how it really works, and how to fix it when things go wrong. If you’ve ever tried to find out more about digital audio topics, you will know that there are a lot of dry, complicated, and frankly, boring articles out there, seemingly written by automatons. I’m going to spend the next few posts tackling the fundamental ideas, specifically as they relate to live audio (rather than recording, which seems to have been covered a lot more), in plain English. For the sake of clarity and brevity, some things may be oversimplified or a bit wrong. If unsure, consult the internet, your local library, or a pedantic friend.

So, how does audio become digital in the first place? The analogue signal travels from the source (e.g., a mic) into the desk or its stagebox, where it gets turned into a series of 1s and 0s by an analogue-digital converter (AD converter or ADC). AD converters work by taking lots of snapshots (called samples) of the waveform in very quick succession to build up a digital reconstruction of it: a method known as pulse-code modulation, or PCM. (Don’t worry about remembering all these terms; it’s just useful to understand the whole process. In over ten years of live gigs, I’ve never heard anyone discuss PCM, and I’ve heard some pretty nerdy conversations.) Two factors control how accurate that reconstruction will be: sample rate and bit depth.
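To make the sample-and-store idea concrete, here is a toy Python sketch (not how a real converter is built, just the PCM principle): it takes snapshots of a sine wave and rounds each one to the nearest integer code.

```python
import math

def pcm_sample(freq_hz: float, sample_rate: int, bit_depth: int, n_samples: int):
    """Toy pulse-code modulation: take snapshots of a sine wave and round
    each one to the nearest of 2**bit_depth levels, like a simple ADC."""
    levels = 2 ** bit_depth
    codes = []
    for n in range(n_samples):
        t = n / sample_rate                               # time of this snapshot
        amplitude = math.sin(2 * math.pi * freq_hz * t)   # analogue value, -1..1
        codes.append(round((amplitude + 1) / 2 * (levels - 1)))  # map to 0..levels-1
    return codes

# One cycle of a 1 kHz tone at 48 kHz, 8-bit: the codes span the full 0-255 range.
codes = pcm_sample(1_000, 48_000, 8, 48)
print(min(codes), max(codes))  # → 0 255
```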

Sample rate is the rate at which the samples are taken! Not surprisingly, the more samples per second, the smaller the gap between them (the sample interval) and the less information that is lost. Think of it like frame rate in film – a low sample rate is like a jerky, stop-motion video, while a high sample rate is like fancy 48-frames-per-second Peter Jackson stuff.

Bit depth is the number of bits (single pieces of information encoded in binary – a 0 or a 1) in each sample. 8 bits make a byte, and samples are set to capture the same number of bytes each time. They record the amplitude of the signal – more bits mean more discrete amplitudes it can be recorded as (see figure 1), so the resolution of the soundwave becomes clearer. Bits are like pixels on a screen – low bit depth is similar to blocky, unclear footage, while high bit depth is like high definition, where you can see every detail. Back in the early days of computer games, there wasn’t much available memory in the cartridges, so all the sound was recorded in 8-bit. The low-resolution audio matched the pixelated video.
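The arithmetic behind bit depth is simple enough to sketch: each extra bit doubles the number of available amplitude steps and buys roughly 6dB of dynamic range (a common rule of thumb, not an exact figure):

```python
def quantisation_levels(bit_depth: int) -> int:
    """Each sample is stored as one of 2**bit_depth discrete amplitude steps."""
    return 2 ** bit_depth

def approx_dynamic_range_db(bit_depth: int) -> float:
    """Rule of thumb: roughly 6.02 dB of dynamic range per bit."""
    return round(6.02 * bit_depth, 2)

print(quantisation_levels(8), approx_dynamic_range_db(8))    # → 256 48.16
print(quantisation_levels(16), approx_dynamic_range_db(16))  # → 65536 96.32
print(quantisation_levels(24), approx_dynamic_range_db(24))  # → 16777216 144.48
```

This is why 16-bit CD audio sounds so much cleaner than 8-bit game cartridges: it has 256 times as many amplitude steps to work with.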

Figure 1: Bit depth vs. sample rate. Time is represented on the x-axis, amplitude on the y-axis. Source: https://www.horusmusic.global/music-formats-explained/ Original source unknown.

Looking at figure 1, it’s clear that the greater the bit depth and the higher the sample rate, the closer you can get to the original waveform. Realistically, you can’t take an infinite number of infinitely detailed samples every second – even very high values of each produce an unmanageable amount of data to process and cost too much to be practical. The Nyquist-Shannon theorem states that to reproduce a waveform accurately for a given bandwidth, you need to take more than twice as many samples per second as the highest frequency you are converting. If you take fewer samples per second than the highest frequency, an entire wavelength could happen between samples but wouldn’t be recorded. With between as many and twice as many, you still wouldn’t collect enough data about that waveform to differentiate it from all other frequencies, as shown in figure 2.
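The Nyquist-Shannon criterion itself is just a comparison, sketched here in Python (the function and variable names are mine):

```python
def satisfies_nyquist(freq_hz: float, sample_rate_hz: float) -> bool:
    """True if the sample rate is more than twice the frequency,
    i.e. the waveform can be captured without ambiguity."""
    return sample_rate_hz > 2 * freq_hz

# The standard 44.1 kHz rate comfortably covers the top of human hearing...
print(satisfies_nyquist(20_000, 44_100))  # → True
# ...but a 30 kHz tone would need a sample rate of more than 60 kHz.
print(satisfies_nyquist(30_000, 48_000))  # → False
```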

Figure 2: Aliasing. If a waveform isn’t sampled often enough, it can be confused with other, lower frequency, ones. Source: Eboomer84 via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Aliasing.JPG

For music, we usually assume the bandwidth is the range of human hearing: roughly 20Hz-20kHz. Twice the top of that range is 40kHz, but the Sony corporation figured out that 44.1kHz synced up nicely with the video recording equipment they already had while leaving a nice margin for error, so it became the standard for CDs. 48kHz was adopted for film and video work because it fit neatly with digital video recording gear, and could reproduce even higher frequencies. Most digital mixing desks work at 48kHz or 96kHz.

Moiré patterns like this, or the weird lines when you take a photo of a screen, can be caused by the visual equivalent of aliasing. We have more in common with the video department than we might like to admit. Credit: “angry aliasing in a webgl fragment shader” by Adam Smith on flickr. https://creativecommons.org/licenses/

Why bother with 96kHz? No one can hear 48kHz, so what’s the point in sampling fast enough to cover it? It isn’t strictly necessary, but there are a few reasons to do it anyway. Firstly, there’s the argument that, much like when choosing a speaker’s frequency range, frequencies above the limit of human hearing can still affect the overall waveform, so ignoring them can change the resulting sound. Secondly, in digital sampling, higher frequencies can have a real and detrimental effect called aliasing. In figure 2 you can see that the AD converter cannot tell whether the points it has recorded belong to a very high-frequency waveform or a lower one. It has been told what bandwidth to expect, so it assumes the waveform is the lower one, within the defined bandwidth. That phantom low frequency gets artificially added to the digital audio, making it sound… just not quite right. AD converters use low-pass filters, called anti-aliasing filters, to get rid of these high frequencies, but they aren’t perfect; they aren’t a brick wall stopping everything above 20kHz (or wherever they’re set) from getting through – they have a sloping response just like other filters. Increasing the sample rate takes the pressure off the anti-aliasing filter by moving the frequencies that would alias well above its slope. Thirdly, converters use complex mathematical formulae to take an educated guess at filling in the blanks between samples, a process known as interpolation. The more samples you have, the smaller the blanks that need to be filled and the more accurate that guesswork can be.
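A quick sketch of where an aliased tone lands, assuming a simple folding model (the function is my own illustration, not any converter’s actual behaviour). It shows why doubling the sample rate gives the anti-aliasing filter room to breathe:

```python
def aliased_freq(f, fs):
    """Frequency (Hz) a converter would report for a tone of f Hz sampled at fs Hz,
    assuming the tone gets past the anti-aliasing filter."""
    f = f % fs
    return f if f <= fs / 2 else fs - f

# A 26 kHz tone that sneaks past the filter's slope:
print(aliased_freq(26_000, 48_000))  # folds back to 22000 Hz - audible garbage
print(aliased_freq(26_000, 96_000))  # stays at 26000 Hz - below Nyquist, harmless
```

At 48kHz the inaudible 26kHz tone folds back into the audible band as a 22kHz artefact; at 96kHz the same tone sits comfortably below Nyquist and simply gets recorded, causing no damage.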

Increasing the bit depth also greatly reduces quantisation error. Quantisation is basically rounding each sample to the nearest available amplitude level to smooth off the ‘pixelated’ waveform – more bits mean more options for finding a point as close to the real value as possible. When this process is inaccurate, the rounding introduces noise that isn’t present in the original signal. Increasing the bit depth reduces that rounding error, increasing the ‘signal to quantisation noise ratio.’ 24-bit, which is common in live digital audio, can give you over 120dB of dynamic range because it significantly lowers that quantisation noise floor, giving your true signal more space and reducing the likelihood of it clipping.

As ever, your sound will only be as good as the weakest link in the chain. You might never notice the differences between these options in a live setting as a lot of live gear is not sensitive enough to show them. This might be why there is so much more discussion about them in relation to studios. However, it helps to know what processes are at work, especially when it comes to troubleshooting, which I’ll cover in a future post.


Beth O’Leary is a freelance live sound engineer and tech based in Sheffield, England. While studying for her degree in zoology, she got distracted working for her university’s volunteer entertainments society and ended up in the music industry instead of wildlife conservation. Over the last ten years, she has done everything from pushing boxes in tiny clubs to touring arenas and spends a lot of her life in muddy fields working on most of the major festivals in the UK. She has a particular passion for flying PA, the black magic that is RF, travel, and good coffee. 

Read Beth’s Blog

Film Score Mixing with a Team

I was recently at the Banff Centre for Arts and Creativity in Canada to supervise the film score mix of a three-part documentary series (by filmmaker Niobe Thompson and music by composer Darren Fung). We needed to mix over 100 minutes of music – nearly 200 tracks of audio – in about a week. Luckily, we had a large crew available (over ten people and three mix rooms), so we decided to work in an unusual fashion: mixing all three episodes at the same time.

Normally you have one mixer doing the whole score working in the same mix room. Even if he/she mixes on different days (or has assistants doing some of the work), chances are the sound will be pretty similar. It’s a challenge when you have ten mixers with different tastes and ears working in different rooms with different monitors, consoles, control surfaces, etc. What we decided to do was work together for part of the mix to get our general sound then let each group finish independently.

The tracks included orchestra, choir, organ, Taiko drums, percussion, miscellaneous overdubbed instruments and electronic/synth elements. It was recorded/overdubbed the week prior at the Winspear Centre in Edmonton, Alberta. The Pro Tools session came to us mostly edited, so the best performances were already selected, and wrong notes/unwanted noises were edited out (as much as possible). Our first task was to take the edited session and prepare it to be a film score mix session.

When mixing a film score, the final music mix is delivered to a mix stage with tracks summed into groups (called “stems”). For this project, we had stems for orchestra, choir, organ, taiko, percussion, and a couple of others. Each stem needs its own auxes/routing, reverb (isolated from other stems), and record tracks (to sum each of the stems to a new file). I talk about working with stems more in this blog: Why We Don’t Use Buss Compression.

Once the routing and tech were set, we worked on the basic mix. We balanced each of the mics (tackling a group at a time – orchestra, choir, organ, etc.), set pans, reverbs, sends to the subwoofer (since it’s a 5.1 mix for film). In film score mixing, it’s important to keep the center channel as clear as possible. Some tv networks don’t want the center channel used for music at all (if you’re not sure, ask the re-recording mixer who’s doing the final mix). From there, our strategy was to polish a couple of cues that could be used as a reference for mixing the rest. Once our composer gave notes and approved those cues, we made multiple copies of the session file – one for each team to focus on their assigned portion of the music.

Every project has its unique challenges even if it’s recorded really well. When you’re on a tight time schedule, it helps to identify early on what will take extra time or what problems need to be solved. Some parts needed more editing to tighten up against the orchestra (which is very normal when you have overdubs). When the brass played, it bled into most of the orchestra mics (a very common occurrence with orchestral recording). There are usually some spot mics that are problematic – placed too close or too far, picking up unwanted instrument noise, or getting too much bleed from neighboring instruments. Most of the time you can work around it (masking it with other mics), but it may take more time to mix if you need to feature that mic at some point.

What really makes a film score mix effective is bringing out important musical lines. So, the bulk of the mix work is focused on balance. I think of it like giving an instrument a chance to be the soloist, then going back to blending with the ensemble when the solo line is done. Sometimes it’s as easy as bringing a spot mic up a few dB (like a solo part within the orchestra). Sometimes it takes panning the instrument closer to the center or adding a bit of reverb (to make it feel like a soloist in front of the orchestra). Mix choices are more exaggerated in a film score mix because ultimately the score isn’t going to be played alone. There’s dialog, sound fx, Foley, and voice-over all competing in the final mix. On top of everything else, it has to work with the picture.

Film score mixing is sort of like mixing an instrumental of a song. The dialog is the equivalent of a lead vocal. I encourage listening in context, because what sounds balanced when listening to the score alone may be different than when you listen to your mix turned down 10 dB and with dialog. Some instruments are going to stick out too much or conflict with dialog. Other instruments disappear underneath sound fx. Sometimes the re-recording mixer can send you a temp mix to work with, but often all you have is a guide track with rough mics or temp voice-over. Even with that, you can get a general idea of how your mix is going to sound and can adjust accordingly.

One unique part of this project was the mix crew was composed of 50% women! Our composer, Darren Fung, put it well when he said, “This is amazing – but it should just be normal.”

Equus: Story of the Horse will debut in Canada in September 2018 on CBC TV “The Nature of Things.” In the US, Equus will air on PBS “Nature” and “Nova” in February 2019. It will also air worldwide in early 2019.

Score Mixers: Matthew Manifould, Alex Bohn, Joaquin Gomez, Esther Gadd, Kseniya Degtyareva, Mariana Hutten, Luisa Pinzon, Jonathan Kaspy, Aleksandra Landsmann, Lilita Dunska

Supervising mixers: James Clemens-Seely and April Tucker
