Empowering the Next Generation of Women in Audio

Join Us

Subway Conversations

 

 

The time is 12:46 PM. I sit here staring at my MacBook, cutting tracks and writing at the same time. As I wondered what would be best to write about, I looked at the world around me. You can hear the eerie quiet that has settled over New York. Only last year this city had tourists bustling in the streets; Broadway, both on and off, was popping with showstoppers like Beetlejuice, The Lion King, and Little Shop of Horrors. The walls of music were not confined to the four corners of the room I sit in now; they were the symphony of speeding cars and dozens crowding into the subway. Concerts on every platform of the underground brimmed with a light only seen in Hallmark Christmas films. This was my home.

My home is not the music in the city; rather, it is the individual who sings, the one who strums a guitar on the M train, and the ones in the studio recording ad-libs for the twentieth time. These artists are now at home, replaced by MTA workers fumigating the station like clockwork. I never thought I would come to despise this quiet.

Recently I had to break quarantine to go into the city from my bubble in Queens. I expected the trains to be mostly empty, with scattered commuters in each car, and believe me, they were. We went through each stop with people trickling in and out. Yet when I got to 34th St. and went up the escalator, I heard something familiar. A man sat on a bucket playing his saxophone, his mask resting on his chin. I was early to where I needed to be, so I listened for a while.

I asked the man after his set if I could bother him with some questions; he said he would be willing as long as I kept his identity anonymous. He said that he played down in Greenwich Village, and when Covid-19 hit he lost most of his gigs since the bars shut down. “I got Blasio don’t want people to get sick. I got that, but if I can go to an Applebee’s now and have a beer with my fries – hell, will it make a difference if I go in with my sax?” he vented.

The mayor and Governor Cuomo have recently reopened restaurants for indoor dining – but only at 35% capacity so far.

“Y’all know that Zoom isn’t the best concert hall, even with the Carnegie Hall filters.”

When asked what inspired him to come out here today, he came back with, “All these funny people have that cash to tell everyone else to do what they say. I say if I wanna play without pay – I’m gonna do it in the only city that will appreciate it.”

When asked about his plans for post-quarantine: “You’ll catch me back at the Village.”

With that, he went back to singing with his saxophone. As I listened, with each note I felt nothing but inspiration. With more patience (and believe me, I know that’s easier said than done) and a lot of creative solutions, we might be able to get back to pre-quarantine life. Tourists will come to the city from all over to see a Broadway show, eat at a showy or simple restaurant, and maybe even catch a concert down in Greenwich for a show like no other. The eerie quiet over this city is slowly starting to fade – replaced only by the symphony of speeding cars and dozens crowding into the subway, concerts on every platform of the underground brimming with light, and the knowledge that this last year of being confined to the four corners of our rooms was temporary – even when it felt like it wasn’t.

The Innovation of Theatre During a Pandemic

2020 was the year that Broadway, and so many other theaters, closed their doors. Consoles remained covered. Houses stayed empty. The lonely ghost light stood center stage. However, the year also came with great innovation, which is something that cannot be ignored by those who remain working in this industry. Though our theatre doors may be shut, many have turned to other ways to safely continue community involvement.

Streaming, Zooming, and filming have now become the norm. I think a lot about older family members cursing at their computers, and then I do a broadcast performance where the streaming equipment outweighs the audio equipment in the setup. Although I feel incredibly inexperienced working in audiovisual, this is the road theatre must take to maintain activity and reach its communities. While it may seem frustrating and foreign, streaming ensembles and filming theatre are some of the only ways we can continue to do shows at the moment. My peers and I have had first-hand experience dealing with familiar and not-so-familiar challenges while working this past year.

At the start of 2021, I took on the role of recording engineer for a musical turned film-musical called Gay Card, written by Jonathan Keebler and orchestrated by Ryan Korell. Because of Covid-19, the director, Jordan Ratliff, had to adapt and form safe plans to see this production to fruition. Fortunately, my job was to record spoken lines and sung musical numbers, since backing tracks were provided. The cast is a mix of seasoned actors/actresses and first-time performers. Many cast members had never recorded in a studio before, which can be challenging for both the engineer and the vocalist.

When recording, I prefer to think I am capturing an experience, a slice of the moment; perfection is not a real, pre-existing thing, though I think a lot of people assume that is what they are going to capture when they walk into a studio. That assumption not only puts an immense amount of pressure on the person in the booth, but it also adds unnecessary stress for the engineer. The captured experience is vital for a high-energy musical such as Gay Card.

I worked alongside the director and sound designer to be certain that what we recorded met the needs of the musical. Filming was a hybrid of wide shots for dance numbers and more intimate shots filmed through Zoom. After the filming and editing process is complete, our production of Gay Card will follow the typical assembly line that a movie or short film might follow. The sound designer will add their sound to the picture-locked film, as well as mix and master the finished product.

Many of the designers, technicians, and actors on this production come from the theatrical world and have had little to no experience with film production. We were incredibly fortunate to work with a small filming crew who could turn this piece into an actualized creation. It is collaborations and adaptations like this that make me so fond of the industry that I am in, and even though the current pandemic has halted the typical theatre experience, it has not stopped innovators from finding ways to continue their craft.

On the other hand, some productions have been produced entirely remotely, with both designers and actors working from home. I was able to talk with my peer, Kayla Sierra-Lee, about her experience as the sound designer for a recent production of Sex by Mae West.

Kayla Sierra-Lee on the difference in design technique

In terms of the differences in design techniques, streaming was relatively limiting. All of my sound effects were run through QLab as normal, but the only speaker outputs I had were a left and right computer speaker. We had a specific person dedicated to running the stream, which included most of the audio and effects. After editing our filmed cast, frames were built in Wirecast and filmed sections were put in with the live actors. The show itself was streamed through YouTube.

And you also had to work with the music that the composer had already created. How was that?

There was a change in direction from the director, so a lot of the music the composer gave me no longer fit the tone of the play. The pieces we did use were added into my QLab file, as well as other pieces I had pulled to fit the era of the show. This was a challenge due to all of the copyright laws for streaming, which is something not a lot of theaters have ever had to think about.

Were you the only one not local to the area? Was that a challenge for you?

A majority of us weren’t local, and a lot of the filming/streaming took place in people’s homes. A clear connection for streaming was also a high priority for both actors and designers. None of us had access to in-person rehearsals. That made it a challenge because we couldn’t gauge the emotions and reactions that would normally be happening on stage. We didn’t have a set, so it was important for us as designers to bring home the theme and location of 1930s Montreal. What helped the most was having a director who knew what they wanted and was able to communicate that in our production meetings. Being able to say “this is the tone I want, the mood I want, and the audience reaction I want” was great, since a lot of those things are usually pulled from in-person rehearsals.

Did you miss being able to fully collaborate with your fellow designers?

I really did miss that connection with other designers, but I was able to work with people from all over the country and at all different levels of profession. That doesn’t get to happen very often unless either person travels to the theater, so this was a very unique opportunity for me to have.

Though theater doors have shut, some doors have remained open to professionals that normally would not have been available to them otherwise. It is this strength and resilience that I find most attractive about our industry. I hope this article has renewed your hope and inspiration for your work and its application.

This production of Gay Card is anticipated to be finished in late spring 2021. Information and videos of this production of Sex can be found on the Facebook page titled ‘Play Your Part Seattle’. There, you will be able to find many videos of the cast, designers, and director talking about their experience with this production and the process.

A very special thank you to Kayla Sierra-Lee for her contribution to this article. Sierra-Lee can be found at kmsounddesign.com and is a graduate student in the UIUC Sound Design program.

 

Black Music Influenced the Culture & Music of the United States and the World

In February, we celebrate Black History Month, remembering, embracing, and recognizing the amazingly creative and entrepreneurial contributions African Americans have made in the United States. In this article, I will be discussing genres of music that derive their roots and influence from African American traditions.

Gospel Music

Mahalia Jackson

Gospel music, or sacred music, was the earliest form of Black musical expression in the United States. It was based on Christian psalms and hymns merged with African musical styles and secular traditions. Gospel music originated in Black churches and has become a genre recognized globally.

Did You Know? Gospel Music is based on classical music theory. To learn how both genres relate check out: Learn Gospel Music Theory


The Blues

Sister Rosetta Tharpe

Blues is a music genre and musical form that originated in the Deep South of the United States around the 1860s, created by African Americans from roots in African musical traditions, African American work songs, and spirituals, and characterized by “bent” or “blue” notes not found on the standard scale. The songs expressed longing, loss, or desire, which is why the genre was called the “blues.” Regional origins include the Mississippi Delta, Memphis, Chicago, and Southern Texas.

Beginner Guitarist? Learn Blues Guitar theory here.

Did You Know? The harmonic structure of a blues progression uses the I-IV-V chords of the key. The blues scale formulas are – Major: 1-2-♭3-3-5-6; Minor: 1-♭3-4-♭5-5-♭7
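To make those formulas concrete, here is a minimal sketch (an illustration of mine, not from the article) that converts the degree formulas above into note names, assuming 12-tone equal temperament; the function name and note spellings are only for demonstration.

```python
# A minimal sketch: the major and minor blues scale formulas above,
# expressed as semitone offsets from a root note (12-tone equal temperament).
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

# Degree formulas from the text: Major 1-2-b3-3-5-6, Minor 1-b3-4-b5-5-b7
MAJOR_BLUES = [0, 2, 3, 4, 7, 9]   # semitones above the root
MINOR_BLUES = [0, 3, 5, 6, 7, 10]

def blues_scale(root: str, minor: bool = False) -> list[str]:
    """Return the note names of the blues scale built on `root`."""
    start = NOTE_NAMES.index(root)
    offsets = MINOR_BLUES if minor else MAJOR_BLUES
    return [NOTE_NAMES[(start + o) % 12] for o in offsets]

print(blues_scale("C"))              # ['C', 'D', 'Eb', 'E', 'G', 'A']
print(blues_scale("A", minor=True))  # ['A', 'C', 'D', 'Eb', 'E', 'G']
```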


Jazz

Buddy Bolden, the Man Who ‘Invented Jazz’

Jazz evolved from ragtime, an American style of syncopated instrumental music. Jazz first materialized in New Orleans and is often distinguished by African American musical innovation. Multiple forms of the genre exist today, from the dance-oriented music of the 1920s big-band era to the experimental flair of modern avant-garde jazz.

Did You Know? There are over 10 kinds of jazz scales. Learn your jazz scales here.

Rhythm and Blues

Tina Turner & Ike

R&B is a diverse genre with roots in jazz, the blues, and gospel music. R&B helped spread African American culture and popularized racial integration on the airwaves and in white society during the 1960s. The term was originally used by record companies to describe recordings marketed predominantly to urban African Americans. Today’s iteration of the genre has assimilated soul and funk characteristics.


 

Rock and Roll

Sister Rosetta Tharpe, The Godmother Of Rock ‘N’ Roll

Rock’s first guitar heroine was none other than Sister Rosetta Tharpe; her 1938 single “Rock Me” took the world by storm. Sister Rosetta’s influence extended far beyond her own career. Johnny Cash called her his favorite singer, covering several of her songs on his 1979 gospel album A Believer Sings the Truth. Elvis Presley performed her version of “Up Above My Head” at his 1968 comeback special. The Staple Singers, Nina Simone, Paul Butterfield, Van Morrison, Led Zeppelin, and the Grateful Dead are just a handful of the artists who’ve covered Tharpe’s classic blues song “Nobody’s Fault But Mine.”


Country Music

DeFord Bailey was the first Black performer to be introduced on the Grand Ole Opry.

African American influence on country music can’t be overstated. Country music has roots in African American jazz and blues of the South. The blues emerged from African American folk musical forms, which arose in the southern United States and became internationally popular in the 20th century. Blues styles have been used and adapted extensively throughout country music’s recorded history. Jimmie Rodgers, sometimes called the father of country music, was known for combining blues, gospel, jazz, cowboy, and folk styles in his songs.

Rewriting Country Music’s Racist History: Artists like Yola and Rhiannon Giddens are blowing up what Giddens calls a “manufactured image of country music being white and being poor.”


 

Hip-Hop and Rap

“Rapper’s Delight” by the Sugarhill Gang, released in 1979, was the first hip-hop record to become a hit, charting on Billboard in 1980. Hip-hop and rap music are embedded with jazz, gospel, and rock roots, and have become a global phenomenon with the development of mass media and pop culture attention.

To Learn More Visit A timeline of history-making Black music


Common Sound Editorial Mistakes That Can Become Big Mix Problems

As a mixer, I see all kinds of issues cropping up that originated in sound editorial. And with my background in sound editorial, I’ve surely committed every one of them myself at some point. Here’s a list of some common problems we see on the mix stage. Avoiding these problems will not only make your work easier to handle and more professionally presented, but it will also hopefully save you a snarky email or comment from a mixer!


Sound Effects With Baked In Processing

As soon as you commit to an EQ, reverb, or other processing choice with AudioSuite, your mixer’s hands are tied. Yes, you may be making a very creative choice; however, that choice cannot be undone, and often processed editorial simply needs to be thrown out and recut to make it mixable.

But what if you just have to present your creative vision in this way, be it for a client review or to get an idea across? In that case, your best move is to copy and mute the sound clip. Place the copy directly below the one you plan to process so it can easily be unmuted and utilized. That way, your mixer has the option to work with the dry effect. Another alternative, if you’re dealing with EQ processing, is to use Clip Effects. Just be sure that, downstream, the mix stage has the proper version of Pro Tools or this information won’t be passed along.

Processed clip with the muted copy below

How about if the sound has room on it, but you didn’t put it there? I’ve gotten handclaps that sound like they were recorded in a gymnasium cut into an intimate, small scene. That’s just a bad sound choice, and you need to find a better one.

Stereo Sound Effects Used for Center Channel Material in Surround Or Larger Mix Formats

Sound editors, especially those who work from home, do not often cut in a surround sound environment. The result of cutting in stereo for a surround (or larger-format) project is a lack of knowledge about how things will translate.

One of my big pet peeves is when center channel material – actions happening on screen, in the middle of the frame – is cut with stereo sound effects. The result of, say, a punch or distant explosion cut in stereo when translated to the 5.1 mix is a disconnect for the listener. Ultimately, we as mixers need to go into the panning and center both channels to get the proper directionality.

Now, it’s not an impossible problem to solve when working in stereo. Just avoid cutting sound effects in stereo tracks that do not engulf the entire frame, provide ambience, or are outside of picture as a whole. Your mixer will thank you for it.

Splitting Sound Effects Builds Between Food Groups

We have written extensively on the idea of using “Food Groups” in your editorial to keep things organized (see links below).

The dark side of this, however, is that some editors can get carried away with these designations. The rule to follow here is to be sure that anything that may need to be mixed together stays together.

For example, if you have a series with lots of vehicles, it may seem to make sense to have a Car food group as well as a Tires food group. The Car group would get the engine sounds and the Tires group the textures, like gravel and skids. But when it comes time to mix, this extra bit of organization ends up making the job extremely difficult. If a car goes by from screen left to right, the mixer needs to pan and ramp the volume of those elements. If you cut them all together in one chunk of tracks, it’s an easy move to group them. If you split them up among food groups, the mixer has to hunt around for the proper sounds, then group across the multiple food groups. It’s simply too cumbersome, not to mention that it takes the functionality of the VCA out of the picture. A solution, in this case, would be to simply have a Vehicle food group that encompasses all aspects of the car that could require simultaneous mixing.

Layering Random Sounds Into Food Groups

Speaking of food groups and functionality, the whole point of a food group is to be able to control everything by using one fader (VCA). That functionality also becomes void if sounds not applicable to that group are dropped in.

For example, if we have an Ambience food group with babbling brook steadys and a client wants all the “River sounds” turned down, the VCA for that food group makes it a snap. However, if an editor cuts splashes of a character swimming in that same food group, it suddenly ruins the entire concept. True, splashing is water, but that misses the entire point of the food group.

Single sounds layered in with long ambiences render the VCA useless


Worse yet is when an editor simply places sounds in an already utilized food group because they ran out of room on other tracks. This only works as a solution for layout issues if you have an extra, empty food group.

Breaking Basic Rules In Order to Follow Another

There’s a basic hierarchy to rules of sound editorial. Some rules you just can’t break, plain and simple. Like crossfading two entirely different sound effects with one another. That’s a mixing nightmare, one that simply needs to be reorganized in order to successfully pull off the job. But sometimes the breaking of these rules comes with the best of intentions. I have two examples for you.

Incorrect layout

In this case, the editor ran out of space in a food group and opted to use this crossfade, rather than break up the food group. It’s important to not only know the rules but even more important to know when to make an exception. In this case, there was the simple solution of moving this one sound into the hard SFX tracks, or simply adding a track to the food group (with permission from your supervisor or mixer), solving the issue and not creating any new ones.


Here again, we have an editor with the best of intentions. An insect is on-screen moving in and out of frame from left to right. The editor thought that since the camera angle did not change, it did not warrant cutting the second chunk of sounds on a different set of tracks. Mixing this once again is impossible. As there is no time between the fading out and fading back in of the sound, there’s no magic way for the mixer to change an essential property, in this case, the panning. A proper understanding of perspective cutting would have avoided this issue.

Over Color Coding

Using colors to code your editorial is another topic we’ve covered extensively (see links below).

While color-coding your work is immensely helpful, here too lies a potential issue.

Let’s say you have a sequence in a swimming pool. There are steady water-lapping sounds, swimming sounds, and big splashes from jumping off the diving board. An editor may see this and think, it’s all water, so I’m going to color all of these elements blue. The purpose of the color code is to delineate clips from one another to speed up the mixing process. When an editor liberally color codes their work one color, you end up with no relevant information at all. In this case, each of these categories of sounds should be colored differently from one another so that it’s obvious they are for different parts of the scene.

Poor Layout for Futz Materials

Materials that need special treatment, like sound effects coming from a television, need to stay clustered together within an unused food group, or at the very least on the same set of tracks. I like to have my futz clusters live on the bottom-most hard sound effects tracks, color coding the regions the same to make my intentions absolutely clear. This allows the mixer to very quickly and easily highlight the cluster and set a group treatment, like EQ. Think of it as temporarily dedicating some tracks for this purpose and stair-stepping your work around them, being careful not to intermingle non-futz materials on those tracks for the duration of the necessary treatment, as that is equally problematic.

Why go to the effort? If you sprinkle these materials throughout your editorial, it becomes a game of hunting around for the mixer to find what needs futzing. Odds are your mixer will need to stop mixing and reorganize your entire layout to fix the problem and make it mixable.

Bonus Issues

Women and BIPOC Industry Directories

 

Never Famous

Bands, festivals, TV shows, traveling Broadway musicals, and other touring groups need competent and diverse personnel who perform their tasks with a high level of expertise and professionalism day-in and day-out. Touring personnel need a way to market their expertise and let their availability be known within the industry. Both groups need a way to broaden the scope of available jobs, resources, and candidates, and break out of the cycle of peer-to-peer referrals and word of mouth as the primary way to hire and get hired.

POC in Audio Directory

The directory features over 500 people of color who work in audio around the world. You’ll find editors, hosts, writers, producers, sound designers, engineers, project managers, musicians, reporters, and content strategists with varied experience from within the industry and in related fields.

While recruiting diverse candidates is a great first step, it’s not going to be enough if we want the industry to look and sound meaningfully different in the future. Let us be clear: this isn’t about numbers alone. This is about getting the respect that people of color—and people of different faiths, abilities, ages, socioeconomic statuses, educational backgrounds, gender identities, and sexual orientation—deserve.

Tour Collective

Helps artists hire a great crew for their tours, live shows, and virtual performances.

We believe that every crew person should continually have the opportunity to find jobs, advance their career, and ultimately create a better life for themselves and their families.

Here’s how it works:
1. Sign up at this link https://tourcollective.co/jointhecrew
2. Fill out the form
3. Keep an eye on your inbox. You’ll automatically get notified when jobs come in that match your profile. Then you can apply for the ones that you’re interested in.

Find Production People

Women in Lighting

Femnoise

A collective fighting for the reduction of the gender gap in the music industry. But we soon realized that the solution is not just activism. We have to go one step further: to connect and empower underrepresented individuals on a large scale, worldwide.

POC Theatre Designers and Techs

Wingspace

is committed to the cause of equity in the field.  There are significant barriers to accessing a career in theatrical design and we see inequalities of race, socioeconomic status, gender identity, sexual orientation and disability across the field.

Parity Productions

Fills creative roles on their productions with women and trans and gender nonconforming (TGNC) artists. In addition to producing their own work, they actively promote other theatre companies that follow their 50% hiring standard.

Production on Deck

Uplifting underrepresented communities in the arts. Their main goal is to curate a set of resources to help amplify the visibility of (primarily) People of Color in the arts.

She is the Music DataBase

Live Nation Urban’s Black Tour Directory

Music Industry Diversity and Inclusion Talent Directory

The F-List Directory of U.K. Musicians

FUTURE MUSIC INDUSTRY 

WOMEN/ NON-BINARY DJS/PRODUCERS

Entourage Pro

South America – Producers by Country – Podcasteros

Women in Live Music DataBase

Women’s Audio Mission Hire Women Referrals

One Size Does Not Fit All in Acoustics

Have you ever stood outside when it has been snowing and noticed that it feels “quieter” than normal? Have you ever heard your sibling or housemate play music or talk in the room next to you and heard only the lower frequency content on the other side of the wall? People are better at perceptually understanding acoustics than we give ourselves credit for. In fact, our hearing and our ability to perceive where a sound is coming from are important to our survival because we need to be able to tell if danger is approaching. Without necessarily thinking about it, we get a lot of information about the world around us from localization cues, gathered from the time offsets between direct and reflected sounds arriving at our ears, that our brain quickly analyzes and compares against our visual cues.

Enter the entire world of psychoacoustics

Whenever I walk into a music venue during a morning walk-through, I try to bring my attention to the space around me: What am I hearing? How am I hearing it? How does that compare to the visual data I’m gathering about my surroundings? This clandestine, subjective information gathering is important for reality-checking the data collected during the formal, objective measurement process of a system tuning. People spend entire lifetimes researching the field of acoustics, so instead of trying to give a “crash course” in acoustics, we are going to talk about some concepts to get you interested in behavior that you have already spent your whole life learning from an experiential perspective without realizing it. I hope that by the end of reading this you will realize that the interactions of signals in the audible human hearing range are complex because the perspective changes depending on the relationships of frequency, wavelength, and phase between the signals.

The Magnitudes of Wavelength

Before we head down this rabbit hole, I want to point out that one of the biggest “Eureka!” moments I had in my audio education was when I truly understood what Jean-Baptiste Fourier discovered in 1807 [1] regarding the nature of complex waveforms: a complex waveform can be “broken down” into many component waves that, when recombined, recreate the original. For example, the sound of a human singing can be broken down into the many composite sine waves that add together to create the singer’s complex original waveform. I like to conceptualize the behavior of sound under the philosophical framework of Fourier’s discoveries. Instead of being overwhelmed by the complexities as you go further down the rabbit hole, I like to think that the more I learn, the more the complex waveform gets broken into its component sine waves.
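To make Fourier’s idea concrete, here is a minimal sketch (my own illustration, not drawn from the referenced history) that builds a “complex” waveform from three sine waves and then uses an FFT to break it back down into its components; the sample rate, frequencies, and amplitudes are arbitrary assumptions.

```python
# A minimal sketch of Fourier's idea: sum a few sine waves into one complex
# waveform, then recover those components with an FFT.
import numpy as np

fs = 48_000                                          # sample rate in Hz (assumed)
t = np.arange(fs) / fs                               # one second of time
components = [(110, 1.0), (220, 0.5), (330, 0.25)]   # (frequency Hz, amplitude)

# Add the sines together into one "complex" waveform
signal = sum(a * np.sin(2 * np.pi * f * t) for f, a in components)

# Break the waveform back down into its component frequencies
spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

for f, a in components:
    measured = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{f} Hz -> measured amplitude {measured:.2f} (expected {a})")
# Each component comes back at roughly the amplitude we put in.
```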

Conceptualizing sound field behavior is frequency-dependent

 

One of the most fundamental quandaries in analyzing the behavior of sound propagation is that the wavelengths we work with in the audible frequency range vary by orders of magnitude. We generally understand the audible frequency range of human hearing to be 20 cycles per second (20 Hz) to 20,000 cycles per second (20 kHz), though this varies with age and other factors such as hearing damage. Now recall the basic formula for determining wavelength at a given frequency:

Wavelength (in feet or meters) = speed of sound (in feet per second or meters per second) / frequency (in Hertz) **the wavelength and speed of sound must use the same distance unit, i.e. meters and meters per second**

So let’s look at some numbers given specific parameters for the speed of sound, since we know that the speed of sound varies due to factors such as altitude, temperature, and humidity. The speed of sound at “average sea level” (roughly 1 atmosphere, or 101.3 kilopascals [2]), at 68 degrees Fahrenheit (20 degrees Celsius), and at 0% humidity is approximately 343 meters per second, or approximately 1,125 feet per second [3]. There is a great calculator online at sengpielaudio.com if you don’t want to calculate this manually [3]. So if we use the formula above to calculate the wavelength for 20 Hz and 20 kHz with this value for the speed of sound, we get (we will use Imperial units because I live in the United States):

Wavelength of 20 Hz= 1,125 ft/s / 20 Hz = 56.25 feet

Wavelength of 20 kHz or 20,000 Hertz = 1,125 ft/s / 20,000 Hz = 0.0563 feet or 0.675 inches
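If you would rather let a script do the arithmetic, here is a minimal sketch of the same formula using the assumed 1,125 ft/s speed of sound; the extra frequencies are just illustrative.

```python
# A minimal sketch of the wavelength formula above, using the same assumed
# speed of sound (~1,125 ft/s at sea level, 68 degrees F, 0% humidity).
SPEED_OF_SOUND_FT_S = 1125.0

def wavelength_ft(frequency_hz: float) -> float:
    """wavelength = speed of sound / frequency (same distance units)."""
    return SPEED_OF_SOUND_FT_S / frequency_hz

for f in (20, 100, 1_000, 10_000, 20_000):
    wl = wavelength_ft(f)
    print(f"{f:>6} Hz -> {wl:8.3f} ft ({wl * 12:6.2f} in)")
# 20 Hz works out to 56.25 ft and 20 kHz to about 0.056 ft (0.675 in),
# matching the figures above.
```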

This means that we are dealing with wavelengths that range from roughly the size of a penny to the size of a building. We see this in a different way as we move up in octaves along the audible range from 20 Hz to 20 kHz, because as we increase frequency, each octave band spans twice as many frequencies as the one below it:

32 Hz-63 Hz

63-125 Hz

125-250 Hz

250-500 Hz

500-1000 Hz

1000-2000 Hz

2000-4000 Hz

4000-8000 Hz

8000-16000 Hz

Look familiar??
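As a quick sketch of why that list looks familiar: each octave band is simply a doubling of the one below it, so the bandwidth doubles as well. The nominal band edges above (63, 125, 250, ...) are rounded versions of these doublings.

```python
# A quick sketch: starting at 32 Hz, each octave band doubles in frequency,
# so each band is twice as wide as the one below it. The nominal values in
# the list above (63, 125, 250, ...) are rounded versions of these doublings.
low = 32
while low < 16_000:
    high = low * 2
    print(f"{low} Hz - {high} Hz  (bandwidth {high - low} Hz)")
    low = high
```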

Unfortunately, what this ends up meaning for us sound engineers is that there is no “catch-all” way of modeling the behavior of sound that can be applied to the entire audible frequency spectrum. It means that objects and surfaces obstructing or interacting with sound may or may not create issues depending on their size in relation to the wavelength of the frequency under scrutiny.

For example, take the practice of placing a measurement mic on top of a flat board to gather what is known as a “ground plane” measurement – for example, placing the mic on top of a board and putting the board on top of seats in a theater. This is a tactic I use primarily in highly reflective rooms to take measurements of a loudspeaker system in order to observe the system behavior without the degradation from the reflections in the room, usually because I don’t have control over changing the acoustics of the room itself (see using in-house, pre-installed PAs in a venue). The caveat to this method is that the board has to be at least a wavelength across at the lowest frequency of interest. So if you have a 4 ft x 4 ft board for your ground plane, the measurements are really only helpful from roughly 280 Hz and above (solve: 1,125 ft/s / 4 ft ≈ 280 Hz, given the assumption about the speed of sound discussed earlier). Below that frequency, the wavelengths of the signal under test will be larger in relation to the board, so the benefits of the ground plane do not apply. The other option to extend the usable range of the ground plane measurement is to place the mic on the ground (like in an arena) so that the floor becomes an extension of the boundary itself.
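Here is a minimal sketch of that rule of thumb, assuming the same ~1,125 ft/s speed of sound; the board sizes are example values, not recommendations.

```python
# A minimal sketch of the ground-plane rule of thumb described above: the
# board is only trustworthy down to the frequency whose wavelength equals
# the board dimension. Speed of sound assumed to be ~1,125 ft/s as before.
SPEED_OF_SOUND_FT_S = 1125.0

def lowest_usable_frequency(board_size_ft: float) -> float:
    """Frequency whose wavelength equals the board dimension."""
    return SPEED_OF_SOUND_FT_S / board_size_ft

print(f"4 ft board: usable above ~{lowest_usable_frequency(4):.0f} Hz")  # ~281 Hz
print(f"8 ft board: usable above ~{lowest_usable_frequency(8):.0f} Hz")  # ~141 Hz
```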

Free Field vs. Reverberant Field:

When we start talking about the behavior of sound, it’s very important to make a distinction about what type of sound field behavior we are observing, modeling, and/or analyzing. If that isn’t confusing enough, depending on the scenario, the sound field behavior will change depending on what frequency range is under scrutiny. Most loudspeaker prediction software works by using calculations based on measurements of the loudspeaker in the free field. To conceptualize how sound operates in the free field, imagine a single, point-source loudspeaker floating high above the ground, outside, with no obstructions in sight. Based on the directivity index of the loudspeaker, the sound intensity will propagate outward from the origin according to the inverse square law. We must remember that the directivity index is frequency-dependent, which means that we must look at this behavior as frequency-dependent. As a refresher, this spherical radiation of sound intensity from the point source results in a 6 dB loss per doubling of distance. As seen in Figure A, the area over which the sound intensity is spread at radius “r” grows by a factor of r^2, since in the free field sound pressure radiates omnidirectionally as a sphere outward from the origin.

Figure A. A point source in the free field exhibits spherical behavior according to the inverse square law where sound intensity is lost 6dB per doubling of distance
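A small sketch of the inverse square law in code, assuming a point source in the free field; the distances are arbitrary examples.

```python
# A minimal sketch of the inverse square law for a point source in the free
# field: level drops 6 dB for every doubling of distance from the source.
import math

def spl_change_db(ref_distance: float, new_distance: float) -> float:
    """Change in sound pressure level (dB) moving from ref_distance to new_distance."""
    return -20 * math.log10(new_distance / ref_distance)

for d in (1, 2, 4, 8, 16):
    print(f"{d:>2} m from the source: {spl_change_db(1, d):6.1f} dB relative to 1 m")
# 2 m -> -6.0 dB, 4 m -> -12.0 dB, 8 m -> -18.0 dB: 6 dB lost per doubling.
```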

 

The inverse square law applies to point-source behavior in the free field, yet things grow more complex when we start talking about line sources and Fresnel zones. The relationship between point-source and line-source behavior changes depending on whether we are observing the source in the near field or the far field, since a directional source becomes a point source if observed in the far field. Line source behavior is a subject that could fill an entire blog or book on its own, so for the sake of brevity, I will redirect you to the Audio Engineering Society white papers on the subject, such as the 2003 white paper on “Wavefront Sculpture Technology” by Christian Heil, Marcel Urban, and Paul Bauman [4].

Free field behavior, by definition, does not take into account the acoustical properties of the venue that the speakers exist in. Free field conditions exist pretty much only outdoors in an open area. The free field does, however, make speaker interactions easier to predict, especially when we have known direct (on-axis) and off-axis measurements comprising the loudspeakers’ polar data. Since loudspeaker manufacturers have this high-resolution polar data for their speakers, they can predict how elements will interact with one another in the free field. The only problem is that anyone who has ever been inside a venue with a PA system knows that we aren’t just listening to the direct field of the loudspeakers, even when we have great audience coverage from a system. We also listen to the energy returned from the room in the reverberant field.

As mentioned in the introduction to this blog, our hearing allows us to gather information about the environment that we are in. Sound radiates in all directions, but it has directivity relative to the frequency range being considered and the dispersion pattern of the source. Now if we take that imaginary point-source loudspeaker from our earlier example and listen to it in a small room, we will hear not only the direct sound coming from the loudspeaker to our ears, but also the reflections from the loudspeaker bouncing off the walls and then back at our ears, delayed by some offset in time. Direct sound often correlates to something we see visually, like hearing the on-axis, direct signal from a loudspeaker. Reflections result from the sound bouncing off other surfaces before arriving at our ears; what they don’t contribute to the direct field, they add to the reverberant field, which helps us perceive spatial information about the room we are in.

 

Signals arriving on an unobstructed path to our ears are perceived as direct arrivals, whereas signals bouncing off a surface and arriving with some offset in time are reflections.

 

Our ears are like little microphones that send aural information to our brain. Our ears vary from person to person in size, shape, and the distance between them. This gives everyone their own unique time and level offsets based on the geometry between their ears, which create our own individual head-related transfer functions (HRTF). Our brain combines the data of the direct and reflected signals to discern where the sound is coming from. The time offsets between a reflected signal and the direct arrival determine whether our brain will perceive the signals as coming from one source or two distinct sources. This is known as the precedence effect or Haas effect. Sound System Engineering by Don Davis, Eugene Patronis, Jr., & Pat Brown (2013) notes that our brain integrates early reflections arriving within “35-50 ms” of the direct arrival as a single source. Once again, we must remember that this is an approximate value, since actual timing will be frequency-dependent. Late reflections that arrive later than 50 ms do not get integrated with the direct arrival and instead are perceived as separate sources [5]. When two signals have a large enough time offset between them, we start to perceive the two separate sources as echoes. Specular reflections can be particularly obnoxious because they arrive at our ears with an increased level or at an angle of incidence such that they can interfere with our perception of localized sources.
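To illustrate the timing involved, here is a minimal sketch that converts path-length differences into arrival-time offsets and compares them against the approximate 35-50 ms integration window cited above; the path lengths and the 343 m/s speed of sound are assumptions for the example.

```python
# A minimal sketch of the precedence (Haas) effect window: how late does a
# reflection arrive, given the extra distance it travels compared to the
# direct sound? Path lengths and 343 m/s speed of sound are assumed values.
SPEED_OF_SOUND_M_S = 343.0
INTEGRATION_WINDOW_MS = 50.0  # upper end of the ~35-50 ms figure cited above

def reflection_delay_ms(direct_path_m: float, reflected_path_m: float) -> float:
    """Time offset between the direct arrival and a reflection, in milliseconds."""
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND_M_S * 1000

for reflected in (12.0, 20.0, 35.0):
    delay = reflection_delay_ms(10.0, reflected)
    verdict = ("integrated with the direct sound"
               if delay <= INTEGRATION_WINDOW_MS
               else "heard as a separate source / echo")
    print(f"direct 10 m, reflected {reflected} m -> {delay:5.1f} ms offset: {verdict}")
```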

Specular reflections act like reflections off a mirror bouncing back at the listener

 

Diffuse reflections, on the other hand, tend to lack localization and add more to the perception of “spaciousness” of the room, yet depending on frequency and level they can still degrade intelligibility. Whether the presence of certain reflections will degrade or add to the original source is highly dependent on their relationship to the dimensions of the room.

 

Various acoustic diffusers and absorbers used to spread out reflections [6]

In the Master Handbook of Acoustics, F. Alton Everest and Ken C. Pohlmann (2015) illustrate how “the behavior of sound is greatly affected by the wavelength of the sound in comparison to the size of objects encountered” [7]. Everest & Pohlmann describe how the varying size of wavelength depending on frequency means that how we model sound behavior will vary in relation to the room dimensions. In smaller rooms, there is a frequency range in which the dimensions of the room are shorter than the wavelength, such that the room cannot contribute boosts due to resonance effects [7]. Everest & Pohlmann note that when the wavelength becomes comparable to the room dimensions, we enter modal behavior. The top of this range marks the “cutoff frequency” above which we can begin to describe the interactions using “wave acoustics”, and as we progress into the higher frequencies of the audible range we can model these short-wavelength interactions using ray behavior. One can find the equations for estimating these ranges based on room length, width, and height in the Master Handbook of Acoustics. It’s important to note that while we haven’t explicitly discussed phase, its importance is implied, since it is a necessary component of understanding the relationship between signals. After all, the phase relationship between two copies of the same signal will determine whether their interaction results in constructive or destructive interference. What Everest & Pohlmann are getting at is that how we model and predict sound field behavior will change based on wavelength, frequency, and room dimensions. It’s not as easy as applying one set of rules to the entire audible spectrum.
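As one concrete example of the kind of room-dimension calculation Everest & Pohlmann point to, here is a minimal sketch using the standard axial room-mode formula f_n = n * c / (2L); note this formula is general acoustics knowledge rather than something reproduced from the article, and the room dimensions are arbitrary examples.

```python
# A minimal sketch (general acoustics, not reproduced from the article) of the
# standard axial room-mode formula f_n = n * c / (2 * L): the frequencies at
# which a single room dimension supports resonances.
SPEED_OF_SOUND_M_S = 343.0

def axial_modes(dimension_m: float, count: int = 3) -> list[float]:
    """First few axial mode frequencies (Hz) for one room dimension."""
    return [n * SPEED_OF_SOUND_M_S / (2 * dimension_m) for n in range(1, count + 1)]

for name, dim in (("length", 8.0), ("width", 5.0), ("height", 3.0)):
    modes = ", ".join(f"{f:.0f} Hz" for f in axial_modes(dim))
    print(f"{name:>6} {dim} m -> axial modes at {modes}")
# Below the lowest of these, the dimension is too short to reinforce the wave;
# near them, modal behavior dominates, as Everest & Pohlmann describe.
```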

Just the Beginning

So we haven’t even begun to talk about the effects of surface properties such as absorption coefficients and RT60 times, and yet we already see the increasing complexity of the interactions between signals based on the fact that we are dealing with wavelengths that differ by orders of magnitude. In order to simplify predictions, most loudspeaker prediction software uses measurements gathered in the free field. Although acoustic simulation software exists, such as EASE, that allows the user to factor in the properties of the surfaces, often we don’t know the information needed to account for things such as the absorption coefficients of a material unless someone gets paid to go and take those measurements, or the acoustician involved with the design has documented the decisions made during the architecture of the venue. Yet despite the simplifications needed to make prediction easier, we still carry one of the best tools for acoustical analysis with us every day: our ears. Our ability to perceive information about the space around us based on interaural level and time differences from signals arriving at our ears allows us to analyze the effects of room acoustics based on experience alone. It’s important when looking at the complexity involved with acoustic analysis to remember the pros and cons of our subjective and objective tools. Do the computer’s predictions make sense based on what I hear happening in the room around me? Measurement analysis tools allow us to objectively identify problems and their origins that aren’t necessarily perceptible to our ears. Yet remembering to reality-check with our ears is important because otherwise, it’s easy to get lost in the rabbit hole of increasing complexity as we get further into our engineering of audio. At the end of the day, our goal is to make the show sound “good”, whatever that means to you.

Endnotes:

[1] https://www.aps.org/publications/apsnews/201003/physicshistory.cfm

[2] (pg. 345) Giancoli, D.C. (2009). Physics for Scientists & Engineers with Modern Physics. Pearson Prentice Hall.

[3] http://www.sengpielaudio.com/calculator-airpressure.htm

[4] https://www.aes.org/e-lib/browse.cfm?elib=12200

[5] (pg. 454) Davis, D., Patronis, Jr., E. & Brown, P. Sound System Engineering. (2013). 4th ed. Focal Press.

[6] “recording studio 2” by JDB Sound Photography is licensed with CC BY-NC-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/2.0/

[7] (pg. 235) Everest, F.A. & Pohlmann, K. (2015). Master Handbook of Acoustics. 6th ed. McGraw-Hill Education.

Resources:

American Physical Society. (2010, March). This Month in Physics History: March 21, 1768: Birth of Jean-Baptiste Joseph Fourier. APS News. https://www.aps.org/publications/apsnews/201003/physicshistory.cfm

Davis, D., Patronis, Jr., E. & Brown, P. Sound System Engineering. (2013). 4th ed. Focal Press.

Everest, F.A. & Pohlmann, K. (2015). Master Handbook of Acoustics. 6th ed. McGraw-Hill Education.

Giancoli, D.C. (2009). Physics for Scientists & Engineers with Modern Physics. Pearson Prentice Hall.

JDB Photography. (n.d.). [recording studio 2] [Photograph]. Creative Commons. https://live.staticflickr.com/7352/9725447152_8f79df5789_b.jpg

Sengpielaudio. (n.d.). Calculation: Speed of sound in humid air (Relative humidity). Sengpielaudio. http://www.sengpielaudio.com/calculator-airpressure.htm

Urban, M., Heil, C., & Bauman, P. (2003). Wavefront Sculpture Technology. [White paper]. Journal of the Audio Engineering Society, 51(10), 912-932.

https://www.aes.org/e-lib/browse.cfm?elib=12200

AES Design Competition – How to create an award-winning design webinar

The AES student design competition is an opportunity for you to showcase your hardware design abilities to the wider audio community. But what makes a prize-winning design? Does it have to be complicated, unique, contain software, neural networks, or other esoteric components?

The purpose of this session is to present some ideas about what makes a design prize-worthy and answer some of your questions about it.

We will focus on an audio design that won a national design award (UK) for its student inventor, along with a couple of other AES competition-winning ideas. Jamie Angus-Whiteoak will take the prize-winning design and break it down to show what it was about this design that made it “The best example of concurrent engineering that the judges had seen!”

From this, you will learn the basic principles behind design and see what really matters for a prize-winning one. Surprisingly, it might be simpler than you thought!

March 9 at 10 AM PST

Register and Post Your Questions Here

Moderated by Leslie Gaston-Bird

Leslie Gaston-Bird

Leslie Gaston-Bird (AMPS, M.P.S.E.) is the author of the book “Women in Audio”, part of the AES Presents series and published by Focal Press (Routledge). She is a voting member of the Recording Academy (The Grammys®). Currently, she is a freelance re-recording mixer and sound editor and owner of Mix Messiah Productions specializing in 5.1 mixing. Prior to that, she was a tenured Associate Professor of Recording Arts at the University of Colorado Denver (2005-2018) where she also served as Chair of the Department of Music and Entertainment Industry Studies. She led groups of Recording Arts students in study abroad courses in England, Germany, and Italy which included participation in AES Conventions. Leslie has done research on audio for planetariums, multichannel audio on Blu-Ray, and a comparison of multichannel codecs that was published in the AES Journal (Gaston, L. and Sanders, R. (2008), “Evaluation of HE-AAC, AC-3, and E-AC-3 Codecs”, Journal of the Audio Engineering Society of America, 56(3)).

Presenter

Jamie Angus-Whiteoak was a Professor of Audio Technology at Salford University. Her interest in audio was crystallized at age 11 when she visited the WOR studios in NYC on a school trip in 1967. After this, she was hooked and spent much of her free time studying audio, radio, synthesizers, and loudspeakers, and even managed to build some!

During secondary education in Scotland, she learned about valve (tube) circuits and repaired radios and televisions, as well as building loudspeakers from recovered components. Then at 17, she attended the University of Lethbridge in Alberta Canada. There, in addition to her studies in physics, music, computing, drama, philosophy, and English composition, she repaired their VCS3 synthesizer, and so obtained coveted access to the electronic music lab. She was also able to design her first proper loudspeakers and had a quadraphonic system in her 7’5” cube bedroom.

She studied electronics at the University of Kent (UK), doing her BSc and Ph.D. there from 1974 to 1980. During her undergraduate days, she designed and built a variety of audio circuits, added acoustic treatment to the student radio station’s studios, and designed and built an electronic piano. She also developed and built a complete AM stereo modulation system for her final-year project. During her Ph.D. study, she became interested in A/D conversion and worked on a sigma-delta approach, but had to give it up to concentrate on her thesis topic of designing a general-purpose digital signal processor with flexible arithmetic for finite field transforms. After her Ph.D. she joined Standard Telecommunications Laboratories, which invented optical fibres and PCM. There she worked on integrated optics, speech coding, speech synthesis, and recognition in the early 80s, and invented a novel 32 kbit/s speech coding method. She has been active in audio and acoustic research since then.

She was appointed as the BT Lecturer at the University of York in 1983, to develop the first integrated master’s (MEng) in Electronic and Communication Engineering in conjunction with British Telecom. She then co-created the UK’s first Music Technology course in 1986, when it was considered a “silly idea”! At York, she developed courses on electronic product design and co-authored a book on it. She also created and supervised many design-oriented final-year projects, and several of her students won national design awards for their work.

She is the inventor of modulated, wideband, and absorbing diffusers; direct processing of Super Audio CD signals; and one of the first 4-channel digital tape recorders. She has done work on signal processing, analogue circuits, and numerous other audio technology topics.

She has taught analogue circuit design, communications systems, computer architecture, microprogramming and logic design, design for testability and reliability, audio and video signal processing, digital signal processing, psychoacoustics, sound reproduction, electroacoustics, loudspeaker and microphone design,  studio design, and audio, speech, and video coding. She has co-written two textbooks and has authored, or co-authored over 200 journal and conference papers and 4 patents. She is currently investigating environmentally friendly audio technology.

She has been active in the AES for 30 years and has been the papers co-chair for conventions as well as a judge for the student project and Matlab competitions.

She has been awarded an AES fellowship, the IOA Peter Barnet Memorial Award, and the AES Silver Medal Award, for her contributions to audio and acoustics.

For relaxation she likes playing drums and dancing, but not at the same time.

 

Ask the Experts – Music Editors for Film & TV

 

There is a lot of music work that happens behind the scenes in the post-production of a film or TV show. Music editors can wear a lot of hats beyond just editing music tracks in a DAW – working with picture editors and directors/filmmakers to find the right musical mood for a scene, coordinating with music supervisors to find the perfect song, being a liaison between directors and composers, attending recording/scoring sessions, and attending the final mix on the dub stage.

Being a music editor takes a range of skills spanning music, audio/sound, TV/film, communication/interpersonal, and more. How do you get started as a music editor, and how do you make a career out of it? We will be exploring this and the questions you have for our experts about music editing for film and TV.

March 6, 2021 at 11 AM PST

Register and Post Your Questions

Moderated by April Tucker

April Tucker is a re-recording mixer for television and film in Los Angeles. She is a “Jane of all trades” in post-production sound and has worked in every role of the process from music editor to sound supervisor. She is currently writing a textbook for Routledge about career paths in the audio industry, and the skills needed to survive early in your career.

Panelists

Jillinda Palmer has a decade of experience as a music editor. Her credits include Deadwood: The Movie, Crazy Ex-Girlfriend, and Diary of a Future President (Disney+). Jillinda is also an experienced sound designer and dialog editor, singer/songwriter and performer.

Jillinda has received one Primetime Emmy Nomination, one Primetime Emmy Honor, and 2 Golden Reel Nominations for music editing. Working as a music editor enables Jillinda to apply her fundamental knowledge of music along with her editorial expertise to enhance her clients’ original intent.


Poppy Kavanagh has been operating within the music industry as a musician, music editor, DJ, and audio engineer. Poppy started out working for Ilan Eshkeri and Steve McLaughlin, where she learnt about the art of film music. She then sidestepped to work as an assistant engineer at Mark Knopfler’s British Grove Studios. It was there that she discovered the world of music editing. Poppy has worked with a wide range of artists including Ian Brown, Van Morrison, The Rolling Stones, Tim Wheeler, KT Tunstall, Ilan Eshkeri, and Steve Price. In 2019 Poppy was nominated for a Primetime Emmy Award for her music editing work on HBO’s Leaving Neverland.


Shari Johanson is a NYC-based music editor who has been working in the film and television industry for nearly 30 years. Most recently she collaborated with Robin Wright on her directorial debut film LAND. Some other directors she has worked with are Cary Fukunaga, Paul Schrader, Kevin Smith, and Milos Forman. Shari has worked with composers such as R.E.M., Howard Shore, Hans Zimmer, David Arnold, Carter Burwell, and most recently Jonathan Zalben on Disney+’s ON POINTE, as well as Time for Three and Ben Sollee on LAND.

Shari won a Motion Picture Sound Editors Golden Reel Award for Best Music Editing for her work on the film HIGH FIDELITY, and was also nominated for Showtime’s BILLIONS, as well as for the Netflix limited series MANIAC.

Additional credits include the Oscar- and Golden Globe-winning biopic I, TONYA; the HBO sensation BAD EDUCATION; the Emmy-winning hit series TRUE DETECTIVE S1; the Netflix original series MARCO POLO; as well as John Turturro’s THE JESUS ROLLS. Shari is about to embark on the continuation of BILLIONS S5. Full list of credits


Del Spiva is a multi Emmy-nominated music editor whose credits include The Defiant Ones, Genius, and A Quiet Place Part II. His upcoming film credits include Coming 2 America and Top Gun: Maverick. Prior to music editing, Del worked as an assistant sound editor for films.


Acoustemology?

During these last months of 2020, I started a master’s degree that has pleasantly surprised me. Although it seems unrelated to my professional work in audio, studying “cultural management” has introduced me to a new and exciting world, one more related to my interests than it seems.

But why do I want to talk about acoustemology and cultural management when I have been in sound engineering for almost 14 years, focusing only on the technical aspects? In some reading, I came across that term, acoustemology. At the time I did not know what it meant, but its etymological roots caught my attention.

Ethnomusicology and some branches of anthropology, in conjunction with acoustics, have already carried out studies of music, ecological acoustics, and soundscapes, helping to interpret sound waves as representations of collective relationships and social structures. Such is the case with the sound maps of different cities and countries, which reflect information on indigenous languages, music, urban areas, forest areas, etc. Some examples:

Mexican sounds through time and space: https://mapasonoro.cultura.gob.mx/

Sound Map of the Indigenous Languages of Peru: https://play.google.com/store/apps/details?id=com.mc.mapasonoro&hl=en_US&gl=US

Meeting point for the rest of the sound maps of the Spanish territory: https://www.mapasonoro.es/

The life that was, the one that is and the one that is collectively remembered in the Sound Map of Uruguay: http://www.mapasonoro.uy/

As Carlos de Hita says, our cultural development has been accompanied by soundscapes or soundtracks that include the voices of animals, the sound of wind, water, reverberation, temperature, echo, and distance.

But it is with the term acoustemology, which emerged in 1992 with Steven Feld, that the ideas converge of a soundscape perceived and interpreted by those who resonate with their bodies and lives in a social space and time. It attempts to argue an epistemological theory of how sound and sound experiences shape the different ways of being and knowing the world, and of our cultural realities.

But then another concept comes into play, perception. Perception is mediated by culture: the way we see, smell, or hear is not a free determination but rather the product of various factors that condition it (Polti 2014). Perception is what really determines the success of our work as audio professionals, so I would like to take a moment with this post to think over the following ideas and invite you to do it with me.

As professionals dedicated to the sound world, do we stop to think about the impact of our work on the cultures in which we are immersed? Do we worry about taking into account the culture in which we are immersed when doing an event? Or do we only develop our work in compliance with economic and technological guidelines instead of cultural ones?

When we plan an event, do we use what is really needed? Do we have a limit, or, to feed our ego, do we use everything that manufacturers sell us without stopping to think about the impact (economic, social, and environmental) that this planning has on the place where these events take place? Do we really care about what we want to transmit, or do we only care about making the audio sound as loud as possible or even louder? Do we stop to think what kind of amplification an event really requires, or do we just want to put up a lot of microphones and a lot of speakers – if it’s immersive sound, all the better – make it sound loud, and good luck if you understand anything? Do we care about what the audience really wants to hear? Are we aware of noise pollution, or do we just want the concert to be so loud that people can’t even hear their own thoughts?

Are we conscious of making recordings that reflect and preserve our own culture and that of the performer, or do we only care about obtaining awards at all costs? Have we already shared all the knowledge we have about audio or are we still competing to show that we know everything, that I am technically the best? Or is it time to humanize and put our practice as audio professionals in a cultural context?

I remember an anecdote from a colleague: after all the setup was done for a concert in a Mexican city (I do not remember the details), it was only after the blessing of the shamans and the approval of the gods that the event was possible.

Our work as audio professionals should be focused on dedicating ourselves to telling stories in more than acoustic terms, telling stories that bear witness to our sociocultural context and who we are.

“Beyond any consideration of an acoustic and/or physiological order, the ear belongs to a great extent to culture; it is above all a cultural organ” (García 2007).

References:

Bull, Michael; Back, Les. 2003. The Auditory Culture Reader. Oxford & New York: Berg.

De Hita, Carlos. 2020. Sound diary of a naturalist. We Learn Together BBVA. Spain Available at https://www.youtube.com/watch?v=RdFHyCPtrNE&list=WL&index=14

García, Miguel A. 2007. “The ears of the anthropologist. Pilagá music in the narratives of Enrique Palavecino and Alfred Metraux ”, Runa, 27: 49-68 and (2012) Ethnographies of the encounter. Knowledge and stories about other music. Anthropology Series. Buenos Aires: Ed. Del Sol.

Rice, Timothy. 2003. “Time, Place, and Metaphor in Music Experience and Ethnography.” Ethnomusicology 47 (2): 151-179.

Macchiarella, Ignazio. 2014. “Exploring micro-worlds of music meanings”. The thinking ear 2 (1). Available at http://ppct.caicyt.gov.ar/index.php/oidopensante.

Victoria Polti. 2014. Acustemología y reflexividad: aportes para un debate teórico-metodológico en etnomusicología. XI Congreso iaspmal • música y territorialidades: los sonidos de los lugares y sus contextos socioculturales. Brazil


Andrea Arenas is a sound engineer and her first approach to music was through percussion. She graduated with a degree in electronic engineering and has been dedicated to audio since 2006. More about Andrea on her website https://www.andreaarenas.com/
