Empowering the Next Generation of Women in Audio


Common Sound Editorial Mistakes That Can Become Big Mix Problems

As a mixer, I see all kinds of issues cropping up that originated in sound editorial. And with my background in sound editorial, I’ve surely committed every one of them myself at some point. Here’s a list of some common problems we see on the mix stage. Avoiding these problems will not only make your work easier to handle and more professionally presented, but it will also hopefully save you a snarky email or comment from a mixer!


Sound Effects With Baked In Processing

As soon as you commit to an EQ, reverb, or other processing choice with AudioSuite, your mixer’s hands are tied. Yes, you may be making a very creative choice; however, that choice cannot be undone, and processed editorial often simply has to be thrown out and recut to make it mixable.

But what if you just have to present your creative vision in this way, be it for a client review or to get an idea across? In that case, your best move is to copy and mute the sound clip. Place the copy directly below the one you plan to process so it can easily be unmuted and used; that way, your mixer has the option to work with the dry effect. Another alternative, if you’re dealing with EQ processing, is to use Clip Effects. Just be sure the mix stage downstream has a version of Pro Tools that supports this, or that information won’t be passed along.

[Image: processed clip with muted clip below]

How about if the sound has room on it, but you didn’t put it there? I’ve gotten handclaps that sound like they were recorded in a gymnasium cut in an intimate small scene. That’s just a bad sound choice and you need to find a better one.

Stereo Sound Effects Used for Center Channel Material in Surround Or Larger Mix Formats

Sound editors, especially those who work from home, do not often cut in a surround sound environment. The result of cutting in stereo for a surround (or larger-format) project is that you simply can’t hear how things will translate.

One of my big pet peeves is when center channel material – actions happening on screen, in the middle of the frame – is cut with stereo sound effects. When, say, a punch or a distant explosion cut in stereo is translated to the 5.1 mix, the result is a disconnect for the listener. Ultimately, we as mixers have to go into the panning and center both channels to get the proper directionality.

Now, it’s not an impossible problem to solve when working in stereo. Just avoid cutting sound effects in stereo unless they engulf the entire frame, provide ambience, or happen entirely outside of the picture. Your mixer will thank you for it.

Splitting Sound Effects Build Between Food Groups

We have written extensively on the idea of using “Food Groups” in your editorial to keep things organized (see links below).

The dark side of this, however, is that some editors can get carried away with these designations. The rule to remember here: anything that may need to be mixed together should stay together.

For example, if you have a series with lots of vehicles, it may seem to make sense to have a Car food group as well as a Tires food group. The Car group would get the engine sounds and the Tires group the textures, like gravel and skids. But when it comes time to mix, this extra bit of organization makes the job extremely difficult. If a car goes by from screen left to right, the mixer needs to pan and ramp the volume of those elements. If they all live together in one chunk of tracks, grouping them is an easy move. If they are split among food groups, the mixer has to hunt around for the proper sounds and then group across multiple food groups. It’s simply too cumbersome, and it takes the functionality of the VCA out of the picture. A solution, in this case, would be a single Vehicle food group that encompasses every aspect of the car that could require simultaneous mixing.

Layering Random Sounds Into Food Groups

Speaking of food groups and functionality, the whole point of a food group is to be able to control everything by using one fader (VCA). That functionality also becomes void if sounds not applicable to that group are dropped in.

For example, if we have an Ambience food group with babbling-brook steadies and a client wants all the “river sounds” turned down, the VCA for that food group makes it a snap. However, if an editor cuts the splashes of a character swimming into that same food group, it suddenly ruins the entire concept. True, splashing is water, but that misses the entire point of the food group.

Single sounds layered in with long ambiences render the VCA useless


Worse yet is when an editor simply places sounds in an already utilized food group because they ran out of room on other tracks. That only works as a solution for layout issues if you have an extra, empty food group.

Breaking Basic Rules In Order to Follow Another

There’s a basic hierarchy to the rules of sound editorial. Some rules you just can’t break, plain and simple, like crossfading two entirely different sound effects into one another. That’s a mixing nightmare, one that simply has to be reorganized before the job can be pulled off successfully. But sometimes these rules get broken with the best of intentions. I have two examples for you.

[Image: incorrect layout]

In this case, the editor ran out of space in a food group and opted to use this crossfade rather than break up the food group. It’s important to know the rules, but it’s even more important to know when to make an exception. Here there was a simple solution: move this one sound into the hard SFX tracks, or add a track to the food group (with permission from your supervisor or mixer), solving the issue without creating any new ones.


Here again, we have an editor with the best of intentions. An insect is on screen, moving in and out of frame from left to right. The editor thought that since the camera angle did not change, the second chunk of sounds did not warrant cutting on a different set of tracks. Once again, mixing this is impossible: with no time between the sound fading out and fading back in, there’s no magic way for the mixer to change an essential property, in this case the panning. A proper understanding of perspective cutting would have avoided this issue.

Over Color Coding

Using colors to code your editorial is another topic we’ve covered extensively (see links below).

While color-coding your work is immensely helpful, here too lies a potential issue.

Let’s say you have a sequence in a swimming pool. There are steady water-lapping sounds, swimming sounds, and big splashes from jumps off the diving board. An editor may see this and think: it’s all water, so I’m going to color all of these elements blue. But the purpose of the color code is to delineate clips from one another to speed up the mixing process. When an editor liberally color codes their work one color, you end up with no relevant information at all. In this case, each of these categories of sounds should be colored differently from one another so that it’s obvious they belong to different parts of the scene.

Poor Layout for Futz Materials

Materials that need special treatment, like sound effects coming from a television, need to stay clustered together within an unused food group, or at the very least on the same set of tracks. I like to have my futz clusters live on the bottom-most hard sound effects tracks, color-coding the regions the same to make the intention absolutely clear. This allows the mixer to very quickly and easily highlight the cluster and set a group treatment, like EQ. Think of it as temporarily dedicating some tracks to this purpose and stair-stepping your work around them; be careful not to intermingle non-futz material on those tracks for the duration of the treatment, which is equally problematic.

Why go to the effort? If you sprinkle these materials throughout your editorial, it becomes a game of hunting around for the mixer to find what needs futzing. Odds are your mixer will need to stop mixing and reorganize your entire layout to fix the problem and make it mixable.


Women and BIPOC Industry Directories

 

Never Famous

Bands, festivals, TV shows, traveling Broadway musicals, and other touring groups need competent and diverse personnel who perform their tasks with a high level of expertise and professionalism day-in and day-out. Touring personnel need a way to market their expertise and let their availability be known within the industry. Both groups need a way to broaden the scope of available jobs, resources, and candidates, and break out of the cycle of peer-to-peer referrals and word of mouth as the primary way to hire and get hired.

POC in Audio Directory

The directory features over 500 people of color who work in audio around the world. You’ll find editors, hosts, writers, producers, sound designers, engineers, project managers, musicians, reporters, and content strategists with varied experience from within the industry and in related fields.

While recruiting diverse candidates is a great first step, it’s not going to be enough if we want the industry to look and sound meaningfully different in the future. Let us be clear: this isn’t about numbers alone. This is about getting the respect that people of color—and people of different faiths, abilities, ages, socioeconomic statuses, educational backgrounds, gender identities, and sexual orientation—deserve.

Tour Collective

Helps artists hire a great crew for their tours, live shows, and virtual performances.

We believe that every crew person should continually have the opportunity to find jobs, advance their career, and ultimately create a better life for themselves and their families.

Here’s how it works:
1. Sign up at this link https://tourcollective.co/jointhecrew
2. Fill out the form
3. Keep an eye on your inbox. You’ll automatically get notified when jobs come in that match your profile. Then you can apply for the ones that you’re interested in.

Find Production People

Women in Lighting

Femnoise

A collective fighting for the reduction of the gender gap in the music industry. But we soon realized that the solution is not just activism. We have to go one step further: to connect and empower underrepresented individuals on a large scale, worldwide.

POC Theatre Designers and Techs

Wingspace

is committed to the cause of equity in the field.  There are significant barriers to accessing a career in theatrical design and we see inequalities of race, socioeconomic status, gender identity, sexual orientation and disability across the field.

Parity Productions

Fills creative roles on their productions with women and trans and gender nonconforming (TGNC) artists. In addition to producing their own work, they actively promote other theatre companies that follow their 50% hiring standard.

Production on Deck

Uplifting underrepresented communities in the arts. Their main goal is to curate a set of resources to help amplify the visibility of (primarily) People of Color in the arts.

She is the Music DataBase

Live Nation Urban’s Black Tour Directory

Music Industry Diversity and Inclusion Talent Directory

The F-List Directory of U.K. Musicians

FUTURE MUSIC INDUSTRY 

WOMEN/ NON-BINARY DJS/PRODUCERS

Entourage Pro

South America – Productores por país – Podcasteros

Women in Live Music DataBase

Women’s Audio Mission Hire Women Referrals

One Size Does Not Fit All in Acoustics

Have you ever stood outside when it has been snowing and noticed that it feels “quieter” than normal? Have you ever heard your sibling or housemate play music or talk in the room next to you and noticed that only the lower-frequency content makes it through the wall? People are better at perceptually understanding acoustics than we give ourselves credit for. In fact, our hearing and our ability to perceive where a sound is coming from are important to our survival, because we need to be able to tell if danger is approaching. Without necessarily thinking about it, we get a lot of information about the world around us from localization cues: our brain quickly analyzes the time offsets between direct and reflected sounds arriving at our ears and compares them with our visual cues.

Enter the entire world of psychoacoustics

Whenever I walk into a music venue during a morning walk-through, I try to bring my attention to the space around me: What am I hearing? How am I hearing it? How does that compare to the visual data I’m gathering about my surroundings? This clandestine, subjective information gathering is an important reality check on the data collected during the formal, objective measurement process of a system tuning. People spend entire lifetimes researching the field of acoustics, so instead of trying to give a “crash course” in acoustics, we are going to talk about some concepts to get you interested in behavior you have already spent your whole life learning from an experiential perspective without realizing it. I hope that by the end of reading this you will realize that the interactions of signals in the audible range are complex because the perspective changes depending on the relationships of frequency, wavelength, and phase between the signals.

The Magnitudes of Wavelength

Before we head down this rabbit hole, I want to point out that one of the biggest “Eureka!” moments in my audio education came when I truly understood what Jean-Baptiste Fourier discovered in 1807 [1] about the nature of complex waveforms: a complex waveform can be “broken down” into many component waves that, when recombined, recreate the original complex waveform. For example, the complex waveform of a human singing can be broken down into the many composite sine waves that add together to create the singer’s original waveform. I like to conceptualize the behavior of sound under the philosophical framework of Fourier’s discoveries. Instead of being overwhelmed by the complexities as you go further down the rabbit hole, I like to think that the more I learn, the more the complex waveform gets broken into its component sine waves.
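To make that idea concrete, here is a minimal sketch (assuming Python with NumPy; the two-tone test signal and the 48 kHz sample rate are arbitrary choices for illustration, not anything from the article) of a complex waveform being broken into its components with an FFT and then recombined:

```python
import numpy as np

# Build a "complex" waveform from two sine waves, break it apart with an FFT,
# then recombine the components to recover the original.
fs = 48000                       # sample rate in Hz (arbitrary choice)
t = np.arange(fs) / fs           # one second of sample times
complex_wave = 1.0 * np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 220 * t)

# Forward transform: the waveform expressed as its component frequencies.
spectrum = np.fft.rfft(complex_wave)
freqs = np.fft.rfftfreq(len(complex_wave), d=1 / fs)

# The two strongest bins should land at (roughly) 110 Hz and 220 Hz.
strongest = np.sort(freqs[np.argsort(np.abs(spectrum))[-2:]])
print("dominant components:", strongest, "Hz")

# Inverse transform: recombining the components reconstructs the waveform.
reconstructed = np.fft.irfft(spectrum, n=len(complex_wave))
print("max reconstruction error:", np.max(np.abs(complex_wave - reconstructed)))
```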

Conceptualizing sound field behavior is frequency-dependent

 

One of the most fundamental quandaries in analyzing the behavior of sound propagation is that the wavelengths we work with in the audible frequency range vary by orders of magnitude. We generally take the audible range of human hearing to be 20 cycles per second (20 Hz) to 20,000 cycles per second (20 kHz), though it varies with age and other factors such as hearing damage. Now recall the basic formula for determining wavelength at a given frequency:

Wavelength (in feet or meters) = speed of sound (in feet per second or meters per second) / frequency (in Hertz). You must use matching units for wavelength and the speed of sound, i.e., meters and meters per second.

So let’s look at some numbers given specific parameters for the speed of sound, since we know that the speed of sound varies with factors such as altitude, temperature, and humidity. The speed of sound at “average sea level” (roughly 1 atmosphere, or 101.3 kilopascals [2]), at 68 degrees Fahrenheit (20 degrees Celsius), and at 0% humidity is approximately 343 meters per second, or approximately 1,125 feet per second [3]. There is a great calculator online at sengpielaudio.com if you don’t want to calculate this manually [3]. So if we use the formula above to calculate the wavelengths of 20 Hz and 20 kHz with this value for the speed of sound, we get (using Imperial units because I live in the United States):

Wavelength of 20 Hz = 1,125 ft/s ÷ 20 Hz = 56.25 feet

Wavelength of 20 kHz (20,000 Hz) = 1,125 ft/s ÷ 20,000 Hz = 0.0563 feet, or 0.675 inches
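If you’d rather not punch these into a calculator, here is a small sketch of the same formula (assuming Python; the 1,125 ft/s figure and the 20 Hz / 20 kHz endpoints come from the text above, and the 1 kHz row is just an extra illustration):

```python
SPEED_OF_SOUND_FT_S = 1125.0    # approx. value at sea level, 68 °F, 0% humidity

def wavelength_ft(frequency_hz: float) -> float:
    """Wavelength in feet for a given frequency in Hz."""
    return SPEED_OF_SOUND_FT_S / frequency_hz

for f in (20, 1000, 20000):
    wl = wavelength_ft(f)
    print(f"{f:>6} Hz -> {wl:8.3f} ft ({wl * 12:6.2f} in)")
# 20 Hz is roughly building-sized (~56 ft); 20 kHz is roughly coin-sized (~0.68 in).
```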

This means that we are dealing with wavelengths that range from roughly the size of a penny to the size of a building. We see the same thing in a different way as we move up in octaves along the audible range from 20 Hz to 20 kHz: each octave band spans twice the frequency range of the one below it (there’s a short sketch of this doubling after the band list below).

32-63 Hz

63-125 Hz

125-250 Hz

250-500 Hz

500-1000 Hz

1000-2000 Hz

2000-4000 Hz

4000-8000 Hz

8000-16000 Hz

Look familiar??
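As a rough illustration of that doubling (a minimal sketch assuming Python; note that exact doubling from 32 Hz gives 64, 128, and so on, whereas the list above uses the rounded nominal band edges):

```python
# Each octave band spans twice the bandwidth of the one below it.
low = 32
while low < 16000:
    high = low * 2
    print(f"{low:>5}-{high:<5} Hz   bandwidth: {high - low:>5} Hz")
    low = high
```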

Unfortunately, what this means for us sound engineers is that there is no “catch-all” way of modeling the behavior of sound that applies to the entire audible spectrum. The size of objects and surfaces obstructing or interacting with sound may or may not create issues depending on their size relative to the wavelength of the frequency under scrutiny.

For example, take the practice of placing a measurement mic on top of a flat board to gather what is known as a “ground plane” measurement – say, placing the mic on a board and putting the board on top of the seats in a theater. This is a tactic I use primarily in highly reflective rooms to measure a loudspeaker system without the degradation from the room’s reflections, usually because I don’t have control over changing the acoustics of the room itself (think in-house, pre-installed PAs in a venue). The caveat to this method is that the board has to be at least a wavelength across at the lowest frequency of interest. So if you have a 4 ft x 4 ft board for your ground plane, the measurements are really only helpful from roughly 280 Hz and above (1,125 ft/s ÷ 4 ft ≈ 280 Hz, given the speed of sound discussed earlier). Below that frequency, the wavelengths of the signal under test are large relative to the board, so the benefits of the ground plane do not apply. The other option, to extend the usable range of the ground plane measurement, is to place the mic on the floor itself (like in an arena) so that the floor becomes an extension of the boundary.
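The same wavelength arithmetic gives a quick rule-of-thumb check for any board size (a sketch assuming Python; the 4 ft case matches the example above, and the 8 ft case is hypothetical):

```python
SPEED_OF_SOUND_FT_S = 1125.0

def lowest_usable_freq_hz(board_size_ft: float) -> float:
    """Frequency whose wavelength just fits across the board."""
    return SPEED_OF_SOUND_FT_S / board_size_ft

for size in (4, 8):
    print(f"{size} ft board: ground-plane data usable above ~{lowest_usable_freq_hz(size):.0f} Hz")
```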

Free Field vs. Reverberant Field:

When we start talking about the behavior of sound, it’s very important to make a distinction about what type of sound field we are observing, modeling, and/or analyzing. And if that isn’t confusing enough, depending on the scenario, the sound field behavior will change with the frequency range under scrutiny. Most loudspeaker prediction software works from calculations based on measurements of the loudspeaker in the free field. To conceptualize how sound operates in the free field, imagine a single point-source loudspeaker floating high above the ground, outside, with no obstructions in sight. Based on the directivity index of the loudspeaker, the sound intensity will propagate outward from the origin according to the inverse square law. We must remember that the directivity index is frequency-dependent, which means we must look at this behavior as frequency-dependent too. As a refresher, this spherical radiation of sound intensity from a point source results in a 6 dB loss per doubling of distance. As seen in Figure A, sound radiates omnidirectionally as a sphere outward from the origin in the free field, so at radius “r” the spherical surface the energy spreads over grows by a factor of r^2, and the intensity falls off accordingly.

Figure A. A point source in the free field radiates spherically; per the inverse square law, sound intensity drops 6 dB per doubling of distance
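Here is a minimal sketch of that 6-dB-per-doubling relationship (assuming Python; the 1 m reference distance and the specific distances are arbitrary, and real loudspeakers indoors will deviate from this ideal free-field behavior):

```python
import math

# Inverse square law for a point source in the free field:
# level change relative to a reference distance is -20 * log10(d / d_ref),
# which works out to about -6 dB per doubling of distance.
def level_change_db(ref_distance_m: float, distance_m: float) -> float:
    return -20 * math.log10(distance_m / ref_distance_m)

for d in (1, 2, 4, 8, 16):
    print(f"{d:>2} m: {level_change_db(1, d):6.1f} dB relative to 1 m")
```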

 

The inverse square law applies to point-source behavior in the free field, yet things grow more complex when we start talking about line sources and Fresnel zones. The relationship between point-source and line-source behavior changes depending on whether we observe the source in the near field or the far field, since a directional source becomes a point source when observed in the far field. Line-source behavior could fill an entire blog or book of its own, so for the sake of brevity I will redirect you to the Audio Engineering Society white papers on the subject, such as the 2003 white paper “Wavefront Sculpture Technology” by Christian Heil, Marcel Urban, and Paul Bauman [4].

Free-field behavior, by definition, does not take into account the acoustical properties of the venue the speakers sit in. Free-field conditions exist pretty much only outdoors in an open area. The free field does, however, make speaker interactions easier to predict, especially when we have known direct (on-axis) and off-axis measurements comprising the loudspeaker’s polar data. Since loudspeaker manufacturers have this high-resolution polar data for their speakers, they can predict how elements will interact with one another in the free field. The only problem is that anyone who has ever been inside a venue with a PA system knows we aren’t just listening to the direct field of the loudspeakers, even when a system has great audience coverage. We also listen to the energy returned from the room in the reverberant field.

As mentioned in the introduction to this blog, our hearing allows us to gather information about the environment we are in. Sound radiates in all directions, but it has directivity relative to the frequency range being considered and the dispersion pattern of the source. Now, if we take that imaginary point-source loudspeaker from our earlier example and listen to it in a small room, we will hear not only the direct sound coming from the loudspeaker to our ears but also the reflections of the loudspeaker bouncing off the walls and arriving at our ears delayed by some offset in time. Direct sound often correlates with something we see visually, like hearing the on-axis, direct signal from a loudspeaker. Reflections, since they result from sound bouncing off other surfaces before arriving at our ears, do not contribute to the direct field; instead they add to the reverberant field, which helps us perceive spatial information about the room we are in.

 

Signals arriving at our ears on an unobstructed path are perceived as direct arrivals, whereas signals bouncing off a surface and arriving with some offset in time are reflections

 

Our ears are like little microphones that send aural information to our brain. Ears vary from person to person in size, shape, and the distance between them, which gives everyone their own unique time and level offsets and, in turn, their own individual head-related transfer function (HRTF). Our brain combines the data of the direct and reflected signals to discern where a sound is coming from. The time offset between a reflected signal and the direct arrival determines whether our brain will perceive the signals as coming from one source or two distinct sources; this is known as the precedence effect or Haas effect. Sound System Engineering by Don Davis, Eugene Patronis, Jr., & Pat Brown (2013) notes that our brain integrates early reflections arriving within “35-50 ms” of the direct arrival as a single source. Once again, we must remember that this is an approximate value, since the actual timing is frequency-dependent. Late reflections that arrive beyond roughly 50 ms do not get integrated with the direct arrival and are instead perceived as separate sources [5]. When two signals have a large enough time offset between them, we start to perceive the two sources as echoes. Specular reflections can be particularly obnoxious because they arrive at our ears at a level or angle of incidence that can interfere with our perception of localized sources.
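As a rough sketch of the arithmetic behind this (assuming Python; the extra path lengths are hypothetical, and the 50 ms cutoff is only the approximate, frequency- and level-dependent window cited above from Davis et al.):

```python
SPEED_OF_SOUND_M_S = 343.0

def reflection_delay_ms(extra_path_m: float) -> float:
    """Delay of a reflection relative to the direct arrival, in milliseconds."""
    return extra_path_m / SPEED_OF_SOUND_M_S * 1000

for extra in (3, 10, 20, 30):
    delay = reflection_delay_ms(extra)
    verdict = "likely integrated with the direct sound" if delay <= 50 else "likely heard as a separate arrival"
    print(f"extra path {extra:>2} m -> {delay:5.1f} ms ({verdict})")
```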

Specular reflections act like reflections off a mirror bouncing back at the listener

 

Diffuse reflections, on the other hand, tend to lack localization and add more to the perception of “spaciousness” in the room, yet depending on frequency and level they can still degrade intelligibility. Whether certain reflections degrade or add to the original source is highly dependent on their relationship to the dimensions of the room.

 

Various acoustic diffusers and absorbers used to spread out reflections [6]

In the Master Handbook of Acoustics, F. Alton Everest and Ken C. Pohlmann (2015) illustrate how “the behavior of sound is greatly affected by the wavelength of the sound in comparison to the size of objects encountered” [7]. Everest & Pohlmann describe how the wavelength’s variation with frequency means that how we model sound behavior will vary in relation to the room dimensions. In smaller rooms there is a frequency range where the room dimensions are shorter than the wavelength, so the room cannot contribute boosts due to resonance effects [7]. Everest & Pohlmann note that when the wavelength becomes comparable to the room dimensions, we enter modal behavior. The top of this range marks the “cutoff frequency” up to which we describe the interactions using “wave acoustics”; as we progress into the higher frequencies of the audible range, we can model these short-wavelength interactions using ray behavior. The equations for estimating these ranges from room length, width, and height can be found in the Master Handbook of Acoustics. It’s important to note that while we haven’t explicitly discussed phase, its importance is implied, since it is a necessary component of understanding the relationship between signals. After all, the phase relationship between two copies of the same signal determines whether their interaction results in constructive or destructive interference. What Everest & Pohlmann are getting at is that how we model and predict sound field behavior changes based on wavelength, frequency, and room dimensions. It’s not as easy as applying one set of rules to the entire audible spectrum.
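As a rough illustration of where modal behavior starts (a sketch assuming Python; this uses the generic axial-mode relation f = n·c / 2L rather than the specific equations in Everest & Pohlmann, and the 6 m x 4 m x 3 m room is purely hypothetical):

```python
SPEED_OF_SOUND_M_S = 343.0

def axial_modes_hz(dimension_m: float, count: int = 5) -> list:
    """First few axial mode frequencies (f = n * c / 2L) for one room dimension."""
    return [n * SPEED_OF_SOUND_M_S / (2 * dimension_m) for n in range(1, count + 1)]

# Hypothetical 6 m x 4 m x 3 m room, purely for illustration.
for name, dim in (("length 6 m", 6.0), ("width 4 m", 4.0), ("height 3 m", 3.0)):
    modes = ", ".join(f"{f:.0f}" for f in axial_modes_hz(dim))
    print(f"{name}: first axial modes near {modes} Hz")
```

Below the lowest of these frequencies the room offers no resonant support; where they are sparse is the modal region discussed above.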

Just the Beginning

So we haven’t even begun to talk about the effects of surface properties such as absorption coefficients and RT60 times, and yet we already see the increasing complexity of the interactions between signals, simply because we are dealing with wavelengths that differ by orders of magnitude. In order to simplify predictions, most loudspeaker prediction software uses measurements gathered in the free field. Acoustic simulation software such as EASE does allow the user to factor in the properties of surfaces, but often we don’t have the information needed to account for things like the absorption coefficients of a material, unless someone gets paid to go take those measurements or the acoustician involved with the design has documented the decisions made during the venue’s design. Yet despite the simplifications needed to make prediction easier, we still carry one of the best tools for acoustical analysis with us every day: our ears. Our ability to perceive information about the space around us, based on interaural level and time differences between signals arriving at our ears, allows us to analyze the effects of room acoustics from experience alone. When looking at the complexity involved in acoustic analysis, it’s important to remember the pros and cons of our subjective and objective tools. Do the computer’s predictions make sense based on what I hear happening in the room around me? Measurement and analysis tools let us objectively identify problems, and their origins, that aren’t necessarily perceptible to our ears. Yet remembering to reality-check with our ears matters, because otherwise it’s easy to get lost in the rabbit hole of increasing complexity as we get deeper into our audio engineering. At the end of the day, our goal is to make the show sound “good”, whatever that means to you.

Endnotes:

[1] https://www.aps.org/publications/apsnews/201003/physicshistory.cfm

[2] (pg. 345) Giancoli, D.C. (2009). Physics for Scientists & Engineers with Modern Physics. Pearson Prentice Hall.

[3] http://www.sengpielaudio.com/calculator-airpressure.htm

[4] https://www.aes.org/e-lib/browse.cfm?elib=12200

[5] (pg. 454) Davis, D., Patronis, Jr., E. & Brown, P. Sound System Engineering. (2013). 4th ed. Focal Press.

[6] “recording studio 2” by JDB Sound Photography is licensed with CC BY-NC-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/2.0/

[7] (pg. 235) Everest, F.A. & Pohlmann, K. (2015). Master Handbook of Acoustics. 6th ed. McGraw-Hill Education.

Resources:

American Physical Society. (2010, March). This Month in Physics History: March 21, 1768: Birth of Jean-Baptiste Joseph Fourier. APS News. https://www.aps.org/publications/apsnews/201003/physicshistory.cfm

Davis, D., Patronis, Jr., E., & Brown, P. (2013). Sound System Engineering (4th ed.). Focal Press.

Everest, F.A., & Pohlmann, K. (2015). Master Handbook of Acoustics (6th ed.). McGraw-Hill Education.

Giancoli, D.C. (2009). Physics for Scientists & Engineers with Modern Physics. Pearson Prentice Hall.

JDB Sound Photography. (n.d.). [recording studio 2] [Photograph]. Creative Commons. https://live.staticflickr.com/7352/9725447152_8f79df5789_b.jpg

Sengpielaudio. (n.d.). Calculation: Speed of sound in humid air (Relative humidity). http://www.sengpielaudio.com/calculator-airpressure.htm

Urban, M., Heil, C., & Bauman, P. (2003). Wavefront Sculpture Technology [White paper]. Journal of the Audio Engineering Society, 51(10), 912-932. https://www.aes.org/e-lib/browse.cfm?elib=12200

AES Design Competition – How to create an award winning design webinar

The AES student design competition is an opportunity for you to showcase your hardware design abilities to the wider audio community. But what makes a prize-winning design? Does it have to be complicated, unique, contain software, neural networks, or other esoteric components?

The purpose of this session is to present some ideas about what makes a design prize-worthy and answer some of your questions about it.

We will focus on an audio design that won a national design award (UK) for its student inventor, along with a couple of other AES competition-winning ideas. Jamie Angus-Whiteoak will take the prize-winning design and break it down to show what it was about this design that made it “The best example of concurrent engineering that the judges had seen!”

From this, you will learn the basic principles behind design, and see what really matters for a prize-winning one. Surprisingly, it might be simpler than you thought!

March 9 at 10 AM PST

Register and Post Your Questions Here

Moderated by Leslie Gaston-Bird

Leslie Gaston-Bird

Leslie Gaston-Bird (AMPS, M.P.S.E.) is the author of the book “Women in Audio”, part of the AES Presents series and published by Focal Press (Routledge). She is a voting member of the Recording Academy (The Grammys®). Currently, she is a freelance re-recording mixer and sound editor and owner of Mix Messiah Productions, specializing in 5.1 mixing. Prior to that, she was a tenured Associate Professor of Recording Arts at the University of Colorado Denver (2005-2018), where she also served as Chair of the Department of Music and Entertainment Industry Studies. She led groups of Recording Arts students in study abroad courses in England, Germany, and Italy, which included participation in AES Conventions. Leslie has done research on audio for planetariums, multichannel audio on Blu-ray, and a comparison of multichannel codecs that was published in the AES Journal (Gaston, L. and Sanders, R. (2008), “Evaluation of HE-AAC, AC-3, and E-AC-3 Codecs”, Journal of the Audio Engineering Society, 56(3)).

Presenter

Jamie Angus-Whiteoak was a Professor of Audio Technology at Salford University. Her interest in audio was crystallized at age 11 when she visited the WOR studios in NYC on a school trip in 1967. After this, she was hooked and spent much of her free time studying audio, radio, synthesizers, and loudspeakers, and even managed to build some!

During secondary education in Scotland, she learned about valve (tube) circuits and repaired radios and televisions, as well as building loudspeakers from recovered components. Then at 17, she attended the University of Lethbridge in Alberta, Canada. There, in addition to her studies in physics, music, computing, drama, philosophy, and English composition, she repaired their VCS3 synthesizer and so obtained coveted access to the electronic music lab. She was also able to design her first proper loudspeakers and had a quadraphonic system in her 7’5” cube bedroom.

She studied electronics at the University of Kent (UK), doing her BSc and Ph.D. there from 1974 to 1980. During her undergraduate days, she designed and built a variety of audio circuits, added acoustic treatment to the student radio station’s studios, and designed and built an electronic piano. She also developed and built a complete AM stereo modulation system for her final-year project. During her Ph.D. study, she became interested in A/D conversion and worked on a sigma-delta approach, but had to give it up to concentrate on her thesis topic of designing a general-purpose digital signal processor with flexible arithmetic for finite field transforms. After her Ph.D. she joined Standard Telecommunications Laboratories, which invented optical fibres and PCM. There she worked on integrated optics, speech coding, speech synthesis, and recognition in the early 80s, and invented a novel 32 kbit/s speech coding method. She has been active in audio and acoustic research since then.

She was appointed as the BT Lecturer at the University of York in 1983, to develop the first integrated master’s (MEng) in Electronic and Communication Engineering in conjunction with British Telecom. She then co-created the UK’s first Music Technology course in 1986, when it was considered a “silly idea”! At York, she developed courses on electronic product design and co-authored a book on it. She also created and supervised many design-orientated final-year projects, and several of her students won national design awards for their work.

She is the inventor of modulated, wideband, and absorbing diffusers; direct processing of Super Audio CD signals; and one of the first 4-channel digital tape recorders. She has done work on signal processing, analogue circuits, and numerous other audio technology topics.

She has taught analogue circuit design, communications systems, computer architecture, microprogramming and logic design, design for testability and reliability, audio and video signal processing, digital signal processing, psychoacoustics, sound reproduction, electroacoustics, loudspeaker and microphone design, studio design, and audio, speech, and video coding. She has co-written two textbooks and has authored or co-authored over 200 journal and conference papers and 4 patents. She is currently investigating environmentally friendly audio technology.

She has been active in the AES for 30 years and has been papers co-chair for conventions as well as a judge for the student project and MATLAB competitions.

She has been awarded an AES fellowship, the IOA Peter Barnet Memorial Award, and the AES Silver Medal Award, for her contributions to audio and acoustics.

For relaxation she likes playing drums and dancing, but not at the same time.

 

Ask the Experts – Music Editors for Film & TV

 

There is a lot of music work that happens behind the scenes in post-production of a film or TV show. Music editors can wear a lot of hats beyond just editing music tracks in a DAW – working with picture editors and directors/filmmakers to find the right musical mood for a scene, coordinating with music supervisors to find the perfect song, acting as a liaison between directors and composers, attending recording/scoring sessions, and attending the final mix on the dub stage.

Being a music editor requires a range of skills spanning music, audio/sound, TV/film, communication/interpersonal work, and more. How do you get started as a music editor, and how do you make a career out of it? We will be exploring this and the questions you have for our experts about music editing for film and TV.

March 6, 2021 at 11 AM PST

Register and Post Your Questions

Moderated by April Tucker

April Tucker is a re-recording mixer for television and film in Los Angeles. She is a “Jane of all trades” in post-production sound and has worked in every role of the process from music editor to sound supervisor. She is currently writing a textbook for Routledge about career paths in the audio industry, and the skills needed to survive early in your career.

Panelists

Jillinda Palmer has a decade of experience as a music editor. Her credits include Deadwood: The Movie, Crazy Ex-Girlfriend, and Diary of a Future President (Disney+). Jillinda is also an experienced sound designer and dialog editor, singer/songwriter and performer.

Jillinda has received one Primetime Emmy Nomination, one Primetime Emmy Honor, and 2 Golden Reel Nominations for music editing. Working as a music editor enables Jillinda to apply her fundamental knowledge of music along with her editorial expertise to enhance her clients’ original intent.


Poppy Kavanagh has been operating within the music industry as a musician, music editor, DJ, and audio engineer. Poppy started out working for Ilan Eshkeri and Steve Mclaughlin where she learnt about the art of film music. She then sidestepped to work as an assistant engineer at Mark Knopfler’s British Grove Studios. It was there that she discovered the world of Music Editing. Poppy has worked with a wide range of artists including Ian Brown, Van Morrison, The Rolling Stones, Tim Wheeler, KT Tunstall, Ilan Eshkeri and Steve Price. In 2019 Poppy was nominated for a primetime Emmy award for her music editing work on HBO’s Leaving Neverland.


Shari Johanson is a NYC-based music editor who has been working in the film and television industry for nearly 30 years. Most recently she has collaborated with Robin Wright on her directorial debut film LAND. Some other directors she has worked with are Cary Fukunaga, Paul Schrader, Kevin Smith, and Milos Forman. Shari has worked with composers such as R.E.M., Howard Shore, Hans Zimmer, Dave Arnold, Carter Burwell, and most recently Jonathan Zalben on Disney+’s ON POINTE, as well as TIME FOR THREE and Ben Solee on LAND.

Shari won a Motion Picture Sound Editors Golden Reel Award for Best Music Editing for her work on the film HIGH FIDELITY, and was also nominated for Showtime’s BILLIONS, as well as for the Netflix limited series MANIAC.

Additional credits include the Oscar- and Golden Globe-winning biopic I, TONYA; the HBO sensation BAD EDUCATION; the Emmy-winning hit series TRUE DETECTIVE S1; the Netflix original series MARCO POLO; as well as John Turturro’s THE JESUS ROLLS. Shari is about to embark on the continuation of BILLIONS S5.


Del Spiva is a multi Emmy-nominated music editor whose credits include The Defiant Ones, Genius, and A Quiet Place Part II. His upcoming film credits include Coming 2 America and Top Gun: Maverick. Prior to music editing, Del worked as an assistant sound editor for films.

 

 

 

 

 

Acoustemology?

During these last months of 2020, I started a master’s degree that has pleasantly surprised me, and although it seems unrelated to my professional facet in audio, studying “cultural management” has led me to discover a new and exciting world, one more related to my interests than it seems.

But why do I want to talk about acoustemology and cultural management when I have spent almost 14 years in sound engineering focusing only on the technical aspects? In some reading, I came across that term, acoustemology. At the time I did not know what it meant, but its etymological roots caught my attention.

Ethnomusicology and some branches of anthropology, in conjunction with acoustics, have already produced studies of music, ecological acoustics, and soundscapes, helping us interpret sound waves as representations of collective relationships and social structures. Such is the case of the sound maps of different cities and countries, which gather information on indigenous languages, music, urban areas, forest areas, and so on. Some examples:

Mexican sounds through time and space: https://mapasonoro.cultura.gob.mx/

Sound Map of the Indigenous Languages of Peru: https://play.google.com/store/apps/details?id=com.mc.mapasonoro&hl=en_US&gl=US

Meeting point for the rest of the sound maps of the Spanish territory: https://www.mapasonoro.es/

The life that was, the one that is and the one that is collectively remembered in the Sound Map of Uruguay: http://www.mapasonoro.uy/

As Carlos de Hita says, our cultural development has been accompanied by soundscapes or soundtracks that include the voices of animals, the sound of wind, water, reverberation, temperature, echo, and distance.

But it is with the term acoustemology, which emerged in 1992 with Steven Feld, that these ideas converge: a soundscape perceived and interpreted by those who resonate, with their bodies and lives, in a social space and time. It attempts to argue an epistemological theory of how sound and sound experiences shape the different ways of being in and knowing the world, and our cultural realities.

But then another concept comes into play: perception. Perception is mediated by culture: the way we see, smell, or hear is not a free determination but rather the product of various factors that condition it (Polti 2014). Perception is what really determines the success of our work as audio professionals, so I would like to take a moment with this post to reflect on the following ideas, and I invite you to do it with me.

As professionals dedicated to the sound world, do we stop to think about the impact of our work on the cultures in which we are immersed? Do we worry about taking into account the culture in which we are immersed when doing an event? Or do we only develop our work in compliance with economic and technological guidelines instead of cultural ones?

When we plan an event, do we use what is really needed? Do we have a limit, or, to feed our ego, do we use everything the manufacturers sell us without stopping to think about the impact (economic, social, and environmental) that this planning has on the place where the event will take place? Do we really care about what we want to transmit, or do we only care about making the audio sound as loud as possible, or even louder? Do we stop to think about what kind of amplification an event really requires, or do we just want to put up a lot of microphones and a lot of speakers (the more immersive the better), make it sound loud, and good luck understanding anything? Do we care about what the audience really wants to hear? Are we aware of noise pollution, or do we just want the concert to be so loud that people can’t even hear their own thoughts?

Are we conscious of making recordings that reflect and preserve our own culture and that of the performer, or do we only care about obtaining awards at all costs? Have we already shared all the knowledge we have about audio or are we still competing to show that we know everything, that I am technically the best? Or is it time to humanize and put our practice as audio professionals in a cultural context?

I remember an anecdote from a colleague: after all the setup was done for a concert in a Mexican city (I don’t remember the details), it was only after the blessing of the shamans and the approval of the gods that the event could take place.

Our work as audio professionals should be focused on dedicating ourselves to telling stories in more than acoustic terms, telling stories that bear witness to our sociocultural context and who we are.

“Beyond any consideration of an acoustic and/or physiological order, the ear belongs to a great extent to culture; it is above all a cultural organ” (García 2007).

References:

Bull, Michael, & Back, Les (Eds.). 2003. The Auditory Culture Reader. Oxford and New York: Berg.

De Hita, Carlos. 2020. Sound Diary of a Naturalist. We Learn Together BBVA, Spain. Available at https://www.youtube.com/watch?v=RdFHyCPtrNE&list=WL&index=14

García, Miguel A. 2007. “The Ears of the Anthropologist: Pilagá Music in the Narratives of Enrique Palavecino and Alfred Metraux.” Runa, 27: 49-68; and (2012) Ethnographies of the Encounter: Knowledge and Stories about Other Music. Anthropology Series. Buenos Aires: Ed. Del Sol.

Rice, Timothy. 2003. “Time, Place, and Metaphor in Music Experience and Ethnography.” Ethnomusicology, 47(2): 151-179.

Macchiarella, Ignazio. 2014. “Exploring Micro-Worlds of Music Meanings.” El oído pensante (The Thinking Ear), 2(1). Available at http://ppct.caicyt.gov.ar/index.php/oidopensante

Polti, Victoria. 2014. Acustemología y reflexividad: aportes para un debate teórico-metodológico en etnomusicología. XI Congreso IASPM-AL: Música y territorialidades: los sonidos de los lugares y sus contextos socioculturales. Brazil.


Andrea Arenas is a sound engineer and her first approach to music was through percussion. She graduated with a degree in electronic engineering and has been dedicated to audio since 2006. More about Andrea on her website https://www.andreaarenas.com/

Headphones – How to Choose the Ideal Pair?

Choosing our ideal headphones is a topic that tends to trip us up, and we tend to fall back on the first direct recommendation that crosses our path, whether it comes from our friends, from experts in the field, or from the first thing that pops up online. That’s not necessarily bad, but how could they know exactly what we are looking for or, rather, what we are listening for?

The first question we must ask in order to find our perfect headphones is: what do I need them for? Because of their differing characteristics, not all headphones are optimal for the same functions.

The objective of this article is that, by the end of it, we will be able to identify and understand the specifications each pair presents, and thus have sufficient criteria to select the headphones we really need.

Let’s start by talking about their sizes …

Over-ear: These completely cover the ears and are, without a doubt, a favorite for recording studios and console monitoring. In addition to greater coverage, they offer the comfort to wear them for a long time. They are usually more robust headphones and, consequently, a little more expensive.

On-ear: This is a medium size that barely covers the diameter of our ears. They are more popular for listening to music on mobile devices, often with alterations in certain frequencies to make certain music more attractive to our senses, and they are also popular with DJs because they are easy to take off and put on during an event.

These are usually a little cheaper and made of lighter materials, but with some loss in sound quality, mainly if what we are looking for is a flatter sound.

In-ear: In-ear or intra-aural headphones are those that go inside the cavity of the ear. By small we do not mean lower quality; they come in many brands, styles, and prices (as all of these do). Because of their size, they isolate ambient sound much more, making them ideal for personal monitoring, or, in their less professional and much cheaper versions, for use with mobile devices.

Let’s continue with the technical characteristics.

Frequency: When we talk about frequency, we mean how wide the reproduction range of our prospective headphones is, that is, the frequency range they reproduce. Considering that, no matter how healthy a human ear is, it will not perceive much beyond 20 Hz to 20 kHz, there is not much to worry about here, since most headphones cover a range very close to this, or even one that surpasses it. If you’re wondering about that last part: some very demanding people want not only to hear those extra frequencies but to feel them in their bodies as well.

Sensitivity: In short, this is the ability of headphones to convert electrical signals into audible sound: the higher the value, the louder the headphones. It is measured in decibels, and a recommended value would be between 100 and 105 dB.

Impedance: Let’s understand this characteristic simply as the energy (electricity) our headphones need in order to produce sound. It is measured in ohms: the lower the impedance, the less energy they need to function, and the higher the impedance, the higher the demand. This is not to say that higher-impedance headphones have better sound, although it is true that operating at higher power makes them less prone to signal problems, interference, or unwanted noise. We must, of course, make sure the preamplifier we plan to plug our headphones into can drive their impedance. So, before purchasing headphones, check the impedance of your equipment: if your equipment works at low impedance, 30-ohm headphones will be perfect, and if your equipment can drive high impedances, headphones from 30 ohms upward will be ideal.
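As a rough sketch of how these two specs interact (assuming Python, and assuming a sensitivity rating in dB SPL per 1 mW of input power; some manufacturers rate per 1 V instead, so check the datasheet. The 102 dB/mW figure, the 0.5 V source, and the 32-ohm and 250-ohm models are purely hypothetical):

```python
import math

def spl_from_voltage(sensitivity_db_per_mw: float, impedance_ohm: float, volts_rms: float) -> float:
    """Approximate SPL for a given RMS drive voltage (sensitivity rated in dB SPL / 1 mW)."""
    power_mw = (volts_rms ** 2 / impedance_ohm) * 1000.0   # electrical power in milliwatts
    return sensitivity_db_per_mw + 10 * math.log10(power_mw)

# Hypothetical 102 dB/mW headphones driven with 0.5 V RMS at two impedances:
for z in (32, 250):
    print(f"{z:>3} ohm: ~{spl_from_voltage(102, z, 0.5):.1f} dB SPL at 0.5 V RMS")
```

The higher-impedance pair draws less power from the same voltage, so it plays quieter unless the source can supply more voltage.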

And before deciding, one last question…

Open or closed headphones?

Just when we feel that we have found our ideal headphones, the store technician asks us one more question, or we find it in the specification list: open or closed? And we go back to square one, but it really is not that complicated.

Let’s see: a semi-open or open headphone is one that lets part of the sound we are listening to escape. It is ideal for mixing, since it gives us a much more realistic sense of space; it does not encapsulate the sound.

On the other hand, closed headphones isolate us from ambient sound, concentrating the sound source in the headphones alone. They are recommended for live audio and also for the recording studio. But then why are closed headphones recommended here and yet not ideal for mixing? Let me explain.

The ideal is an open pair, as long as you are in a space without external noise and with proper acoustic treatment, since a closed pair will create a kind of reverb inside the earcup itself. But if you have a lot of external noise and an untreated room, a closed pair is a great option.

Headphones are the extension of our ears; we must try to make a successful and, above all, personalized choice.

So now, which headset are you going to buy? Don’t hesitate and go for it!


Maria Fernanda Medina, from Tegucigalpa, Honduras. I studied a BA in Acoustic Technology and Digital Sound at Galileo University in Guatemala City. I have mainly worked in the live audio field as a freelancer and with audio rental companies, developing myself in the backline, stage manager, and production areas, both in international concerts and national festivals. Currently, my passion for audio and the social commitment I feel toward my country have guided me into dissemination and education, a facet that I explore and enjoy more every day.

 

 


 

Hit Like A Girl And SoundGirls Team Up To Promote Music Education And Expansion.

 

Hit Like A Girl and SoundGirls, leading organizations in the effort to grow the music community for girls, women, and non-binary and trans people, have announced a new collaboration that will add impact to Hit Like A Girl X, the 10th annual edition of the groundbreaking contest for women drummers and beatmakers. In addition to sharing SoundGirls’ educational content about tuning, miking, and recording acoustic and electronic drums on the HLAG-X website, SoundGirls members have been added to the judging panels for the HLAG-X drumset and beat-making categories. The group is also contributing lessons and seminars from SoundGirls mentors as contest prizes and awards.

“It is so important to be multi-versed in music production and we are excited to provide tools for drummers to learn how to record and produce their own music,” says SoundGirls co-founder Karrie Keyes. “With current events forcing many players and programmers to become sound engineers, we are happy to have the opportunity to work with Hit Like A Girl.”

Adds Hit Like A Girl Executive Director, David Levine, “Our organizations share a common goal to encourage a higher level of participation as well as a higher level of contribution to music by women. Providing the Hit Like A Girl community with additional knowledge and support from such a reliable source will only accelerate the process.”

SoundGirls was established in 2013 to provide a support network for women and marginalized people working in the professional audio industry and to assist those with a drive to be successful in audio. The organization supports girls, women, and marginalized groups working in professional audio and music production.

To learn more, visit www.hitlikeagirlcontest.com and www.soundgirls.org.

Hit Like A Girl® is a celebration of female drummers, percussionists and beatmakers. The organization was founded in 2012 by Phil Hood (Drum!), David Levine (Full Circle Management) and Mindy Abovitz-Monk (Tom Tom). Now in its 10th year, the Hit Like A Girl Contest has had more than 10,000 participants from 50 countries and has reached nearly 100,000,000 online impressions. Additional HLAG Directors include Louise King (Rhythm Magazine), Sarah Hagan (Marketing & Artist Relations), Danielle Thwaites (Beats By Girlz) and Diane Downs (Louisville Leopard Percussionists). HLAG sponsors include many of today’s leading drum, percussion, electronics, accessory and media companies while its judges include many of the most popular, most respected drummers on the planet.
