Ready for the Road?

 

I’ve been on the road for the better part of a decade, so I’ll easily admit that I’m biased in favor of tour life, but it’s fascinating to hear what other people think my work is like. Mostly they see the glamour of a life that some only dream of being paid to travel across the country or even the world. They’re less enamored when they hear what my work schedule actually entails and that I’m not some carefree nomad having adventures and playing pretend every night. Still, I bet most would give it a go if they ever got the chance.

So what does the reality of touring look like? Well, let's start with the least appealing side of it and get that out of the way.

Time and Stress

Since tours only make money when they’re actively on the road, the ideal is to be booked constantly. Most shows have a few weeks scattered throughout the schedule that aren’t booked and the actors, musicians, and crew are laid off. To a 9-5 worker, “layoff” is a horrible word, but on tour, it’s synonymous with a scheduled, short, unpaid vacation, and you’re still working 45-50 weeks out of the year. However, that means there’s limited time off to see friends and family back home or just to recharge, and it can be difficult to get time off for events like weddings, graduations, or even family emergencies.

Then you have your day-to-day work schedule. On a whim, I calculated how many days I've had off in an average year on tour, counting only days not in the theatre, not traveling to the next venue, with nothing work-related. My average was 70-75 days off per year over seven years on the road. To put that in 9-5 terms: just counting weekends, two days off a week multiplied by 52 weeks, most people get 104 days off in a year, before even looking at holidays or vacation time.
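For the numerically inclined, that comparison works out like this (a quick back-of-the-envelope sketch in Python, using only the figures above):

```python
# Back-of-the-envelope comparison of days off per year,
# using the figures from this article.
weeks_per_year = 52

# A typical 9-5 job: two weekend days off per week,
# before holidays or vacation time.
office_days_off = 2 * weeks_per_year

# My touring average over seven years: midpoint of 70-75.
touring_days_off = (70 + 75) / 2

print(f"9-5 days off per year:     {office_days_off}")
print(f"Touring days off per year: {touring_days_off:.1f}")
print(f"Shortfall on tour:         {office_days_off - touring_days_off:.1f}")
```

In other words, a touring year gives you roughly a month's worth of fewer days off than an office job's weekends alone.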

(Touring data based on my 2019 year on tour with Miss Saigon, then Mean Girls.)

Plus, a 40-hour workweek is the norm in a 9-5 job, but on the road, you're looking at anywhere from a 60- to 80-hour workweek depending on how often you have to load in and out.

Moving on to stress.

Some days tour feels like holding 10 pounds of crazy and staring at a 5-pound bag, trying to formulate a plan that gets everything in. Every show and every venue has its quirks, and your job is to figure out how to work with or around them. Sometimes it's easy: in Cleveland, there's only space for the actual show deck onstage, so the local crew knows that amp racks typically go in an alcove in the house. Other times it takes some finagling: in DC, the Les Mis speaker towers weighed about 3,000 lbs altogether, but the structure the motor was attached to could only support 2,000 lbs, so I worked out a way to build most of the tower, then slide the rest into place, so we never exceeded the weight limit and still kept most of the build on the motor instead of overtaxing our manpower.
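The arithmetic behind that DC example can be sketched out like this (purely an illustration of the numbers in the story, not rigging guidance; real decisions like this come down to rated capacities and a qualified rigger):

```python
# Weight split for the DC speaker-tower example:
# the full tower weighed about 3000 lbs, but the structure
# the motor hung from was rated for only 2000 lbs.
tower_total_lbs = 3000
motor_point_limit_lbs = 2000

# Anything over the point's rating has to be built separately
# and slid into place by hand rather than hung on the motor.
slide_in_lbs = max(0, tower_total_lbs - motor_point_limit_lbs)
built_on_motor_lbs = tower_total_lbs - slide_in_lbs

print(f"Built on the motor: {built_on_motor_lbs} lbs")
print(f"Slid into place:    {slide_in_lbs} lbs")
```

The point of the split is to keep as much of the build on the motor as the structure allows, so the crew only has to muscle the overage into place.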

But if you think that sounds stressful, those were the times when things went pretty well and we were able to come up with a solution that still accomplished the design. There are times you simply can't do what you've planned: in Hartford, we had to get a mid-load-in delivery of truss when the measurements we'd had for the rigging points turned out to be wrong. We found out partway into the day that the points were simply too far apart to safely fly the smaller truss we carried. Or something malfunctions right before the show is ready to start, and you have a stage manager watching you, giving play-by-play commentary to the SM at the call desk as you attempt to suss out the problem, knowing the curtain is waiting on your troubleshooting skills.

These stressors can take a toll on your mental and emotional well-being, which affects your physical health. Fast and unhealthy food is much easier to access on the road, and the post-show default is to head to the nearest bar with your crew to unwind from the day and socialize. As an introvert, I had to learn to pay better attention to what I needed socially: some days it was respecting my need to relax, other times it was noticing that I’d lacked social interaction and, despite the habitual ease of just heading back to the hotel, I’d actually prefer to be out with the crew.

Mostly what it comes down to is fatigue. It takes a concerted effort to take care of yourself on the road: finding or choosing healthy foods, making time to exercise, checking in with yourself. Sometimes you don’t have the energy to deal with that after a long day of work, and your well-being falls to the wayside.

All that being said, touring sounds really appealing, right? Well, let's take a look at what's kept me on the road for so long.

Experience

One huge benefit is experience. That same stress that fell into the con column has equal footing on the pro side, by virtue of the adage "what doesn't kill you makes you stronger." Every load-in and load-out, you're handed new challenges to solve, and by sheer repetition you learn how to analyze situations faster and build a log of potential fixes you've tried before.

Plus, it’s all hands-on practice. You can talk about the theory as much as you want, but it will never be as beneficial as putting a contingency plan into action.

Along with problem-solving, you also (hopefully) gain people skills: just like analyzing situations, you also learn how to read people. Part of your job is learning if you can hand a project off to the house head and let them direct the crew, or if you’ll have to check in constantly to make sure it gets done. It’s noticing someone who’s willing to work, but is new and needs detailed directions, yet is too nervous to say they don’t understand. There are times you have to light a (figurative) fire to get a languorous crew moving, but others where you can joke and enjoy chatting and they’ll still get the job done.

The Pay

A large appeal of touring is the money. On the road, the company provides accommodations or a per diem for food and housing, so the majority of your survival expenses are taken care of. That frees up most of your salary to pay down credit card debt, mortgages, or student loans, while still leaving some money to save or use for a guilt-free splurge. Personally, having the opportunity to up my savings percentage paved the way for me to discover the financial independence community, which is worth exploring no matter where you are in your financial journey. (Check out this list of FI blogs, or two of my favorites: JL Collins or Afford Anything.)

The People

Last, but absolutely not least, are the people. Your crew and coworkers become family. Often boisterous and sometimes dysfunctional, you'll find some of your best lifelong friends on the road. When you're together day in and day out, you help each other solve problems, pull off incredible under-the-wire show saves, or make it through a crappy day that you can laugh about afterward. Stagehands are the best people I know at taking lemons and turning them into an epic comedy of errors, and there are always new stories to catch up on whenever you end up in the same city again.

Touring is life where the amp is always turned up to 11.

The lows are confidence-shattering and lonely, but the highs are soul-affirming and leave you with the feeling that there’s nowhere else you’d rather be.

I'm the first to tell anyone that they should absolutely tour if they have any desire to do it, but I'm also the first to say that it isn't for everyone. I've learned that I'm built to tour. Even when I wasn't sure if I was any good at sound, I still knew I loved touring: stressful situations are puzzles to solve, and most days I thrive on the challenge, plus my family has always been understanding that I have very tight constraints on my schedule. The pros of touring outweigh the cons by a mile for me; however, even I (and my knees) know that the day I look toward getting off the road isn't all that far down the line. For others, life on the road just isn't appealing from the get-go: I know people who are amazing at their jobs but hate the lifestyle, the stress, and the mental and physical toll it takes.

It’s always important to take stock of how you honestly feel and refrain from talking yourself into signing up for another tour if the cons outweigh the pros. It’s not worth making yourself (and everyone you work with) miserable if you hate your life day in and day out.

But if you do like it, pack those suitcases and get ready for an adventure. I know I wouldn’t trade the experience for anything!

 

10 things you need to be successful (and they’re all free!)

We have made it to June! Six months into 2021, halfway through the post-pandemic year. Things are looking a little brighter, shows are starting to get booked, and calls are coming in for work. You might be thinking about getting back on track with finding yourself a job on tour. In my book, I write about the 10 qualities or attributes you need to be successful. Let's take a look at them.

Being on time

This is huge. You need to respect everyone's time on the tour. If 10 other people have sacrificed sleep, a coffee, a workout, or whatever else to make sure they're on time for lobby call, then you'd better make sure you're on time too! Oh, and on time is late: make sure you're there 15 minutes before you're meant to be. The bus WILL leave without you!

Work ethic

If someone doesn’t want to be on a tour or doesn’t want to be part of a team, they won’t last long. If you have a strong work ethic and make yourself indispensable, you’ll have a long career.

Effort

Make sure you are putting some effort in; try a little harder, and it'll get noticed. Also see point 9.

Body Language

Whether we like it or not, we all judge and are judged on how we look or stand, whether it's a quick first impression or someone you've worked with for a long time. Your body language is totally within your control, and it can change the attitude of the room and the people around you, which in turn will make a more pleasant experience for you too.

Energy

Touring can be very tiring, and as with the point above, it is easy to slip into a negative mindset here and there. If you aim to bring the highest energy every day, you can pick someone else up, which is a win all around.

Attitude

It may be a cliché, but a positive mental attitude will get you very far in life. It's difficult out there, don't get me wrong, but we can try to improve our mental state with things such as meditation, working out, or just making sure we get enough sleep. Then we can tackle each day with the best attitude.

Passion

It’s the reason why we’re here. We love what we do. If you stop loving it, maybe try a different path, a different job on tour, but always be passionate about what you do. As the saying goes “If you do what you love, you’ll never work a day in your life”.

Being coachable

Even having a lifetime of experience doesn’t mean you know everything. Be open to learning from others.

Doing extra/going the extra mile

This will always get noticed and come back to you down the road. Remember why you’re doing your job, remember the sacrifices that got you to where you are now. Keep working harder and pushing harder and you will reap the rewards.

Being prepared

As they say, "By failing to prepare, you are preparing to fail." Know what you are doing, where you're supposed to be, and what's happening tomorrow and in the week ahead. Be on top of things. Carry a notepad, make notes, set reminders, whatever you need to do.

You see, you don't need to be an expert at your job to start with; you just need the right attitude. Arm yourself with these attributes and you'll do just fine.

To read more about breaking into the world of touring, check out my book on Amazon here: https://www.amazon.com/Girl-Road-Touring-Female-Perspective/dp/B084QGRKVW

Life in the Less-Than-5%

 

As hate against those who look like me has skyrocketed in the past year, and been largely ignored by the music industry, I've started to rethink my assumptions about how I can move through the audio world. If women make up 5% of sound engineers, then the percentage of women of color like me is even smaller. In my nine years in live sound, I have never crossed paths with another Asian-American sound tech, although I know we exist. The times that someone onstage has looked like me have been few and far between. I always thought I would have to be extra careful about my safety because of my gender, not because of my ethnicity. Clearly, that was naïve.

As strange as it feels to say, I am one of the "lucky" ones: nothing I've gone through has been bad enough to force me out of the industry. A friend of mine, who is also Chinese-American, had such a bad experience interning at Big Name Music Hall, with a boss and coworkers who constantly asked him incredibly invasive, weird, and racist questions, that he decided to stop pursuing a career in live sound altogether. I've experienced nothing so constant and pervasive. The worst environment I've been in was at my first and only training at a production company, whose manager went on a bizarre, semi-incoherent rant for several minutes about how racism doesn't exist and "the only racism" is green (money), triggered by a comment about the Papa John's pizza we were eating.

Most of the racism I've experienced comes in the form of the harassment most women face anyway, just with an extra racial component. The stereotype of Asian women as sex-hungry "dragon ladies" who exist only to serve white male pleasure is alive and well (just look at the coverage of the Atlanta shootings). So the assholes who aggressively hit on me and wouldn't take no for an answer might throw in a reference to anime, hentai, massages and happy endings, Japanese schoolgirls, or anything else that would make what they were saying that much more degrading. Another non-white friend and I found ourselves and our credentials excessively scrutinized at the few AES meetings we attended, compared to the other new faces there. The gatekeeping worked; I haven't gone back. Moving from clubs and bars, where often there is no one able (or willing) to back you up, into the more structured world of larger music venues, where the touring crew probably knows my coworkers and I am suddenly a friend-of-a-friend instead of a complete stranger, has helped cut a lot of this.

What never goes away are the offhand comments and assumptions. The negative ones are self-explanatory: assuming I don't speak English or learned it as a second language, pressing me about "where [I'm] really from" or asking "what are you," arguing with me about whether or not I am a different Asian sound engineer with whom you worked in a city I've never lived in, being asked to confirm my citizenship by someone literally holding my U.S. passport while filling out paperwork. Being called 'China doll,' having someone proudly explain to me how they can tell the different types of us Asians apart, as if that deserves my congratulations and gratitude. The supposedly complimentary ones, often based on stereotypes like the model minority myth, are equally gross: saying that they're glad I'm Asian because I'll work harder, or assuming I can do a quick calculation on the spot because I'm Asian and therefore good at math. And of course, there is the classic 'Oh, I love your culture!', quickly followed by a bunch of half-baked, romanticized stereotypes that probably aren't even from the right country.

Overall, the biggest issue I've run into in my career is tokenism: being paraded or held up as a person of color as proof of diversity. It was particularly bad at my first job, where, as the only non-white sound engineer, I was constantly pressured to participate in the marketing campaigns, fundraising events, and tabling, and to basically become the face of the audio program. There was a hard push to show how diverse we were as an organization when we really were not. A single person cannot be diverse! I declined until I was eventually left alone, but it was extremely uncomfortable to go through, especially as a high schooler.

Recently it's resurfaced in a slightly different form. I have become the token woman/minority audio engineer success story to a white coworker of mine, whom I barely even know. This person has tagged me in social media posts about how inspirational it is to see a non-white woman in audio, and has privately sent me several long messages of solidarity and apology over inconsequential things the venue has done. Did I have to ask my venue to put STOP AAPI HATE up on the marquee? Yes. Was it painful or traumatizing that they didn't put it up automatically and I had to make that request? No. It was moderately annoying at best, and it's insulting to decide on my behalf that it was something deeply distressing. To continue doing so after I have explained that this is not the case is ridiculous. Removing my agency from the situation and operating under the assumption that it is the duty of white people to swoop in and save me is not 'being an ally'. It is unhelpful and infantilizing, and it paints me as so delicate that something as simple as requesting my venue speak out is a shattering ordeal.

Flattening me into a single dimension, whatever the intention, is not okay. It takes the complex, whole person that I am and reduces me to being defined solely by my race. It doesn't matter how much solidarity you claim to have if you can't see past the surface of my skin. Especially at work, the body that I am in should come second to what really matters: the fact that I am a great sound engineer.

 

This Show Must Go Off – Episode 3

 

Backstage On Broadway at The Bowery Ballroom

The event I want to talk you through today is another private rental. This was the venue's first video live stream since the pandemic began; generally, events like these are pretty rare for us, since all of our systems are set up for enjoying a great experience in the moment. Looking ahead to the future of events, live streaming is going to be a great tool to reach a large and diverse audience and promote accessibility to more patrons. We look forward to the future of these events and are excited to share all that we have learned from this broadcast.

To achieve the video broadcast our client was after, high-speed Internet service was required. In our case, a separate, dedicated Internet hardline was put in by a service provider, to be used exclusively for live streaming. The goal of this event was to broadcast approximately 60 minutes of a well-rehearsed "backstage on Broadway" experience involving show tunes sung by loveable Broadway stars. The streaming host was Chase Private Client, and this was a way for the company to give back to their customers in light of the Broadway and live events shutdown.

Chase hired a team of producers to achieve their vision. The producers then worked within a set budget, hired in the talent, subcontracted the video, audio, and lighting, and scouted the location (which is where we came in). The job of the venue was to provide a covid-safe location to film, assist with security, and supplement production needs. After the completion of the initial advance, it was determined that the house would provide power, use of our lighting rig (with a supervising LD), use of atmospherics (including our in-house haze machine), use of our monitoring systems (including an engineer), and use of our front of house desk for a broadcast mix. House was also in charge of fabricating a staircase for the stage. It was clear from the start that I would be stretched thin on this event, so I put a network of systems in place to ensure all departments, including myself, had the necessary support if things were not running on time or if any issues arose.

The producers of the event were fairly new to the world of Broadway and concert production. The more advances you go through, and the more time you spend on the job, the more you catch the nuance and know what questions to ask to get a clue as to what to expect. It was much easier for me to deal with each department and their needs separately, rather than having the producers act as a middleman. Each event is different in this way, and the same applies to tours and tour/production management. You do not need to know every detail of every aspect of the production or tour, but you do need to know what makes your job easy. PMs: have your designers and department heads type up riders, plots, and input/instrument lists that speak directly to those who need them. TMs: same thing; have all of your riders and show needs together before you hit the road. If questions come up, do not hesitate to ask the person the question is intended for. It is less important for you to know all the answers than it is for you to know who to get the answers from.

The Covid Compliance team on-site was incredible. The testing process, a little less so. This client had our team schedule virtual testing, with testing kits that were mailed to us and needed to be mailed back. For me, this was an easy process, but I recognize it is also problematic and prohibitive. All those being tested needed access to a personal electronic device capable of handling video conferencing, needed to be fluent in English, needed a permanent home address where they could receive mail, and needed to be able to access a UPS mailing point. I would not recommend this system unless you have pre-screened all employees and they feel comfortable testing this way. In-person testing at a fixed location near the venue or area of work is preferred, with language assistance available. Our venue is currently working closely with Spotlight Medical to ensure fair, effective, and accessible testing for all of our staff once we begin our own events.

The audio team was a broadcast engineer (incredibly talented, extremely intelligent, and a longtime lover of analog whom I greatly enjoyed working with) and an RF provider with a tech for all three days, who doubled as an A2. He was another great talent, who gave me a laugh as I watched 16 Shure RF mics go into foil containers typically used for leftovers, all in the name of Covid Compliance. House supplemented staff with a monitor engineer. In hindsight, a backline tech/stagehand would have been extremely helpful.

The lighting team was fantastic, and old pals. It consisted of a designer/programmer, a grip, and a lighting vendor with a tech for all three days. House supplemented with our LD, doubling as an electrical supervisor. One fixture was hung, and the rest were ground-supported on pipe and base in the balcony wings. The designer chose to use intelligent fixtures for all of his design and stay away from our incandescent/conventional lighting, which allowed him to control color, flicker, and pulse-width modulation correctly from the console.

Day One

The event slated two days of load-in, setup, and rehearsal, and one day for last touch-ups and the broadcast. My biggest concern was the analog desk and outboard: I feared that after 12 months of lying dormant, things would not work. At that point, my brain had forgotten the funny little nuances of the desk. The channel with the scratchy fader I needed to replace, the auxiliary buss that behaves funny, the gate that does not gate, etc. There is such a joy to analog, to touching buttons and faders and mixing with your hands, but analog gear requires a dedicated and consistent level of maintenance and care, which you can imagine becomes difficult when you are a sweaty, smoky, packed rock club that sees a different show every night and your responsibilities encompass more than audio. On our first day with the broadcast engineer, I was immediately put at ease. He saw the joy that I see in the desk and had the immense level of experience and knowledge to not only make it sound great but even open up and clean some faders. Prior to load-in, I had managed to clean all 1800 of the knobs on the desk but ran out of time for the faders. I welcomed the assist.

Lighting and video loaded in first, audio and backline followed after that. The first day ended with setting the backline, pinning the stage, and getting the bulk of the lighting programming complete. The video team was able to get set up and a majority of their cabling was completed.

Day Two

Consisted of even more programming and our first run-through of the performance. We were given a start-and-stop rehearsal and a direct cue-to-cue, which got us to about 75% show-ready.

Day Three

Started off with a bit of a hiccup on our end. I do my best to make sure our venue staff has all that they need to succeed at their jobs: resources, time, support, etc. Even still, I can forget that we have been out of practice for a year and are navigating new waters beyond the usual rock show. Our monitor engineer, unused to theatre-style cue-to-cue mixing and speaking on comms, found himself in the weeds. Unprepared to quickly and effectively use snapshots, he lost his work from the day before. The A2 and I quickly rallied in support of the monitor engineer to go through the program ahead of the talent and make sure the wedges were all dialed in, cue to cue. I am sure by the end of it our engineer had become a pro at snapshots. Unfortunately, it only reinforced his lack of interest in theatrical mixing.

This was another note to myself to hire effectively and hire people who are excited by the event itself, not just the mixing aspect, or the need for a paycheck. We spoke at length about his experience afterward, and throughout it all, he handled the situation calmly and with a great attitude. The last rehearsal before the broadcast was rock solid.

Come the broadcast, I was huddled with the A1 watching a display monitor, hoping there were no streaming hiccups or issues on our end. Sure enough, the show looked beautiful and I only wish I could have heard the mix! It has been 14+ months since I listened to a mix from an engineer I love, on the Midas desk. Nonetheless, it was extremely nice to work together and talk shop, as well as share our love of motorcycles. 

Loadout happened in record time: unfortunate for our dinner break, which we worked right past, but we were grateful and relieved that everyone made it out of the building safe and sound after a great few days of work. We started on stage, breaking down backline, audio, and lighting, before moving to the balcony and front of house. This schedule gave the video and communications team a chance to organize and break down, and left space for us to finish our loadout without interruption or breaking compliance.

As of now, we are still taking it easy at Bowery and remaining cautious about reopening. It is not yet beneficial to us, or to the health of our patrons, to open. We are going to continue to focus on some important upgrades, study the data, and figure out a way to make artists, staff, and patrons safe and excited to come back to music again. I cannot wait to share all of our new projects with you, so keep tuning in, and stay safe!



 

Alesia Hendley – From Live Sound to AV & IT

Make Audio Work for You – Don’t Work for Audio.

Alesia Hendley’s introduction to audio started with a traditional path. She learned about sound at her dad’s church in Connecticut and decided to study audio at a trade school in Texas after her family moved there. Even though the program was focused on music production, her career focus at the time was still live sound. 

Alesia recognized she needed to get her hands on live soundboards. “When I was in school, I got to work with SSL consoles and it was amazing, but I knew those boards weren’t at venues. I couldn’t walk into these places saying, ‘I’m an audio engineer, but I have no experience with the consoles you have.’” She kept note of boards she saw at local venues, searched the Guitar Center stores in the area to see what consoles they had, and got some hands-on time at the store. “One of my classmates actually had a job at a Guitar Center. I’d go in and get some work in. We were bouncing ideas off of each other and improving together.”

While in school, Alesia created a music label and a publishing company, but it was a major challenge to make a business out of it. "Everybody's doing their own thing. I was an audio engineer with nobody to record," she said. "I'm not making any money here. So what the hell am I gonna do?"

She tried a few avenues for freelance gigs, saying yes to everything (including sound for hotel events and church services). She applied for a part-time job opening at a multipurpose facility and got an interview, even though she needed full-time work. "The technical manager should have never gave me his card because I kept calling him and was like, 'I'll take anything. I'll take four hours a week. I'll do whatever.' That's how I started. They brought me on part-time, and I just kept building up the hours. They saw what I can do. Six months later, I was full-time."

The facility was a stadium, arena, and conference center for the school district's major events (such as plays, football games, proms, and graduations). She started seeing audio outside the "traditional" box she had learned it in. She explains, "When I started exploring the other components of AV, I found all of these spaces and verticals need audio. Even though it's not just me running front of house, I can still be a part of creating this overall experience, which is what I love about audio anyway. When you're behind that board at front of house and you're doing a gig, whether it's a band or a play, it's just a rush. So I wanted to feel that rush, no matter what part of the experience I was in."

Alesia recognized a major need related to audio: people who also understood IT and networks. “All these digital consoles – it’s all connected to a network. The network goes down, and nobody on our AV team knows how to fix it. We had to call the IT team of the school district, which was a language barrier because traditional IT doesn’t really like to play with our AV stuff. They don’t want the AV stuff on their network. So, the IT team had a learning curve as well.” She realized, “If I don’t learn networking, I’m going to be out of a job in this AV thing sooner or later.” Alesia took a risk, and applied to Access Network, a company she had been interested in for some time. “It’s basically an IT company, but everybody that works for this company is an AV person. They’ve been an integrator in some form or fashion.”

She landed a job. “What we do is we design networks for AV solutions. Everything lives on the network. About 85 to 90% of what we do is in the home because our clients are people who have home studios or have smart homes. The other 10% of what we do is on the commercial side, where you’re in those corporate environments, where there are Dante, Shure ceiling microphones. So it’s been very, very exciting to constantly pivot but let audio lead me through all of these different roles.”

She finds her company is welcoming to diversity. “Don’t get me wrong – I’m still surrounded by men because we’re in technology, but there’s more discussion of being diverse. They’re more open and more welcome, instead of you running into the knucklehead behind the console that doesn’t want to move aside because he’s front of house – he’s the sound guy.”

Alesia still has “traditional” audio in her life, including a podcast about a personal interest, digital signage. “I’m still creating. I host it, I create all the content for it, I do all the recording. Me and my team, we do the editing. We created the intros and outros. I still have a home studio, because now I can afford to invest in a home studio.”

On Pivoting out of Live Sound

It was a bunch of soul searching. It did take some time. I stayed in my facility job for an additional two and a half years. It takes time to really do that kind of soul searching and figuring out what is the next step to help you pivot.

Of course, I miss running front of house, but my pivot was for education. I needed to learn about a network. I didn’t want to just go join a random IT company, and they weren’t going to hire me because I have no IT background. I needed to get with a company that understands that IT needs to talk to the gear that I love. 

I started off with Dante. That was my first touchpoint with audio or AV on a network. When I transitioned to this company, the education continued to roll in. They paid for a lot of training. The education came within that package.

Bringing AV, IT and Audio Together

I’m a SoundGirl at heart. I love audio. I love everything about it. But what I realized is I had to look at the bigger scope of this experience that I loved creating. That led me up to the point where I’m at now, doing the IT side of this AV/audio lifestyle.

I have people ask me all the time, ‘Do you miss running front of house? You do nothing with audio now.’ You can look at it that way, or you can look at it: I’m the person who’s orchestrating the sound that people experience. They need the network that I’ve designed. Without it, it’s not going to work. So, it’s about perception. Change the perception of it and try to look at it in a different sense that’s more positive versus ‘I’m losing something.’

At the end of the day, you’re not pressing the physical console buttons, but you’re pressing the overall button. Without you, it doesn’t exist. That’s a huge button, like, the biggest button on the console. You’re still creating this experience. 

Honestly, so many people don’t even know this exists. I had to randomly find it. This is years and years of time being put in to do something that is different.

On how AV work is creative

My work is still creative because of the things that create the experience. It needs us. It doesn’t exist without us. Yes, I would love to be mixing for whoever my favorite artist is at the time or running front of house. That is more creative, and that is more goosebumpy. But my focus was, where’s the industry going? If we don’t learn how to pivot, then we’re stuck in these positions. I was an audio engineer with a studio background, but there was no money in that vertical for me. There’s nothing wrong with doing what you love, but you have to have a balance of creativity and money flow.

Advice

Make audio work for you – don’t work for audio.

We all have the odds against us. Don’t let it dumb down your greatness. Just realize that the odds are there, work three times harder, get gritty and find ways to freakin make it work. Keep knocking down the door as much as you can, like, keep kicking it open until something happens.

Don’t just latch on to the microphone or the soundboard. Explore what these things lead to, or what they create. There’s just so much opportunity. At one point, I felt like I didn’t fit into SoundGirls anymore. I’m not mixing music. I’m not doing this stuff, so maybe I’m not a sound girl. Then I was like, wait a second – You’re pulling the strings here. You’re doing sound, just in a different perspective, in a different way, and it works.

Figure out your little milestones. You can have a goal, but what are your realistic goals in between? And if you focus on that, you’ll find your career path and you’ll grow a lot faster – instead of stumbling into it like many of us have done.

None of this happens overnight. I hated load-ins and load-outs, but I don’t regret one bit of it. I got a little muscle on my arm. I learned how to hold my own. I learned from people who had been in the business for 20 years. That groundwork is what matters the most, so don’t run from it. Don’t be like, ‘I hate load-ins and load-outs.’ Stick with it for a while and see what happens.

More on Alesia

Alesia’s website: https://www.thesmoothfactor.com

Alesia’s Blog for SoundGirls 

Sound & Communications Articles

Alesia’s Podcast Interviews


 


What Is a FIR Filter?

The use of FIR filters (or finite impulse response filters) has grown in popularity in the live sound world as digital signal processing (DSP) for loudspeakers becomes more and more sophisticated. While not a new technology in itself, these filters provide a powerful tool for system optimization due to their linear phase properties. But what exactly do we mean by “finite impulse response,” and how do these filters work? In order to understand digital signal processing better, we are going to need to take a step back into our understanding of mathematics and levels of abstraction.

A (Very) Brief Intro To DSP

One of the reasons I find mathematics so awesome is because we are able to take values in the real or imaginary world and represent them symbolically or as variables in order to analyze them. We can use the number “2” to represent two physical oranges or apples. Similarly, we can take it up another level of abstraction by saying we have “x” amount of oranges or apples to represent a variable amount of said item. Let’s say we wanted to describe an increasing number of apples where, for each new index, we take the current index and add the sum of all the indices before it. For every positive integer n of apples, we can write this as the series:

S(n) = n + Σ(k = 1 to n−1) k

Where for each index of apples starting at 1, 2, 3, 4…and so on, we have the current index value n plus the sum of all the values before it. Ok, you might be asking yourself why we are talking about apples when we are supposed to be talking about FIR filters. Well, the reason is that digital signal processing can be represented using this series notation, and it makes things a lot easier than writing out the value for every single input into a filter. If we were to sample a sine wave like the one below, we could express the total number of samples over the period from t1 to t2 as the sum of all the samples over that given period.

In fact, as Lyons points out in Understanding Digital Signal Processing (2011), we can express the discrete-time sequence for a sine wave at frequency f (in Hertz) with the function x(n) = sin(2πf·nts), where n is the sample index and ts is the time between samples (in seconds). This equation allows us to translate each value of the sine wave, for example, the voltage of an electric signal, at a discrete moment in time into a value that can be plotted in digital form.
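To make the sampling idea concrete, here is a minimal Python sketch (the function name and parameters are my own, not Lyons’) that builds the discrete sequence x(n) = sin(2πf·nts):

```python
import math

def sample_sine(freq_hz, sample_rate_hz, num_samples):
    """Sample a sine wave of frequency freq_hz at sample_rate_hz,
    returning the discrete sequence x(n) = sin(2*pi*f*n*ts)."""
    ts = 1.0 / sample_rate_hz  # time between samples, in seconds
    return [math.sin(2 * math.pi * freq_hz * n * ts) for n in range(num_samples)]

# Eight samples of a 1 kHz sine taken at 8 kHz capture exactly one cycle
samples = sample_sine(1000, 8000, 8)
```

Each entry in `samples` is one discrete value of the waveform; nothing exists “in between” them, which is exactly the point made below.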

What our brain wants to do is draw lines between these values to create a continuous waveform, so it looks like the original continuous sine wave that we sampled. In fact, this is not possible because each of these samples is a discrete value and must be treated separately, as compared to an analog, continuous signal. Now, what if the waveform that we sampled wasn’t a perfect sine wave, but instead had peaks and transient values? FIR filters have the ability to “smooth out” these stray values while maintaining linear phase.

How It Works

The finite impulse response filter gets its name because its response to an impulse settles to zero after a finite number of samples – the output depends on only a finite number of input values. In Understanding Digital Signal Processing, Lyons uses a great analogy for how FIR filters average out summations: counting the number of cars crossing a bridge [2]. If you counted the number of cars crossing the bridge every minute and then took an average of the totals over the last five minutes, this averaging smooths out the outlying higher or lower counts to create a steadier value over time. FIR filters function similarly, taking each input sample, multiplying it by the filter’s coefficients, and summing the results at the filter’s output. Lyons shows how this can be described as a series, the convolution equation for a general “M-tap FIR filter” [3]:

y(n) = Σ(k = 0 to M−1) h(k)·x(n−k)

While this may look scary at first, remember from the discussion at the beginning of this blog that mathematical symbols package concepts into something more succinct for us to analyze. What this series says is that for every sample value x whose index is n−k, with k an integer from 0 up to M−1, we multiply its value by the coefficient h(k) and sum the results over the filter’s M taps. So here’s where things start to get interesting: the filter coefficients h(k) are the FIR filter’s impulse response. Without going too far down the rabbit hole of convolution and the different FIR window types used in filter design, let’s jump into the phase properties of these filters and then focus on their applications.
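The convolution equation can be sketched directly in Python. The helper below is illustrative, not any particular DSP library’s API; with all coefficients equal to 1/M it becomes exactly the moving-average filter from Lyons’ bridge analogy:

```python
def fir_filter(x, h):
    """Apply an M-tap FIR filter with coefficients h to input samples x:
    y(n) = sum over k of h(k) * x(n - k), treating samples before n=0 as 0."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(h):
            if n - k >= 0:
                acc += coeff * x[n - k]
        y.append(acc)
    return y

# A 5-tap moving-average filter: every coefficient is 1/5,
# mirroring the "cars over a bridge" averaging analogy.
h = [0.2] * 5
noisy = [10, 12, 8, 11, 9, 30, 10, 11]  # one outlier at 30
smoothed = fir_filter(noisy, h)
```

Running this, the outlier at 30 gets spread across five output samples instead of spiking the output, which is the “smoothing” behavior described above.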

The major advantage of the FIR filter over other filter types, such as the IIR (infinite impulse response) filter, lies in its linear-phase behavior: when the coefficients are symmetrical, the delay introduced into the signal is equal at all frequencies, so no phase shift is introduced at the output of the system. As Lyons points out, this relates to the group delay of the system:

When the group delay is constant, as it is over the passband of all FIR filters having symmetrical coefficients, all frequency components of the filter input signal are delayed by an equal amount of time […] before they reach the filter’s output. This means that no phase distortion is induced in the filter’s desired output signal […] [4]

It is well known that phase shift, especially when it differs between frequency ranges, can cause detrimental constructive and/or destructive interference between two signals. Having a filter at your disposal that allows gain and attenuation without introducing phase shift has significant advantages, especially when used to optimize the frequency response between zones of loudspeaker cabinets in line arrays. So now that we have talked about what a FIR filter is and its benefits, let’s discuss a case for the application of FIR filters.

Applications of FIR filters

Before sophisticated DSP and processors were so readily available, a common tactic for handling multiway sound systems, particularly line arrays with problematic high frequencies, was to go up to the amplifier of the offending zone of boxes and physically turn down the amplifier running the HF drivers. I’m not going to argue against doing what you have to do to save people’s ears in dire situations, but the problem with this method is that when you change the gain of the HF amplifier in a multiway loudspeaker, you effectively change the crossover point as well. One of our goals in optimizing a sound system is to maintain the isophasic response of the array throughout all the elements and zones of the system. By using FIR filters to adjust the frequency response of a system, we can make adjustments and “smooth out” the summation effects of the interelement angles between loudspeaker cabinets without introducing phase shift between zones of our line array.

Remember the example Lyons gave comparing the averaging effects of FIR filters to averaging the number of cars crossing a bridge? Now instead of cars, imagine we are trying to “average” out the outlier values for a given frequency band in the high-frequency range of different zones in our line array. These variances are due to the summation effects dependent on the interelement angles between cabinets. Figure A depicts a 16-box large-format line array with only the interelement angles between boxes optimized, modeled in L-Acoustics’ loudspeaker prediction software Soundvision.

Figure A

Each blue line represents a measurement of the frequency response along the coverage area of the array. Notice the high amount of variance in frequency response particularly above 8kHz between the boxes across the target audience area for each loudspeaker. Now when we use FIR filtering available in the amplifier controllers and implemented via Network Manager to smooth out these variances like in the car analogy, we get a smoother response closer to the target curve above 8kHz as seen in Figure B.

Figure B

In this example, FIR filtering allows us to essentially apply EQ to individual zones of boxes within the array without introducing a relative phase shift that would break the isophasic response of the entire array.

Unfortunately, there is still no such thing as a free lunch. What you win in phase coherence, you pay for in propagation time. That is why, sadly, FIR filters aren’t very practical for lower frequency ranges in live sound: the amount of delay they introduce at those frequencies is too great for real-time applications.

Conclusion

By taking discrete samples of a signal in time and representing them with series expressions, we are able to define filters in digital signal processing as manipulations of a function. Finite impulse response filters with symmetric coefficients are able to smooth out variances in the input signal due to the averaging nature of the filter’s summation. The added advantage is that this happens without introducing phase distortion, which makes the FIR filter a handy tool for optimizing zones of loudspeaker cabinets within a line array. Today, most professional loudspeaker manufacturers employ FIR filters to some degree in processing their point source, constant curvature, and variable curvature arrays. Whether the use of these filters creates a smoother-sounding frequency response is up to the user to decide.

Endnotes:

[1] (pg. 2) Lyons, R.G. (2011). Understanding Digital Signal Processing. 3rd ed. Prentice-Hall: Pearson Education.

[2] (pg. 170) Lyons, R.G. (2011). Understanding Digital Signal Processing. 3rd ed. Prentice-Hall: Pearson Education.

[3] (pg. 176) Lyons, R.G. (2011). Understanding Digital Signal Processing. 3rd ed. Prentice-Hall: Pearson Education.

[4] (pg. 211) Lyons, R.G. (2011). Understanding Digital Signal Processing. 3rd ed. Prentice-Hall: Pearson Education.

Resources:

John. M. (n.d.) Audio FIR Filtering: A Guide to Fundamental FIR Filter Concepts & Applications in Loudspeakers. Eclipse Audio. https://eclipseaudio.com/fir-filter-guide/

Lyons, R.G. (2011). Understanding Digital Signal Processing. 3rd ed. Prentice-Hall: Pearson Education.

Explaining Effects: Reverb

“Can I get some (more) reverb on my vocals, please?”

If I had a dollar for every time I’ve been asked that, I’d have… a lot of money. Reverb is one of the most-used audio effects, and with good reason, since natural reverb defines our perception of everyday sound. In fact, we are so used to hearing it that completely dry sounds can seem strange and jarring. It’s no wonder that everyone wants a bit of reverb on their vocals.

What we perceive as reverb is a combination of two things, called early reflections and late reflections. Early reflections are the first reflections of the source sound that make it back to our ear; they are the reflections that travel out, reflect off of something once, and head back. Late reflections are the reflections that spend time bouncing off of multiple surfaces before returning to our ear. Because we experience such a large number of reflections arriving at our ears so closely together, we do not hear them as individual, echoed copies – instead, we get the smooth sound of reverberation.

Analog Reverb

There are two main types of mechanical reverb systems: plate and spring. Plate reverb was one of the first to come along. It revolves around a large steel plate, roughly 4×8 feet, suspended in a frame with a speaker driver at one end and a microphone at the other. When the speaker driver vibrates the plate, the vibrations travel through the plate to the microphone, mimicking the way soundwaves travel through air. The tension of the plate controls the decay time – the tighter the plate, the longer the decay, as the energy of the vibrations takes longer to be absorbed. Additionally, dampers may be pressed against the plate to fine-tune the decay. Of course, the unwieldy size and design of plate reverb present some pretty significant logistical challenges. Aside from the amount of space needed, its microphone-based design means that any external noise is easily picked up, so keeping the unit isolated from noise is also essential. For these reasons, its use was relegated almost exclusively to studios. A famous example of plate reverb is the Pink Floyd album Dark Side of the Moon – plate reverb (specifically the EMT 140) is the only reverb used on that album.

Spring reverb, developed a little later, is much smaller, more portable, and what you will find built into most amplifiers today. Unlike plate reverb, it relies on electrical signals and does not need any speakers or microphones to function. Like plate reverb, it works by creating vibrations, but does so by sandwiching a spring between a transducer and a pickup. The transducer creates a vibration within the spring, which the pickup then converts into a signal. Spring reverb gained popularity as the defining sound of surf music, where you will find it used in copious amounts – any Dick Dale record, for example, is a good way to get familiar with how it sounds.

Digital Reverb

Like analog reverb, digital reverb can also be divided into two main categories: algorithmic and convolution. Most digital reverbs are algorithmic. Algorithmic reverbs require less processing power than their convolution-based counterparts, and most of the pre-stocked reverb plugins you’ll find in your DAW fall into this category. Algorithmic reverbs work by using delays and feedback loops on the samples of your audio file to mimic the early and late reflections that make up natural reverb, creating and defining the sound of a hypothetical room based on the parameters that you set. The early reflection component is created by sending the dry signal through several delay lines, which result in closely spaced copies of the original signal. Late reflections are then created by taking the already-generated early reflections and feeding them back through the algorithm repeatedly, re-applying the hypothetical room’s tonal qualities and resulting in additional delays.
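As a rough illustration of the delay-and-feedback idea (a minimal sketch, not the algorithm of any specific reverb plugin), a single feedback delay line in Python turns an impulse into a train of decaying, evenly spaced echoes – real algorithmic reverbs combine many of these in series and parallel:

```python
def feedback_comb(dry, delay_samples, feedback):
    """A single feedback delay line: each output sample adds back a
    scaled copy of the output from delay_samples earlier."""
    out = list(dry)
    for n in range(delay_samples, len(out)):
        out[n] += feedback * out[n - delay_samples]
    return out

# An impulse through one comb produces decaying, evenly spaced echoes
impulse = [1.0] + [0.0] * 9
tail = feedback_comb(impulse, 3, 0.5)
# tail is 1.0 at n=0, 0.5 at n=3, 0.25 at n=6, 0.125 at n=9
```

With realistic delay times and several slightly detuned combs, those discrete echoes blur into the smooth tail we hear as reverb.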

Convolution is the more complex method of creating digital reverb. It involves capturing the characteristics of a physical space, defining a mathematical function called an impulse response that can apply that space’s characteristic response to any input signal, and performing an operation called convolution to get the (wet) output. Essentially, you are using a mathematical model to define the reflective properties of a physical room and imprinting that room’s unique signature onto your digital sample. The entire process is based on measuring a room’s response to what is called an impulse, an acoustic trigger meant to engage the acoustics of the room. These are usually atonal sounds, such as a white noise blast or sine sweep. Microphones are used to register both the trigger sound and the resulting acoustic response. This audio is then fed into a convolution processor, which separates out the triggering sound and defines the room’s impulse response. With the impulse response obtained, the convolution processor can apply that room’s response to any input signal it receives, essentially multiplying the frequency spectra of the input signal and impulse response together and coloring the output sound with the harmonics and timbre of the impulse response. The end result is a signal that is a convincing model of the input sound being played in the space the impulse response defines.
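The core operation can be sketched in a few lines of Python. This direct (slow) convolution is mathematically what a convolution processor computes, though real products use FFT-based fast convolution; the toy impulse response here is purely illustrative:

```python
def convolve(dry, impulse_response):
    """Directly convolve a dry signal with a room's impulse response:
    every input sample triggers a scaled copy of the whole response."""
    n_out = len(dry) + len(impulse_response) - 1
    wet = [0.0] * n_out
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            wet[i + j] += x * h
    return wet

# A toy "room": direct arrival plus two decaying reflections
ir = [1.0, 0.0, 0.4, 0.0, 0.15]
dry = [1.0, 0.5]
wet = convolve(dry, ir)
```

Each dry sample stamps a scaled copy of the room’s response into the output, which is why the wet signal carries the space’s signature.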

The versatility of digital reverb means that the sound of just about every space you could want, real or imagined, is at your disposal. If used well, it can add completely new dimensions to your mixes or create wild effects. Just be careful not to wash yourself away in the process.

Do I Still Know How To Do My Job?

The last time I mixed FOH for a real audience was January 8, 2020. For some reason I feel the need to write down the year as well, afraid that if this standstill lasts longer than we all hope, I’ll still want to be able to trace back to my last real show. I only hope we won’t end up in the scenario from those memes where a senior person being led along by a young kid says “my 2020 gigs were rescheduled again” and the youngster answers “let it go, granny, it’s 2063 already.”

So, it’s been over a year at this point without being surrounded by live music, audience cheers, and the feeling of butterflies in my stomach two minutes before a show starts. I had rehearsals with my supposed-to-be cast on a cruise ship from mid-February to mid-March, so I felt like I was still in the right vibe. And after that – that’s it.

The slow pace of the vaccination process gives some hope that we’re moving in the right direction and that one day we’ll be able to get back to our jobs. Here comes the scariest part: do I still know how to do my job?

I was talking with other artists, and we randomly got onto the subject of skill loss when not practicing. An artist I genuinely admire shared that after not painting for a year, it was very frustrating to take a brush in hand again, and it took time to get his technique back. A scriptwriter told me that after not writing for some time, it had become a struggle to get the creative juices going again. Then I thought about myself as an ex-drummer. When our high school band split up, I stopped drumming for good. A couple of years later I got a job as a backline tech, and one day I was asked to do a drum soundcheck. Kick – fine. Snare – fine. You know how it goes. And then the guy running FOH, who knew about my drumming past, asked me to play something. I froze; I couldn’t keep a steady 4/4 beat. So, at this point, I already know how it feels to try doing something you once knew well, only to find it suddenly feels so unknown.

Let me point out that I’m not only talking about mixing. Mixing is easy; I see live sound engineering as a complex set of skills. A lot of us live sound engineers haven’t needed to stay sharp for over a year: no 5-minute changeovers, no crew management, no immediate problem-solving on the fly, no 300 ft power cable run backward, you name it. None of these skills came overnight. It took years and years of going through fire and ice just to learn not to freak out and to calmly make the right decisions. Thinking about that honestly makes me worry: do I still know how to behave? Or is it just like riding a bike? Am I the only one in the industry who is concerned? Or will it be a slow start for everybody when live shows come back? Is there a way to do a self-check? Or is it not necessary, because everything we knew comes back naturally once we start doing what we’ve always done?

I can’t say how much I appreciate those virtual product presentations, free training sessions, and Q&As. I have never watched so many educational videos in my life. But does that keep us live sound engineers sharp and prepared for the live environment? I can’t wait for the day to come to find out!


Dovile Bindokaite is currently based and working as a freelance sound engineer in Lithuania. She has an MA degree in sound engineering and started working in sound in 2012. Since 2014, she has worked in various positions in live sound including FOH, monitor engineer, sound engineer for broadcasting, RF coordinator, backline tech, stage tech, stage manager. For the past year, she was part of an audio team at Schubert Systems Group (USA). She has experience working in theatre as a sound designer and recording studios as a recording engineer.

 

One Size Does Not Fit All in Acoustics

Have you ever stood outside when it has been snowing and noticed that it feels “quieter” than normal? Have you ever heard your sibling or housemate play music or talk in the room next to you and heard only the lower-frequency content on your side of the wall? People are better at perceptually understanding acoustics than we give ourselves credit for. In fact, our hearing and our ability to perceive where a sound is coming from are important to our survival because we need to be able to tell if danger is approaching. Without necessarily thinking about it, we gather a lot of information about the world around us from localization cues: the time offsets between direct and reflected sounds arriving at our ears, which our brain quickly analyzes and compares against our visual cues.

Enter the entire world of psychoacoustics

Whenever I walk into a music venue during a morning walk-through, I try to bring my attention to the space around me: What am I hearing? How am I hearing it? How does that compare to the visual data I’m gathering about my surroundings? This clandestine, subjective information gathering is an important reality check on the data collected during the formal, objective measurement process of a system tuning. People spend entire lifetimes researching the field of acoustics, so instead of trying to give a “crash course” in acoustics, we are going to talk about some concepts to get you interested in behavior you have already spent your whole life learning from an experiential perspective without realizing it. I hope that by the end of reading this you will realize that the interactions of signals in the audible range are complex because the picture changes depending on the relationships of frequency, wavelength, and phase between the signals.

The Magnitudes of Wavelength

Before we head down this rabbit hole, I want to point out that one of the biggest “Eureka!” moments in my audio education came when I truly understood what Jean-Baptiste Fourier discovered in 1807 [1] regarding the nature of complex waveforms: a complex waveform can be “broken down” into many component waves that, when recombined, recreate the original complex waveform. For example, the sound of a human singing can be broken down into the many component sine waves that add together to create the singer’s complex original waveform. I like to conceptualize the behavior of sound under the philosophical framework of Fourier’s discoveries. Instead of being overwhelmed by the complexities as you go further down the rabbit hole, I like to think that the more I learn, the more the complex waveform gets broken into its component sine waves.

Conceptualizing sound field behavior is frequency-dependent

 

One of the most fundamental quandaries in analyzing the behavior of sound propagation is that the wavelengths we work with in the audible range vary by orders of magnitude. We generally take the audible frequency range of human hearing to be 20 cycles per second (20 Hertz) to 20,000 cycles per second (20 kilohertz), though it varies with age and other factors such as hearing damage. Now recall the basic formula for determining the wavelength of a given frequency:

Wavelength (in feet or meters) = speed of sound (in feet or meters per second) / frequency (in Hertz) **wavelength and speed of sound must use the same distance unit, i.e., meters and meters per second**

So let’s look at some numbers, given specific parameters for the speed of sound, since we know it varies with factors such as altitude, temperature, and humidity. The speed of sound at “average sea level” (roughly 1 atmosphere, or 101.3 kilopascals [2]), at 68 degrees Fahrenheit (20 degrees Celsius) and 0% humidity, is approximately 343 meters per second, or approximately 1,125 feet per second [3]. There is a great calculator online at sengpielaudio.com if you don’t want to work this out manually [3]. So if we use the formula above to calculate the wavelengths for 20 Hz and 20 kHz with this value for the speed of sound, we get (in Imperial units, because I live in the United States):

Wavelength of 20 Hz = 1,125 ft/s / 20 Hz = 56.25 feet

Wavelength of 20 kHz or 20,000 Hertz = 1,125 ft/s / 20,000 Hz = 0.0563 feet or 0.675 inches
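The two calculations above can be wrapped in a small helper; the default speed of sound is the 1,125 ft/s figure assumed earlier, and the function name is my own:

```python
def wavelength_ft(frequency_hz, speed_of_sound_fps=1125.0):
    """Wavelength in feet for a given frequency, assuming the speed of
    sound at sea level, 68 degrees F, 0% humidity (~1,125 ft/s)."""
    return speed_of_sound_fps / frequency_hz

low = wavelength_ft(20)       # 56.25 ft - roughly the size of a building
high = wavelength_ft(20000)   # 0.05625 ft (0.675 in) - roughly a penny
```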

This means that we are dealing with wavelengths that range from roughly the size of a penny to the size of a building. We see this another way as we move up in octaves from 20 Hz to 20 kHz: each octave band spans twice the frequency range of the one below it, so the bandwidth in Hertz grows as frequency increases.

32-63 Hz

63-125 Hz

125-250 Hz

250-500 Hz

500-1000 Hz

1000-2000 Hz

2000-4000 Hz

4000-8000 Hz

8000-16000 Hz

Look familiar?

Unfortunately, what this means for us sound engineers is that there is no “catch-all” way of modeling the behavior of sound that applies across the entire audible spectrum. The objects and surfaces obstructing or interacting with sound may or may not create issues, depending on their size relative to the wavelength of the frequency under scrutiny.

For example, take the practice of placing a measurement mic on top of a flat board to gather what is known as a “ground plane” measurement – say, placing the mic on a board laid on top of seats in a theater. This is a tactic I use primarily in highly reflective rooms to measure a loudspeaker system without the degradation from the room’s reflections, usually because I don’t have control over the acoustics of the room itself (think in-house, pre-installed PAs in a venue). The caveat to this method is that the board has to be at least a wavelength across at the lowest frequency of interest. So if you have a 4 ft x 4 ft board for your ground plane, the measurements are really only useful from roughly 280 Hz up (1,125 ft/s / 4 ft ≈ 280 Hz, given the speed of sound discussed earlier). Below that frequency, the wavelengths of the signal under test are larger than the board, so the benefits of the ground plane no longer apply. The other option, which extends the usable range of the ground plane measurement, is to place the mic on the floor itself (like in an arena) so that the floor becomes an extension of the boundary.
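That back-of-the-envelope check can be expressed as a tiny helper (the function name is my own, and the default speed of sound is the 1,125 ft/s figure used throughout):

```python
def lowest_usable_freq_hz(board_size_ft, speed_of_sound_fps=1125.0):
    """Lowest frequency whose wavelength still fits the ground-plane
    board; below this, wavelengths exceed the boundary and the
    ground-plane benefit is lost."""
    return speed_of_sound_fps / board_size_ft

f_min = lowest_usable_freq_hz(4.0)  # ~281 Hz for a 4 ft board
```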

Free Field vs. Reverberant Field:

When we start talking about the behavior of sound, it’s very important to distinguish what type of sound field we are observing, modeling, and/or analyzing. If that isn’t confusing enough, depending on the scenario, the sound field behavior will change with the frequency range under scrutiny. Most loudspeaker prediction software works from measurements of the loudspeaker in the free field. To conceptualize how sound behaves in the free field, imagine a single point-source loudspeaker floating high above the ground, outside, with no obstructions in sight. Based on the directivity index of the loudspeaker, sound intensity will propagate outward from the origin according to the inverse square law. We must remember that the directivity index is frequency-dependent, which means we must treat this behavior as frequency-dependent as well. As a refresher, this spherical radiation of sound intensity from a point source results in a 6 dB loss per doubling of distance. As seen in Figure A, sound pressure radiates omnidirectionally as a sphere outward from the origin, so at radius “r” the same energy is spread over a surface area that grows with r^2, and intensity falls accordingly.

Figure A. A point source in the free field exhibits spherical behavior according to the inverse square law, losing 6 dB of sound intensity per doubling of distance
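The 6 dB-per-doubling figure falls straight out of the inverse square law; here is a quick sketch (function name illustrative) of the relative level change for a point source in the free field:

```python
import math

def level_change_db(d_ref, d):
    """Relative SPL change for a point source in the free field:
    20*log10(d_ref/d). Doubling the distance loses about 6 dB."""
    return 20.0 * math.log10(d_ref / d)

loss = level_change_db(10.0, 20.0)  # ~ -6.02 dB at twice the distance
```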

 

The inverse square law applies to point-source behavior in the free field, yet things grow more complex when we start talking about line sources and Fresnel zones. The relationship between point-source and line-source behavior changes depending on whether we observe the source in the near field or the far field, since a directional source behaves like a point source when observed in the far field. Line source behavior could fill an entire blog or book on its own, so for the sake of brevity, I will redirect you to the Audio Engineering Society papers on the subject, such as the 2003 white paper “Wavefront Sculpture Technology” by Christian Heil, Marcel Urban, and Paul Bauman [4].

Free field behavior, by definition, does not take into account the acoustical properties of the venue that the speakers exist in. Free field conditions exist pretty much only outdoors in an open area. The free field does, however, make speaker interactions easier to predict, especially when we have known direct (on-axis) and off-axis measurements comprising the loudspeaker’s polar data. Since loudspeaker manufacturers have this high-resolution polar data for their speakers, they can predict how elements will interact with one another in the free field. The only problem is that anyone who has ever been inside a venue with a PA system knows that we aren’t just listening to the direct field of the loudspeakers, even when a system has great audience coverage. We also listen to the energy returned from the room in the reverberant field.

As mentioned in the introduction to this blog, our hearing allows us to gather information about the environment that we are in. Sound radiates in all directions, but it has directivity relative to the frequency range being considered and the dispersion pattern of the source. Now if we take that imaginary point-source loudspeaker from our earlier example and listen to it in a small room, we will hear not only the direct sound traveling from the loudspeaker to our ears, but also reflections of the loudspeaker’s sound bouncing off the walls and arriving at our ears delayed by some offset in time. Direct sound often correlates with something we can see, like the on-axis signal from a loudspeaker in front of us. Reflections, since they bounce off other surfaces before arriving at our ears, don’t contribute to the direct field; instead, they add to the reverberant field that helps us perceive spatial information about the room we are in.


Signals arriving on an unobstructed path to our ears are perceived as direct arrivals, whereas signals bouncing off a surface and arriving with some offset in time are reflections


Our ears are like little microphones that send aural information to our brain. Ears vary from person to person in size, shape, and the distance between them. This gives everyone their own unique time and level offsets based on the geometry of their ears and head, which creates each person’s individual head-related transfer function (HRTF). Our brain combines the data from the direct and reflected signals to discern where the sound is coming from. The time offset between a reflected signal and the direct arrival determines whether our brain will perceive the signals as coming from one source or two distinct sources. This is known as the precedence effect, or Haas effect. Sound System Engineering by Don Davis, Eugene Patronis, Jr., & Pat Brown (2013) notes that our brain integrates early reflections arriving within “35-50 ms” of the direct arrival as a single source. Once again, we must remember that this is an approximate value since the actual timing will be frequency-dependent. Late reflections that arrive beyond roughly 50 ms do not get integrated with the direct arrival and instead are perceived as two separate sources [5]. When two signals have a large enough time offset between them, we start to perceive the later arrival as an echo. Specular reflections can be particularly obnoxious because they arrive at our ears with enough level, and at such an angle of incidence, that they can interfere with our perception of localized sources.
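A reflection’s time offset is easy to estimate from the difference between the reflected and direct path lengths and the speed of sound. The path lengths below are illustrative assumptions of mine, not values from the text:

```python
SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

def reflection_delay_ms(direct_path_m, reflected_path_m):
    """Time offset (ms) between a reflection and the direct arrival,
    based on the extra distance the reflected sound travels."""
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND * 1000.0

# Listener 5 m from the loudspeaker; the bounce off a side wall travels 12 m.
delay = reflection_delay_ms(5.0, 12.0)
print(f"{delay:.1f} ms")  # ~20.4 ms: inside the ~35-50 ms integration window,
# so the brain fuses it with the direct sound rather than hearing an echo
```

A reflected path roughly 17 m longer than the direct path would push the offset past 50 ms and toward audible echo territory.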

Specular reflections act like reflections off a mirror, bouncing back at the listener


Diffuse reflections, on the other hand, tend to lack localization and add more to the perception of “spaciousness” in the room, yet depending on frequency and level they can still degrade intelligibility. Whether certain reflections will degrade or enhance the original source is highly dependent on their relationship to the dimensions of the room.


Various acoustic diffusers and absorbers used to spread out reflections [6]


In the Master Handbook of Acoustics by F. Alton Everest and Ken C. Pohlmann (2015), they illustrate how “the behavior of sound is greatly affected by the wavelength of the sound in comparison to the size of objects encountered” [7]. Because wavelength varies with frequency, how we model sound behavior varies in relation to the room dimensions. In smaller rooms, there is a low-frequency range where the room’s dimensions are shorter than the wavelength, such that the room cannot contribute boosts due to resonance effects [7]. Everest & Pohlmann note that when the wavelength becomes comparable to the room dimensions, we enter modal behavior. The top of this range marks the “cutoff frequency” above which we can begin to describe the interactions using “wave acoustics,” and as we progress into the higher frequencies of the audible range we can model these short-wavelength interactions using ray behavior. One can find the equations for estimating these ranges based on room length, width, and height in the Master Handbook of Acoustics. It’s important to note that while we haven’t explicitly discussed phase, its importance is implied since it is a necessary component of understanding the relationship between signals. After all, the phase relationship between two copies of the same signal determines whether their interaction results in constructive or destructive interference. What Everest & Pohlmann are getting at is that how we model and predict sound field behavior changes based on wavelength, frequency, and room dimensions. It’s not as easy as applying one set of rules to the entire audible spectrum.
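As a rough sketch of those regimes, the standard axial-mode formula f = nc/2L and the commonly cited Schroeder-frequency estimate of the modal-to-statistical crossover give ballpark numbers. The room dimensions and RT60 here are made up for illustration, and Everest & Pohlmann’s own equations may differ in detail:

```python
import math

C = 343.0  # speed of sound in air, m/s

def axial_modes(length_m, n_max=3):
    """First few axial mode frequencies (Hz) along one room dimension: f = n*c/(2*L)."""
    return [n * C / (2 * length_m) for n in range(1, n_max + 1)]

def schroeder_frequency(rt60_s, volume_m3):
    """Commonly cited estimate of the crossover from modal (wave) behavior
    to statistical (ray-like) behavior: f_s = 2000 * sqrt(RT60 / V), SI units."""
    return 2000 * math.sqrt(rt60_s / volume_m3)

# Hypothetical 6 m x 4 m x 3 m room with an assumed RT60 of 0.5 s:
print([round(f, 1) for f in axial_modes(6.0)])        # modes along the 6 m dimension
print(round(schroeder_frequency(0.5, 6 * 4 * 3), 1))  # estimated crossover, Hz
```

Below the first axial mode the room is too small to resonate; between there and the Schroeder estimate, modal behavior dominates; above it, ray models become reasonable.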

Just the Beginning

We haven’t even begun to talk about the effects of surface properties such as absorption coefficients and RT60 times, and yet we already see the increasing complexity of the interactions between signals based on the fact that we are dealing with wavelengths that differ by orders of magnitude. In order to simplify predictions, most loudspeaker prediction software uses measurements gathered in the free field. Although acoustic simulation software such as EASE exists that allows the user to factor in the properties of surfaces, often we don’t know the information needed to account for things such as the absorption coefficients of a material, unless someone gets paid to go take those measurements or the acoustician involved with the design documented the decisions made during the architecture of the venue. Yet despite the simplifications needed to make prediction easier, we still carry one of the best tools for acoustical analysis with us every day: our ears. Our ability to perceive information about the space around us, based on interaural level and time differences between the signals arriving at our ears, allows us to analyze the effects of room acoustics from experience alone. When looking at the complexity involved in acoustic analysis, it’s important to remember the pros and cons of our subjective and objective tools. Do the computer’s predictions make sense based on what I hear happening in the room around me? Measurement analysis tools allow us to objectively identify problems and their origins that aren’t necessarily perceptible to our ears. Yet remembering to reality-check with our ears is important, because otherwise it’s easy to get lost in the rabbit hole of increasing complexity as we get further into our engineering of audio. At the end of the day, our goal is to make the show sound “good,” whatever that means to you.
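For readers curious about the RT60 figure mentioned above, Sabine’s classic estimate shows how absorption coefficients enter the picture. The room dimensions and coefficients below are purely illustrative guesses, not measured values:

```python
def sabine_rt60(volume_m3, surface_absorptions):
    """Sabine's reverberation-time estimate: RT60 = 0.161 * V / A,
    where A is the total absorption in metric sabins: sum of
    (surface area in m^2) * (absorption coefficient) over all surfaces."""
    total_absorption = sum(area * alpha for area, alpha in surface_absorptions)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 6 m x 4 m x 3 m room; absorption coefficients are rough guesses.
surfaces = [
    (2 * (6 * 3 + 4 * 3), 0.05),  # four walls, fairly reflective finish (assumed)
    (6 * 4, 0.10),                # wood floor (assumed)
    (6 * 4, 0.60),                # absorptive ceiling tile (assumed)
]
print(round(sabine_rt60(6 * 4 * 3, surfaces), 2))  # RT60 in seconds
```

Notice how swapping a single surface’s coefficient (say, replacing the absorptive ceiling with drywall) drags the whole room’s RT60 up, which is exactly why unknown materials make prediction hard.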

Endnotes:

[1] https://www.aps.org/publications/apsnews/201003/physicshistory.cfm

[2] (pg. 345) Giancoli, D.C. (2009). Physics for Scientists & Engineers with Modern Physics. Pearson Prentice Hall.

[3] http://www.sengpielaudio.com/calculator-airpressure.htm

[4] https://www.aes.org/e-lib/browse.cfm?elib=12200

[5] (pg. 454) Davis, D., Patronis, Jr., E. & Brown, P. Sound System Engineering. (2013). 4th ed. Focal Press.

[6] “recording studio 2” by JDB Sound Photography is licensed with CC BY-NC-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/2.0/

[7] (pg. 235) Everest, F.A. & Pohlmann, K. (2015). Master Handbook of Acoustics. 6th ed. McGraw-Hill Education.

Resources:

American Physical Society. (2010, March). This Month in Physics History March 21, 1768: Birth of Jean-Baptiste Joseph Fourier. APS News. https://www.aps.org/publications/apsnews/201003/physicshistory.cfm

Davis, D., Patronis, Jr., E. & Brown, P. Sound System Engineering. (2013). 4th ed. Focal Press.

Everest, F.A. & Pohlmann, K. (2015). Master Handbook of Acoustics. 6th ed. McGraw-Hill Education.

Giancoli, D.C. (2009). Physics for Scientists & Engineers with Modern Physics. Pearson Prentice Hall.

JDB Photography. (n.d.). [recording studio 2] [Photograph]. Creative Commons. https://live.staticflickr.com/7352/9725447152_8f79df5789_b.jpg

Sengpielaudio. (n.d.). Calculation: Speed of sound in humid air (Relative humidity). Sengpielaudio. http://www.sengpielaudio.com/calculator-airpressure.htm

Urban, M., Heil, C., & Bauman, P. (2003). Wavefront Sculpture Technology [White paper]. Journal of the Audio Engineering Society, 51(10), 912-932. https://www.aes.org/e-lib/browse.cfm?elib=12200
