Word Clocks, Clock Masters, SRC, and Digital Clocks

And Why They Matter To You

Three digital audio consoles walk into a festival/bar and put in their drink orders. The bartender/front-end processor says, “You can order whatever you want, but I’m going to determine when you drink it.” In the modern audio world, we can keep our signal chain in the digital realm from microphone to loudspeaker for longer stretches without hopping back and forth through analog-to-digital and digital-to-analog converters. Looking at our digital signal flow, there are some important concepts to keep in mind when designing a system. To keep digital artifacts from rearing their ugly heads amid our delivery of crisp, pristine audio, we must consider our application of sample rate conversions and clock sources.

Let’s back up a bit to define some basic terminology: What is a sample rate? What is the Nyquist frequency? What is bit depth? If we take a one-second period of a waveform and chop it up into digital samples, the number of “chops” per second is our sample rate. For example, the common sample rates of 44.1 kHz, 48 kHz, and 192 kHz refer to 44,100 samples per second; 48,000 samples per second; and 192,000 samples per second.

A waveform signal “chopped” into 16 samples
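
To make this concrete, here is a minimal Python sketch (with toy, hypothetical numbers, not tied to any particular device) that chops one second of a waveform into 16 samples, just like the figure above:

import numpy as np

fs = 16                                  # a toy sample rate: 16 "chops" per second
t = np.arange(fs) / fs                   # the instants at which we chop
samples = np.sin(2 * np.pi * 2 * t)      # one second of a 2 Hz sine, reduced to 16 numbers
print(np.round(samples, 3))

At 48 kHz the idea is identical; there are simply 48,000 of these numbers every second.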

Why do these specific numbers matter, you may ask? This brings us to the Nyquist theorem and the Nyquist frequency:

“The Nyquist Theorem states that in order to adequately reproduce a signal it should be periodically sampled at a rate that is 2X the highest frequency you wish to record.”

(Ruzin, 2009)*.

*Sampling theory is not just for audio; it applies to imaging too! See the resources below.

So if the human ear can hear 20 Hz–20 kHz, then in theory, in order to reproduce the full frequency spectrum of human hearing, the minimum sample rate must be 40,000 samples per second. Soooo why don’t we have sample rates of 40 kHz? Well, the short answer is that it doesn’t sound very good. The long answer is that it doesn’t sound good because the frequency response of the sampled waveform is affected by frequencies above the Nyquist frequency due to aliasing. According to “Introduction to Computer Music: Volume One” by Professor Jeffrey Hass of Indiana University, partials or overtones above the Nyquist frequency are “mirrored the same distance below the Nyquist frequency as the originals were above it, at the original amplitudes” (2017-2018). This means that frequencies above the range of human hearing can affect the frequency response of our audible bandwidth, given high enough amplitude! So without going down the rabbit hole of recording music history, CDs, and DVDs, you can see that part of the reasoning behind these higher sample rates is to provide better spectral bandwidth for what we humans can perceive, plus some room above the audible band for anti-aliasing filters to do their work. Other important terms for us to discuss here are bit depth and word length.
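
Here is a minimal sketch of that mirroring, with hypothetical numbers: at a 48 kHz sample rate the Nyquist frequency is 24 kHz, and a 30 kHz tone produces exactly the same samples as an 18 kHz tone sitting the same distance below Nyquist:

import numpy as np

fs = 48_000                       # sample rate; Nyquist frequency is fs / 2 = 24 kHz
n = np.arange(16)                 # sample indices
f_in = 30_000                     # a tone 6 kHz above Nyquist
f_mirror = fs - f_in              # 18 kHz: mirrored 6 kHz below Nyquist

above = np.cos(2 * np.pi * f_in * n / fs)
below = np.cos(2 * np.pi * f_mirror * n / fs)
print(np.allclose(above, below))  # True: once sampled, the two are indistinguishable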

Not only is the integrity of our digital waveform affected by the number of samples per second, it is affected by bit depth as well. Think of bit depth as the “size” of the “chops” of our waveform, where the higher the bit depth, the finer the discretization of our samples. Imagine you are painting a landscape with dots of paint: if you used large dots, the image would be chunkier, and perhaps it would be harder to interpret what is being conveyed. As you paint with smaller dots placed closer together, the dots start approaching the character of continuous lines, and the level of articulation within the painting significantly increases.

 

Landscape portrayed with dots of smaller sample size

 

Landscape portrayed with dots of larger sample size

 

When you have higher bit depths, the waveform is “chopped” into smaller pieces, creating increased articulation of the signal. Each chop is described by a “word” in the digital realm that the sampling device translates into a computer value. The word length, or bit depth, tells the device in computer language how fine to make the dots in the painting. So who is telling these audio devices when to start taking samples and at what rate to do so? Here is where the device’s internal clock comes in.
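
Before we move on to clocks, here is a minimal sketch of those “words,” again with hypothetical numbers: the same handful of samples quantized at 16-bit and at 4-bit word lengths, the audio equivalent of small versus large paint dots:

import numpy as np

def quantize(x, bits):
    """Round samples in [-1.0, 1.0] to signed integer words of the given bit depth."""
    levels = 2 ** (bits - 1)               # e.g. 32,768 steps per polarity at 16-bit
    return np.clip(np.round(x * (levels - 1)), -levels, levels - 1).astype(int)

t = np.arange(8) / 48_000                  # eight samples at 48 kHz
x = np.sin(2 * np.pi * 1_000 * t)          # a 1 kHz sine
print(quantize(x, 16))                     # fine dots: many distinct word values
print(quantize(x, 4))                      # coarse dots: only a few values survive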

Every computing device, from your laptop to your USB audio interface, has some sort of clock in it, whether it’s on the processor’s logic board or in a separate chip. This clock acts kind of like a police officer in the middle of an intersection, directing the traffic of bits based on time. You can imagine how mission-critical this is, especially for an audio device, because our entire existence in the audio world lives as a function of the time domain. If an analog signal from a microphone is going to be converted into a digital signal at a given sample rate, the clock inside the device with the analog-to-digital and digital-to-analog converters needs to “keep time” for that sampling rate so that all the electronic signals traveling through the device don’t turn into a mush of cars slamming into each other at random intervals in an intersection. Chances are, if you have spent enough time with digital audio, you have run into a situation where a sample rate discrepancy or clock slip error reared its ugly head, and the only solution was to get the clocks of the devices in sync or to make the sample rates consistent throughout the signal chain.
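
How quickly do two free-running clocks drift apart? Here is a rough back-of-the-envelope sketch, with an assumed (hypothetical) mismatch between the two devices’ crystals:

fs = 48_000               # nominal sample rate of both devices
ppm_offset = 100          # assumed clock mismatch in parts per million

slip_per_second = fs * ppm_offset / 1e6
print(slip_per_second)    # 4.8 samples of drift every second
print(1 / slip_per_second)  # a full-sample slip roughly every 0.21 seconds

Even a modest mismatch accumulates into audible pops and clicks within seconds, which is why somebody has to be the officer in the intersection.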

One of the solutions for keeping all these devices in line is to use an external word clock. Many consoles, recording interfaces, and other digital audio devices allow an external clocking device to act as the “master” for everything downstream of it. Some engineers claim sonic benefits from using an external clock for increased fidelity in the system, the idea being that all the converters in the downstream devices connected to the external clock begin their samples at the same time. Yet regardless of whether you use an external clock or not, the MOST important thing to know is WHO/WHAT is acting as the clock master.

Let’s go back to the opening joke of this blog about the different consoles walking into a festival, umm, I mean bar. Let’s say you have a PA being driven via the AES/EBU standard and a drive rack at FOH with a processor acting as a matrix for all the guest consoles/devices going into the system. If one guest console comes in running at 96 kHz, another at 48 kHz, another at 192 kHz, and the system is being driven via AES at 96 kHz, who, for the sake of this discussion, is determining where the samples of the electronic signals being shipped around start and end? Aren’t there going to be bits “lost” since one console is operating at one sample rate and another at a totally different one? I think now is the time to bring up the topic of SRC, or “Sample Rate Conversion.”

My favorite expression in the industry is, “There is no such thing as a free lunch,” because life really is a game of balancing compromises for the good and the bad. Some party in the above scenario is going to have to yield to the traffic of a master clock source, or cars are going to start slamming into each other in the form of digital artifacts, i.e., “pops” and “clicks.” Fortunately for us, manufacturers have mostly thought of this. Somewhere in a given digital device’s signal chain, they put a sample rate converter to match the other device chained to it so that this traffic jam doesn’t happen. Whether this sample rate conversion happens at the input or the output, and synchronously or asynchronously with respect to the other device, is manufacturer specific.
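
For intuition, here is a minimal sample rate conversion sketch using SciPy’s polyphase resampler (a stand-in for whatever converter a manufacturer actually embeds; the signal and rates are hypothetical):

import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 48_000, 96_000
t = np.arange(fs_in) / fs_in
x_48k = np.sin(2 * np.pi * 1_000 * t)     # one second of a 1 kHz tone at 48 kHz

# 96000 / 48000 reduces to the rational factor 2/1; the polyphase filter
# inside resample_poly is the "no free lunch" part: it costs latency and CPU.
x_96k = resample_poly(x_48k, up=2, down=1)
print(len(x_48k), len(x_96k))             # 48000 -> 96000 samples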

What YOU need to understand, as the human deploying these devices, is which device is going to be the police officer directing the traffic. Sure, there is a likelihood that if you leave these devices to sort out their sample rate conversions for themselves, there may not be any clock slip errors, and everyone can pat themselves on the back that they made it through this hellish intersection safe and sound. After all, these manufacturers have put a lot of R&D into making sure their devices work flawlessly in these scenarios…right? Well, as system designers, we have to look at what we have control over in our system and try to eliminate the factors that could create errors, designing around the lowest common denominator.

Let’s consider several scenarios of how we can use our trusty common sense and our newfound understanding of clocks to determine an appropriate clock master source for our system. Going back to our bartending-festival scenario, if all these consoles operating at different sample rates are being fed into one system for the PA, it makes sense for the front-end processor that is taking in all these consoles to run its clock internally and independently. If the sample rate conversion happens internally in the front-end processor, independent of the input, then it doesn’t really care what sample rate comes into it, because everything gets converted to match the 96 kHz sample rate at its AES output.

 

Front-end DSP clocking internally with SRC

In another scenario, let’s say we have a control package where the FOH and monitor desks are operating on a fiber loop, and the engineers are also operating playback devices that pull time-domain-related data from that fiber loop. The FOH console feeds a processor in a drive rack via AES, which in turn feeds the PA system. In this scenario, it makes the most sense for the fiber loop to be the clock source and for the front-end processor to derive its clock (and sample rate conversion) from the AES input fed by the console upstream of it, because if you think about it as a flow chart, all the source data traces back to the fiber loop. In a way, you can think of choosing where the clock master comes from as delegating the police officer with the most influence over the audio path under discussion.

 

Fiber loop as chosen origin of clock source for the system

 

As digital audio expands further into the world of networked audio, the concept of a clock master becomes increasingly important to understanding signal processing, especially when you dive into the realms of protocols such as AVB or Dante. Our electronic signal turns into data packets on a network stream, and the network itself starts determining where the best clock source is coming from; it can even switch between clock masters if one were to fail. (For more information, check out www.audinate.com for info on Dante and www.avnu.org for info on AVB.) As technology progresses and computers become increasingly capable of handling large amounts of digital signal processing, it will be interesting to see how fidelity tracks with higher sample rates and ever-better converters, and how we can continue to seek perfection in the representation of a beautiful analog waveform in the digital realm.

The views in this blog are for educational purposes only, are the opinion of the author alone, and are not to be interpreted as an endorsement of, or as reflecting the views of, the aforementioned sources.

Resources:

Hass, Jeffrey. 2017-2018. “Chapter Five: Digital Audio.” Introduction to Computer Music: Volume One. Indiana University. https://cecm.indiana.edu/etext/digital_audio/chapter5_nyquist.shtml

Ruzin, Steven. 2009, April 9. “Capturing Images.” UC Berkeley. http://microscopy.berkeley.edu/courses/dib/sections/02Images/sampling.html

Dante: www.audinate.com

AVB: https://avnu.org/

Omnigraffle stencils by Jorge Rosas: https://www.graffletopia.com/stencils/435

 

There Really Is No Such Thing As A Free Lunch

Using The Scientific Method in Assessment of System Optimization

A couple of years ago, I took a class for the first time from Jamie Anderson at Rational Acoustics, where he said something that has stuck with me ever since: our job as system engineers is to make it sound the same everywhere, and it is the job of the mix engineer to make it sound “good” or “bad.”

The reality in the world of live sound is that there are many variables stacked up against us. A scenic element in the way of speaker coverage, a client that does not want to see a speaker in the first place, a speaker that has done one too many gigs and decides that today is the day for one driver to die during load-in, or any of a myriad of other things can stand in the way of the ultimate goal: a verified, calibrated sound system.

The Challenges Of Reality

 

One distinction must be made before beginning the discussion of system optimization; we must draw a line here and make all intentions clear: what is our role at this gig? Are you performing just the tasks of the systems engineer? Are you the systems engineer and the FOH mix engineer? Are you the tour manager as well, working directly with the artist’s manager? Why does this matter, you may ask? The fact of the matter is that when it comes down to making final evaluations on the system, there are going to be executive decisions to make, especially in moments of triage. Clearly defining one’s role at the gig will help in making these decisions when the clock is ticking away.

So in this context, we are going to discuss the decisions of system optimization from the point of view of the systems engineer. We have decided that the most important task of our gig is to make sure that everyone in the audience is having the same show as the person mixing at front-of-house. I’ve always thought of this as a comparison to a painter and a blank canvas. It is the mix engineer’s job to paint the picture for the audience to hear; it is our job as system engineers to make sure the painting can sound the same every day by providing the same blank canvas.

The scientific method teaches the concept of control with independent and dependent variables. We have an objective that we wish to achieve, and we assess our variables in each scenario to come up with a hypothesis of what we believe will happen. Then we execute a procedure, controlling the variables we can, and analyze the results with the tools at hand to draw conclusions and determine whether we have achieved our objective. For the purposes of this discussion, an independent variable is a factor that remains fixed in a given scenario, while a dependent variable is one you can manipulate while observing the results. In the production world, these terms can have a variety of implications. It is an unfortunate, commonly held belief that system optimization starts at the EQ stage, when really there are so many steps before that. If there is a column in front of a hang of speakers, no EQ in the world is going to make them sound like they are not shadowed behind a column.

Now everybody take a deep breath in and say, “EQ is not the solution to a mechanical problem.” And breathe out…

Let’s start with preproduction. It is time to assess our first round of variables. What are the limitations of the venue? Trim height? Rigging limitations? What are the limitations imposed by the client? Maybe there is another element to the show that necessitates the PA being placed in one position over another; maybe the client doesn’t want to see speakers at all. In each scenario, we must ask, with both our technical brains and our careers in mind: what can we change, and what can we not change? Note that the answer will not always be the same in every circumstance. In one scenario, we may be able to convince the client to let us put the PA anywhere we want, making it a dependent variable. In another situation, for the sake of our gig, we must accept that the PA will not move or that the low steel of the roof is a bleak 35 feet in the air, and thus we face an independent variable.

The many steps of system optimization that lie before EQ

 

After assessing these first sets of variables, we can now move into the next phase and look at our system design. Again, say it with me: “EQ is not the solution to a mechanical problem.” We must assess our variables again in this next phase of the optimization process. We have been given the technical rider of the venue we are going to be at, and maybe due to budgetary constraints we cannot change the PA: an independent variable. Perhaps we are carrying our own PA and thus have control over the design within limitations from the venue: a dependent variable forms, but with caveats. Let’s look deeper into this particular scenario and ask ourselves: as engineers building our design, what do we have control over now?

The first step lies in what speaker we choose for the job. Given the ultimate design-control scenario, where we have the luxury of picking the loudspeakers used in our design, different directivity designs will lend themselves better to one scenario than another. A point source has just as much validity as a line array, depending on the situation. For a small audience of 150 people with a jazz band, a point source speaker over a sub may be more valid than showing up with a 12-box line array that necessitates a rigging call to fly it from the ceiling. But even in this scenario, there are caveats in our delicate weighing of variables. Where are those 150 people going to be? Are we in a ballroom or a theater? Even the evaluation of our choices on what box to use for a design is as varied as deciding what type of canvas we wish to use for the mix engineer’s painting.

So let’s create a scenario: say we are doing an arena show where the design has been established, with a set number of boxes for daily deployment agreed upon by the production team. The design is pretty much cut-and-paste in terms of rigging points, but we have varying limitations on trim height due to the high and low steel of each venue. What variables do we now have control over? We still have a decent amount of control over trim height, up to the (literal) limit of the motor, but we also have control over the vertical directivity of our (let’s make the design decision, for the purpose of discussion) line array. There is a hidden assumption here that is often under-acknowledged when talking about system designs.

A friend and colleague of mine, Sully (Chris) Sullivan, once pointed out to me that the hidden design assumption we often make as system engineers, but don’t necessarily acknowledge, is that the loudspeaker manufacturer has actually achieved the horizontal coverage dictated by the technical specifications. This made me reconsider the things I take for granted in a given system. Say that in our design we choose to use Manufacturer X’s 120-degree line source element. The manufacturer has established in its technical specs that there is a measurable point at 60 degrees off-axis (120-degree total coverage) where the polar response drops 6 dB. We can take our measurement microphone and check that the response is what we think it is, but if it isn’t, what really are our options? Perhaps we have a manufacturing defect or a blown driver somewhere, but unless we change the physical parameters of the loudspeaker, this is a variable we entrust to the manufacturer. So what do we have control over? He pointed out to me that our decision choices lie in the manipulation of the vertical.

Entire books and papers can be, and have been, written about how we can control the vertical coverage of our loudspeaker arrays, but certain factors remain consistent throughout. Inter-element angles, or splay angles, let us control the summation of elements within an array. Site angle and trim height let us control the geometric relationship of the source to the audience and thus affect the spread of SPL over distance. Azimuth gives us geometric control over where the entire array’s directivity pattern points in the horizontal plane; note that this is distinct from the horizontal pattern control of the frequency response radiating from the enclosure, which we have handed over to the manufacturer. Fortunately, the loudspeaker prediction software available from manufacturers today gives the system engineer an unprecedented ability to assess these parameters before a single speaker goes up into the air.
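
As a small taste of the geometry involved, here is a minimal sketch of how SPL spreads over distance for an idealized point source (a simplification: the near field of a line array behaves more gently, closer to 3 dB per doubling):

import math

def spl_at(spl_ref, d_ref, d):
    """Inverse-square law: SPL at distance d, given a reference SPL at d_ref (meters)."""
    return spl_ref - 20 * math.log10(d / d_ref)

print(spl_at(100, 1, 2))     # 94.0 dB: we lose 6 dB per doubling of distance
print(spl_at(100, 1, 32))    # ~69.9 dB by the back of the room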

At this point, we have made a lot of decisions on the design of our system and weighed the variables along every step of the way to draw up our procedure for the system deployment. It is now time to analyze our results and verify whether what we thought was going to happen did or did not happen. Here we introduce our tools to verify our procedure in a two-step process of mechanical, then acoustical, verification. First, we use tools such as protractors and laser inclinometers to collect data and assess whether we have achieved our mechanical design goal. For example, our model says we need a site angle of 2 degrees to achieve this result, so we verify with the laser inclinometer that we got there. Once we have confirmed that we made our design’s mechanical goals, we must analyze the acoustical results.

Laser inclinometers are just one example of a tool we can use to verify the mechanical actualization of a design.
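
For a flavor of that mechanical verification step, here is a rough sketch (hypothetical numbers) of the kind of arithmetic hiding behind a site angle check, estimating how far downrange the array’s aim axis lands:

import math

trim_height_m = 10.0        # assumed hang height above the audience plane
site_angle_deg = 2.0        # downward tilt read off the laser inclinometer

# Along the aim axis, the array "looks" at the audience plane this far away:
throw_m = trim_height_m / math.tan(math.radians(site_angle_deg))
print(round(throw_m, 1))    # ~286.4 m: shallow angles aim very deep into a room

This is exactly why fractions of a degree matter, and why we verify the mechanical goal before reaching for the analyzer.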

It is only at this stage that we finally introduce the examination software to analyze the response of our system. After examining our role at the gig, the criteria involved in pre-production, choosing design elements appropriate for the task, and verifying their deployment, only now can we move into the realm of analysis software to see if all those goals were met. We can utilize dual-channel measurement software to take transfer functions at different stages of the input and output of our system to verify that our design goals have been met, and, more importantly, to see if they have not been met and why. This is where our ability to critically interpret the data comes into play. By evaluating impulse response data, dual-channel FFT (Fast Fourier Transform) functions, and the coherence of our gathered data, we can assess how well our design has been realized in the acoustical and electronic realms.
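
For the curious, here is a minimal dual-channel measurement sketch using SciPy, with synthetic signals standing in for the console reference and the measurement mic (any real rig would use SMAART or similar):

import numpy as np
from scipy.signal import csd, welch, coherence

fs = 48_000
rng = np.random.default_rng(0)
ref = rng.standard_normal(fs)                              # reference: console output
meas = np.roll(ref, 48) + 0.1 * rng.standard_normal(fs)    # "mic": delayed, plus noise

# Dual-channel transfer function: cross-spectrum over the reference auto-spectrum.
f, Pxy = csd(ref, meas, fs=fs, nperseg=4096)
_, Pxx = welch(ref, fs=fs, nperseg=4096)
H = Pxy / Pxx

# Coherence tells us how much to trust the measurement at each frequency.
_, coh = coherence(ref, meas, fs=fs, nperseg=4096)
print(np.round(20 * np.log10(np.abs(H[:4])), 1), np.round(coh[:4], 3))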

What’s interesting to me is that the discussion of system optimization often starts here. In fact, as we have seen, the process begins as early as the pre-production stage, when talking with different departments and the client, and even when asking ourselves what our role is at the gig. The final analysis of any design comes down to the tool that we always carry with us: our ears. Our ears are the final arbiters after our evaluation of acoustical and mechanical variables, and we use them at every step of our design path, along with our trusty “common sense.” In the end, our careful assessment of variables lets us use the power of the scientific method to make educated decisions and work towards our end goal: the blank canvas, ready to be painted.

Big thanks to the following for letting me reference them in this article: Jamie Anderson at Rational Acoustics, Sully (Chris) Sullivan, and Alignarray (www.alignarray.com)

Arica Rust: In Love with Live Sound Technology

Arica Rust works for Sound on Stage in San Francisco as a staff engineer. Sound on Stage is a sound system rental company based in the San Francisco Bay Area, providing systems for events ranging from high-profile corporate entertainment to rock festivals like Outside Lands and Treasure Island Music Festival. Arica has been with SOS for the last six years, which means she wears many hats, working FOH and monitors, stage patch, and whatever else they might throw at her. Her favorite position is FOH systems engineer. She enjoys being on the road and recently completed the North American leg of the Ben Howard tour as the PA Systems Tech.

Arica has been working in live sound for the last nine years and came to it as many do, with a love of music. Her initial dream was to work in a recording studio. Her journey into live sound started when she went to City College of San Francisco to study studio recording and found herself in the live sound classes as well.

City College of San Francisco offers an excellent audio program, providing several different certificate programs, and is headed up by SoundGirl Dana Labrecque. (Dana runs the Bay Area SoundGirls Chapter and is a Co-Director of SoundGirls.) After attending the live sound classes and her first internship, that was it; Arica knew live sound was where she wanted to be. When she was a teenager, Arica says, “I spent all my lunch money buying records and going to concerts with my friends. I originally went to college in upstate New York out of high school to study avant-garde Electronic Music and Creative Writing at Bard College.”

“I want to be able to make people experience music the way that I do with that same feeling where it lights your brain on fire. I figured the best way to shape people’s experience was to be on the technical side of the stage”.

Arica and her friend Tiffani used to throw underground electronic music events in the Bay Area and would use her friends’ rental company Word of Mouth Sound. When she was looking for her first internship while at City College of San Francisco, she contacted them and ended up working behind the scenes at the events she used to attend. She completed her trade certificates in Live Sound and Recording Arts at CCSF before transferring to San Francisco State University.

Realizing that she wanted to work in live sound on the technical side set Arica on her way. She went on to study at San Francisco State University and earned a Bachelor’s Degree in Broadcasting and Electronic Media Arts with a focus in Audio Production. Professor John Barsotti taught the audio program in the broadcasting department and introduced Arica to Sound on Stage.

Arica continues to immerse herself in ongoing education and training, earning certifications in Rational Acoustics’ SMAART and L-Acoustics Levels 1 and 2, and attending various AES-related conventions and events. “I value the importance of education and feel that no matter how much one thinks they know, there is always something new to discover. I try to learn from a variety of sources, whether it is from the war stories of other engineers or diversifying my training from different manufacturers.”

Arica’s long-term goals have changed since she started on her audio path: “It’s funny how your goals change over time as you learn more. I went to school imagining myself mixing albums for bands, but now I am way more interested in the science of sound and designing, deploying, and tuning systems for different clients”. She also loves teaching and getting people excited about science.

What if any obstacles or barriers have you faced?

The biggest obstacles I have had to face have always been the ones I create for myself. I think I will forever be plagued with Imposter Syndrome: the feeling that I am not good enough, smart enough, know enough, etc. to be where I am. No matter how much I try to prove myself there is always that feeling in the back of my head of self-doubt, but then I’ll have those magical moments where the show starts and maybe it’s music I’ve heard before or, even better, a band I’m unfamiliar with that just blows me away, and I feel like I’m right where I need to be doing what I love.

How have you dealt with them?

I just keep telling myself over and over that “I got this” when I start doubting myself. I stay focused on doing the best I possibly can. I try not to let the demons in my head get the best of me, and I put 110% into everything I do. It’s easy to get jaded, but even if you don’t get acknowledgment for your efforts this time, eventually hard work shows, and people respect that. I do things to help me relax and get into a confident headspace. For example, I have a playlist that I sometimes listen to before going into work to get myself ready to go.

The advice you have for women and non-binary people who wish to enter the field?

I wish we lived in a world where people did not change the way they interact with you based on what they perceive to be your gender, but sadly that is not the reality yet. Things are getting better, slowly but surely, but my best advice is to have thick skin and be the bigger person. People should not be allowed to get away with unprofessional behavior, but you have to counter these situations with professionalism. If you work hard and show everyone your value, then it should not matter who you are. I want to be seen for my skills as an engineer, not what people perceive to be my gender.

Must have skills?

I joke that this industry is 20% technical knowledge and 80% customer service skills. You can teach anyone how to operate a board, but not everyone can learn the people skills to interact with artists and clients. A good attitude and a willingness to work will get you farther at first than knowing how to mix. Also, always be open to exploring new things and learning from others. I am continually learning and re-evaluating my current knowledge because technology is ever-changing, and I respect the wisdom of people who share their experience with me.

Favorite gear?

My favorite rig is L-Acoustics K2 with KS28 subs, Kara outfills, and ARCS Wide front fills. I don’t think I could leave home without my laptop running SMAART v8 and the modeling software of the manufacturer whose PA I am working with, my iSEMcon EMX-7150 measurement mic, my multimeter, and my Disto. I have Roland OCTA-CAPTURE and Focusrite Scarlett 2i2 USB interfaces in my A and B rigs, as well as a soldering iron to fix problems on the job.

What is your favorite day off activity?

I enjoy spending time and catching up with my friends when I am not working. The industry demands that you sacrifice a lot of your social life, but it is essential to make an effort to keep in touch with your loved ones when you can. Your real friends understand when you are busy because they want to see you doing what you love to do. I am also passionate about my dance practice and reading anything from comic books to technical white papers.

Anything else you would like to leave us with?

I would like to stress the importance of self-care. I think there is a lot of taboo around taking care of yourself, because everyone works hard and plays hard. I’ve failed, many times, to eat enough, sleep enough, or drink enough water while working long hours, and sooner or later my body and mind paid the consequences. It’s important to take time to decompress and reset your brain, even if it’s just for the minute you get to step away. This is a stressful job, but it is also a labor of love. Please feel free to reach out to me! I enjoy geeking out. You can contact Arica at aricarust@gmail.com.
