
How to Record a Podcast Remotely And Get It Right The First Time

Remote interviews are a fact of life for every podcaster, and in today’s era of social distancing, more so than ever. Since you rarely get the chance at an interview do-over, nailing down your remote recording workflow is essential. We’ll show you how to prepare for and record a remote interview, so you get it right the first time — with some additional tips along the way to make sure all your bases are covered.

Choose the right remote recording setup for your podcast

The first step is to determine the remote recording setup that best suits the format and content of your podcast and your production and editing workflow.

In most cases, your best solution will involve recording remote interviews on Zoom, Skype, Google Hangouts, or a similar online conferencing service. This low-friction setup makes it easy for guests or co-hosts to contribute, but you’ll need to make sure you have the right software to record these interviews.

It’s also wise to make sure you can record phone calls. Phone interviews don’t offer great audio fidelity, but they make a solid backup option in case of technical problems or schedule changes.

If you’re recording with the same remote co-host on each episode of your podcast, consider a double-ender setup, in which you and your co-host record your own audio tracks locally and combine them in post-production. For most podcasters, this isn’t the most convenient solution, but it does translate into the highest audio fidelity for you and your co-host.

The best way to record an interview is to prepare for it

When it comes to interviewing — especially remote interviewing — a little preparation goes a long way.

Do some research into your guest’s background, expertise, and projects. Who are they? Why is their work notable? What do you (and in turn, your audience) hope to learn from them?

Putting together a rough outline of the questions you’d like to ask will come in very handy. Write down a handful of specific questions and key points, but keep your outline broad and high-level. That’ll allow you to more easily adapt to the flow of conversation.

Maintaining that conversational flow remotely can be substantially trickier than doing so person-to-person. Prime yourself to listen more than you speak — in particular, try not to interrupt your guest. Editing out awkward silences between speakers is much easier than dealing with too much crosstalk!

When it’s time to record the interview, take a couple final preparatory steps to ensure a clean recording. Close all unnecessary software and set your computer to “Do Not Disturb” mode to make sure unwanted distractions don’t pop up (or worse: end up in the recording).

How to record a Skype call, Zoom interview, or Google Hangout

For most remote recording situations, Zoom, Skype, or Google Hangouts are your platforms of choice. All three are easy to set up, simple for guests to use, and feature audio fidelity good enough for most podcasts.

Both Zoom and Skype offer built-in call recording functionality, but Google Hangouts currently limits this offering to enterprise users. There’s an additional caveat: the file format (.MP4 or .M4A) that each platform outputs may not be what you want, depending on your podcast production and editing workflow.

For maximum control over your final product, you’re better off using a third-party app to route computer system audio directly into the recording software of your choice, rather than relying on each platform’s built-in recording.

If you’re on a Mac, BlackHole is a great open-source tool that allows you to route audio between apps, which means you can record the audio output from Zoom (or Skype, or Google Hangouts) directly into your preferred recording software. On Windows, Virtual Audio Cable offers similar functionality.

If you’re already using Descript to record, you won’t need to use additional audio routing software. When recording audio into Descript, open the Record panel, choose Add a Track, select your input, and choose “Computer audio.” Click the Record button whenever you’re ready, and audio from Zoom, Skype, or Google Hangouts will be piped into Descript.

No matter which remote recording setup you use, make sure you test it — and test it again — with a friend or colleague before you’re actually recording your podcast. Troubleshooting when you should be interviewing ranks near the top of everyone’s Least Favorite Things To Deal With, so make sure everything is in order before your guest is on the line.

How to record a phone interview with Google Voice

Social distancing means nearly everyone has gotten used to handling calls and meetings on Zoom, Skype, or Google Hangouts. But maybe your podcast guest is really old-school, or their computer is on the fritz, or maybe they’re simply only able to access a phone during your scheduled call time. It’s likely phone interviews will never be your first choice, but being able to record an old-fashioned phone call will come in handy.

Recording phone calls can be tricky, but using Google Voice to make an outgoing phone call from your computer means you can use the same remote recording setup detailed above to record the call.

Follow Google’s instructions to set up Google Voice and then learn how to make an outgoing call. Once everything’s set up, you’ll be able to record phone calls with Google Voice just like you’d record an interview on Zoom or Skype.

Again, make sure to test with a friend and then test again before your interview.

If lossless audio quality is a must, record a “double-ender”

If you have a remote co-host who regularly appears on your podcast and you want to maximize audio quality, a “double-ender” is the way to go: each host or guest records themselves locally, and the audio tracks are combined in post-production. For an additional cost, you can also use third-party recording platforms that simulate double-enders without each speaker managing their own recording software.

A traditional double-ender has each speaker record their own audio track using their recording software of choice (Descript, Audacity, QuickTime, etc.); the host or editor then combines each speaker’s recording into a finished product. Each speaker should have a decent microphone; if they’re using a laptop microphone to record, you probably won’t hear a substantial advantage over a Zoom, Skype, or Google Hangouts recording.

Alternatively, you can simulate a double-ender by using a platform like SquadCast, Zencastr, or Cleanfeed. These services record lossless audio from each speaker, upload each track to the cloud and combine them automatically. These platforms cost money, but they’re a great alternative to a double-ender when guests or co-hosts don’t have the time or wherewithal to fiddle with recording themselves locally. Again, make sure each speaker has a decent microphone — otherwise, you won’t reap the full benefits of lossless audio.

Make remote recording hassles a thing of the past

Recording your podcast remotely isn’t painless, but once you get the hang of it — and nail down your workflow — it’ll become second nature.

This originally appeared on Descript.com.

The Best First Impression

Taking the time to evaluate and critique your resume is vital to putting your best foot forward, and as we’re all at a pause, there’s no better time to do that than now. That piece of paper (or PDF file) becomes your first impression to designers, production managers, and other employers. But all too often, even starting to write your resume can feel like an impossible task, all the more so because there’s very little standardization of what resumes should look like for theatre. Sifting through websites full of tips and helpful hints for a traditional, corporate-based structure can range from confusing to downright frustrating when you’re trying to apply it to a completely different world.

At its heart, your resume is telling a story. Where did you come from? What have you done? How have you progressed over your career? You’re just telling it in bullet points instead of prose. Anyone looking at your resume is trying to do a couple of things: they want to know what skills you have and what shows you’ve worked on, or people you’ve worked with, but they’re also looking for information about who you are and if you’d be a good fit for a team they’re building. Your resume gives them small indications of that based on its presentation: did you slap together a slipshod line of things you’ve worked on with your name pasted at the top? Or does it look like you took some time and pride in presenting yourself to potential employers?

Over the course of your career, you’ll end up with a couple of different versions of your resume. When you’re first starting out and have less experience, you may include some details about what your jobs entailed, but once you’re established in your career, your resume neatens up and becomes a list of the shows you’ve worked on and what your role was. Even then, you may still keep a couple of versions that focus on different skills or shows: one for design work, say, versus mixing work or production work.

For example, when I left college, I broke my experience into a few categories: Touring, A1, A2, Corporate, and Other Experience. It was busier than it needed to be and didn’t have much organization other than dividing up my experience. After working for a few years, my resume shifted to two categories: Touring and A1. That simplified things by getting rid of my college experience, starting to use sound designers instead of directors for shows, and formulating a better narrative. Instead of throwing every show I’d done into the mix, I used my touring experience to highlight my progression from an A2 to an A1, and selected certain shows I’d mixed off of tour to show that I’d worked at the same festival multiple years in a row (i.e., people wanted to work with me again).

Looking forward, if my goals shift to getting off the road (likely to find a mixing-focused job where I could stay in one location), I would make a new resume that focuses on my A1 experience (including touring, sit-downs, festivals, one-offs, etc.) and pushes my A2 work on the road into a less prominent category.

When you start writing your resume, it helps to break it down into manageable chunks. In my experience, most resumes have four general categories: Identifier, Experience, Skills/Education, and References.

 

Typically the hardest part of a resume to write is the Experience category. While your name, contact info, education, and skills are cut-and-dried lists, here you have to look through your jobs and sort out which ones you want to use. To help, start by asking yourself what story you want to tell:

Once you know what you’re going to put in your resume, here are some overall notes to keep in mind:

Finally, references. This can be the most important category of your resume. A first-hand account of your abilities and work ethic from a trusted source has more influence than any words on a page. This is another area where you can choose to personalize your resume on a job-by-job basis if you have mutual acquaintances with the reader.

Who to pick as a reference depends on your job. If you’re a designer, you want to choose directors or other designers you’ve worked with. As an A1 or A2, use designers, associates, or production colleagues. (I’ll use resident directors or music directors as well, but it’s better to prioritize other sound people first.) A2s can also use their A1s.

Always ask for permission before you include someone as a reference. It’s the polite and professional thing to do, and it lets them know you’re sending out resumes, especially if they might get a call. In the age of telemarketers and spam calls, all of us default to ignoring unknown numbers.

You should almost always include your references. The exceptions: if you’re sending resume-blasts out to a variety of potential jobs, or if you’re posting your resume online in a public forum where your references might not want their personal contact information displayed.

So, let’s take a look at a not-so-great resume:

 

And if we make some edits:

And this is what my actual resume looks like. “Other Experience” is simplified down to a list, and it’s simple, concise, and easy to skim:

Once you have a resume written, always double-check for typos, inconsistencies, etc. (Then have a friend check, or two or three to be on the safe side.) This is something you’ll constantly add to and change as you progress in your career. After doing research for this post, I went back and made several tweaks to a resume I’ve used for years. Eventually, your reputation may precede you enough that you don’t use your resume as much, but until then, make sure you make the best first impression you can.

Reasons Every Musician Needs An Audio Interface

 

The question many musicians ask is whether they need an audio interface to record their music through a computer. In principle, any device that has a sound card can record music: a cell phone, a tablet, a computer. You can plug your instruments directly into the computer or tablet using an adapter, use computer-generated virtual instruments, or even connect a USB microphone and record music without an audio interface.

So, if you can record music without an audio interface, why do you need one anyway? Here are the reasons why you should definitely invest in one.

You can connect several items simultaneously

An audio interface comes with a number of inputs and outputs. Inputs let you connect sources such as microphones and instruments to the interface; outputs send the audio on to its destination, such as a computer or monitor speakers.

With that in mind, an audio interface allows you to connect several instruments and microphones at the same time. You don’t need adapters to connect the instruments, and you don’t overwhelm the computer with a pile of individually connected devices.

It provides a high-quality sound

While you can record music on practically any device with a sound card, those devices are built to serve many other functions as well, so their recording quality suffers.

An audio interface can be compared to the sound card in a computer or cell phone, except that it is external hardware connected to the device and purpose-built for transporting audio in and out of the computer or tablet. Compared to the built-in sound card, an audio interface offers noticeably higher sound quality.

In addition, an audio interface includes other features, such as built-in preamps and phantom power, that help improve the tone and quality of vocals captured by a microphone.

It reduces latency

Recording music with a computer involves a chain of steps: the analog signal travels from the microphone to the audio interface, where it is converted to a digital signal. From there it is sent to the computer, where it is recorded and processed in the digital audio workstation (DAW). It is then sent back to the audio interface, converted back to an analog signal, and delivered to your headphones, where you finally hear the sound. The time this round trip takes is known as latency, and when it is too long, the music doesn’t sound right.
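To put rough numbers on it, each audio buffer adds its length in samples, divided by the sample rate, to the delay in each direction. Here is a minimal Python sketch of that arithmetic; the buffer sizes are illustrative assumptions, not measurements of any particular device.

```python
# Hypothetical illustration: round-trip latency from buffer size and sample rate.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One buffer's worth of delay, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# A built-in sound card often needs a large, safe buffer:
onboard = buffer_latency_ms(1024, 44100) * 2      # input + output
# A dedicated interface with efficient drivers can run a much smaller one:
interface = buffer_latency_ms(64, 44100) * 2      # input + output

print(f"Onboard sound card round trip: {onboard:.1f} ms")    # ~46.4 ms
print(f"Audio interface round trip:    {interface:.1f} ms")  # ~2.9 ms
```

At 44.1 kHz, a 1,024-sample buffer costs about 23 ms each way, which is an audible delay when monitoring yourself; a 64-sample buffer costs under 1.5 ms each way.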

As a musician, you obviously don’t want audible latency in your recordings.

Most computers are manufactured for numerous purposes, so they are not designed for low-latency performance. An audio interface, on the other hand, is a device dedicated to recording, and manufacturers design it with low latency in mind.

That being said, an audio interface reduces latency compared to recording without one.

It allows recording without an amplifier

Ordinarily, when you want to record a guitar or a bass, you need to place a microphone (or several) in front of an amplifier, then soundcheck until you have a good setup.

This is tedious, especially when you are itching to start recording. In addition, a guitar amp takes up space, so if you are recording in a tiny room, having an amp can be added stress. Lastly, when recording with an amplifier there is a high chance of picking up interference from the surroundings or from other instruments.

Thankfully, you don’t really need an amplifier, or any of that hassle, to start recording a guitar. The solution lies in an audio interface. Its built-in preamps are capable of amplifying the signal, and it lets you adjust the settings and tone of the guitar recording to your preference. Recording direct through an audio interface also eliminates interference, ensuring that you capture only the guitar.

Conclusion

As discussed above, an audio interface is the difference between basic sound and high-quality, professional sound. If you are serious about recording, you can’t afford not to have one.

There are various models on the market. Be sure to consider how many inputs and outputs you need when purchasing one, and don’t forget to check compatibility with your computer, the sound quality you want, and, of course, your budget.


Guest Blog by Tim McAllins

I’ve always loved music, and I’ve always loved sound. I started to produce music when I was about 2, with my mom’s pots and dishes, so it was not much of a surprise when I chose music as my professional career. Now I am more focused on making this industry a giving environment. I look forward to meeting young talented folks with great potential.


Healthy Ears Are Happy Ears

 

As we grow older, hearing loss is something we will all have to deal with in one way or another.   Human hearing deteriorates at different rates and severity for each of us.  As members of the audio community, it is extremely important for us to keep our ears in tip-top condition for as long as possible.  Here are a couple of tips to help your ears stay young and healthy.

Regular Ear Exams

Audio professionals should get their hearing checked at least once a year.  Hearing tests can be done with your physician during your annual physical, or by making an appointment with an audiologist.  If you are an avid industry convention attendee, chances are that there might be an audiology group providing free hearing screenings.  This has been the case over the years with AES, NAMM, USITT, and even LDI to name a few.  If circumstances prevent you from getting a hearing test in person, here are a few online options:

No Smoking

In 2019, Reuters News reported that smokers were 60% more likely to develop high-frequency hearing loss as compared to non-smokers.  Studies show that nicotine and cigarette smoke interfere with the neurotransmitters responsible for delivering sound information to the brain, irritate the Eustachian tube and lining of the middle ear, and can even cause tinnitus.  If you’re having a hard time quitting smoking, your physician can help map out a plan that works for you.  There are also many free resources designed to assist with quitting smoking.  Some of these are:

Ditch the Cotton Swabs

I have to admit that this one is hard for me. I hate the feeling of anything in my ears, including water, so I definitely gravitate toward the Q-Tips right after a shower, but this is probably the worst way to clean your ears. In fact, cotton swabs can make matters worse by pushing ear wax deeper into the ear, and they can even damage the eardrum. Physicians and audiologists suggest towel drying the parts of the ear you can reach, then using an at-home irrigation kit or over-the-counter drops that lubricate and soften ear wax to keep it from hardening and plugging up the ear canal. If your ear becomes plugged with wax and one of these methods is not doing the trick, your doctor can provide in-office irrigation. BTW, there is no science behind the effectiveness of ear candles; in fact, some studies have shown that ear candles can increase wax due to candle wax deposits. So, if you’re thinking that this will be a fun and natural solution for cleaning your ears, think again and stick with science.

Keep Them Covered

We all know that we need to be wearing earplugs or earmuffs at concerts, but loud music is not the only thing that will damage your ears. If you have to yell over a noise to be heard, it’s too loud for your naked ears. Construction sites, home power tools, electric yard tools, and even densely crowded areas (farmers’ markets, shopping malls, airports) all produce consistently high noise levels. For more information on choosing the right hearing protection for you, check out one of these links (and see the quick exposure-time calculation after them):

The National Institute for Occupational Safety and Health (NIOSH)

Hearing Protection for Musicians

SELECTING AND VALIDATING HEARING PROTECTION DEVICES PDF
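For a sense of how fast loud environments use up your safe listening time, NIOSH recommends limiting exposure to 85 dBA over an 8-hour day, with the safe duration halving for every 3 dB above that. A small Python sketch of that published rule of thumb:

```python
# NIOSH recommended exposure limit: 85 dBA for 8 hours, with safe time
# halving for every 3 dB above that (the 3-dB exchange rate).

def safe_exposure_hours(level_dba: float) -> float:
    return 8 / (2 ** ((level_dba - 85) / 3))

for level in (85, 94, 100):
    print(f"{level} dBA -> {safe_exposure_hours(level):.2f} hours")
# 85 dBA -> 8.00 hours; 94 dBA -> 1.00 hours; 100 dBA -> 0.25 hours
```

A lawnmower at around 94 dBA burns through a whole day’s allowance in about an hour, which is why protection matters well outside of concerts.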

These are just a few of the many things we can do to protect our ears. As audio professionals, our ears are our greatest asset, and we need to be our own advocates in protecting them. If you’re at a restaurant or bar that has loud music playing, it’s ok to ask an employee or manager to turn it down. We also have complete control over the electronics and tools we buy for our personal use. Most have a noise rating listed, and this can help inform what kind of device you want to have around your precious little moneymakers. We can’t stop the inevitable, but we can, and should, delay it. Long live the ears.

Dana Wachs – Sound Engineer, Tour Manager, Musician

Dana Wachs is a Brooklyn-based Audio Engineer, Tour Manager, and Composer/Musician. Dana started her career in music in 1994 as a bass player for the Dischord band Holy Rollers; a national tour supporting 7 Year Bitch ignited her interest in live sound. Her first foray into the practice of live sound began after that at the Black Cat in DC, and later the infamous 9:30 Club.

Dana’s first national tour was as TM/FOH for Peaches supporting Queens of the Stone Age in 2002.  Her first International tour quickly followed in 2003 with Cat Power.  Since then, touring has kept her on the road 9 to 11 months out of the year with bands such as MGMT, St. Vincent, M.I.A., Grizzly Bear, Foster the People, Nils Frahm, Deerhunter, and Jon Hopkins to name a few.

Outside of touring, Dana composes and performs under the name Vorhees, with two releases on Styles Upon Styles (Brooklyn), and is currently composing her first feature film score.

Although she is mainly a FOH engineer, she has also toured as a monitor engineer. Before the pandemic hit, she was on tour as TM/FOH for Yves Tumor, and before that she was working with Deerhunter and Jon Hopkins.

Early love for Music and Recording

Dana’s musical background started early: at age nine she began learning the cello, moving on to electric bass at 11, before being gifted an acoustic guitar. She is mainly self-taught, and learning to play a range of instruments helped build the foundation for the basics of recording.

Dana was also exposed to a wide range of music growing up, as her father worked in radio and sent her the promo top 40 LPs every month. Dana explains that “top 40 was everything from new wave to new jack swing, so the sonic representation was quite broad for the mainstream.” One year her father gave her a dual cassette recorder, and Dana would spend her allowance on blank tapes, recording snippets of songs from the radio and her favorite songs. Dana remembers this “led to me seeking out a 4-track recorder, and recording myself playing with the built-in microphone.”

Dana received a partial scholarship to Adelphi University for acting but spent more time in the radio station. She earned an FCC license while there (it used to be required even to be on college radio), learned how to cue records and tapes, and began an unhealthy relationship with collecting 7” records.

The Spark

I was on tour as a 19-year-old, playing bass for Holy Rollers (Dischord) on a 5 week US tour, supporting 7 Year Bitch.  I would watch 7YB’s set from the sound booth, often focusing on their sound engineer, Lisa Fay.  I began to ask her standard newbie questions (“What does that button do?”) and thankfully, she was not only patient but incredibly enthusiastic and encouraging.  Her passion really excited me, and I realized the potential of engineering as a creative role, and not just purely technical.

Audio was an obvious extension of my love for music. I considered audio as the representation of creativity in a tangible way, though of course, it can seem very abstract when first starting out.  Playing instruments as a kid, as well as vocal lessons, impressed upon me the importance of really listening, and not just hearing.  That aspect of experiencing music, rather than passively enjoying it from afar, was a cathartic epiphany.

Career Start

After that tour supporting 7 Year Bitch, I began to work at the Black Cat in DC.  I worked almost every service job I could (door person, barback, line chef) and saw more legendary shows than I can list.  One show that made a major impact on me was Stereolab supporting their album “Emperor Tomato Ketchup”.  This was probably ’95 or ’96?  Anyway, their FOH engineer had a bank of synthesizers that he’d play while he was mixing, and it suddenly clicked that THAT’S WHAT I WANT TO DO.  Eventually, I began to hang out at FOH on days off, and again would ask question after question (this time of Nick Pellicciotto, FOH for Fugazi and main house engineer at Black Cat), and was very graciously and patiently walked through signal chains, outboard gear, etc.

Eventually, I felt confident enough to offer my very amateurish skills for free to regional bands.  Without any formal education, I found that hands-on experience has been the best way for me to learn on the job.  I moved to New York in 1997 and began to intern at Greene Street Recordings, eventually assisting for the first time for a Pete Rock-produced track.  Shortly after that, Pro Tools became prosumer, and I transitioned back to live sound as that’s where the work was.  I became the regular house engineer at Tonic in the Lower East Side, mixing legends like John Zorn, Marc Ribot, and other luminaries of the downtown/experimental scene.

The Importance of Mentors

Besides Lisa and Nick as mentioned above, my first National tour was as TM/FOH for Peaches, opening for Queens of the Stone Age.  Their FOH, Hutch, was incredibly helpful and welcoming.

Because I am self-taught, I have recognized that self-education is a discipline in and of itself.  There are constant advances in the technology of the tools we use on the job, and without formal instruction, I can only rely on myself to seek out the education necessary to stay on top of the trade.

Career Now

What is a typical day like

Assuming I am on a bus tour and also TM, I wake up in my bunk anytime the bus parks, so I am first up.  Priorities are to send out the latest version of the day sheet to the touring party, get the bus driver checked into their hotel room asap, get day room keys for the crew/band, and shower/coffee before load in.

I’ll meet the production team during load-in and then act as the “catcher” as cases are brought into the venue.  I’ll drag/push/carry cases to their appropriate stage position as the backline tech sends gear in with local hands, and then, once the backline tech shows up, will work with the house FOH engineer to load my file and talk through the output patches, before moving back to the stage to place microphones.  I’m super particular about most microphone placements, so this gets checked and rechecked throughout soundcheck, as hey….we’re all human and occasionally things get shifted between musicians and stage crew.

I don’t typically use pre-recorded music to tune a system if I’m in a small theater/large club situation.  If I’m in a higher capacity venue, I will have playback and walk around all of the zones with a Lake EQ tablet, making slight adjustments as needed and letting the audio tech know if I hear any misalignments or time delay issues.

Then soundcheck. As long as it takes. I find I prefer to use soundcheck mainly to make sure meters are reading healthy and lines are clean. I double-check all my effects and request the band finish the soundcheck with the first song of the set, so we are all pre-set for the opening of the show. Most larger rooms will sound very different once the audience covers the floors, dampening reflective surfaces, so I’ll get my mix finalized during the first song and cruise from there.

Post-show, I immediately break down FOH as needed and collect microphones, paying special attention to odd/rare mic clips that sometimes are snatched with mic stands by local hands (even though EVERYTHING I use is labeled).  If the stage is crowded with our crew and local hands, I’ll run and fetch the team water and beer (if it’s a casual, non-disco load-out), and then help direct the load-out under the guidance of the production manager/backline tech.  Every tour is different, so I find my place and routine within the crew during the first couple of days.  Mainly, being present until the truck is loaded and/or the trailer is locked, is the goal.  Then, if I am also TM, it’s time to settle.  I’ll look over the contract at the top of the day and address any questions before doors.  At the time of settlement, I’ll hope to avoid an audit, but if expenses are not as detailed in the contract and advance, I will take the time to collect receipts.

Time to get on the bus.  If we are crossing a border any time in the next 20 hours, I will collect passports before we roll and leave them with the bus driver, as well as make sure they have the float they need and confirm the drive schedule.  Finally, if everyone seems content, it’s time for a glass of wine as I confirm the day sheet for the next morning.

How do you stay organized and focused

Any advance email has the date and venue in the subject line. If I get an email with several subjects in the body, I will answer separate matters in separate emails, with appropriate subject lines. It’s the ONLY way I can manage the thousands of emails that come through when tour managing. I’ll also use Google Calendar to keep deadline notes with early notifications, and I have a simple shorthand system for noting when I have first addressed a need, received a response, and accomplished the task. It makes it easier to search for unfinished tasks from my laptop or my phone.

Focus comes from not wanting to drop the ball, and while on tour, the momentum of the day, (and coffee), keeps me moving.  No matter how many moving parts there are at any given time, you can only manage one thing in that moment.  I do my best to prioritize, and then accomplish each goal in that order.  Occasionally I remember to breathe.

What do you enjoy the most about your job?

I love mixing, hands down, the best. It’s more creative for me than technical, and my favorite shows I’ve mixed remain indelible favorite memories.

What do you like least

When the lighting guy gets props from a drunk audience member for the show I just mixed.

What do you love about touring

I love international travel, and cherish the few days off in new locales to explore and eat my way through the local culture. Also, I now have friends all over the world, and getting to see them as I tour is the best.

What do you like least?

The lack of privacy and personal space can be hard, as well as trying to maintain personal and familial relationships at home.  True friends will of course understand.

What is your favorite day off activity?

Getting breakfast at the latest hour it is served before hitting a museum (I prefer art and history museums), a botanical garden stroll, or having a swim, and then eating and drinking local specials.  If we’re in a less specialized cuisine locale, sushi and sake, all day/every day.  A movie is a great way to escape from tour life for a couple of hours, and a nightcap at the hotel bar is always welcome.

What if any obstacles or barriers have you faced?

Especially at the beginning of my career, gender discrimination was a common occurrence. It’s less common now but unfortunately still occurs; even recently I was hit on by the house engineer during the show, which was distracting to say the least. I am also self-taught and have been challenged on some of my processes (which at times tied into gender discrimination). On my largest tour, I had a really awful audio tech. He’d sleep on the bus during soundcheck, but chastise me if I insisted my desk be tipped exactly parallel to the stage. Every obstacle on the road is heightened by the intense physical and mental toll of constant touring, but at the end of the day, the job needs to get done.

How have you dealt with them?

I avoid confrontations as much as possible. That said, when I was younger, my emotions would lead my actions, with the perceived injustice of the situation fueling frustration. As I grew, I learned to save my emotions for when I had some privacy (a ladies room stall is generally a good spot to take a couple of deep breaths). I am more secure in myself and my skills now, and if challenged in any way, I can calmly explain my position, firmly make the point that my position is well earned, and remind everyone that we are on the same team to make the show happen.

Advice you have for other women and young women who wish to enter the field

You really have to be passionate to withstand the rigors of touring, so make sure you love the work.  Know your goals and maintain a skill set that will put you in the position you seek. Don’t let anyone talk down to you (they will try!) and don’t be afraid to ask questions.

Must have skills

Curiosity, stamina, learn the basics of a standard toolkit, and before anything, please learn how to wrap a cable properly!

Favorite gear

I am well known for my FOH fx manipulation, and I absolutely LOVE all Eventide boxes to achieve unique sounds.  I’ve used everything from the H9 stompbox to my vintage H3000 at FOH for most of the bands I travel with.  Their reverbs are some of the most natural-sounding I’ve heard, though they can also provide the most mind-bending modulations, all with precise parameter control.  Tony Visconti said it best when recording David Bowie’s “Low” album, “It fucks with the fabric of time”.

What are your long-term goals?

I’ve been on the road for almost 25 years now, and do not intend to ever “quit”. When I develop a real relationship with a band, it can be quite rewarding. However, now with all tours decimated by the coronavirus, I am focusing more on my own music creation, as well as production and mixing jobs. I am thankful to have been hired to score my first feature film, so that is taking up most of my time now. I’ll record a new record this year as well (I’ve recorded and performed under the name Vorhees since 2009), and…well…we will see what happens.

Learn more about Dana 

 


(Not So) Basic Networking For Live Sound Engineers

Part Three: Networking Protocols

(or A History of IEEE Standards)

Read Part One Here

Read Part Two Here

Evaluating Applications

One thing I have learned from my do-it-yourself research in computer science, and have applied to understanding the world in general, is the concept of building on “levels of abstraction.” (Once again, here I am quoting Carrie Anne Philbin from the “Crash Course: Computer Science” YouTube series) [1]. From the laptop this blog was written on to a show performed in an arena, none of these things would be possible without a multitude of smaller parts working together to create a system. Whether it is an arena concert divided into different departments to execute the gig or a data network broken up into different steps in the OSI Model, we can take a complicated system and break it down into its component parts to understand how it works as a whole. The efficiency and innovation of this compartmentalization in technology lies in the fact that one person can work on just one section of the OSI Model (like the Transport Layer) without really needing to know anything about what’s happening on the other layers.

 

This is why I have spent all this time in the last two blogs of “Basic Networking For Live Sound Engineers” breaking the daunting concept of networking into smaller pieces, from defining what a network is to designing topologies, including VLANs and trunks. At this point, we have spent a lot of time talking about how everything from Cat6 cable to switches physically and conceptually works together. Now it’s time to really dive deep into the languages, or protocols, that these devices use to transmit audio. This is a fundamental piece of deciding on a network design, because one protocol may be more appropriate for a particular design than another. As we discuss how these protocols handle different aspects of a data packet differently, I want you to think about why one might be more beneficial in one situation versus another. After all, so many factors go into the design of a system, from working within pre-existing infrastructures to building networks from scratch, that we must take these variables into account in our design decisions. As the old joke in live entertainment goes: you can have cheap, efficient, or quality. Pick two.

What Is In A Packet, Really?

As a quick refresher from Part 2, data gets encapsulated in a process that forms a header and body for each unit of data. How you define each section, and whether the result is called a “packet” or a “frame,” depends on which layer of the OSI Model you are referring to.

Basic structure of a data packet…or do I mean frame? It depends!!

 

This back and forth of terminology seemed really confusing until I read a thread on Stack Exchange pointing out that the combination of the header and data is called a frame at Layer 2 and a packet at Layer 3 [2]. The change in terminology corresponds to the different additions made during the encapsulation process at each layer of the OSI Model.

In an article by Alison Quine on “How Encapsulation Works Within the TCP/IP Model,” the encapsulation process involves adding headers onto a body of data at each step, starting from the top of the OSI Model at the Application Layer and moving down to the Physical Layer, then stripping off each of those headers as the data moves back up the OSI Model in reverse [3]. That means that at each layer of the OSI Model, another header gets added to help the data get to the right place. Audinate’s Dante Level 3 training on “IP Encapsulation” walks through this process in a network stack. At the Application Layer, we start with a piece of data. At the Transport Layer, the source port, destination port, and transport protocol attach to the data, or payload. At the Network Layer, the destination and source IP addresses add on top of what already exists from the Transport Layer. Then at the Data Link Layer, the destination and source MAC addresses attach on top of everything else in the frame by referencing an ARP table [4]. ARP, or Address Resolution Protocol, uses message requests to build tables in devices (like a switch, for example) that match IP addresses to MAC addresses, and vice versa.
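A toy sketch in Python makes the nesting visible. The addresses, ports, and payload below are invented for illustration; real stacks build these headers as packed bytes, not dictionaries.

```python
# Toy model of encapsulation: each layer wraps the layer above with its own
# header, in the Transport -> Network -> Data Link order described above.

payload = b"audio sample data"

# Transport Layer: source and destination ports ride with the payload.
segment = {"src_port": 5004, "dst_port": 5004, "data": payload}

# Network Layer: destination and source IP addresses wrap the segment.
packet = {"src_ip": "192.168.1.10", "dst_ip": "192.168.1.20", "data": segment}

# Data Link Layer: destination and source MAC addresses (found via ARP).
frame = {"src_mac": "00:1d:c1:aa:bb:cc", "dst_mac": "00:1d:c1:dd:ee:ff",
         "data": packet}

# The receiver strips the headers in reverse order on the way back up.
received = frame["data"]["data"]["data"]
assert received == payload
```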

So I want to pause for a second before we move onward to really drive the point home that the OSI Model is a conceptual tool used for educational purposes to talk about different aspects of networking. For example, you can use the OSI Model to understand network protocols or understand different types of switches. The point is we are using it here to understand the signal flow in the encapsulation process of data, just as you would look at a chart of signal flow for a mixer.

Check 1, Check 2…

There is the old adage that time equals money, but the reality of working in live sound is that time is of the essence. Lost audio packets that create jitter or audibly delayed sound (our brains are very good at detecting time differences) are not acceptable. So it goes without saying that data has to arrive as close to synchronously as possible. In my previous blog on clocks, I talked about the importance of different digital audio devices starting their sampling at the same rate based on a leader clock (also referred to as a master clock) in order to preserve the original waveform. An accurate clock is important in preserving the word length, or bits, of the data. Let’s look at this example:

 

1010001111001110

1010001111001110

 

In this example, we have two 16 bit words which represent two copies of the same sample of data traveling between two devices that are in sync because of the same clock. Now, what happens if the clock is off by just one bit?

If the sample is off by even just one bit, the whole word gets shifted and produces an entirely different value altogether! This manifests itself as digital artifacts, jitter, or no signal at all. Move up a “level of abstraction” to the data packet at the Network Layer of the OSI Model and you can see why packets need to arrive on time in a network: otherwise bits of data get lost, or packets collide and, in the worst case, create a broadcast storm. But as I’ve mentioned before, UDP and TCP/IP handle data accuracy and timing differences differently.
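Here is that one-bit slip in runnable form, using the 16-bit word from the example above; the shift direction is arbitrary, the point is how thoroughly the value changes.

```python
# One-bit slip demo: the same 16-bit sample, read aligned and read
# shifted by a single bit position.

word = 0b1010001111001110            # the sample from the example above
slipped = (word << 1) & 0xFFFF       # every bit moves; a new bit enters

print(f"{word:016b} = {word}")        # 1010001111001110 = 41934
print(f"{slipped:016b} = {slipped}")  # 0100011110011100 = 18332
```

A single-position slip turns 41934 into 18332, which is why clock alignment matters down to the individual bit.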

 

Recall from Part 2 that TCP/IP checks for a “handshake” between the receiver and sender to validate the data transmission at the cost of time, while UDP decreases transmission time in exchange for not doing this back and forth validation. In an article from LearnCisco on “Understanding the TCP/IP Transport Layer,” TCP/IP is a “connection-oriented protocol” that requires adding more processes into the header to verify the “handshake” between the sender and receiver [5]. On the other hand, UDP acts as a “connectionless protocol”:

[…] there will be some error checking in the form of checksums that go along with the packet to verify integrity of those packets. There is also a pseudo-header or small header that includes source and destination ports. And so, if the service is not running on a specific machine, then UDP will return an error message saying that the service is not available. [5]

So instead of verifying that the data made it to the destination, UDP checks that the packet’s integrity is solid and that there is a path available for it to take. If there is no available path, the packet just won’t get sent. Because of the minimal “error checking” in UDP, it is imperative that packets arrive at their correct destination, and on time.
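For a feel of how bare-bones this is, here is a minimal sketch using Python’s standard socket module; the address and port are invented.

```python
# UDP is "fire and forget": no handshake, no delivery confirmation.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # SOCK_DGRAM = UDP

# The OS attaches the source/destination ports and a checksum, then sends.
# Nothing verifies that a receiver exists or that the datagram arrived.
sock.sendto(b"audio packet payload", ("192.168.1.20", 5004))
sock.close()
```

So how does a network actually keep time? In reference to what?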

Time, Media Clocking, and PTP

Let’s get philosophical for a moment and talk about the abstraction of time. I have a calendar on my phone where I schedule events and reminders based on a day divided into hours and minutes. This division into hours and minutes is arguably pointless without a reference to some standard of time, which in this case is the clock on my phone. I assume that the clock inside my phone is accurate in relation to a greater reference of time wherever I am located. The standard for civil time is UTC, or “Coordinated Universal Time,” which is a compromise between the TAI standard, based on atomic clocks, and UT1, based on the average solar day, reconciled through leap seconds [6]. In order for me to have a Zoom call with someone in another time zone, we need a reference to the same moment wherever we are: it doesn’t matter that I say our Zoom call is at 12 pm Pacific Standard Time while they think it is at 3 pm Eastern Standard Time, as long as our clocks have the same ultimate point of reference, which for us civilians is UTC. In this same sense, digital devices need a media clock with reference to a common master (but we are going to update this term to leader) in order to make sure data gets transmitted without the bit-slippage we discussed earlier.
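That “same moment, different labels” idea is easy to demonstrate with Python’s standard zoneinfo module; the meeting time below is invented.

```python
# One instant, two local renderings, both anchored to the same UTC reference.

from datetime import datetime
from zoneinfo import ZoneInfo

call = datetime(2021, 6, 1, 19, 0, tzinfo=ZoneInfo("UTC"))  # 19:00 UTC

print(call.astimezone(ZoneInfo("America/Los_Angeles")))  # 2021-06-01 12:00:00-07:00
print(call.astimezone(ZoneInfo("America/New_York")))     # 2021-06-01 15:00:00-04:00
```

Both parties show up at the same instant even though their wall clocks disagree, because both clocks derive from the same reference.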

 

In a white paper titled “Media Clock Synchronization Based On PTP” from the Audio Engineering Society 44th International Conference in San Diego, Hans Weibel and Stefan Heinzmann note that, “In a networked media system it is desirable to use the network itself for ensuring synchronization, rather than requiring a separate clock distribution system that uses its own wiring” [7]. This is where PTP or Precision Time Protocol comes in. The IEEE (Institute of Electrical and Electronics Engineers) 1588 standardized this protocol in 2002, and expanded it further in 2008 [7]. The 2002 standard created PTPv1 that works using UDP on a level of microsecond accuracy by sending sync messages between leader and follower clocks. As described in the Weibel and Heinzmann paper, on the Application layer follower nodes compare their local clocks to the sync messages sent by the leader and adjust their clocks to match while also taking into account the absolute time offset in the delay between the leader and follower [7]. Say we have two Devices A and B:

 

Device A (our leader for all intents and purposes) sends a Sync message to Device B saying, “This is what time it is: 11:00 A.M.”

Device A then sends a Follow_Up message: “To be precise, that Sync left at exactly 11:00:00.”

Device B noted that its own clock read 12:06 P.M. when the Sync arrived. It replies with a Delay_Request message: “Got it. And what time does this message reach you? I’m sending it at 12:10 P.M.”

Device A answers with a Delay_Response message: “Your request reached me at 11:06 A.M.”

Device B: “Ok, I’ll adjust.”

Analogy of clocking communication in PTPv1 as described in IEEE 1588-2002

This back and forth allows followers to adjust their clocks to whichever clock is considered the leader according to the best master clock algorithm (which should be renamed the best leader clock algorithm), with the ultimate reference being the grandmaster clock/grandleader clock [8]. Fun fact: in the Weibel and Heinzmann paper, they point out that “the epoch of the PTP time scale is midnight on 1 January TAI. A sampling point coinciding with this point in absolute time is said to have zero phase” [9].
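The arithmetic behind that exchange is the classic IEEE 1588 two-step calculation, which assumes the path delay is the same in both directions. Sketched in Python, with the analogy’s times converted to minutes past midnight:

```python
# t1: leader sends Sync               t2: follower receives Sync
# t3: follower sends Delay_Request    t4: leader receives Delay_Request

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2  # follower clock minus leader clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay (assumed symmetric)
    return offset, delay

# 11:00 -> 660, 12:06 -> 726, 12:10 -> 730, 11:06 -> 666 (minutes past midnight)
offset, delay = ptp_offset_and_delay(660, 726, 730, 666)
print(f"offset = {offset} min, delay = {delay} min")  # offset = 65.0, delay = 1.0
```

Device B concludes it is running 65 minutes fast with a one-minute path delay, and sets itself back accordingly; real PTP does the same math in nanoseconds.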

In 2008, the standard was updated to PTPv2, which of course is not backwards compatible with PTPv1 [10]. This update changed how clock quality is determined, went from all PTP messages being multicast in v1 to having the option of unicast in v2, improved clocking accuracy from microseconds to nanoseconds, and introduced transparent clocks. The 1588-2002 standard defined an ordinary clock as a device or clock node with one port, while boundary clocks have two or more ports [11]. Switches and routers can be examples of boundary clocks, while end-point devices, including audio equipment, can be examples of ordinary clocks. A Luminex article titled “PTPv2 Timing protocol in AV Networks” describes how “[a] Transparent Clock will calculate how long packets have spent inside of itself and add a correction for that to the packets as they leave. In that sense, the [boundary clock] becomes ‘transparent’ in time, as if it is not contributing to delay in the network” [12]. PTPv2 also improves on the Sync message system by adding an announce message scheme for electing the grandmaster/grandleader clock. The Luminex article illustrates this by describing how a PTPv2 device starts up in a “listening” state, waiting for announce messages that include information about clock quality, for a set amount of time called the Announce Timeout Interval. If no messages arrive, that device becomes the leader. Yet if it receives an announce message indicating that another clock has superior quality, it reverts to a follower and makes the other device the leader [13].
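That election behavior reduces to a simple rule, sketched below. Clock “quality” is collapsed to a single number here purely for illustration; the real best master clock algorithm compares several fields (priority, clock class, accuracy, variance) in a fixed order.

```python
# Toy model of the announce/election behavior described above.

def elect_role(local_quality, announced_qualities, timeout_expired):
    if not announced_qualities:
        # Still listening; once the Announce Timeout Interval passes with
        # no announce messages heard, this device assumes the leader role.
        return "leader" if timeout_expired else "listening"
    best = min(announced_qualities)       # lower number = better clock here
    return "follower" if best < local_quality else "leader"

print(elect_role(10, [], timeout_expired=True))         # leader
print(elect_role(10, [5, 12], timeout_expired=False))   # follower: 5 beats 10
```

It is these differences in the handling of clocking between IEEE 1588-2002 and 2008 that will be key to understanding the underlying difference when talking about Dante versus AVB.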

Dante, AVB, AES67, RAVENNA, and Milan

Much like the battles between Blu-ray, HD DVD, and other contending audiovisual formats, you can bet there has been a struggle over the years to create a manufacturer-independent standard for the audio-over-IP networking protocols used in the audio world. The two major players that have come out on top in terms of widespread use in the audio industry are AVB and Dante. AES67 and RAVENNA are popular as well, with RAVENNA dominating the world of broadcast.

Dante, created by the company Audinate, began in 2003 with the key principle that still makes the protocol appealing today: the ability to use pre-existing IT infrastructure to distribute audio over a network [14]. Its other major draw is redundancy, which makes it particularly attractive to the world of live production. In a Dante network you can set up a primary and a secondary network, the secondary being an identical “copy” of the primary, so that if the primary network fails, everything switches over seamlessly to the secondary. Dante works at the Network Layer (Layer 3) of the OSI Model, resting on top of the IP addressing schemes already in place in a standard IT networking system. It’s understandable, financially, why a major corporate office would want to use this protocol: think of the savings from not overhauling the entire infrastructure of an office building to put in new switches, upgrade topologies, and so on.

An example of a basic Dante Network with redundant primary (blue) and secondary (red) networks

The adaptable nature of Dante comes from existing as a Layer 3 protocol, which allows one to use most Gigabit switches and sometimes even 100Mbps switches to distribute a Dante network (but only if it’s solely a 100Mbps network) [15]. That being said, there are some caveats. It is strongly recommended (and in 100Mbps networks, mandatory) to use specific Quality of Service (QoS) settings when configuring managed switches (switches whose ports and other features are configurable, usually via a software GUI) for Dante. This includes flagging specific DSCP values that matter to Dante traffic as high priority, including our friend PTP. Other network traffic can exist alongside Dante traffic on a network as long as the subnets are configured correctly (for more on subnets, see Part 1 of this blog series). I personally prefer configuring separate VLANs for dedicated network control traffic and for Dante to keep the waters clear between the two; thanks to QoS I know control traffic will not be prioritized over Dante traffic, and Dante was made for this kind of coexistence anyway, so as long as your subnets are configured correctly it should be fine. The bigger issue is that with Dante using PTPv1, even with proper QoS settings the clock precision can get choked if there are bandwidth problems. The Luminex article mentioned earlier discusses this: “Clock precision can still be affected by the volume of traffic and how much contention there is for priority. Thus; PTP clock messages can get stuck and delayed in the backbone; in the switches between your devices” [16].
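As an aside on the subnet point above, Python’s standard ipaddress module is a handy way to sanity-check that kind of separation on paper. The addresses and VLAN roles below are invented for illustration:

```python
# Checking that primary, secondary, and control networks stay separate.

import ipaddress

primary = ipaddress.ip_network("192.168.10.0/24")    # Dante primary VLAN
secondary = ipaddress.ip_network("192.168.20.0/24")  # Dante secondary VLAN
control = ipaddress.ip_network("192.168.30.0/24")    # console/control VLAN

device = ipaddress.ip_address("192.168.10.41")       # a Dante device's primary port

assert device in primary                 # the device lives on the primary subnet
assert not primary.overlaps(secondary)   # redundant networks must never mix
assert not primary.overlaps(control)     # control traffic stays off the audio subnet
```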

Since Dante uses PTPv1, it will find the best device on the network to be the Master (Leader) Clock, using PTP as the clocking system for the entire network, and if that device drops out, it will elect a new Master (Leader) Clock based on the parameters we discussed for PTPv1. This can also be configured manually if necessary. According to the 1588-2008 standard, PTPv2 was not backwards compatible with PTPv1, but ANOTHER revision of the standard in 2019 (IEEE 1588-2019) added backwards compatibility [17]. AES67, RAVENNA, and AVB use PTPv2 (although AVB uses its own profile of IEEE 1588-2008, which we will talk about later). A Shure article on “Dante And AES67 Clocking In Depth” points out that PTPv1 and PTPv2 can “coexist on the same network,” but “[i]f there is a higher precision PTPv2 clock on a network, then one Dante device will synchronize to the higher-precision PTPv2 clock and act as a Boundary Clock for PTPv1 devices” [18]. So end devices that support PTPv2 introduce backwards compatibility with PTPv1, but because these Layer 3 networks rely on standard network infrastructure, it’s not as easy to find switches capable of handling both PTPv1 and PTPv2. On top of that, there is the juggling act of keeping track of which devices are using which clocking system, and you can imagine that as this scales upward, it becomes a bigger and bigger headache to manage.

AES67 and RAVENNA use PTPv2 as well, and try to address some of these issues with improvements that don’t reinvent the wheel. AES67 and RAVENNA also operate as Layer 3 protocols on top of standard IP networks, but they were created by different organizations. The Audio Engineering Society published the standards outlining AES67 in 2013, with revisions thereafter [19]. The goal of AES67 is a set of standards that allows for interoperability between devices, a concept we will see come up again when we talk about AVB in more depth, though AES67 applies it differently: it uses pre-existing standards from the IEEE and IETF (Internet Engineering Task Force) to make a higher-performing audio networking protocol. What’s interesting is that because AES67 shares many of the same underlying standards as RAVENNA, RAVENNA supports a profile of AES67 [20]. RAVENNA is an audio-over-IP protocol particularly popular in the broadcast world. Its place as the standard in broadcasting comes from its flexibility: it can transport a multitude of different data formats and sampling rates for both audio and video, with low latency and support for WAN connections [21]. So as technology improves, new protocols keep being made to accommodate the new advances, and one starts to wonder: why not revise the standards themselves instead of trying to make the products reflect an ever-changing industry? AES67 partly addresses this by using the latest IEEE and IETF standards, but maybe the solution is deeper than that. Well, that’s exactly what happened with the creation of AVB.

AVB stands for Audio Video Bridging, and it differs from Dante on a fundamental level because it is a Data Link, Layer 2 protocol, whereas Dante is a Network, Layer 3 protocol. Since these standards operate at Layer 2, a switch must be designed for AVB in order to be compatible with the standards at that fundamental level. This brings in an OSI Model way of thinking about switches designed for a Layer 2 implementation versus a Layer 3 implementation. In fact, the concept behind AVB stemmed from the need to “standardize” audio-over-IP so that compatible devices could talk to each other across different manufacturers. Dante, being owned by a company, requires specific licensing for devices to be “Dante-enabled.” The IEEE wanted to create standards for AVB to ensure compatibility across all devices on the network regardless of manufacturer. AVB-compatible switches have been notoriously orders of magnitude more expensive than the more common, run-of-the-mill TCP/IP switch, so cost has often been seen as a roadblock to AVB deployments: replacing an infrastructure of more common (read: cheaper) Layer 3 switches with AVB-compatible (read: more expensive) Layer 2 switches is a hard sell.

When talking about most networking protocols, especially AVB, the discussion dives into layers and layers of standards and revisions. AVB in and of itself refers to the IEEE 802.1 set of standards, along with others outlined in IEEE 1722 and IEEE 1733 [22]. I know all this talk of IEEE standards gets really confusing, so it helps to remember that there is a hierarchy to it. In an AES white paper by Axel Holzinger and Andreas Hildebrand with a very long title, “Realtime Linear Audio Distribution Over Networks: A Comparison of Layer 2 And 3 Solutions Using The Example Of Ethernet AVB And RAVENNA,” they lay out the four AVB protocols in 802.1:

IEEE 802.1AS – Timing and Synchronization for Time-Sensitive Applications (gPTP)

IEEE 802.1Qat – Stream Reservation Protocol (SRP)

IEEE 802.1Qav – Forwarding and Queuing for Time-Sensitive Streams

IEEE 802.1BA – Audio Video Bridging Systems

It’s important here to stop and go over some new terminology when discussing devices in an AVB domain since it is Layer 2, after all. Instead of talking about a network, senders, receivers, and switches we are going to replace the same consecutive terms with domain, talkers, listeners, and bridges [24].

An example of a basic AVB network

IEEE 802.1AS is essentially an AVB-specific profile of the IEEE 1588 standards for PTPv2. One edition of this standard, IEEE 802.1AS-2011, introduces gPTP (or "generalized PTP"). Used in conjunction with IEEE 1722-2011, gPTP adds a presentation time for media data, which indicates "when the rendered media data shall be presented to the viewer or listener" [25]. If this research has taught me anything, it's that the IEEE loves nesting new standards inside other standards like a convoluted set of Russian dolls. The Stream Reservation Protocol (SRP, also known as IEEE 802.1Qat) is the key that makes AVB stand out from other network protocols: it allows endpoints in the network to check routes and reserve bandwidth, and it "checks end-to-end bandwidth availability before an A/V stream starts" [26]. In other words, data won't be sent until stream bandwidth is available, and the endpoints decide the best route through the domain. In a Dante deployment, daisy-chaining additional switches increases overall network latency with every hop and can force you to reevaluate the network topology entirely; Dante latency is set per device and depends on the size of the network. With AVB, thanks to SRP and its QoS improvements, the bandwidth reservation is announced through the network and latency stays lower even in large deployments.
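To make that bandwidth-reservation idea concrete, here's a toy sketch in Python. To be clear, this is not the actual 802.1Qat state machine; the Bridge class, the two-phase reserve_stream function, and all the numbers are invented for illustration. The point is the behavior described above: nothing is committed until every hop on the path confirms it can carry the stream.

    # Toy illustration of the SRP idea: a talker's advertisement only
    # becomes a reservation if every bridge on the path to the listener
    # can commit the requested bandwidth.

    class Bridge:
        def __init__(self, name, capacity_mbps):
            self.name = name
            self.available = capacity_mbps  # bandwidth still unreserved on this hop

    def reserve_stream(path, stream_mbps):
        """Check end-to-end bandwidth before committing anything."""
        # Phase 1: advertise -- verify every hop can carry the stream.
        for bridge in path:
            if bridge.available < stream_mbps:
                print(f"Reservation failed at {bridge.name}")
                return False
        # Phase 2: commit -- only now is bandwidth actually set aside.
        for bridge in path:
            bridge.available -= stream_mbps
        print(f"Reserved {stream_mbps} Mbps across {len(path)} hops")
        return True

    path = [Bridge("bridge-A", 1000), Bridge("bridge-B", 1000)]
    reserve_stream(path, 50)    # succeeds: both hops have headroom
    reserve_stream(path, 2000)  # fails before anything is committed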

The robustness and fast communications of AVB networks have made them more common, thanks to their ability, as the name implies, to carry audio, video, and data on the same network. The problem with all of these network protocols follows the logic of Moore's Law: as you can probably tell from all the IEEE revisions I've been listing, these technologies improve and get revised very quickly. Because technology advances at such a blinding pace, it's no wonder gear manufacturers haven't been able to settle on a common standard the way they settled on, say, the XLR cable. This is where the newest addition to the onslaught of protocols comes in: Milan.

The AVB standards kept developing, much like the revisions of IEEE 1588, and that development has led to the latest addition to AVB technology: Milan. Developed in collaboration with some of the biggest names in the business, Milan is a subset of standards within the overarching protocol of AVB. Among other features, Milan includes a primary/secondary redundancy scheme like Dante's, which was not available in previous AVB networks. The key is that Milan is open source, meaning manufacturers can develop their own implementations of Milan specific to their gear, as long as they follow the outlined standards [27]. That is pretty huge when you consider how many different networking protocols are in use across different pieces of gear in the audio industry. Avnu Alliance, the organization of collaborating manufacturers that developed Milan, has put together the Milan specifications with the idea that any product released with a "Milan-ready" certification, or a badge of that nature, will be able to talk to any other over a Milan network [28].

 

A Note On OSC And The Future

Before we conclude our journey through the world of networking, I want to take a minute for OSC. Open Sound Control, or OSC, is an open-source communications protocol originally designed for use with electronic music instruments; it has since expanded to streamline communication in everything from controlling synthesizers, to connecting movement trackers with software programs, to driving virtual reality [29]. It is not an audio transport protocol; it is used for device communication, filling a role similar to MIDI's but carried over IP networks. I think this is a great place to end, because OSC is a great example of the power of open-source technology. Its versatility and open-source nature have allowed programs from small to large to implement the protocol, and it is a testament to how workflows improve when everyone has the ability to contribute changes. We've spent this entire blog talking about the many standards that have been created over the years to improve on earlier technology, yet progress keeps gridlocking: by the time a standard is actually enacted, the technology has already moved past it.
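As a quick taste of how approachable OSC is, here's a minimal sketch using the third-party python-osc library (pip install python-osc). The address path /synth/freq is made up for this example; every OSC-speaking app defines its own address space.

    # Sending and receiving a single OSC message with python-osc.

    from pythonosc.udp_client import SimpleUDPClient
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    # Receiver: map an OSC address pattern to a handler function.
    def print_freq(address, *args):
        print(f"{address}: {args}")

    dispatcher = Dispatcher()
    dispatcher.map("/synth/freq", print_freq)
    server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)

    # Sender: fire a message at the receiver over UDP/IP.
    client = SimpleUDPClient("127.0.0.1", 9000)
    client.send_message("/synth/freq", 440.0)

    server.handle_request()  # process the one pending message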

 

So maybe it’s time for something different.

Maybe the open-source nature of Milan and OSC is the way of the future. If everyone can put their heads together to develop specifications that are fluid and open to change, rather than restricted by the rigidity of bureaucracy, maybe hardware will finally keep pace with the minds of the people using it.

Endnotes

[1] https://www.youtube.com/playlist?list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo

[2] https://networkengineering.stackexchange.com/questions/35016/whats-the-difference-between-frame-packet-and-payload

[3] https://www.itprc.com/how-encapsulation-works-within-the-tcpip-model/

[4] https://youtu.be/9glJEQ1lNy0

[5] https://www.learncisco.net/courses/icnd-1/building-a-network/tcpip-transport-layer.html

[6] https://www.iol.unh.edu/sites/default/files/knowledgebase/1588/ptp_overview.pdf

[7] https://www.aes.org/e-lib/browse.cfm?elib=16146 (pages 1-2)

[8] https://www.nist.gov/system/files/documents/el/isd/ieee/tutorial-basic.pdf

[9] https://www.aes.org/e-lib/browse.cfm?elib=16146 (page 5)

[10] https://en.wikipedia.org/wiki/Precision_Time_Protocol

[11] https://community.cambiumnetworks.com/t5/PTP-FAQ/IEEE-1588-What-s-the-difference-between-a-Boundary-Clock-and/td-p/50392

[12] https://www.luminex.be/improve-your-timekeeping-with-ptpv2/

[13] Ibid.

[14] https://www.audinate.com/company/about/history

[15] https://www.audinate.com/support/networks-and-switches

[16] https://www.luminex.be/improve-your-timekeeping-with-ptpv2/

[17] https://en.wikipedia.org/wiki/Precision_Time_Protocol

[18] https://service.shure.com/s/article/dante-and-aes-clocking-in-depth?language=en_US

[19] https://www.ravenna-network.com/app/download/13999773923/AES67%20and%20RAVENNA%20in%20a%20nutshell.pdf?t=1559740374

[20] Ibid.

[21] https://www.ravenna-network.com/using-ravenna/overview

[22] Kreifeldt, R. (2009, July 30). AVB for Professional A/V Use [White paper]. Avnu Alliance.

[23] https://www.aes.org/e-lib/browse.cfm?elib=16147

[24] Ibid.

[25] https://www.aes.org/e-lib/browse.cfm?elib=16146 (page 6)

[26] Kreifeldt, R. (2009, July 30). AVB for Professional A/V Use [White paper]. Avnu Alliance.

[27] https://avnu.org/wp-content/uploads/2014/05/Milan-Whitepaper_FINAL-1.pdf (page 7)

[28] https://avnu.org/specifications/

[29] http://opensoundcontrol.org/osc-application-areas

 

Resources

Audinate. (2018, July 5). Dante Certification Program – Level 3 – Module 5: IP Encapsulation [Video]. YouTube.

https://www.youtube.com/watch?v=9glJEQ1lNy0&list=PLLvRirFt63Gc6FCnGVyZrqQpp73ngToBz&index=5

Audinate. (2018, July 5). Dante Certification Program – Level 3 – Module 8: ARP [Video]. YouTube. https://www.youtube.com/watch?v=x4l8Q4JwtXQ

Audinate. (2018, July 5). Dante Certification Program – Level 3 – Module 23: Advanced Clocking [Video]. YouTube.

https://www.youtube.com/watch?v=a7Y3IYr5iMs&list=PLLvRirFt63Gc6FCnGVyZrqQpp73ngToBz&index=23

Audinate. (2019, December). The Relationship Between Dante, AES67, and SMPTE ST 2110 [White paper]. Uploaded to Scribd. Retrieved from

https://www.scribd.com/document/439524961/Audinate-Dante-Domain-Manager-Broadcast-Aes67-Smpte-2110

Audinate. (n.d.). History. https://www.audinate.com/company/about/history

Audinate. (n.d.). Networks and Switches.

https://www.audinate.com/support/networks-and-switches

Avnu Alliance. (n.d.). Avnu Alliance Test Plans and Specifications.

https://avnu.org/specifications/

Bakker, R., Cooper, A. & Kitagawa, A. (2014). An introduction to networked audio [White paper]. Yamaha Commercial Audio. Retrieved from

https://download.yamaha.com/files/tcm:39-322551

Cambium Networks Community [Mark Thomas]. (2016, February 19). IEEE 1588: What’s the difference between a Boundary Clock and Transparent Clock? [Online forum post]. https://community.cambiumnetworks.com/t5/PTP-FAQ/IEEE-1588-What-s-the-difference-between-a-Boundary-Clock-and/td-p/50392

Cisco Meraki. (n.d.). Layer 3 vs Layer 2 Switching.

https://documentation.meraki.com/MS/Layer_3_Switching/Layer_3_vs_Layer_2_Switching

Crash Course. (2020, March 19). Computer Science [Video Playlist]. YouTube. https://www.youtube.com/playlist?list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo

Eidson, J. (2005, October 10). IEEE 1588 Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems [PDF of slides]. Agilent Technologies. Retrieved from

https://www.nist.gov/system/files/documents/el/isd/ieee/tutorial-basic.pdf

Garner, G. (2010, May 28). IEEE 802.1AS and IEEE 1588 [Lecture slides]. Presented at Joint ITU-T/IEEE Workshop on The Future of Ethernet Transport, Geneva 28 May 2010. Retrieved from https://www.itu.int/dms_pub/itu-t/oth/06/38/T06380000040002PDFE.pdf

Holzinger, A. & Hildebrand, A. (2011, November). Realtime Linear Audio Distribution Over Networks A Comparison Of Layer 2 And Layer 3 Solutions Using The Example Of Ethernet AVB And RAVENNA [White paper]. Presented at the AES 44th International Conference, San Diego, CA, 2011 November 18-20. Retrieved from https://www.aes.org/e-lib/browse.cfm?elib=16147

Johns, I. (2017, July). Ethernet Audio. Sound On Sound. Retrieved from https://www.soundonsound.com/techniques/ethernet-audio

Kreifeldt, R. (2009, July 30). AVB for Professional A/V Use [White paper]. Avnu Alliance.

Laird, J. (2012, July). PTP Background and Overview. University of New Hampshire InterOperability Laboratory. Retrieved from

https://www.iol.unh.edu/sites/default/files/knowledgebase/1588/ptp_overview.pdf

LearnCisco. (n.d.). Understanding The TCP/IP Transport Layer.

https://www.learncisco.net/courses/icnd-1/building-a-network/tcpip-transport-layer.html

LearnLinux. (n.d.). ARP and the ARP table.

http://www.learnlinux.org.za/courses/build/net-admin/ch03s05.html

Luminex. (2017, June 6). PTPv2 Timing protocol in AV networks. https://www.luminex.be/improve-your-timekeeping-with-ptpv2/

Milan Avnu. (2019, November). Milan: A Networked AV System Architecture [PDF of slides].

Mullins, M. (2001, July 2). Exploring the anatomy of a data packet. TechRepublic. https://www.techrepublic.com/article/exploring-the-anatomy-of-a-data-packet/

Network Engineering [radiantshaw]. (2016, September 18). What’s the difference between Frame, Packet, and Payload? [Online forum post]. Stack Exchange.

https://networkengineering.stackexchange.com/questions/35016/whats-the-difference-between-frame-packet-and-payload

Opensoundcontrol.org. (n.d.). OSC Application Areas. Retrieved August 10, 2020 from http://opensoundcontrol.org/osc-application-areas

Perales, V. & Kaltheuner, H. (2018, June 1). Milan Whitepaper [White Paper]. Avnu Alliance. https://avnu.org/wp-content/uploads/2014/05/Milan-Whitepaper_FINAL-1.pdf

Precision Time Protocol. (n.d.). In Wikipedia. Retrieved August 10, 2020, from https://en.wikipedia.org/wiki/Precision_Time_Protocol

Presonus. (n.d.). Can Dante enabled devices exist with other AVB devices on my network? https://support.presonus.com/hc/en-us/articles/210048823-Can-Dante-enabled-devices-exist-with-other-AVB-devices-on-my-network-

Quine, A. (2008, January 27). How Encapsulation Works Within the TCP/IP Model. IT Professional’s Resource Center.

https://www.itprc.com/how-encapsulation-works-within-the-tcpip-model/

Quine, A. (2008, January 27). How The Transport Layer Works. IT Professional’s Resource Center. https://www.itprc.com/how-transport-layer-works/

RAVENNA. (n.d.). AES67 and RAVENNA In A Nutshell [White Paper]. RAVENNA. https://www.ravenna-network.com/app/download/13999773923/AES67%20and%20RAVENNA%20in%20a%20nutshell.pdf?t=1559740374

RAVENNA. (n.d.). What is RAVENNA?

https://www.ravenna-network.com/using-ravenna/overview/

Rose, B., Haighton, T. & Liu, D. (n.d.). Open Sound Control. Retrieved August 10, 2020 from https://staas.home.xs4all.nl/t/swtr/documents/wt2015_osc.pdf

Shure. (2020, March 20). Dante And AES67 Clocking In Depth. Retrieved August 10, 2020 from https://service.shure.com/s/article/dante-and-aes-clocking-in-depth?language=en_US

Weibel, H. & Heinzmann, S. (2011, November). Media Clock Synchronization Based On PTP [White Paper]. Presented at the AES 44th International Conference, San Diego, CA, 2011 November 18-20. Retrieved from https://www.aes.org/e-lib/browse.cfm?elib=16146

Sexual Violence Prevention Guide

Here For The Music Campaign

Live music is a place for fun, community, and open expression – sexual harassment and assault don't belong. The Here For The Music Campaign wants to build a world without sexual violence and believes in music's power to create change.

The #HereForTheMusic campaign works to build true safety with all parties who come together to create a show or festival: artists, promoters, fans, venue staff, touring professionals, media professionals, and more.

Contact Executive Director Kim Warnick for policy consultations and/or anti-harassment program design. Participants learn how to identify potentially harmful behavior, intervene safely and effectively, and build skills and confidence, whatever their role.

They offer training to music fans, artists and touring pros, venue staff and volunteers, festival staff and volunteers, and other music industry professionals.

They also provide tools to support the campaign.

For more information, visit the #HereForTheMusic campaign.

Download the Guide


Affordable Starter Plug-ins

Post Grad Resources:

This time around for post-grad resources, I looked into must-have plug-ins for those starting out on their own: ones that are high quality and either affordable or free to download. As always, this article is opinion-based; I am sharing information I have gathered in hopes that it can help other young professionals.

Waves Silver Bundle

The Waves Silver bundle is an excellent package of plug-ins to have. It includes 16 plug-ins: several compressors, limiters, equalizers, reverbs, and delays. Waves Silver also includes two analyzers, which I think everyone needs in their toolbox. It offers both transparent and character plug-ins, so you can choose between clean processing and added coloration on your tracks. Reviewers repeatedly say it provides a solid foundation of plug-ins for the purchaser.

Waves states the bundle is geared toward music production, mixing, and mastering in personal to professional studios. It comes with all of the basic plug-ins you may need to take a mix from start to finish. I personally have used over half of the plug-ins in this bundle and find all of them incredibly simple to use. They don't escape scrutiny, though: compared to other similar plug-ins, the Waves plug-ins look outdated and can feel clunky because of their dated appearance. GUI can be a big deal to some users, and several of the plug-ins on this list have a really pleasing interface to work with. But at just 89 dollars, it is a lot of bang for your buck.

Waves CLA-76 Compressor/Limiter

Another Waves plug-in to add to your arsenal is the CLA-76 compressor/limiter. This combo brings character and coloration modeled after a classic mid-'60s limiting amplifier. Like the plug-ins discussed above, it has a similar GUI and user experience. The price is unbeatable: 30 dollars. It can also be found in many of the different Waves bundles, though unfortunately not the Silver bundle.

Its standout features are its explosive attack, its built-in distortion modeling, and how good it sounds on drums. Some argue the coloration is lackluster, but the price just cannot be beaten. As an added bonus, Waves plug-ins can be loaded on most consoles and used for live sound, if you choose to go that route.

Izotope Elements Suite

This bundle is a steal of a deal, offering over three hundred dollars in savings. iZotope Elements markets itself as everything you need to repair, mix, and master, bundling four different iZotope packs for 199 dollars.

The first pack is Ozone Elements, which includes an equalizer, imager, and maximizer. It is an excellent mastering tool: the interface looks great, and combining the EQ, imager, and maximizer into one plug-in keeps your workflow moving instead of slowing you down. This plug-in alone is one of my favorites.

Next in the suite is Neutron Elements, which is meant to assist in the mixing process. It comes with a 12-band equalizer, a transient shaper, an exciter, and a compressor, again all combined into one plug-in, in one place. I had never heard of a 'transient shaper' before; it is used to clean up transients, add punch or attack, and bring more clarity and presence to the mix. There's a rough sketch of the idea below.
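If you're curious what a transient shaper actually does under the hood, here is a bare-bones Python sketch of the general idea, certainly not iZotope's actual algorithm: compare a fast envelope follower against a slow one, and boost the moments where the fast one jumps ahead, i.e., the attacks. The time constants and boost amount are arbitrary choices for the example.

    import numpy as np

    def envelope(x, fs, decay_ms):
        """Peak envelope follower: tracks |x| with an exponential decay."""
        coeff = np.exp(-1.0 / (fs * decay_ms / 1000))
        env = np.zeros_like(x)
        level = 0.0
        for i, s in enumerate(np.abs(x)):
            level = max(s, level * coeff)
            env[i] = level
        return env

    def shape_transients(x, fs, punch_db=6.0):
        """Boost samples where the fast envelope exceeds the slow one."""
        fast = envelope(x, fs, decay_ms=1.0)
        slow = envelope(x, fs, decay_ms=50.0)
        attack_amount = np.clip(fast - slow, 0, None) / (slow + 1e-9)
        gain = 10 ** (punch_db * attack_amount / 20)  # more boost where attacks live
        return x * gain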

RX Elements carries most of the repair features of this bundle. It includes de-hum, de-click, de-clip, and voice-specific de-noise modules, plus a spectrogram view of your audio files. The spectrogram shows your recording as a graph of frequency and intensity against time, which lets you target the very specific frequencies you need to focus on.
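If you want to see what a spectrogram computes without opening RX, here is a rough sketch using scipy and matplotlib. Nothing here is iZotope's code; the swept test tone and 60 Hz hum are invented so there is something to look at.

    import numpy as np
    from scipy import signal
    import matplotlib.pyplot as plt

    fs = 44100  # sample rate in Hz
    t = np.linspace(0, 2.0, int(fs * 2.0), endpoint=False)
    # A test tone sweeping upward, plus a constant 60 Hz hum to "find".
    audio = np.sin(2 * np.pi * (200 + 400 * t) * t) + 0.3 * np.sin(2 * np.pi * 60 * t)

    f, times, Sxx = signal.spectrogram(audio, fs=fs, nperseg=2048)
    plt.pcolormesh(times, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
    plt.ylabel("Frequency (Hz)")
    plt.xlabel("Time (s)")
    plt.ylim(0, 2000)  # zoom in where the action is
    plt.title("Spectrogram: intensity of each frequency over time")
    plt.show()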

And last in the suite is Nectar Elements, which is aimed at vocals. It helps clean up sibilance and overall tone, as well as set reverb and compression. The plug-in uses adjustable sliders to set the levels of its many features, and its slightly different layout may take some getting used to.

FabFilter Essentials Bundle

The FabFilter Essentials bundle is on the more expensive end of this list. This essentials pack includes the Pro-R reverb plug-in, Pro-Q 3 EQ plug-in, and the Pro-C 2 compressor plug-in at 399 dollars. I have referred to these plug-ins repeatedly when researching for this article. They are described as being incredibly powerful, beautiful to look at, and also easy to use.

The reverbs in the Pro-R are convincing and intuitive. The selections are realistic and add a natural-sounding coloration to recordings. It offers reverbs ranging from small rooms to huge spaces. The Pro-C 2 is a transparent compressor that comes with all of the modern features to make up for its lack of coloration. Some might find this plug-in a bit more advanced compared to others on this list. But FabFilter offers tons of user resources and helpful videos on its website if you ever need the support. The Pro-Q 3 is one of my favorite EQs to use. I find that it meets all of my needs when it comes to mixing. It offers a wide range of filter types to really fine-tune your mix and the interface is one of the best on this list. I find this really important when I am focusing on frequencies and adjusting filters for any given amount of time.

The biggest plus to this pack is that it allows users to obtain good-sounding tracks in a short amount of time. The price may be on the steeper end, but the value of the product and the quality it will add to your work is well worth it to many.

Celemony Melodyne 5 Essentials

Melodyne 5 Essentials is currently just 99 dollars and is essential for anyone editing vocals, pitch, and timing. Celemony is repeatedly praised for its award-winning technology, and Melodyne is a reliable tool to have in the sound and music industry. There can be a bit of a learning curve with this software: the interface can feel foreign and confusing to newcomers, and editing pitch and time takes a patient ear, since tracks can easily be pushed past the point of sounding realistic. Push that far on purpose, and you get the familiar Auto-Tune effect.

As I mentioned above, Melodyne is used for pitch correction and timing adjustments; it can apply corrections automatically or manually, and it can analyze audio files and transpose them into a musical format, which is helpful for those with a musical background. Melodyne 5 Essentials is supported on both Mac and Windows and works with most DAWs without issue. It is also easily upgradable if you ever choose to get a fuller version of Melodyne, which makes the Essentials version a great way to get your feet wet with this kind of editing.

ValhallaDSP

Last on this list is ValhallaDSP. I am always looking to add to my reverb collection, and ValhallaDSP is a great company for feeding that addiction. They offer several affordable reverbs, delays, and other modulating filters and effects, and they currently offer three free downloads: a large reverb, an echo-type effect, and a spacey flanging modulator.

These plug-ins have been praised for their storytelling capabilities by designers looking to get more experimental. They are described as eccentric and over the top; they are not intended to create a realistic or authentic sound, though that can be achieved with some work. They are also easy to use and easy on the eyes: many parameter descriptions appear when you hover over a given control, helping you achieve what you want without working too hard. Along with the price point, it is a really great, streamlined line of plug-ins.

That concludes the list of plug-ins I chose to look into for this article. I personally think they are all great additions for those just starting out on their audio journey, or for graduates who find themselves without resources. I am sure there are many, MANY other great options out there, and I welcome feedback and conversation on the subject. Sharing information and resources with those who need them and want to learn is one of my agendas, and I think it is one of SoundGirls' as well.

Special thanks to Allen Harrison, Ryan Nicklas, David Peterson, Tyler Quinn, and Drew Stockero for helping me research this article. You are appreciated.

 

The Importance of Gain Staging and Automation

Gain staging is the act of managing the levels of your track. Automation then gives you the control to raise or lower the volume so the track sounds even throughout. By implementing gain staging and automation in your mix, you can immediately make your track sound more professional. So, here's how to do it.

I personally like to focus on gain staging and automation on vocal tracks, as it creates a radio-ready sound. On an instrumental track, it can work well on a lead instrument; in that case, it's best to trust your ears, because you don't want to flatten any dynamics in the performance.

The first step in good gain staging technique is setting a good recording level. Ideally, you want your levels coming in around -18 dBFS, peaking no higher than -12 dBFS.
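If you ever want to sanity-check those numbers outside your DAW, here is a small sketch with numpy that measures peak and RMS levels in dBFS. It assumes your audio is a float array normalized to the -1 to 1 range; the random-noise signal is just a stand-in for a recorded take.

    import numpy as np

    def peak_dbfs(audio):
        """Peak level relative to full scale (0 dBFS = 1.0)."""
        return 20 * np.log10(np.max(np.abs(audio)) + 1e-12)

    def rms_dbfs(audio):
        """Average (RMS) level relative to full scale."""
        return 20 * np.log10(np.sqrt(np.mean(audio ** 2)) + 1e-12)

    audio = 0.1 * np.random.randn(44100)  # stand-in for one second of a take
    print(f"peak: {peak_dbfs(audio):.1f} dBFS, RMS: {rms_dbfs(audio):.1f} dBFS")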

I tend to apply some EQ on the main vocal channel and then send the signal to a vocal bus for compression and other processing. You want the automation on the main vocal channel, NOT the bus channel.

Once you’ve got a good take you can then start automating your vocal paying attention to the level of each word and syllable and turning it up or down so that the sound is level with the rest of the recording. You basically don’t want any loud peaks or very quiet sounds the goal is to have each word and syllable at roughly the same level. If it helps you can insert a loudness meter just to keep an eye on your levels.

Once you’re done, the automation on your track can look a bit crazy but that’s perfectly fine. Hopefully, now you have a great vocal performance that levels out the loudness and quietest parts of your track to create an engaging performance. The great part about automation is that it leaves less work to do for the compressor so I’ve often found vocals sound a bit more vibrant.

I hope this technique helps even out your recordings and helps you craft that radio-ready track!

 
