Empowering the Next Generation of Women in Audio

Join Us

Mixing with Vanessa Silberman

Mixing tips with Vanessa Silberman presented by SoundGirls, Female Frequency, and A Diamond Heart Production

Thursday, May 6th – 6 PM EDT / 3 PM PDT

Join us for this exclusive webinar on mixing tips with Songwriter, Recording Producer, Engineer & Label Owner Vanessa Silberman.

Vanessa will walk us through a recording session and give tips on mixing using two DAWs: Pro Tools and Logic. You’ll get tips on everything from stemming files to editing, EQ, compression, panning, and much more.

Register and Post Questions

About Vanessa Silberman:
Vanessa Silberman is an international touring singer, guitarist and songwriter from Brooklyn, NY (via Los Angeles, CA). She is also a record producer, engineer, an independent A&R and runs an artist development Label called A Diamond Heart Production.

As an artist, her music has often been compared to the raw, bare-bones rock ‘n’ roll of Nirvana, along with the appeal and vocal qualities of Lana Del Rey and the authenticity of classic artists such as Patti Smith and Neil Young. More recently, with the release of her latest single ‘My Love,’ she’s incorporated more samples, beats, and synths akin to artists such as Chvrches, Phantogram, and Sylvan Esso.

Widely known for having a very strong DIY ethic and wearing many different hats in the music business, Vanessa has worked for heavy hitters such as producer/songwriter Dr. Luke, as well as for companies and places ranging from the Foo Fighters’ Studio 606 to Epitaph Records. She has also engineered for everyone from Tony Visconti (David Bowie, T. Rex) to Kimbra and Harper Simon. She is a Recording Academy (Grammy) member, the co-chair of the New York chapter of SoundGirls, an advisor to the Florence Belsky Charitable Foundation, and assists California Women’s Music, often co-hosting their virtual music festivals.

vanessasilbermanofficial.com
adiamondheartproduction.com

Female Frequency
Female Frequency is a community dedicated to empowering female, transgender & non-binary artists through the creation of music that is entirely female generated.

The first Female Frequency EP, made entirely by women, is available here:
femalefrequency.bandcamp.com
femalefrequency.com

 

Starting A Show

In any normal year, early spring is when the staffing process begins for tours going out in the fall. You probably won’t have a contract in hand yet, but your resume has gone off to designers and production companies, or (if you’re currently on tour) you’ve had a conversation with your design team or production manager about the shows going into production, and what they might have in mind for you.

However, there are still months before you’ll hit the shop to build a new show, and longer until you’re in the venue to tech it. So spring and summer become the perfect time to start learning a new show so you can give yourself a running start. Right now, conversations center around maybes: someone has your resume on their desk; they’d like to inquire about your availability for a possible project; we’d like to see if you might be a good fit. That sort of language. At this point, nothing is for certain, but I’ll start in on some cursory research for the show I’m under consideration for. This mostly involves cyber stalking the show: searching YouTube for Tony Award or press event performances, Googling pictures of the production, and listening to the most recent cast album or recording of the show.

On Official Offer

Conversations now use more concrete terms: yes, we’d like you to do the show; we’re sending your resume to the production manager; you should hear from this person soon. At this point, the show’s soundtrack becomes the new underscore of my life. I cannot stress enough how important it is to listen to the show. Replicating its sound is your job, so the more familiar you are with it, the better. Plus, knowing what’s happening gives you a solid foundation to start tech and make intelligent mixing choices.

Finally, once I have an official offer I can start my formal prep. At this point I ask for a packet of information from the designers or production consisting of: a script (preferably a mixing script if it isn’t a brand new show), any audio recording that might be available, and a console file (again, this is if there’s a version of the show currently running).

The Script

The script is the basis for most of my paperwork. The audio recording hopefully gives me the full show to listen to, including dialogue. The console file lets me dive into the structure of the physical show as well as providing details about programming that might not be clear in the script.

From the script, I’ll build an initial set of paperwork starting with my own mixing script. Even if I get a complete, annotated mix script, I will always make my own for two reasons:

#1. I like my formatting. I have a system with color-coded notes that is easy for me to read, and I can put page breaks in convenient places. Plus, re-entering cues and notes means that I know exactly where each one goes.

#2. It’s another opportunity to get the show in my head. I always re-type the script, which forces me to go over every single word of the show, usually multiple times once annotations and proofreading are factored in.

In conjunction with the script, I’ll do some additional paperwork and make a spreadsheet to document (or, for a new show, create) the DCA assignments. This has the basic information of how many console scenes are in the show, what each DCA fader is named in each scene, and which specific mics are assigned to a given fader (if it’s not obvious, such as faders labeled chorus, altos, or one-off solo lines). This helps while annotating my script if I have a question about where a cue needs to go or who’s in what scene, and it becomes a quick reference for programming the console when I get to tech.
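To make the shape of that paperwork concrete, here is a minimal sketch in Python of a per-scene DCA breakdown and the kind of quick lookup it enables. The scene names, character names, and four-DCA layout are all invented for illustration; the real document is usually just a spreadsheet.

```python
import csv

# Hypothetical DCA breakdown: one row per console scene, one column per DCA fader.
# Scene and character names are invented for illustration.
dca_breakdown = [
    {"scene": "1.01 Opening", "DCA1": "Narrator", "DCA2": "Ensemble Sopranos", "DCA3": "Ensemble Altos", "DCA4": "Band"},
    {"scene": "1.02 Diner",   "DCA1": "Lead F",   "DCA2": "Lead M",            "DCA3": "Waitress (solo line)", "DCA4": "Band"},
    {"scene": "1.03 Chase",   "DCA1": "Lead F",   "DCA2": "Cops (3 mics)",     "DCA3": "SFX",                  "DCA4": "Band"},
]

# Write it out as a CSV so it can live next to the mix script and be printed for tech.
with open("dca_breakdown.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["scene", "DCA1", "DCA2", "DCA3", "DCA4"])
    writer.writeheader()
    writer.writerows(dca_breakdown)

# Quick reference while annotating the script: who is on DCA2 in scene 1.02?
by_scene = {row["scene"]: row for row in dca_breakdown}
print(by_scene["1.02 Diner"]["DCA2"])  # -> Lead M
```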

This is where the console file can come in handy. Most consoles have an offline editor that you can use to open it on your computer and look around to see how the show is laid out. When I’m building paperwork, I’ll double-check the file if I have questions about who exactly is singing which part in a scene.

Practice

Once I have an annotated script, my basic paperwork, and the audio recording, I’ll start to put the mix into practice. I use two methods, one that requires my practice board and another I can do pretty much anywhere.

Using my practice board (a set of faders that doesn’t control anything; you can find versions at casecraft.com or er3designs.com, or have one custom-built like mine, which was made by Scott Kuker), I’ll grab my script and the recording and move through the mix of the show. I’ll go over difficult transitions or fast sections multiple times to start developing some muscle memory, and if I’m having trouble, I’ll play around and see if there’s a more efficient way to mix the scene. That might mean adjusting the DCA programming or changing which hand covers which faders. Les Mis and Saigon are almost entirely sung-through, so there’s always music. For those shows I kept my right hand on the orchestra faders for the majority of the show and did the vocal choreography with my left hand. Practicing meant figuring out where I needed both hands for vocals and where I should switch my right hand from covering the orchestra faders to assisting with dialogue. Mean Girls, on the other hand, has dialogue scenes with no underscoring, so I spent more time with both hands on vocal faders, then shifted back over to the band for songs.

The second method I use is something I call pointing through the show. I can practice this technique anywhere with just a piece of paper (the aforementioned DCA breakdown) and the audio recording of the show. I’ll listen to the show, pointing along on the paper to whose mic should be up at that moment. This tests how well I’ve memorized the show because there’s no way to hide it if I can’t point to who’s talking. Then I’ll go over any problem scenes with my script. Most often these are dialogue scenes that constantly switch between several different people, or scenes with a lot of one-liners: pretty much anything that might cause you to skip around on the faders if there’s no good way to do typewriter programming.

I started practicing this way because I got into the habit early in my career of working to get off the book as soon as possible. Pointing through the show gives me a head start on memorizing the show and I can usually put my script away a couple of weeks after tech. I find I pay better attention to how the show is sounding when I don’t have my head in my script. Other people prefer the security of having the script in front of them to reference, even if they don’t necessarily need it. It’s purely a personal preference, but you should always make sure you are comfortable and confident that you truly have the show memorized before you completely put your script away.

*    *    *

But what happens when you don’t have all this time to learn a show? The prep process I’ve outlined can take weeks or even months. What happens if you get thrown into a show at the last minute or won’t even get a script until a couple of days before tech? Or what if it’s a short run where you just can’t justify months of preparation?

In this case, I do some basic preparation but focus on making sure the notes in my script are clear, since I’ll likely be sight-reading it in tech. I won’t retype my entire script; instead, I use the limited prep time to make sure annotations and notes are easy to follow and my fader or DCA layout is as logical and simple as possible. If I have time to physically practice, I’ll focus on the complicated parts to make sure they’re efficient. I’ll always make and print out a DCA breakdown so I have a quick reference for programming the console.

Every bit of preparation helps, no matter how much or how little time I have, and I’ve never met a designer who wasn’t happy to give me whatever they could to help me learn the show. So don’t be afraid to ask for materials; your designer will appreciate your initiative, and everyone (yourself included!) will love it when you’re self-sufficient in tech.

Sonarworks Raffle

Sonarworks is providing one Complete Edition of its speaker and headphone calibration software, which delivers consistently accurate studio reference sound.

Complete Edition for Speakers & Headphones with measurement mic – $299

2 licenses for Headphones – $99

Everyone is eligible to try out the software – They are offering a 21-day free trial here

SoundGirls will be raffling off these items on April 23, 2021. This raffle is available to all members of SoundGirls; if you are not a member, you can register for free here

Enter Raffle Here

In less than 20 minutes, you can calibrate your existing studio speakers with a SoundID Reference measurement microphone and calibrate your existing headphones with more than 280 headphone calibration profiles already included in the software as ready-to-use presets. With a calibration profile applied, the software sets the frequency response target to be completely flat across all audible frequencies, so you can make music that sounds great everywhere. You can also make custom adjustments to the target curve in real time with the new custom target feature.
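As a rough illustration of the general principle behind this kind of calibration (not Sonarworks’ actual, proprietary algorithm), a correction curve is essentially the difference between a target response and the measured response, applied as per-band gain. A minimal sketch, with made-up measurement numbers:

```python
import numpy as np

# Hypothetical measured magnitude response of a speaker, in dB, at a few octave bands.
bands_hz    = np.array([63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000])
measured_db = np.array([-4.0, -1.5, 0.5, 1.0, 0.0, 2.5, 3.0, -2.0, -5.0])

# A flat target means 0 dB deviation at every band; a custom target could tilt or shape this.
target_db = np.zeros_like(measured_db)

# The correction EQ is simply target minus measured: boost where the speaker is shy,
# cut where it is hot (real products also limit boost and smooth the curve).
correction_db = target_db - measured_db

for hz, c in zip(bands_hz, correction_db):
    print(f"{hz:>5} Hz: {c:+.1f} dB")
```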

With accurate studio reference sound, you can seamlessly switch between speakers, headphones, and rooms. Finally, ensure continuity of your workflow regardless of distance, gear, or set-up.


About Sonarworks

Sonarworks is an award-winning audio technology innovator delivering an individually perfected sound experience to every music creator and lover. Sonarworks started out in the professional audio space in 2012. Its patented technologies are now used in more than 70,000 studios globally, including by many Grammy Award-winning engineers recording A-list stars (Lady Gaga, Madonna, Rihanna, Adele, Coldplay, and more). After conducting the biggest consumer sound preference research study to date, Sonarworks is now on a mission to put personal sound front and center for every music listener worldwide. With its industry-leading SoundID audio personalization technology, Sonarworks offers data-driven, machine-learning-based sound integration for consumer electronics devices and music platforms.

Making Sonic Magic from Auditory Illusions

So, if a tree falls in the woods, perhaps it makes infinite sounds

A few years ago, I attended a talk on wave field synthesis, and to say I was captivated feels like a sorry understatement. Wave field synthesis, if you are unfamiliar with it as I was, is a spatial auditory illusion and rendering technique that produces a holophone, or auditory hologram, using many individually driven loudspeakers. The effect is that sounds appear to be coming from a virtual source and a listener’s perception of the source remains the same regardless of their position in the room. Its application in theatrical contexts is very new, but as the techniques and technology slowly become more widely available, the potential for theatrical applications is astounding.
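For a sense of the core mechanism, here is a heavily simplified sketch: each loudspeaker in an array reproduces the source with a delay and attenuation based on its distance from the virtual source, so the individual wavefronts add up to approximate the wavefront that source would have produced. Real wave field synthesis uses a proper driving function (prefiltering, amplitude tapering, 2.5D corrections), so treat this only as a toy model; the array geometry and source position here are invented.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def virtual_source_feeds(speaker_positions, virtual_source, gain_ref=1.0):
    """Per-speaker delay (s) and gain for a virtual point source behind a speaker array.

    A toy delay/attenuation model of the core idea: every speaker re-creates a piece
    of the wavefront the virtual source would have produced. It ignores the real WFS
    driving function (prefiltering, tapering, 2.5D corrections).
    """
    speaker_positions = np.asarray(speaker_positions, dtype=float)
    virtual_source = np.asarray(virtual_source, dtype=float)
    distances = np.linalg.norm(speaker_positions - virtual_source, axis=1)
    delays = distances / SPEED_OF_SOUND             # farther speakers fire later
    gains = gain_ref / np.maximum(distances, 1e-6)  # 1/r spherical spreading
    return delays, gains

# A 16-element linear array across the proscenium (x from -4 m to 4 m, y = 0),
# with a virtual source 3 m upstage of the array and 2 m off-center.
array = [(x, 0.0) for x in np.linspace(-4.0, 4.0, 16)]
delays, gains = virtual_source_feeds(array, virtual_source=(-2.0, -3.0))
print(np.round(delays * 1000, 2))  # per-speaker delays in milliseconds
```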

This introduction to wave field synthesis, in addition to being quite exciting, pointed me toward a categorical gap in my knowledge of auditory illusions. Since then, I’ve been filling in the gaps and adding these illusions to my sonic toolbox. Quite a bit of theatrical sound design could already be considered spatial illusion, for example when we recreate actual physical phenomena like the Doppler effect. Auditory illusions, however, encompass many effects extending far beyond this.

Optical illusions have long inspired and been integrated into the visual arts. M.C. Escher’s work, for instance, presents the viewer with impossible objects and perceptual confusions. In psychology and neurology, the study of optical illusions has played a large role in understanding the visual perception apparatus. Because visual material has historically been far easier to reproduce and distribute than auditory material, visual illusions have long been widely encountered, studied, and applied in artistic works. The history of auditory illusions and their use in psychology, music, sound design, and elsewhere is much shorter.

Auditory illusions, much like visual illusions, reveal the deficiencies and oddities of our perceptual processes, but the auditory and visual systems have their own unique attributes. The field of psychoacoustics examines how the brain processes sound, music, and speech. Hearing is not strictly mechanical but involves significant neural processing and is influenced by our anatomy, physiology, and cognition. Researchers have even found that how we unconsciously interpret sounds is influenced by our individual environments, backgrounds, and dialects. Auditory illusions provide key information for psychologists and neurologists unpacking our auditory processes. In artistic applications, they provide similar insight into our perceptual processes and illustrate that there is no one true sonic reality.

Dr. Diana Deutsch, a psychologist at the University of California, San Diego, is at the forefront of psychoacoustic research, and her work has utilized countless auditory illusions and sonic paradoxes. If you want to hear examples and read her work, visit her website: http://dianadeutsch.com/. Due largely to her research, there is an increasing understanding of the cognitive factors in the auditory system and how it has evolved over time to help us interpret our sonic environments effectively. Psychoacoustic research has been applied in myriad contexts, including the perceptual models behind lossy compression codecs like MP3, software development, audio system design, drone flying, car manufacturing, and even, terrifyingly, acoustic weapon development. In the arts, psychoacoustics and auditory illusions have been applied in musical contexts, sound art, film, and theater, though these applications are fairly nascent.

There are a number of types of illusions that can be roughly categorized as spatial illusions, perpetual-motion illusions, and non-linear perceptual effects. More auditory illusions continue to be uncovered and understood, so these categories aren’t rigid. Spatial illusions are already a mainstay of theatrical sound design: we frequently manipulate spatialization to make it seem as though sounds are coming from a particular source or direction other than the loudspeaker producing the sound. Holophones can be created in a number of ways, including wave field synthesis, as I’ve mentioned. Binaural recording is another example of spatial manipulation, reproducing the interaural features and anatomical influences of the head and ear. All of these spatial illusions exemplify a distinction between the physical properties of the sound field and what listeners actually perceive.

Unreal sounds created in the inner ear or brain are a part of our daily lives that we typically don’t notice, and there are several auditory illusions that mirror common visual illusions. A Zwicker tone, for example, is the sonic equivalent of an afterimage. Auditory continuity illusions show us that when an acoustic signal is momentarily cut off and replaced by another sound, listeners perceive the original signal as continuing through the interruption. Through the familiar precedence (or Haas) effect, we perceive a single sonic event when one sound is followed by another after a short delay, and we ascribe directionality based on the first-arriving sound. While subtle, these are all valuable design techniques.
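The precedence effect is easy to hear for yourself. Here is a minimal sketch that writes a stereo file in which the right channel is simply a delayed, slightly quieter copy of the left; the delay time and levels are arbitrary choices within the range where the effect typically holds.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
DELAY_MS = 15        # within the rough window where the precedence effect holds
ECHO_LEVEL = 0.8     # the delayed copy can be nearly as loud as the lead

# A short windowed noise burst as the test signal.
rng = np.random.default_rng(0)
n = int(0.05 * SR)
burst = rng.uniform(-0.5, 0.5, n) * np.hanning(n)

# Left channel leads; right channel gets the same burst delayed by DELAY_MS.
delay_samples = int(SR * DELAY_MS / 1000)
length = len(burst) + delay_samples
left = np.zeros(length);  left[:len(burst)] = burst
right = np.zeros(length); right[delay_samples:] = burst * ECHO_LEVEL

stereo = np.stack([left, right], axis=1)
wavfile.write("precedence_demo.wav", SR, (stereo * 32767).astype(np.int16))
# On headphones, most listeners hear one event localized to the left (the first arrival),
# not two separate bursts, even though the right channel is nearly as loud.
```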

Less subtle are perpetual-motion illusions. Pitch and tempo circularity are roughly analogous to the barber-pole illusion: a sound seems to be endlessly ascending or descending, or a rhythm seems to be endlessly speeding up or slowing down. Both pitch and tempo circularity encapsulate a number of techniques and effects; the Risset rhythm and the Shepard tone are complex versions. The Shepard tone most notably influenced the film score for Dunkirk, creating a palpable sense of anxiety. Much like Escher’s impossible stairs, circularity illusions are both unsettling and entrancing, which makes them a powerful design technique.
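A Shepard–Risset glissando is also straightforward to synthesize: several octave-spaced partials glide upward together under a fixed bell-shaped loudness envelope, so the top components fade out while new ones fade in at the bottom. A minimal sketch; the duration, number of octaves, and envelope shape are arbitrary choices.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
DURATION = 10.0    # seconds per loop; every partial rises exactly one octave per loop
N_OCTAVES = 8      # number of simultaneous octave-spaced partials
F_LOW = 27.5       # lowest partial's starting frequency (A0)

t = np.arange(int(SR * DURATION)) / SR
progress = t / DURATION                       # 0 -> 1 over the loop

signal = np.zeros_like(t)
for k in range(N_OCTAVES):
    # Each partial glides up one octave over the loop (linear motion in log2-frequency).
    log2_f = np.log2(F_LOW) + k + progress
    freq = 2.0 ** log2_f
    phase = 2 * np.pi * np.cumsum(freq) / SR  # integrate frequency to get phase
    # Fixed bell-shaped loudness envelope over log-frequency: partials fade in at the
    # bottom and fade out at the top, which is what hides the octave jump.
    center = np.log2(F_LOW) + N_OCTAVES / 2
    amp = np.exp(-0.5 * ((log2_f - center) / (N_OCTAVES / 6)) ** 2)
    signal += amp * np.sin(phase)

signal /= np.max(np.abs(signal))
wavfile.write("shepard_glissando.wav", SR, (signal * 32767).astype(np.int16))
# Play it back: the tone seems to rise continuously; loop it (with a short crossfade)
# and it appears to rise forever without ever getting higher.
```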

There are a number of speech-related auditory illusions. Most famously, the Laurel/Yanny internet phenomenon of 2018 brought speech-interpretation illusions into the spotlight and demonstrated the incredible subjectivity of our hearing. Similarly, the McGurk effect presents a puzzling interaction between vision and speech: when video of a person mouthing one sound is paired with audio of a different sound, listeners perceive neither of the two, but instead a third sound.

Dr. Deutsch has amassed an immense number of stereophonic illusions, including Phantom Words, Binaural Beats, the Glissando Illusion, the Octave Illusion, the Scale Illusion, the Tritone Paradox, and more. Her work shows how differently people perceive the same sounds. When we listen to speech, the words we perceive are influenced by our expectations, knowledge, dialect, and culture, in addition to the physical sounds we hear. Much of her work has also demonstrated how left- and right-handedness influences how complex sounds are synthesized and localized in our heads. In the Tritone Paradox, which uses sequentially played Shepard tones a tritone apart, some listeners hear the pair ascending while others hear it descending. The potential for designing sounds in which some of the audience experiences the inverse of what others experience is, to me at least, a riveting notion.
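Binaural beats, one of the simplest entries on that list, can be generated with two sine tones a few hertz apart, one per ear. A minimal sketch; the carrier and beat frequencies are arbitrary choices.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
DURATION = 8.0
BASE_HZ = 220.0   # tone presented to the left ear
BEAT_HZ = 6.0     # frequency difference between the ears

t = np.arange(int(SR * DURATION)) / SR
left = 0.4 * np.sin(2 * np.pi * BASE_HZ * t)
right = 0.4 * np.sin(2 * np.pi * (BASE_HZ + BEAT_HZ) * t)

# Over loudspeakers the two tones mix in the air and you hear an ordinary acoustic beat;
# over headphones each ear receives only one steady tone, yet most listeners still
# perceive a slow pulsing: a "beat" constructed entirely by the auditory system.
stereo = np.stack([left, right], axis=1)
wavfile.write("binaural_beats.wav", SR, (stereo * 32767).astype(np.int16))
```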

While this brief overview is only the tip of the ever-expanding metaphorical iceberg of auditory illusions, I have found that looking into psychoacoustics and auditory neurology provides incredible design techniques and ideas that are not always at our disposal. The potential here that I’m so excited about is creating audience experiences that raise questions about the subjectivity of their perception of the world around them. Audiences can leave the theater not believing their ears. It also illuminates a greater need for interdisciplinary collaboration and cooperation between fields that often feel disparate: psychology, neurology, audiology, engineering, music, sound design, and so on. In my own work, I have yet to utilize almost any of this material (with the exception of spatialization techniques, of course), but it is leading me to think about designing for the whole head: the ear, the brain, and the mind. I look forward to the continued integration of auditory illusions in theatrical designs, creating sonic magic.

 

 

Mental Health and Attachment

I started this month with some work on the books: a one-off awards show. It was a wonderful feeling to be back at it, while at the same time reminding myself that I hadn’t forgotten how to re-string a guitar. However, it was short-lived. The crew was cut back due to Covid restrictions, and I was back in my sweatpants before I could say “load-in.”

It got me to thinking about how we attach ourselves to our jobs. I started walking taller knowing I was working again, I had a purpose once more. Seeing other people’s posts about feeling a loss of purpose during this lockdown, I’ve been thinking how potentially unhealthy it is that we have such an attachment to our jobs. We are not wholly our jobs. Yes, we may have dedicated years to trying to get the job in the first place, but it does not define us. Just because we have pivoted to driving a delivery van or working in a coffee shop, it doesn’t make us a different person, or at least it shouldn’t. We should focus on our qualities and what we bring to the world that way. Can you deliver a package in the same way you would tend to an artist? Do you take pride in being on time every single day for your shift just like you would need to for a bus call?

You can still be super passionate about your career, but it doesn’t need to be all-consuming. Do you take breaks between tours? Are you able to maintain relationships off the road? As much as we want to believe that people are looking out for us and that our artist cares about us, at the end of the day it’s a business. They will no doubt do whatever is best for their business, so you should also think of yourself as a business. Nurture yourself; put yourself first.

What is your identity outside of work? I have been taking this forced time off to start learning to surf. I have always wanted to learn, I’ve put myself in the best location (Southern California, baby!), and now there are no excuses about not having the time. In fact, I’m becoming quite knowledgeable about how the waves behave across the seasons (or should I say the one season we have here!).

The one commodity you can never replace is time. Enjoy being handed some time off, or at least having time to do something different.

 

The Changing of the Guard – Training subs and replacements on a show

 

Last month, in Tips and Tricks for Subs and Replacements, we discussed how to put your best foot forward when learning to be a sub or replacement on a show. This month let’s look at the other side of the equation, when you are the one running the show and someone new is coming in either to sub for you, or to take over the show entirely. We will mostly discuss training subs in this post, but the training principles and tips should apply in both scenarios.

Why is having a well-trained sub so important? Well, the old saying “the show must go on” applies equally on stage and backstage! Just as actors have understudies for their roles, it is important that no one person’s health or availability is the “single point of failure” on a production, such that the show literally cannot go on without them if they must call out. Additionally, you don’t want the show to simply “go on” without you. You want it to be as good as it is when you’re the one mixing! When your sub is mixing the show, they are representing you, your work, and the entire sound department, so you want to know you have someone who is going to do their best job and be a good ambassador on your behalf.

Think of your show as this tower, and don’t let one person’s absence be the block that breaks it!

I like to break the training process into three phases: Pre-Prep (before your sub’s first official day), Training (when your sub is learning to mix the show), and Hand-Off (when the sub finally gets “hands on faders” and starts mixing the show). Depending on your sub’s prior mixing experience, this process can take anywhere from a few days to a month. Typically, I will ask for 16 performances (two weeks, assuming eight shows a week) to complete this process, and I have found this to be the typical timeline in NYC.

Phase 1: Pre-prep

There is a lot you can do to make things easier for your incoming sub before they are even hired. The first of these is to maintain a good mix script! If you read my last blog, you know that I take paperwork and formatting very seriously, because they’re the best tools we have to convey all the information needed to mix the show correctly. If your script is on paper, think about making a digital version, or at least a scanned PDF. That way, your sub can have access to all your notes as they put together their own copy of the mix script. Collect any additional paperwork or training materials that might be helpful to them and organize it all in some sort of shared folder. For example, if a new sub were to join my show, they would be added to a private Dropbox which has my mix script, a blank script, the score, face pages (for learning people’s names), startup/shutdown instructions, show recordings (audio-only and conductor cam), and hands videos that my current sub filmed when he was training so that he could reference them while practicing. Back when I was a stage manager, one of my sayings was “the book matters more than you do,” and this idea certainly applies here. When your sub is mixing for real, you won’t be there to answer questions, so as much of that info as possible needs to be written down and easy to reference.

A sneak peek inside the contents of the “RoA_SoundSubs” Dropbox

Phase 2: Training

Once your sub is in the building and training has officially begun, you will want to give them at least a few performances to get familiar with the show, the mix, the pace, and the sound before they start practicing. They should watch the show from the audience at least once before moving to FOH to shadow you. Once they are shadowing you, this is when they can be building their script, taking notes, and asking questions. On Rock of Ages, I had a small table with a video shot of the stage over to one side, plus our console had an overview screen that I could angle towards my sub at the table. This allowed them to watch both the show and a mini-version of my DCAs moving, to see my strategy for making certain pickups in real time without having to be right on top of me at the console :). If you’re able, try to explain certain things to your sub in real time while you’re mixing. The more context you can give your sub for why you approach scenes the way you do, the easier it will be for them to mimic your moves. Everyone learns their own way, so give your sub room to do the prep they need, whether that’s watching you, marking up their script, or mixing along with pennies or a practice console. If they are newer to mixing and need more guidance, do your best to instruct them on what to focus on as they train, and what notes they should put in their script to make things as clear as possible.

Phase 3: Hand-off

It’s finally time for your sub to start doing some real mixing! Rather than having your sub dive in head-first and mix the whole show their first time, it’s best to give them bits and pieces of the show to start with and build up from there. There are three common methods that I know of for handing off a show: “top-to-bottom,” “bottom-to-top,” and my personal favorite, “inside-out.” If you are handing off a show “top-to-bottom,” you will have your sub start by mixing the beginning of the show, and then you will take over and do the rest at a logical hand-off point, such as during an applause break. The next night, they will again start mixing from the top, but go on for longer before handing back to you. This way, they are always mixing the show in sequential order, and they will always be starting with a part of the show that they have done before. This can help to build confidence, depending on your sub’s experience and personality. “Bottom-to-top” is the same method, just backwards: your sub starts at the end of the show (for example, with the finale), and then your hand-off point moves earlier and earlier. Handing off “bottom-to-top” can be great because the regular mixer sets the tone for the show, and the sub has a benchmark that they can follow once they take over.

Finally, handing off “inside-out” is when you have your sub start by mixing small sections in the middle of the show, then build out from there until they reach the “bookends” of each act. I love this method because I can tailor my sub’s hand-off schedule to them more specifically. It also has the same advantage as “bottom-to-top,” where I can start things off and give the sub a sense of where their levels should be that night. Typically, I will first give my sub some easy material to mix in the middle of each act, such as intimate dialogue scenes and solo or two-character songs. I’ll try to make sure that they get a section with some sound effects, if the show has those, so they can get used to juggling that responsibility with making their pickups. The next day, I will either add entirely new chunks of the show to their list or extend the length of the chunks they are already doing. Again, this is dependent on the content of your show and the experience of your sub. With this method, the original A1 will find in a few days that all they are mixing is the beginnings and ends of each act, and finally the whole show will be handed off!
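If it helps to visualize how the chunks grow, here is a toy sketch of an inside-out hand-off plan. The section names, act lengths, and three-night schedule are all invented; in practice the plan is tailored to the sub and the show.

```python
# A toy sketch of an "inside-out" hand-off plan: the sub starts with chunks in the
# middle of each act and expands outward night by night until the whole act is theirs.
act_one = ["Opening", "Scene 2", "Duet", "Dialogue 4", "Ensemble 5", "Act 1 Finale"]
act_two = ["Entr'acte", "Solo 7", "Dialogue 8", "Ballad 9", "Finale"]

def inside_out_plan(act, nights):
    """For each night, return the contiguous middle chunk of the act the sub mixes."""
    plan = []
    mid = len(act) // 2
    for night in range(1, nights + 1):
        start = max(0, mid - night)
        end = min(len(act), mid + night)
        plan.append(act[start:end])
    return plan

for night, (a1, a2) in enumerate(zip(inside_out_plan(act_one, 3), inside_out_plan(act_two, 3)), 1):
    print(f"Night {night}: Act 1 -> {a1} | Act 2 -> {a2}")
```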

These methods all take some advance planning to make sure that your hand-offs are clean, and it’s good to make sure your sub, stage manager, and music director are all privy to the plan each night. You don’t need to go into major detail about who is mixing which exact lines of dialogue, but those folks will be able to give good notes about what they are hearing and what might need adjusting between you and your sub.

Clean hand-offs are key here as well!

 

Optional Phase 4: Noting and Brush-Ups

If time allows, try to make sure that your sub mixes at least one entire performance by themselves prior to your planned absence day, if applicable. If things are progressing well and your show is fully handed off, the last thing I like to do is give my sub one show where I am not at the console with them, so that they can practice flying solo. At this show, I will sit in the back of the house so that I can get to the console quickly if I need to, but mostly I will try to write my notes down and stay out of their way! This really is the only way that your sub will learn to solve problems and make decisions without you there to help, which is exactly the goal of training them in the first place!

Once your sub is fully trained, you should make a schedule for them to come in and mix a brush-up performance every few weeks, with you noting them from the house. Even if you aren’t planning to take a day off, it’s important to make sure your sub stays fresh, and that can be hard to do if they go months without mixing a performance!

What if your theater isn’t in the habit of hiring and training subs? I know from personal experience that it can be hard to sell a producer on this idea, especially in low-budget venues or on short runs. If you are met with resistance, ask your producer to think of it this way: training a sub is like taking out an insurance policy for the show. Putting in the time and resources to train a sub in advance will likely result in a higher-quality mix than if someone untrained must attempt to mix the show cold. In the worst-case scenario, the producer might even have to cancel an entire performance and refund everyone’s tickets. Hopefully, avoiding both of these outcomes is in their best interest too!

On a side note, one of my sincerest hopes is that when theater returns post-pandemic, the need for trained subs, paid sick days, paid personal days, and thorough contingency plans will be taken much more seriously by everyone. No one should ever feel like they must “power through” if they aren’t feeling well, and I think that we all now realize that having a sick person in the building is not worth the risk it poses to everyone else! No more “war stories” about sick A1s trying to mix with their sinuses totally blocked or with a nausea bucket next to them (I, unfortunately, speak from personal experience on both). Also, we have always known that this work can be mentally taxing, and I hope that when we reopen workers will feel that they can advocate for themselves better in that arena too, whether by asking for support outside of work or taking a mental health day without fear of repercussions.

I hope this post and my previous blogs have helped to shed some light on this important aspect of running shows! Whether you are the sub or are training the sub, these tips and tricks will help you make sure that your show sounds the best it can, regardless of who is mixing it.

Luna Guitars – To Rival Other Brands?

It feels like every guitarist must remember the first moment of holding a guitar. How did its body mold to yours? Did the frets feel sharp, or were they cleanly dressed? These memories are embedded in the guitarist; in fact, I’m sure the same is true for every musician. Any pianist or drummer would concur.

The first guitar I remember buying was Luna Guitars’ Passionflower acoustic-electric. It had a maple body finished in purple, an Orion 4-band EQ preamp with a built-in digital tuner, a mahogany neck with a rosewood fingerboard, and Luna’s iconic mother-of-pearl moon-phase fret markers, with a flower surrounding the soundhole cutout. I was mesmerized when I saw it; it looked like a painting of mine had come to life! It begged me to play it, and so I took it home.

Luna Guitars has been around since 2005 and was founded by stained-glass artist Yvonne de Villiers. According to Armadillo Enterprises, de Villiers’ inspiration came from watching her mother struggle over her 40-year career as a bass guitarist. She sought instruments that could be uniquely tailored to fit different players’ bodies, hands, and musical styles. She also wished to avoid the same boring look that most guitars had and instead make instruments that look and feel radical.

Today you can find Luna guitars everywhere, from Sweetwater to Sam Ash, from Reverb to Guitar Center. I frankly find it crazy that Fender (founded 1946), Taylor (founded 1974), and Yamaha (founded 1887) guitars can all be sold on the same in-store footing as a Luna, when the company is only 16 years old! What Luna lacks in age compared with those companies, it makes up for by delivering the quality of a $1,000 instrument without breaking the bank.

To make this comparison using Sweetwater, I chose sunburst-themed guitars from Yamaha, Fender, Taylor, and, of course, Luna. My only requirements? Each had to have a sunburst finish, be an acoustic-electric six-string, and have a mahogany neck, for consistency.

The Yamaha was a CPX1200II

This is a 6-string acoustic-electric guitar with a spruce top, rosewood back and sides, mahogany neck, ebony fingerboard, and SRT/System63 electronics, in Vintage Sunburst. For $1,349.99 you get a 3-band EQ, Focus/Wide control, Resonance control, and Blend control. It definitely thrives on the bottom end and lower tones, but is it worth the money?

Fender’s granddaddy, the Newporter

This is a 6-string acoustic-electric guitar with a spruce top, mahogany back and sides, mahogany neck, walnut fingerboard, and Fishman electronics, in Sunburst. Pay $429.99 for a very balanced guitar for players at any stage of their musical journey, especially those who live in the mids. The Fishman pickup/preamp is a personal favorite of mine for accentuating a guitar’s natural timbre.

Taylor’s 714ce V-Class

This 6-string acoustic-electric guitar, with a Lutz spruce top, Indian rosewood back and sides, mahogany neck, ebony fingerboard, and Taylor ES2 electronics in Western Sunburst, takes the cake at $3,199.00. It definitely balanced the highs, mids, and lows better than the other guitars mentioned.

While I could go into a more detailed review of each of these, does the Luna stack up against giants like Yamaha, Fender, and Taylor? Well, yes! Definitely! Luna has amassed a dedicated army of fans. Take the Luna Safari: a 6-string acoustic-electric with a spruce top, mahogany back and sides, mahogany neck, walnut fingerboard, and Luna SL3 electronics, in Tobacco Sunburst Satin. While it lacks the built-in Fishman pickup/preamp and resonance control, it makes up for it with effortless grab-and-go portability. It did sound a bit tinny on the high end, but at $199.00 for a premium feel, it could definitely be a contender for the next acoustic-electric you pick up.

A woman-founded company competing with musical giants is an inspiration for other women to lead their own companies alongside Luna. As for me? Maybe it’s the nostalgia talking, but my Luna is the best-sounding guitar for me.


 

Hombre cohete

Una de las cosas que más añoro y por la que me dedique hacer sonido en vivo es el constante movimiento, conocer gente y viajar por el mundo… Tener un llamado y una rutina en tour donde viajas muchísimo y pasas por tantos aeropuertos, subes y bajas de aviones constantemente, tienes cambios de horario todo el tiempo… Si, esas son algunas cosas que me gustan, pero para algunas otras profesiones, el hablar de cambios de horarios, vuelos y el mundo, tiene un significado mucho más literal…

Cuando hablamos de prepararnos para “el show”, nos llena de emoción y adrenalina, sentir la energía de tantas personas reunidas esperando ver un espectáculo, pero esta misma adrenalina, la sienten otras personas de una forma diferente… Imaginemos el escenario del ingeniero que maneja el sonido y la comunicación clave entre en el espacio y la tierra, que su “show” es ver una densa nube de vapor junto con una gran explosión y descarga de muchos decibeles que emite la nave mientras despega hacia el espacio exterior, ufff, no tengo palabras para imaginar esa sensación, es por eso que hice contacto con Alexandria Perryman, Ingeniera de sonido de la NASA …

Así que viajemos juntos para entender un poco el sonido y transmisión al espacio exterior.

Hace poco, televisaron el primer lanzamiento privado hacia el espacio que salió desde el Centro Espacial Kennedy, en Cabo Cañaveral, ahí, esta la Plataforma de Lanzamiento 39 de donde han despegado varias naves, entre ellas, el Apollo 11 (Que llevó al humano a la Luna), hasta el día de hoy ha sido uno de los principales puntos de conexión hacia el espacio.

La comunicación entre la tripulación de la estación espacial y el equipo de apoyo en la tierra, son fundamentales para el éxito de la misión. El poder transmitir un mensaje verbal en el espacio es crucial para la mayoría de las actividades de los astronautas, desde hacer caminatas espaciales, realizar experimentos, entablar conversaciones familiares y algo espectacular, poder transmitir información a todos los seres humanos en la tierra,

¿Pero como es que se logra esto?

Toda esta red de transmisión viaja hasta las personas que orbitan a más de 250 millas sobre la tierra gracias a una red de satélites de comunicación y antenas terrestres, todo esto forma parte de la Red Espacial de la NASA.

– de fondo… “Rocket Man” – Elton John,

canción preferida por algunos astronautas en sus viajes –

Un gran número de satélites de seguimiento y retransmisión de datos (Tracking and Data Relay Satellites – TDRS) forma la red de la base espacial, estos grandes aparatos, son y funcionan como torres de telefonía celular en el espacio, y se encuentran ubicados en una órbita geosíncrona a más de 22,000 millas sobre la Tierra, esto permite que la estación espacial se contacte a uno de los satélites desde cualquier lugar de su órbita. A medida que los satélites de comunicaciones viajan alrededor de la Tierra, estos permanecen por encima del mismo punto relativo en el suelo a medida que el planeta gira.

Los satélites de seguimiento y retransmisión de datos manejan información de voz y video en ¡tiempo real!, esto es, si un astronauta que esta en la estación espacial quisiera transmitir datos al Control de Misión en el Centro Espacial Johnson de la NASA, lo primero es; usar la computadora que esta a bordo de la estación para convertir los datos en una señal de radiofrecuencia, una antena en la estación transmite señal a la TDRS y luego ahí mismo dirige la señal al centro de pruebas de “White Sands” en donde se realizan pruebas y análisis de datos. A continuación, los teléfonos fijos envían la señal a Houston, y los sistemas informáticos en tierra convierten la señal de radio en datos legibles, si el Control de la Misión desea enviar datos de vuelta, el proceso se repite en dirección opuesta transmitiendo desde el centro de pruebas a TDRS y de ahí a la estación espacial. Lo increíble de esto es que el tiempo que se tarda en procesar este trayecto y conversión de datos es de muy pocos milisegundos por lo que no se percibe un retraso notable en la transmisión.

Toda esta comunicación es vital para el conocimiento y descubrimiento de muchos temas como el comportamiento de la órbita terrestre para que los astronautas realicen experimentos, proporcionando información valiosa en los campos de la física, la biología, la astronomía, la meteorología entre muchos otros. La Red Espacial entrega estos tan especiales y únicos datos científicos a la Tierra.

– “Here Comes the Sun” – The Beatles – Canción preferida por astronautas…

Platicando con Alexandria, comenta que antes de que existiera la Red Espacial, los astronautas y las naves espaciales de la NASA, solo podrían comunicarse con el equipo de apoyo en la tierra cuando estaban a la vista de una antena en el suelo, esto solo permitía comunicaciones de un poco menos de quince minutos cada hora y media aproximadamente. La comunicación en esos tiempos era muy lenta y complicada, pero actualmente, la Red Espacial, brinda cobertura de comunicaciones casi continua todos los días y eso es sumamente importante para el desarrollo y descubrimiento en el espacio.

En el año 2014, se probó una nueva tecnología de transmisión de datos “OPALS”, esto ha demostrado que las comunicaciones láser pueden acelerar el flujo de información entre la Tierra y el espacio, en comparación con las señales de radio, además OPALS ha recopilado una enorme cantidad de datos para avanzar en la ciencia enviando lásers a través de la atmósfera. Aunque los ingenieros de sonido encargados de la comunicación terrestre hacia los astronautas no la utilizan aún.

Como ingeniera de sonido, me gustaría  saber cual es el flujo de señal que utiliza la  ingeniera de audio que trabajan en la NASA, y esta fue la respuesta…

Todo el ruteo de señal y mezclas se realiza desde una a consola  System 5 Euphonix de AVID y cuando se manda señal o datos desde la tierra hacia el espacio, pasa primero a la consola de audio que a su vez se envía a un codificador digital de señal por medio de radio frecuencias que manda esta misma información hacia un decodificador que se encuentra en un satélite en el espacio para que la tripulación pueda estar en comunicación con la tierra.

Como lo mencionamos en un inicio, las Radio Frecuencias se utilizan hasta la fecha porque son más sencillas de captar además que transmiten mucho mas claro el sonido. En caso de que los astronautas realicen viajes más profundos al espacio, entonces se cambia la forma de transmisión enviando señales directamente a satélites especializados que mandan datos codificados entre ellos, en esta forma, existe un poco más de retraso pero no se pierde calidad del sonido.

Los astronautas estabilizan la nave para llegar la estación espacial internacional, observan Tremor (el dinosaurio) que sirvió como indicador de gravedad cero –

Algo que todo el mundo presenciamos fue cuando los astronautas Bob Behnken y Doug Hurley que viajaron en la nave espacial privada Dragon, llegaron a la Estación espacial recibidos por los astronautas Chris Cassidy, Anatoly Ivanishin and Ivan Vagner, fue entonces que transmitieron en vivo unas palabras utilizando un micrófono inalámbrico conectado directamente a una cámara que envió la señal a un satélite realizando el flujo de señal como lo explicamos anteriormente, Alex detrás de la consola haciendo Broadcast hacia todo el planta, pudo sentir un poco de retraso (más de lo normal) pero no afecto la sincronía entre el video y el audio así como la calidad del sonido, tuvo ¡buen show! Además entre risas me dijo que se sintió feliz por los cursos básicos que le dio a los astronautas para poder manejar el equipo audiovisual en el espacio.

He podido sentir la emoción de operar una misión espacial a través de las palabras y vivencias de una ingeniera de sonido que enfatiza la importancia de ser el lazo de unión entre el espacio y el planeta, transmitir la pasión, la tecnología y los descubrimientos que marcan el futuro de nuestro desarrollo tecnológico como seres humanos. No me siento tan lejana a esta sensación aunque literalmente vives en otro mundo.

Para aquellas personas que no estén seguras de que camino tomar o como obtener este tipo de oportunidades y trabajos, en tours o en diferentes áreas, les comparto que en el caso de Alex, aplico a un trabajo anunciado públicamente a través de redes profesionales en donde no decía que trabajaría para la NASA y se entero hasta que llego al lugar… Esto demuestra entre muchos más ejemplos que no hay que juzgar sino buscar y explorar,  cuando menos lo pienses llegarán estas oportunidades… mientras prepárate para que cuando las enfrentes, estés siempre mejor preparado(a).

Agradezco mucho el tiempo de charla con Alexandria Perryman y Karrie Keyes por la grandiosa introducción.

“Este es un pequeño paso para (el) hombre, un gran salto para la humanidad” primeras palabras de Neil Armstrong en la luna….

 

 

Rocket Man

One of the things I miss the most, and one of the reasons I dedicated myself to live sound, is the constant movement: meeting people and traveling the world… Having a call time and a routine on tour where you travel constantly, pass through so many airports, hop on and off planes, and deal with schedule changes all the time… Yes, those are some of the things I like, but for some other professions, talk of schedule changes, flights, and the world has a much more literal meaning…

When we talk about preparing for “the show,” it fills us with excitement and adrenaline, feeling the energy of so many people gathered and waiting for a spectacle. But other people feel that same adrenaline in a different way… Imagine being the engineer who handles the sound and the key communication between space and Earth, whose “show” is watching a dense cloud of vapor along with a huge blast and a discharge of many decibels as the ship lifts off toward outer space. I have no words to imagine that feeling, which is why I got in touch with Alexandria Perryman, sound engineer for NASA…

So let’s travel together to understand a little about sound and transmission to outer space.

Countdown: T minus 10 seconds… 9, 8, 7, 6, 5, 4, 3, 2, 1, 0!!!

Recently, the first private crewed launch into space was televised, departing from the Kennedy Space Center at Cape Canaveral. That is the home of Launch Complex 39, from which several spacecraft have lifted off, including Apollo 11 (which took humans to the Moon), and to this day it remains one of our main points of connection to space.

Communication between the space station crew and the support team on Earth is critical to the mission’s success. Being able to convey a verbal message in space is crucial for most astronaut activities, from doing spacewalks and conducting experiments to having family conversations, and, most spectacularly, transmitting information to every human being on Earth.

But how do you achieve this?

This entire transmission network reaches the people orbiting more than 250 miles above Earth thanks to a network of communication satellites and ground antennas, all of which form part of NASA’s Space Network.

A fleet of Tracking and Data Relay Satellites (TDRS) forms the backbone of the Space Network. These large spacecraft work like cell towers in space: they sit in geosynchronous orbit more than 22,000 miles above Earth, which allows the space station to contact one of the satellites from anywhere in its own orbit. As these communications satellites travel around the Earth, they remain above the same relative point on the ground as the planet rotates.
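As a quick aside, that altitude isn’t arbitrary: it falls straight out of Kepler’s third law for an orbit whose period matches one sidereal day. A back-of-the-envelope check (my own calculation, not from the article):

```python
import math

# Geosynchronous orbit: one revolution takes one sidereal day, per Kepler's third law.
MU_EARTH = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1         # seconds
EARTH_RADIUS = 6_378_137.0     # equatorial radius, m

# a^3 = mu * T^2 / (4 * pi^2)  ->  orbital radius measured from Earth's center
radius = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (radius - EARTH_RADIUS) / 1000
altitude_miles = altitude_km * 0.621371

print(f"Geosynchronous altitude: {altitude_km:,.0f} km ≈ {altitude_miles:,.0f} miles")
# -> roughly 35,786 km, i.e. a little over 22,000 miles above the surface
```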

The Tracking and Data Relay Satellites handle voice and video in real time! If an astronaut on the space station wants to transmit data to Mission Control at NASA’s Johnson Space Center, the first step is to use the computer on board the station to convert the data into a radio-frequency signal. An antenna on the station transmits that signal to a TDRS, which relays it to the White Sands ground terminal, where data testing and analysis are performed. Landlines then send the signal to Houston, and ground computer systems convert the radio signal into readable data. If Mission Control wants to send data back, the process is repeated in the opposite direction, transmitting from the ground terminal to a TDRS and from there to the space station. The amazing thing is that this whole path and data conversion takes only a few milliseconds of processing, so there is no noticeable delay in the transmission.

All of this communication is vital to knowledge and discovery on many fronts, such as enabling astronauts to conduct experiments in Earth orbit that provide valuable information in the fields of physics, biology, astronomy, and meteorology, among many others. The Space Network delivers this special and unique scientific data to Earth.

Talking with Alexandria, she explained that before the Space Network existed, NASA astronauts and spacecraft could only communicate with the support team on Earth when they were within sight of an antenna on the ground, which allowed just under fifteen minutes of communication roughly every hour and a half. Communication back then was slow and complicated, but today the Space Network provides nearly continuous communications coverage every day, and that is extremely important for development and discovery in space.

In 2014, a new data-transmission technology called OPALS was tested. It has shown that laser communications can accelerate the flow of information between Earth and space compared to radio signals, and OPALS has collected a huge amount of data to advance the science of sending lasers through the atmosphere. That said, the sound engineers in charge of ground-to-astronaut communication don’t use it yet.

Did you know that the Gemini 6 crew began the wake-up-music tradition in 1965, waking up to Jack Jones’ “Hello, Dolly!”?

As a sound engineer, I wanted to know the signal flow used by the audio engineer working at NASA, and this was the answer…

All signal routing and mixing is done on an Avid System 5 (Euphonix) console. When a signal or data is sent from the ground into space, it first passes through the audio console, which feeds a digital encoder that sends the information over radio frequencies to a decoder on a satellite in space, so that the crew can stay in communication with the ground.

As we mentioned at the beginning, radio frequencies are still used today because they are easier to receive and they carry the sound much more clearly. When astronauts make deeper trips into space, the transmission method changes: signals are sent directly to specialized satellites that relay coded data among themselves. This introduces a little more delay, but no sound quality is lost.

The astronauts stabilize the spacecraft to reach the International Space Station, watching Tremor (the dinosaur), who served as their zero-gravity indicator –

 

Something we all witnessed was when astronauts Bob Behnken and Doug Hurley, who traveled on the private Dragon spacecraft, arrived at the Space Station and were received by astronauts Chris Cassidy, Anatoly Ivanishin, and Ivan Vagner. They then broadcast a few words live using a wireless microphone connected directly to a camera, which sent the signal to a satellite following the signal flow explained above. Alex, behind the console broadcasting to the whole planet, could feel a little more lag than normal, but it didn’t affect the sync between video and audio, or the sound quality. She had a good show! She also laughed as she told me how happy she was about the basic training she had given the astronauts so they could operate the audiovisual equipment in space.

I have been able to feel the thrill of operating a space mission through the words and experiences of a sound engineer who emphasizes the importance of being the link between space and our planet, transmitting the passion, technology, and discoveries that mark the future of our technological development as human beings. I don’t feel so far removed from that sensation, even though in her case the other end of the line is literally another world.


For those who aren’t sure which path to take or how to land these kinds of opportunities and jobs, on tours or in other areas, I’ll share that in Alex’s case, she applied to a publicly posted job through professional networks; the listing didn’t say she would be working for NASA, and she only found out when she arrived on site… This shows, among many other examples, that you shouldn’t prejudge, but keep searching and exploring. These opportunities will arrive when you least expect them… in the meantime, prepare yourself so that when you face them, you’re ready.

I am very grateful to Alexandria Perryman for her time, and to Karrie Keyes for the great introduction.
