Empowering the Next Generation of Women in Audio

Keeping It Real

Using psychoacoustics in IEM mixing and the technology that takes it to the next level

SECTION 1

All monitor engineers know that there are many soft skills required in our job – building a trusting relationship with bands and artists is vital for them to feel supported so they can forget about monitoring and concentrate on their job of giving a great performance. But what do you know about how the brain and ears work together to create the auditory response, and how can you make use of it in your mixes?

Hearing is not simply a mechanical phenomenon of sound waves travelling into the ear canal and being converted into electrical impulses by the nerve cells of the inner ear; it’s also a perceptual experience. The ears and brain join forces to translate pressure waves into an informative event that tells us where a sound is coming from, how close it is, whether it’s stationary or moving, how much attention to give to it and whether to be alarmed or relaxed in response. Whilst additional elements of cognitive psychology are also at play – an individual’s personal expectations, prejudices and predispositions, which we cannot compensate for – monitor engineers can certainly make use of psychoacoustics to enhance our mixing chops. Over the space of my next three posts, we’ll look at the different phenomena which are relevant to what we do, and how to make use of them for better monitor mixes.

What A Feeling

Music is unusual in that it activates all areas of the brain. Our motor responses are stimulated when we hear a compelling rhythm and we feel the urge to tap our feet or dance; the emotional reactions of the limbic system are triggered by a melody and we feel our mood shift to one of joy or melancholy; and we’re instantly transported back in time upon hearing the opening bars of a familiar song as the memory centres are activated. Studies have shown that memories can be unlocked in severely brain-damaged people and dementia patients by playing them music they have loved throughout their lives.

Listening to music prompts the brain’s reward system to release dopamine – the same potentially addictive chemical which is also released in response to sex, Facebook ‘likes’, chocolate and even cocaine… making music one of the healthier ways of getting your high. DJs and producers use this release to great effect when creating a build-up to a chorus or the drop in a dance track; in a phenomenon called the anticipatory listening phase, our brains actually get hyped up waiting for that dopamine release when the music ‘resolves’, and it’s manipulating this pattern of tension and release which creates that Friday night feeling in your head.

Missing Fundamentals

Our brains are good at anticipating what’s coming next and filling in the gaps, and a phenomenon known as ‘missing fundamentals’ demonstrates a trick our brains play on our audio perception. Sounds that are not a pure tone (i.e. a single-frequency sine wave) have harmonics. These harmonics fall at whole-number multiples of the fundamental: a sound with a root note of 100 Hz will have harmonics at 200, 300, 400, 500 Hz and so on. However, our ears don’t actually need to receive all of these frequencies in order to correctly perceive the pitch. If you play those harmonic frequencies and then remove the root frequency (in this case 100 Hz), your brain will fill in the gap – you’ll still hear 100 Hz even though it’s no longer there. You experience this every time you speak on the phone with a man – the fundamental of the average male voice is around 150 Hz, but most phones cannot reproduce below 300 Hz. No matter – your brain fills in the gaps and tells you that you’re hearing exactly what you’d expect to hear. So whilst the tiny drivers of an in-ear mould may not physically be able to reproduce the very low fundamental notes of some bass guitars or kick drums, you’ll still hear them as long as the harmonics are in place.
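You can demonstrate the missing fundamental numerically. Here’s a quick sketch in Python with NumPy (purely illustrative, not from the original article): it builds a signal from the 200–500 Hz harmonics of a 100 Hz root, leaves the 100 Hz component out entirely, and uses autocorrelation to show the waveform still repeats every 10 ms – the 100 Hz period your brain latches onto as pitch.

```python
import numpy as np

sr = 44100                          # sample rate in Hz
t = np.arange(int(0.1 * sr)) / sr   # 100 ms of audio

# Harmonics of a 100 Hz root -- with the 100 Hz fundamental itself omitted
signal = sum(np.sin(2 * np.pi * f * t) for f in (200, 300, 400, 500))

# The composite waveform still repeats every 1/100 s. Autocorrelation finds
# that 10 ms period directly (the strongest self-similarity after zero lag):
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
lag = int(np.argmax(ac[300:600])) + 300   # search around the expected period
print(sr / lag)   # ~100 Hz -- the "missing" fundamental
```

This periodicity cue is exactly what the auditory system exploits: removing the fundamental doesn’t change the repetition rate of the composite wave, so the perceived pitch stays put.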

A biased system

Human hearing is not linear – our ear canals and brains have evolved to give greater sensitivity to the frequencies where speech intelligibility occurs. This is represented in the famous Fletcher-Munson equal-loudness curves, and it’s where the concept of A-weighting for measuring noise levels originated. As you can see from the diagram below, a 62.5 Hz tone has to be played around 30 dB SPL louder than a 1 kHz tone before we perceive the two as equally loud.
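The A-weighting curve mentioned above has a standard closed form (IEC 61672), so you can put rough numbers on this bias. A small Python sketch – added here for illustration, not part of the original post:

```python
import math

def a_weighting_db(f):
    """IEC 61672 A-weighting: relative sensitivity in dB at frequency f (Hz)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00   # offset normalises 1 kHz to 0 dB

print(round(a_weighting_db(1000), 2))   # 0 dB at 1 kHz, by definition
print(round(a_weighting_db(62.5), 1))   # about -26 dB: low tones need far more SPL
```

The roughly 26 dB penalty at 62.5 Hz is the same effect the equal-loudness contours describe, though the exact figure depends on listening level.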

Similarly, the volume threshold at which we first perceive a sound varies according to frequency. The area of the lowest absolute threshold of hearing is between 1 and 5 kHz; that is, we can detect a whisper of human speech at far lower levels than we detect a frequency outside that window. However, if another sound of a similar frequency is also audible at the same time, we may experience the phenomenon known as auditory masking.

This can be illustrated by the experience of talking with a friend on a train station platform, and then having a train speed by. Because the noise of the train encompasses the same frequencies occupied by speech, suddenly we can no longer clearly hear what our friend is saying, and they have to either shout to be heard or wait for the train to pass: the train noise is masking the signal of the speech. The degree to which the masking effect is experienced is dependent on the individual – some people would still be able to make out what their friend was saying if they only slightly raised their voice, whilst others would need them to shout loudly in order to carry on the conversation.

Masking also occurs in a subtler way. When two sounds of different frequencies are played at the same time, as long as they are sufficiently far apart in frequency two separate sounds can be heard. However, if the two sounds are close in frequency they are said to occupy the same critical bandwidth, and the louder of the two sounds will render the quieter one inaudible. For example, if we were to play a 1kHz tone so that we could easily hear it, and then add a second tone of 1.1kHz at a few dB louder, the 1k tone would seem to disappear. When we mute the second tone, we confirm that the original tone is still there and was there all along; it was simply masked. If we then re-add the 1.1k tone so the original tone vanishes again, and slowly sweep the 1.1k tone up the frequency spectrum, we will hear the 1k tone gradually ‘re-appear’: the further away the second tone gets from the original one, the better we will hear them as distinct sounds.
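One common way to put numbers on the critical bandwidth is the equivalent rectangular bandwidth (ERB) approximation of Glasberg and Moore. The sketch below (Python; an illustration, not something from the original article) shows why 1 kHz and 1.1 kHz clash while 1 kHz and 1.4 kHz don’t:

```python
def erb_hz(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore approximation) at f Hz."""
    return 24.7 * (4.37 * f / 1000 + 1)

bw = erb_hz(1000)
print(round(bw, 1))   # ~132.6 Hz: width of one critical band around 1 kHz
print(100 < bw)       # True: 1.0 and 1.1 kHz share a band, so masking is likely
print(400 < bw)       # False: 1.0 and 1.4 kHz are resolved as separate sounds
```

In other words, the 1.1 kHz tone in the example sits inside the same critical band as the 1 kHz tone; sweep it a few hundred hertz away and the two fall into separate bands and separate percepts.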

This ability to hear frequencies distinctly is known as frequency resolution, a type of filtering that takes place in the basilar membrane of the cochlea. When two sounds are very close in frequency, we cannot distinguish between them and they are heard as a single signal. This is why someone with hearing loss due to cochlear damage will typically struggle to differentiate between consonants in speech.

This is an important phenomenon to be aware of when mixing. The frequency range to which our hearing is most attuned, 500 Hz – 5 kHz, is where many of our musical inputs such as guitars, keyboards, strings, brass and vocals reside; and when we over-populate this prime audio real estate, things can start to get messy. This is where judicious EQ’ing becomes very useful in cleaning up a mix – for example, although a kick drum mic will pick up frequencies in that mid-range region, that’s not where the information for that instrument is. The ‘boom’ and ‘thwack’ which characterise a good kick sound sit below and above that envelope, so by creating a deep EQ scoop in that mid-region, we can clear out some much-needed real estate and un-muddy the mix. Incidentally, because of the non-linear frequency response of our hearing, this also tricks the brain into thinking the sound is louder and more powerful than it is. The reverse is also true: rolling off the highs and lows of a signal creates a sense of front-to-back depth and distance.

It’s also worth considering whether all external track inputs are necessary for a monitor mix – frequently pads and effects occupy this territory, and whilst they may add to the overall picture on a large PA, are they helping or hindering when it comes to creating a musical yet informative IEM mix?

Next time: In the second part of this psychoacoustics series we’ll examine the Acoustic Reflex Threshold, the Haas effect, and how our brains and ears work together to determine where a sound is coming from; and we’ll explore what it all means for IEM mixes.


Shadowing Opportunity w/Guit Tech Claire Murphy

SoundGirls members who are actively pursuing a career in guitar teching, backline or concert production are invited to shadow guitar tech Claire Murphy, who is currently on tour with Vance Joy.

The experience will focus on guitar teching: setting up “guitar world,” setting up the stage, and experiencing line check and soundcheck with the artist. This is open to SoundGirls members ages 18 and over. There is one (1) spot available for each show. Most call times will be at 11.30am (TBD), and members will most likely be invited to stay for the show (TBD). Ideally, applicants will be able to demonstrate some experience in touring or knowledge thereof, to gain the most from this opportunity.

Please fill out this application and send a resume to soundgirls@soundgirls.org with Vance Joy in the subject line. If you are selected to attend, information will be emailed to you.

Playing With Voices

When I went to the Acoustical Society of America’s meeting a few years ago, I did not know what to expect. I was presenting an undergraduate research paper on signal processing and was expecting individuals with similar backgrounds. Instead, there were presentations on marine wildlife, tinnitus, acoustic invisibility and the speech patterns of endangered languages. One individual I met there was Colette Feehan, a linguistics doctoral student at Indiana University. I gravitated to her upbeat personality and affinity for collecting awesome trivia. When she mentioned in passing her interest in voice acting, I thought I should follow up and pick her brain on the nuances of the craft.

Colette Feehan

What is voice acting?

Voice acting is providing vocalizations for various kinds of animated characters and objects. This can be speech, grunts, screams, musical instruments, animal vocalizations, and a whole array of other sounds. When watching an animated TV show or movie, every sound you hear has to come from either someone’s mouth or some creative use of props. Often voice acting draws from generalizations about language that both the actor and the audience hold. In a way, some might think of voice acting as acting with a handicap. You’re not just acting with one arm tied behind your back; you’re acting without the help of any of your body language, facial expressions, etc. You need to convey all that information using just your voice. It’s honestly quite fascinating.

What got you interested in voice acting?

As a kid, I would always imitate sounds from baby elephants to musical instruments to voicing children younger than me. I can’t think of one specific moment that made me interested in voice acting, but I can certainly say it has always been a part of my life.

Who are your favorite voice actors?

I have too many to count. Some classic voice actors are Daws Butler (Yogi Bear, Elroy Jetson, Cap’n’Crunch) and June Foray (Rocky the Flying Squirrel, Cindy Lou Who, Mulan’s Grandmother). There is also Charlie Adler (Cow, Chicken, and the Red Guy from Cow and Chicken, Mr. and Mrs. Big Head in Rocco’s Modern Life), Frank Welker (Fred Jones from Scooby Doo, Nibbler from Futurama), Rob Paulsen (Yakko Warner, Carl Wheezer, Pinky), Grey DeLisle (Mandy from the Grim Adventures of Billy and Mandy and Azula in Avatar), Tara Strong (Timmy Turner, Bubbles from Powerpuff Girls, Dil Pickles), and Dee Bradley Baker (Momo and Appa from Avatar, Olmec in Legends of the Hidden Temple, Perry the Platypus).

What are your favorite voices to do?

First, I think it’s important to mention that I study the linguistics, phonetics, and acoustics of voice actors MUCH more than I actually do voices myself, though I have lent my voice to some improv, plays, friends’ animated projects, etc.

I’m a bit of a one-trick pony when it comes to voices, though. I can do teenagers and little kids, but not much else.

Any favorite tricks or sounds?

In contrast, I can do loads of weird sounds: kazoo, trumpet, electric guitar, mourning dove, cats (meow and purr), dogs.

Does voice acting have a specific lingo, and if so what terms should directors learn for more efficient directing?

It does! I’ve actually considered starting an informal dictionary of terms while working with voice actors on the linguistics of voice acting. Most of the lingo I’ve paid attention to maps onto linguistics concepts: what linguists call “dark L”, some voice actors call “lazy L”, and what linguistics calls “breathy voice”, voice actors call “smokey voice”. The one that is really interesting is what Rebecca Starr (2015) calls “sweet voice” – an EXTREMELY specialized kind of breathy voice found in anime that indexes a very specific character archetype.

I have heard that you are doing some research on voice actors, could you tell me a little about that?

In the Speech Production Lab at Indiana University, I am using a special 3D/4D ultrasound setup to look at the articulatory phonetics of adult voice actors who produce child voices for TV and film. A lot of people either don’t know or don’t think about how, when we listen to child characters, particularly in animated TV, those voices are often being produced by an adult. The big question I am asking with my dissertation is: what are adults doing with their vocal tract anatomy in order to sound like a child?

So if anyone doesn’t know a lot about how ultrasound works, here is a quick and dirty description:

Ultrasound works by emitting pulses of high-frequency sound and timing how long it takes for those sound waves to bounce back. We place an ultrasound probe (like what you use to see a baby) under the participant’s chin and record ultrasound data of their speech in real time. What we can see using ultrasound is an outline of the surface of the tongue. The sound waves travel through the tissues of the face and tongue, which are a fairly dense medium. When the waves come into contact with the air along the surface of the tongue, which is a much lower-density medium, they show up on the ultrasound as a bright line, which we can trace to create static images and dynamic video of the tongue movement. So what does 3D/4D mean? We have a fancy ultrasound probe that records in three planes: sagittal, coronal, and transverse. We take all these static, 2D images, trace them, then compile them into one 3D representation of the tongue. Then we can sync this with a recording of the speech, creating our 4th D: time. So we can create videos of what a 3D representation of the tongue is doing while speaking, and hear what it was doing at that moment. It is really cool.
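The timing arithmetic behind pulse-echo ultrasound is straightforward: depth is just (speed of sound × round-trip time) / 2. A hedged sketch in Python (1540 m/s is a typical textbook value for soft tissue, and the function name is invented for this example):

```python
C_TISSUE = 1540.0   # typical speed of sound in soft tissue, m/s

def echo_depth_mm(round_trip_us):
    """Depth (mm) of a reflecting surface, from round-trip echo time in microseconds."""
    seconds = round_trip_us * 1e-6
    return C_TISSUE * seconds / 2 * 1000   # halve for the out-and-back path

print(echo_depth_mm(65))   # ~50 mm: a 65 microsecond echo comes from about 5 cm deep
```

The scanner repeats this calculation for every returning echo along every beam angle, which is how those bright-line tongue traces get their spatial coordinates.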

So back to voice actors. With my dissertation research, I am imaging a few voice actors in two conditions: 1) doing their regular, adult voices and 2) doing their child voices. Then I compare what changes across those two conditions and what doesn’t.

So things I am looking for are: What is the hyoid bone doing (the bone in your neck near where your neck meets your head)? Does the place where the tongue touches the roof of the mouth for different consonants change? Are general tongue shapes and movements different across the two conditions? How do the acoustics change (how does the sound change)? Are those changes in acoustics changes that we would predict based on what the anatomy is doing?

How balanced is diversity in the voice actor industry?

Voice acting has a bit of a double-edged sword in that you don’t have to *look* the part to get the role. It’s just your voice! So someone who might not be your size-6, blonde-haired, wide-eyed beauty can still get the opportunity to play that character. Where this becomes negative, however, is with actors of color. Because you don’t have to look the part, I think a lot of white actors get roles that otherwise would have HAD to go to an actor of color. I do know the field has recently been trying to address this issue, but we can certainly do better.

So what is your opinion on vocal fry?

I love creaky voice (I’m going to use this term instead). It can mean so many different things, socially. Is the speaker a man or a woman? Are they in their 20s? Are they using uptalk? Are they just running out of air at the end of their utterance?

Why is there the focus on women’s vocal fry?

I can’t say I’ve studied why specifically women’s creaky voice has blown up so much recently. Creak is really common in deeper voices, so men do it all the time, but we don’t seem to notice. Maybe when women started doing it more people unconsciously associated it with being manly and negatively reacted to it. Or maybe it’s that creak is often paired with uptalk, so it became stigmatized really quickly.

How are men’s and women’s voices different?

Again, I’m not sure I’m the most qualified to talk about this, but I can say that men’s and women’s voices differ in many categories. First, there is simply anatomy; men have a more prominent Adam’s apple, which increases the area for resonance in the larynx. They also tend to be bigger, have bigger lungs, etc., making their voices different. Then there are a lot of social ways in which men’s and women’s voices differ. Taking creak as an example again, when women use creak it is associated with very different things than when a man uses creak. So the same vocal behavior can be interpreted quite differently depending on who is performing it. Humans are fascinating.


Missed this Week’s Top Stories? Read our Quick Round-up!

It’s easy to miss the SoundGirls news and blogs, so we have put together a round-up of the blogs, articles, and news from the past week. You can keep up to date and read more at SoundGirls.org

June Feature Profile

The Road from Montreal to Louisville – Anne Gauthier

The Blogs

Multitasking – Why you should avoid it

Soldering for Beginners


SoundGirls News

Shadowing Opportunity w/ FOH Engineer Kevin Madigan

Shadowing Opportunity w/ ME Aaron Foye

Letter for Trades and Manufacturers

https://soundgirls.org/scholarships-18/

Accepting Applications for Ladybug Music Festival

https://soundgirls.org/event/vancouver-soundgirls-chapter-one-year-anniversary/?instance_id=1285

SoundGirls London Chapter Social – June 17

https://soundgirls.org/event/glasgow-soundgirls-meet-greet/?instance_id=1272

Shadowing Opportunities

Telefunken Tour & Workshop

https://soundgirls.org/event/colorado-soundgirls-ice-cream-social/?instance_id=1313

SoundGirls Expo 2018 at Full Sail University

Round Up From the Internet

The Theatrical Sound Designers and Composers Association Releases Statement on Women+ in Sound Design for Broadway and Theatres Across the Country


Engineer Liv Nagy on mixing sound for theatre


SoundGirls Resources

Directory of Women in Professional Audio and Production

This directory provides a listing of women in disciplines industry-wide for networking and hiring. It’s free – add your name, upload your resume, and share with your colleagues across the industry.


Women-Owned Businesses

Member Benefits

Events

Sexual Harassment

https://soundgirls.org/about-us/soundgirls-chapters/

Jobs and Internships

Women in Professional Audio

Multitasking – Why you should avoid it

Being multitalented is excellent and almost a necessity in the audio industry. It is expected of us to be able to do many different things, sometimes even at the same time!

However, I believe multitasking at work should be avoided if possible, and here is why:

The other week I was asked to do a live recording of a band while they performed. We have set up multi-track recording via Dante, which means we can record straight into Pro Tools via a Cat 5 cable. This is great and makes life a hell of a lot easier when doing live recordings.

But also recently, having had a lighting course on the Zero 88 Jester 24 lighting desk, I now control the lights in more depth than we used to.

So this one evening I was going to run the live sound, the lights, and record one of our four acts, while also making sure all the artists were looked after and ready to go for their allocated time slot.

I did not think much of it; I came in early to set up Pro Tools and make sure it was up and running. When that was done, I prepared the stage and the setups for the different bands. I set up the lights; we had photographers in that evening, so we made sure the lights hit all the sweet spots and set the colours to make sure the artists would look great in pictures.

I felt good about having everything set up, ready to go on time, and did not feel stressed at all.

Well, that was until the first act had almost finished their set. I thought I would do a test recording of the first act to make sure it sounded great for the second act – the band I had promised to record. At this point, I realised I was not getting any signal from any of the wireless microphones.

Why?

Well, we have a Yamaha Rio 32×24 stage box, but our Shure Beta 58A wireless microphones are plugged directly into the back of our Yamaha QL1. Immediately, I felt fairly stressed as the first act walked off the stage, and I simply did not have the time or hands to re-route it in the Dante Controller software.

As the second act walked on stage, I helped them set up and quickly decided that the vocalist would have to use a wired Shure SM58 running through the Rio, as I knew this route was already working. Not a big deal, but I definitely panicked for a second, as I had promised and confidently said I would be able to record it, and there was just no room for any mistakes. Luckily, I came up with a solution very quickly, though I felt ever so slightly stressed out.

I recorded the band, it sounded great, but I felt that my focus was definitely not where it should have been. It was a live show, and my focus should have purely been on the live sound.

My thinking was that everything was going to go well; it is not like we can predict disaster, and obviously we want all live shows and recordings to go well. However, something I have learned over the years is that things often do not run smoothly, and you must leave room for mistakes. No matter how good you are, no matter how many things you think you can do, mistakes happen. Technology breaks down. And when you are alone, you simply will not have the time to solve a problem, and you will cause yourself unnecessary stress.

I did, after all, run a successful night: the band was happy with the recording, the photographers were pleased with the lighting, and everyone was happy with the sound. However, I learned my lesson, and next time I will get another pair of hands into the mix. It is simply not worth risking a show and a recording because you decide to do everything on your own.

However, if you ever do have to multitask and handle several things on your own, leave plenty of room for mistakes – because they will happen!


Soldering for Beginners

Soldering is one of the most useful skills a sound technician can have. It can seem daunting at first, but it is surprisingly easy once you know how. It can help you understand your equipment and signal flow better, save you money, and there’s nothing quite like whipping out a soldering iron and saving a gig to silence the doubters. Entire books have been written on the subject, and it takes practice to perfect it, but I’m going to outline the very basics you need to get started.

A note on safety!

Soldering irons, unsurprisingly, get very hot! Keep your work area clear and well ventilated, only hold it by the handle, always put it back in its holder and don’t leave it unattended until it has cooled down. Remember that the things you are soldering will also get hot. Be careful not to melt the glue keeping a PCB in place, for example. I also need to point out you shouldn’t solder something in situ above you while lying on your back. Thanks, Tim…

Equipment

You will need:

A soldering iron: Buy the best you can afford, because it will last you for years. There are a few different types, each with their own advantages. Mains-powered irons can either be standalone or come with a station, which can control the temperature and give you a readout of it. Stations also include holders and sponges, so you have a neat setup. Battery- or gas-powered irons are a lot more portable, and you don’t need to rely on a mains supply to use them. Non-temperature-controlled irons might struggle to solder bigger items, because large items absorb the iron’s heat until its temperature drops too low to be effective.

Tips: There is a whole world of iron tips out there. For sound work, you’re most likely to need an iron-plated conical tip. They need to be replaced periodically, so keep a few and clean them regularly.

Solder: Many people swear lead solder makes the best joints and is the easiest to work with, but it is also poisonous and bad for the environment. Lead solder has been outlawed, in the EU at least, for use in plumbing and consumer electronics due to its hazardous properties. It is still available for private use. There are a variety of lead-free solders on the market, but they still emit some toxic fumes, have a higher melting point, and the resulting joins may be more brittle than traditional lead ones. Whichever you opt for, pay attention to the percentages of each metal present in your solder: different combinations will have different melting points and strengths. 60% tin, 40% lead is the standard alloy traditionally used for electronics. Most solder comes with a flux core, which is a resin (rosin in the US) that helps bind and strengthen your joints and keep them clean. You can buy your solder and flux separately if you really want to, but that tends to be used for advanced repairs and is unnecessary for beginners.

Sponge and metal wool: Back when all solder contained lead, cleaning your tip on a damp sponge was fine. However, lead-free solder works at a higher temperature, and the water from the sponge can cause the tip to dip below your optimum operating point; the repeated thermal shock of cleaning can also crack the tip’s plating and let solder penetrate it. Using brass wool avoids these problems.

Solder sucker/desoldering wick: These help clean old solder away before you work your new join. Don’t just melt and reuse the solder that’s already there!

Soldering board/helping hand: You can make a soldering board out of some wood and old cable connectors, so you just plug the cable you’re soldering into its corresponding socket on the board to keep it still. You can also draw wiring diagrams above each one to refer to as you go along. For some applications a “helping hand” might be more useful: it consists of a magnifying glass and two alligator clips on a heavy base, so you can hold cables in place and get a better view while working.

Wire strippers and cutters: You can get by with just a knife, but a good set of wire strippers will save you time and the frustration of accidentally cutting through the entire wire you were trying to strip.

Method

Let’s take resoldering a broken leg on an XLR as an example.

XLR Short Earth

If you’re using a new iron tip, you can “tin the tip”: heat the iron up and melt a thin layer of solder evenly over the tip, so it’s shiny. This improves heat transfer, protects the tip from oxides and makes it easier to clean. Regularly cleaning and re-tinning the tip will improve the quality of your joins and help the tip last much longer.

Once everything is in place, you first need to remove the casing around the wires. Make a note of which wire goes where (if you ever get confused, just refer to a diagram or open another cable on the same end and compare it to the one you’re fixing). If there isn’t much wire left to work with, don’t be tempted to make a tight fix. It will take too much strain when the cable is moved and will break again soon after (The one exception to this is that some people purposefully make the earth leg shorter, like in the photo, as it is stronger and can take the strain instead of the other pins. This can be tricky to do, and subsequently repair, so is more of an advanced technique). Desolder the other legs of the cable, trim them to the same length and strip the wires back until you have just enough to work with. If you strip too far back, the metal from different legs can touch and cause all sorts of signal problems. If the broken leg is still long enough, just remove the old solder from its join and leave the other two legs attached.

Take the iron in one hand, and hold out a length of solder in the other. Then the important bit: heat the wire, not the solder! You need to heat the wire and its connector so that they melt the solder. If you heat the solder directly and try to drop it onto the join, it will just cling to your iron. While holding the iron on one side of the area you want to join, touch the solder onto the other side. It should melt and flow around the wire and connector, binding them together. Avoid breathing in the fumes! Keep going until the whole area is covered, removing the iron as soon as you can to minimise the amount of extra solder you’ll need to clean off it. It should only take a few seconds to heat the wire; if nothing happens when you touch the solder to the join, or it only melts when you’ve held the iron in place for a long time, your iron isn’t hot enough. The solder on the join should look clean, shiny and smooth. If it is dull or uneven, it is a sign of a bad join and is liable to break again. You can just desolder and do it again until it’s right!

Finally, put the components back together and test your XLR with a cable tester. Never put an untested cable back into use after soldering it. Turn your iron off, put it somewhere safe until it’s cooled down, and enjoy your new skill!

Additional Resources

Illustrated easy guide to soldering (electronics-focused)

Once you are more comfortable soldering, you might want to make your own phantom power checker


The Road from Montreal to Louisville – Anne Gauthier

Anne Gauthier is a self-taught independent recording engineer, producer, and drummer originally from Montreal, Canada. She is currently working at La La Land in Louisville, KY.

Anne started touring with bands as a drummer when she was 19. She found her favorite part of being in bands was time spent in the recording studio, and at some point she decided she wanted to get serious about recording. “The non-official start of my recording adventure was a boombox setup to record casio/vocal duet rock operas with my brother when I was seven. A friend lent me a four-track tape recorder and a couple of 57 knockoffs in my early 20s, which I used for a few years to record my own projects.” She finds the recording process to be technical, creative and instinctual all at once. She would go on to build a home recording studio.

Anne became interested in analog recording and stumbled across an article in Tape Op about Kevin Ratterman, his studio, and his work with analog recording. She decided to email Kevin, and he responded. They stayed in touch for a couple of years, and then one day he invited her to assist at the studio. Anne got a work visa and moved to Louisville. She says she has been “very, very fortunate to find such a kind and talented mentor and co-worker.”

Anne started engineering her own sessions shortly after arriving at La La Land, and she recently became the head engineer. At La La Land, she has access to a broader selection of gear, and she has found that being able to track in a large room has changed her recording decisions. Anne says that her “approach to recording has always been about finding the best recording color to fit whatever project’s personality. Using gear as a means to represent the band in their most natural and interesting light. So even if I wouldn’t call myself a gearhead to any extent, it’s been really fun having a wide array of classic recording gear to experiment with while recording.”

As an engineer, she has been able to work on diverse projects, from hip-hop and jazz to metal, rock, pop, roots, and country. This has made her a well-rounded engineer. She has also learned to work with many different people and personalities, an experience she has found has made her more patient.

Anne finds inspiration in recordings made with vintage gear and tracked to tape. She loves old country and Motown records. Some of the recent recordings that have influenced her are Mary Gauthier’s “Mercy Now” (Gurf Morlix), Mac DeMarco’s “Salad Days,” Vivian Girls’ “Share the Joy” (Jarvis Taveniere), Black Mountain’s s/t (Colin Stewart), Wye Oak’s “Civilian” (John Congleton), The Dead Weather’s “Sea of Cowards” (Vance Powell), and Big Thief’s “Capacity” (Andrew Sarlo).

Anne can count on half of one hand the number of women who have risen to the top of the industry. While enrollment has increased in recording schools, she has not seen the results in studios. She says she has been fortunate that she has been supported and has had fantastic mentors.

Anne also volunteers her time with Girls Rock Louisville, which teaches young women and gender-nonconforming youth how to play instruments, write music, and form bands, building confidence, self-esteem, and critical thinking.

Anne is excited to keep working, growing, and learning; even after 20 years, you can always get better. Her parting advice: be yourself, be kind, be respectful. Keep learning, and don’t be scared to stand up for yourself and others.

Must-Have Skills: Patience, an understanding of different styles of music, a good musical instinct, and the ability to be both creative and technical.

Favorite Gear: I’m privileged with the gear we have at the studio, but really I think you can make most things sound cool & exciting with any gear.

You can contact Anne through her website.

Missed this Week’s Top Stories? Read our Quick Round-up!

It’s easy to miss the SoundGirls news and blogs, so we have put together a round-up of the blogs, articles, and news from the past week. You can keep up to date and read more at SoundGirls.org

May Feature Profile

Daniela Seggewiss – Time Flies When You Are Doing What You Love

The Blogs

How to Subcontract work

Me and My Guitar: Part One

Time’s Up! Time to Move Forward


SoundGirls News

Shadowing Opportunity w/ FOH Engineer Kevin Madigan

Shadowing Opportunity w/ ME Aaron Foye

Letter for Trades and Manufacturers

https://soundgirls.org/scholarships-18/

Accepting Applications for Ladybug Music Festival

Amsterdam SoundGirls Tour & Social

https://soundgirls.org/event/vancouver-soundgirls-chapter-one-year-anniversary/?instance_id=1285

SoundGirls London Chapter Social – June 17

https://soundgirls.org/event/glasgow-soundgirls-meet-greet/?instance_id=1272

Shadowing Opportunities

Telefunken Tour & Workshop

SoundGirls Expo 2018 at Full Sail University

Round Up From the Internet

Rock n Roll In Brazil: A SoundGirl Explains

20 Questions With Catherine Vericolli

Catherine Vericolli is the owner, engineer, and manager of Fivethirteen Recording Studios in Tempe, Arizona. A lover of all things analog, she has personally headed every console installation and all outboard wiring at Fivethirteen since the studio got its first console and 2″ machine in 2006. She also co-edits Pink Noise Magazine and teaches classes at The Conservatory of Recording Arts and Sciences.

Tape Op Podcast Episode 16: Susan Rogers

As an engineer, Susan got her start working with Prince from 1983 to 1988, on albums including Purple Rain, Around the World in a Day, Parade, Sign o’ the Times, and The Black Album. Her other studio sessions have included artists like Barenaked Ladies, David Byrne, Toad the Wet Sprocket, Rusted Root, Tricky, Geggy Tah, and Michael Penn. She is currently the director of the Berklee College of Music’s Perception and Cognition Laboratory and an associate professor at Berklee.


SoundGirls Resources

Directory of Women in Professional Audio and Production

This directory provides a listing of women in disciplines industry-wide for networking and hiring. It’s free – add your name, upload your resume, and share with your colleagues across the industry.


Women-Owned Businesses

Member Benefits

Events

Sexual Harassment

https://soundgirls.org/about-us/soundgirls-chapters/

Jobs and Internships

Women in Professional Audio

Me and My Guitar: Part One

When I was 11, everyone got a guitar at Christmas except for me. My dad had started taking lessons at the music store in town, and wanted the rest of our family to join in on the fun. He had asked me if I wanted one, but I said no. If everyone else was going to have one, I didn’t want one.

I felt a tinge of envy as my grandpa, brother, cousin, and aunt all unwrapped their new instruments on Christmas morning. My grandpa got an electric bass. My brother an electric guitar. My cousin got a classical guitar, and her mom got a ukulele. The living room was bubbly with strings being plucked and tuned and looked over. I avoided the instruments, stubbornly holding onto my original plan, which was that I absolutely definitely without a doubt did not want one.

With school off for the holidays, my brother and I did our usual routine: half the week with mom and then half the week with dad. My dad, who is an artist, was at that time paying our bills by creating covers for syrupy romance novels. (He hilariously used himself and his girlfriend as the models for a number of them; since he was a long-distance contractor, his clients were none the wiser.) Computers have always been slow at graphics, and in 2000 they were remarkably slower. So in the downtime he had while a Photoshop file rendered or a new proof printed, he would practice his new guitar skills. He played Billy Bragg’s record Back To Basics loudly and practiced his chord shapes and pentatonic scales along with it. Billy Bragg was a rockstar with an activist bent, singing in shouts over pulsating solo electric guitar. I could hear the rock n’ roll energy from my room across the hall while I was writing, reading, or hanging out with my cat. It was magnetic, and it made me feel something I’d never felt before. And my dad sounded so excited and motivated, playing along with one of his favorite musicians.

My dad would regularly ask my opinion on his work, often showing me variations on a piece he was working on. He’d turn layers off and on in Photoshop, showing me the options and discussing the ideas, colors, and shapes with me: what the client was looking for and what he was interested in. So one day I let myself into his studio under the pretense of giving feedback. He was playing along with his favorite song off Back To Basics, “A New England,” a song chock-full of folk-rock hooks like “I don’t want to change the world / I’m not looking for a new England / I’m just looking for another girl.” He was singing along in Bragg’s Cockney-ish accent. He stopped playing long enough to say, “Go get the acoustic guitar from the living room.” The moment I had been waiting for had come. Proud as I’d ever been (I’m a Leo), I couldn’t openly admit I wanted to learn what he knew. I was grateful he hadn’t mentioned my change of heart since Christmas. I went and got the guitar.

He left the CD playing and showed me the pentatonic scale he had recently learned. Then he turned the CD player off and showed me the three chords he was working on transitioning between: A, D, and E. He told me what his teacher had told him: there are hundreds of songs you can play with just these three chords; the trick is just being able to press down and strum in the rhythm of the song. And they were interchangeable: you could mix and match them in any order you wanted, and they would still sound great. I will master these if it’s the last thing I do, I told myself.

For the next week, I picked up my dad’s guitar for a few minutes every day before we got in the car to go to school. It hurt to press my fingers down on the sharp strings, but making pretty sounds mattered far more to me than the pain of calluses forming. I wasn’t sure why, but I was drawn to the instrument more with every day that passed.

Finally, one Wednesday, my dad asked if I wanted to go to his guitar lesson in his place. I was so excited. I played it cool and said yes.

One of the Polaroids from the wall in Rob’s shop.

The teacher was a man named Rob. He had a very dry sense of humor, which was lost on me at the time, and had hundreds of Polaroids around his store of all the different students and customers who had passed through. I felt like they knew something I did not, something I desperately wanted to know. I even felt a little bit entitled to it. All of this compounded into courage when Rob asked me to show him what I already knew. Truthfully, I was terrified to be put on the spot and have my skills judged, but I wanted to know what they all knew, and I put my fear to rest for half an hour. I played my A, D, and E chords and showed him the pentatonic scale runs that my dad had taught me. Rob showed me how I could lift one finger in my A and E chords to create a seventh chord. He showed me the same thing with my D chord, but it was different: with this chord I had to readjust my shape so it became upside-down-looking. Rob told me that “Wooly Bully,” “Wild Thing,” and “Should I Stay or Should I Go” all used these exact chords and their seventh variations. We played through those songs for the remainder of my lesson.

After that day I absolutely definitely without a doubt wanted a guitar.

Round Robin – Rob and I playing some of my songs and some of his songs together at a local round robin. Around 2004.

At my next lesson, Rob showed me how to play “All You Wanted” by Michelle Branch, which was a huge hit on the radio at the time. I didn’t care much for the song, but I was too nervous to tell Rob that because I wanted to seem like I knew a lot about music. In spite of my lukewarm feelings about the song, once I had learned the chords, I became obsessed with memorizing it and playing it well. I couldn’t quite sing and play at the same time yet, but the idea that I could eventually recreate the song in its entirety was so amazing to me that I forced myself to practice. All of the friends and family who came over for the next week were subjected to listening to me try to do just that.

Once I got that down, I started writing my own songs. I showed them to Rob, who in turn showed me “Psycho Killer” by Talking Heads to encourage me to play with new subjects and characters. He taught me about I-IV-V (or 1-4-5) progressions, and traditional song forms based on their variations. He taught me the circle of fifths using Buddy Holly’s “Everyday.” He taught me diminished and augmented chords, using The Ink Spots’ “Java Jive” and The Beatles’ “Oh! Darling” as unforgettable examples. I loved it all. I loved every moment of it.

I didn’t realize that I was at the beginning of a life-long relationship with the guitar and with Rob.
