More Than Line-by-Line

 

Going Beyond the Basics of Mixing

When I started mixing shows in high school—and I use the term “mixing” loosely—I had no idea what I was doing. Which is normal for anyone’s first foray into a new subject, but the problem was that no one else knew either. My training was our TD basically saying, “Here’s the board, plug this cable in here, and that’s the mute button,” before he had to rush off to put out another fire somewhere else.

Back then, there were no YouTube videos showing how other people mixed. No articles describing what a mixer’s job entailed. (Even if there were, I wouldn’t have known what terms to put in a Google search to find them!) So I muddled through show by show, and they sounded good enough that I kept going. From high school to a theme park, college shows to local community theatres, and finally eight years on tour, I’ve picked up a new tip or trick or philosophy every step along the way. After over a decade of trial and error, I’m hoping this post can be a jump start for someone else staring down the faders of a console wondering “okay, now what?”

Every sound design and system shares a general set of goals for a musical: all the lines and music should be clear, and the level should be high enough to be audible without being painfully loud. Meeting these parameters makes a basic mix.

For Broadway-style musicals, we do what’s called “line-by-line” mixing. This means when someone is talking, her fader comes up, and when she’s done, her fader goes back down, effectively muting her. For example: if actresses A and B are talking, A’s fader is up for her line; just before B begins her line, B’s fader comes up and A’s fader goes down (once the first line is finished). So the mixer is constantly working throughout the show, bringing faders up and taking them out as actors start and stop talking. Each of these moves is called a “pickup,” and there will be several hundred of them in most shows. Having only the mics open that are necessary for the immediate dialogue helps eliminate excess noise from the system and prevents the audio waves from multiple mics combining (creating phase cancellation or comb filtering, which impairs clarity).
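To put a rough number on why open mics hurt clarity, here is a minimal sketch (Python with NumPy; the 1 ms arrival-time offset and the pure-tone test are illustrative assumptions of mine, not anything from a real rig):

```python
import numpy as np

# Minimal sketch: one voice arriving at two open mics, the second copy
# delayed by ~1 ms (roughly 34 cm of extra distance). Summing the two
# signals creates comb-filter peaks and nulls.
fs = 48_000                        # sample rate in Hz
delay_s = 0.001                    # 1 ms arrival-time difference
t = np.arange(int(fs * 0.1)) / fs  # 100 ms of test signal

for freq in (250, 500, 750, 1000, 1500):
    direct = np.sin(2 * np.pi * freq * t)
    delayed = np.sin(2 * np.pi * freq * (t - delay_s))
    combined = direct + delayed
    # Level relative to a single mic: 2.00 = full sum, ~0.00 = a null
    ratio = np.sqrt(np.mean(combined**2)) / np.sqrt(np.mean(direct**2))
    print(f"{freq:>5} Hz: x{ratio:.2f}")

# With a 1 ms offset, nulls land at odd multiples of 500 Hz, which is why
# two open mics on the same voice can sound hollow and unclear.
```

Frequencies an octave apart fare completely differently (1,000 Hz sums to double while 500 Hz cancels almost entirely), which is exactly the uneven, hollow sound that pickup discipline avoids.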

You may have noticed that I’ve only talked about using faders so far, and not mute buttons. Faders give you more control over the mix: “mixing” with mute buttons assumes that the actors will deliver every line in the show at the same level, which is not realistic. From belting to whispering and everything in between, actors have a dynamic vocal range, and faders are far better suited than mute buttons to making detailed adjustments in the moment. However, when mixing with faders, you have to make sure your movements are clean and decisive. Constantly doing a slow slide into pickups sounds sloppy and may lose the first part of a line, so faders should be brought up and down quickly. (Unless a slow push is an effect or there is a specific reason for it. Yes, there are always exceptions.)

So, throughout the show, the mixer is bringing faders up and down for lines, making small adjustments within lines to make sure that the sound of the show is consistent with the design. Yet, that’s only one part of a musical. The other is, obviously, the music. Here the same rules apply. Usually, the band or orchestra is assigned to VCAs or grouped so it’s controlled by one or two faders. When they’re not playing, the faders should be down, and when they are, the mixer is making adjustments with the faders to make sure they stay at the correct level.

The thing to remember at this point is that all these things are happening at the same time. You’re mixing line by line, balancing actor levels with the music, making sure everything stays in an audible, but not eardrum-ripping range. This is the point where you’ve achieved the basic mechanics and can produce an adequate mix. When put into action, it looks something like this:

A clip from a mix training video for the 2019 National Touring Company of Miss Saigon.

But we want more than just an adequate mix, and with a solid foundation in place, you can start to focus on the details and subtleties that will continue to improve those skills. Now, full disclosure: I was a complete nerd when I was young (I say that like I’m not now…) and I spent the better part of my childhood reading any book I could get my hands on. As an adult, that has translated into one of my greatest strengths as a mixer: I get stories. Understanding the narrative and emotions of a scene is what helps me make intelligent choices about how to manipulate the sound of a show to best convey the story.

Sometimes it’s leaving an actress’s mic up for an ad-lib that has become a routine, or conversely, taking a mic out quicker because that ad-lib pulls your attention from more important information. It could be fading in or out a mic so that an entrance or exit sounds more natural or giving a punchline just a bit of a push to make sure that the audience hears it clearly.

Throughout the entire show, you are using your judgment to shape the sound. Paying attention to what’s going on and the choices the actors are making will help you match the emotion of a scene. Ominous fury and unadulterated rage are both anger. A low chuckle and an earsplitting cackle are both laughs. However, each one sounds completely different. As the mixer, you can give the orchestra an extra push as they swell into an emotional moment, or support an actress enough so that her whisper is audible through the entire house but doesn’t lose its intimacy.

Currently, I’m touring with Mean Girls, and towards the end of the show, Ms. Norbury (the Tina Fey character for those familiar with the movie) gets to cut loose and belt out a solo. Usually, this gets some appreciative cheers from the audience because it’s Norbury’s first time singing and she gets to just GO for it. As the mixer, I help her along by giving her an extra nudge on the fader, but I also give some assistance beforehand. The main character, Cady, sings right before her in a softer, contemplative moment and I keep her mic back just a bit. You can still hear her clearly, but she’s on the quieter side, which gives Norbury an additional edge when she comes in, contrasting Cady’s lyrics with a powerful belt.

Another of my favorite mixing moments is from the Les Mis tour I was on a couple of years ago. During “Empty Chairs at Empty Tables,” Marius is surrounded by the ghosts of his friends who toast him with flickering candles while he mourns their seemingly pointless deaths. The song’s climax comes on the line “Oh my friends, my friends, don’t ask me—” where three things happen at once: the orchestra hits the crest of their crescendo, Marius bites out the sibilant “sk” of “don’t aSK me,” and the student revolutionaries blow out their candles, turning to leave him for good. It’s a stunning visual on its own, but with a little help from the mixer to push into both the orchestral and vocal build, it’s a powerful aural moment as well.

The final and most important part of any mix is listening. It’s ironic—but maybe unsurprising—that we constantly have to remind ourselves to do the most basic aspect of our job amidst the chaos of all the mechanics. A mix can be technically perfect and still lack heart. It can catch every detail and, in doing so, lose the original story in a sea of noise. It’s a fine line to walk, and everyone (and I mean everyone) has an opinion about sound. So, as you hit every pickup, balance everything together, and facilitate the emotions of a scene, make sure you listen to how everything comes together. Pull back the trumpet that decided to go loud and proud today and is sticking out of the mix. Give the actress who’s getting buried a little push to get her out over the orchestra. When the song reaches its last note and there’s nothing you need to do to help it along, step back and let it resolve.

Combining all these elements should give you a head start on a mix that not only achieves the basic goals of sound design but goes above and beyond to help tell the story. Trust your ears, listen to your designer, and have fun mixing!

There Really Is No Such Thing As A Free Lunch

Using the Scientific Method in the Assessment of System Optimization

A couple of years ago, I took a class for the first time from Jamie Anderson at Rational Acoustics, and he said something that has stuck with me ever since. It was something to the effect of: our job as system engineers is to make it sound the same everywhere, and it is the job of the mix engineer to make it sound “good” or “bad.”

The reality in the world of live sound is that there are many variables stacked against us. A scenic element in the way of speaker coverage, a client that does not want to see a speaker in the first place, a speaker that has done one too many gigs and decides that today is the day for a driver to die during load-in, or any of a myriad of other things can stand in the way of the ultimate goal: a verified, calibrated sound system.

The Challenges Of Reality

 

Before beginning any discussion of system optimization, we must draw a line and make all intentions clear: what is our role at this gig? Are you just performing the tasks of the systems engineer? Are you the systems engineer and the FOH mix engineer? Are you the tour manager as well, working directly with the artist’s manager? Why does this matter? Because when it comes down to making final evaluations on the system, executive decisions will need to be made, especially in moments of triage. Having clearly defined your role at the gig will help in making those decisions when the clock is ticking away.

So in this context, we are going to discuss the decisions of system optimization from the point of view of the systems engineer. We have decided that the most important task of our gig is to make sure that everyone in the audience is having the same show as the person mixing at front-of-house. I’ve always thought of this as a comparison to a painter and a blank canvas: it is the mix engineer’s job to paint the picture for the audience to hear, and it is our job as system engineers to make sure the painting sounds the same every day by providing the same blank canvas.

The scientific method teaches the concept of control with independent and dependent variables. We have an objective that we wish to achieve; we assess our variables in each scenario to come up with a hypothesis of what we believe will happen. Then we execute a procedure, controlling the variables we can, and analyze the results with the tools at hand to draw conclusions and determine whether we have achieved our objective. Recall that the independent variable is the factor you deliberately manipulate, the dependent variable is the outcome you observe in response, and controlled variables are the factors held constant, whether by choice or by circumstance. In the production world, these terms have a variety of implications. It is an unfortunate, commonly held belief that system optimization starts at the EQ stage, when really there are so many steps before that. If there is a column in front of a hang of speakers, no EQ in the world is going to make them sound like they are not shadowed behind a column.

Now everybody take a deep breath in and say, “EQ is not the solution to a mechanical problem.” And breathe out…

Let’s start with preproduction. It is time to assess our first round of variables. What are the limitations of the venue? Trim height? Rigging limitations? What are the limitations imposed by the client? Maybe another element of the show necessitates the PA being placed in one position over another; maybe the client doesn’t want to see speakers at all. In each scenario we must ask, with both our technical judgment and our career paths in mind: what can we change, and what can we not? Note that the answer will not always be the same in every circumstance. In one scenario, we may be able to convince the client to let us put the PA anywhere we want, making its position an independent variable we are free to manipulate. In another situation, for the sake of our gig, we must accept that the PA will not move, or that the low steel of the roof is a bleak 35 feet in the air, and thus we face a variable held constant whether we like it or not.

The many steps of system optimization that lie before EQ

 

After assessing these first sets of variables, we can move into the next phase and look at our system design. Again, say it with me: “EQ is not the solution to a mechanical problem.” We must assess our variables again in this next phase of the optimization process. Perhaps we have been given the technical rider of the venue and, due to budgetary restraints, we cannot change the PA: a constant we must work around. Perhaps we are carrying our own PA and thus have control over the design within the venue’s limitations: an independent variable emerges, but with caveats. Let’s look deeper into this particular scenario and ask ourselves: as engineers building our design, what do we have control over now?

The first step lies in what speaker we choose for the job. Given the ultimate design-control scenario, where we have the luxury of picking the loudspeakers we use, different directivity designs will lend themselves better to one scenario than another. A point source has just as much validity as a line array, depending on the situation. For a small audience of 150 people with a jazz band, a point-source speaker over a sub may be more appropriate than showing up with a 12-box line array that necessitates a rigging call to fly it from the ceiling. But even in this scenario, there are caveats in our delicate weighing of variables. Where are those 150 people going to be? Are we in a ballroom or a theater? Even the evaluation of which box to choose for a design is as varied as deciding what type of canvas we wish to use for the mix engineer’s painting.

So let’s create a scenario: say we are doing an arena show, the design has been agreed upon by the production team with a set number of boxes for daily deployment, and the rigging points are pretty much cut-and-paste, but we have varying limitations on trim height due to the high and low steel of each venue. What variables do we now have control over? We still have a decent amount of control over trim height, up to the (literal) limit of the motor, but we also have control over the vertical directivity of our line array (let’s make that design decision for the purpose of discussion). There is a hidden assumption here that is often under-represented when talking about system designs.

A friend and colleague of mine, Sully (Chris) Sullivan, once pointed out to me that the hidden design assumption we often make as system engineers, but don’t necessarily acknowledge, is that the loudspeaker manufacturer has actually achieved the horizontal coverage dictated by the technical specifications. This made me reconsider the things I take for granted in a given system. Say our design uses Manufacturer X’s 120-degree line source element. The technical specs establish a measurable point at 60 degrees off-axis (120-degree total coverage) where the polar response drops 6 dB. We can take our measurement microphone and check that the response is what we think it is, but if it isn’t, what really are our options? Perhaps we have a manufacturer defect or a blown driver somewhere, but unless we change the physical parameters of the loudspeaker, this is a variable we entrust to the manufacturer. So what do we have control over? He pointed out to me that our decision choices lie in the manipulation of the vertical.

Entire books and papers can and have been written about how we control the vertical coverage of loudspeaker arrays, but certain factors remain consistent throughout. Inter-element angles, or splay angles, let us control the summation of elements within an array. Site angle and trim height let us control the geometric relationship of the source to the audience, and thus affect the spread of SPL over distance. Azimuth gives us geometric control over where the entire array’s pattern is aimed in the horizontal plane; note that this is distinct from the horizontal pattern control of the frequency response radiating from the enclosure, which we have handed over to the manufacturer. Fortunately, the loudspeaker prediction software available from modern manufacturers gives the system engineer an unprecedented ability to assess these parameters before a single speaker goes up in the air.
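As a rough illustration of how trim height alone changes the level spread over the audience, before any EQ is involved, here is a minimal geometric sketch (Python; the inverse-square point-source approximation and all distances are illustrative assumptions, not a substitute for the manufacturer’s prediction software):

```python
import math

# Minimal sketch: level difference between the front and back rows for a
# source flown at different trim heights, using a simple inverse-square
# (point-source) approximation. Real arrays behave differently; this only
# shows why the geometry matters before EQ ever enters the picture.

def front_to_back_drop(trim_height_m: float, front_m: float, back_m: float) -> float:
    """Return how many dB quieter the back row is than the front row."""
    d_front = math.hypot(front_m, trim_height_m)   # distance to front row
    d_back = math.hypot(back_m, trim_height_m)     # distance to back row
    return 20 * math.log10(d_back / d_front)

for trim in (6.0, 10.0, 14.0):
    drop = front_to_back_drop(trim, front_m=5.0, back_m=40.0)
    print(f"trim {trim:4.1f} m: back row {drop:4.1f} dB quieter than front")
```

Raising the trim makes the front and back path lengths more similar, so the level spread over the audience evens out; no amount of system EQ can make that trade for you.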

At this point, we have made a lot of decisions about the design of our system and weighed the variables at every step to draw up our procedure for system deployment. It is now time to analyze our results and verify that what we thought was going to happen did or did not happen. Here we introduce our tools to verify our procedure in a two-step process: mechanical verification, then acoustical verification. First, we use tools such as protractors and laser inclinometers to collect data and assess whether we have achieved our mechanical design goal. For example, if our model says we need a site angle of 2 degrees to achieve a result, we verify with the laser inclinometer that we got there. Once we have confirmed that we met our design’s mechanical goals, we must analyze the acoustical results.

Laser inclinometers are just one example of a tool we can use to verify the mechanical actualization of a design.

Only at this stage do we finally introduce analysis software to examine the response of our system. After establishing our role at the gig, working through the criteria of pre-production, choosing design elements appropriate for the task, and verifying their deployment, only now can we move into the realm of analysis software to see if all those goals were met. We can use dual-channel measurement software to take transfer functions at different stages of the input and output of our system to verify that our design goals have been met, and more importantly, to see if they have not been met and why. This is where our ability to critically interpret the data comes into play. By evaluating impulse response data, dual-channel FFT (Fast Fourier Transform) measurements, and the coherence of our gathered data, we can assess how well our design has been realized in the acoustical and electronic realms.
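As a hedged sketch of what a dual-channel measurement involves, the Python snippet below fakes a measurement of an invented system (a 1 ms delay plus a gentle high-frequency roll-off, plus noise) and computes a transfer function and coherence with SciPy; none of the numbers refer to any real rig or product:

```python
import numpy as np
from scipy import signal

# Minimal sketch of a dual-channel (transfer function) measurement:
# compare what we sent (reference) with what the mic heard (measured).
fs = 48_000
rng = np.random.default_rng(0)
reference = rng.standard_normal(fs * 10)      # 10 s of noise as the stimulus

# Invented stand-in "system": 1 ms of delay, a gentle low-pass, some noise.
b, a = signal.butter(2, 8_000, fs=fs)         # high-frequency roll-off
measured = signal.lfilter(b, a, np.roll(reference, 48))
measured += 0.1 * rng.standard_normal(measured.size)

# H1 transfer function estimate H(f) = Pxy / Pxx, plus coherence.
f, Pxy = signal.csd(reference, measured, fs=fs, nperseg=8192)
_, Pxx = signal.welch(reference, fs=fs, nperseg=8192)
_, coh = signal.coherence(reference, measured, fs=fs, nperseg=8192)
mag_db = 20 * np.log10(np.abs(Pxy / Pxx))

for f_check in (100, 1_000, 8_000, 16_000):
    i = np.argmin(np.abs(f - f_check))
    print(f"{f[i]:>7.0f} Hz: {mag_db[i]:6.1f} dB, coherence {coh[i]:.2f}")
```

The coherence trace is the critical-interpretation part: where coherence falls, the data cannot be trusted, and no decision should be made from it.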

What’s interesting to me is that the discussion of system optimization often starts here. In fact, as we have seen, the process begins as early as the pre-production stage, when talking with different departments and the client, and even when asking ourselves what our role is at the gig. The final analysis of any design comes down to the tool we always carry with us: our ears. Our ears are the final arbiters after our evaluation of acoustical and mechanical variables, and they are consulted at every step of the design path, along with a trusty dose of “common sense.” In the end, our careful assessment of variables leads us to utilize the power of the scientific method to make educated decisions and work towards our end goal: the blank canvas, ready to be painted.

Big thanks to the following for letting me reference them in this article: Jamie Anderson at Rational Acoustics, Sully (Chris) Sullivan, and Alignarray (www.alignarray.com)

New Decade – New Year

The New Year and a new decade have begun! Thoughts of reinvention and feelings of excitement fill the air. This time of the year can often feel overwhelming. As for me, I am between apartments and jobs, and I just finished my bachelor’s degree and am headed into a master’s program. Life has been a roller coaster.

It is no secret that the audio and music industry can be challenging, but as a young woman, I have definitely been feeling the pressure to find work and be successful, something I am sure we all feel. There has also been a lot of talk amongst my friends and peers about depression, particularly seasonal depression, which seems to flourish in the cold, dark months. I have been struggling with it myself. I wanted to do something proactive to combat these negative thoughts and emotions and to welcome in the New Year in a positive way, so I looked to my community for support and ideas.

Now, one thing you may not know about me is that I am the founder of the Michigan Technological University SoundGirls chapter. It is something I am incredibly proud of and sad to have moved away from. It’s okay though, I left it in good hands.

A few weeks ago, I emailed some of the members of this organization and asked them to fill out a short set of interview questions. Many of these young women I consider friends, if not family. One response in particular brought me not only joy, but hope for a brighter future for all of us in this amazing, yet challenging, industry.

Izzy Waldie is a first-year Audio Production major and the newly elected secretary of the Michigan Tech SoundGirls chapter. She is not only incredibly creative but also very good at her STEM classes, something I have to admit I am not. Because she is still in her first year, I asked Izzy a few icebreaker questions.

Sarah: “So Izzy, what are you excited for or looking forward to in your time here at Michigan Tech in the visual performing arts department?”

Izzy: “I’m just really looking forward to doing more projects with people, and making stuff I’m really proud of.”

When asked about the university chapter specifically, she responded with:

Izzy: “Next semester I really want to do some creative projects with SoundGirls. We will be finishing up our movie project which will be really cool, but I want to do more projects. I was thinking of maybe just us recording a song. Nothing fancy, it could be just for fun, and we could do it with all the musicians in the organization. Now that I’m on the management board I really want to help head up some of these projects.”

When I was in my first year at Michigan Tech, I was one of two female students out of the two audio programs. Now, those numbers have been multiplied by at least five. The fact that there is an organization where students can go, create things together, learn and refine their skills, all while being supportive of each other, makes my heart melt. It reminds me that life isn’t always a challenge. Their excitement makes me excited.

Sarah: “Recently you said you don’t know what you are doing and I wanted to talk to you a little bit more about that. It is my opinion that you don’t need to know exactly what you are doing and it is more important to know what you don’t want to do. By exploring different areas and avenues, you are figuring out what you are doing or at least what you want to do. What are some things that you are exploring, interested in, or new things you might want to try out?”

Izzy: “I’ve definitely realized that what you already know isn’t as important as how willing you are to learn. I still don’t know what I’m doing, no one ever knows 100% what they’re doing, but I definitely have learned a lot this semester. I’ve seen this the most working at the Rozsa (the Rozsa Center for the Performing Arts). I had basically no experience at first, but now I’m working the sound and lighting boards pretty confidently. One thing I really want to get more into is recording. I’ve helped out with some other people’s projects and would like to work more creatively with it.”

Izzy made an observation that most students might overlook: what you already know isn’t as important as how willing you are to learn. Not only was she a student in the organization I was president of, but I was also a teaching assistant for one of her classes. This statement is a testament to who she is as a student and how she approaches learning situations. It is an excellent characteristic to have for the industry she is headed into.

I was feeling revitalized by the end of our conversation. I had received new hope, excitement, and appreciation from talking with Izzy. To finish out the conversation I asked her something a little more personal.

Sarah: “Tell me something good that happened in our department this semester, something you will remember for a while and that makes you smile?”

Izzy: “This semester was awesome. Never did I think I’d be so involved as a first-semester student. One of the best parts for me was working on the haunted mine (a project that the visual performing arts department collaborates on with the Quincy Mine owner every Halloween). We were down there for a really long time, but the idea was really cool and so were the people. I also really liked working in the different groups on the audio movie project. I made a lot of friends while working on this project, and our Dr. Seuss The Lorax audio movie ended up being pretty fun to make. I remember one time, at like midnight, a bunch of us were in Walker Film Studio working on one of the audio movies while passing around a 2-liter of Dr. Pepper.”

Izzy’s responses were wholesome and honest. To me, she has a perspective beyond her years. It was a nice reminder when faced with the daunting challenges of moving to a new area, finding work, and starting anew, and a reminder of why I chose this career field: the exciting new projects, learning new things, and working late into the night with people you hardly know but who will soon feel like family.

Though our conversation had ended, I was feeling like myself again, and that was because of the connections and relationships I had made through our little SoundGirls chapter. At the core of SoundGirls, you will find this kind of understanding from its members. We are here to listen to one another, remind each other of why we are here and doing what we love, and create an environment that welcomes all who are seeking opportunities and support. I wish you all a prosperous and happy New Year.

 

Gain Without the Pain

 

Gain Structure for Live Sound Part 1

Gain structure and gain staging are terms that get thrown around a lot, but they often get skimmed over as obvious without ever being fully explained. The way some people talk about their gain structure, and mock other people’s, you’d think it was some special secret skill known only to the most talented engineers. It’s actually pretty straightforward, but knowing how to do it well will save you a lot of headaches down the line. All it really amounts to is setting your channels’ gain levels high enough that you get plenty of signal to work with, without risking distortion. It often gets discussed in studio circles, because it’s incredibly important to the tone and quality of a recording, but in a live setting we have other things to consider on top of that.

So, what exactly is gain?

It seems like the most basic question in sound, but the term is often misunderstood. Gain is not simply the same as volume. It’s a term from electronics that refers to the increase in amplitude of a signal as it passes through an amplifier. In our case, it’s how much we change our input’s amplitude by turning the gain knob. In analogue desks, turning the knob engages more circuitry in the preamp to increase the gain (have you ever used an old desk where you needed just a bit more level, so you slowly and smoothly turned the gain knob, and it made barely any difference… nothing… nothing… then suddenly it was much louder? It was probably because it crossed the threshold to the next circuit being engaged).

Digital desks do something similar using digital signal processing. It is often called trim instead of gain, especially if no actual preamp is involved. For example, many desks won’t show you a gain knob if you plug something into a local input on the back, because their only preamps are in the stagebox; you will see a knob labelled trim instead (I do know these knobs are technically rotary encoders because they don’t have a defined end point, but they are commonly referred to as knobs. Please don’t email in). Trim can also refer to finer adjustments of the input’s signal level, but as a rule of thumb, it’s pretty much the same as gain. Gain is measured as the difference between the signal level when it arrives at the desk and when it leaves the preamp at the top of the channel strip, so it makes sense that it’s measured in decibels (dB), a measurement of ratios.
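As a quick worked example of that ratio (a minimal Python sketch with made-up round numbers):

```python
import math

# Gain in dB is 20 * log10 of the output-to-input voltage ratio.
# The voltages below are illustrative round numbers only.
v_in = 0.01    # 10 mV arriving from a microphone
v_out = 1.0    # 1 V leaving the preamp

gain_db = 20 * math.log10(v_out / v_in)
print(f"{gain_db:.0f} dB of gain")   # a 100x voltage increase = 40 dB
```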

The volume of the channel’s signal once it’s gone through the rest of the channel strip and any outboard is controlled by the fader. You can think of the gain knob as controlling input, and the fader as controlling output (let’s ignore desks with a gain on fader feature. They make it easier for the user to visualise the gain but the work is still being done at the top of the channel strip).

Now, how do you structure it?

For studio recording, the main concern is getting a good amount of signal over the noise floor of all the equipment being used in the signal chain. Unless you’re purposefully going for a lo-fi, old-school sound, you don’t want a lot of background hiss all over your tracks. A nice big signal-to-noise ratio, without distortion, is the goal. In live settings, we can view other instruments or stray noises in the room as part of that noise floor, and we also have to avoid feedback at the other end of the scale. There are two main approaches to setting gains:

Gain first: With the fader all the way down, you dial the gain in until it’s tickling the yellow or orange LEDs on your channel or PFL meter while the signal is at its loudest, but not quite going into the red or ‘peak’ LEDs (of course, if it’s hitting the red without any gain, you can stick a pad in; you might find a switch on the microphone, the instrument or DI box, or the desk. If the mic itself is being overwhelmed by the sound source, it’s best to use the mic’s internal pad if it has one, so it can handle the level better and deliver a distortion-free signal to the desk). You then bring the fader up until the channel is at the required level. This method gives you a nice, strong signal. It also gives that to anyone sharing the preamps with you, for example, monitors sharing the stagebox or a multitrack recording. However, because faders are marked in dB, which are logarithmic, it can cause some issues. If you look at a fader strip, you’ll see the numbers get closer together the further down they go. So if you have a channel where the fader is near the bottom and you want to change the volume by 1 dB, you’d have to move it about a millimetre. Anything other than a tiny movement could make the channel blaringly loud, or so quiet it gets lost in the mix.

Fader at 0: You set all your faders at 0 (or ‘unity’), then bring the gain up to the desired level. This gives you more control over those small volume changes, while still leaving you headroom at the top of the fader’s travel. It’s easier to see if a fader has been knocked, or to know where to return a fader to after boosting for a solo, for example. However, it can leave anyone sharing gains with weak or uneven signals. If you’re working with an act you are unfamiliar with, or one that is particularly dynamic, having the faders at zero might not leave you enough headroom for quieter sections, forcing you to increase the gain mid-show. This is far from ideal, especially if you are running monitors, because you’re changing everyone’s mix without being able to hear those changes in real-time, and increasing the gain increases the likelihood of feedback. In these cases, it might be beneficial to set all your faders at -5, for example, just in case. (The sketch below puts rough numbers on this trade-off.)
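Here is that trade-off in rough numbers, as a minimal sketch of a simplified channel where output level is input plus gain plus fader, and where the fixed post-preamp noise floor is an invented figure purely for illustration (no real desk is modelled here):

```python
# Minimal sketch: channel output (dB) = input + gain + fader. Assume,
# purely for illustration, that downstream electronics add noise at a
# fixed -90 dBu after the preamp; the fader then moves signal and noise
# together, so the signal-to-noise ratio is decided at the gain stage.

input_level = -50       # dBu arriving from the mic
noise_after_pre = -90   # dBu of downstream noise (made-up figure)

scenarios = [
    ("gain first", 55, -5),   # hot gain, fader pulled down
    ("fader at 0", 50,  0),   # fader at unity, gain set to taste
    ("weak gain",  30, 20),   # too little gain, fader pushed hard
]

for name, gain_db, fader_db in scenarios:
    output = input_level + gain_db + fader_db        # same 0 dBu each time
    snr = (input_level + gain_db) - noise_after_pre  # fixed before the fader
    print(f"{name:>10}: output {output:+d} dBu, SNR {snr} dB")
```

All three rows hit the same output level, but the weak-gain channel carries 25 dB less signal-to-noise than the gain-first one, which is the hiss you hear when gain is left on the table.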

In researching this blog, I found some people set their faders as a visual representation of their mix levels, then adjust their gains accordingly. It isn’t a technique I’ve seen in real life, but if you know the act well and it makes sense to your workflow, it could be worth trying. Once you’ve set your gates, compressors, EQ, and effects, and all the channels are added together, you’ll probably need to go back and adjust your gains or faders again, but these approaches will get you into the right ballpark very quickly.

All these methods have their pros and cons, and you may choose between them for different situations. I learned sound using the first method, but I now prefer the second, especially for monitors: it’s clear where all the faders should sit, even though the sends to auxes might be completely different and change from song to song. Despite what some people might say, there is no gospel of gain structure that must be followed. In part 2, I’ll discuss a few approaches for different situations and how to get the best signal-to-noise ratio in those circumstances. Gain structure isn’t some esoteric mystery, but it is important to get right. If you know the underlying concepts, you can make informed decisions to get the best out of each channel, which is the foundation of every great mix.

 

Missed this Week’s Top Stories? Read Our Quick Round-up!

It’s easy to miss the SoundGirls news and blogs, so we have put together a round-up of the blogs, articles, and news from the past week. You can keep up to date and read more at SoundGirls.org

December Feature Profile

Love for Chaos: Willa Snow Live Sound Engineer

 

SoundGirls News

She Rocks Awards Tickets

SoundGirls Yearbook 2019 – Now on Sale


We just got some new merch in: long sleeves, onesies, toddler sizes, gig bags, and canvas totes. Check it out here.


SoundGirls Events

Alberta SoundGirls Winter Social

SoundGirls FOH Tuning Workshop – Los Angeles

Colorado SoundGirls Monthly Social

SoundGirls Mentoring at AES@NAMM

SoundGirls NAMM Dinner

SoundGirls NAMM Sunday Breakfast

Business Basics for the Entertainment Industry


SoundGirls Opportunities


SoundGirls and SoundGym

Sound Particles Licenses Available

Meyer Sound Supports SoundGirls


SoundGirls Resources


Women-Owned Businesses

SoundGirls – Gaston-Bird Travel Fund

Events

Sexual Harassment

 

https://soundgirls.org/about-us/soundgirls-chapters/

 

Missed this Week’s Top Stories? Read Our Quick Round-up!

It’s easy to miss the SoundGirls news and blogs, so we have put together a round-up of the blogs, articles, and news from the past week. You can keep up to date and read more at SoundGirls.org

December Feature Profile

Love for Chaos: Willa Snow Live Sound Engineer

 

SoundGirls News

She Rocks Awards Tickets

SoundGirls Yearbook 2019 – Now on Sale


We just got some new merch in: long sleeves, onesies, toddler sizes, gig bags, and canvas totes. Check it out here.


SoundGirls Events

Los Angeles SoundGirls Holiday Party

Atlanta SoundGirls Chapter Launch

Houston SoundGirls Holiday Potluck

Alberta SoundGirls Winter Social

SoundGirls FOH Tuning Workshop – Los Angeles

SoundGirls Mentoring at AES@NAMM

SoundGirls NAMM Dinner

SoundGirls NAMM Sunday Breakfast

Internet Round-Up

Grammys Pledge More Diversity Under New Leadership

60 seconds with mastering engineer and PSNEurope columnist Katie Tavini


SoundGirls Opportunities


SoundGirls and SoundGym

 

Sound Particles Licenses Available

Meyer Sound Supports SoundGirls


SoundGirls Resources


Women-Owned Businesses

A More Inclusive Industry

Events

Sexual Harassment

 

 

https://soundgirls.org/about-us/soundgirls-chapters/

 

Missed this Week’s Top Stories? Read Our Quick Round-up!

It’s easy to miss the SoundGirls news and blogs, so we have put together a round-up of the blogs, articles, and news from the past week. You can keep up to date and read more at SoundGirls.org

December Feature Profile

Love for Chaos: Willa Snow Live Sound Engineer

 

SoundGirls News

She Rocks Awards Tickets


We just got some new merch in: long sleeves, onesies, toddler sizes, gig bags, and canvas totes. Check it out here.


SoundGirls Events

Colorado SoundGirls Social

Melbourne SoundGirls Holiday Drinks

New York SoundGirls Winter Mixer

Los Angeles SoundGirls Holiday Party

Houston SoundGirls Holiday Potluck

Alberta SoundGirls Winter Social

SoundGirls FOH Tuning Workshop – Los Angeles

SoundGirls Mentoring at AES@NAMM

SoundGirls NAMM Dinner

SoundGirls NAMM Sunday Breakfast


SoundGirls Opportunities


SoundGirls and SoundGym

 

Sound Particles Licenses Available

Meyer Sound Supports SoundGirls


SoundGirls Resources


Women-Owned Businesses

A More Inclusive Industry

Events

Sexual Harassment

 

 

https://soundgirls.org/about-us/soundgirls-chapters/

 

Missed this Week’s Top Stories? Read Our Quick Round-up!

It’s easy to miss the SoundGirls news and blogs, so we have put together a round-up of the blogs, articles, and news from the past week. You can keep up to date and read more at SoundGirls.org

November Feature Profile

Adriana Viana: Independent Brazilian Sound Engineer

Adriana Viana: Engenheira de Som Brasileira Independente

The Blogs

AI Composition Technology

One Year On

The Buskers Equipment Guide

SoundGirls News


We just got some new merch in: long sleeves, onesies, toddler sizes, gig bags, and canvas totes. Check it out here.

Internet Round-Up

Lauren Deakin Davies: ‘The mental health movement has a faux face to it’

On tour with Lizzo: The artist’s audio engineers take us through their set up

Illuminating The Dark Art: A Practical Step-By-Step Guide To Success With Wireless/RF

AES Reflects Increasingly Diverse Industry


shesaidso wants to increase the number of speakers from underrepresented communities at conferences, particularly womxn, trans, and non-binary people.

Please take a moment to submit yourself as a speaker and we will add you to our directory. We are updating this list in real-time.


SoundGirls Events

Vancouver SoundGirls Chapter Winter Social (WIM Networking Party)

Live Sound Workshop presented by Sus. Media, Soundgirls and Female Frequency

SoundGirls Electricity and Stage Patch

Bay Area SoundGirls Meeting

Colorado SoundGirls Social

Melbourne SoundGirls Holiday Drinks

Los Angeles SoundGirls Holiday Party

Alberta SoundGirls Winter Social

Los Angeles – Live Sound Workshop

SoundGirls FOH Tuning Workshop – Los Angeles

SoundGirls Mentoring at AES@NAMM

SoundGirls NAMM Dinner

SoundGirls NAMM Sunday Breakfast


SoundGirls Opportunities


SoundGirls and SoundGym

Sound Particles Licenses Available

Meyer Sound Supports SoundGirls


SoundGirls Resources


Spotify and SoundGirls Team Up – EQL Directory

SoundGirls – Gaston-Bird Travel Fund

Letter for Trades and Manufacturers


Women-Owned Businesses

A More Inclusive Industry

Events

Sexual Harassment

https://soundgirls.org/about-us/soundgirls-chapters/

Jobs and Internships

Women in Professional Audio

Member Benefits

AI Composition Technology

 

It feels like technology is developing at an incredible rate with every year that passes, and in the music world these changes continue to push the boundaries of what is possible for creators as we approach 2020. Several companies specialising in AI music creation have lately been targeting composers, headhunting and recruiting them to develop the technology behind artificial composition. So who are the AI companies, and what do they do?

AIVA

The company most prevalent on my radar this year has been AIVA, who have reached out to recruit composers, stating they are ‘building a platform intended to help composers face the challenges of the creative process’. Their system is based on preset algorithms, simplified and categorised by genre as a starting point.

I set up an account to experiment and found it to be quite different from what the demo on the landing page led me to believe. The demo video shows the user choosing a major or minor key, instrumentation, and song length to create a new track, and that is it – the piece is created! The playback has overtones of the keyboard demos of my youth in its overall vibe; however, I have to admit I am genuinely impressed with the functionality of the melody, harmony, and rhythms, as well as the piano-roll MIDI output that is practical for importing into a DAW – it’s really not bad at all.

The magic happens while watching the rest of the demo, where the composer modifies the melody so it makes slightly more technical sense and sounds more thought-out and playable, shifts the voicing and instrumentation of the harmony, and adds their own contributions to the AI idea. I have to admit that I have similar methods for composing parts when inspiration is thin on the ground, but my methods are nowhere near as fast or slick, and I can completely see the appeal of AIVA as a tool for overcoming writers’ block or quickly developing an initial idea.

On the side of the argument against, I was pretty stunned by how little input was required from the user to generate the entire piece, which has fundamentally been created by someone else. The biggest musical stumbling block for me was that the melodies sounded obviously computer-generated and a little atonal, not always moving away from the diatonic in the most pleasing ways; they transported me back to my lecturing days, marking the composition and music theory of those learning the fundamentals.

In generating a piece in each of the genres on offer, I generally liked most of the chord progressions and felt this was a high point that would probably be the most useful to me for working speedily, arranging and re-voicing any unconvincing elements with relative ease. While I’m still not 100% sure where I stand morally on the whole thing, my first impressions are that the service is extremely usable, does what it claims to do, and ultimately has been created by composers for those who need help to compose.

Track 1 – https://soundcloud.com/michelle_s-1/aiva-modern-cinematic-eb-minor-strings-brass-110-bpm

Track 2 – https://soundcloud.com/michelle_s-1/aiva-tango-d-major-small-tango-band-90-bpm

Amper

Amper Music is a different yet interesting AI composition site that assists in the creation of music; the company states that the technology has been taught music theory and how to recognise which music triggers which emotions. The nerd in me disagrees profusely with this concept (the major-key ukulele arrangement of ‘Somewhere Over the Rainbow’ by Israel Kamakawiwo’ole is just one example of why music is far more complex than assumptions about key and instrumentation). However, looking at Amper’s target market, it makes far more sense: they provide a service primarily aimed at non-musicians who would otherwise face the prospect of trawling through reams of library music to support content such as a corporate video. In a similar vein to AIVA, Amper creates fully formed ideas to a brief of set parameters such as length and tempo, with the addition of incorporating a video into the music creation stage, making this a really practical tool for those looking for supporting music. I loaded a piece from the given options and found it very usable and accessible to non-musicians. While the price tag to own and use the pieces seems steep, it’s also reassuring that the composers behind it should have been paid a fair fee.

IBM

Similarly, IBM has created a compositional AI named ‘Watson Beat’, which its creator, Janani Mukundan, says has been taught how to compose. The website states:

“To teach the system, we broke the music down into its core elements, such as pitch, rhythm, chord progression and instrumentation. We fed a huge number of data points into the neural network and linked them with information on both emotions and musical genres. As a simple example, a ‘spooky’ piece of music will often use an octatonic scale. The idea was to give the system a set of structural reference points so that we would be able to define the kind of music we wanted to hear in natural-language terms. To use Watson Beat, you simply provide up to ten seconds of MIDI music—maybe by plugging in a keyboard and playing a basic melody or set of chords—and tell the system what kind of mood you want the output to sound like. The neural network understands music theory and how emotions are connected to different musical elements, and then it takes your basic ideas and creates something completely new.”

While this poses the same pros and cons for me as AIVA and Amper, it’s clearly advertised as a tool to enhance the skills of composers rather than replace them, which once again is something I appreciated, and I am curious to see where IBM takes this technology with their consumers in the coming years.

Humtap

The last piece of software I tried was an app on my phone called ‘Humtap’, which takes a slightly different approach to AI music composition. In a lot of ways, this was the least musical of all the software, yet conversely, it was the only one I tried that required something of a live performance: the app works by having you sing a melody into the phone and choose a genre. I hummed a simple two-bar melody and played around with the options for which instrument played it back and where the strong beats should fall in the rhythm. The app then creates a harmonic progression around the melody, plus a separate B section, and this can all loop indefinitely. It’s really easy to experiment, undo, redo, and intuitively create short tracks of electronic, diatonic-sounding music. By its nature, this app seems aimed at young people, and I felt that was pretty positive: if Humtap works as a gateway app that gets youngsters interested in creating music with technology at home, then that’s a win from me.

There’s always a discussion to be had about the role of AI in music composition, and I suspect everyone stands somewhere slightly different. Some fear the machines will take over and replace humans; others argue that this kind of technology will force everybody to work faster; and some fear it will open up the market to less able composers at the mid and lower ends of the scale. On the other side, we have to accept that we all crave new and better sounds and sample libraries to work with, and that the development of music technology has been responsible for much of the good we can all agree has happened over the last five decades. My lasting impression, from researching and experimenting with some of these AI tools, is that they are useful assets to composers, but they are simply not capable of the same things as a human composer. To me, emotion cannot be conveyed in the same way, because it needs to be felt by the creator; ultimately, music composition is far more complex and meaningful than algorithms and convention.
