Empowering the Next Generation of Women in Audio

Join Us

The Way We Hear


Immersive sound, 3D sound, Surround sound… Let’s talk Psychoacoustics

Lately, these concepts have been gaining popularity in the sound field, and many manufacturers have stepped up their innovations so audiences can experience ever more realistic immersive sound. But what is the theory behind it?

Immersive sound, 3D sound, and surround sound all refer to the same thing: manipulating sound to recreate real-life listening on speakers and headphones, bringing it closer to a 360° experience. These manipulations rely purely on how our brain can be tricked into hearing things. It is all related to the way we hear.

The way our brain processes the information we receive through our ears defines the way we hear. In addition to the physical shape of our body and the mechanical characteristics of sound waves, it is the brain, combining all of this information, that creates the perception of sound. The field that studies this is called psychoacoustics.

Starting with the human body, several studies have very accurately defined the physical structure of our ear and the role each part plays in the process of hearing:

The outer ear, where sound waves hit the eardrum and make it vibrate

The middle ear, where the small bones (malleus, incus, and stapes) transfer these vibrations from the eardrum to the inner ear

The inner ear, where the vibrations reach the cochlea, inside which millions of hair cells vibrate and produce the signals sent to the brain

Once the brain receives these signals, it processes them according to their properties:

  1. Pitch characteristics
  2. Level-related properties – interaural level difference (ILD)
  3. Time-related properties – interaural time difference (ITD)
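As a rough illustration of interaural time difference, the classic Woodworth spherical-head approximation estimates the extra time a wavefront needs to reach the far ear. This is a minimal sketch, not from the article; the function name, default head radius, and speed of sound are my assumptions:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a source at
    the given azimuth, using the Woodworth spherical-head model:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source straight ahead produces no time difference;
# one at 90 degrees yields roughly 0.66 ms for an average head.
print(itd_woodworth(0))  # 0.0
print(round(itd_woodworth(90) * 1000, 2))  # ~0.66 ms
```

That sub-millisecond difference is all the brain needs to place a source to the left or right.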

This is where psychoacoustics comes in, explaining through different phenomena how we perceive sound depending on its properties and on the way our brain is wired. There are many concepts that describe how our brain behaves when listening to sound:

According to pitch characteristics:

Physically, each region of the cochlea acts as an amplifier: mechano-electrical transduction gives the hair cells an electromotile property, which provides the selectivity and sensitivity that let different areas of the cochlea respond to different frequency ranges.

One common condition related to this is tinnitus, which affects at least 20% of the population: despite the absence of any external sound, the person hears phantom noises, usually due to age-related hearing loss, an ear injury, or a problem with the circulatory system.

Missing fundamental (ghost fundamental):

This phenomenon happens when a signal contains all the harmonics of a tone except the fundamental: the brain identifies the harmonic pattern and tricks us into hearing the fundamental frequency even though it is not present.
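The missing-fundamental illusion is easy to reproduce: synthesize a tone whose spectrum contains only the upper harmonics of some fundamental. A minimal sketch; the function and parameter names are illustrative:

```python
import numpy as np

def harmonics_without_fundamental(f0=200.0, n_harmonics=5, sr=44100, dur=1.0):
    """Synthesize a tone containing harmonics 2*f0 .. (n+1)*f0 but not f0
    itself. Listeners typically still report hearing a pitch at f0."""
    t = np.arange(int(sr * dur)) / sr
    signal = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, n_harmonics + 2))
    return signal / n_harmonics  # keep the peak level roughly in [-1, 1]

tone = harmonics_without_fundamental()
```

Inspecting the spectrum of `tone` confirms there is no energy at 200 Hz, yet most listeners perceive a 200 Hz pitch.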

Robinson-Dadson curves (equal-loudness contours):

The ear's response depends on level: the higher the sound pressure level (SPL), the flatter the response of our hearing, i.e., we tend to hear the frequency range more evenly at higher SPL. At lower SPL, our sensitivity to low (and very high) frequencies drops, so they are less present in what we hear.


Auditory masking:

For each tonal frequency there is an associated masking threshold over a critical bandwidth: any signal reproduced inside this bandwidth and below that threshold will not be heard. The critical bandwidth stays close to 100 Hz at low center frequencies and grows roughly logarithmically with frequency above a few hundred hertz, which means our hearing resolves the lower frequencies more finely. This concept is exploited in audio compression formats such as MP3 to remove components that are masked, especially in the higher frequency range.
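As a hedged illustration, Zwicker's classic approximation of the critical bandwidth shows how it stays near 100 Hz at low frequencies and widens as the center frequency rises. The formula is a standard psychoacoustics approximation; the function name is mine:

```python
def critical_bandwidth_hz(f_hz):
    """Zwicker's approximation of the auditory critical bandwidth (Hz)
    around a center frequency: BW = 25 + 75 * (1 + 1.4 (f/1000)^2)^0.69."""
    return 25 + 75 * (1 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

# Bandwidth is ~100 Hz at low frequencies and widens with frequency.
for f in (100, 500, 1000, 5000):
    print(f, round(critical_bandwidth_hz(f)))
```

A perceptual codec can discard any component that falls inside a louder neighbor's critical band and below its masking threshold.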

Imperceptible frequencies:

Findings show that even though frequencies above 26 kHz cannot be heard, they can be detected as brain activity in MRI images, producing different responses in individuals such as pleasure, tranquility, and a heightened appreciation of dynamics.

According to interaural time and level differences:

Source localization:

The way our brain determines the position of a sound relies mainly on high frequencies. Because the wavelength at high frequencies is comparable to the dimensions of our body, the level differences the head creates between the two ears let the brain work out where the source is relative to us. Lower frequencies also help locate a source, through the phase (time) differences between the signals perceived by each ear.

Haas effect:

When two sound sources are at the same distance and SPL from the listener (stereo), they produce a phantom image in the middle between them. If one of the sources is attenuated or delayed (around 5 ms or -18 dB), the phantom image moves from the center toward the louder, earlier source. This is why time and level differences between audio signals are key to most professional audio applications.
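The Haas effect can be sketched by building a stereo pair from a mono signal, delaying and attenuating one channel; listening to the result shows the phantom image sliding toward the unprocessed side. A minimal sketch with assumed parameter names:

```python
import numpy as np

def delay_pan(mono, delay_ms=0.0, level_db=0.0, sr=48000):
    """Create a stereo pair from a mono signal: the right channel is
    delayed and attenuated relative to the left, which shifts the
    phantom image toward the left. Roughly 1 ms or a few dB is audible;
    around 5 ms / -18 dB the image sits almost entirely on one side."""
    delay_samples = int(round(delay_ms * sr / 1000.0))
    gain = 10 ** (level_db / 20.0)  # level_db is negative for attenuation
    left = np.concatenate([mono, np.zeros(delay_samples)])
    right = np.concatenate([np.zeros(delay_samples), mono]) * gain
    return np.stack([left, right])

stereo = delay_pan(np.random.randn(48000), delay_ms=5.0, level_db=-18.0)
```

Sweeping `delay_ms` from 0 to 5 while listening on headphones demonstrates the image moving off center.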


The Head-Related Transfer Function (HRTF) and Headphone Transfer Function (HPTF) are mathematical descriptions of how our head, torso, and ear shapes affect the way we hear: how the distance between our ears and the dimensions, curves, bone reflections, and resonances of our body shape each frequency of a sound wave, defining sound perception for each specific individual. Anthropometric measurements of these features are used to build personalized HRTFs and HPTFs.
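In practice, rendering a source through an HRTF amounts to convolving the mono signal with a pair of head-related impulse responses (HRIRs), one per ear. This sketch uses placeholder arrays standing in for measured responses; real HRIRs would come from a measured or personalized dataset:

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source at the position encoded by a pair of
    head-related impulse responses (HRIRs) by direct convolution."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

# Placeholder HRIRs: a pure delay-and-attenuate pair, standing in for
# real measured responses (which also encode spectral shaping).
hrir_l = np.array([1.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.5])
out = binauralize(np.ones(100), hrir_l, hrir_r)
```

Swapping in HRIRs measured for a specific listener is what makes the rendering "personalized."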

As mentioned before, all these concepts are applied in professional sound based on psychoacoustic theory, as the following references and examples show:


Oohashi, T. (1991). High-frequency sound above the audible range affects brain electric activity and sound perception. AES 91st Convention, New York.

Sunder, K., Tan, E., Gan, W. (2014). Effect of headphone equalization on auditory distance perception. AES 137th Convention, Los Angeles.

Novatech & Adelaide Symphony Orchestra Present Harry Potter and the Prisoner of Azkaban in L-ISA. https://www.youtube.com/watch?v=3VMfMA1i-hY

What is AMBEO Immersive Audio by Sennheiser? https://www.youtube.com/watch?v=uIpVM4-3tV4

Binaural Audio Recording. https://www.youtube.com/watch?v=vGt9DjCnnt0

ASMR 3D Tingles | Zoom H3 VR Mic Test (No Talking). https://www.youtube.com/watch?v=Hrf87AdR3Eg

Woo Lee, G., Kook Kim, H. (2018). Personalized HRTF modeling based on deep neural network using anthropometric measurements and images of the ear.


Doing Sound for Acrobatics Shows

The first time I ran a soundboard from FOH for a show with acrobatics, my main concern was not to get distracted by the act and by the anxiety that watching acrobatics and dangerous acts can cause. This feeling never goes away, but you learn to control it and to focus your attention on your cues and your mixing. That matters especially when your track is a fundamental part of the show, as important as the music and sound effects, and when troubleshooting needs to be performed as effectively as possible in case of surprises or technical difficulties, because it can affect the act and the performers' safety. I might even claim that your mixing comes second; safety is always first.

The way to achieve this level of concentration starts with learning four main things: your gear, the act/show, the cue sheet, and the music. As with any job, knowing the tools and gear you use is fundamental; even getting used to their physical position and training your muscle memory can be important for doing your job efficiently during an acrobatics show that requires rapid response and accuracy. Many of the sound cues will be tied to visual references, verbal cues, or musical cues, so learning when an artist moves a leg or bows their head is as important as learning the key change in the music that triggers the next scene on your console.

As in other types of shows, acrobatics shows have a big crew of technicians backstage running different tracks to make the show happen. During the show (and rehearsals) we are all on intercom, following a script read by a show caller. These scripts let each technician know the moment to run their specific cue, with calls like "winch coming in," "cue 27 go," "door is clear," "performers to position," "house to 20% go," "standby for…," etc. If you are running FOH, 99% of the time you won't be listening to the show caller because there is a show to be mixed with both of your ears, but you may have cue lights triggered by them, or you might have to listen to the show caller channel momentarily to trigger your cue. Other sound tracks, such as monitors or backstage, will probably hear the show caller throughout the show, adding it to the in-ear mix or carrying a belt pack just for comms.

Following artists' movements to run cues, sound effects, or musical accents can happen during the show too, like pushing the master fader for particularly impressive acrobatic moments or triggering sound effects for clown acts. This means that in addition to the audio console and processor you run, you might always have another piece of gear with sound clips for this purpose, like QLab, LCS cue consoles, a 360 Systems Instant Replay audio player, etc. Learning the acts and their different versions will help you follow the artists' actions; whether or not they decide to repeat an action, your cues may vary.

It is also very important to know what to do in case of an emergency; you'll be trained to follow emergency protocols depending on the situation (show stop, fire alarm, etc.), like triggering special announcements, playing holding music, or even assisting artists on stage.

Cue sheets and track sheets are the best way to bring together everything you learn about the music, the act, and the cues. On them, you can specify preset instructions, the type of reference to take cues from, what each cue does, when to take it, what the next cue is, how fast you need to act, act or show versions, etc.

Doing sound for acrobatic shows will always keep your attention at a maximum; there is no room for missed cues or big mistakes, and problem-solving will be your most valuable skill.


Andrea Arenas – Live Sound & Studio Engineer

Andrea Arenas is a Live and Studio Engineer who has been working in the industry for over 17 years. Andrea is currently working as a sound technician for La Perle by Dragone in Dubai. Andrea discovered audio when she was in her teens and overheard some of her friends from orchestra discussing audio engineering. Andrea wanted to pursue music, as she had been learning percussion since she was ten years old. She was deterred by her family, who said that music was not an option, so audio engineering opened another career path for her. At the time in Venezuela, there were no official institutions offering audio as a career path, so Andrea enrolled in electronic engineering at Simon Bolivar University in Venezuela, with the understanding that it was somehow related to audio and music. Andrea is currently enrolled at Iberoamerican University, Puebla, working on a Master's Degree in Cultural Management.


Career Start

How did you get your start?

I approached a recording studio in my university, part of the communications department, open-minded and willing to find a person who could take me in to teach me all about it. The person in charge of it, fortunately, took me in and taught me most of the things I know about sound today.

How did your early internships or jobs help build a foundation for where you are now?

That first job in the university studio was the door to starting my career in audio; it let me understand what the field was about and whether it was something I would enjoy. It was one of the most important decisions I've made in my audio career.

What did you learn interning or on your early gigs?

I learned about types of gear, signal flow, working processes, and critical listening. I learned which parts of a career in sound I liked and which I didn't.

Did you have a mentor or someone that really helped you?

Yes, Francisco ‘Coco’ Diaz was the person who took me in at the university studio and mentored me for almost 3 years. Even after all these years, I still go to him when I need some perspective or advice. You can follow his Instagram account in Spanish for musical production tips @serproductordemusica.


Career Now

What is a typical day like?

I wake up around 8 or 9 am and take care of home and personal activities like cleaning, cooking, yoga, etc. Then I check emails and work on out-of-work projects like my personal music, podcasts, mixing, university classes, volunteer work, etc. My work hours for the show usually start after 2 pm. When I arrive at the theater, I check the schedule for the day. We usually have some training, rehearsals, or validations with artists. Soundcheck happens every day a couple of hours before the show starts; depending on my track for the day (I rotate between four tracks: FOH, monitors, RF, and musical director), I'll do presets for microphones, consoles, computers, etc. Then I run two shows and go home at midnight.

How do you stay organized and focused?

Discipline is part of the daily routine in every aspect of my life, I think mainly because of my musical training, I try to plan short-term goals and keep track of schedules I plan in my mind. I say “in my mind” because following a routine is not my way of doing it. Depending on the day’s mood I organize my activities trying to follow those short-term goals, let’s say I try to keep a weekly schedule rather than a daily tight schedule.

What do you enjoy the most about your job?

Feeling that I’m part of a show that, for at least two hours, takes people’s imagination to new places, to enjoy and be happy for a moment. It makes me feel rewarded.

What do you like least?

Having shows on days when you want to see your favorite artist's show.

If you tour, what do you like best?

Before the pandemic, I was touring with Cirque and my favorite part was always during the first soundcheck at every new city. I usually felt very tired at that moment because of the transfer work, but as soon as the first notes sounded, I could remember why I was doing it, kept going, and enjoyed the moment.

What do you like least?

Working many days in a row; one time I worked 22 days straight. Live sound can be physically very demanding sometimes.

What is your favorite day off activity?

I still work on my personal projects during the days I don’t have shows. I consider everyday activities as a choice and I disagree with thinking that on days off I’m “free”. Of course, I also enjoy doing nature or art activities, but I consider them as part of my schedule to achieve the mental state I need to be efficient, enjoy my creative process and enjoy life.

What are your long-term goals?

Keep learning and be open to new opportunities. The pandemic changed my perspective about two things: making plans and depending on a single paycheck. So I’m willing to expand my horizons as much as possible, always open to new experiences related to sound, music, art, culture, and a sense of community.

What if any obstacles or barriers have you faced?

It has probably been leaving my country and working to be recognized as a professional again despite practically having to start from scratch. It's common to find people who don't trust your skills and even doubt your CV when you come from a different latitude and speak a different language. Fortunately, not everyone thinks that way, and some gave me the opportunity to prove myself and let my work speak for itself.

How have you dealt with them?

I always try to mention that despite anything I've dealt with (consciously or not), I stay true to myself and my ideas and keep working as hard and as passionately as possible.

Advice you have for other women and young women who wish to enter the field?

Follow your instincts and speak up, even when you feel intimidated by others, and don't let these feelings rule the way you behave or think. There will always be people more and less experienced than you anywhere; just be aware that your opinion is also important and deserves as much consideration as anyone else's.

Must have skills?

Problem-solving, active listening, and patience

Favorite gear?

I always say that because I haven't tried them all, I can't choose a favorite. The idea is to feel comfortable with the gear you use, and learning as much as you can about it and practicing is the only way to get there. Sometimes I wish I could have the trendy gear, or what a super famous artist or studio owns, but that isn't always possible. So I embrace reality and get the best out of the gear in front of me.


Music and Sound. Part 2

Find Part One Here

Do professionals in audio need to be musicians too?

My straight answer would be no; some of the best sound engineers in the industry are not musicians. But if you want my advice as a musician and as a sound engineer, learning some basics about music won't hurt you.

Being a sound engineer working on projects that involve music treatment, such as recording, editing, and mixing, will require you to develop some aptitude and basic knowledge of music that will allow you to perform better at your job. This means that even if you are not a musician, you will need a good ear for music: recognizing pitch, recognizing which musical instruments are playing, hearing whether instruments are out of tune, recognizing harmonic patterns and the form of a piece of music, recognizing and following beats and rhythmic patterns, and being sensitive to dynamics.

Let’s go deeper into each topic

Recognize and follow beats and rhythmic patterns

The Click:

Every piece of music has a heartbeat called tempo, which follows a metronome marking measured in beats per minute (bpm). If sheet music is available, the metronome marking is indicated at the top left of the page, either as a number or as an Italian musical term that gives a hint about the tempo. Most of the time, it will be necessary for recordings and/or live performances to set up the click in your DAW or music software. Depending on the musician's request, the click can be set to follow the tempo or be subdivided; make sure you are familiar with setting up a click in your software before you run your session.
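Computing the click grid from a tempo is simple arithmetic: each beat lasts 60/bpm seconds. A small sketch; the function name and arguments are my own:

```python
def click_times(bpm, beats, subdivision=1):
    """Return the click timestamps (seconds) for `beats` beats at the
    given tempo, optionally subdivided (subdivision=2 gives eighth-note
    clicks at a quarter-note tempo, and so on)."""
    interval = 60.0 / (bpm * subdivision)
    return [i * interval for i in range(beats * subdivision)]

print(click_times(120, 4))     # [0.0, 0.5, 1.0, 1.5]
print(click_times(120, 2, 2))  # [0.0, 0.25, 0.5, 0.75]
```

This is essentially what the DAW does when it lays a click track against your session timeline.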

Bars and time signatures:

Beats are grouped into bars, which can vary depending on the music; the time signature can even change within the same piece. The number of beats in a bar can be 1, 2, 3, 4, 5, etc. The time signature can also be set up in your software, where it appears as two stacked numbers: the top number indicates how many beats there are in each bar, and the bottom number indicates the type of note that gets one beat (half, quarter, eighth, etc.). Knowing the time signature will help you with bar counting and following rhythmic patterns, and it will help you locate specific parts of a piece of music. It will also allow the musician to identify bar counts and pulses.
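Bar counting can be sketched as integer arithmetic on an absolute beat count and the number of beats per bar; the function name here is illustrative:

```python
def locate(beat_index, beats_per_bar):
    """Convert an absolute beat count (starting at 0) into a
    (bar, beat-in-bar) position, both 1-based, assuming a fixed
    time signature throughout."""
    bar = beat_index // beats_per_bar + 1
    beat = beat_index % beats_per_bar + 1
    return bar, beat

print(locate(0, 4))  # (1, 1)
print(locate(7, 4))  # (2, 4)
print(locate(7, 3))  # (3, 2)
```

The same beat count lands in a different bar and beat depending on the time signature, which is why knowing it matters when you locate parts of a piece.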

Strong beat:

Each bar has strong and weak beats that give music its memorable rhythmic patterns. Usually, the first beat is the strong beat of the bar (known as the downbeat). This feature can be set up in the software so bars can have different accents, levels, and sounds for each beat, helping musicians during their performance. Knowing all these settings when configuring the click, bars, and timeline of your session is essential.


Understanding the term pickup, or anacrusis, is handy when you need to anticipate music recordings or live performances. If you hear this term, it means the music starts with a note or group of notes preceding the first downbeat; the pickup is a partial bar before the first full bar of the music.

Recognize patterns and forms of a piece of music


The structure of a piece of music is known as musical form. Familiarizing yourself with the different types of forms can help you organize your session efficiently. You will find musical phrases, harmonic structures, chord progressions, modulations, and rhythmic patterns within the music that will help you recognize different forms. A good way to become familiar with form is to listen to and read about diverse styles of music so you can identify which form is present in a piece. For popular music, form elements like the chorus and bridge may be more familiar and easier to identify; for other types of music, training your ear is the best way to go.

One terrific example of a very distinctive form is basic blues: the blues form is 12 bars, and its chord progression is very distinctive because the I chord is a dominant chord, as is the IV chord, and musicians have adopted the basic I7-IV7-V7 pattern built on it. Other forms like binary (AB), ternary (ABA), rondo (ABACA or ABACABA), arch (ABCBA), sonata (exposition, development, recapitulation), and theme and variations can be studied so you can identify them better for your sessions.

Chord progression:

Most written music is based on scales and keys. Each note of a scale is identified as a degree. The sequence and order of chords built on these scale degrees is called a chord progression. The primary chords are I, IV, and V, and some popular music genres have distinctive chord progressions that can be identified easily, like the I-IV-V-I progression used in many pop songs. Because of the variety of keys and scales that can be present in a song, chord progressions can help you identify the form and genre of a song, recognize phrases and themes easily, and locate musical parts to help you get very creative.
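Building a I-IV-V-I progression from scale degrees can be sketched with the semitone offsets of the major scale; note spellings here are simplified to sharps only, and the names are my own:

```python
# Build the root notes of a progression in any major key from scale degrees.
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of degrees I..VII

def progression(key, degrees):
    """Return the root notes of the given scale degrees (1-based)
    in a major key."""
    tonic = NOTES.index(key)
    return [NOTES[(tonic + MAJOR_STEPS[d - 1]) % 12] for d in degrees]

print(progression('C', [1, 4, 5, 1]))  # ['C', 'F', 'G', 'C']
print(progression('G', [1, 4, 5, 1]))  # ['G', 'C', 'D', 'G']
```

The same degree pattern transposes to any key, which is exactly why musicians talk in degrees (I-IV-V) rather than note names.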


As part of chord progressions, the way a musical theme, phrase, or idea ends is harmonically accompanied by at least two chords recognized as a cadence. A cadence gives a sense of resolution and can be classified into many types. One of the easiest to recognize is the perfect cadence, which goes from the V chord to the I chord, where the bass note is the root (tonic) of each chord. Being able to recognize cadences in long pieces of music can help you with your creative process while mixing, etc.

There are some other musical elements that will help you understand what's happening in a song:

A riff is a pattern of notes repeated throughout a piece of music. Riffs are not usually repeated back-to-back and are typically found at the end of a verse or in the chorus.

Groove, a term borrowed from jazz musicians, often refers to a rhythmic sense of cohesion employed in a routine or musical practice style.

A solo is an improvised section where each instrumentalist performs in turn; the order can be predetermined or not. Solos are played over the form of the theme, and each pass through the form is called a chorus.

Fills are improvised melodic or rhythmic phrases played between phrases of the theme.

A vamp is a repeating musical figure, section, or accompaniment until the cue for the next section is given.

An interlude is a pre-written arrangement that serves as a transition between sections or solos.

Breaks are momentary interruptions of the musical discourse while time is maintained. Sometimes a soloist plays during the break (a solo break).

Training your ear to identify instruments and pitch

If you have never heard an instrument you are about to record, just ask the musician to explain how it is played and any other details you might want to know. Ask the musician to play the instrument in front of you so you can hear it, walk around it, and find the best spot to place a microphone for recording or amplification.


Identifying instruments that are out of tune can be tricky and takes a lot of training, so the best approach is to remind the musicians, before recording and every once in a while during long sessions, to check their tuning whenever possible.

Pitch is how the human ear interprets the frequency at which a sound wave is produced by a source. The higher the frequency, the higher the pitch, and vice versa. Musical instruments can produce different pitch ranges depending on their construction. Each musical note produced by any instrument has a related frequency, measured in hertz (Hz), that is then interpreted as a specific pitch or note (C, D, E, F, G, etc.).
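The frequency-to-pitch relationship above can be made concrete: each semitone is a factor of 2^(1/12) in frequency, so the distance from the A4 = 440 Hz reference is 12·log2(f/440) semitones. The snippet below is an illustrative sketch (function name is hypothetical; A4 = 440 Hz is the standard concert-pitch assumption):

```python
import math

A4 = 440.0  # reference tuning in Hz (standard concert pitch)
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def frequency_to_note(freq_hz):
    """Return the nearest note name and octave for a given frequency.

    Each semitone is a factor of 2**(1/12) in frequency, so the
    signed distance in semitones from A4 is 12 * log2(f / 440).
    """
    semitones_from_a4 = round(12 * math.log2(freq_hz / A4))
    midi_number = 69 + semitones_from_a4  # A4 is MIDI note 69
    name = NOTE_NAMES[midi_number % 12]
    octave = midi_number // 12 - 1
    return f"{name}{octave}"

print(frequency_to_note(261.63))  # middle C → "C4"
print(frequency_to_note(440.0))   # → "A4"
```

This is the same mapping a tuner app performs: it measures the dominant frequency and snaps it to the nearest note name.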

Many resources can be found online to help you train your ear and learn about musical instruments and music theory. If you are interested in going deeper into these concepts check out:

Learn about orchestration, listen to every musical instrument in an orchestra, their construction, pitch range, tips, tricks, and more :


Learn music theory:


Improve core listening skills like frequency detection (Soundgym offers SoundGirls Members Subscriptions to the service):



Do Musicians Need to Know About Sound?

Music and Sound: Part 1

Modern and changing times have pushed people to learn and use technology more and more, especially musicians. Particularly during the pandemic, many musicians have needed to record themselves and edit and mix their own music. Does this mean that, besides being musicians, they now have to master a new career as sound engineers too?

I would say yes, but only if it is their true interest. Diving into a sound career involves a lot of technical terms to learn, gear to buy, and aptitudes to develop. So I would say no if you are not much of a technophile and you don’t want to spend your instrument-practice time troubleshooting equipment or learning deep theoretical and technical aspects of sound.

That being said, my first and best advice would be to always hire a professional sound person to help you set up your home studio, teach you how to do your recordings and mixes, and give you professional advice. However, if you are still thinking of giving it a try, setting up your own home studio, mixing your own music, and doing it all by yourself, I may have some tips for you.

Technical aptitudes are among the important things to consider: computer skills and good problem-solving skills are basic aptitudes you’ll need to set up, use, and master your own music studio. Keep in mind that you might have to update or buy a computer that can meet the requirements of recording and music software. Most software websites now publish a specific list of technical requirements for their products, so take a look to make sure your computer is up to date. The main things that determine whether a computer can handle music and recording software are the processor type, operating system version, RAM size, disk space, ports, etc. If any of these terms are a foreign language to you, you may also need help from a person who knows about computers.

Here is an example of Ableton Live’s computer requirements for a Windows computer:

Windows 10 (Build 1909 and later)

Intel® Core™ i5 processor or an AMD multi-core processor.


1366×768 display resolution

ASIO compatible audio hardware for Link support (also recommended for optimal audio performance)

Access to an internet connection for authorizing Live (for downloading additional content and updating Live, a fast internet connection is recommended)

Approximately 3 GB disk space on the system drive for the basic installation (8 GB free disk space recommended)

Up to 76 GB disk space for additionally available sound content

Digital Audio Workstations

The next thing you will need to consider is getting a digital audio workstation (DAW) and/or music creation software. DAWs are computer programs designed to record sound into a computer, manipulate the audio, mix it, add effects, and export it in multiple formats.

You will need to choose, according to your needs and preferences, among the many workstations available online, ranging from free versions to monthly subscriptions or perpetual licenses. Some of the most popular DAWs among professional sound engineers are Pro Tools, Cubase, Logic Pro, Ableton Live, Reaper, Luna, and Studio One, but you can also find others for free or for less than USD $100.

To learn how to use any of these DAWs, you will find many resources online on the manufacturers’ websites, Google, or YouTube, such as training videos, workshops, live sessions, etc. Here is an example of a tutorial video for Pro Tools that can be found on Avid’s YouTube channel: Get Started Fast with Pro Tools | First — Episode 1: https://www.youtube.com/watch?v=9H–Q-fwJ1g

Some theoretical concepts will also come up when doing recordings and mixing, like stereo track, mono track, multitrack, bit depth, sample rate, phantom power, condenser mics, phase, plugin, gain, DI, etc. Multiple free online resources to learn about those concepts are available all over the internet. Just take your time to learn them.

You can read about educational resources at https://soundgirls.org/educational-resources/

Audio Interface

The next thing you are going to need is an Audio Interface, but why?

Audio interfaces are hardware units that allow you to connect microphones, instruments, MIDI controllers, studio monitors, and headphones to your computer. They translate the electrical signals produced by sound waves into a digital protocol (0s and 1s) that your computer can understand.
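As a toy sketch of what the interface’s analog-to-digital stage does, the snippet below samples a sine tone at a sample rate and quantizes each sample to an integer code at a given bit depth. Real converters are far more sophisticated; the function name and parameters are illustrative:

```python
import math

def digitize(duration_s, freq_hz, sample_rate=44100, bit_depth=16):
    """Toy A/D conversion: sample a sine tone and quantize it.

    Returns a list of signed integer codes, as a 16-bit converter
    would produce (max code 32767 for bit_depth=16).
    """
    max_code = 2 ** (bit_depth - 1) - 1  # e.g. 32767 for 16-bit
    n_samples = int(duration_s * sample_rate)
    samples = []
    for n in range(n_samples):
        # "Analog" value at sample instant n / sample_rate seconds
        analog = math.sin(2 * math.pi * freq_hz * n / sample_rate)
        samples.append(round(analog * max_code))  # quantize to integer code
    return samples

codes = digitize(0.001, 1000)  # 1 ms of a 1 kHz tone at 44.1 kHz → 44 samples
print(len(codes))
```

This is also where the terms you will keep running into (sample rate, bit depth) come from: the sample rate sets how often the signal is measured, and the bit depth sets how finely each measurement is quantized.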

Depending on your requirements as a musician, you may need to record one track at a time or several. For example, if you play drums you may need more than one mic, but if you are a singer, one mic is probably enough. This means you will find audio interfaces with different numbers of inputs, and the price usually scales with them: the greater the number of channels and preamps, the more money you’ll need. Audio interfaces also have different types of inputs: for microphones, for instruments (with a DI), or both (combo), so make sure you choose the proper one for your needs. In particular, make sure it has built-in preamplifiers and phantom power in case you are using condenser mics to record.

There are also microphones that you can plug directly into your computer or phone via USB, which means no audio interface is needed (it’s built in). These mics can be helpful for podcasters, broadcasters, and video streamers. However, bear in mind that even if you try your best, these recordings may not match the results of professional recording and mixing.


Learning about microphones and microphone techniques might take lots of blogs to read and videos to watch, so I will narrow it down: there are no fixed formulas or strict rules when it comes to microphones. The mic you choose can vary depending on your budget, the type of instrument you play, and what you are using the microphone for. For this, you will need to research types of mics by construction (dynamic, condenser, ribbon, etc.), types of polar pattern (cardioid, supercardioid, omni, etc.), and some recommendations of mics based on the instruments you’ll record.

For example, you may find definitions for commonly-used terms for microphones and Audix products on their website: https://audixusa.com/glossary/. Or you can register for Sennheiser Sound Academy Seminars at https://en-ae.sennheiser.com/seminar-recordings.

If you want to read more about Stereo Microphone Techniques you can also check: https://www.andreaarenas.com/post/2017/11/06/stereo-microphone-techniques

Midi Controllers

MIDI (Musical Instrument Digital Interface) controllers are mostly used to generate digital data that triggers other equipment or software, meaning that they do not generate sound by themselves. A MIDI controller can be a keyboard, a drum pad-style device, or a combination of the two. You will need to learn how to program and map your MIDI controller to use it creatively in your productions.
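To make “digital data, not sound” concrete: when you press a key, the controller sends a short message, not audio. A note-on message, for instance, is just three bytes: a status byte (0x90 plus the channel), the note number (0–127), and the velocity (0–127). The helper below is an illustrative sketch, not part of any particular library:

```python
def note_on(note, velocity, channel=0):
    """Build the three raw bytes of a MIDI note-on message.

    Status byte = 0x90 | channel (channels 0-15), then note number
    (0-127), then velocity (0-127).
    """
    if not (0 <= note <= 127 and 0 <= velocity <= 127 and 0 <= channel <= 15):
        raise ValueError("value out of MIDI range")
    return bytes([0x90 | channel, note, velocity])

# Middle C (MIDI note 60) at medium velocity on channel 1:
print(note_on(60, 64).hex())  # → "903c40"
```

The DAW (or a synth) receives these bytes and decides what sound to make, which is why the same controller can play a piano patch one minute and trigger drum samples the next.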

You will also find many resources online to help you learn about MIDI controllers, such as Ableton’s video on how to set up your MIDI controller: https://www.youtube.com/watch?v=CWOXblksDxE


The acoustics of the room are also important: a lack of acoustic treatment can make your recordings sound different, and usually in a bad way. Sound is reflected and absorbed by every surface in a room, and noise can bleed into your recordings too. If you are working in an improvised room in your house and professional acoustic treatment is not possible, keep some basics in mind: avoid recording in rooms with parallel walls, square or rectangular layouts with square corners, and hard surfaces, and minimize reflected sound with carpets, soft couches, pillows, etc.

Once again, hiring a sound engineer as a consultant might be your best option if you are planning to take the next step as a musician and learn about sound engineering. It will save you time and money, and you’ll be employing a friend.




During these last months of 2020, I started a master’s degree that has pleasantly surprised me. Although it seems unrelated to my professional facet in audio, studying “cultural management” has led me to a new and exciting world, one more related to my interests than it seems.

But why do I want to talk about acoustemology and cultural management when I have spent almost 14 years in sound engineering focusing only on the technical aspects? In some reading, I came across the term acoustemology. At the time I did not know what it meant, but its etymological roots caught my attention.

Ethnomusicology and some branches of anthropology, together with acoustics, have already produced studies of music, ecological acoustics, and soundscapes, helping to interpret sound waves as representations of collective relationships and social structures. Such is the case of the sound maps of different cities and countries, which gather information on indigenous languages, music, urban areas, forest areas, etc. Some examples:

Mexican sounds through time and space: https://mapasonoro.cultura.gob.mx/

Sound Map of the Indigenous Languages of Peru: https://play.google.com/store/apps/details?id=com.mc.mapasonoro&hl=en_US&gl=US

Meeting point for the rest of the sound maps of the Spanish territory: https://www.mapasonoro.es/

The life that was, the one that is and the one that is collectively remembered in the Sound Map of Uruguay: http://www.mapasonoro.uy/

As Carlos de Hita says, our cultural development has been accompanied by soundscapes or soundtracks that include the voices of animals, the sound of wind, water, reverberation, temperature, echo, and distance.

But it is in the term acoustemology, coined by Steven Feld in 1992, that these ideas converge: a soundscape perceived and interpreted by those who resonate with their bodies and lives in a social space and time. It attempts to build an epistemological theory of how sound and sound experiences shape our different ways of being and knowing the world, and our cultural realities.

But then another concept comes into play, perception. Perception is mediated by culture: the way we see, smell, or hear is not a free determination but rather the product of various factors that condition it (Polti 2014). Perception is what really determines the success of our work as audio professionals, so I would like to take a moment with this post to think over the following ideas and invite you to do it with me.

As professionals dedicated to the sound world, do we stop to think about the impact of our work on the cultures in which we are immersed? Do we worry about taking into account the culture in which we are immersed when doing an event? Or do we only develop our work in compliance with economic and technological guidelines instead of cultural ones?

When we plan an event, do we use what is really needed, do we have a limit or to attend to our ego we use everything that manufacturers sell us without stopping to think about the impact (economic, social, and environmental) that this planning has in the place where these events will be taking place? Do we really care about what we want to transmit or do we only care about making the audio sound as loud as possible or even louder? Do we stop to think what kind of amplification an event really requires or do we just want to put a lot of microphones, a lot of speakers, if it’s immersive sound the better, make it sound loud, and good luck if you understand? Do we care about what the audience really wants to hear? Are we aware of noise pollution or do we just want the concert to be so loud that people can’t even hear their own thoughts?

Are we conscious of making recordings that reflect and preserve our own culture and that of the performer, or do we only care about obtaining awards at all costs? Have we already shared all the knowledge we have about audio or are we still competing to show that we know everything, that I am technically the best? Or is it time to humanize and put our practice as audio professionals in a cultural context?

I remember an anecdote from a colleague, who told how, after doing all the setup for a concert in a Mexican city (I do not remember the details), it was only after the blessing of the shamans and the approval of the gods that the event could take place.

Our work as audio professionals should be focused on dedicating ourselves to telling stories in more than acoustic terms, telling stories that bear witness to our sociocultural context and who we are.

“Beyond any consideration of an acoustic and/or physiological order, the ear belongs to a great extent to culture, it is above all a cultural organ” (García 2007).


Bull, Michael; Back, Les. 2003. The Auditory Culture Reader. Oxford and New York: Berg.

De Hita, Carlos. 2020. Sound Diary of a Naturalist. We Learn Together BBVA, Spain. Available at https://www.youtube.com/watch?v=RdFHyCPtrNE&list=WL&index=14

García, Miguel A. 2007. “The Ears of the Anthropologist: Pilagá Music in the Narratives of Enrique Palavecino and Alfred Metraux.” Runa 27: 49-68; and 2012. Ethnographies of the Encounter: Knowledge and Stories about Other Music. Anthropology Series. Buenos Aires: Ed. Del Sol.

Rice, Timothy. 2003. “Time, Place, and Metaphor in Music Experience and Ethnography.” Ethnomusicology 47 (2): 151-179.

Macchiarella, Ignazio. 2014. “Exploring micro-worlds of music meanings”. The thinking ear 2 (1). Available at http://ppct.caicyt.gov.ar/index.php/oidopensante.

Polti, Victoria. 2014. “Acustemología y reflexividad: aportes para un debate teórico-metodológico en etnomusicología.” XI Congreso IASPM-AL: música y territorialidades: los sonidos de los lugares y sus contextos socioculturales. Brazil.

Andrea Arenas is a sound engineer and her first approach to music was through percussion. She graduated with a degree in electronic engineering and has been dedicated to audio since 2006. More about Andrea on her website https://www.andreaarenas.com/


Stereo Recording Systems

In order to select the technique we will work with, we must first consider some details such as budget, available equipment, and music style; with this clear, we can decide on the system that best suits the circumstances we face.

There are 4 basic elements to choose a technique:

From there arise some of the best-known stereo recording systems, such as:


There is a relationship between the position at which a virtual source appears between a pair of speakers and the difference in sound intensity (in dB) of a stereo signal. In stereo recording systems, this variation is achieved through the four elements discussed above: polar pattern, position, the angle between the microphones, and the distance to the source. (Recall that we are talking about stereo recording techniques.)

For example, we know that for a virtual source to sit 100% toward one of the speakers, the difference must be 18 dB (or 1.5 ms); for 75% it is 11 dB; for 50%, 6.5 dB; for 25%, 3 dB; and 0 dB places it exactly at the center.
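Using only the reference points quoted above (0 dB → 0%, 3 dB → 25%, 6.5 dB → 50%, 11 dB → 75%, 18 dB → 100%), we can sketch a rough interpolation of the perceived source position for any level difference. This is an illustrative approximation, not a psychoacoustic model:

```python
# Reference points from the text: (level difference in dB, shift in %).
REFERENCE = [(0.0, 0), (3.0, 25), (6.5, 50), (11.0, 75), (18.0, 100)]

def pan_percent(level_diff_db):
    """Approximate shift of the virtual source toward the louder
    speaker, in percent, by linear interpolation between the
    reference points above. Differences beyond 18 dB saturate at 100%.
    """
    db = min(abs(level_diff_db), 18.0)
    for (db_lo, pct_lo), (db_hi, pct_hi) in zip(REFERENCE, REFERENCE[1:]):
        if db <= db_hi:
            frac = (db - db_lo) / (db_hi - db_lo)
            return pct_lo + frac * (pct_hi - pct_lo)
    return 100.0

print(pan_percent(6.5))   # → 50.0
print(pan_percent(18.0))  # → 100.0
```

Note how the curve is not linear: the first 3 dB already moves the image a quarter of the way, while the last jump from 75% to 100% takes a full 7 dB.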

These differences in level (dB) or time (ms) can be achieved by manipulating the distance and/or the angle between the microphones, so that the sound arriving at each microphone capsule of the system translates into different images in the speakers, with different positions and image widths of the virtual sources.

For example, when the microphones are brought closer to the source, the image becomes larger in the speakers. Or, if the angle between the axes of the microphones in an XY system is reduced, the image narrows because the recording area becomes larger. In the same way, we can observe image differences between the AB, XY, and equivalent systems.


The image of the orchestra shown above is an extreme example of how the results can vary depending on the chosen configuration. However, this does not mean that selecting an AB stereo recording system will always yield an image coming from the far left and right of the loudspeakers, or that choosing a coincident system will always yield an image concentrated at the center of the loudspeakers. Everything depends on the parameters selected (polar pattern, angle, distance between the microphones, and distance to the source) for each configuration.

Specifically, if we compare an XY system with a cardioid polar pattern vs an AB one, we might hear:


I invite you to listen and select your favorite stereo recording system, experimenting with variations in the polar patterns, distances, and angles of the recording systems.

I take this opportunity to thank the person to whom I owe this knowledge, whom I greatly appreciate and admire, Thorsten Weigelt.

Additional notes:

Below you will find a list of the specifications of the most well-known established stereo recording systems.

Andrea Arenas: I’m a sound engineer. My first approach to music was through percussion, which I started playing at age 10. I graduated as an electronic engineer and have been dedicated to audio since 2006. I also have studies in composition and a crazy love for music.

Nizarindani Sopeña: A journalist from the National Autonomous University of Mexico (UNAM) and a specialist in the cultural field. Publisher for ten years of Sound:check Magazine, a Mexican publication aimed at professionals in the entertainment industry in Latin America and the world.