Blog 10: Reflection

I was excited to start the Creative Mixing unit, as for the longest time I had wanted to learn more about sound post-production.

Aside from gaining technical skills, I got a better understanding of the importance of level balancing, the role of surround sound in the industry, the future of mastering, and the concept of visualising stereo imaging. This knowledge made my mixing process more mindful and deliberate.

After learning more about Dolby’s systems and history, I found it hard to focus on the film in the cinema; instead, I listened to how the mixing engineers had panned a ball hitting a tennis racket across the surround system.

When I was looking for a mixing engineer to research for blog 9, I tried to find a woman among them. I did find a few names, but there was not much information on them. The most prominent female mixing engineer I discovered was Dr Susan Rogers, so I watched and read plenty of interviews with her and tried to find out more on Berklee’s website and in online journals such as Sound On Sound. Even then, most of the questions were about Prince. “How was Prince in the studio?”, “Did he dress extravagantly every day?”, “How did it feel to work with a hypercreative like him?”. It confused me, as I know she has done a lot of mixing and producing, has written a book on the neuroscience of music listening, and teaches at the largest independent college of contemporary music in the world.

She described a few interesting approaches, such as listening through someone else’s ears: she would ask what exactly a person liked about a song, then listen to it again with that knowledge in mind. At work, however, she prefers not to know what a song is about, so that she can hear it as the audience does, with no context.

However, even after hours of listening to her interviews, I still couldn’t find much detail about her techniques and approaches, which is why I went with Mark “Spike” Stent instead.

After completing blog 9, I researched the state of gender balance in music production and post-production, and the findings were not promising at all.

The study titled “Lost in the Mix” highlights the severe underrepresentation of women and nonbinary people in key technical roles within the music industry, particularly on the top 50 streamed songs across various genres: only around 5% of technical credits on these songs were attributed to women and nonbinary people. The report, conducted by We Are Moving the Needle, Howard University, Middle Tennessee State University, and Jaxsta, examined data from 2022, encompassing 1,128 songs across metrics such as streaming, Grammy-winning albums, and industry playlists. Among the findings: women and nonbinary individuals were more commonly found in assistant roles than in key technical roles, possibly indicating a pipeline for future advancement but also suggesting a potential glass ceiling. At the Grammy Awards, only 7.6% of technical roles were filled by women and nonbinary individuals, with just one woman winning in a technical category.

These numbers made me feel like giving up at first; a few hours later, that turned into anger and persistence. I reached out to as many London-based music studios as I could find, sending them my CV and asking about apprenticeships.

In the end, writing this blog and conducting the research pushed me to take action on progressing my career.

References:

Ableton. “Susan Rogers on Prince, Production and Perception | Loop.” YouTube, 17 Apr. 2018, www.youtube.com/watch?v=EgBZHIUUn8Q.

Corcoran, Nina. “Women and Nonbinary Producers and Engineers ‘Vastly Underrepresented’ in 2022’s Top Songs, New Study Finds.” Pitchfork, 12 Apr. 2023, pitchfork.com/news/women-and-nonbinary-producers-and-engineers-vastly-underrepresented-in-2022s-top-songs-new-study-finds/. Accessed 26 May 2024.

Jordan, Benn. “Why Aren’t There More Female Music Producers?” YouTube, 25 Feb. 2021, www.youtube.com/watch?v=2Ipb81z46kI.

Lazar, Emily, et al. Lost in the Mix. Apr. 2023, w1.mtsu.edu/media/fix.pdf.

Musician’s Hangout. “WOMXN & AUDIO LIVE HERSTORY and Q&A.” YouTube, 5 Dec. 2020, www.youtube.com/live/3pPVwWE0A1g?t=180s. Accessed 26 May 2024.

Psych Mic. “Letting out Your Inner Child | Music & Psychology with Dr. Susan Rogers.” YouTube, 23 Dec. 2021, www.youtube.com/watch?v=GyM5mbNEpWs. Accessed 26 May 2024.

Red Bull Music Academy. “Susan Rogers on Engineering Prince | Red Bull Music Academy.” YouTube, 8 Dec. 2016, www.youtube.com/watch?v=8ON0nQCQF08&t=993s. Accessed 26 May 2024.

Rogers, Susan. “Susan Rogers: From Prince to Ph.D.” Tape Op, May 2017, tapeop.com/interviews/117/susan-rogers/.

—. “The Listener Profile: How We Perceive Music.” Sound On Sound, Sept. 2023, www.soundonsound.com/sound-advice/listener-profile-how-we-perceive-music.

“Why Are There Only a Handful of Female Mixing and Mastering Engineers?” Reddit, 3 Apr. 2022, www.reddit.com/r/mixingmastering/comments/tv479r/why_are_there_only_a_handful_of_female_mixing_and/. Accessed 26 May 2024.

Blog 9: Mark “Spike” Stent

Mark “Spike” Stent went from being a tea boy to a mixing engineer for Björk, Depeche Mode, Echo & the Bunnymen, Massive Attack, and other great artists.

Spike Stent’s mixing techniques are characterised by a blend of traditional analogue warmth and modern digital precision. During the industry’s transition from analogue to digital, he embraced both technologies, incorporating analogue hardware alongside digital plugins.

This approach allows him to achieve the rich, full-bodied sound that analogue equipment provides while benefiting from the flexibility and convenience of digital tools.

One of Spike’s signature techniques is his use of parallel compression. He has said he doesn’t like gentle compression, which I found interesting given his professional experience. By compressing a duplicate of the track heavily and blending it with the uncompressed original, he adds depth and punch to the mix without sacrificing dynamic range. In his interview with Dave and Herb on Pensado’s Place, he said: “I’ll have three different types of compressors, for example, Compex, send all the drums there, EQ and mix them all in underneath the [dry] drums… they would jump up when the chorus comes…”
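
The parallel-compression idea can be sketched in a few lines. This is only an illustration of the technique, not Stent’s actual chain: the threshold, ratio, and blend values are my own, the compressor is a simple hard-knee, peak-based one with no attack or release smoothing, and a real compressor would be far more refined.

```python
import math

def compress(sample, threshold_db=-30.0, ratio=4.0):
    """Hard-knee, peak-based compression of one sample (no attack/release)."""
    level_db = 20 * math.log10(max(abs(sample), 1e-9))
    if level_db <= threshold_db:
        return sample  # below threshold: pass through unchanged
    # Above threshold, the excess level is divided by the ratio.
    compressed_db = threshold_db + (level_db - threshold_db) / ratio
    gain = 10 ** ((compressed_db - level_db) / 20)
    return sample * gain

def parallel_compress(dry, blend=0.5):
    """Blend a heavily compressed duplicate underneath the dry signal."""
    wet = [compress(s) for s in dry]
    return [d + blend * w for d, w in zip(dry, wet)]
```

Because the compressed copy passes quiet material through but pulls loud material down hard, blending it underneath lifts the quiet details while the loud peaks of the dry track stay almost untouched — exactly the “jump up” effect he describes for the drums.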

Automation is a crucial aspect of Spike Stent’s mixing process. He uses automation to enhance the dynamics and emotional impact of the mix. This can include automating volume levels, panning, effects sends, and even plugin parameters.

Spike also places a strong emphasis on the overall vibe and energy of the mix. He often focuses on the groove and rhythm, ensuring that the mix feels cohesive and compelling from start to finish. This approach is particularly evident in his work with bands like U2 and Coldplay, where the interplay between instruments and vocals creates a dynamic and immersive listening experience.

As he has worked across many different genres, he doesn’t have a single signature sound. Spike Stent’s workflow begins with a critical initial listening session: he starts by thoroughly listening to the rough mix provided by the artist or producer. This step is not just about hearing the music but about understanding its core elements, emotional tone, and the artist’s vision. Spike makes detailed notes on what he perceives as the track’s strengths and weaknesses, identifying areas that may need enhancement or adjustment.

Overall, the techniques he uses are mostly standard: gain staging, colour-coding, reverbs, EQing to ensure instruments don’t compete for the same frequency range, and critical listening on different playback systems to ensure the mix translates across environments. What makes him stand out is probably his experience and the sharp ear that helps him identify problem areas quickly, as was evident in his guest lecture at the University of West London, where he improved students’ mixes in a few minutes.

For my future work, incorporating Spike’s techniques such as parallel compression and precise EQ sculpting could significantly enhance the clarity and impact of my mixes.

References:

BAREFOOT SOUND. “Mark Spike Stent – Masters of the Craft.” YouTube, 11 Sept. 2018, www.youtube.com/watch?v=DJwdP-TvYBU.

London College of Music. “Mark “Spike” Stent at the University of West London | London College of Music.” YouTube, 24 Feb. 2020, youtu.be/ZfPaepcRsyY?si=Aw27Jv87uk7YXaGa.

Pensado’s Place. “Mark “Spike” Stent – Pensado’s Place #250.” YouTube, 18 Feb. 2016, www.youtube.com/watch?v=D5_uQJn2JFA.

Tingen, Paul. “Spike Stent.” Sound On Sound, Jan. 1999, www.soundonsound.com/people/spike-stent.

Blog 8: Immersive Audio

In my last blog, I discussed surround sound. Immersive audio emerged as its notable successor; although both technologies create an enveloping listening experience, immersive audio works on a different principle.

In conventional audio set-ups like stereo or surround sound, every audio component is assigned to particular channels or channel groups, whereas in immersive audio formats each sound is treated as an independent “object” that can be positioned anywhere in three-dimensional auditory space. Immersive audio is encoded with metadata that describes the audio objects within the scene rather than being tied to specific channels or speakers. This allows the audio to adapt dynamically to different speaker configurations, making it suitable for a wide range of playback set-ups: whether you’re listening on headphones, a stereo system, or a full surround system, the renderer adjusts timing, volume, and effects to recreate the intended three-dimensional scene. This is also what allows the technology to be experienced on a simple pair of headphones.
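
The channel-free principle can be sketched as follows: each object carries a 3-D position as metadata, and a renderer derives per-speaker gains for whatever layout is actually connected. The inverse-distance weighting below is a toy stand-in for a real renderer (such as Dolby Atmos’s), and both speaker layouts are hypothetical coordinates of my own choosing.

```python
import math

def speaker_gains(obj_pos, speakers):
    """Distance-weighted gains for one object over an arbitrary speaker layout."""
    weights = [1.0 / max(math.dist(obj_pos, s), 1e-6) for s in speakers]
    total = sum(weights)
    return [w / total for w in weights]  # normalised: gains always sum to 1

# Two example layouts, as (x, y, z) positions around a listener at the origin.
stereo = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
five_speaker = [(-1, 1, 0), (1, 1, 0), (0, 1, 0), (-1, -1, 0), (1, -1, 0)]
```

The same object position produces two gains on the stereo layout and five on the larger one — the mix adapts to the playback system instead of being baked into channels.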

I listened to a few examples of immersive audio and compared them to stereo. Immersive sound does create a much wider “stereo” field, allowing each instrument to shine in the mix. Many elements were more prominent in the immersive mixes, creating a sense of being absorbed into the music. However, in my future mixes and listening, I would most likely still choose stereo.

I can’t quite put my finger on why I lean towards stereo rather than immersive audio. Perhaps it’s a matter of familiarity, but I prefer all the elements of a piece of music to be positioned in a tighter sonic space from left to right; to me, they sound more organic that way. It is impossible to deny, though, that immersive audio has an impressive appeal and holds promise for the future.

References:

Inglis, Sam. “An Introduction to Immersive Audio.” Sound On Sound, Jan. 2022, www.soundonsound.com/techniques/introduction-immersive-audio. Accessed 10 May 2024.

Kopp, Bill. “GRAMMY.com.” GRAMMY.com, 16 May 2022, www.grammy.com/news/what-is-immersive-audio-industry-explainer-dolby-atmos. Accessed 10 May 2024.

Launder, Lucy. “What Is Immersive Audio?” Abbey Road, 14 May 2023, www.abbeyroad.com/news/what-is-immersive-audio-3273. Accessed 10 May 2024.

Blog 7: Surround Sound

Surround sound is a technology that creates an immersive audio experience by using multiple speakers placed around a room to envelop the listener with sound from all directions. It enhances the sense of realism and immersion in movies (which it was initially created for), music, and games.

One of the earliest examples of surround sound in film is the “Fantasound” system developed by Disney for its 1940 animated feature “Fantasia”. Fantasound used multiple audio channels to create a more immersive listening experience, although it was only shown at a handful of cinemas in major cities, as most venues weren’t equipped to handle such advanced technology.

Dolby Laboratories played a significant role in the development of surround sound technology. Dolby Stereo, introduced in 1976, was one of the earliest systems to provide multi-channel audio for movies. It used four channels: left, centre, right, and a mono surround channel. One of the iconic films to use this system was “Star Wars: Episode IV – A New Hope”. In 1982, Dolby introduced Dolby Surround, which expanded upon Dolby Stereo by adding a rear channel for a more immersive experience; this technology became the foundation for modern surround sound systems. Star Wars returned to cinemas in 1983, introducing surround sound to wider audiences, and the success of the film series helped popularise Dolby, leading to its widespread adoption in cinemas around the world.

Surround sound first reached music in the late 1960s and early 1970s, primarily through experimental recordings and specialised playback systems. With the release of albums mixed in quadraphonic sound and the introduction of home surround sound systems, popular records like Pink Floyd’s “The Dark Side of the Moon” embraced the new mixing technology.

Surround sound is now widely used in various forms of entertainment, including movies, gaming, and home audio systems, offering increasingly immersive experiences with advancements like Dolby Atmos.

References:

“A Brief History of Surround Sound.” KEF US, 30 Mar. 2021, us.kef.com/blogs/news/a-brief-history-of-surround-sound. Accessed 3 May 2024.

Greenwald, Will. “What Is Surround Sound? 5.1, 7.1, Dolby Atmos, and More Explained.” PCMag UK, 3 Mar. 2022, uk.pcmag.com/speakers/139029/what-is-surround-sound-51-71-dolby-atmos-and-more-explained. Accessed 3 May 2024.

Tom S. Ray Audio Mastering. “How Dolby Atmos in Cinemas Works: The History of Dolby Surround Sound.” Facebook, www.facebook.com/watch/?v=811768992319235. Accessed 3 May 2024.

Evans, Jonathan. “We Look Back at Home Cinema History and the Birth of Dolby Surround Sound.” What Hi-Fi?, 23 June 2023, www.whathifi.com/features/we-look-back-at-home-cinema-history-and-the-birth-of-dolby-surround-sound. Accessed 3 May 2024.

Blog 6: The Future of Mastering

In the analogue era, mastering engineers were responsible for the final steps in preparing recorded music for distribution. They transferred final mixes from analogue tape to a master tape, adjusted the frequency balance and dynamics, controlled noise, set the final playback levels, and ensured overall quality before transfer onto formats like vinyl records or cassette tapes. Their expertise was crucial in refining the sound and ensuring that it translated well across different playback systems. In a way, this is still the job – ensuring that the final mix sounds good on different playback systems. A mastering engineer nowadays needs to make sure a track won’t be turned down too far by streaming platforms’ loudness normalisation and won’t sound too quiet compared to other music on the market. On top of controlling the distribution quality, mastering engineers oversee the whole album and make sure all the tracks on it sound like they belong to the same record.

The adjustments made during mastering are typically gentle and can’t drastically change a finished mix. Often, the main job left after mixing is level checking: making sure a track sits around the commonly cited streaming target of -14 LUFS with a true peak no higher than -1 dBTP. That is why a lot of mixing engineers, especially those working with low-budget or self-producing musicians, have started mastering as part of the mixing process. This is one of the challenges mastering engineers face now and will keep facing in the future.
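
The level-checking step above reduces to a couple of lines of arithmetic. This sketch assumes the integrated loudness and true peak have already been measured (actual LUFS metering follows the ITU-R BS.1770 K-weighting and gating procedure, which is out of scope here); the function name and defaults are my own.

```python
def mastering_gain_db(measured_lufs, true_peak_dbtp,
                      target_lufs=-14.0, peak_ceiling_dbtp=-1.0):
    """Gain (in dB) to hit the loudness target, capped by the true-peak ceiling."""
    loudness_gain = target_lufs - measured_lufs      # dB needed to reach target
    peak_headroom = peak_ceiling_dbtp - true_peak_dbtp  # dB before peaks exceed ceiling
    # If reaching the target would push the true peak past the ceiling,
    # only apply what the headroom allows (a limiter would handle the rest).
    return min(loudness_gain, peak_headroom)
```

For example, a track measured at -18 LUFS with a -2 dBTP true peak can only come up 1 dB before hitting the ceiling, even though the loudness target asks for 4 dB.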

Much more of a threat could be artificial intelligence. As AI technologies become more advanced, there’s a concern that they could replace or devalue the expertise and creativity that mastering engineers bring to the process. Buying a program which can analyse audio data and make decisions about EQ, compression, and other processing parameters without human intervention is cheaper than hiring a person who would spend hours working on your record. This could lead to a reduction in demand for human mastering engineers, as AI systems become more capable of delivering high-quality results at a lower cost and faster turnaround time.

Knowing where mastering is heading is helpful when considering future careers, and it prompts more ethical thinking about using AI to do a job that once belonged to people.

References:

Anderson, Nate. “AI Can Now Master Your Music—and It Does Shockingly Well.” Ars Technica, 6 Feb. 2024, arstechnica.com/ai/2024/02/mastering-music-is-hard-can-one-click-ai-make-it-easy/. Accessed 26 Apr. 2024.

iZotope. “Mixing vs. Mastering: What’s the Difference?” YouTube, 21 June 2023, www.youtube.com/watch?v=BGfHR8MRiBI. Accessed 26 Apr. 2024.

Kagan, Adam. “Mastering Music: 72 Years of History – Sonarworks Blog.” Sonarworks Blogs, 2 Nov. 2022, www.sonarworks.com/blog/learn/the-history-of-mastering. Accessed 26 Apr. 2024.

Blog 5: Loudness Wars

The “loudness wars” refer to the phenomenon in the music industry where recordings are mastered and produced with increasingly high levels of loudness, often at the expense of dynamic range and audio quality. 

Even back in the 1960s, when jukeboxes were still around, The Beatles recognised that louder records tended to grab more attention when played on jukeboxes alongside other popular releases. Allegedly, they ordered a Fairchild compressor/limiter for Abbey Road Studios to achieve a louder, more impactful sound without sacrificing the quality of their recordings.

However, the push for loudness around the 2000s was different: it did sacrifice dynamics and quality, justified by a “the louder the better” mindset once digital formats allowed engineers to master tracks at excessive levels. A number of producers and mastering engineers believed that pushing levels far beyond what is now the -14 LUFS streaming norm would help catch listeners’ attention when a song was played on the radio among quieter tracks.
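
As a toy illustration of the trade-off (not any engineer’s actual chain): hard-clipping a signal at a low ceiling and normalising it back to full scale raises the average level, but it destroys the peak-to-average (“crest”) headroom that gave the mix its dynamics.

```python
def crest_factor(samples):
    """Peak-to-RMS ratio: a rough proxy for how dynamic a signal is."""
    peak = max(abs(s) for s in samples)
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return peak / rms

def slam(samples, ceiling=0.3):
    """Hard-clip at a low ceiling, then normalise back up to full scale."""
    clipped = [max(-ceiling, min(ceiling, s)) for s in samples]
    peak = max(abs(s) for s in clipped)
    return [s / peak for s in clipped]
```

Run a “song” with a quiet verse and a loud chorus through `slam` and its crest factor drops: the verse now sits almost as loud as the chorus, which is exactly the squashed sound the loudness wars produced.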

“Death Magnetic” (2008) by Metallica is often regarded as the “winner” of the loudness wars – the loudest, worst-sounding album. Its mastering relied heavily on compression and limiting, resulting in an excessively loud, squashed sound. This aggressive approach led to significant clipping and distortion, particularly in the louder sections of the music. Fans and critics alike criticised the album’s production quality, expressing disappointment with the lack of clarity and dynamics.

Learning about this phenomenon is useful for understanding the importance of dynamics in my productions. I would, indeed, rather try to preserve the quality of my song instead of pushing the gain up. 

References:

Bunning, James. “The Loudness Wars – USC Viterbi School of Engineering.” USC, 26 Oct. 2018, illumin.usc.edu/the-loudness-wars/. Accessed 16 Apr. 2024.

Clark, Christopher. “The Loudness Wars: Why Music Sounds Worse.” NPR.org, 31 Dec. 2009, www.npr.org/2009/12/31/122114058/the-loudness-wars-why-music-sounds-worse. Accessed 16 Apr. 2024.

Wykes, AJ. “What’s the Loudness War?” SoundGuys, 17 June 2021, www.soundguys.com/the-loudness-war-51513/. Accessed 16 Apr. 2024.

Blog 4: Research on the History and Development of Equalisation

The RCA 8B, introduced in 1931, is often credited as one of the earliest equalisation tools. It was used for broadcast and recording purposes – mostly radio. The RCA 8B is considered a passive equaliser: it used resistors, capacitors, and inductors to shape the frequency response of audio signals. Passive equalisers developed from then on typically didn’t let users control the levels of specific frequencies individually; they usually had fixed frequency points where attenuation or boosting occurred.

Example: Pultec EQP-1A (1950s)

It’s the graphic equalisers that changed everything. They gained significant popularity and widespread use in the 1970s and 1980s, providing a visual representation of the frequency adjustments being made. They were also more precise than passive EQs, offering independent control over each frequency band.

Before the introduction of graphic equalizers, audio processing tools tended to be expensive and primarily accessible to professional studios and engineers. However, with the advent of graphic equalizers, they became more affordable and were integrated into consumer-grade audio equipment such as home stereo systems and portable music players.

Example: API 550A

Around the 1970s, mixing engineers’ experimentation with EQ led to the beginning of parametric EQ. While graphic equalisers provide sliders or faders to adjust preset frequency bands like bass, midrange, and treble, a parametric EQ lets you choose specific centre frequencies, adjust how widely a change affects neighbouring frequencies (the Q, or bandwidth), and control how much you boost or cut those frequencies, allowing far more detailed and tailored adjustments to the sound.
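
Those three parametric controls – centre frequency, Q, and gain – map directly onto the widely used peaking-filter formulas from Robert Bristow-Johnson’s “Audio EQ Cookbook”. The sketch below computes the biquad coefficients for one band and evaluates the resulting frequency response; the sample rate default is my own choice.

```python
import cmath
import math

def peaking_eq(f0, q, gain_db, fs=48000):
    """Biquad coefficients (b, a) for one parametric EQ band (RBJ cookbook)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)   # smaller Q -> wider bell
    cos_w0 = math.cos(w0)
    a0 = 1 + alpha / a_lin           # normalise so a[0] == 1
    b = [(1 + alpha * a_lin) / a0, -2 * cos_w0 / a0, (1 - alpha * a_lin) / a0]
    a = [1.0, -2 * cos_w0 / a0, (1 - alpha / a_lin) / a0]
    return b, a

def magnitude_db(b, a, f, fs=48000):
    """Filter magnitude response (dB) at frequency f, evaluated on the unit circle."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))
```

A +6 dB boost at 1 kHz really does measure +6 dB at the centre and close to flat two octaves away, which is what “choosing a specific frequency” means in practice.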

Example: Avalon VT-747SP (combines a parametric EQ with a compressor)

As DAWs developed, digital equalisers were introduced as part of the software or as external plug-ins.

Example: FabFilter Pro-Q

EQ is used to balance, shape, and enhance the tonal characteristics of audio, as well as to change it creatively to suit the genre. Knowing the history and types of equalisation, one can opt for an earlier type of EQ for creative constraints or aesthetics, or simply know one’s way around analogue technology to get the best out of it.

References:

Flanzbaum, Tim. “What Is the Difference between Passive and Active EQ’s.” Www.youtube.com, 14 Sept. 2020, www.youtube.com/watch?v=dK41oBJrBMk. Accessed 1 Apr. 2024.

Kirchner, André. “AAX/AU/VST Audio Plugins from Black Rooster Audio.” Black Rooster Audio, 2 May 2023, blackroosteraudio.com/en/blogreader/a-brief-history-of-equalization. Accessed 1 Apr. 2024.

Massenburg, George. “A Short History of Graphic and Parametric Equalization.” Intelligent Sound Engineering, 22 Feb. 2016, intelligentsoundengineering.wordpress.com/2016/02/22/a-short-history-of-graphic-and-parametric-equalization/. Accessed 1 Apr. 2024.

Blog 3: History of stereophonic reproduction

One could argue that the first stereo playback system was presented at the Paris Expo of 1881. The Théâtrophone was a system that let listeners hear live opera and theatre performances over two telephone lines, creating a sense of presence.

The Théâtrophone was capable of delivering stereo-like sound, although it didn’t technically use modern stereo technology. It used separate telephone lines and receivers for each ear to create a stereo-like effect, while binaural stereo, developed in the 1930s by the English engineer Alan Dower Blumlein, involves playing two different channels from two microphones picking up sound from different perspectives.

The recording technique that involves placing two microphones crossed at a 90-degree angle to each other is known as the Blumlein Pair, developed by the same engineer. While “stereo” refers to the playback aspect, the “Blumlein Pair” describes the recording technique, which creates a sense of depth and precise stereo imaging (something the French didn’t quite achieve earlier).

Stereo broadcasting on radio began in the late 1950s and early 1960s, helping audiences get used to stereo as a standard. It was around this time that a sonic “arms race” occurred in the audio industry: manufacturers competed to develop and introduce new technologies such as stereo sound, high-fidelity amplifiers, speakers, turntables, and other audio equipment. It took special effort for the companies to convince owners of mono playback systems to switch to stereo; the marketing involved demonstrations, visual advertising, and generally presenting stereo systems as a luxurious, advanced, almost futuristic technology.

Nowadays, stereo is a standard format for audio playback. With immersive audio, virtual reality and a strong community of audiophiles, engineers keep developing stereo and testing its limits. However, the principle invented by Blumlein is still used as a baseline for the stereo we have today.

References:

Borgerson, Janet, and Jonathan Schroeder. “How Stereo Was First Sold to a Skeptical Public.” The Conversation, 12 Dec. 2018, theconversation.com/how-stereo-was-first-sold-to-a-skeptical-public-103668. Accessed 30 Mar. 2024.

EMI Archive Trust. “Alan Blumlein and the Invention of Stereo | EMI Archive Trust.” Emiarchivetrust.org, 2019, www.emiarchivetrust.org/alan-blumlein-and-the-invention-of-stereo/. Accessed 30 Mar. 2024.

“Victorian Era Theatrophone Live Streamed Opera Performances and News.” Racing Nellie Bly, 9 July 2017, racingnelliebly.com/weirdscience/victorian-theatrophone-live-streamed-opera/. Accessed 30 Mar. 2024.

Blog 2: Analysis and Visual representation of a mix

After reading a section of “Art of Mixing” by David Gibson, I learned, apart from the fact that some mixing engineers work barefoot, a way to visualise a track by placing its elements in a “soundbox”. The space works this way: depth represents loudness (the further away the element, the quieter it is), height represents frequency range (the higher the element, the higher the frequency), and width represents the pan position in the stereo mix (left to right).
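
Gibson’s mapping can be written down as a small function. This is my own sketch of the idea, not something from the book: the normalisation ranges (20 Hz–20 kHz for height, a 60 dB window for depth) are illustrative choices.

```python
import math

def soundbox_position(pan, centre_hz, level_db):
    """Map a mix element to (width, height, depth), each normalised to 0..1."""
    width = (pan + 1) / 2                               # pan: -1 (left) .. +1 (right)
    log_span = math.log10(centre_hz / 20) / math.log10(20000 / 20)
    height = min(max(log_span, 0.0), 1.0)               # 20 Hz bottom .. 20 kHz top
    depth = min(max(-level_db / 60, 0.0), 1.0)          # 0 dBFS front .. -60 dB back
    return width, height, depth
```

A hard-left bass guitar at a healthy level lands at the left edge, low in the box, and near the front – which matches how the book has you draw it.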

I decided to create a visual representation of “Wanna Quit All the Time” by Faye Webster.

The book says the sound never travels behind the listener’s ears, but the effect was quite different for me, listening through headphones rather than speakers. Identifying the position of the instruments wasn’t too difficult, though, as some of them (e.g. the electric piano and electric guitars) were panned distinctly to one side or the other. This mix is quite “spare”, meaning it isn’t loaded with instruments, and the elements that are present are spread across left and right in stereo.

Creating a soundbox is a useful technique which I may implement in my future productions. It helps to identify whether a mix is balanced or not; full or spare; bright or boxy, and so on. Visualising the mix makes it clearer which frequency areas are too busy and need panning, or what the track lacks frequency-wise. It also makes clear which elements are the most important and prominent.

References:

Gibson, David. Art of Mixing: A Visual Guide to Recording, Engineering, and Production. New York, NY, Routledge, 1997, pp. 37–62.

Blog 1: Mixing with a Reference

A reference track is a professionally mixed track to compare your current mix with. It doesn’t have to be a particularly popular song, just a mix with sonic characteristics similar to those the mixing engineer is trying to achieve in their own mix.

It is used to make post-production easier. As in drawing, even though you can create something great relying on gut feeling alone, it is much easier and more reliable to have a reference to compare your work with. Since mixing music is all about achieving a sonic aesthetic that serves the song, the reference track for each genre will be different.

Such a track is especially useful when mixing in an unfamiliar environment: comparing against something an engineer knows is well mixed provides a dependable benchmark, ensuring the mix maintains a consistent standard even when the studio surroundings are unpredictable.

In practice, when using a reference track, the mixing engineer actively switches between their mix and the reference, evaluating aspects like balance, stereo imaging, dynamics, and more. Nowadays, there are plugins that streamline this comparison by analysing frequencies, stereo spread, and various other parameters, making the referencing process more efficient – although some would argue it is better to mix by ear rather than rely on a visual representation of the sound.

The same tools are often used for technical and creative purposes. For instance, EQ can be utilized technically to address frequency imbalances, eliminate unwanted frequencies, or enhance clarity. Simultaneously, it serves creatively by shaping the tonal character of individual instruments or adding colour to the overall mix, as every frequency has its own characteristics (that is why the sound can be described as “bright”, “dark”, “boxy” or “crisp” and so on).

Some of the basic tools are gain levels and panning. So, here are examples of the use of pan in the same genre:

For clarity of elements: One Dance by Drake or N95 by Kendrick Lamar

For clarity and creative use of panning: Baby I’m Bleeding by JPEGMAFIA or Mood Swings by Little Simz
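
Behind any pan knob like the ones used on these tracks sits a pan law. A common choice is the constant-power (sine/cosine) law, sketched below as an illustration – individual DAWs use their own variants and centre attenuations.

```python
import math

def constant_power_pan(pan):
    """Left/right gains for pan in -1 (hard left) .. +1 (hard right)."""
    angle = (pan + 1) * math.pi / 4   # sweep from 0 to pi/2 across the field
    return math.cos(angle), math.sin(angle)
```

Because cos² + sin² = 1, the summed power stays constant wherever the source sits, so a sound keeps the same perceived loudness as it moves across the stereo field instead of dipping at the centre.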

References:

“Best Reference Track for All Genres.” Mastering the Mix, 18 Dec. 2020, www.masteringthemix.com/blogs/learn/best-reference-track-for-all-genres. Accessed 2 Mar. 2024.

Gibson, David. Art of Mixing : A Visual Guide to Recording, Engineering, and Production. New York, Routledge, 1997.

Messitte, Nick. “13 Tips for Using References While Mixing.” iZotope, 27 Apr. 2022, www.izotope.com/en/learn/13-tips-for-using-references-while-mixing.html. Accessed 3 Mar. 2024.

Miraglia, Dusti. “Vocal EQ Chart: The Ultimate Vocal EQ Cheat Sheet (2023).” Unison, 17 May 2023, unison.audio/eq-chart/. Accessed 2 Mar. 2024.

Texidor, Lewis. “Reference Mixes – Why Use Them and Why They Are Vital.” Audient, audient.com/tutorial/reference-mixes/. Accessed 2 Mar. 2024.