Posted on: July 16, 2021
By Evren Göknar, an Award-winning Mastering Engineer who has been employed at Capitol Studios in Hollywood for 25 years
What does it take to become a successful and effective Mastering Engineer? Most people attracted to the music industry assume that creating high-quality audio masters for streaming, television, movies, and so on involves learning how to manipulate the technical gear and software used to create a commercial master.
They would only be half right, maybe less than half right. Yes, some of the key competencies of a mastering engineer revolve around learning how to manipulate equalizers, compressors, limiters, and other audio processing equipment. But in my book, Major Label Mastering: Professional Mastering Process, I identify 10 competencies, and only 5 of them are highly technical in nature. The real secret to success begins with your ears, and this article explains what I mean.
If you’re already working as a Mastering Engineer, you might also want to take a look at this free guide on mastering techniques in music. It contains insights from four mastering engineers (including me) on career paths, choosing the right tools for different projects, and more specialist techniques for mastering dance and heavy metal music.
Becoming an Audiophile
The word audiophile, coined by High Fidelity magazine in the early 1950s when modern sound engineering was still in its early stages, simply means someone who loves to hear (Greek phile, to love; Latin audire, to hear) — and that’s what becoming a mastering engineer is all about.
When you’re in a studio manipulating audio processing equipment, sensitivity to the music remains paramount. What is the mood? How is the song recorded? How will your chosen adjustments affect the listening experience? You also need to be aware of song structure and macro-dynamics related to song sections, cadences, passages, themes, and musical payoffs. This is all part of the musicality that is the foundation for becoming an effective Mastering Engineer.
Beyond musical sensitivity, a successful mastering engineer needs to understand and have a feel for the various musical genres. Rock, pop, country, singer/songwriter, hip-hop/rap, techno, classical, jazz, and their various subgenres all have specific production characteristics. For example, pop is vocal forward, rock is often guitar forward, rap is kick/bass forward, and jazz preserves dynamic range. Final masters must impart an appropriate listening experience for their specific genre of music.
To an observer watching a Mastering Engineer work, it might seem like the whole process revolves around looking at meters, twisting knobs, and adjusting curves on a computer screen. The truth is that much more is occurring during the mastering session.
The primary concern of a seasoned mastering engineer is deliberate listening. This means constantly asking the question, “Is the listening experience pleasing, effective, and genre-appropriate?” This question — and the answer(s) it conjures in the ears and mind of the mastering engineer — remains mission critical to success in mastering a recording. Deliberate listening is the foundation of Subjective Assessment, a crucial component of the mastering process. As many mastering engineers interviewed in Russ Hepworth-Sawyer and Jay Hodgson’s Audio Mastering: The Artists will attest, confidence that a master is “right” is as important as any technical skill on the path to success, and critical listening is the first step on that path.
The act of listening to recorded music is always influenced by the way that piece of music was captured in the first place and subsequently transmitted. At the outset of a mastering session, step 1 is to verify and measure these technical metrics, a process I refer to as Objective Assessment. These include:
• Sample rate. In today’s digital world, sound waves are recorded as a series of audio “snapshots,” or samples. The sampling rate in the music industry is typically 44.1 kilohertz (kHz), or 44,100 snapshots per second, although it can extend much higher. (For comparison, telephone sampling rates are typically 8 kHz.) The higher the sampling rate, the higher the audio frequencies that can be captured (up to half the sampling rate, a limit known as the Nyquist frequency).
• Bit depth. This refers to the amount of digital information captured in each snapshot, expressed as a number of bits. The greater the number of bits, the greater the dynamic range (each additional bit adds roughly 6 dB) and the higher the fidelity of the digital audio. The practical upper limit for delivered audio today is 24 bits.
• Data compression. Digital music files may be data-compressed to reduce their size for ease of storage, streaming, and sharing. Files can be lossless, meaning the original audio can be reconstructed exactly (a .flac file, for instance, is compressed but discards nothing), or lossy, meaning some data is permanently discarded: data judged, at least by some listeners, to be inaudible or unimportant. Common lossless formats are .wav, .aiff, and .flac; common lossy formats include .mp3, .mp4, and .aac.
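The Objective Assessment metrics above can be checked programmatically. Here is a minimal sketch using Python’s standard-library `wave` module; the function name and file path are illustrative, not part of any established mastering toolchain, and the ~6 dB-per-bit figure is the usual rule of thumb for theoretical dynamic range:

```python
# Minimal sketch: reading a WAV file's Objective Assessment metrics with
# Python's standard-library `wave` module.
import wave

def objective_assessment(path):
    """Return (sample_rate_hz, bit_depth, channels, duration_s, dyn_range_db)."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()       # samples per second, e.g. 44100 or 48000
        bits = wf.getsampwidth() * 8   # bytes per sample -> bit depth
        chans = wf.getnchannels()      # 1 = mono, 2 = stereo
        secs = wf.getnframes() / rate  # duration in seconds
    # Rule of thumb: each bit of depth adds ~6.02 dB of theoretical dynamic range
    dyn_range_db = round(bits * 6.02, 1)
    return rate, bits, chans, secs, dyn_range_db
```

Run against a CD-quality file, this would report 44,100 Hz, 16 bits, and roughly 96 dB of theoretical dynamic range; a 24-bit master reports roughly 144 dB.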
For success as a mastering engineer, deliberate listening must evolve over time into critical listening. Understanding viscerally what constitutes a sublime listening experience is a natural response to music and sound. Articulating the elements of that experience — and conversely, what may be lacking from it — is critical listening.
The first requirement for critical listening is a high-quality listening environment, which may be a listening room, mastering studio, control room, or even headphones. Playback component chains can be complicated, and choosing the right components is an important task. There are far too many choices to review here, but the advice of Robert Harley, long-time editor of The Absolute Sound magazine, is excellent guidance: “Each component in a playback (monitoring) system is like a pane of glass, and the more transparent and uncolored, the better.”
Once a critical listening environment is set up, music will then be presented as a soundstage between two sound sources, e.g., speakers. This soundstage is commonly referred to as the image. The individual musical elements should be defined, discernible, and clearly placed within the three-dimensional space of this image. Spatial dimension and depth are desirable in a great audio recording, and a good Mastering Engineer will preserve or enhance this aspect of the recording.
Placement within the image has several components. The first is simply left to right. Each element should have its own clear position. (In many cases, the genre will dictate, or at least inform, this placement.) A second element is foreground — typically a vocalist if there is one — versus background. These effects are determined at least partly by volume levels. In the early days of recording, this effect was achieved by literally placing background instruments farther away from the microphone. Now it can be fully controlled by the Mastering Engineer.
Another major component of the image is frequency hierarchy. The frequencies must be well balanced, and they can be visualized vertically: the lowest frequencies, such as a kick drum, sit near the bottom; bass, guitars, and keyboards occupy the middle; vocals rise out of the middle; and cymbals or percussion sit at the very top, giving height to the image.
In addition to getting a good sense of how images work, it’s important to practice hearing the fidelity difference between lossy and lossless files, as well as the differences between resolutions of lossless files: different bit depths (mostly 16-bit versus 24-bit) and different sampling frequencies, particularly 44.1 kHz (the music industry standard) and 48 kHz (film and video). Differences between the highest sampling frequencies, however, may not be perceptible to the human ear.
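The point that very high sample rates may yield no audible difference follows from simple arithmetic: a sample rate of f Hz can capture audio frequencies only up to f/2 (the Nyquist limit), while human hearing tops out around 20 kHz. A quick sketch, with the 20 kHz figure as an approximation that varies by listener:

```python
# Nyquist arithmetic: a sample rate of f Hz captures frequencies up to f/2.
# Human hearing tops out around 20 kHz (approximate; it varies by listener and age).
HEARING_LIMIT_HZ = 20_000

def nyquist_hz(sample_rate_hz):
    """Highest audio frequency a given sample rate can represent."""
    return sample_rate_hz / 2

for rate in (44_100, 48_000, 96_000, 192_000):
    print(f"{rate:>7} Hz sampling -> captures up to {nyquist_hz(rate):>8.0f} Hz")
# 44.1 kHz already reaches 22,050 Hz, beyond the ~20 kHz hearing limit,
# which is why still-higher rates may not produce an audible difference.
```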