Mastering for Spotify, YouTube, Tidal, Amazon Music, Apple Music and other Streaming Services

Estimated reading time: 14 minutes



Do audio streaming platforms also require a special master?

Introduction

Streaming platforms (Spotify, Apple, Tidal, Amazon, YouTube, Deezer, etc.) are hot topics in the audio community, especially since these online services publish concrete guidelines for the ideal loudness of tracks. To what extent should you follow these guidelines when mastering, and what do you have to consider when delivering to audio streaming services? To find the answer, we have to take a little trip back in time.

Do you remember the good old cassette recorder? In the 80s, people used it to make their own mixtapes: songs by different artists gathered on a tape, which we pushed into the tape deck of our car, Cherry Coke in the other hand, in order to show up with a suitable soundtrack at the next ice cream parlor in the city center. The mixtapes offered a consistently pleasant listening experience, at least as far as the volume of the individual tracks was concerned: when we created them, we adjusted the recording level by hand, so that records of different loudness were more or less consciously normalized.

Back to the Streaming Future. Time leap: Year 2021.

Music fans like us still enjoy mixtapes, except that today we call them playlists and they are part of various streaming services such as Apple Music, Amazon Music, Spotify, YouTube, Deezer or Tidal. In their early years, these streaming services quickly discovered that without a regulating hand on the volume fader, their playlists required constant readjustment by the users due to the varying loudness of individual tracks.

So they looked for a digital counterpart to the analog record-level knob and found it in an automated normalization algorithm that processes every uploaded song according to predefined guidelines. Spotify, for example, specifies -14 dB LUFS as its ideal loudness value. This means that if our song is louder than -14 dB LUFS, the streaming algorithm automatically reduces its volume so that playlists have a more consistent average loudness. Sounds like a good idea at first glance, right?
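To make the principle tangible, here is a minimal Python sketch of how such playback normalization could work. The -14 LUFS target matches Spotify's stated default; the function name and the example values are purely illustrative:

```python
# Minimal sketch of playback normalization as streaming services describe it:
# the measured integrated loudness of a track is compared to the platform
# target and the difference is applied as a static playback gain.

def normalization_gain_db(track_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a platform would apply to hit its loudness target."""
    return target_lufs - track_lufs

# A master measuring -9 LUFS integrated gets turned down by 5 dB:
print(normalization_gain_db(-9.0))   # -5.0 -> played back 5 dB quieter
# A quiet master at -16 LUFS would need +2 dB (not every platform turns tracks up):
print(normalization_gain_db(-16.0))  # +2.0
```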

Why LUFS?

The problem with differing volume levels was not limited to music; it was also widespread in broadcasting. The difference in volume between a television movie and the commercial breaks within it sometimes took on such bizarre proportions that the European Broadcasting Union felt forced to issue a regulation on loudness. This was the birth of the EBU R128 specification, which was first implemented in Germany in 2012. With this regulation, a new unit of measurement was introduced: the LUFS (Loudness Units relative to Full Scale).

One LU (Loudness Unit) equals the relative value of 1 dB. At the same time, a new upper limit for digital audio was defined: according to the EBU specification, a digital peak level of -1 dB TP (True Peak) should not be exceeded. This is the reason why Spotify and co. specify a True Peak limit of -1 dB for music files.

Tip: I recommend keeping this limit, especially if we do not adhere to the loudness specification of -14 dB LUFS. At higher levels, the normalization algorithm will definitely intervene. Spotify points out that if we do not keep -1 dB TP as the limiter ceiling, sound artifacts may occur due to the normalization process.

This value is not carved in stone, as you will see later. Loudness units offer a special advantage to the mastering engineer: simply put, LUFS let us quantify how “loud” a song is and thereby compare different songs in terms of loudness. More on this later.

Mastering for Spotify, Youtube, Tidal, Amazon Music, Apple Music and other Streaming Services | T-Racks Stealth Limiter

How can we see if our mix is normalized by a streaming service?

The bad news is that the streaming services have quite different guidelines. If you want to follow them, you basically have to look up the specifications of each individual service. This can be quite a hassle, as there are more than fifty streaming and broadcasting platforms worldwide. As an example, here are the guidelines of some services with regard to ideal LUFS values:

-11 LUFS Spotify Loud

-14 LUFS Amazon Alexa, Spotify Normal, Tidal, YouTube

-15 LUFS Deezer

-16 LUFS Apple, AES Streaming Service Recommendations

-18 LUFS Sony Entertainment

-23 LUFS EU R128 Broadcast

-24 LUFS US TV ATSC A/85 Broadcast

-27 LUFS Netflix

The good news is that there are various ways to compare your mix with the specifications of the most important streaming services at a glance. How much will your specific track be turned down by the respective streaming service? You can check this on the following website: www.loudnesspenalty.com

Mastering for Spotify, Youtube, Tidal, Amazon Music, Apple Music and other Streaming Services | Loudness Penalty

Some DAWs, such as the latest version of Cubase Pro, also feature comprehensive LUFS metering. Alternatively, the industry offers various plug-ins that provide information about the LUFS loudness of a track. One suitable candidate is the YOULEAN Loudness Meter 2, which is also available in a free version: https://youlean.co/youlean-loudness-meter/.
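If you prefer to check loudness from a script rather than a plug-in, the free Python library pyloudnorm implements ITU-R BS.1770-style measurement. A minimal sketch, assuming the packages soundfile and pyloudnorm are installed and that “song.wav” stands in for your own file:

```python
# Minimal sketch: measure integrated loudness (LUFS) of a file offline.
# Requires: pip install soundfile pyloudnorm
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("song.wav")      # "song.wav" is a placeholder path
meter = pyln.Meter(rate)              # BS.1770 meter (K-weighting + gating)
integrated = meter.integrated_loudness(data)
print(f"Integrated loudness: {integrated:.1f} LUFS")
```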

Another LUFS metering alternative is the Waves WLM Plus Loudness Meter, which comes with a wide range of customized presets for the most important platforms.

Waves Loudness Meter

Metering

Using the Waves meter as an example, let's briefly go over the most important LUFS readings, because LUFS metering involves a lot more than a single dB number in front of the unit. When we talk about LUFS, it should be clear what exactly is meant. LUFS values are determined over a period of time, and depending on the length of that time span, this can lead to different results. The most important value is the LUFS Long Term display.

This is determined over the entire duration of a track and therefore represents an average value. To get an exact Long Term value, we have to play the song once from beginning to end. Other LUFS meters (e.g. in Cubase Pro) refer to the Long Term value as LUFS Integrated. LUFS Long Term or Integrated is the value referenced in the streaming platforms' specifications. For “Spotify Normal” this means that if a track has a loudness of -12 dB LUFS Integrated, the Spotify algorithm will lower it by two dB to -14 dB LUFS.

LUFS Short Term

The Waves WLM Plus plugin offers other LUFS readings for evaluation, such as LUFS Short Term. LUFS Short Term is determined over a period of three seconds when the plugin measures according to the EBU standard. This is an important point, because depending on the ballistics, the measurement windows differ in length and can therefore lead to different results. A special feature of the Waves WLM Plus plugin is the built-in True Peak limiter. Many streaming platforms insist on a true peak limit of -1dB (some even -2dB). If you use the WLM Plus meter as the last plugin in your mastering chain, the True Peak limit is guaranteed not to be exceeded when the limiter is activated.
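To see why the measurement window matters, here is a simplified sketch comparing a whole-file average with a 3-second window. For clarity it uses plain RMS instead of a real LUFS measurement (which would add K-weighting and gating on top), so the absolute numbers are not LUFS, but the effect of the window length is the same:

```python
# Simplified illustration of "integrated" vs. "short term" style measurement.
# Plain RMS is used here for clarity; real LUFS adds K-weighting and gating.
import numpy as np

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

rate = 44100
t = np.arange(rate * 12) / rate
signal = 0.1 * np.sin(2 * np.pi * 440 * t)   # quiet "verse" ...
signal[rate * 6:] *= 6.0                     # ... loud "chorus" in the second half

print("whole file (integrated-style):", round(rms_db(signal), 1), "dB")
window = rate * 3                            # 3-second short-term window
for start in range(0, len(signal) - window + 1, window):
    print("3 s window:", round(rms_db(signal[start:start + window]), 1), "dB")
```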

Is the “Loudness War” finally over thanks to LUFS?  

As we have already learned, all streaming platforms define maximum values. If our master exceeds these specifications, it is automatically made quieter. The supposedly logical conclusion: we no longer need loud masters. At least this is true for those who adhere to the specifications of the streaming platforms. But parts of the music industry have always been a place beyond all reason, where things like to run differently than logic dictates. The “LUFS dictate” is a fitting example.

The fact is: in practice, most professional mastering engineers care neither about LUFS nor about the specifications of the streaming services!

Weird, I know. However, the facts are clear, and the thesis can be proven with simple methods. Remember that YouTube, just like Spotify, specifies a loudness of -14dB LUFS and automatically plays louder tracks at a lower volume. So all professional mixes should take this into account, right? Conveniently, this can be checked without much effort. Open a recent music video on YouTube, right-click on the video and click on “Stats for nerds”. The entry “content loudness” indicates by how many dB the audio track is lowered by the YouTube algorithm. Now things get interesting. For the current AC/DC single “Shot in the Dark” this is 5.9dB. Billy Talent’s “I Beg To Differ” is even lowered by 8.6dB.

Amazing, isn’t it?  

Obviously, hardly anyone seems to adhere to the specifications of the streaming platforms. Why is that? 

There are several reasons. The loudness specifications differ from streaming platform to streaming platform. If you took these specifications seriously, you would have to create a separate master for each platform. This would result in a whole series of different-sounding tracks, for the following reason: mastering equipment (whether analog or digital) does not work linearly across the entire dynamic spectrum.

Example:

The sound of the mix/master changes if you have to squeeze 3dB more gain reduction out of the limiter for one platform than for another. If you then normalize all master files to an identical average value, the sound differences become audible due to the different dynamics processing. The differences are sometimes bigger and sometimes smaller, depending on the processing you have done.

Another reason for questioning the loudness specifications is the inconsistency of the streaming platforms. Take Spotify, for example. Did you know that Spotify’s normalization algorithm is not enabled when playing Spotify via the web player or a third-party app? From the Spotify FAQs:

Spotify for Artists FAQ
The Metal Mix

This means that if you deliver a metal mix at -14dB LUFS and it is played back via Spotify in a third-party app, the mix is simply too weak compared to other productions. And there are other imponderables in the streaming universe. Spotify allows its premium users to choose from three different normalization settings, whose targets also differ. For example, the platform recommends -11dB LUFS and a True Peak value of -2dB TP for the “Spotify Loud” setting, while “Spotify Normal” is set at -14dB LUFS and -1dB TP. Also from the Spotify FAQs:


For mastering engineers, this is a questionable state of affairs. Mastering for streaming platforms is like trying to hit a constantly changing target at varying distances with a precision rifle. Even more serious, however, is the following consideration: what happens if one or more streaming platforms raise, lower, or even eliminate their loudness thresholds in the future? There is no guarantee that the current specifications will still be valid tomorrow. Unlikely? Not at all! YouTube introduced its normalization algorithm in December 2015. Uploads prior to December 2015 may sound louder if they were mastered louder than -14dB LUFS. Even after 2015, YouTube’s default has not remained constant. From 2016 to 2019, the typical YouTube normalization was -13dB and did not refer to LUFS. Only since 2019 has YouTube been using -14dB LUFS by default.

Why loudness is not just a matter of numbers

If you look at the loudness statistics of some YouTube videos and listen to them very carefully at the same time, you might have made an unusual observation: some videos sound louder even though their loudness statistics indicate that they are nominally quieter than other videos. How can this be? There is a difference between measured loudness in LUFS and perceived loudness, and it is the latter that determines how loud we perceive a song to be, not the LUFS figure. But how do you create such a lasting loudness impression?

Many elements have to work together for us to perceive a song as loud (perceived loudness): stereo width, tonal balance, song arrangement, saturation, dynamics manipulation, just to name a few pieces of the puzzle. The song must also be well composed and performed, the recording must be top-notch and the mix professional. The icing on the cake is a first-class master. If all these things come together, the song is denser, more forward and, despite moderate mastering-limiter use, simply sounds louder than a mediocre song with a weaker mix and master, even if the LUFS Integrated figures suggest a different result. An essential aspect of the mastering process is professional dynamics management. Dynamics are an integral part of the arrangement and mix from the very beginning.

In mastering, we want to emphasize dynamics further while not destroying them, because one thing is always inherent in the mastering process: a limitation of dynamics. How well this manipulation of dynamics is done is what separates good mastering from bad mastering, and a good mix with a professional master always sounds fatter and louder than a bad mix with a master that is only trimmed for loudness.

Choose your tools wisely!

High-quality equalizers and compressors like the combination of the elysia xfilter and the elysia xpressor provide a perfect basis for a more assertive mix and a convincing master. Quality compression preserves the naturalness of the transients, which automatically makes the mix appear louder. Missing punch and pressure in your song? High-quality analog compressors deliver impressive results and benefit the sound of a track more than relying solely on digital peak limiting.

Losing audible details in the mixing and mastering stage? Bring them back to light with the elysia museq! The number of playback devices has grown exponentially in recent years, which doesn’t exactly make the art of mastering easier.

Besides the classic hi-fi system, laptops, smartphones, Bluetooth speakers and all kinds of headphones are fighting for the listener’s attention in everyday life. Good analog EQs and compressors can help to adjust the tonal focus for these devices as well. Analog processing also preserves the natural dynamics of a track much better than endless rows of plug-ins, which often turn out to slow down the workflow. But “analog” can offer even more for your mixing and mastering project: analog saturation is an additional way to increase the perceived loudness of a mix and to noticeably improve audibility, especially on small monitoring systems like a laptop or a Bluetooth speaker.

Saturation and Coloration

The elysia karacter provides a wide range of tonal coloration and saturation that you can use to make a mix sound denser and more assertive. Competitive mastering benefits sustainably from the use of selected analog hardware: the workflow is accelerated and you can make the necessary mix decisions quickly and accurately. For this reason, high-quality analog technology enjoys great popularity, especially in high-end mastering studios. karacter is available as a 1RU 19″ version, as the karacter 500 module, and in our new, super-handy qube series as the karacter qube.

Mastering Recommendations for 2021

As you can see, the considerations related to mastering for streaming platforms are anything but trivial. Some people’s heads may be spinning because of the numerous variables. In addition, there is still the question of how to master your tracks in 2021. 

The answer is obvious: create your master in a way that serves the song. Some styles of music (jazz, classical) require much more dynamics than others (heavy metal, hip-hop); the latter can certainly benefit from distortion, saturation, and clipping as stylistic elements. What sounds great is allowed. The supreme authority for a successful master is always the sound. If the song calls for a loud master, it is legitimate to use the appropriate tools for it. The limit of loudness maximization is reached when the sound quality suffers. Even in 2021, the master should sound better than the mix. The use of compression and limiting should always serve the result and not be driven by the LUFS specifications of various streaming services. Loudness is a conscious artistic decision and should not end up as an attempt to hit certain LUFS specifications.

And the specifications of the streaming services? 

At how many LUFS should I master?

There is only one valid reason to master a song to -14dB LUFS: the value of -14dB LUFS is just right if the song sounds better with it than at -13 or -15dB LUFS!

I hope you were able to take some valuable information from this blog post and that it will help you take your mixes and masters for digital streaming services to the next level.

I would be happy about a lively exchange. Feel free to share and leave a comment, and if you have any further questions, I’ll be happy to answer them.

Yours, Ruben Tilgner 

-18dBFS is the new 0dBu

Estimated reading time: 18 minutes


Gain staging and the integration of analog hardware in modern DAW systems


Introduction

-18dBFS is the new 0dBu:

In practice, even experienced engineers often have only a rough idea of what “correct” levels are. Like trying to explain the offside rule in soccer, a successful level balance is simple and complex at the same time, especially when the digital and analog worlds are supposed to work together on equal footing. This blog post offers concrete tips for confident headroom management and for integrating analog hardware into a digital production environment (DAW) in a meaningful way.

Digital vs. Analog Hardware

The good news is that you don’t have to choose one or the other. In modern music production, we need both worlds, and with a bit of know-how, the whole thing works surprisingly well. But the fact is: on the one hand, digital live consoles and recording systems are becoming more and more compact in their form factor; on the other hand, the number of inputs and outputs and the maximum number of tracks keep increasing. The massive number of input signals and tracks makes it all the more important to always find suitable level ratios.

Let’s start at the source and ask the simple question, “Why do you actually need a mic preamp?”

The answer is as simple as it is clear: we need a mic preamp to turn a microphone signal into a line signal. A mixer, audio interface, or DAW always operates at line level, not microphone level. This applies to all audio connection points, such as insert points or audio outputs. How far do we actually need to turn up the microphone preamp, and is there one “correct” level? There is no universal constant that claims sole validity, but there is a thoroughly sensible recommendation that has proven itself in practice: I recommend bringing all input signals up to line level with the microphone preamplifier. Line level is the sweet spot for audio systems.

But what exactly is line level now and where can it be read?

Now we’re at a point where it gets a little more complicated. The definition of line level is based on a reference level, and this differs depending on which standard is used. The reference level for professional audio equipment according to the German broadcast standard is +6dBu (1.55Vrms, -9dBFS), referenced to 0dBu at 0.775V (RMS). In the USA, the analog reference level of +4dBu, corresponding to 1.228V (RMS), is used. Also relevant in audio technology are the reference level of 0dBV, corresponding to exactly 1V (RMS), and the consumer level (USA) of -10dBV, corresponding to 0.3162V (RMS). Got it? We’ll focus on the +4dBu reference level in this blog post, simply because most professional audio equipment relies on this reference level for line level.
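These reference levels are easy to verify yourself, since dBu and dBV are just logarithmic ratios to different reference voltages. A small sketch of the underlying math, using the values quoted above:

```python
# dBu and dBV both describe an RMS voltage on a logarithmic scale,
# they just use different reference voltages.

def dbu_to_volts(dbu):   # 0 dBu = 0.775 V RMS
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):   # 0 dBV = 1.0 V RMS
    return 1.0 * 10 ** (dbv / 20)

print(round(dbu_to_volts(+4), 3))   # 1.228 V  (US pro line level)
print(round(dbu_to_volts(+6), 2))   # ~1.55 V  (German broadcast reference)
print(round(dbv_to_volts(-10), 4))  # 0.3162 V (consumer "home" level)
```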

dBu & dBV vs. dBFS

What is +4dBu and what does it mean?

Level ratios in audio systems are expressed in the logarithmic ratio decibel (dB). It is important to understand that there is a difference between digital and analog mixers in terms of “dB metering”. Anyone who has swapped from an analog to a digital mixer for the first time (or vice versa) has experienced this: suddenly the usual level settings don’t fit anymore. Why is that? The simple explanation: analog mixers almost always use 0dBu (0.775V) as a reference point, while their digital counterparts use the standard set by the European Broadcasting Union (EBU) for digital audio levels. According to the EBU, the old analog “0dBu” should now be equivalent to -18dBFS (full scale). Digital console and DAW users, therefore, take note: -18dBFS is our new 0dBu!

This sounds simple, but unfortunately, it’s not that easy, because dBu values can’t be unambiguously converted to dBFS. It varies from device to device which analog voltage leads to a certain digital level. Many professional studio devices are rated at a nominal output of +4dBu, while consumer devices tend to fall back on the dBV reference (-10dBV). As if that weren’t enough confusion, there are also massive differences in terms of headroom. With analog equipment, there is still plenty of headroom available when a VU meter sits around 0dB; often there is another 20dB available before analog soft clipping signals the end of the line. The digital domain is much more uncompromising at this point: levels beyond the 0dBFS mark produce hard clipping, which sounds unpleasant on the one hand and represents a fixed upper limit on the other. The level simply does not get any louder.

We keep in mind: The analog world works with dBu & dBV indications, while dBFS describes the level ratios in the digital domain. Accordingly, the meter displays on an analog mixing console are also different compared to a digital console or DAW.

Analog meter indicators are referenced to dBu. If the meter shows 0dB, this equals +4dBu at the mixer output and we can enjoy generous headroom. A digital meter is usually scaled over a range of -80 to 0dBFS, with 0dBFS representing the clipping limit. For comparison, let’s recall: 0dBu (analog) = -18dBFS (digital). This is true for many digital devices, such as Yamaha digital mixers, but not all. ProTools, for example, works with a reference level of 0dBu = -20dBFS. We often find this difference when comparing European and US equipment. The good news is that we can live very well with this difference in practice. Two dB is not what matters in the search for the perfect level of audio signals.
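Once you know how your converters are calibrated, translating between the analog and digital scales is a simple offset. A sketch, assuming the EBU-style alignment of 0dBu = -18dBFS (swap in -20 for a ProTools-style calibration):

```python
# Analog dBu <-> digital dBFS is just an offset once you know how the
# converter is calibrated. EBU-style alignment: 0 dBu lands at -18 dBFS.
ALIGNMENT_DBFS_AT_0DBU = -18.0   # use -20.0 for a ProTools-style calibration

def dbu_to_dbfs(dbu, alignment=ALIGNMENT_DBFS_AT_0DBU):
    return dbu + alignment

def dbfs_to_dbu(dbfs, alignment=ALIGNMENT_DBFS_AT_0DBU):
    return dbfs - alignment

print(dbu_to_dbfs(+4))    # -14.0 dBFS: +4 dBu line level on this converter
print(dbfs_to_dbu(0.0))   # +18.0 dBu:  the analog level that just hits full scale
```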

Floating Point

But why do we need to worry about level ratios in a DAW at all? Almost all modern DAWs work with floating-point arithmetic, which provides the user with practically unlimited headroom and dynamics (theoretically around 1500dB). The internal dynamic range is so great that clipping cannot occur inside the mix engine. Therefore, the common wisdom on this subject is: “You can do whatever you want with your levels in a floating-point DAW, you just must not overdrive the sum output.” Theoretically true, but practically problematic for two reasons. First, there are plug-ins (often emulations of classic studio hardware) that don’t like being fed with extremely high input levels.

This degrades the signal audibly. Very high levels have a second undesirable side effect: they make it virtually impossible to use analog audio hardware as an insert. Most common DAWs work with a 32-bit floating-point audio engine. Clipping can only occur on the way into the DAW (e.g. an overdriven mic preamp) or on the way out of the DAW (an overdriven DA converter on the sum). This happens faster than you think. Example: anyone who works with commercial loops knows the problem. Finished loops are often normalized, so the loudest parts quickly reach the 0dBFS mark on your peak meter. If we play several loops simultaneously and two of them hit 0dBFS at the same point, we already have clipping on the master bus. Excessively high levels in a DAW should therefore be avoided at all costs.
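The loop example is easy to reproduce. In the following sketch, two normalized signals stay clean inside the 32-bit float mixer, but their sum exceeds full scale on the master bus, which would clip at the DA converter or in a fixed-point export (the signals themselves are just stand-ins):

```python
# Two normalized "loops" summed: fine inside a floating-point mixer,
# but beyond full scale (|x| > 1.0) at the converter / fixed-point export.
import numpy as np

rate = 44100
t = np.arange(rate) / rate
loop_a = np.sin(2 * np.pi * 110 * t)          # normalized: peaks at 1.0 (0 dBFS)
loop_b = np.sin(2 * np.pi * 220 * t)          # normalized: peaks at 1.0 (0 dBFS)

master = loop_a + loop_b                      # float engine: no internal clipping
peak_dbfs = 20 * np.log10(np.max(np.abs(master)))
print(f"master peak: {peak_dbfs:+.1f} dBFS")  # > 0 dBFS -> clips on the way out
print("samples beyond full scale:", np.sum(np.abs(master) > 1.0))
```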

Noise Generator

We’ve talked about clipping and headroom so far, but what about the other side of the coin? How do analog and digital audio systems handle very low levels? In the analog world, the facts are clear: the lower our signal level, the closer our useful signal gets to the noise floor. That means our signal-to-noise ratio is not optimal. Low-level signals step into the ring with the noise floor, and that fight never ends without collateral damage to the sound quality. Therefore, in an analog environment, we must always emphasize solid levels and high-quality equipment with the best possible internal signal-to-noise ratio. This is the only way to guarantee that in critical applications (e.g. classical recordings, or music with very high dynamics) the analog recording is as noise-free as possible.

And digital?

Fader position as a part of Gain Staging

Another often overlooked detail on the way to a solid gain structure is the position of the faders. First of all, it doesn’t matter whether we’re working with an analog mixer, a digital mixer, or a DAW. Faders have a resolution, and this is not linear.

The resolution around the 0dB mark is much higher than in the lower part of the fader path. To mix as sensitively as possible, the fader position should be near the 0dB mark. If we create a new project in a DAW, the faders are at the 0dB position by default; this is how most DAWs handle it. Now we can finally turn up the mic preamps and set the appropriate recording level. We recommend leveling all signals in the digital domain to -18dBFS RMS / -9dBFS peak, in other words to the line level already invoked at the beginning, because that’s what digital mixers and DAWs are designed for. Since we have the channel faders close to the 0dB mark, the question now is: how do I lower signals that are too loud in the mix?

There are several ways to do this, and many of them are simply not recommended. For example, you could turn down the gain of the mic preamp. But then we’re no longer feeding line level to the DAW. With an analog mixer, this results in a poor signal-to-noise ratio. A digital mixer with the same approach has the problem that all sends (e.g. monitor mixes for the musicians, insert points) also leave the line-level sweet spot. Ok, let’s just pull down the channel fader! But then we leave the area of best resolution, where we can adjust levels most sensitively. In the studio this may “only” be uncomfortable, but at a large live event with a PA to match, it quickly becomes a real problem.

This is where working in the fader sweet spot is essential. Making the lead vocal precisely two dB louder via the fader is almost impossible if we start from a fader setting of, let’s say, -50dB. If we move the fader up just a few millimeters, we quickly reach -40dB, which is an enormous jump in volume. The solution to this problem: we prefer to use audio subgroups for rough volume balancing. If these are not available, we fall back on DCA or VCA groups. The input signals are assigned to the subgroups (or DCAs or VCAs) accordingly, for example one group for drums, one for cymbals, one for vocals and one each for guitars, keyboards and bass. With the help of the groups you can set a rough balance between the instruments and vocal signals and use the channel faders for small volume corrections.

Special tip: It makes sense to route effect returns to the corresponding groups instead of to the master: the drum reverb to the drum group, the vocal reverb to the vocal group. If you have to correct the group volume, the effect portion is automatically pulled along and the ratio of signal to effect always remains the same.

Gain Staging in the DAW – the hunt for line level


As a first step, we need to clear up a misunderstanding: “gain” and “volume” are not members of the same family. Adjusting gain is not the same as adjusting volume. In simple words, volume is the level after processing, while gain is the level before processing. Or even simpler: gain is input level, volume is output level!
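A small sketch makes the difference tangible: with a non-linear stage in the chain, changing the gain (before) changes the sound, while changing the volume (after) only changes the level. The soft-clip function here merely stands in for any level-dependent plug-in or analog box:

```python
# Gain is the level BEFORE processing, volume the level AFTER processing.
# With a non-linear stage in between, the two are not interchangeable.
import numpy as np

def soft_clip(x):
    return np.tanh(x)      # stands in for any level-dependent processor

t = np.arange(44100) / 44100
signal = 0.5 * np.sin(2 * np.pi * 100 * t)

louder_gain   = soft_clip(signal * 2.0)   # gain up front: drives the stage harder
louder_volume = soft_clip(signal) * 2.0   # volume after: same tone, just louder

print("gain version peak:  ", round(np.max(np.abs(louder_gain)), 3))    # saturated/flattened
print("volume version peak:", round(np.max(np.abs(louder_volume)), 3))  # simply scaled up
```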

The next important step for clean gain staging is to determine what kind of meter display my digital mixer or DAW is even working with. Where exactly is line level on my meter display?

Many digital consoles and DAWs have hybrid metering, like the metering in Studio One V5, which we’ll use as an example. The scaling goes from -72dB to +10dB, and from -80dB to +6dB on the sum output.

In terms of its scaling, Studio One’s metering sits between an analog dBu meter and a digital dBFS meter; this is similar in many DAWs. It is important to know whether the meter shows RMS (average level) or peak level. If we see only peak metering and level to line level (-18dBFS), the level ends up too low, especially for very dynamic source material with fast transients like a snare drum. The greater the dynamic range of a track, the higher the peak values and the lower the average value. Therefore, drum tracks can quickly light up the clip indicator of a peak meter but produce comparatively little deflection on an RMS meter.
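This difference between peak and RMS readings is easy to demonstrate with two test signals of identical peak level: a steady pad versus a short, snare-like burst. The signals and numbers are purely illustrative:

```python
# Same peak level, very different RMS: why a snare barely moves an RMS meter
# while lighting up a peak meter.
import numpy as np

def peak_dbfs(x):
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x):
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

rate = 44100
t = np.arange(rate) / rate
pad = 0.355 * np.sin(2 * np.pi * 220 * t)     # sustained pad, peaks at about -9 dBFS
burst = np.zeros(rate)
burst[:rate // 100] = 0.355 * np.random.uniform(-1, 1, rate // 100)  # 10 ms "snare" hit

for name, sig in [("pad", pad), ("snare burst", burst)]:
    print(f"{name:12s} peak {peak_dbfs(sig):6.1f} dBFS   RMS {rms_dbfs(sig):6.1f} dBFS")
```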

In Studio One, however, we get all the information we need: the blue Studio One meter represents peak metering, while the white line in the display always shows the RMS average level. Also important is where the metering is tapped (the tap point). For setting levels, the metering should show the pre-fader level ratios, especially if you have already inserted plug-ins or analog devices into the channel, as these can significantly influence the post-fader metering.

-18dBFS is the new 0dBu | Gains Staging and the integration of analog Hardware in DAW Systems

Keyword: Plugins

You need to drive digital emulations with a suitable level. There are still some fixed-point plug-ins and emulations of old hardware classics on the market that don’t like high input levels. It is sometimes difficult to see which metering the plugins themselves use and where line level sits on it. A screenshot illustrates the dilemma.

-18dBFS is the new 0dBu | Gain Staging and the integration of analog hardware in DAW Systems

The BSS DRP402 compressor clearly has a dBFS meter; its line-level reference sits at -20dBFS on its metering. The bx townhouse compressor in the screenshot is fed with the same input signal as the BSS DRP402 but shows completely different metering.

Here you may assume that, since it is an analog emulation, its meter display behaves more like a VU meter.

Fire Department Operation

It’s not uncommon to find yourself in the studio with recordings that just want to be mixed. Experienced sound engineers will agree with me: many recordings by less experienced musicians or junior technicians are simply leveled too high. So what can you do to bring them back to a reasonable level? Digitally, this is not a big problem, at least if the tracks are free of digital clipping. Turning the tracks down doesn’t change the sound, and we don’t have to worry about noise floor problems at the digital level either. In any DAW, you can reduce the waveform (amplitude) to the desired level.

-18dBFS is the new 0dBu | Gain Staging and the integration of analog hardware in DAW Systems

Alternatively, every DAW offers a Trim plug-in that you can place in the first insert slot to lower the level there.

The same plugin can also be used in busses or in the master if the summed tracks prove to be too loud. We do not use the virtual faders of the DAW mixer for this task, because they are post-insert and, as we already know, only change the volume but not the gain of the track.

Analog gear in combination with a DAW

The combination of analog audio gear and a DAW has a special charm. The fast, haptic access and the distinctive sound of analog processors make up the appeal of a hybrid setup. You can use analog gear as a front end (mic preamps) or as insert effects (e.g. dynamics). If you want to connect an external preamp to your audio interface, you should use a line input to bypass the interface’s built-in mic preamp.

In insert mode, we have to accept an AD/DA conversion to get pure analog gear into the DAW. The quality of the AD/DA converters is therefore important. Using the full 24-bit range up to full scale corresponds to a theoretical dynamic range of 144dB, which overstrains even a high-end converter. Therefore, you should drive your analog gear in the insert at line level to give the converters enough headroom, especially if you plan to boost the signal with the analog gear.

This simply requires headroom. If, on the other hand, you only make subtractive EQ settings, you can also work with higher send and return levels. Now we only need to adjust the level ratios for insert operation. Several things need our attention.
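The 144dB figure mentioned above comes straight from the bit depth, and the line-level recommendation follows from it. A quick sketch of the arithmetic:

```python
# Theoretical dynamic range of a fixed-point word: roughly 6.02 dB per bit.
import math

def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(24), 1))   # ~144.5 dB for 24 bit
print(round(dynamic_range_db(16), 1))   # ~96.3 dB for 16 bit (CD)

# Sending analog inserts at line level (-18 dBFS) leaves ~18 dB of headroom
# before the return converter clips, even if the hardware boosts the signal.
send_level_dbfs = -18.0
print("headroom before 0 dBFS:", abs(send_level_dbfs), "dB")
```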

It depends on the entire signal chain

The level ratios in a DAW are constant and always predictable. When integrating analog gear, however, we have to look at the entire signal flow and sometimes readjust it. We start with the send level from the DAW. Again, I recommend sending the signal at line level to an output of the audio interface.

The next step requires a small amount of detective work. In the technical specifications of the audio interface, we look up the reference level of the outputs and have to bring it in line with the input of the analog gear we want to loop into the DAW. If the interface has balanced XLR outputs, we connect them to a balanced XLR input of the analog insert unit. But what do we do with unbalanced devices that have a reference level of -10dBV? Many audio interfaces offer a switch for their line inputs and outputs from +4dBu to -10dBV, which you should use in this case. In the technical specifications of the audio interface, you can also find out which analog level corresponds to 0dBFS; in some cases, this can be switched as well.

On an RME Fireface 802, for example, you can switch between +19dBu, +13dBu and +2dBV. It is important to know that many elysia products can handle a maximum level of about +20dBu. This applies to the entire signal chain, from the interface output to the analog device and from its output back to the interface. Ideally, a line-level send signal makes its way back into the DAW at an identical return level. In addition, the analog unit itself deserves attention: make sure that neither its input nor its output is distorting, because these distortions would otherwise be passed on to the DAW unaltered.

elysia qube series

It also depends a bit on the type of analog gear how its insert levels behave. An ordinary EQ that moderately boosts or cuts frequencies is less critical than a transient shaper (elysia nvelope), which, depending on the setting, can generate peaks that RMS metering can hardly detect. In the worst case, this creates distortion that is audible but, without peak metering, not visible. Another classic operating mistake is too high a make-up gain setting on compressors.

In the worst case, both the output of the compressor itself and the return input of the sound card are overdriven. The levels at all four points of an insert (input and output of the analog device plus output and input of the interface) should be kept under close observation. But we are not alone: help for insert operation is provided by the DAW’s on-board tools, which we will look at in conclusion.

Use Insert-Plugins!

When integrating analog hardware, you should definitely use the insert plugins that almost every DAW provides. Reaper features the “ReaInsert” plugin, ProTools comes with “Insert” and Studio One provides the “Pipeline XT” plugin. The wiring for this application is quite simple.

We connect a line output of our audio interface to the input of our hardware. We connect the output of our hardware to a free line input of our interface. We select the input and output of our interface as a source in our insert plugin (see Pipeline XT screenshot) and have established the connection.

A classic “send & return” connection. Depending on the buffer size setting, the AD/DA conversion causes a larger or smaller propagation delay, which can be problematic, especially when we use signals in parallel. What does this mean? Let’s say we split our snare drum into two channels in the DAW. The first channel stays in the DAW and is only processed with a latency-free gate plugin; the second channel goes out of the DAW via Pipeline XT into an elysia mpressor and from there back into the DAW.

Due to the AD/DA conversion, the second snare track is delayed in time compared to the first. For both snare tracks to play together in time, we need latency compensation. You could do this manually by moving the first snare track, or you could simply click the “Auto” button in Pipeline XT for automatic latency compensation, which is much faster and more precise. The advantage is that the automatic delay compensation ensures that our insert signal is phase-coherent with the other tracks of the project. With this tool, you can also easily adjust the level of the external hardware: if distortion already occurs here, you can reduce the send level and increase the return level at the same time.

This is also the last tip in this blog post. The question of the correct level should now be settled, along with all the relevant side issues that have a significant impact on gain structure and a hybrid way of working. For all the theory and number mysticism: it does not come down to dB-exact adjustment. It is quite sufficient to stick roughly to the recommendations. This guarantees a reasonable level that will make your mixing work much easier and faster. Happy mixing!

Here’s a great Video from RME Audio about Matching Analog and Digital Levels.


Feel free to discuss, leave a comment below or share this blog post in your social media channels.

Yours, Ruben

How to deal with audio latency

Estimated reading time: 10 minutes

How to deal with latency in audio productions


Increased signal propagation time and annoying latency are uninvited permanent guests in every recording studio and at live events. This blog post shows you how to avoid audio latency problems and optimize your workflow.

As you surely know, the name elysia is a synonym for the finest analog audio hardware. As musicians, we also know and appreciate the advantages of modern digital audio technology. Mix scenes and DAW projects can be saved, total recall is a given, and monstrous copper multicores are replaced by slim network cables. A maximally flexible signal flow via network protocols such as DANTE and AVB allows the simple setup of complex systems. So digital audio makes everything better? That would be nice, but reality paints a more ambivalent picture. If you look and listen closely, the digital domain sometimes causes problems that do not even exist in the analog world. Want an example?

From the depths of the bits and bytes arose a merciless adversary that will sabotage your recordings or live gigs with plenty of phase and comb-filter problems. But with the right settings, you are not powerless against the annoying latencies in digital audio systems.

What is audio latency and why doesn’t it occur in analog setups?

Latency occurs with every digital conversion (AD or DA) and is noticeable in audio systems as signal propagation time. In the analog domain the situation is clear: the signal propagation time from the input to the output of an analog mixer is practically zero.

Latencies only existed in MIDI setups, where external synths or samplers were integrated via MIDI. In practice, this was not a problem, since the entire monitoring chain remained analog and thus no latency was audible. With digital mixing consoles or audio interfaces, on the other hand, there is always a delay between input and output.

Latency can have different causes, for example the different signal propagation times of different converter types. Depending on type and design, a converter needs more or less time to process the audio signal. For this reason, mixing consoles and recording interfaces always use identical converter types within the same modules (e.g. input channels), so that the modules have the same signal propagation time among each other. And as we will see, within a digital mixer or recording setup, latency is not a fixed quantity.

Signal propagation time and round trip latency

Latency in digital audio systems is specified either in samples or in milliseconds. A DAW with a buffer size of 512 samples generates a delay of at least 11.6 milliseconds (0.0116s) if we work with a sampling rate of 44.1kHz. The calculation is simple: we divide 512 samples by 44.1 (44,100 samples per second) and get 11.6 milliseconds (1ms = 1/1000s).
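Here is the same calculation in a few lines of Python, including the equivalent distance in air, a comparison we will come back to a little further down:

```python
# Buffer size -> latency for one conversion stage, plus the "distance in air"
# equivalent (sound travels roughly 343 m per second).
def buffer_latency_ms(buffer_samples, sample_rate):
    return buffer_samples / sample_rate * 1000.0

def latency_to_meters(latency_ms, speed_of_sound=343.0):
    return speed_of_sound * latency_ms / 1000.0

for rate in (44100, 96000):
    ms = buffer_latency_ms(512, rate)
    print(f"512 samples @ {rate} Hz: {ms:.1f} ms  (~{latency_to_meters(ms):.1f} m in air)")
```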

If we work with a higher sample rate, the latency decreases. If we run our DAW at 96kHz instead of 44.1kHz, the latency is roughly cut in half. The higher the sample rate, the lower the latency. Doesn’t it then make sense to always work with the highest possible sample rate to elegantly work around latency problems? Clear answer: no! Running audio systems at 96 or even 192kHz is a big challenge for the computer’s CPU. The higher sample rate quickly makes the CPU break out in a sweat, which is why a very potent CPU is imperative for a high channel count. This is one reason why many entry-level audio interfaces only work at sample rates of 44.1 or 48kHz.

Typically, mixer latency refers to the time it takes for a signal to travel from an analog input channel to the analog summing output. This process is also called “RTL”, which is the abbreviation for “Round Trip Latency”. The actual RTL of an audio interface depends on many factors: The type of interface (USB, Thunderbolt, AVB or DANTE), the performance of the recording computer, the operating system used, the settings of the sound card/audio interface and those of the recording project (sample rate, number of audio & midi tracks, plugin load) and the signal delays of the converters used. Therefore it is not easy to compare the real performance of different audio interfaces in terms of latency. 

It depends on the individual case!

A high total latency in a DAW does not necessarily have to be problematic; much depends on your workflow. Even with the buffer size of 512 samples from our initial example, we can record without any problems: the DAW plays the backing tracks to which we record overdubs, and latency does not play a role here. If you work in a studio, it only becomes critical if the DAW is also used for headphone mixes, or if you want to play VST instruments or VST guitar plug-ins and record them to the hard disk. In this case, too high a latency makes itself felt as a delayed headphone mix and an indirect playing feel.

If that is the case, you will have to adjust the latency of your DAW downwards. There is no rule of thumb as to when latency has a negative effect on the playing feel or the listening situation. Every musician reacts individually. Some can cope with an offset of ten milliseconds, while others already feel uncomfortable at 3 or 4 milliseconds.

The Trip

Sound travels 343 meters (1125ft) in one second, which corresponds to 34.3 centimeters (1.125ft) per millisecond. Said ten milliseconds therefore correspond to a distance of 3.43 meters (11.25ft). Do you still remember your last club gig? You’re standing at the edge of the stage rocking out with your guitar in hand, while the guitar amp is enthroned three to four meters (10 – 13ft) behind you. This corresponds to a signal delay of 10-12ms. So for most users, a buffer size between 64 and 128 samples should be low enough to play VST instruments or create headphone mixes directly in the DAW.

Unless you’re using plug-ins that cause high latency themselves! Most modern DAWs have automatic latency compensation that aligns all channels and busses to the plug-in with the highest latency. This has the advantage that all channels and busses remain phase-coherent and there are no audio artifacts (comb-filter effects). The disadvantage is the high overall latency.

Some plug-ins, such as convolution reverbs or linear-phase EQs, have significantly higher latencies. If these are in the monitoring path, the effect is immediately audible even with a small buffer size. Not all DAWs show plug-in latencies, and plug-in manufacturers tend to keep a low profile on this point.

First Aid

We have already learned about two methods of dealing directly with annoying latency. Another is direct hardware monitoring, which may be provided by the audio interface.

RME audio interfaces, for example, come with the TotalMix software, which allows low-latency monitoring with on-board tools, depending on the interface even with EQ, dynamics and reverb. Instead of monitoring via the DAW or the interface’s monitoring hardware, you can alternatively send the DAW project sum or stems into an analog mixer and monitor the recording mic together with the DAW signals in the analog domain with zero latency. If you are working exclusively in the DAW, it helps to decrease the buffer size and/or increase the sample rate. Both of these put a significant load on the computer’s CPU.

Depending on the size of the DAW project and the installed CPU, this can lead to bottlenecks. If no computer with more processing power is available, it can help to replace CPU-hungry plug-ins in the DAW project or to set them to bypass. Alternatively, you can render plug-ins to audio files or freeze tracks.

Buffersize Options
The buffer size essentially determines the latency of a DAW
Track Rendering in DAW
Almost every DAW offers a function to render intensive plug-ins to reduce the load on the CPU

Good old days

Do modern problems require modern solutions? Sometimes a look back can help.

It is not always advantageous to record everything flat and without processing: mix decisions about how a recorded track will sound in the end are simply postponed into the future. Why not commit to a sound, as in the analog days, and record it directly to the hard disk? If you’re afraid you might record a guitar sound that turns out to be a problem child later in the mixdown, you can record an additional clean DI track for later re-amping.

Keyboards and synthesizers can be played live and recorded as an audio track, which also circumvents the latency problem. Why not record signals with processing during tracking? This speeds up any production, and if analog products like ours are used, you don’t have to worry about latency.

If you are recording vocals, try compressing the signal moderately during the recording with a good compressor like the mpressor, or try our elysia skulpter. The elysia skulpter adds some nice and practical sound-shaping functions, such as a filter, saturation and a compressor, to the classic preamp features, so you have a complete channel strip. If tracks are already recorded with analog processing, this approach also saves some CPU power during mixing. Especially with many vocal overdub tracks, an unnecessarily large number of plug-ins would otherwise be required, which in turn forces a larger buffer size and consequently has a negative effect on latency.

What are your experiences with audio latencies in different environments? Do you have them under control? I’m looking forward to your comments.

Here are some FAQs:

What is audio latency and why doesn’t it occur in analog setups?

Latency occurs with every digital conversion (AD or DA) and is noticeable in audio systems as signal propagation time. In the analog domain the situation is clear: the signal propagation time from the input to the output of an analog mixer is practically zero.

Latencies only existed in MIDI setups, where external synths or samplers were integrated via MIDI. In practice, this was not a problem, since the entire monitoring chain remained analog and thus no latency was audible. With digital mixing consoles or audio interfaces, on the other hand, there is always a delay between input and output.
Latency can have different causes, for example the different signal propagation times of different converter types. Depending on type and design, a converter needs more or less time to process the audio signal. For this reason, mixing consoles and recording interfaces always use identical converter types within the same modules (e.g. input channels), so that the modules have the same signal propagation time among each other. And as we will see, within a digital mixer or recording setup, latency is not a fixed quantity.

What is Round Trip Latency?

Typically, mixer latency refers to the time it takes for a signal to travel from an analog input channel to the analog summing output. This process is also called “RTL”, which is short for “Round Trip Latency”.
The actual RTL of an audio interface depends on many factors: The type of interface (USB, Thunderbolt, AVB or DANTE), the performance of the recording computer, the operating system used, the settings of the sound card/audio interface and those of the recording project (sample rate, number of audio & midi tracks, plugin load) and the signal delays of the converters used. Therefore it is not easy to compare the real performance of different audio interfaces in terms of latency.