
How to deal with audio latency

Estimated reading time: 10 minutes

Increased signal propagation time and annoying latency are uninvited permanent guests in every recording studio and at live events. This blog post shows you how to avoid audio latency problems and optimize your workflow.

As you surely know, the name elysia is a synonym for the finest analog audio hardware. As musicians, we also know and appreciate the advantages of modern digital audio technology. Mix scenes and DAW projects can be saved, total recall is standard, and monstrous copper multicores are replaced by slim network cables. A maximally flexible signal flow via network protocols such as DANTE and AVB allows the simple setup of complex systems. Does digital audio make everything better? That would be nice, but reality paints a more ambivalent picture. If you look and listen closely, the digital domain sometimes causes problems that do not even exist in the analog world. Want an example?

From the depths of the bits & bytes arises a merciless adversary that can sabotage your recordings or live gigs with plenty of phase and comb filter problems. But with the right settings, you are not powerless against the annoying latencies in digital audio systems.

What is audio latency, and why doesn’t it occur in analog setups?

Latency occurs with every digital conversion (AD or DA) and is noticeable in audio systems as signal propagation time. In the analog domain the situation is clear: the signal propagation time from the input to the output of an analog mixer is practically zero.

In the analog days, latency only existed in MIDI setups, where external synths or samplers were integrated via MIDI. In practice this was not a problem, since the entire monitoring chain remained analog and thus no latency was audible. With digital mixing consoles or audio interfaces, on the other hand, there is always a delay between input and output.

Latency can have different causes, for example the different signal propagation times of different converter types. Depending on type and design, a converter needs more or less time to process the audio signal. For this reason, mixing consoles and recording interfaces always use identical converter types within the same kind of module (e.g. input channels), so that all modules share the same signal propagation time. And as we will see, latency within a digital mixer or recording setup is not a fixed quantity.

Signal propagation time and round trip latency

Latency in digital audio systems is specified either in samples or milliseconds. A DAW with a buffer size of 512 samples generates a delay of at least 11.6 milliseconds (0.0116 s) if we work with a sampling rate of 44.1 kHz. The calculation is simple: we divide 512 samples by 44,100 samples per second and get roughly 11.6 milliseconds (1 ms = 1/1000 s).

If we work with a higher sample rate, the latency decreases. If we run our DAW at 96 kHz instead of 44.1 kHz, the same 512-sample buffer takes only about 5.3 ms instead of 11.6 ms. The higher the sample rate, the lower the latency. Doesn’t it then make sense to always work with the highest possible sample rate to elegantly work around latency problems? Clear answer: no! Running audio systems at 96 or even 192 kHz is a big challenge for the computer CPU. The higher sample rate makes the CPU rapidly break out in a sweat, which is why a very potent CPU is imperative for a high channel count. This is one reason why many entry-level audio interfaces only work at sample rates of 44.1 or 48 kHz.
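To make the arithmetic from the two paragraphs above concrete, here is a minimal Python sketch (the function name is our own, not part of any DAW or driver API) that converts a buffer size and sample rate into the one-way latency that buffer adds:

```python
# One-way latency added by a single audio buffer, in milliseconds.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return buffer_samples / sample_rate_hz * 1000

# The 512-sample buffer from the example above, at common sample rates:
for rate in (44_100, 48_000, 96_000, 192_000):
    print(f"512 samples @ {rate / 1000:g} kHz -> {buffer_latency_ms(512, rate):.1f} ms")

# Prints roughly:
# 512 samples @ 44.1 kHz -> 11.6 ms
# 512 samples @ 48 kHz -> 10.7 ms
# 512 samples @ 96 kHz -> 5.3 ms
# 512 samples @ 192 kHz -> 2.7 ms
```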

Typically, mixer latency refers to the time it takes for a signal to travel from an analog input channel to the analog summing output. This is also called “RTL”, the abbreviation for “Round Trip Latency”. The actual RTL of an audio interface depends on many factors: the type of interface (USB, Thunderbolt, AVB or DANTE), the performance of the recording computer, the operating system used, the settings of the sound card/audio interface and those of the recording project (sample rate, number of audio & MIDI tracks, plug-in load), and the signal delays of the converters used. It is therefore not easy to compare the real latency performance of different audio interfaces.
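As a rough illustration of why RTL is hard to compare across interfaces, the sketch below models it as input buffer plus output buffer plus converter and driver overhead. The AD/DA and driver figures are placeholder assumptions, not specifications of any real interface; in practice you measure RTL with a loopback cable and a test utility.

```python
# Toy model of round trip latency (RTL). The converter and driver numbers
# are placeholders for illustration only -- real values vary per interface.
def estimated_rtl_ms(buffer_samples: int, sample_rate_hz: int,
                     adc_ms: float = 0.5, dac_ms: float = 0.5,
                     driver_overhead_ms: float = 1.0) -> float:
    buffer_ms = buffer_samples / sample_rate_hz * 1000
    # Input buffer + output buffer + AD/DA conversion + driver/OS overhead
    return 2 * buffer_ms + adc_ms + dac_ms + driver_overhead_ms

print(f"{estimated_rtl_ms(512, 44_100):.1f} ms")  # ~25.2 ms
print(f"{estimated_rtl_ms(64, 96_000):.1f} ms")   # ~3.3 ms
```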

It depends on the individual case!

A high total latency in a DAW is not necessarily a problem; a lot depends on your workflow. Even with the buffer size of 512 samples from our initial example, we can record without any problems: the DAW plays back the backing tracks to which we record overdubs, and latency plays no role here. If you work in a studio, it only becomes critical if the DAW is also used for feeding headphone mixes, or if you want to play VST instruments or VST guitar plug-ins and record them to the hard disk. In that case, too high a latency makes itself felt as a delayed headphone mix and an indirect playing feel.

If that is the case, you will have to adjust the latency of your DAW downwards. There is no rule of thumb as to when latency has a negative effect on the playing feel or the listening situation. Every musician reacts individually. Some can cope with an offset of ten milliseconds, while others already feel uncomfortable at 3 or 4 milliseconds.

The Trip

Sound travels 343 meters (1125 ft) in one second, which corresponds to 34.3 centimeters (about 1.1 ft) per millisecond. The ten milliseconds mentioned above therefore correspond to a distance of 3.43 meters (11.25 ft). Do you still remember your last club gig? You are standing at the edge of the stage rocking out with your guitar in your hand, while the guitar amp is enthroned three to four meters (10 – 13 ft) behind you. That corresponds to a signal delay of roughly 9 – 12 ms. So for most users, a buffer size between 64 and 128 samples should be low enough to play VST instruments or create headphone mixes directly in the DAW.
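If you want to translate a latency figure into “how far away does my amp sound”, the little helper below does the conversion using the 343 m/s figure from above (the constant and function names are ours, for illustration):

```python
# Convert a latency in milliseconds to the distance sound travels in air.
SPEED_OF_SOUND_M_PER_S = 343.0
METERS_PER_FOOT = 0.3048

def latency_to_distance_m(latency_ms: float) -> float:
    return SPEED_OF_SOUND_M_PER_S * latency_ms / 1000

for ms in (3, 10, 11.6):
    meters = latency_to_distance_m(ms)
    print(f"{ms} ms ~ {meters:.2f} m ({meters / METERS_PER_FOOT:.1f} ft)")

# Prints roughly:
# 3 ms ~ 1.03 m (3.4 ft)
# 10 ms ~ 3.43 m (11.3 ft)
# 11.6 ms ~ 3.98 m (13.1 ft)
```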

Unless you’re using plug-ins that cause high latency themselves! Most modern DAWs have automatic latency compensation that delays all channels and busses to match the plug-in with the longest processing time. The advantage is that all channels and busses stay phase-coherent, so there are no audio artifacts (comb filter effects). The disadvantage is the high overall latency.
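Conceptually, the compensation works like the little sketch below: the DAW pads every track so it lines up with the slowest plug-in chain. The track names and latency figures are invented for illustration.

```python
# Plug-in latencies reported per track, in samples (made-up example values).
track_latency_samples = {
    "vocals (linear-phase EQ)": 4096,
    "guitar (amp sim)": 64,
    "drums (no plug-ins)": 0,
}

# The DAW delays every track to match the slowest one, keeping the mix
# phase-coherent -- at the price of a higher overall latency.
max_latency = max(track_latency_samples.values())

for track, latency in track_latency_samples.items():
    pad = max_latency - latency  # extra delay inserted on this track
    print(f"{track}: +{pad} samples of compensation")
```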

Some plug-ins, such as convolution reverbs or linear-phase EQs, have significantly higher latencies. If these sit in the monitoring path, the effect is immediately audible even with a small buffer size. Not all DAWs show plug-in latencies, and plug-in manufacturers tend to keep a low profile on this point.

First Aid

We have already learned about two ways of dealing directly with annoying latency. Another is direct hardware monitoring, which many audio interfaces provide.

RME audio interfaces, for example, come with the TotalMix software, which allows low-latency monitoring with on-board tools – depending on the interface even with EQ, dynamics and reverb. Instead of monitoring via the DAW or the monitoring hardware of the interface, you can alternatively send the DAW project sum or stems into an analog mixer and monitor the recording mic together with the DAW signals in the analog domain with zero latency. If you are working exclusively in the DAW, it helps to increase the sample rate and/or decrease the buffer size. Both of these put a significant load on the computer CPU.

RME TotalMix low latency monitoring

Depending on the size of the DAW project and the installed CPU, this can lead to bottlenecks. If no other computer with more processing power is available, it can help to replace CPU-hungry plug-ins in the DAW project or to set them to bypass. Alternatively, you can render plug-ins to audio files or freeze tracks.

Good old days

Do modern problems require modern solutions? Sometimes a look back can help.

It is not always advantageous to record everything flat and without processing; mix decisions about how a recorded track will sound in the end are simply postponed into the future. Why not commit to a sound like in the analog days and record it directly to the hard disk? If you are afraid you might record a guitar sound that turns out to be a problem child later in the mixdown, you can record an additional clean DI track for later re-amping.

Keyboards and synthesizers can be played live and recorded as an audio track, which also circumvents the latency problem. Why not record signals with processing during tracking? This speeds up any production, and if analog products like ours are used, you don’t have to worry about latency.

If you are recording vocals, try to compress the signal moderately during the recording with a good compressor like the mpressor, or try our elysia skulpter. The skulpter adds some nice and practical sound shaping functions such as filter, saturation and compressor to the classic preamp – so you have a complete channel strip. If tracks are already recorded with analog processing, this approach also saves some CPU power during mixing. Especially with many vocal overdub tracks, an unnecessarily large number of plug-ins would otherwise be required, which in turn forces a larger buffer size and consequently has a negative effect on latency.
