Unlocking the Secrets behind the elysia xpressor|neo

Estimated reading time: 9 minutes


Core competence compression – the genesis of the elysia xpressor|neo 

Question: When is good, good enough? We asked ourselves this question more than once during the relaunch of the elysia xpressor|neo. Does it even make sense to send a perfectly functioning product like the xpressor to the in-house tuning department in search of possible performance gains? This blog article is dedicated to that question, offers a condensed outline of elysia's company history, and clarifies what actually makes a first-class audio compressor.

In the beginning, there was the alpha

With a top-down approach, we started in 2006 with the alpha compressor, setting our own standards in terms of quality and sound. One year later the mpressor was released, and shortly after that the museq. This trio of good sound is the foundation of the elysia product portfolio. To make the elysia sound accessible to users with smaller budgets, we gradually introduced the 500 series modules to the market starting in 2009.

Following our own history, the first 500 series module was also a compressor: the elysia xpressor 500, which despite its compact form factor carries essential parts of the alpha compressor's DNA – a discrete VCA compressor equipped with a soft-knee sidechain.

Feed-Forward

The topology is “feed-forward”, which is why negative ratios can also be implemented with the xpressor 500. Further features like a mix control, Auto Fast, and Warm complete the range of functions. The xpressor 500 is without a doubt an extensively equipped compressor – amazing, considering the 500 series form factor. We launched this compact compressor in mid-2010 and it turned out to be a real summer hit: the first run of a hundred units sold out in short order, proof that our customers understood the concept. The fact that a 19″ rack version was added to the portfolio a year later was therefore a logical step. The xpressor is what can rightly be called a success story, and this can be attributed to several reasons.

In any case, it is a fact that 15 years ago there were not really many good VCA compressors available on the market. The xpressor was and still is one of the anchor products that have left a lasting mark on our elysia philosophy and the character of our products. Therefore, the wish for a contemporary, revised xpressor version had been around for quite some time. But why are compressors so important in our product portfolio, and why should every musician, producer, or studio owner have a high-quality compressor in their arsenal?

Compressor anatomy

The actual task of an audio compressor is basically quite simple: level differences in the source signal are to be evened out according to taste. Sounds simple, but technically it is not trivial, since the characteristics of different signal sources can vary enormously. Even within one signal family (e.g. vocals), the bandwidth is enormous.

Low notes are usually quieter and are perceived psychoacoustically with less emphasis than mid- and high-pitched vocals. In addition, almost all natural sound sources contain volume modulations that cause more overtones (partials) to be generated at higher levels.

If you try to embed such natural, highly dynamic signals into a comparatively static mix, you can hardly avoid additional dynamics processing. The tool of choice is a compressor.

How would an xpressor, for example, perform this processing? The decisive control element in this case is a VCA (voltage-controlled amplifier), which changes the volume of the source signal under the control of a voltage. The quality of the VCA therefore has a decisive influence on the control process and ultimately on the sound quality.
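
To make this a little more tangible, here is a minimal sketch (in Python, purely illustrative and not a model of the actual xpressor circuit) of the static gain computation a feed-forward compressor performs: the sidechain derives a control value from the input level, and the VCA applies the resulting gain. Threshold, ratio, and make-up values are arbitrary example settings.

    import numpy as np

    def static_gain_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
        """Gain in dB that a simplified feed-forward compressor would ask its VCA to apply."""
        over_db = np.maximum(level_db - threshold_db, 0.0)   # how far the signal exceeds the threshold
        reduction_db = over_db * (1.0 - 1.0 / ratio)         # gain reduction (a negative ratio flips the slope)
        return makeup_db - reduction_db                      # control value sent to the VCA

    # A peak at -8 dB with threshold -20 dB and ratio 4:1 is reduced by 9 dB,
    # then lifted again by the make-up gain:
    print(static_gain_db(np.array([-30.0, -20.0, -8.0])))    # -> [ 6.  6. -3.]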

For a better view

For a simpler explanation, a comparison can be made with optics. If you want to look into the distance, you need binoculars. The better the mechanics and the quality of the glass, the sharper you can see the object in the eyepiece. In this context, an interesting analogy can also be made between analog and digital. A high-quality digital camera has an optical zoom to magnify distant objects and make them as sharp as possible. A smartphone, on the other hand, is often only equipped with digital zoom. The more you work with digital zoom, the coarser the image becomes.

This inevitably leads to a loss of quality. The situation is similar with a compressor. If you want to compensate for large differences in level, you have to make up for the compressed signal after processing with the help of the make-up gain, i.e. you have to re-amplify it. The quality of this catch-up process is essential for the sound quality. Do all the subtleties come to the front? Are the quiet parts in the signal adequately amplified without bringing unwanted artifacts to the fore?

Is your compressor able to present all the subtleties of the quiet parts of the signal in a striking way after processing large level jumps? Digital compressors in particular are often at a disadvantage here, because the make-up gain is usually applied in the digital domain and therefore has to contend with resolution problems similar to those of a digital camera zoom.

Evolution

With the classic xpressor, the redesign of the neo version already had a very solid basis. The tuning of attack, threshold, release, and ratio on the one hand, and the numerous additional features like Auto Fast, Log Release, Warm, and parallel compression on the other, basically cannot be improved any further. Therefore, every user of a classic xpressor will immediately feel comfortable with the workflow and the way the xpressor|neo works.

As developers, we nevertheless asked ourselves: where is there still sonic capital to be gained? What can be done to bring out even more subtleties in the sound? What other adjustments can be made? This is not a trivial task, especially since some sound phenomena simply defy evaluation by audio measurements.

For example, improved three-dimensionality in terms of stereo width can only be determined in complex listening tests. The fact is: there are definitely starting points and options for improving the sound. However, you have to be able to think outside the box and look in the right places. The classic xpressor has been available since 2010, and the world has moved on since then. Our expertise has evolved, and modern components and assemblies open up new (sonic) possibilities.

A new approach to research and development

Ruben Tilgner, CEO, founder and developer at elysia.

Ideas and improvements sometimes come from unusual disciplines. At elysia, we always take an interdisciplinary approach, which is why the redesign of the xpressor uses a process whose origins lie in circuit board design and, at first glance, doesn’t have much to do with audio processing.

Originally, it was an attempt to optimize the grounding concepts and voltage supply of critical components such as FPGAs, microcontrollers, and DSPs. We applied this method to the design of audio circuits on a trial basis and were surprised to find that, as a result, audio circuits can be supplied with current peaks and the corresponding voltages much faster. The result is a significant jump in signal quality. In parallel, we are always on the lookout for improved components, especially components that were not yet available in 2010, as these carry potential for a performance increase.

Due to the requirements of modern DSPs and switching power supplies in particular, some new electrolytic capacitors have been developed in the last few years. These special electrolytic capacitors have lower values for their parallel and series resistance. We have tested these components, among others, as coupling capacitors or as buffering for the power supply, with partly astonishing results.

Especially when you consider that these electrolytic capacitors were not originally designed for use in audio circuits. Sometimes you reach your goal via supposed detours. Thanks to these extra miles, the xpressor|neo has improved impulse behavior and more precise transient reproduction, which can also be proven by measurements.

Encore

The circuit board of the xpressor|neo.

But that’s not all. We have given the input circuit an additional filter in the form of a small coil, which provides special RF suppression that has a positive effect on the overall sound. Throughout the circuit design there is a reference voltage that drives the discrete circuits. This has been completely redesigned and is now more resilient to disturbances on the voltage supply. As an aside, this is a thorny ongoing issue with 500 series modules.

We cannot know which 500 series rack our customers use for their modules. Depending on the manufacturer and model, the quality of the voltage supply in these racks varies considerably. Often you search in vain for concrete information about noise margins and other relevant values.

The xpressor|neo, on the other hand, is particularly well positioned in terms of power supply. The VCAs are another important lever for increasing performance. In the xpressor|neo, we have reworked the control of the VCAs, resulting in improved stereo separation. In addition, the VCAs in the neo are driven fully symmetrically, which audibly benefits the stereo image. We were able to further improve the impulse response by revising the output amplifiers, which have been given new output filters.

More is more

In addition to the pure compressor circuit, the xpressor|neo brings improvements to the additional functions. For example, the sidechain is more finely structured to allow the suppression of artifacts at the level of the voltage supply. The 2-layer board has become a 4-layer board with a large ground plane, which is important for minimizing external interference.

It also provides a low-impedance connection between the power supply and the audio circuits, which can then be supplied with current more quickly. The sum of these improvements makes a clearly audible difference. The classic xpressor is an audiophile tool on a high level, but in direct comparison the xpressor|neo is sonically a step ahead. The transient reproduction in particular sets new standards, the stereo image is more three-dimensional, and the overall sound simply shows more spatial depth. The bass range is extended and the mids resolve more finely – the sound quality is outstanding for this price range, and we do not say that without pride.

Compression expertise in a new look

The xpressor|neo is not only a clear step up in terms of sound; the neo has also improved its appearance. We have beveled the edges of the housing, and the focus is now confidently placed on the center, where the company logo and the device name take their VIP place. This makes the renaissance of the xpressor|neo a rounded affair visually as well. In retrospect, we didn't realize how much work the new edition of the neo would require.

Especially since it was not clear whether and to what extent the basic version of the xpressor could be optimized further. However, we are more than satisfied with the results, and it is amazing what potential for improvement can be unlocked with new design methods and improved components. Even users who already have an original xpressor in their inventory should definitely try out the xpressor|neo. Especially in critical applications such as bus processing or mastering, you will be rewarded with a new level of sound quality that undoubtedly justifies an upgrade.

The xpressor|neo in the 19″ rack version.

Use your ears – How intuitive is music production today?

Estimated reading time: 12 minutes


Video killed the Radio Star


MTV was just the beginning. It seems that video has totally gained dominance over good old audio. We know from communication science that pictures are the most direct form of communication. Does this also mean that visual communication is generally superior? What influence does this have on the way we consume and produce music? Is seeing even more important than hearing? 

The corona pandemic has changed our world permanently. Since the beginning of the pandemic, video traffic in Germany has quadrupled. Instead of picking up the phone, people prefer to make Zoom calls. This has a clear impact on our communication structures. But as with any mass phenomenon, there is always a countermovement, and it manifests itself in the form of the good old record player.

For the first time, more vinyl records than CDs were sold in Germany in 2021. This decelerated type of music consumption is completely at odds with the prevailing zeitgeist. The desire to be able to hold your favorite music in your hand as a vinyl record is extremely popular. The fact that we process music from a record player exclusively with our ears is so archaic that it seems out of step with the times.

At the same time, enjoying music with the help of a record player corresponds completely to human nature in phylogenetic terms. In the following, we will clarify why this is so. We learn from the past for the future, and this is also true for producing music. The goal of a successful music production should be to inspire the audience. Music is not an end in itself. For this, we only need to look at the origins of music.

The origin of music

Germany is an old cultural nation. Very old, to be precise. This is shown by archaeological findings discovered during excavations in a cave in the Swabian Alb. Researchers found flutes made of bone and ivory there that are believed to be up to 50,000 years old. The flute makers even implemented finger holes that allowed the pitch to be manipulated. Experts estimate that humanity has been expressing itself rhythmically and melodically for a very long time. These non-linguistic acoustic events are believed to have served primarily social contexts. Music enabled emotional vocal expressions and established itself as a second communication system parallel to language. Much of the emotional level of music-making has survived to this day, such as the so-called “chill effect“.

This occurs when music gives you goosebumps; the goosebumps are the physical reaction to a chill-effect moment. The chill effect also stimulates the brain's reward system and releases happiness hormones. This happens when the music provides special moments for the listener, and these moments are often very subjective. But this is precisely where listeners derive their benefit from music consumption. Emotionality is the currency of music. For this reason, children should be given the opportunity to learn a musical instrument. Along with language, music is a profoundly human means of expression. Music teaches children to experience emotions and also to express their own feelings. It is an alternative means of expression in case language fails. It is the desire for emotionality that makes us reach for the vinyl record as our preferred music medium in special moments.

Then and now

The vinyl record is preserved music. The flutists of the Swabian Alb could only ever practice their music in the here and now. No recording, no playback – handmade music for the moment. That is what making music meant for the longest period of human history. With the digital revolution, music-making changed radically. In addition to traditional instruments, keyboards, drum machines, sampling, and sequencers came along in the 80s. The linearity of music-making was broken. Music no longer necessarily had to be played simultaneously. Rather, a single musician could gradually play a wide variety of instruments and was no longer dependent on fellow musicians. As a result, several new musical styles emerged side by side in a short time – a trademark of the 80s.

The Nineties

In the 90s, the triumph of digital recording and sampling technology continued. Real sounds were replaced by samplers and romplers, which in turn received competition from MIDI programming. With MIDI sequencers, screens and monitors increasingly entered the recording studios, and music was made visible for the first time. The arrangement could be heard and seen simultaneously. The 2000s were the era of the comprehensive visualization of music production. Drums, guitars, basses, and synths – everything is available as a VST instrument and has since been virtually at home inside our monitors.

At the same time, the DAW replaced the hard disk recorders that had been common until then. The waveform display in a DAW is the most comprehensive visual representation of music to date and allows precise intervention in the audio material. For many users, the DAW has become a universal production tool, providing theoretically infinite resources in terms of mix channels, effects, EQs, and dynamics tools. In recent years, the previously familiar personnel structure has also changed: not the band, but the producer creates the music. Almost everything takes place on the computer.

Due to this paradigm shift, new music genres have emerged, particularly in the electronic field (trap, dubstep, EDM). It is not uncommon for these productions to no longer use any audio hardware or real instruments at all.

Burnout from Wellness Holidays

A computer with multiple monitors is the most important production tool for many creatives. The advantages are obvious: cost-effective, an unlimited number of tracks, lossless recordings, complex arrangements can be handled, an unlimited number of VST instruments and plug-ins. Everything can be automated and saved; total recall comes as standard. If you get stuck at any point in the production, YouTube offers suitable tutorials on almost any audio topic. Painting by numbers, music from the automatic cooker: predefined ingredients produce a predictable result without much headache.

Stone Age

Our Swabian flutists would be surprised. Music only visual? No more hardware needed? No need to play by hand? The Neanderthal hidden in our brain stem subconsciously resists. The eye replaces the ear? Somehow something is going wrong. In fact, this kind of producing contradicts the natural prioritization of human senses. The Stone Age flute player could usually hear dangers before he could see them. Thanks to our ears, we can even locate with amazing accuracy the direction from which a saber-toothed tiger is approaching.

Evolution has seen to it that the sense of hearing is the only sense that cannot be completely suppressed. You can hold your nose or close your eyes, but even with fingers in your ears, a human being perceives the approaching mammoth. The dull vibrations trigger a sensation of fear. This was and is essential for survival. Sounds are always associated with emotions. According to Carl Gustav Jung (1875 – 1961), the human psyche carries collective memories in the subconscious; he called these archetypes.

Emotions

Sounds such as thunder, wind, or water generate immediate emotions in us. Conversely, emotions such as joy or sadness can best be expressed with music. In this context, hearing is eminently important. Hands and ears are the most important tools of the classical musician, and for this reason there are renowned musicians who are blind and still play at the highest level. Those who rely exclusively on the computer for music production are depriving themselves of one of their best tools. Music production with keyboard and mouse is rarely more than sober data processing with an artificial candy coating. DAW operation via mouse demands constant control by our eyes; there is no tactile feedback. In the long run, this is tiring and does not come without collateral damage. Intuition is usually the first casualty.

Seeing instead of listening?

The visualization of music is not problematic in itself. Quite the opposite, in fact, because sometimes it is extremely helpful. Capturing complex song structures or precisely editing audio files is a blessing with adequate visualization. As far as the core competence of music production is concerned, the balance looks much more ambivalent. Adjusting an EQ, a compressor, an effect, or even volume ratios exclusively with monitor and mouse is ergonomically questionable. It is like trying to saw through a wooden board with a wood planer – simply an unfortunate choice of tool.

Another aspect also has a direct impact on our mix.

The visual representation of the EQ curve in a DAW or digital mixer has a lasting effect on how we process signals with the EQ. Depending on the resolution of the display, we use the filters sometimes more and sometimes less drastically. If the visual representation creates a massive EQ hump on the screen, our brain inevitably questions this EQ decision. Experience has shown that with an analog EQ without a graphical representation, these doubts are much less pronounced.

The reason: the reference of an analog EQ is the ear, not the eye. If a guitar needs a wide boost at 1.2 kHz to assert itself in the mix, we are more likely to make drastic corrections with an analog EQ than with a DAW EQ whose visualization piles up a massive EQ hump on the monitor screen. Successful producers and mixers sometimes work with drastic EQ settings without giving it much thought. Inexperienced users who resort to an equalizer with a visual curve display too often use their eyes instead of their ears in their search for suitable settings. This often leads to wrong decisions.

Embrace the chaos 

When asked what is most lacking in current music productions, the answer is intuition, interaction, and improvisation. When interacting with other musicians, we are forced to make spontaneous decisions and sometimes make modifications to chords, progressions, tempos, and melodies. Improvisation leads to new ideas or even a song framework, the DNA of which can be traced back to the sense of hearing and touch.

Touch and Feel

The sense of touch in combination with a real instrument offers unfiltered access to the subconscious – or, loosely following Carl Gustav Jung, to the primal images, the archetypes. Keyboard and mouse do not have this direct connection. To be able to interact musically with VST instruments and plugins, we therefore need new user interfaces that serve our desire for a haptic and tactile experience. A lot has happened in this area in the past few years. The number of DAW and plug-in controllers is steadily increasing, forming a counter-movement to keyboard and mouse.

Faders, knobs and switches are fun

Feeling potentiometer positions allows operation without consciously looking, like a car radio. For this very reason, the Federal Motor Transport Authority considers the predominant operation of a modern electric car via touchscreen to be problematic. The fact is: with this operating concept, the driver's attention shifts from the road to the touchscreen more often than in conventional cars with hardware buttons and switches. The wrong tool for the job? The parallels are striking. A good drummer plays a song in a few takes, yet some producers prefer to program the drums, even if it takes significantly longer – especially if they want to bring something like feel and groove into rigidly quantized drum takes.

The same goes for programming automation curves for synth sounds, for example the cutoff of a TB-303. Playing it in by hand is faster than programming it, and the result is always more organic. It's no accident that experienced sound engineers see their old SSL or Neve console as an instrument – and that in the literal sense. Intuitive interventions in the mix with pots and faders put the focus on the ear and deliver original results in real time.

Maximum reduction as a recipe for success

In the analog days, you could only afford a limited number of instruments and pieces of pro audio equipment. Purchasing decisions were made more consciously, and the limited equipment available was used to its full potential. Today it is easy to flood the plugin slots of your DAW with countless plugins on a small budget. But one fact is often overlooked: the reduction to carefully selected instruments is very often style-defining. Many musicians create a clear musical fingerprint precisely through their limited instrument selection.

The concentration on a few, consciously selected tools defines a signature sound, which in the best case becomes an acoustic trademark. This is true for musicians as well as for sound engineers and producers. Would Andy Wallace deliver the same mixes if he swapped his favorite tool (an SSL 4000 G+) for a plugin bundle complete with DAW? It's no coincidence that plugin manufacturers are trying to port the essence of successful producers and sound engineers to the plugin level. Plugins are supposed to capture the sound of Chris Lord-Alge, Al Schmitt, or Bob Clearmountain.

An understandable approach, but with the curious aftertaste that these very gentlemen are not exactly known for preferring to work with plugins. Another curiosity is the revival of popular hardware classics as plugin emulations. A respectable GUI is supposed to convey a value comparable to that of the hardware, even though only the programming – the code – determines the sound of the plugin. Another example of how visualization influences the choice of audio tools.

Just switch off

Don't get me wrong, good music can also be produced with mouse and keyboard. But there are good reasons to question this way of working. We are not preaching an audio engineering gospel; we just want to offer an alternative to visualized audio production and shift the focus from the eye back to the ear. The fact that music often gets lost in the background noise of the zeitgeist is something we will hardly be able to reverse.

But maybe it helps to remember the archetypes of music. Listening to music instead of seeing it and, in the literal sense, taking a hands-on approach again. Using real instruments, interacting with other musicians, using pro audio hardware that allows tactile feedback.

Limiting yourself to a few deliberately selected instruments, analog audio hardware, and plug-ins with hardware controller connectivity – this intuitive workflow can help break through familiar structures and ultimately create something new that touches the listener. Ideally, this is how we find our way back to the very essence of music: emotion!

Finally, one last tip: “Just switch off!” Namely, the DAW monitor. Listen through the song instead of watching it. No plugin windows, no meter displays, no waveform display – listen to the song without any visualization. Like a record, because unlike MTV, it has a future.

What do you think? Leave a comment and share this post if you like it.

Yours, Ruben

Mastering for Spotify, YouTube, Tidal, Amazon Music, Apple Music and other Streaming Services

Estimated reading time: 14 minutes



Do audio streaming platforms also require a special master?

Introduction

Streaming platforms (Spotify, Apple Music, Tidal, Amazon Music, YouTube, Deezer, etc.) are a hot topic in the audio community, especially since these online services publish concrete guidelines for the ideal loudness of tracks. To what extent should you follow these guidelines when mastering, and what do you have to consider when interacting with audio streaming services? To find the answer, we have to take a little trip back in time.

Do you remember the good old cassette recorder? In the 80s, people used it to make their own mixtapes: songs by different artists gathered on a tape, which we pushed into the car's tape deck, Cherry Coke in the other hand, to roll up with a suitable soundtrack at the next ice cream parlor in the city center. The mixtapes offered a consistently pleasant listening experience, at least as far as the volume of the individual tracks was concerned. When we created mixtapes, the recording level was simply adjusted by hand, so that records of different loudness were more or less consciously normalized.

Back to the Streaming Future. Time leap: Year 2021.

Music fans like us still enjoy mixtapes, except that today we call them playlists and they are part of various streaming services such as Apple Music, Amazon Music, Spotify, YouTube, Deezer, or Tidal. In their early years, these streaming services quickly discovered that without a regulating hand on the volume fader, their playlists required constant readjustment by the users due to the varying loudness of individual tracks.

So they looked for a digital counterpart to the analog recording level knob and found it in an automated normalization algorithm that processes every uploaded song according to predefined guidelines. Spotify, for example, specifies -14 dB LUFS as the ideal loudness value. This means that if our song is louder than -14 dB LUFS, the streaming algorithm will automatically reduce its volume so that playlists have a more consistent average loudness. Sounds like a good idea at first glance, right?
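
For illustration, the arithmetic behind such a normalization step boils down to very little. The following sketch (Python, a simplification of what the platforms actually do) computes the gain a service aiming for -14 LUFS would apply to a track whose integrated loudness is already known.

    def normalization_gain_db(track_lufs, target_lufs=-14.0, boost_quiet_tracks=False):
        """Gain in dB a loudness-normalizing player would apply to reach its target."""
        gain = target_lufs - track_lufs
        if not boost_quiet_tracks:
            gain = min(gain, 0.0)    # default behavior: only turn louder tracks down
        return gain

    print(normalization_gain_db(-9.5))    # -4.5 dB: a loud master gets turned down
    print(normalization_gain_db(-16.0))   #  0.0 dB: a quieter master is left untouched here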

Why LUFS?

The problem with different volume levels was not limited to music. It was also widespread in broadcasting. The difference in volume between a television movie and the commercial breaks it contains sometimes took on such bizarre proportions that the European Broadcasting Union felt forced to issue a regulation on loudness. This was the birth of the EBU R128 specification, which was first implemented in Germany in 2012. With this regulation, a new unit of measurement was introduced: LUFS (Loudness Units relative to Full Scale).

One LU (Loudness Unit) corresponds to a relative change of 1 dB, and at the same time a new upper limit for digital audio was defined: according to the EBU specification, a digital peak level of -1 dB TP (True Peak) should not be exceeded. This is the reason why Spotify and friends specify a true peak limit of -1 dB TP for music files.

Tip: I recommend keeping to this limit, especially if we do not adhere to the loudness specification of -14 dB LUFS. At higher levels, the normalization algorithm will definitely intervene. Spotify points out in this context that if we do not keep -1 dB TP as the limiter ceiling, sound artifacts may occur due to the normalization process.

This value is not carved in stone, as you will see later. Loudness units offer a special advantage to the mastering engineer: simply put, we can use LUFS to quantify how “loud” a song is and thereby compare different songs in terms of loudness. More on this later.
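
If you want to check the true peak level of your own master, the basic idea is to oversample the signal and look at the highest reconstructed value. The following sketch (Python with NumPy/SciPy, a rough approximation rather than the full ITU-R BS.1770 true-peak algorithm) shows why a plain sample-peak meter can miss inter-sample peaks.

    import numpy as np
    from scipy.signal import resample_poly

    def true_peak_dbtp(x, oversample=4):
        """Rough true-peak estimate: oversample, then take the largest absolute value."""
        upsampled = resample_poly(x, oversample, 1)
        return 20 * np.log10(np.max(np.abs(upsampled)) + 1e-12)

    # A full-scale sine at a quarter of the sample rate, sampled "between" its peaks:
    sr = 44100
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * (sr / 4) * t + np.pi / 4)

    print(20 * np.log10(np.max(np.abs(x))))   # sample peak: about -3 dBFS
    print(true_peak_dbtp(x))                  # true peak estimate: close to 0 dBTP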

T-Racks Stealth Limiter.

How can we see if our mix is normalized by a streaming service?

The bad news is that the streaming services have quite different guidelines. You therefore basically have to look up the specifications of each individual service if you want to follow them, which can be quite a hassle, as there are more than fifty streaming and broadcasting platforms worldwide. As an example, here are the ideal LUFS values specified by some services:

-11 LUFS Spotify Loud

-14 LUFS Amazon Alexa, Spotify Normal, Tidal, YouTube

-15 LUFS Deezer

-16 LUFS Apple, AES Streaming Service Recommendations

-18 LUFS Sony Entertainment

-23 LUFS EBU R128 Broadcast

-24 LUFS US TV ATSC A/85 Broadcast

-27 LUFS Netflix

The good news is that there are various ways to compare your mix with the specifications of the most important streaming services at a glance. How much will your specific track be turned down by a given streaming service? You can check this on the following website: www.loudnesspenalty.com

Loudness Penalty.

Some DAWs, such as the latest version of Cubase Pro also feature comprehensive LUFS metering. Alternatively, the industry offers various plug-ins that provide information about the LUFS loudness of a track. One suitable candidate is YOULEAN Loudness Meter 2, which is also available in a free version: https://youlean.co/youlean-loudness-meter/.

Another LUFS metering alternative is the Waves WLM Plus Loudness Meter, which is already fed with a wide range of customized presets for the most important platforms. 

Waves Loudness Meter

Metering

Using the Waves meter as an example, we will briefly go through the most important LUFS readings, because LUFS metering involves a lot more than just a naked dB number in front of the unit. When we talk about LUFS, it should be clear what exactly is meant. LUFS values are determined over a period of time, and depending on the length of that time span, this can lead to different results. The most important value is the LUFS Long Term reading.

This is determined over the entire duration of a track and therefore represents an average value. To get an exact Long Term value, we have to play the song once from beginning to end. Other LUFS meters (e.g. in Cubase Pro) refer to the Long Term value as LUFS Integrated. LUFS Long Term or Integrated is the value the streaming platforms' specifications refer to. For “Spotify Normal” this means that if a track has a loudness of -12 LUFS Integrated, the Spotify algorithm will lower it by two dB to -14 LUFS.
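
If you want to check this value yourself before uploading, one option (assuming the third-party soundfile and pyloudnorm Python packages, with “master.wav” as a placeholder file name) looks roughly like this:

    import soundfile as sf       # reads the audio file
    import pyloudnorm as pyln    # BS.1770 / EBU R128 style loudness metering

    data, rate = sf.read("master.wav")            # placeholder file name
    meter = pyln.Meter(rate)                      # K-weighted meter
    integrated = meter.integrated_loudness(data)  # LUFS Integrated / Long Term

    target = -14.0                                # e.g. "Spotify Normal"
    print(f"integrated loudness: {integrated:.1f} LUFS")
    print(f"expected adjustment: {min(0.0, target - integrated):.1f} dB")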

LUFS Short Term

The Waves WLM Plus plugin offers other LUFS readings for evaluation, such as LUFS Short Term. LUFS Short Term is determined over a period of three seconds when the plugin measures according to the EBU standard. This is an important point, because depending on the ballistics, the measurement windows differ in length and can therefore lead to different results. A special feature of the Waves WLM Plus plugin is the built-in true peak limiter. Many streaming platforms insist on a true peak limit of -1 dB (some even -2 dB). If you use the WLM Plus meter as the last plugin in your mastering chain, the true peak limit is guaranteed not to be exceeded when the limiter is activated.

Is the “Loudness War” finally over thanks to LUFS?  

As we have already learned, all streaming platforms define maximum values. If our master exceeds these specifications, it will automatically be made quieter. The supposedly logical conclusion: we no longer need loud masters. At least this is true for those who adhere to the specifications of the streaming platforms. Now, parts of the music industry have always been considered a place beyond all reason, where things like to run differently than logic dictates. The “LUFS dictate” is a suitable example of this.

The fact is: in practice, most professional mastering engineers care neither about LUFS nor about the specifications of the streaming services!

Weird, I know. However, the facts are clear, and the thesis can be proven with simple methods. Remember that YouTube, just like Spotify, specifies a loudness of -14 dB LUFS and automatically plays louder tracks at a lower volume. So all professional mixes should take this into account, right? Conveniently, this can be checked without much effort. Open a recent music video on YouTube, right-click on the video and click on “Stats for nerds”. The entry “content loudness” indicates by how many dB the audio track is lowered by the YouTube algorithm. Now things get interesting. For the current AC/DC single “Shot in the Dark” this is 5.9 dB. Billy Talent's “I Beg To Differ” is even lowered by 8.6 dB.

Amazing, isn’t it?  

Obviously, hardly anyone seems to adhere to the specifications of the streaming platforms. Why is that? 

There are several reasons. The loudness specifications differ from platform to platform. If you took these specifications seriously, you would have to create a separate master for each platform. This would result in a whole series of different-sounding tracks, for the following reason: mastering equipment (whether analog or digital) does not work linearly across the entire dynamic spectrum.

Example:

The sound of the mix/master changes if you have to squeeze 3 dB more gain reduction out of the limiter for one streaming platform than for another. If you then normalize all master files to an identical average value, the sound differences become audible due to the different dynamics processing. The differences are sometimes bigger and sometimes smaller, depending on the processing you have applied.

Another reason for questioning the loudness specifications is the inconsistency of the streaming platforms themselves. Take Spotify, for example. Did you know that Spotify's normalization algorithm is not active when songs are played via the web player or a third-party app? From the Spotify FAQs:

Spotify for Artists FAQ

The Metal Mix

This means that if you deliver a metal mix at -14 dB LUFS and it is played back via Spotify in a third-party app, the mix will simply be too weak compared to other productions. And there are other imponderables in the streaming universe. Spotify allows its premium users to choose from three different normalization settings, whose targets also differ. For example, the platform recommends a default of -11 dB LUFS and a true peak value of -2 dB TP for the “Spotify Loud” setting, while “Spotify Normal” is specified at -14 dB LUFS and -1 dB TP. Also from the Spotify FAQs:


For mastering engineers, this is a questionable state of affairs. Mastering for streaming platforms is like trying to hit a constantly moving target at varying distances with a precision rifle. Even more serious, however, is the following consideration: what happens if one or more streaming platforms raise, lower, or even eliminate their loudness targets in the future? There is no guarantee that the current specifications will still be valid tomorrow. Unlikely? Not at all! YouTube introduced its normalization algorithm in December 2015; uploads prior to that may sound louder if they were mastered louder than -14 dB LUFS. Even after 2015, YouTube's target has not remained constant. From 2016 to 2019, the typical YouTube normalization was -13 dB and did not refer to LUFS. Only since 2019 has YouTube been using -14 dB LUFS by default.

Why loudness is not expressed in numbers alone

If you look at the loudness statistics of some YouTube videos and listen to them very carefully at the same time, you might make an unusual observation: some videos sound louder even though their loudness statistics indicate that they are nominally quieter than other videos. How can this be? There is a difference between measured loudness in LUFS and perceived loudness, and it is the latter that determines how loud we perceive a song to be, not the LUFS figure. But how do you create such a lasting loudness impression?

Many elements have to work together for us to perceive a song as loud (perceived loudness). Stereo width, tonal balance, song arrangement, saturation, dynamics manipulation – just to name a few pieces of the puzzle. The song must also be well composed and performed. The recording must be top-notch and the mix professional. The icing on the cake is a first-class master. If all these things come together, the song is denser, more forward and, despite moderate mastering limiter use, simply sounds louder than a mediocre song with less good mix & mastering, even if the LUFS integrated specifications suggest a different result. An essential aspect of a mastering process is professional dynamics management. Dynamics are an integral part of the arrangement and mix from the beginning.

In mastering, we want to try to further emphasize dynamics while not destroying them. Because one thing is always inherent in the mastering process: a limitation of dynamics. How well this manipulation of dynamics is done is what separates good mastering from bad mastering and a good mix with a professional master always sounds fatter and louder than a bad mix with a master that is only trimmed for loudness.

Choose your tools wisely!

High-quality equalizers and compressors like the combination of the elysia xfilter and the elysia xpressor provide a perfect basis for a more assertive mix and a convincing master. Quality compression preserves the naturalness of the transients, which automatically makes the mix appear louder. Missing punch and pressure in your song? High-quality analog compressors reliably deliver impressive results and are more beneficial to the sound of a track than relying solely on digital peak limiting.

Are you losing audible details in the mixing and mastering stage? Bring them back into the light with the elysia museq! The number of playback devices has grown exponentially in recent years, which doesn't exactly make the art of mastering easier.

Besides the classic hi-fi system, laptops, smartphones, Bluetooth speakers, and all kinds of headphones are fighting for the listener's attention in everyday life. Good analog EQs and compressors can help to adjust the tonal focus for these devices as well. Analog processing also preserves the natural dynamics of a track much better than endless plug-in chains, which often turn out to be a workflow brake. But “analog” can offer even more for your mixing and mastering project. Analog saturation is an additional way to increase the perceived loudness of a mix and to noticeably improve audibility, especially on small monitoring systems like a laptop or a Bluetooth speaker.

Saturation and Coloration

The elysia karacter provides a wide range of tonal coloration and saturation that you can use to make a mix sound denser and more assertive. Competitive mastering benefits sustainably from the use of selected analog hardware: the workflow is accelerated and you can make necessary mix decisions quickly and accurately. For this reason, high-quality analog technology enjoys great popularity, especially in high-end mastering studios. karacter is available as a 1U 19″ version, as the karacter 500 module, and in our new, super handy qube series as the karacter qube.

Mastering Recommendations for 2021

As you can see, the considerations related to mastering for streaming platforms are anything but trivial. Some people’s heads may be spinning because of the numerous variables. In addition, there is still the question of how to master your tracks in 2021. 

The answer is obvious: create your master in a way that serves the song. Some styles of music (jazz, classical) require much more dynamics than others (heavy metal, hip-hop); the latter can certainly benefit from distortion, saturation, and clipping as stylistic elements. What sounds great is allowed. The supreme authority for a successful master is always the sound. If the song calls for a loud master, it is legitimate to use the appropriate tools for it. The limit of loudness maximization is reached when the sound quality suffers. Even in 2021, the master should sound better than the mix. The use of compression and limiting should always serve the result and not be based on the LUFS specifications of various streaming services. Loudness is a conscious artistic decision and should not end up as an attempt to hit certain LUFS specifications.

And the specifications of the streaming services? 

At how many LUFS should I master?

There is only one valid reason to master a song to -14 dB LUFS: the value of -14 dB LUFS is just right if the song sounds better with it than at -13 or -15 dB LUFS!

I hope you were able to take some valuable information from this blog post and it will help you take your mix and personal master for digital streaming services to the next level. 

I would be happy about a lively exchange. Feel free to share and leave a comment or if you have any further questions, I’ll be happy to answer them of course.

Yours, Ruben Tilgner 

-18dBFS is the new 0dBu

Estimated reading time: 18 minutes


Gain staging and the integration of analog hardware in modern DAW systems


Introduction

-18dBFS is the new 0dBu:

In practice, however, even experienced engineers often have only a rough idea of what “correct” levels are. Like trying to explain the offside rule in soccer, a successful level balance is simple and complex at the same time, especially when the digital and analog worlds are supposed to work together on equal terms. This blog post offers concrete tips for confident headroom management and for integrating analog hardware into a digital DAW production environment in a meaningful way.

Digital vs. Analog Hardware

The good thing is that you don't have to choose one or the other. In modern music production, we need both worlds, and with a bit of know-how the whole thing works surprisingly well. But the fact is: on the one hand, digital live consoles and recording systems are becoming more and more compact in terms of form factor; on the other hand, the number of inputs and outputs and the maximum track counts keep increasing. The massive number of input signals and tracks makes it all the more important to always find suitable level ratios.

Let’s start at the source and ask the simple question, “Why do you actually need a mic preamp?”

The answer is as simple as it is clear: we need a mic preamp to turn a microphone signal into a line signal. A mixer, audio interface, or DAW always operates at line level, not microphone level. This applies to all audio interfaces of a system, such as insert points or audio outputs. How far do we actually need to turn up the microphone preamp, and is there one “correct” level? There is no universal constant with a claim to sole validity, but there is a thoroughly sensible recommendation that has proven itself in a practical workflow: I recommend leveling all input signals to line level with the help of the microphone preamplifier. Line level is the sweet spot of audio systems.

But what exactly is line level now and where can it be read?

Now we're at a point where it gets a little more complicated. The definition of line level is based on a reference level, and this differs depending on which standard is used. The reference level for professional audio equipment according to the German broadcast standard is +6 dBu (1.55 Vrms, -9 dBFS), referenced to a level of 0 dBu at 0.775 V (RMS). In the USA, the analog reference level of +4 dBu, corresponding to 1.228 V (RMS), is used. Also relevant in audio technology are the reference level of 0 dBV, corresponding to exactly 1 V (RMS), and the consumer level (USA) of -10 dBV, corresponding to 0.3162 V (RMS). Got it? We'll focus on the +4 dBu reference level in this blog post, simply because most professional audio equipment relies on this reference level for line level.
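
Since all of these reference levels are just voltages expressed on a logarithmic scale, the conversions are easy to check yourself. A small sketch (Python; the reference voltages are the ones quoted above):

    def dbu_to_volts(dbu):
        return 0.775 * 10 ** (dbu / 20)   # 0 dBu is referenced to 0.775 V RMS

    def dbv_to_volts(dbv):
        return 1.0 * 10 ** (dbv / 20)     # 0 dBV is referenced to 1 V RMS

    print(round(dbu_to_volts(4), 3))      # 1.228 V -> +4 dBu, US pro line level
    print(round(dbu_to_volts(6), 3))      # 1.546 V -> +6 dBu, German broadcast reference
    print(round(dbv_to_volts(-10), 4))    # 0.3162 V -> -10 dBV, consumer level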

dBu & dBV vs. dBFS

What is +4dBu and what does it mean?

Level ratios in audio systems are expressed in decibels (dB), a logarithmic ratio. It is important to understand that there is a difference between digital and analog mixers in terms of dB metering. Anyone who has swapped from an analog to a digital mixer for the first time (or vice versa) has experienced this: the usual level settings suddenly don't fit anymore. Why is that? The simple explanation: analog mixers almost always use 0 dBu (0.775 V) as a reference point, while their digital counterparts use the standard set by the European Broadcasting Union (EBU) for digital audio levels. According to the EBU, the old analog 0 dBu should now be equivalent to -18 dBFS (decibels relative to full scale). Users of digital consoles and DAWs, take note: -18 dBFS is our new 0 dBu!

This sounds simple, but unfortunately it's not quite that easy, because dBu values can't be unambiguously converted to dBFS. Which analog voltage leads to a certain digital level varies from device to device. Many professional studio devices are specified with a nominal output of +4 dBu, while consumer devices tend to fall back on the dBV reference (-10 dBV). As if that weren't enough confusion, there are also massive differences in terms of headroom. With analog equipment, there is still plenty of headroom available when a VU meter sits around the 0 dB mark; often there is another 20 dB available until analog soft clipping signals the end of the line. The digital domain is much less forgiving at this point. Levels beyond the 0 dBFS mark produce hard clipping, which sounds unpleasant on the one hand and represents a fixed upper limit on the other: the level simply does not get any louder.

We keep in mind: the analog world works with dBu and dBV values, while dBFS describes level ratios in the digital domain. Accordingly, the meter displays on an analog mixing console differ from those of a digital console or DAW.

Analog meters are referenced to dBu. If the meter shows 0 dB, this equals +4 dBu at the mixer output, and we can still enjoy generous headroom. A digital meter is usually scaled over a range of -80 to 0 dBFS, with 0 dBFS representing the clipping limit. For comparison, recall: 0 dBu (analog) = -18 dBFS (digital). This is true for many digital devices, such as Yamaha digital mixers, but not all. Pro Tools, for example, works with a reference level of 0 dBu = -20 dBFS. We often find this difference when comparing European and US equipment. The good news is that we can live very well with this difference in practice; two dB is not what matters in the search for the perfect level.
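
For levels within the linear operating range, the alignment is simply a fixed offset, so a tiny helper is enough to translate between the two worlds (a sketch; the -18 dBFS and -20 dBFS alignments are the ones mentioned above):

    def dbu_to_dbfs(level_dbu, dbfs_at_0dbu=-18.0):
        """Map an analog level in dBu to dBFS for a given alignment."""
        return level_dbu + dbfs_at_0dbu

    print(dbu_to_dbfs(0))          # -18 dBFS: the "new 0 dBu" on an EBU-aligned system
    print(dbu_to_dbfs(4))          # -14 dBFS: +4 dBu nominal level on the same system
    print(dbu_to_dbfs(0, -20.0))   # -20 dBFS: the Pro Tools style alignment mentioned above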

Floating Point

But why do we need to worry about level ratios in a DAW at all? Almost all modern DAWs work with floating-point arithmetic, which provides the user with practically infinite headroom and dynamic range (theoretically around 1500 dB). The internal dynamic range is so great that internal clipping cannot occur. Therefore, the common wisdom on this subject is: “You can do whatever you want with your levels in a floating-point DAW, you just must not overdrive the sum output.” Theoretically true, but practically problematic for two reasons. First, there are plug-ins (often emulations of classic studio hardware) that don't like it at all if you feed their input with extremely high levels.

This degrades the signal audibly. Very high levels have a second undesirable side effect: they make it virtually impossible to use analog audio hardware as an insert. Most common DAWs work with a 32-bit floating-point audio engine, so clipping can only occur on the way into the DAW (e.g. an overdriven mic preamp) or on the way out of the DAW (an overdriven master DA converter). And that happens faster than you think. Example: anyone who works with commercial loops knows the problem. Finished loops are often normalized, so the loudest parts quickly reach the 0 dBFS mark on your peak meter. If we play several loops simultaneously and two of them hit 0 dBFS at the same point, we already have clipping on the master bus. Excessive levels in a DAW should therefore be avoided at all costs.
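
A quick numerical illustration of the loop example (Python with NumPy; the two "loops" are just identical full-scale sines standing in for normalized audio):

    import numpy as np

    sr = 48000
    t = np.arange(sr) / sr
    loop_a = np.sin(2 * np.pi * 110 * t)    # normalized loop, peaking at 0 dBFS
    loop_b = np.sin(2 * np.pi * 110 * t)    # second loop, hitting 0 dBFS at the same time

    mix = loop_a + loop_b                   # floating-point mix bus: no clipping yet
    print(np.max(np.abs(mix)))              # ~2.0 -> about +6 dB over full scale inside the DAW

    # ...but a DA converter or a fixed-point export cannot go above 0 dBFS:
    clipped = np.clip(mix, -1.0, 1.0)       # hard clipping at the output stage
    print(np.max(np.abs(clipped)))          # 1.0 -> flat-topped and audibly distorted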

Noise Generator

We've talked about clipping and headroom so far, but what about the other side of the coin? How do analog and digital audio systems handle very low levels? In the analog world, the facts are clear: the lower our signal level, the closer our useful signal approaches the noise floor. That means our signal-to-noise ratio is not optimal. Low-level signals enter the ring with the noise floor, and that fight never ends without collateral damage to the sound quality. Therefore, in an analog environment, we must always insist on solid levels and high-quality equipment with the best possible internal signal-to-noise ratio. This is the only way to guarantee that in critical applications (e.g. classical recordings, or music with very high dynamics) the analog recording is as noise-free as possible.

And digital?

Fader position as a part of Gain Staging

Another often overlooked detail on the way to a solid gain structure is the position of the faders. First of all, it doesn’t matter whether we’re working with an analog mixer, a digital mixer, or a DAW. Faders have a resolution, and this is not linear.

The resolution around the 0 dB mark is much higher than in the lower part of the fader path. To mix as sensitively as possible, the fader position should be near the 0 dB mark. If we create a new project in a DAW, the faders are at the 0 dB position by default; this is how most DAWs handle it. Now we can finally turn up the mic preamps and set an appropriate recording level. We recommend leveling all signals in the digital domain to -18 dBFS RMS / -9 dBFS peak – in other words, to the line level already invoked at the beginning, because that is what digital mixers and DAWs are designed for. Since we keep the channel faders close to the 0 dB mark, the question now is: how do I lower signals that are too loud in the mix?

There are several ways to do this, and many of them are simply not recommended. For example, you could turn down the gain of the mic preamp, but then we're no longer feeding line level to the DAW. With an analog mixer, this results in a poor signal-to-noise ratio. With a digital mixer, the same approach has the problem that all sends (e.g. monitor mixes for the musicians, insert points) also leave the line-level sweet spot. OK, let's just pull down the channel fader! But then we leave the area of best resolution, where we can adjust levels most sensitively. In the studio this may “only” be uncomfortable, but at a large live event with a PA to match, it quickly becomes a real problem.

This is where working in the fader sweet spot is essential. Making the lead vocal precisely two dB louder via the fader is almost impossible if we start from a fader setting of, let's say, -50 dB: if we move the fader up just a few millimeters, we quickly reach -40 dB, which is an enormous jump in volume. The solution to this problem: we prefer to use audio subgroups for rough volume balancing. If these are not available, we fall back on DCA or VCA groups. The input signals are assigned to the subgroups (or DCAs or VCAs) accordingly, for example one group for drums, one for cymbals, one for vocals, and one each for guitars, keyboards, and bass. With the help of the groups you can set a rough balance between the instruments and vocals and use the channel faders for small volume corrections.

Special tip: It makes sense to route effect returns to the corresponding groups instead of to the master – the drum reverb to the drum group, the vocal reverb to the vocal group. If you have to correct a group's volume, the effect portion is automatically pulled along and the ratio of signal to effect always remains the same.

Gain Staging in the DAW – the hunt for line level


As a first step, we need to clear up a misunderstanding: “gain” and “volume” are not members of the same family. Adjusting gain is not the same as adjusting volume. In simple words, volume is the level after processing, while gain is the level before processing. Or even simpler: gain is input level, volume is output level!

The next important step for clean gain staging is to determine what kind of meter display my digital mixer or DAW is even working with. Where exactly is line level on my meter display?

Many digital consoles and DAWs have hybrid metering, like the metering in Studio One V5, which we'll use as an example. Its scaling goes from -72 dB to +10 dB in the channels and from -80 dB to +6 dB on the sum output.

In terms of its scaling, Studio One's metering sits between an analog dBu meter and a digital dBFS meter, and it is similar in many DAWs. It is important to know whether the meter shows RMS (average level) or peak level. If we only have peak metering and level the signal to line level (-18 dBFS) by its peaks, the resulting level is too low, especially for very dynamic source material with fast transients like a snare drum. The greater the dynamic range of a track, the higher the peak values and the lower the average value. Drum tracks can therefore quickly light up the clip indicator of a peak meter while producing comparatively little deflection on an RMS meter.

In Studio One, however, we get all the information we need. The blue Studio One meter represents peak metering, while the white line in the display always shows the RMS average level. Also important is where the metering is tapped. For leveling, the metering should show the pre-fader level, especially if you have already inserted plug-ins or analog devices into the channel; these can significantly influence the post-fader metering.
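
The difference between the two readings is easy to reproduce (a sketch in Python with NumPy; the two test signals are crude stand-ins for a sustained pad and a percussive snare, and the -18 dBFS RMS / -9 dBFS peak recommendation from above is what you would compare the numbers against):

    import numpy as np

    def peak_dbfs(x):
        return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

    def rms_dbfs(x):
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

    sr = 48000
    t = np.arange(sr) / sr
    pad = 0.2 * np.sin(2 * np.pi * 220 * t)                      # sustained tone, low crest factor
    snare = 0.7 * np.exp(-60 * t) * np.sin(2 * np.pi * 180 * t)  # short burst, high crest factor

    for name, sig in (("pad", pad), ("snare-ish burst", snare)):
        print(f"{name}: peak {peak_dbfs(sig):6.1f} dBFS, RMS {rms_dbfs(sig):6.1f} dBFS")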

-18dBFS is the new 0dBu | Gain Staging and the integration of analog hardware in DAW Systems

Keyword: Plugins

You need to drive digital emulations with a suitable level. There are still some fixed-point plug-ins and emulations of old hardware classics on the market that don't like high input levels. It is sometimes difficult to see which metering the plugins themselves use and where line level is located. A screenshot illustrates the dilemma.

-18dBFS is the new 0dBu | Gain Staging and the integration of analog hardware in DAW Systems

The BSS DRP402 compressor clearly has a dBFS meter, with its line-level reference at -20dBFS. The bx townhouse compressor in the screenshot is fed with the same input signal as the BSS but shows completely different metering.

Since it is an analog emulation, you may assume that its meter display behaves more like a VU meter.

Fire Department Operation

It's not uncommon to find yourself in the studio with recordings that just want to be mixed. Experienced sound engineers will agree with me: many recordings by less experienced musicians or junior engineers are simply leveled too hot. So what can you do to bring the levels back down to a reasonable range? Digitally, this is not a big problem, at least as long as the tracks are free of digital clipping. Turning the tracks down doesn't change the sound, and we don't have to worry about noise floor problems in the digital domain either. In any DAW, you can reduce the waveform (amplitude) to the desired level.

-18dBFS is the new 0dBu | Gain Staging and the integration of analog hardware in DAW Systems

Alternatively, every DAW offers a Trim plug-in that you can place in the first insert slot to lower the level there.

The same plugin can also be used on busses or on the master if the summed tracks prove to be too loud. We do not use the virtual faders of the DAW mixer for this task, because they are post-insert and, as we already know, only change the volume but not the gain of the track.
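
Conceptually, such a trim is nothing more than a fixed scaling of the samples. Here is a minimal sketch in plain Python/NumPy (no DAW API involved) showing that pulling a too-hot track down by 12dB only changes the level, not the sound, as long as the file is not already clipped:

```python
# A minimal sketch of what a trim/gain plug-in does: scale the samples by a
# fixed amount in dB. Plain NumPy, illustration only.
import numpy as np

def trim(samples: np.ndarray, trim_db: float) -> np.ndarray:
    """Apply a gain offset in dB (negative values attenuate)."""
    return samples * 10 ** (trim_db / 20)

def peak_dbfs(samples: np.ndarray) -> float:
    return float(20 * np.log10(np.max(np.abs(samples))))

hot_track = np.random.uniform(-0.95, 0.95, 44100)   # stand-in for a too-hot recording
tamed = trim(hot_track, -12.0)

print(f"peak before: {peak_dbfs(hot_track):6.1f} dBFS")
print(f"peak after:  {peak_dbfs(tamed):6.1f} dBFS")
```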

Analog gear in combination with a DAW

The combination of analog audio gear and a DAW has a special charm. The fast, haptic access and the independent sound of analog processors make up the appeal of a hybrid setup. You can use analog gear as a front end (mic preamps) or as insert effects (e.g., dynamics). If you want to connect an external preamp to your audio interface, you should use a line input to bypass the audio interface's generic mic preamp.

In insert mode, we have to accept an AD/DA conversion to get pure analog gear into the DAW, so the quality of the AD/DA converters matters. Using the full 24-bit range at full scale corresponds to a theoretical dynamic range of 144dB, which overstrains even a high-end converter in practice. Therefore, you should drive your analog gear in the insert at line level to give the converters enough headroom, especially if you plan to boost the signal with the analog gear.
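
The 144dB figure comes from the usual rule of thumb that each bit adds roughly 6dB of theoretical dynamic range:

```python
# Rough rule of thumb: each bit adds about 6.02 dB of dynamic range, so
# 24 bit corresponds to roughly 144 dB (theoretical value, ignoring the
# real-world noise floor of the converter).
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits

for bits in (16, 24):
    print(f"{bits} bit -> ~{dynamic_range_db(bits):.0f} dB theoretical dynamic range")
```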

Boosting simply requires headroom. If, on the other hand, you only make subtractive EQ settings, you can also work with higher send and return levels. Now we only need to adjust the level ratios for insert operation. Several things need our attention.

It depends on the entire signal chain

The level ratios inside a DAW are constant and always predictable. When integrating analog gear, however, we have to look at the entire signal flow and sometimes readjust it. We start with the send level from the DAW. Again, I recommend sending the signal at line level to an output of the audio interface.

The next step requires a small amount of detective work. In the technical specifications of the audio interface, we look up the reference level of the outputs and have to bring it in line with the input of the analog gear we want to loop into the DAW. If the interface has balanced XLR outputs, we connect them to a balanced XLR input of the analog insert unit. However, what do we do with unbalanced devices that have a reference level of -10dBV? Many audio interfaces offer a switch on their line inputs and outputs from +4dBu to -10dBV, which you should use in this case. The technical specifications of the audio interface also tell you which analog level corresponds to 0dBFS; on some interfaces this is switchable as well.

On an RME Fireface 802, for example, you can switch between +19dBu, +13dBu and +2dBV. It is important to know that many elysia products can handle a maximum level of about +20dBu. This applies to the entire signal chain from the interface output to the analog device and from its output back to the interface. Ideally, a line-level send signal makes its way back into the DAW with an identical return level. In addition, keep an eye on the analog unit itself: make sure that neither its input nor its output is distorting, because these distortions will otherwise be passed on to the DAW unadulterated.
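
To keep an eye on this, here is a small helper sketch in Python. It assumes the interface is calibrated so that 0dBFS corresponds to +19dBu (the RME Lo Gain setting mentioned above) and uses the roughly +20dBu maximum level of our units as the limit; adjust both numbers to your own setup.

```python
# Minimal helper under two stated assumptions: the interface outputs +19 dBu
# at 0 dBFS, and the analog hardware accepts a maximum of about +20 dBu.
# It converts a DAW send level in dBFS to dBu and shows the remaining headroom.
REF_DBU_AT_0DBFS = 19.0      # assumption: interface reference level (RME "Lo Gain")
HARDWARE_MAX_DBU = 20.0      # rough maximum level of many elysia units

def send_level_dbu(send_dbfs: float, ref_dbu: float = REF_DBU_AT_0DBFS) -> float:
    return ref_dbu + send_dbfs

for send in (-18.0, -6.0, 0.0):
    dbu = send_level_dbu(send)
    headroom = HARDWARE_MAX_DBU - dbu
    print(f"send {send:6.1f} dBFS -> {dbu:5.1f} dBu  (headroom to hardware max: {headroom:4.1f} dB)")
```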

elysia qube series

How the insert levels behave also depends a bit on the type of analog gear. An EQ looped into the chain that moderately boosts or cuts frequencies is less critical than a transient shaper (elysia nvelope). Depending on the setting, the latter can generate peaks that RMS metering can hardly detect. In a worst-case scenario, this creates distortion that is audible but not visible without peak metering. Another classic operating mistake is setting a compressor's make-up gain too high.

In the worst case, both the output of the compressor itself and the return input of the sound card are overdriven. The levels at all four points of an insert (input & output of the analog device plus output & input of the interface) should be kept under close observation. But we are not alone: help for insert operation is provided by the DAW's on-board tools, which we will look at in conclusion.

Use Insert-Plugins!

When integrating analog hardware, you should definitely use the insert plugins that almost every DAW provides. Reaper features the "ReaInsert" plugin, ProTools comes with "Insert" and Studio One provides the "Pipeline XT" plugin. The wiring for this application is quite simple.

We connect a line output of our audio interface to the input of our hardware. We connect the output of our hardware to a free line input of our interface. We select the input and output of our interface as a source in our insert plugin (see Pipeline XT screenshot) and have established the connection.

A classic "send & return" connection. Depending on the buffer size setting, the AD/DA conversion causes a larger or smaller propagation delay, which can be problematic, especially when we process signals in parallel. What does this mean? Let's say we split our snare drum into two channels in the DAW. The first channel stays in the DAW and is only handled with a latency-free gate plugin; the second channel goes out of the DAW via Pipeline XT, into an elysia mpressor and from there back into the DAW.

Due to the AD/DA conversion, the second snare track is delayed compared to the first one. For both snare tracks to play together time-aligned, we need latency compensation. You could do this manually by shifting the first snare track, or you could simply click the "Auto" button in Pipeline XT for automatic latency compensation, which is much faster and more precise. The advantage is that the automatic delay compensation ensures our insert signal stays phase-coherent with the other tracks of the project. With this tool, you can also easily adjust the level of the external hardware: if distortion occurs here, you can reduce the send level and the return level will be raised correspondingly.
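
If your DAW has no such automatic helper, the manual version is just a sample shift. Here is a minimal sketch in Python/NumPy: it assumes a round-trip latency of 6ms (an arbitrary example value; measure or look up your own) and delays the dry, in-the-box snare by the same amount so both tracks line up.

```python
# Minimal sketch of manual latency compensation for a hardware insert:
# take the round-trip delay (measured or from the interface specs) and shift
# the dry, in-the-box track by the same number of samples.
import numpy as np

def delay_samples(round_trip_ms: float, sample_rate: int) -> int:
    return int(round(round_trip_ms / 1000 * sample_rate))

def compensate(dry_track: np.ndarray, round_trip_ms: float, sample_rate: int) -> np.ndarray:
    """Delay the dry track so it lines up with the hardware-processed return."""
    n = delay_samples(round_trip_ms, sample_rate)
    return np.concatenate([np.zeros(n, dtype=dry_track.dtype), dry_track])

sr = 44100
dry_snare = np.random.randn(sr).astype(np.float32)                   # stand-in for the gated snare
aligned = compensate(dry_snare, round_trip_ms=6.0, sample_rate=sr)   # assumed 6 ms round trip
print("inserted", delay_samples(6.0, sr), "samples of pre-delay")
```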

That is also the last tip in this blog post. The question of the correct level should now be settled, along with all the relevant side issues that have a significant impact on gain structure and a hybrid way of working. For all the theory and number mysticism, a dB-exact adjustment is not required; it is quite sufficient to stick roughly to the recommendations. This guarantees a reasonable level that will make your mixing work much easier and faster. Happy mixing!

Here's a great video from RME Audio about matching analog and digital levels.


Feel free to discuss, leave a comment below or share this blog post on your social media channels.

Yours, Ruben

How to deal with audio latency

Estimated reading time: 10 minutes

How to deal with latency in audio productions


Increased signal propagation time and annoying latency are uninvited permanent guests in every recording studio and at live events. This blog post shows you how to avoid audio latency problems and optimize your workflow.

As you surely know, the name elysia is a synonym for the finest analog audio hardware. As musicians, we also know and appreciate the advantages of modern digital audio technology. Mix scenes and DAW projects can be saved, total recall is standard, and monstrous copper multicores are replaced by slim network cables. A maximally flexible signal flow via network protocols such as DANTE and AVB allows the simple setup of complex systems. Does digital audio make everything better? That would be nice, but the reality is more ambivalent. If you look and listen closely, the digital domain sometimes causes problems that do not even exist in the analog world. Want an example?

From the depths of the bits & bytes arose a merciless adversary that will sabotage your recordings or live gigs. Plenty of phase and comb filter problems will occur. But with the right settings, you are not powerless against the annoying latencies in digital audio systems. 

What is audio latency, and why doesn't it occur in analog setups?

Latency occurs with every digital conversion (AD or DA). Latency is noticeable in audio systems as signal propagation time. In the analog domain the situation is clear: the signal propagation time from the input to the output of an analog mixer is practically zero.

Latencies only existed in the MIDI domain, where external synths or samplers were integrated via MIDI. In practice, this was not a problem, since the entire monitoring situation always remained analog and thus no latency was audible. With digital mixing consoles or audio interfaces, on the other hand, there is always a delay between input and output.

Latency can have different causes, for example the different signal propagation times of different converter types. Depending on the type and design, a converter needs more or less time to process the audio signal. For this reason, mixing consoles and recording interfaces always use identical converter types within the same modules (e.g. input channels), so that the modules share the same signal propagation time. As we will see, latency within a digital mixer or recording setup is not a fixed quantity.

Signal propagation time and round trip latency

Latency in digital audio systems is specified either in samples or milliseconds. A DAW with a buffer size of 512 samples generates at least a delay of 11.6 milliseconds (0.0116s) if we work with a sampling rate of 44.1kHz. The calculation is simple: we divide 512 samples by 44.1 samples per millisecond (44,100 samples per second) and get about 11.6 milliseconds (1ms = 1/1000s).
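
The same calculation as a tiny Python helper, for a few common sample rates:

```python
# Buffer latency from the text: buffer size in samples divided by the sample
# rate gives the delay of one buffer; one 512-sample buffer at 44.1 kHz is
# about 11.6 ms. (The real round-trip latency of an interface is higher.)
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return buffer_samples / sample_rate_hz * 1000

for sr in (44100, 48000, 96000):
    print(f"{sr/1000:5.1f} kHz, 512 samples -> {buffer_latency_ms(512, sr):5.2f} ms")
```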

If we work with a higher sample rate at the same buffer size, the latency decreases. If we run our DAW at 96kHz instead of 44.1kHz, the latency is roughly cut in half. The higher the sample rate, the lower the latency. Doesn't it then make sense to always work with the highest possible sample rate to elegantly work around latency problems? Clear answer: no! Running audio systems at 96 or even 192kHz is a big challenge for the computer CPU. The higher sample rate quickly makes the CPU break out in a sweat, which is why a very potent CPU is imperative for a high channel count. This is one reason why many entry-level audio interfaces often only work at sample rates of 44.1 or 48kHz.

Typically, mixer latency refers to the time it takes for a signal to travel from an analog input channel to the analog summing output. This process is also called “RTL”, which is the abbreviation for “Round Trip Latency”. The actual RTL of an audio interface depends on many factors: The type of interface (USB, Thunderbolt, AVB or DANTE), the performance of the recording computer, the operating system used, the settings of the sound card/audio interface and those of the recording project (sample rate, number of audio & midi tracks, plugin load) and the signal delays of the converters used. Therefore it is not easy to compare the real performance of different audio interfaces in terms of latency. 

It depends on the individual case!

A high total latency in a DAW does not necessarily have to be problematic; a lot depends on your workflow. Even with the buffer size of 512 samples from our initial example, we can record without any problems: the DAW plays the backing tracks to which we record overdubs, and latency does not play a role here. If you work in a studio, it only becomes critical if the DAW is also used for playing out headphone mixes, or if you want to play VST instruments or VST guitar plug-ins and record them to the hard disk. In this case, too high a latency makes itself felt in a delayed headphone mix and an indirect playing feel.

If that is the case, you will have to adjust the latency of your DAW downwards. There is no rule of thumb as to when latency has a negative effect on the playing feel or the listening situation. Every musician reacts individually. Some can cope with an offset of ten milliseconds, while others already feel uncomfortable at 3 or 4 milliseconds.

The Trip

Sound travels 343 meters (1125ft) in one second, which corresponds to 34.3 centimeters (1.125ft) per millisecond. The ten milliseconds mentioned above therefore correspond to a distance of 3.43 meters (11.25ft). Do you still remember your last club gig? You're standing at the edge of the stage rocking out with your guitar in hand, while the guitar amp is enthroned three to four meters (10-13ft) behind you. This corresponds to a signal delay of roughly 9-12ms. So for most users, a buffer size between 64 and 128 samples should be low enough to play VST instruments or create headphone mixes directly in the DAW.
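
The same back-of-the-envelope math in code, in both directions:

```python
# Convert between milliseconds of delay and meters of distance, using the
# speed of sound from the text (343 m/s).
SPEED_OF_SOUND_M_S = 343.0

def delay_to_distance_m(ms: float) -> float:
    return SPEED_OF_SOUND_M_S * ms / 1000

def distance_to_delay_ms(meters: float) -> float:
    return meters / SPEED_OF_SOUND_M_S * 1000

print(f"10 ms ~ {delay_to_distance_m(10):.2f} m")
print(f"3.5 m ~ {distance_to_delay_ms(3.5):.1f} ms  (guitar amp behind you)")
```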

That is, unless you're using plug-ins that cause high latency themselves! Most modern DAW programs have automatic latency compensation that aligns all channels and busses to the plug-in with the highest runtime. This has the advantage that all channels and busses work phase-coherently and therefore there are no audio artifacts (comb filter effects). The disadvantage is the high overall latency.

Some plug-ins, such as convolution reverbs or linear phase EQs, have significantly higher latencies. If these are in monitoring, this has an immediate audible effect even with small buffer size. Not all DAWs show plug-in latencies, and plug-in manufacturers tend to keep a low profile on this point.

First Aid

We have already learned about two methods of dealing directly with annoying latency. Another is direct hardware monitoring, if the audio interface provides it.

RME audio interfaces, for example, come with the TotalMix software, which allows low-latency monitoring with on-board tools, depending on the interface even with EQ, dynamics and reverb. Instead of monitoring via the DAW or the monitoring hardware of the interface, you can alternatively send the DAW project sum or stems into an analog mixer and monitor the recording mic together with the DAW signals in the analog domain with zero latency. If you are working exclusively in the DAW, it helps to increase the sample rate and/or decrease the buffer size. Both of these put a significant load on the computer CPU.

Depending on the size of the DAW project and the installed CPU, this can lead to bottlenecks. If no other computer with more processing power is available, it can help to replace CPU-hungry plug-ins in the DAW project or to set them to bypass. Alternatively, you can render plug-ins in audio files or freeze tracks.

Buffersize Options
The buffer size essentially determines the latency of a DAW
Track Rendering in DAW
Almost every DAW offers a function to render intensive plug-ins to reduce the load on the CPU

Good old days

Do modern problems require modern solutions? Sometimes a look back can help.

It is not always advantageous to record everything flat and without processing. Mix decisions about how a recorded track will sound in the end are then postponed into the future. Why not commit to a sound like in the analog days and record it directly to the hard disk? If you're afraid you might record a guitar sound that turns out to be a problem child later in the mixdown, you can record an additional clean DI track for later re-amping.

Keyboards and synthesizers can be played live and recorded as an audio track, which also circumvents the latency problem. Why not record signals with processing during tracking? This speeds up any production, and if analog products like ours are used, you don’t have to worry about latency.

If you are recording vocals, try compressing the signal moderately during the recording with a good compressor like the mpressor, or try our elysia skulpter. The elysia skulpter offers some nice and practical sound-shaping functions like filter, saturation and compressor in addition to the classic preamp features, so you get a complete channel strip. If tracks are already recorded with analog processing, this approach also saves some CPU power during mixing. Especially with many vocal overdub tracks, an unnecessarily large number of plug-ins would otherwise be required, which in turn forces a larger buffer size and consequently has a negative effect on latency.

What are your experiences with audio latencies in different environments? Do you have them under control? I’m looking forward to your comments.

Here are some FAQ:

What is audio latency, and why doesn't it occur in analog setups?

Latency occurs with every digital conversion (AD or DA). Latency is noticeable in audio systems as signal propagation time. In the analog domain the situation is clear: the signal propagation time from the input to the output of an analog mixer is practically zero.

Latencies only existed in the MIDI domain, where external synths or samplers were integrated via MIDI. In practice, this was not a problem, since the entire monitoring situation always remained analog and thus no latency was audible. With digital mixing consoles or audio interfaces, on the other hand, there is always a delay between input and output.
Latency can have different causes, for example the different signal propagation times of different converter types. Depending on the type and design, a converter needs more or less time to process the audio signal. For this reason, mixing consoles and recording interfaces always use identical converter types within the same modules (e.g. input channels), so that the modules share the same signal propagation time. As we will see, latency within a digital mixer or recording setup is not a fixed quantity.

What is Round Trip Latency?

Typically, mixer latency refers to the time it takes for a signal to travel from an analog input channel to the analog summing output. This process is also called “RTL”, which is short for “Round Trip Latency”.
The actual RTL of an audio interface depends on many factors: The type of interface (USB, Thunderbolt, AVB or DANTE), the performance of the recording computer, the operating system used, the settings of the sound card/audio interface and those of the recording project (sample rate, number of audio & midi tracks, plugin load) and the signal delays of the converters used. Therefore it is not easy to compare the real performance of different audio interfaces in terms of latency. 

The elysia hardware purchase guide

500 vs. 19“ vs. qube | The elysia hardware purchase Guide


The elysia hardware purchase guide | Which hardware version is right for me? 

As you have already found out, we offer you our products in different versions and you may ask yourself: “Which version should I buy? What are the advantages and disadvantages? What are the differences between the 19″ rack versions vs. the 500 Modules and the qube series?” 

To answer exactly these essential questions and to make your purchase decision as easy as possible, I wrote this short, informative and concise blog post for you. It includes the ultimate elysia hardware purchase guide, so you can see all the differences between the 500 modules, the 19″ rack versions and the qube series.

Enjoy reading it!


We offer, you choose!

Basically, all our product variants offer exactly the same high-quality audio electronics with all its refinements. Whether xpressor as 500 module, 19″ rack or qube – you will always get the same circuit board.

Power Handling

The supply voltage is identical on all variants: ±16V. The input and output levels are the same, and the usable dynamic range is identical. The technical data are the same for all models and you always get the same sound. The main difference is the power supply, which has a small but not unimportant effect on the overall sound.

On the one hand, a power supply can add audible hum and noise directly in the audio band; on the other hand, it can subtly change the general sound behavior.

In all our 19″ rack versions, we use a classic toroidal transformer with linear regulation for voltage conditioning. In the qube versions, we generate the necessary ±16V from an 18V power supply with two separate switching regulators; these are very low noise and were optimized by me for audio applications. For the 500 module racks, there are many different manufacturers and variants, and they differ, for example, in how the power supply is built and how low-noise it is. Our modules expect a fairly clean power supply, otherwise there will be audible interference. Unfortunately, most manufacturers do not release any information about the noise and interference spectrum of their power supplies.

Lost in Space?

A further and not insignificant criterion for your decision is of course the space requirement.

Do you already have classic 19″ racks, or is your overall space limited? Do you want to install our product permanently in your studio, or are you planning on mobile use?

Here is a short overview of the different versions with their advantages and disadvantages:

19″ rack versions

These have the best ergonomics. The controller arrangement from left to right is logical, clear, and easy to use thanks to the haptic user interface. If you already have 19″ racks in your studio, this is the perfect solution for you.

The installation into a studio table is great for mastering. Great, because the products are positioned in front of you and you don’t have to leave the perfect and central listening position. 

All our 19″ racks also have an EXT socket for very special and exciting functionality. On the 19″ rack version of the xpressor this provides an external sidechain, and on the karacter it provides CV (control voltage) sockets for drive and mix. The internal linear power supply with a classic toroidal transformer provides clean, low-noise, and stable power.

+ Perfect for 19″ rack mounting 

+ Integrated and linear power supply for best sound aesthetics 

+ EXT sockets for special functions 

+ XLR and jack sockets for in- and outputs 

+ Sturdy and lightweight aluminum housing 

+ Optimal ergonomics for mastering and mixing 

+ xfilter available also as Mastering Edition 

– The higher-priced version 

– elysia skulpter 500 and mpressor 500 are not available in this model variant

qube

The qube version is the perfect solution if you are looking for a very handy, space-saving all-round package. And you can be sure that there is always room for it on any desk or kitchen table. Thanks to the vertical arrangement, you can comfortably stack several qubes on top of each other.

Are you also looking for something for mobile use?

The sturdy aluminum case is well suited to tough stage, rehearsal room, or studio use. If you work at different locations, you should really take a closer look at the qube series, simply because it fits into any backpack.

The connectors we are using are XLR and jack. Synthesizers and drum machines can be connected directly without an adapter. The jack sockets are perfect for DAW integration via an audio interface for latency free recording.

The qube is perfect for your entry into the analog world – no need to search for the right 500 rack.

+ Compact 500 format that can be used immediately

+ Perfect entry into the world of analog processors 

+ Robust, travel-ready and lightweight housing 

+ Can be stacked vertically to save space

+ Optimal for mobile use (rehearsal room, studio, stage, FOH)

+ Internal low-noise voltage conditioning with switching regulator

+ External universal power supply suitable for all voltages

+ Additional jack sockets 

– no rack mounting possible

500 modules – The classics

If you already own a 500 rack or are planning to buy several modules, these classic models are the most flexible and affordable versions. You have a huge selection of 500 racks from different manufacturers and can let your creativity run wild when putting your setup together. The market offers an excellent choice of options in terms of the number of slots, additional mounting in a 19″ rack, connectors, and much more.

Some manufacturers also offer great features such as summing or 25-pin D-sub multipin connectors for audio connections. How good the respective power supplies are, I usually can't say, because most manufacturers provide little or no information in their technical specifications.

+ Cheapest versions

+ Individual combinations possible

+ Large selection of racks from different manufacturers available

+ 19″ rack installation possible

+ Nice additional functions selectable 

– Quality of the power supply may vary

– Most racks have only XLR inputs and outputs 

– Empty slots do not look nice

The differences between elysia 500 series, 19" Rack and qube.

The ultimate elysia compressor guide

The ultimate elysia compressor guide


Which elysia compressor fits my musical applications? 

Is an elysia compressor VCA or FET? 

What is the difference between the mpressor and xpressor? 

Let me shed some light on the matter.

alpha compressor – the ultimate mastering tool

Rack Series | alpha compressor state of the art compressor

Our noble compressor! When it comes to mastering, the alpha compressor is the first choice. With its extensive possibilities and feature set, the alpha compressor is perfectly equipped for all your mastering tasks. It is just waiting to be fed by you and to give you the best dynamic results you have always wished for but could never quite describe.

Do you want subtle or intense changes to the sound? The soft knee curve of the alpha compressor always controls the signal discreetly and elegantly. The M/S matrix in particular allows detailed editing and a very transparent, spatial sound image.

With the Sidechain Filter, you can adjust the control behavior perfectly and the Audio Filter allows you to make subtle tonal adjustments. The integrated Soft Clipper feature is the perfect function to protect your A/D converter from unwanted transients. The alpha compressor is a real musical all-rounder and you can use it for all kinds of music, from acoustic to modern. With this compressor, you will be able to unleash the expression and emotions of your music. 

Despite its complex feature set, you can still achieve perfect sound results very quickly and easily. Last but not least, the alpha compressor impresses with its special design and is a pure visual treat in every studio.

mpressor – a strong character with creativeness

mpressor from elysia | side

Would you like something special? Then the mpressor is just right for you. 

On the one hand, the mpressor handles many standard signals like speech, vocals, bass, guitars, brass and drums; on the other hand, thanks to its hard knee characteristic, it is an absolute multi-talent when it comes to danceable beats for hip-hop, trap, electro, house, techno, and rock. This is exactly where the mpressor can work true audible wonders, both in mastering and mixing.

Due to its sensitively reacting time constants, you will find the right settings quickly and reliably to make your groove tangible. Further creative functions such as the gain reduction limiter, the anti-logarithmic release curves, and the negative ratios make the mpressor a great tool for unimagined dynamic manipulations.

You will soon realize that it is a real specialist and workhorse, especially for creative drum sample editing. You can use the audio filter to influence the overall character of your signal, and the switchable external sidechain input is available for the typical and popular techno and house ducking sounds.

xpressor (19″ and 500 Series version) – The all-rounder

xpressor 19" | sideview white
xpressor 500 by elysia

The xpressor is a true all-rounder with special features and control ranges that give you control over the dynamic spectrum for all kinds of music and signals. 

As a soft knee compressor, the discrete VCA controls very cleanly and inconspicuously – even at very high gain reduction.

We designed the xpressor as a stereo compressor so that you can use it as a bus and mastering compressor. The switchable auto functions allow you to compress even difficult signals like electric basses, piano, and sum signals very unobtrusively. The Sidechain Filter gives you precise control over the bass response, while the Gain Reduction Limiter gives you perfect control over the compression. 

The so-called Warm Mode allows you to quickly adjust the basic sound characteristics. As a true summing compressor, it also has a mix control for parallel compression on board. In the 19″ version, the xpressor even offers an external sidechain input for frequency-selective processing and ducking.

mpressor 500 – Flexible and affordable

mpressor 500 | the compressor from the future | side view

You are looking for maximum dynamic range processing for little money? 

Then the mpressor 500 is the right choice for you. A lot of punch for the bucks! 

The mpressor 500 is a completely discrete Class-A compressor with our typical clear and transparent sound philosophy combined with maximum functionality.

Perfect for your recording or mixing, the mpressor 500 offers you all the important features of its bigger brother. You can use it perfectly for speech, vocals, guitars, basses, and all drum signals. With fast time constants, a hard knee curve, a gain reduction limiter and extensive autofast functions, it offers you a rich bandwidth – from subtle changes to drastic sound design. 

A special feature is the THD Boost function, which allows you to drive the input stage into even more distortion. The fast and accurate LED display gives you precise visual feedback on the control behavior. Even in complex mixes, its clear, punchy and powerful sound will stand out. Our compressor from the future.

For a complete overview of the elysia compressors, please visit our compressor comparison page

We have put together a comprehensive graphical overview of the whole topic to help you make the right decision when buying an elysia compressor. Should you have any further questions, please feel free to contact us at any time.

If you like the blog post, feel free to share it, or just leave a comment below. I am looking forward to a lively conversation.

Yours, Ruben Tilgner


Here is a short elysia compressor FAQ overview for you:

What is an elysia compressor?

All elysia compressors (alpha compressor, mpressor, xpressor) are discrete class-A hardware compressors based on a VCA (Voltage Controlled Amplifier). Still, each of our compressors works in a different way:
The alpha compressor works with a so-called PCA (Passive Current Attenuator). This circuit transforms the incoming signal into a current which is then attenuated under voltage control. The behavior can be compared to that of a VCA with a predictable characteristic curve. The core consists of sixteen discrete transistors which are kept at a defined temperature by a dedicated heating system, avoiding unwanted fluctuations.

The xpressor works with discrete VCA technology.
The mpressor series uses transconductance amplifiers: a differential pair of transistors with a modulated current source controlling the amount of amplification builds the core of this module. A few extra transistors were added in order to further decrease noise and unwanted influences of the control voltage. Most of our compressors are also available as plugins for your DAW and are nice reproductions of the original hardware.

Which elysia compressor can I use for Hip-Hop, Trap and Urban production?

The elysia xpressor is the perfect match if you are looking for a very affordable, great-value compressor. The xpressor has a transparent and punchy sound and is very versatile.
The mpressor is perfect for creative drum sample editing and vocal processing. The alpha compressor is the perfect solution for mastering.

Which elysia compressor can I use for Electronic, EBM, Techno and House production?

The elysia xpressor is the perfect match if you are looking for a very affordable, great-value compressor. The xpressor has a transparent and punchy sound and is very versatile.
The mpressor is perfect for creative drum sample editing and vocal processing.
The alpha compressor can be used perfectly for all mastering compression tasks and can handle nearly all musical styles.

Can I use the elysia xpressor as a de-esser via the sidechain filter?

Yes, of course you can! It is possible to realize a de-esser with the xpressor if the sidechain filter is set to about 1 kHz.
It is important to compress the signal beforehand; otherwise, only the loudest S sounds will be processed.

What does discrete class-A technology mean?

Discrete class-A technology describes how the analog amplifiers are constructed. In contrast to integrated circuits (ICs), the circuits are built from single transistors. Class-A operation ensures that a current always flows through the transistor, which avoids crossover distortion.

Does the elysia xpressor 500 have external sidechain capabilities?

No, the xpressor 500 doesn't have any external sidechain functionality like its bigger brother, the xpressor rack. The xpressor 19″ rack version has an EXT jack with sidechain send and return.

elysia on tour

elysia on tour in the United States


Our fine elysia 500 series audio processors went on tour from Germany to conquer the United States of America.

Fasten your seatbelts and put your seats in the upright position.

Back in the late summer of 2018, we had a crazy idea. This idea led to a 1.5-year-long tour all over the United States and visits to nearly 30 mixing & mastering engineers, producers and musicians. But who, or rather what, exactly was on tour? Let's dive in!

elysia on tour case with 500 series analog modules

If you're reading this, you are most likely in love with analog gear, right? Just like we are!

It doesn't matter if your studio has several racks loaded with analog jewels or you have a small home recording studio with one beloved mix bus compressor – whenever you get your fingers on analog gear, it's always a happy time. Because we at elysia love what we do, we're always happy to see you using our gear. We want as many people as possible to get in touch with it and learn what special things it can do for their music. But what are the options for getting this in-depth experience with the gear, maybe even in your own trusty environment? That's why we decided: let's send our gear on tour!

We loaded nearly all of our 500-series gear into a Neve R10 rack – to be precise, a pair of our then just-released preamp, the elysia skulpter 500, our stereo equalizer xfilter 500, the compressor xpressor 500, our transient designer nvelope 500 and our saturation module karacter 500. Each module on its own is a treat, but all of them at once as a channel strip – that's something you won't forget too fast!

So how much should this hands-on experience cost you? Of course nothing!

We decided to cover all shipping costs. The only investment the participants had to make was their precious time. Honestly, we knew it was something no one had done before and that it was a special offer, but would people react to it the way we hoped they would? Hell, they did! We reached out to so many people and soon realized what we had gotten ourselves into: simply everyone was super excited about it, and within two weeks the list was longer than we ever expected.

How did we choose who would get it?

While we all know the big names, the engineers we all look up to and whom we of course also wished to host our rack, there are so many extremely talented people climbing up the Olympus who may not yet have the resources to just try out $7,000 worth of gear. So we wanted to make this thing different. We care about everyone, whether a Grammy-winning engineer or a bedroom producer. That's when the real work started. We asked ourselves how to ship it and how to make the user experience as easy as possible. How to ease the learning curve of our gear? Because it is far from being a one-trick pony. How to make sure it can be plugged in without a missing cable? How much time would be enough for an unforgettable experience? Many questions, and we needed answers.

The best experience for you!

We had this idea of a perfect experience: you get the rack, you plug it in and start enjoying it immediately. So we even soldered our own custom cables, every cable you would need, including TT phone jacks. We made them durable so they would last the whole tour. How inhumane would it be to plug in the rack you waited several months for, only to realize that one of the cables is broken and you don't have a spare? No one wants to live through that – haha!

We made a short video starring our CEO Ruben Tilgner explaining each module and put it on a USB stick. Probably the worst video production of all time, but everyone loved it as it really helped them dive right into the rack. By the way, we should have known better: the stick got lost pretty quickly along the way. Maybe it's still on the desk of one of the guys, maybe even the one reading this blog.

elysia 500 series rack

We found the right case for shipping, TSA locks to pass customs, and even made ourselves a beautiful custom wooden box for the cables and accessories. Analog gear and wood – nothing looks better together, right? That's when we realized that we wanted to do one more really special thing!

elysia uses a state-of-the-art milling machine to build custom parts for its gear. Why shouldn’t we manufacture something really important? A custom coaster!

You can imagine how sad the day is when the gear has to move on. We wanted to give at least something that could stay forever. None of the guys could believe their eyes when they saw their names engraved in these little but meaningful gifts, and we still see many of them actually in use.

Handmade Custom Coaster as a gift with the names

One last special item we added to the case was a little notebook. We kindly asked the guys to write down a few words for us. Yes, it's sentimental, but the whole thing was so personal. Every participant would hold this notebook in his hands, and as it turned out, there were many!

We were simply overwhelmed when we got the rack back to our headquarters and read all the beautiful messages the guys had written down for us. No other company has such a book. It's just us. And we're proud of it!

With all that said, you know now everything about the development of the tour so let’s dive in and see what the guys thought about it.


Rick King

Rick Kings Notebook entry 1

The first lucky soul was Rick King, a producer, and mixing engineer at his beautiful studio in Paducah, Kentucky. Rick wrote an intriguing introduction to the brand new notebook! Who’s up for an adventure?

“To whomever finds this, you will surely notice a page missing from this book. On it is a map that leads to the elysia gear that I have buried. Find it, and it is yours! – RK”

Rick King

As you can see the first page is really missing. If you ever find this map, let us know! Rick even shared a suggestion for an amazing piece of analog gear that will make any mix better! Thanks for your useful input!

Rick Kings second Notebook entry
Product Idea Rick King

“Elysia. Thank you again for your hospitality and kindness in letting me be a part of this elysia tour. Your gear has been on my radar for a long time, but being able to test it in my own space made all the difference. I can’t wait to put some pieces in my rack permanently! Loved twisting these knobs! – Rick King King Sound”

Rick King

It was a real pleasure to have you as our first host, Rick! Where did you bury our gear?! 


Jack Daniels

The second guy was Jack Daniels; we guess his name won't be hard for most of you musicians out there to remember. ;) By the way, Jack is Rick's best studio friend. Unfortunately, Jack forgot to sign the notebook, but he made this beautiful shot of the rack and we love it!


Welcome to Nashville

When it comes to music, there are not many places that can compete with Nashville. Though it's not a huge city, its musical community is one of a kind. There is probably no other place where you'll find so many music studios so close to each other, and that's for a reason: it would be hard to find another place with so many musicians, audio engineers and producers in one spot. So Nashville was simply the right place to continue, right?

Nashville Map by google
© Google Maps 2020

We visited exciting and talented engineers and producers whose names you have probably already seen many times on social media:

Travis Ball, Colt Capperrune, Kyle Monroe, Michael Frasinelli, Josh Bonanno, Josh Colby, Marc Frigo. 

We really hope we didn't forget anyone! Everyone got the rack for two full weeks, so it stayed in Nashville for over 3.5 months. This is what the guys wrote down for us:


Travis Ball 

“Ruben + Aleg,

Thank you very much for putting this rack together! I have been keen to try some of the offerings from elysia for some time now, very impressed with how everything sounds and put together! Wish I could have spent more time with it! I guess this means I should buy something eventually, haha! Cheers Travis Ball”

Travis Ball

You're very welcome, Travis! We'll gladly watch your career prosper! And thanks for the beautiful pictures!



Colt Capperrune

elysia on tour notebook entry Colt Capperrune

“The week gone by far too fast, I will never forget you dear elysia. May your silky smooth goodness bless many waveforms, and tame all the peaks you meet. Until we meet again, Colt Capperrune” 

Colt Capperrune

This is the most musical message we could wish for, Colt! Thank you so much! 


Kyle Monroe “Tiny Tape Room”

elysia on tour | Rack at Kyle Monroe | Tiny Tape Room

“What a wonderful treat to be a part of this elysia tour. I am very excited to pick up some of these EQ’s in the future! Cheers, Kyle TTR”

Kyle Monroe

Our equalizer will fit amazingly in your newly built studio, Kyle!

Such heart-warming comments! And this is just the beginning!


Michael Frasinelli 

elysia on tour | Michael Frasinelli's Notebook entry
elysia on tour | Michael Frasinelli handmade gift

“Well before these 2 weeks I thought I got to hold off on buying an additional 500 series chassis for a while… I  thought wrong.

I demo a lot of gear, but it is rare that I end the demo wanting to buy every single unit that I don’t already own ASAP. Thanks… I think? As always, fantastic work elysia team. Plus, now you can see why I type and rarely handwrite.”

Michael Frasinelli

It is all about the message, and we love it, Michael!


Josh Bonanno

elysia on tour | rack at Josh Bonanno

“Can’t say thank you enough for including me in this tour. Incredible gear. Truly enjoyed my time with it and look forward to adding a few to my collection in the future.”  

Josh Bonanno


We're very happy to now be a part of your setup, Josh! The video you made while using the rack was definitely one of the most special things from the tour!


Marc Frigo

“Thank you so much for letting me try out this wonderful gear. Really appreciate it! Love what you’re doing, keep up the great work!”

Marc Frigo

It was our pleasure to have you onboard, Marc! 


Last but not least – check out the gear in action and see what Travis Ball did with it while mastering a cool pop track! Very nice!


I hope you guys liked this blog post.

Meanwhile, it would be great if you left a comment and told me whether you like this touring idea. And if so, please share this blog post on social media. Thank you.

Cheers, Aleg

Transient Designer Part 2

Transient Designer Story Part 2/2

In the previous episode of my Transient Designer story, I wrote about the ups and downs and how I came up with the idea to develop the Transient Designer technology, and gave you some insights into my work. I also wrote about the development of my first compressor, the DynaMaxx. If you haven't read the first part yet, please click >here.

(translated from the original German article)


Flash of inspiration: difference – the crucial idea

I'm not exactly sure when, where and how I picked up the topic of "difference". But I think I remember reading about a trick in an issue of Keyboards magazine, using some kind of exciter for samples on the EPS 16+ sampler. One sunny day, I got the ultimate flash of inspiration. The next day brought another lucky circumstance: my boss, Hermann Gier, with whom I shared an office, went on a business trip for two days. Now I was finally able to build my new idea on a real lab board.

Transient Designer Story | Ruben's Lab Board for Development
Ruben Tilgner’s Playground
Transient Designer Story | Ruben Tilgner's Lab Board
Ruben Tilgner's Lab Board | Detail

I then applied this difference idea to the envelopes of my compressor. Instead of just one envelope circuit I had two in parallel with different time constants: different attack times but the same release time. I added a differential amplifier – and all of a sudden the scales fell from my eyes and ears, because I could suddenly see the transients of the signal! With very soft transients both envelopes were almost equal and hardly produced a difference; with fast signals a clear difference appeared, which could be fed into a VCA.

The brilliant aspect of this was that no threshold was necessary, since the difference worked independently of the input level. Next! Another idea popped into my mind: the control voltage could now be positive or negative, which meant that the VCA would either amplify or attenuate. The transients could now be boosted as well as attenuated! I thought that was genius.

I finished my new circuit in a single day. Afterward, I asked myself, what if these envelopes had the same attack but different release times?

So the next day, new luck. I built up exactly this part of the circuit and it worked on the first attempt. Now I could control the sustain! I was beside myself, because I could achieve an amazing effect with just two knobs. I couldn't get the grin off my face.
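
If you want to play with the basic idea yourself, here is a rough digital sketch in Python/NumPy. It is of course not my analog circuit, and the time constants and scaling are arbitrary example values, but it shows the principle: two envelope followers with different attack times, whose difference marks the transients and drives the gain.

```python
# A rough digital sketch of the principle described above (NOT the original
# analog circuit): two envelope followers with different attack times but the
# same release time. Their difference appears only at transients and can be
# turned into a gain signal that boosts or attenuates them, with no threshold.
import numpy as np

def envelope(x, attack_ms, release_ms, sr):
    """Simple one-pole attack/release envelope follower on |x| (x is float audio)."""
    a = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    r = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = a if v > level else r
        level = coeff * level + (1.0 - coeff) * v
        env[i] = level
    return env

def transient_shaper(x, sr, attack_gain_db):
    """Boost (positive dB) or soften (negative dB) the transients of x."""
    fast = envelope(x, attack_ms=0.1, release_ms=100.0, sr=sr)   # follows onsets almost instantly
    slow = envelope(x, attack_ms=30.0, release_ms=100.0, sr=sr)  # lags behind on onsets
    diff = np.clip(fast - slow, 0.0, None)                       # non-zero only around transients
    gain_db = attack_gain_db * diff / (np.max(diff) + 1e-9)      # crude scaling, fine for a demo
    return x * 10 ** (gain_db / 20.0)

# Example: shaped = transient_shaper(drum_loop, sr=44100, attack_gain_db=6.0)
```

Swap the attack times for different release times and you get the sustain control from the story above.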

Day three – Return of the Boss. On the third day, Hermann came back from his business trip and I presented him my new invention. He was immediately enthusiastic and wanted me to implement the product quickly. But…  


Codeword: Yellow Kick Man!

Endless days in the dark with the envelopes – the fine-tuning

So, my boss liked my invention and now wanted the product to be finished as soon as possible. After the first euphoria, I was disillusioned during a simple test with a drum loop. When I started it, the first beat of the loop, usually the bass drum, was very loud, but the following beats were quieter.

Looking for the reason, I found it very quickly: at the beginning of the loop, the difference was simply much bigger than with the following beats. I had a simple analog oscilloscope at hand with which I now tried to observe the generated envelopes.

To achieve exactly this, I had to set the time base of my oscilloscope so slow that only a faintly glowing dot moved across the screen. Despite bright sunshine outside, I completely darkened the room so I could roughly follow the control voltage. It took me about three months to optimize my circuit for the different signal types. With every small change to the circuit, I repeatedly fed all kinds of sounds through it to check whether the respective change had any disadvantages. It was like a microsurgical operation, or finding the famous needle in a haystack.

Complexity

Just to briefly explain the problem: a normal compressor usually has five controls, while my circuitry is about four times as complex, because it contains several time constants and internal thresholds that all have to be perfectly tuned to each other. I could have given the user ten knobs per channel, but the reduction to only two knobs became the key to its success and stands for perfect usability. I was even able to fit four channels into one unit. And then?

The next challenge!

I wasn't really satisfied with the overall sound, because especially with a stronger attack setting, the whole signal sounded a bit too hard and unpleasant. To solve this, I put a low-pass filter behind the VCA and restored the missing highs with a coil filter. With exactly that, the sound became much more pleasant and softer again. To present the overall idea to a selected audience, we made some prototypes which, as usual at that time, consisted of self-etched circuit boards that we then drilled ourselves. Ronald Prent was one of the first to work with these prototypes. He immediately became a big fan of the new concept. Yellow Kick Man, aka the Transient Designer, was born.

Handmade Prototype of the Transient Designer
Kick Man | Handmade Prototype of the Transient Designer (Lo-Res Photo)

What is it? Is it a compressor or a noise gate?

The big speculation during the premiere at Prolight + Sound 1998

The Transient Designer, as my finished product was officially named, was first presented to the public at the Prolight + Sound fair in Frankfurt, Germany, in spring 1998, and I still remember the many questioning faces in the audience. The most-asked question at our booth was: Is this a compressor? A noise gate? What is that?

Transient Designer (2 Channels) invented and developed by Ruben Tilgner, CEO of elysia
(Photo: Transient Designer © SPL Electronics GmbH)
Transient Designer 4, invented and developed by Ruben Tilgner, CEO of elysia
(Photo: Transient Designer 4  © SPL Electronics GmbH)

So I gave interested visitors some headphones to check out the product. After a few seconds of listening, the majority of our booth visitors were amazed and enthusiastic about what was possible with this new Transient Designer. It was a new kind of audio processor that didn't exist before, and I knew that I had created something beautiful and new. In this context, I have to say that I always find it very interesting to experience and enjoy the reactions of potential customers; it always confirms that the long work on the development of a product is worthwhile. At premieres like this, I simply enjoy the audience's reactions and get goosebumps. Definitely deep and emotional memories that are worth looking back on.

Thanks to the positive feedback, also from the international press, as well as the numerous positive reviews, the Transient Designer rapidly became popular and entered national and international studios in no time. 

“The Transient Designer has already earned an entry in the ‘Golden Book’ of renowned studio equipment classics. The intelligent implementation of a simple idea combined with equally simple operation offers truly enormous and sometimes astounding design possibilities. (…) However, such a specific possibility to influence the transient structure of audio signals did not exist before the Transient Designer. Especially in sample-based productions, this FX dynamics processor proves to be a real elixir of life, but also in the world of production with real instruments, there are numerous possibilities of use (see ‘Listening’). So unreserved praise to the manufacturer for this development and a recommendation to you to take a closer look at this extraordinary dynamics processor…”

Studio Magazin | Germany 1999
Ruben Tilgner at a Gig of the Band Muse (FOH).
(Photo: Me in Cologne at a Muse gig / FOH)

The simple operation with only two controls per channel and four channels in one unit was one of the reasons why the TD4 Transient Designer became a successful cash cow. The four channels were, of course, predestined for processing a drum kit. Furthermore, the bang-for-the-buck value was pretty good, too.

The Envelope Conqueror V2.0 – nvelope from elysia

Next Generation Transient Shaper

In 2006 I co-founded the company elysia and took it upon myself to develop many new, high-quality products according to my own taste and experience.

Since I already had the necessary expertise in compressor development from the DynaMaxx, I decided to develop another compressor. I started with the alpha compressor, which was to be a mastering compressor with very special functions. It is still our flagship today and has become a modern classic in the audio world. Some years later, around 2012, I had the idea to develop a new and extended version of the Transient Designer. With all the experience I had gained in the meantime, especially in the design of discrete class-A circuits, I was able to improve the idea fundamentally. Especially on finished mixes, the original circuit did not react as reliably as you would expect.

The detection of transients did not always work perfectly, and the generated envelope amplitudes fluctuated too much. So I decided to develop the circuit again from scratch.

nvelope – Development Board

As a special feature, I planned filters with which you can specify a starting frequency for the attack and a final frequency for the sustain. My basic idea was quite simple: transients always have something to do with fast, high frequencies, while the long release phases of instruments have more to do with lower frequencies.

Since the two filter bands can overlap widely, I had to develop a special bandpass that ensures the frequency response always remains linear. With my discrete design and my own VCAs, the sound became much punchier and clearer, which again is a real improvement. Due to this multiband concept, the processing now sounds more natural and less like a noise gate.
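
In the digital domain, the detection side of that idea can be sketched very roughly like this (this is not the nvelope circuit, just an illustration with arbitrary example cutoffs, and it needs SciPy): the attack detector listens to a high-passed copy of the input, the sustain detector to a low-passed copy, while the audio path itself stays full range.

```python
# A rough sketch of the frequency-dependent detection idea (not the nvelope
# circuit itself). The detection signals would then feed the fast/slow
# envelope pairs from the earlier sketch instead of the full-range input.
import numpy as np
from scipy.signal import butter, lfilter

def band(x, sr, cutoff_hz, kind):
    """2nd-order Butterworth high- or low-pass used only for detection."""
    b, a = butter(2, cutoff_hz / (sr / 2.0), btype=kind)
    return lfilter(b, a, x)

def detection_signals(x, sr, attack_freq_hz=2000.0, sustain_freq_hz=300.0):
    attack_det = band(x, sr, attack_freq_hz, "highpass")   # fast, high-frequency content
    sustain_det = band(x, sr, sustain_freq_hz, "lowpass")  # slow, low-frequency content
    return attack_det, sustain_det
```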

Especially shortening the sustain only in the bass range works much better now. Nevertheless, the nvelope can be operated in the so-called full-range mode, which is similar to the original Transient Designer. 

If you don't need transient processing in a mix, you can also use the EQ mode. So three possibilities in one product, which turned out to be a real advantage.

Transient Designer – The PlugIns in the digital World

At the peak of the DAW mayhem, the TransMod from Sony Oxford was one of the first plug-ins based on my idea and my exact circuitry. Based on the original hardware, Brainworx then developed the corresponding plug-ins for SPL and elysia, which sound really great – but analog remains analog.

Most big DAWs now include transient designer / transient shaper tools, and several software manufacturers have adopted the idea of the Transient Designer. The technology can also be found as a processing tool in many drum sample players. I can only recommend that everyone test the hardware nvelope; you will see and hear for yourself that it is still the benchmark in transient shaping.

Closing words

In conclusion, I have to say:

Yes, the Transient Designer has become a true classic and has revolutionized the audio industry, both analog and digital. These are facts of which I am, of course, very proud. I am also still the only one who has developed the only two analog hardware versions on the market. This shows me how complex my circuit is and how much knowledge and experience it takes to create something like this in the analog domain.

As this idea has spread to many plug-ins, the transient designer and transient shaper have become a standard in audio processing.

In the future, this will surely gain even more importance. Thanks to loudness normalization, there is more headroom available again to let nice transients come through in a mix.

So my number 1 Billboard hit turned out to be a product with which I could make the audio world a bit more beautiful, better, and even more creative. Even if it has not brought me a Ferrari, I’m not really sad about that. What do you think? Leave a comment or share your personal experience. I would be happy to discuss, exchange ideas, or philosophize with you.

I am curious.

Thank you very much for your interest.

Yours truly, Ruben Tilgner

Transient Designer

The invention of the Transient Designer | Transient Shaper Technology

My invention of the Transient Designer – on the trail of transients and my personal adventure with envelopes, during which one of the most revolutionary ideas of the ’90s was suddenly born without me even realizing it.

Part 1/2

(translated from the German blog article)

Many of you may not even know that I invented one of the most influential and revolutionary audio processors of the late ’90s, the Transient Designer. I was young and really didn’t need much money; I was more looking for one specific sound. You’re probably wondering how this came about, what I had in mind – and heck, what does Michael Jackson have to do with all of this?

With this blog post, I will take you on a personal journey into my colorful past and tell you how I unwittingly became the inventor of a technology that has become a standard in every DAW and is still used in countless sample libraries today – an award-winning technology.

The musical needs

It all started when I, as a passionate musician and professional radio and television engineer with a love for analog technology, got a job as a developer at SPL Audio in Niederkrüchten, Germany, in 1995. At that time I had just left my band, where I played keyboards, and started to build my first home studio in my bedroom – of course with the idea of getting rich with my music and selling millions of records. You all know that, don’t you?

I equipped my home studio with a sampler from Ensoniq, the EPS 16+, a Kawai K4 synth, a Roland D70, and a Kurzweil K2000. As was common in the nineties, I connected my collection of fine sound generators via MIDI to an Atari 1040 ST with the good old Cubase installed.

In addition, I had an old-fashioned analog mixer and my speakers, so I could work creatively in my home studio. With this setup, I started to compose my own songs, work on my sounds, and tweak knobs all night long, creating sequences to get the best out of my instruments. At that time my outboard equipment was unfortunately quite poor, and I could only dream of buying a compressor.

The perfect change: Creative Fridays in the company

During my time at SPL Audio, I was frequently asked to work in production, where I was responsible for the technical and acoustic testing routines of their products – a kind of first quality control. A very frustrating fact was that I was sometimes asked to do dull, mind-numbing tasks for weeks. These tasks were really time-consuming and exhausting, and most of the time they hardly required any brainpower. So I decided to use Fridays to give my creativity full scope.

In the morning I would have an idea for a product, which I then wanted to realize in the afternoon. There was cardboard around, commonly used to safely stack several products; for me, it served as paper for sketching the circuit schematics I had thought of. At Friday noon I gathered all my enthusiasm and armed myself with soldering tools.

When I was in a good mood, I developed and built a completely new product from old circuit boards and housings in just one Friday afternoon – in a very Frankenstein-like way and with my own vision: the most important thing was that the product had to be crazy and that it should generate sounds and create noises. One circumstance turned out to be quite convenient: since there were always defects in some front panels, housings, and circuit boards during production, these parts could no longer be used for assembly, and I was able to recycle them sustainably and creatively.

Vitalizing Circuits

For my own vitalization, I often took the circuit board of the Vitalizer and created something completely different. A surfboard? Nah, not really! The nice thing about these circuit boards was that I could use a pre-built infrastructure: the power supply, audio jacks, potentiometers, and switches were already in place for the front panel.

Thankfully, I had a reasonable quantity of op-amps on hand, and even the LM13700, a chip that can be used as a VCA. What a stroke of luck! On the back of the circuit board, I soldered my new connections with several cables and wire bridges that created all the new functions I wanted.

The holes in the front panel were already punched, and I always had to come up with useful features and functions for all of them. Even if I only needed four potentiometers for my new project, I had to think of something useful for the rest, so I used all the leftover holes creatively. If I needed an additional switch, I could easily add one with a drill.

On Friday evenings a good schoolmate was already waiting for me with some of his spray cans. I went to his house with beer and my new product in my bag, because the front panel of my newly created baby still had to be painted. I was limited to exactly the paint left over in his cans. Afterwards, I labeled the front panel with a permanent marker.

That way I created a whole bunch of crazy and creative self-made products for my home studio: filter boxes with LFOs, auto-panners, gates, and bass drum generators, just to name a few.

Ruben’s Rack with handmade processors
The Funk-Maschine
Ruben’s Rack closeup

The Transient Whisperer | Transient Shaping – My First Idea

There I was, working with my collection of self-made audio processors in my home studio, a bedroom producer continuing to refine the sounds of my self-composed songs. Then I fell in love with Michael Jackson’s 1991 album Dangerous. In terms of sound, I found it pretty amazing. Especially the punchy drums that smacked out of my speakers were extraordinary to me. I knew then that I had to recreate this sound for my own songs in my home studio. But how?

(Video: Michael Jackson – Dangerous)

Urgent Requirement

With my equipment, I couldn’t get that sound. At that time the all-knowing internet was still learning – Google and YouTube couldn’t help me either. I didn’t have a compressor back then. My philosophy, then as now: "What I don’t have, I build myself." Around the same time, I had the first idea for the predecessor of the Transient Designer – the Transient Equalizer. Since I was already experienced in creating new products Frankenstein-style, I developed the first prototype of the so-called Transient Equalizer.


The Transient Equalizer

The Transient Equalizer by Ruben Tilgner

The basic principle of this product was similar to that of a noise gate with a threshold control, which also triggered an envelope like in a synthesizer. Additionally, I gave this prototype a decay control to adjust the decay time. As I already mentioned, I had to accommodate eight useful potentiometers to give the front panel a neat face. So I had the idea to add a noise generator, which I could mix through a bandpass into the VCA. I used the last potentiometers for a mix control and a distortion stage for the effect signal. This turned out to be perfect for polishing up lame and nasty snare drums from my sampler. With this prototype I could already make very crazy sounds, but…

Unfortunately, it turned out that this threshold trigger was not always perfect, and I had to optimize it. The following would happen: if my signal was too quiet, no envelope was triggered, and if my snare fills were too fast, no new envelopes could be triggered. Many noise gates still have exactly this problem today. Have you ever noticed it? I soon found out that this particular circuit was not suitable for everything and unfortunately did not work very reliably.
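
For readers who like to see the problem spelled out, here is a minimal sketch of such a threshold-triggered decay envelope. It is not the prototype’s circuit, and the threshold and decay values are made-up examples; it only demonstrates the two failure modes described above: a signal below the threshold never fires the envelope, and hits that arrive while the envelope is still decaying cannot retrigger it.

```python
import numpy as np

FS = 48_000  # sample rate used for this sketch (assumption)

def gate_envelope(x, threshold=0.3, decay_ms=120.0, fs=FS):
    """Threshold-triggered decay envelope that can only retrigger once it has died away."""
    decay_coef = np.exp(-1.0 / (fs * decay_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        if v > threshold and level < 1e-3:
            level = 1.0           # fire the envelope
        else:
            level *= decay_coef   # otherwise it just keeps decaying
        env[i] = level
    return env

t = np.arange(FS // 2)

# Failure 1: a quiet snare (peak around 0.2) never crosses the 0.3 threshold.
quiet_snare = 0.2 * np.sin(2 * np.pi * 200 * t / FS) * np.exp(-t / (0.02 * FS))
print(gate_envelope(quiet_snare).max())  # 0.0 -> nothing is ever triggered

# Failure 2: a fast fill with hits every 50 ms, while the envelope needs far
# longer than that to die away -> only the very first hit fires the envelope.
fast_fill = np.zeros(FS // 2)
fast_fill[::2400] = 1.0
env = gate_envelope(fast_fill)
print(int((np.diff(env, prepend=0.0) > 0.5).sum()))  # 1 trigger for 10 hits
```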


Master of the envelopes – finally, my first compressor!

So there I stood in front of the ruins of my plan, which caused me sleepless nights: envelopes, dynamics, and the dream of my own compressor. With all the time I invested in this project, I simply forgot to keep working on my "number 1" album production. How was I supposed to get into the Billboard charts? But… not without this particular sound! Suddenly, a ray of hope and intuition – a voice telling me forcefully: "A compressor, Ruben! A compressor!" In 1996 I finally developed my first own compressor, the DynaMaxx.

The Challenge

A real challenge for me was to develop a compressor with only one single knob. To realize it, I worked very intensively on how to transform the AC audio voltage into a control signal for a VCA. The rectifier and the time constants in particular were a big challenge. For this purpose I used countless signals from my Kurzweil K2000 to get a compression result that was as unobtrusive as possible. The endless adjustment of the time constants finally led to a control behavior that worked very well on many types of signals. The DynaMaxx was already a feed-forward compressor at that time, and I was also able to realize a de-compressor and an intelligent noise gate.
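
As a rough illustration of that sidechain idea (and only that – the threshold, ratio, and time constants below are invented example values, not the DynaMaxx circuit), here is a minimal feed-forward compressor sketch: rectify the input, smooth it with separate attack and release time constants, derive a gain from the smoothed level, and apply that gain to the untouched input signal.

```python
import numpy as np

FS = 48_000  # sample rate used for this sketch (assumption)

def feed_forward_compressor(x, threshold_db=-20.0, ratio=4.0,
                            attack_ms=5.0, release_ms=150.0, fs=FS):
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    out = np.zeros(len(x))
    level = 1e-9
    for i, sample in enumerate(x):
        v = abs(sample)                            # full-wave rectifier
        coef = a_att if v > level else a_rel       # attack vs. release constant
        level = coef * level + (1.0 - coef) * v    # smoothed "control voltage"
        level_db = 20.0 * np.log10(max(level, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)   # static gain computer
        out[i] = sample * 10.0 ** (gain_db / 20.0) # the "VCA" applies the gain
    return out
```

Because the control signal is derived from the input before any gain is applied, the topology is feed-forward. Running the same structure with a ratio below 1 turns the gain reduction above the threshold into a boost, i.e. expansion, which is loosely the direction a de-compressor takes.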

My first compressor became a real success! It was quickly used in many studios and was also appreciated in live environments because it delivered fast, good results.

DynaMaxx Compressor from SPL
(Photo: DynaMaxx © SPL Electronics GmbH)

It was exactly through this development that I gained real expertise in the sidechain of a compressor. Had I perhaps finally conquered the envelopes? More to come in the second part of my personal Transient Designer story.

In the meantime, please have a closer look at our nvelope.

Please feel free to share, comment and discuss with me.

Yours truly, Ruben