Core competence compression – the genesis of the elysia xpressor|neo
Question: When is good, good enough? We asked ourselves this question more than once during the relaunch of the elysia xpressor|neo. Does it even make sense to send a perfectly functioning product like the xpressor to the in-house tuning department in search of possible performance boosts? This blog article is dedicated to this topic, offers a condensed outline of elysia's company history, and addresses the question: what actually makes a first-class audio compressor?
In the beginning, there was the alpha
With a top-down approach, we started in 2006 with the alpha compressor, setting our own standards in terms of quality and sound. One year later the mpressor was released, and shortly after that the museq. This trio of good sound is the foundation of our elysia product portfolio. In order to make the elysia sound accessible to less well-heeled users, we gradually introduced the 500 series modules to the market starting in 2009.
Following our own history, the first 500 module was also a compressor: the elysia xpressor 500, which despite its compact form factor carries essential parts of the alpha compressor's DNA. It is a discrete VCA compressor equipped with a soft-knee sidechain.
Feed-Forward
The topology is "feed-forward", which is why negative ratios can also be implemented with the xpressor 500. Further features such as a mix control, Auto Fast, and Warm complete the range of functions. The xpressor 500 is without a doubt an extensively equipped signal compressor, which is amazing considering the 500 form factor. We launched this compact compressor in mid-2010 and it turned out to be a real summer hit: the first run of a hundred units sold out in a short time, proof that our customers understood the concept. The fact that a 19″ rack version was added to the portfolio a year later was therefore a logical step. The xpressor is what can rightly be called a success story, and this can be attributed to several reasons.
In any case, it is a fact that 15 years ago there were not many really good VCA compressors available on the market. The xpressor was and still is one of the anchor products that have left a lasting mark on our elysia philosophy and the character of our products. Therefore, the wish for a contemporary, revised xpressor version had been around for quite some time. But why are compressors so important in our product portfolio, and why should every musician, producer, or studio owner have a high-quality compressor in their arsenal?
Compressor anatomy
The actual task of an audio compressor is basically quite simple: volume differences in the useful signal are to be evened out according to taste. That sounds simple, but technically it is not trivial, since the characteristics of different signal sources can vary enormously. Even within one signal family (e.g. vocals), the range is enormous.
Low notes are usually quieter and are perceived psychoacoustically with less emphasis than mid-range and high-pitched vocals. In addition, almost all natural sound sources contain volume modulations that cause more overtones (partials) to be generated at higher levels.
If you try to embed such natural, highly dynamic signals into a comparatively static mix, you can hardly avoid additional dynamics processing. The tool of choice is a compressor.
How would an xpressor, for example, perform this processing? The decisive control element in this case is a VCA (voltage-controlled amplifier), which can change the volume of the useful signal under voltage control. Thus, the quality of the VCA has a decisive influence on the control process and ultimately on the sound quality.
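To make the basic principle tangible, here is a deliberately simplified, purely digital sketch of a static gain computer with threshold, ratio, and make-up gain. It only illustrates the concept; it models neither the xpressor's analog VCA nor its attack and release behavior, and all parameter values are arbitrary.

```python
# Minimal, purely illustrative static compressor gain computer (not elysia's circuit).
import numpy as np

def compress(samples, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
    """Downward compression on a block of float samples in the range -1..1."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(samples) + eps)     # instantaneous level in dBFS
    over_db = np.maximum(level_db - threshold_db, 0.0)    # how far the level exceeds the threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db  # reduce the overshoot, then add make-up gain
    return samples * 10.0 ** (gain_db / 20.0)

# Quiet passage followed by a loud burst: after compression and make-up gain,
# the two are much closer together in level.
signal = np.concatenate([0.05 * np.ones(5), 0.9 * np.ones(5)])
print(np.round(compress(signal), 3))
```

In this toy example the 18 dB gap between the quiet and the loud passage shrinks to roughly 12 dB of gain reduction plus 6 dB of make-up gain, which is exactly the "evening out plus re-amplifying" described above.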
For a better view
For a simpler explanation, a comparison can be made with optics. If you want to look into the distance, you need binoculars. The better the mechanics and the quality of the glass, the sharper you can see the object in the eyepiece. In this context, an interesting analogy can also be made between analog and digital. A high-quality digital camera has an optical zoom to magnify distant objects and make them as sharp as possible. A smartphone, on the other hand, is often only equipped with digital zoom. The more you work with digital zoom, the coarser the image becomes.
This inevitably leads to a loss of quality. The situation is similar with a compressor. If you want to compensate for large differences in level, you have to bring the compressed signal back up after processing with the help of the make-up gain, i.e. re-amplify it. The quality of this catch-up process is essential for sound quality: do all the subtleties come to the front, and are the quiet parts of the signal adequately amplified without bringing unwanted artifacts to the fore?
Is your compressor able to present all the subtleties of the quiet parts of the signal in a striking way after large level jumps have been processed? Digital compressors in particular are often at a disadvantage here, because the make-up gain is applied in the digital domain and therefore has to contend with resolution problems similar to those of a digital camera zoom.
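The following toy example only illustrates the "digital zoom" analogy under a deliberately crude assumption, namely coarse fixed-point quantization before the gain stage; it makes no claim about any particular converter or plug-in.

```python
# Toy model of the "digital zoom" analogy: boosting a very quiet signal after
# coarse quantization only magnifies the quantization steps, while boosting it
# before quantization preserves the waveform. Purely illustrative assumptions.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
quiet = 0.0001 * np.sin(2 * np.pi * 5 * t)           # very quiet sine wave

step = 1.0 / 2 ** 15                                  # 16-bit quantization step
boost = 1000.0                                        # +60 dB of make-up gain

gained_after_quantizing = np.round(quiet / step) * step * boost
gained_before_quantizing = np.round(quiet * boost / step) * step

# The first signal collapses to a handful of coarse levels; the second keeps its shape.
print(len(np.unique(gained_after_quantizing)), len(np.unique(gained_before_quantizing)))
```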
Evolution
With the classic xpressor, the redesign for the neo version starts from a very well-positioned basis. The tuning of attack, threshold, release, and ratio on the one hand, and the numerous additional features like auto fast, log release, warm, and parallel compression on the other basically cannot be improved any further. Therefore, every user of a classic xpressor will immediately feel comfortable with the workflow and the way the xpressor|neo works.
As developers, we nevertheless asked ourselves, where is there still sound capital to be gained? What can be done to display even more subtleties in the sound? What other adjustments can be made? This is not a trivial task, especially since some sound phenomena simply defy evaluation by audio measurements.
For example, improved three-dimensionality in terms of stereo width can only be determined in complex listening tests. The fact is: there are definitely starting points and options for improving the sound. However, you have to be able to think outside the box and look in the right places. The classic xpressor has been available since 2010. Since then, the world has moved on: our expertise has evolved, and modern components and assemblies open up new (sound) possibilities.
A new research and development approach
Ideas and improvements sometimes come from unusual disciplines. At elysia, we always take an interdisciplinary approach, which is why the redesign of the xpressor uses a process whose origins lie in circuit board design and, at first glance, doesn’t have much to do with audio processing.
Originally, it was an attempt to optimize the grounding concepts and voltage supply of critical components such as FPGAs, microcontrollers, and DSPs. We applied this method to the design of audio circuits on a test basis and were surprised to find that audio circuits can be supplied with current peaks and the corresponding voltages much faster as a result. The result is a significant jump in signal quality. In parallel, we are always on the lookout for improved components, especially components that were not yet available in 2010, as these carry potential for a performance increase.
Especially due to the requirements of modern DSPs and switching power supplies, some new electrolytic capacitors have been developed in the last few years. These special electrolytic capacitors have lower equivalent series and parallel resistance. We have tested these components, among others, as coupling capacitors or as buffering for the power supply, with partly astonishing results.
Especially when you consider that these electrolytic capacitors were not originally designed for use in audio circuits. This is how you can reach your goal even via supposed detours. Due to these “extra miles” the xpressor|neo has an improved impulse behavior and a more precise transient reproduction, which can also be proven by measurements.
Encore
But that's not all. We have given the input circuit an additional filter in the form of a small coil, which provides special RF suppression and has a positive effect on the overall sound. Throughout the circuit design there is a reference voltage that drives the discrete circuits. This has been completely redesigned and is now more resilient to disturbances on the voltage supply. As an aside, this is a thorny ongoing issue with 500 series modules.
We cannot know which 500 series rack our customers use for their modules. Depending on the manufacturer and model, the voltage supply of these racks is of varying quality, and you often search in vain for concrete information about supply noise margins and other relevant values.
The xpressor|neo, on the other hand, is particularly well positioned in terms of its power supply. The VCAs are another important lever for increasing performance. In the xpressor|neo we have reworked the control of the VCAs, resulting in improved stereo separation. In addition, the VCAs in the neo are driven fully balanced, which audibly benefits the stereo image. We were able to further improve the impulse response by revising the output amplifiers, which have been given new output filters.
More is more
In addition to the pure compressor circuit, the xpressor|neo brings improvements to the additional functions. For example, the sidechain is more finely structured to allow the suppression of artifacts at the level of the voltage supply. The former 2-layer board has become a 4-layer board with a large ground plane, which is important for minimizing external interference.
It also provides a low-impedance connection between the power supply and the audio circuits, which can then be supplied with power more quickly. The sum of these improvements makes a clearly audible difference. The classic xpressor is an audiophile tool on a high level, but in direct comparison the xpressor|neo is sonically a step ahead. The transient reproduction in particular sets new standards, the stereo image is more three-dimensional, and the overall sound simply shows more spatial depth. The bass range is extended and the mids resolve more finely; the sound quality is outstanding for this price range, and we do not say that without pride.
Compression expertise in a new look
The xpressor|neo is not only a clear power-up in terms of sound; its appearance has been improved as well. We have beveled the edges of the housing, and the focus is now confidently placed on the center, where the company logo and the device name take their VIP place. This makes the renaissance of the xpressor|neo a rounded affair visually as well. In retrospect, we didn't realize how much work the new edition of the neo would require.
Especially since it was not clear whether and to what extent the basic version of the xpressor could be further optimized. However, we are more than satisfied with the results, and it is still amazing what potential for improvement can be unlocked with new design methods and improved components. Even users who already have an original xpressor in their inventory should definitely try out the xpressor|neo. Especially in critical applications such as bus processing or mastering, you will be rewarded with a new sound quality that undoubtedly justifies an upgrade.
There are a striking number of passionate people in our industry, both on the manufacturer and user side, who talk about their work with glowing eyes. In the vast majority of cases, I look for my interlocutors on the user side, but it’s not always so easy to tell user and developer apart, because there are also contemporaries who – for good reason – are on both sides. One of these audio enthusiasts, who enjoy the privilege of being able to build the equipment they would like to use themselves, is called Ruben Tilgner and is the owner of the German analog equipment specialist elysia, which has been operating since 2006 with the clearly defined goal of giving analog technology an innovative, forward-looking face. Discretely constructed Class A technology forms the basis for the kind of sound quality that Ruben Tilgner wants as a user, without overlooking the fact that the contemporary studio is dominated by the DAW, i.e. digital technology.
After a longer Corona-enforced break, I took the opportunity to visit Ruben in his high-tech wizard's kitchen. With an unmistakable gleam in his eyes, he shows me the production department, which has been optimized for self-sufficiency and now also benefits from automated SMD assembly and metal processing. But actually, we want to spend the day as usual with 19-inch erotica and nerd talk, for which we go to a very special room that, thanks to its careful acoustic design, can alternatively be used as an inspiring recording room, meeting room, or cozy lounge. A few doors down, there is a sound control room equipped with a conspicuous amount of elysia equipment, just as if there were a source of supply in the immediate vicinity.
Little Ruben, who was interested in lighting effects from a young age and wanted to know exactly where the sounds his toy car made were coming from, now belongs to an endangered species of experts who are considered the guardians of analog technology and who use their brains and hearts to ensure that valuable know-how in the service of good sound is not lost. Like most of my interviews, this one starts with the question of how it all began… in this case with a toy car disassembled into its component parts!
Ruben Tilgner: I’ve always been curious to find out how technical things work. When I was a little boy, I got a toy car for Christmas and I wanted to find out where the noises it made came from. A week later, it didn’t drive anymore because I had taken it apart into its component parts. I was maybe six years old then. A few years later, my first contact with electronics came through my cousin, who was training to be a radio and television technician. My father was very supportive and recognized my enthusiasm. At the age of ten, I started taking piano lessons, so you can almost imagine how music and technology connected in my mind. Oddly enough, at first, I was more focused on light, on everything that flashed and glowed.
The light organs and disco balls in party basements motivated me to experiment with light effects in my childhood room, for example with a rotating bicycle lamp on my record player. Apparently, I was quite talented and very quickly had a small workshop in the basement for tinkering and soldering. You have to remember that there was no YouTube at that time, but I had three or four books and just tried wildly. At sixteen, I started an apprenticeship as a radio and television technician, practically my entry into the ‘real’ world of electronics.
On the very first or second day of my apprenticeship, I came home and my father said that the TV had stopped working. I found out that some diodes in the power supply were broken, got the spare parts from my supervisor, and the TV worked again. From a technical point of view, I didn't get much out of that time, because the company where I was trained wasn't a big one, but in retrospect, thanks to my personal initiative, it was the perfect foundation for analog circuitry, because all devices in consumer electronics were still discrete. If you unscrewed an amplifier back then, you wouldn't find any ICs; everything was built with transistors and resistors.
I still have a book from that time, which I like to look at because it's amazing what cool circuitry the developers came up with back then. I have a lot of respect for the engineers who developed color TVs and VCRs in those days. They were real aces. At the same time, I started playing in bands on piano and electric bass, for a few years as a bass player with self-composed songs, and later as a keyboard player. The first thing we had, however, was of course a lighting system with a corresponding mixer, all built by ourselves.
Actually, we had more light than sound (laughs). During the apprenticeship, I already realized that pure repair work would become boring in the long run, because at some point everything was based on routine and experience – always the same faults that you knew practically in your sleep. After the apprenticeship came civilian service and a vocational baccalaureate. While I was finishing school, a buddy of mine told me that he knew of a company where you could work alongside your studies. The company was called SPL and was located nearby. So I started to work there in production – assembling boards, soldering, testing. At some point, I was asked if I wanted to be part of the team. It was a great time, a young team, but also successful products in exactly the field I wanted to work in, without actually knowing it at the beginning. Together with a friend, I came up with the idea for a kind of 'Exciter', which we called 'Freshman'.
But the topic was shelved for the time being, because SPL hired me permanently. I quickly learned the ropes and began to understand the essence of dynamics processing, something I had never had any contact with before. I also saw how devices were built and how to make the whole thing efficient to produce. By reworking existing products, I then arrived at completely independent developments, such as the Dynamaxx, which was launched in 1997. The brief was to design a compressor that was as simple as possible. The thing already worked great, even with complex signals, and was used a lot in the live field. It was a lot of fun to work on the details of such a circuit and to experience how the components 'communicate' with each other. Anyway, I noticed that my musical side suddenly became very important, and I designed the control processes with this in mind.
Fritz Fey: You stayed with SPL as a developer for a long time before I heard about your plans to start your own business. Did you need this freedom for your personal development?
Ruben Tilgner: After ten years, I naturally had my own head and it was no longer so easy for me to develop ‘on assignment’. My later partner Dominik Klaßen originally wanted me to build a bass preamp for him. That’s how we got to know each other. The preamp was never built, but I started to realize more and more of my own ideas in circuits, which later formed the basis for the vision of the first elysia product, the alpha compressor.
The acoustically high-quality recording room is a practice space for sound and work processes, but can also be used as a meeting or training room and a cozy lounge.
Fritz Fey: Earlier we talked about simplifying dynamics processing. The alpha became the complete opposite of that, with what felt like eighty parameters (grins)…
Ruben Tilgner: The alpha was intended to be a high-quality mastering toolbox, but the basic quality of the stages was created when collecting ideas for Dominik’s bass preamp. That was such an increase in quality that it practically called out to become the basis for a complex circuit design. This drove my innovative spirit on how to make a device that would work differently or better in sum than a normal compressor. The first sketch I made for it ended up being almost 1:1 the alpha compressor. I took the prototype to various studios to collect reactions because I wasn’t a mastering engineer. Rather, I was looking at it more from the technical side, but I also wanted to create something new without having been asked for it by users.
Dissatisfaction is a good driving force for innovation. In mastering, 'recording' compressors were actually being used, which often became audible at just one dB of gain reduction. There had to be more to it than that. At that time, I was also regularly on the road as a live mixing engineer, and I didn't really like the analog compressors that were being used. Vocals demand a lot of punch from a compressor, because the forces at work at a concert like that have to be tamed somehow.
When I took my alpha prototype with me, I noticed how absolutely upfront the vocals were, without any control action or signal degradation being audible. A compressor like this should be able to handle 10 dB of gain control without you noticing much of it. In the end, this also suited mastering, even if you certainly don't control 10 dB there. If you look at two cars driving at 100 km/h, it's a different feeling to sit in the S-Class than in a small car. Actually, the alpha defined the entire sound philosophy of the company.
Fritz Fey: The alpha was also a real statement for me at the time: ‘This is what we can do’. In principle, all subsequent products were derived from this template, which, by the way, also became more affordable (grins).
Ruben Tilgner: When you see the mechanical effort that went into this device, which is still a lot of work in manufacturing today, you know that we wanted to set an example in every respect. At the time of its creation, it was common in mastering, for example, to overdrive the A/D converters to be really loud. My idea was to solve this in a more elegant and controllable way, by installing a 'Soft Clipper'.
Actually, I could hardly be stopped by my will to innovate. I don't know whether a mastering engineer would have come to me with this idea. I simply wanted to offer a lot. The continued interest in this device shows to this day that it was a good idea to put so much thought into a product and to push the quality to the extreme. I don't think the user can define such quality; I need a musical and technical understanding to recognize such a need. I had a vision myself of how such a device should sound – bigger, more open, more spectacular, more emotional. This vision drives me to this day to find the direction, even if the solution sometimes lies not in a magical high-tech component, but quite mundanely in the design of the power supply.
Fritz Fey: It's been a long road, from a boy spinning a bicycle lamp on a turntable to a developer designing mastering-grade equipment…
Ruben Tilgner: That’s right (smiles). In my last years at SPL, I started to get involved with discrete circuitry. The first products I developed there were rather classically built with op-amps. After that, the next step simply had to come, which I saw in discrete circuit design. I taught myself most of what was necessary for that. For example, the SPL gain station uses a discrete operational amplifier that I developed. But before that, there were many experiments, measurements, and listening sessions. It’s a bit like Lego. Sometimes you don’t immediately understand what happens in such an experimental circuit.
Fritz Fey: I’ll translate that as passion, motivation, diligence, perseverance… you just have to go where it hurts. Having an idea at four in the morning and then sticking to it… right?
Ruben Tilgner: On Saturdays, when no one was in the company, I tried things out and carried out series of measurements. I was already working on the alpha in my parents' basement while I was still at SPL. The crucial thing is actually to have a goal. Nine attempts fail and the tenth is the one. If you give up the third time, you won't achieve anything. You have to overcome failed attempts and not get discouraged. Fortunately, analog technology was still available everywhere during my apprenticeship. Anyone who wants to get into the subject today will certainly have a harder time.
Fritz Fey: What I took away from my friendship with Gerd Jüngling over decades is that analog technology is not a sober science, but a complex ‘living’ organism…
Ruben Tilgner: That’s true. If you take a capacitor, for example, there are dozens of different types. You can measure everything, but you have to transfer component properties to the audio level first, and unexpected things often happen there. But as I said, you have to have a goal and know where you want to go as a developer. As a sound engineer, I also need a sound concept. For me, this sound philosophy only came into the game in my late 20s.
Fritz Fey: Is it always your own vision or do you also listen to the market? In my experience, users are not so goal-oriented in their wishful thinking. How do you find out what the market wants?
Ruben Tilgner: If you ask like that, the SPL Transient Designer would probably never have existed. No user would have had the idea for this product. Of course, there is always a bit of market analysis involved, but I see myself much more as an innovator. My idea for the Transient Designer came out of a listening experience. I was sitting in my little home studio and listening to the latest Michael Jackson album, which had incredibly loud transients. How do they do that? I didn’t even have a compressor to experiment with.
So I built it myself. Within two days I had developed a prototype on a plug-in board that had an attack and a sustain control. There are now lots of digital ‘replicas’ of this concept, but the SPL Transient Designer and the elysia envelope are practically the analog originals.
With the alpha compressor, I didn't ask anyone at first either. The mpressor was already a kind of offshoot of the alpha, born out of the experience of what 'sick' settings some engineers dial in, which I would never have thought of. In this respect, the mpressor was requested by the market, but in the end it had functions that users would never have thought of themselves.
Fritz Fey: There are now various plug-in emulations of your devices. Aren’t you creating competition for yourself that doesn’t have to be there?
Ruben Tilgner: It's a different platform on which a lot of people work. But there are also many others who want to work analog or at least hybrid. The plug-in is a digital implementation of the idea behind the analog original. If you take the example of the mpressor again, you can do very crazy things with this device, and the plug-in can do them too. This gives the digital creation its absolute reason to exist, because it is clearly different from other compressor plug-ins. But when you get into the finer details, you realize that the hardware is playing on a completely different level. And since not everyone can afford such a hardware compressor for a few thousand, it's good to have a 'digital replacement', which can also be used differently thanks to the many possible instances.
We hand over the hardware to the software engineers with certain measuring points in the circuit so that the basic character of a device is hit very accurately. Nevertheless, differences remain, because an analog amplifier cannot be emulated so exactly in detail without completely eating up the CPU power. The control voltage in a compressor can be reproduced digitally very well, but the VCA cannot be reproduced down to the last detail, if only in terms of bandwidth. In the measurement cycle for the alpha compressor, for example, I use a 100 kHz square wave signal and look at how the edge looks on the oscilloscope. Harmonics are generated that go up to 3 or 4 MHz. The amplifiers operate in this bandwidth. That’s where analog technology makes a special difference.
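A short side note on the numbers: an ideal square wave contains only odd harmonics, whose amplitudes fall off only as 1/k, so a 100 kHz fundamental still carries appreciable content in the low megahertz range:

$$x(t)=\frac{4}{\pi}\sum_{k=1,3,5,\ldots}\frac{1}{k}\,\sin\!\left(2\pi k f_0 t\right),\qquad f_0=100\ \text{kHz}\ \Rightarrow\ k=31 \mapsto 3.1\ \text{MHz at }\tfrac{1}{31}\ (\approx -30\ \text{dB relative to the fundamental}).$$

An amplifier that is to reproduce those edges cleanly on the oscilloscope therefore has to pass signal components well above 3 MHz, which is exactly the bandwidth argument made here.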
Fritz Fey: Is such a plug-in also a kind of promotional tool for the analog device?
Ruben Tilgner: Definitely. I know about users who first used the plug-in and then decided to use the analog device. I honestly didn’t even suspect that it could develop in this direction. In our field with 19-inch devices and 500 modules, this is even more conceivable than with analog mixing consoles, which are light years away from the plug-in in terms of price or are even no longer being built.
Fritz Fey: The lion’s share of plug-ins are emulations of an analog original. Could this eventually lead to analog devices only being developed as templates for plug-ins?
Ruben Tilgner: Basically, from my point of view, digital technology has been reaching its limits for some years now, because the flood of software and digital hardware has not reached what good analog products can do. On the other hand, I also see a certain stagnation in the analog sector.
Here, too, analog equipment concepts are copying themselves to some extent. Many things were already developed in principle thirty or forty years ago. Of course, some of these are excellent devices, but I miss the innovation, the different concepts, and the independence. Digital technology is actually more ‘original’ in this respect. So the question is not whether analog technology retains its right to exist, but where the innovation process remains. Sound in itself isn’t all that innovative, but it is of course the domain of analog technology.
Fritz Fey: Plug-ins today represent an unbelievable variety, what feels like a hundred equalizers, compressors, reverbs, saturation, and tape machine simulators. Everything is in unmanageable quantity and for unbelievably small money. What is there to be said for analog technology with its manageable range of products, usually for a relatively large amount of money?
Ruben Tilgner: Many promises in the digital world are not fulfilled, and you can buy a lot of EQ plug-ins without experiencing a real improvement in the sound. Anyone who has ever worked with a good analog EQ will know how big the leap to a new level of sound quality can be. The trick is to use a few analog tools as efficiently as possible, because the secret does not lie in opening countless instances or storing settings.
When I'm recording with analog gear, I'm also making a sound decision, and all I have to do in the DAW afterwards is pull up the fader and the signal is there. That's where I really save time. As I said, it's in the world beyond 20 kHz that things get really exciting. You can hear these differences even when comparing discrete analog devices with those built with integrated circuits. Analog is not better by definition. Some amplifiers manage 10 volts per microsecond.
At a sampling rate of 48 kHz, we are at about 20 microseconds from one sample to the next. That’s the resolution range we’re talking about, and that’s often called the mojo or emotional level in analog. There is a difference between digitally recording a signal that has been processed in analog and processing it on the digital level. You simply have to look for coherent approaches to integrating analog technology as perfectly as possible into the modern DAW environment. A large analog mixing console is certainly not the solution, if only for economic reasons.
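A quick back-of-the-envelope check of the figures quoted here: at 48 kHz the sample period is roughly 20 µs, while an amplifier with a slew rate of 10 V/µs traverses a full 10 V swing in about one microsecond, i.e. in roughly a twentieth of a single sample period.

$$T_s=\frac{1}{48\ \text{kHz}}\approx 20.8\ \mu\text{s},\qquad t_{\text{swing}}=\frac{10\ \text{V}}{10\ \text{V}/\mu\text{s}}=1\ \mu\text{s}\approx\frac{T_s}{20}$$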
Fritz Fey: Not only plug-ins adorn themselves with the emulation of famous analog classics and brands, but numerous analog developments are ‘inspired’ by historical circuits.
Ruben Tilgner: That’s right. I think every manufacturer has its own ‘fingerprint’ or sound philosophy and I think it’s also legitimate to implant a proven sound character into a new device. It’s just a pity that you see proportionally very few new developments and ideas. There are so many 1176 clones or Fairchild replicas. This all sounds very nice without any question, but is this really the way forward for analog?
Some of the schematics of these classics are in the public domain and then it seems obvious to recreate something like this, maybe even improve on it. Just as the analog world copies itself to some extent, just as the digital world copies analog technology, music and sound copy each other all the time.
I miss inspiration and innovation here. The tutorial videos, perhaps even unintentionally, constantly set rules and provide recipes for how something should be. There, as well, ways of working are constantly being copied. Maybe it’s too uncomfortable or too risky to leave the paths set by professionals?
Fritz Fey: Of course, this also has to do with the “amateurization” of the industry, with people who want to get to the top quickly and then look at the people who are already there.
Ruben Tilgner: The people who are already at the top have certainly experimented a lot to get there. Those who emulate that forget that you can’t put a template over any musical performance and copy success or ‘quality’ with it. You have to reorient yourself every time, which shows how elementarily important it is what happens in front of the microphone and that you are always challenged to react to it individually.
Maybe this isn't a popular thing to say, but I think it would be cooler if recordings were only made by people who have the skills. Because of the studio structures at the time, this was practically inevitable. In addition, the recording industry has let the scepter be taken out of its hands and has thus deprived itself of an important task, namely discovering and promoting talent. The market today seems to be dominated by people who can do everything, but only a bit of everything…
Fritz Fey: When I look at your product line, I see EQs, dynamics processing, saturation, and many special shapes and functions… what does ‘innovation’ mean to you in terms of analog equipment? It would have to be something that breaks through previously known boundaries…
Ruben Tilgner: It’s not the completely new processor that you come up with, it’s more the use of the devices. That’s where you have to think a bit bigger and look at how music is created or generated. I would compare it to MIDI programming a guitar that can’t do what a real guitarist plays.
The guitarist shapes his sound with effects devices directly as he plays, as a creative reaction. Through the feel and the sound, this musical creativity also arises in the studio with analog devices. The best example is analog synthesizers, which became popular in the early 80s. Although there are tons of digital emulations, analog reissues and replicas are extremely in demand again right now.
For one thing because of the sound aesthetics, which really are different, but also because of the feel. If you combine this with digital possibilities in the right way, analog technology takes on an outstanding significance again.
Fritz Fey: I would attribute the existence of numerous digital controllers to this, some of which even exactly reproduce the surface of the plug-in. Would it also be conceivable to control an analog black box with a plug-in and thus combine the advantages of both worlds?
Ruben Tilgner: Technically, this is certainly both conceivable and feasible. You have to think about how to integrate it sensibly into a production process because this combination presupposes that I want to edit an analog sound result afterward. Of course, analog technology suffers from a lack of recall. For me it’s like this: the more I use analog technology, the more final decisions have to be made at an early stage. In the days of analog tape machines, you were limited to 24 tracks and had to figure out a strategy for getting along with that.
This limitation is exactly what challenges you creatively. In the DAW, you’re easily running 80 tracks for some little ditty, with at least two or three plug-ins on each channel. When you think about the number of parameters you have to control and keep track of, it’s hard to imagine. My analog idea is to work differently, with analog processing for the recording that you commit to. Then there’s no need for recall because everything that’s recorded already sounds the way it should. Why do all sound decisions necessarily have to be postponed? Of course, you also have to develop a sound idea and a musical goal, and that’s something that’s possibly lacking today.
Fritz Fey: Do you have any new equipment concepts in mind that you can already talk about?
Ruben Tilgner: Yes, but still very vague. I’ll put it another way – there are a few things that work much better in analog than in digital. If you link analog devices, it happens without latency. If the linking is not serial, but parallel, it gets even more complicated. If you implement something like this cleverly, you can gain various advantages from it. Another aspect is that for me the analog is always the original. In the digital world, there are always only images of it – of a particular synthesizer, guitar amp, microphone, or drum kit. The development is clearly going in the direction of ‘imitation’ or ‘virtualization’.
In my opinion, the path should lead in the other direction again. Analog is not a counterpart to digital technology; all musicians and all instruments are analog too. Of course, it takes much more effort to work with microphones and to determine the final sound with them. We have to learn again to define the quality at the beginning. In the process, I make mistakes and have to try things out…
Fritz Fey: This kind of production, which has existed before, is considered a luxury today. You need good-sounding rooms, many microphones, musicians – everything is very expensive. You can get that 'sort of' into the computer for $25.90. We are sitting here in a room that has been acoustically planned and developed. Is this your private music playground or also the test station for your analog developments?
Ruben Tilgner: I have this room to understand the production process, but I also make recordings here with artists for my own projects. It’s a certain luxury that the company affords, but I just want to figure out how to get an optimal signal when recording. How do microphone preamps differ, how do microphones sound? Where can I use analog equipment efficiently? The quality is most likely to be decided at the source. It’s absurd to throw a lot of plug-ins at what is actually a bad signal to make it usable. Where did the magic come from in the earlier recordings that is so often missing today? The technology I develop is only a small part of this process, but it has to be applied at a crucial point. Our industry tries to convey to users that the tool is paramount.
The sophisticated sound control room is a test station for equipment developments, but also a playground for the company’s own productions.
Fritz Fey: You also have a control room a few doors down, which was planned and built by Dennis Busch. Do you currently have your own music project?
Ruben Tilgner: Yes, I do, but without commercial interests. I am currently working with an artist, composing and producing songs with her. But of course, I don’t have the time to make more of it.
Fritz Fey: Another topic – when you showed me the production earlier, I had the feeling that you had a certain prophetic gift because it looks as if you had already started a long time ago to make yourself more independent in terms of production. Did you foresee the current procurement problems that many companies are suffering from?
Ruben Tilgner: Of course, for me as a developer, the production process is a very exciting matter. Many years ago, I had the opportunity to look at manufacturing at Rohde & Schwarz. Back then, when elysia moved from the basement here to this building, I started thinking about this topic. It has less to do with prophetic gifts and more to do with implementing production sensibly, especially because of the small quantities we produce compared to large manufacturers.
You have to spend a lot of money to buy the appropriate machines. I had the idea of how we produce today five or six years ago. It has proven very useful to be able to control delivery times and be more independent in the area of metal processing or SMD assembly. What we practice here could be called ‘professional small series production’. Of course, there are not just people here who solder everything together by hand, which is always so aptly described as ‘manufacture’.
In another environment, our SMD pick-and-place machine would perhaps assemble 200 boards, in our case only 30, but with the same professional quality. The metal parts, which we mill completely ourselves, also give us a very high degree of independence, but we also learn about the manufacturing processes, for example how to simplify a changeover from one product to another.
Fully automatic SMD pick-and-place machine for small series with professional quality requirements.
Compact high-tech CNC milling machine for a wide range of metal parts, from the front panel to the adjustment knob.
Perfect for professional small series: a selective soldering system for perfect solder joints.
Fritz Fey: In the meantime, are you experiencing what other manufacturers are also complaining about, namely poor availability of components and/or exorbitantly high prices?
Ruben Tilgner: Let’s put it this way – in electronics purchasing, there have always been problems with immediate availability when you have to buy 600 different components, as we do. For example, one and a half years ago we already had procurement problems with certain resistor types, whereupon we switched to an alternative. There are of course areas with really serious procurement problems, for example, microcontrollers, FPGAs, and AD/DAs – but since we don’t use these at all, we are not affected by them and have always been able to find solutions for our area and keep our supply capability high. Of course, prices are rising, but that doesn’t mean we have to constantly update our price lists.
Fritz Fey: As a developer of sound tools, you are at the service of music and set very high quality standards for yourself. To what extent do you feel that this is still reflected in today's chart music?
Ruben Tilgner: That's a difficult topic, because I can't know which customers produce which music with my devices. I generally find the current charts problematic, since pop music is popular but not particularly sophisticated. If I look back to the 80s, the best studio musicians and producers were invited to – from today's point of view – sinfully expensive studios to work on a Michael Jackson album, for example, which is still a reference today. Many productions now are the opposite of that: only moderately talented singers, straightened out with Autotune. You don't really hear musicians anymore, no instruments, but sound building blocks spread over an arrangement. Then begins the tedious work of somehow making that sound alive with technical means, because the sound source is not alive. If you take the analog topic a bit more broadly, hardly anything takes place in the analog domain anymore and no finger touches an instrument. Another aspect is the loudness war, which is by no means over.
A lot of material is still pushed hard against the wall, and I often wonder whether the listener really wants that. Of course, there are still a lot of people out there who also have high quality demands, and of course I hope that I make a positive contribution with my developments.
Many products from the elysia range are constantly kept ready for demonstration and use in the sound control room.
Fritz Fey: I think there is a relatively large hidden reservoir of good music (grins) that is not discovered so quickly because of the distribution structures or, in other words, is buried under garbage.
Ruben Tilgner: That's true, but I also don't want to be misunderstood. It doesn't have to be people playing instruments at all costs, because there are also quite great productions in the electronic field. It's about a certain philosophy. In the 90s some albums inspired me: great recordings, great spatiality, and very dynamic. That shaped my idea of how equipment has to sound to achieve that sound – bigger and more three-dimensional. When a production like that is flattened, as often happens today, everything in me resists, and fortunately I'm not alone with this view…
This blog post starts with supposedly simple questions: Did we have higher-quality music in past decades compared to today's standard? And what constitutes high-quality music anyway? The fact is, the concept of quality can be looked at from many angles. Quality is also an attempt at categorization that can be applied to numerous aspects of music and its creation, and it is not limited to the duo of "hardware & craftsmanship". This blog post takes a look at the concept of quality in terms of sound engineering, composition, and music production, and asks what characterizes high-quality music and how it can be produced.
What is quality?
The term "quality" can be applied to many aspects of life. To classify quality, we like to draw on suitable adjectives such as "good", "bad", or "mediocre". While the evaluation of products with the help of standards (e.g. DIN standards) is comprehensible and thus largely comparable, the concept of quality becomes far more elusive when we want to use it to evaluate subjective qualities such as "beauty". Welcome to the dilemma of trying to evaluate the quality of music. We will try anyway.
Let's approach the topic from the hardware side. Who doesn't know the common opinion that devices from past decades consistently meet a higher quality standard? The general quality of products from the 80s, it seems, is higher than that of their counterparts today. Without wanting to generalize, many devices from past decades benefited from their comparatively long development time, elaborate components, and careful selection. Back then, the longest possible service life was considered a key point of product development.
And today?
The sheer volume of short-lived, inexpensive products was certainly not available to consumers in this breadth a few decades ago.
After production became faster and faster in recent decades, and thus in part also worse, a rethink is slowly taking place. Low-quality products in large quantities are neither resource-saving nor particularly sustainable. Simple plastic products are only used for a short time before they spend long years in our oceans as plastic waste awaiting their transformation into microplastics. The call for new quality standards and sustainability can no longer be ignored. We believe that this should also apply to music and music production.
Each of us knows "quality music", and it does not take a DIN standard to recognize it. Musical quality works bear the stamp of "timelessness". Elaborately produced and sophisticatedly composed music has a higher and longer-lasting entertainment value than plastic-like utility music that merely pays homage to the zeitgeist. The reason there is such a mass of uninspired utility music has a lot to do with the medium on which this music is preferably consumed.
Keep it short!
The current focus is on various streaming services, which disadvantage longer, lavishly produced titles simply because of their structure. Simply put: longer titles are not "worth it" financially! The additional effort is simply not remunerated by streaming portals. On Spotify, for example, a track only has to be listened to for thirty seconds at a time for the play to count, i.e. to be monetized. The artist therefore only receives money once the song has run for 30 seconds or more; if the song is seven minutes long, there is no extra compensation. Not only the band "The Pocket Gods" finds this unfair. The band counteracts this unspeakable structure with an extremely unconventional idea: its latest album contains no fewer than one thousand songs, each thirty to thirty-six seconds long.
On the one hand, this is clever and creative, but it also clearly shows the current dilemma of the streaming medium. Samsung recently published an interesting study, according to which the average attention span has dropped from twelve to eight seconds since 2000. By this logic, the first eight seconds decide whether a song will be a hit or not.
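Returning to the monetization rule for a moment, a rough back-of-the-envelope sketch shows why short tracks pay off under such a scheme (hypothetical figures, assuming exactly one payable stream per qualifying play):

```python
# Rough illustration of why short tracks are financially attractive on streaming
# platforms; hypothetical numbers, the 30-second threshold is the rule discussed above.
def payable_streams_per_hour(track_length_s, threshold_s=30):
    """A play only counts once the listener stays past the threshold; each full
    play-through of the track then yields exactly one payable stream."""
    if track_length_s < threshold_s:
        return 0                       # too short to ever qualify
    return 3600 // track_length_s      # plays (= payable streams) per listening hour

print(payable_streams_per_hour(420))   # 7-minute track:  8 streams per hour
print(payable_streams_per_hour(31))    # 31-second track: 116 streams per hour
```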
Don’t forget:
If a song is skipped within the first 30 seconds, there are no streaming revenues for the artist. This has a direct impact on current "utility music" and its composition. Songs today often don't even have an intro anymore, but start directly with the chorus. In the AOR classic "Don't Stop Believin'" by Journey, the listener has to wait one minute and seven seconds until the refrain starts for the first time. By today's standards, an incredibly long period.
In addition, at 4:10 minutes the song is unusually long for a hit.
The fact is: the average length is decreasing, and elaborate bridge parts or lengthy departures from familiar song structures are found less and less often. One forecast even says that by the end of the decade the average song length will be two minutes. None of this necessarily allows conclusions to be drawn about the actual quality of the music. What can be said, however, is that the creative playing field for musicians and composers is being greatly narrowed by existing structures and developments, and the incentive to publish high-quality music is becoming ever smaller.
The relevance of the medium
The current "skipping culture" is basically a digital phenomenon that favors superficial consumption in streaming services with their playlists and gapless playback mechanisms. Classic media such as tapes or records offer a significantly different approach. This begins with the selection of a track: with tape, you have to deliberately fast-forward to a song; with a record, consciously reaching for the sleeve and putting the needle on it is a deliberate act.
According to research, this act involves preconditioning: you engage with the auditory event before you even play the song. Perhaps this is also a building block for perceiving music more consciously again and, above all, consuming it purposefully. The medium has an influence on the consumer. What many music lovers may not be aware of is the fact that the recording and playback medium itself has a major influence on the type of composition, the choice of musicians, their virtuosity, and the length and complexity of musical pieces. To understand this better, let's look at the history of sound recording.
The history of sound recordings
Before sound recording existed, music was exclusively performed live. This meant that musicians had to master their craft and their instruments to practice their profession. This continued to be the case in the early days of recording technology: recordings were made in single takes, and only professional musicians could withstand this pressure. In the early days of the record, the recording was cut directly into the medium.
The first recording devices were purely acoustic-mechanical and managed completely without electricity. How did the whole thing work? The sound was captured by a horn, and the vibrations were converted by a diaphragm and engraved onto a disc or cylinder. The only energy available was the sound energy itself, which had to transfer the information onto the carrier medium. Microphones did not exist in the early days of sound technology, so bands and ensembles had to be placed quite unnaturally in front of the recording horns: loud instruments such as brass stood further away, while strings and singers set up closer to the horn. If anyone made a mistake, everything had to be recorded again. Clearly, all this had an impact on the nature of the music and the way it was played.
The pioneers
The possibility of recording speech or music is still comparatively young. In 1857, the Frenchman Édouard-Léon Scott de Martinville invented the “phonautograph,” a device for recording sound.
It was not until 1877 that Thomas A. Edison developed his “phonograph,” which was actually designed for dictating messages in everyday office life. The advantage of the phonograph: The device could not only record sounds but also play them back.
In 1884, Edison’s concept was further developed by Charles Sumner Tainter and Chichester Alexander Bell. They called their recording device “Graphophone” and received the first patent for it on May 4, 1886.
However, Emil Berliner is considered the inventor of the well-known gramophone. Berliner presented it to the public in May 1888, which is also considered the birth of the record. Shellac remained the standard material for records until it was finally replaced by PVC (vinyl) at the end of the 1940s.
AC/DC
With the introduction of electrical sound recording in the mid-1920s, the limitations of the earlier acoustic-mechanical recording were overcome. Acoustic vibrations were converted into a modulated current, which electromagnetically generated a mechanical force in the record cutter and could carve the sound into the carrier material completely independently of the acoustic energy itself. The first electrical sound recordings were made with a single microphone and were therefore always mono.
Georg Neumann brought the first series-produced condenser microphone to market in the late 1920s, and with this type of microphone, sound quality improved dramatically. Neumann microphones are still part of the professional studio standard today. After the rotational speed of record players changed from 78 rpm to 33 rpm, and vinyl could be used instead of shellac as the carrier medium, the sound quality and the playing time of the medium improved in equal measure. The first stereo record was introduced in 1957, and stereophony soon became the standard.
On Tape
At the same time as record players were being refined, work was also being done in Germany on tape recorders. Fritz Pfleumer patented magnetic tape in Dresden in 1928. A few years later he sold the rights to AEG ("Allgemeine Elektricitäts-Gesellschaft"), where Eduard Schüller developed the first tape recorder. From 1935, the forerunner of the chemical company BASF manufactured the magnetic tape for it.
Tape as a carrier medium revolutionized the recording and radio industry. For the first time, it was possible to make high-quality recordings "off the grid", i.e. in a non-studio environment (e.g. at live events). In addition, tape offered the invaluable advantage of being editable.
The first stage of editing was thus achieved. It is therefore not surprising that from the end of the 1940s, tape became the recording medium of choice worldwide. The next evolutionary stage was the multitrack tape machine. Multitracking made it possible to make overdubs for the first time and thus permanently changed the way music was recorded: the musicians no longer had to perform the song simultaneously as a collective, and from this point on, decentralized production processes were also possible.
Digital revolution and strange hairstyles
In the 1980s, the first digital revolution in music production took place with the advent of digital technology. This affected the recording medium as well as the instruments and sound sources. The Fairlight CMI (Computer Musical Instrument) is considered one of the most important tools of the era: the first digital synthesizer with sampling technology. The first units came onto the market in 1979, and among the first customers were artists like Peter Gabriel and Stevie Wonder.
Due to its price, the Fairlight was reserved for only a few musicians. Over time, however, more affordable synthesizers, sequencers, and drum machines increasingly found their way into studios. With the MIDI interface, the first universal communication protocol was also introduced. Together with multitrack recording, which had become commonplace, the new equipment had a massive impact on music and the way it was composed and recorded. The musician faded into the background, with drummers in particular facing digital competition: beats were no longer played but programmed. The 80s was also the decade in which the CD as a digital medium replaced the sonically inferior compact cassette as the standard.
The 90s
Digital development did not stop in the 90s. ROMplers provided sampled sounds of the most common instruments (strings, winds, etc.) in more or less good quality for a manageable investment. This put them in direct competition with real musicians. The quality of the sounds improved increasingly in the 90s, while the first DAWs gradually replaced the analog multitrack tape machine. The DAW also offered significantly more possibilities to edit recorded tracks afterward. With these new technical possibilities, new music styles (techno, hip-hop, house) emerged, which knew how to use the new possibilities skillfully.
In 2000
Since the 2000s at the latest, the DAW has been the center of music production; the classic combination of mixer and tape machine had had its day. At the same time, there was Napster, the first "peer-to-peer" music exchange platform, which made it possible to send compressed audio files in MP3 format over the Internet. What began in 1999 led to the CD being replaced as the most popular digital medium. VSTi (virtual instruments) further reduced the need for real musicians, a development that basically continues to this day. The status quo is that you can basically represent any instrument digitally, and songs in many productions sound amazingly similar because often the same samples and sounds are used.
Back to the Future!
Currently, music production is the exact opposite of how it was in the early days. Back then, musicians and entire orchestras would gather in front of a horn or a single microphone and play their tracks live, straight through, without overdubs. There was no subsequent editing or mastering. Today, many recording processes are automated and the “human” factor is no longer necessarily the focus. We have lost much of what was commonplace in the early days of sound engineering: minimal use of technology, maximum reliance on the musicians and their interplay. Is a return to the old days perhaps a way to more quality in music? Do we need quality in music at all?
The Big Picture
Perhaps a comparison with the film industry will help clarify whether sound quality is relevant at all. In film and TV productions, ensuring dialog intelligibility is essential, and it is harder than it sounds. For one thing, sound reproduction on TV sets is not standardized. In addition, the common flat-screen TV hardly offers enough space for reasonably sized drivers. In other words, television sound is often problematic. On movie sets, too, maximum sound quality is not always a priority. As a result, some dialog is difficult to understand, which spoils the enjoyment of a movie. Yet the quality of the movie sound is important for the overall experience.
This also applies to radio broadcasts. Here, a maximally intelligible, clear signal is transmitted, not least because of the technical limitations of FM transmission. The goal is to keep the listener tuned to the station as long as possible. The Orban Optimod 8000, introduced in 1975, was the first such processor for FM radio stations and was designed to guarantee consistently good sound. Optimods, now fully digital, continue to operate in radio stations to this day. An Optimod usually includes at least a compressor, an equalizer, an enhancer, an AGC (Automatic Gain Control) and a multiband limiter: basically an automated mastering chain.
FM Mastering
The idea that a piece of music should sound as good as possible on a wide variety of playback systems is familiar to us from mastering. However, mastering that is geared exclusively to hi-fi systems no longer works with today’s range of playback devices. Therefore, an ideal song should not only have the best possible sound quality, but also an interesting composition and lively individual tracks, which in combination result in an original title.
Liveliness and depth can be created organically with the help of real instruments and musicians, which offers the human ear more stimulation than automated music. Programmed songs can achieve this depth as well, but to do so they must be programmed with the same care and attention to detail that a collective of musicians would bring to a recording. Breathing life into ready-made sound building blocks is no less difficult than mastering an instrument with virtuosity. That is why many standard pop songs simply lack finesse.
Where is the way out?
There is no magic formula for producing high-quality music, but different approaches can pave the way. One suggestion is to combine the best of all worlds! Current digital recording technology offers so many advantages in terms of storage and sound manipulation compared to the familiar duo of “tape machine & analog mixer” that you should use these new possibilities to the fullest.
The visual editability of the arrangement and the audio waveforms offers additional creative potential that is waiting to be used. This potential is maximized when you let real musicians do their job in front of a professional front end of good microphones and preamps. Art is made in front of the microphone, and this magic must be captured accordingly. Especially in a collective of several musicians, interesting ideas can arise spontaneously.
That is more exciting than clicking through countless sample libraries in search of an individual sound. In recent years, things have started to move in this direction. It is certainly in line with the spirit of the times that analog synthesizers and drum machines are increasingly being used again instead of VST instruments; analog synthesizers are experiencing a real revival these days.
What is exciting about this new old hardware is the direct access to the sound structure and the haptics that go with it: music you can touch. Of course, this analog hardware represents a larger investment compared to VSTi or other instrument plugins. Here we come full circle to the beginning of this article: more quality almost always means higher cost. In the end, you are almost always rewarded with a better product (song).
With a little luck, this song will also pay for itself twice over through its longevity. Turning the focus from the purely digital domain back to analog sound production and recording technology, while relying on the creative input of real musicians, combines the best of both worlds. This creates music that conveys something individual and exciting to the listener. The necessary investment should therefore flow equally into sound engineering and musical assets. This combination creates music that will still have relevance many years from now. And relevance, in this case, is synonymous with high quality.
What are your thoughts on this subject? If you like this blog post please leave a comment and share this post with your friends.
Thanks for reading!
Your Ruben Tilgner
The Renaissance – tips and tricks for mixing with headphones
A look at the street or into a bus or train these days is enough to realize: the headphone is back! Those who have already arrived in the autumn of life may feel transported back to the 80s. The Sony Group first established the location-independent availability of music on a grand scale with its legendary Walkman. Back then, however, the music selection was limited to the number of compact cassettes you were willing to lug around. Nowadays, thanks to mobile data networks and numerous streaming providers, location-independent access to music, podcasts and radio plays is virtually unlimited. Together with the trend toward working from home, this has given headphone sales an additional boost.
Headphone Sales
Ten years ago, just under nine million headphones were sold annually in Germany. In 2015, the figure was already 11.4 million units, with a clear upward trend! What is surprising is that sound quality is listed as the most important purchase criterion, well ahead of the “design” and “price” categories. Consumers currently prefer to buy high-quality headphones instead of monstrous hi-fi towers with imposing speaker cabinets.
The arguments for investing in headphones are many and varied, and sound alone is not always the decisive factor. Headphones are now also a lifestyle product, where design and brand affiliation play a weighty role for some users. Sometimes it is special features, such as noise canceling, that move a certain model into the shopping cart. If you travel a lot by train and plane, you won’t want to do without the automatic suppression of ambient noise. Effective noise canceling filters the annoying ambient noise out of the useful signal, so the user is not tempted to compensate for it with a higher listening volume.
Another aspect of everyday life is communication via cell phones and tablets. In line with this trend, most mobile devices come with matching in-ear headphones. This is having a lasting impact on media use, especially among young people. According to the 2020 JIM Study, the smartphone is by far the most popular device among German youngsters for connecting to the Internet.
Spotify
At the same time, the distribution of Internet use in terms of content has shifted noticeably within ten years. While 23% of young people used the Internet for entertainment purposes (e.g., music, videos, pictures) in 2010, this figure rose to 34% in 2020. According to the JIM study, the streaming service Spotify is more popular among young people than Facebook, for example. Looking at these facts, it should be clear: headphones are back, and as musicians, sound engineers and mastering engineers, we have to ask ourselves whether we are responding adequately to this trend.
More devices = more mixing decisions?
With the sheer number of different mobile devices, the question inevitably arises as to whether special consideration needs to be given to these devices when mixing and mastering. It used to be simpler: there were the duos “tape & turntable” and “hi-fi speakers & headphones”. Today, the number of different devices and media seems almost infinite.
Vinyl record players, CD players, MP3 players, smartphones, tablets, desktop computers, laptops, wi-fi speakers and battery-powered Bluetooth speakers. Some consumers own a hi-fi system worth a condo, while others consume music through their smartphone’s built-in mini-speakers.
The range is extremely wide, which leaves you with an uneasy feeling when mixing: will the mix sound good on every device? There is probably no universally valid answer. However, those who know their “customers” can draw their own conclusions. If you produce rap music for a youthful crowd, you should definitely check your mix on smartphone speakers and on the standard in-ear headphones that come bundled with many iPhone and Samsung phones.
The perfect listening environment – headphones?!
In the 80s, the hi-fi tower with two large loudspeakers was the ultimate in music entertainment. The music lover sat in the perfect stereo triangle and listened. Nearly all music studios are still built on this model today: two loudspeakers in an acoustically optimized room for recording and mixing.
But even then, the room acoustics in hi-fi living rooms were anything but perfect. Depending on the surfaces, windows and room geometry, optimal conditions were never really achievable. Many compromises were made, and especially in the bass range the room plays a formative role and is rarely linear. On top of that, different loudspeaker models sound very different, so a standard reference does not exist. If, in the worst case, both the studio acoustics and the room acoustics of the listener are less than optimal, the music experience is already severely clouded in terms of sound.
Nowadays, perfect listening situations at home are becoming less and less common. The number of different speaker systems is unmanageable and many playback systems are designed more for background sound.
For the real music lover with a demand for quality, headphones have established themselves as the listening system. Customers are also willing to spend more money if this enables an increase in quality.
Disadvantage of studio monitor speakers
Studio monitor speakers come in all sizes and variations and are available in active and passive versions. The crucial difference between studio monitors and studio headphones is that monitors never act in isolation from the room acoustics. The room has a significant impact on the performance of studio monitor speakers.
The better the room acoustics, the better the performance of the monitors. The room has a large share in the resulting frequency response of the loudspeaker: it may be linear in the test lab, but anything but perfect in an acoustically unfavorable environment. In that case, even buying the next better model is no real help, because the acoustics do not change. Mixes made in unfavorable rooms therefore often have problems in the bass: a frequency range is either missing or exaggerated. Spatial imaging is also disturbed by early reflections.
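A rough, standard rule of thumb makes the bass problem tangible (the room dimension used here is purely illustrative): the axial room modes of a rectangular room sit at f = n · c / (2 · L), with c ≈ 343 m/s and L the distance between two parallel walls. For a wall spacing of 4 m, that means resonances at roughly 43 Hz, 86 Hz, 129 Hz and so on. Depending on the listening position, the room boosts or cancels exactly these frequencies, which is the missing or exaggerated bass described above, and it is precisely this mechanism that headphones bypass.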
Consideration
Then there are other practical considerations. How much noise can I make in my room without disturbing a roommate or neighbor? At what times can I work? Is it even possible to make the room acoustically sound, and what costs can I expect? For many home studios in particular, these are important questions.
This raises the question of whether it would make sense to create the mix directly on headphones and then check whether it also works on other speaker systems. After all, the listener is more likely to enjoy the music on good headphones than on a high-end speaker in optimal placement.
It would therefore make sense to aim for the best sound on headphones right away. That way the listener gets almost the same listening experience, possibly even on the same headphone model. It can take many years to achieve a balanced, good stereo mix in an unfavorable acoustic environment; with headphones, this is possible much faster. If you also listen to other music on the same headphones, the reference is right there and comparisons are easier. Laptops have made music production mobile and possible in many places, and with headphones the same listening situation is available everywhere. The bass range in particular can be judged very well on headphones: the frequency response extends very deep and is reproduced without distortion.
What should you pay attention to?
A headphone mix is always an unnatural listening situation. Why is that? Here is a little experiment: sit in front of your studio monitors or hi-fi system and move your head quickly from left to right and back. While you are moving, you will hear a slight flanging or comb-filter effect. If you repeat the same movement with headphones on, you will notice that the sound remains identical, no matter what position your head is in.
The reason for this is quickly explained: when you listen to music through headphones, the signal lacks the natural crosstalk that is always present in normal speaker signals.
Even if you sit in front of your monitor speakers in a perfect stereo triangle, the left ear will still hear sound from the right monitor and vice versa. The head never completely blocks sound events from the opposite side. That would be fatal anyway, because it was only thanks to the ears and the ability to localize sound that our ancestors could tell from which direction the saber-toothed tiger was approaching. Binaural hearing is a basic prerequisite for an aural sense of direction.
Historical evolution
If danger approaches from the left, its sound reaches the left ear first. The right ear perceives the same sound at a reduced level and with a slight time delay. The distance between the ears creates a difference in time of flight, which enables our brain to work out the direction of sound incidence. Phylogenetically, listening via studio monitors is therefore more natural than via headphones. But the average prehistoric human was a hunter-gatherer, not a music producer worried about missing crosstalk. Producers should know about these issues, however, because in an acoustically problematic room, studio monitors excite many unwanted room reflections that complicate mix decisions.
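The numbers behind this are quickly estimated (the head width used here is only a rough average): with an ear spacing of about 0.2 m and a speed of sound of roughly 343 m/s, the maximum interaural time difference is about Δt = d / c ≈ 0.2 / 343 ≈ 0.6 ms. Our brain reliably evaluates even these tiny delays, together with the level difference between the ears, to determine direction.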
Example: In a large room, our binaural hearing can distinguish very well between direct sound and reflections as soon as the time gap between them is large enough. However, this abstraction work by the brain consumes a lot of attention. Mixing and mastering in such an environment put constant latent stress on our hearing and should be remedied by sustainably improving the room acoustics. But how should producers and musicians react if they regularly work in different places with different room acoustics? As an immediate measure, reach for studio headphones! Because apart from the missing crosstalk, headphones have several features that recommend them for music production.
Pro-Phones!
The great advantage of headphones is that they always offer the same acoustic landscape, so it does not matter where you use them. Headphones offer a tonal home that provides an identical working basis day after day. If you are a producer or sound engineer who travels a lot, you should definitely get a pair of good studio headphones. No matter how bad the monitoring conditions may be in a studio, on a streaming job or at a FOH position in a reflection hell called “Town Hall”: headphones provide a reliable reference that you can always fall back on.
When mixing with headphones, what do I need to keep in mind?
Now that we have evaluated the technical requirements for the best possible headphone mix, here is a short summary of what you should keep in mind when mixing with headphones. A mix on headphones always sounds a bit more striking and bigger compared to conventional monitors. Hard panning (left/right) sounds more drastic and extreme than on monitors due to the missing crosstalk. Creating a natural stereo image and making panning decisions is therefore more difficult on headphones.
On the other hand, small mixing errors and noises are much easier to localize on headphones, and headphones are also ideal for editing vocal or drum tracks. For longer sessions, it makes sense to switch between headphones and monitors more often, because with headphones the risk of fatigue and excessive levels is always present. In home studios, vocals are often recorded in the same room where the studio equipment is located; here, too, a good pair of headphones is important to assess the artist’s performance directly. With some experience, analog processing can even be used during the recording, because its effect can be evaluated acoustically right away.
Which headphones are perfect for me?
For mixing tasks and studio use, studio headphones are generally better suited than consumer models. Quite a few consumer models prefer to sound “fat” instead of being an aid in mix decisions, so reaching for studio headphones is always preferable. Now there is another decision to make, and that concerns the construction type. What should it be: closed, semi-open or open? I am not talking about the status of your favorite club, but about the design of the earcup.
Closed headphones like the beyerdynamic DT 770 Pro have the advantage that hardly any sound penetrates from the outside and hardly any sound escapes to the outside. Closed headphones are therefore particularly suitable for tracking in the studio: the spill of other instruments into the ears is suppressed, and a loud click track does not bleed into the vocal microphone while singing. In addition, closed headphones work well in noisy environments, and people nearby do not feel disturbed, since hardly any sound gets out. Due to their design, however, closed headphones have some disadvantages when it comes to achieving the most linear sound possible: the closed ear cup always creates a pressure buildup, especially at low frequencies. This is one reason why (semi-)open headphones often have a more natural bass response, generally sound more open and airy, and also boast better impulse fidelity.
Which headphone for which task
During long working days, the (semi-)open models offer better wearing comfort due to the airflow through the open earcups, which is particularly pleasant at higher temperatures. Open models, on the other hand, have the disadvantage that they audibly emit sound to the outside and accordingly offer only little separation between the useful signal and ambient noise in a noisy environment. The right choice therefore also depends on the intended application: for pure mixing tasks in a studio room without ambient noise, open headphones are a good choice, while closed headphones are recommended for tracking at band volume.
In addition to studio work with powerful headphone amplifiers, there is also the scenario of listening to your mix on a smartphone. In that case it makes sense to use headphones with the lowest possible impedance (ohms). Our example DT 770 Pro headphones are available in three impedance versions (32, 80 or 250 ohms). As a rule of thumb, the higher the impedance, the more voltage the headphone amplifier has to deliver to generate a decent level. This means that if you want to use your headphones on a smartphone, laptop or tablet, you should preferably put a low-impedance version in your shopping cart.
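A quick back-of-the-envelope calculation illustrates the rule of thumb (the 1 mW figure is purely illustrative): for a given electrical power P, the required voltage is U = √(P · Z). Driving 1 mW into the 32-ohm version takes about √(0.001 · 32) ≈ 0.18 V, while the 250-ohm version needs √(0.001 · 250) ≈ 0.5 V for the same power, almost three times as much voltage, which a smartphone output may simply not be able to deliver. The actual loudness also depends on the sensitivity of the specific headphone, so treat these numbers as an order-of-magnitude sketch.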
Conclusion
It is worth considering buying a very good pair of headphones for studio work and learning to record and mix with them. The cost is only a fraction of what very good speakers and room acoustics cost.
Therefore, it makes more sense to invest the money saved in audible improvements to the signal chain, such as microphones, preamps, sound generators and analog outboard processing. With a good pair of headphones, the differences these improvements make can be noticed quickly. They can also be heard directly in the music, and that, after all, is the goal of the exercise.
What are your thoughts on this subject? If you like this blog post please leave a comment and share this post with your friends.
Thanks for reading!
Your Ruben Tilgner
Music Production: Produce songs faster and more efficiently with the Lean Method!
We at elysia know how much work, time and energy are needed before a new product is ready for the market and can be offered to the public with a clear conscience. The numerous steps between the initial idea and the finished product have a significant influence on its success. The leaner the production process, the more resources the manufacturer can conserve. “Lean production” is the appropriate keyword.
A closer look reveals amazing synergies and parallels between modern hardware design and songwriting. It is not for nothing that people talk about “music production”, and the profession of producer is well known in both music and industry. Both create new products in a creative process which hopefully find their grateful buyers. So the question is obvious: as a musician and songwriter, is there anything you can learn from industrial product development, and if so, what? I will try to find an answer to this question in this blog post.
Make the right decisions in your music production
There are many different reasons for composing music. The exciting question is: for whom do I actually compose? Who should hear my music production?
The answers within the musician community are likely to be diverse; there is room for the most varied views and intentions. Some musicians address the smallest possible audience they want to reach: themselves.
All creative decisions then have to meet only one’s own benchmarks, which represents maximum artistic freedom. The absolute opposite is defined by music producers who, in order to achieve the greatest possible commercial success, are prepared to compromise their own standards for their work, provided it increases the chances of success. In between, there is a wide range of music creators who manage the balancing act between their own artistic demands and a solid financial balance.
This blog post is dedicated to them. Those who succeed in meeting their own taste and that of the target audience with their songs can certainly profit from the technique of “lean production” and use it for their music production in an inspiring way. I will show you how it works.
What is Lean Production?
The term “lean production” has become an indispensable part of day-to-day manufacturing in both large and small companies. It addresses several essential aspects, but the core focus is the conscious use of resources.
The elimination of all unnecessary work processes in development and administration is intended to make production more efficient. Ideally, the right structures result in a continuous improvement process that produces better products in a shorter period of time. So much for the theory. In practice, one of the first actual implementations of these principles took place at the Japanese carmaker Toyota.
With its TPS system (Toyota Production System), the company defined many of the basic principles that can be found today under the term “Lean Production”.
But what relevance does this have for musicians and music producers?
Basically, all manufacturers of products face the same problem. Regardless of whether they produce cars, instant soups or software, every producer wants to place his product successfully in the market (or in the music charts) and profit from it. Therefore, it can also be profitable for musicians and producers to think about an efficient workflow and about their audience.
How do industrial manufacturers do it?
The main focus is on avoiding unnecessary waste.
This includes the overproduction of components or entire products that can no longer be used or sold at a later date, or employees and entire departments working on projects that turn out to lead nowhere. The wasted working time could have been avoided with more precise, forward-looking planning. The resource “time” is also very limited in music production. If, in addition to the necessary improvisation and the search for new sounds and melodies, you adopt a stringent workflow, you will be amazed at what you can accomplish in a comparatively short time.
From the factory floor to the rehearsal room
The first step: the manufacturer performs market research to determine which products the customers are looking for. It also works the other way around: once you have an idea for a new product, you first identify whether there is a large customer base for it. Next, a requirements specification is drawn up, a list that includes, among other things: what features should the product have, what should the design look like, and what price can be calculated for it? Once the specifications have been drawn up, you have a fairly accurate picture of the new product. Then comes the development of a working prototype, which can be presented to selected test customers at an early stage to obtain initial feedback.
At an early product stage, things can still be changed without much effort. However, the further development progresses, the more complex the necessary product improvements become. Once the product is in mass production, major changes can only be made with significant time and effort and should therefore be avoided as far as possible through good planning. That is why feedback from a representative customer community is so important for modern product development.
There are various possibilities for targeted feedback
For example, there are online collaboration tools such as Microsoft Teams. Developers can exchange ideas about the product with their selected customers and beta testers in chats, online meetings and dashboards. App developers, too, have various ways of making a beta version of a new app available to a select group of users. In the same way, a composer shouldn’t invest hours or even days in a string score if the song doesn’t require strings. And with that, I have already turned the corner to the field of music.
Or shouldn’t I rather talk about music production? Before we connect the production steps of both worlds, however, we should talk about how music is actually “developed”.
The development of music
Every song has basic structures, and these structures carry different weight depending on which musical genre you serve. We can differentiate between primary and secondary elements within a song. Primary elements form the framework of the song: melody, lyrics, chord progressions, rhythm, groove and tempo. The more clearly defined these elements are, the more transparent the song’s identity appears. Even if one or two primary elements are missing, a listener will recognize the song within a few seconds. This fact is captured in a well-known saying: “A song works if it can be played with just an acoustic guitar and every listener can sing along immediately.”
The secondary elements of a song concern the nuances and subtleties that accentuate and, ideally, enhance its effect. These include tracks with background sounds and sound layers that enhance the song’s impression and generate an interesting atmosphere. They also include the contribution of the well-known “mix & mastering” duo: the suitable selection of reverb, delay and modulation effects, the balance of the individual tracks, and the use of equalizers, dynamics processing and automation. All of these secondary elements help give the song its character.
Depending on the style of music and genre, the importance of the primary and secondary elements can vary significantly. A folk song is mainly defined by the primary elements, while a techno track clearly focuses on the secondary elements. Therefore, it is necessary to understand which genre you are in and which market you are serving. Whether it is rock, pop, hip-hop, classical or EDM, each music style has its own weighting of primary and secondary elements.
How music is created in a band context
In searching for an answer to the question of how music is developed, we have to differentiate between a band environment and the work of a single producer (a one-man show). The approaches are sometimes very different. Many bands rely on one proven method for their product development: interactivity! A band in the rehearsal room combines many steps of lean production simultaneously, and often unconsciously.
A musical idea can consist of a guitar riff or an interesting groove. The advantage of a band is that changes to a song idea can be tried out and implemented very quickly (in real-time, so to speak). Key or tempo changes, adjusting the length of the chorus, or experimenting with different lead instruments? No problem in a band approach. Through the classic jam session, the primary elements of the song gradually evolve, and usually not too much time passes before a complete structure is formed. Hooray – the “prototype” is ready.
Ideally, you capture this idea right in the rehearsal room with a multitrack recording. Over the next few days, an online platform and messengers such as MS Teams or WhatsApp can be used to fine-tune the lyrics, and the arrangement can be further refined on the basis of the multitrack recording until the band feels that the fresh work should be made available to its “selected test audience”. More about that later.
How a music producer creates music
The opposite of the classic band is the producer: a single person who composes and records by himself. For this, you basically need nothing more than a powerful computer, the appropriate software and, above all, an idea. With that, the producer starts composing, recording, arranging, and finally mixing and mastering. Thanks to modern technology, there are hardly any restrictions on genre. The unbelievable number of sound generators and sample libraries defines the modern bedroom producer, who can work on his creations like a digital nomad, not necessarily in his bedroom, but anywhere in the world. This amounts to a democratization of music production.
Anyone who wants to produce music for themselves can now do so without much effort and with a manageable budget. Working on ideas is now almost as flexible as in a band structure, and the producer does well to keep this flexibility for as long as possible in his production process. The fact remains, however, that a band has direct access to a creative collective.
The producer, on the other hand, first stares at an empty arrangement window when opening the DAW, which he has to fill bit by bit with MIDI notes and WAV files to create a musical image of his idea. In this process, the producer repeatedly encounters two problem areas. The further he develops his prototype, the more difficult and time-consuming it becomes to change or even replace existing structures. For this reason, he should first take care of the primary song elements and keep them in a flexible state for as long as possible.
What do I mean by that?
The modern music producer is basically missing a producer in the classical sense: someone like Rick Rubin, who intervenes little in the technical process but can contribute meaningful feedback to all artistic decisions. Is the tempo right? Is the intro perhaps too long, or is the melody of the chorus just not catchy enough? These are issues on which a bedroom producer usually gets no input, unless he manages to build a meaningful feedback community from his own environment. The music producer usually does not have a network that evaluates the characteristics of his production during the creation phase and could catch mistakes early. As a music producer, you always run the risk of getting bogged down in details and devoting too much time and energy to the secondary elements, even though the primary elements have not yet been sufficiently worked out.
Extra Tip
Keep all production steps within the DAW, in the digital domain, for as long as possible! If you get valuable feedback from your community, you can then quickly change the key, tempo or arrangement. However, if you have already recorded many tracks in analog form, for example drums and guitars with microphones, then such changes can only be made with great difficulty. It is better to produce the song completely with VST instruments and plugins in the DAW as a sketch and make the result available to your community for review. If the feedback is positive, you continue working on the secondary elements: you look for the right effects, record melodies with real instruments, and use analog hardware to give the song an extra character boost. This conserves resources, saves time, and prevents unnecessary WAV files from piling up on hard drives.
Lean production in action! Keep in mind that for the bedroom producer, too, the relevance of his product to the target audience is important. For this, you need customer feedback on your music production that is as accurate as possible. How do you collect it as a producer?
Evaluate your product
At this point, we come full circle back to the beginning of this blog post. If you have no commercial intentions, you have a completely different idea of when your music production is successful than someone who measures success primarily in business terms. In the first case, the evaluation is very simple: if your song corresponds exactly to your own ideas, you can consider your personal requirements fully met and dedicate yourself to the next project with a positive feeling.
Do you also want to be commercially successful and hope for the support of your community?
Then the “evaluation of success” is not that simple. The variables of “success” are complicated and often varied: a well-padded bank account, a sold-out tour, or maybe a good chart position? Only you can draw that balance. As different as the criteria for success may be, one factor is always relevant, and that is the feedback of your fan base, i.e. your customers.
Every success-oriented company wants to know one thing above all: “Who are my customers?” Only those who know their customers can develop products with a high level of acceptance. The classic band or live artist has a big advantage on this point. If you stand on stage in front of an audience and perform your songs, you always get feedback on your “product”. The feedback from the audience at a concert is immediate and unfiltered, and there is definitely a difference between reading feedback in a face and reading it in a Facebook comment. For that reason, it is not uncommon in a band context to first try out new tracks in front of an audience, and only after this baptism of fire visit the recording studio to finally record them.
On one point there is also common ground.
You have to know how to interpret each piece of feedback. This is a question of experience and of communication with your fans. A bad sound (an undersized PA or unfavorable room acoustics) or an off day for the performers can distort the results. However good the song may be, if you do not perform it with the usual quality on a given day for the reasons mentioned above, the audience feedback can turn out less positive, although the song itself is not the problem. What is needed here is the ability to abstract.
The lone music producer often lacks this means of evaluation through performance in front of an audience. How can you still get feedback on new ideas and songs as a music producer? One possibility is an online community or social media. Even in the early stages of a new music production, it is a good idea to face the criticism of a meaningful crowd. Much like a band exchanges ideas internally, you can build a similar network of true fans, other producers and musicians.
You produce techno or EDM tracks?
Then you should also seek contact with club DJs and other scene experts. They can offer you valuable feedback or play a rough mix of your track in their club. This is important because, especially in techno and EDM, the sound selection is a style-defining element. If the kick and bass of your track do not come across well in the club, your track needs another round of work. In this genre, you have to pay attention to maximum sound quality from the beginning, because it is one of the primary song elements in techno, house and EDM. Secondary elements can still be added later.
You can replace or augment important sounds with analog synth sounds and manual filter changes. Through analog hardware, a personal signature becomes audible and can even develop into your sound trademark. The fact is, quite a few producers mix and master their tracks themselves, and there is nothing wrong with that in principle. But if you give your tracks to a third party for the final refinement, you automatically get a second and third opinion. If the track only turns out to be problematic at that late stage, the effort for the necessary changes is usually disproportionately high.
Tip: It is not a bad idea to send rough mixes to mixing and mastering engineers in the early stages of the project and ask them to evaluate the arrangement and sound selection. If you are a lone wolf and have been struggling through your song arrangement for days, you often lack the necessary distance to properly evaluate the status quo. Experienced sound engineers, as well as friends and fans, can provide the necessary impulses through tips and suggestions to put your song (your music production) on the right course.
In other words: improve the product.
But that is still not the end of the evaluation process for your music production. The day has come and the freshly mixed and mastered song is finally available for purchase. Where and how can you get the hard facts about its sales performance? Your account balance reflects only part of the truth, especially in times of streaming and declining sales of physical records and CDs.
What is the reach of the song? Who are my fans and how do they receive the new song? Does a high chart position automatically mean commercial success? And which charts are even relevant for my product? As you can see, the subject is extremely complex. Beyond the immediate feedback at live concerts and from the online community, the picture quickly becomes blurred. Who is listening to my songs, and what does a chart position even say? This is a topic worth covering in a separate blog post.
What are your thoughts on this subject? Were you already aware of it, or did you have an “aha” moment and will you now start your next music production with extra motivation? Leave a comment and share this post with your friends and colleagues who don’t know about lean production methods yet.
Thanks for reading!
Your Ruben Tilgner
How to Podcast
Due to the pandemic, our media behavior is changing permanently. Many extroverted forms of presentation such as concerts, exhibitions or face-to-face seminars are currently possible only to a limited extent, if at all. But that does not stop creative minds from spreading their artistic output. Given the situation, people are looking for alternative ways to present themselves to the world.
Besides live streams, the podcast is a popular way to make even complex topics accessible to a larger audience. The access barriers for the audience are low. On the other hand, a podcaster has to consider a few things to come up with a good-quality podcast. In this blog post, we want to offer you valuable tips on the topic of podcasts. Besides relevant, interesting content, good audio quality is a key to a successful podcast, especially if you want to establish yourself as a high-fidelity island in an ocean of podcasts. So let’s get started immediately.
The renaissance of podcasts
You might assume that the podcast medium is only a few years old. In fact, it has been rediscovered in recent years and is currently celebrating a noticeable revival due to the pandemic, yet the basic idea is surprisingly old. In the 1980s, the company RCS (Radio Computing Services) provided digital talk content for radio stations for the first time, which at least comes close to the basic idea of today’s podcasts. At that time, however, no one was talking about a podcast; “audioblogging” was one attempt to give the new medium a name. The term “podcast” was first used in 2004 by Ben Hammersley in a Guardian newspaper article.
The Medium
The year 2004 is considered formative for the podcast medium because of another event: software developer Dave Winer and former MTV VJ Adam Curry are considered the inventors of the podcast format that is still known today. From this point on, development accelerated rapidly, also thanks to the Internet. In the same year, the first podcast service provider, Libsyn, was launched; today Libsyn is a publicly listed company. In 2005, Apple released native support for podcasts for the first time with iTunes 4.9. Steve Jobs demonstrated how to create a podcast using GarageBand in 2006, while in 2013 Apple reported its one billionth podcast subscriber. How hot the topic has become can also be measured by the fact that in 2019 the streaming service Spotify acquired the podcast services Gimlet Media and Anchor FM and thus became one of the largest providers of podcasts.
In 2020, 33 percent of Germans said that they listened to podcasts, according to a representative survey. In terms of content, most listeners (in addition to the omnipresent topic of Corona) prefer comedy, sports and news. The world of podcasts is widely varied in terms of content, and the number of podcast providers is just as wide-ranging.
Which podcast hosting platform would you like?
Ok, you have decided to start your own podcast. While I will offer tips & tricks on the technical implementation later, the first step you should take is to think about where your podcast should be hosted. This sounds trivial, but this groundwork has a direct impact on the performance of your podcast. On the one hand, the range of hosting platforms is very large; on the other hand, an unwise choice can have a massive impact on the future of your podcast project. But first things first.
Before you decide on a provider, you should clearly outline what you want to achieve with your podcast in the medium and long term and then choose a suitable provider. You have no commercial intentions and prefer to deal with topics and content beyond the prevailing zeitgeist? Then you can easily host your podcast “The Multiband Compressor in Changing Times” on your own website. This will not cause any additional costs, and the traffic from your rather small community will not put any strain on the web server. However, if you plan to address a larger audience, you should rely on a professional podcast provider from the beginning. Otherwise, a later move from your website to a professional provider can be problematic; in the worst case, you may even lose some of your subscribers.
Can’t you just host your podcasts with one of the global players like iTunes, Google Podcasts, or Spotify?
Unfortunately, you can’t. These providers simply pull your data from a dedicated podcast host. Thus, you have to match your plans with the offers of the podcast hosting platforms to find the right partner. If you don’t know exactly where you want your podcast to go, it is best to start with an entry-level package from a professional provider (e.g. Podcaster, Anchor, Captivate or Buzzsprout). If the number of subscribers grows, you can upgrade your hosting plan, for example to earn some extra money via an affiliate marketing integration or to get a more detailed picture of your listeners via detailed statistics.
What equipment do I need for a podcast?
Since podcasts focus primarily on voice recordings, the minimum equipment needed is a microphone and a recording device. As always in audio technology, the price range is wide. Since most podcasts are now consumed on a smartphone, the following thought is obvious: why not record your first attempts directly with your smartphone? Every smartphone has a built-in microphone, and together with the free “Anchor” app from Spotify (available for iOS and Android), you can take your first steps as a podcaster without a big budget, especially since this combination can also be used “off the grid”, beyond a studio environment.
However, this combination quickly reaches its limits when professional sound quality is required, when there are multiple participants in the conversation, or when a guest needs to be integrated remotely via Skype. In this case, we switch from a smartphone to a laptop or desktop computer combined with a professional audio interface with multiple microphone inputs. Tip: For the best possible sound quality, each participant should listen via headphones instead of monitors. If you play back the voice of a remote guest (e.g. via Skype) through monitors, that voice is also picked up again by the local microphone. This inevitably creates unsightly comb-filter effects that are hard to remove later.
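The underlying mechanism is easy to quantify (the 1 ms delay here is just an example figure): if a signal is mixed with a copy of itself delayed by a time τ, the sum shows cancellations at the odd multiples of 1/(2τ). A delay of only 1 ms therefore puts the first notch at 500 Hz, followed by 1.5 kHz, 2.5 kHz and so on, right in the middle of the speech spectrum, which is exactly why the effect is so audible on voices.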
Which microphone is the perfect one for a podcast?
When it comes to choosing a microphone, the range of options is just as wide as with podcast hosting platforms. What we already know: if you are recording a live podcast with one or more participants, you should not use monitors. If you also need to have your hands free while speaking, you can use a headset, i.e. a headphone/microphone combination, as an alternative to a conventional microphone. Among podcasters, for example, the beyerdynamic DT-297-PV-MKII is considered a popular standard. The DT-297-PV combines a cardioid condenser capsule with a dynamic headphone used for monitoring. An affordable alternative is the PreSonus Revelator USB-C microphone, which combines everything necessary for a podcast.
Next Level Sound Quality
There is still room for improvement in terms of sound quality. A look at the equipment of professional podcasts (e.g. The Joe Rogan Experience) provides insight. Here you can see professional broadcast microphones in use across the board, which are known for one thing in addition to their proven sound quality: a predictable proximity effect. Professional broadcast microphones such as the Electro-Voice RE20, Neumann BCM 01 or the Shure SM7 sound very balanced and give the speaker a deep, warm voice at close distance, just as you would expect from the radio. Ribbon microphones also have a very warm and pleasant sound with little sibilance. In addition to the use of well-known high-quality microphones, a closer look reveals another detail in professional podcast studios: acoustic elements for improving the room acoustics.
Ambitious podcasters basically operate like a professional recording studio: the quality of the recording is determined by the weakest link in the chain. The use of an expensive microphone therefore makes only limited sense if the room acoustics are anything but ideal. If you put a professional studio microphone in the shopping cart, the rest of the signal chain (microphone preamplifier, equalizer, compressor, digital converter, room acoustics, post-processing equipment) should operate at a comparable level. A good analog compressor in particular gives the voice the professional sound we know and love from the radio. Excessive dynamic fluctuations are unpleasant to listen to, especially on headphones, and they matter even more when listening in a loud environment such as a car.
During recording
In the recording studio, many users keep their options open and postpone sound-shaping decisions to the mix phase. As a podcaster, you can do the same, but it often makes sense to commit to a sound while recording. Some podcasts are streamed live and are only available for download after the broadcast; in this case, you should definitely use EQ and compression during the recording. The same applies if you receive one or more guests on the podcast: skillful use of EQ and compression makes the sound more balanced, and each participant can understand his or her counterpart better.
Why is that?
Every voice and every microphone sounds different, so there are almost always one or two problem frequencies where a voice might sound too nasal (400-500 Hz), too shrill (3-4 kHz) or too treble-heavy (6-8 kHz). If you filter out these frequencies with a professional EQ like the elysia xfilter 500, the voice will sound much more pleasing and consistent. Even at the source, the sound can be significantly improved with the right tools in just a few steps. The elysia skulpter 500 preamp offers direct access to the most important parameters, such as microphone gain.
This is especially important when talk guests with very tight schedules join in. In these cases, the soundcheck must be as short as possible, which is no problem with the help of the skulpter 500. The built-in microphone preamp boosts low-output microphones by +65 dB when needed, perfect for microphones like the Shure SM7B when they are addressed by a delicate, quiet voice. For fast and efficient sound correction, the “Shape” filter is available. If the microphone is used very close up or the speaker has an unusually low voice, unnecessary low-bass components can be professionally disposed of with the tunable low cut. Very dynamic voices are tamed with the interactive single-knob compressor, which makes for a much more homogeneous sound.
All these functions are controlled by just four potentiometers, which produce fast and comprehensible results. In combination with a professional AD converter or audio interface, this is already half the battle. Speech intelligibility is also very high here, which is important for longer podcasts. The minimum requirement is a distortion-free recording with a constant level captured on the hard disk. For that, you should know the basics of gain staging; you will find lots of information in our previous Gain Staging blog post.
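If you want to verify that minimum requirement outside the DAW, a simple peak check is a good start. Here is a minimal sketch in Python (assuming the numpy and soundfile packages are installed and using a hypothetical file name); it only reports the peak level, not loudness, but it tells you immediately whether a take stayed clear of clipping:

```python
import numpy as np
import soundfile as sf  # assumption: the (py)soundfile package is installed

def peak_dbfs(path: str) -> float:
    """Return the highest sample peak of a recording in dBFS."""
    audio, _ = sf.read(path)            # float samples in the range -1.0 .. +1.0
    peak = np.max(np.abs(audio))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# A spoken-word take that peaks around -6 dBFS keeps a safe margin from 0 dBFS.
print(f"peak level: {peak_dbfs('podcast_take.wav'):.1f} dBFS")
```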
Caution. Room!
For the final touch, you should ensure good room acoustics. Ideally, you will have a recording room with dry, even acoustics that is also well insulated against environmental noise. Investing a not inconsiderable portion of your budget in room acoustics and soundproofing may not have been on your wish list, but it is definitely worth it. Often, there are rooms that are already fairly quiet and dry by themselves, such as the bedroom. As we will see later under “Post-processing for maximum sound”, the recordings are usually compressed significantly once more.
Compression clearly brings any existing room sound to the front of the mix. Room reverberation that may not be particularly annoying during the recording will bother you at the latest in post-processing, and the indirect sound can only be fixed afterward with a lot of effort. The same applies to outside noise that creeps uninvited into the recording. Background noise should be avoided at all costs.
Post-processing for maximum sound
To mix the recorded tracks comfortably, we need audio software, and the selection here is also wide. From free entry-level applications like Audacity to full-fledged DAWs (Cubase, Logic or Pro Tools), both price and feature set cover a wide range, so the podcaster is spoiled for choice. Regardless of price and features, however, there are some criteria that suitable software for podcasters should meet. These include the option to record, to arrange, and to add additional audio material such as jingles, original sounds or background music. Furthermore, the software must be able to combine all audio files into a stereo mixdown in different formats. Let’s get started and put the finishing touches on our signals.
The main focus should always be on the best possible sounding, intelligible voice reproduction. The listener wants to hear the voice, and keeping it clear is what keeps the listener engaged. This is what post-processing is for, and it is divided into three steps. First, we adjust the volume of all signals relative to each other so that the podcast can be listened to without large jumps in level.
If we have several speakers and perhaps different music feeds, we also try to match them in terms of sound and dynamics. The second step is to work on the individual signals with EQ and dynamics processors. More tips on how to use EQs and dynamics tools will be presented in a future blog post (spoiler).
Tools
Another useful tool for improving the sound can be restoration or noise-reduction plug-ins, which specifically filter out background noise. Anything that serves intelligibility and an unobtrusive listening impression is useful for keeping the listener engaged. The final step in podcast post-processing is a “mini-master”: all individual signals are mixed together to create a stereo track. This stereo track can then be fine-tuned again, similar to mastering for a music production. However, the topic of “how to master” requires one or more separate blog posts and cannot be elaborated further here for reasons of space.
However, I would like to share the following tip:
A frequently asked question in the podcast community is: how loud should a podcast master be? The AES (Audio Engineering Society) recommends a loudness between -20 and -16 LUFS for podcasts. We, on the other hand, recommend working with the level that gets the best sound out of the master. This simply depends on the recording and the mix; some voices tolerate heavier compression and limiting than others. Therefore, the ear, and not the LUFS meter, should make the final call. More on this topic in our previous Mastering for Spotify blog post.
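If you still want a number to check against the AES recommendation, integrated loudness can be measured in a few lines of Python. A minimal sketch, assuming the open-source soundfile and pyloudnorm packages and a hypothetical file name:

```python
import soundfile as sf
import pyloudnorm as pyln   # assumption: pyloudnorm is installed (pip install pyloudnorm)

data, rate = sf.read("podcast_master.wav")   # hypothetical master file
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)

print(f"integrated loudness: {loudness:.1f} LUFS")  # compare against the -20 to -16 LUFS range
```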
That has format
Once we have found suitable mastering settings, the mixdown to the appropriate audio format is next. Usually, this is an MP3. MP3 is preferred for podcasts due to its small file size, even though it is not the best option in terms of sound. A direct competitor is M4A, which produces similarly small files and sounds even better. Unfortunately, not all podcast platforms (e.g. Spotify) support the M4A format, so you should only choose it if you prefer “sound over reach”. If sound is the most important criterion, there is no way around the WAV format. However, the big disadvantage of WAV is the huge file size, which might scare away some subscribers, especially if they prefer to download your podcasts or access them via mobile data on their smartphones. Therefore, a high-resolution MP3 (320 kbit/s) is currently the common compromise.
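The size difference is easy to estimate (the one-hour episode used here is purely an example): uncompressed stereo WAV at 44.1 kHz and 16 bit needs 44,100 samples × 2 bytes × 2 channels ≈ 176 kB per second, i.e. well over 600 MB per hour, while a 320 kbit/s MP3 needs about 40 kB per second, i.e. roughly 145 MB per hour, and a typical 128 kbit/s spoken-word MP3 only around 58 MB.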
Metadata
After we have created the master, the last step is to add the metadata. For podcasts, inserting so-called ID3 tags into the master file is practically indispensable. This guarantees that your podcast is equipped with the most important information (podcast title, cover art, year of publication, etc.); this metadata is the digital identity card for your podcast file. If your audio software cannot write metadata, the ID3 tags can also be added later via software like iTunes, Mp3tag or ID3 Editor. The PreSonus DAW Studio One has an extra “Project Page” that works like a mastering program, where metadata for individual tracks and albums, including artwork, can also be entered.
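For scripted workflows, the same tags can also be written programmatically. A minimal sketch, assuming the open-source Python library mutagen and a hypothetical episode file name (just one possible tool among several):

```python
from mutagen.easyid3 import EasyID3
from mutagen.id3 import ID3NoHeaderError

MP3_FILE = "episode_001.mp3"   # hypothetical episode file

# Open the existing ID3 tag, or start with an empty one if the file has none yet.
try:
    tags = EasyID3(MP3_FILE)
except ID3NoHeaderError:
    tags = EasyID3()

tags["title"] = "Episode 001"
tags["artist"] = "My Podcast"          # commonly displayed as the author in podcast apps
tags["album"] = "My Podcast, Season 1"
tags["date"] = "2021"
tags.save(MP3_FILE)
# Cover art would need a full ID3 APIC frame and is omitted in this sketch.
```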
The ID3 or enclosure tags are important because this metadata can be referenced in an RSS feed. This makes it easier to find your episodes and to download them automatically if you wish. But what is an RSS feed? An RSS feed is a file format that delivers content from the Internet. You can use it to subscribe to blogs or podcasts, and you are automatically alerted to new content. RSS feeds are effectively the standard for a podcast subscription. Theoretically, you can create an RSS feed yourself, but this is not necessary if you use a podcast hosting platform: it automatically generates an appropriate RSS feed for its customers. One more reason to entrust your podcast to a hosting platform.
Summary
We hope this blog post has provided some valuable tips and tricks for your podcast venture. At least as far as the production side is concerned, it should now be clear how to produce a podcast. You should always keep one thing in mind: A podcast with good sound and professional production does not guarantee a large number of subscribers. Basically, a successful podcast is hardly different from a successful music production at this point. Only those who understand how to combine a good sound with interesting, fresh content will build up a loyal and hopefully steadily growing audience over time. Especially in challenging times, people are increasingly looking for one thing: meaningful content with relevance. If your podcast succeeds in combining both and the production is also convincing in terms of sound, then your podcast is unlikely to complain about a lack of attention in the future.
I hope you enjoyed this post, and I would be happy if you commented on, discussed, and shared it.
Yours, Ruben Tilgner
MTV was just the beginning. It seems that video has totally gained dominance over good old audio. We know from communication science that pictures are the most direct form of communication. Does this also mean that visual communication is generally superior? What influence does this have on the way we consume and produce music? Is seeing even more important than hearing?
Corona is changing our world permanently. Since the beginning of the pandemic, video traffic in Germany has quadrupled. Instead of picking up the phone, people prefer to make Zoom calls. This has a clear impact on our communication structures. But as with any mass phenomenon, there is always a countermovement. It manifests itself in the form of the good old record player.
For the first time, more vinyl records than CDs were sold in Germany in 2021. This decelerated type of music consumption is completely at odds with the prevailing zeitgeist. The desire to hold your favorite music in your hand as a vinyl record is stronger than ever. The fact that we process music from the record player exclusively with our hearing is so archaic that it seems out of step with the times.
At the same time, enjoying music with the help of a record player corresponds, phylogenetically, completely to human nature. In the following, we will clarify why this is so. We learn from the past for the future. This is also true for producing music. The goal of a successful music production should be to inspire the audience. Music is not a pure end in itself. For this, we only need to look at the origins of music.
The origin of music
Germany is an old cultural nation. Very old, to be precise. This is shown by archaeological findings discovered during excavations in a cave in the Swabian Alb. Researchers found flutes made of bone and ivory there that are believed to be up to 50,000 years old. The flute makers even implemented finger holes that allowed the pitch to be manipulated. Experts estimate that humanity has been expressing itself rhythmically and melodically for a very long time. These non-linguistic acoustic events are believed to have served primarily social contexts. Music enabled emotional vocal expressions and established itself as a second communication system parallel to language. Much of the emotional level of music-making has survived to this day, such as the so-called “chill effect”.
This occurs when music gives you goosebumps. The goosebumps are the physical reaction to a chill effect moment. The chill effect also causes the brain’s reward system to be stimulated and happy hormones to be released. This happens when the music provides special moments for the listener, and these moments are often very subjective. But this is precisely where music listeners derive their benefit during music consumption. Emotionality is the currency of music. For this reason, children should be enabled to learn a musical instrument. Along with language, music is a profoundly human means of expression. Music teaches children to experience emotions and also to express their own feelings. It is an alternative means of expression in case language fails. It is the desire for emotionality that makes us reach for the vinyl record as our preferred music medium in special moments.
Then and now
The vinyl record is preserved music. The flutists of the Swabian Alb could only ever practice their music in the “here and now”. No recording, no playback – handmade music for the moment. That is what making music meant for the longest period of human history. With the digital revolution, music-making changed radically. In addition to traditional instruments, keyboards, drum machines, sampling, and sequencers came along in the 80s. The linearity of music-making was broken. Music no longer necessarily had to be played simultaneously. Rather, a single musician was able to gradually play a wide variety of instruments and was no longer dependent on fellow musicians. As a result, several new musical styles emerged side by side in a short time, a trademark of the 80s.
The Nineties
In the 90s, the triumph of digital recording and sampling technology continued. Real sounds were replaced by samplers and romplers, which in turn received competition from midi programming. With midi sequencers, screens and monitors increasingly entered the recording studios, and music was made visible for the first time. The arrangement could be heard and seen simultaneously. The 2000s were the era of comprehensive visualization of music production. Drums, guitars, basses, and synths – everything is available as a VST instrument and since then virtually at home inside our monitors.
At the same time, the DAW replaces the hard disk recorders that were common until then. The waveform display in a DAW is the most comprehensive visual representation of music to date and allows precise intervention in the audio material. For many users, the DAW is becoming a universal production tool, providing theoretically infinite resources in terms of mix channels, effects, EQs, and dynamics tools. In recent years, the previously familiar personnel structure has also changed. Not the band, but the producer creates the music. Almost everything takes place on the computer.
Due to this paradigm shift, new music genres emerge, which are particularly at home in the electronic field (Trap, Dubstep, EDM). It is not uncommon for these productions to no longer use audio hardware or real instruments.
Burnout from Wellness Holidays
A computer with multiple monitors is the most important production tool for many creatives. The advantages are obvious: cost-effective, an unlimited number of tracks, lossless recordings, complex arrangements can be handled, and an unlimited number of VST instruments and plug-ins. Everything can be automated and saved. Total recall is a given. If you get stuck at any point in the production, YouTube offers suitable tutorials on almost any audio topic. Painting by numbers. Music from a ready-made kit. Predefined ingredients lead to a predictable result without much headache.
Stone Age
Our Swabian flutists would be surprised. Music only visual? No more hardware needed? No need to play by hand? The Neanderthal hidden in our brain stem subconsciously resists. The eye replaces the ear? Somehow something is going wrong. In fact, this kind of producing contradicts the natural prioritization of human senses. The Stone Age flute player could usually hear dangers before he could see them. Thanks to our ears, we can even locate with amazing accuracy the direction from which a saber-toothed tiger is approaching.
Evolution had its reasons for making hearing the only sense that cannot be completely shut off. You can hold your nose or close your eyes, but even with fingers in your ears, a human being still perceives the approaching mammoth. The dull vibrations trigger a sensation of fear. This was, and still is, essential for survival. Sounds are always associated with emotions. According to Carl Gustav Jung (1875 – 1961), the human psyche carries collective memories in the subconscious. He called these archetypes.
Emotions
Sounds such as thunder, wind, or water generate immediate emotions in us. Conversely, emotions such as joy or sadness are best expressed with music. In this context, hearing is eminently important. Hands and ears are the most important tools of the classical musician, which is why quite a few musicians who are blind play at the highest level. Those who rely exclusively on the computer for music production deprive themselves of one of their best tools. Music production with keyboard and mouse is rarely more than sober data processing with an artificial candy coating. DAW operation via mouse demands constant control by our eyes. There is no tactile feedback. In the long run, this is tiring and does not come without collateral damage. Intuition is usually the first casualty.
Seeing instead of listening?
The visualization of music is not problematic by itself. Quite the opposite, in fact, because sometimes it is extremely helpful. Capturing complex song sequences or precisely editing audio files is a blessing with adequate visualization. As far as the core competence of music production is concerned, the balance is much more ambivalent. Adjusting an EQ, compressor, effect, or even adjusting volume ratios exclusively with monitor and mouse is ergonomically questionable. It is like trying to saw through a wooden board with a wood planer: simply an unfortunate choice of tool.
Another aspect also has a direct impact on our mix.
The visual representation of the EQ curve in a DAW or digital mixer has a lasting effect on how we process signals with the EQ. Depending on the resolution of the display, we use the filters sometimes more and sometimes less drastically. If the visual representation creates a massive EQ hump on the screen, our brain inevitably questions this EQ decision. Experience has shown that with an analog EQ without a graphical representation, these doubts are much less pronounced.
The reason: the reference of an analog EQ is the ear, not the eye. If a guitar needs a wide boost at 1.2 kHz to assert itself in the mix, we are more likely to make drastic corrections with an analog EQ than with a DAW EQ whose visualization piles up a massive EQ hump on the monitor screen. Successful producers and mixers sometimes work with drastic EQ settings without giving it much thought. Inexperienced users who resort to an equalizer with a visual curve display too often use their eyes instead of their ears in their search for suitable settings. This often leads to wrong decisions.
Two identical EQs @ 1 kHz, each with +12 dB
EQ @ 1 kHz with +12 dB at 30 dB display resolution
EQ @ 1 kHz with +12 dB at 6 dB display resolution
Embrace the chaos
When asked what is most lacking in current music productions, the answer is intuition, interaction, and improvisation. When interacting with other musicians, we are forced to make spontaneous decisions and sometimes make modifications to chords, progressions, tempos, and melodies. Improvisation leads to new ideas or even a song framework, the DNA of which can be traced back to the sense of hearing and touch.
Touch and Feel
The sense of touch in combination with a real instrument offers unfiltered access into the subconscious. Or loosely according to Carl Gustav Jung to the primal images, the archetypes. Keyboard & mouse do not have this direct connection. To be able to interact musically with VST instruments and plugins, we, therefore, need new user interfaces that serve our desire for a haptic and tactile experience. Especially at this point, a lot has happened in the past few years. The number of DAW and plug-in controllers is steadily increasing, forming a counter-movement to the keyboard & mouse.
Faders, knobs and switches are fun
Feeling potentiometer positions allows operation without consciously looking, like a car radio. For this reason, the Federal Motor Transport Authority considers the predominant operation of a modern electric car via the touchscreen to be problematic. The fact is: with this operating concept, the driver’s attention shifts from the road to the touchscreen more often than in conventional automobiles with hardware pushbuttons and switches. The wrong tool for the job? The similarities are striking. A good drummer plays a song in a few takes. Yet some producers prefer to program the drums, even if it takes significantly longer – especially if they then have to work something like feel and groove back into the rigid, programmed drum takes.
The same goes for programming automation curves for synth sounds, for example the cutoff of a TB-303. Playing it in by hand is faster than programming it, and the result is more organic. It is no accident that experienced sound engineers see their old SSL or Neve console as an instrument, and in the literal sense. Intuitive interventions in the mix with pots and faders put the focus on the ear and deliver original results in real time.
Maximum reduction as a recipe for success
In the analog days, you could only afford a limited number of instruments and pro audio equipment. Purchasing decisions were made more consciously and the limited equipment available was used to its full potential. Today it is easy to flood the plugin slots of your DAW with countless plugins on a small budget. But one fact is often overlooked. The reduction to carefully selected instruments is very often style-shaping. Many musicians generate a clear musical fingerprint precisely through their limited instrument selection.
The concentration on a few, but consciously selected tools defines a signature sound, which in the best case becomes an acoustic trademark. This is true for musicians as well as for sound engineers and producers. Would Andy Wallace deliver the same mixes if he swapped his favorite tool (SSL 4000 G+) for a plugin bundle complete with DAW? It is no coincidence that plugin manufacturers are trying to port the essence of successful producers and sound engineers to the plugin level. Plugins are supposed to capture the sound of Chris Lord Alge, Al Schmitt, or Bob Clearmountain.
An understandable approach, but with the curious aftertaste that these very gentlemen are hardly known for preferring to work with plugins. Another curiosity is reviving popular hardware classics as plugin emulations. A respectable GUI is supposed to convey a value comparable to that of the hardware, yet it is only the programming, the code, that determines the sound of the plugin. Another example of how visualization influences the choice of audio tools.
Just switch off
Don’t get me wrong, good music can also be produced with a mouse and keyboard. But there are good reasons to question this way of working. We are not spreading the audio engineering gospel. We just want to offer an alternative to visualized audio production and shift the focus from the eye back to the ear. That music often gets lost in the background noise of the zeitgeist is something we will hardly be able to reverse.
But maybe it helps to remember the archetypes of music. Listening to music instead of seeing it and, in the literal sense, taking a hands-on approach again. Using real instruments, interacting with other musicians, using pro audio hardware that allows tactile feedback.
Self-limiting to a few deliberately selected instruments, analog audio hardware, and plug-ins with hardware controller connectivity. This intuitive workflow can help break through familiar structures and ultimately create something new that touches the listener. Ideally, this is how we find our way back to the very essence of music: emotion!
Finally, one last tip: “Just switch off!” Namely, the DAW monitor. Listen through the song instead of watching it. No plugin windows, no meter displays, no waveform display – listen to the song without any visualization. Like a record, because unlike MTV, it has a future.
What do you think? Leave a comment and share this post if you like it.
Yours, Ruben
Mastering for Spotify, YouTube, Tidal, Amazon Music, Apple Music and other Streaming Services
Do audio streaming platforms also require a special master?
Introduction
Streaming platforms (Spotify, Apple, Tidal, Amazon, Youtube, Deezer etc.) are hot topics in the audio community. Especially since these online services suggest concrete guidelines for the ideal loudness of tracks. To what extent should you follow these guidelines when mastering and what do you have to consider when interacting with audio streaming services? To find the answer, we have to take a little trip back in time.
Do you remember the good old cassette recorder? In the 80s, people used it to make their own mixtapes. Songs by different artists gathered on one tape, which we pushed into the car’s tape deck, Cherry Coke in the other hand, in order to show up with a suitable soundtrack at the next ice cream parlor in the city center. The mixtapes offered a consistently pleasant listening experience, at least as far as the volume of the individual tracks was concerned. When we created mixtapes, the recording level was simply adjusted by hand, so that records of different loudness were more or less consciously normalized.
Back to the Streaming Future. Time leap: Year 2021.
Music fans like us still enjoy mixtapes, except that today we call them playlists and they are part of various streaming services such as Apple Music, Amazon Music, Spotify, YouTube, Deezer, or Tidal. In their early years, these streaming services quickly discovered that without a regulating hand on the volume fader, their playlists required constant readjustment by the users due to the varying loudness of individual tracks.
So they looked for a digital counterpart to the analog record level knob and found it in an automated normalization algorithm that processes every uploaded song according to predefined guidelines. The streaming service Spotify, for example, specifies -14 dB LUFS as the ideal loudness value. This means that if our song is louder than -14 dB LUFS, it will automatically be reduced in volume by the streaming algorithm so that playlists have a more consistent average loudness. Sounds like a good idea at first glance, right?
Why LUFS?
The problem with different volume levels was not just limited to music. In broadcasting, it was also widespread. The difference in volume between a television movie and the commercial interruption it contains sometimes took on such bizarre proportions that the European Broadcasting Union felt forced to issue a regulation on loudness. This was the birth of the EBU R128 specification, which was initially implemented in Germany in 2012. With this regulation, a new unit of measurement was introduced: the LUFS (Loudness Units relative to Full Scale).
One LU (Loudness Unit) corresponds to a relative change of 1 dB. At the same time, a new upper limit for digital audio was defined: a digital peak level of -1 dB TP (True Peak) should not be exceeded according to the EBU specification. This is the reason why Spotify and co. specify a True Peak limit of -1 dBFS for music files.
Tip: I recommend keeping this limit, especially if we do not adhere to the loudness specification of -14 dB LUFS. At higher levels, the normalization algorithm will definitely intervene. Spotify points out that if we do not keep -1 dB TP as the limiter ceiling, sound artifacts may occur during the normalization process.
This value is not set in stone, as you will see later. Loudness units offer a particular advantage to the mastering engineer: simply put, LUFS lets us quantify how “loud” a song is and thereby compare different songs in terms of loudness. More on this later.
How can we see if our mix is normalized by a streaming service?
The bad news is that some streaming services have quite different guidelines. Therefore, you basically have to search for the specifications of each individual service if you want to follow their guidelines. This can be quite a hassle, as there are more than fifty streaming and broadcasting platforms worldwide. As an example, here are the guidelines of some services with regard to ideal LUFS values:
-16 LUFS Apple, AES Streaming Service Recommendations
-18 LUFS Sony Entertainment
-23 LUFS EBU R128 Broadcast
-24 LUFS US TV ATSC A/85 Broadcast
-27 LUFS Netflix
The good news is that there are various ways to compare your mix with the specifications of the most important streaming services at a glance. How much your specific track will be turned down by a given streaming service can be checked on the following website: www.loudnesspenalty.com
Some DAWs, such as the latest version of Cubase Pro, also feature comprehensive LUFS metering. Alternatively, the industry offers various plug-ins that provide information about the LUFS loudness of a track. One suitable candidate is YOULEAN Loudness Meter 2, which is also available in a free version: https://youlean.co/youlean-loudness-meter/.
Another LUFS metering alternative is the Waves WLM Plus Loudness Meter, which is already fed with a wide range of customized presets for the most important platforms.
Metering
Using the Waves meter as an example, we will briefly go through the most important LUFS readings, because LUFS metering involves a lot more than a naked number in front of the unit. When we talk about LUFS, it should be clear what exactly is meant. LUFS values are determined over a period of time, and depending on the length of that time span, this can lead to different results. The most important value is the LUFS Long Term reading.
This is determined over the entire duration of a track and therefore represents an average value. To get an exact Long Term value, we have to play the song once from beginning to end. Other LUFS meters (e.g. in Cubase Pro) refer to the Long Term value as LUFS Integrated. LUFS Long Term or Integrated is the value referenced in the streaming platforms’ specifications. For “Spotify Normal” this means that if a track has a loudness of -12 LUFS Integrated, the Spotify algorithm will lower this track by two dB to -14 LUFS.
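To make the arithmetic explicit, here is a tiny Python sketch of that normalization offset; the target values are simply the examples discussed in this post, not an official Spotify API:

```python
# Minimal sketch: how much a normalizing streaming service would turn a track
# up or down, given its measured integrated loudness.
def normalization_offset_db(track_lufs: float, target_lufs: float = -14.0) -> float:
    """Positive = platform turns the track down, negative = (possibly) up."""
    return track_lufs - target_lufs

print(normalization_offset_db(-12.0))   # 2.0  -> lowered by 2 dB to -14 LUFS
print(normalization_offset_db(-16.0))   # -2.0 -> quieter than the target
```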
LUFS Short Term
The Waves WLM Plus plugin offers other LUFS indicators for evaluation, such as LUFS Short Term. LUFS Short Term is determined over a period of three seconds when the plugin measures according to EBU standards. This is an important point, because depending on the ballistics, the measurement windows differ in length and can therefore lead to different results. A special feature of the Waves WLM Plus plugin is the built-in True Peak limiter. Many streaming platforms insist on a True Peak limit of -1dB (some even -2dB). If you use the WLM Plus meter as the last plugin in the chain of your mastering software, the True Peak limit is guaranteed not to be exceeded when the limiter is activated.
Is the “Loudness War” finally over thanks to LUFS?
As we already learned, all streaming platforms define maximum values. If our master exceeds these specifications, it will automatically be turned down. The supposedly logical conclusion: we no longer need loud masters. At least this is true for those who adhere to the specifications of the streaming platforms. Now, parts of the music industry have always been considered a place removed from all reason, where things like to run differently than logic dictates. The “LUFS dictate” is a fitting example of this.
The fact is: in practice, most professional mastering engineers care neither about LUFS nor about the specifications of the streaming services!
Weird stuff, I know. However, the facts are clear and the thesis can be proven with simple methods. We remember that YouTube, just like Spotify, specifies a loudness of -14dB LUFS and automatically plays louder tracks at a lower volume. So all professional mixes should take this into account, right? Conveniently, this can be checked without much effort. Open a recent music video on YouTube, right-click on the video and click on “Stats for nerds”. The entry “content loudness” indicates by how many dB the audio track is lowered by the YouTube algorithm. Now things become interesting. For the current AC/DC single “Shot in the Dark” this is 5.9dB. Billy Talent’s “I Beg To Differ” is even lowered by 8.6dB.
Amazing, isn’t it?
Obviously, hardly anyone seems to adhere to the specifications of the streaming platforms. Why is that?
There are several reasons. The loudness specifications differ from streaming platform to streaming platform. If you took these specifications seriously, you would have to create a separate master for each platform. This would result in a whole series of different-sounding tracks, for the following reason: mastering equipment (whether analog or digital) does not work linearly across the entire dynamic spectrum.
Example:
The sound of the mix/master changes if you have to squeeze 3dB more gain reduction out of the limiter for one platform than for another. If you then normalize all master files to an identical average level, the sound differences become audible due to the different dynamics processing. The differences are sometimes bigger and sometimes smaller, depending on the processing you have applied.
Another reason for questioning the loudness specifications is the inconsistency of the streaming platforms. Take Spotify, for example. Did you know that Spotify’s normalization algorithm is not active when Spotify is played via the web player or a third-party app? From the Spotify FAQs:
The Metal Mix
This means that if you deliver a metal mix at -14dB LUFS and it is played back via Spotify in a third-party app, the mix is simply too weak compared to other productions. And there are other imponderables in the streaming universe. Spotify allows its premium users to choose from three different normalization settings, with targets that also differ. For example, the platform recommends a default of -11dB LUFS and a True Peak value of -2dB TP for the “Spotify Loud” setting, while “Spotify Normal” is specified at -14dB LUFS and -1dB TP. Also from the Spotify FAQs:
For mastering engineers, this is a questionable state of affairs. Mastering for streaming platforms is like trying to hit a constantly changing target at varying distances with a precision rifle. Even more serious, however, is the following consideration: what happens if one or more streaming platforms raise, lower, or even eliminate their loudness thresholds in the future? There is no guarantee that the specifications currently in place will still be valid in the future. Unlikely? Not at all! YouTube introduced its normalization algorithm in December 2015. Uploads prior to December 2015 may sound louder if they were mastered louder than -14dB LUFS. Even after 2015, YouTube’s default has not remained constant. From 2016 to 2019, the typical YouTube normalization was -13dB and did not refer to LUFS. Only since 2019 has YouTube been using -14dB LUFS as its default.
The reason why loudness is not exclusively manifested in numbers
If you look at the loudness statistics of some YouTube videos and listen to them very carefully at the same time, you may make an unusual observation. Some videos sound louder even though their loudness statistics indicate that they are nominally quieter than other videos. How can this be? There is a difference between measured loudness in LUFS and perceived loudness. Indeed, it is the latter that determines how loud we perceive a song to be, not the LUFS figure. But how do you create such a lasting loudness impression?
Many elements have to work together for us to perceive a song as loud (perceived loudness). Stereo width, tonal balance, song arrangement, saturation, dynamics manipulation – just to name a few pieces of the puzzle. The song must also be well composed and performed. The recording must be top-notch and the mix professional. The icing on the cake is a first-class master. If all these things come together, the song is denser, more forward and, despite moderate mastering limiter use, simply sounds louder than a mediocre song with a weaker mix and master, even if the LUFS Integrated figures suggest a different result. An essential aspect of the mastering process is professional dynamics management. Dynamics are an integral part of the arrangement and mix from the beginning.
In mastering, we want to further emphasize dynamics while not destroying them, because one thing is always inherent in the mastering process: a limitation of dynamics. How well this manipulation of dynamics is done is what separates good mastering from bad mastering. A good mix with a professional master always sounds fatter and louder than a bad mix with a master that is only trimmed for loudness.
Choose your tools wisely!
High quality equalizers and compressors like the combination of the elysia xfilter and the elysia xpressor provide a perfect basis for a more assertive mix and a convincing master. Quality compression preserves the naturalness of the transients, which automatically makes the mix appear louder. You miss the punch and pressure in your song? High-quality analog compressors always guarantee impressive results and are more beneficial to the sound of a track than relying solely on digital peak limiting.
Losing audible details in the mixing and mastering stage? Bring them back to light with the elysia museq! The number of playback devices has grown exponentially in recent years. This does not exactly make the art of mastering easier.
Besides the classic hi-fi system, laptops, smartphones, Bluetooth speakers, and all kinds of headphones are fighting for the listener’s attention in everyday life. Good analog EQs and compressors can help to adjust the tonal focus for these devices as well. Analog processing also preserves the natural dynamics of a track much better than endless rows of plug-ins, which often turn out to slow down the workflow. But “analog” can provide even more for your mixing and mastering project. Analog saturation is an additional way to increase the perceived loudness of a mix and to noticeably improve audibility, especially on small monitoring systems like a laptop or a Bluetooth speaker.
Saturation and Coloration
The elysia karacter provides a wide range of tonal coloration and saturation that you can use to make a mix sound denser and more assertive. Competitive mastering benefits sustainably from the use of selected analog hardware. The workflow is accelerated and you can make necessary mix decisions quickly and accurately. For this reason, high-quality analog technology enjoys the highest popularity, especially in high-end mastering studios. karacter is available as a 1RU 19″ version, as the karacter 500 module, and in our new super-handy qube series as the karacter qube.
Mastering Recommendations for 2021
As you can see, the considerations related to mastering for streaming platforms are anything but trivial. Some people’s heads may be spinning because of the numerous variables. In addition, there is still the question of how to master your tracks in 2021.
The answer is obvious: create your master in a way that serves the song. Some styles of music (jazz, classical) require much more dynamics than others (heavy metal, hip-hop). The latter can certainly benefit from distortion, saturation, and clipping as a stylistic element. What sounds great is allowed. The supreme authority for a successful master is always the sound. If the song calls for a loud master, it is legitimate to put the appropriate tools in place for it. The limit of loudness maximization is reached when the sound quality suffers. Even in 2021, the master should sound better than the mix. The use of compression and limiting should always serve the result and not be based on the LUFS specifications of various streaming services. Loudness is a conscious artistic decision and should not degenerate into an attempt to hit certain LUFS specifications.
And the specifications of the streaming services?
At what LUFS value should I master?
There is only one valid reason to master a song to -14dB LUFS. The value of -14dB LUFS is just right if the song sounds better with it than with -13 or -15dB LUFS!
I hope you were able to take some valuable information from this blog post and it will help you take your mix and personal master for digital streaming services to the next level.
I would be happy about a lively exchange. Feel free to share and leave a comment or if you have any further questions, I’ll be happy to answer them of course.
Yours, Ruben Tilgner
Gain staging and the integration of analog hardware in modern DAW systems
Introduction
-18dBFS is the new 0dBu:
In practice, however, even experienced engineers often have only an approximate idea of what “correct” levels are. Like trying to explain the offside rule in soccer, successful level balance is simple and complex at the same time, especially when the digital and analog worlds are supposed to work together on equal footing. This blog post offers concrete tips for confident headroom management and for integrating analog hardware into a digital, DAW-based production environment in a meaningful way.
Digital vs. Analog Hardware
The good thing is that you don’t have to choose one or the other. In modern music production, we need both worlds, and with a bit of know-how, the whole thing works surprisingly well. But the fact is: on the one hand, digital live consoles and recording systems are becoming more and more compact in terms of their form factor; on the other hand, the number of inputs and outputs and the maximum number of tracks keep increasing. The massive number of input signals and tracks makes it all the more important to always find suitable level ratios.
Let’s start at the source and ask the simple question, “Why do you actually need a mic preamp?”
The answer is as simple as it is clear. We need a mic preamp to turn a microphone signal into a line signal. A mixer, audio interface, or DAW always operates at line level, not microphone level. This applies to all of its audio connections, such as insert points or audio outputs. How far do we actually need to turn up the microphone preamp, and is there one “correct” level? There is no universally valid constant, but there is a thoroughly sensible recommendation that has proven itself in practice: I recommend leveling all input signals to line level with the help of the microphone preamplifier. Line level is the sweet spot for audio systems.
But what exactly is line level now and where can it be read?
Now we’re at a point where it gets a little more complicated. The definition of line level is based on a reference level, and this differs depending on which standard is used. The reference level for professional audio equipment according to the German broadcast standard is +6dBu (1.55 Vrms, -9dBFS); it is referenced to 0dBu at 0.775V (RMS). In the USA, the analog reference level of +4dBu, corresponding to 1.228V (RMS), is used. Also relevant in audio technology are the reference level of 0dBV, corresponding to exactly 1V (RMS), and the consumer level (USA) of -10dBV, corresponding to 0.3162V (RMS). Got it? We will focus on the +4dBu reference level in this blog post, simply because most professional audio equipment relies on this reference for line level.
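For reference, the voltages behind these numbers follow directly from the dB formula. A small Python sketch (values rounded):

```python
# Minimal sketch of the voltages behind the reference levels mentioned above.
# 0 dBu is referenced to 0.775 Vrms, 0 dBV to 1.0 Vrms.
def dbu_to_volts(dbu: float) -> float:
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv: float) -> float:
    return 1.0 * 10 ** (dbv / 20)

print(round(dbu_to_volts(+4), 3))    # ~1.228 V (US pro reference)
print(round(dbu_to_volts(+6), 3))    # ~1.55 V  (German broadcast reference)
print(round(dbv_to_volts(-10), 4))   # ~0.3162 V (consumer reference)
```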
dBu & dBV vs. dBFS
What is +4dBu and what does it mean?
Level ratios in audio systems are expressed in the logarithmic ratio decibel (dB). It is important to understand that there is a difference between digital and analog mixers in terms of “dB metering”. Anyone who has swapped from an analog to a digital mixer for the first time (or vice versa) has experienced this: obviously, the usual level settings don’t fit anymore. Why is that? The simple explanation: analog mixers almost always use 0dBu (0.775V) as a reference point, while their digital counterparts use the standard set by the European Broadcasting Union (EBU) for digital audio levels. According to the EBU, the old analog “0dBu” should now be equivalent to -18dBFS (full scale). Digital console and DAW users, therefore, take note: -18dBFS is our new 0dBu!
This sounds simple, but unfortunately, it’s not that easy, because dBu values can’t be unambiguously converted to dBFS. It varies from device to device which analog voltage leads to a certain digital level. Many professional studio devices are specified with a nominal output of +4dBu, while consumer devices tend to use the dBV reference (-10dBV). As if that were not enough confusion, there are also massive differences in terms of “headroom”. With analog equipment, there is still plenty of headroom available when a VU meter hovers around the 0dB mark; often another 20dB is available before analog soft clipping signals the end of the line. The digital domain is much more uncompromising at this point: levels beyond the 0dBFS mark produce hard clipping, which sounds unpleasant on the one hand and represents a fixed upper limit on the other. The level simply does not get any louder.
We keep in mind: The analog world works with dBu & dBV indications, while dBFS describes the level ratios in the digital domain. Accordingly, the meter displays on an analog mixing console are also different compared to a digital console or DAW.
Analog meter indicators are referenced to dBu. If the meter shows 0dB, this equals +4dBu at the mixer output and we can enjoy generous headroom. A digital meter is usually scaled over the range of -80 to 0dBFS, with 0dBFS representing the clipping limit. To make a comparison, let’s recall: 0dBu (analog) = -18dBFS (digital). This is true for many digital devices, such as Yamaha digital mixers, but not all. Pro Tools, for example, works with a reference of 0dBu = -20dBFS. We often find this difference when comparing European and US equipment. The good news is that we can live very well with this difference in practice. Two dB is not what matters in the search for the perfect level of audio signals.
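As a small illustration, here is a Python sketch of the conversion; the alignment offsets are simply the examples mentioned above and must be looked up for your own gear:

```python
# Minimal sketch: converting between analog dBu and digital dBFS requires the
# device's alignment, because it differs per manufacturer.
def dbu_to_dbfs(dbu: float, dbfs_at_0dbu: float = -18.0) -> float:
    """EBU-style alignment by default: 0 dBu = -18 dBFS."""
    return dbu + dbfs_at_0dbu

print(dbu_to_dbfs(0))                        # -18 dBFS (e.g. many Yamaha desks)
print(dbu_to_dbfs(0, dbfs_at_0dbu=-20.0))    # -20 dBFS (the Pro Tools alignment mentioned above)
print(dbu_to_dbfs(+4))                       # -14 dBFS
```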
Floating Point
But why do we need to worry about level ratios in a DAW at all? Almost all modern DAWs work with floating-point arithmetic, which provides the user with infinite headroom and dynamics (theoretically 1500dB). The internal dynamics are so great that clipping cannot occur. Therefore, common wisdom on this subject is: “You can do whatever you want with your levels in a floating-point DAW, you just must not overdrive the sum output”. Theoretically true, but practically problematic for two reasons. First, there are plug-ins (often emulations of classic studio hardware) that don’t like it at all if you feed their input with extremely high levels.
This degrades the signal audibly. Very high levels have a second undesirable side effect: they make it virtually impossible to use analog audio hardware as an insert. Most common DAWs work with a 32-bit floating-point audio engine. Clipping can only occur on the way into the DAW (e.g. an overdriven mic preamp) or on the way out of the DAW (an overdriven sum DA converter). This happens faster than you think. Example: anyone who works with commercial loops knows the problem. Finished loops are often normalized, so the loudest parts quickly reach the 0dBFS mark on your peak meter. If we play several loops simultaneously and two of them reach 0dBFS at the same moment, we already have clipping on the master bus. Excessive levels in a DAW should therefore be avoided at all costs.
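A quick back-of-the-envelope Python sketch shows why two normalized loops are already enough; the peak values are purely illustrative:

```python
# Minimal sketch: two normalized loops peaking at the same moment already
# exceed 0 dBFS on the master bus.
import math

def db_fs(linear_peak: float) -> float:
    return 20 * math.log10(linear_peak)

loop_a_peak = 1.0     # normalized loop, peak at 0 dBFS
loop_b_peak = 1.0     # second normalized loop, peak at the same spot
summed_peak = loop_a_peak + loop_b_peak

print(db_fs(summed_peak))   # ~ +6 dBFS -> clipping at the converter or any fixed-point stage
```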
Noise Generator
We’ve talked about clipping and headroom so far, but what about the other side of the coin? How do analog and digital audio systems handle very low levels? In the analog world, the facts are clear: the lower our signal level, the closer our useful signal approaches the noise floor. That means our “signal to noise” ratio is not optimal. Low signals enter the ring with the noise floor, which doesn’t come off without causing collateral damage to the sound quality. Therefore, in an analog environment, we must always emphasize solid levels and high-quality equipment with the best possible internal “signal to noise” ratio. This is the only way to guarantee that in critical applications (e.g. classical recordings, or music with very high dynamics) the analog recording is as noise-free as possible.
And digital?
The good news is that in the digital domain there is no problem with the noise floor. It is simply not there. Instead, there are other vagaries with low recording levels in a digital environment, and these are related to the way digital converters work. At full scale (0dBFS) every single bit of a 24-bit AD converter is used. Low-level signals, on the other hand, are converted at a lower effective bit depth. A rule of thumb for a 24-bit converter is 1 bit = 6dB. That means 0dBFS = 24 bits, -6dBFS = 23 bits, -12dBFS = 22 bits. At line level (-18dBFS) we therefore still have a high resolution of 21 bits available. At very low levels (e.g. -60dBFS, which leaves only about 14 bits), however, the resolution is lacking.
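The rule of thumb is easy to put into a few lines of Python; this is just the approximation from above, not a statement about any specific converter:

```python
# Minimal sketch of the "1 bit ~ 6 dB" rule of thumb for a 24-bit converter.
def effective_bits(level_dbfs: float, converter_bits: int = 24) -> float:
    # level_dbfs is zero or negative (dB below full scale)
    return converter_bits + level_dbfs / 6.0

print(effective_bits(0))     # 24.0 bits at full scale
print(effective_bits(-18))   # 21.0 bits at line level
print(effective_bits(-60))   # 14.0 bits at a very low level
```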
Fader position as a part of Gain Staging
Another often overlooked detail on the way to a solid gain structure is the position of the faders. First of all, it doesn’t matter whether we’re working with an analog mixer, a digital mixer, or a DAW. Faders have a resolution, and this is not linear.
The resolution around the 0dB mark is much higher than in the lower part of the fader path. To mix as sensitively as possible, the fader position should be near the 0dB mark. If we create a new project in a DAW, the faders in the DAW project are in the 0dB position by default. This is how most DAWs handle it. Now we can finally turn up the mic preamps and set the appropriate recording level. We recommend leveling all signals in the digital domain to -18dBFS RMS / -9dBFS peak. In other words, to the line-level already invoked at the beginning, because that’s what digital mixers and DAWs are designed for. Since we have the channel faders close to the 0 dB mark, the question now is: How do I lower signals that are too loud in the mix?
You have several ways to do this and many of them are simply not recommended. For example, you could turn down the gain of the mic preamp. But then we’re no longer feeding line level to the DAW. With an analog mixer, this results in a poor “signal to noise” ratio. A digital mixer with the same approach has the problem that all sends (e.g. monitor mixes for the musicians, insert points) also leave the line-level sweet spot. Ok, let’s just pull down the channel fader! But then we leave the area for the best resolution, where we can adjust the levels most sensitively. This may “only” be uncomfortable in the studio, but at a large live event with a PA to match, it quickly becomes a real problem.
This is where working in the fader sweet spot is essential. Making the lead vocal specifically two dB louder via the fader is almost impossible if we start from a fader setting of, let’s say, -50dB. If we move the fader up just a few millimeters, we quickly reach -40dB, which is an enormous jump in volume. The solution to this problem: we prefer to use audio subgroups for rough volume balancing. If these are not available, we fall back on DCA or VCA groups. The input signals are assigned to the subgroups (or DCAs or VCAs) accordingly: for example, one group for drums, one for cymbals, one for vocals, and one each for guitars, keyboards, and bass. With the help of the groups you can set a rough balance between the instruments and vocal signals and use the channel faders for small volume corrections.
Special tip: It makes sense to route effect returns to the corresponding groups instead of to the master – the drum reverb to the drum group, the vocal reverb to the vocal group. If you have to correct the group volume, the effect portion is automatically adjusted along with it and the ratio of dry signal to effect always remains the same.
Gain Staging in the DAW – the hunt for line level
As a first step, we need to clear up a misunderstanding: “gain” and “volume” are not members of the same family. Adjusting gain is not the same as adjusting volume. In simple words, volume is the level after processing, while gain is the level before processing. Or even simpler: gain is input level, volume is output level!
The next important step for clean gain staging is to determine what kind of meter display my digital mixer or DAW is even working with. Where exactly is line level on my meter display?
Many digital consoles and DAWs have hybrid metering, like the metering in Studio One V5, which we’ll use as an example. Its scaling goes from -72dB to +10dB, and from -80dB to +6dB on the sum output.
In terms of its scaling, Studio One’s metering sits between an analog dBu meter and a digital dBFS meter. This is similar in many DAWs. It is important to know whether the meter display shows RMS (average level) or peak level. If we see only peak metering and level to line level (-18dBFS), the level ends up too low, especially for very dynamic source material with fast transients like a snare drum. The greater the dynamic range of a track, the higher the peak values and the lower the average value. Therefore, drum tracks can quickly light up the clip indicator of a peak meter while producing comparatively little deflection on an RMS meter.
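To illustrate the difference, here is a small Python sketch (using NumPy) that measures a synthetic, snare-like noise burst with both a peak and an RMS reading; the signal is purely artificial:

```python
# Minimal sketch: why a transient-rich signal reads high on a peak meter
# but much lower on an RMS meter.
import numpy as np

rate = 44100
t = np.arange(rate) / rate
snare_like = np.exp(-t * 40) * np.random.uniform(-1, 1, rate) * 0.5  # decaying noise burst

peak_dbfs = 20 * np.log10(np.max(np.abs(snare_like)))
rms_dbfs = 20 * np.log10(np.sqrt(np.mean(snare_like ** 2)))

print(f"peak: {peak_dbfs:5.1f} dBFS")   # close to -6 dBFS
print(f"rms:  {rms_dbfs:5.1f} dBFS")    # a much lower average level
```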
In Studio One, however, we get all the information we need. The blue Studio One meter represents peak metering, while the white line in the display always shows the RMS average level. Also important is where the metering is tapped (the tap point). For leveling, the metering should show the pre-fader level, especially if you have already placed insert plug-ins or analog devices in the channel; these can significantly influence post-fader metering.
Keyword: Plugins
You need to drive digital emulations with a suitable level. There are still some fixed-point plug-ins and emulations of old hardware classics on the market that don’t like high input levels. It is sometimes difficult to see which metering the plugins themselves use and where line level is located. A screenshot illustrates the dilemma.
The BSS DRP402 compressor clearly has a dBFS meter. Thus, the BSS compressor has line-level reference on its metering at -20 dBFS. The bx townhouse compressor in the screenshot is fed with the same input signal as the BSS DRP402 but shows completely different metering.
Here you may assume that, since it is an analog emulation, its meter display behaves more like a VU meter.
Fire Department Operation
It is not uncommon to find yourself in the studio with recordings that just need to be mixed. Experienced sound engineers will agree with me: many recordings by less experienced musicians or junior engineers are simply leveled too hot. So what can you do to bring things back to a reasonable level? Digitally, this is not a big problem, at least if the tracks are free of digital clipping. Turning the tracks down does not change the sound, and we do not have to worry about noise floor problems in the digital domain either. In any DAW, you can reduce the waveform (amplitude) to the desired level.
Alternatively, every DAW offers a Trim plug-in that you can place in the first insert slot to lower the level there.
The same plugin can also be used on busses or on the master if the summed tracks prove to be too loud. We do not use the virtual faders of the DAW mixer for this task, because they sit post-insert and, as we already know, only change the volume but not the gain of the track.
Analog gear in combination with a DAW
The combination of analog audio gear and a DAW has a special charm. The fast, haptic access and the independent sound of analog processors make up the appeal of a hybrid setup. You can use analog gear as a front end (mic preamps) or as insert effects (e.g. dynamics). If you want to connect an external preamp to your audio interface, you should use a line input to bypass the interface’s built-in mic preamp.
In insert mode, we have to accept an AD/DA conversion for purely analog gear to get into and out of the DAW, so the quality of the AD/DA converters matters. Using the full 24-bit range up to full scale corresponds to a theoretical dynamic range of 144dB, which overtaxes even a high-end converter. Therefore, you should drive your analog gear in the insert at line level to give the digital converters enough headroom, especially if you plan to boost the signal with the analog gear.
This simply requires headroom. If, on the other hand, you only make subtractive EQ settings, you can also work with higher send and return levels. Now we only need to adjust the level ratios for the insert operation. Several things need our attention.
It depends on the entire signal chain
The level ratios inside a DAW are constant and always traceable. When integrating analog gear, however, we have to look at the entire signal flow and sometimes readjust it. We start with the send level from the DAW. Again, I recommend sending the send signal at line level to an output of the audio interface.
The next step requires a small amount of detective work. In the technical specifications of the audio interface, we look up the reference level of the outputs and have to bring it in line with the input of the analog gear we want to loop into the DAW. If the interface has balanced XLR outputs, we connect them to balanced XLR inputs of the analog insert unit. But what do we do with unbalanced devices that have a reference level of -10dBV? Many audio interfaces offer a switch for their line inputs and outputs between +4dBu and -10dBV, which you should use in this case. The technical specifications of the audio interface also tell you which analog level corresponds to 0dBFS; in some cases, this can be switched as well.
On an RME Fireface 802, for example, you can switch between +19dBu, +13dBu and +2dBV. It is important to know that many elysia products can handle a maximum level of about +20dBu. This level applies to the entire signal chain from the interface output to the analog device and from its output back to the interface. Ideally, a line-level send signal with an identical return level will make its way back into the DAW. In addition, the analog unit itself deserves attention: make sure that neither its input nor its output distorts, otherwise these distortions will be passed on to the DAW unaltered.
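As a rough headroom budget, here is a small Python sketch; the +19dBu alignment and the +20dBu maximum are just the example figures from above, so check the specifications of your own interface and outboard gear:

```python
# Minimal sketch: headroom budget for an analog insert, assuming the interface
# is aligned so that 0 dBFS corresponds to +19 dBu and the analog unit clips
# at roughly +20 dBu (both values are the examples from the text).
DBU_AT_0DBFS = 19.0       # interface alignment (assumption)
UNIT_MAX_DBU = 20.0       # approximate maximum level of the analog unit (assumption)

send_dbfs = -18.0                          # line-level send from the DAW
send_dbu = DBU_AT_0DBFS + send_dbfs        # = +1 dBu at the interface output
analog_headroom = UNIT_MAX_DBU - send_dbu  # ~ 19 dB left for boosts and make-up gain

print(send_dbu, analog_headroom)
```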
How the insert levels behave also depends a bit on the type of analog gear. An EQ looped in to moderately boost or cut frequencies is less critical than a transient shaper (elysia nvelope), which, depending on the setting, can generate peaks that RMS metering can hardly detect. In the worst case, this creates distortion that is audible but, without peak metering, not visible. Another classic operating mistake is setting a compressor’s make-up gain too high.
In the worst case, both the output of the compressor itself and the return input of the sound card are overdriven. The levels at all four points of an insert (input and output of the analog device plus output and input of the interface) should be kept under close observation. But we are not on our own: the DAW’s on-board tools help with insert operation, and we will look at them in conclusion.
Use Insert-Plugins!
When integrating analog hardware, you should definitely use insert plugins, which almost every DAW provides. Reaper features the “ReaInsert” plugin, ProTools comes with “Insert”, and Studio One provides the “Pipeline XT” plugin. The wiring for this application is quite simple.
We connect a line output of our audio interface to the input of our hardware. We connect the output of our hardware to a free line input of our interface. We select the input and output of our interface as a source in our insert plugin (see Pipeline XT screenshot) and have established the connection.
A classic “send & return” connection. Depending on the buffer size setting, the AD/DA conversion causes a more or less pronounced propagation delay, which can be problematic, especially when we process signals in parallel. What does this mean? Let’s say we split our snare drum into two channels in the DAW. The first channel stays in the DAW and is only handled with a latency-free gate plugin; the second channel goes out of the DAW via Pipeline XT, into an elysia mpressor, and from there back into the DAW.
Due to the AD/DA conversion, the second snare track is time-delayed compared to the first track. For both snare tracks to play back time-aligned, we need latency compensation. You could do this manually by shifting the first snare track, or you could simply click the “Auto” button in Pipeline XT for automatic latency compensation, which is much faster and more precise. The advantage is that automatic delay compensation ensures our insert signal stays phase-coherent with the other tracks of the project. With this tool, you can also easily adjust the level of the external hardware: if distortion already occurs here, you can reduce the send level and raise the return level accordingly.
That is also the last tip in this blog post. The question of correct levels should now be settled, as well as all the relevant side issues that significantly affect gain structure and a hybrid way of working. For all the theory and number mysticism: it does not come down to dB-exact adjustment. It is quite sufficient to stick roughly to the recommendations. This guarantees reasonable levels that will make your mixing work much easier and faster. Happy mixing!
Here’s a great Video from RME Audio about Matching Analog and Digital Levels.
Feel free to discuss, leave a comment below or share this blog post in your social media channels.
Yours, Ruben
Increased signal propagation time and annoying latency are uninvited permanent guests in every recording studio and at live events. This blog post shows you how to avoid audio latency problems and optimize your workflow.
As you surely know, the name elysia is a synonym for the finest analog audio hardware. As musicians, we also know and appreciate the advantages of modern digital audio technology. Mix scenes and DAW projects can be saved, total recall is mandatory and monstrous copper multicores are replaced by slim network cables. A maximally flexible signal flow via network protocols such as DANTE and AVB allows the simple setup of complex systems. Digital audio makes everything better? That would be nice, but reality shows an ambivalent balance. If you look and listen closely, the digital domain sometimes causes problems that are not even present in the analog world. Want an example?
From the depths of the bits & bytes arises a merciless adversary that can sabotage your recordings or live gigs with plenty of phase and comb filter problems. But with the right settings, you are not powerless against the annoying latencies in digital audio systems.
What is audio latency and why doesn’t it occur in analog setups?
Latency occurs with every digital conversion (AD or DA) and is noticeable in audio systems as signal propagation time. In the analog domain the situation is clear: the signal propagation time from the input to the output of an analog mixer is practically zero.
In the analog days, latencies only existed where external synths or samplers were integrated via MIDI. In practice, this was not a problem, since the entire monitoring chain remained analog and thus no latency was audible. With digital mixing consoles or audio interfaces, on the other hand, there is always a delay between input and output.
Latency can have different causes, for example the different signal propagation times of different converter types. Depending on type and design, a converter needs more or less time to process the audio signal. For this reason, mixing consoles and recording interfaces always use identical converter types within the same modules (e.g. input channels), so that all modules share the same signal propagation time. As we will see, latency within a digital mixer or recording setup is not a fixed quantity.
Signal propagation time and round trip latency
Latency in digital audio systems is specified either in samples or in milliseconds. A DAW with a buffer size of 512 samples generates a delay of at least 11.6 milliseconds (0.0116 s) if we work with a sampling rate of 44.1 kHz. The calculation is simple: we divide 512 samples by 44.1 (44,100 samples per second, i.e. 44.1 samples per millisecond) and get 11.6 milliseconds (1 ms = 1/1000 s).
If we keep the buffer size constant and work with a higher sample rate, the latency decreases. If we run our DAW at 96 kHz instead of 44.1 kHz, the latency is roughly cut in half. The higher the sample rate, the lower the latency. Doesn’t it then make sense to always work with the highest possible sample rate to elegantly work around latency problems? Clear answer: no! Running audio systems at 96 or even 192 kHz is a big challenge for the computer CPU. The higher sample rate makes the CPU rapidly break out in a sweat, which is why a very potent CPU is imperative for a high channel count. This is one reason why many entry-level audio interfaces often only work with a sample rate of 44.1 or 48 kHz.
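For the record, the arithmetic behind these numbers is a one-liner. A minimal sketch (the function name and example values are only for illustration):

```python
def buffer_latency_ms(buffer_size: int, sample_rate: int) -> float:
    """One buffer of audio expressed in milliseconds."""
    return buffer_size / sample_rate * 1000.0

print(buffer_latency_ms(512, 44100))  # ~11.6 ms, the initial example
print(buffer_latency_ms(512, 96000))  # ~5.3 ms: higher sample rate, lower latency
print(buffer_latency_ms(128, 44100))  # ~2.9 ms: smaller buffer, lower latency
```

Keep in mind that this is only the duration of a single buffer; the actual round trip through an interface is at least two of these plus the converter delays.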
Typically, mixer latency refers to the time it takes for a signal to travel from an analog input channel to the analog summing output. This round trip is abbreviated “RTL”, short for “Round Trip Latency”. The actual RTL of an audio interface depends on many factors: the type of interface (USB, Thunderbolt, AVB or DANTE), the performance of the recording computer, the operating system used, the settings of the sound card/audio interface and of the recording project (sample rate, number of audio & MIDI tracks, plugin load), and the signal delays of the converters used. It is therefore not easy to compare the real-world latency performance of different audio interfaces.
It depends on the individual case!
A high total latency in a DAW does not necessarily have to be problematic; much depends on your workflow. Even with the buffer size of 512 samples from our initial example, we can record without any problems: the DAW plays the backing tracks to which we record overdubs, and latency does not play a role here. If you work in a studio, it only becomes critical if the DAW is also used to feed headphone mixes, or if you want to play VST instruments or VST guitar plug-ins and record them to the hard disk. In this case, too high a latency makes itself felt as a delayed headphone mix and an indirect playing feel.
If that is the case, you will have to adjust the latency of your DAW downwards. There is no rule of thumb as to when latency has a negative effect on the playing feel or the listening situation. Every musician reacts individually. Some can cope with an offset of ten milliseconds, while others already feel uncomfortable at 3 or 4 milliseconds.
The Trip
Sound travels 343 meters (1,125 ft) in one second, which corresponds to 34.3 centimeters (1.125 ft) per millisecond. Said ten milliseconds therefore correspond to a distance of 3.43 meters (11.25 ft). Do you still remember the last club gig? You’re standing at the edge of the stage rocking with your guitar in your hand, while the guitar amp is enthroned three to four meters (10 to 13 ft) behind you. This corresponds to a signal delay of roughly 9 to 12 ms. So for most users, a buffer size between 64 and 128 samples should be low enough to play VST instruments or create headphone mixes directly in the DAW.
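To put DAW latencies into this acoustic perspective, the same back-of-the-envelope conversion works in code as well; a small sketch with rounded numbers:

```python
SPEED_OF_SOUND_M_PER_S = 343.0

def latency_to_distance_m(latency_ms: float) -> float:
    """Acoustic distance that corresponds to a given delay."""
    return SPEED_OF_SOUND_M_PER_S * latency_ms / 1000.0

print(latency_to_distance_m(10.0))  # 3.43 m, the guitar amp a few steps behind you
print(latency_to_distance_m(2.9))   # ~1 m, roughly a 128-sample buffer at 44.1 kHz
```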
Unless you’re using plug-ins that cause high latency themselves! Most modern DAWs have automatic latency compensation that aligns all channels and busses to the plug-in with the highest latency. This has the advantage that all channels and busses run phase-coherently and there are no audio artifacts (comb filter effects). The disadvantage is the high overall latency.
Some plug-ins, such as convolution reverbs or linear-phase EQs, have significantly higher latencies. If these sit in the monitoring path, the effect is immediately audible even with a small buffer size. Not all DAWs show plug-in latencies, and plug-in manufacturers tend to keep a low profile on this point.
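Conceptually, automatic plug-in delay compensation is nothing more than delaying every other channel until it matches the slowest plug-in in the project. A minimal sketch of that idea follows; the track names and latency figures are hypothetical and this is not how any specific DAW implements it internally:

```python
def compensation_delays(plugin_latencies: dict[str, int]) -> dict[str, int]:
    """Delay each track by its difference to the slowest plug-in,
    keeping all tracks phase-coherent at the cost of total latency."""
    slowest = max(plugin_latencies.values(), default=0)
    return {track: slowest - own for track, own in plugin_latencies.items()}

# Hypothetical project: one linear-phase EQ makes everything else wait.
print(compensation_delays({"vocals": 64, "drums": 0, "mix_bus_eq": 4096}))
# {'vocals': 4032, 'drums': 4096, 'mix_bus_eq': 0}
```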
First Aid
We have already learned about two methods of dealing directly with annoying latency. Another is the direct hardware monitoring that many audio interfaces provide.
RME audio interfaces, for example, come with the Total Mix software, which allows low-latency monitoring with on-board tools, depending on the interface even with EQ, dynamics and reverb. Instead of monitoring via the DAW or the monitoring hardware of the interface, you can alternatively send the DAW project sum or stems into an analog mixer and monitor the recording mic together with the DAW signals in the analog domain with zero latency. If you are working exclusively in the DAW, it helps to increase the sample rate and/or decrease the buffer size. Both of these put a significant load on the computer CPU.
RME Total Mix Low Latency Monitoring
Depending on the size of the DAW project and the installed CPU, this can lead to bottlenecks. If no other computer with more processing power is available, it can help to replace CPU-hungry plug-ins in the DAW project or to set them to bypass. Alternatively, you can render plug-ins to audio files or freeze tracks.
The buffer size essentially determines the latency of a DAW
Almost every DAW offers a function to render intensive plug-ins to reduce the load on the CPU
If musicians stand further away from their monitor, the monitor signal is also slightly delayed by the sound propagation time
Good old days
Do modern problems require modern solutions? Sometimes a look back can help.
It is not always advantageous to record everything flat and without processing; mix decisions about how a recorded track should sound in the end are merely postponed into the future. Why not commit to a sound like in the analog days and record it directly to the hard disk? If you’re afraid you might record a guitar sound that turns out to be a problem child later in the mixdown, you can record an additional clean DI track for later re-amping.
Keyboards and synthesizers can be played live and recorded as an audio track, which also circumvents the latency problem. Why not record signals with processing during tracking? This speeds up any production, and if analog products like ours are used, you don’t have to worry about latency.
If you are recording vocals, try compressing the signal moderately during the recording with a good compressor like the mpressor, or try it with our elysia skulpter. The elysia skulpter adds some nice and practical sound-shaping functions like filter, saturation and compression to the classic preamp, so you get a complete channel strip. If tracks are already recorded with analog processing, this approach also saves some CPU power during mixing. Especially with many vocal overdub tracks, an unnecessarily large number of plug-ins would otherwise be required, which in turn forces a larger buffer size and consequently increases latency.
What are your experiences with audio latencies in different environments? Do you have them under control? I’m looking forward to your comments.