
Quality in Music Production

Estimated reading time: 15 minutes

This blog post starts with seemingly simple questions: Was the music of past decades of higher quality than what is produced today? And what constitutes high-quality music in the first place? The fact is, quality can be looked at from many angles. It is an attempt at categorization that can be applied to numerous aspects of music and its creation, and it is not limited to the duo of “hardware & craftsmanship”. This post takes a look at the concept of quality in terms of sound engineering, composition and music production, and asks what characterizes high-quality music and how it can be produced.

What is quality?

The term “quality” can be applied to many aspects of life. To classify quality, we like to reach for adjectives such as “good”, “bad” or “mediocre”. While evaluating products against standards (e.g. DIN norms) is largely objective and therefore comparable, the concept of quality becomes elusive as soon as we try to use it to judge subjective qualities such as “beauty”. Welcome to the dilemma of wanting to evaluate the quality of music. We will try anyway.

Let’s approach the topic from the hardware side. Who hasn’t heard the common opinion that devices from past decades consistently meet a higher quality standard? The general quality of products from the 80s does seem higher than that of their counterparts today. Without wanting to generalize, many devices from past decades benefited from comparatively long development times, robust components and careful selection. Back then, the longest possible service life was a key goal of product development.

And today?

The sheer breadth of short-lived, inexpensive products available today simply did not exist for consumers a few decades ago.

As production has become ever faster in recent decades, and in part also worse, a rethinking is slowly taking place. Low-quality products in large quantities are neither resource-saving nor sustainable. Cheap plastic products are used only briefly before spending long years in our oceans as waste, awaiting their transformation into microplastics. The call for new quality standards and sustainability can no longer be ignored. We believe this should also apply to music and music production.

Each of us knows “quality music”, and it takes no DIN norm to recognize it. Musical quality work carries the stamp of “timelessness”. Elaborately produced, sophisticatedly composed music has a higher and longer-lasting entertainment value than plastic-like utility music that merely pays homage to the zeitgeist. Why there is such a mass of uninspired utility music has a great deal to do with the medium through which this music is preferably consumed.

Keep it short!

The current focus is on streaming services, whose very structure disadvantages longer, lavishly produced titles. Simply put: longer titles are not “worth it” financially! The additional effort is simply not remunerated by streaming portals. On Spotify, for example, a track must be played for at least thirty seconds for the stream to count and be paid out; the artist receives nothing for anything shorter, and a seven-minute song earns no more than a thirty-second one. Not only the band “The Pocket Gods” finds this unfair. The band counters this unspeakable structure with an extremely unconventional idea: their latest album contains exactly one thousand songs, each thirty to thirty-six seconds long.

Read the article here: https://inews.co.uk/culture/music/spotify-payments-pocket-gods-protest-album-1000-songs-30-seconds-long-1442024
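To make the incentive concrete, here is a minimal sketch of the payout arithmetic. The per-stream rate of $0.003 is an assumption for illustration only; Spotify’s real payout model is more complex and rates vary:

```python
# Illustrative only: the flat per-stream rate is an assumption, not Spotify's real figure.
PAYOUT_PER_STREAM = 0.003  # assumed rate in USD

def listening_hour_revenue(track_length_s: int, total_listening_s: int = 3600) -> float:
    """Revenue from one listener playing same-length tracks back to back.

    Only plays of at least 30 seconds count as a monetized stream,
    and every qualifying stream pays the same regardless of track length.
    """
    if track_length_s < 30:
        return 0.0  # tracks under 30 s never qualify
    streams = total_listening_s // track_length_s
    return streams * PAYOUT_PER_STREAM

print(listening_hour_revenue(420))  # one 7-minute epic: 8 streams   -> $0.024
print(listening_hour_revenue(31))   # 31-second snippets: 116 streams -> $0.348
```

Under these assumptions, an hour of listening pays more than ten times as much when chopped into thirty-second snippets, which is exactly the economics The Pocket Gods are protesting.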

On the one hand, this is clever and creative; on the other, it clearly exposes the current dilemma of the streaming medium. Samsung recently published an interesting study, according to which the average attention span has dropped from twelve to eight seconds since 2000. By that measure, the first eight seconds decide whether a song will be a hit or not.

Don’t forget:

If a song is skipped within the first 30 seconds, there is no streaming revenue for the artist. This has a direct impact on current “utility music” and its composition. Songs today often no longer have an intro at all, but start directly with the chorus. In the AOR classic “Don’t Stop Believin’” by Journey, the listener has to wait one minute and seven seconds until the chorus starts for the first time. By today’s standards, an incredibly long time.

Video: Journey – Don’t Stop Believin’ (Official Audio) (https://www.youtube-nocookie.com/embed/1k8craCGpgs)

In addition, at 4 minutes 10 seconds, the song is unusually long for a hit.

The fact is: average song length is decreasing, and elaborate bridge sections or lengthy departures from familiar song structures are found more and more rarely. One forecast even predicts that by the end of the decade the average song length will be two minutes. None of this necessarily allows conclusions about the actual quality of the music. What can be said, however, is that existing structures and developments are greatly narrowing the creative playing field for musicians and composers, and the incentive to publish high-quality music is steadily shrinking.

The relevance of the medium

The current “skipping culture” is essentially a digital phenomenon, encouraged by streaming services with their playlists and gapless playback. Classic media such as tape or vinyl demand a markedly different approach, beginning with the selection of a track. On tape, you have to fast-forward to a specific song; with a record, reaching for the sleeve and lowering the needle is a deliberate act.

This deliberate act works as a kind of preconditioning: you engage with the auditory event before the song even plays. Perhaps this is one building block for perceiving music more consciously again and, above all, consuming it purposefully. The medium influences the consumer. What many music lovers may not realize is that the recording and playback medium itself has a major influence on the type of composition, the choice of musicians, their virtuosity, and the length and complexity of musical pieces. To understand this better, let’s look at the history of sound recording.

The history of sound recordings

Before sound recording existed, music was performed exclusively live. A musician had to master his craft and his instrument to practice his profession. This remained true in the early days of recording technology: recordings were made in single takes, and only professional musicians could withstand that pressure. In the early days of the record, the performance was cut directly into the medium.

The first recording devices were purely acoustic-mechanical and worked entirely without electricity. How did that work? The sound was captured by a horn, and the vibrations were transferred by a diaphragm onto a disc or cylinder. The only energy available was the sound energy itself, which alone had to carve the information into the carrier medium. Microphones did not yet exist, so bands and ensembles had to be arranged quite unnaturally around the recording horn: loud instruments such as brass stood further away, while strings and singers set up close to the opening. If anyone made a mistake, everything had to be recorded again. Clearly, all of this shaped the nature of the music and the way it was played.

The pioneers

Images: Scott’s phonautograph; the tape recorder of the Volta Associates (Bell & Tainter)

The possibility of recording speech or music is still comparatively young. In 1857, the Frenchman Édouard-Léon Scott de Martinville invented the “phonautograph,” a device for recording sound.

It was not until 1877 that Thomas A. Edison developed his “phonograph,” which was actually designed for dictating messages in everyday office life. The advantage of the phonograph: The device could not only record sounds but also play them back.

In 1884, Edison’s concept was further developed by Charles Sumner Tainter and Chichester Alexander Bell. They called their recording device “Graphophone” and received the first patent for it on May 4, 1886.

Image: Berliner’s gramophone

Emil Berliner, however, is considered the inventor of the well-known “gramophone”. Berliner presented it to the public in May 1888, which is also regarded as the birth of the record. Shellac remained the material of choice for records until it was finally replaced by PVC (vinyl) at the end of the 1940s.

With the introduction of electrical sound recording in the mid-1920s, the limitations of acoustic-mechanical recording were overcome. Acoustic vibrations were converted into a modulated current that electromagnetically drove the record cutter, which could carve the sound into the carrier material independently of the sound’s own energy. The first electrical recordings were made with a microphone and were therefore always mono.

Georg Neumann developed his condenser microphone in the late 1920s, and with this type of microphone, sound quality improved dramatically. Neumann microphones are still part of the professional studio standard today. Once the rotational speed of record players dropped from 78 rpm to 33⅓ rpm and vinyl replaced shellac as the carrier material, both the sound quality and the running time of the medium improved. The first stereo record was introduced in 1957, and stereophony soon became the standard.


On Tape

In parallel with the record player, work was also being done in Germany on tape machines. Fritz Pfleumer patented magnetic tape in Dresden in 1928. A few years later he sold the patent to AEG (“Allgemeine Elektricitäts-Gesellschaft”), where Eduard Schüller developed the first tape recorder. In 1935, the forerunner of the chemical company BASF produced the first usable magnetic tape for it.

Tape as a carrier medium revolutionized the recording and radio industry. For the first time, it was possible to make high-quality recordings “off the grid”, i.e. outside the studio (e.g. at live events). In addition, tape offered the invaluable advantage of being editable.

It is therefore not surprising that from the end of the 1940s, tape became the recording medium of choice worldwide. The next evolutionary step was the multitrack tape machine. Multitracking made overdubs possible for the first time and thus permanently changed the way music was recorded. Musicians no longer had to perform a song simultaneously as a collective; from this point on, decentralized production processes became possible.

Digital revolution and strange hairstyles

In the 1980s, the first digital revolution in music production took place. It affected the recording medium as well as the instruments and sound sources. The Fairlight CMI (Computer Musical Instrument) is considered one of the most important tools of the era: the first digital synthesizer with sampling technology. The first units came onto the market in 1979, and among the first customers were artists like Peter Gabriel and Stevie Wonder.

Due to its price, the Fairlight remained reserved for a select few musicians. Over time, however, more affordable synthesizers, sequencers and drum machines increasingly found their way into studios. With MIDI, the first universal communication protocol for this equipment was introduced. Together with multitrack recording, which had become commonplace, the new gear massively changed music and the way it was composed and recorded. The musician faded into the background, with drummers in particular facing digital competition: beats were no longer played but programmed. The 80s was also the decade in which the CD, as a digital medium, replaced the sonically inferior compact cassette as the standard.
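Part of MIDI’s success lies in its sheer simplicity: a note event is just three bytes on a serial wire. As a minimal sketch (not a full MIDI implementation), here is how those bytes are assembled:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw three-byte MIDI note-on message.

    The status byte 0x90 carries the message type in its high nibble
    and the channel (0-15) in its low nibble; the two data bytes
    (note number and velocity) are limited to 7 bits each.
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Build the matching note-off message (status byte 0x80)."""
    return bytes([0x80 | channel, note, 0])

# Middle C (note 60) at moderate velocity on MIDI channel 1 (index 0):
print(note_on(0, 60, 100).hex())   # -> 903c64
print(note_off(0, 60).hex())       # -> 803c00
```

Because every synthesizer, sequencer and drum machine agreed on these few bytes, gear from different manufacturers could suddenly talk to each other.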


The 90s

Digital development did not stop in the 90s. ROMplers delivered sampled sounds of the most common instruments (strings, winds, etc.) in more or less good quality for a manageable investment, putting them in direct competition with real musicians. The quality of these sounds kept improving throughout the 90s, while the first DAWs gradually replaced the analog multitrack tape machine. The DAW also offered far more possibilities for editing recorded tracks after the fact. Out of these new technical possibilities, new musical styles (techno, hip-hop, house) emerged that knew how to exploit them skillfully.


The 2000s

Since the 2000s at the latest, the DAW has been the center of music production; the classic combination of “mixer & tape machine” had had its day. At the same time came Napster, the first “peer-to-peer” music-sharing platform, which made it possible to send compressed audio files in MP3 format over the Internet. What began in 1999 eventually led to the CD losing its place as the most popular digital medium. VSTi (virtual instruments) further reduced the need for real musicians, a development that essentially continues to this day. The status quo is that practically any instrument can be represented digitally, and many productions sound amazingly similar because the same samples and sounds are used.

Back to the Future!

Today, music production is the exact opposite of its early days. Back then, musicians and entire orchestras gathered in front of a horn or a single microphone and played their pieces live, straight through, without overdubs, subsequent editing or mastering. Today, many recording processes are automated, and the “human” factor is no longer necessarily the focus. We have lost much of what was commonplace in the early days of sound engineering: minimal technology, maximum use of the musicians and their interplay. Is a return to the old ways perhaps a path to more quality in music? Do we need quality in music at all?

The Big Picture

Perhaps a comparison with the film industry helps to clarify whether sound quality is relevant at all. In film and TV productions, ensuring dialog quality is essential, for several reasons. For one thing, sound reproduction on TV sets is not standardized; moreover, the common flat-screen TV hardly offers enough space for reasonably sized drivers. In other words, television sound is often problematic. On movie sets, too, maximum sound quality is not always a priority, and as a result some dialog is difficult to understand, which spoils the enjoyment of a movie. The quality of the sound matters for the overall experience.

This also applies to radio broadcasts. Here, a maximally intelligible, clear signal is transmitted, not least because of the technical limitations of FM, with the goal of keeping the listener tuned to the station as long as possible. The Orban Optimod 8000, introduced in 1975, was the first such processor for FM radio stations and was designed to guarantee consistently good sound. Optimods, now fully digital, operate in radio stations to this day. An Optimod usually includes at least a compressor, an equalizer, an enhancer, AGC (Automatic Gain Control) and a multiband limiter: basically an automated mastering chain.
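To make the idea of such an automated chain tangible, here is a deliberately naive sketch: a single-band AGC followed by a hard limiter. All numbers are assumptions for illustration; a real broadcast processor like the Optimod uses far more sophisticated multiband processing:

```python
import numpy as np

def agc(x: np.ndarray, target_rms: float = 0.2) -> np.ndarray:
    """Naive automatic gain control: scale the block toward a target RMS level."""
    rms = np.sqrt(np.mean(x**2)) + 1e-12  # epsilon avoids division by zero on silence
    return x * (target_rms / rms)

def limiter(x: np.ndarray, ceiling: float = 0.95) -> np.ndarray:
    """Hard limiter: clamp samples so no peak exceeds the ceiling."""
    return np.clip(x, -ceiling, ceiling)

def broadcast_chain(x: np.ndarray) -> np.ndarray:
    """Toy version of the 'automated mastering chain': AGC first, then limiting.

    A real processor would add multiband compression, EQ and enhancement;
    this only shows the order of operations.
    """
    return limiter(agc(x))

# One second of a quiet 440 Hz tone at 48 kHz, brought up to a consistent level:
t = np.linspace(0, 1, 48000, endpoint=False)
quiet = 0.05 * np.sin(2 * np.pi * 440 * t)
print(np.max(np.abs(broadcast_chain(quiet))))  # ~0.28 after gain, safely under the ceiling
```

The design point is the ordering: the AGC evens out long-term level differences, while the limiter catches whatever peaks remain, so the station always sounds equally loud.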

FM Mastering

The approach that a piece of music should sound as good as possible on a wide range of playback systems is familiar to us from mastering. However, mastering geared exclusively to hi-fi systems no longer works with today’s variety of playback devices. An ideal song should therefore offer not only the best possible sound quality but also an interesting composition and lively individual tracks, which in combination result in an original title.

This liveliness and depth can be created organically with the help of real instruments and musicians, offering the human ear more depth and stimulation than automated music. Programmed songs can achieve the same depth, but only if they are programmed with the attention to detail that a collective of musicians would bring to a recording. Breathing life into ready-made sound building blocks is no less difficult than mastering an instrument with virtuosity, which is why many standard pop songs simply lack finesse.

Where is the way out?

There is no magic formula for producing high-quality music, but several approaches can pave the way. One suggestion: combine the best of all worlds! Current digital recording technology offers so many advantages in storage and sound manipulation over the familiar duo of “tape machine & analog mixer” that you should exploit these new possibilities to the fullest.

The visual editability of the arrangement and the audio waveforms offers additional creative potential waiting to be used. This potential is maximized when you let real musicians do their job in front of a professional front end of good microphones and preamps. Art is made in front of the microphone, and this magic must be captured accordingly. Especially in a collective of several musicians, interesting ideas can arise spontaneously.

That is more exciting than clicking through countless sample libraries in search of an individual sound. And in the last few years, these trends have indeed begun to shift: it is very much in the spirit of the times that analog synthesizers and drum machines are increasingly being used again instead of VST instruments. Analog synthesizers are experiencing a great revival.

What’s exciting about this new old hardware is the direct access to the sound structure and the haptics that go with it: music you can touch. Of course, analog hardware represents a larger investment than VSTi or other instrument plugins, and here we come full circle to the beginning of this article. More quality almost always means higher cost. In the end, you are almost always rewarded with a better product (song).

With a little luck, such a song will pay for itself twice over through its longevity. Shifting the focus from the purely digital domain to analog sound generation and recording technology, while relying on the creative input of real musicians, combines the best of both worlds and creates music that conveys something individual and exciting to the listener. The necessary investment should therefore flow equally into sound engineering and musical assets. This combination creates music that will still be relevant many years from now, and relevance, in this case, is synonymous with high quality.

What are your thoughts on this subject? If you liked this blog post, share it with your friends.

Thanks for reading!

Your Ruben Tilgner