After understanding the fundamentals of your DAW, choosing hardware devices to complete your setup, going through the basics of music theory, and studying the structure of a song, you are ready for the final process. Often described as the last step in a long chain of production processes, mixing still comes before mastering. Even though knowing how to mix music is an important step in music production, mixing itself should never be seen as more than the final touches on an already completed piece of work.

Mixing is not a magical step that will erase mistakes made earlier. A good track is one where the main elements (bass, drums, vocals, lead) intricately fit together; in that respect, a reliable approach is to program each instrument individually, one after the other. Understanding how to mix music can definitely improve your track in some regards, but in terms of final adjustments, it should be seen as nothing more than a repositioning tool for instruments.

This article contains excerpts of Rick Snoman’s “The Dance Music Manual (3rd Edition)”.

Mixing Theory | The Fundamentals

Initial mixing considerations  

Before considering any theory, it is important to note that mixing is an entirely creative enterprise. This means there are no right or wrong ways to approach it, and no rules set in stone on how to mix music. Nonetheless, there are common practices and processes that work most of the time.

Mixing should modify and reposition instruments while retaining the producer's original style. Hence, the ultimate goal of mixing is transparency between all instruments, with each sound clearly heard and occupying its own space within the soundstage.

Levels and dominant frequencies  

To fully grasp how to mix music, you need a solid understanding of the soundstage, of monitoring environments, and of the limitations of our own hearing. Our hearing is fallible: we perceive different frequencies at different volumes, and the volume at which we listen to music determines which frequencies dominate.

For example, if we listen to music at conversational level, our hearing is much more receptive to mid-range frequencies, and any higher or lower frequencies must be physically louder to be perceived at the same volume. Conversely, at higher playback volumes, the lower and higher frequencies become much more audible.
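To put rough numbers on this, the standard A-weighting curve, which approximates the ear's sensitivity at moderate listening levels, is one common formalisation of this effect (it is a general acoustics standard, not something specific to this article). At 50 Hz it sits around -30 dB relative to 1 kHz, which is why a bass tone must be played far louder to be judged equally loud:

$$
R_A(f) = \frac{12194^2\, f^4}{\left(f^2 + 20.6^2\right)\sqrt{\left(f^2 + 107.7^2\right)\left(f^2 + 737.9^2\right)}\left(f^2 + 12194^2\right)},
\qquad
A(f) = 20\log_{10} R_A(f) + 2.00\ \text{dB}
$$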

Volume adjustment constitutes one of the pillars of mixing.

Why shouldn’t you use headphones when mixing?

There is a noticeable difference between using headphones and using studio monitors, caused by the pressure conditions inside your ear canal. The inaccuracy does not come from the low frequencies, which are easily perceived on headphones.

It is the frequencies above 500 Hz that can be problematic. In that range, the outer ear resonates, and the proximity of the headphone cup produces additional pressure changes inside your ear. This results in a different perception of frequencies and volume levels: even if you know how to mix music on headphones, the result will rarely translate well to a pair of monitor speakers.

The goal should be to achieve equal loudness of all elements during your mix. As a rule of thumb, you should also always mix just above normal conversation level. This avoids excessive ear fatigue and results in a more proportionally balanced mix.

Mixing with headphones is not recommended. 

The Soundstage

The soundstage is an important concept for understanding how to mix music. Every producer should imagine it as a "three-dimensional box" in which the various instruments are positioned. Sounds can be placed:

1. At the back, at the front, or anywhere in between.

2. To the left or to the right, using the panning knobs.

3. At the top, middle, or bottom of the stage: a sound's volume and frequency determine whether it is perceived at the top (high frequencies), in the middle (midrange frequencies), or at the bottom (low frequencies).

The Front To Back Perspective (Gain)

When a sound wave comes out of your speakers, it does not reach your ears at full strength: it propagates in all directions and loses intensity with distance. This phenomenon is described by the inverse square law. Logically, the louder an instrument is within a mix, the closer to the listener it appears to be.
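As a worked illustration (a standard acoustics result, not specific to this article): free-field intensity falls with the square of the distance, which amounts to roughly a 6 dB drop in level for every doubling of distance:

$$
I(r) \propto \frac{1}{r^2},
\qquad
\Delta L = 20\log_{10}\frac{r_1}{r_2}\ \text{dB} \approx -6\ \text{dB when } r_2 = 2r_1
$$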

What you will often notice with dance mixes is that they give a very frontal impression. This is the result of a well-put-together mix in which all the elements occupy their own space. It is impossible, however, for every element to literally sit at the front of the soundstage: in that case, all the tracks would have the same volume, resulting in a cluttered mix.

Gain can alter the way we perceive sound. 

The Horizontal Perspective (Panning)

The horizontal mixing perspective governs the placement of sounds to the left or to the right.

Stereo placement depends on two elements:

1. The relative volume/intensity between sounds.

2. Their respective timing.

As a technique, panning provides one of the most straightforward ways for a producer to create space for two instruments that share a similar frequency range. Slightly panning one sound left or right preserves a clear soundstage in which all sounds are positioned in a balanced way.
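To make this concrete, here is a minimal Python sketch of the common constant-power (sin/cos) pan law; the function name and the -1 to +1 pan range are our own illustration, and individual DAWs may use different pan laws:

```python
import math

def constant_power_pan(pan: float) -> tuple[float, float]:
    """Constant-power pan law.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left_gain, right_gain). Since left**2 + right**2 == 1,
    the total acoustic power stays constant across the stereo field.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# A sound panned slightly right to clear space for another instrument:
left, right = constant_power_pan(0.3)
print(f"L gain: {left:.3f}, R gain: {right:.3f}")  # ~0.523 / ~0.853
```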

To deliver a quality mix, the positioning of the studio monitors matters: each speaker should be the same distance from the producer as the speakers are from each other.

An equilateral triangle should be formed between the producer and the speakers and between the speakers themselves. 

Basic mix considerations

  1. The kick and the bass are central elements of any electronic music track. Most of the time they sit at the front of the mix, although their positioning also depends on the other elements (vocals, drums, lead).
  2. If the lead, pluck, or vocals are important elements, they should be placed at the forefront.
  3. The easiest way to make certain elements pop out is to increase their gain relative to the other instruments.
  4. Cutting 1 or 2 decibels from a sound's higher frequencies makes it appear more distant; conversely, boosting its higher frequencies brings it closer.
  5. Compression can produce a similar effect. With a fast attack, a compressor reduces transients, removing high-frequency content and repositioning the sound in the mix. A multiband compressor is more precise, since individual bands can be adjusted to modify low- or high-frequency content.

The Vertical Perspective (EQing)

In the vertical perspective, higher-frequency elements sit at the top of the soundstage, whereas lower frequencies sit at the bottom. Vertical positioning is therefore constrained by the timbres (high, mid, and low frequencies) the producer has already chosen for the mix.

All timbres contain frequencies that contribute little to the end result. When instruments are heard in the context of a mix, overlapping harmonic frequencies make them clash. Whenever this is the case, you should eliminate the unnecessary frequencies.

The phenomenon of sounds obscuring one another in a shared range is known as "frequency masking", and finding and remedying the problematic frequencies is not always as straightforward as it seems, because it requires calculation and compromise on the producer's part.

To help producers with this process, frequency charts are available on the internet. Most of the time, however, these are inaccurate, because synthesized timbres, processing, and effects considerably modify the original timbre. Instead of relying on instrument frequency ranges, you can rely on seven fixed EQ octaves.

Fabfilter Pro Q2, one of the most famous modern-day EQing plugins.

The 1st Octave – from 20 Hz to 80 Hz

40 Hz and under is a frequency area occupied by the lowest frequencies of a kick drum and the bass.

A rule for these frequencies (40 Hz and under) is to never apply a boost. Instead, the engineer should apply boosts in the 50 to 65 Hz area, where the character of the bass and kick drum lies.

Cuts to the bass can also be applied in this frequency area to create space for the kick drum. This octave range is very tricky and must be auditioned on multiple sound systems to get a good idea of the small inconsistencies.

The 2nd Octave – from 80 Hz to 250 Hz

In this range, the best frequencies to boost or cut are generally found at 120 or 180 Hz. Boosting in this range makes a sound heavier; conversely, cutting in this area makes it thinner.

This range carries most of the low-frequency energy of instruments and vocals. It is a very important range: it is where the "bass boost" of home sound systems operates and where much of the bass energy can be found.

The 3rd Octave – from 250 Hz to 600 Hz

This area is crucial in the sense that if harmonic frequencies clash in this range, most elements will appear cluttered and indistinct. Most of the elements that sit in this range are located between 300 Hz and 400 Hz. Engineers boost in this area to accentuate the presence and clarity of a sound; if you want to make a sound less boxy, cuts in this range are useful.

The 4th Octave – from 600 Hz to 2 kHz

This octave is often used to give instruments presence. Small boosts can also be made at 1.5 kHz to increase the attack and body of some instruments.

The 5th Octave – from 2 kHz to 4 kHz

The fifth octave expresses the attack of most rhythmic elements. Most commonly, cuts and boosts in this range happen between 2.5 kHz and 3 kHz.

The 6th Octave – from 4 kHz to 7 kHz

This octave is a distinctive range for most instruments, and 5 kHz is the frequency to look out for. Boosting here adds air and definition; cutting reduces sonic harshness.

The 7th Octave – from 7 kHz to 20 kHz

This octave pertains to higher-frequency elements such as cymbals and hi-hats. A common technique is to apply a shelving boost at 12 kHz to make the music sound more hi-fi and detailed while avoiding aural fatigue. Popular frequencies are 8 kHz, 10 kHz, and 15 kHz. Boosts here increase timbre clarity and add air; cuts remove resonance from instruments as well as vocal sibilance.

Basic mixing mistakes to avoid

1. If a producer uses only stereo signals in an arrangement, the end result will be confusing, because it becomes impossible to locate the individual instruments. The choice of which elements should be mono or stereo belongs to the producer, but it is common for the kick drum to be mono, since it is the leading element in electronic music (see the sketch after this list).

2. When trying to create a sense of depth in their mix, many beginners inundate their sound with reverb (and other effects). To avoid this while still delivering a sense of depth, one solution is to employ a mono reverb signal with a long tail; to clean up the sound, an EQ can be added to reduce the higher-frequency content of the reverberation.

3. Another major mistake is creeping mix faders: gradually increasing the volume of each channel with the (impossible) aim of hearing every instrument clearly over the others. A novice engineer who does this is not taking headroom into consideration, and once maximum headroom is reached, the mix loses its sonic definition.
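Returning to the first point, the standard mid/side decomposition shows exactly what survives a mono fold-down. Below is a minimal Python sketch (the function names are our own illustration):

```python
def to_mid_side(left: float, right: float) -> tuple[float, float]:
    """Mid = the mono fold-down; Side = the stereo-only content.

    A mono kick (left == right) has side == 0, so nothing is lost
    when a club system or phone speaker sums the mix to mono.
    """
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def from_mid_side(mid: float, side: float) -> tuple[float, float]:
    """Exact inverse: recovers the original left/right samples."""
    return mid + side, mid - side
```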

Practical Mixing | Tips & Guidelines

STEP 1: Before you start mixing, always make sure you save the session. That way, you will always have an original version with unaltered audio tracks to go back to.

STEP 2: Before starting to mix, set correct monitoring levels through your audio interface. You should be able to hear your music clearly while still being able to hold a conversation over it. Note that loudness can be a misleading factor: a mix should sound great at low volume.

STEP 3: Another piece of advice on how to mix music is to use only the track volumes at first, with no additional plugins. A whole song can be balanced just by using the volume faders.

STEP 4: Editing is often underestimated in mixing. During the mixing process, it is common to have to cut, paste, or even remove things from the mix for it to sound good. Sometimes parts of vocals or guitar noises, such as unwanted breaths or "s" sounds (see: de-esser), can be removed.

STEP 5: Next up is EQing, one of the most important steps in the mixing process. EQ should improve the way your tracks coexist: they should create space for each other rather than fight over the same frequency range. EQ governs the presence of low, mid, and high frequencies. Keep in mind that EQing in mono can remove the space created by panning.

STEP 6: A practical way to EQ is to sweep across frequencies with a high "Q" and high gain, listening to each area in turn. Frequencies that harm the overall mix should be cut; useful ones should be boosted. Cut sharp (high "Q") and boost gently (low "Q"), and don't boost or cut by more than 3-4 dB except in extreme circumstances.
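For illustration, here is a minimal Python sketch of a peaking ("bell") filter using the widely published Audio EQ Cookbook (RBJ) formulas; the function name is our own, and a real DAW EQ will differ in implementation details:

```python
import math

def peaking_eq_coeffs(fs: float, f0: float, gain_db: float, q: float):
    """Biquad coefficients for a peaking EQ (RBJ Audio EQ Cookbook).

    fs: sample rate in Hz, f0: centre frequency in Hz,
    gain_db: boost (+) or cut (-) in dB, q: higher = narrower bell.
    Returns normalised (b, a) coefficients for a direct-form biquad.
    """
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * A, -2.0 * math.cos(w0), 1.0 - alpha * A]
    a = [1.0 + alpha / A, -2.0 * math.cos(w0), 1.0 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

# "Cut sharp, boost gently": a narrow -4 dB cut and a broad +2 dB boost.
cut = peaking_eq_coeffs(44100.0, 300.0, -4.0, q=8.0)
boost = peaking_eq_coeffs(44100.0, 3000.0, +2.0, q=0.7)
```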

High-passing is very common: it removes useless low-frequency content from tracks that don't need it (everything except the bass and kick drum). "Complementary EQ" is useful when two tracks clash: you cut a frequency from one and boost it on the other. EQ is used to fix tracks but should not be seen as an end in itself.

STEP 7: After EQing, it is a good time to use panning, since you have created more space. Vocals, bass, kick, and snare should always be kept in the middle (mono); everything else can be panned left or right.

STEP 8: Once you have checked the levels, applied EQ, and panned, it is time to use compression. Compression balances the dynamic range of a track. Use it mostly on bass and vocals; if overused, it can really be detrimental to your tracks.

STEP 9: An essential processing tool in music production is compression. A compressor reduces the gain of any signal above a decibel "threshold" according to a "ratio"; a higher ratio results in more compression.

  • 2:1 ratio = 50% of the signal over the threshold remains.
  • 3:1 ratio = 33% remains.
  • 4:1 ratio = 25% remains.
  • Attack = how long the compressor takes to start this gain reduction.
  • Release = how long it takes to stop it.

A faster attack sounds less "punchy" and more "fat", because it reduces the attack transient. Makeup gain brings the now-compressed signal back up to its previous level. Experiment with these parameters to create energy and consistency. Rule of thumb: a 3:1 ratio is a good place to start; aim for about 4 dB of gain reduction. Use compression to bring life to tracks.
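As a rough illustration, the static part of a compressor (ignoring attack and release smoothing) can be sketched in a few lines of Python; the function name and the example numbers are our own, chosen to match the ratios above:

```python
def compressed_level(level_db: float, threshold_db: float, ratio: float) -> float:
    """Static compressor curve: overshoot above the threshold is divided by the ratio.

    A 2:1 ratio leaves 50% of the overshoot, 3:1 leaves 33%, 4:1 leaves 25%.
    Attack and release (how quickly this gain change is applied) are omitted.
    """
    if level_db <= threshold_db:
        return level_db  # below the threshold the signal is untouched
    return threshold_db + (level_db - threshold_db) / ratio

# A peak 8 dB above a -20 dB threshold through a 2:1 compressor:
print(compressed_level(-12.0, -20.0, 2.0))  # -16.0: only 4 dB of overshoot remains
```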

STEP 10: The next step is to go back through the volumes of the different tracks and verify the overall EQing, starting with the least important instruments and finishing with the most important ones. Soloing tracks is not a good idea at this point. When adding a plugin to a track, always make sure the track sounds better than it did before. Learning how to mix music involves taking breaks; afterwards, listen to the track at low volume on low-quality sources (car speakers, earbuds) to make sure the bass, kick, snare, and vocals are well defined.

STEP 11: Only after the general EQ pass should you start using coloring plugins: the reverb, delay, saturation, and harmonic exciters you use to color your tracks and create particular tones. Experimenting is a good way to find out what each effect can contribute to the mix. Nonetheless, you should not rely entirely on effects to make your tracks sound better.

Effects such as vocal doubling, "telephone" EQ, de-essing (mentioned earlier), parallel tracks, sends, panning, volume automation, and other tricks can make your mix sound less dull and more interesting. The "less is more" principle applies in this phase: it is better to use a few plugins well than a bunch of different ones that don't really add anything to your mix.

STEP 12: Finding a reference mix is also a very good idea: professionally mixed tracks will make you notice the mistakes in your own. Furthermore, you can use group plugins to link certain similar elements of your track together, like vocals or drums. This creates cohesion.

Master tracks are useful too: they can carry subtle "glue" compression, EQ tweaks, tape saturation, or even a group reverb that ties the tracks together into a coherent whole.

STEP 13: Once you have finished mixing, let your ears rest for a while and come back to the mix later. When you re-listen, you might want certain elements to be louder; this is the best time to make those modifications. If you have followed our steps on how to mix music, your song is ready for the last step: mastering.

Conclusion

To conclude, mixing can be a very daunting and complex process. Its theory is adaptable, extensive and ever-changing. Its practice is not a science and entails very personal approaches. Nonetheless, there is some grounding knowledge on how to mix music as there is with any other topic in music production.

Understanding how to mix music is, without a doubt, a step you cannot skip if you want your tracks to reach professional quality. This article closes our series on how to start your music production journey. Over the course of these articles, we have tried to cover the basics of DAWs, hardware setups, music theory, arrangement, and how to mix music.