
Top 10 Reverb Tips and Tricks

With reverb, you can make or break a space

Imagine listening to a recording and half a minute into the song you notice something wrong. You
can’t quite put your finger on it; you just can’t feel the instruments, you feel attacked by the
singer’s in-your-face voice, and everything is just too… dry. It’s like listening to music in a
vacuum. There’s no space.

Although a truly reverb-free recording is nearly impossible (unless it was made entirely in
an anechoic chamber), a record can still sound very dry if you don’t put any reverb on
anything.

Reverb can be perceived as a glue that holds everything together, yet retains enough space to
maintain a perceived distance between each element. It creates a three-dimensional picture of the
soundscape you just recorded, making you feel that you can hear the room along with
the instrument.

Different modes of reverb

There are quite a few different types of reverb. You can call them reverb modes, or room types.
Some of the more common types include: Room, Hall, Chamber, Spring, Plate, and Convolution.
In our age, we have access to digital reverb simulators which can simulate, quite realistically, all
of these programmed room or reverb modes. Let’s take a look.

 Room reverb – These types simulate the sound of having recorded something in a room.
Whether the parameters are for a big room or a drum room, they usually simulate smaller
spaces than their Hall/Chamber counterparts.
 Hall reverb – Rich, warm and big are the first adjectives that come to mind when thinking
about Hall reverb. These types simulate halls, whether they be medium halls, concert
halls, or whatever lush parameter name the hall has.
 Plate reverb – Plate reverb is a personal favorite of mine for vocals. Live, I probably use
it too much, but I just think it does wonders for vocals without taking things too far or
drowning them in reverb. Plate reverb is basically sound being sent to a metal plate, which
vibrates back and forth. These vibrations are picked up and transformed back into an audio
signal. Plate reverbs are very bright but clean, so they suit vocals especially well.
 Spring reverb – I was once asked what reverb was while I was fooling around with my
guitar. I cranked up the reverb on my small practice amp and then kicked it. “That boing
you heard?” “Yeah?” “That’s reverb.” Although true in some form, that boing wasn’t ordinary
reverb; it was spring reverb: the reverb found on guitar amps, and so the kind most often used
for guitar.
 Chamber reverb – In the old days, studios had so-called echo chambers. In these
chambers they placed speakers, to which they routed the audio signal they wanted to put
reverb on. The signal, be it guitar, voice, or anything else, was played through the speakers
into the chamber and picked up by a microphone positioned to capture the
reverb in said chamber.
 Convolution reverb – This is the type of reverb that allows digital emulation of real three-
dimensional spaces. If you’re familiar with the famous reverb plugin Altiverb, then you
have heard convolution reverb. To capture a room’s reverb characteristics, an
“impulse” sound is played in a real space, such as an opera house or a cathedral, and the
result is recorded into a computer. From that recorded impulse alone, the computer can
simulate the space. This is possibly the best kind of digital reverb around.
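As a rough illustration of how convolution reverb works, here is a minimal numpy sketch. Note the assumptions: a real plugin like Altiverb uses an impulse response measured in the actual space, while here it is faked with exponentially decaying noise, and the wet/dry blend is a simple linear mix.

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet_mix=0.3):
    """Convolve the dry signal with an impulse response and blend wet/dry."""
    wet = np.convolve(dry, impulse_response)[: len(dry)]  # trim tail to dry length
    wet /= max(np.max(np.abs(wet)), 1e-12)                # normalise the wet signal
    return (1.0 - wet_mix) * dry + wet_mix * wet

sr = 44100
rng = np.random.default_rng(0)
t = np.arange(2 * sr) / sr
ir = rng.standard_normal(2 * sr) * np.exp(-3.0 * t)  # fake 2-second "cathedral" IR
dry = np.zeros(sr)
dry[0] = 1.0                                         # a single click as the source
out = convolution_reverb(dry, ir)
```

Feeding a click through the convolution smears it out into the fake room’s decay, which is exactly what a measured impulse response does for real program material.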

So now you know a little bit about the reverb modes you most commonly work with. Below I
have brainstormed a few fun tips you can use whenever you like to spice things up.

1. A different take on reverse reverb:


You all know the classic reverse reverb, where the reverb seems to swoosh in before the
phrase of the singer or the hit of the drum. A neat trick for something different is to
record an infinite reverb on a different track and then reverse it. For example, say you
have a slow, intermission-type middle part, and the part before it ends on a snare hit. You
can record that last snare hit on a different track with a big, cathedral-like reverb with
infinite decay. Then reverse the audio and put it low in the mix, and you have a weird,
controlled reverbed ambience filling out your slow part.
2. Gated Reverb on vocals:
Gated reverb on vocals is something I think is pretty cool. I think it is used on the song
“On Call” by Kings of Leon. His vocal reverb stays on while he is singing but cuts off abruptly
when he stops. You patch your effect processor to a gate, and the sound source is side-
chained to the gate. That way, the gate opens and lets the reverb through whenever the singer
is singing, but cuts it off as soon as the sound level dips below the threshold of the gate.
3. Making things feel bigger and bigger:
Say you have a really spaced-out Sigur Ros rock outro (I’m Icelandic, I’ve got to
namedrop here) and the drums are going wild in the end. It can be fun experimenting
with automating the reverb so the drums, or maybe only the snare, or everything,
whatever you choose, gets bigger and bigger. I know for a fact that this can work
wonders live to really give that last song a huge impact on the audience.
4. Pan it:
Use mono reverbs for a mono sound source and pan them to a different location in the
mix. It can give an interesting impression.
5. Put space between source and reverb:
Using a standard room reverb, adjust the pre-delay to give the impression that the space is
a little bit bigger without making the reverb linger too long. On vocals, for example, it can
create space between the singing and the reverb.
6. Reverb only:
Send your drums to a big reverb and solo-safe the reverb. That way you are only hearing
the reverb and not the original sound source. It can make for a cool fade-in intro for a
song. Especially if you add reverse reverb for the change into the real drum kit.
7. Mix it up:
Use different types of reverb on the same source. Mixing a couple of types of reverb can
create an interesting effect.
8. Don’t use any:
Keep some instruments reverb-free. It can add an interesting contrast to the rest of the
song, and it can put a solo instrument at the forefront in a special way.
9. Add other effects:
Add other types of effects on the aux channel where you have your reverb. Try distorting
it, phasing it or anything else you can think of.
10. Use REAL reverb:
Try ditching your plugins and use real reverb. Upload your audio clips to Silophone, an
old grain silo that has been converted into a do-it-yourself reverb chamber. You upload
audio and it is played back in the empty silo, then recorded and sent back to you as a
download.
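Tip #1 above can be sketched offline with numpy. This is a toy stand-in, decaying-noise acting as the “cathedral” reverb and a click in place of a real snare hit, not a substitute for doing it properly in your DAW:

```python
import numpy as np

def reverb_tail(hit, seconds, sr=44100, decay=1.0, seed=0):
    """Render the full reverb tail of a one-shot hit (toy convolution reverb)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(seconds * sr)) / sr
    ir = rng.standard_normal(len(t)) * np.exp(-decay * t)  # decaying noise "IR"
    return np.convolve(hit, ir)          # full convolution keeps the whole tail

sr = 44100
snare = np.zeros(sr // 10)
snare[0] = 1.0                           # a click standing in for the snare hit
tail = reverb_tail(snare, seconds=4.0, sr=sr)
swell = tail[::-1].copy()                # reverse it: the tail now builds up
swell *= 0.2 / max(np.max(np.abs(swell)), 1e-12)  # keep it low in the mix
```

Because the tail decays toward silence, reversing it gives a swell that grows toward the hit, exactly the “controlled reverbed ambience” described in tip #1.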

I’ve decided to do an example of tip #7. I’ve taken a small snare sample and put two
types of reverbs on it via an aux send. I used the Logic presets “Ambience” in the
Platinum Reverb and the “Short Snare Hall” in Space Designer. Although I am using
Logic, any DAW with decent reverbs works just as well.

First audio sample has the untreated snare drum.

Second audio sample has the snare with a tiny bit of ambience reverb. Not a huge
difference but not as dead.

Third sample has a snare hall preset. The reverb makes the snare much bigger.

Fourth sample has both the ambience and hall reverb patches together. Notice that the
predelay on the ambience preset delays the hall reverb so it enters later than the actual
snare hit. Could make for an interesting sound.

Reverb is an instrument of endless debate. Everybody has an opinion of what works
best (as with everything else in audio). But reverb can often make or break a song:
too much fills it with so much space that you can’t hear what it’s all about, while too little
just kills the emotion. So take particular care in your application of reverb, and be open
to a lot of experimentation. Since it is such a big topic, you are sure to find something
interesting in your endeavours.

Who knows, maybe you’ll be the next one to invent the next “reverb studio trick”?

Reverberation is a subtle but crucial part of any mix. The wrong choice can make everything
sound harsh, messy, muddy or distant. The right choice can bind a mix together, add depth,
space, and air, and enhance detail. The trick is knowing the difference.

Once upon a time, things were simple. You either had an SPX-90 or a Lexicon, and you got on
with the job. Now, the range of choice is bewildering. In the 21st century there are hardware and
plugin reverbs, modelling reverbs, sampling reverbs, convolution reverbs, emulating reverbs…
the list goes on.

This post will completely ignore all that confusion, and attempt to cut right to the chase. Later
I’ll offer some ideas, hints and tips for getting the best results with reverb, but first we’ll look at:

How to choose the right reverb.

Broadly speaking, there are two types of reverb – ones that try to sound “natural” – meaning, like
a real acoustic space – and ones that don’t. Stage one is to decide which you are looking for.

Natural – These make instruments sound as if they are in a real space. They tend to be more
subtle, and add more depth to a mix. If you use too much you’ll end up swamping the original
character of the recording, but used well, they can help give character to a close-miced recording,
and make things sound more three-dimensional and “real”. It’s important to choose one that
complements the sound of the space you recorded in – if the reverb is too different from what is
already there, the two will “fight” and the result will sound artificial.

The Rest – Notice I haven’t called these “unnatural” – we are so used to artificial reverbs today
that even the least realistic reverb can sound completely normal. Rather than emulate a real room
or hall sound, this type of reverb is more of a pure “effect” – it allows you to add life or
“sheen” to dry recorded sounds without making them “sit back” in the mix in the same way a
natural verb would.

If it’s not immediately obvious which of these two you need, don’t worry – just experiment! If it
sounds good, it is good. Even when I already have a clear idea in my head about what I want, my
first step in choosing a reverb is always simply to flick through a load of presets and make a
note of the ones that I like. Keep an open mind – it’s easy to settle on a small pool of favourites,
but it’s always worth listening with fresh ears every so often. Once I have a shortlist, I then listen
to each in turn and use the ideas from the list below to tweak the best, before doing final
comparisons to make the final choice.

Finally – don’t be afraid to use both types in the same mix. Maybe adding a realistic room-ey
reverb to the guitars and drum overheads helps pull everything together in the mix, but the vocal
just needs a shimmery plate and the snare a little more life.
Ideas for using and abusing reverb

Now to the nitty-gritty. Here are some of the rules of thumb, ideas and techniques I’ve come
across for using reverb effectively.

Take time to make a good choice early on

It’s amazing how much influence reverb can have on how we perceive a mix – and it’s important
to get the right one early on. If you pick something too bright and harsh early on, you may
struggle to get the warm sound you’re aiming for, for example. On the other hand a dull, muddy
reverb could drag down what’s otherwise supposed to be a spiky, punky mix.

Balance level and time

The most common reverb question is: “how much?” One simple answer is:

Turn it up until you notice it, then turn it down slightly.

But this only works if the decay time is right first. If the reverb tail is too short, then turning it up
won’t help; if it’s too long, you won’t be able to get it loud enough before it starts to swamp
things. The length of the reverb and its amount need to be balanced against each other, and may
be different for separate elements in a mix – don’t be afraid to patch in two sets of the same
reverb with slightly different decay times – or other parameters, for that matter.

Learn about early reflections

Most reverbs include two elements – the familiar longer reverb tail, but also much shorter
elements known as “early reflections” – the idea being that sound from an instrument in a real
space bounces off a nearby wall first. Our ears pick up on this, and it has a big effect on our
perception of the size of the “room”. Experiment with the reverbs you have access to, to learn
how their settings affect the sound.

Always EQ the return

I always bring the reverb return up on a stereo channel (rather than routing it straight to stereo)
and then EQ it. Most reverbs include settings for tweaking how much high frequency there is in
the sound, but I think it’s faster (and gives more control) to just EQ the return. It’s amazing how
many cheap and nasty reverbs can be made to sound ten times more natural by slapping on a
low-pass filter, for example.
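To show why a low-pass on the return helps, here is a minimal one-pole low-pass sketch in numpy. The filter design and the 4 kHz cutoff are my illustrative choices; in practice you would simply use your console’s or DAW’s EQ on the return channel.

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sr=44100):
    """Darken a signal with a simple one-pole low-pass filter."""
    # Standard one-pole recursion: y[n] = a*x[n] + (1 - a)*y[n-1]
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.zeros(len(x))
    prev = 0.0
    for n, sample in enumerate(x):
        prev = a * sample + (1.0 - a) * prev
        y[n] = prev
    return y

sr = 44100
reverb_return = np.random.default_rng(1).standard_normal(sr)  # stand-in return
darker = one_pole_lowpass(reverb_return, cutoff_hz=4000.0, sr=sr)
```

Low frequencies pass almost untouched while the fizzy top end of a cheap reverb is rolled away, which is most of what makes a harsh return sound more natural.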

Sometimes EQ the send

Depending on how the reverb works, this may give a different result than EQ-ing the return. It also
allows you to use the same reverb for different instruments – say you have a bright horn section
which really “catches” the reverb and emphasises its unnatural qualities, but you want to use the
same reverb on the piano to help blend the mix, and the piano needs the brightness – try routing
the auxiliary send from the horns through an EQ first, to take the harshness out on its way into the reverb.

Consider compressing the send as well!

A common problem with reverbs is that they sound fine most of the time, but certain louder
notes “catch” them and the huge reverb tail sticks out a mile as a result. In the same way that EQ
can sometimes help, compressing the signal you send to the reverb can solve this problem –
it holds back the loud notes in the send to the reverb only, and gives a much smoother result.

Automate reverb levels for complete control

It’s not uncommon for a reverb that sounds great when everything is going full-tilt to sound
completely over-the-top in quiet sections, or vice-versa. Using automation to “ride” the return
level – or the levels you’re sending – can keep a reverb sounding great in both loud and quiet
sections. Or, if you have “sticking out notes” and the suggestion about using a compressor
doesn’t work for you, automation can sometimes get a better result – custom tweaking the reverb
send by hand. I once made seventeen separate reverb automation changes in a single 40-second
sax solo!

Watch out for spill

Unless you have 100% isolation in your mix, always be aware that spill may be influencing
things. The dull, muffled sound of drums in the booth next door may be getting out of hand once
it bleeds into the vocal reverb send – again, EQ-ing the send may be a solution to this – or
perhaps using a gate.

Tweak tweak tweak

If you’re using a preset reverb sound, you probably aren’t getting the best out of it. Presets are
invaluable for quickly auditioning and choosing which reverbs have the right “flavour” for a
given mix, but at the very least you need to optimise the decay time and amount of early
reflection. Most reverbs offer far more elements to tweak and fine-tune, though. Just like
playing an instrument, time spent experimenting with all the possibilities will always be a
worthwhile investment.

Don’t use any

Maybe you don’t need any artificial reverb at all! If the space you’re recording in sounds great,
sling some extra mics and try to capture it, then mix it in instead of (or as well as) artificial
reverb. And be aware that sometimes things sound great dry – or, much more commonly, that an
effect other than reverb can give you what you’re looking for – delay on vocals instead of reverb
is a classic example.

Whichever of these suggestions you decide to make use of, the most important message is simply
– never underestimate the power of reverb. All too often it’s just the case that a tried and trusty
favourite is applied for the rough mix, and never gets improved or optimised. It may take longer,
but time spent choosing the right reverbs for a mix can reap dividends and actually end up saving
time, in the long run.

(This post is the third mixing-themed post I’m bringing back to the front page – prompted by Joe
Gilder re-opening the doors to his Mix With Us community.)

Sonar’s stereo buses are great tools when it comes time to mix your project. The concept of
buses is rooted in the world of analog mixing consoles, but Cakewalk has redefined the bus into
something even more powerful than aux sends or subgroups. Sonar simply has stereo buses, and
you can have as many of them as you want, and route them however you want. To understand
the power behind this concept, take a look at how buses are used in three common scenarios:
Submixes, Effects Buses, and Parallel Compression.

Outputs and Sends

First, you need to understand how tracks are routed to buses. Every track has an Output that
routes to a bus. The output of the track, after the effects bin, volume, and pan have affected the
audio, is sent to this bus.

In addition to the track output, you can add one or more Sends. To do this, right click on the
track and choose Insert Send and pick a bus to send it to. (You can choose New Stereo Bus and
Sonar will create a new bus for you). Sends also send audio from the track to a bus, but they have
additional controls that the track Output does not have. Sends have their own Level and Pan, a
button to enable or disable them, and another button that will toggle between “Pre” and “Post.”
You may have heard this referred to as “Pre Fader” and “Post Fader,” because the state of this
button determines if the track’s audio will be sent to the bus before or after it goes to the track’s
volume fader. If it is set to “Pre,” then the track’s own volume fader will have no effect on the
audio being sent through that Send. Regardless of whether the Send is pre or post-fader, the level
control on the send will control how much signal is sent to the bus – it is like a volume control
for the send.
The “Lead Vox” track outputs to the Master Bus and has a Post Fader Send to the Reverb Bus,
while the “Acoustic” track outputs to the Guitar Bus
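The pre/post-fader behaviour described above reduces to simple gain arithmetic. This is a hypothetical sketch; the function name and the linear gain values are mine for illustration, and Sonar’s actual faders are calibrated in dB:

```python
# A post-fader send is taken after the track's volume fader;
# a pre-fader send ignores the fader entirely.
def send_signal(sample, fader, send_level, pre_fader):
    """Return the signal a Send delivers to its bus for one sample."""
    source = sample if pre_fader else sample * fader
    return source * send_level

post = send_signal(1.0, fader=0.5, send_level=0.8, pre_fader=False)  # follows the fader
pre = send_signal(1.0, fader=0.5, send_level=0.8, pre_fader=True)    # ignores the fader
```

Pulling the track fader down to zero silences the post-fader send entirely but leaves the pre-fader send untouched, which is why the two modes suit such different jobs.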

Many tracks can output and send audio to a single bus. Each bus has an output which you can
route to your audio interface (sound card), or another bus. In fact, you can even insert sends on
buses! The routing possibilities are endless.

The first common use for buses is submixing. Let’s say you spent hours tweaking the mix of a
drum kit, finding the perfect balance of kick, snare, hi hat, toms, and overheads. Later, after
working on the bass, guitars, and vocals, you decide that the drums need to be a little louder. You
don’t want to adjust the drums individually, because you like the sound of the kit as a whole –
you don’t want to upset the balance between the different tracks that make up the drum kit. You
need a submix.

Insert a new Stereo Bus, name it “Drums,” and set the output of the new Drums Bus to your
Master Bus. (The output of your Master Bus should go to your audio interface so you can hear
what’s playing back.) Now, set the output of each drum track (kick, snare, etc.) to go to the
Drum Bus. Now you can control the level of the entire drum kit with the fader on the Drum Bus
– without affecting the mix of the individual drum tracks.

I often create a bus for drums, one for guitars, and another for vocals. That way, for example, I
can bring all the guitars up or down in the mix without affecting the balance between the
different guitar tracks. If you have a lot of background vocals (BGVs), you might have a separate
bus for them that outputs to a vocal bus. Then you have two levels of control – you can bring all
the BGVs up or down under the lead vocal (with the BGV bus fader), and you can also adjust the
level of all the vocals at once (with the Vocal bus fader).
The “Acoustic” track outputs to a Guitar Bus, where it can be mixed with other guitars before
being routed to the Master Bus

Submixes are also handy while mixing. For instance, if you want to take the guitars out of the mix
while you work on something else, you can mute the Guitar Bus instead of muting the individual
tracks.

Sonar’s stereo buses have their own effects bins – an essential property that makes them very
useful for reverb, delay, and other effects that you want to apply to multiple tracks. The basic
purpose of reverb is to add a sense of space, or depth, to the recording – to make it sound like it’s
in a natural environment. Having a different reverb on every track doesn’t sound very natural.
(Other plugins, like EQ and compression, make sense on a track-by-track basis, though there are
some instances where you would use them on a bus; see Parallel Compression, below). A by-
product of this technique is that your computer’s processor only has to run one instance of the
reverb plugin – saving valuable CPU cycles.
To use the stereo bus for effects, first create a bus and insert your favorite reverb plugin into the
bus’s effects bin. When using an effect this way, the settings need to be different than if you
were to insert it on a track. You want the effect to be “100% wet” – in other words, you want the
bus to only output the sound generated by the reverb plugin. This is because you will be mixing
the output of the tracks (the “dry” sound) with the output of the reverb bus (the “wet” sound).

After you have put a reverb on the bus and set it to be 100% wet, set the output of the Reverb
Bus to your Master Bus. To begin with, you probably want to pull way back on the Reverb Bus
fader – this is where you will be mixing in the total amount of reverb in your project.

The Lead Vocal is sent to a Reverb Bus which outputs to the Master Bus

Now that your bus is set up, it is time to route some tracks there. Find the lead vocal (or any
other track you like) and insert a Send to the reverb bus. Don’t change the Output of the track –
that is your “dry” signal going to your Vocal Bus or Master. You want to send a copy of the
signal to the reverb bus. Enable your Send and make sure it is “Post Fader.” In this application, it
makes the most sense for the Send to be Post Fader, because you want the level of reverb to
change with the level of the track.

Use the send level control to change the amount of reverb that the track gets. If you have
multiple tracks routed to the same Reverb Bus, you use the send level controls on each track to
control the amount of reverb for that track. The volume fader on the Reverb Bus itself will
control the level of the reverb for all the tracks – much like a submix.
The kick and snare tracks are compressed on a bus and mixed with the dry sound

Parallel Compression is a technique that is used to get a “fatter” drum sound, while preserving
the natural attack of the drums. You do this by mixing the raw kick and snare tracks with heavily
compressed copies of those tracks. (See this article at Hometracked.) While you could simply
clone the tracks and insert compressors on the copies, there is a much more elegant solution
using buses.

First create a new bus and set the output to your Drum Bus (submix). Insert a compressor plugin
into its effects bin, and choose a short attack, long release, and big ratio.

Next, insert sends on the kick and snare tracks, routing them to your new Parallel Compression
Bus. Enable the sends, and make them “Pre Fader.” It is important to understand why you want
them to be pre fader. You’re going to be using the track’s volume fader to mix the level of the
drum in the song. If you made the send “Post Fader,” then lowering the fader would not only
lower the level of the uncompressed drum in the mix, but also reduce the amount of compression
the drum got on the Parallel Compression Bus. That’s not what we want – so make the send “Pre
Fader” and use the Send’s slider to mix the levels of the kick and snare tracks getting sent to the
bus.

Set the threshold of the compressor so that it really squashes those drums, and then use the
volume slider on the Parallel Compression Bus to mix the compressed signals back into the drum
mix. Then, when you adjust the volume slider of the Drum Bus, you will be changing the level of
your complete drum sound, including the parallel compression sound.

Kick and Snare are output to the Drum Bus, and also sent Pre Fader to the Parallel Compression Bus,
which outputs back to the Drum Bus
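The parallel compression chain above can be sketched numerically. This is a toy static compressor in numpy; the threshold, ratio, and the four-sample “drum” signal are made-up illustrations, not real settings or audio:

```python
import numpy as np

def hard_compress(x, threshold=0.1, ratio=8.0):
    """Crude static compressor: reduce everything above the threshold."""
    mag = np.abs(x)
    over = mag > threshold
    out = x.astype(float).copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

def parallel_compress(dry, wet_gain=0.5, **kwargs):
    """Mix the dry drums with a heavily compressed copy ("New York" style)."""
    return dry + wet_gain * hard_compress(dry, **kwargs)

drums = np.array([0.9, 0.05, -0.8, 0.02])  # made-up samples; the peaks are hits
fat = parallel_compress(drums)
```

Notice that the quiet samples gain proportionally more level than the loud peaks, which is why the blended result sounds “fatter” while the original transients stay intact.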

Getting Your Mix to... Mix, Part One: EQ as Event Security


Mixing and Multitracking Articles

Contributed By blueninjastar
The first of a series dedicated to concepts and techniques to help your mixes to find that perfect
blend of clarity, punch, character and definition.

Most of us have been familiar with EQs since long before we ever started recording audio in our
home studios. The presence of EQs on home and car stereos has made it seem like we have a
grasp on what they do and how to use them to improve the sound of our recordings. However,
applying EQ appropriately to a stereo mix and using EQ to add definition and clarity to a
multitrack recording are, indeed, two entirely different things. In order to achieve the latter, you
must gain a working knowledge of EQ Theory.

How many times have you mixed your latest number one hit only to find that the vocals seem
buried in the mix? To bring them out more, you just need to turn them up a little, right? Well, not
necessarily. In fact, doing this most often just places that vocal "on top” of your mix. The result
sounds like two completely separate recordings playing at the same time: one of the band, and one
of the vocals. How do you get the vocals to sit "in” the mix without fighting with other tracks?
How do you get that bass guitar to still be nice and bottom heavy and still hear the kick drum
punching through? How do you make that sax solo pop out front without piercing your ear
drums? You guessed it. EQ.

Proper use of compression, panning and levels all contribute to this goal as well, but EQ will
provide much of the groundwork for what we’re trying to achieve.

EQ Theory

First, it's important to understand that your mix (or any recorded sound) is nothing more than a
bunch of frequencies that hit various amplitudes over the course of a timeline. The human ear is
capable of hearing frequencies in the range from about 20Hz up to about 20,000Hz (20k).
Everything audible in a recording falls somewhere in this range or thereabouts and a given
instrument (or any other sound) will occupy certain frequencies more dominantly than others.
For example, a hi-hat cymbal would have significant amplitude (volume) between around 3k and
5k, and would have virtually no amplitude at 30Hz. Likewise, a bass guitar will have a lot of
amplitude around 80Hz and next to none at 10k. So, if you apply this theory across all of the
tracks in your mix, you can imagine how each track (instrument, voice) will primarily occupy a
certain range of frequencies. Most any track will have a dominant frequency range that
constitutes the "meat" of the sound. They will also occupy other frequencies in less significant
amplitudes that make up some of the characteristics of the sound. For example, the "boom" of a
kick drum might be around 60Hz while the "attack" might be around 2k. So, when you mix,
you're not just mixing several instruments together. You're mixing the frequency ranges of
multiple sound sources. Many of these sound sources will occupy overlapping frequency ranges.
If two sounds are trying to occupy the same frequency at similar amplitudes, they will fight with
each other, creating a muddy sound and losing definition from both sound sources.
Imagine you’re in line to get into a concert. There are ten lines all running side by side and at the
front of each line is the ticket-taker and a turnstile. As long as everyone goes one at a time, the
lines continue to move nicely. But what if the guy behind you tries to go through the turnstile at
the same time as you? If you let him pass, no problem. But if you both try to push through the
turnstile with the same strength at the same time, you both end up stuck in the turnstile, detained
by security and missing the opening song of the show! This is not unlike what happens in your
mix when two sounds (tracks) are competing for the same frequencies. They jumble themselves
together and you never hear either of them clearly. Think of your mix as nearly 20,000 lines (20Hz to
20kHz) to get into your ears.
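The idea that each source dominates certain frequency “lines” is easy to see with a quick FFT. In this numpy sketch, two synthetic sine tones stand in for a bass and a brighter instrument sharing one mix (the frequencies and levels are made up for illustration):

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr                        # one second of "mix"
# A "bass" sine at 80 Hz and a brighter tone at 3 kHz sharing the same mix
mix = np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1 / sr)
dominant = freqs[np.argmax(spectrum)]         # the busiest "line" in the queue
```

The spectrum shows two distinct peaks, one per source; when two real tracks pile energy onto the same bins at similar amplitudes, that is the turnstile jam described above.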

Notching Out

Now that you have a basic understanding of EQ Theory, let’s look at how you make sure
everyone waits for their turn in line: Notching Out. Let’s just jump right into an
example. My voice usually sits "primarily” around 2.1k to 2.5k. If I also have a guitar track that
includes the same range, the two tracks will step on each other. The vocal doesn’t get a chance to
shine through on its own, because that guitar track is trying to force its way through the line at
the same time with the same force.

This graph maps the average amplitude and frequencies of two tracks in a mix. Notice the
similar amplitudes in the frequencies from around 2K to 2.5K. Which track gets heard here?
This struggle causes muddiness in the mix.

To fix this, some might just turn up the vocal track. But, as I stated earlier, this won't really fix it.
What will happen is that the vocal will sit "on top" of the guitar. That's not what we want. We
want the vocal to sit alongside the guitar. So, we notch out the guitar track for the vocals. By
applying an EQ to the guitar track and reducing the volume of the frequencies in that 2.1k to 2.5k
range, the vocal ends up louder than the guitar ONLY in that range. The other frequencies that
the guitar occupies are left alone. So now, the guitar track and the vocal track can stay at fairly
even volumes relative to one another without losing clarity in the vocals. Make sense?
Now, the guitar track has been "notched out" between around 2K and 2.5K. This creates an opening
that the vocal can sit in, allowing both tracks to co-exist without fighting each other.

You can apply this concept throughout your mix to help create better definition between tracks
and to allow every track to have its own place in the mix. As another example, I always roll off
everything below about 80Hz on a guitar track and just let the bass fill that void. When I listen to
that guitar track by itself, it might sound a little thin, but when the bass is playing along with
it, the two sit alongside each other, allowing both to be heard clearly. As you apply this
approach across your mix, you will begin to see how it can clean everything up by reducing the
amount of overlapping frequencies from track to track.

In this mix, the guitar track has been notched out for the vocals and rolled off for the bass. The
bass has been notched out for the kick drum. As a result, all four tracks have their own place in
the mix and no tracks are fighting each other in the upper amplitudes.

Cutting the Notch


So, exactly how do you do this? Well, some basic understanding of how EQs work is imperative.
There are a few good articles in the Recording Tips section on that, so I won't go into too much
detail here, but I'll give you the basics. All you really need to do is apply an EQ to the track you
wish to notch out. If the track already has an EQ on it, then you're one step ahead of the game.
Select a band that is near the range you wish to notch out. Pull the gain for that band down 3 to 4
dB. Set the Q, or bandwidth, to be around 1 octave. Different EQs use different values for this, but
basically, you only want the Q about as wide as the range you wish to notch out. Then you just
sweep the frequency of the band around the range you're looking to notch out until you hear that
you've hit the pocket.

In this example, a paragraphic EQ is used to notch out a tight hole at 750Hz and to roll off
everything above 12kHz.
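For the curious, the kind of cut described above can be sketched in code with the well-known RBJ “Audio EQ Cookbook” peaking filter. This pure-Python/numpy version is an illustration of the math, not how any particular EQ plugin is implemented, and the guitar “track” is just a sine tone at the contested frequency:

```python
import numpy as np

def peaking_cut(x, f0, gain_db, q, sr=44100):
    """Peaking biquad (RBJ Audio EQ Cookbook), used here with negative gain."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):           # direct-form I difference equation
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, xn, y1, yn
        y[n] = yn
    return y

sr = 44100
t = np.arange(sr // 2) / sr
guitar_band = np.sin(2 * np.pi * 2300 * t)  # energy right where the vocal sits
# 4 dB cut at 2.3k, Q of about 1.4 (roughly a one-octave bandwidth)
notched = peaking_cut(guitar_band, f0=2300, gain_db=-4.0, q=1.41, sr=sr)
```

A Q of about 1.4 corresponds to the one-octave bandwidth suggested above; content well away from 2.3k passes through almost untouched.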

This can take a little ear training to recognize the difference since it can be fairly subtle. But once
you find it, you should be able to hear a noticeable improvement in the clarity and definition of
the track you're notching out for (this would be the vocal track in the example above).

Use Your Ears

After all, we’re working with audio here! I’m making mention of this seemingly obvious point
because, with the plethora of software-based EQs and visual displays, it’s easy to begin "looking"
at your mix instead of listening to it. Use the visual references to better understand what you’re
doing, but listen carefully to the way you’re affecting the sound. Notching out the wrong
frequencies will not only fail to accomplish our goal of creating more definition between tracks,
but will also rob the track of frequencies that may be important to the character of the sound.

Finally

Ok, I’m going to shut up now and let you get back to mixing. I’m confident that once you start
using these concepts and techniques in your mixes, you will notice a dramatic improvement in the
sound. Your recordings will begin to "open up" and individual tracks will start to reveal
themselves more clearly. Subtleties that were once buried under other tracks will come through
and add character, and your vocals and instrument solos will sit right "in" the mix and no longer
sound "pasted on." So, listen carefully to your mix, and then get in there and demand that all of
those frequencies wait for their turn in line.
