
Ask Abbey Road: Senior Engineer Andrew Dudman

shares his recording advice


Our Q&A series continues as we put your questions to Abbey Road’s engineering talent. This time, senior recording engineer
Andrew Dudman fields your questions on recording and explains that sometimes virtual instruments can be better than the
real thing.

By Will Betts - 7th July 2019

We’re delighted to bring you the second edition of Ask Abbey Road, in partnership with the iconic Abbey Road Studios. In
this series, we ask you to put your pressing production questions to the experts at the world’s most famous studio.

This time, we have Andrew Dudman in the hot seat. An Abbey Road Studios senior recording engineer with an incredible
21 years of experience at the facility, he’s earned awards for his work on Disney’s Brave and The Fellowship Of The Ring.
He’s also recorded scores for huge films such as Hacksaw Ridge and Baby Driver and games including Killzone 3 and
Uncharted 3. His extraordinary résumé goes on to include tracks recorded with Underworld and Elbow and many more
classical recordings. 

This time, Dudman fields your questions on recording, using virtual instruments alongside real instruments and ways to
get extra bass into movie soundtrack recordings.

You can put your own questions to engineer Lewis Jones (Doctor Strange, The Grand Budapest Hotel) for the next edition
of Ask Abbey Road, now.
Michael Proulx asks: how would you typically record a live band composed of a drummer,
singer/keyboardist and singer/electric guitarist?

Andrew Dudman: Firstly, you find out the style, and that informs whether you put the musicians in the room together. If
you think you’ll need to do any tuning or hardcore editing, you’d definitely need to use isolation. And if you’re going to
hand the recording over to someone else to mix, recording the band together with spill everywhere ties the mix
engineer’s hands.

Then, line of sight. It’s good to feel like you’re still playing in a band, even if you’re isolated. It pays to keep the musicians
as close together as possible with good lines of sight or rely on cameras and screens doing that job for you. Once you’ve
got that out of the way, then you get into mic choices. You probably go for more dynamics if everyone’s together in the
room, just to give you a bit more control. If you’re isolated, you can choose what you like. You’ve got a blank page to put
out your favourite mics, knowing they’re not going to be affected by spill from other instruments. 

Erik Skytt asks: when it comes to drums, how do you decide how much processing gets printed in
the tracking phase and how much to leave until after the tracking?

AD: If I know the drums that I’m tracking are going to be mixed by someone else, I’ll definitely keep the tracks cleaner –
probably with little compression, if any. Then, just a tiny bit of EQ to get a good, balanced level to tape.

I’ve been doing this long enough to have learned from some great engineers, so I always try and keep my monitor faders
all flat. When you send the right level to tape, it makes the mixing a bit easier – it helps everyone down the line get a
balance quicker. I keep the drum tracks pretty flat.

“A lot of it is whether the drums are tuned as you want them…”

If I were looking to give some interesting colours or options to a different engineer to mix, I would put out extra
microphones and process those. They then have the choice of a more coloured sound or a cleaner, flatter sound. That’s
just so I don’t force them into a corner where they can’t undo the processing.

If I knew I was going to record and mix it myself, I would use more EQ and a bit of compression to catch any peaks. From
experience in big studios where we’re often recording band elements alongside strings or a whole orchestra, I will often
only get five or 10 minutes to get a drum sound up! So that’s another reason for keeping it simple – not gating to tape or
anything like that. Anything you do, you have to be able to use from the start.

A lot of it is whether the drums are tuned as you want them and the position of the microphones. You make sure you
have the right amount of gain on your mic amp and then you know it’s going to be a great starting point. 

MusicTech asks: do you have any rules of thumb for compression?

AD: Not really, because every time I do a recording, I treat it as a unique thing. So you’re applying dynamics based on
what you hear at the start. I know what level, roughly, I’m going to print on to the computer, so that informs your threshold
level because you know what level stuff is going to start hitting your compressor, so you can pre-prepare things like that.
Then it’s just a case of asking: “What do I want to do to it?”. If I want to catch the odd loud hit on the snare, then I’ll only
be tickling it a couple of dBs with a 2:1 or 3:1 compression ratio. In that instance, we’re not trying to change the sound
too much, but just to cover the odd hit that’s sticking out. 
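The interview doesn’t spell out the maths, but the “couple of dBs at 2:1” idea falls straight out of a compressor’s static gain curve. As a rough illustration (the function name and the example threshold are my own, not Dudman’s):

```python
def compressed_level(input_db: float, threshold_db: float, ratio: float) -> float:
    """Static gain curve of a simple downward compressor.

    Levels at or below the threshold pass through unchanged; above it,
    every `ratio` dB of input produces only 1 dB of output over the
    threshold (a hard knee, for simplicity).
    """
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# A snare hit 6 dB over a -10 dB threshold at 2:1 is pulled back only 3 dB:
print(compressed_level(-4.0, -10.0, 2.0))  # -7.0
```

At gentle 2:1 or 3:1 ratios, only the odd hit that pokes over the threshold gets touched, which is exactly the “tickling it a couple of dBs” behaviour described above.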

Once you get to mixing, it’s a whole different creative process. Do you leave it light feeling, or do you really want to get
things sounding tight? That means gating and compression. That’s more a production decision than a part of the
recording.

Studio Two is home to the famous Challen upright piano (right) that was used on many Beatles records

Claire Kannapell asks: Are there any instruments where you prefer to go digital instead of analogue
for scoring? Or do you think that recording a live instrument will always sound better than a virtual
one?

AD: Often, it’s a question of time versus budget. It would be lovely to have the time and money to replace everything on a
fully fledged demo if all the instruments are replaceable.

There are other reasons to keep virtual instruments, though. It might be that you’ve got a piano on your demo. If, for
example, you go into Studio Two at Abbey Road, you will have the option of two grand pianos – a Yamaha and a Steinway
– and the classic Lady Madonna honky-tonk, a Challen upright. The grands are always going to sound like grand pianos,
and the other pianos have really unique sounds. So, if you’ve got a mellow, muted piano sound in your demo and your
whole film score is built around that kind of sound, you might be better off using the sample than trying to re-record.

You’ll sometimes try to replace pianos, and it’s just not the right sound. They’re all so unique and different, there’s not
really much you can do about it, honestly. If you’re looking for a Steinway grand or a big Yamaha grand piano, or
something as specific as a C5 Yamaha sound, then it’s going to be great recording the real thing. But keyboard
instruments are a prime example of times when we do keep the demos. The samples themselves are amazingly good
these days.

Percussion we tend to keep a lot of, too, particularly cymbals. This is because you’ll have composers who have listened
to tracks for months at a time. Then if you get a percussion section to come and record, their cymbals will have different
pitches. And they’ll peak and trough at different times.

Then there are just times when it’s a personal choice. It can be that you actually prefer the sound of the way stuff blends
in the track with the sample. If you get a complete film score, it can take a long time to overdub a full track. If you need to
save time and budget, percussion would be the first thing that you end up using as a sample rather than a real sound.

The exception is timpani. If you want these to blend perfectly with strings recorded in Studio One, for example, it’s much
better to record them in the same acoustic as the orchestra. Even though there are great samples out there, the best way
to get instruments to blend is to record them in the same space – either at different times, or with the rest of the band.

There are occasions when composers or artists might have spent months creating their own samples and unique
sounds with existing instruments. You might go to replace it in the session and what works best in the track is keeping
the original sound that everyone’s been hearing. It’s not just the engineers and composers that have been hearing it. The
director and editor have been hearing it, too. The last thing you want to do is scare people off with stuff that sounds
different to the demos everyone’s been listening to.

Obviously, you want the width and scale and the depth of live musicians. But there are some times when it makes more
sense to just stay as a computerised instrument. But where it’s a more lyrical, musical score, with lead lines on any of the
acoustic instruments – guitar, woodwind, strings, brass – you’d always want to record it live. Sometimes, you’ll go as far
as specifying the exact player you want to play the part if you want a specific sound.

Recording timpani in the same space as the rest of the instrumentation helps things gel during the mix

MT asks: are there any specific virtual instruments that you use?

AD: I don’t do much programming when I’m editing. I usually get handed over stuff and if I do add any sounds to
recordings, it’s often stuff that I’ve recorded myself or had someone record for me.

What I will often do is add a real, low bass-drum boom underneath an existing bass drum in the score. This helps you get
real low end in the percussion.

I tend to layer up home-made samples rather than going to any particular sound library. We move around so many
different studios and so many different computers that to try and carry libraries with you everywhere you went would be
impossible.

Andrew Holdaway asks: how do you get virtual instruments to mesh in a guitar-heavy mix?

AD: If something’s not working, go back to the original sounds rather than trying to fix something that’s inherently not
right. When you’re programming and playing stuff in on the keyboard, you can often get inconsistent velocity levels
coming out, so make sure all of that is under control and nothing is sticking out in that sense. The great thing with
computers is that you have so many options for making great new sounds. Blend sounds together to try and match what
you’ve got from a real recording of an acoustic instrument. Only after that would I start applying a bit of EQ and
compression to try and blend things that way. Then, I’d try reverb. Always try to go back to the original sound if you can.

“There’s one renowned film composer who sets all velocities between five and 15 on piano parts.”

I’ve done that on mixes where you might battle for five or 10 minutes and, if something’s not working straight away, just
contact the person who did the programming – if you can pinpoint what it is you don’t like about it. I’ve mentioned
keyboard programming quite a lot. You might feel like you’re playing it very evenly, but it only takes a small change in
velocity value to trigger a different sample that might bark or be a bit quieter.

There’s one renowned film composer – I was told he sets all his velocities to be between five and 15 on any piano part.
And what you get is this most beautiful sound without having to try. You can constrain that on your MIDI channel as
you’re playing it in and it takes a lot of the post work out of editing MIDI notes. That’s one way to do it – constrain the
velocities, and then your parts should just sit better.
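Most DAWs can do this constraining with a built-in MIDI transform, but the idea is simple enough to sketch. As a minimal illustration (the function name and the rescaling choice are my own assumptions, not the composer’s exact method):

```python
def constrain_velocity(velocity: int, lo: int = 5, hi: int = 15) -> int:
    """Rescale a MIDI note-on velocity (1-127) into a narrow [lo, hi] band.

    Rescaling (rather than hard-clipping) preserves the player's relative
    dynamics while keeping every note at the soft end of the sample library,
    so no stray hit triggers a louder, barking sample layer.
    """
    if not 1 <= velocity <= 127:
        raise ValueError("MIDI note-on velocities run from 1 to 127")
    return lo + round((velocity - 1) * (hi - lo) / 126)

print(constrain_velocity(1))    # 5  (softest possible note)
print(constrain_velocity(127))  # 15 (hardest hit, still gentle)
```

Applied to every note on the piano channel, this gives the evenness described above without editing velocities note by note afterwards.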

Mark Turnham asks: what sounds do you buss or group together before a final mix is made and how do you go about
processing busses?

AD: In terms of recording, most of the mics go direct to Pro Tools. The only time we’d really route stuff is when you have
multiple mics on a string section, for example. Depending on the size of each section, you might end up with between
two and five mics on the violins. Mostly, we work so fast that you don’t have time to change things as you’re recording. I
would typically put lead violinist, lead 2nd violinist, viola and front-desk cello down separately. Then I’ll put together all
the remaining mics of a section. So, you end up with two close violin tracks per violin section, and five more tracks per
recording than you would do normally. But having those separate is really useful, especially in the world of film and
immersive audio, because it gives you the ability to pan them around to pinpoint images.

Depending on the style, we might end up not using the close mics anyway, because we’re capturing the ambience and the
room. So, my mantra is, ‘if you record it, you don’t have to use it. But if you haven’t recorded it, there’s not anything you
can use.’ We tend to record more mics direct to Pro Tools now than we ever used to because we don’t have to worry
about track limits any more.

When I’m mixing, I tend to deliver stems, as well. That’s across all genres. To prepare them, I will have a buss per record
stem. That’s orchestra with strings separate and woodwind separate. Often, we end up recording all these elements
separately anyway. When anything is recorded separately, it’s usually for control, so you want to avoid tying your hands
too much by bussing them down.

Have a burning question about recording? Ask yours before July 11 and have it answered by Abbey Road engineer John
Barrett. 
