Creating an Orchestral Room
for a Virtual Orchestra
Film music, John Williams Style
By Graham Plowman of Cthulhu Mythos Music
Version 2.0 (September 2022)
Overview
The purpose of this guide
The goal of this guide is to provide the building blocks for creating an orchestral setup with virtual
instruments that can produce a realistic room sound. My benchmark for this is the earlier recordings of
John Williams film scores such as Raiders of the Lost Ark, The Empire Strikes Back and many others from the
80s and 90s, including the prequel Star Wars recordings. If the modern Star Wars recordings such as
The Force Awakens, with their cleaner digital sound and less 'room' quality, are more your preference, you can
certainly use this guide to achieve that as well; it's just a matter of adjusting depth and reverb to taste. The
choice of room depth, and many other decisions you'll see, is ultimately going to be up to you.
There’s nothing in this guide that suggests this is the right way and other methods are wrong, it is the
method I’ve had the best results with and is simply a guide to get you off the ground. All methods I use here
can be tailored to your own preferences or dumped entirely (in which case I apologize in advance for
wasting your time!)
What you should already know and have…*
TIP: There is likely going to be a lot of
information you already know. It’s
ultimately up to you if anything here is
new, different or useful to you, and there
is no suggestion that one method or
another is wrong. I base my methods on
results, not on what some may suggest
is ‘supposed’ to be done. Remember, too,
that this is a singular aim, a good room
orchestral setup, not a template that will
create any style of music for any
situation. That isn’t how a template
works.

• A digital audio workstation (DAW), such as Cubase, Ableton Live, Logic, Studio One or Reaper (any
one of these or others will do). I will be using Ableton Live for this guide.
• Familiarity with how to route in your DAW and perform other tasks in it.
• General familiarity with sample libraries and how to use them in Kontakt (and other sample players).
• Orchestral Sample Libraries – an assumption is made that you already have orchestral instruments
available to you.
  • Strings, Brass, Woodwinds and Percussion.
  • I do not make recommendations on which sample libraries you should buy. People's
preferences (and the expense of purchasing these) mean you should think hard about
which ones suit you – a catch-22, because suitability comes from experience with a library. I
do, however, have key expectations of sample libraries that are not met by all. I explain
these later.
  • Good MIDI programming is required from the user with all sample libraries. You can get great
results from most libraries if you put enough time into programming them, choosing the right
articulations, and have a well-trained ear. Some are easier to program than others. I have
heard the best sample libraries sound poor, and the oldest sound good – the skill of the user sets
most of these apart. It's worth noting that, despite this, certain libraries can only achieve so
much, and others offer the possibility of reaching a closer realism benchmark with less effort.
*Although some prior knowledge is preferred, as noted above, this guide will still start with a lot of basics, simply because
they are relevant to how I do things. I go through basic routing, mic use in sample libraries, and placement basics. I
wanted to make sure this guide covers everything needed to end up with a better result, even if starting with a
blank project file.
But I am using my ears?
TIP: A commonly used phrase is 'use your ears'. This is a
frustrating phrase to be told, but also a vital part of understanding
issues. If your ear is untrained, the phrase doesn't help much. You
will need to develop the sense of whether what you are hearing
sounds natural or not.
A shocking revelation, but a well-trained ear is required to know
what sounds good or bad. Using real recordings as references
and critical listening to these recordings is vital. The goal is NOT
to match the sound of original recordings, the goal is to have your
own natural room sound and orchestra. But how can this be
achieved if you cannot recognise this sound naturally?
It’s tricky because there are different needs for different
situations. Remember the focus of this tutorial, a John Williams
orchestral sound. I will attempt to address this by explaining
some basic balance techniques between instruments later.
Several factors come together to contribute to the aim of a realistic virtual
orchestra (these are the ones I consider to be the most important).

Realism Chart (rough weighting): Orchestration 50%, Mixing ability and Reverb 20%, MIDI Programming Skill 20%, Sample Library Quality 10%.

NOTE! Orchestration accounts for half of the overall total here (which you
may or may not agree with).

• The principle here is that good and balanced orchestration already
solves a large portion of both mixing and programming
needs, meaning less reliance on those to fix issues. It also won't
matter how good your mixing and programming is (or even your sample
library) – if you arrange orchestral music poorly, you may not get good
results.

• Skilled MIDI programming also chips away at the need for heavier
mixing. This means utilising everything in your arsenal to
produce a 'performance'. Choosing the right articulations, for example,
and CC automation are vital.
• Good sample library quality (with a host of articulations, mic options
and a good sound out of the box) further reduces the need to heavily
mix and rely on reverb. If you rely on reverb to fix problems, you’re
already in trouble. The dry mix (no reverb tail) must be balanced as
much as is possible. This is difficult when bringing together a series of
libraries from different developers as they can have varying degrees of
wetness (room) baked in. If you add a dry instrument into an orchestra
that is already placed with close, mid, and far mics, the dry instrument
is going to stick out until it is given placement treatment with panning,
convolution reverb to simulate a hall, and perhaps EQ for further
sculpting of the sound. Before you add a reverb tail to anything, it’s
best to have all the instruments balanced and placed, dry or wet. Note
the distinction here that I’d consider the use of convolution reverb to
place an instrument not part of the actual reverb set up for the whole
orchestra. It is for hall placement.
A Typical John Williams Orchestra
This is the typical score order. You can include additional patches as needed. For example, you might have a horn (solo), a 4
horn ensemble, then 6 horns, 12…or 4 horn solos, 3 trumpet solos, those choices are up to you, but they are not necessary.
Woodwinds
Flute 1
Flute 2
Piccolo
Oboe 1
Oboe 2
English Horn (if you have one)
Bb Clarinet 1
Bb Clarinet 2
Bass Clarinet (if you have one)
Bassoon 1
Bassoon 2
Contrabassoon (if you have one)
Hi Woodwind Ensemble
Lo Woodwind Ensemble
Brass
Horn (solo)
Horns 4 ensemble
Trumpet (solo)
Trumpet 3 ensemble
Trombone (solo)
Trombones 3 ensemble
Bass Trombone (solo)
Tuba (solo)
Percussion
Timpani (hits and rolls)
Bass Drum (hits and rolls)
Snare Drum (hits and rolls)
Toms
Suspended Cymbals
Large Cymbal
Med Cymbal
Small Cymbal
Piatti
Tam Tam
Triangle
Mark Tree
Orchestral 'Toys' (Woodblocks, shakers, etc.)
Crotales
Xylophone
Marimba
Glockenspiel
Vibraphone
Celeste
Harp
Piano
Strings
Full String Ensemble (1)
Full String Ensemble (2)
Violins 1
Violins 2
Violas
Cellos
Basses
Add solo and con sord
patches if needed.
TIP: You can add or remove any
instruments you want. It’s your set up
in the end and your choice whether
to use certain instruments over
others. This guide will show the most
typically used concert performance
instruments and is useful for many
orchestral pieces, not just John
Williams music.
Sample Library Basics
Though I won’t recommend any one library over another, I do have my own preferences for
what makes a sample library useful.
MIC options (Close, Mid, Far, etc.) – this helps you get a sound that has a room quality around it
without even touching reverb. The natural ambience around a recorded instrument is very useful
as a starting point to creating space. 'Dry' recorded instruments are very useful for smaller
ensemble and studio-like mixes, for a more intimate sound. Take note of my distinction between
a Close Mic and a Dry recorded instrument (see box out). More on making dry instruments work
in the mix later…
Performance ability: very hard to quantify until you spend time working with a library. Though a
good range of demos can give you a sense of the performance ability of a library, they are often
written in a way to avoid their shortcomings. Still, if you hear performance quality in the demos
then it’s a useful indicator the library can do certain techniques. Things to look for are:
• Good range of articulations (a range of shorts, true legato, longs, marcato, tremolo,
and extended techniques). I list the most useful articulations on the next page.
• Listen for fast repetitions - short repeated notes, particularly in the same pitch. If
demos avoid this, see if you can find out how it sounds through a walkthrough or other
means. Even if a library has many round robins, it does not necessarily mean it won’t
sound odd with fast repeated notes.
• Ability to change or shorten set-length articulations. For example, marcato in brass is
a more aggressive mid-to short note, and if these are a pre-set length, it greatly limits
your ability to use it. If you hold down a marcato and it stops when you lift the key, this
is much more versatile. Pre-baked lengths are tricky to compose around but not
impossible.
TIP: Mic use in a sample library
better represents how an orchestra is
recorded in a real setting. There’s also a
certain real room quality to the
behaviour of instruments in these rooms
that may not be replicated well using a
dry close instrument with convolution
reverb. I don’t consider ‘close mics’ to
be the same as dry recorded
instruments! I avoid using just close
mics. I use all the mics available.
NOTE! Close Mic versus a Dry Instrument.
A dry instrument is recorded in a dry environment. A
close mic’ed instrument is usually recorded in a hall,
with a mic close to it (for adding definition to a
recording). I consider the close mic only sound to be
a thin sound and thus not a good representation of
that instrument’s timbre. With this in mind:
• For a dry sound, use dry instruments recorded in a
dry environment.
• For an orchestral sound, use close mics to add
some definition to an instrument in addition to the
mid and far mics.
This is my list of the most useful articulations and instrument sections – it is not an exhaustive list of all possible articulations
or instruments, and perhaps one I haven't listed might be more useful to you than those here. I've not included Choir.
Woodwinds
Instruments: Flute (solo), Flute 2 or ensemble*, Piccolo (solo), Oboe (solo), Oboe 2 or ensemble, Clarinet (solo), Clarinet 2 or ensemble, Bass Clarinet (solo), Bassoon (solo), Bassoon 2 or ensemble, Contrabassoon (solo)
Articulations: Legato, Sustains, Staccatissimo, Staccato, Sforzando, Portato, Marcato, Trills
* It is useful to have Flute 1 and Flute 2 if possible (and repeat for each woodwind), but either option works.

Brass
Instruments: Horn (solo), Horns 4 ensemble, Trumpet (solo), Trumpet 3 ensemble, Trombone (solo), Trombones 3 ensemble, Bass Trombone (solo), Tuba (solo)
Articulations: Legato, Sustains, Staccatissimo, Staccato, Sforzando, Marcato, Mutes

Percussion*
Instruments: Timpani (hits and rolls), Bass Drum (hits and rolls), Snare Drum (hits and rolls), Toms, Suspended Cymbals, Piatti, Tam Tam, Triangle, Mark Tree, Orchestral 'Toys' (Woodblocks, shakers, etc.), Xylophone, Marimba, Glockenspiel, Vibraphone, Celeste, Harp (including glissandos), Piano
* Most percussion libraries will include this and much more; there's a lot of choice available. These are what I use most.

Strings*
Instruments: Violins 1, Violins 2, Violas, Cellos, Basses
Articulations: Legato, Sustains, Spiccato, Staccatissimo, Staccato, Sforzando, Pizzicato, Harmonics, Tremolo, Trills, Marcato, Con sordino, Fast runs
* I haven't included solo strings, but you can if you wish.
Isn't it better to have 4 solo horns?

[4 Horn patches = 4 Horn Ensemble]

Some people prefer the versatile nature of having 4 solo horns, 3 trumpets, etc. This is
more conducive to natural writing, but the resulting sound can be a little off, because
your 4 separate horn patches may not sound like they are playing back in a room
together. Then again, it may not matter too much, as good writing/orchestration can
overcome this. What happens in a real room environment when 4 horns play together is
not what happens in your DAW with reverb on 4 separate horn tracks. Ensemble
patches exist for the more natural joining of the instruments into a single line – but it's
also fair to say it is far more cost effective to record ensembles than to record many
single instruments multiple times.
You could have 4 solo horns and have a 4 horn ensemble. Having that flexibility is good,
but it’s not required.
If you are limited to ensemble patches, then you can still get great results if you are
careful with the sound balance by using a 4 horn ensemble patch to write a 4-part horn
chord. My main advice here is to ensure the top voice is a little louder than the lower
ones. This is easy to do with velocity for shorts, but for longs you may need to duplicate
the 4 horn patch and MIDI track and use less mod-wheel (or reduce actual volume) of
the lower 3 voices on 1 track, and on the track for the top voice, have it a little louder.
So, two tracks in total of the 4 horn ensemble. One is the single upper voice, and the
other is the 3 lower voices. Careful listening to the balance is key. This theory can be
used on any instruments, trumpets, trombones, etc.
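If it helps to see that 'top voice a little louder' idea written down, here's a tiny sketch in plain Python. It's purely illustrative – there is no DAW or sample-player API here, the note/velocity numbers are hypothetical, and the boost/cut amounts are just a starting point.

```python
# Split a 4-voice chord into a "top voice" track and a "lower voices" track,
# nudging the top voice a little louder, as described above.
# Chords are plain (midi_note, velocity) tuples - nothing here talks to a DAW.

def split_chord(chord, top_boost=1.15, lower_cut=0.85):
    """chord: list of (note, velocity); returns (top_track, lower_track)."""
    ordered = sorted(chord, key=lambda nv: nv[0])            # lowest to highest pitch
    *lower, top = ordered
    top_track = [(top[0], min(127, round(top[1] * top_boost)))]
    lower_track = [(n, max(1, round(v * lower_cut))) for n, v in lower]
    return top_track, lower_track

# Example: a 4-part horn chord written on a single ensemble patch
horn_chord = [(55, 96), (60, 96), (64, 96), (67, 96)]        # G3, C4, E4, G4
top, lower = split_chord(horn_chord)
print("Top voice track:", top)       # G4 a touch louder
print("Lower voices track:", lower)  # G3/C4/E4 a touch quieter
```

In practice you'd simply duplicate the ensemble patch and MIDI track and ride CC1/volume by ear, but the split above is the shape of what you're doing.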
Warning – do not double the same solo instrument patch over itself to make it sound like 2 instruments are playing the same phrase. This causes phase issues, because you are layering the exact same samples directly on top of themselves (an example of something that would not happen in a real recording environment).
Signal Flow
The basics of routing in your DAW to set up
the Orchestral Template.
DAW Basics
Here I will go through the initial setup and routing of tracks. You can skip this part if you have your own setup or preference for how you do this. However, if you are a beginner then this section could help. It's also possible your current setup isn't working well for you; if that is the case, perhaps check some of these notes and see what suits you better.
There are various ways to set up routing and this is my method. I will also explain the other methods that are popular.
In my final template, the Instrument Patches are routed to Audio Tracks from
Kontakt (or any other sample player such as SINE, Spitfire Audio Player, Play
Engine, etc.)
I use Kontakt to hold multiple patches. Each patch goes to a MIDI channel.
Each patch is also Output from within Kontakt to an audio track.
If you wish, you can instead use one instance of Kontakt per instrument and
route that track to an audio output. It doesn’t matter which method you use.
I also use a single MIDI track per instrument to hold all articulations (where
possible). I prefer to use Keyswitching on a single track. For example, Flute 1
MIDI track can play shorts and longs, legato, etc.
If you wish, you can use one MIDI track per articulation. A common
approach is to divide the instrument into LONGS and SHORTS. Route all
LONGS to one set of audio tracks and route all SHORTS to another.
TIP: There are various personal
choices in setting up a template. And
some are choices based on delivery of
audio to others. Also, different DAWs
have different methods for routing. The
principles are the same regardless of
the software used, and so will
translate across different DAWs and
sample players (e.g., Kontakt, SINE,
Spitfire Audio Player, Play Engine, etc.)
Setting Up Instruments – (all articulation patches)

1. To start, I load Flute 1 and Flute 2 into Kontakt.
2. The MIDI channels are set up to receive Flute 1 on MIDI channel #1, and Flute 2 on channel #2.
3. Kontakt is set up so that the Outputs (st.2)* through to (st.16) will output each patch on its own channel, so that my audio tracks can be set up the same way: Flute 1 audio and Flute 2 audio.
4. Steps 1 to 3 are repeated for the remaining woodwind section. If your woodwind section fits into a single Kontakt instance (16 patches), you can continue adding patches, or open a new Kontakt instance per instrument type.

Alternate Method: Some find it much simpler to load only one instrument into Kontakt and create a new
instance of Kontakt for each instrument, then simply route the audio from that track to an
associated audio track. This method is useful if you wish to load up
different articulations (shorts and longs) as different patches, and route
the shorts to one audio track and the longs to another. I just like using
as few instances of a sample player as I can, so I prefer to group them.
The choice is totally up to you!

Example (one Kontakt instance):
Flute 1 (all articulations) – Output: st.2 – MIDI Ch: [A] 1
Flute 2 (all articulations) – Output: st.3 – MIDI Ch: [A] 2

* I find that the st.1 output is used by the actual Kontakt instance, so only
outputs st.2 through to st.16 can be routed. Depending on what DAW you use,
this limitation might not be present.
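If it helps to picture the mapping, here's the same routing written out as plain data. This is only an illustration – the names mirror the Kontakt labels used in this guide, and nothing here actually talks to Kontakt or your DAW.

```python
# A plain-Python picture of the routing described above. "st.2", "[A] 1" and the
# track names are just the labels from this guide; this is a table, not an API.

routing = [
    {"patch": "Flute 1 (all artics)", "midi_ch": 1, "output": "st.2", "audio_track": "Flute 1 audio"},
    {"patch": "Flute 2 (all artics)", "midi_ch": 2, "output": "st.3", "audio_track": "Flute 2 audio"},
    # ...repeat for the rest of the woodwinds, up to 16 patches per Kontakt instance
]

for r in routing:
    print(f'{r["patch"]:25} -> MIDI ch {r["midi_ch"]}, {r["output"]} -> {r["audio_track"]}')
```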
Signal Flow – different types
There are various ways to route the signal to a bus (if delivering multiple stems, you usually need a reverb setup for each bus):
• Delivering all individual tracks to a mixer (whether they want reverb or not should be discussed).
• Delivering stems (combined groups: woods being 1 stem, strings being another stem, etc.) – multiple reverb setup required.
• Delivering stems further divided so that short articulations collect in one bus and long articulations collect in another (also divided by each section: woodwinds, strings, etc.) – multiple reverb setup required.
OR
• The simplest setup, which I use most of the time, is to have a single reverb setup for the orchestra. This is when producing and mixing music for my own productions, where I avoid the need for multiple busses with multiple reverb setups. I generally have my setup ready to be routed to the stems, but if not required I turn them off and disable unneeded processing.

Note! Over the next few pages, I
explain the different signal flow
options. What you choose (or
already do) is entirely up to you.
The results of the room sound will
be exactly the same regardless of
the routing method chosen above.
Simplified Signal Flow - if you don’t need to deliver stems
Most of the time, if producing music for my own productions and finished releases, I can greatly simplify the reverb setup by only
using one signal flow. Audio channels send % of signal to reverb, reverb signal goes to Sub Master. Less taxing on the system, so
if resources for you are tight, this set up can work.
Signal flow (the same for each section – Woodwinds, Brass, Percussion and Strings):
Kontakt (MIDI channels) → Audio channels → Sub Master → Master
A % of each audio channel is also sent to the Reverb Tail Aux, which then feeds the Sub Master.

TIP: 'Sub Master' is used as a
track to put my final plug-ins on.
The reason I have this track is in
case I want a certain track (film
sound effects, dialog) to bypass
this processing. I can send any
track I want to bypass the Sub
Master processing and go direct
to the Master instead.
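For anyone who likes to see the numbers, here's a rough sketch of what a send is doing, in plain numpy with made-up values. It is not how any DAW implements sends internally; the signal and the "reverb return" are crude stand-ins.

```python
import numpy as np

def db_to_gain(db):
    return 10.0 ** (db / 20.0)

# Toy signals only - stand-ins for what the DAW routes, not real audio.
n = 48_000
dry = 0.1 * np.sin(2 * np.pi * 440 * np.arange(n) / 48_000)   # a woodwind channel

send_db = -12.0                              # the "% sent to reverb" expressed as a send level
to_reverb_aux = dry * db_to_gain(send_db)    # the copy feeding the Reverb Tail Aux

wet_return = np.roll(to_reverb_aux, 2400) * 0.8   # crude stand-in for a 100% wet reverb return

sub_master = dry + wet_return                # dry channel and reverb return sum at the Sub Master
peak_dbfs = 20 * np.log10(np.max(np.abs(sub_master)))
print(f"Sub Master peak: {peak_dbfs:.1f} dBFS")
```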
BUS Signal Flow
The process for every single instrument is the same, regardless of sample library, dry or wet. A percentage of each audio signal is
sent to a Reverb Aux channel (one for each bus) (more on this later, so ignore it for now).

Signal flow (repeated for Woodwinds, Brass, Percussion and Strings):
Kontakt (MIDI channels) → Audio channels → section BUS (Woodwinds BUS, Brass BUS, Percussion BUS, Strings BUS)
A % of the audio is sent to a Reverb Tail Aux (one for each BUS), which returns into that BUS. Each BUS then goes to the Sub Master, and the Sub Master to the Master.
How that may look in a DAW
Your DAW may look different, but the idea is the very same. Refer to your DAW's manual if unsure how to do routing in it.
Group your sections together, and colour code them however you like. Naming and grouping tracks is very important
for a faster workflow.

[Screenshot: the grouped MIDI channels (the track with Kontakt on it reads MIDI channel 1 from Kontakt) and the grouped audio channels (reading the st.2 output from Kontakt) send their audio to the grouped 'Stem - Woods' BUS. All woodwind audio is grouped there, including its reverb (a % is sent to the Reverb Tail Aux), then sent to the combined Mix (Sub) Master BUS and on to the Master.]
Further Busses - Dividing Short and Long Articulations
If you wish to have the flexibility of delivering to a TV or film mixer, then dividing short and long articulations is often expected, so that they
can be delivered as separate tracks. You can also keep splitting into more groups, increasing the granularity. Percussion can be drums,
metals, melodic, and so on. This also greatly increases the requirement to have multiple reverb instances: 1 or more for each BUS.
[Diagram: the MIDI and audio channels for each group feed their own BUS – Woodwinds BUS Longs, Woodwinds BUS Shorts, Brass BUS Longs, Brass BUS Shorts, Percussion BUS Hits, Percussion BUS Melodic, Strings BUS Longs and Strings BUS Shorts. Each BUS sends a % of its audio to its own Reverb Tail Aux (one for each BUS), and each BUS then feeds the Sub Master as before.]
TIP: Remember that a % of audio signal will
go to reverb, and then back to the bus for that
audio group. This means that you'll need a
reverb setup for EVERY BUS you have. In this
example it'd be 8 reverb setups, 1 for each
bus.
If this taxes your system, you can work
around it by routing 1 reverb setup to each
bus in turn: route the reverb to a bus, bounce
that stem, re-route the reverb to the next bus,
bounce that stem, and so on. If you're
not bouncing stems, you can route the reverb
to the Sub Master instead.
Microphone Options
Using the available mic options in sample libraries.
Seating Arrangements
On the right is a rough layout guide for orchestra seating.
A sample library is often recorded in seating positions like
these. You will hear this in the mid and far mics mainly, as
those mics represent the orchestra hall placements.

If your chosen library is not recorded in seated positions (or
is a fully dry library), it is likely centered, and as such would
then need to be placed with both panning and reverb. I'll
show you how to do this later.

I don't recommend using only close mics (when multiple
mics are available), and don't try to place the instrument with
just that close mic. You can if you want to, I just don't
recommend it.

Close mics (or spot mics) are for adding definition, and not
intended to be used as the sole instrument sound. They
simply don't represent the instrument at its best (they often
sound thin/harsh). There are some limited exceptions to
this, but if the other mics are available, I use them in addition
to the close mic. If you want a dry intimate sound, you should
ideally be using dry instruments recorded in a small dry
room – which can then be given placement treatment later.

[Seating diagram: 1st violins, 2nd violins, violas, cellos and contrabasses across the front; flutes/piccolo, oboes, clarinets and bassoons in the middle; French horns, trumpets, trombones and tuba behind them; harp, piano and celeste off to one side; timpani and other percussion at the back.]
A basic guide only. There are many variations on this layout, and
it changes from orchestra to orchestra. However, this would be a
'typical' seating arrangement. Note that the percussion would
more likely cover the width of the orchestra across the back.
Mic Options
Mic options in a library vary by maker, as do the names they give
the mics.
• Close (or spot mic) – right next to the instrument. For adding
definition to a recording or bringing out a solo.
• Mid (or Decca tree) – above the conductor's position. For
adding placement and room sound.
• Far (or ambient mic) – at the back of the room. Adds the most
room sound.
• Full Mix – usually this is a combination of the other provided
mics; a default mix created by the developer. It lowers the
RAM footprint because it's the other mics combined – if
using the 3 other mics, that's 3 times the RAM amount.
For this reason, I tend to favour the Full Mix if available.
There may be even more mics available depending on the library:
• Different brands of mics recorded
• Spill mic (captures an instrument through the OTHER mics
for those other instruments)
• Additional close mics.
[Example: a Flute 1 patch in Kontakt (Output: st.2, MIDI Ch: [A] 1), with either its Close/Mid/Far mic faders in use, or the single Full Mix fader.]
Using the Different Mics for Depth
Preference of mic balance is up to you. If you want a close mic to be the highest and only have a hint of mid and far that’s fine. Some people prefer drier mixes.
However, remember that we’re looking to create a room here and that is best done starting with the room present in the samples.
If you don’t like the provided full mix (if available), you can sculpt the sound you want from the available mics. You’re looking to hear a sense of depth, but retain a
clear sound. Try to keep the relative volume levels the same as when the patch was loaded.
Libraries recorded in a similar way: The depth you produce with a single section should be a set benchmark to position the remaining sections around. Start with
strings, the closest section. You want these to sound full, with some sense of the room, but not distant. Then, you can balance your woodwinds to this to ensure
the depth you set with the mic options available has the woodwinds just behind the strings. Brass behind the woodwinds, and finally percussion at the back. With
orchestra sections from a single developer (given they are recorded in the same way or part of a single big orchestra package), this is easy enough to do. Usually
nothing more than setting the same levels across all. Remember: they are recorded in position in most cases. This means the woodwinds are automatically more
distant from the ‘mid’ (Decca tree) than the strings are, and closer than the brass are. So, setting the level the same, could work and should be the starting point.
However, you should still do this by ear – because if from different developers, the room sizes aren’t likely the same – the distances aren’t consistent.
Libraries recorded in different halls or by different developers: If you have 4 different sections by 4 different developers, woodwinds, brass, percussion, strings,
the depth balance (and your mic options) might be all over the place. The best you can do here is strike a depth balance without washing out any one section.
Even if it means with no other plugins used, the woodwinds (for example) are now sitting in front of the strings, this will have to do (for now). We’ll push them back
later with plugins to fix this. It's important to continue to balance the orchestra with no plugins yet. Just watch that you don't end up with a badly defined section sound (too
distant) because you turned off a close mic or used only the far mic to get it behind the strings. That does more harm than good.
[Mic fader diagram: Close, Mid and Far levels shown for the Strings, Woodwinds, Brass and Percussion sections.]
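Purely to picture the numbers: a section's 'mix' is just the mic signals summed at whatever levels you set. A toy numpy sketch follows – the signals and the per-mic levels here are made up and stand in for the library's mic mixer; this is not something you need to do in your DAW.

```python
import numpy as np

def db_to_gain(db):
    return 10.0 ** (db / 20.0)

# Toy stand-ins for the three mic signals of one patch.
n = 48_000
rng = np.random.default_rng(0)
close, mid, far = (0.1 * rng.standard_normal(n) for _ in range(3))

# Hypothetical per-mic levels: keep the relative balance close to the patch default,
# with a touch less close mic and a touch more far signal to push a section back.
mic_levels_db = {"close": -3.0, "mid": 0.0, "far": -2.0}

section_mix = (close * db_to_gain(mic_levels_db["close"])
               + mid * db_to_gain(mic_levels_db["mid"])
               + far * db_to_gain(mic_levels_db["far"]))
print("section mix peak:", round(20 * np.log10(np.max(np.abs(section_mix))), 1), "dBFS")
```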
Orchestral Balance
Adjusting the levels to balance the
orchestra sections.
Bringing Balance to the Orchestra 1
Ok, you’ve loaded all your instrument patches, and routed the tracks. What now?
The plan is to balance these instruments against each other so that they are blending with each other in the same way they would be during a
performance. For example, how a flute playing ff would sound against violins playing ff. The aim is for a balanced playback sound. If you are
using an orchestra from a single source, let's say the BBC Symphony Orchestra from Spitfire, you should already have a balanced orchestra (not
that it is perfect) – but it would be a good start that might need only a few adjustments, perhaps the same adjustment on each section just to bring
the levels down to mixing volume.
If you have a collection of libraries from different developers (like I do) then different balancing levels of the orchestra may be required. This is to
achieve a general balance across the 4 sections, but wouldn’t necessarily negate the need for volume automation during the composing
process. That’s normal - a mixer with a live recording doesn’t just leave the faders idle!
There’s no exact science to this, but let’s look at this in more detail on the next page.
[Bus levels: instrument tracks left at 0db; Strings Bus around -14db, Brass Bus -12db, Percussion Bus -12db, Woodwinds Bus around -20db.]
Bringing Balance to the Orchestra 2
These are my starting points from when I was setting up the template, and they are a basic guide only. There's always variance with what you might be using that could change the results,
but the main goal is having headroom of around 6db on the master bus when a lot of the louder instruments are playing together at max dynamic.

• Percussion: Play a timpani hit at max velocity (or a roll at top dynamic). Try a G2 note (taking C3 as middle C). Note where it hits on the meter of the percussion bus. Get
this to hit at its loudest around -12db by adjusting the percussion bus volume – the percussion bus, not the timpani track. Your instrument tracks should remain at 0db. This
-12db on your percussion bus is your base volume level. You haven't made the timpani quieter (and you must not balance all other percussion against it); what has happened
is that you've made the whole percussion section quieter while maintaining its own balance within itself. The only caveat here is if you are bringing in other percussion
patches from other developers. You may need to check how they sound. But again, this is difficult unless writing a piece (where it can become clear that something is out
of whack volume-wise) – if so, adjust that one track to balance it with the rest. But first, let's just balance the busses to create headroom before getting into the
tracks.

• Brass: Play a chord in the trumpets, say a high G (G4, B4, D5), at max CC1 (or velocity – but I find sustains are much easier to read). On the Brass bus, get this to a max of -12db.

• Strings: Take the cellos and the basses together and play a low sustained E1. Get this volume to around -14db.

• Woodwinds: This is harder to gauge as it is more difficult to get the woodwinds to project. What I do is play a flute and a violins 1 patch together at max dynamic. I want to
only just about hear the flute blending with the strings once you get past G4 and up. It will probably be heard a little more as you go up the register, becoming a little
more obvious that there is a flute playing along. So, this means you're adjusting the volume of the WOODWINDS bus, not the strings, to get this blend. There's no telling what
the woodwind bus will need to be adjusted to here. Maybe it doesn't even need to be adjusted at all. In my setup, with no adjustments, the flute (and other woodwinds) hit
around -20db, so I left it. You can then also play a clarinet along with the viola section, and ideally, they should now blend as well with no adjustments.
If you find the clarinet is too loud, it's time to just adjust the clarinet track down instead (the bus is already set via the flute and strings).
It may not be perfect, but it’s a start. You might even find these figures don’t work for you. If using different patches from different developers, then you may need to repeat
balancing steps adjusting another instrument against the one in the same section. I suggest adjusting the other woodwinds track levels this time (not the bus) to sit well with the
flute and do not change the flute (as you balanced that against the strings already by adjusting the bus). Ignore this step completely if you are using woodwinds from a single
library. This principle also applies to the brass but note that trumpets are going to be weak in their lower register, trombones are going to be very strong in the same low register
the trumpet can play, so these can’t really be judged and balanced together. You must use your ears.
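As a sanity check on those figures, here's a quick bit of arithmetic in plain Python. The bus levels are the hypothetical starting points above, and the two summing cases are just to show why the ~6db headroom target is realistic.

```python
import numpy as np

def db_to_gain(db):
    return 10.0 ** (db / 20.0)

def gain_to_db(g):
    return 20.0 * np.log10(g)

# Hypothetical bus peak levels from the starting points above (dBFS).
bus_peaks_db = {"strings": -14.0, "brass": -12.0, "woodwinds": -20.0, "percussion": -12.0}
gains = [db_to_gain(db) for db in bus_peaks_db.values()]

# Unrealistic worst case: every section peaks at exactly the same instant.
worst_case = gain_to_db(sum(gains))                           # about -1.9 dBFS

# More realistic: uncorrelated peaks sum closer to the square root of summed powers.
typical_case = gain_to_db(sum(g * g for g in gains) ** 0.5)   # about -7.5 dBFS

print(f"worst case: {worst_case:.1f} dBFS, typical: {typical_case:.1f} dBFS")
# The 'typical' figure lands inside the ~6db-of-headroom target mentioned above.
```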
Bringing Balance to the Orchestra 3
Taking C3 as middle C, this is my rough guide to blending different instruments. There's no exact science, and differences between libraries and their own dynamic ranges make
this more difficult to judge in people's personal setups. Some of these balances can be adjusted in mixing, but I try to have them close to this while writing. The needs of the
music I'm writing will dictate whether I simply disregard a suggestion here due to what I'm looking to achieve in a phrase.
Flute 1 with Violins 1
Balance dynamic (with legato*): At the highest dynamic, have the flute blending under the violins.
Register: From C3 to G4 I do not expect to hear the flute – you should start to hear the flute a little more as you rise through the scale approaching G4 and beyond.
Loudest instrument: Violins 1.

Clarinet 1 with Violas
Balance dynamic (with legato*): At the highest dynamic, have the clarinet blending under the violas.
Register: Around middle C I do not expect to hear the clarinet clearly, but you should know it's there underneath.
Loudest instrument: Violas.

French Horns with Cellos
Balance dynamic (with legato*): At mid dynamic they should blend together – it's a great sound. At higher dynamics, expect the horns to start taking over the sound more.
Register: Around C2 at mid dynamic, I want to hear the cellos a little more, with the horns adding a blending tone. As you rise in dynamic and the horns become more powerful, they should be easier to hear – maybe even becoming dominant.
Loudest instrument: Equal at mid dynamic; horns at high dynamic.

Trumpets with Violins
Balance dynamic (with legato*): Highest dynamic.
Register: Low trumpets (below and around C3) are not very powerful, and violins should lead here. As the register increases the trumpets become more powerful and should become more obvious (but still blending) as you start hitting E4 and up.
Loudest instrument: Violins in the low register; in the high register both blend, and the trumpet is clearer.

Tuba with Basses
Balance dynamic (with legato*): Highest dynamic.
Register: A solo tuba against the basses section – it doesn't seem fair. The tuba should be clear enough through all the dynamics and registers. Not dominating, but clear that there's a tuba playing as well. Overall, the tuba is one instrument I'm always adjusting during a composition.
Loudest instrument: No clear winner here, but both should be obvious enough that they are playing.

Timpani with Cellos and Basses
Balance dynamic (with legato*): Use HITS/SHORTS for this test. All dynamics.
Register: Timpani and the strings here should be clearly matching each other at low dynamics, but as the dynamics rise, I expect the timpani to start dominating. You will often see timpani scored at forte when the strings are at double forte.
Loudest instrument: Blending in low and mid dynamics; the timpani takes over at higher dynamics.
You get the idea here, and the balancing should ideally be done on the busses, because the sections should be balanced within themselves – but if you find there's a disparity
between some instruments, then you can use the track fader instead. Again, if you want to swap out your trumpets for another library, you need to balance those new trumpets
by adjusting their levels against the old trumpets; don't reach for the bus again.
*Legato - In my experience, many instrument patches in legato mode tend to represent a mid to high level dynamic rather than full raw power. This is ideal for gauging the
balances above because in most cases, legato playing will usually be a blend, instead of one instrument completely dominating another. Shorts and other articulations aren’t
really suited to this, because a short high trumpet has that immediate loud attack that would be heard over just about anything else.
Some Basic Instrument
Characteristics
General guidelines and basic roles for the instruments.
Woodwinds – Basic Instrument Roles
Starting with the Woodwinds, these are some generalizations on the use of those instruments in the orchestra. Mostly related to common use from John Williams (as a starting
point). There are so many more uses - this is just some basic beginner use from an orchestration point of view.
Flute – Role: Scale runs (often as colour, over sustains, gaps in the melody and as transitions); solo melodies; adding colour with high piercing flurries. Common doubling or unisons: violins, piccolo and other woodwinds, piano (runs).
Piccolo – Role: Scale runs (often as colour, over sustains, gaps in the melody and as transitions); adding colour with high piercing flurries. Common doubling or unisons: violins and flutes, piano (runs).
Oboe – Role: Solo melody; scale runs in the middle register and rhythmic support. Common doubling or unisons: trumpets, other woodwinds, strings.
English Horn – Role: Solo melody; scale runs in the middle register and rhythmic support. Common doubling or unisons: horns, other woodwinds, strings.
Clarinet – Role: Solo melody; scale runs; harmonic support (often two clarinets used to form a triad or other harmonic interval); rhythmic support. Common doubling or unisons: horns, violas, other woodwinds.
Bass Clarinet – Role: Bass support, rhythmic support. Common doubling or unisons: trombones, basses.
Bassoon – Role: Solo melody; bass rhythmic support. Common doubling or unisons: cellos, basses, trombones.
Contrabassoon – Role: Bass rhythmic support. Common doubling or unisons: bassoons, cellos, basses, tuba, trombones.
Doubling – same notes but supported at a different pitch. Usually, an octave above or below. If you double the flutes with the violins, you are deciding that one of them
will be an octave above the other (or two).
Unison – same notes, and same pitch. This represents a single idea or phrase, across multiple instruments. If you have the flutes in unison with the violins, you are
deciding that they are playing the same pitch.
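In MIDI terms the distinction is just note numbers. A tiny illustrative sketch (plain numbers only, no sequencer API implied):

```python
# Unison keeps the pitch; doubling shifts the same line by one or more octaves.
OCTAVE = 12  # semitones

def unison(notes):
    return list(notes)                                  # same notes, same pitch

def double(notes, octaves=1):
    return [n + OCTAVE * octaves for n in notes]        # same line, shifted up (or down)

violin_line = [67, 69, 71, 72]                          # G4 A4 B4 C5
print("Flutes in unison with the violins:", unison(violin_line))
print("Flutes doubling an octave above:  ", double(violin_line))
print("Doubling an octave below:          ", double(violin_line, -1))
```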
Brass - Basic Instrument Roles
Some general characteristic use of the brass section.
French Horns – Role: Melody, solo and in unison; rhythmic support (often in chords); sustained harmony support in chords (low dynamics for soft passages). Common doubling or unisons: cellos in melody (great sound), trumpets (low melody), clarinets.
Trumpets – Role: Melody, solo and in unison; rhythmic support; rhythmic motifs and support during melody breaks and transitions. Common doubling or unisons: violins, horns, trombones (low melody).
Trombones – Role: Rhythmic support; rhythmic motifs and support during melody breaks and transitions; sustained builds for tension; harmonic support in chords (low dynamics for soft passages). Common doubling or unisons: cellos, basses, trumpets (low melody), bassoons, contrabassoon, and horns.
Bass Trombone – Role: Useful low support for the tuba and trombones; can often play the role of trombone number 3, as the lower voice. Common doubling or unisons: trombones (as part of the section), tuba, basses, cellos, bassoons.
Tuba – Role: Bass support with sustains, and rhythmic support. Often, to punctuate an idea, the tuba rests for a few moments before joining the brass section to thicken the bass. Common doubling or unisons: trombones, basses, cellos, contrabassoon.
Percussion - Basic Instrument Roles
Some general characteristic use of the percussion section.
Timpani – Role: Bass accents, rolls for transitions, dramatic builds, orchestral pulse (a repeated note to drive a beat); solo flourishes. Common doubling or unisons: low instrument accents, basses and tuba, etc.
Bass Drum – Role: Bass accents, rolls for transitions, dramatic builds. Very impactful when used to accent the 2nd beat (not the first). Common doubling or unisons: low instrument accents, basses and tuba, etc.
Snare Drum – Role: Traditionally military and march; can also provide drive and colour. Various snare sizes can add different colours. Common doubling or unisons: a snare and bass drum hit is a good combination.
Concert Toms – Role: The varying pitches create interesting fill moments. Very attack heavy and less resonant.
Suspended Cymbals – Role: Colour, texture, suspended rolls for build-ups and transitions. Common doubling or unisons: often used in combination with a timpani roll.
Piatti – Role: Crash cymbals for accents, colour and rhythmical support. Common doubling or unisons: sometimes used in combination with a bass drum.
Tam Tam – Role: Rolls for transitions, dramatic builds. Big booming sound.
Triangle – Role: The triangle's distinctive ring can add top-end accents and some sparkle, especially if used as a tremolo (such as the opening of Star Wars).
Mark Tree – Role: The magical sparkle you hear in many fantasy and mystery orchestral pieces – mostly for adding colour, and can be used for transitions.
Toys – Role: Woodblocks, shakers, vibraslap, whip, and more. Lots of percussion colour options. Best used sparingly.
Harp, Keys and Mallets
Some general characteristic use of the harp, piano and other keys, and the mallets.
Xylophone – Role: Often used to add a percussive element and colour to high accents in the woodwinds. Common doubling or unisons: often with high woodwinds.
Marimba – Role: Can add a slightly exotic flavour and melody to a piece.
Glockenspiel – Role: High ringing attack can help bring out a melody even with the whole orchestra playing. Shorter passing notes might be ignored, and the harmonic main notes of a melody doubled. Common doubling or unisons: commonly with brass; often doubles the main notes of a melody (not every note of a phrase, just the most important ones).
Vibraphone – Role: Often used in quieter moments, to add mystery (commonly as a two-note harmony). With the motor on, adds a slight sustained movement to a chord. Common doubling or unisons: common with harp, piano or celeste.
Celeste – Role: Colour, texture, and adding mystery or sparkle (in the high end). Famously used for the Harry Potter main theme. Common doubling or unisons: with harp or piano.
Harp – Role: Chords, and arpeggiated chords for harmonic support; glissandos for transitions and key changes; sometimes a main melody. Common doubling or unisons: commonly with the celeste and the piano.
Piano – Role: Bass support (low notes add a lot of percussive weight), melody, harmony support (chords or arpeggios), runs (often doubling the woodwinds for this), percussive accents. Common doubling or unisons: often with the harp, and woodwinds (runs) or percussive accents.
Strings
Strings are the backbone of the orchestra and fulfil many roles. Every section can be used for harmony, melody, rhythmic drive, and so on. The best use of the string section is to
utilize their registers in your orchestration and avoid simply copying one part into the next and avoid doing constant tight triads (playing it like a piano).
Violins 1 – Around 16 players. Role: harmonic and rhythmic support; melody, or unison with both the violas and violins 2. Common doubling or unisons: violins 2, violas (often unison in melody), flutes.
Violins 2 – Around 14 players. Role: harmonic and rhythmic support; melody in doubling or unison with both the violas and violins 1. Common doubling or unisons: violins 1, violas, flutes.
Violas – Around 12 players. Role: great for rhythmic support and drive, filling middle harmony, and supporting big melodies in unison with the violins. Common doubling or unisons: violins 1 and 2, cellos, clarinets.
Cellos – Around 10 to 12 players. Role: great for melody with the horns around middle C; big arpeggios for movement in sweeping melodic passages; also great for supporting big melodies. Common doubling or unisons: violas, basses, horns, bassoons.
Basses – Around 8 players. Sounds an octave below the cellos. Role: aside from standard bass support, great for pedal tones, and pizzicato notes for light rhythmic accents and movement. Common doubling or unisons: bassoons, contrabassoon, tuba.
Placement for the Orchestra
Sections
How to achieve depth and placement
Depth and Orchestra Section Placement
Let’s say you’ve got your main orchestral sections set up, mic placement chosen (if available) and the volume levels reasonably
balanced. We’ll now need to tackle a couple of problems that could be present:

One or more orchestral sections is too close (note, if a section is too far at this stage, then there’s something wrong in the earlier
setup, using too much far mic, or volume is too low – that needs fixing in the earlier stages, not here).

If you are using a dry recorded solo instrument it’s going to sound out of place with an orchestra, particularly if using
close/mid/far mics for your sections. We’ll adjust this instrument placement after the rest of the orchestra is placed. I recommend
you place entire sections before you start placing solo performance instruments.
Let’s start with getting sections to sound placed and help push back those that are either too present and up front - or are fully dry
recorded sections/instruments and so must be placed.
[Depth diagram, front to back: Strings, Woodwinds, Brass, Percussion.]
Help me, my Woodwinds are invading my string section…
Let’s explore this topic with an example that can be
applied to any of the sections or individual
instruments if needed.
After using your mic placement, let’s say you have a
woodwind library that sounds too close, even with
the mic options it has. For the moment, I’m not
counting this as being a fully dry set of woodwinds
yet – I’ll explain what I do for that situation after
this.
Here, I don’t want my woodwinds to sound washed
out, but also, I would prefer if they sounded a little
further back where they would usually be, middle
and centre. It’s likely if I ended up with this
situation, the woodwinds can’t even sound washed
out. They might have been recorded close and
present (even if in a hall).
I always recommend trying to judge the placement
from the strings point of view. However close you
have the strings sounding, you want the woods and
the other sections to be a little further away.
For this – it’s time to talk plug-ins…
[Seating diagram as before, except the flutes/piccolo, clarinets, oboes and bassoons are now drawn sitting in amongst the violins, violas and cellos.]
The woodwinds have infiltrated my string section, and if I know
my music theory, they’ll eat the string players unless we do
something about it.
Fixing the Woodwind Section Placement (1)
It was possible until now to avoid being specific about using certain plugins. In fact, that’s still possible as the principle of how I use
the following plugin is not much different to other plugins that are designed to help add depth in a 3D space. I just find this one works
well with no fuss. Panagement 2!
[Signal flow: Woodwinds MIDI channels → Audio channels → Woodwinds BUS (Panagement 2 sits here) → Sub Master → Master]
I place this on the Woodwinds BUS, so that all of the woodwinds are
going to be positioned together as a section. This is based on my using
a consistent library of woodwinds – meaning - I don’t have a rogue
flute for example from another source that doesn’t match the tone and
levels of the rest of the woodwinds. If I did, I might need to consider
treating that flute on its audio channel before it hits the BUS.
Anyway, back to the task at hand. On the left is Panagement with its
default settings, before I've touched anything. I am going to need to do a
number of tweaks here to get the woodwinds 'seated' correctly.
Let's try that now…
Fixing the Woodwind Section Placement (2)

• The woodwind section sits a little better in the centre when the stereo width has been narrowed – just a little.
• Very important – turn OFF the built-in reverb here; I want to use my own reverb later.
• The positioning circle is the main placement setting. By dragging it around, you can hear the effect of the positioning in a 3D space (hold down a woodwind note while moving it around and you'll see). I'm trying to place this icon in such a way that I hear the woodwind section sitting just behind the strings. Some trial and error is involved.
• To compensate for the woodwinds 'fading' as they are pushed back, you may need to adjust the volume output. Here, for example, I've pushed this up to compensate. I don't want to lose my volume balance; I want them to sound seated further back, but still balanced with the other sections.
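Panagement is a plug-in, so you never see the maths – but the general idea of 'pushing a section back' can be pictured as a touch less level, a narrower stereo image and slightly duller highs. A very rough numpy sketch of that idea follows; this is absolutely not Panagement's actual algorithm, and every number here is a made-up assumption.

```python
import numpy as np

def db_to_gain(db):
    return 10.0 ** (db / 20.0)

def push_back(stereo, distance_db=-4.0, width=0.6, hf_damping=0.25):
    """Crude 'seat the section further back' sketch (not Panagement's algorithm).
    stereo: (n, 2) array. width: 1.0 = unchanged, 0.0 = mono.
    hf_damping: 0..1 amount of one-pole smoothing (more = duller = further away)."""
    mid = stereo.mean(axis=1)
    side = (stereo[:, 0] - stereo[:, 1]) / 2.0
    narrowed = np.stack([mid + side * width, mid - side * width], axis=1)

    # simple one-pole low-pass to dull the highs a little
    out = np.empty_like(narrowed)
    out[0] = narrowed[0]
    for i in range(1, len(narrowed)):
        out[i] = out[i - 1] * hf_damping + narrowed[i] * (1.0 - hf_damping)

    return out * db_to_gain(distance_db)

woodwinds = np.random.default_rng(1).standard_normal((48_000, 2)) * 0.1  # stand-in audio
seated_back = push_back(woodwinds)
# You'd then bring the level back up a touch, as described above, so the section
# still balances against the strings.
```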
What if I’m Using a Completely Dry Instrument?
Ok, our string section is now saved from cannibalistic woodwind players, but what if we have that solo instrument we want for a violin
concerto? The completely bone-dry one that doesn’t belong?
It’s very common to have a solo violin recorded as a dry instrument. The principle behind adding a completely dry instrument into what is
hopefully now a well balanced and positioned orchestra is basically the same regardless of the instrument.
To get a dry solo violin to sit well with the orchestra can take a number of treatment steps specifically just for that instrument.
This means we’ll be adding plugins directly on the audio channel for that instrument.
[Signal flow: SOLO Violin MIDI channel → Audio channel → EQ → Convolution Reverb → Panagement 2 → Strings BUS]
The main must have plugins here would be EQ and the Convolution Reverb. I’ve added Panagement 2 at times as well, because it’s very
useful to add a final touch of placement to an instrument.
Using the Plugins to Place a Dry Instrument (1)
EQ
The need to EQ is dependent on the instrument. Some principles
can apply to all though, and one of those is rolling off the higher
frequencies to simulate some distance. You may not need to do
this. Use those ears!
Another slight issue I had with the violin was the sound was a little
harsh (sometimes a symptom of a close dry recording). To better
make it sit in the hall, I EQ’ed out some of the harsher frequencies.
I apply the EQ first because if I want to EQ out some harsh tones,
then I want that to be taken out before it goes into the convolution
reverb. Otherwise, I would then have to deal with the reverb's
treatment of those frequencies. Opinions on this may differ.
Convolution Reverb
Next is a convolution reverb. This is to simulate the room and give a sense that the instrument has ‘mic’ positions
added to it. I leave the early reflections AND the tail on. I found turning the tail off creates some sort of ‘disconnect’
with the room. Your experience might be different.
Remember that this is placed directly on the dry instrument, so it’s likely to sound very washed out and ‘dream-like’.
You must lower the MIX level of the reverb, so that it is not 100% wet. I found only 25% wet was good enough to get a
dry violin to sound good with the orchestra. The % used is going to be dependent on what it takes to get that
instrument sitting well with your other sections. Either, you are trying to get a dry instrument to match a section (that
rogue flute I mentioned), or, you are trying to add a solo performance instrument (usually playing at the front of the
orchestra) but still sound like it’s in the hall with them.
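If you want to picture what the convolution reverb and that roughly 25% wet mix are doing, here's an illustrative numpy/scipy sketch. The impulse response here is a synthetic stand-in and the signal is noise; in practice both come from your violin library and your reverb plug-in, not from code.

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative only: a dry solo "violin" convolved with a hall impulse response (IR),
# then mixed back at roughly 25% wet, as described above.
sr = 48_000
rng = np.random.default_rng(2)
dry_violin = 0.1 * rng.standard_normal(sr * 2)                    # 2 s stand-in signal
ir = rng.standard_normal(sr) * np.exp(-np.linspace(0, 6, sr))     # ~1 s synthetic hall IR
ir /= np.sum(ir ** 2) ** 0.5                                      # rough energy normalisation

wet = fftconvolve(dry_violin, ir)[: len(dry_violin)]              # fully wet version

mix = 0.25                                                        # ~25% wet was enough here
placed_violin = (1.0 - mix) * dry_violin + mix * wet
# placed_violin would then carry on to Panagement 2 and the Strings BUS, as in the chain above.
```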
Using the Plugins to Place a Dry Instrument (2)
Panagement 2
Finally, and only because I felt it was useful, I
added Panagement 2.
I again turned off the built-in reverb and reduced
the stereo width by a good bit (it’s a single
instrument so I want it to be narrow).
I played around with the positioning grid until I
heard a sound and placement I liked – in this case
it was slightly to the left and forward.
Of course, none of this matters without testing
how it sits with the orchestra, so when writing a
piece, it’s always possible you might need to
revisit and tweak something in this chain of
EQ/Convolution Reverb/Panagement to get the
result sounding like it’s naturally in the same
‘space’ as the orchestra.
Unfortunately, it’s not always a case of ‘set and
don’t touch again’.
Final Reverb Settings
The final glue…
Reverb Tail (1)
After successful placement of the orchestra, it’s possible to just stop there, and some people do – letting the natural mic placements
and ‘room’ sound in the samples be the reverb. Even live musician recordings get digital reverb added, however, and there’s something
very off-putting to me to hear a sampled instrument just…end. It doesn’t even seem to sound like a real instrument note finishing
(perhaps a result of how release samples are programmed – the room sound is in the samples, but is the natural room tail sometimes
cut off in the processing of the note? I’m not sure.)
To mask this issue that I hear, I put a reverb tail on the end, and it can be as short or as long as you personally like. Once it masks that
odd dead-air cut-off at the end of the samples, it's good enough for me.

• I'm demonstrating this using the simplified audio routing method (described earlier).
• The audio signal goes direct to the sub master channel.
• A percentage of this audio signal goes to the Reverb Tail (first passing through an EQ, then into the reverb), then on to the sub master.

Which brings me to the very important point about EQ'ing the reverb tail signal… let's look at this more closely.
[Signal flow: Woodwinds MIDI channel → Audio channel → Sub Master → Master, with a % of the audio sent through an EQ into the Reverb Tail Aux and then on to the Sub Master.]
Reverb Tail (2)
[Signal flow: Audio channel → EQ → Reverb Tail Aux]

The EQ here takes the percentage of the signal I send and shapes it
before it hits the reverb. I'm using a rough version of what is known
as the 'Abbey Road Curve'. It cleans up the reverb signal quite a lot
by removing the low end (rolling off from around 300hz and down – you
can tweak this) and the high end (rolling off around the 6khz region).

The reverb tail itself is entirely up to you – adjust to taste. It must be 100% wet.
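As a rough code picture of that send EQ: band-limit the signal feeding the reverb. The corner frequencies below are the ones mentioned above and are only a starting point; the slopes your EQ plug-in uses will differ, so treat this scipy sketch as an illustration, not a recipe.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 48_000

def abbey_road_style_send_eq(signal, low_cut_hz=300.0, high_cut_hz=6_000.0, sr=SR):
    """Band-limit the reverb send: roll off below ~300 Hz and above ~6 kHz.
    Gentle 2nd-order slopes here; the exact corners and slopes are to taste."""
    high_pass = butter(2, low_cut_hz, btype="highpass", fs=sr, output="sos")
    low_pass = butter(2, high_cut_hz, btype="lowpass", fs=sr, output="sos")
    return sosfilt(low_pass, sosfilt(high_pass, signal))

# Example: the portion of a channel being sent to the Reverb Tail Aux (stand-in noise)
send = 0.1 * np.random.default_rng(3).standard_normal(SR)
shaped_send = abbey_road_style_send_eq(send)   # this is what feeds the 100% wet reverb
```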
Reverb Tail (3)
To summarize how I use reverb:

• For the reverb tail I use an algorithmic reverb. I find this is best for a bright, clear tail.
• Convolution reverb is used only to place dry instruments into the hall so they sound part of the orchestra.

I want to avoid that odd 'sucking' note drop-off effect, so the amount of signal I send from an instrument into this reverb tail is designed
to eliminate that. This means the amount sent will likely differ per section (or instrument). But note that this doesn't necessarily
mean that just because a trumpet is meant to be further away than a violin it should get more reverb tail. It doesn't – you've
already distanced or spaced the orchestra. Adding tail is a global reverb 'icing on the cake' – don't apply the same distance rules here.
Do what sounds right, not what might seem correct on paper.

Also note that this is the ONLY reverb tail I have for the orchestra (ignoring the convolution reverb that might have been used to place
a dry instrument – remember, I consider that placement, NOT specifically reverb in the sense of a tail).

There are no multiple reverbs here for close, mid and far tails – just this one single reverb. I have found using multiple tails for
distance rather pointless. It doesn't do anything at all that I can personally hear, so I would suggest not bothering – it's another
one of those 'seems good on paper' ideas that brings nothing worthwhile to the table. I've tried this, and after a time I realized I was just
creating additional resource use by running multiple reverb tails with different settings for close, mid and far; when I removed them
it made no difference (I could even argue it cleared things up and made placing the instruments much easier).
The Final Mix and Master
Let’s say you’ve finished composing your masterpiece! You have a good balanced orchestral sound, mixed with automation where
needed and you want to produce the final master yourself.
Most people recommend that, if you're 100% happy with the MIDI side of things, you bounce your audio stems out and make the final mixing
decisions there.
I'm not going into this area as it'd be a document twice the size of this one already, and you know what? There are people far, far better
than I at explaining how to go about creating that superb mix, and even the dark art of mastering.
I will however just drop some closing notes on what I do:
My combined submaster track is used to place my final plugins on. These consist of (in order):

• A simple gain plug-in that I can use to boost (or reduce) the overall volume of the piece. I usually ride this while listening back
through the piece to find the peaks and dips I want the piece to have.

• EQ – corrective EQ that might be adjusted based on the needs of the track. Too bass heavy, not enough top end? It depends on the
track and is adjusted as needed.

• Tape saturation – there's something about tape saturation that adds a small layer of realism over a whole orchestral mix for me. I
notice it when I turn it off; there's a slight dulling of the sound that I don't want. Be careful that you don't overdo it. As with many
plugins that change the colour or tone, you stop 'hearing' the effect after a while of mixing, so you must not respond to this by
adjusting it again. What you heard and liked when you first turned it on and tweaked it is what you wanted – now leave it alone.

• Finally – a limiter. A simple limiter to stop any rogue peaks clipping. (A rough sketch of these last two stages follows below.)
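For the curious, those last two stages boil down to something like this. It's a crude stand-in only – real tape saturation and limiter plug-ins are far more sophisticated – but it shows the broad shape of what they do to the signal.

```python
import numpy as np

def soft_saturate(x, drive=1.5):
    """Gentle tanh-style saturation: adds subtle harmonics; keep the drive low."""
    return np.tanh(drive * x) / np.tanh(drive)

def hard_limit(x, ceiling_db=-1.0):
    """Brick-wall clamp at the ceiling to stop rogue peaks clipping."""
    ceiling = 10.0 ** (ceiling_db / 20.0)
    return np.clip(x, -ceiling, ceiling)

sub_master_mix = 0.3 * np.random.default_rng(4).standard_normal(48_000)  # stand-in mix
final = hard_limit(soft_saturate(sub_master_mix))
print("final peak (dBFS):", round(float(20 * np.log10(np.max(np.abs(final)))), 1))
```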
And that, as they say, is that. I hope you found some useful information in here (even if you disagree with some of it (or all) – that’s ok).
It’s how I’ve done and enjoyed doing things for some time now. A simple workflow (all things considered).
Some Parting Words
I hope there was something useful in this guide for your orchestral needs. There are a few ways to
approach things, and opinions vary on the usefulness of those approaches. This is just my method,
honed after years of trial and error.
Whatever approach you use, I just want to say, happy composing!
Graham Plowman from the Virtual Orchestration Facebook Group