Technology Institute for Music Educators
TI:ME Course 2a
Advanced Sequencing, Second Edition
Syllabus, Workbook and Appendices
Revised and Written By
Steve Cunningham and Rick Schmunk
Edited by Scott Lipscomb
Original Edition by
Don Muro and Bill Purse
Technology Institute for Music Educators
TI:ME Course 2a, Advanced Sequencing
Introduction
Topic 1: Review of Basic MIDI Concepts
Topic 1a: DAW Basics Review
Topic 1b: Recording MIDI
Topic 2: MIDI Editing
Topic 3: Creating Drum Set parts and Working With MIDI Regions
Topic 4a: Continuous Controllers
Topic 4b: ReWire and Client Applications
Topic 5: Pattern-Based Drum Programming and Multi-Output Virtual Instruments
Topic 6: Using Virtual Samplers
Topic 7: Subtractive Synthesis
Topic 8: Volume Editing and Musical Sequencing
Topic 9: Audio Time Compression and Expansion
Topic 10: Mixing Fundamentals
Topic 11: Bounce to Disk
Topic 12: Composing to Picture
Topic 13: Non-Linear MIDI Sequencing
Topic 14: MIDI Sequencing in the Curriculum
Topic 15: Using Sequencers for Student Musical Composition and Performance
Topic 16: Evaluating Sequencing Software
Topic 17: Final Projects
Appendix A: The General MIDI specification (GM)
Appendix B: MIDI Controller Numbers
Appendix C: Historical Developments in Music Sequencing
Appendix D: Basic MIDI Concepts
Appendix E: Rewire and Client Applications
Appendix F: Multi-Output Virtual Instruments
Appendix G: Subtractive Synthesis Basics
Appendix H: Musical Sequencing
Appendix I: Audio Time Compression and Expansion
Appendix J: Mixing and Signal Processing Fundamentals
Appendix K: Bounce to disk
Appendix M: Computer DAW and MIDI Sequencing Software
Appendix N: Lesson Plan Guide
Appendix O: TI:ME 2A Advanced Sequencing Project Journal Guide
Appendix Q: Bibliography for Further Study
Appendix R: Sequencing, Computer and Music Technology Terminology
Introduction
Objective:
The objective of Advanced Sequencing is to impart practical skills and knowledge to in-service teachers (ISTs) to allow them to integrate MIDI and digital audio sequencing
effectively into teaching and learning. The course covers in-depth skills in sequencing, and
ISTs will learn MIDI theory along with best practices and skills pertaining to sequence
recording, editing, and mixing. ISTs will leave with the necessary knowledge to make
critical judgments about the appropriateness of selected music software and hardware for
particular educational applications and various computer platforms. The format of the
course alternates presentations with class activities, many of which serve to assess student
progress. Satisfactory participation in class activities and successful completion of multiple
final projects are required for certification. The prerequisites for this course are basic
computer skills (using a computer keyboard and using a mouse for pointing, clicking,
and dragging) and some prior experience with any sequencing software. TI:ME
Course 1A Electronic Instruments, MIDI Sequencing and Notation is strongly recommended
as preparation for this course.
Additional Information:
Advanced Sequencing is offered as either a two-credit or three-credit graduate course
(the three-credit version includes optional topics). The instructor of the course must be
approved by TI:ME and be an expert in teaching music sequencing to in-service music
teachers. It is strongly
recommended that the class size be limited to allow for one IST per computer and no more
than 16 ISTs per instructor. Additional ISTs may be accommodated if computer
workstations and assistants are available. Each IST will need approximately 20 hours
working alone on a workstation in order to complete class activities and final projects.
Hardware Requirements:
A computer lab or classroom consisting of multimedia computers connected to MIDI
keyboard controllers is the standard recommended configuration. The teacher's station
should be connected to a projection device, so that the instructor's screen can be viewed
by the entire class, and to a sound system. All workstations must have DAW applications capable
of integrating MIDI, digital audio and video. When possible, a sound system connecting all
MIDI devices in the room should be used to allow for class activities.
Software Requirements:
Sequencing software designed for professional musicians and educators should be used.
The course exercises are provided in both Logic Studio and Pro Tools versions. In addition,
some exercises use Propellerhead’s Reason to demonstrate how to set up and use client
audio applications. If an alternative DAW sequencing application is substituted, it must be
capable of recording, playing and editing digital audio; integrating video; and audio time
compression and expansion (elastic audio, audio warping, flex-time, etc.). Demonstration
versions of low-end software packages designed for young students or amateurs should be
used only in presentations to demonstrate software choices that are available.
Please Note: The activities in this syllabus often specify the use of specific scores and
sequences that are provided in the course materials. Instructors may substitute other
similar pieces for these activities.
Required:
· Instructor-specified sequencing text.
· Technology Strategies for Music Education by Thomas Rudolph, Floyd Richmond, David
Mash, and David Williams; published by Technology Institute for Music Education,
http://www.ti-me.org.
Prerequisites:
TI:ME Level One Certification (a minimum of two completed courses in music
technology) or comparable experience using music sequencing software with a Macintosh
or Windows PC. TI:ME Course 1A (Electronic Instruments, MIDI Sequencing and Notation)
is strongly recommended. TI:ME course 2A is designed for the experienced computer user.
The outline is designed as a 25-hour unit on advanced sequencing within a two-credit
graduate workshop. Optional items can be added if the course is offered for three
graduate credits.
Introduction:
The primary objective of TI:ME 2A Advanced Sequencing is to impart practical skills and
knowledge to in-service teachers (ISTs) to allow them to integrate MIDI sequencing
effectively into teaching and learning. The purpose of this course is to teach ISTs the basic
skills in using a high-end sequencing program. The instructor will also provide ISTs with
the information they need to make critical judgments about the appropriateness of selected
sequencing programs for particular educational contexts. The format of this course
alternates presentations with class activities, many of which serve to assess the
participant’s progress. In addition to satisfactory participation in class activities, final
projects in sequencing are required for certification.
Procedural Knowledge:
The ISTs will be assessed via class activities, two final sequences, and two lesson plans.
The IST creates two finished sequences. One sequence should be a transcription or an
arrangement of a classical work. (See Appendix P for a list of public domain works; other
scores are available at the International Music Score Library Project (IMSLP) web site,
http://imslp.org.) The other sequence should be in a contemporary style using
contemporary sounds and drum parts. This sequence could be a popular song from a show
or performing group. One sequence project of your choice must include at least two tracks
of digital audio in addition to a minimum of six MIDI tracks. (Sample audio files of public
domain songs are available from the TI:ME web site.) The IST may, for example, sequence
The Star-Spangled Banner as the song to which digital audio tracks [voice(s) and/or
acoustic instrument(s)] are added. Each sequence will be a minimum of thirty-two measures.
Each sequence should include a brief journal (see Appendix O for content suggestions;
two-page minimum) containing a brief description of the sequence and the IST’s artistic
goals, specific problems and solutions, and musical decisions regarding timbres, effects, etc.
Two Lesson Plans
• The IST will create two lesson plans that integrate sequencing software to enhance
teaching and learning. The lesson plans should clearly incorporate the MENC National
Standards for music. (Appendix N includes a sample lesson planner).
Declarative Knowledge:
The IST demonstrates skills in using a MIDI/digital audio sequencing program to create
dynamic, musical sequences in any musical style. The fundamental understanding and
specific skills to be acquired include the following:
· Introduction to Advanced Sequencing
· History of Music Sequencing
1. Review of Basic MIDI Concepts
a. MIDI In, Out and Thru ports
b. MIDI channels
c. Channel voice messages (note on/off, pitch bend, aftertouch, program change,
volume, pan, sustain)
d. Miscellaneous information – keyboards (multi-timbral capability, polyphony),
MIDI interfaces (simple MIDI interfaces, multi-port interfaces, USB, etc.)
e. General MIDI
2. Review of Basic Sequencing Concepts
a. Types of sequencers (hardware, software, integrated)
b. Sequencer tracks and MIDI channels
c. Standard MIDI Files (SMFs)
d. Opening, creating, and saving sequencer files
e. Transport controls (play, stop, pause, record, go to, loop, etc.)
f. Track parameters (volume, mute, solo, pan, transpose, and program)
g. Record modes (record, overdub, loop, punch-in, step time)
h. Sequencer display views (standard music notation, piano-roll, graphic, event
list)
3. Review of Corrective Editing Techniques
a. Correcting wrong notes
b. Correcting rhythmic errors
c. Correcting dynamics
4. Creative Editing Techniques
a. Manipulating data on individual tracks (volume, velocity level, pan, brightness,
vibrato, and pitch bend)
b. Using Quantization Effectively
5. Copying and pasting MIDI data
6. Adding Effects to MIDI Tracks (Definition and common uses of reverberation and
chorus)
7. Creating a Balanced Stereo Image
8. Creating, Editing and Importing Drum Parts
9. Creating Tempo Maps (conductor tracks)
10. Creating a Notated Musical Score of a Sequence
11. Using audio time compression and expansion to conform audio to new or different
tempos
12. Adding Digital Audio to MIDI Sequences
13. Adding Effects to MIDI and digital audio tracks
14. Creating a Master Copy (Bounce to Disk)
15. Integrating MIDI Sequences with Digital Video (3 Credit Course)
16. Sequencer Applications in the Curriculum & Classroom (composition, improvisation,
arranging, orchestration, listening skills) and Performance (accompaniments,
practice tapes)
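Many of the outline items above come down to a handful of byte-level message types. As a concrete illustration of the channel voice messages in item 1a–c (the helper names here are mine, not part of the MIDI specification), each message is a status byte carrying the channel, followed by one or two data bytes:

```python
# Illustrative helpers (assumed names) encoding MIDI channel voice
# messages as raw bytes: a status byte (message type + channel) plus data.

def note_on(channel, note, velocity):
    # Note On: status 0x90 plus channel (0-15), then note and velocity (0-127)
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note, velocity=0):
    # Note Off: status 0x80 plus channel
    return bytes([0x80 | channel, note, velocity])

def program_change(channel, program):
    # Program Change: status 0xC0 plus channel, one data byte
    return bytes([0xC0 | channel, program])

# Middle C (note 60) on channel 1 (index 0) at velocity 100
# yields three bytes: 0x90, 0x3C, 0x64.
msg = note_on(0, 60, 100)
```

Three bytes per note event is why MIDI files are so much smaller than audio recordings of the same performance.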
Introduction:
Predictions about today’s art of electronic music and sequencing date back at least to
1624, when Francis Bacon, in his book New Atlantis, wrote:
“We have also sound-houses, where we practice and demonstrate all sounds, and their
generation. We have harmonies which you have not, of quarter-sounds, and lesser slides of
sound. Diverse instruments of music likewise to you unknown, some sweeter than any you
have; together with bells and rings that are dainty and sweet. We represent small sounds as
great and deep; likewise great sounds extenuate and sharp; we make diverse tremblings
and warblings of sounds, which in their original are entire. We represent and imitate all
articulate sounds and letters, and the voices and notes of beasts and birds. We have certain
helps which set to the ear do further the hearing greatly. We have also diverse strange and
artificial echoes, reflecting the voice many times, and as it were tossing it: and some that
give back the voice louder than it came; some shriller, and some deeper; yea, some
rendering the voice differing in the letters or articulate sound from that they receive. We
have also means to convey sounds in trunks and pipes, in strange lines and distances.”
Overview of Music Sequencing:
A sequencer is a device that records the details or parameters of a musical performance as
MIDI data and not as actual sound. These parameters can include the notes that were
played, their dynamics and a general tempo. On playback, a sequencer will feed its stored
MIDI information into an instrument (or computer) capable of translating this information
into a musical duplicate of the original performance. Just as a player piano uses a paper
roll to store a musical performance, a sequencer uses its computer memory to store one.
A sequencer also provides easy and
extensive options for editing the recorded MIDI data such as transposition, quantization,
and surgical editing of note data without re-recording the source.
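Because a sequencer stores parameters rather than sound, editing operations such as transposition and quantization are simple arithmetic on those stored values. A minimal sketch, assuming notes are stored as (start_tick, note_number, velocity) tuples (an illustrative shape, not any particular sequencer's format):

```python
# Sequencer-style editing on stored note data. Events are
# (start_tick, note_number, velocity) tuples; names are illustrative.

def transpose(events, semitones):
    # Shift every note number by a fixed interval.
    return [(t, n + semitones, v) for t, n, v in events]

def quantize(events, grid):
    # Snap each start time to the nearest multiple of `grid` ticks.
    return [(round(t / grid) * grid, n, v) for t, n, v in events]

riff = [(3, 60, 96), (118, 64, 80), (245, 67, 90)]
# Up a whole step, then snap to eighth notes (120 ticks at 240 PPQ):
edited = quantize(transpose(riff, 2), 120)
# edited == [(0, 62, 96), (120, 66, 80), (240, 69, 90)]
```

Neither operation touches the original performance data's source; the sound changes only when the edited events are played back through an instrument.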
There are three basic types of sequencers, each with pros and cons:
• Software (most common)
Pros: easy to update; many edit and performance functions can be displayed
simultaneously on a large computer screen; other software can be run on the same
computer.
Cons: requires a computer to operate the sequencer.
• Integrated (MIDI workstation)
Pros: all-in-one design, including a MIDI keyboard, drum machine, synthesizer
sounds, and sequencer; transportable.
Cons: small display for edit and performance functions; limited song
storage; generally coarser quantization than software-based sequencers.
• Hardware (least common)
Pros: easy to transport from classroom to classroom; usually inexpensive.
Cons: very small display for edit and performance functions; may have
limited space for creating and storing sequences; often requires multiple disks for
storage.
Topic 1: Review of Basic MIDI Concepts
Objective:
Participants will review basic MIDI concepts and MIDI devices.
Materials:
The instructor may choose to display the included PowerPoint presentation as a means of
organizing the presentation and discussion of the topic.
Procedures:
The instructor will discuss the definition of MIDI and MIDI devices including:
• MIDI is data, not audio
• MIDI messages
• MIDI cables, channels and ports
• MIDI controllers
• MIDI sound modules
• MIDI interfaces
• MIDI sequencers
The instructor will review basic synthesizer and sound module performance parameters:
• Voices
• Polyphony
• Multi-timbral operation
The instructor will review the most often used MIDI Channel and MIDI Continuous
Controller messages including:
• Note On/Off
• Velocity
• Pitch Bend
• Modulation
• MIDI Volume
• MIDI Pan
The instructor will review the basic parameters of General MIDI.
The instructor will review the purpose, uses and types of Standard MIDI Files (SMFs).
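For instructors who want to show what an SMF actually contains, the fixed 14-byte header chunk can be parsed with the standard library alone. The layout below follows the published Standard MIDI File format (chunk ID "MThd", a 4-byte length that is always 6, then format, track count, and time division, all big-endian); the function name is illustrative:

```python
import struct

def parse_smf_header(data):
    # Unpack: 4-byte chunk ID, 4-byte length, then three 16-bit fields.
    chunk_id, length, fmt, ntrks, division = struct.unpack(">4sIHHH", data[:14])
    assert chunk_id == b"MThd" and length == 6, "not a Standard MIDI File"
    return {"format": fmt, "tracks": ntrks, "division": division}

# A Type 1 file with 3 tracks at 480 ticks per quarter note:
header = b"MThd" + struct.pack(">IHHH", 6, 1, 3, 480)
info = parse_smf_header(header)
# info == {"format": 1, "tracks": 3, "division": 480}
```

The "format" field distinguishes the SMF types reviewed in class: Type 0 (one merged track), Type 1 (multiple synchronized tracks), and Type 2 (independent patterns).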
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Topic 1a: DAW Basics Review
Objective:
Participants will review the basic operation of a DAW.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
that includes MIDI and audio regions suitable to demonstrate the basic operation of a DAW.
Procedures:
The instructor will review…
• how to open a DAW application;
• how to open an existing DAW session or project;
• the DAW’s basic layout and tool set;
• the available track types;
• how to size track height and zoom in or out on MIDI and audio regions;
• the available MIDI views (regions, notes, velocity, MIDI editor, notation, etc.);
• playback controls and methods of locating the playback cursor; and
• the folder and file structure of a DAW session or project.
Class Activities:
Participants will review and practice the basic operations of a DAW, using the included
Exercise 1a: DAW Review.
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Topic 1b: Recording MIDI
Objective:
Participants will review how to create a DAW session or project and record MIDI data.
Materials:
The instructor will use the included exercise or an alternative exercise that provides the
participants an opportunity to record and edit MIDI data.
Procedures:
The instructor will review…
• how to create a DAW session or project and select the sample rate and bit depth;
• how to set the project tempo, meter and key;
• how to create a click track (if necessary) and set the click parameters and options;
• how to create MIDI and instrument tracks;
• how to load virtual instruments and presets;
• how to record MIDI; and
• how to edit MIDI note pitches and rhythms (snap to grid).
Class Activities:
Participants will create a DAW session or project and record a 12-bar blues that includes at
least three tracks—drum set, bass and keyboards.
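The tempo and click settings above rest on one piece of arithmetic that a DAW performs constantly: converting MIDI ticks to seconds from the project tempo and the PPQ (ticks-per-quarter-note) resolution. A minimal sketch, with an assumed function name:

```python
def ticks_to_seconds(ticks, bpm, ppq=480):
    # One beat lasts 60/bpm seconds and spans `ppq` ticks.
    seconds_per_beat = 60.0 / bpm
    return ticks * seconds_per_beat / ppq

# At 120 BPM one quarter note lasts 0.5 s, so beat 3 (tick 960 at
# 480 PPQ) starts exactly 1.0 second into the bar:
t = ticks_to_seconds(960, 120)
# t == 1.0
```

This is why changing the project tempo shifts every MIDI event in time without altering any note data.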
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Topic 2: MIDI Editing
Objective:
Participants will learn to edit MIDI note velocities and durations, and to quantize MIDI
data.
Materials:
The instructor will use the included exercise or an alternative exercise that focuses on MIDI
editing and quantization.
Procedures:
The instructor will discuss and demonstrate…
• how to import an SMF into a DAW project or session;
• how to filter a MIDI region using the DAW’s event list filter;
• how to transpose a MIDI region;
• how to edit MIDI note durations;
• how to quantize MIDI notes and regions;
• the available quantization parameters, including quantize value, groove
quantization, quantize strength and quantize swing value;
• how to edit and scale velocities;
• how to copy and paste MIDI regions; and
• how to enter MIDI notes in non-real time.
Class Activities:
Participants will complete Exercise 2: Editing MIDI.
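The quantization parameters in the procedures above (quantize value, strength, swing) can be made concrete in a few lines. The function below is a teaching sketch, not any DAW's algorithm; the parameter names and defaults are assumptions:

```python
def quantize_note(start, grid, strength=1.0, swing=0.0):
    # Find the nearest grid slot for this start tick.
    slot = round(start / grid)
    target = slot * grid
    # Swing delays every second grid position (the "off-beats")
    # by a fraction of the grid value.
    if slot % 2 == 1:
        target += swing * grid
    # Strength moves the note only partway toward the target:
    # 1.0 snaps fully, 0.5 moves it halfway.
    return start + strength * (target - start)

quantize_note(130, 120, strength=1.0)   # -> 120.0 (full snap)
quantize_note(130, 120, strength=0.5)   # -> 125.0 (halfway)
```

Partial strength is what lets quantization tighten a sloppy performance without erasing its human feel.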
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Topic 3: Creating Drum Set parts and Working With MIDI Regions
Objective:
Participants will learn to create and program drum set tracks.
Participants will learn to build a song using virtual MIDI instruments and MIDI regions.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
and MIDI assets that include a Standard MIDI File consisting of MIDI regions appropriate
for building a pop song. The alternate materials should also include an audio file of a drum
set part that the participants can use as a guide to creating an appropriate drum
accompaniment.
Procedures:
The instructor will demonstrate how to…
• copy and paste, duplicate, repeat and loop MIDI regions;
• trim or change the duration of a MIDI note or region; and
• use loop recording and apply it to creating drum set parts.
The instructor will discuss…
• the difference between repeating and looping MIDI regions;
• the elements of a drum set groove;
• the ways in which a drum set part evolves from the start to the finish of a pop/rock
song;
• the nature and placement of drum set fills in a pop/rock song; and
• the MIDI Merge function and its fundamental uses.
Class Activities:
Participants will complete Exercise 3: Creating Drum Set parts and Working with MIDI
Regions.
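The elements of a drum set groove discussed above can be shown as a step grid over General MIDI drum notes (36 = kick, 38 = snare, 42 = closed hi-hat). The grid notation and function name below are illustrative, not a DAW feature:

```python
# One bar of sixteenth notes; 'x' marks a hit on that step.
PATTERN = {
    36: "x...x...x...x...",   # kick on every quarter note
    38: "....x.......x...",   # snare backbeat on beats 2 and 4
    42: "x.x.x.x.x.x.x.x.",   # closed hi-hat eighth notes
}

def pattern_to_events(pattern, grid=120):
    # Expand the grid to (start_tick, note) events; at 480 PPQ each
    # sixteenth-note step is 120 ticks.
    return sorted((step * grid, note)
                  for note, steps in pattern.items()
                  for step, ch in enumerate(steps) if ch == "x")

events = pattern_to_events(PATTERN)
# The bar opens with kick and hi-hat together: (0, 36) and (0, 42).
```

Varying which steps are marked, bar by bar, is exactly how a drum part evolves over the course of a song.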
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Topic 4a: Continuous Controllers
Objective:
Participants will learn the purpose and methods of using MIDI Continuous Controllers.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
that focuses on setting up, recording and editing MIDI Continuous Controller Data.
Procedures:
The instructor will discuss and demonstrate the use of MIDI Continuous Controllers
(MIDI CCs), including:
• using CCs to emulate human performance characteristics (vibrato, volume
modulation, pitch bend, etc.);
• standard or assigned MIDI CCs (MIDI Volume, pitch bend, modulation, etc.);
• unassigned MIDI CCs;
• MIDI “Learn” functionality;
• controlling a MIDI CC from a MIDI keyboard or MIDI controller;
• recording MIDI CC data; and
• creating and editing graphical MIDI CC/automation data.
Class Activities:
Participants will complete Exercise 4a: MIDI Continuous Controllers.
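Graphical MIDI CC data such as a volume fade is simply a series of timed controller values. A hedged sketch, generating a fade-out on MIDI Volume (CC 7) as (tick, controller, value) tuples (an illustrative event shape, not a DAW format):

```python
def cc_ramp(controller, start_val, end_val, start_tick, end_tick, step=60):
    # Emit one controller event every `step` ticks, interpolating
    # linearly between the start and end values.
    events = []
    for tick in range(start_tick, end_tick + 1, step):
        frac = (tick - start_tick) / (end_tick - start_tick)
        value = round(start_val + frac * (end_val - start_val))
        events.append((tick, controller, value))
    return events

# Fade out over two beats at 480 PPQ: CC 7 from 127 down to 0.
fade = cc_ramp(7, 127, 0, 0, 960)
# fade begins at (0, 7, 127) and ends at (960, 7, 0)
```

Drawing a ramp in a DAW's automation lane produces essentially this stream of events; the step size controls how smooth the fade sounds.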
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Topic 4b: ReWire and Client Applications
Objective:
Participants will learn how to set up and use host and client audio applications.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
that focuses on setting up and using Reason as a ReWire client application.
Procedures:
The instructor will discuss and demonstrate the use of and procedures for host and
client DAW applications, including…
• the benefits of using client applications;
• the available client audio applications;
• the setup procedures, including MIDI and audio signal routing;
• the correct order for starting and stopping host and client applications;
• the correct procedures and best practices for saving host and client sessions or
projects; and
• sequencing in a host application vs. a client application.
Class Activities:
Participants will complete Exercise 4b: ReWire.
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Topic 5: Pattern-Based Drum Programming and Multi-Output Virtual Instruments
Objective:
Participants will learn to create drum set parts using pattern-based virtual drum
instruments.
Participants will learn to use the multi-output functionality integrated into many virtual
MIDI instruments.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
that uses a pattern-based virtual drum instrument (Ultrabeat, Addictive Drums, Strike, EZ
Drummer, BFD, etc.).
Procedures:
The instructor will discuss and demonstrate…
• the use of a pattern-based virtual drum instrument;
• the best practices of creating pattern-based drum parts;
• the pros and cons of mixing multiple parts (and adding signal processing) within a
virtual instrument;
• how to route the multiple parts of a multi-output virtual instrument to independent
tracks; and
• the gain structure challenges and best practices of using a multi-output virtual
instrument.
Class Activities:
Participants will complete Exercise 5: Pattern-Based Drum Programming and Multi-Output
Virtual Instruments.
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Topic 6: Using Virtual Samplers
Objective:
Participants will learn to use virtual sampler MIDI instruments, including multiple sampled
instrument articulations and a multi-output setup.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
that uses a multi-output virtual sampler instrument and sample library with multiple
instrument articulations.
Procedures:
The instructor will discuss and demonstrate…
• how to set up a virtual sampler as a multi-output virtual instrument;
• transposing instruments (a Bb trumpet, for example) and SMFs;
• how to transpose a MIDI region;
• how to transpose a track using real-time properties;
• how to use multiple samples and articulations to better emulate human
performance;
• key switching in a virtual sampler;
• the importance of editing MIDI note velocities to enhance musical phrasing and the
expressive use of dynamics; and
• the importance of editing MIDI note durations to enhance musical phrasing and the
expressive use of articulation.
Class Activities:
Participants will complete Exercise 6: Virtual Samplers.
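Velocity layers and key switches, covered in the procedures above, amount to lookup tables inside the sampler: each incoming velocity selects one of several recorded samples, and a key-switch note (below the playable range) changes the active articulation. The sketch below is illustrative; the sample names and note assignments are invented for the example:

```python
# Each articulation maps velocity ranges to recorded sample files
# (hypothetical file names for illustration).
LAYERS = {
    "sustain":  [(0, 63, "trumpet_sus_soft.wav"), (64, 127, "trumpet_sus_loud.wav")],
    "staccato": [(0, 127, "trumpet_stac.wav")],
}
# Key switches sit below the instrument's playable range (here C0 and C#0).
KEY_SWITCHES = {24: "sustain", 25: "staccato"}

def pick_sample(articulation, velocity):
    # Select the recorded layer whose velocity range contains this note.
    for low, high, sample in LAYERS[articulation]:
        if low <= velocity <= high:
            return sample

pick_sample("sustain", 40)    # -> "trumpet_sus_soft.wav"
pick_sample("sustain", 100)   # -> "trumpet_sus_loud.wav"
```

This is why careful velocity editing matters so much with samplers: crossing a layer boundary changes not just loudness but which recording plays.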
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Topic 7: Subtractive Synthesis
Objective:
Participants will learn the fundamentals of subtractive synthesis and modulation.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
that uses a subtractive synthesis virtual instrument (ES2, Hybrid, Moog Modular, Thor, etc.).
Procedures:
The instructor will discuss and demonstrate…
• oscillators and wave shapes;
• analog filters, including the following parameters:
o Cutoff frequency
o Resonance
o Filter envelopes
o Filter modulation
• LFOs and envelope modulation;
• arpeggiators;
• step sequencers; and
• control sequences.
Class Activities:
Participants will complete Exercise 7: Subtractive Synthesis.
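The core subtractive signal path (a bright oscillator fed into a low-pass filter) can be demonstrated in a few lines. This is a deliberately naive teaching sketch, not any plug-in's algorithm; the function names are assumptions:

```python
import math

def saw(freq, sr, n):
    # Naive sawtooth in the range -1..1 (it aliases, which is fine
    # for a classroom illustration).
    return [2.0 * ((i * freq / sr) % 1.0) - 1.0 for i in range(n)]

def lowpass(samples, cutoff, sr):
    # One-pole low-pass filter: y[i] = y[i-1] + a * (x[i] - y[i-1]).
    # Lower cutoff -> smaller `a` -> more smoothing -> darker timbre.
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / sr)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

bright = saw(110.0, 44100, 512)          # raw, buzzy sawtooth
dark = lowpass(bright, 500.0, 44100)     # same wave with highs removed
```

Sweeping the cutoff value over time is the classic filter-envelope effect participants will hear in ES2, Thor, and similar instruments.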
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 17
Topic 8: Volume Editing and Musical Sequencing
Objective:
Participants will learn to use volume editing and specialized MIDI CCs to create musical
MIDI sequences.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
that uses a virtual sampler instrument and sample library that offers sample start as an
automatable parameter.
Procedures:
The instructor will discuss and demonstrate…
• how to import a .pdf or .tiff file into a music notation program;
• how to export a Standard MIDI File (SMF) from a notation program;
• best practices in creating musical MIDI sequences;
• volume editing and its relation to musical phrasing and dynamics;
• the differences between linear and non-linear crescendos and decrescendos, and how to create them in a DAW;
• the differences between MIDI volume, MIDI expression, and audio volume, as well as best practices in their usage;
• specialized MIDI CCs available in full-featured sample libraries; and
• the use of sample start as a MIDI CC.
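The relationship between MIDI volume (CC7) and MIDI expression (CC11) can be illustrated with a short Python sketch. Multiplying the two controllers is how most GM-compatible synths combine them, though exact behavior varies by instrument; the step count and curve shape below are arbitrary choices for illustration.

```python
# Effective level when volume (CC7) and expression (CC11) are combined;
# most GM-compatible synths multiply the two and rescale to 0-127.
def effective_level(volume_cc7, expression_cc11):
    return round(volume_cc7 * expression_cc11 / 127)

# Linear vs. non-linear crescendo over 16 steps, expressed as CC11 values.
steps = 16
linear = [round(i * 127 / (steps - 1)) for i in range(steps)]
# Squaring the position gives a slow start and fast finish: a
# non-linear crescendo of the kind discussed above.
curved = [round(127 * (i / (steps - 1)) ** 2) for i in range(steps)]
```

Because expression scales *within* the level set by volume, CC7 is typically set once per track for mix balance while CC11 carries the phrase-level dynamics.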
Class Activities:
Participants will complete Exercise 8: Volume Editing
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 18
Topic 9: Audio Time Compression and Expansion
Objective:
Participants will learn to use audio time stretching to conform and quantize audio regions.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
that focuses on audio time stretching and audio quantization.
Procedures:
The instructor will discuss and demonstrate…
• audio time compression and expansion (TCE);
• TCE algorithms optimized for different kinds of musical content;
• audio transients and their importance to audio TCE;
• conforming audio regions to a new tempo; and
• quantizing audio regions using TCE.
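Conforming audio to a new tempo reduces to a simple ratio, sketched below in Python; the loop length and tempos are arbitrary example values.

```python
# Conforming an audio region to a new tempo: the stretch ratio is
# original_tempo / new_tempo (a slower target tempo means longer audio).
def stretch_ratio(original_bpm, new_bpm):
    return original_bpm / new_bpm

def conformed_length(length_seconds, original_bpm, new_bpm):
    return length_seconds * stretch_ratio(original_bpm, new_bpm)

# Example: a 4-bar loop in 4/4 at 120 BPM lasts 8 seconds; conformed
# to 100 BPM it must be stretched to 9.6 seconds.
```

The DAW's TCE algorithm performs this stretch without changing pitch, which is why transient detection matters: transients must be preserved rather than smeared across the new duration.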
Class Activities:
Participants will complete Exercise 9: Audio Time Stretching
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 19
Topic 10: Mixing Fundamentals
Objective:
Participants will learn the fundamentals of audio mixing, signal routing and signal
processing.
Materials:
The instructor will use the included exercise and assets or provide an alternate exercise
that focuses on a multi-track mix of a MIDI project.
Procedures:
The instructor will discuss and demonstrate…
• the best practices of track level setting and balancing audio levels in a mix;
• the best practices of gain structure in a mixer (analog or virtual);
• common practices of stereo panning;
• the structure and purpose of mixer inserts;
• the use of parametric EQ plug-ins;
• the use of audio filter plug-ins;
• the use of audio compressor plug-ins;
• the structure and purpose of auxiliary sends;
• how to create a send-and-return setup for time-based effects;
• the use of delay plug-ins;
• the use of reverb plug-ins; and
• the purpose and use of master tracks.
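Level setting and gain structure rest on the decibel scale. A minimal Python sketch of the standard dB/amplitude conversions (the 20·log10 amplitude formulas):

```python
import math

# Decibel <-> linear amplitude conversions used when setting fader levels.
def db_to_amp(db):
    return 10 ** (db / 20)

def amp_to_db(amp):
    return 20 * math.log10(amp)

# A -6 dB fader move roughly halves the signal amplitude, which is why
# pulling several hot tracks down by 6 dB each quickly restores headroom.
```

Because track signals sum, leaving headroom at each gain stage (rather than maximizing every fader) keeps the mix bus from clipping.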
Class Activities:
Participants will complete Exercise 10: Mixing
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 20
Topic 11: Bounce to Disk
Objective:
Participants will learn to bounce a MIDI sequence to a stereo audio file appropriate for
burning to an audio CD.
Materials:
The instructor will use the included exercise and assets or provide an alternate MIDI
exercise that the participants will bounce to an interleaved stereo audio file appropriate for
burning to an audio CD.
Procedures:
The instructor will discuss and demonstrate…
• the purpose and best practices of adding dither to a digital audio bounce;
• the purpose and use of noise shaping;
• available sample rates and bit depths;
• consumer and industry-standard sample rates and bit depths;
• Red Book audio file standards;
• available audio compression codecs and file types; and
• the difference between multi-mono and stereo interleaved file types and their uses.
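Sample rate, bit depth, and channel count determine the size of a bounced file. A small Python sketch using the Red Book values discussed above (the MB figure assumes 1 MB = 1,048,576 bytes):

```python
# Red Book CD audio: 44.1 kHz sample rate, 16-bit samples, 2 channels.
def cd_bytes_per_second():
    return 44100 * (16 // 8) * 2  # samples/sec * bytes/sample * channels

def file_size_mb(seconds):
    return cd_bytes_per_second() * seconds / (1024 * 1024)

# One minute of Red Book stereo audio comes to roughly 10.1 MB,
# which is why compressed codecs exist for delivery formats.
```

A bounce at a higher rate or depth (e.g. 24-bit/96 kHz) must be sample-rate converted and dithered down to 16-bit/44.1 kHz before burning to an audio CD.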
Class Activities:
Participants will complete Exercise 11: Bounce to Disk.
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 21
Topic 12: Composing to Picture
Objective:
Participants will learn to import video into a DAW project, sync the video to the project timeline, and export a new video file with accompanying music.
Materials:
The instructor will use the included exercise and assets or provide an alternate MIDI
exercise that includes MIDI sequencing and video.
Procedures:
The instructor will discuss and demonstrate…
• Internet sites where video files can be found for student projects;
• applications that can be used to rip chapters from a DVD;
• the “composing to picture” process;
• the purpose of film and video frame rates;
• the purpose and use of SMPTE timecode;
• how to determine the frame rate of a video file;
• how to add a SMPTE timecode burn-in to a video file;
• how to import a video file into a DAW project;
• how to set the DAW project frame rate and SMPTE start time; and
• how to bounce a video project to a new video file.
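SMPTE timecode addresses video as hours:minutes:seconds:frames. A minimal Python sketch of the frame-count-to-timecode conversion, assuming an integer, non-drop frame rate (drop-frame 29.97 fps requires extra handling not shown here):

```python
# Convert a frame count to a non-drop SMPTE timecode string at a
# given integer frame rate (e.g. 24, 25, or 30 fps).
def frames_to_smpte(total_frames, fps):
    frames = total_frames % fps
    total_seconds = total_frames // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"
```

This is why the DAW project frame rate must match the video file's frame rate: the same frame count yields a different timecode at 24 fps than at 30 fps, so a mismatch drifts the score against the picture.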
Class Activities:
Participants will complete Exercise 12: Composing to Picture
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 22
Topic 13: Non-Linear MIDI Sequencing
Objective:
Participants will learn about non-linear MIDI sequencing and resampling.
Materials:
The instructor will demonstrate an Ableton Live set of their choice.
Procedures:
The instructor will demonstrate…
• non-linear sequencing in Ableton Live (or Sonar) and facilitate a discussion comparing linear to non-linear sequencing;
• clips and scenes;
• “resampling” by recording from Session view to Arrangement view in Ableton Live;
• MIDI mapping of clips and scenes, as well as synth and effect parameters;
• triggering clips and scenes from a MIDI controller; and
• non-linear sequencing as applied to live performance.
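The clip-triggering idea can be sketched abstractly in Python. This is a hypothetical model, not Ableton Live's actual API: a dictionary maps incoming MIDI note numbers from a pad controller to (track, slot) pairs, mimicking how a MIDI-mapped Session view launches clips.

```python
# Hypothetical mapping of MIDI note numbers to clip slots, the idea
# behind triggering clips from a pad grid. Note numbers, track names,
# and slot indices are invented for illustration.
CLIP_MAP = {36: ("drums", 0), 37: ("drums", 1), 38: ("bass", 0)}

def handle_note_on(note):
    # Look up the pad's note number; unmapped notes are ignored.
    slot = CLIP_MAP.get(note)
    if slot is None:
        return None
    track, index = slot
    return f"launch clip {index} on track '{track}'"
```

The same lookup idea extends to scenes (one note launching a whole row of clips), which is what makes non-linear sequencing practical in live performance.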
Class Activities:
Participants will compare and contrast linear and non-linear sequencing, including the pros and cons of each approach.
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 23
Topic 14: MIDI Sequencing in the Curriculum
Objective:
Participants will learn to develop innovative ways to incorporate sequencer applications
into the music curriculum.
Procedures:
The instructor will provide sequenced examples and facilitate discussion about student
activities and teacher strategies that use the sequencer for curriculum development.
Student activity examples: A student can…
• Record music in step time or real time with a MIDI sequencer;
• Capture musical performances for self-evaluation or evaluation by the teacher via a MIDI sequencer;
• Isolate individual parts for singing practice and rehearsal, and practice singing one on a part using practice sequences;
• Change the timbres of one or more parts in a prerecorded MIDI sequence;
• Record and edit acoustic sounds using digital audio integrated into a MIDI sequencer; and
• Search for MIDI files on the Internet.
Teacher strategy examples: A teacher can…
• Use a MIDI sequencer to accompany a student choir or class;
• Select appropriate MIDI accompaniment music for students to use in live performance;
• Create multi-timbral music examples using a MIDI sequencer;
• Create musically expressive MIDI sequences using appropriate MIDI controllers and effects;
• Edit and perform complex mixing processes and integrate digital audio with MIDI sequencers; and
• Record and evaluate student performances using a MIDI sequencer.
Class Activities:
Additional strategies should be developed and shared by the IST and instructor. These
additional activities and strategies can be incorporated into the two lesson plans the IST
will submit at the end of this workshop.
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 24
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 25
Topic 15: Using Sequencers for Student Musical Composition and Performance
Objective:
Participants will learn to develop innovative ways to incorporate sequencer applications
into the music curriculum that support the development of student composition and
improvisation skills.
Procedures:
The instructor will provide sequencer examples and facilitate a discussion about student
activities that use a sequencer for composition and improvisation development.
Student strategies: A student can…
• Demonstrate the elements of music using a MIDI sequencer;
• Compose pieces demonstrating knowledge of appropriate ranges for traditional instruments using a MIDI sequencer;
• Create compositions in set forms (binary, ternary, blues, rondo) using a MIDI sequencer;
• Use a sequencer to develop an original or supplied compositional theme; and
• Record a MIDI sequence and synchronize it with a video clip or computer animation.
Compositional development techniques and corresponding sequencer techniques:
• Augmentation and diminution of melody: use the sequencer’s scale-time function (200% for augmentation, 50% for diminution).
• Diatonic inversion of melody (reversal of intervals diatonically): an “invert pitch” option is available in some sequencers.
• Symmetrical or exact inversion of melody (reversal of intervals symmetrically): invert pitch, or transpose notes up or down in the sequence.
• Different rhythms for the melody: change the time signature without adjusting the barlines, and modify the note values.
• Retrograde melody (the original melody played backwards): a “retrograde edit” option is available in some sequencers.
• Parallel transposition of melody: copy and paste the melody into a new track and transpose it either symmetrically or diatonically by an interval of a 3rd, 4th, 5th, or 6th.
• Modal alterations: apply a new key signature to the melody and hold the original notes to their original pitches except those affected by the key signature; some sequencers offer this under a “transposition” command.
• Octave displacement (moving melody notes to upper or lower octaves relative to the original): humanize with an extended pitch range; many sequencers let you set a pitch range for humanizing, and extending it to 5 or 6 octaves yields an instant 20th-century composition technique.
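Several of the techniques above are mechanical enough to sketch in Python, treating a melody as a list of (MIDI pitch, duration in beats) pairs; the melody and factor values below are arbitrary examples.

```python
# Example melody: C4, E4, G4, C5 as (MIDI pitch, duration-in-beats) pairs.
melody = [(60, 1.0), (64, 0.5), (67, 0.5), (72, 2.0)]

def augment(notes, factor=2.0):
    # Augmentation: scale durations up (a factor below 1 gives diminution).
    return [(p, d * factor) for p, d in notes]

def retrograde(notes):
    # Retrograde: play the melody backwards.
    return list(reversed(notes))

def exact_inversion(notes):
    # Symmetrical (exact) inversion: mirror each pitch around the first note.
    axis = notes[0][0]
    return [(2 * axis - p, d) for p, d in notes]

def transpose(notes, semitones):
    # Parallel transposition by a fixed number of semitones.
    return [(p + semitones, d) for p, d in notes]
```

Diatonic inversion and modal alteration are omitted because they depend on a key context, which is exactly why sequencers expose them as dedicated commands rather than simple arithmetic.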
Pg. 26
Teacher strategies: A teacher can…
• Create ostinatos and accompaniments for student improvisation using a MIDI sequencer;
• Present the music of various cultures and historical periods using MIDI sequencers; and
• Create examples for students to listen to, analyze, and describe using a MIDI sequencer.
Class Activities:
Additional strategies should be developed and shared by the IST and instructor. These
additional activities and strategies can be incorporated into the two lesson plans the
IST will submit at the end of this workshop.
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 27
Topic 16: Evaluating Sequencing Software
Objective:
Participants will discuss currently available MIDI sequencing applications and learn to distinguish fundamental from advanced functionality.
Procedures:
The instructor will facilitate a discussion about currently available MIDI sequencing
applications. The following criteria can be used to guide the evaluation of the individual
applications:
• How much does the application cost?
• Is there an educational version? Is there an educational price?
• On what operating systems does the application run? Is it cross-platform?
• What are the application’s specific system requirements?
• Does the application facilitate both MIDI sequencing and audio recording and editing?
• Is the application 32- or 64-bit?
• What is the resolution of the sequencer (in ticks or PPQ)? Is it variable?
• Does the software provide multiple ways to view MIDI data (graphic, notation, event list, etc.)?
• Does the application support real-time audio scrubbing?
• What virtual instruments are included with the application?
• Are additional virtual instruments available at extra cost?
• How comprehensive are the included sample and loop libraries?
• What 3rd-party libraries is the application capable of importing or reading?
• What audio plug-ins are included with the application?
• What plug-in formats are compatible with the application?
• What sample rates and bit depths does the application support?
• What audio file formats does the application support?
• Is the application capable of freezing tracks?
• Does the application support audio TCE (warping, elastic audio, flex time, etc.)?
• What bounce and rendering options are available?
• What MIDI mapping options are available?
• Does the application support external MIDI controllers?
• Does the application support instant mapping?
• Can plug-ins be grouped in a rack?
• Does the application allow for both linear and non-linear sequencing?
• Does the application support video import? If so, what video formats does the application support (QuickTime, Avid, Windows Media, HD, etc.)?
• Does the application support timecode? What sync formats are supported (MTC, SMPTE, frame rates, word clock, etc.)?
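Sequencer resolution (PPQ) translates directly into timing granularity, which can be sketched in Python; the 120 BPM and 480 PPQ figures are common but arbitrary example values.

```python
# At a given PPQ (ticks per quarter note) and tempo, each tick
# represents a fixed slice of time: one quarter note lasts 60/bpm
# seconds and is divided into ppq ticks.
def seconds_per_tick(bpm, ppq):
    return 60.0 / (bpm * ppq)

# At 120 BPM and 480 PPQ, each tick is roughly one millisecond,
# which bounds how precisely the sequencer can place MIDI events.
```

A higher PPQ therefore means finer rhythmic placement at the same tempo, one concrete reason the resolution question above matters when evaluating applications.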
Class Activities:
Additional criteria should be developed and shared by the IST and instructor.
Pg. 28
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 29
Pg. 30
Topic 17: Final Projects
Procedures:
In order to successfully fulfill the workshop requirements, each IST will create two
sequences and two lesson plans.
• Sequence 1 should be a transcription or an arrangement of a classical work. (See Appendix P for a list of suggested works, or download a .pdf score from the International Music Score Library Project website.)
• Sequence 2 should be a song or composition in a contemporary style (pop/rock, jazz, etc.), using electronic sounds and drum parts. This sequence should include at least two audio tracks and a minimum of six MIDI tracks. The audio can be voices, sound effects, an acoustic instrument, or any combination of these. As an additional option, the second sequence project may incorporate digital video with a synchronized score or sound design as digital audio.
Each sequence should include a brief journal (two-page minimum) containing a description
of the sequence (specific sequencing problems and solutions, musical decisions regarding
virtual instrument and preset choices, effects, etc.).
• Turn in two lesson plans that feature several ways to use sequencing software to enhance teaching and learning in the music classroom. The lesson plans should clearly incorporate the MENC National Standards for music (Appendix N includes a sample lesson planner).
Notes:
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
_________________________________________________________________________________________________________
Pg. 31
Appendix A: The General MIDI specification (GM)
Review the information about General MIDI:
• Why and how it was developed
• Specifications (minimum):
a) 24-voice polyphony
b) 16-part multi-timbral capability (with dynamic voice allocation)
c) 128 instrumental sounds and one drum kit (47 drum sounds)
General MIDI organizes sounds into 16 families, with eight instruments in each family; this
is sometimes referred to as octal mode.
Prog#      Family / Instruments
1-8        PIANO: 1 Acoustic Grand, 2 Bright Acoustic, 3 Electric Grand, 4 Honky-Tonk, 5 Electric Piano 1, 6 Electric Piano 2, 7 Harpsichord, 8 Clav
9-16       CHROMATIC PERC: 9 Celesta, 10 Glockenspiel, 11 Music Box, 12 Vibraphone, 13 Marimba, 14 Xylophone, 15 Tubular Bells, 16 Dulcimer
17-24      ORGAN: 17 Drawbar Organ, 18 Percussive Organ, 19 Rock Organ, 20 Church Organ, 21 Reed Organ, 22 Accordion, 23 Harmonica, 24 Tango Accordion
25-32      GUITAR: 25 Acoustic Guitar (nylon), 26 Acoustic Guitar (steel), 27 Electric Guitar (jazz), 28 Electric Guitar (clean), 29 Electric Guitar (muted), 30 Overdriven Guitar, 31 Distortion Guitar, 32 Guitar Harmonics
33-40      BASS: 33 Acoustic Bass, 34 Electric Bass (finger), 35 Electric Bass (pick), 36 Fretless Bass, 37 Slap Bass 1, 38 Slap Bass 2, 39 Synth Bass 1, 40 Synth Bass 2
41-48      STRINGS: 41 Violin, 42 Viola, 43 Cello, 44 Contrabass, 45 Tremolo Strings, 46 Pizzicato Strings, 47 Orchestral Harp, 48 Timpani
49-56      ENSEMBLE: 49 String Ensemble 1, 50 String Ensemble 2, 51 Synth Strings 1, 52 Synth Strings 2, 53 Choir Aahs, 54 Voice Oohs, 55 Synth Voice, 56 Orchestra Hit
57-64      BRASS: 57 Trumpet, 58 Trombone, 59 Tuba, 60 Muted Trumpet, 61 French Horn, 62 Brass Section, 63 Synth Brass 1, 64 Synth Brass 2
65-72      REED: 65 Soprano Sax, 66 Alto Sax, 67 Tenor Sax, 68 Baritone Sax, 69 Oboe, 70 English Horn, 71 Bassoon, 72 Clarinet
73-80      PIPE: 73 Piccolo, 74 Flute, 75 Recorder, 76 Pan Flute, 77 Blown Bottle, 78 Shakuhachi, 79 Whistle, 80 Ocarina
81-88      SYNTH LEAD: 81 Lead 1 (square), 82 Lead 2 (sawtooth), 83 Lead 3 (calliope), 84 Lead 4 (chiff), 85 Lead 5 (charang), 86 Lead 6 (voice), 87 Lead 7 (fifths), 88 Lead 8 (bass + lead)
89-96      SYNTH PAD: 89 Pad 1 (new age), 90 Pad 2 (warm), 91 Pad 3 (polysynth), 92 Pad 4 (choir), 93 Pad 5 (bowed), 94 Pad 6 (metallic), 95 Pad 7 (halo), 96 Pad 8 (sweep)
97-104     SYNTH EFFECTS: 97 FX 1 (rain), 98 FX 2 (soundtrack), 99 FX 3 (crystal), 100 FX 4 (atmosphere), 101 FX 5 (brightness), 102 FX 6 (goblins), 103 FX 7 (echoes), 104 FX 8 (sci-fi)
105-112    ETHNIC: 105 Sitar, 106 Banjo, 107 Shamisen, 108 Koto, 109 Kalimba, 110 Bagpipe, 111 Fiddle, 112 Shanai
113-120    PERCUSSIVE: 113 Tinkle Bell, 114 Agogo, 115 Steel Drums, 116 Woodblock, 117 Taiko Drum, 118 Melodic Tom, 119 Synth Drum, 120 Reverse Cymbal
121-128    SOUND EFFECTS: 121 Guitar Fret Noise, 122 Breath Noise, 123 Seashore, 124 Bird Tweet, 125 Telephone Ring, 126 Helicopter, 127 Applause, 128 Gunshot
Channel 10 is typically assigned to drums and percussion.
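Because the GM program map is strictly ordered (16 families of 8 programs each), the family of any program number can be computed arithmetically. A minimal Python sketch; the family-name strings and helper name are my own:

```python
# General MIDI groups its 128 programs into 16 families of 8.
# Family index = (program - 1) // 8 for 1-based program numbers.
GM_FAMILIES = [
    "Piano", "Chromatic Percussion", "Organ", "Guitar",
    "Bass", "Strings", "Ensemble", "Brass",
    "Reed", "Pipe", "Synth Lead", "Synth Pad",
    "Synth Effects", "Ethnic", "Percussive", "Sound Effects",
]

def gm_family(program: int) -> str:
    """Return the GM family name for a 1-based program number (1-128)."""
    if not 1 <= program <= 128:
        raise ValueError("GM program numbers run from 1 to 128")
    return GM_FAMILIES[(program - 1) // 8]

print(gm_family(1))    # Acoustic Grand -> "Piano"
print(gm_family(57))   # Trumpet -> "Brass"
print(gm_family(128))  # Gunshot -> "Sound Effects"
```

Note that MIDI Program Change messages carry the value 0-127, so software often displays programs one lower than the 1-based numbers in the table above.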
Appendix B: MIDI Controller Numbers
Dec      Hex      Controller Name                               Data Range
0        00       Bank Select (coarse)                          0..127
1        01       Modulation Wheel (coarse)                     0..127
2        02       Breath Control (coarse)                       0..127
3        03       Continuous controller #3                      0..127
4        04       Foot Controller (coarse)                      0..127
5        05       Portamento Time (coarse)                      0..127
6        06       Data Entry Slider (coarse)                    0..127
7        07       Main Volume (coarse)                          0..127
8        08       Stereo Balance (coarse)                       0..127
9        09       Continuous controller #9                      0..127
10       0A       Pan (coarse)                                  0=left, 64=center, 127=right
11       0B       Expression (sub-Volume) (coarse)              0..127
12       0C       Effect Control 1 (coarse)                     0..127
13       0D       Effect Control 2 (coarse)                     0..127
14-15    0E-0F    Continuous controllers #14-#15                0..127
16-19    10-13    General Purpose Sliders 1-4                   0..127
20-31    14-1F    Continuous controllers #20-#31                0..127
32       20       Bank Select (fine)                            0..127
33       21       Modulation Wheel (fine)                       0..127
34       22       Breath Control (fine)                         0..127
35       23       Continuous controller #3 (fine)               0..127
36       24       Foot Controller (fine)                        0..127
37       25       Portamento Time (fine)                        0..127
38       26       Data Entry Slider (fine)                      0..127
39       27       Main Volume (fine)                            0..127
40       28       Stereo Balance (fine)                         0..127
41       29       Continuous controller #9 (fine)               0..127
42       2A       Pan (fine)                                    0..127
43       2B       Expression (sub-Volume) (fine)                0..127
44       2C       Effect Control 1 (fine)                       0..127
45       2D       Effect Control 2 (fine)                       0..127
46-47    2E-2F    Continuous controllers #14-#15 (fine)         0..127
48-51    30-33    Continuous controllers #16-#19                0..127
52-63    34-3F    Continuous controllers #20-#31 (fine)         0..127
64       40       Hold Pedal (Sustain) on/off                   0..63=off, 64..127=on
65       41       Portamento on/off                             0..63=off, 64..127=on
66       42       Sostenuto Pedal on/off                        0..63=off, 64..127=on
67       43       Soft Pedal on/off                             0..63=off, 64..127=on
68       44       Legato Pedal on/off                           0..63=off, 64..127=on
69       45       Hold Pedal 2 on/off                           0..63=off, 64..127=on
70       46       Sound Variation                               0..127
71       47       Sound Timbre                                  0..127
72       48       Sound Release Time                            0..127
73       49       Sound Attack Time                             0..127
74       4A       Sound Brightness                              0..127
75-79    4B-4F    Sound Controls 6-10                           0..127
80-83    50-53    General Purpose Buttons 1-4                   0..63=off, 64..127=on
84-90    54-5A    Undefined on/off                              0..63=off, 64..127=on
91       5B       Effects Level                                 0..127
92       5C       Tremolo Level                                 0..127
93       5D       Chorus Level                                  0..127
94       5E       Celeste (Detune) Level                        0..127
95       5F       Phaser Level                                  0..127
96       60       Data Entry +1 (increment)                     value ignored
97       61       Data Entry -1 (decrement)                     value ignored
98       62       Non-Registered Parameter Number (fine)        0..127
99       63       Non-Registered Parameter Number (coarse)      0..127
100      64       Registered Parameter Number (fine)            0..127
101      65       Registered Parameter Number (coarse)          0..127
102-119  66-77    Undefined                                     -
120      78       All Sound Off                                 value ignored
121      79       All Controllers Off                           value ignored
122      7A       Local Keyboard On/Off                         0..63=off, 64..127=on
123      7B       All Notes Off                                 value ignored
124      7C       Omni Mode Off                                 value ignored
125      7D       Omni Mode On                                  value ignored
126      7E       Monophonic Mode On                            number of channels (0=all)
127      7F       Polyphonic Mode On (mono=off)                 value ignored

Note: many receiving devices ignore the "fine" (LSB) versions of the coarse controllers (for example, fine Bank Select, Main Volume and Pan).
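Any of the controllers above travels in a three-byte Control Change message: a status byte (0xB0 plus the 0-based channel), the controller number, and the value. A hedged Python sketch (the helper names are my own; the coarse/fine pairing is shown for Main Volume, CC#7 and CC#39):

```python
def control_change(channel: int, controller: int, value: int) -> bytes:
    """Build a 3-byte MIDI Control Change message.
    channel is 0-15 (displayed as 1-16); controller and value are 0-127."""
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("out of range")
    return bytes([0xB0 | channel, controller, value])

# Sustain pedal (CC#64) pressed on channel 1:
print(control_change(0, 64, 127).hex())   # 'b0407f'

# A 14-bit Main Volume is sent as coarse (CC#7) then fine (CC#39):
def volume_14bit(channel: int, value: int) -> bytes:
    """value is 0-16383; the upper 7 bits go to CC#7, the lower 7 to CC#39."""
    coarse, fine = value >> 7, value & 0x7F
    return control_change(channel, 7, coarse) + control_change(channel, 39, fine)

print(volume_14bit(0, 12000).hex())
```

As the note above observes, many devices act only on the coarse (CC#7) half of such a pair.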
Appendix C: Historical Developments in Music Sequencing
1650
Athanasius Kircher described in his book Musurgia Universalis (1650) a mechanical device
that composed music, the Arca Musarithmica. It used numeric and arithmetic relationships
to represent scale, rhythm, and tempo relations.
mid-1600s
Carillons, a medieval invention, were modified so that pins mounted on a rotating cylinder
caused hammers to strike bells in a church tower at quarter-hour intervals, automatically
playing a melody. (The same pinned-cylinder principle later drove music boxes, in which the
pins "pluck" the teeth of a tuned steel comb.)
1804
The Panharmonicon, designed by Maelzel and driven by air pressure, reproduced the timbres
of traditional instruments. Beethoven wrote Wellington's Victory (the "Battle of Vitoria")
for the Panharmonicon; unfortunately it was not performed on the instrument due to various
technical problems.
1895
Boch and Wacher developed the automatic calliope, which used metal discs with holes punched
into them to produce music from the calliope’s steam- or air-driven pipes.
1897
E.S. Votey invented the Pianola, an instrument that used pre-punched, perforated paper rolls
moved over a capillary bridge. The holes in the paper corresponded to the 88 keys of the piano.
Openings in the paper roll sounded a note.
1920s
Givelet and Coupleux’s pipeless organ used vacuum tube oscillators to create sound, and
punched paper tapes to modulate it. One of the first programmable analog music synthesizers.
At the same time, Seeburg, Wurlitzer and others were building the first electronic jukeboxes.
1948-
Conlon Nancarrow used the player piano as a medium for original composition. He laboriously
punched out rolls by hand.
1950s
Lejaren Hiller and Leonard Isaacson produced the first significant computer-composed work,
a string quartet called the Illiac Suite, on the ILLIAC computer at the University of
Illinois; Hiller later developed the MUSICOMP composition language there. In addition, the
RCA Mark II synthesizer used operating codes punched on cards or paper tape.
1951-53
Eimert and Beyer (b. 1901) produced the first compositions using electronically generated
pitches. The pieces used a mechanized device that produced melodies based on Markov
analysis of Stephen Foster tunes.
1956
Martin Klein and Douglas Bolitho used a Datatron computer called Push-Button Bertha to
compose music. This computer was used to compose popular tunes; the tunes were derived
from random numerical data that was sieved, or mapped, into a preset tonal scheme.
late-1950s
Raymond Scott developed the Electronium, an "instantaneous composition/performance
machine" which generated rhythms and melodies in response to a composer’s requests via
buttons and switches.
1963
The availability of the transistor helped Don Buchla create an analog synthesizer with the
first built-in sequencer. Early models that followed came in eight- and sixteen-step
versions.
1964
Robert Moog’s modular analog synthesizer became a commercial product and featured an
eight-step analog sequencer with knobs per step for pitch (CV), gate time (on/off) and timing.
1974
Tom Oberheim created the DS-2, the first digital sequencer for controlling analog synthesizers
like the MiniMoog. The DS-2 stored and played up to 72 notes, triggers or filter events via
control voltage (CV).
1979-84
High-end analog and digital synthesizers featured built-in real time sequencing capabilities
under computer control; examples included the Fairlight CMI and New England Digital’s
Synclavier. By the late 1980s many keyboard synthesizers had built-in sequencers, drum
sounds and effects, constituting early MIDI workstations.
1980-84
Digital drum sequencers were on the rise, from Roger Linn (LM-1, the first with digital
samples), Oberheim (DMX, DX, DSX), Roland, Sequential Circuits, and others.
1981
John Melcher wrote the first software sequencer for Passport Designs. It ran on an Apple II
personal computer.
1982
Oberheim’s CV- and Gate-based system (OB-X and OB-Xa synthesizers, DMX digital drum
machine, and DSX hardware sequencer) was featured on two Top 40 singles.
1983
Dave Smith’s (Sequential Circuits) Universal Synthesizer Interface specification was
modified in collaboration with Roland and Yamaha, and was ratified as the MIDI 1.0
Specification.
1985-86
Software MIDI sequencers appeared for various computer operating systems: Apple Macintosh
(Opcode’s Vision, MOTU’s Performer), Atari ST (Cubase and Notator; the latter became Logic)
and IBM (Cakewalk).
1986-90
Multitrack MIDI sequencing was welcomed into the recording industry as a legitimate
production tool, as well as for live performance. Notation was added to MIDI sequencers;
shortly afterwards, audio recording was integrated into sequencing.
1989
Digidesign released Sound Tools, a stereo audio editor that ran on the Apple Macintosh. In
1991, Sound Tools became Pro Tools, a software multitrack audio recording and editing
package; MIDI support was added in 1995.
1995
Seer Systems was granted a patent covering the first software synthesizer, Reality. The patent
promotes the use of the General MIDI (GM) specification for compatibility.
1998
Rewire was released. A joint development between Steinberg (Cubase) and Propellerhead
Software (Recycle), Rewire facilitates communication between different software sequencers,
and allows MIDI, audio and synchronization information to be transferred between programs
running on a single computer. Rewire is open for general use without a licensing fee.
2000
Propellerhead Software released Reason, a software-based virtual rack consisting of
synthesizers, samplers, mixers and effects processors.
2001
Ableton Live was released, a loop-based MIDI and audio program designed for both
composition and live performance. Live is popular with DJs for its ability to alter an audio clip’s
tempo in real time for beat-matching effects.
Appendix D: Basic MIDI Concepts
1. Review the definition of MIDI – MIDI is a serial digital protocol that MIDI-capable
devices use to communicate with one another. In the most common usage, MIDI allows
a musician to play notes on the keyboard of one MIDI instrument, while another
instrument (or instruments) responds to these triggers. MIDI data communicates
performance information (note on/off, MIDI pitch, velocity, patch, etc.), not sound; the
synthesizer or sound module generates the signal that becomes the sound.
2. Review MIDI channels – Within MIDI there are sixteen separate channels that act as
discrete streams of data. A MIDI device set to listen to a single channel will execute only
data on that channel; it will ignore MIDI data on other channels. Note that the concept
of MIDI channels is less used today in multitrack, computer-based sequencers that
utilize software synthesizers. However, it is still useful for accomplishing specific tasks.
3. Review the MIDI IN, OUT, and THRU ports – A MIDI device receives MIDI data on its
MIDI IN port and sends MIDI data via its MIDI OUT port, while the MIDI THRU port
sends out precisely the data received on the MIDI IN port. Note that although MIDI is a
one-way protocol, modern DAWs use USB between MIDI devices and a host computer,
enabling MIDI’s one-way signals to travel over a single USB cable. Software synthesizers
use virtual MIDI ports and cables to connect to host software and to other virtual MIDI
devices.
4. Review a synthesizer’s basic performance parameters which are relevant to MIDI:
a. Voice – A synthesizer voice consists of all modules necessary to play a single note.
This normally includes at least one tunable sound generator (an oscillator or
sample) that establishes a basic timbre, plus sound modifiers including a filter and
an amplifier, along with envelope generators to control the latter over the duration
of a note. Other modifiers may also be included, such as an LFO for pitch modulation,
and glide or portamento to make the sound generator slide between pitches.
b. Polyphony – A synthesizer’s polyphony indicates how many notes can speak
simultaneously and is equal to the number of voices with which the synthesizer is
equipped. A guitar can be said to be a six-voice polyphonic instrument. Modern
software synthesizers often have unlimited polyphony, subject only to the
processing power of the computer on which they run.
c. Multi-timbral – A synthesizer capable of playing multiple sounds of differing timbres
(programs) simultaneously is said to be multi-timbral. This is not necessarily
related to polyphony; a synthesizer with limited polyphony can still be multi-timbral.
Programs are usually assigned within the synthesizer to receive data on
different MIDI channels.
5. Review the most commonly-used MIDI Channel Voice messages:
a. Note On – This causes a MIDI device to speak. A Note On message consists of the On
command plus the MIDI channel number, the Key number (determines pitch; values range
from 0-127, where middle C is note number 60, commonly displayed as C3), and the
Velocity (normally determines loudness or brightness; values range from 1-127). Note:
a MIDI Note Off command is in the specification but seldom used; a Note On with
Velocity=0 accomplishes the same thing. Note Off also includes a Key number and a
Velocity value; the latter is sometimes visible in sequencers but seldom used.
b. Pitch Bend – This dedicated channel message "bends" a note up or down, usually via a
wheel with a center detent. The default range is typically +/-2 semitones or +/-12
semitones. Pitch Bend is a 14-bit number for accuracy and smoothness, providing a
total of 16,384 steps.
c. Modulation (CC#1) – This is normally used to add vibrato via an LFO, with a control
range from 0 to 127.
d. Volume (CC#7) – The control range of Volume is 0 to 127. Since Volume is a 7-bit
number (a total of 128 steps), sweeping it in real time often creates an audible
"stepping" sound, making it poorly suited to real-time volume changes.
e. Pan (CC#10) – Short for "pan pot" or "panoramic potentiometer", this controller
positions a sound within a stereo field (left to right). Values go from 0-127, where
0 is hard left, 64 is center, and 127 is hard right.
f. Aftertouch – Some keyboards allow the player to apply pressure to fully depressed
keys, which generates Aftertouch information ranging from 0-127. This is generally
used to increase brightness or add vibrato.
g. Sustain (CC#64) – This is a switch controller with a value of either 0 (off) or 127
(on).
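The channel voice messages above have simple, fixed byte layouts, which can be illustrated by building the raw bytes directly. A minimal Python sketch (the function names are my own; a real project would hand these bytes to a MIDI library or driver):

```python
def note_on(channel: int, key: int, velocity: int) -> bytes:
    """Status 0x90 + channel, then key (middle C = 60) and velocity (1-127).
    A Note On with velocity 0 doubles as a Note Off."""
    return bytes([0x90 | channel, key, velocity])

def pitch_bend(channel: int, bend: int) -> bytes:
    """bend is -8192..+8191; 0 = no bend. The 14-bit value is split into a
    7-bit LSB and MSB, with 8192 (0x2000) as the centered wheel position."""
    value = bend + 8192            # shift to the 0..16383 wire range
    return bytes([0xE0 | channel, value & 0x7F, value >> 7])

print(note_on(0, 60, 100).hex())   # '903c64' - middle C on channel 1
print(pitch_bend(0, 0).hex())      # 'e00040' - wheel centered
```

The 14-bit split in pitch_bend is why the message offers 16,384 steps rather than the 128 of an ordinary controller.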
6. Review information about General MIDI (GM)
a. 24-voice polyphony
b. 16-part multi-timbral capability with dynamic voice allocation
c. 128 instrumental sounds and one drum kit consisting of 47 individual drum and
percussion sounds
i. MIDI channel 10 is typically reserved for drums and percussion
d. Standardized program map (see Appendix A)
7. Understand Standard MIDI File (SMF) format and how to import it
a. The SMF format allows different sequencing programs on different computer
platforms to share MIDI data.
b. All essential MIDI data are stored, although some proprietary (brand-specific)
settings may be lost.
c. There are three types of SMF files, each of which saves the same data.
i. Type 0 is the original SMF format, and combines MIDI data from all tracks into a
single track with multiple MIDI channels.
ii. Type 1 retains all discrete MIDI tracks that exist and is the preferred format.
iii. Type 2 adds pattern information per track if it exists. The use of this type of SMF
is limited primarily to hardware sequencers, and is not recommended.
d. Most sequencers allow MIDI data on different channels to be split into separate
tracks for simplicity.
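A Standard MIDI File is a sequence of chunks: an MThd header carrying the format type, track count and timing division, followed by one MTrk chunk per track of delta-timed events. As a sketch of the format itself (not any particular sequencer's export), the following Python writes a minimal Type 0 file containing one quarter-note middle C; the file name and the 480-ticks-per-quarter division are arbitrary choices:

```python
import struct

def vlq(n: int) -> bytes:
    """Encode a delta time as a MIDI variable-length quantity."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

division = 480  # ticks per quarter note
track = (
    vlq(0)     + bytes([0x90, 60, 100])   # t=0: Note On, middle C, ch. 1
    + vlq(480) + bytes([0x80, 60, 0])     # one quarter note later: Note Off
    + vlq(0)   + bytes([0xFF, 0x2F, 0])   # End of Track meta event
)
smf = (
    b"MThd" + struct.pack(">IHHH", 6, 0, 1, division)   # format 0, 1 track
    + b"MTrk" + struct.pack(">I", len(track)) + track
)
with open("middle_c.mid", "wb") as f:
    f.write(smf)
```

A Type 1 file would simply contain more MTrk chunks (one per track), with the track count in the header raised to match.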
Appendix E: Rewire and Client Applications
Background: Host and Client Applications
Digital audio workstations (DAWs) send and receive audio data through audio interfaces.
To preserve efficiency and audio quality, the drivers that manage the relationship
between the interface and the computer’s OS do not allow multiple programs (especially
DAW applications) to send and receive audio simultaneously. At times, however, there are
good reasons to use a DAW in conjunction with other audio applications. For that to work,
the host DAW application must have complete control of the audio interface and act as the
“master” in the relationship, while other applications route their audio output through
the host DAW and act as a client or “slave” program. This master-slave relationship is
achieved through Rewire, software that acts as a plug-in in the host DAW. The Rewire
plug-in routes MIDI from the DAW to the client application and then receives the client
app’s audio back into the host DAW.
Why use a client application at all?
 The client app has virtual instruments that offer additional resources (Reason)
 The client app has functionality that the host app doesn’t have (Melodyne, Ableton
Live)
Setup: Pro Tools example
1. Create a Pro Tools session (Pro Tools must be opened first in order for it to take
control of the audio interface. If Reason is opened as a stand-alone application, it
will block Pro Tools’ access to the audio driver and interface.).
2. Create a stereo Instrument track
3. Insert Reason as a multi-channel instrument plug-in on the instrument track. This
will open both the Rewire plug-in and the Reason application.
4. Switch to Reason. Click the Create menu and choose a virtual instrument from the
menu to create or add the instrument to the Reason session.
5. Click on the Browse Patch button (or in some cases the Browse sample) and choose
a sound (patch, preset, etc.) from the menu to load the sound.
6. Press the Tab key to flip the Reason rack and plug the instrument’s outputs into the
Reason audio interface outputs. The first instrument added or created in Reason will
automatically cable to outputs in the Reason software audio interface. Any other
instruments will need to be manually routed to the Reason interface.
7. Switch back to Pro Tools. Set the Rewire plug-in input to correspond to the Reason
outputs. Note that Mix L and R correspond to Reason outputs 1 and 2.
8. Set the Instrument track MIDI output selector to the desired Reason instrument and
MIDI channel. After an instrument is created in Reason, it will appear as an available
selection on the MIDI output selector of a Pro Tools Instrument or MIDI track.
Example (figure): Instrument track MIDI output selector, showing the MIDI Output Selector
button and the current MIDI output menu options.
9. Record enable the track
10. Make sure that MIDI thru is checked and the fader levels are up.
11. You should now be able to play and hear the Reason instrument through Pro Tools.
12. Record MIDI as necessary on the instrument track in Pro Tools.
13. Note: because Reason is a client application and is not part of Pro Tools, the Reason
session is not saved as a part of the Pro Tools session like other plug-ins. The
Rewire plug-in and its settings are saved with the Pro Tools session, and the next
time the session is opened, the presence of the Rewire plug-in will trigger Reason to
open; however, it will not automatically recall the Reason session, which must be
saved as an independent file. The best policy is to save the Reason session file and
place it in the Pro Tools session folder.
Appendix F: Multi-Output Virtual Instruments
Some virtual instruments are equipped with more than a single pair of stereo outputs. As
a group these devices are referred to as multi-output devices. Their primary benefit is
that the user can independently route signals that are easier to manage separately. For
example, independently routing the components of a drum machine instrument allows
different signal processing to be applied to the kick drum, snare drum, hi-hat cymbals,
etc. It’s also much easier to adjust levels on track faders than with the volume controls
inside the virtual instrument. Some examples of multi-output virtual instruments include
Native Instruments’ Kontakt sampler, Spectrasonics’ Omnisphere, Pro Tools’ Structure and
Strike, Logic’s EXS24 and Ultrabeat, and Reason’s Redrum and NN-XT sampler.
The setup of multi-output virtual instruments varies, but can be loosely grouped into two
categories:
(1) A single MIDI signal is routed to a virtual instrument. The virtual instrument separates
the resulting audio into independent signals that can be routed to separate outputs. Again,
a virtual drum machine would be an excellent example of this type of device. The actual
setup for this type of device is fairly simple.
A) Install the virtual instrument on an instrument track.
B) Set the different drum parts to independent outputs.
C) Create aux tracks for each drum part and set their inputs to the corresponding
virtual instrument outputs.
Example (figure): the Pro Tools drum VI Strike in a multi-output setup, with the virtual
drum plug-in on a track insert and the aux track inputs set to the plug-in outputs.
Example (figure): a virtual drum instrument with its drum set components assigned to
independent (multiple) outputs.
(2) Other multi-output virtual instruments are capable of receiving multiple MIDI signals
(on one or more MIDI channels), which are routed to multiple sounds or presets and, in
turn, routed to independent outputs. Examples would include virtual samplers like Kontakt
and Structure. The setup for this type of device is a bit more complex.
A) Install the virtual instrument on an instrument track.
B) Load the desired sounds or presets.
C) Set each sound or preset to a separate MIDI channel and output channel.
D) Create MIDI tracks for the additional MIDI parts. Set the track MIDI outputs to the
corresponding MIDI channels.
E) Create aux tracks for each necessary output. Set the aux track input to the
corresponding virtual instrument.
Some DAWs allow for a somewhat less complicated setup for this type of device. The first
three steps would mirror the above example, but would substitute instrument tracks for
MIDI tracks.
A third variant should be mentioned but is not a multi-output device:
Some virtual instruments can receive multiple MIDI signals and assign them to play
different patches simultaneously—they are multi-timbral—but have only one stereo
output path for those sounds. When using these, the individual levels and signal processing
of the different sounds must be adjusted in the plug-in.
Appendix G: Subtractive Synthesis Basics
Subtractive synthesis employs oscillators that output a prototypical audio waveform
(sine, sawtooth, square, triangle), from which filters remove (or “subtract”) frequencies
to create the desired result. Synthesizers based on subtractive synthesis principles
typically have one to three oscillators that, in combination, allow the user to create
extremely rich and complex sounds.
Oscillators may output the following types of waveforms:
Sine (sinusoidal) wave: the fundamental only, with no harmonics.
Sawtooth wave: fundamental plus all harmonics. The harmonics are in inverse proportion
(2nd harmonic is ½ as loud as the fundamental, 3rd harmonic is ⅓ as loud as the
fundamental, 4th harmonic is ¼ as loud as the fundamental, etc.)
Square wave: fundamental plus only odd-numbered harmonics; again, in inverse
proportion.
Triangle wave: fundamental plus all odd-numbered harmonics. Here, the harmonics are
proportional to the inverse square of their number in the series (the 3rd harmonic is
1/3², or 1/9, as loud as the fundamental, the 5th harmonic is 1/25 as loud as the
fundamental, etc.).
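The harmonic recipes above can be checked by additive synthesis: summing sine waves at the stated amplitudes approximates each waveform. A Python sketch (amplitudes only; the harmonic signs/phases, which a strict triangle wave alternates, are simplified here):

```python
import math

def harmonic_amplitudes(wave: str, count: int) -> list:
    """Amplitude of harmonics 1..count for each classic waveform,
    relative to the fundamental (harmonic 1)."""
    amps = []
    for n in range(1, count + 1):
        if wave == "sawtooth":            # all harmonics, 1/n
            amps.append(1 / n)
        elif wave == "square":            # odd harmonics only, 1/n
            amps.append(1 / n if n % 2 else 0.0)
        elif wave == "triangle":          # odd harmonics only, 1/n^2
            amps.append(1 / n**2 if n % 2 else 0.0)
        else:                             # sine: fundamental only
            amps.append(1.0 if n == 1 else 0.0)
    return amps

def one_cycle(wave: str, samples: int = 64, harmonics: int = 15) -> list:
    """Additively synthesize one cycle of the (phase-simplified) waveform."""
    amps = harmonic_amplitudes(wave, harmonics)
    return [sum(a * math.sin(2 * math.pi * (n + 1) * i / samples)
                for n, a in enumerate(amps))
            for i in range(samples)]

print(harmonic_amplitudes("triangle", 5))  # [1.0, 0.0, 0.111..., 0.0, 0.04]
```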
Other variants that may be available:
Synchronized: an oscillator outputting a waveform is slaved to a master oscillator. Each
time the master oscillator completes a cycle, the slave oscillator is reset and restarts
its own cycle. The master oscillator’s pitch is not heard and the slave oscillator’s
pitch is not changed; however, the slave oscillator’s waveform is altered in a way that
often results in aggressive, expressive sounds.
Cross Modulated: similar in concept to sync, but one oscillator’s pitch is modulated at
audio rate by another; for example, a sawtooth wave modulated by a triangle wave.
Pulse Width Modulation: varies a pulse wave, a square-like wave in which the positive and
negative portions of the cycle are not equal. In a 20% pulse wave, for example, the
positive section lasts only 20% of the wave’s period, while the negative section occupies
the remaining 80%.
Sub-oscillator: generates a second waveform one octave below the pitch of the oscillator
being processed.
Noise generator: sets the amount of white or pink noise added to the signal.
Audio Filters
Audio filters are devices that divide the frequency spectrum into two or more regions,
allowing some frequencies to pass through unaffected while others are attenuated. The
frequency regions that are unaffected are said to be in the pass band, and the frequency
regions that are attenuated are said to be in the stop band. The dividing point between a
pass and stop band is called the cutoff frequency. The example below shows a diagram of a
low pass filter.
Example (figure): Low Pass Filter
The example above shows that at the cutoff frequency, frequencies in the stop band are not
immediately attenuated to a zero output. Instead the output level of frequencies in the stop
band is gradually reduced the further you move into the stop band frequency range. The
rate at which the frequencies are attenuated is referred to as the filter slope, commonly
stated as a negative number of decibels per octave.
Common filter types include:
Low pass (high cut): passes frequencies below the cutoff frequency, attenuates frequencies
above the cutoff frequency.
High pass (low cut): passes frequencies above the cutoff frequency, attenuates frequencies
below the cutoff frequency.
Band pass: combines a low and high pass filter to create a frequency range in the
“middle” that is allowed to pass, while attenuating frequencies below the low-end cutoff
frequency and above the high-end cutoff frequency.
Band reject (notch): again combines low and high pass filters, but this time creates a
frequency range (usually narrow) that is attenuated, while frequencies below and above
that range are allowed to pass.
All pass (phase shift): an interesting variant that allows all frequencies to pass, but
shifts their phase by different amounts. When the shifted signal is mixed with the
original, the resulting phase cancellation is perceived as a “whooshing” sound (phase
shifting or flanging).
Other common filter parameters:
Cutoff frequency: sets the frequency that divides the filter’s pass and stop bands
Filter slope: sets the rate of attenuation. Typical settings are 6 to 24 dB per octave.
Resonance: accentuates the frequencies that surround the cutoff frequency to create a
more noticeable effect.
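As a digital illustration of these parameters, the simplest low pass filter is a one-pole design whose coefficient is computed from the cutoff frequency; its slope is roughly 6 dB per octave and it has no resonance control. A sketch in Python (the 48 kHz sample rate is an arbitrary assumption):

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate=48000):
    """One-pole low pass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
    Rolls off the stop band at roughly -6 dB per octave above the cutoff."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

# A constant (DC, 0 Hz) input lies in the pass band, so it passes through
# essentially unchanged once the filter settles:
settled = one_pole_lowpass([1.0] * 2000, cutoff_hz=100)[-1]
print(round(settled, 3))  # close to 1.0
```

Steeper slopes (12, 18, 24 dB per octave) are obtained by cascading such poles; resonance requires adding feedback around the cutoff, which this sketch omits.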
Modulation
By themselves, oscillators and filters often create a sound that is static and sometimes not
very interesting. A good example is a violinist who plays without any vibrato or dynamics.
So, to create a sound that is interesting or more “human,” a method or process is needed to
alter the sound in real time. Synthesizers typically use two additional devices to manipulate
sound in this manner—low frequency oscillators (LFOs) and envelopes.
A low frequency oscillator is a device that outputs a very low frequency (0.1 to 20 Hz). The
LFO frequency is not added to the resulting audio signal but, instead, is used to change or
modulate a sound in real time. Because the LFO frequency is very low, it vibrates at a rate
slow enough to mimic vibrato or other human performance characteristics.
An envelope is a device used to mimic or manipulate the shape of a note. Envelopes
typically have four sections—the attack, decay, sustain and release (ADSR). In a synthesizer
the envelope can be used to shape the time-variant characteristics (amplitude, filter
settings, etc.) of a note.
LFOs and envelopes can be used individually or in combination. For example:
 Filter Envelope: An envelope that modulates the filter cutoff frequency
 Amplifier Envelope: Controls the dynamic shape of a note or sound
 Velocity modulation: modifies envelope parameters according to MIDI velocity. The
harder a note is struck, the greater the loudness of the sound, the intensity of
modulation, the attack time, the envelope time, etc.
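The four ADSR segments can be sketched as a piecewise function of time. A minimal Python sketch with linear segments (hardware and software envelopes frequently use exponential curves instead; all parameter values below are arbitrary):

```python
def adsr(t, attack=0.05, decay=0.1, sustain=0.7, release=0.2, note_off=1.0):
    """Envelope level (0..1) at time t seconds, for a note released at note_off."""
    if t < 0:
        return 0.0
    if t < attack:                        # attack: rise from 0 to full level
        return t / attack
    if t < attack + decay:                # decay: fall from 1 to the sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_off:                      # sustain: hold while the key is down
        return sustain
    rel = (t - note_off) / release        # release: fall from sustain to 0
    return max(0.0, sustain * (1.0 - rel))

print(adsr(0.05))   # end of attack -> 1.0
print(adsr(0.5))    # sustain portion -> 0.7
print(adsr(2.0))    # long after release -> 0.0
```

Applied to amplitude, this function gives the dynamic shape of a note; applied to a filter's cutoff frequency, it becomes the filter envelope described above.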
Appendix H: Musical Sequencing
Objectives:
 Discuss techniques for making MIDI sequences more musical
MIDI sequences often sound artificial and unmusical. How can a sequencer be used to
emulate the idiosyncratic expressive characteristics of human performance and the natural
behavior of acoustic instruments? This article will discuss orchestration techniques and
tips for using automation and continuous controllers to mimic phrase shaping, dynamics
and other aspects of human musical performance.
MIDI sequences referred to as “mock-ups” are often a composer’s only method of
evaluating a composition’s orchestration, form and balance. Most film producers want to
review a MIDI mock-up before approving the funds for a recording project to move forward.
Therefore, it is critical that composers have the tools to musically express their
compositions and arrangements in the form of a MIDI sequence.
Creating musical and effective MIDI sequences starts with a basic knowledge of music.
Many poor MIDI recordings are the result of a lack of background in, or attention to, orchestration and musical style. A MIDI sequencing program can’t take into account an instrument’s range, or what happens when an instrumentalist plays louder or in the extreme ranges of the instrument. A trumpet sample played outside the instrument’s range is going to sound squeaky and wrong. So, good sequences start with realistic parts that a real musician could actually play.
MIDI can be recorded in step-time (non-real time), notes can be added with the “pencil
tool” and sequencing software can be set up to snap events to a grid. Sections of a piece that
are chordal, or more homophonic in nature, can be recorded pianistically on a single track
using a “section” sample. However, all of these techniques (bad habits) lead to mechanical,
unmusical sequences and can be avoided by following these guidelines.
 Record parts in real time. Rerecord a part until you can’t do any better.
 Record section parts one part at a time. If there are five trumpet parts, create and record five tracks. This will add subtle differences in articulation, rhythm, note length, etc. to each part. These differences are a natural part of any real ensemble performance.
 Record unison lines one part at a time and avoid using “section” samples.
 Where possible, use different samples for the different members of a section. You wouldn’t expect all the members of a section to play an instrument of the same brand and vintage. Using samples from multiple libraries will create a more realistic and more complex timbre.
 Where possible, replace lead part(s) with a real performance (audio recording).
 Don’t expect that using a single sound per part will create as complex a timbre as a real instrument. Doubling parts with other sampled or synthesized instruments will create a more complex sound, just as it would if you were orchestrating for real, acoustic instruments.
 Doubling a part at the octave can enhance thin or wimpy sounds.
 A MIDI part can also be enhanced by simply duplicating the track (two is better than one). Then try delaying or offsetting the duplicated track by 20-50 milliseconds to create a doubled effect.
MIDI Editing: Quantizing
Quantization is the most frequently used (and misused) MIDI editing feature.
Quantization allows a user to correct the rhythmic performance in a MIDI sequence using a
grid based on the bar/beat structure of a piece and the locations of the recorded MIDI
events in the sequence. Any competitive, modern MIDI sequencing program gives the user
control over the resolution of the grid, what parameter(s) will be quantized (note start,
note duration, note release, etc.), and how strictly the chosen material will be quantized. In
addition, special accommodations are made for quantizing music with a swing feel. For
example, Pro Tools allows a user to switch on or off the swing characteristic. When
enabled, a swing percentage of 100% will yield a triplet feel—in an 8th note passage, notes
not on the beat will move to the third 8th note of an 8th note triplet. Settings of less than
100% will result in a less dramatic swing feel while settings of more than 100% will
gradually make the feel closer to a dotted 8th and 16th feel.
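The swing arithmetic described above can be sketched as code. This Python fragment assumes Pro Tools’ 960 ticks-per-quarter resolution (discussed later in this appendix) and a linear mapping from swing percentage to offbeat position; that linear mapping approximates, rather than reproduces, any particular DAW’s implementation:

```python
PPQ = 960  # Pro Tools' ticks per quarter note

def swing_position(percent, ppq=PPQ):
    """Tick position of the offbeat 8th note within a beat for a given
    swing percentage: 0% = straight (480 ticks), 100% = triplet feel
    (640 ticks, the third note of an 8th-note triplet). Settings above
    100% continue toward a dotted-8th-and-16th feel."""
    straight = ppq // 2          # 480
    triplet = (2 * ppq) // 3     # 640
    return round(straight + (triplet - straight) * percent / 100)

def swing_quantize(tick, percent, ppq=PPQ):
    """Snap a tick to the nearest 8th-note grid point with swing applied.
    Onbeat notes snap to the beat; offbeat notes snap to the swung position."""
    beat, offset = divmod(tick, ppq)
    candidates = [0, swing_position(percent, ppq), ppq]
    nearest = min(candidates, key=lambda c: abs(c - offset))
    return beat * ppq + nearest
```

For example, an offbeat 8th played slightly late at tick 500 lands on the straight grid (480) with swing off, but on the triplet position (640) with 100% swing.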
Example: Unquantized MIDI Notes Against a Triplet Grid
Example: Quantized MIDI Notes Against a Triplet Grid
8th notes quantized with a swing feel at 100%
So, again it is important to listen to and analyze real performances and recordings and put
that knowledge to use in your MIDI sequences.
The rhythmic resolution of MIDI is measured in ticks. The actual resolution varies from
program to program, but a higher number of ticks per quarter note results in a more
accurate rendering of a performance. It is very important to know what a note’s rhythmic value equals in ticks so that you can understand how early or late a note is with respect to the rhythmic grid. For example, in Pro Tools the resolution is 960 ticks per quarter note. Consequently an 8th note is equal to 480 ticks, an 8th-note triplet is 320 ticks and a 16th note is worth 240 ticks. Therefore, a quarter note quantized exactly to the grid starts at 0 ticks, 8th notes start at 0 or 480 ticks, and 8th-note triplets start at 0, 320 or 640 ticks.
Moving a note 5-20 ticks before or after the related grid point (though inaccurate in an absolute, mathematical sense) won’t harm the perceived rhythmic accuracy of a musical passage, but it will allow you to subtly alter the feel or add life-like nuance to the rhythmic performance. As suggested earlier, misuse of quantizing can yield very bland results. Some suggestions to avoid this quandary are provided below.
 Avoid wholesale snapping to the grid of any notes that don’t “look right” visually.
 Always set the quantize value to the smallest rhythmic value in the selection to be quantized.
 Use the strength parameter when quantizing. Passages that are 85 to 90% quantized still retain some of the original performance and will not sound as “mechanical.”
 Quantize small sections at a time.
 Instead of quantizing everything, select a note and use the Nudge feature to move it 5-10 ticks. This improves the rhythmic accuracy but, at the same time, retains some of the human element.
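Two of these guidelines, quantize strength and nudging, reduce to simple tick arithmetic. A minimal Python sketch (the function names are illustrative, not taken from any DAW):

```python
def quantize_strength(start, grid=480, strength=0.85):
    """Move a note's start tick only part of the way toward the nearest
    grid point. strength=1.0 is full quantization; 0.85-0.90 retains
    some of the original performance's feel."""
    nearest = round(start / grid) * grid
    return round(start + (nearest - start) * strength)

def nudge(start, ticks):
    """Shift a single note by a few ticks (positive or negative) to
    improve accuracy while keeping the human element."""
    return start + ticks
```

A note played at tick 500 against an 8th-note grid snaps fully to 480, but at 85% strength it lands at 483, keeping a trace of the original timing.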
Velocity or MIDI Volume
The MIDI protocol allows for a resolution of 128 (0 – 127) steps or degrees for volume.
While this resolution is small when compared to our ability to hear changes in volume, it
has served well since the inception of MIDI. Many times, the problems in a poor sequence relate to uneven or unmusical note velocities. This issue is often related to a person’s piano skills or to recording a sequence using a non-weighted controller. But velocity problems can easily be edited using the event list editor or velocity editor in a DAW. After viewing the
velocity information on a track, common sense can direct decisions about where and how
to edit MIDI velocity.
Many sample libraries and virtual instruments are made more realistic by using velocity
information to trigger different samples. This makes a MIDI performance more realistic
because, as instruments get louder, their timbre usually gets brighter (and the opposite
when they get softer). Also, in addition to this timbral variation, as wind or brass
instrumentalists play louder, they tongue harder which results in a more aggressive
articulation. Modern sample library manufacturers often sample each note of an
instrument 10 – 15 times to capture these subtle differences. The resulting samples are
mapped to velocity ranges or “layers.” For example, the softest sample is mapped to the
velocity range 0 – 10. At the juncture of the ranges, crossfades are added between samples
so that it is very difficult to tell when you move from one velocity layer to the next.
MIDI sequences can be improved substantially with this knowledge of MIDI velocity and
the specific setup of a sample library. Musical accents can be highlighted or the desired
timbre can be triggered by increasing the velocity of a note and, as a result, switching to a
more appropriate sample in a different velocity range or layer. Also, a legato passage can be
made more effective by slightly reducing the velocity for any notes that are slurred and not
individually articulated. In the example below, the velocity of each note is represented by
the vertical “stalk” at the beginning of the note.
Example: Unedited Velocity “Stalks”
Example: Velocity stalks Edited to Enhance the Line Contour and Legato Phrasing
Articulations and Key Switching
Sample libraries often include alternate samples for different articulations—sustained,
staccato, muted, etc. Effective sequences can be created by either using multiple MIDI or
instrument tracks for each instrument and articulation or by switching between
articulations on the same track. Some instruments in sample libraries are set up with low
register keys assigned to send program or patch change information. Playing or recording a
MIDI note from one of these keys embeds program change messages in the sequence,
which results in the instrument switching articulations or, in the case of the Garritan Jazz
library, to muted versions of an instrument or, in the saxes, to woodwind doubles.
Instrument setups, with samples allocated to multiple velocity layers and key switching to
alternate articulations, place a big load on a DAW’s ability to function and can result in
reduced track count and other CPU-related problems. Where multiple articulations are
unnecessary, it’s a good idea to choose a non-key switching preset.
Note Duration
Often, problems with articulation and phrasing can be resolved by editing the length or
duration of a note. Staccato and related articulations and accents can be adjusted by editing
or trimming a note’s duration. Use your ear and musical experience to judge the correct
note length. Another issue related to note duration is legato phrasing, which is often
difficult to achieve when using a non-weighted MIDI controller. This issue is more easily
resolved by using the MIDI editing capabilities of a MIDI sequencer than by repeated efforts
at recording the part. For example, in Pro Tools the Change Duration dialog box has a legato
setting that will extend the duration of selected notes to overlap with the next note(s) by a
user-defined amount. Overlapping by a small amount (5 – 10 ticks) won’t be perceived as
notes sounding simultaneously, but it will result in a more legato-sounding phrase. Sample
libraries, such as the Garritan Jazz and Big Band library offer additional help for this issue.
When notes overlap and sustain pedal (CC 64) information is recorded, “the attack of the
sample is removed to … more closely emulate the sound of a slur.” (Garritan, 2007, p. 39)
Example: MIDI Note Durations Set to Overlap
Durations set to overlap and enhance legato phrasing
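The legato edit described above amounts to extending each note’s duration a few ticks past the next note’s start, as the Pro Tools Change Duration legato setting does. A Python sketch of that operation (the (start_tick, duration) note representation is illustrative):

```python
def make_legato(notes, overlap=8):
    """Extend each note's duration so it overlaps the next note's start
    by `overlap` ticks. notes is a list of (start_tick, duration_ticks)
    pairs sorted by start time. A small overlap (5-10 ticks) smooths the
    phrase without being heard as simultaneous notes; the last note
    keeps its original duration."""
    out = []
    for (start, dur), nxt in zip(notes, notes[1:]):
        out.append((start, (nxt[0] - start) + overlap))
    out.append(notes[-1])
    return out
```

Three detached 8th notes of 400 ticks each become 488-tick notes that just overlap their successors, yielding a more legato-sounding phrase.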
Continuous Controllers and Human Performance
The above suggestions offer simple, effective ways to begin creating more musical sequences, but the real pay-off results from becoming familiar with and using continuous controllers (CCs). CCs offer the best method of emulating the continual
changes in dynamics, timbre and pitch that are part of human musical performance. For
example, a brass or woodwind instrumentalist will not perform a sustained note with
unchanging pitch while a sample player or virtual instrument can do so indefinitely. The
implementation of CCs varies by manufacturer. Typically, there are 128 CCs and controllers
such as modulation (CC #1), volume (CC #7), pan (CC #10) and sustain (CC #64) are
standard, while many others are left open for manufacturers to use as they choose. (Most
books on MIDI will contain a discussion and a list of CCs).
Using continuous controllers for the first time can be tricky and frustrating, so a brief discussion here will be time well spent. Remember that controller information can be sent and recorded but, if the manufacturer did not implement the controller or used it in a non-standard manner, the controller information won’t trigger anything, or it will trigger something unexpected. It’s a good idea to become familiar with the standard CCs and their implementation in the library in use. There are two primary methods of accessing and adding controller information to a sequence.
(1) MIDI control devices come with pre-programmed continuous controllers such as the modulation wheel, pitch wheel and, possibly, a sustain pedal. There might also be additional knobs or sliders that are user-programmable continuous controllers. In modern
sequencing programs and virtual instruments, programming these “knobs” can be very
simple. For example, in the host sequencer (Logic, Pro Tools, DP, etc.) right-click on the
desired parameter and choose “Learn or Assign MIDI CC,” then move the desired knob or
slider on the keyboard controller. Look at the desired on-screen parameter as you move
the “knob” and you should see the values change. When this functionality is not
implemented in a DAW, assigning controllers can be more of a task. For example, the CC knobs on an M-Audio Oxygen 8 can be programmed to control a continuous controller using the following instructions:
To assign a CC Controller to the Oxygen 8 CC Knobs:
1. Press the MIDI/Select button
2. Press the Set CTRL key
3. Use the number keys to enter the desired CC Knob number
4. Press the Enter key
5. Use the number keys to enter the desired MIDI controller number
6. Press the Enter key
7. Use the number keys to enter the MIDI channel that the controller should send on
(probably channel 1)
8. Press the Enter key
9. Press the MIDI/Select button to finish the controller assignment.
Controller data can be recorded in real-time by putting the track in record and moving the
assigned wheel, knob or slider during recording. If the controller information is being
recorded in a 2nd pass (after the note information was recorded), make sure that the MIDI
Merge function is enabled; otherwise, any previously recorded information on the track
will be overwritten. After the automation or controller data are written, they can be edited
in the same manner that any MIDI or automation data are edited.
(2) The second method of accessing and recording continuous controllers is via the
automation feature in a sequencing program such as Logic (or Pro Tools, Cubase, DP, etc.).
The advantage here is that the controller information can be added and edited in non-real
time and can be seen on screen. The following describes how to access and assign
controllers in Pro Tools (9.x).
All automatable parameters are accessed via the track view selector found on track headers
in the Edit window. In addition to volume, pan and mute found on audio tracks, MIDI and
Instrument tracks also offer programmable “playlists” for several standard MIDI CCs as
well as a method to access less frequently used CCs. To add controller information, click the
track view selector on the desired track and choose the desired controller from the
available list. In the track playlist area, a related automation line graph will appear.
Example: Track View Selector and Automated MIDI Controllers Dialog Box (the screenshot shows the track header with its track view selector, the controller automation playlist, and the Assignable Controllers dialog box with its unassigned and assigned controller columns)
Automation/controller information can be written in real time as discussed above. It can
also be written in non-real time using the grabber, pencil and trimmer tools. Choose the
desired controller from the track view selector list. Click the associated line graph with the
grabber or pencil tools to add “break points.” Click and drag these break points up or down
to change the controller values. Option-click a break point with the grabber to delete it. Use
the selector tool to select a range of automation/controller data then use the trimmer tool
to raise or lower that section while retaining the current contour. Automation/controller
data can also be copied and pasted or a selection can be nudged a user-definable amount
using the plus and minus keys on your computer keyboard.
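Break-point automation and the trimmer behavior can be modeled simply: values between break points are linearly interpolated on playback, and trimming offsets a selection while retaining its contour and clamping to the MIDI range. A Python sketch (the data layout is illustrative):

```python
def cc_value_at(tick, breakpoints):
    """Controller value at a tick, linearly interpolated between
    automation break points. breakpoints is a sorted list of
    (tick, value) pairs."""
    if tick <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= tick <= t1:
            return round(v0 + (v1 - v0) * (tick - t0) / (t1 - t0))
    return breakpoints[-1][1]

def trim(breakpoints, delta):
    """Raise or lower a selection of break points by a fixed amount
    while retaining the contour, clamped to the 0-127 MIDI range
    (roughly what the trimmer tool does)."""
    return [(t, min(127, max(0, v + delta))) for t, v in breakpoints]
```

A two-point ramp from 0 to 127 over one quarter note reads back as roughly 64 at the halfway point, and trimming it upward preserves its shape while respecting the 127 ceiling.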
Accessing less frequently-used controllers is simple. Click the track view selector and, at
the bottom of the list, choose Controllers and then Add/Remove Controller. In the
Automated MIDI Controllers dialog box that appears, available controllers are shown in a
column on the left. Choose the desired controller, then click the Add button. This will move
the desired controller to the right column that displays the active controllers. Click the OK
button to close the dialog box. The controller is now available in the Track View List.
The following is a brief list of the most common controllers and their standard assignment.
Controller  Purpose/Function
1           Modulation: Pre-programmed to the Mod wheel on most controllers. Usually set to control vibrato but can be assigned to other CCs.
7           MIDI volume level (0-127)
10          Pan: 64 = center, 0 = fully left and 127 = fully right.
11          Expression: Most often assigned as a pre-master volume control. In this case, it scales by percentage the value set by CC #7. When using both controllers for volume, the suggested method is to use expression (CC #11) to add volume automation and then use CC #7 to lower and raise the overall volume of a track (Pejrolo, p. 11). CC #11 is used by some manufacturers for other parameters. Make sure you read the available information about continuous controller assignments when you start using a new sample library or virtual instrument.
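The relationship between CC #7 and CC #11 described above reduces to simple percentage scaling. A Python sketch (the rounding behavior is an assumption; actual instruments may respond slightly differently):

```python
def effective_volume(cc7, cc11):
    """Effective track level when expression (CC 11) scales the volume
    set by CC 7 by percentage: CC 7 sets the ceiling, and CC 11 shapes
    phrasing beneath it. Both values are 0-127."""
    return round(cc7 * cc11 / 127)
```

With CC 7 at 100, full expression (127) yields the full level of 100, while an expression value of 64 yields about half of it, which is why expression can carry the phrase-level automation while CC 7 handles overall track balance.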
Bibliography:
Bergersen, T. (December/January 2007). Sequencing Samples, Part 1. Virtual Instrument
Magazine, Los Angeles, CA.
Bergersen, T. (April/May 2007). Sequencing Samples, Part 2. Virtual Instrument Magazine,
Los Angeles, CA.
Garritan, G. (2007). Jazz and Big Band Library Manual. Orcas, WA: Garritan Corp.
Pedergnana, D. (March 2005). Subtle Gestures. Electronic Musician, Emeryville, CA.
Pejrolo, A. (2005). Creative Sequencing Techniques for Music Production. Oxford, England: Focal Press.
Russ, F. (April/May 2006). MIDI Mockup Microscope. Virtual Instrument Magazine, Los Angeles, CA.
Appendix I: Audio Time Compression and Expansion
Since the inception of digital audio, engineers and users alike have searched for ways to
manipulate audio files in order to “play” them at different tempos without a shift in pitch
(which was not possible with tape-based audio). The earliest methods for accomplishing this, made commercially available by Propellerhead (ReCycle), Spectrasonics (Groove Control) and Sonic Foundry (Acid), were useful but did not actually use time compression or expansion (TCE). Instead, they separated audio files at the transients,
grouped the separated regions so that they were perceived as one unit by the user, and
associated a MIDI note with each component region. Then, because MIDI can be time
stretched without pitch artifacts, the MIDI notes could be manipulated to fit a new tempo
and to trigger the associated regions—a very clever method to solve a complicated
problem.
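The slice-and-retrigger approach can be illustrated numerically: each transient slice keeps its musical (beat) position, and only the beat-to-seconds clock changes with tempo, so nothing is stretched and no pitch artifacts are introduced. A Python sketch (rounding to four decimals is just for readability):

```python
def retrigger_times(beat_positions, bpm):
    """Seconds at which to trigger each transient slice so the loop
    plays at a new tempo. The slices keep their beat positions; only
    the tempo clock changes. This mirrors the ReCycle-style approach,
    which stretches no audio and so adds no pitch artifacts."""
    return [round(beats * 60.0 / bpm, 4) for beats in beat_positions]
```

A drum loop with hits on beats 0, 0.5 and 1.0 triggers at 0, 0.25 and 0.5 seconds at 120 BPM, and proportionally earlier at any faster tempo.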
The big breakthroughs in TCE began in 2001 with the introduction of Ableton Live. Live’s
audio warping functionality proved to be effective, extremely popular and led competitors
to integrate similar capabilities. Some of the TCE functions found in current DAWs include
Logic’s “Flex Time,” Pro Tools’ “Elastic Audio,” Sonar’s “AudioSnap,” and so on. Like the
earlier methods, real audio time compression and expansion begins with an analysis of the
audio file to determine where the transient attacks are located. That information is then
used to determine the tempo, meter and length of the audio file in bars and beats. Where
transient attacks are regular and recurring (as is the case with drum set or percussion files),
the TCE process has proven to be very successful, both accurately analyzing audio files and
then time stretching the files to fit different tempos. When the audio files are more complex,
different algorithms are used. For example, most of the previously mentioned DAWs have
algorithms optimized for vocals or bass (monophonic), keyboards or guitar (polyphonic),
and drums (rhythmic). It should be noted that current TCE algorithms operate in real time,
which places considerable demand on a computer’s CPU, especially when slowing down an
audio file. In this case, the process not only needs to decrease the file’s tempo, but also fill
the gaps that result between transients as the “audio waveform” is stretched apart. All of
the DAWs mentioned above offer a method to render the TCE audio files at the new tempo.
This will alleviate any CPU choke that results from using real time audio TCE.
In addition to conforming audio files to a new tempo, the TCE functionality in Logic, Pro
Tools and Ableton Live includes capabilities that allow the audio files to be quantized much
like MIDI. For example, in Pro Tools, you would enable elastic audio on a track, select an
audio region and then open the Event Operations > Quantize dialog. There, you would
select a quantize value or a groove template (or other quantization parameters) against
which the transient locations in the audio file are quantized.
Pro Tools Example:
A. Enabling Elastic Audio on a track using the Conform to Tempo Command
1. Create an audio track
2. Set the Time Base selector to Ticks.
3. Click the Elastic Audio Plug-in Selector button (on the track header) and choose an appropriate plug-in.
4. Place an audio region on the track. The audio region will go offline briefly as the region is analyzed.
5. Select the region and then, from the Region menu, select Conform to Tempo.
Example: Track Header (the screenshot shows the Track View selector, Time Base selector, Elastic Audio Plug-in selector, Real-Time or Rendered Processing indicator, and the current Elastic Audio Plug-in indicator)
B. Viewing and Editing Elastic Audio in the Edit Window
1. There are two new ways to view audio in the Edit window.
2. Select Analysis or Warp from the Track View Selector
3. Analysis view
a) After Pro Tools analyzes an audio file, an Event marker is placed at each
detected transient. Event markers mark places in the audio file where the
audio can be quantized or the audio can be stretched or contracted
(Warped).
b) Switch to Analysis view to see and edit Event markers.
4. Editing event markers
a) Adding Event Markers
(1) Click a location with the Pencil tool, or
(2) Double-click a location with the Grabber tool
b) Moving an Event Marker
(1) Drag an Event marker with the Pencil or Grabber tool
c) Deleting an Event Marker
(1) Option-click an Event marker with either the Grabber or Pencil tool, or
(2) Select one or more Event markers with the Selector tool and press the Delete key.
5. Warp view
a) In Warp view, Warp markers can be added to Event markers. Warp markers can then be moved to stretch or contract audio between Warp markers.
6. Creating and editing Warp markers
a) Automatically adding warp markers
(1) Place an audio file on a tick-based audio track with elastic audio
enabled.
(2) Select the audio region and quantize it.
b) Manually adding Warp markers
(1) Switch to Warp view
(2) Click an Event marker with the Pencil tool or
(3) Double-click an event marker with the Grabber tool
c) Moving Warp markers without warping audio
(1) Control-click and drag a Warp marker using either the Pencil or
Grabber tools.
d) Deleting Warp markers
(1) Option-click a Warp marker with either the Grabber or Pencil tool
(2) Select one or more Warp markers with the Selector tool and press the
Delete key.
Example: Event and Warp Markers, Warp view (the screenshot labels the Warp markers and Event markers)
C. Quantizing Elastic Audio
1. Select a region(s) on a tick-based, elastic audio-enabled track
2. Select the Event menu > Event Operations > Quantize (Option-3)
3. Select Elastic Audio Events (in the What to Quantize section)
4. Select a rhythmic increment or groove template as the quantize reference
5. Set any other desired options and click Apply.
6. In Warp view notice that Warp markers have been added to each event
marker.
D. Conforming or Rendering Elastic Audio
1. Real-time elastic audio places great demand on CPU resources, so, once an audio file has been conformed to a session tempo and edited as necessary, any elastic audio can be rendered as a new audio file and region.
2. Click the Elastic Audio Plug-in Selector menu.
3. Set the Elastic Audio plug-in to “None - Disable Elastic Audio”.
4. In the Commit dialog box, click the Commit button; this will render a new audio file(s) and reduce CPU usage.
Logic Example:
A. Enabling Flex and Generating Transient Markers
1. When Flex View is enabled, all audio track headers show the Flex Mode
selector, which defaults to Off (no Flex). Choosing a Flex Mode enables Flex for
all regions on that track, and each region is analyzed for transients. Logic
places a light gray transient marker at each found transient.
2. When an audio file or region is placed on a track where Flex is active, that
audio is immediately analyzed for transients and the transient markers
appear.
3. To disable Flex for specific regions on an enabled Flex track, select the region
and un-check the Flex checkbox in the Region Inspector.
Example: Track Header (the screenshot shows the Flex View enable button, the per-region Flex enable checkbox, detected transients, and the Flex Mode selector)
B. The Six Flex Modes (see Example at right)
1. Slicing
a) Best for cutting drum tracks into
multiple regions for audio quantization.
May leave silence gaps between regions.
b) Adjust Transients (see item C below), then right-click the region and choose Slice at Transient Markers.
2. Rhythmic
a) Best for quantizing drums and percussion without slicing.
3. Monophonic
a) Best for quantizing solo instruments, especially bass, vocals and single-line guitar parts.
4. Polyphonic
a) Best for complex music tracks and chordal instruments including piano,
rhythm guitar and instrumental sections.
5. Tempophone
a) Mimics an early device similar to a tape machine, with a cylinder upon
which are mounted multiple playback heads. The cylinder turns as tape
moves across it; allows independent pitch or tempo change, but with
strong artifacts.
6. Speed
a) Allows the creation of “fades” that actually speed up or slow down
playback. Good for “dying turntable” effects.
C. Editing Transient Markers
1. Select the first region in the Flex-enabled track and open the Sample Editor.
Enable the Transient Editor Mode button and check for false or missing
transient markers.
a) Use the “+” or “-” buttons to have Logic add or subtract transients.
b) Click and drag a transient to move it. Command-click to manually add a
transient or double-click to remove one.
2. When only the essential transients remain and are correct, close the Sample
Editor.
Example: Transient Editor Mode (the screenshot shows the Transient Editor Mode button and the effect of detecting fewer vs. more transients)
D. Conforming Regions to Tempo with Flex Markers
1. Using the Arrow Tool or Flex Tool, place a single Flex Marker at the beginning
of the region to be conformed. This Marker will act as an anchor.
2. Click in the upper right corner of the region (the cursor will look like Trim
with waveforms instead of arrows), and stretch or shrink the region to the
desired new endpoint. Changes of less than 20% give the best results. Listen for artifacts indicating too much stretch or shrink.
3. Quantize the region as needed; Flex will quantize the Transients as if they were Flex Markers, stretching or shrinking the audio between markers as needed.
Example: Flex and Transient Markers (the screenshot labels the Flex markers and Transient markers)
E. Adjusting Notes Within a Region Using Flex Markers
1. Occasionally it is not possible to create accurate Transient Markers, especially
with complex, sustaining instruments. In these cases, Flex Markers can be
placed manually to allow accurate conforming and quantization. Using the Flex
Tool in the Arrange window, place Flex Markers as needed.
a) Clicking on a Transient in the upper half of the waveform display will
place a single Flex Marker on that Transient.
b) Clicking on a Transient in the lower half of the waveform display will place
three Flex Markers: The center one will be on the targeted Transient, and
the left and right ones will be on the nearest Transients on either side of
the target. The outside Flex Markers will act as anchors as the center one
is moved.
c) Clicking while not on a Transient will create either a single or triple Flex
Marker, depending on whether the upper or lower area is clicked.
2. To remove a Flex Marker, double-click it. To change the position of a Flex Marker without affecting the audio, Option-click and drag it.
F. Quantizing Flexed Tracks and Regions
1. Select a track or region, and then select an appropriate quantization value
from the Region Inspector. Groove Templates work well here.
2. To quantize a portion of a whole file, we recommend that you separate the
portion into its own region and then quantize it.
G. Making Conformed Audio Permanent
1. Audio that has been conformed using Flex Time will remain conformed only if
Flex View and Flex Mode are left enabled. There are two ways to make Flex
Time changes permanent:
a) Select the entire track. From the local Track menu, select Bounce in Place
(right-clicking on the region(s) also reveals the Bounce in Place menu). In
the dialog box, choose a Destination for the bounced audio, along with
how effects and normalization should be handled, and then click Bounce. A
new audio file will be created per the specifications selected in the dialog.
b) To avoid the above dialog altogether, select a region or regions and then
use the local Region menu to access Merge > Regions. This will create a
new audio file that will replace the previous one, with conforming intact.
Note that the Merge function works only on multiple regions. If a
conformed track consists of a single region, it must be sliced to allow
access to the Merge function.
Appendix J: Mixing and Signal Processing Fundamentals
Many who are unfamiliar with recording, mixing and editing assume that the job of an
engineer is solely technical. In reality, it is a complicated job that requires equal parts of
technical knowledge and “musical” ability (an art and a science). A great engineer, like Al
Schmitt for example, is as much an artist as the musicians with whom he works (e.g., Sam
Cooke Jefferson Airplane, George Benson, Chet Baker, Henry Mancini, Steely Dan, and Diana
Krall). That said, mixing is like any other musical endeavor, it takes practice, and it helps to
observe those who do it well. The following notes summarize the basic knowledge and
concepts that should help you better understand the process.
Setting Levels
The first step in mixing a MIDI or audio project is to set basic levels for the tracks in the
project. If nothing else, this will provide clarity so that the mix can be evaluated to check
how the components complement each other. It’s important to understand that, when the
signals from several tracks are added or mixed together, the level of the resulting
composite signal will be louder than the individual tracks by themselves. The following
methods help to avoid clipping or distortion in a pop/rock mix:
1. Start by setting the kick drum at -5 to -10 dB. Then, balance the other drum tracks to the kick drum. Add and balance the bass track, then continue this process with the other tracks. This method will not only solve the output level problem, but should ensure that the level on individual tracks doesn’t clip (nor should you run out of fader headroom).
2. A similar method: start by placing the fader on the master track at -6 dB. Then, begin balancing the drum tracks (or other preferred tracks) and gradually add and balance the other tracks. Experience has shown that this method can be problematic, causing you to run out of headroom on individual tracks.
Alternatively, some engineers start by working with the most important track(s) in a
project. For example, the lead vocal track in a pop/rock mix is commonly the most
important track. So, begin by setting the lead vocal level and add the desired or necessary
equalization, compression and reverb. Then, balance the other tracks to support the vocal
track, while still taking care not to overload the master track.
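The claim that a mixed composite signal is louder than its individual tracks can be checked with a little power arithmetic. This Python sketch assumes the tracks are uncorrelated, so their powers (not their amplitudes) add:

```python
import math

def summed_level(levels_db):
    """Level in dB of several uncorrelated tracks mixed together,
    found by converting each level to power, summing, and converting
    back. Two tracks at -10 dB combine to roughly -7 dB, louder than
    either track alone, which is why headroom matters when mixing."""
    power = sum(10 ** (db / 10) for db in levels_db)
    return 10 * math.log10(power)
```

This is why starting the kick at -5 to -10 dB leaves room: each added track raises the composite level toward the clipping point on the master.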
Pan Positions
A pan pot (panoramic potentiometer) is used to place a track’s output in a stereo sound field. When the pan knob is at its center position, the track’s output is distributed equally to both left and right speakers. This results in the impression that the signal from the track
emanates from a phantom center position between the left and right speakers. As the pan
knob is turned to the left, the signal gradually gets louder in the left speaker and softer in
the right speaker yielding the perception that the signal moves toward the left side of the
Pg. 63
stereo sound field; a similar perception occurs in the opposite direction, when the knob is
turned to the right.
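The left/right gain trade described above is implemented in mixers as a pan law. A minimal sketch of one common choice, a constant-power (sin/cos) law, with a hypothetical function name:

```python
import math

def constant_power_pan(pan):
    """pan: -1.0 (hard left) .. 0.0 (center) .. +1.0 (hard right).
    Returns (left_gain, right_gain) using a constant-power sin/cos law."""
    angle = (pan + 1) * math.pi / 4          # map -1..1 onto 0..pi/2 radians
    return math.cos(angle), math.sin(angle)

# Center position: both speakers get ~0.707 (-3 dB), so the combined
# acoustic power stays constant as the signal moves across the field.
left, right = constant_power_pan(0.0)
print(round(left, 3), round(right, 3))  # prints 0.707 0.707

# Hard left: all signal to the left speaker.
print(constant_power_pan(-1.0))  # prints (1.0, 0.0)
```

The -3 dB center dip is why a mono signal panned to center doesn't jump in loudness compared to the same signal panned hard to one side.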
There are as many panning strategies as there are audio engineers, but here are some
“common sense” thoughts that provide a good place to start.
1. The instruments in a classical or jazz recording are often panned to create the perception
that the instruments are on stage for a live performance. For example, when panning the
instruments in a string quartet mix, the 1st violin would be placed on the far left, the 2nd
violin approximately 30 degrees to the left, the viola 30 degrees to the right and the cello
on the far right.
2. In a pop/rock mix the primary melodic and rhythmic elements are placed in or around
the center (lead vocal, kick, snare, bass and instrumental solos). The drums are panned in
stereo—kick and snare center, toms appropriately right-to-left across the sound field and
ride and crash cymbals hard left and right. Other rhythm instruments, including guitar and
keyboard parts, are panned to the sides.
3. Low frequency elements in a pop/rock mix are usually panned center. This practice
started in the era of vinyl, when low frequencies were centered to keep the needle from
jumping out of the groove. Because low frequencies also require more power, centering
them balances the power used to drive the left and right speakers.
4. Stereo tracks don’t have to be panned in stereo. In fact, for MIDI projects in which all
tracks can be stereo, it’s important to maintain a balance of stereo and mono panning.
Otherwise, the resulting mix won’t have a perceived stereo identity. In some DAWs, stereo
tracks have two pan knobs (or sliders). This allows the track’s output to be panned in
stereo (hard left or right), to a region (position the two pan knobs at the left and right
edges of the intended range), or in mono (place the two pan knobs in the same position).
5. Balance the stereo image. Too many instruments to one side will make the mix appear
lopsided. Too much activity on one side of the mix is distracting. Panning most of the low or
high frequencies to one side is equally distracting. So, distributing instruments, musical
activity and frequency evenly across the mix is key to creating a resulting sound that will be
perceived as balanced.
Signal Routing I: Inserts
An insert is an audio patch point on a track, which allows a signal processor to be placed in
the signal path of an audio signal. The audio signal path on the track is interrupted,
allowing the user to route a track’s entire signal to a plug-in (virtual) or an external
hardware signal processor. A processor placed on a track insert affects only the signal on
that individual track. The most common signal processors used on track inserts are
equalizers, filters, compressors, limiters and noise gates.
Pg. 64
Example: Insert Signal Path
DAWs allow for 5 to 10 inserts per track, while mixing consoles only allow for one (analog
mixing consoles often have dedicated dynamics and EQ on each track and don’t need 10
inserts). Track inserts are wired in series. This means that the signal is routed through the
first insert and associated plug-in and then routed to the second insert and so on.
Example: Series Configuration of Track Inserts
Pg. 65
Signal Processing
Signal processors are audio devices that change some characteristic of an audio signal.
There are four primary categories of signal processing devices—spectrum, dynamic, time
and noise processors.
Spectral Processors
Spectral processors are used to change the frequency response or tone color of an audio
signal. They can be used to make the signal from a track sound “better” or, more often, to
help the signal “fit” in a mix. The two main spectral processors are parametric equalizers
and filters.
filters.
Parametric EQs:
Parametric EQs feature adjustable “bands” that can be used to boost or cut a range of
frequencies. Typical devices have from three to ten frequency bands that are generically
labeled “Lows, Low Mids, Mids, High Mids, etc.” Each band of a parametric EQ typically has
the following adjustable parameters:
Center frequency: Sets the center frequency of a frequency range (band)
Q/Contour: Sets the width of the frequency range around the selected center
frequency. In some cases the Q is fixed and not adjustable, most often in the low or
high frequency bands.
Boost/cut: Sets the amount of boost or reduction to be applied to the given frequency
range
Using a parametric EQ device can be a bit intimidating at first, so some general guidelines
would include the following:
Select a frequency band that matches the general frequency area that needs
adjustment.
In order to locate the specific area that needs work, set a significant amount of
boost; try around 10 dB to start.
Next, while listening to the track, sweep the center frequency parameter up and
down until the frequency-related problem(s) “stick out.”
Decide if you need to boost or cut the selected range.
Finally, open and close the Q setting to determine (then set) the width of the
frequency range.
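The boost-and-sweep procedure above can be explored numerically. As an illustration only, the sketch below evaluates the magnitude response of a single parametric band using the widely published Audio EQ Cookbook peaking-filter coefficients (the function name is ours):

```python
import cmath, math

def peaking_eq_response(f, fc, gain_db, q, fs=44100):
    """Magnitude response (dB) at frequency f of one peaking-EQ band
    centered at fc, using the Audio EQ Cookbook coefficient formulas."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]   # numerator
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]   # denominator
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z**2) / (a[0] + a[1] * z + a[2] * z**2)
    return 20 * math.log10(abs(h))

# A +10 dB "hunting" boost at 400 Hz with a moderate Q: the full boost
# lands at the center frequency and tapers off away from it.
print(round(peaking_eq_response(400, 400, 10.0, 1.4), 1))  # prints 10.0
print(round(peaking_eq_response(100, 400, 10.0, 1.4), 1))  # much less, 2 octaves below
```

Sweeping `fc` while holding the boost fixed mirrors the listening procedure above: the response peak moves with the center frequency until it lands on the problem area.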
Determining the frequencies that need to be adjusted can also be intimidating at first, so
this is a good time to briefly review tone color (timbre). All acoustic instruments produce
complex waveforms, consisting of multiple frequencies. The pitch that we perceive is, most
often, the lowest (and loudest) of those frequencies and is referred to as the “fundamental”
frequency. The other frequencies that are produced are (usually) integer multiples of the
fundamental frequency and are referred to as “harmonics.” We perceive different
Pg. 66
instruments as having unique timbral qualities because different instruments produce
harmonics in differing proportions and intensities. This is due to many factors—the
materials used in the manufacture of an instrument, chambers or columns of differing
lengths, single coil or humbucking pickups, differing mouthpieces, etc. When we EQ an
instrument, knowledge of its timbral characteristics needs to be linked to the frequency
areas in which its strengths and weaknesses lie. Audio engineers often refer to these areas
as the “magic frequencies.”
Example: Timbral character of selected instruments (mostly taken from The Mixing
Engineer’s Handbook by Bobby Owsinski).
Instrument        Magic Frequencies
Kick Drum         Fundamental around 80 Hz, Mid-range honk: 200 – 400 Hz, Snap: 2 – 3 kHz
Snare drum        Low end: 120 – 240 Hz, Boing: 900 Hz, Crispness: 5 kHz, Snap: 10 kHz
Cymbals           Clang: 200 Hz, Sparkle: 8 – 10 kHz
Bass              Low end: 50 – 80 Hz, Attack: 700 Hz, Snap: 2 – 3 kHz
Electric guitar   Fullness: 240 – 500 Hz, Presence: 1.5 – 2.5 kHz; to simulate the
                  sound of a 4 x 12 cabinet, reduce at 1 kHz
Acoustic guitar   Fullness: 80 Hz, Mid-range: 240 Hz, Presence: 2 – 5 kHz
Piano             Fullness: 80 Hz, Presence: 2 – 5 kHz, Honky-tonk: 2.5 kHz
Vocals            Fullness: 120 Hz, Boominess: 240 Hz, Presence: 5 kHz, Sibilance: 5 kHz,
                  Air: 10 – 15 kHz
Brass             Fullness: 120 – 240 Hz, Piercing: 5 kHz
Strings           Fullness: 240 Hz, Scratchiness: 7 – 10 kHz
Audio Filters
Audio filters are simple spectral processor devices, often used to clean up the low or high
frequency areas in a mix. Audio filters divide the frequency spectrum into two or more
regions and then allow some frequencies to pass through the device unaffected while
others are attenuated. The frequency regions that are unaffected are said to be in the pass
band and the frequency regions that are attenuated are said to be in the stop band. The
dividing point between a pass and stop band is called the cutoff frequency. The example
below shows a diagram of a low pass filter.
Pg. 67
Example: Low Pass Filter
The example above shows that above the cutoff frequency (user adjustable), frequencies in
the stop band are not immediately attenuated to a zero output. Instead the output level of
frequencies in the stop band is gradually reduced the further you move into the stop band
frequency range. The rate at which the frequencies are attenuated is referred to as the filter
slope, commonly stated as a negative number of decibels per octave. Typical filter slopes
reduce the frequencies in the stop band at a rate (slope) of 3, 6, 12, 18 or 24 dB per octave.
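Filter slope turns into concrete numbers quickly. A rough sketch (idealized; real filters do not attenuate perfectly linearly in dB per octave, and the function name is ours):

```python
import math

def hpf_attenuation(freq, cutoff, slope_db_per_oct):
    """Approximate attenuation (dB) of an idealized high-pass filter for
    a frequency below the cutoff, given the slope in dB/octave."""
    if freq >= cutoff:
        return 0.0                            # in the pass band: untouched
    octaves_below = math.log2(cutoff / freq)  # distance into the stop band
    return slope_db_per_oct * octaves_below

# A 12 dB/octave HPF with a 200 Hz cutoff attenuates 60 Hz hum by:
print(round(hpf_attenuation(60, 200, 12), 1))  # prints 20.8
```

Doubling the slope (to 24 dB/octave) would double the attenuation at the same frequency, which is why steeper slopes are chosen when the stop-band material must be removed decisively.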
Use examples:
1. In pop/rock mixes high pass filters (HPFs) are used to clean up the low end of the mix.
For example, to clear the way for the kick drum or bass, HPFs are placed on guitar,
keyboard and some drum tracks. In such a situation, you would begin by setting the
filter cutoff frequency at 100 – 400 Hz depending on the track, then adjust until you find
the right cutoff frequency.
2. HPFs might also be used to remove noise from a signal. If noise from a ground problem
is present it can be reduced by passing the signal through a HPF with a cutoff frequency set
above the frequency of the noise. In this example, the cutoff frequency would be set just
above 60 Hz.
3. Low pass filters (LPFs) are used less frequently. Some microphones manufactured
today have a significant boost in the higher frequency range. If the resulting signal
sounds too “brittle,” a low pass filter can be used. To avoid a drastic reduction of
frequencies above the cutoff frequency, try using a shallow slope (e.g., 3 dB/octave).
4. Low pass filters are sometimes used to help create the effect of someone talking though
an “old fashioned” telephone. In this case, the cutoff frequency would be set quite low
(500 – 700 Hz) with a very steep slope.
Dynamic Processing:
Dynamic processors are used to control the dynamic range of a signal. The most commonly
used type of dynamic processor is the compressor—a device that reduces the dynamic
range of a signal. Additionally, compressors can be used to help a vocal stand out against
the backing tracks in a mix, smooth the attacks in a rhythm guitar or funk bass part, or add
Pg. 68
punch to a kick drum or snare drum part. The standard parameters on a compressor
include:
Threshold: a user-definable level above which the compressor will proportionately
reduce the signal level. Below the threshold the compressor is inactive.
Ratio: Sets the amount that the input signal needs to increase to cause a one-decibel
increase at the compressor’s output. For example, with a 5:1 ratio, for every 5dB
that the signal exceeds the threshold, the output will increase 1 dB. So, if the
threshold is set at -10dB and the signal actually “hits” 0dB, the compressed signal
level is -8dB.
Attack time: sets the amount of time it takes a compressor to start working after a
signal exceeds the threshold. Fast attack times can alter the perceived frequency
response of a signal. Since much of the high frequency content is contained in the
attack of a sound envelope, compressing the attack can result in a dull sound. Try
starting with longer settings and gradually moving to a shorter attack time.
Release time: determines the amount of time it takes the compressor to return a
signal to unity gain (i.e., to stop attenuating) after the signal drops below the
threshold.
Makeup gain: Once the dynamic range of a signal has been reduced, the overall
signal can be increased (if desired) without causing clipping distortion. Check the
compressor’s gain reduction meter and set the makeup gain at the average level
shown on the meter.
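The threshold/ratio arithmetic above is easy to check in code. A minimal static compressor curve (the function name is ours) that reproduces the 5:1 example from the text:

```python
def compress_db(input_db, threshold_db, ratio):
    """Static compressor curve: above the threshold, the output rises
    only 1 dB for every `ratio` dB of input."""
    if input_db <= threshold_db:
        return input_db                      # below threshold: untouched
    return threshold_db + (input_db - threshold_db) / ratio

# The 5:1 example from the text: threshold at -10 dB, signal hits 0 dB.
# 10 dB over the threshold is squeezed to 2 dB over, i.e., 8 dB of gain
# reduction, which makeup gain can then restore.
print(compress_db(0.0, -10.0, 5.0))   # prints -8.0
```

Attack and release times are omitted here; a real compressor applies this curve to a smoothed level estimate rather than instantaneously, which is exactly what those two parameters control.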
Signal Routing II: Sends
A send (sometimes referred to as an auxiliary send) is an audio device that copies a signal
or a portion of a signal from a track to a bus—an audio path that connects a destination to
one or more sources. In this case, the destination is a processor and the sources are any
tracks that need processing. Note that the signal path through the track is not interrupted
by a send; instead, a user-defined portion of the track’s signal is copied to a bus. Multiple
tracks can have a send set to the same bus in order to share the processor. This is optimal
for delays and processor-intensive devices such as reverbs.
Pro Tools example:
1. To create a send, click on a track send and set the output to the desired bus (or
interface output if using an external processor)
a. Create sends on any additional tracks that are to share the processor and set
them to the same bus.
b. To create a send simultaneously on all tracks, Option-click a send and set it to
the desired bus.
c. To name a send, right-click the send and choose the rename option.
2. In order for the different tracks to get differing amounts of processing, the send on
each track has a fader that allows the user to adjust the amount of signal that is
Pg. 69
copied to the bus. The more signal that is copied from the send to bus, the more
obvious the effect will be.
3. Create an Aux track (because plug-ins can only be placed on track inserts).
a. Set the Aux track input to the same bus as the Send
b. Place the desired processor on one of the Aux track’s inserts
4. The Aux track fader functions as the master effect return control.
5. Note that while processing on inserts is referred to as series processing, the use of
sends is different. In this case the unprocessed (“dry”) signal is routed out the
respective source track outputs. The signal gathered through the sends is processed
by the signal processor inserted on the “return” Aux track, which (in most cases)
returns only 100% processed or “wet” signal. This is called parallel processing.
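The difference between insert (series) and send/return (parallel) routing can be sketched abstractly. The toy processors and function names below are ours, purely for illustration:

```python
def series(signal, processors):
    """Insert-style routing: the whole signal passes through each
    processor in turn, so the order of inserts matters."""
    for fx in processors:
        signal = [fx(s) for s in signal]
    return signal

def parallel(signal, fx, send_level):
    """Send/return-style routing: a copy of the signal (scaled by the
    send level) is processed 100% wet and summed with the dry signal."""
    wet = [fx(s * send_level) for s in signal]
    return [d + w for d, w in zip(signal, wet)]

halve = lambda s: s * 0.5   # stand-in for any processor
dry = [1.0, 0.8, 0.6]
print(series(dry, [halve, halve]))   # prints [0.25, 0.2, 0.15]
print(parallel(dry, halve, 0.5))     # prints [1.25, 1.0, 0.75]
```

Note that in the series case the dry signal is gone entirely, while in the parallel case it survives untouched and the send level alone controls how prominent the effect is.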
Example: Sends and Parallel Processing
Pg. 70
Time-Based Signal Processing
The “aux” send and return structure described in the previous section is primarily used to
integrate time-based effects into a mix, including delay, delay effects (like flanging and
chorusing), and reverb. These effects are added via a “send” because this method allows
multiple tracks to share the same effect. This is both an effective use of computer
processing power (critical with reverb devices that are typically processor intensive) and
can help to place the tracks in the mix in the same sonic environment.
Delay Effects
Delays are devices that delay or hold a copy of a signal for a user-defined amount of time.
When the delayed signal is mixed with the unprocessed signal it adds a sense of depth and
dimension. Delays can create several types of effects, primarily based on the amount of
delay time. Delay effects with a delay time that ranges from 1 to 50 milliseconds are
perceived as part of the original signal and not as a discrete repeat or echo. Representative
effects include:
Flanging: 1-15 milliseconds of delay time, which results in a type of phase
cancellation called comb filtering
Phase shifting: 1-15 milliseconds of delay time, which results from running the
delayed signal through an all-pass filter (a device that passes all frequencies at
equal level, but shifts their phase by differing amounts). When the delayed signal is
combined with the unprocessed signal, phase cancellation occurs.
Doubling: 15-50 milliseconds of delay time, which results in a sense of fullness
similar to a vocal track that is recorded twice and then played back simultaneously.
Chorusing: 15-50 milliseconds of delay time with pitch modulation. Using the
example of a vocalist, the delay and pitch detuning that are part of a chorusing effect
create the perception that the performer is more like a choir than a single vocalist.
When the delay time is more than 50 milliseconds, the delayed signal is perceived as a
discrete event. Echo is the best known of these delay effects. The parameters commonly
found in delay effects include:
Delay time: in milliseconds
Modulation: delay by itself can create a static, uninteresting effect (especially when
the delay times are less than 50ms). Modulating the delay time can change the
nature of the effect much like a violinist who adds vibrato to a performance.
Modulation is added to a delay effect by using a Low Frequency Oscillator (LFO).
The LFO oscillates at a very slow rate and is used to control the rate of change in the
delay time or other parameter.
Rate: The speed at which the LFO causes the delay time to shift
Width: The range of drift in the set delay time
Pg. 71
Feedback (regeneration): The amount of delayed signal that is rerouted to the input
of the delay. Increasing the feedback will increase the intensity of the effect or, in the
case of echo-like effects, it will increase the number of repeats. With echo, increasing
feedback beyond a certain point may create an infinite loop of repeats, and possible
distortion.
Wet/dry mix: This determines the ratio of processed to unprocessed signal output.
Higher percentages will result in the output of more delayed or “wet” signal. When a
delay (or reverb) is set up as a “bus” effect, the wet/dry mix is always set to 100%
wet. It is not uncommon to find delay effects directly placed on an audio or
instrument track. When that is the case, the wet/dry mix parameter on the delay
effect is set by “ear” to achieve the desired effect.
Repeating delay: a delay whose time is set directly in seconds or milliseconds. This
time may still be modulated by an LFO.
Tempo or Tap delay: A delay effect synchronized to the musical tempo of a song,
either by entering a rhythmic value, or by tapping the tempo on the computer
keyboard key, foot pedal or MIDI controller. Normally this delay time cannot be
modulated further.
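The parameters above map directly onto a toy implementation. The sketch below (our own names; real plug-ins process audio buffers rather than Python lists) first computes a tempo-synced delay time, then runs an impulse through a simple feedback delay line:

```python
def tempo_delay_ms(bpm, note_fraction=0.25):
    """Tempo-synced delay time: note_fraction 0.25 = quarter note,
    0.125 = eighth note, and so on."""
    return 60000.0 / bpm * (note_fraction / 0.25)

print(tempo_delay_ms(120))   # prints 500.0 (quarter-note delay at 120 BPM)

def delay_fx(signal, delay_samples, feedback, mix):
    """Minimal digital delay line with feedback (regeneration) and a
    wet/dry mix, processing one sample at a time."""
    buf = [0.0] * delay_samples          # circular delay buffer
    out = []
    for i, dry in enumerate(signal):
        wet = buf[i % delay_samples]     # signal delayed by delay_samples
        buf[i % delay_samples] = dry + wet * feedback   # feed output back in
        out.append(dry * (1 - mix) + wet * mix)
    return out

# An impulse through a 3-sample delay, 50% feedback, 100% wet:
# echoes appear at samples 3, 6, 9, each half the level of the last.
print(delay_fx([1.0] + [0.0] * 9, 3, 0.5, 1.0))
```

The feedback term is what produces the decaying string of repeats; setting it at or above 1.0 would make each repeat as loud as (or louder than) the last, the runaway condition described under Feedback above.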
Reverb
Reverb is defined as multiple echoes (closely spaced and random) that reflect within an
acoustic space. These echoes are packed together so densely that they are not perceived as
discrete events but instead as one decaying signal (ambience). This natural effect can be
reproduced by a virtual reverb plug-in to add a sense of dimension and warmth to a
recording. Typical reverb parameters include:
Pre-delay: the time between the arrival of direct sound (no reflections) and the first
reflections at the listener
Early reflections: The first reflections that arrive at the listener. They reflect a single
time off the primary boundaries of an acoustic space (room) before arriving at the
listener. Early reflections give the strongest impression of an acoustic space’s
dimensions and construction.
Delayed reflections: A multitude of reflections that are so closely spaced that they
are perceived as one single decaying signal.
Mix: the ratio of unprocessed to processed signal. Because reverb plug-ins are
typically used in a send and return setup, the output is set to 100% wet.
EQ: The construction materials in a room greatly affect the resulting reverb. Spaces
with highly reflective surfaces produce brighter, longer reverbs. Those with
absorptive surfaces tend to produce warmer (if not duller) reverb. EQ parameters
like high frequency roll-off help to simulate different types of spaces.
Pg. 72
Master (Fader) Tracks
Purpose: As the audio signals from tracks in a session are combined, the resulting
composite signal gets louder. To ensure that distortion doesn’t occur as part of this process,
master fader tracks can be used (inserted) at the point where the signals are combined. The
fader on the master track can be lowered to prevent distortion. Note that the master track
fader should not be lowered to adjust listening levels as that would result in a change in the
signal-to-noise ratio of the overall mix. Instead, adjust the monitor level control on the
audio interface.
Pg. 73
Appendix K: Bounce to disk
When a MIDI project is finished, it’s likely that you’ll want to generate a stereo audio file
that can be burned to disc or played on an MP3 player. To accomplish this, most DAWs
follow a similar process referred to as a bounce to disk. The next few paragraphs will
discuss basic “bounce” procedures and parameters including dither, bit depth, file types and
file formats.
Dither
Quantization error is an undesirable byproduct of the process of digitizing an analog audio
signal or changing the bit depth of a digital audio signal. Like tape hiss, quantization error is
primarily a concern when the audio signal level is soft enough that it doesn’t mask the
noise that results from quantization error. Unlike tape hiss, though, this noise is not
perceived as an artifact separate from the audio signal. Instead it is perceived as being
correlated to the audio signal, and has the harmonic characteristics of a distorted square
wave as the least significant (smallest value) bit fluctuates in an ordered pattern between
zero and one. In order to solve this problem the signal is dithered, which is a process that
causes the signal level to randomly fluctuate at the least significant bit. The result of
dithering the signal is that the noise resulting from quantization error is converted into a
more palatable, steady low-level noise floor, and that noise is perceived as decorrelated
from the actual audio signal.
A common point at which the bit depth of a digital audio signal might be changed is during
the bounce to disk process. Since one of the biggest improvements in digital audio quality
comes from working at the highest bit depth possible, DAW users might choose to create
24-bit sessions, even if they are solely using virtual MIDI instruments. In the end, though,
the resulting bounce will need to be 16-bit in order to burn it to an audio CD, etc. How
dither is added to
a bounce depends upon the DAW program. In Pro Tools and Sonar for example, the bounce
is dithered by loading a dither plug-in on one of the master track inserts. As a policy, dither
should always be the last plug-in insert and the last process performed. That way if any
other plug-ins are used on the master track the dithering won’t be “undone.” In some other
programs (Logic or Ableton Live), dither is added and configured in the bounce to disk
dialog box.
Dither parameters are few and not that complicated. First, remember that dither is not
required on a bounce where the bit depth is not being changed. If changing the bit depth,
the target bit depth will need to be set—most likely to 16-bit. Next, there may be noise shaping
options. The noise shaping process attempts to move noise out of the human hearing range.
While noise shaping can be an effective tool, it often creates the impression that a bounce
has been equalized. So, our recommendation is to bounce a project multiple times using
different bounce options and choose the one that either sounds the best or has the most
transparent effect on the original multi-track mix.
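As an illustration of what a dither plug-in does at the sample level, here is a rough sketch of reducing a floating-point sample to 16 bits with TPDF (triangular) dither. The function name is ours, and real dithering operates on full audio streams rather than single samples:

```python
import random

def quantize_16bit(sample, dither=True):
    """Reduce a float sample (-1.0 .. 1.0) to a 16-bit integer, optionally
    adding TPDF (triangular) dither of about +/- 1 LSB before rounding."""
    scaled = sample * 32767.0
    if dither:
        # The sum of two uniform random values has a triangular
        # distribution, which decorrelates the rounding error from
        # the audio signal.
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return max(-32768, min(32767, int(round(scaled))))
```

Without dither, a very quiet signal rounds to the same few values on every cycle, producing the correlated, distortion-like error described above; the added random fluctuation at the least significant bit trades that for a benign noise floor.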
Pg. 74
Example: Pro Tools Dither Plug-In
Example: Logic Bounce Dialog
Bounce Preparation
DAW programs have different methods for setting a bounce length. In most programs you
can set the duration by making a selection of the desired length in the program’s “arrange”
window (Edit window in Pro Tools). In other cases the bounce duration can be set in the
DAW’s Bounce or Export Audio dialog box—in Logic or Ableton Live, for example.
Additionally, all track solo and mute buttons should be disabled, or it is possible that the
bounce will not include all the desired tracks.
Pg. 75
Bounce Options
The last step in bouncing a project is to set the bounce options. Depending on the DAW, this
can be found under the File menu and titled Bounce to Disk or Audio Export. The
fundamental parameters found here include the following:
File Type: The most important consideration is whether the bounce will result in a
compressed or uncompressed audio file. Standard file types include…
BWF (Broadcast Wave Format): uncompressed, Red Book CD standard, most common
uncompressed file type
AIFF (Audio Interchange File Format): uncompressed, Red Book CD standard
MP3: most common compressed file type. Depending on the DAW program, the
encoding parameter might also be available; if so, the Constant Bit Rate (CBR) can
be set, which largely determines the audio quality. While the standard setting is
128kbps, we recommend that you use a CBR of at least 256kbps.
Sample Rate: 44.1kHz is the consumer standard rate for CDs, etc. If the bounce is being sent
to a mastering studio, they will want the bounce at the original sample rate.
Bit depth: 16-bit is the consumer standard for CDs, etc. If the bounce is being sent to a
mastering studio, they will want the bounce at the original bit depth.
File format: The standard is an interleaved file. This means that the left and right sides of a
stereo (or 5.1) signal are contained in one data stream, which is later decoded by the CD or
MP3 player. A stereo interleaved bounce will result in a single file. DAW programs also
offer a multi-mono bounce option. This will result in two files—one each for the left and
right sides. This is useful if the files are brought back into the DAW for further editing, but
useless if an audio CD is the desired result.
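The interleaved-versus-multi-mono distinction can be shown in a few lines of code (toy integer "samples" and hypothetical function names, for illustration only):

```python
def interleave(left, right):
    """Interleaved stereo: L and R samples alternate in one data stream."""
    out = []
    for l, r in zip(left, right):
        out.extend([l, r])
    return out

def deinterleave(stream):
    """Split an interleaved stream back into multi-mono L and R files."""
    return stream[0::2], stream[1::2]

left, right = [1, 2, 3], [10, 20, 30]
stream = interleave(left, right)
print(stream)                    # prints [1, 10, 2, 20, 3, 30]
print(deinterleave(stream))      # prints ([1, 2, 3], [10, 20, 30])
```

A CD or MP3 player expects the single interleaved stream; a multi-mono bounce simply delivers the two de-interleaved halves as separate files.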
Example: Pro Tools Bounce Dialog
The last step in the bounce process is to set the save location and name the file. As a best
practice, we recommend that the bounce be saved in the project folder. Lastly, some DAWs
bounce in real time (Pro Tools), while others are capable of a non-real time bounce (Logic,
Digital Performer, Adobe Audition, Sonar, and others).
Pg. 76
Appendix L: Composing to Picture Basics
Objectives:
 Discuss and demonstrate the scoring process
 Demonstrate the functionality in a DAW program that supports scoring to picture
Scoring Process Overview
In a professional setting, the following steps are part of the process that takes place as a
film or TV production team collaborates with a composer to create and add music to a film
or video.
The Spotting Session is a meeting in which the film director (or TV producer) and the
composer review a preliminary version of a project and determine the musical needs for
each scene. Usually, the project’s music editor is also present at the meeting taking notes,
which include…
 The SMPTE start and end points for each musical cue or scene
 The dramatic and musical goals for each cue
 Type of usage: Underscore, source music, pre-existing song, or new song.
Music Production…Composers in film or high budget TV score to picture. Composers for
lower budget TV shows sometimes compose without ever seeing any video. As part of the
process, composers and their teams create a mockup of the eventual score in a DAW using
virtual instruments and sample libraries. Some elements (sometimes many) created in this
pre-production process might actually be used in the final product. In preparation for the
scoring session, those elements will be edited, mixed and rendered as “prelays” that can be
included in the headphone cues during the subsequent recording. Depending on the project,
live musicians are added on top of the MIDI mockup. For a large budget film, this could
include 80-100 orchestral musicians. For a TV show, this might only be a single guitarist.
Following the recording session(s) the music is edited and submixed into a group of tracks
called stems, to reduce the total number of tracks for easier mixing.
The Dubbing Session…is when all of the project’s elements are merged into the final product.
These elements include the final edits of the film and the three areas of audio—dialog,
sound effects and music. The dubbing session is attended by the project’s director
(producer in TV), and audio mixers for the three audio areas. The music editor attends and
represents the composer’s interests.
Preparing and Importing Video Into a DAW Session
The production company will make a copy of the project available to the composer. Each cue
is given a window burn that displays the current SMPTE (Society of Motion Picture and
Television Engineers) timecode value directly on the video. Historically, timecode was an
audio track that was printed to the film. The timecode signal recorded to this track was
distributed to the audio and video machines in use and enabled them to synchronize during
Pg. 77
playback. Today timecode is “burned” into an overlay window in the video file. In order to
sync multiple devices, two things are needed: (1) positional reference (here’s where I am)
and (2) speed reference (this is how fast I’m going). SMPTE does that by giving timecode
location (positional reference) and frame rate (speed reference). Timecode reads as
Hours:Minutes:Seconds:Frames or 00:00:00:00.
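The positional arithmetic behind timecode is straightforward. A minimal sketch (our own function names) converting between non-drop timecode and absolute frame counts at 30 fps:

```python
def timecode_to_frames(tc, fps=30):
    """Convert non-drop 'HH:MM:SS:FF' timecode to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return (h * 3600 + m * 60 + s) * fps + f

def frames_to_timecode(frames, fps=30):
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    s, f = divmod(frames, fps)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# A cue starting at 1 hour, 3 minutes sits 5400 frames past the hour
# (3 minutes x 60 seconds x 30 frames), the same arithmetic used when
# setting an hour/frame offset on a timecode burn.
print(timecode_to_frames("01:03:00:00") - timecode_to_frames("01:00:00:00"))  # prints 5400
```

Frame count supplies the positional reference; the frame rate used in the conversion supplies the speed reference, which is why both halves of SMPTE matter for sync.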
There are several frame rates in use for different situations:
 US Theatrical Film: 24fps
 US black and white TV or audio only sessions: 30fps
 US color TV and video: 29.97fps
 European film or video: 25fps
Timecode can be added to an existing video in Final Cut Pro using the following
instructions.
1. Open the application
2. Import the video file into a new project
a. File > Import > Files…(Command-I)
b. Or drag and drop from a Finder window to the Final Cut Browser
c. Or drag and drop directly to a track
3. If the video file (or clip) does not have the same properties as the project, an error
message will appear and advise you to change the project (sequence) settings to
match the clip—you will definitely want to click “Yes.”
4. Double-click the video clip on the timeline so that it shows in the viewer window.
5. Click the Browser’s Effect tab…
a. Open the Video Filters tab
b. Find the Timecode Generator filter
c. Drag and drop the filter on the selected video track in the timeline
d. Click the Viewer’s Filters tab
e. Set the Filter parameters
i. Label = blank or perhaps the composer’s initials
ii. Set the frame rate to the video clip frame rate. If you don’t know the file’s
frame rate, select the video file in the browser then scroll to the right and find
the frame rate column.
iii. Hour offset: if the cue starts at 1 hour then the offset should be set to 1, etc.
iv. Frame offset: if the cue starts “off the hour,” add the necessary number of
frames. For example, if the clip starts at 1 hour and 3 minutes, enter 1 into the
hour offset field and enter 5400 (3 minutes x 60 seconds x 30 frames) into the
frame offset field.
6. To see the added timecode field, the video clip will need to be rendered
a. Select the video clip
b. Select Sequence > Render Selection > Video
7. Export the edited video clip
a. Select File > Send to > Compressor
b. In the Compressor Settings Window, click the Settings Tab
Pg. 78
c. Navigate the folder structure and select a video format. We suggest QuickTime
H.264 as a good compromise.
i. DAW programs will all import QuickTime videos
ii. H.264 video looks good and results in fairly small file sizes
d. In the Compressor Settings window, click the Destinations tab
i. Select a save location for the resulting file.
e. If necessary, adjust the format settings in the Inspector. If the video includes
production audio (dialog, sound effects) that you want to consider while writing
to picture, set the audio format to Linear PCM, 48kHz sampling rate and 16-bit.
Videos for class projects can be found online at the following websites
 Internet Movie Archive: http://archive.org
 Film Archives Online: http://filmarchivesonline.org
 Entertainment Magazine: http://emol.org/movies
 Movie Trailers: http://apple.com/trailers
 Audio from films: http://classicmovies.org
Videos or scenes from a video can be ripped from a DVD using a free program called
Handbrake. (http://handbrake.fr/)
Pro Tools Video Basics
The following outline gives basic information about how video can be integrated into a Pro
Tools session.
E. Pro Tools Video Requirements
1. PTs LE (in comparison to the full PTs Complete Production Kit or PTs HD
product) only allows QuickTime-related video in a PTs session.
a) You must have QuickTime loaded on the computer
b) HD systems also allow Avid video
2. Number of video tracks allowed per session
a) PTs LE only allows one video track with one video region
b) PTs LE with Complete Production Kit and PTs HD allow multiple video
tracks, video regions and playlists.
F. The Main Video track
1. Pro Tools only allows video playback from one track at a time. This track is
referred to as the Main Video track.
2. The Main Video track is either the first video track in the session or the video
track with the Online button enabled.
G. Video Engine Rate (VER) and Frame Rate
1. A session’s video engine rate is automatically selected when you import video
into a PTs session. If you’re operating an HD session (or LE equipped with DV
Toolkit 2) it’s set when the first video is imported into the session.
2. The VER is equal to the frame rate of the imported video.
Pg. 79
3. The VER displays in white on the track header unless it doesn’t match the
session’s frame rate. In that case the VER displays in red.
Example: Video Track Header
Video Type (QuickTime or Avid)
Track View selector
Online/Offline toggle button
Video Engine Rate
H. Session Frame rate
1. Background
a) The session’s frame rate should be set to match the Video Engine Rate.
This will allow the grids and rulers to align correctly with the frames of
the video file.
b) Film or video frame rates relate to SMPTE timecode which displays time in
hours, minutes, seconds and frames in two-digit fields separated by colons.
(1) For example, 1 hour, 12 minutes, 4 seconds and 20 frames displays as
01:12:04:20
(2) When drop frame rates are used, the separator between the minutes
and seconds field is a semicolon, for example: 01:12:04;20
2. Set the session’s frame rate in the Session Setup window (Command-2)
3. Supported Frame Rates include:
a) 23.976 FPS: Used to convert HD video to NTSC
b) 24 FPS: US film (theatrical release) frame rate
c) 25 FPS: PAL/EBU frame rate, used in Europe and other countries that
adhere to PAL standards.
d) 29.97 FPS: NTSC frame rate, used in the US for color video
e) 29.97 FPS Drop: NTSC video rate, used in the US for color video. Drop
frame is used to enable sync between “hour of the day clock” and video. It
still runs at 29.97 fps, but two frame numbers are dropped at the
beginning of every minute except minutes divisible by 10.
f) 30 FPS: NTSC frame rate used with black and white video and audio only
sessions
g) 30 FPS Drop: Misleading, not a real frame rate. Only used to correct errors
in existing timecode.
4. Session Start Time
a) PTs also allows the user to specify a session start time.
b) Note that 00:00:00:00 is not used. Starting playback before that location
would require machines to roll back past “midnight” to 23:59:59:59. This
often causes problems with other machines that may be synchronized to
PTs and should be avoided.
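The timecode arithmetic described above, including the drop-frame renumbering, can be sketched in Python. This is an illustration of the counting scheme only, not code from Pro Tools or any other DAW.

```python
def non_drop_timecode(frame_number, fps=30):
    """Format a frame count as non-drop SMPTE timecode (hh:mm:ss:ff)."""
    frames = frame_number % fps
    seconds = (frame_number // fps) % 60
    minutes = (frame_number // (fps * 60)) % 60
    hours = frame_number // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

def drop_frame_timecode(frame_number):
    """Format a frame count as 29.97 fps drop-frame timecode (hh:mm:ss;ff).

    Frame NUMBERS 00 and 01 are skipped at the start of every minute
    except minutes divisible by 10; no actual frames are discarded.
    """
    fps, dropped = 30, 2
    per_minute = fps * 60 - dropped               # 1798 counted frames per minute
    per_ten_minutes = per_minute * 10 + dropped   # 17982 per ten-minute block

    tens, rem = divmod(frame_number, per_ten_minutes)
    skipped = dropped * 9 * tens                  # 9 drop minutes per 10 minutes
    if rem > dropped:
        skipped += dropped * ((rem - dropped) // per_minute)
    frame_number += skipped
    frames = frame_number % fps
    seconds = (frame_number // fps) % 60
    minutes = (frame_number // (fps * 60)) % 60
    hours = frame_number // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d};{frames:02d}"

print(non_drop_timecode(129740))   # 01:12:04:20 -- the example timecode above
print(drop_frame_timecode(1800))   # 00:01:00;02 -- frames ;00 and ;01 skipped
```

Note the semicolon before the frames field in the drop-frame result, matching the display convention described above.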
Pg. 80
Example: Session Setup Window
Session Start Time (note the drop frame format)
Frame Rate (Timecode Rate)
I. Importing Video and Managing Video tracks
1. Importing QuickTime Video
a) Import Video Command
(1) Choose File > Import > Video
(2) In the Select Video File to Import dialog box, select the desired video
file
(3) Click Open
(4) Set parameters as desired in the Video Import Options dialog box and
click OK.
b) Drag and drop methods
(1) Drag from a Finder window to the Regions List, a track, or the Track List,
or…
(2) Drag from the Workspace Browser…
(3) The Video Import Options dialog box will appear, etc.
2. Video tracks
a) Creating video tracks and placing video regions on video tracks is similar
to working with audio regions.
Example: Video media in the Region List
Video Region
Pg. 81
3. Viewing QuickTime video in PTs
4. Video tracks have two View options…
a) Frames
(1) Shows key frames of the video
(2) The more the track is zoomed out, the larger the number of
“thumbnails” that are displayed
(3) Processor-intensive, so if performance seems compromised, try
switching to Blocks view
b) Block
(1) Video regions are displayed as colored boxes.
Example: Frames View and Block Views
J. The Video Window
1. To display the Video window, choose Windows > Video (or Command-9)
2. To resize the Video window…
a) Move the cursor to the lower left-hand corner. When the resize cursor
appears, click and drag the window to the desired size.
b) Right-click (Control-click) the Video window and choose the desired size
from the pop-up menu.
Example: The Video Window
Pg. 82
Resize Pop-Up menu (Right- or Control-click)
K. Importing Audio from a QT Video
1. Choose File > Import > Audio
2. Navigate the Import Audio dialog and select the QT movie from which you
wish to import the audio.
3. In the Regions column, select the audio region to be imported and choose Add,
Copy or Convert as is appropriate.
4. Click Done.
5. Choose to Import the region to the Regions List or to a new track.
6. The QT movie’s audio will automatically be imported using the current
session parameters.
L. Editing a Video Region: Video Editing is only available on HD systems and Pro
Tools LE systems equipped with Complete Production Kit. Simple edits like
trimming can be performed.
Pg. 83
Appendix M: Computer DAW and MIDI Sequencing Software
The following software is appropriate for TI:ME 2A Computer Music Sequencing. This is not
an exhaustive list, but it includes some of the most popular sequencing applications available. Keep in
mind that there is no such thing as “the best DAW” application. Each has strengths and
weaknesses. This appendix is a starting point for available options and possible
demonstration downloads. Most downloadable demos offer all of the software’s features,
but either disable saving or run only for a limited trial period.
Note that several manufacturers offer different versions or levels of a sequencer
application. Therefore, you can start out with a basic version and upgrade to a more
powerful version without having to learn a new interface. Also, many manufacturers offer
the same software for both the Macintosh and Windows platforms. For a lab situation, you
may want to investigate lab pack and site license pricing packages; such information is
typically available from the manufacturer’s web site.
The applications in the following table are industry-leading digital audio and sequencing
programs. The table lists each application’s manufacturer and the operating systems on
which it runs. All of these programs record and edit digital audio; most also support MIDI
recording and editing, video import and export, and audio time compression and expansion
(TCE). Check each manufacturer’s website for current feature details.

Application          Manufacturer     OS
Logic Pro Studio     Apple            Mac
Pro Tools            Avid             Mac/PC
Reason               Propellerhead    Mac/PC
Record               Propellerhead    Mac/PC
Reaper               Cockos           Mac/PC
FL Studio            Image-Line       Mac/PC
Ableton Live         Ableton AG       Mac/PC
Cubase               Steinberg        Mac/PC
Nuendo               Steinberg        Mac/PC
Sonar                Cakewalk         PC
Digital Performer    MOTU             Mac
Audition v3+         Adobe            Mac/PC
Sound Forge          Sony             PC

DAW Website URLs:
 Logic Studio: http://www.apple.com/logicstudio/
 Pro Tools: http://www.avid.com
 Propellerhead: http://www.propellerheads.se/
 Reaper: http://www.reaper.fm/
 FL Studio: http://www.flstudio.com/
 Ableton Live: http://www.ableton.com
 Steinberg: http://www.steinberg.net
 Sonar: http://www.cakewalk.com/
 Digital Performer: http://www.motu.com/
 Audition: http://www.adobe.com/products/audition.html
 Sound Forge: http://www.sonycreativesoftware.com/soundforge
Pg. 84
Third Party Virtual Instruments and Sample Library URLs:
 Native Instruments (Kontakt, Komplete, etc.): http://www.native-instruments.com
 Spectrasonics (Omnisphere, Stylus RMX, etc.): http://www.spectrasonics.net/
 EastWest/Quantum Leap (Hollywood Brass, Goliath, etc.):
http://www.soundsonline.com/
 Vienna Symphonic Library (Vienna Special Edition, Vienna Instruments Pro, etc.):
http://www.vsl.co.at/en/
 Synthogy (Ivory Pianos): http://www.synthogy.com
 Pianoteq (modeled pianos): http://www.pianoteq.com/
 FXpansion (BFD): http://www.fxpansion.com/
 Toontrack (EZ Drummer): http://www.toontrack.com/
 Xln audio (Addictive Drums): http://www.xlnaudio.com/
 Drumagog: http://www.drumagog.com/
 Celemony (Melodyne): http://www.celemony.com
 Arturia (Moog Modular V, Prophet V, etc.): http://www.arturia.com/
 Big Fish Audio (Loops, instruments, etc.): http://www.bigfishaudio.com/
Pg. 85
Appendix N: Lesson Plan Guide
Short Answer Worksheet for creating Sequencing Software Lesson Plans
Sequencer Software and the MENC National Standards
• Which national standard(s) can be addressed using Sequencing Software?
_________________________________________________________________________________________________
_________________________________________________________________________________________________
_________________________________________________________________________________________________
• What specific ways can Sequencing be used to address these standards?
_________________________________________________________________________________________________
_________________________________________________________________________________________________
_________________________________________________________________________________________________
MENC Standards:
1. Singing, alone and with others, a varied repertoire of music.
2. Performing on instruments, alone and with others, a varied repertoire of music.
3. Improvising melodies, harmonies, and accompaniments.
4. Composing and arranging music within specified guidelines.
5. Reading and notating music.
6. Listening to, analyzing and describing music.
7. Evaluating music and music performances.
8. Understanding relationships between music, the other arts, and disciplines outside
the arts.
9. Understanding music in relation to history and culture.
Now, review the Teaching Strategies listed in the TI:ME Technology Strategies document.
See Appendix A of the Technology Strategies for Music Education (published by TI:ME).
Select at least three teaching strategies and briefly describe how you could apply each in
your own classroom; then, describe three or more ways that you could use Sequencing in
your teaching situation:
TI:ME Tech. Strat. #
__________________________
__________________________
__________________________
Teaching Application
________________________________________________________________
________________________________________________________________
________________________________________________________________
List three ways that Sequencers can be used in your teaching:
_________________________________________________________________________________________________
_________________________________________________________________________________________________
_________________________________________________________________________________________________
Pg. 86
Sample Lesson Planner
I. ADVANCE PLANNING
A. GRADE LEVEL AND SUBJECT
For what grade or age is the plan?
How long does each class session last? How many times do you meet per week?
Where are the students developmentally?
B. MATERIALS AND EQUIPMENT
Which books (include title and specific page numbers) are needed for the plan?
Which song materials are needed?
Which visual aids (PowerPoint presentation, flashcards, photos, charts, etc.) are
needed?
Which aural aids (MP3s, CDs, etc.) are needed?
Which instruments are needed? Do they need to be tuned ahead of class time?
Which equipment (Whiteboard, Smart Board, sketchpad, etc.) is needed?
Is an LCD projector, tape recorder or DVD player needed?
Which props are needed?
Does the plan require open space for movement?
C. Specific Program Objectives
List several objectives for the year to meet in the activities of singing, playing, reading,
moving, creating, or listening.
List several objectives for the year to meet teaching goals based on music elements
(melody, rhythm, harmony, form, expression, and timbre).
D. Lesson Objectives
List several specific music objectives for this particular class. The objectives will
answer, in sentence form: Who, What Specific Activity (Active Verb), What Music Is
Used, How Well That Goal is Accomplished.
Pg. 87
Appendix O: TI:ME 2A Advanced Sequencing Project Journal Guide
IST Name ________________________________ email __________________________________
Sequencer Project Musical Material: Song Title __________________________________________
Composer __________________________________________
Date ___________________ File Name ________________________________________________
Goal(s) for realizing the song as a sequence (selections of sound, artistic recording and editing)
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
Equipment List: Sequencer or DAW, etc.
________________________________________________________________________________
________________________________________________________________________________
Tempo Settings __________ Meter Settings __________ Key Settings __________
Sequencer Track Sheet
1: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
2: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
3: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
Pg. 88
4: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
5: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
6: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
7: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
8: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
9: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
10: Track type, name, virtual instruments, signal processors and additional important information
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
Digital Video Information (AVI – QuickTime)
Name of video and additional important information:
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
Pg. 89
Appendix P: An Abridged Public Domain List from the Music in the Public
Domain Site for Sequencing Source Materials (http://www.pdinfo.com/list.php)
This list is intended only as a starting point to assist in researching public domain materials
and should not be considered definitive proof that the music listed is, in fact, in the public
domain.
A
A-Hunting We Will Go - Trad
Adeste Fideles - 1782
Afternoon of a Faun - Debussy 1895
Air for the G String - c1700
Alouette - 1879
Amazing Grace - John Newton c1800
America, My Country 'Tis of Thee - tune 1744, words Rev Samuel Francis Smith 1832
America the Beautiful - 1895
Angels We Have Heard On High - 1800s
Arkansas Traveler - 1851
Ash Grove, The - trad Welsh
Assembly (bugle call) - trad.
Au Clair de la Lune - 1811
Auld Lang Syne - music 1687, words verses 2&3 Robert Burns 1788
Aura Lee - 1861
Ave Maria Gounod - 1859
Ave Maria Schubert - 1826
Away in the Manger - 1887
B
Baa Baa Black Sheep - 1765
Bach Johann Sebastian (1685-1750)
Barbara Allen - 1666
Barber of Seville - Rossini 1813
Battle Hymn of the Republic - Julia Ward Howe 1862
Beautiful Dreamer Foster - 1864
Beethoven Ludwig v (1770-1827)
Berlioz, Hector (1803-1869)
Bill Bailey Won't You Please Come Home - 1902
Bizet, Georges (1838-1875)
Blue Bells of Scotland, The - 1885
Borodin, Alexander (1833-1887)
Brahms Johannes (1833-1897)
Bridal Chorus, Lohengrin - 1885
British Grenadiers - 1750
C
Camptown Races - Foster 1850
Can Can - Offenbach 1858
Careless Love - (probably trad) 1895
Carnival of Venice - Bellak 1854
Chopin Frederic (1810-1849)
Chopsticks - 1877
Clementine - 1884
Cockles and Mussels - 1750
Columbia the Gem of the Ocean - 1843
Come All Ye Faithful Reading - 1885
Pg. 90
Comin' Thru the Rye - 1796
Concerto for Piano #2 - Rachmaninoff 1901
Couperin, Francois (1668-1733)
Czerny, Carl (1791-1857)
D
Danse Macabre - Saint-Saens 1872
Death and Transfiguration- R Strauss 1891
Deck the Halls - 1784
Did you Ever See a Lassie
Down by the Riverside - 1865
Drink to Me Only with Thine Eyes - music 1780, words 1616
Drunken Sailor, The - 1891
Dufay Guillaume (c1400-1474)
Dunstable John (1370?-1453)
E
East Side, West Side (see "Sidewalks of New York")
Entertainer The - Joplin 1902
Eroica Symphony - Beethoven
Espana Tango - Albeniz 1890
Evening Hymn - Tallis 1890
Eyes of Texas Are Upon You, The - 1903
F
Fantasie Impromptu - Chopin 1855
Farmer in the Dell - 1883
Fifteen Miles on the Erie Canal
First Call (bugle call)
First Nowel, The - 1833
Fisher's Hornpipe - 1849
Flight of the Bumble Bee - Rimsky-Korsakov 1900
Flying Dutchman Overture - Wagner 1844
For He's a Jolly Good Fellow - 1783
Frankie and Johnny - 1869
Frere Jacques - 1811
Frog Went A'Courtin' - 1580
From the New World - Dvorak 1893
Funeral March - Chopin 1840
Funeral March of a Marionette - Gounod 1872
Fur Elise - Beethoven 1810
Fux Johann Joseph (1660-1741)
G
Git Along Little Dogies - 1893
Give My Regards to Broadway - 1904
Go Tell Aunt Rhody - 1844
Go Tell it on the Mountain - 1865
God Rest You Merry Gentlemen - c1770
Golden Slippers - 1879
Goober Peas - 1864
Good King Wenceslas - music Swedish 1582, words 1853-67
Good Morning to All (tune of Happy Birthday) - 1893
Goodnight Ladies - 1853
Gottschalk, Louis Moreau (1829-1869)
Gounod, Charles Francis (1818-1893)
Grand March (Aida) - Verdi
Grande Valse Brilliante - Chopin 1834
Pg. 91
Grandfather's Clock - Henry Work 1876
Greensleeves - 1580
Guido of Arezzo (d 1050 AD)
Gypsy Chorus (Carmen) – Bizet 1873
Gypsy Music - Liszt
H
Habanera (Carmen) – Bizet 1873
Hail to the Chief - Scott 1812
Hallelujah Chorus - 1741
Handel, George Frederick (1685-1759)
Happy Farmer, The - Schumann 1849
Hard Times Come Again No More - Foster 1855
Hark the Herald Angels Sing - 1855
Haydn Franz Joseph (1732-1809)
Here We Go Round the Mulberry Bush - 1857
Hey Diddle Diddle - 1765
Hickory Dickory Dock - 1765
Home on the Range - 1873
Humoresque - Dvorak 1894
Hungarian Dances - Brahms 1859-1869
Hungarian Rhapsodies, Liszt
I
I Gave My Love a Cherry - 1850
I Saw Three Ships Come Sailing - 1765
I'm a Yankee Doodle Dandy - 1904
I've Been Working on the Railroad - 1894
In the Good Old Summertime - 1902
Invitation to the Dance - Weber 1821
Irish Washerwoman - 1792
It Came Upon a Midnight Clear - 1850
J
Jingle Bells - 1857
John Henry - 1873
Johnny Has Gone for a Soldier, Irish trad.
Joshua Fit the Battle of Jericho - 1865
Josquin Des Pres (c1450-1521)
Joy to the World Handel - 1839
K
No songs or composer last names beginning with K.
L
La Boheme - Puccini
La Donna e Mobile (Rigoletto) - Verdi
Largo (New World Symphony) - Dvorak
Liebestraume - Liszt 1847
Liszt, Franz (1811-1886)
Little Boy Blue (Mother Goose) - 1765
Little Brown Jug Joe Winner - 1869
Little Jack Horner - 1765
London Bridge - 1744
Londonderry Air - 1855
Long Long Ago - Bayly 1843
Lord's Prayer, The - 1885
Lullabye, Brahms
Pg. 92
Lully Jean Baptiste (1633-1687)
M
Man on the Flying Trapeze, The - 1868
Maple Leaf Rag - 1899
March of the Toys, The - 1903
March Slav - Tchaikovsky 1876
Marriage of Figaro, The - Mozart 1786
Mary Had a Little Lamb - Sarah Josepha Hale 1866
Meet Me in St Louis, Louis - 1904
Mendelssohn-Bartholdy Felix (1809- 1847)
Messiah, The - Handel 1741
Michael Row the Boat Ashore - 1867
Mighty Fortress Is Our God, A - 1529
Minuet in G - Beethoven 1796
Monteverdi Claudio (1567-1643)
Moonlight Sonata - Beethoven 1802
More, Sir Thomas (1478-1535)
Morley, Thomas (1557-1602)
Mussorgsky, Modeste (1839-1881)
Mozart, Wolfgang A (1756-1791)
My Bonnie Lies Over the Ocean - 1881
My Old Kentucky Home - Foster 1853
N
New World Symphony - Dvorak 1893
Night on Bald Mountain, A - Mussorgsky 1887
Nocturne op. 9 no. 2 - Chopin 1832
Norwegian Dance, The - Grieg 1881
Now I Lay Me Down To Sleep - 1866
Nutcracker Suite, The - Tchaikovsky 1891-2
O
O Holy Night - 1843
O Little Town of Bethlehem - 1868
O Tannenbaum – music trad, words Ernst Anschutz 1824
Obrecht Jacob (1430-1505)
Offenbach, Jacques (1819-1880)
Oh Susannah - Foster 1848
Oh Them Golden Slippers - James A Bland 1879
Old Folks at Home, The - Foster 1851
Old MacDonald Had a Farm – music 1859, words 1706
Orpheus in the Underworld - Offenbach
P
Pat-a-Cake (Mother Goose)
Pathetique Sonata - Beethoven 1799
Pavane for a Dead Infanta - Ravel 1899
Peer Gynt Suite - Grieg 1888
Peter and the Wolf – Prokofiev 1936
Peter Peter Pumpkin Eater - 1765
Piano Concerto #1 - Tchaikovsky 1875
Piano Concerto #2 – Rachmaninoff 1901
Piano Concerto - Grieg 1873
Pictures at an Exhibition - Mussorgsky 1887
Pirates of Penzance - Gilbert & Sullivan
Pizzicato Polka - Strauss
Polly Wolly Doodle - Foster 1885
Pg. 93
Polonaise Militaire - Chopin 1840
Polovtsian Dances - Borodin 1888
Pomp and Circumstance - Elgar 1902
Pop Goes the Weasel - 1853
Prelude in C# Minor - Rachmaninoff 1893
Prelude op 28 no 7 - Chopin 1839
Purcell, Henry (1658-1695)
Q
Quantz, Johann Joachim (1697-1773)
R
Rameau, Jean Philippe (1683-1764)
Red River Valley, The - 1896
Reverie - Debussy 1895
Riddle Song, The - 1850
Robert Burns
Rock of Ages - Hastings 1832
Rock-a My Soul - 1830
Romeo and Juliet - Tchaikovsky 1871
Rossini, Gioachino Antonio (1792-1868)
Rousseau, Jean Jacques (1712-1778)
Row Row Row Your Boat - words 1852 music 1881
Rub-a-Dub-Dub (Mother Goose)
S
Sailing Sailing (Over the Bounding Main) - 1880
Sailor's Hornpipe - 1795
St Matthew Passion – Bach 1727
Scarlatti, Alessandro (1659-1725)
Scarlatti, Domenico (1685-1757)
Scheherazade - Rimsky-Korsakov 1890
Schubert, Franz Peter (1797-1828)
Schumann, Clara Josephine Wieck (1819-1896)
Schumann, Robert (1810-1856)
Semper Fidelis - Sousa 1888
Serenade - Schubert 1824
She'll Be Comin' Round the Mountain - 1899
Shenandoah - 1826
Shoo Fly Don't Bother Me - 1869
Silent Night, Holy Night – music Franz Gruber 1818, words Josef Mohr 1816 (anon. English translation)
Silver Moon - 1849
Simple Simon - 1765
Slavonic Dances - Dvorak 1887
Sleeping Beauty Waltz - Tchaikovsky 1890
Sonatas of III Parts - Henry Purcell 1683
Song of India - Rimsky-Korsakov 1897
Song of the Volga Boatman - 1867
Sorcerer's Apprentice, The - Dukas 1897
Spring Song - Mendelssohn - 1844
Star Spangled Banner - 1812
Stars and Stripes Forever March - 1897
Strauss, Joseph (1827-1870)
Streets of Laredo 1860
Sumer Is Icumen In - 1226
Swan, The - Saint-Saens 1887
Swing Low Sweet Chariot - 1872
Pg. 94
T
Ta Ra Ra Boom De Ay - 1891
Tales from the Vienna Woods J Strauss - 1868
Tallis, Thomas (1505-1585)
Taps
Tarantella (Italian trad )
Tchaikovsky, Peter Illich (1840-1893)
Telemann, Georg Philipp (1681-1767)
Tenting Tonight on the Old Camp Ground - Kittredge 1864
There is a Tavern in the Town - 1883
There Was A Crooked Man (Mother Goose)
There Was an Old Woman Who Lived in a Shoe - 1765
Three Blind Mice - 1609
Till Eulenspiegel - R Strauss 1895
Toreador Song (Carmen) - Bizet 1873
Toyland - Herbert 1903
Tramp! Tramp! Tramp! - Root 1864
Trois Gymnopedies – Satie 1888
Turkey in the Straw - 1834
Twinkle Twinkle Little Star - 1765
U
Unfinished Symphony - Schubert
V
Verdi Giuseppe (1813-1901)
Vivaldi, Antonio (1678-1741)
W
Wagner, Wilhelm Richard (1813-1883)
Waltz of the Flowers (The Nutcracker Suite) - Tchaikovsky 1891
Waltzing Matilda - 1903
We Three Kings of Orient Are - 1857
Weber, Carl Marie von (1786-1826)
Wedding March A Midsummer Night's Dream - Mendelssohn 1844
Wedding March (Lohengrin) - Wagner 1852
Wedding March – Mendelssohn 1844
Well-Tempered Clavier 1 - Bach 1722
When Johnny Comes Marching Home - Lambert 1863
When the Saints Go Marching In - 1896
Wildwood Flower (I'll Twine Mid the Ringlets) - Maude Irving & JD Webster 1860
William Tell Overture - Rossini 1829
Wolf, Hugo (1860-1903)
X
No songs or composer last names beginning with X.
Y
Yankee Doodle - 1775
Yellow Rose of Texas - 1853
Z
No songs or composer last names beginning with Z.
Pg. 95
Appendix Q: Bibliography for Further Study
Books on MIDI and Sequencing
Allen, Corey • Arranging in the Digital World • Berklee Press • 2000
Bergersen, T. • Sequencing Samples, Part 1 • Virtual Instrument Magazine •
December/January 2007
Bergersen, T. • Sequencing Samples, Part 2 • Virtual Instrument Magazine • April/May 2007
Hewitt, Michael • Composition for Computer Musicians • Course Technology • 2009
Miles-Huber, David • The MIDI Manual • Focal Press • 2007
Pedergnana, D. • Subtle Gestures • Electronic Musician • March 2005
Pejrolo, Andrea • Creative Sequencing Techniques for Music Production, 2nd Edition • Focal
Press • 2011
Pejrolo, Andrea and DeRosa, Richard • Acoustic and MIDI Orchestration for the
Contemporary Composer • 2007
Russ, F. • MIDI Mockup Microscope • Virtual Instrument Magazine • April/May 2006
Videos on MIDI, Recording and Music Production
Video tutorials are available on a number of Internet-based training sites. Subscriptions to
the websites and video series can be purchased, or, in some cases, the videos can be
obtained in DVD or tape format.
Groove3: http://www.groove3.com
Lynda.com: http://www.lynda.com
MacProVideo.com: http://www.macprovideo.com
Books on Digital Audio and Recording
Bartlett, Bruce • Practical Recording Techniques, Fifth Edition • Focal Press • 2008
Izhaki, Roey • Mixing Audio: Concepts, Practices and Tools • Focal Press • 2008
Katz, Bob • Mastering Audio: The Art and the Science • Focal Press • 2007
Miles-Huber, David and Runstein, Robert • Modern Recording Techniques, Seventh Edition •
Focal Press • 2009
Owsinski, Bobby • The Mastering Engineer’s Handbook • Thomson Course Technology •
2007
Owsinski, Bobby • The Mixing Engineer’s Handbook • Thomson Course Technology • 2006
Owsinski, Bobby • The Recording Engineer’s Handbook • Thomson Course Technology •
2009
Purse, Bill • Home Recording Basics (Ultimate Beginner Tech Start Series) • Warner Bros.
Publishing • 2000
Pg. 96
Books on Digital Audio and Multimedia
Mash, David • Ultimate Beginner Tech Start Series – Musicians and Multimedia • Warner
Bros. • 1999
Holman, Tomlinson • Sound for Film and Television, Third Edition • Focal Press • 2010
Holman, Tomlinson • Surround Sound, Second Edition • Focal Press • 2007
Shepherd, Ashley • Pro Tools for Video, Film and Multimedia • Muska & Lipman Publishing •
2003
Tozzoli, Rich • Pro Tools Surround Sound Mixing, Second Edition • Hal Leonard • 2011
Books on Technology and Music Education
Alfred Publishing Staff • Integrating Technology with Music Instruction • Alfred Publishing •
2009
Burns, Amy • Technology Integration in the Elementary Music Classroom • 2008
MENC • Spotlight on Technology in the Music Classroom • Rowman & Littlefield Education •
2003
Rudolph, Tom and Richmond, Floyd and Mash, Dave • Technology Strategies for Music
Education, Second Edition • TI:ME Publications • 2005
Rudolph, Tom • Teaching Music With Technology • GIA Publications • 2004
TI:ME, Edited by Scott Watson • Technology Guide for Music Educators • Artist Pro • 2005
Williams, David and Webster, Peter • Experiencing Music Technology • Schirmer • 2008
Music Technology & Music References
Frankel, James • The Teacher’s Guide to Music, Media and Copyright Law • Hal Leonard •
2009
Gallagher, Mitch • The Music Tech Dictionary: A Glossary of Audio-Related Terms and
Technologies • Course Technology • 2008
Holmes, Thom • The Routledge Guide to Music Technology • Routledge • 2006
Periodicals
The following periodicals are popular sources of information about current sequencer
technology. These publications include product reviews, announcements of updates, and
advertisements by leading hardware and software manufacturers.
Electronic Musician: http://www.emusician.com
EQ Magazine: http://www.eqmag.com/
Future Music Magazine: http://www.musicradar.com/futuremusic
Keyboard: http://www.keyboardmag.com
Computer Music: http://www.musicradar.com/computermusic
Recording Magazine: http://www.recordingmag.com
Sound on Sound: http://www.soundonsound.com
Pg. 97
Useful Web Links
International Music Score Library Project (IMSLP): http://imslp.org/
MIDI Manufacturers Association: http://www.midi.org
Pg. 98
Appendix R: Sequencing, Computer and Music Technology Terminology
If this is your first experience with sequencing software or a dedicated sequencer or MIDI
workstation, you should take some time to learn the essential vocabulary. For this reason,
the following glossary of important computer and sequencing terms is included; these
terms apply to virtually all available computer music software.
A/D Converter – Analog-to-digital converter. This is a device that encodes a continuously
varying (analog) audio signal into a string of discrete (digital) numeric values. Each of these
numbers represents a measurement of the amplitude of the analog signal at a particular
instant in time. A converter’s resolution is specified in bits (binary digits), typically 8-, 1216- or 24-bits. The greater the bit resolution, the less distortion of the original signal will
occur in the conversion process. The rate at which the analog signal is converted is called
the sample rate. Typical sample rates used in MIDI audio sequencers are 44.1 (CD quality),
48, 88.2, or 96 thousand times per second. The highest frequency a digital system can
reproduce is equal to one-half of the sample rate of the A/D converter (known as the
Nyquist limit).
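The relationships described above can be illustrated with a short sketch (not part of the original text): bit resolution determines the number of discrete amplitude levels, and the Nyquist limit is half the sample rate.

```python
# Illustrative sketch of A/D converter arithmetic: quantization levels
# from bit depth, and the Nyquist limit from the sample rate.

def quantization_levels(bits):
    """Number of discrete amplitude values an n-bit converter can encode."""
    return 2 ** bits

def nyquist_limit(sample_rate_hz):
    """Highest frequency a digital system can reproduce: half the sample rate."""
    return sample_rate_hz / 2

print(quantization_levels(16))   # 65536 levels at CD-quality bit depth
print(nyquist_limit(44100))      # 22050.0 Hz for a 44.1 kHz sample rate
```

Note how a 16-bit converter distinguishes 65,536 amplitude steps, which is why higher bit depths reduce quantization distortion.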
ADSR – Attack, Decay, Sustain and Release. These are the four most commonly used
segments of an envelope generator. An example: When an envelope generator is used to
control a synthesizer’s note volume over time, Attack controls the time it takes the volume
to reach its initial peak level, while Decay governs the time it takes for the volume to
transition to a steady “sustain” level. Sustain sets the level at which the volume holds while
the key remains pressed, and Release controls the time it takes for the volume to fade out
once a “note off” command is received. Envelope generators are most commonly found in
received. Envelope generators are most commonly found in synthesizers, but can be
simulated (see Envelope).
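As a hedged illustration of the four segments (the times and levels below are made up, not taken from any particular synthesizer), a piecewise-linear ADSR envelope can be sketched like this:

```python
# Illustrative piecewise-linear ADSR amplitude envelope.
# attack, decay, release are times in seconds; sustain_level is 0.0-1.0.

def adsr_level(t, attack, decay, sustain_level, release, note_off_time):
    """Amplitude (0.0-1.0) at time t for a note released at note_off_time."""
    if t < attack:                                  # Attack: rise to peak
        return t / attack
    if t < attack + decay:                          # Decay: fall toward sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain_level)
    if t < note_off_time:                           # Sustain: hold a steady level
        return sustain_level
    frac = (t - note_off_time) / release            # Release: fade to silence
    return max(0.0, sustain_level * (1.0 - frac))

print(adsr_level(0.1, 0.1, 0.2, 0.7, 0.5, 1.0))  # 1.0 (peak at end of attack)
print(adsr_level(0.5, 0.1, 0.2, 0.7, 0.5, 1.0))  # 0.7 (sustain plateau)
```

The sketch makes the distinction concrete: Attack, Decay, and Release are durations, while Sustain is a level held until note-off.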
After Touch – A MIDI parameter that describes the intensity of modulation applied to a
note after it has been played and before a note off is generated. With MIDI keyboard
controllers, after touch sensors in the keyboard measure the pressure applied at the
bottom of key travel, and can generate polyphonic after touch (individual MIDI values are
generated for each note pressed) or monophonic (one MIDI value is generated for all notes
pressed).
Aliasing – A highly audible form of digital distortion that manifests as a modulated
whistling sound. It is caused when an audio signal containing frequencies higher than
one-half of the sample rate is introduced into a digital system (A/D converter).
All Notes Off – A MIDI message that turns off all sounding notes in a MIDI network. Helpful
if you have a “stuck” MIDI note, as can occur when an instrument never receives the note-off
message for a sounding note (for example, because the sending unit was switched off
mid-note).
Arpeggiator – A feature on some sequencers that retriggers notes within a held chord and
changes the order in which they are heard. Typical note orders include Up, Down, Up and
Down, Random, and As Played. Most arpeggiators can be programmed to alter MIDI data
for pitch, duration, timing, and velocity of notes, and some are capable of creating guitar
strumming effects or drum patterns.
Autolocate – The ability to locate and/or set specific temporal locations available in some
sequencers, allowing the user to instantly return to one of these predefined locations.
Autocorrect – See Quantization.
Bank – A container in memory that can store multiple sounds, samples, patterns, etc. An
individual MIDI bank can hold up to 128 items (numbered 0–127), most commonly
synthesizer patches. MIDI allows individual banks to be selected using the Bank Select
command.
Binary – A numbering system consisting of only the numbers 0 and 1, binary is the basis of
all computer languages including MIDI.
Bit – A contraction of “binary digit”, a bit is the basic unit of information in MIDI and in
computer systems in general. A bit can have only one of two values—zero and one. Eight
bits constitute a byte, and a byte can contain 128 unique values (from 0 to 127 in decimal).
MIDI messages are generally transmitted in bytes.
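A quick illustrative calculation (not from the original text) shows where the 0–127 range that appears throughout MIDI comes from:

```python
# Illustrative: value ranges of a full 8-bit byte vs. a 7-bit MIDI data byte.
full_byte_values = 2 ** 8   # 256 values (0-255)
midi_data_values = 2 ** 7   # 128 values (0-127): MIDI data bytes keep the top bit 0
print(full_byte_values, midi_data_values)
```

This is why velocities, controller values, and program numbers all run from 0 to 127.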
Buffer – An area of RAM used for temporary data storage. When MIDI is copied, the copy
resides in a buffer, usually until it is replaced.
Byte – Eight bits (see Bit).
Channel – A term applied to MIDI for one of its 16 available software transmission lines
over which MIDI data can be sent or received.
Channelize – A term used for assigning MIDI data to a particular MIDI channel.
Click Track – A metronome track generated by the sequencer to which a performer may
listen as they record overdubs. This will help to maintain rhythmic cohesiveness during the
course of recording a sequence.
Clock – A master timing reference used by a sequencer to maintain a tempo. MIDI clock,
which substitutes MIDI messages for the master clock’s electronic pulses, may be sent to
other time-based MIDI devices, including other sequencers and some effects, to
synchronize them to the master sequencer.
Contiguous – Items that are immediately next to each other; in sequencing this usually
refers to adjacent MIDI regions (see antonym Non-Contiguous).
Continue – A MIDI message that tells a sequencer or drum machine to continue playing
from the current location if stopped by a previous MIDI Stop message.
Continuous Controller – A MIDI parameter that generates data over a range of values, as
opposed to a switch controller that has only two possible states: on or off.
Controller #7 – The controller number assigned to effect MIDI volume changes.
DAC – Digital-to-analog converter; a circuit that accepts a stream of digital data which
represents the amplitude of a sound wave, and produces a corresponding analog voltage at
its output that can be fed to a speaker or headphone system.
DAW – Digital Audio Workstation; a software program that provides recording, editing, and
playback facilities for both MIDI and digital audio.
Default – When several options are available within a computer program and you do not
explicitly pick one, a value is assigned automatically by default. Using the program’s
“Preferences” (Mac) or “Options” (Win) settings, you can assign personal default settings
that take effect when you launch the program.
Default Window – A computer- or user-assigned window that appears when a program is
first launched.
Dialog Box – A box on the screen requesting information or a decision from you.
Digitize – To convert an analog audio signal into a digital code that represents that signal
(see A/D Converter).
Disable – To turn off a function in a sequencer, as in “disable the track arm button.”
Double-clicking – positioning the pointer and then quickly pressing and releasing the
button on the mouse twice.
Dynamics – Fluctuations in volume; also refers to the class of processors that effect volume
levels, including compressors and limiters.
EQ – Short for equalizer, a device or plug-in that allows attenuation or emphasis of
frequencies in an audio signal. Bass and treble controls on a radio represent a simple form
of EQ.
Enable – To turn on a function in a sequencer, as in “enable real time quantization.”
Envelope – A term that describes how a sound changes over time with respect to volume,
timbre, or pitch (see ADSR).
Event – In MIDI, this refers to a single and complete MIDI message. Note on, note off, or
pitch bend messages may each be referred to as a MIDI event, and displayed in a sequencer
in a numeric, chronological format within a window called the Event List.
Field – A box in a dialog window into which you type information, such as word or
numerical data.
File Format – Refers to how digital data is organized and stored such that it is available for
use in other software applications. There are file formats for digital audio (WAV and AIFF)
as well as for MIDI (SMF or Standard MIDI File format).
Graphic Editing – An editing option that shows and manipulates data pictorially, as
opposed to using numbers or text.
Hard Disk Recording – The process of recording digital audio signals directly to a hard
drive for storage and playback. Modern sequencers often include hard disk recording
features, turning them into full-fledged DAWs.
Hexadecimal – A numbering system based on sixteen values, as opposed to the decimal
system’s ten values, with the letters “A” through “F” providing the additional six values.
MIDI code is often expressed in hexadecimal, because it is compact and easy to differentiate
from decimal.
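As a hedged example of why hexadecimal suits MIDI, the same status byte can be written in decimal, hex, and binary, and split into its two nibbles (one hex digit each):

```python
# Illustrative: one MIDI status byte in three notations.
# 0x90 is Note On for channel 1 (command nibble 9, channel nibble 0).
status = 0x90
print(status)            # 144 in decimal
print(hex(status))       # '0x90'
print(bin(status))       # '0b10010000'

command = status >> 4    # high nibble: 9 = Note On
channel = status & 0x0F  # low nibble: 0 = MIDI channel 1
print(command, channel)
```

In hex, each nibble of a MIDI byte maps to exactly one digit, which is what makes the notation compact and easy to read.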
Humanize – To add minute variations in a sequencer’s data to create a more expressive
performance.
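A minimal sketch of a humanize pass (the note representation and jitter amounts below are illustrative assumptions, not taken from any particular sequencer):

```python
# Illustrative "humanize" pass: nudge each note's start time (in ticks) and
# velocity by a small random amount to loosen a rigidly quantized track.
import random

def humanize(notes, time_jitter=10, vel_jitter=8):
    """notes: list of dicts with 'tick' and 'velocity'; returns varied copies."""
    out = []
    for n in notes:
        vel = n['velocity'] + random.randint(-vel_jitter, vel_jitter)
        out.append({
            'tick': n['tick'] + random.randint(-time_jitter, time_jitter),
            'velocity': max(1, min(127, vel)),   # clamp to the legal MIDI range
        })
    return out
```

Keeping the variations minute (a few ticks, a few velocity steps) is the point: large offsets sound sloppy rather than expressive.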
Interface – A device that allows for the transfer, input, or viewing of information. The
computer screen is an interface that displays information. The way in which software is
designed to accept data would be its interface.
Launch – Double-clicking on a computer application’s icon to start the program.
Local Control – A MIDI feature that determines whether a keyboard’s voice generators are
controlled by the unit’s keyboard (Local On) or by the MIDI In port (Local Off).
LFO – Low-Frequency Oscillator. An oscillator whose frequency is below the range of
human hearing, generally from 0 to 20Hz. LFOs are used to modulate other oscillators with
regard to pitch, volume, or timbre.
Loop – Describes a portion of a music sequencer’s tracks that repeat for a specified number
of times or indefinitely.
Macros – A combination of commands that may be executed after one computer command
or keystroke(s).
Menu – A list of functions available in a computer program or part of a computer program.
May have pull-down options (submenus) when a menu item is selected with a mouse.
Menu Bar – A strip, usually located at the top of a window, used to select an option or
command from a menu.
MIDI File – A shortened version of Standard MIDI File or SMF (see Appendix D).
MIDI Interface – A device that converts MIDI data into a format that can be understood by a
computer. With the advent of USB-equipped MIDI devices, the separate MIDI Interface is
nearly extinct.
MIDI Merge – A process whereby a MIDI device accepts multiple MIDI sources and
combines them into one. In a sequencer, MIDI Merge is a function that allows recording of
new MIDI data over existing data without altering the latter.
MMC – MIDI Machine Control: Refers to a group of MIDI commands that provide transport
control (start, stop and record) to other MIDI devices, including some older MMC-equipped
tape recorders.
MTC – MIDI Time Code: Refers to MIDI messages that contain the information embedded in
SMPTE timecode, allowing MIDI devices to operate in synchronization with SMPTE-driven
devices.
Nibble – Four bits constitute a nibble, a value used to describe command and channel
information within a MIDI byte (See Bit).
Non-Contiguous – Items that are not directly adjacent to each other; in sequencing this
usually refers to non-adjacent MIDI regions (See antonym Contiguous).
Pan – Short for panorama, a control that places the audio signal at a specific point within
the stereo field of two speakers.
Patch – See Preset.
PPQ (Pulse Per Quarternote) – The number of clock (sync) pulses into which a sequencer
or drum machine subdivides a quarter note as a rhythmic reference. The higher the PPQ
number, the finer the sequencer’s resolution. This number must be divisible by
three to allow triplets. The most common PPQ available in DAW applications is 960 (see
Tick).
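The divisibility requirement can be checked with a short illustrative calculation at the common 960 PPQ resolution mentioned above:

```python
# Illustrative tick durations at 960 PPQ.
PPQ = 960
quarter = PPQ              # 960 ticks
eighth = PPQ // 2          # 480 ticks
sixteenth = PPQ // 4       # 240 ticks
eighth_triplet = PPQ // 3  # 320 ticks -- an exact value because 960 divides by 3
print(quarter, eighth, sixteenth, eighth_triplet)
```

A PPQ not divisible by three (say, 100) would leave triplet durations as non-integer tick counts, which is why sequencer resolutions are chosen the way they are.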
PPQN – See PPQ (Pulse Per Quarternote).
Preset – Specific settings stored in a synthesizer to create a particular sound. A preset will
be assigned to a specific MIDI standard program number to facilitate MIDI Program Change
commands.
Program Change – A MIDI message that instructs the receiving MIDI device to switch to a
different preset/patch. If not accompanied by a Bank Select command, a Program Change
command will change to another program within the current bank. Program Change
commands are MIDI channel-specific.
Punch In – To initiate recording at a specific point on a particular track in a composition
with a sequencer. Punching in will either erase existing material recorded starting at the
punch-in point, or it can add new material on top of existing material (sound on sound).
Punch Out – To exit the recording process after initiating a punch in.
Quantization – All sequencers allow the user to correct timing to a specified rhythmic
value (e.g., eighth notes, sixteenth notes, eighth-note triplets). Quantization should be
used sparingly, as heavy quantization can make sequencer tracks sound mechanically
perfect or robotic.
RAM – Random Access Memory is computer memory used to temporarily store data. Both
computer applications and documents are loaded into RAM as you work.
Work in progress must be regularly saved from RAM to a hard drive since, when you turn
off your computer, all data in RAM is lost.
Real Time Recording – To record data into a sequencer’s memory as it is being played on a
keyboard or other controller. In early computer programs, a composition had to be entered
one note at a time (see Step Time).
Scroll View – The music is viewed as a continuous horizontal band on the computer screen.
The computer redraws the screen quickly in Scroll View.
Sampler – A device that records and stores digital representations of actual sounds into its
digital memory to be played back on command from a keyboard, MIDI controller or
sequencer.
Sampling – The act of recording sound into a sampler or computer memory.
Sampling Rate – The rate at which a signal is digitized into samples (i.e., the number of
samples per second). Reading back the samples at the same rate reproduces the original
sound, while playing back at a higher or lower rate varies the pitch on playback.
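The pitch change from a playback-rate mismatch follows directly from the frequency ratio; a small illustrative sketch (not from the original text):

```python
# Illustrative: pitch shift (in equal-tempered semitones) caused by playing
# samples back at a different rate than they were recorded.
import math

def pitch_shift_semitones(record_rate, playback_rate):
    """12 semitones per doubling of the playback/record rate ratio."""
    return 12 * math.log2(playback_rate / record_rate)

print(pitch_shift_semitones(44100, 88200))  # 12.0 -- double speed = octave up
print(pitch_shift_semitones(44100, 22050))  # -12.0 -- half speed = octave down
```

This is the classic "chipmunk" effect: doubling the playback rate raises the pitch exactly one octave.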
SMPTE – A time-based code that originated at NASA for logging telemetry data that was
later adopted and modified by the Society of Motion Picture and Television Engineers
(SMPTE) to label each frame of a video tape by recording a unique piece of digital data on
that frame. For the American standard (NTSC), each second of SMPTE timecode is divided
into 29.97 frames. A complete timecode address includes hours:minutes:seconds:frames =
00:00:00:00.
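As a hedged sketch of how a frame count maps to a timecode address, the example below uses non-drop 30 fps for simplicity; true NTSC runs at 29.97 fps with drop-frame counting, which is more involved:

```python
# Illustrative: format a running frame count as an
# hours:minutes:seconds:frames SMPTE-style address (non-drop 30 fps).
def to_timecode(total_frames, fps=30):
    frames = total_frames % fps
    seconds = (total_frames // fps) % 60
    minutes = (total_frames // (fps * 60)) % 60
    hours = total_frames // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(to_timecode(108000))  # "01:00:00:00" -- one hour at 30 fps
```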
Song Position Pointer – A MIDI message that describes where a sequencer or drum
machine is (or should locate to) in reference to the beginning of the composition.
Start – A MIDI message that tells the sequencer or drum machine when to start and follow
MIDI timing messages.
Status Byte – A byte used in MIDI to identify the particular message type to which
subsequent data bytes relate.
Step Time – To enter notes into a sequencer or drum machine one note or chord at a time.
Usually, a note value resolution is selected beforehand and the pitch information is
supplied by the controller or keyboard.
Stop – A MIDI message that tells a sequencer or drum machine to stop playback or
recording.
Sysex – System Exclusive Messages: MIDI messages that are unique to a particular
manufacturer. These allow the manufacturer to send data (e.g., presets) that relate only to
their specific products and models.
Template – A file that does not contain any note data but is pre-formatted for special
layouts, such as projects preloaded with tracks, virtual instruments, presets and
customized track inputs and outputs, etc. You can use the pre-made templates that come
with your software package or design your own as a time saver when you create a project,
session or score.
Tick – A contemporary term for the smallest increment of a beat; its value is dependent
upon the available resolution of the MIDI device (see PPQ).
Toggle – A computer command option that allows you to move between two possible states,
like a toggle switch; for example, on or off, page view or scroll view, etc.
Track Shift – To shift or slide a sequencer track ahead or behind in time, usually in small
increments.