Chapter 11 Main Points

► The production chain (in nonmusic production) generally begins with the talking
performer and therefore involves considerations that relate to producing speech.
► How speech is produced depends on the type of program or production; the
medium—radio, TV, film—and, in TV and film, whether the production technique
is single- or multicamera; whether it is done in the studio or in the field; and
whether it is live, live-to-recording, or produced for later release.
► The frequency range of the human voice is not wide compared with that of other
instruments. The adult male’s fundamental voicing frequencies are from roughly
80 to 240 Hz; for the adult female, they are from roughly 140 to 500 Hz.
Harmonics and overtones carry these ranges somewhat higher (ranges for the
singing voice are significantly wider).
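As a quick illustration of how harmonics extend the voice's spectrum beyond its fundamentals, here is a minimal Python sketch; the 120 Hz fundamental is an assumed example, not a figure from the text:

```python
# Minimal sketch: harmonics are integer multiples of the fundamental, so a
# male voice whose fundamentals top out near 240 Hz still has spectral
# energy well above that. The 120 Hz value is an assumed example.
fundamental_hz = 120.0
harmonics = [fundamental_hz * k for k in range(1, 9)]
print(harmonics)  # [120.0, 240.0, 360.0, 480.0, 600.0, 720.0, 840.0, 960.0]
```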
► Speech intelligibility is at a maximum when levels are about 70 to 90 dB-SPL.
Certain frequencies, particularly in the midrange, are also more critical to speech
intelligibility than others. For example, an absence of frequencies above 600 Hz
adversely affects the intelligibility of consonants; an absence of frequencies
below 600 Hz adversely affects the intelligibility of vowels.
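One way to hear this effect is to split a speech recording at 600 Hz and audition each half separately. A minimal sketch, assuming NumPy/SciPy and the soundfile package, with speech.wav as a hypothetical mono recording:

```python
# Minimal sketch: split speech at 600 Hz to hear the effect on
# intelligibility. "speech.wav" is a hypothetical mono input file.
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("speech.wav")  # hypothetical input

# Fourth-order Butterworth filters split the signal at 600 Hz.
lp = butter(4, 600, btype="lowpass", fs=sr, output="sos")
hp = butter(4, 600, btype="highpass", fs=sr, output="sos")

below_600 = sosfilt(lp, audio)  # vowels survive; consonants lose definition
above_600 = sosfilt(hp, audio)  # consonants survive; vowels thin out

sf.write("below_600.wav", below_600, sr)
sf.write("above_600.wav", above_600, sr)
```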
► Influences of nonverbal speech on meaning include emphasis, inflection, speech
patterns, pace, mood, and accent.
► Acoustics in a speech studio should be relatively free of reverberation so as
not to unduly color the sound.
► Acoustical phase is the time relationship between two or more sound waves at a
given point in their cycles. Polarity is not related to the time offset between two
similar waves; it is related to the positive or negative direction of a force—
acoustical, electrical, or magnetic.
► In electricity, polarity is the relative position of two signal leads—the high (+) and
the low (–)—in the same circuit.
► When acoustical waves are in phase—roughly coincident in time—their
amplitudes are additive. When these waves are out of phase—not coincident in
time—their amplitudes are reduced.
► By changing relative time relationships between given points in the cycles of two
sound waves, a phase shift occurs. Phase shift is referred to as delay when it is
equal in time at all frequencies.
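These phase and polarity points can be checked numerically. A minimal sketch, assuming NumPy; note that for a single sine wave a 180-degree phase shift and a polarity flip happen to produce the same result, which is exactly why the two concepts are easy to confuse (for a complex signal, a polarity flip is a sign inversion, not a time offset):

```python
import numpy as np

sr = 48_000
t = np.arange(sr) / sr           # one second of sample times
a = np.sin(2 * np.pi * 440 * t)  # reference 440 Hz sine wave

in_phase  = a + np.sin(2 * np.pi * 440 * t)          # coincident in time
out_phase = a + np.sin(2 * np.pi * 440 * t + np.pi)  # 180 degrees apart
flipped   = a + (-a)                                 # polarity inversion

print(np.abs(in_phase).max())   # ~2.0: amplitudes are additive
print(np.abs(out_phase).max())  # ~0.0: amplitudes cancel
print(np.abs(flipped).max())    # 0.0: sign flip, not a time offset
```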
► Evaluation of a microphone for speech includes at least four criteria: clarity,
presence, richness, and versatility.
► Factors that influence a microphone’s effect on the sound of the speaking (and
singing) voice are its sound quality, directional pattern, and mic-to-source
distance.
► The closer a mic is to a sound source, the warmer, more detailed, and denser
the sound (and the more oppressive, if too close), and the closer the listener
perceives the speaker to be.
► The farther a mic is from the sound source, the less warm, less detailed, and
thinner the sound (if there is little reverberation), and the more distant the
listener perceives the speaker to be.
► Performing speech generally involves either the voice-over (VO)—recording copy
to which other sonic material is added—or dialogue. Voice-over material includes
short-form material, such as spot announcements, and long-form material, such
as documentaries and audiobooks.
► Speech sounds are harmonically and dynamically complex.
► Voice acting involves taking the words off the page and conveying meaning,
making them believable and memorable.
► Among the considerations of a voice actor in bringing the appropriate delivery to
copy are voice quality, message, audience, word values, and character.
► Prerecorded voice collections provide a variety of words, phrases, sayings, and
vocal sounds to use in the absence of, or to supplement, a voice actor.
► When hundreds of voice-overs may be required for projects such as video
games and online dictionaries, automation becomes necessary, and software
programs are available to handle these needs.
► Recording speech begins with good acoustics. Mediocre acoustics can make
speech sound boxy, oppressive, ringy, or hollow.
► Producing a solo performer and a microphone is a considerable challenge: there
is no place to hide.
► Among the things to avoid in recording speech are plosives, sibilance,
breathiness, and tongue and lip smacks.
► Mic-to-source distance depends on the strength, resonance, and intensity of the
performer’s delivery, but it should be close enough to retain intimacy without
sounding too close or oppressive.
► Recording is usually done flat so that there is no sonic “baggage” to deal
with in the mix, or in case the recording has to be continued on another day
or in a different studio. Any signal processing done during recording is
typically quite minimal.
► In producing a voice-over, if ambience is important to the overall delivery, it may
be necessary to record it separately, after the performer finishes recording, by
letting the recorder overroll.
► One time-consuming factor often overlooked in recording speech is the editing. If
mistakes are made, rather than leave them for the editor, try for a trouble-free
retake.
► Making the performer comfortable is of the utmost importance.
► To avoid generating unnecessary noise, a performer's clothing should be
cotton rather than a synthetic fabric, and loose-fitting rather than tight for
comfort.
► Doing a voice-over when elements such as background music and sound effects
are to be added later requires care to ensure that, when mixed, these elements
do not interfere with the intelligibility of the copy.
► In mixing audio for a spot announcement, fading the music after it is established
to create a hole for the announcement and then reestablishing the music at its
former full level is called a donut.
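For illustration, here is a minimal sketch of a donut as a gain envelope on the music bed; all durations and levels are assumed values, not figures from the text:

```python
import numpy as np

sr = 48_000
full, duck = 1.0, 0.25  # assumed music levels: established vs. under the copy

def donut_envelope(total_s, hole_start_s, hole_end_s, fade_s=0.5):
    """Gain curve: establish the music, fade down to a 'hole' for the
    announcement, then reestablish the former full level."""
    n = int(total_s * sr)
    env = np.full(n, full)
    hs, he, f = int(hole_start_s * sr), int(hole_end_s * sr), int(fade_s * sr)
    env[hs:hs + f] = np.linspace(full, duck, f)   # fade the music down
    env[hs + f:he] = duck                         # hold under the announcement
    env[he:he + f] = np.linspace(duck, full, f)   # reestablish full level
    return env

# A 30-second spot with the hole from 0:05 to 0:25 (assumed timings).
music_gain = donut_envelope(total_s=30, hole_start_s=5, hole_end_s=25)
```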
► Using compression, in moderation, enhances the spoken word by giving it clarity
and power.
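As a rough illustration of what a compressor does to level, here is a minimal sketch of static (instantaneous) compression, assuming NumPy; real compressors add attack and release smoothing, which this deliberately omits:

```python
import numpy as np

def compress(x, threshold_db=-18.0, ratio=3.0, makeup_db=6.0):
    """Reduce level above the threshold by the ratio, then add makeup gain.
    Parameter values are assumed, moderate settings for speech."""
    eps = 1e-12  # avoids log of zero on silent samples
    level_db = 20 * np.log10(np.abs(x) + eps)           # per-sample level
    over_db = np.maximum(level_db - threshold_db, 0.0)  # dB above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10 ** (gain_db / 20)
```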
► Backtiming is a method of subtracting the time of a program segment from the
total time of a program so that the segment and the program end simultaneously.
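The arithmetic is simple subtraction, as in this minimal sketch with assumed durations:

```python
# Backtiming: a 2:45 closing segment in a 30:00 program must start at
# 27:15 so that both end together. All durations are assumed examples.
program_len_s = 30 * 60        # 30:00 program
segment_len_s = 2 * 60 + 45    # 2:45 closing segment

start_at_s = program_len_s - segment_len_s
print(divmod(start_at_s, 60))  # (27, 15) -> start 27:15 into the program
```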
► Deadpotting, also called deadrolling, is starting a recording with the fader turned
down all the way.
► Three types of narration are direct, indirect, and contrapuntal.