Digital Media Curriculum Primer

The material in this handout is excerpted from “Digital Media Curriculum Primer”
a work written by Dr. Yue-Ling Wong (ylwong@wfu.edu), Department of Computer Science
and Department of Art, Wake Forest University and supported by the
National Science Foundation under Grant No. DUE-0127280.
http://digitalmedia.wfu.edu/project/digital-media-curriculum-development/
This handout was prepared for students in MMP320 Multimedia Networks at Borough of
Manhattan Community College, City University of New York as part of a curriculum redesign
project supported by National Science Foundation Grant No. DUE NSF-0511209, Co PI’s
Christopher Stein (cstein@bmcc.cuny.edu) and Jody Culkin (jculkin@bmcc.cuny.edu)
http://teachingmultimedia.net
Capturing and Editing Digital Audio*
Acquiring Digital Audio
Digital audio can be acquired by two methods:
(1) recording sound and storing it directly in digital format;
(2) digitizing analog audio media, such as analog audio tapes.
In this section, we will discuss the hardware and software that may be needed for
these two methods.
Recording
In order to record audio directly into digital format or digitize analog audio, your
computer needs to have a sound card. A sound card acts as an analog-to-digital
converter: it converts the electrical signals into digital format by sampling the
signals at the sampling rate and bit depth you choose.
To record sound, you will also need a microphone. Many laptops come with one
built-in, but the sound quality is usually not good enough for many multimedia
projects.
Some microphones are designed to have a directional response to sound, for
example, uni-directional and bi-directional. Their sensitivity to sound varies
with the angle at which the sound approaches the microphone. As a general rule for
recording with a microphone, if you do not have access to the specifications of
the microphone, place the sound source directly in front of the microphone.
(Figure: example specifications for a uni-directional microphone.)
For the software requirement, you need a digital audio recording program.
Most, if not all, digital audio editing programs let you record audio. Before you
start recording, you need to specify the sampling rate, bit depth, and the
number of channels you want to use. The sampling rates and bit depths available
for recording are determined by the capabilities of the sound card hardware.
Audio software usually has level meters that let you monitor the recording
level during recording. The optimal recording level keeps the meters just
brushing the red area without going over it. Going over the red area
causes clipping and distortion in your recording. If the level is too low, the
recording will be too soft.
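A recording step like this can also be scripted. The following is a minimal sketch in Python, assuming the third-party sounddevice and soundfile libraries are installed; the sampling rate, bit depth (16-bit), and channel count correspond to the settings discussed above, and the peak check is only a crude stand-in for a level meter.

import numpy as np
import sounddevice as sd   # assumed third-party library for audio input/output
import soundfile as sf     # assumed third-party library for writing WAV files

SAMPLE_RATE = 44100        # samples per second (CD quality)
CHANNELS = 2               # stereo
DURATION = 5               # length of the recording in seconds

# Record directly into digital format: 16-bit samples at the chosen rate.
recording = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=CHANNELS, dtype='int16')
sd.wait()                  # block until the recording is finished

# Crude level check: warn if the recording reached the maximum 16-bit value.
if np.max(np.abs(recording.astype(np.int32))) >= np.iinfo(np.int16).max:
    print("Warning: the recording may be clipped (level went into the red).")

# Store the recording in a digital format (WAV).
sf.write('recording.wav', recording, SAMPLE_RATE)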
Sound Card
Instead of being installed as a separate card on a computer, the sound
subsystem is now commonly built into the motherboard of the computer.
• Recording: the sound card acts as an analog-to-digital converter, converting
the electrical signals into digital format.
• Playback: the sound card acts as a digital-to-analog converter, reconstructing
an analog waveform from the stored samples.
A sound card typically has line-out, line-in, and microphone-in ports.
Digitizing Analog Medium
To digitize an analog audio medium, such as a cassette tape, connect the line-out of
whatever device contains your analog audio to the line-in port on your sound
card using an audio cable. Always digitize a little more before and after the
segment that you need.
Workspace Elements in Digital Audio Editing Programs
Basic Editing – Working with a Single Audio File
The two most basic elements you will find in a digital audio editing program
are:
(1) a waveform display window;
(2) the transport controls, where you find the play, record, rewind, and fast-forward
buttons.
In the waveform display, the x-axis represents time and the y-axis represents
the amplitude value.
Audio Mixing – Working with Multiple Audio Files
In audio mixing, you can import multiple audio files into the software and place
each one on its own track.
It is advantageous to create or record audio segments separately and mix them later,
so that you can adjust the volume level and apply effects such as fade-in and
fade-out to each track individually.
Spectral View
In spectral view, the x-axis of the display is still time, as in the waveform display,
but the y-axis is the frequency of the audio, not the amplitude. The amplitude
value is now represented using color.
In Adobe Audition, greater amplitude is represented by a brighter color. In the
example spectral display, the middle section of the audio contains no sound higher
than 3000 Hz; the brightest color in this section occurs in the range of about
300-500 Hz. In the section right after it, the brightest color occurs around 4000
Hz, but the 300-500 Hz range is very dark (low in amplitude).
When working in spectral view, you can still select audio data. Selecting audio
data in the spectral view lets you specify both a frequency range and a time
range. Thus, working in the spectral view also allows you to select and
edit just the specified frequencies.
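To relate the two displays, the sketch below (assuming NumPy, SciPy, and Matplotlib) plots the same signal both as a waveform (time versus amplitude) and as a spectrogram, the plot underlying a spectral view (time versus frequency, with amplitude shown as color). The two-tone test signal is only an illustration.

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

rate = 22050
t = np.arange(0, 2.0, 1 / rate)                 # two seconds of samples
# Illustrative signal: a 400 Hz tone for one second, then a 4000 Hz tone.
audio = np.where(t < 1.0, np.sin(2 * np.pi * 400 * t), np.sin(2 * np.pi * 4000 * t))

fig, (wave_ax, spec_ax) = plt.subplots(2, 1)

# Waveform view: x-axis is time, y-axis is amplitude.
wave_ax.plot(t, audio)
wave_ax.set_ylabel('amplitude')

# Spectral view: x-axis is time, y-axis is frequency, amplitude shown as color.
f, tt, Sxx = signal.spectrogram(audio, rate)
spec_ax.pcolormesh(tt, f, Sxx, shading='auto')
spec_ax.set_xlabel('time (s)')
spec_ax.set_ylabel('frequency (Hz)')

plt.show()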
Spectral view is useful in analyzing the frequency range of the audio. It helps you
get an idea of the most important frequency range of the audio.
Recall the implication of the Nyquist theorem: the sampling rate of the audio
must be at least twice the highest frequency you want to preserve. The audio content
with the highest amplitude is usually the most important part of the audio. If the
frequency range with the highest amplitude is around 4000 Hz, then resampling your
audio to 8000 Hz will be acceptable, because content up to 4000 Hz can still be
represented at that rate.
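As a sketch of that downsampling step (assuming SciPy and the soundfile library, and a hypothetical input file speech.wav), the following resamples a 44,100 Hz recording to 8,000 Hz, which by the Nyquist theorem can still represent frequencies up to 4,000 Hz.

import soundfile as sf
from scipy import signal

audio, original_rate = sf.read('speech.wav')   # hypothetical input, e.g. 44,100 Hz
target_rate = 8000                             # 2 x 4000 Hz, per the Nyquist theorem

# Resample by the rational factor target_rate / original_rate.
downsampled = signal.resample_poly(audio, target_rate, original_rate)
sf.write('speech_8000.wav', downsampled, target_rate)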
Basic Digital Audio Editing
Re-assembling Waveform
The audio waveform can be selected and an operation such as Cut, Copy, or
Delete applied to the selection. Selecting a segment of the waveform can be
accomplished by simply dragging the mouse cursor over the waveform. Because the
waveform is a visual representation of the audio over time, it is possible to
identify the segment of the waveform that corresponds to a particular sound.
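Internally, these operations amount to rearranging the array of samples. Below is a minimal sketch (assuming NumPy and soundfile, with take1.wav as a hypothetical input) that cuts a one-second segment and pastes it at the beginning of the file.

import numpy as np
import soundfile as sf

audio, rate = sf.read('take1.wav')        # hypothetical input file

# Select a segment by time: here, the second running from 2.0 s to 3.0 s.
start, end = int(2.0 * rate), int(3.0 * rate)
segment = audio[start:end].copy()                            # Copy
remainder = np.concatenate([audio[:start], audio[end:]])     # Cut / Delete
reassembled = np.concatenate([segment, remainder])           # Paste at the beginning

sf.write('take1_edited.wav', reassembled, rate)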
Changing Volume
You can adjust the volume, or amplitude, of an audio file. Amplify lets you specify
the amplification in dB (decibels) or as a percentage. Normalize lets you set a peak
level for a file or selection; it then amplifies the entire file or selection so that
the highest level is the peak level you specify. Use Normalize instead of Amplify to
ensure the amplification does not result in clipping. For example, if you
normalize an audio file to 100%, you achieve the maximum amplitude that digital
audio allows.
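A minimal sketch of both operations, assuming NumPy and floating-point samples in the range -1.0 to 1.0; the gain values are only examples.

import numpy as np

def amplify(audio, gain_db):
    # A gain in dB corresponds to a linear factor of 10^(dB / 20).
    return audio * (10.0 ** (gain_db / 20.0))

def normalize(audio, peak=1.0):
    # Scale so the loudest sample reaches the requested peak (1.0 = 100%),
    # which guarantees the result does not clip.
    return audio * (peak / np.max(np.abs(audio)))

# Example: a quiet 440 Hz test tone, boosted by 6 dB or normalized to 100%.
samples = 0.25 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
louder = amplify(samples, 6.0)        # roughly doubles the amplitude; could clip
full_scale = normalize(samples, 1.0)  # peaks at exactly full scale, so it cannot clip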
Fade-ins and fade-outs are an extended application of volume adjustment.
A fade-in suppresses the volume at the beginning and gradually increases it to
100% over the designated fade-in duration. Similarly, a fade-out lowers the volume
gradually to zero over the fade-out period.
The Envelope effect allows you to control volume changes over time using a
curve. The x-axis of the envelope graph represents time and the y-axis the
amplification level. There are often presets available for the Envelope effect,
and you can also define your own curve.
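A minimal sketch of these effects, assuming NumPy and floating-point samples; the one-second fade lengths and the three-point envelope curve are arbitrary examples.

import numpy as np

rate = 44100
audio = 0.8 * np.sin(2 * np.pi * 440 * np.arange(5 * rate) / rate)   # 5 s test tone

# Fade-in: ramp the gain from 0 to 1 over the first second.
fade_len = rate
audio[:fade_len] *= np.linspace(0.0, 1.0, fade_len)

# Fade-out: ramp the gain from 1 to 0 over the last second.
audio[-fade_len:] *= np.linspace(1.0, 0.0, fade_len)

# Envelope: a gain curve defined by control points (time in seconds, gain),
# interpolated over the whole file; the x-axis is time, the y-axis amplification.
curve_times = np.array([0.0, 2.5, 5.0])
curve_gains = np.array([1.0, 0.3, 1.0])
t = np.arange(len(audio)) / rate
audio *= np.interp(t, curve_times, curve_gains)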
Noise Reduction
Recordings often contain background noise, such as computer fan noise or
hiss from the microphone. The two most useful features for noise reduction found
in Adobe Audition are Hiss Reduction and Noise Reduction. Hiss Reduction
can be used to reduce hiss from a source such as an audio cassette tape or a
microphone.
Hiss noise is often characterized by a certain frequency range. The basic idea
of how hiss reduction works is that a threshold amplitude for a frequency
range, called the noise floor, is defined, either as a preset or by you. If the
audio amplitude within that frequency range falls below the threshold, the audio
in that frequency range is greatly reduced. However, audio outside that
frequency range, and audio within that frequency range that is louder than the
threshold, remains unchanged.
Noise Reduction is another effect you can use to reduce noise that is constant
throughout a waveform. The basic idea of how it works is that it performs a
statistical analysis of a pure-noise segment, which you select. Then, it
performs noise reduction on the whole audio file based on this noise profile.
Because it reduces noise based on a statistically generated noise profile, not a
single frequency range, Noise Reduction can remove a wider range of noise.
The downside of this effect compared to Hiss Reduction is that you need a
sufficiently long segment (preferably more than one second) of pure noise for
the statistical analysis to generate the noise profile. To get a pure noise
segment, you can start the recording with a second or more of silence.
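The snippet below is a rough sketch of this idea, a simple spectral-gating scheme, assuming SciPy, NumPy, and soundfile and a hypothetical recording interview.wav whose first second is pure background noise. Commercial Noise Reduction effects are considerably more sophisticated; this only illustrates the profile-then-attenuate logic described above.

import numpy as np
import soundfile as sf
from scipy import signal

audio, rate = sf.read('interview.wav')     # hypothetical mono recording
noise = audio[:rate]                       # first second assumed to be pure noise

# Noise profile: average magnitude per frequency bin of the pure-noise segment.
_, _, noise_spec = signal.stft(noise, rate, nperseg=1024)
profile = np.mean(np.abs(noise_spec), axis=1, keepdims=True)

# Analyze the whole file and attenuate bins whose level falls near the profile.
_, _, spec = signal.stft(audio, rate, nperseg=1024)
keep = np.abs(spec) > 2.0 * profile              # 2.0 is an arbitrary threshold factor
cleaned_spec = spec * np.where(keep, 1.0, 0.1)   # reduce, rather than zero, noisy bins

_, cleaned = signal.istft(cleaned_spec, rate, nperseg=1024)
sf.write('interview_denoised.wav', cleaned, rate)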
Special Effects
There are many special effects, or filters, you can apply to audio. Many audio
programs let you alter the pitch of a sound file, alter the length (in time) of a
sound file, and add echo or reverb to a sound.
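As a simple illustration of one such effect, the sketch below adds a basic echo by mixing in a delayed, attenuated copy of the signal; it assumes NumPy and soundfile and a hypothetical input file voice.wav. Pitch shifting and reverb require more involved processing and are not shown.

import numpy as np
import soundfile as sf

audio, rate = sf.read('voice.wav')   # hypothetical mono input

delay = int(0.3 * rate)              # the echo arrives 0.3 s after the original
decay = 0.5                          # the echo is half as loud as the original

echoed = np.zeros(len(audio) + delay)
echoed[:len(audio)] += audio         # the original signal
echoed[delay:] += decay * audio      # a delayed, attenuated copy

sf.write('voice_echo.wav', echoed, rate)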
Downsampling and Reduction of Bit Depth
It is advantageous to record and work with audio at CD quality (i.e., 44,100 Hz
sampling rate, 16-bit, stereo) or higher. However, in most situations,
depending on the project, you may need to downsample the file to lower the file
size for the final delivery. If you downsample, always keep the original,
higher-quality audio file for editing.
Another way of lowering the file size is to save the audio file as an .mp3. MP3
uses a lossy but effective compression scheme based on human perception of
sound. This means that some audio data are removed and the remaining data are
approximated, but what is removed is largely imperceptible to human listeners.
General Steps of Digital Audio Recording Touch-up
1. Noise reduction: remove the background noise by applying a noise
reduction effect.
2. Trimming: remove the extra time at the beginning or end, or at other
points in the audio recording (a minimal trimming sketch follows this list).
3. Volume: adjust the volume level to the desired level.
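A minimal sketch of the trimming step, assuming NumPy and soundfile and a hypothetical file narration.wav; it drops leading and trailing samples whose level stays below a small threshold. The noise reduction and volume steps can reuse the earlier sketches.

import numpy as np
import soundfile as sf

audio, rate = sf.read('narration.wav')   # hypothetical mono recording
threshold = 0.01                         # treat anything quieter than this as silence

loud = np.flatnonzero(np.abs(audio) > threshold)
trimmed = audio[loud[0]:loud[-1] + 1]    # keep from the first to the last loud sample

sf.write('narration_trimmed.wav', trimmed, rate)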
Music Creation
Music scores can be created with sheet music software and saved in MIDI format. Many
multimedia projects, such as games, product advertisements, and some digital
art, do not require an original score but often just use loop music.
MIDI
MIDI and sampled digital audio are fundamentally different representations of
digital sound. When using MIDI, you edit notation and instrument assignments in a
MIDI file. Examples of software application programs are Cakewalk SONAR,
Steinberg Cubase, and MakeMusic! Finale.
Below are some of the views available in Cakewalk SONAR for different creation
and editing methods: the staff view, for working with notation and staves; the
piano-roll view, for visualizing the notes as they would appear on a player-piano
roll; and the event-list view, for working with a sequence of MIDI information,
or MIDI messages.
(Figures: staff view, piano-roll view, and event-list view.)
When the MIDI file is played back, the sound card uses its synthesizer to
recreate the sound of the notes on the right instruments. Because not all
synthesizers are the same, the MIDI file will sound different depending on the
sound card that plays it.
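To make the difference from sampled audio concrete, the sketch below builds a tiny MIDI file out of individual note messages, the same kind of MIDI messages an event-list view displays. It is a minimal sketch assuming the third-party mido library; the note numbers and timing values are arbitrary.

from mido import Message, MidiFile, MidiTrack  # assumed third-party MIDI library

mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)

# Each entry is a MIDI message, not a sampled waveform:
# program_change selects the instrument, note_on/note_off describe the notes.
track.append(Message('program_change', program=0, time=0))   # acoustic grand piano
for note in (60, 64, 67):                                    # a C major triad, one note at a time
    track.append(Message('note_on', note=note, velocity=64, time=0))
    track.append(Message('note_off', note=note, velocity=64, time=480))

mid.save('triad.mid')   # the file stores instructions, not sound samples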
Loop Music
There are many commercially available libraries of short sound effects and music
clips that are designed to loop seamlessly, and these may be sufficient for
your production. Loop music application programs, such as Sony ACID Pro and
SmartSound SonicFire Pro, are available to make the job of sequencing and
mixing loop music easier. These programs often also support MIDI and let you
import MIDI music.
Delivery of Digital Audio
• Audio CD
• Multimedia presentations
• Audio in video productions
• Web delivery
Audio CD
Audio on CDs must use a 44,100 Hz sampling rate, 16-bit samples, and two (stereo) channels.
Audio editing programs such as Adobe Audition also let you create an audio CD
project in which you can specify a title and artist for each track. CD players that
support CD Text can display this information during playback. You can also
designate the length of the pauses between tracks, enable or disable copy
protection, and add an ISRC (International Standard Recording Code) number,
a code used for commercial distribution.
Audio in Multimedia Authoring
Multimedia authoring programs often support a variety of audio file formats. For
example, the types of audio file you can import into Macromedia Flash and Director
include .wav, .aif, .au, .mp3, and audio-only QuickTime.
Use in Video
If your audio is intended to be imported into a video editor, you will need to check
which audio file formats your video editing software supports. Most video
editing programs support .wav, .aif, and .mp3.
Many audio editing applications allow you to import video while you are editing.
While you cannot edit the video in an audio program, being able to import it
into the audio editing program makes it convenient to synchronize video and
audio without having to switch back and forth between the video and the audio
editing programs.
Web based delivery
To use audio on the Web, you can:
(1) embed the audio in the page. The audio file formats you can embed in a Web
page are determined by the browser plug-in and audio player that the viewer
has on the computer.
(2) link to the audio file. This will cause the audio file to play in a player
window external to the Web page.
The chapter shows several examples of HTML code for embedding QuickTime and
RealAudio.
*
The material in this handout is excerpted from Chapter 5 of the Primer: Capturing and
Editing Digital Audio, a work supported by the National Science Foundation under Grant No.
DUE-0127280 written by Dr. Yue-Ling Wong (ylwong@wfu.edu), Department of Computer
Science and Department of Art, Wake Forest University.
http://digitalmedia.wfu.edu/project/digital-media-curriculum-development/