DATA COMMUNICATION AND COMPUTER NETWORKS
(EIE418)
Lead/Coordinator: Prof. E. Adetiba (Ph.D., R.Engr. (COREN))
Lecturer: Michael Adedokun (M.Eng.)
Department of Electrical & Information Engineering,
College of Engineering, Covenant University, Ota, Ogun
State, Nigeria.
MODULE 2
2.1 An Overview of Data Communication Network
2.2 Voice Coding and Pulse Code Modulation(PCM)
2.1 DATA COMMUNICATION
• This is the exchange of data between two
devices via some form of transmission
medium (wire or wireless).
• For data communications to occur, the communicating devices must be part of a communication system made up of a combination of hardware (physical equipment) and software (programs).
• The effectiveness of a data communications system depends on four characteristics:
i. Delivery. Data must be received by the intended
device or user and only by that device or user.
ii. Accuracy. The system must deliver the data accurately, because data that have been altered in transmission and left uncorrected are unusable.
iii. Timeliness. The system must deliver data in a
timely manner, because data delivered late are
useless. This is highly essential for real-time
transmission of audio and video.
iv. Jitter. Jitter is the uneven delay in the delivery of audio or video packets. For example, if video packets are sent every 3 ms and some arrive with a 3 ms delay while others arrive with a 4 ms delay, the result is uneven quality in the video.
2.1.1 Components
• A data communications system has five components, as shown in Fig. 2.1.
Fig. 2.1: Components of a Data Communications System
i) Message. This is the information (data) to be communicated. It includes text, numbers, pictures, audio, and video.
ii) Sender. This is the device that sends the message.
This includes a computer, workstation, telephone
handset, video camera, etc.
iii) Receiver. The receiver is the device that receives the message. It can be a computer, workstation, telephone handset, television, etc.
iv) Transmission medium. This is the physical path
by which a message travels from sender to receiver.
Examples are twisted-pair wire, coaxial cable,
fiber-optic cable and radio waves.
v) Protocol. This is a set of rules that govern data communications. It represents an agreement between the communicating devices; without a protocol, two devices may be connected but cannot communicate.
2.1.2 Data Representation
• Data comes in different forms such as text,
numbers, images, audio and video. Data
representation is therefore the form in
which data is stored, processed and transmitted.
• Digitization is the process of converting text,
numbers, images, audio and video into digital data
that can be manipulated and transmitted by
electronic devices.
• Any data stored and transmitted by digital devices is encoded as a combination of 0s and 1s, each of which is referred to as a binary digit (bit).
• The bit and the different sizes for groups of bits are described in Table 2.1.
Table 2.1: Different Sizes for Data Representation

1. Bit: Smallest unit of data, which can be either 0 or 1.
2. Byte: Group of 8 bits.
3. Nibble: Group of 4 bits, which is half of a byte. With a nibble, we can represent up to 16 distinct values; thus, Binary Coded Decimal (BCD) and hexadecimal numbers use nibbles. Note that hexadecimal is a numbering system (see Table 2.2 for the hexadecimal numbers) while BCD is an encoding scheme. The digits for BCD are (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) and it works by encoding each digit of a decimal number by its binary equivalent in a nibble (see Table 2.3).
4. Word: The number of bits (word length) that can be processed by a computer in a single step. It represents the data size that is handled most efficiently by a particular architecture. Words can be 16 bits, 32 bits or 64 bits. A word of length n bits will have 2^n distinct bit patterns.
Table 2.2: Examples of Decimal to Binary to Hexadecimal Numbers

Decimal | Binary | Hexadecimal
0  | 0000 | 0
1  | 0001 | 1
2  | 0010 | 2
3  | 0011 | 3
4  | 0100 | 4
5  | 0101 | 5
6  | 0110 | 6
7  | 0111 | 7
8  | 1000 | 8
9  | 1001 | 9
10 | 1010 | A
11 | 1011 | B
12 | 1100 | C
13 | 1101 | D
14 | 1110 | E
15 | 1111 | F
…  | …    | …
Table 2.3: Binary Coded Decimal

Decimal | BCD
0 | 0000
1 | 0001
2 | 0010
3 | 0011
4 | 0100
5 | 0101
6 | 0110
7 | 0111
8 | 1000
9 | 1001

Zones: 1111 (Unsigned), 1100 (Positive), 1101 (Negative)
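To make the nibble-per-digit idea concrete, here is a minimal Python sketch (the helper name to_bcd and the example value are illustrative assumptions, not part of the original notes):

```python
def to_bcd(number: int) -> str:
    """Encode a non-negative decimal number as BCD: one 4-bit nibble per decimal digit."""
    return " ".join(format(int(digit), "04b") for digit in str(number))

print(to_bcd(429))  # 0100 0010 1001 -> each decimal digit becomes its own nibble
```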
NOTE: In data communication, bits are used for data rates such as Internet connection and file download speeds, while bytes are used for file sizes and storage capacities.
A) Number Representation
• Numeric data consists of numbers that can be
used in arithmetic operations and they are
represented in digital system as bit patterns.
• Examples of number representations are binary
integers and floating-point numbers.
• Binary integers use a fixed number of bits to represent signed and unsigned integers. The numbers of bits most often used are 8, 16, 32, and 64.
• For instance, 8-bit unsigned integers represent integer values in the range 0 to 255 (2^8 − 1), while 16-bit unsigned integers represent integer values in the range 0 to 65,535 (2^16 − 1).
• The two encoding methods used for the representation of signed integers are sign-magnitude and two's complement. However, most computers (like those with Intel processors) use the two's complement representation.
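As a minimal sketch of the two's complement idea (Python; the helper name and the 8-bit width are illustrative assumptions):

```python
def twos_complement_8bit(value: int) -> str:
    """Return the 8-bit two's complement bit pattern of an integer in [-128, 127]."""
    if not -128 <= value <= 127:
        raise ValueError("value does not fit in 8 bits")
    return format(value & 0xFF, "08b")  # masking maps a negative value to 256 + value

print(twos_complement_8bit(5))    # 00000101
print(twos_complement_8bit(-5))   # 11111011
```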
• Floating-point numbers are used to
approximate real numbers. The IEEE-754
Floating-Point Standard is one of the widely
used models.
B) Text Representation
• Textual characters(or alphanumeric symbols)
include letters, numbers, punctuation marks
and other special characters like % or # that
appear on a typical keyboard.
• The commonly used textual character codes
are as follows:
i) Extended Binary Coded Decimal Interchange Code (EBCDIC):
• BCD was used primarily in IBM mainframes in the 1950s. By 1964, BCD was extended to EBCDIC, which is an 8-bit code.
Exercise 2.1
How many characters can EBCDIC encode?
• EBCDIC was one of the first widely used coding schemes that supported uppercase and lowercase alphabetic characters (in addition to special characters, such as punctuation and control characters). Both EBCDIC and BCD are still in use on IBM mainframes today.
ii) American Standard Code for Information Interchange (ASCII):
• Though the ASCII code is an 8-bit code, only the rightmost 7 bits are used to encode information (2^7 = 128 characters).
• The ASCII code consists of the control codes, which are used as signals to control the different components of the system, and the printable character codes, which can be displayed on the screen or sent to the printer.
• Extended ASCII uses all 8 bits to represent 2^8 = 256 characters. This is used on systems with Intel processors.
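A small illustrative Python sketch (the characters chosen are arbitrary examples) showing the numeric ASCII codes behind printable characters and their 7-bit patterns:

```python
for ch in ["A", "a", "0", "#"]:
    code = ord(ch)                        # numeric code of the character
    print(ch, code, format(code, "07b"))  # e.g. A -> 65 -> 1000001 (7 bits)
```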
iii) UNICODE:
• With ASCII and Extended ASCII, the requirements of languages like Japanese and Chinese could not be met; thus, Unicode was created in 1991 as a new 16-bit international information exchange code, with the capacity to encode the major characters used in every language.
• The ASCII codes constitute the first 128 characters of Unicode.
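A brief Python sketch (the characters are arbitrary examples) showing that ASCII characters keep the same code values in Unicode, while other scripts need larger code points:

```python
print(ord("A"))   # 65    -- same value as in ASCII
print(ord("Ω"))   # 937   -- Greek capital omega, beyond the ASCII range
print(ord("好"))  # 22909 -- CJK character, needs more than 8 bits
```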
C) Image Representation
• Images are also represented with bit patterns.
An image is composed of a matrix of pixels
(picture elements).
• Each pixel in an image is assigned a bit pattern,
and the size as well as the value of the pattern
depend on the image.
• For instance, a 1-bit pattern is enough to represent a pixel in a black-and-white image.
Samples of Black and White Images
• 2-bit patterns can be used to represent four levels of gray scale, i.e.
black pixel ≡ 00,
dark gray pixel ≡ 01,
light gray pixel ≡ 10,
white pixel ≡ 11.
Different levels of gray scale
Exercise 2.2
How many levels of gray scale can be represented with 8-bit patterns?
• Colors are represented as mixtures of three
primary colors such as red (R), green (G) and blue
(B). Each pixel is represented using three 8-bit
values, one for each color component.
A Sample Color Image
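As a hedged sketch of how such storage sizes are computed (Python; the 800×600 resolution is an arbitrary example, and the same approach applies to Exercise 2.3):

```python
width, height = 800, 600          # example resolution in pixels
pixels = width * height

bw_bytes    = pixels * 1 / 8      # 1 bit per pixel for black and white
gray_bytes  = pixels * 8 / 8      # 8 bits per pixel for 256 gray levels
color_bytes = pixels * 24 / 8     # 24 bits per pixel (8 each for R, G, B)

print(bw_bytes, gray_bytes, color_bytes)  # 60000.0 480000.0 1440000.0
```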
Exercise 2.3
How many bytes are needed to store an image of
(640x480) pixels as i)Black and White ii) Grayscale
and iii) 24-bit Color?
D) Audio and Video Representation
• Audio refers to the recording or broadcasting of
sound or music. Because it is continuous, it is by
nature different from text, numbers or images. A
continuous signal is created when we use a
microphone to change voice or music to an
electric signal. Audio/Speech coding will be
explored more later.
Sound = Periodic Changes in Air Pressure
• Video refers to the recording or
broadcasting of an image or a movie.
Video can either be produced as a
continuous entity or it can be a
combination of images(each a discrete
entity) arranged to convey the idea of
motion.
• Note that:
Video = image + sound
2.1.3 Data Flow
• Communication between two devices can be
simplex, half-duplex, or full-duplex as shown in
Figure 2.2.
Fig. 2.2: Simplex, Half-duplex and Full-duplex
i) Simplex
• Data communication is unidirectional in
simplex mode, which implies that only one of
the two devices on a link can transmit while
the other can only receive (see Figure 2.2a).
• Examples of simplex devices are keyboards, monitors, and mice. To send data in one direction, the simplex mode can use the entire capacity of the channel.
ii) Half-Duplex
• Each station can both transmit and receive in
half-duplex mode. However, when one device
is sending, the other can only receive, and
vice versa (see Figure 2.2b).
• The entire capacity of a channel is taken over
by whichever of the two devices is
transmitting at the time. Examples of devices
that operate in this mode are walkie-talkies and citizens band (CB) radios.
iii) Full Duplex
• Full-duplex mode is also called Duplex. This mode
allows both stations to transmit and receive
simultaneously (see Figure 2.2c).
• Signals going in one direction share the capacity of the
link with signals going in the other direction.
• This sharing can occur in two ways: a) the link contains two physically separate transmission paths, one for sending and the other for receiving; or b) the capacity of the channel is divided between signals traveling in both directions.
• An example of full-duplex communication is the
telephone network. When two people are
communicating by a telephone line, both can talk
and listen at the same time.
2.1.4 Data Communication Networks
• A network is the interconnection of a set of devices
capable of communication.
• A device can be a host (or an end system) such as a server, desktop, laptop, workstation, cellular phone, IP camera, IoT node, etc.
• A device can also be a connecting device such as:
✓ Router: Connects the network to other networks.
✓ Switch: Connects devices together.
✓ Modem (Modulator-demodulator): Changes the form of data.
• These devices in a network are connected using wired
or wireless transmission media such as cable or air.
2.1.4.1 Network Criteria
• The most important criteria that a network must be able to
meet are performance, reliability, and security.
A) Performance - The performance of a network depends on
factors such as:
✓ Type of transmission medium.
✓ Capabilities of the connected hardware.
✓ Efficiency of the software.
✓ Number of active users.
• Performance can be measured using transit time and
response time.
• Transit time is the amount of time required for a message
to travel from one device to another while Response time is
the elapsed time between an inquiry and a response.
• The network metrics for evaluating Performance
are throughput and delay.
B) Reliability - Network reliability is measured by the:
✓ Accuracy of delivery.
✓ Frequency of failure.
✓ Time it takes a link to recover from a failure.
✓ Network's robustness in a catastrophe.
C) Security: Issues on security in a network include:
✓Protecting data from unauthorized access.
✓ Protecting data from damage.
✓ Implementing policies for recovery from data losses.
2.1.4.2 Network Topology
• Network Topology: This is the geometric
representation of the relationship of all the links
and linking devices (nodes) to one another.
• The four basic topologies are:
✓ Mesh
✓Star
✓Bus
✓Ring
Fig. 2.3: Network Topologies (Mesh, Star, Bus, Ring)
2.2 VOICE (SPEECH) CODING AND PULSE CODE
MODULATION
• The speech signal is a one-dimensional function
(air pressure) of time.
Fig. 2.3: Speech Signal
• Microphones convert the fluctuating air pressure
into electrical signals (voltages or currents) for
processing, storage or transmission.
2.2.1 Speech Coding
• Speech coding is the process of obtaining a
compact representation of the speech signal that
can be efficiently transmitted over band-limited
wired and wireless channels or stored in digital
media.
• The objective of speech coding is to compress the
speech signal by reducing the number of bits per
sample, such that the decoded speech is audibly
indistinguishable from the original speech signal.
• Speech coding is utilized in the following areas:
✓ Wired telephony.
✓ Cellular communications.
✓ Voice over Internet Protocol (VoIP).
✓ Secure voice for privacy and encryption.
✓ Storage of speech for telephone answering machines and on other
storage media.
✓ Interactive Voice Response (IVR) systems.
• The benefits of speech coding are:
✓Reduction in bit-rate.
✓Reduction in memory requirements.
✓Reduction in the transmission power.
✓Immunity to noise.
• A speech coder converts a digitized speech signal into a coded representation, which is usually transmitted in frames. The decoder receives the coded frames and synthesizes reconstructed speech, as shown in Fig. 2.4.
Fig. 2.4: Block Diagram of Generic Speech Coding
2.2.2 Speech Coding Techniques
• Based on the coding technique used, speech coders are
classified into three types namely: i) Waveform
Representation ii) Parametric Representation and iii)
Hybrid Representation. Examples of each type are
illustrated in Fig. 2.5.
Fig. 2.5: Classification of Speech Coders
Speech signal coding:
✓ Waveform Representation
   - Time Domain: Pulse Code Modulation, Delta Modulation
   - Frequency Domain: Subband Coder, Adaptive Transform Coder
✓ Parametric Representation: Linear Predictive Coder (LPC), Mixed Excitation Linear Predictive (MELP) Coder
✓ Hybrid Representation: Code Excited Linear Predictive (CELP) Coder
• Waveform Representation: Waveform coders
attempt to code the exact shape of the speech signal
waveform, without considering in detail the nature
of human speech production and speech perception.
• Waveform coders are most useful in applications
that require the successful coding of both speech
and non-speech signals.
• In the PSTN, for instance, successful transmission of modem and fax signaling tones and switching signals is as important as the successful transmission of speech; thus, waveform representation is adopted.
• Examples of Waveform coders are as illustrated in
Fig. 2.5
2.2.3 Speech Coding Standards
• Speech Coding Standards for PSTN and Cellular communication
networks were established by the International Telecommunication
Union (ITU) and the European Telecommunications Standards Institute (ETSI). Table 2.4 contains descriptions of the standards.
Table 2.4: Speech Coding Standards

Application | Bandwidth (kHz) | Bit Rate (kb/s) | Standard Organization | Standard Number | Algorithm
Landline Telephone | 3.4 | 64 | ITU | G.711 | PCM
Video Conferencing | 7 | 64 | ITU | G.722 | ADPCM
Digital Cellular | 3.4 | 8 | ITU | G.729 | ACELP
Digital Cellular | 3.4 | 12.2 | ETSI | EFR | ACELP
VoIP | 3.4 | 5.3 – 6.3 | ITU | H.323 | CELP
2.2.4 Pulse Code Modulation
• Pulse Code Modulation (PCM) is an analog-to-digital conversion method in which the information contained in the instantaneous samples of an analog signal is represented by digital words in a serial bit stream.
• The bit streams obtained from PCM can be
transmitted through the digital communication
network and the analog signal can be reproduced
by demodulation at the receiver.
• PCM is also the standard form of digital audio in
computers, CDs, digital telephony and other digital
audio applications.
• A PCM encoder has three processes which are:
i) Sampling
ii) Quantization
iii) Encoding
as illustrated in Fig. 2.6
Fig. 2.6: Pulse Code Modulation Block Diagram
Continuous-time, continuous-amplitude (analog) input signal → LPF → band-limited analog signal → Sampler → discrete-time, continuous-amplitude signal (PAM) → Quantizer → discrete-time, discrete-amplitude signal (PCM pulses) → Encoder → PCM output (digital bit stream)
i) Sampling
• This is the first step in PCM as shown in Fig.
2.6.
• The analog signal is sampled every Ts seconds, where Ts is the sample interval or period.
• The inverse of the sampling interval is called the sampling rate or sampling frequency, denoted by fs, where
fs = 1/Ts     (2.1)
• There are three sampling methods, which are
ideal, natural and flat-top as shown in Fig. 2.7.
Fig. 2.7: Sampling Methods
• In ideal sampling, pulses from the analog signal are sampled; this is an idealized method that cannot be easily implemented in practice.
• In natural sampling, a high-speed switch is turned
on for only the small period of time when the
sampling occurs. The result is a sequence of
samples that retains the shape of the analog signal.
• The most common sampling method, called sample
and hold, creates flat-top samples by using a circuit.
• The sampling process is also called Pulse Amplitude Modulation (PAM) and it produces an analog signal with non-integral values, referred to as a discrete-time, continuous-amplitude signal (Fig. 2.6).
• Sampling Rate: A very important consideration in the
sampling process is the sampling rate or frequency (fs).
• The Nyquist theorem states that in order to reproduce the
original analog signal, the sampling rate must be at least 2
times the highest frequency contained in the signal i.e.
fs ≥ 2 × fmax     (2.2)
where fmax is the highest frequency in the signal.
• The quantity 2fmax in Equation 2.2 is referred to as the Nyquist rate, and sampling at or above this rate helps to overcome the aliasing effect.
• It should be noted that a signal can be sampled only if
the signal is band-limited i.e. a signal with an infinite
bandwidth cannot be sampled.
• As stated in the Nyquist theorem, the sampling rate
must be at least 2 times the highest frequency (fmax),
not the bandwidth.
• Note that If the analog signal is lowpass, the
bandwidth and fmax are the same value. However, If
the analog signal is bandpass, the bandwidth value is
lower than fmax (See Figs 2.8 and 2.9).
Fig. 2.8: Nyquist Rate for Lowpass Signal
Fig. 2.9: Nyquist Rate for Bandpass Signal
• An fs that is higher than the Nyquist rate leads to oversampling, while an fs that is less than the Nyquist rate leads to undersampling (Fig. 2.10).
Nyquist rate sampling: fs = 2fmax
Oversampling: fs = 4fmax
Undersampling: fs = fmax
Fig. 2.10: Different Sampling Rates and their Effects
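A small Python sketch (the tone and sampling frequencies are arbitrary example values) of the apparent frequency that results when a tone is undersampled:

```python
def alias_frequency(f_signal: float, f_sample: float) -> float:
    """Apparent frequency of a sampled tone; equals f_signal only if f_sample >= 2*f_signal."""
    folded = f_signal % f_sample
    return min(folded, f_sample - folded)

print(alias_frequency(3_000, 8_000))  # 3000.0 Hz -- sampled above the Nyquist rate, no aliasing
print(alias_frequency(7_000, 8_000))  # 1000.0 Hz -- undersampled, the tone aliases to 1 kHz
```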
Example 2.1
A low-pass analog signal has a bandwidth of 300
kHz, calculate:
i) The minimum and maximum frequencies of the
signal.
ii) The appropriate sampling rate and period for
accurate reproduction of this signal at the receiver.
Solution 2.1
ia) The minimum frequency fmin = 0 Hz
ib) The maximum frequency fmax = 300 kHz
ii) The appropriate sampling rate is the Nyquist rate:
fs = 2fmax = 2 × 300 kHz = 600 kHz
Ts = 1/fs = 1/600,000 = 1.67 μs
Exercise 2.4
Calculate the sampling rate and period that are
required by a Telephone company to digitize the
human voice for accurate reproduction at the
receiver.
ii) Quantization
• As shown in Fig. 2.6, Quantization is the
second step in PCM.
• Quantization is done by dividing the range of possible values of the discrete-time, continuous-amplitude signal obtained from the sampler into different levels, and assigning the center value of each level to any sample that falls in that quantization interval.
• The steps in quantization are:
Step 1: Represent the original analog signal as having instantaneous amplitudes between Vmin and Vmax.
Step 2: Divide the range into L uniformly spaced intervals, each of height Δ (delta), where
Δ = (Vmax − Vmin) / L     (2.3)
Step 3: Assign quantized codes of 0 to L − 1 to the midpoint of each interval.
Step 4: Approximate the value of each sample amplitude to the nearest quantized value.
• These steps are illustrated in Fig. 2.11 below.
Fig. 2.11: Illustration of Quantization Process
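The steps above can also be expressed as a short Python sketch (the sample values, Vmin, Vmax and the number of levels L are illustrative assumptions):

```python
V_MIN, V_MAX, L = -4.0, 4.0, 8          # assumed amplitude range and number of levels
DELTA = (V_MAX - V_MIN) / L             # step height, Eq. (2.3)

def quantize(sample: float) -> tuple[int, float]:
    """Return (quantized code 0..L-1, midpoint value approximating the sample)."""
    code = int((sample - V_MIN) / DELTA)        # which interval the sample falls in
    code = min(max(code, 0), L - 1)             # clamp samples at the edges of the range
    midpoint = V_MIN + (code + 0.5) * DELTA     # center value assigned to that interval
    return code, midpoint

print(quantize(1.3))    # (5, 1.5)  -> quantization error of 0.2, at most DELTA/2
print(quantize(-3.9))   # (0, -3.5)
```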
• As illustrated in Fig. 2.11, quantization
approximates the analog sample values with the
nearest quantization values. So almost all the
quantized samples will differ from the original
samples by a small amount. That amount is called
quantizing error.
Fig. 2.12: Illustration of Quantization Error
• Note that the maximum error for any sample point's quantized value is at most Δ/2, i.e. error ≤ Δ/2.
• The quantization error changes the signal-to-noise
ratio of the signal, and the contribution of the
quantization error to the SNRdB of the signal
depends on the number of quantization levels L (or the bits per sample nb), as shown in equation (2.4):
SNRdB = (20 log10 L + 1.76) dB     (2.4)
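Since L = 2^nb, equation (2.4) is equivalent to SNRdB ≈ 6.02 nb + 1.76 dB. A brief Python sketch (the bit depths shown are arbitrary examples):

```python
import math

def quantization_snr_db(n_bits: int) -> float:
    """SNRdB contributed by uniform quantization with L = 2**n_bits levels, Eq. (2.4)."""
    levels = 2 ** n_bits
    return 20 * math.log10(levels) + 1.76

print(round(quantization_snr_db(4), 2))    # 25.84 dB
print(round(quantization_snr_db(16), 2))   # 98.09 dB -- e.g. 16-bit audio
```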
Exercise 2.5
What is the quantization SNRdB of an analog signal encoded with an 8-bit codeword?
• Quantization is classified into a) uniform and b) non-uniform.
a) Uniform Quantization:
• In uniform quantization, the step sizes
(Δ) are fixed and the levels are equally
spaced apart (Fig. 2.13). Uniform
quantizers are optimal when the input
distribution is uniform.
Fig. 2.13: Uniform Quantization
b) Non-uniform Quantization:
• This is a type of quantization in which the step
sizes (Δ) are not fixed and the levels are unequally
spaced apart.
• For many applications, changes in amplitude often
occur more frequently in the lower amplitudes
than in the higher ones. For these types of
applications it is better to use non-uniform
quantization.
• By using a greater number of quantizing steps for
signals of low amplitude, and a smaller number of
quantizing steps for signals of large amplitude, a
marked reduction in overall signal distortion is
achieved.
Uniform Quantization
Non-uniform Quantization
Fig. 2.14: Strong and Weak Signals with Different Quantization Types
• Non-uniform quantization can be achieved by using
a process called Companding (compressing and
expanding).
• Companding is a process that compresses the intensity range of a signal by imparting more gain to weak signals than to strong signals at the input (transmitter); at the output (receiver), the reverse operation is performed (see Fig. 2.15).
Fig. 2.15: Companding Illustration
• Two types of companding methods have been accepted for telephone systems, namely, the American μ-law and the European A-law.
• The outputs of the two systems are:
μ-law: Vout = Vm × [ln(1 + μVin/Vm) / ln(1 + μ)]     (2.5)
A-law: Vout = Vm × [A(Vin/Vm) / (1 + ln A)], for 0 ≤ Vin/Vm ≤ 1/A
       Vout = Vm × [(1 + ln(A·Vin/Vm)) / (1 + ln A)], for 1/A ≤ Vin/Vm ≤ 1     (2.6)
• Vout represents the output voltage, Vm represents the maximum input voltage, and Vin stands for the instantaneous input voltage.
• μ = 255 (μ-law) and A = 87.6 (A-law).
Fig. 2.16: Characteristic Curves of the Two Companding Methods (μ-law and A-law)
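As a hedged sketch (Python; the signal value is an arbitrary example), μ-law compression at the transmitter and the inverse expansion at the receiver might look like this:

```python
import math

MU = 255.0  # American mu-law constant

def compress(v_in: float, v_m: float = 1.0) -> float:
    """mu-law compressor: imparts more gain to weak signals than to strong ones (Eq. 2.5)."""
    return v_m * math.log1p(MU * abs(v_in) / v_m) / math.log1p(MU) * math.copysign(1.0, v_in)

def expand(v_out: float, v_m: float = 1.0) -> float:
    """Inverse operation performed at the receiver."""
    return v_m / MU * math.expm1(abs(v_out) / v_m * math.log1p(MU)) * math.copysign(1.0, v_out)

x = 0.01                 # a weak input sample
y = compress(x)          # ~0.23 -- weak sample boosted before quantization
print(round(y, 3), round(expand(y), 3))  # 0.228 0.01 -- expansion recovers the original
```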
iii) Encoding
• The last step in PCM is encoding. Once each
sample is quantized and the number of bits per
sample is decided, each sample can be changed to
an nb-bit code word.
• If the number of quantization levels is L, the
number of bits is computed as:
nb = ⌈log2 L⌉     (2.7)
• where ⌈ ⌉ is the ceiling function, e.g. ⌈4.8⌉ = ⌈4.2⌉ = 5.
• The bit rate can be found using:
Bit rate = sampling rate × number of bits per sample
Bit Rate = fs × nb     (2.8)
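A minimal Python sketch of equations (2.7) and (2.8), using an arbitrary example of L = 64 levels and a 10 kHz low-pass signal:

```python
import math

L = 64                           # quantization levels (example value)
f_max = 10_000                   # highest frequency of an example low-pass signal, in Hz

n_b = math.ceil(math.log2(L))    # bits per sample, Eq. (2.7) -> 6
f_s = 2 * f_max                  # Nyquist-rate sampling frequency -> 20 000 Hz
bit_rate = f_s * n_b             # Eq. (2.8) -> 120 000 bps = 120 kbps

print(n_b, f_s, bit_rate)
```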
Exercise 2.6
What is the bit rate to digitize the human voice, assuming 8 bits per sample?
Solution
fmax = 4 kHz
fs = 2 × 4000 = 8000 Hz
nb = 8 bits
∴ Bit Rate = 8000 × 8 = 64 kbps
• Using PCM, the minimum bandwidth of the channel that can pass the digital signal is given as:
Bmin = nb × Banalog     (2.9)
• This implies that the price we pay for digitization is a channel bandwidth nb times greater than the bandwidth of the analog signal (Banalog).
Fig. 2.17: Sample Representations of a PCM Code
Exercise 2.7
A low-pass signal with a bandwidth of 200 kHz is
sampled using 1024 levels of quantization.
a) Calculate the bit rate of the digitized signal.
b) Calculate the SNRdB for this signal.
c) Calculate the PCM bandwidth of this signal.