LECTURES

Networks and Communications.
Lecturer -
Eric Goodyer
email eg@dmu.ac.uk
Lecture notes prepared by Dr Amelia Platt
with minor revisions and additions by Eric Goodyer
Last Revision January 2000
Recommended Reading: Computer Networks
by A Tanenbaum, 3rd edition, Prentice Hall
Data Communications, Computer Networks and Open Systems
by Fred Halsall, 4th edition, Addison Wesley
Computer Networks: A First Course
by Jean Walrand, Aksen Associates
Data and Computer Communications
by W Stallings, 3rd edition, Macmillan
Packet Switching and X25 Networks
by Simon Poulton, Pitman
1 Introduction
1.1 Different types of Networks
 Wide Area Networks (WANs)
 Local Area Networks (LANs)
 Metropolitan Area Networks (MANs)
1.2 Differences between WANs, LANs & MANs
Coverage Area
 LANs cover a small area, typically a room or building
 MANs cover a larger area, typically a city or county
 WANs have no limit on size or area covered
Ownership
 LANs are private - not owned by PTTs
 MANs can be private or public
 WANS can be private or public
Transmission Rates (Speed)
 LANs typically have high data rates compared to WANs
 MANs have higher rates than LANs
 WANs have low data rates
Topologies
 LANs and MANs typically have ring and bus topologies
 WANs have mesh topologies
Type of Transmission
 LAN and MAN broadcast by nature of topologies
 WANs private (user to user)
Signalling
 All represent 'bits' differently on transmission lines
However the above picture is changing rapidly and the distinction is becoming increasingly
blurred
This course will consider these types of networks in terms of
 Network access techniques
 Protocols
2 Communication Architectures
2.1 Problems associated with communication
The task of communication is very complex. To give an understanding of the complexity,
below is a small sample of the type of problems which must be solved by the
communications software:
 How to connect two users together - communications channel.
 How to represent signals on a communications channel.
 How to detect and correct errors on the channel to ensure error free transmission.
 How to allow users to gain access to the communications channel.
 How to route data to the correct user across a network.
 How to ensure the receiver interprets the data correctly - the receiving machine may differ from the sending machine.
 How to allow the user to run applications over the channel.
Now consider a number of communication situations: two users at either end of
 a piece of wire
 a network
 a set of interconnected networks
Clearly, connecting two users across a set of interconnected networks is much more complex
when compared to connecting the same users over a single piece of wire.
2.2 Communication Architectures
The solution is to break the overall problem down into a set of small, well defined tasks. The
result is typically referred to as a communications architecture (or structure). There are a
number of such architectures, and two of the most widely used are:
 ISO Open Systems Interconnection (OSI) 7 layer reference model.
 TCP/IP reference model.
2.3 OSI 7 Layer Reference Model
Each layer is intended to perform a specific task in the overall problem of communication.
Each layer is independent of all the others. Communication with the layers immediately
above and below is via a well defined interface. Layer N is said to request service from layer
N-1 (below) and provide a service to the layer N+1 (above). Layer N in one protocol stack
communicates with the same layer in a remote protocol stack via the layers below. This is
known as virtual or peer-to-peer communication.
In particular, the OSI 7 layer model defines 7 layers. At the end points, the 7 layer model can
be viewed as the:
 Upper layers (5-7) (Application layers - (7) Application, (6) Presentation, (5) Session)
 Transport layer (4) (Interface between subnet and application layers)
 Lower layers (1-3) (Subnet - (3) Network, (2) Link, (1) Physical)
2.4 TCP/IP (Transmission Control Protocol/ Internet Protocol)
Another reference model which is very widely used is the TCP/IP Reference Model - it is
used, for example, in the Internet. TCP/IP defines only 4 layers:
Host-to-network layer
Allows the host to connect to the network so that IP Datagrams can be sent
Internet layer
Allows the host to inject packets onto the network and to route packets
Transport layer
Provides a peer-to-peer link between source & destination
Application layer
Higher level protocols, such as TELNET, FTP
It is discussed in more detail in section 11.5 below
3 Overview of Switching methods
There are 2 types of switching methods
 Circuit switching
 Packet switching
3.1 Circuit switching
Set up a dedicated end-to-end connection. Switching implies that the connection is switched
through a number of intermediate exchanges. How could this be modelled?
E.g. Present telephone networks, mobile cellular networks
3.2 Packet switching
Information is broken into segments. These segments are called packets at layer 3 and frames
at layer 2. More generally they are called Protocol Data Units (PDUs).
Packets are sent individually through the network.
What problems could this cause for voice traffic?
E.g. Internet, Superhighway, most data networks
N.B. There are a number of variants of packet switching, but the same principle applies. See
Figure at the end.
Advantages and disadvantages of the two switching methods
Circuit switching?
Private, secure, not subject to congestion
But inefficient use of bandwidth, pay for time call is connected regardless of amount of data
Packet switching?
Shared use of high cost components, efficient use of bandwidth, only pay for data in transit
But not secure or private, subject to congestion
4 Delays associated with networks
4.1 Propagation Delay
 Time taken for a signal to travel from the transmitter to the receiver
 Speed of light is the fastest a signal will propagate
3 x 10^8 m/sec through space
2 x 10^8 m/sec through copper
4.2 Transmission Delay (Time)
 Time taken to put the bits on the transmission media
A transmission speed of 2 Mbps means
2 x 10^6 bits can be transmitted in 1 second
4.3 Processing Delay
 Time taken to execute protocols
check for errors
send Acks etc.
4.4 Queuing Delay
 Only in packet switched networks
 Time spent waiting in buffer for transmission
 Increases as load on network increases
4.5 Round Trip Delay
Round trip delay is defined as the time between the first bit of the message being put onto the
transmission medium, and the last bit of the acknowledgement being received back by the
transmitter. It is the sum of all the delays detailed above. The round trip delay is a critical
factor in the performance of packet switched protocols and networks. Indeed, it has been
stated that a good algorithm for estimating the round trip delay is at the heart of a good
packet switch protocol.
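As a rough back-of-the-envelope illustration (all of the figures below are invented for the example, not taken from these notes), the individual delays can simply be added:

# Illustrative round trip delay calculation (every figure here is an assumption)
frame_bits    = 1000 * 8       # a 1000 byte frame
ack_bits      = 10 * 8         # a short acknowledgement
link_rate_bps = 2e6            # 2 Mbps link
distance_m    = 100e3          # 100 km of copper
speed_mps     = 2e8            # propagation speed in copper
processing_s  = 0.5e-3         # assumed protocol processing time

propagation = distance_m / speed_mps        # propagation delay (one way)
tx_frame    = frame_bits / link_rate_bps    # transmission delay for the frame
tx_ack      = ack_bits / link_rate_bps      # transmission delay for the ack

round_trip = tx_frame + propagation + processing_s + tx_ack + propagation
print(round_trip)   # roughly 5.5 ms with these figures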
5 Properties of Signals
5.1 Bandwidth
 Bandwidth is a measurement of the width of a range of frequencies and is measured in
hertz (Hz).
 In data networks bandwidth is normally specified as bits per second (BPS)
 Shannon-Hartley Theorem states that
Dmax = B log2(1 + S/N)
where Dmax is the maximum bit rate
B is the bandwidth in Hz
and S/N is the signal to noise ratio
All transmission media are degraded by ‘noise’. If the average power of the signal is given
by S, and the average power of the noise is given by N, then the signal to noise ratio is given
by S/N. The greater the value of S/N then the greater is the theoretical transmission rate of
that medium.
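As a small illustration of the theorem, the sketch below evaluates Dmax for an assumed voice-grade line of roughly 3 kHz bandwidth and a 30 dB (1000:1) signal to noise ratio:

from math import log2

def max_bit_rate(bandwidth_hz, snr_ratio):
    # Shannon-Hartley limit: Dmax = B * log2(1 + S/N)
    return bandwidth_hz * log2(1 + snr_ratio)

print(max_bit_rate(3000, 1000))   # roughly 30,000 bits per second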
5.2 Square wave properties
[Figure: a square wave with a period of 1 millisecond.]
A square wave is composed of sine waves with the following frequencies:
 Fundamental frequency F0
 Odd harmonics 3F0, 5F0, 7F0, 9F0 ... (the 3rd, 5th, 7th, and 9th harmonics)
 Note, the fundamental frequency is equal to the basic repetition frequency of the waveform.
The amplitude of the harmonics is inversely proportional to the harmonic number.
 Amplitude of the 3rd harmonic is 1/3 of the amplitude of the fundamental frequency.
 Amplitude of the 5th harmonic is 1/5 of the amplitude of the fundamental frequency.
Sine waves up to and including the 9th harmonic represent over 95% of the signal power.
Implications:
 Don't need to receive all of the harmonics to receive the signal.
 Must receive at least up to the 9th harmonic.
Note : The more harmonics received, the flatter the peak or trough.
The graph below shows a square wave that consists of sine waves up to the 9th harmonic
only.
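A minimal sketch of the idea, summing the fundamental and the odd harmonics (each with amplitude 1/n) up to the 9th harmonic; the 1 kHz fundamental is simply an assumed example value:

import math

def square_wave_approx(t, f0=1000, highest_harmonic=9):
    # Sum the fundamental and the odd harmonics, each with amplitude 1/n
    total = 0.0
    for n in range(1, highest_harmonic + 1, 2):   # n = 1, 3, 5, 7, 9
        total += (1.0 / n) * math.sin(2 * math.pi * n * f0 * t)
    return total

# Sample one 1 ms period at 100 points; plotting these gives the flattened square wave
samples = [square_wave_approx(i * 1e-5) for i in range(100)]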
5.3 Signal distortion
Attenuation
 Decrease in the amplitude of the transmitted signal.
 Attenuation increases with distance, repeaters must be used to restore signal to
transmitted level.
 Attenuation increases with frequency, repeaters must also take this into consideration.
Propagation delay
 Propagation delay varies with frequency.
 So various frequencies of a signal propagate at different rates.
 Clearly they will incur different amounts of delay.
 As bit rate increases, so does the probability of frequencies from one signal interfering
with the next.
 The longer the transmission media then the more is the ‘spread’ of the component
frequencies of the original transmitted square wave.
Noise
 There are different sorts of noise, which affect different media :-
 Thermal noise. All electronic components generate ‘noise’ internally; the level of this noise is related to the temperature of the electronic components, thus the term ‘thermal noise’.
 Atmospheric noise. This is electrical interference induced into the electronics as a result of external electromagnetic radiation. This includes interference from nearby electrical equipment, such as computers, mains switches and CRTs, and also interference from radio waves. These sources of interference are also known as RFI, or Radio Frequency Interference, and EMC, or Electro-Magnetic Coupling.
RFI/EMC can be reduced by ‘good wiring practice’. This means ensuring that there are
quality connections between cables, that good earth connections are made , and that
protective shields are wrapped around transmission cables.
 Ringing. If cables are incorrectly terminated, then some of the energy in the transmitted signal is reflected back down the cable from which it came. This results in an effect that is similar to an optical interference pattern, with a characteristic ‘ringing’ distortion of the received signal. It is a major cause of distortion in high speed networks that are not correctly terminated, or have used the wrong (typically cheaper) cables.
6 Physical Layer
These describe the electrical and mechanical interface necessary to establish a
communications path.
Layer 1 protocols are concerned with the physical and electrical interfaces. They define, for example:
 Connection types and allocation of signals to pins
 Electrical characteristics of signals, which includes bit synchronisation and identifying a signal element as a 0 or 1
Put simply, layer 1 is responsible for transmitting and receiving the signals.
6.1 RS232/V.24
Signal voltage levels
 -3V to -25V binary 1 for data, OFF for a control signal
 +3V to +25V binary 0 for data, On for a control signal
25 Volts is the maximum rating for a line without a load. In practice RS232/V24 signals are
typically set to around ±12 V.
Use of RS232/V.24 as DTE/DCE interface standard
Ground Signals
 Pin 1 (SHG) Protective Ground / Shield Ground to reduce external interference
 Pin 7 (SIG) Signal Ground - provides a reference for other signals
Transmit and Receive
 Pin 2 (TxD) Transmit Data
 Pin 3 (RxD) Receive Data
Maintaining a Connection / ‘Hardware Handshaking’
 Pin 6 (DSR) Data Set Ready, Modem indicates to DTE that it is ready, i.e. connected to a
telephone wire
 Pin 20 (DTR) Data Terminal Ready, DTE uses this to prepare the modem to be
connected to the telephone line. If it is placed in an OFF condition it causes the modem
to drop any connection in progress. Thus the DTE ultimately controls the connection.
‘Hardware’ Flow Control
 Pin 4 (RTS) Request to Send, Sent by DTE to modem to prepare it for transmission.
 Pin 5 (CTS) Clear to Send, Modem indicates to DTE that it is ready to transmit.
 Pin 8 (CD) Carrier Detect, Sent by modem to DTE, to inform it that a signal has been
received from the other end of the link.
Other
 Pin 22 (RI) Ring Indicator, sent by modem to DTE to inform it that a ringing signal has been received from the other end of the link. Used by auto-answer modems to wake up the attached terminal.
6.2 X.21 interface
X.21
 Full duplex
 Synchronous Interface
 15 pin connector, but only 8 defined - these are explained below
Ground Signals
G - Ground Signal
Ga - DTE common return
Clocks
S - Signal element Timing (bits)
B - Byte Timing
DTE to DCE
 T - Transmit
 C - control
T carries bit stream
C indicates how it should be interpreted
Three inactive states are defined as follows:
T = 1        C = OFF    interpreted as DTE ready
T = 0        C = OFF    interpreted as DTE not ready due to abnormal condition
T = 0101..   C = OFF    interpreted as DTE operational but not ready. Used for flow control.
DCE to DTE
 R - Receive
 I - Indication
R carries bit stream
I indicates how it should be interpreted
Three inactive states are defined in a similar way as for DTE to DCE
7 DIGITAL AND ANALOGUE SIGNALS
There are two types of data:
 Digital
 Analogue
There are two types of transmission:
 Digital
 Analogue
This gives rise to 4 potential situations:
 Digital data - Analogue transmission
 Digital data - Digital transmission
 Analogue data - Analogue transmission
 Analogue data - Digital transmission
What are examples of digital data?
What are examples of analogue data?
When the types of data and the transmission are not the same, data must be changed to suit
the transmission media.
Analogue Signals
Analogue signals are continuous waveforms. The absolute level of the signal can be any
value between full ON and full OFF. For example a pure sine wave has the following form -
Digital Signals
Digital signals can only have one of two values, full ON (or logic level 1) and full OFF (or
logic level 0).
So how can we convert a continuous analogue signal into a digital signal?
7.1 Pulse Code Modulation
An analogue signal can be converted into a digital signal by a process known as Pulse Code
Modulation. It is achieved by slicing up the analogue signal in time, at regular sampling
intervals. On each sample instance the absolute level of the analogue signal is measured, and
converted into a binary number. The resolution of the conversion is given by the number of
bits allowed for each sample's binary representation. So if we sample our original sine wave
using only 3 bits we will achieve the following result -
Note that there are 8 discrete levels, as three binary bits gives a maximum number of
quantisation levels of only 8. The more bits that we have, the better will be the digital
representation of the analogue signal. So if we use 8 bits (i.e. 256 quantisation levels) we
obtain the following representation of the signal -
Each individual sample can now be converted into a binary number. So if we used a 3 bit
ADC, then each sample would consist of just three bits. A digital data link transmits the
sample data as a digital bit stream. The receiver can reconstruct the original analogue signal
by using a Digital to Analogue Converter (DAC).
This technique, of sampling an analogue signal, and generating a stream of digital data, is
known as Pulse Code Modulation.
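A minimal sketch of the sampling and quantisation step described above, assuming the analogue signal is already scaled to the range -1..+1 (the 8 kHz sampling rate and 1 kHz sine wave are just example values):

import math

def pcm_sample(signal, sample_rate_hz, n_samples, bits=3):
    # Sample the signal at regular intervals and quantise each sample to 2**bits levels
    levels = 2 ** bits
    codes = []
    for i in range(n_samples):
        t = i / sample_rate_hz
        v = signal(t)                                       # assumed to lie in -1.0 .. +1.0
        code = int(round((v + 1.0) / 2.0 * (levels - 1)))   # map to 0 .. levels-1
        codes.append(code)
    return codes

# 3 bit PCM of a 1 kHz sine wave sampled at 8 kHz for one period (8 samples)
print(pcm_sample(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 8, bits=3))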
7.2 Sampling Rate
We can improve the representation of an analogue signal by increasing the number of quantisation
levels. We can also improve the representation by increasing the sampling rate.
If we decrease the sampling rate we can reach a point where the representation is so poor the
receiver can no longer reconstruct the original analogue signal. This limit is given by
Nyquist’s Theorem, which states that in order to sample a signal containing frequencies up to F,
the minimum sampling rate is 2F.
7.3 Companding
Let us assume that we are using an 8 bit ADC. The quantisation level represents the minimum
level of uncertainty that is always present in any sampled signal. This uncertainty degrades
the Signal to Noise (S/N) ratio of the transmission link. As the uncertainty (or quantisation
noise) is always a fixed level, the degradation of S/N is worse when the overall signal levels
are low.
One solution to this is the process known as companding. Instead of using an 8 bit ADC, we
instead use a 6 bit ADC. The other two bits are used to represent a compression factor that is
applied to the signal before the signal is presented to the ADC.
The full scale input level of the ADC is set to be 1/16 of maximum allowed input signal. So if
a signal is low, it can still use the full range of the ADC, thereby minimising the quantisation
noise. Higher level signals are reduced (or compressed) to fit the ADC range. The final
digital representation consists of two bits that represent the compression factor, and 6 bits
that represent the absolute value of the signal as seen by the ADC.
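A very rough sketch of the idea, assuming a 6 bit magnitude code plus a 2 bit compression (gain) factor; the particular gain steps used here are invented for the example:

def compand_6_plus_2(sample, full_scale=1.0):
    # Encode a sample as a 2 bit gain factor plus a 6 bit magnitude code.
    # Low level signals are amplified before conversion so they still use the
    # full ADC range; the gain factor lets the receiver undo the scaling.
    gains = [16, 8, 2, 1]                     # illustrative gain steps only
    for factor_code, gain in enumerate(gains):
        scaled = sample * gain
        if abs(scaled) <= full_scale or factor_code == len(gains) - 1:
            levels = 2 ** 6
            code = int(round((scaled / full_scale + 1.0) / 2.0 * (levels - 1)))
            return factor_code, max(0, min(levels - 1, code))

print(compand_6_plus_2(0.01))   # quiet signal: large gain applied
print(compand_6_plus_2(0.9))    # loud signal: little or no gain applied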
7.5 Modulation Techniques
The public switched telephone network (PSTN) was designed for carrying analogue (i.e. voice)
signals, not digital data. It was not designed to carry digital signals, which would switch the
voltage levels on the line between OFF and full scale ON. The bandwidth of voice telephone
lines is little more than 3 kHz, which is far too small for today’s data communications
demands; also old style PULSE DIALLING telephone exchanges would attempt to interpret
binary switched signals as ‘call progression tones’ (e.g. dialling, ringing, line busy etc.).
So how can we transmit digital data over the PSTN?
The solution to this problem is to modulate the digital information onto an analogue carrier
signal. This is achieved by one of three main techniques:
1) Amplitude Shift Keying (ASK)
2) Frequency Shift Keying (FSK)
3) Phase Shift Keying (PSK)
In order to connect a digital data source to a telephone line we use a piece of equipment
known as a MODULATOR/DEMODULATOR or MODEM for short. The modulator part of
a MODEM converts the digital data that is to be transmitted into a modulated analogue
signal, the demodulator part accepts a modulated analogue signal off the line and turns it
back into digital data.
7.5.1 ASK
Amplitude shift keying uses a single carrier frequency, that is transmitted at two different
amplitude (or volume) levels in order to represent a logic level 0 and a logic level 1.
Given the problems of receiving signals correctly, what is the disadvantage of this
modulation technique? Hence, pure ASK is now seldom used.
7.5.2 FSK
Frequency shift keying uses two different frequencies to represent a logic level 0 and a logic
level 1. For example the V23 MODEM standard uses a signal of 1300 Hz to represent a 1 and
a signal of 2100 Hz to represent a 0.
Full duplex operation is achieved by using two other frequencies (390 Hz and 450 Hz) for the
other (or back) channel.
Note: - Less susceptible to errors than ASK, used up to 1200 BPS on voice lines.
The technique is also used in high frequency radio transmission and in LANs
7.5.3 PSK
Phase shift keying uses a single carrier frequency for each channel (2 are required for full
duplex operation). Typically these are 1200 Hz and 2400 Hz. The logic levels are represented
by phase changes in the signal.
Consider a system with 4 phases -
If we have four different states we can represent 2 bits with each signalling element. So we
can define 0 degrees as 00, 90 degrees as 01, 180 degrees as 11 and 270 degrees as 10.
This now gives you a clue as to how we can obtain very high data rates (currently up to 56,600
bits per second) down a telephone line that was designed for at best 3 kHz of analogue data.
The total number of data bits transmitted is double the number of signalling element changes
on the telephone wire.
The total number of signalling element changes is known as the BAUD RATE, and this is
bandwidth limited by the transmission medium. However if each signalling element
represents N bits then the actual data rate is N * BAUD RATE.
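The relationship can be written as a one-line calculation; the 600 and 2400 baud figures below are just example values:

from math import log2

def bit_rate(baud_rate, states_per_element):
    # Data rate = baud rate * number of bits carried by each signalling element
    return baud_rate * int(log2(states_per_element))

print(bit_rate(600, 4))     # 4 phases at 600 baud  -> 1200 BPS
print(bit_rate(2400, 4))    # 4 phases at 2400 baud -> 4800 BPS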
7.5.5 Differential Phase Shift Keying (DPSK)
In order to design electronic circuitry that detects phase shifts, it is advantageous to ensure
that lots of phase shifts occur even if the line is idle. A PSK system would just transmit a
continuous tone in these circumstances, so the receiver clock will tend to drift. One solution is
to use DPSK, which defines each pair of data bits (or dibits) as the phase change between
two signalling elements.
For example V22 defines the following coding system -
DIBIT VALUE    PHASE CHANGE
00             90 degrees
01             0 degrees
11             270 degrees
10             180 degrees
The carrier frequencies are again 1200 Hz and 2400 Hz, with a baud rate of 600. This means
that the data rate is 1200 bits per second (BPS).
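A minimal sketch of V22-style differential encoding using the table above: each dibit selects a phase change, so the absolute phase of each signalling element is the running sum of the changes.

PHASE_CHANGE = {'00': 90, '01': 0, '11': 270, '10': 180}   # degrees, per the table above

def dpsk_phases(bits):
    # Return the absolute phase of each signalling element for a dibit stream
    phase, phases = 0, []
    for i in range(0, len(bits) - 1, 2):
        phase = (phase + PHASE_CHANGE[bits[i:i + 2]]) % 360
        phases.append(phase)
    return phases

print(dpsk_phases('00011110'))   # [90, 90, 0, 180]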
7.5.6 Quadrature Amplitude Modulation QAM
Higher data rates are achieved by a combination of PSK with ASK. So as well as changing
the phase of the transmitted signal we also alter its amplitude. The V22bis MODEM
standard is the simplest example of this technique. V22bis defines 16 different types of
signalling element, so each element represents 4 binary bits. The baud rate is still 600 baud,
so the data rate achieved by V22bis is 2400 BPS.
Another example is V32, which defines 16 states transmitted at 2400 baud = 9600 BPS.
Higher data rates are now achievable (up to 56.6 kBPS) by increasing the number of discrete
signalling elements available to the MODEM.
7.6 Modems
Sending Computer Data over Telephone Channels
 Computers produce digital data (pulses)
 Telephone channels are designed for analogue signals
 So digital data must be converted into a suitable format (analogue signals) if telephone
channels are to be used
 The device which does this is called a MODEM (MOdulator/DEModulator)
A MODEM can set up a switched path through the telephone network, or use a leased line.
[Diagram: local Computer (digital) - local MODEM - analogue telephone line - remote MODEM - remote Computer (digital).]
MODEM characteristics include:
 Speed and variable speed
 Auto-answer / Manual answer
 Auto-dial / Manual dial
 Programmable Control
 Automatic redial
 Synchronous / Asynchronous
 Compatibility with Hayes command set and modems, Bell modems, and CCITT standards
 Voice-over data
 Self-test mode
 A wide range of Call Progression tones
 Data Compression - MNP & V42 protocols
 etc.
MODEM Loop-back Testing
[Diagram: Computer - MODEM - MODEM - Computer, with messages returned (looped back) at each stage.]
The computer sends out messages; these are returned at the various stages of transmission -
after each stage the message is returned. If the message is not the same as the message that
was transmitted originally then there is a fault on the line.
8 Transmission Modes
There are two transmission modes:
 Asynchronous Transmission
 Synchronous Transmission
The fundamental difference between the two modes is:
Asynchronous Transmission - The receiver clock is not synchronised with respect to the received signal
Synchronous Transmission - The receiver clock operates in synchronisation with the received signal
For both types of transmission the receiver must be able to achieve bit synchronisation.
For Asynch transmission byte synchronisation must also be achieved.
For Synch transmission, synchronisation of a block of bits (or bytes) must also be achieved.
8.1 Asynchronous Transmission
8.1.1 Bit Synchronisation
 Transmitter must operate with the same characteristics as receiver
 Receiver clock runs asynchronously with respect to the incoming signal
 Problem is to ensure the incoming signal (bit) is sampled as near the centre as possible
 Local receiver clock runs at N times transmitted bit rate (typically x16 or x64)
 Each new signal is sampled after N ticks of the clock
 The higher the receiver clock rate, the closer to the centre the signal will be sampled
8.1.2 Character Synchronisation
 Each character is enveloped in start and stop bits
 Transmitter and receiver must be programmed to operate with the same number of start
and stop bits.
 Transmitter and receiver must be programmed to operate with the same number of bits
for the transmitted character. This is typically 7 for ASCII, 5 for TELEX, or 8 for CEPT
display profiles (e.g. teletext).
 When the line is idle, 1's are normally transmitted and the stop bits are also 1's
 Start bit is usually a zero, thus there is always a 1-0 transition at the start of every character
 Note the start bit is sampled at N/2 clock ticks
 Receiver can achieve character synchronisation simply by counting the number of bits in
the character
 These are then transferred to a buffer
 Next 1-0 transition indicates the start of the next character on the line.
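A small sketch of how one character is enveloped for asynchronous transmission (start bit, optional parity, stop bits); bit ordering on a real line (LSB first) is ignored to keep the example short.

def frame_character(data_bits, parity='even', stop_bits=1):
    # Envelope one character in a start bit, optional parity bit and stop bit(s)
    bits = '0' + data_bits                        # start bit is a zero (1-0 transition)
    if parity is not None:
        even_already = (data_bits.count('1') % 2 == 0)
        bits += '0' if even_already == (parity == 'even') else '1'
    bits += '1' * stop_bits                       # stop bits are 1s (the line idles at 1)
    return bits

print(frame_character('1000001'))   # 7-bit ASCII 'A' -> 10 bits on the line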
8.1.3 Other Information
 Oldest, most common technique
 Application areas
   slow speed modems - up to 56.6 Kbps switched, 38.4 Kbps leased (over a distance of 50 feet)
   interactive applications running on dumb terminals
 Transmitter and receiver must be configured to have the same characteristics:
   5, 6, 7, 8 data bits
   0, 1 parity bits
   1 start bit
   1, 1.5, 2 stop bits
   These can be set by software, or alternatively can be set using hardware switches.
 Large overhead associated with asynchronous transmission, i.e. start and stop bits for every character, therefore the true information rate is much less than the bit rate
 Less reliable as bit rate increases
8.2 Synchronous transmission
Two variants of synchronous transmission
 Bit oriented - used by most modern protocols because it is more efficient
 Character oriented - older protocols
Bit and frame (block) synchronisation must be obtained
8.2.1 Frame Synchronisation
This relates to delimiting the frame, i.e. finding the start and end of the frame.
There are a number of ways in which this can be achieved. Three typical methods are:
 Fixed length frames - used in ATM
 Carry frame length in fixed position in packet - used in Ethernet
 Use of FLAGS and Bit stuffing - as in X.25 & Frame Relay, which marks both the
beginning and the end of the frame.
Flags and Bit Stuffing
 Data transmission entity is a frame
 Frame is viewed as a string of bits
 Typically a frame consists of many thousands of bits
 The frame is encapsulated by two flags. Flags have bit value 01111110
 Bit Stuffing is used to ensure the flag is not embedded in the frame, thus causing the end
of the frame to be assumed - incorrectly. This process ensures that the encapsulated data is
TRANSPARENT to the link level protocol.
 Bit stuffing is the process of automatically stuffing (adding) a 0 in the bit stream when 5
consecutive 1's are found
 Thus, 6 consecutive 1's never appear in the frame contents, thus flag pattern (and end of
frame) is never found in the frame contents
 Normally 1's are transmitted when the line is idle.
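A minimal sketch of the stuffing and de-stuffing rules described above (insert a 0 after every run of five 1s; remove it again at the receiver):

def bit_stuff(bits):
    # Insert a 0 after every run of five consecutive 1s in the frame contents
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')      # stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits):
    # Remove the 0 that follows every run of five consecutive 1s
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                 # this is a stuffed 0 - drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip, run = True, 0
    return ''.join(out)

data = '0111111101111101'
assert bit_unstuff(bit_stuff(data)) == data
print(bit_stuff(data))   # '011111011011111001' - never six 1s in a row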
Note that a detailed discussion of how frame synchronisation is achieved in character
oriented protocols is not covered. However, the techniques are similar in principle to those
used in Bit Stuffing.
8.2.2 Bit synchronisation in Synchronous Transmission
There are two ways in which the receiver can obtain bit synchronisation:
 Encoding the clock in the data
 Use of a digital phase lock loop circuit - with this scheme frequent transitions in the data are needed (i.e. frequent changes of binary zeros and ones)
8.2.2.1 Encoding schemes
Manchester encoding
Data is encoded using two signal levels
 A binary 1 is encoded as a low-high signal
 A binary 0 is encoded as a high-low signal
 The transition (i.e. from high to low) always occurs at the centre of the bit
 The receiver uses this transition to sample the signal close to the centre of the second half
 So for a binary 1 which is low-high the signal will be high
 For a binary 0 which is high-low the signal will be low
 Bit is then added to the register
 Twice the bandwidth is needed for this scheme, so it is normally only used with LANs
Differential Manchester encoding
 A transition at the start of the bit only occurs if the next bit to be coded is a 0
 There is still a transition in the centre of each bit
(Similar concept to differential PSK modulation technique)
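A minimal sketch of both encodings, representing each bit period as a (first half, second half) pair of signal levels:

def manchester(bits):
    # 1 = low-high, 0 = high-low; always a transition in the centre of the bit
    return [(0, 1) if b == '1' else (1, 0) for b in bits]

def differential_manchester(bits, start_level=1):
    # A 0 adds an extra transition at the start of the bit; a 1 does not.
    # Every bit still has a transition in the centre.
    level, out = start_level, []
    for b in bits:
        if b == '0':
            level ^= 1           # transition at the start of the bit
        first = level
        level ^= 1               # mid-bit transition always happens
        out.append((first, level))
    return out

print(manchester('1011'))                # [(0, 1), (1, 0), (0, 1), (0, 1)]
print(differential_manchester('1011'))   # [(1, 0), (1, 0), (0, 1), (1, 0)]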
8.2.2.2 Bit Synchronisation using Digital Phase Locked Loop (DPLL) circuit
A DPLL is the electronic equivalent of a musical tuning fork. It resonates at a fixed
frequency, which is used to sample the input data stream. The role of the DPLL is to ensure
that the ‘tuning fork’ oscillates in phase with the arriving data. It therefore must see a
transition (1 to 0 or 0 to 1) every now and again in order to adjust itself back to the correct phase.
When the output of the DPLL is in phase with the input data stream it is said to be ‘in lock’.
There are two aspects to the synchronisation process
 Must obtain the same frequency as the transmitter
 Must sample in the middle of a bit
Obtaining the frequency of the transmitter
 Receiver can extract this from the signal, but it will drift unless transitions occur.
How are transitions guaranteed?
Finding the middle of the bit.
 The receiver's clock runs at a multiple of the transmitter's clock (32 is a typical number)
 Pulse sample is adjusted to quickly find the middle
 Need occasional transitions to maintain synchronisation
[Diagram: DPLL bit sampling. Each bit is sampled in the middle, 32 ticks from the last sample. If no transition has occurred, the point 32 ticks on is assumed to be the start of the next bit. The next actual transition can occur in one of four regions A, B, C, D (each 8 ticks wide), or on the B/C boundary, and the next sample point is adjusted relative to that transition.]
9 Layer 2 - Data Link Layer
There are many different concepts and techniques used to build protocols at layer 2.
Typically these concepts and techniques (or variations of them) are also used with layer 3 and
4 protocols. Hence, the strategy is to understand these first, then consider how they are used
in real protocols. This makes the material covered at layer 2 rather long and it is worthwhile
to first review the structure of the material for layer 2 so that the framework is apparent.
9      Layer 2 - Data Link Layer
9.0    Introduction
9.1    Error Detection
9.2    Error Recovery (Correction) Classification
9.2.1  Forward Error Correction (FEC)
9.2.2  Backward Error Correction
9.3    Sequence Numbers
9.4    Idle RQ Protocols
9.5    Sliding Window Protocols
9.6    Timing Diagrams
9.7    Go-back-N ARQ
9.7.1  Justification for the maximum window size for Go-back-N
9.7.2  Recovery from loss of an I frame
9.7.3  Recovery from loss of a RR frame
9.7.4  Effects of loss of a REJ frame
9.7.5  Summary of Go-back-N ARQ
9.8    Selective Repeat ARQ
9.9    Comparative performance of Go-back-N and Selective Repeat
9.10   Practical Layer 2 protocols
9.10.1 Simple point-to-point connection
9.10.2 Link Layer in packet switched networks
9.10.3 LAPB Point-to-multipoint connection
9.11   Summary of Flow Control
9.0 Introduction
The main function of layer 2 is to provide an error free link to layer 3. Therefore the focus of
the protocols at layer 2 is to detect and recover from errors. In this section error detection is
explained first, followed by error recovery. The most widely used error recovery technique is
Backward Error Correction (BEC). There are many different BEC protocols, but all use the
concept of a window or sliding window. Thus the understanding of windows and particularly
sliding windows is critical to the understanding of these BEC protocols. Hence a large part of
this section is dedicated to explaining the principles of sliding windows.
9.1 Error Detection
Parity Bit
 Very simple scheme, where transmitter adds an extra parity bit to each character
 If even parity is used then the total number of 1's in the character (including the parity
bit) must be even
 Can also have odd parity
Which error patterns can the parity scheme not detect?
Block Check Sum
 This is an extension to the parity bit method
 There is a parity bit associated with each character as before
 A string of characters is viewed as a vector 8 bits wide with a 'parity character' added to the end
 So there is a parity bit for each bit position in string of characters
 For a one character overhead, many more errors can be detected
Which error patterns can the block check sum not detect?
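A small sketch of both checks, assuming characters are given as 7 bit strings:

def even_parity_bit(char_bits):
    # Parity bit chosen so that the total number of 1s (including it) is even
    return str(char_bits.count('1') % 2)

def block_check_character(block):
    # Column-wise parity over the block: one parity bit per bit position
    width = len(block[0])
    return ''.join(str(sum(ch[i] == '1' for ch in block) % 2) for i in range(width))

block = ['1000001', '1000010', '1000011']        # ASCII 'A', 'B', 'C'
print([even_parity_bit(c) for c in block])       # per-character (row) parity
print(block_check_character(block))              # 'parity character' (column parity)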
Cyclic Redundancy Check (CRC)
 CRC techniques used to produce the Frame Check Sequence (FCS) included in layer 2
frames
 Normally it is placed at the end of the frame
 CRCs can have varying lengths
16 bits is the normal CRC in WANS
32 bits is the normal CRC in LANs
 The greater the number of bits in the CRC, the greater the length of the frame that can be
covered
9.2 Error Recovery (Correction) Classification
Error detection allows the receiver to detect transmission errors, for instance using the CRC.
Having detected an error, the receiver must recover from it.
There are two main types of error correction scheme:
 Forward error correction
 Backward error correction
With forward error correction, extra information is generated by the transmitter and sent as
part of the frame. The receiver uses the information to detect and correct the errors. The
overhead (extra bits) associated with this form of recovery dictates that it is normally used
only on channels with a high Bit Error Rate (BER) e.g. air interface on mobile cellular
networks.
With backward error correction, the receiver asks the transmitter to retransmit the necessary
frames. Currently backward error correction is the most common form of error correction.
9.2.1 Forward Error Correction (FEC)
Additional bits are added to the message
These enable the receiver to detect and correct errors
Thus there is an overhead associated with FEC
The Hamming Single Bit Code is used to explain the principles of error correction. However,
in practice, much more complicated encoding schemes, based on convolutional codes, are
used.
Hamming Single Bit Code
Extra check bits are placed at positions 2^n in the data (i.e. at bit positions 1, 2, 4 and 8). Applying the Single Bit Code to a seven bit ASCII character, e.g. 1000101:

Bit positions:  11 10  9  8  7  6  5  4  3  2  1
                 1  0  0  C  0  1  0  C  1  C  C

Result is an 11 bit code (11,7).

To compute the 4 Check (C) bits:
The binary numbers corresponding to the bit positions having a binary 1 are added together (modulo 2). These become the 4 C bits.

  11   1011
   6   0110
   3   0011
       ----
       1110

The C bits are thus 1, 1, 1, 0 (at positions 8, 4, 2 and 1):

Bit positions:  11 10  9  8  7  6  5  4  3  2  1
                 1  0  0  1  0  1  0  1  1  1  0

To check the bits at the receiver:
Again, the binary numbers corresponding to the bit positions having a binary 1 are added together (modulo 2). These will be zero if no errors are found.

Bit positions:  11 10  9  8  7  6  5  4  3  2  1
                 1  0  0  1  0  1  0  1  1  1  0

  11   1011
   8   1000
   6   0110
   4   0100
   3   0011
   2   0010
       ----
       0000

If the following is received:

Bit positions:  11 10  9  8  7  6  5  4  3  2  1
                 1  0  0  1  0  0  0  1  1  1  0

  11   1011
   8   1000
   4   0100
   3   0011
   2   0010
       ----
       0110  - This gives the position of the error, i.e. bit 6

A non-zero result indicates an error has occurred and gives the bit position of the error.
The Single Bit Code:
 Can detect and correct all 1 bit errors
 Can detect all 2 bit errors
 Cannot detect error bursts of > 2 bits
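The check described above reduces to XORing together the position numbers of all the bits that are set. A minimal sketch, using the (11,7) codeword from the worked example:

def hamming_11_7_syndrome(codeword):
    # XOR together the positions (11 down to 1) of every bit that is 1.
    # Zero means no single-bit error; non-zero gives the position of the error.
    syndrome = 0
    for i, bit in enumerate(codeword):     # codeword[0] is bit position 11
        if bit == '1':
            syndrome ^= 11 - i
    return syndrome

print(hamming_11_7_syndrome('10010101110'))   # 0 -> no error
print(hamming_11_7_syndrome('10010001110'))   # 6 -> bit 6 is in error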
CRC Error Correcting Codes
CRCs can also be used to detect and correct errors. Not all CRC algorithms are capable of
achieving this, some CRCs are only error detection codes, others can detect and correct. A
full understanding of this process is not a requirement of this module.
A brief analysis is as follows -
A fixed length of N binary numbers can be considered to be a binary polynomial. So any
message can be expressed as a polynomial M(x). If we multiply this by a generator
polynomial G(x), then the transmitted message is T(x), where:
T(x) = M(x) * G(x)
If the message is corrupted by noise, we can also express the error as a polynomial as well
E(x).
The corrupted message C(x) therefore is T(x) + E(x).
C(x) = (M(x) * G(x) ) + E(x)
If we divide C(x) through by G(x) we obtain:

C(x)/G(x) = (M(x) * G(x))/G(x) + E(x)/G(x)
CRC’s use a special branch of mathematics known as Galois Fields, based on MODULO 2
arithmetic. Detailed knowledge of how this works is not a requirement of this module. It
should be clear to you that T(x) is perfectly divisible by the generator polynomial G(x). So
after we have divided the corrupted message by G(x) any remainder, R(x), left over after the
division must be due only to the effect of the error polynomial E(x).
R(x) = E(x) mod G(x)
We can derive R(x) in our receiver, we know what G(x) is, therefore if we can solve the
above equation we can determine the error polynomial E(x).
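A minimal sketch of modulo-2 polynomial division, using a small generator polynomial (x^3 + x + 1, i.e. '1011') purely as an example:

def crc_remainder(message_bits, generator_bits):
    # Modulo-2 (XOR) long division of message * x^(len(G)-1) by the generator
    k = len(generator_bits) - 1
    reg = list(message_bits + '0' * k)          # append k zero bits
    for i in range(len(message_bits)):
        if reg[i] == '1':
            for j, g in enumerate(generator_bits):
                reg[i + j] = str(int(reg[i + j]) ^ int(g))
    return ''.join(reg[-k:])

msg = '11010011101100'
fcs = crc_remainder(msg, '1011')            # frame check sequence appended to the frame
print(fcs)
print(crc_remainder(msg + fcs, '1011'))     # '000' at the receiver -> no error detected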
9.2.2 Backward Error Correction
The principle of BEC is that the transmitter repeats frames which have got ‘lost’. There are
many different BEC protocols. These can be roughly classified according to the Repeat
reQuest (RQ) strategy they operate. The three most widely used RQ strategies are:

Idle RQ:
With Idle RQ the transmitter sends a frame and waits for an acknowledgement before sending
the next frame. The transmitter remains idle until the acknowledgement arrives - hence the
name.
Go-back-N Automatic Repeat reQuest (ARQ) :
The receiver identifies an error and requests that all outstanding frames are retransmitted,
starting with the frame in error. It therefore discards all frames that it has already received
which were transmitted after the lost or corrupted frame.
Selective Repeat ARQ :
The receiver identifies an error and requests that only the frame in error is retransmitted.
Before these protocols can be fully understood the mechanics of sequence numbers and
sliding windows must be mastered.
9.3 Sequence Numbers
 Errors can occur on the link, for example a frame gets corrupted by noise on the link.
 The receiver detects the error and requests another copy of the frame.
 Implication is both receiver and transmitter can uniquely identify specific frames.
 Frames are uniquely identified by sequence numbers.
 The sequence number must be carried in the frame - normally in the header.
 The obvious format is an integer.
 The link will carry large numbers of frames (perhaps thousands per minute) but it is not
possible to reserve a large number of bits for the frame number. Remember, in packet
switching a link is shared by a large number of calls.
 Thus a scheme is needed, whereby sequence numbers are reused after a period of time.
 The receiver controls when the sequence number can be reused by acknowledging correctly received frames.
For example, with a 3 bit sequence number there are 8 sequence numbers - this is the sequence
number space. The stream of sequence numbers allocated would be:
0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0 ...
3 bits gives 8 numbers (0 - 7)
7 bits gives 128 numbers (0 - 127)
Example: If the following frames arrive
0 1 2 4 5 6...
Then it is clear that frame 3 is ‘lost’.
In summary, sequence numbers uniquely identify frames and allow the receiver to detect
missing frames.
9.4 Idle RQ Protocols
Idle RQ is probably the simplest BEC protocol. The transmitter sends the first frame, frame 0
and waits for a positive acknowledgement. When the acknowledgement arrives it sends frame
1 and again waits for an acknowledgement. Eventually the sequence number will cycle back
to 0 again.
If the receiver detects an error in a frame, then it discards the frame and sends a negative
acknowledgement. The transmitter responds to the negative acknowledgement by resending
the frame. At any time, there is only one frame awaiting acknowledgement, therefore there is
never any confusion over which frame must be retransmitted.
A frame may also get ‘lost’. Clearly, in this case the receiver will not receive the frame,
therefore an acknowledgement cannot be sent. There is a potential deadlock situation here.
The transmitter cannot send a new frame until an acknowledgement is received and the
receiver cannot send an acknowledgement until the frame is received. This is resolved by
introducing a ‘time-out’. When a transmitter sends a frame a timer is started. If the timer
expires before the acknowledgement is received, then the frame is automatically
retransmitted.
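A toy sketch of the Idle RQ behaviour described above; the channel_delivers() function simply stands in for an unreliable link and the 20% loss probability is an arbitrary assumption.

import random

def idle_rq_send(frames):
    # Stop-and-wait: send one frame, wait for its acknowledgement,
    # and retransmit on time-out until the acknowledgement arrives.
    def channel_delivers():
        return random.random() > 0.2            # assume 20% of transmissions are lost
    for frame in frames:
        while True:
            frame_arrives = channel_delivers()
            ack_arrives = frame_arrives and channel_delivers()
            if ack_arrives:
                break                            # positive acknowledgement received
            # otherwise the timer expires and the frame is sent again
    return 'all frames delivered'

print(idle_rq_send(['frame 0', 'frame 1', 'frame 2']))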
With Idle-RQ the link utilisation is poor. To increase the efficiency of the protocol Idle-RQ is
extended to allow the transmitter to have many frames outstanding. This is achieved by
operating a sliding window as follows.
9.5 Sliding Window Protocols
Related to sequence numbers is the concept of a sliding window. Note that the notation used
to describe sliding windows matches that used with LAPB - LAPB is discussed below.
The following constants/variables are required to implement a sliding window
Constants
Sequence number space
Window size
Variables :
Upper window edge = V(S) S denotes Send
Lower window edge = V(A) A denotes Acknowledge
V(S) is the value of the next sequence number which will be allocated by the transmitter. It is
incremented by 1 every time a frame is transmitted.
V(A) is updated by the receiver, when an acknowledgement is received from the other end of
the link. Note, an acknowledgement may (and typically does) acknowledge multiple frames.
Together, these two variables control the send window.
V(R) is present in the receiver and is used to check whether the correct frame sequence has
been received. The frame sequence number in the frame is compared with V(R). If they are
equal, then this is the frame expected, and V(R) is incremented by one, ready for receipt of
the next frame.
These variables always refer to the next frame. For instance, V(S) is the sequence number
that will be allocated to the next frame that is transmitted. Similarly, V(R) in the receiver, is
the sequence number of the frame that the receiver expects to receive next. The contents of
the variable V(A) is the number of the frame that will next be acknowledged. Note also that these
are the variables required for data transmission in one direction and acknowledgements in
the opposite direction. For a full duplex link (simultaneous transmissions in both directions) a
set of these variables would be required in both transmitter and receiver.
How Sequence Numbers work
Initial state:- Nothing has been transmitted
Assume the sequence number space is 8 and the window size is 3
(Note that with 8 numbers the window could be bigger)
Remember the sequence numbers must wrap round from 7 back to 0
Initial state of the window:
V(A) = 0, V(S) = 0 (sequence numbers 0 1 2 3 4 5 6 7; no frames outstanding).
Two frames are transmitted :Frame number 0 is transmitted. (V(S) is then incremented and now has the value 1)
Frame number 1 is transmitted. (V(S) is then incremented and now has the value 2)
New state of the window:
V(A) = 0, V(S) = 2 (frames 0 and 1 have been sent and are awaiting acknowledgement).
One frame is acknowledged:
V(A) = 1, V(S) = 2 (only frame 1 is still awaiting acknowledgement).
If V(A) and V(S) have the same value, then there are no outstanding acknowledgements.
The window size constrains the relative values of V(A) and V(S). With a window size of 3, if
V(A) is 1 then V(S) cannot be greater than 4.
V(A) = 1, V(S) = 4: the window is full, so the transmitter must wait for an acknowledgement before sending the next frame.
The numerical difference between V(S) and V(A) is the number of acknowledgements
outstanding , i.e. frames which have been sent but not yet acknowledged.
Note that copies of the transmitted frames are stored in a buffer and only deleted when an
acknowledgement is received. Thus the window size also dictates the buffer size required at
the transmitter.
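A minimal sketch of the transmit-side bookkeeping described above (V(S), V(A), window size and retransmission buffer), reproducing the walk-through in which two frames are sent and one is acknowledged:

class SlidingWindowSender:
    def __init__(self, seq_space=8, window=3):
        self.seq_space = seq_space
        self.window = window
        self.vs = 0               # V(S): next sequence number to be allocated
        self.va = 0               # V(A): lower window edge
        self.buffer = {}          # copies of unacknowledged frames

    def outstanding(self):
        return (self.vs - self.va) % self.seq_space

    def can_send(self):
        return self.outstanding() < self.window

    def send(self, frame):
        assert self.can_send(), 'window closed - wait for an acknowledgement'
        self.buffer[self.vs] = frame                   # keep a copy for retransmission
        self.vs = (self.vs + 1) % self.seq_space

    def acknowledge(self, nr):
        # nr is the number of the next frame the receiver expects, i.e. its V(R)
        while self.va != nr:
            self.buffer.pop(self.va, None)             # frame acknowledged, free the buffer
            self.va = (self.va + 1) % self.seq_space

tx = SlidingWindowSender()
tx.send('frame 0'); tx.send('frame 1')     # V(A)=0, V(S)=2, two frames outstanding
tx.acknowledge(1)                          # V(A)=1, V(S)=2, one frame outstanding
print(tx.va, tx.vs, tx.outstanding())      # 1 2 1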
9.6 Timing diagrams
Timing diagrams are one method of showing the flow of frames between a transmitter and
receiver. Unless stated otherwise, the following timing diagrams assume a 2 bit sequence
number, giving a sequence number space of 4.
The following variables are needed to implement the Go-back-N protocol:
V(A) - Acknowledge   }  variables required in the transmitter
V(S) - Send          }  to implement the sliding window
Buffer to store transmitted frames
V(R) - Receive       }  variable required in the receiver, to assist with error recovery
The timing diagram below shows the basic flow of Information frames from transmitter to
receiver, in one direction. The diagram also shows how the V(S) and V(R) variables are
incremented as frames are successfully transmitted and received, respectively. The V(S),
V(R) and V(A) variables are always initialised to zero. No acknowledgements are shown in
this diagram.
[Timing diagram: the transmitter sends frames (I,0), (I,1), (I,2). V(S) is incremented each time a frame is transmitted. The receiver accepts a frame only if the sequence number in the frame header equals V(R); V(R) is then incremented. I = Information frame (data). No acknowledgements are shown.]
The timing diagram below shows how a receiver acknowledges successfully received
Information frames.
[Timing diagram: the transmitter sends (I,0), (I,1), (I,2); the receiver replies with (RR,3), an RR (Receiver Ready) frame (acknowledgement); transmission then continues with (I,3), (I,0). Note, acknowledgements always inform the transmitter of the frame the receiver expects to receive NEXT.]
9.7 Go-Back-N ARQ
With backward error control protocols, transmitters must store transmitted frames in a buffer
until they have been acknowledged by the receiver. Acknowledgements may be piggy-backed
on Information frames going in the reverse direction (piggy-backing will be explained later)
or alternatively separate explicit acknowledgements may be generated (Receiver Ready (RR)
frames). A timer is used to ensure that acknowledgements are sent on time, otherwise the
transmitter will time-out and automatically retransmit all unacknowledged frames. In the
event of a transmission error, the receiver requests retransmission, beginning with a specified
frame number and the transmitter responds accordingly. Clearly there must be no ambiguity
in the frame number requested by the receiver and the frame number used by the transmitter.
This has implications for the maximum window size, as demonstrated in the timing diagrams
below.
9.7.1 Justification for the maximum window size for Go-back-N
[Timing diagram: with a 2 bit sequence number (sequence number space 4) and no restriction on the window, the transmitter sends six frames (I,0), (I,1), (I,2), (I,3), (I,0), (I,1) and then receives an acknowledgement (RR,2).]
What is the problem with how this protocol operates?
The transmitter has no way of knowing which group of frames has been acknowledged, the
first two frames or all six frames, because the same frame acknowledgement number applies
in both cases.
How can the problem be solved?
Restrict the window size to the sequence number space. This ensures that all
unacknowledged frames have unique sequence numbers.
However, even if the window size is restricted to the sequence number space, problems can
still arise in some circumstances, as shown in the diagram below. Remember, protocols must
always operate correctly, irrespective of the type, or number of errors which occur.
[Timing diagram: window size equal to the sequence number space (4). The transmitter sends (I,0), (I,1), (I,2), (I,3) and the receiver acknowledges with (RR,0), but the acknowledgement is lost. The transmitter does not receive the ack, so it times out and retransmits all unacknowledged frames, starting with (I,0), (I,1). The receiver is expecting frame 0, and receives frame 0, therefore accepts it as the next new frame - but it is a retransmission of frame 0, which has already been received successfully.]
However, if the window size is less than the sequence number space, then the problem does
not arise, as the following timing diagram shows. Note that in this example the window size
is 3.
[Timing diagram: window size 3. The transmitter sends (I,0), (I,1), (I,2); the acknowledgement (RR,3) is lost. The source does not receive the ack, so it times out and retransmits (I,0). The receiver is expecting frame 3, and thus detects an error when frame 0 arrives.*]
* Note, how the receiver responds to errors is covered later.
In summary, with Go-back-N the maximum window size must be less than the sequence
number space (N-1). Thus for a 2 bit sequence number, the sequence number space is 4 and
the maximum window size is 3 (4-1). Note that a window size smaller than the maximum is
also acceptable. The retransmission buffer size is dependent upon the maximum window size.
The transmitter must be able to retransmit any of the unacknowledged frames, therefore the
retransmission buffer size is also N-1.
9.7.2 Recovery from loss of an I frame
[Timing diagram: the transmitter sends (I,0), (I,1), (I,2), but frame (I,1) is lost. The receiver expects frame 1 but receives frame 2; it sends a (REJ,1) to the transmitter and discards all subsequent I frames until frame 1 arrives. The transmitter retransmits frames 1 and 2, and the retransmission of frame 1 is accepted.]
9.7.3 Recovery from loss of a RR frame
The effect of the loss of an RR frame depends on whether another RR is generated and
received by the transmitter before it times out. Consider the following two timing diagrams:-
[Timing diagram: the transmitter sends (I,0), (I,1), (I,2); the receiver's acknowledgement (RR,2) is corrupted. The source does not receive the ack, so it times out and retransmits (I,0), (I,1), etc. The receiver is expecting frame 3, thus detects an error when frame 0 arrives; it responds by sending (REJ,3) and discards all subsequent I frames until frame 3 arrives.]
In the above timing diagram, the RR was corrupted and therefore discarded on arrival by the
transmitter. No other acknowledgements were sent by the receiver, thus the transmitter
eventually times out and retransmits all unacknowledged frames.
[Timing diagram: the transmitter sends (I,0), (I,1), (I,2); the first acknowledgement (RR,2) is corrupted, but a subsequent acknowledgement (RR,3), received in time, prevents the time-out and transmission continues with (I,3).]
Compare the above diagram to the previous one. In this instance the first RR was corrupted,
but the receiver generated a subsequent RR which was successfully received at the
transmitter. Note that the loss of the first RR has no effect on either the transmitter or
receiver because the second RR arrived before the transmitter timed out.
9.7.4 Effects of loss of a REJ
[Timing diagram: the transmitter sends (I,1), (I,2), (I,3). The receiver expects frame 1 but receives frame 2, thus assumes an error and sends (REJ,1); the REJ is lost. Frame 2 and subsequent I frames are discarded by the receiver. The transmitter eventually times out and retransmits from the oldest unacknowledged frame, frame 1, and the retransmission of frame 1 is accepted.]
It is worth considering the relationship between V(S), V(R) and V(A). V(S) is used to
generate the sequence number which is copied to the frame header and is incremented every
time an Information frame is sent. When the frame arrives at the receiver, the receiver
compares the sequence number in the frame header with V(R). If they have the same value
then the frame is accepted and V(R) is incremented, otherwise the frame is discarded. Note
therefore that the V(R) in the receiver tracks the V(S) in the transmitter, and it is used to
detect out of sequence frames, thus plays a vital role in error detection. The receiver must
also inform the source (periodically) what frames have been received successfully; this
allows the transmitter to remove them from the buffer (and thus move the sliding window).
The current value of V(R) indicates the frame number the receiver expects next, therefore
also indicates what frames have been successfully received. Therefore it is the value of V(R)
which is sent as the acknowledgement back to the transmitter. When the acknowledgement is
received by the transmitter, it is copied to V(A). This has the effect of moving the lower part
of the sliding window (and therefore releasing space in the buffer).
Note that these three variables are required to control the transmission of Information frames in
one direction. For a full duplex connection two sets of these variables are required.
9.7.5 Summary of Go-back-N ARQ
How Go-back-N recovers from the following situations:

Corrupted Information frame
 Corrupted frame will be discarded by the receiver.
 Receiver will become aware of the missing I frame when the next I frame arrives - it will
detect an out of sequence error (V(R) not = frame sequence number).
 Receiver sends an REJ frame, which requests retransmission.
(Note, if the corrupted I frame is also the last frame to be sent, then the transmitter will timeout because it will be expecting an acknowledgement from the receiver. On time-out the
frame will be automatically retransmitted.)
Corrupted Receiver Ready (ACK) frame
 Transmitter may time-out and retransmit unacknowledged frames.
 However, if a subsequent RR frame is received in time, then the transmitter and receiver
are oblivious to the missing acknowledgement.
Corrupted REJ
 On detecting an out of sequence error, the receiver will send an REJ frame to the
transmitter.
 The REJ frame informs the transmitter what frame number the receiver is expecting and
the transmitter begins retransmission from that point.
 Clearly all previous frames have been received, thus the REJ also acts as an
acknowledgement.
 If the REJ is corrupted then the transmitter will eventually time-out and retransmit all
unacknowledged frames.
Time-out Variants
In the literature it is usually assumed that on time-out the transmitter will automatically
retransmit all unacknowledged frames, and for the purposes of understanding generic
protocols based on Go-back-N, this explanation is adequate. However, the reader should
realise that in particular protocol implementations other variations are applied when a time-out
occurs. For example:

Link Access Protocol Balanced (LAPB) - The link layer protocol for X.25, retransmits only
one frame - the oldest unacknowledged frame.
LAPF - The link layer protocol for Frame Relay, has the option of retransmitting the most
recent frame, or alternatively an RR frame can be transmitted.
Note that whatever variant is used, the Poll/Final bit is set in the first frame transmitted on
time-out; this bit 'commands' the receiver to reply.
9.8 Selective Repeat (sometimes called Selective Reject)
As the name implies, with Selective Reject, the destination requests retransmission of
individual frames which have been corrupted. Retransmissions are requested using the
frame type SREJ. In other respects however the protocol functions in much the same
way as Go-back-N, including frame formats, control field formats, etc. Typically the
higher layer expects to receive Protocol Data Units (PDUs) in order, thus with
Selective Reject, it is necessary to buffer PDUs at the destination while awaiting
retransmission of earlier ones.
Using a similar approach as was taken for Go-back-N, it is possible to show that
Selective Reject has the following characteristics. For a sequence space of 8 (3 bit sequence number):
1. The window size cannot exceed half the sequence space, thus for a 3 bit
sequence number the window size must not exceed 4. This can lead to a
situation known as sequence number starvation.
2. Clearly the transmit buffer is the same size as the window.
3. A buffer of the same size as the transmit buffer must also be implemented in
the destination. This buffer is typically more complicated than the transmit
buffer because PDUs may have to be reordered within the buffer.
9.9 Comparative performance of Go-back-N and Selective Reject
1. Throughput efficiency - Retransmissions scheme for Go-back-N leads to reduced
throughput compared to Selective Reject. For Go-back-N, with a high offered load,
this can lead to a situation known as congestion collapse.
2. Maximum window size - The maximum window size for Selective Reject is half
the sequence number space compared to the sequence number space less one for Go-back-N, i.e. for Selective Reject the window size is almost half that for Go-back-N, for
the same sequence number space. Note that for both schemes, the window size
dictates the transmit buffer size.
3. For Go-back-N the receive buffer in the destination is 1, while for Selective Reject
it is the same size as the transmit buffer.
9.10 Practical Layer 2 protocols
There are many layer 2 protocols which typically use some of the BEC techniques explained
above. This section reviews a small number of practical protocols and describes a typical
application / environment in which they may be successfully employed.
9.10.1 Simple point-to-point connection
The connection of two DTEs is probably one of the simplest environments in which a link
layer protocol could be employed. This is generally referred to as a point-to-point connection.
Alternatively the DTE could be connected via the PSTN, in which case modems would also
have to be employed. A typical application would be a file transfer. For low bit rate transfers
then KERMIT and X-MODEM are the most widely used protocols. These are character
based protocols and employ variations of Idle RQ. Further details will not be given.
9.10.2 Link Layer in packet switched networks
In packet switched networks there is a need for efficient link layer protocols. Therefore these
typically employ continuous RQ error recovery schemes. Link Access Protocol Balanced
(LAPB), and LAPF (Frame Relay) are the link layer protocols used in X.25 and Frame Relay
networks respectively. LAPF is merely a variation of LAPB. These operate on every link in
the network. LAPB uses Go-Back-N while for LAPF the default is Go-Back-N but Selective
Reject is available as an option. Details of LAPB are given below. Note that there are a
number of variations of LAP based protocols which are derived from one of the modes of
High-level Data Link Control (HDLC). HDLC is discussed below.
Link Access Protocol Balanced - LAPB
There are 3 distinct phases at the link layer
 Link establishment - very seldom.
 Data transfer - constantly.
 Link disconnect - very seldom.
There are 3 types of frames which are used at specific phases of the call:
Unnumbered frames.
These are used to set-up (establish) the link so that information may be sent over the link, and
to disconnect the link when the transfer of information is complete. The devices which
operate LAPB are usually kept switched on, hence link set-up and disconnect occur very
infrequently - typically when the devices are switched on and off.
SABM (Set Asynchronous Balanced Mode) - Sets up the link.
DISC (Disconnect) - Disconnect frame sent, link breaks.
SABM and DISC are Commands
DM (Disconnect Mode) - Response frame to DISC
UA (Unnumbered Acknowledgement) - Response frame to SABM
FRMR (Frame Reject) - Used as a negative acknowledgement when a corrupted
unnumbered frame is received.
Information frames.
These are used to encapsulate (carry) layer 3 packets.
I (Information frame) - carries layer 3 packet
Supervisory.
These are used to control the flow of information on the link and to provide
acknowledgements. Refer to Section 9.7 for an example of how they are used.
RR (Receiver Ready) - Acknowledgement - ACK number supplied.
RNR ( Receiver Not Ready) - Acknowledgement & flow control - ACK number supplied.
REJ (Reject) - Negative acknowledgement - Sequence number to retransmit from
supplied.
Layer 2 frame format.
The frame comprises the following fields, in order:
Flag (01111110) | Address | Control | Information | CRC1 CRC2 | Flag (01111110)
 Flag - opening / closing flag. Bit stuffing ensures transparency; the closing flag of one
frame can also be the start of the next frame.
 Address - in LAPB the only possible addresses are 01 and 03.
 Control - defines the frame type and carries other control information.
 Information - carries the layer 3 packet (present when the frame is an I frame).
 CRC1 / CRC2 - error detection mechanism.
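The note that bit stuffing ensures transparency can be illustrated with a short Python sketch (an illustration only, not part of the protocol text): a 0 is inserted after any run of five consecutive 1s between the flags, so the flag pattern 01111110 can never appear inside the frame body.

    def bit_stuff(bits):
        """Insert a 0 after every run of five consecutive 1s (HDLC/LAPB-style stuffing)."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                out.append(0)   # the stuffed zero breaks up the run of 1s
                run = 0
        return out

    def bit_unstuff(bits):
        """Remove the zero that follows every run of five consecutive 1s."""
        out, run, skip = [], 0, False
        for b in bits:
            if skip:
                skip, run = False, 0
                continue
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                skip = True     # the next bit is a stuffed zero - drop it
        return out

    # Example: six 1s in the data are broken up so they cannot mimic the flag.
    assert bit_unstuff(bit_stuff([1, 1, 1, 1, 1, 1, 0])) == [1, 1, 1, 1, 1, 1, 0]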
I frame control field (8 bits):
0 | N(S) (3 bits) | P/F | N(R) (3 bits)
 First bit 0 - identifies this as being an I frame.
 N(S) - the sequence number of this I frame, taken from V(S).
 P/F - poll/final bit; orders a reply regardless.
 N(R) - piggy-backed acknowledgement number, taken from V(R); in fact the number of the
next frame expected.
LAPB - Supervisory frame control field (8 bits):
1 0 | S S | P/F | N(R) (3 bits)
 First two bits 1 0 - identify this as being a supervisory frame.
 S S - the supervisory action, i.e. which supervisory frame (RR, RNR or REJ).
 P/F - poll/final bit.
 N(R) - acknowledgement number; in fact the number of the next frame expected.
Unnumbered control field (8 bits):
1 1 | M M | P/F | M M M
 First two bits 1 1 - identify this as being an unnumbered frame.
 The bit pattern in the M bits indicates the frame type - Set Asynchronous Balanced Mode
(SABM), Disconnect (DISC), Unnumbered acknowledgement (UA), Frame reject (FRMR),
etc. (32 possibilities).
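As an illustrative aid (not in the original notes), the sketch below decodes a single control octet into its frame class and fields, following the three layouts above; treating bit 1 as the least significant bit of the octet is an assumption made for the example.

    def decode_control(ctrl):
        """Decode one LAPB control octet (sketch; bit 1 taken as the least significant bit)."""
        if ctrl & 0x01 == 0:                        # ....xxx0 : I frame
            return {"type": "I",
                    "ns": (ctrl >> 1) & 0x07,       # send sequence number N(S)
                    "pf": (ctrl >> 4) & 0x01,       # poll/final bit
                    "nr": (ctrl >> 5) & 0x07}       # piggy-backed acknowledgement N(R)
        if ctrl & 0x03 == 0x01:                     # ......01 : supervisory frame
            names = ("RR", "RNR", "REJ", "SREJ")    # HDLC S-bit encodings
            return {"type": names[(ctrl >> 2) & 0x03],
                    "pf": (ctrl >> 4) & 0x01,
                    "nr": (ctrl >> 5) & 0x07}
        return {"type": "U",                        # ......11 : unnumbered frame
                "pf": (ctrl >> 4) & 0x01,
                "m_bits": ((ctrl >> 2) & 0x03) | (((ctrl >> 5) & 0x07) << 2)}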
9.10.3 Point-to-multipoint connection
The connection of a computer to a number of terminals all sharing the same channel is called
a point-to-multipoint (sometimes also referred to as multidrop) connection. Because the
terminals all share the same channel, fair access to the channel becomes an issue. Typically
in this situation the computer assumes the role of master and controls access to the
channel, while the terminals assume the role of slaves and respond to the commands
from the master. Clearly a more complicated link layer protocol is required for this situation.
HDLC is a general protocol which can be used in this and other similar situations. On link
establishment, the DTEs specify the environment in which the protocol will operate. Details
of HDLC are given below.
High level Data Link Control (HDLC)
General purpose data link control protocol.
- Full duplex.
- Used on different configurations:
 Point-to-point (WAN)
 Point-to-multipoint (computer to multiple terminals)
HDLC defines:
 Primary (Master) stations - responsibility for controlling the link.
 Secondary (Slave) stations - controlled by primary stations.
 Combined stations - both primary and secondary features.
Frames issued by the primary are called Commands.
Frames issued by a secondary are called Responses.
Modes of Operation
Balanced mode:
 Only on point-to-point configurations
 Only with combined stations (hence the term balanced)
Unbalanced mode:
 Either point-to-point or point-to-multipoint
 Only with primary / secondary stations (hence unbalanced)
At link establishment the set up message specifies transfer mode to be used.
Normal response mode - synchronous - the master prompts the slave.
 Used with unbalanced configurations.
 True master / slave configuration.
Asynchronous response mode.
 Used with unbalanced configurations.
 Secondary does not have to wait to be polled (seldom used) - the slave may send without
a prompt from the master.
Asynchronous balanced mode.
 Used with balanced configurations, combined stations.
 This is the mode of operation used for WANs, i.e. the Link Access Protocols (LAPs).
9.11 Summary of Flow Control
Flow control is required to allow a receiver to temporarily suspend a transmitter. This is
necessary in order to prevent the receiver's buffer from overflowing. There are a number of
ways in which flow control can be implemented. The techniques are summarised below.
Flow control techniques:
 Asynchronous links
- Hardware: RS232 signals (RTS / CTS)
- Software: XON / XOFF
 Synchronous links
- Software: transmit window combined with RNR frames
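As a small illustration of software flow control on an asynchronous link (a sketch only; the buffer thresholds and names are invented), a receiver might send XOFF when its buffer is nearly full and XON once it has drained:

    XON, XOFF = 0x11, 0x13   # ASCII DC1 / DC3 control characters

    class AsyncReceiver:
        def __init__(self, send_ctrl, high=80, low=20):
            self.send_ctrl = send_ctrl     # callback that sends a control character back
            self.buffer = []
            self.high, self.low = high, low
            self.stopped = False

        def on_byte(self, byte):
            self.buffer.append(byte)
            if len(self.buffer) >= self.high and not self.stopped:
                self.send_ctrl(XOFF)       # ask the transmitter to pause
                self.stopped = True

        def consume(self, n):
            del self.buffer[:n]
            if self.stopped and len(self.buffer) <= self.low:
                self.send_ctrl(XON)        # buffer has drained - resume transmission
                self.stopped = False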
10 Line sharing
A communication line can be shared using the multi-point configuration discussed above. In
this situation the terminals all wish to communicate with the same master computer and
typically the communication will be low volume.
There are other situations in which it is advantageous for DTEs to share a communication
line. For example, there may be a number of DTEs at two (remote) sites which need to
communicate with each other. It is possible to establish a link for each pair of DTEs which
need to communicate. The link could be leased (permanent) or switched (set up on demand).
However many links will be required and the management and cost of these will be high. An
alternative arrangement is to allow the DTEs to share a link.
Multiplexing allows a group of TEs to share a high-speed link.
Multiplexing : Time division multiplexing - mainly in circuit switching.
 Statistical multiplexing - mainly on packet switching.
 Frequency Division Multiplexing - mainly on unguided transmissions
10.1 Time Division Multiplexing (TDM).
Each TE has a separate connection to the TDM.
The TDM samples each TE in turn and puts the aggregate onto the high-speed link.
Each cycle (of servicing all TEs) has a fixed time period, and the data from a cycle is called a
frame (time frame). This is similar to time-slicing in operating systems. Thus, the bandwidth
allocated to a TE is fixed.
[Diagram: Time Division Multiplexer - sending TEs S1..Sn each have a separate connection
to the TDM, which places one sample from each TE into a fixed-length time frame on the
high-speed link; a second TDM at the far end demultiplexes the frame to the receiving TEs
R1..Rn.]
What is the minimum rate of the high-speed link?
What are the disadvantages of time division multiplexing?
Thus when is it a suitable choice of multiplexer?
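As a hedged worked example for the first question (the figures are invented), with TDM the high-speed link must carry one sample from every TE in every time frame, so its rate must be at least the sum of the TE rates, whether or not the TEs are active:

    # Minimum aggregate rate for a TDM high-speed link (illustrative figures).
    te_rates_bps = [9600] * 8            # e.g. eight TEs, each at 9600 bit/s
    min_link_rate = sum(te_rates_bps)    # 76800 bit/s - capacity is reserved even for idle TEs
    print(min_link_rate)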
10.2 Statistical multiplexing (Stat MUX)
The diagram below shows a number of TEs connected to a Stat MUX. Note that typically
there will be another Stat MUX connected to other end of the synchronous link, which also
terminates a number of TEs. Before data can be accepted from a TE, the TE must inform the
Stat MUX of the destination TE it wishes to communicate with. This information is conveyed
to the Stat MUX at the other end of the synchronous link. Thus, at the destination, when a
frame arrives from TE S1 (this is stored in the frame header), the Stat MUX routes it to the
appropriate destination.
The Stat MUX collects data (characters - asynchronous transmission) from the TEs and
builds variable length frames. The end of a frame will be recognised for instance when
carriage return is detected. Frames are moved to the Stat MUX's output buffer to await
transmission. The MUX will operate a particular layer 2 protocol on the synchronous link (e.g.
LAPB). With statistical multiplexing, only active TEs are serviced by the multiplexer. During
transmission a frame will occupy the entire bandwidth of the high-speed link.
The sum of the average transmissions, for all TEs, must not exceed approximately 0.7 of the
capacity of the high-speed link.
[Diagram: Statistical Multiplexer - TEs S1..Sn are connected by asynchronous links (via
UARTs) to the Stat MUX; variable-length frames queue in the Stat MUX's output buffer
before transmission on the synchronous (high-speed) link.]
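The 0.7 rule of thumb above can be expressed as a simple check (a sketch; the traffic figures are invented, and the 0.7 limit is the rule of thumb quoted in the notes):

    def stat_mux_ok(average_rates_bps, link_rate_bps, max_utilisation=0.7):
        """Check that the sum of the AVERAGE offered loads fits the high-speed link."""
        return sum(average_rates_bps) <= max_utilisation * link_rate_bps

    # Sixteen TEs averaging 2400 bit/s each over a 64 kbit/s link: 38400 <= 44800 -> True
    print(stat_mux_ok([2400] * 16, 64000))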
10.3 Multiplexing - Comparison
Stat MUX (or concentrator):
Frames are layer 2 frames and can be variable length.
A frame from a DTE occupies the entire bandwidth of the high-speed link while it is being
transmitted.
Therefore, each DTE can take a variable amount of bandwidth - bandwidth on demand.
Frames are stored in a buffer while they await transmission.
The sum of the average transmissions for the low-speed links must not exceed approximately
0.7 of the capacity of the high-speed link. Note, the sum of the average transmissions is not
the same as the sum of the capacities!
TDM:
Time frames have a fixed length and have the same structure.
A time frame contains a data sample from all TEs.
Bandwidth allocation is fixed.
The capacity of the high-speed link is equal to the sum of the capacities of the low-speed links.
10.4 Frequency Division Multiplexing
Unlike TDM and statistical multiplexing, FDM is a technique that divides the available
bandwidth into discrete frequency bands or channels. Each DTE is allocated its own
channel, all of which can be used simultaneously. Data is put onto the communications media
using a radio frequency modem which is tuned to that DTE's channel frequency.
11 Layer 3 - Network Layer
11.1 Switching Techniques
11.1.1 Circuit switching
 The most well known example of a circuit switched network is the telephone network.
 A route is set up, through a (variable) number of exchanges, to a destination.
 Channels used in setting up the route are dedicated to the caller for the duration of the call.
Thus there is no queuing and therefore minimum delay end-to-end (propagation delay).
 This is why the tariff is based on the duration of the call.
11.1.2 Packet Switching
 Again a route is set up, through a (variable) number of packet switches, to a destination.
 However, the channels are not dedicated to calls. Packets from many calls can share the
same channels. Packets are multiplexed (interleaved) onto links.
 The reason for this is efficiency. Data calls are naturally bursty, thus for these types of
calls, resources are used inefficiently in a circuit switched network.
 The trend in communication networks is packet switching.
Note, modern ‘circuit switched’ networks as used for the traditional analogue telephone
network convert the analogue voice data using PCM into data packets at the earliest
opportunity. Only the short connection from the telephone exchange to a domestic subscriber
is analogue; this link is known as The Local Loop.
There are two variations of packet switching
 Virtual Circuit
 Datagram
11.1.2.1 Virtual circuit (Connection Oriented Network Service, CONS)
Usual 3 phases:
 Call set up
 Data transfer
 Disconnect
At call set-up, a Call Request packet is built by the source TE and transmitted into the
network. A logical channel number is allocated at each node along the route and this
information is stored in the switch table (refer to diagram overleaf). The logical channel
number uniquely identifies the call, and has local significance only. The routing information,
i.e. the output link which the packets should be switched to, is supplied by the routing
algorithm. When the call has been established, data transfer can take place. The logical
channel number is carried in the header of every packet. The node accesses the switch table
and translates the input logical channel number and link number to the output logical channel
number and link number. Thus the packet is switched to the appropriate output link and
clearly, all data packets follow the same route.
When all the data has been transferred the call can be cleared.
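A minimal sketch (not from the notes; link names and channel numbers are invented) of the per-node switch table described above, mapping an (input link, input logical channel number) pair to an (output link, output logical channel number) pair:

    # Virtual-circuit switching: translate (input link, input LCN) -> (output link, output LCN).
    switch_table = {
        # entries installed at call set-up by the routing algorithm
        ("link1", 5): ("link3", 12),
        ("link2", 9): ("link3", 4),
    }

    def switch_packet(in_link, in_lcn, payload):
        out_link, out_lcn = switch_table[(in_link, in_lcn)]   # LCNs have local significance only
        return out_link, out_lcn, payload    # forward on out_link with out_lcn in the packet header

    print(switch_packet("link1", 5, b"data"))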
11.1.2.2 Datagram (Connectionless Network Service, CLNS)
With datagram networks, there is no call set-up, therefore there is no call disconnect. Packets
are routed independently, thus they must carry the full destination address in the packet
header. Packets can therefore arrive out of order, so it usual for the destination to operate a
Selective Reject error control scheme. No flow control can be applied in the network
therefore the nodes are more vulnerable to congestion. However, depending on the routing
algorithm in operation, it is possible to route around congested areas of the network and
network failures. Datagram networks are particularly well suited to calls which transmit only
a very small amount of data. Because packets are routed independently they can potentially
follow a different route in the network.
11.2 Functions of Layer 3
Relies on error free transport of its PDU's (packets) across a link (layer 2).
Responsibilities of layer 3 are:
 Multiplexing.
 Routing.
 Flow control.
 Congestion control.
11.2.1 Multiplexing
This is achieved by interleaving packets from many virtual circuits onto a layer 2 link. In
X.25 for example the switch table is used to translate the input link and logical channel
number to the output link and logical channel number. Clearly many logical channels can
share the same output link.
11.2.2 Routing
Routing relates to finding a route across the network, from source TE to destination TE.
Routing is done at call set-up in virtual circuit networks and for every packet in datagram
networks. Routing algorithms supply information to the network nodes on how best to route a
packet across the network. This information is typically supplied to the nodes in the form of a
routing table. The routing table provides an entry for every destination in the network and the
corresponding output link which a packet should take to reach that destination. The diagram
overleaf shows the information which must be stored in a VC switch in order to carry out the
routing and switching functions.
There are many types of routing algorithms and they can be classified as follows:
 static
 dynamic (or adaptive) - which can be either centralised or distributed
A static routing algorithm cannot be changed. Therefore it is not possible for a PSE to reroute
packets to avoid congestion or damaged links.
It is possible to use routing to avoid congested parts of the network. To do this the routing
algorithm must respond to what is happening in the network, so it must be adaptive. Adaptive
algorithms can be centralised or distributed. A centralised adaptive algorithm relies upon a
central system that monitors the flow of traffic throughout the network. This centralised
system then downloads new routing tables to all or some of the PSEs. Centralised algorithms
suffer from a number of disadvantages: they are not scalable and, as with all centralised
systems, security is an issue.
Therefore distributed algorithms are generally considered more appropriate for networks
which have a fluctuating load. With such a scheme the PSE’s have the ability to alter their
own routing directories locally, without referring to a higher centralised authority. The
Internet uses a distributed adaptive algorithm to avoid network congestion.
11.2.3 Flow control
Similar to layer 2. For example, in X.25, the way in which flow control is implemented at
layer 3 is identical to the way it is implemented at layer 2, i.e. by RRs and RNRs. In this case
the receiver referred to is the DTE itself and not the PSE.
The fundamental difference between the flow control applied at layer 2 and the flow control
applied at layer 3 is that layer 2 applies flow control at the link layer and layer 3 applies flow
control at the call level. Thus, with layer 3 it is possible to suspend transmission of an
individual call.
Is it necessary to have flow control at both layer 2 and layer 3?
11.2.4 Congestion Control
Congestion occurs in a packet switched network, when the total offered load exceeds the
capacity of the network. This can occur for a number of reasons:
 The network is under engineered, i.e. it was never designed to carry the offered load.
 A failure in the network means the capacity of the network is reduced temporarily.
 The statistical nature of the traffic may cause a temporary overload on the network.
There are a number of techniques used to manage congestion.
11.2.4.1 Reserving Buffer Space
Buffer space is reserved for each call at each node along the route. Clearly this can only be
used with connection oriented networks and is not suitable for connectionless networks. The
problem is in deciding how much buffer space should be reserved for each call. To guarantee
that all packets arriving at a node can be buffered, then the maximum send window must be
reserved for each call. While this ensures that no packets will be discarded, it doesn't address
the other symptom of congestion, namely delay. Also, the advantage of sharing resources is
lost and a large amount of buffer storage is required. This technique does not really address
the problem of congestion and is not practical.
11.2.4.2 Discarding packets
Packets arriving at a node are discarded if there is no available buffer space. This is very
much in keeping with connectionless networks where the service is described as 'best effort'
and higher layers in the TE take care of any error recovery. With connection oriented
networks however, error recovery typically takes place on a link by link basis and discarding
one frame may cause a number of frames to be retransmitted, depending on the ARQ in
operation. Also, care must be taken to ensure acknowledgements are not discarded, as these
allow buffer space to be released. In summary, this technique is more suitable for
connectionless networks.
11.2.4.3 Use of tokens
The objective is to try and control the number of packets in the network. When an access
node wants to send a packet it must first capture a token and when the destination access
node receives the packet it must release the token. Thus the number of packets in the network
is kept constant. However there are a number of disadvantages to this scheme. Although there
may be available tokens in the network there is no guarantee that they will be available to the
access nodes which want to transmit. Also, while the number of packets in the network is
controlled, this does not protect individual nodes. Finally there is no method for regenerating
tokens which get destroyed in the network. This method is not very robust and is equally
unsuitable for connection oriented and connectionless networks.
11.2.4.4 Adaptive windowing
Network nodes send congestion signals to source TEs which are expected to respond by
reducing their window sizes. The reduction in window sizes means that the flow of traffic
into the network is automatically throttled back. When congestion clears the TEs gradually
increase their window to the original value. There are a number of disadvantages with this
scheme. The congestion signal will take time to reach the source TE. It is difficult for the
node to decide when a congestion signal should be sent, and to which calls the signal should
be sent. Also it is not possible for the network to know that the TEs have responded to the
congestion signal. However, if used this technique is equally suitable for both connection
oriented and connectionless networks.
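A minimal sketch of the source TE behaviour described above (the halving on a congestion signal and the one-step increase per acknowledgement are illustrative policy choices, not taken from the notes):

    class AdaptiveWindow:
        """Source TE reaction to congestion signals (illustrative policy)."""
        def __init__(self, initial=8):
            self.initial = initial
            self.window = initial

        def on_congestion_signal(self):
            self.window = max(1, self.window // 2)   # throttle traffic entering the network

        def on_ack(self):
            if self.window < self.initial:
                self.window += 1                     # gradually restore the original window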
11.3 Example Layer 3 protocols
11.3.1 X.25 (Offers a connection oriented service)
X.25 is a network access protocol. This means it is defined at the access link of the network
only, i.e. only between the TEs and the node they are connected to. However, in practice it is
likely to be used on every link in the network.
Data Packet (a packet is the layer 3 equivalent of a frame; this data packet is the equivalent
of an I frame at layer 2):
Q | D | 0 1 | Group # | Channel # | P(R) | M | P(S) | 0 | User data (from layer 4)
 Q - qualifier bit.
 D - delivery confirmation bit: 1 = end-to-end confirmation, 0 = local confirmation.
 Group # and Channel # - together identify the virtual circuit (VC#).
 P(R) - acknowledgement number; P(S) - send sequence number.
 M - more data follows.
 The final bit of the header distinguishes the packet type: 0 = data packet, 1 = control packet.
Control Packet:
0 | 0 | 0 1 | Group # | Channel # | Packet type | 1 | Additional information (includes the
destination address)
X.25 Packet (PPDU) Types
DTE - DCE                      DCE - DTE                      Protocol usage
Call request                   Incoming call                  Call set up
Call accepted                  Call confirmation              Call set up
Clear request                  Clear indication               Call clearing
DTE clear confirmation         DCE clear confirmation         Call clearing
DTE data                       DCE data                       Data transfer
Interrupt request              Interrupt confirmation         Data transfer
DTE receiver ready             DCE receiver ready             Flow control
DTE receiver not ready         DCE receiver not ready         Flow control
DTE reject                     -                              Flow control
Reset request                  Reset indication               Resynchronisation
DTE reset confirmation         DCE reset confirmation         Resynchronisation
Restart request                Restart indication             Resynchronisation
DTE restart confirmation       DCE restart confirmation       Resynchronisation
Diagnosis                      Diagnostics                    Network error reporting
From the above table it is clear that the mechanics of X.25 layer 3 is similar to X.25 layer 2.
There is almost a one-to-one mapping of the packet types used at layer 3 and the frame types
used at layer 2, and they are used in a similar way. For example, the Call Request packet at
layer 3 initiates the set-up of a call, while the Set Asynchronous Balanced Mode (SABM)
frame initiates the set-up of a link. Note however that there is no need for an equivalent REJ
frame, as layer 2 has already taken care of transmission errors etc. However, it must be
remembered that errors can still occur, albeit very, very infrequently. For example, an
undetected CRC error may have corrupted the Logical Channel Number (LCN) in the packet
header. The corrupted LCN may not be known by the layer 3 at the receiver. A software bug
may exist in the transmitter, such that the sequence number allocated to the packet is not the
number expected. The Reset and Restart packet types facilitate recovery from these types of
errors. The Reset relates to a single Virtual Circuit and the Restart relates to all Virtual
Circuits established at a node. In practice Reset and Restart packets cause the appropriate
VCs to be disconnected, leaving the re-establishment/recovery to layer 4.
11.4 PAD (Packet Assembler Disassembler)
Essentially a PAD is a concentrator which provides the interface to allow non X.25
terminals to be connected to an X.25 network. On the input side the PAD connects
asynchronous terminals. These terminals send asynchronous characters. The PAD
assembles these characters into X.25 packets ready for transmission on an X.25
network (on the output side). Thus the PAD performs all the X.25 functions on behalf
of the terminal, e.g. call establishment, flow control etc. In addition to the X.25
functions the PAD must also control the setting of parameters used by the
asynchronous terminals, such as whether echo checking is required, etc. Default
values are available for these parameters and there are standard profiles for the more
popular terminals. There are four standards defined for PADs:
X.3 - Functionality of PAD
X.28 - Interface between the terminal and PAD
X.25 - Interface between the PAD and X.25 network node
X.29 - Interface between the PAD and remote TE
11.4.1 Creation of a Packet
The PAD is required to assemble the incoming asynchronous characters into a packet. The
PAD must therefore be instructed how this is to be achieved. Normally one of three options
is selected by using an asynchronous link level command.
Length - The packet is transmitted when N asynchronous characters are received
Time - The packet is sent every n milliseconds, regardless of how long it is
Termination - a special character (e.g. Carriage Return) is designated as the end of packet
marker
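The three packetisation options can be sketched as follows (an illustration only; the class name, the parameter values and the decision to check the timer only when a character arrives are all simplifications - a real PAD would run the time option from an independent timer):

    import time

    class PadAssembler:
        """Assemble asynchronous characters into a packet using one of three triggers."""
        def __init__(self, max_len=128, timeout_s=0.2, terminator=b"\r"):
            self.max_len, self.timeout_s, self.terminator = max_len, timeout_s, terminator
            self.buffer = bytearray()
            self.started = None

        def on_char(self, ch):
            """Add one character; return a complete packet or None."""
            if not self.buffer:
                self.started = time.monotonic()
            self.buffer += ch
            if (len(self.buffer) >= self.max_len                               # Length
                    or ch == self.terminator                                   # Termination
                    or time.monotonic() - self.started >= self.timeout_s):     # Time
                packet, self.buffer = bytes(self.buffer), bytearray()
                return packet        # hand to the X.25 layer for transmission
            return None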
11.5 Internet
11.5.0 Introduction
The context of much of the material presented so far was the OSI 7 Layer Reference Model.
Another reference model which is very widely used is the TCP/IP Reference Model - it is
used for example in the Internet. Most of the concepts / techniques (or variations of them )
already discussed can be also applied to the TCP/IP Reference Model. The purpose of this
section is to consider the TCP/IP Reference Model within the context of the Internet.
11.5.1 TCP/IP Reference Model
There are 4 layers to the model:
Host-to-network layer - The function of this layer is to allow the host to connect to the
network so that IP datagrams can be sent. Essentially this is not a layer at all, but an interface
- the interface between the Internet layer and the underlying network. This layer is not
actually defined and varies from network to network.
Internet layer - The function of this layer is to allow hosts to inject packets onto any network
and to route the packets independently to the appropriate destination. It provides an
unreliable service. This is discussed in detail later.
Transport layer - The function of this layer is to allow communication between two peer
entities in the source and destination hosts. Two protocols are defined: TCP and UDP. This
layer operates in the end points only. Transport layer protocols are not covered in this module.
Application layer - This layer contains all the higher-level protocols, for example FTP,
TELNET etc.
TCP/IP Reference Model
[Diagram: peer layers in the source HOST, the Gateway (Router) and the destination HOST.
The hosts run the Application (layer 4), Transport (layer 3), Internet and Host-to-Network
layers; the gateway runs only the Internet and Host-to-Network layers. Corresponding layers
in the two hosts are peers.]
11.5.2 Internet Protocol (offers a connectionless service)
The Internet Protocol (IP) is the protocol used in the Internet. The Internet comprises a large
number of interconnected networks. The computers attached to the networks are called hosts
(hosts are the equivalent of DTEs in X.25) and the devices used to interconnect the networks
are called gateways (gateways are the equivalent of X.25 switches). Routing is the main
function carried out by a gateway. The basic unit of transfer is a datagram (PDU). Similar to
other protocols, an IP datagram (PDU) comprises a header followed by data. The IP datagram
format is shown below. The roles of the various fields which make up the header are as
follows:
The IP datagram format (header fields, 32 bits per row):
 0        4        8                16      19                          31
 VERS  |  HLEN  |  SERVICE TYPE   |          TOTAL LENGTH
       IDENTIFICATION             |  FLAGS  |  FRAGMENT OFFSET
 TIME TO LIVE    |  PROTOCOL      |          HEADER CHECKSUM
                       SOURCE IP ADDRESS
                     DESTINATION IP ADDRESS
       IP OPTIONS (IF ANY)                  |  PADDING
                          DATA . . .
VERS (4 bits) - This contains the version of IP that was used to create the datagram. It
essentially defines the format of the datagram and is required to ensure all network
components which process the datagram apply the same format. The latest version is 4, but
version 6 is currently being defined.
HLEN (4 bits) - Defines the length of the header in 32 bit words.
SERVICE TYPE - Defines how the datagram should be processed. This comprises a number
of sub fields which are shown below.
PRECEDENCE (or priority) (3 bits) - Defines the priority of the datagram, from 0
(normal) through to 7 (highest). Typically this is ignored, but it will become more
important in the provision of QoS.
D - Is a one bit flag which when set requests low delay.
T - Is a one bit flag which when set specifies high throughput.
R - Is a one bit flag which when set specifies high reliability.
Note that it is not possible for the network to guarantee the service requests that have
been made, however it is clearly important for the network to at least know the user
requirements.
The last two bits of the service type field are unused.
TOTAL LENGTH (16 bits) - Defines the total length of the datagram measured in octets.
IDENTIFICATION, FLAGS and FRAGMENT fields (total 32 bits) control the fragmentation
and reassembly of datagrams. This is discussed later in this section.
TIME TO LIVE (8 bits) - This is measured in seconds, and defines how long the datagram is
allowed to remain in the Internet. This field is decremented as the datagram moves through
the network. When the field reaches zero the datagram is discarded and a message is sent back
to the source.
PROTOCOL (8 bits) - Defines the high-level protocol that was used to create the message
being carried in the data. This essentially defines the format of the data portion of the
datagram.
HEADER CHECKSUM (16 bits) - This is a checking field used to check the integrity of the
header.
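As an aside (the algorithm is the standard Internet checksum, not described in the notes above): the checksum is the 16-bit one's complement of the one's complement sum of the header taken as 16-bit words, computed with the checksum field set to zero. A minimal sketch:

    def ip_header_checksum(header: bytes) -> int:
        """One's-complement Internet checksum over the header (checksum field set to zero)."""
        if len(header) % 2:
            header += b"\x00"                          # pad to a whole number of 16-bit words
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the sum
        return (~total) & 0xFFFF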
SOURCE IP ADDRESS and DESTINATION IP ADDRESS (32 bits each) - Define the
source and destination IP address.
IP OPTIONS - The length of this field is variable, depending on which options are chosen.
Options are stored contiguously in the OPTIONS field, and each option comprises an
OPTION CODE field followed by an optional LENGTH (8 bits) and DATA field (variable
integral number of 8 bits). Options relate to network management and control. For instance,
if the record route option is specified then each network element which processes the
datagram must add its IP address to the record route option field.
A detailed description of all options is beyond the scope of these notes.
The major functions carried out by IP are Fragmentation/Reassembly, Routing and Error
Reporting.
Fragmentation/Reassembly
Datagrams may be transported across many physical networks. There may be various
maximum physical frame sizes associated with these physical networks. Consequently if a
datagram is longer than the maximum physical frame size defined by the network it must be
fragmented into a number of smaller fragments accordingly. Each of the fragments becomes a
new datagram and most (not all) of the header fields will be copied from the original
datagram to each new fragment.
Of great importance in the fragmentation process is the IDENTIFICATION field. This must
be copied into each fragment because this identifies the original datagram to which the
fragment belongs. The FRAGMENT OFFSET field specifies where (the offset) this fragment
was positioned in the original datagram. The last fragment resets the MORE FRAGMENTS
bit. From the FRAGMENT OFFSET and TOTAL LENGTH fields in the last fragment the
destination can calculate the length of the original datagram. It is then a simple task to
reassemble the datagram.
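A hedged sketch of the fragmentation arithmetic (the 8-octet granularity of the FRAGMENT OFFSET field is standard IP behaviour, though not stated above; a 20-octet header is assumed for simplicity):

    def fragment(identification, payload, mtu, header_len=20):
        """Split a datagram payload into fragments that fit the network's MTU (sketch)."""
        step = (mtu - header_len) // 8 * 8       # data per fragment, a multiple of 8 octets
        fragments = []
        for offset in range(0, len(payload), step):
            fragments.append({
                "identification": identification,      # copied into every fragment
                "fragment_offset": offset // 8,        # measured in 8-octet units
                "more_fragments": offset + step < len(payload),
                "data": payload[offset:offset + step],
            })
        return fragments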
Routing
Before routing can be discussed, addressing must be understood. An IP address is 32 bits
long, and comprises a netid (network identifier) field and a hostid (host identifier) field. Each
network must be allocated a unique netid within the domain of the Internet, while each host
must be allocated a unique hostid within the domain of the network to which it is attached. In
this way a (netid, hostid) pair can uniquely identify a host, and thus it is possible to route a
datagram from a source host to a destination host without any ambiguity. Note that the hostid
is typically divided into subnetid and hostid. This subdivision allows the network identified
by the netid to be considered as a ‘local internet’ comprising networks identified by the
subnetid and hosts attached to the subnetwork identified by hostid.
There are three different address classes, Class A, B and C. These differ only in the number
of bits allocated to the netid and the hostid. For instance, Class A addresses allocate 7 bits for
the netid and 24 bits for the hostid, while Class C allocates 21 bits for the netid and 8 bits for
the hostid. Clearly Class A addresses allow fewer networks but a greater number of hosts on
each of the networks compared to Class C. The format of these address classes are shown
below. To make it easier to read the address, the 32 bits are broken down into 4 bytes, the
byte values converted to decimal and separated by dots (periods). Also, in quoting network
addresses the hostid field is set to zero. For example, the netid for DMU is 146.227.0.0
The three address class formats (32 bits):
 Class A:  0      | netid (7 bits)  | hostid (24 bits)
 Class B:  1 0    | netid (14 bits) | hostid (16 bits)
 Class C:  1 1 0  | netid (21 bits) | hostid (8 bits)
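A small sketch (not part of the notes; classes D and E are ignored) that classifies a dotted-decimal address and splits it into netid and hostid values according to the formats above - note that the netid values returned here still include the leading class prefix bits:

    import ipaddress

    def classify(addr):
        """Return (class, netid value, hostid value) for a classful IPv4 address (sketch)."""
        value = int(ipaddress.IPv4Address(addr))
        if value >> 31 == 0b0:
            return "A", value >> 24, value & 0x00FFFFFF      # 7-bit netid, 24-bit hostid
        if value >> 30 == 0b10:
            return "B", value >> 16, value & 0x0000FFFF      # 14-bit netid, 16-bit hostid
        return "C", value >> 8, value & 0x000000FF           # 21-bit netid, 8-bit hostid

    # e.g. 146.227.x.y (the DMU netid quoted above) is a Class B address.
    print(classify("146.227.0.1"))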
To explain the process of routing, consider the example internet shown below. The internet
comprises 5 networks connected together by 3 gateways to form an internet. Each of the
networks have a unique netid which is shown on the diagram and each gateway has a
(netid,hostid) address for every network to which it is attached. For instance gateway G2 is
connected to three networks. Every gateway must maintain a routing table. Typically on a
local internet the routing tables are maintained manually by a network administrator who
must update them as the network topology changes. In contrast, in the Internet an adaptive
distributed routing algorithm maintains the routing tables automatically. Each entry in the
routing table gives a netid and the address of the next gateway in the route to that netid.
Because the routing process is carried out one hop at a time, this is commonly called hop by
hop routing. The routing process carried out by each gateway can be summarised as follows:
A datagram arrives at a gateway
The destination address is extracted
The routing table is searched using the netid part of the address
If the gateway is attached directly to the network netid
    the destination address is converted to a physical address
    the datagram is encapsulated in a physical frame
    the destination physical address is included in the physical frame
    the frame is transmitted on the network
else
    the next gateway address is extracted from the routing table
    the gateway address is converted to a physical address
    the datagram is encapsulated in a physical frame
    the gateway physical address is included in the physical frame
    the frame is transmitted on the appropriate network
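The summary above can be expressed as a short Python sketch (the addresses, netids and the crude Class B netid extraction are invented for the example):

    # Hop-by-hop routing decision at a gateway (sketch; addresses and netids invented).
    directly_attached = {"146.227.0.0"}                  # netids this gateway is attached to
    routing_table = {"128.10.0.0": "146.227.0.2"}        # netid -> address of the next gateway

    def netid_of(addr):
        """Crude classful netid extraction for the sketch (Class B assumed)."""
        a, b, _, _ = addr.split(".")
        return a + "." + b + ".0.0"

    def route(destination):
        netid = netid_of(destination)
        if netid in directly_attached:
            return ("deliver directly", destination)     # convert to a physical address and send
        return ("forward", routing_table[netid])         # one hop towards the destination netid

    print(route("146.227.1.5"))   # ('deliver directly', '146.227.1.5')
    print(route("128.10.2.3"))    # ('forward', '146.227.0.2')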
Given the example network below:
Compile a routing table for G2 and explain how G2 will route datagrams arriving for hosts.
Assign network identifiers to the networks.
[Diagram: example internet - Networks 1 to 5 interconnected by gateways G1, G2 and G3.]
Error Reporting
A datagram service is a best effort service. If a host or gateway discards a datagram for
reasons other than transmission errors then an ICMP (Internet Control Message Protocol)
message is sent back to the source host. This is termed error reporting. The ICMP message
gives the reason for the discard, for instance that the destination host is unreachable.
ICMP is also used to manage congestion. If a datagram is discarded because the buffers
are full then a 'source quench' ICMP message is returned to the host. The host is expected to
respond to the message by reducing the traffic rate.
There are other functions carried out by the ICMP which are outside the scope of these notes.
11.6 Comparison of TCP/IP and OSI Reference Models
Both Models have similarities and differences.
Similarities
1. Both use the concept of a protocol architecture, where there are a number of independent
layers, each carrying out a specific task.
2. The functionality is very similar for most of the layers in each reference model - e.g. both
have a transport layer which operates end-to-end.
Differences.
1. OSI reference model makes clear the distinction between services, interfaces and
protocols. The service defines what services the layer offers, the interface defines how they
are accessed and the protocols are the actual implementation of the services. This adheres to
standard software engineering practice. In contrast, the TCP/IP Reference Model does not use
this approach and hence the protocols (implementations) are not always transparent.
2. TCP/IP has no presentation or session layer.
3. OSI supports connection-oriented and connectionless communication in the network layer,
but only connection-oriented communication in the transport layer. In contrast, TCP/IP
allows only connectionless communication in the network layer but a choice of
connection-oriented and connectionless in the transport layer.
4. The OSI defines very precisely the physical and data link layers. TCP/IP ignores this
approach and instead the Host-to-network layer merely defines the interface to the underlying
network.
12 Local Area Networks (LAN's)
12.1 Issues
 MAC (Medium Access Control) layer (how nodes gain access to transmission media).
 Topology
 Transmission media
 Ownership
 Applications
 Speed
12.2 Media Access Control
MAC methods can be classified as:
 Deterministic - e.g. TOKEN RING (IEEE 802.5)
 Probabilistic (random access) - e.g. ETHERNET (IEEE 802.3)
Protocol stack (IEEE 802 standards):
Layer 3 - Network
Layer 2 - Logical Link Control (802.2) above the Medium Access Control (802.3 / 802.5)
Layer 1 - Physical
[Diagram: source DTE and destination DTE, each with layer 3, an LLC and MAC sub-layer
at layer 2, and layer 1; corresponding layers are peers, and end-to-end communication takes
place LLC to LLC.]
General Characteristics: End to end communications LLC to LLC.
 With bus and ring topologies, all DTEs share the same physical transmission media, i.e.
they are all attached to it. All frames are transmitted on it.
 MAC layer is concerned with how they gain access to, and that they share transmission
media in a fair way.
Why is this not an issue in WANs?
12.2.1 CSMA/CD (Carrier Sense, Multiple Access / Collision Detect)
 Used in technical / office environments.
 10 Mbps baseband COAXIAL cable.
10 base 2 - thin wire (0.25 diameter), maximum segment length: 200m
10 base 5 - thick wire (0.5 diameter), maximum segment length: 500m
 10 base T - hub (star) topology, but using twisted pair.
 It is possible to connect segments together with repeaters to extend the length to 2.5 kilometres.
12.2.1.1 Frame transmission.
 Sending DTE encapsulates data in a MAC frame, with required source and destination
addresses in the frame header.
 The frame is broadcast on media, the bits will propagate in both directions of the bus.
12.2.1.2 Frame reception.
 All DTEs are linked to the cable, and the DTE for whom the frame is destined,
recognises the destination address and continues to read the rest of the frame until
completion. Once the DTE realises the destination address is not its own it ignores the
remainder of the transmission.
 The data part of the frame is sent up to the LLC (Logical Link Control) layer.
Frame Format
Preamble              7 Octets
Start of frame        1 Octet
Destination address   2 or 6 Octets
Source address        2 or 6 Octets
Length indicator      2 Octets
Data                  <= 1500 Octets
Pad                   Optional (to meet the minimum frame length)
FCS                   4 Octets (32 bit CRC)
Preamble - allows MAC unit to achieve bit synchronisation (Manchester encoding used). It
consists of 7 octets with the format 10101010 followed by a single ‘start of frame’ octet
of the form 10101011.
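Since the preamble relies on Manchester encoding for bit synchronisation, a tiny sketch may help (the convention shown - a 1 as a low-to-high transition in the middle of the bit period, a 0 as high-to-low - is the one normally quoted for IEEE 802.3, stated here as an assumption):

    def manchester_encode(bits):
        """Return (first half, second half) signal levels for each bit period.
        Convention assumed here: 1 -> low-to-high transition, 0 -> high-to-low."""
        return [(0, 1) if b else (1, 0) for b in bits]

    # The 10101010... preamble produces a transition in every bit period, which is
    # what lets the receiving MAC unit achieve bit synchronisation.
    print(manchester_encode([1, 0, 1, 0, 1, 0, 1, 0]))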
12.2.1.3 Frame transmission in detail.
 DTE listens to transmission media, to decide whether another frame is being transmitted.
(Carrier sense)
 If / when media is idle, DTE transmits frame, and simultaneously monitors media to
ascertain whether another DTE has also transmitted a frame. i.e. To ascertain whether a
collision has occurred.
What are the implications of this ?
(If it listens while it transmits it must listen for the round trip delay time plus a small amount
for error handling.)
Worst case collision
[Diagram: DTEs 1 to 6 attached along a bus, with a collision occurring at a point X close to
DTE 6 but some distance from DTE 1.]
If a collision occurs at X, which is close to DTE 6 but some distance from DTE 1, DTE 1 will
have to wait much longer than DTE 6 for the occurrence of the collision to reach it. This is
shown by the arrows under the diagram: the large two-way arrow shows the round trip delay
for DTE 1; the corresponding delay for DTE 6 is much smaller.
 When a DTE which is transmitting a frame detects a collision, it reinforces it by
immediately transmitting a jam sequence, and discontinues transmitting its frame.
(How many DTEs will detect a collision? As many as have decided the line is idle.)
 If a DTE detects a collision it attempts a (maximum) number of retransmissions before
giving up.
 It retries after an integral number of slot times.
Slot time = twice the propagation delay of the LAN + the transmission delay of the jam
sequence.
Number of slots: 0 <= R < 2^k, where k = number of retries and R = number of slots.
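A minimal sketch of the retry rule above (truncated binary exponential backoff; the cap of 10 doublings is the usual Ethernet value, stated here as an assumption):

    import random

    def backoff_slots(retries, max_doublings=10):
        """Number of slot times to wait after the given number of collisions (sketch)."""
        k = min(retries, max_doublings)
        return random.randrange(0, 2 ** k)     # 0 <= R < 2^k

    # After the first collision wait 0 or 1 slots, after the second 0..3 slots, and so on.
    print([backoff_slots(r) for r in (1, 2, 3)])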
12.2.2 TOKEN ring LAN.
 Topology : Closed loop.
 Each station is attached to two other stations : 1 up stream - 1 down stream.
 It is actually a series of point to point links.
 All DTEs are connected together in a physical ring.
 Just like BUS based LAN's, all the frames share the same transmission path i.e. ring.
 A single token (a small frame, 24 bits long) circulates round the ring.
 The token is either available or in use.
 In order to send a frame a DTE must acquire the token.
A free token consists of 24 bits divided into 3 octets. The first octet is the Start Delimiter
(SD), the middle octet is the Access Control (AC) field, and the final octet is the End
Delimiter (ED).
Manchester encoding is used at the physical layer. Violations of Manchester encoding are
used to create the SD and ED octets.
12.2.2.1 Frame Transmission
 A DTE must first wait for the token to be available, to circulate round the ring to the
DTE.
 It then changes the token from available to in use (sets the T bit in the AC field to 1). It then
transmits its MAC frame immediately after the AC field of the token, with the CRC computed,
etc.
 The frame is repeated, i.e. each bit is received by all DTEs and retransmitted, until it
circulates back to the initiating DTE where it is removed; the token is then made available
again and passed on.
12.2.2.2 Frame Reception
 All DTEs receive frame, but only DTE whose address matches the destination address in
the frame header keeps a copy of the frame.
 Receiving DTE updates Acknowledge (A) and Copy (C) fields in the frame trailer, this
informs the originator that another DTE has received and copied the frame.
 The more DTEs on the ring, the better the throughput, as the transmission line is fairly
used and there are no collisions; the only delay is once around the ring.
12.2.2.3 Other issues.
 Priorities can be implemented by setting the Reservation bits in an in-use token's AC
field.
 One of the DTEs must be set up as a monitor to ensure frames do not circulate continuously
on the ring.
 Rings are usually wired such that if one DTE fails, then the others are still connected in a
ring.
12.2.3 Slotted Rings.
 Fixed length frames circulate round the ring in slot frames. Most common slotted ring
LAN is the Cambridge ring.
 Frame comprises : Source address (1 octet)
Destination address (1 octet)
Two data octets
5 control bits
Total of 32 bits.
 The ring operates in a similar manner to the token ring.
 Frame transmission:
The sending station waits for an empty frame.
The full / empty bit is set to 1.
The source and destination addresses are copied into the frame header.
2 octets of data are copied into the data field.
The frame is circulated onto the ring.
 Frame reception: At each station the frame is received and, if the destination address
matches the station's address, a copy of the frame is kept.
The destination station sets two bits which indicate that :
- The station is active.
- The frame was accepted.
 The frame continues round the ring until it is received by the sender, which checks that
the frame has been accepted, it then resets the full / empty bit.
 Note, a sending station is not allowed to have more than one frame in transit.
 A monitor station is required to manage the ring.
12.3 BUS VS ring LAN.
Distance
BUS - maximum bus length of 5 segments (each 500m) in CSMA/CD, i.e. 2.5 km.
Ring - maximum distance between stations 500m, but no overall limit on size.
Performance
BUS - low delay and high throughput when the offered load is low. As offered
load is increased, collisions occur and the delay increases and throughput
decreases.
Ring - no collisions. When the offered load is low, there is a delay in waiting
for the token to arrive at the sending station. As the offered load increases, the
delay does not increase substantially. Throughput is equal to offered load.
Robustness
BUS - More robust under failure. One station going down does not bring the
network down. (If no ACK is received then it will not resend to that
destination).
Ring - If the ring is broken then the network is down, thus more elaborate
wiring systems are required to ensure this does not happen.
Acknowledgements
BUS - separate ACKs are required, thus extra traffic on the network.
Ring - ACKs from the destination to the source are piggy backed on the frame,
thus no separate ACK's are required.
Simplicity
BUS - MAC protocol complex.
Ring - MAC protocol simple.
13 Internetworking
Internetworking relates to the interconnection of two or more networks. In some instances,
interconnection is necessary because the networks to be merged are different (have different
or partially different protocol stacks). However, sometimes for performance reasons,
networks are partitioned and the partitions interconnected. In this situation it is possible to
restrict some of the network traffic to the partitions.
There are a number of devices which can be used to interconnect networks. The difference
between the devices relates to the layer of the protocol stack at which the interconnection
takes place. In general two networks will be interconnected at the lowest layer where they
implement the same protocol stack. A brief description of typical devices follows.
Repeater - The interconnection takes place at the physical layer. These are typically used to
join two segments of an Ethernet together. Used to extend the length of the LAN. It merely
regenerates the signal and transmits it onto the next segment.
Bridges - These can be used to interconnect LANs which operate the same LLC. Note, a
bridge may be used to interconnect LANs which have different MAC layers. However, bridges
may also be used to interconnect LANs which have the same MAC layer; this is generally done
to increase performance.
Routers - These are used to interconnect networks which operate the same layer 3 (but the
layer 2’s are different).
Gateways - These are used to interconnect networks which have dissimilar layers from 3
upwards. The interconnection may have to take place above the application layer.