COMPUTER COMMUNICATION NETWORK
UNIT – I
SECTION - A
1. Data means information.
2. Communication is a process that allows organisms to exchange information.
3. Link is the physical communication pathway.
4. Twisted pair cabling is a form of wiring in which two conductors are wound together.
5. Coaxial cable is an electrical transmission line comprising an inner, central conductor surrounded by a tubular outer conductor.
6. Optical fiber is designed to guide light along its length.
7. To increase the distance served by terrestrial microwave, a system of repeaters can be installed with each antenna.
8. (30 MHz to 300 MHz) is Very High Frequency.
9. A modem is a Modulator/Demodulator
10. An optical modem uses optical fibre cable instead of wire.
11. V.34 is the 28.8 Kbps modem standard.
12. Analog to Analog Conversion is the representation of analog information by an analog signal.
13. Frequency modulation is a type of modulation where the frequency of the carrier is varied.
14. NRZ code represents binary 1s and 0s by two different levels that are constant
during bit duration
15. Manchester encoding uses inversion at the middle of each bit interval for both synchronization and bit representation.
16. PCM is a digital representation of an analog signal where the magnitude of the
signal is sampled regularly
17. ASK is a form of modulation that represents digital data as variations in the
amplitude of a carrier wave.
18. PSK is a digital modulation scheme that conveys data by changing
the phase of a reference signal
19. Channel capacity is the tightest upper bound on the amount of information that can be reliably transmitted over a communications channel.
20. Broadband is a type of data transmission in which a single medium can carry several channels at once.
21. Attenuation is the reduction in amplitude and intensity of a signal.
22. Attenuation is usually reported in dB
23. Distortion is the alteration of the original shape.
24. Thermal noise is the random motion of electrons in wire
25. Multiplexing is sending multiple signals on a carrier at the same time
26. In Synchronous TDM many slots are wasted.
27. wavelength-division multiplexing is a technology which multiplexes multiple
optical carrier signals on a single optical fiber.
28. In the parity checking method, if the number of ones in the data including the parity bit is even, then it is called Even Parity.
29. Cyclic redundancy check (CRC) is a type of function that takes as input a data
stream of any length and produces as output a value of a certain fixed size
30. Link is the physical communication pathway that transfers data from one device
to another.
(True/False)
31. Star Topology requires central controller or hub. True
32. Point to Point Line configuration provides a dedicated link between two devices
True
33. Hybrid topology uses only mesh and ring network topologies. False
SECTION - B
1. What is Data Communications?
The distance over which data moves within a computer may vary from a few
thousandths of an inch, as is the case within a single IC chip, to as much as several
feet along the backplane of the main circuit board. Over such small distances, digital
data may be transmitted as direct, two-level electrical signals over simple copper
conductors. Except for the fastest computers, circuit designers are not very concerned
about the shape of the conductor or the analog characteristics of signal transmission.
Frequently, however, data must be sent beyond the local circuitry that constitutes a
computer. In many cases, the distances involved may be enormous. Unfortunately, as
the distance between the source of a message and its destination increases, accurate
transmission becomes increasingly difficult. This results from the electrical distortion
of signals traveling through long conductors, and from noise added to the signal as it
propagates through a transmission medium. Although some precautions must be
taken for data exchange within a computer, the biggest problems occur when data is
transferred to devices outside the computer's circuitry. In this case, distortion and
noise can become so severe that information is lost.
Data Communications concerns the transmission of digital messages to devices
external to the message source. "External" devices are generally thought of as being
independently powered circuitry that exists beyond the chassis of a computer or other
digital message source. As a rule, the maximum permissible transmission rate of a
message is directly proportional to signal power, and inversely proportional to
channel noise. It is the aim of any communications system to provide the highest
possible transmission rate at the lowest possible power and with the least possible
noise.
2. Explain about Line Configuration:
It refers to the way two or more communication devices attach to a link. Link is
the physical communication pathway that transfers data from one device to another. There are two possible line configurations: 1. Point to Point and 2. Multipoint.
1. Point to Point provides a dedicated link between two devices.
2. Multipoint is one in which more than two specific devices share a single link.
3. What are the Main Types of Physical Topologies?
Topology:
The physical topology of a network refers to the configuration of cables,
computers, and other peripherals. Physical topology should not be confused with
logical topology which is the method used to pass information between workstations.
1. Bus
2. Star
3. Ring
4. Tree
5. Mesh
6. Hybrid
Bus Topology
A type of network setup where each of the computers and network devices are
connected to a single cable. Below is a visual example of a simple computer setup on a
network using the bus topology.
Star topology
Also known as a star network, a star topology is one of the most common network
setups where each of the devices and computers on a network connect to a central hub.
A major disadvantage of this type of network topology is that if the central hub
fails, all computers connected to that hub would be disconnected.
Ring topology
Also known as a ring network, the ring topology is a type of computer network
configuration where each network computer and device is connected to each other
forming a large circle (or similar shape).
Each packet is sent around the ring until it reaches its final destination. Today,
the ring topology is seldom used.
Tree topology:
Also known as a star bus topology, tree topology is one of the most common
types of network setups that is similar to a bus topology and a star topology.
A tree topology connects multiple star networks to other star networks.
Mesh topology
A type of network setup where each of the computers and network devices are
interconnected with one another, allowing for most transmissions to be distributed, even
if one of the connections goes down.
This type of topology is not commonly used for most computer networks as it is
difficult and expensive to have redundant connection to every computer. However, this
type of topology is commonly used for wireless networks.
Hybrid topology:
A network topology that uses two or more network topologies.
4. Explain about concepts of Data Communication
These basic concepts of data communications will help you understand how the Cascade
DataHub works.
2.2.1. Send and Receive Data
Send/write data: A program sends a value for a data point, and the DataHub
records, or writes, the value for that point. This type of communication is
synchronous. The send and the write are essentially two parts of a single process,
so we use the terms pretty much interchangeably. You can write a value to the
DataHub manually using the Data Browser.
A typical write command from a program using DDE protocol is DDEPoke.
Receive/read data: A program requests to receive the value of a data point. The
DataHub then responds by sending the current value of the point. We call this
reading the value from the Cascade DataHub. Again, we sometimes use the two
terms interchangeably, and again, this type of communication is synchronous.
A typical read command from a program using DDE protocol is DDERequest.
'Automatic' Receive: It is possible to set up live data channels, where a program
receives updates on data points sent from the Cascade DataHub. How it works is
the program sends an initial request to the DataHub to register for all changes to a
data point. The DataHub immediately sends the current value of the point, and
then again whenever it changes. The DataHub can receive data automatically in a
similar way. This asynchronous type of communication is sometimes referred to
as publish-subscribe.
A DDEAdvise command sets up this type of connection, which is called an
advise loop.
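The advise-loop idea above can be sketched in a few lines of code. The following is a minimal, hypothetical publish-subscribe example in Python; the DataPoint class and its subscribe/write/read methods are invented for illustration and are not the actual Cascade DataHub or DDE API.

    # Minimal publish-subscribe sketch (hypothetical; not the Cascade DataHub or DDE API).
    # A client registers a callback for a data point; it receives the current value
    # immediately, and then again every time the point changes -- an "advise loop".

    class DataPoint:
        def __init__(self, name, value=None):
            self.name = name
            self.value = value
            self._subscribers = []           # callbacks registered for change notifications

        def subscribe(self, callback):
            self._subscribers.append(callback)
            callback(self.name, self.value)  # send the current value right away

        def write(self, value):              # "send/write": update the stored value
            self.value = value
            for callback in self._subscribers:
                callback(self.name, value)   # push the change to every subscriber

        def read(self):                      # "receive/read": request the current value
            return self.value

    if __name__ == "__main__":
        temperature = DataPoint("boiler.temp", 20.0)
        temperature.subscribe(lambda name, value: print(name, "->", value))
        temperature.write(21.5)              # every write is pushed to the subscriber
        print("current:", temperature.read())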
2.2.2. Client - Server
Exchanging data with the Cascade DataHub is done through a client-server mechanism,
where the client requests a service, and the server provides the service. Depending on the
programs it interacts with, the DataHub is capable of acting as a client, as a server, or as
both simultaneously.
The client-server relationship itself does not determine the direction of data flow. For
example, a client may read data from the server, or it might write data to the server. The
data can flow either way; the client might initiate a read or a write, and the server would
respond.
5. What is the principle of Modem?
MODEM:
A modem is a Modulator/Demodulator; it connects a terminal/computer (DTE) to the
Voice Channel (dial-up line).
Basic Definition
The modem (DCE - Data Communication Equipment) is connected between the
terminal/computer (DTE - Data Terminal Equipment) and the phone line (Voice
Channel). A modem converts the DTE (Data Terminal Equipment) digital signal to an
analog signal that the Voice Channel can use.
Digital Connection
The connection between the modem and terminal/computer is a digital
connection. A basic connection consists of a Transmit Data (TXD) line, a Receive Data
(RXD) line and many hardware hand-shaking control lines.
The control lines determine: whose turn it is to talk (modem or terminal), if the
terminal/computer is turned on, if the modem is turned on, if there is a connection to
another modem, etc..
Analog Connection
The connection between the modem and outside world (phone line) is an
analog connection. The Voice Channel has a bandwidth of 0-4 kHz but only 300 - 3400
Hz is usable for data communications.
The modem converts the digital information into tones (frequencies) for transmitting
through the phone lines. The tones are in the 300-3400 Hz Voice Band.
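As a rough sketch of how bits become tones in the voice band, the snippet below maps 1s and 0s to two audio frequencies in the style of frequency-shift keying. The 1070/1270 Hz tone pair, the 300 bps rate, and the 8 kHz sample rate are illustrative assumptions, not a full modem implementation.

    # Sketch: mapping bits to tones inside the 300-3400 Hz voice band (FSK-style).
    # The frequencies, bit rate and sample rate below are illustrative assumptions.
    import numpy as np

    SAMPLE_RATE = 8000        # samples per second
    BIT_RATE = 300            # bits per second
    FREQ_SPACE = 1070.0       # tone used for binary 0
    FREQ_MARK = 1270.0        # tone used for binary 1

    def modulate(bits):
        """Return one sine-wave tone per bit, concatenated into a single signal."""
        samples_per_bit = SAMPLE_RATE // BIT_RATE
        t = np.arange(samples_per_bit) / SAMPLE_RATE
        tones = [np.sin(2 * np.pi * (FREQ_MARK if b else FREQ_SPACE) * t) for b in bits]
        return np.concatenate(tones)

    line_signal = modulate([1, 0, 1, 1, 0])
    print(line_signal.shape)  # the "analog" samples that would be sent down the phone line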
6. What are the Types of Internal and External Modem?
External/Internal Modems
There are 2 basic physical types of modems: Internal & External modems.
External modems sit next to the computer and connect to the serial port using a
straight through serial cable.
Internal modems are plug-in circuit boards that sit inside the computer and incorporate the serial port on-board. They are less expensive than external modems because they do not require a case, power supply, or serial cable. They appear to communication programs as if they were an external modem for all intents and purposes.
Modem Types
There are many types of modems, the most common are:
i. Optical Modems
Uses optical fibre cable instead of wire. The modem converts the digital signal to
pulses of light to be transmitted over optical lines. (more commonly called a
media adapter or transceiver)
ii. Short Haul Modems
Modems used to transmit over 20 miles or less. Modems we use at home or to
connect computers together between different offices in the same building.
iii. Acoustic Modem
A modem that was coupled to the telephone handset with what looked like suction
cups that contained a speaker and microphone. Used for connecting to hotel
phones for travelling salespeople.
iv. Smart Modem
Modem with a CPU (microprocessor) on board that uses the Hayes AT
command set. This allows auto-answer & dial capability rather than manually
dialing & answering.
v. Digital Modems
Converts the RS-232 digital signals to digital signals more suitable for
transmission. (also called a media adapter or transceiver)
vi. V.32 Modem
A milestone modem that used a 2400 baud signalling rate with 4-bit encoding. This
results in a 9600 bps (bits per second) transfer rate. It brought the price of high
speed modems below $5,000.
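The 9600 bps figure for V.32 follows directly from the baud rate and the number of bits encoded per symbol; a quick check:

    # Bit rate = baud (symbols per second) x bits per symbol.
    baud = 2400              # symbols per second
    bits_per_symbol = 4      # 4-bit encoding, i.e. 16 distinct signal states
    print(baud * bits_per_symbol)   # 9600 bps, the V.32 rate quoted above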
7. Explain about Modem Speeds / Standards:
Bell 103: 300 bps FSK, half duplex
Bell 113: 300 bps FSK, full duplex
Bell 202: 1200 baud, half duplex
Bell 212A: 1200 bps DPSK (Dibit Phase Shift Keying), V.22 compatible; 300 bps FSK (Frequency Shift Keying), NOT V.22 compatible
MNP1-3: Microcom Networking Protocol; basic error detection and control of errors
MNP4: Error correction + adapts to line conditions
MNP5: Error correction + adapts to line conditions, and adds a compression technique used to double the data transfer rate
RS-232D: Cable and connector standard
V.22: 1200 bps DPSK (Dibit Phase Shift Keying), Bell 212A compatible; 600 bps PSK (Phase Shift Keying), NOT Bell 212A compatible
V.22bis: 2400 bps, international standard; fallback in Europe to V.22, fallback in America to Bell 212A
V.24: European mechanical specifications for RS-232D
V.26: Synchronous 2400 bps modem; 1200 bps DPSK full duplex
V.27: Synchronous 4800 bps DPSK modem
V.28: European electrical specifications for RS-232D
V.29: Synchronous 9600 bps QAM
V.32: 9600 bps QAM
V.32bis: 14.4 Kbps QAM
V.33: 14.4 Kbps Trellis Coded Modulation for noise immunity
V.34: 28.8 Kbps modem standard
V.34bis: 33.6 Kbps modem standard
V.42bis: Compression technique to roughly double the data transfer rate; uses Automatic Repeat Request (ARQ) and CRC (Cyclic Redundancy Checking)
WE201: Synchronous Western Electric 2400 bps DPSK
WE208: Synchronous 4800 bps DPSK
WE209: Synchronous 9600 bps
8. Explain about Channel Capacity:
 It is the tightest upper bound on the amount of information that can be reliably transmitted over a communications channel.
 Information is the result of processing, gathering, manipulating and organizing data in a way that adds to the knowledge of the receiver.
 Channel, in communications (sometimes called communications channel), refers to the medium used to convey information from a sender (or transmitter) to a receiver.
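Channel capacity is commonly quantified with Shannon's formula, C = B log2(1 + S/N). A small Python sketch follows; the 3100 Hz voice-channel bandwidth and 30 dB signal-to-noise ratio are illustrative assumptions chosen only for the example.

    import math

    def shannon_capacity(bandwidth_hz, snr_db):
        """Shannon channel capacity C = B * log2(1 + S/N), with S/N given in dB."""
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Illustrative numbers: a ~3100 Hz voice channel with a 30 dB signal-to-noise ratio.
    print(round(shannon_capacity(3100, 30)))   # roughly 31,000 bps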
9. Explain about BASEBAND and BROADBAND:
BASEBAND:
 The original band of frequencies of a signal before it is modulated for
transmission at a higher frequency.
 Base band is an adjective that describes signals and systems whose range of
frequencies is measured from 0 to a maximum bandwidth or highest signal
frequency.
 Base band transmission allows only one signal at a time.
BROADBAND:
 A type of data transmission in which a single medium (wire) can carry several
channels at once.
 A wide band of frequencies is available, so information can be multiplexed and
sent on many different frequencies or channels within the band concurrently,
allowing more information to be transmitted in a given amount of time.
10. What are the types of Transmission impairment?
TRANSMISSION IMPAIRMENT
Transmission media are not perfect. The imperfections cause impairment in the
signal sent through the medium. This means that the signal at the beginning and end of
the medium are not the same. Three types of impairment usually occur: i) attenuation, ii) distortion, and iii) noise.
Attenuation:
 Attenuation is the reduction in amplitude and intensity of a signal. Signals may
be attenuated exponentially by transmission through a medium.
 Attenuation is usually reported in dB with respect to distance traveled through
the medium.
 Decibel measures the relative strengths of two signals or a signal at two different
points.
dB = 10 log10 (P2/P1)
Where P1 and P2 are the power of a signal at two different points
 Attenuation can also be understood to be the opposite of amplification.
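The decibel formula above can be checked with a short calculation; the power values used here are illustrative only.

    import math

    def decibels(p1, p2):
        """dB = 10 * log10(P2 / P1), comparing signal power at two points."""
        return 10 * math.log10(p2 / p1)

    # Illustrative: a signal enters a cable section at 10 mW and leaves it at 5 mW.
    print(round(decibels(10.0, 5.0), 2))   # about -3 dB, i.e. half the power was lost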
Distortion:
 Distortion means that the signal changes its form or shape.
 A distortion is the alteration of the original shape (or other characteristic).
 Distortion occurs in a composite signal, made of different frequencies.
 Each signal component has its own propagation speed through a medium.
Noise:
 Several types of noise such as thermal, induced, crosstalk and impulse may
corrupt the signal.
 Thermal noise is the random motion of electrons in wire that creates an extra
signal not originally sent by the transmitter.
 Induced noise comes from sources such as motors and appliances. These devices act as a sending antenna and the transmission medium acts as the receiving antenna.
 Crosstalk is the effect of one wire on the other. One wire acts as a sending antenna and the other as the receiving antenna.
 Impulse noise is a spike that comes from power lines, lightning and so on.
11. Write the difference between analog and digital transmission.
Data
 Analog: continuous (e.g., voice)
 Digital: discrete (e.g., text)
Signal
 Analog: continuous electromagnetic waves; used mainly for transmitting data across a network.
 Digital: sequence of voltage pulses; used mainly internally within computers.
Transmission
 Analog: transmission of analog signals without regard to their content (the data may be analog or binary). The signals become weaker (attenuated) with distance. Amplifiers may be used to strengthen the signals, but as a side effect they also boost the noise. This may not be a problem for analog data, such as voice, but it is a problem for digital data.
 Digital: transmission that is concerned with the content of the signal. Repeaters are used to overcome attenuation. A repeater recovers the digital pattern from the signal it receives and retransmits a new signal.
12. Give the advantages of digital transmission.
 Technology: sees a drop in cost due to LSI and VLSI.
 Data integrity: repeaters allow longer distances over lines of lesser quality.
 Capacity utilization: digital techniques can more easily and cheaply utilize, through multiplexing, available transmission links of high bandwidth.
 Security and privacy: encryption techniques are more readily applied to digital data.
 Integration: simplified if digitized data is used everywhere.
SECTION - C
1. Explain about topology in detail
Topology
Diagram of different network topologies.
Network topology is the layout pattern of interconnections of the various elements
(links, nodes, etc.) of a computer or biological network.[3] Network topologies may be
physical or logical. Physical topology refers to the physical design of a network
including the devices, location and cable installation. Logical topology refers to how data
is actually transferred in a network as opposed to its physical design. In general physical
topology relates to a core network whereas logical topology relates to basic network.
Topology can be understood as the shape or structure of a network. This shape does not
necessarily correspond to the actual physical design of the devices on the computer
network. The computers on a home network can be arranged in a circle but it does not
necessarily mean that it represents a ring topology.
There are two basic categories of network topologies:
 Physical topologies
 Logical topologies
The shape of the cabling layout used to link devices is called the physical topology of the
network. This refers to the layout of cabling, the locations of nodes, and the
interconnections between the nodes and the cabling.[1] The physical topology of a
network is determined by the capabilities of the network access devices and media, the
level of control or fault tolerance desired, and the cost associated with cabling or
telecommunications circuits.
The logical topology, in contrast, is the way that the signals act on the network media, or
the way that the data passes through the network from one device to the next without
regard to the physical interconnection of the devices. A network's logical topology is not
necessarily the same as its physical topology. For example, the original twisted pair
Ethernet using repeater hubs was a logical bus topology with a physical star topology
layout. Token Ring is a logical ring topology, but is wired as a physical star from the
Media Access Unit.
The logical classification of network topologies generally follows the same classifications
as those in the physical classifications of network topologies but describes the path that
the data takes between nodes being used as opposed to the actual physical connections
between nodes. The logical topologies are generally determined by network protocols as
opposed to being determined by the physical layout of cables, wires, and network devices
or by the flow of the electrical signals, although in many cases the paths that the electrical
signals take between nodes may closely match the logical flow of data, hence the
convention of using the terms logical topology and signal topology interchangeably.
Logical topologies are often closely associated with Media Access Control methods and
protocols. Logical topologies are able to be dynamically reconfigured by special types of
equipment such as routers and switches.
The study of network topology recognizes eight basic topologies:[5]
 Point-to-point
 Bus
 Star
 Ring
 Mesh
 Tree
 Hybrid
 Daisy chain
Point-to-point
The simplest topology is a permanent link between two endpoints. Switched point-to-point topologies are the basic model of conventional telephony. The value of a
permanent point-to-point network is unimpeded communications between the two
endpoints. The value of an on-demand point-to-point connection is proportional to the
number of potential pairs of subscribers, and has been expressed as Metcalfe's Law.
Permanent (dedicated)
Easiest to understand, of the variations of point-to-point topology, is a point-to-point
communications channel that appears, to the user, to be permanently associated with the
two endpoints. A children's tin can telephone is one example of a physical dedicated
channel.
Within many switched telecommunications systems, it is possible to establish a
permanent circuit. One example might be a telephone in the lobby of a public building,
which is programmed to ring only the number of a telephone dispatcher. "Nailing down"
a switched connection saves the cost of running a physical circuit between the two points.
The resources in such a connection can be released when no longer needed, for example,
a television circuit from a parade route back to the studio.
Switched:
Using circuit-switching or packet-switching technologies, a point-to-point circuit can be
set up dynamically, and dropped when no longer needed. This is the basic mode of
conventional telephony.
Bus
Bus network topology
In local area networks where bus topology is used, each node is connected to a single
cable. Each computer or server is connected to the single bus cable. A signal from the
source travels in both directions to all machines connected on the bus cable until it finds
the intended recipient. If the machine address does not match the intended address for the
data, the machine ignores the data. Alternatively, if the data matches the machine
address, the data is accepted. Since the bus topology consists of only one wire, it is rather
inexpensive to implement when compared to other topologies. However, the low cost of
implementing the technology is offset by the high cost of managing the network.
Additionally, since only one cable is utilized, it can be the single point of failure. The network cable must be terminated at both ends; without termination data transfer stops, and if the cable breaks, the entire network will be down.
Linear bus
The type of network topology in which all of the nodes of the network are connected to a
common transmission medium which has exactly two endpoints (this is the 'bus', which is
also commonly referred to as the backbone, or trunk) – all data that is transmitted
between nodes in the network is transmitted over this common transmission medium and
is able to be received by all nodes in the network simultaneously.[1]
Distributed bus
The type of network topology in which all of the nodes of the network are connected to a
common transmission medium which has more than two endpoints that are created by
adding branches to the main section of the transmission medium – the physical
distributed bus topology functions in exactly the same fashion as the physical linear bus
topology (i.e., all nodes share a common transmission medium).
Notes:
1. All of the endpoints of the common transmission medium are normally terminated using a 50 ohm resistor.
2. The linear bus topology is sometimes considered to be a special case of the distributed bus topology – i.e., a distributed bus with no branching segments.
3. The physical distributed bus topology is sometimes incorrectly referred to as a physical tree topology – however, although the physical distributed bus topology resembles the physical tree topology, it differs from the physical tree topology in that there is no central node to which any other nodes are connected, since this hierarchical functionality is replaced by the common bus.
Star
Star network topology
In local area networks with a star topology, each network host is connected to a central
hub with a point-to-point connection. The network does not necessarily have to resemble
a star to be classified as a star network, but all of the nodes on the network must be
connected to one central device. All traffic that traverses the network passes through the
central hub. The hub acts as a signal repeater. The star topology is considered the
easiest topology to design and implement. An advantage of the star topology is the
simplicity of adding additional nodes. The primary disadvantage of the star topology is
that the hub represents a single point of failure.
Notes
1. A point-to-point link (described above) is sometimes categorized as a special
instance of the physical star topology – therefore, the simplest type of network that is
based upon the physical star topology would consist of one node with a single point-to-point link to a second node, the choice of which node is the 'hub' and which node is the 'spoke' being arbitrary.[1]
2. After the special case of the point-to-point link, as in note (1) above, the next
simplest type of network that is based upon the physical star topology would consist
of one central node – the 'hub' – with two separate point-to-point links to two
peripheral nodes – the 'spokes'.
3. Although most networks that are based upon the physical star topology are
commonly implemented using a special device such as a hub or switch as the central
node (i.e., the 'hub' of the star), it is also possible to implement a network that is
based upon the physical star topology using a computer or even a simple common
connection point as the 'hub' or central node.
4. Star networks may also be described as either broadcast multi-access or
nonbroadcast multi-access (NBMA), depending on whether the technology of the
network either automatically propagates a signal at the hub to all spokes, or only
addresses individual spokes with each communication.
Extended star
A type of network topology in which a network that is based upon the physical star
topology has one or more repeaters between the central node (the 'hub' of the star) and
the peripheral or 'spoke' nodes, the repeaters being used to extend the maximum
transmission distance of the point-to-point links between the central node and the
peripheral nodes beyond that which is supported by the transmitter power of the central
node or beyond that which is supported by the standard upon which the physical layer of
the physical star network is based.
If the repeaters in a network that is based upon the physical extended star topology are
replaced with hubs or switches, then a hybrid network topology is created that is referred
to as a physical hierarchical star topology, although some texts make no distinction
between the two topologies.
Distributed Star
A type of network topology that is composed of individual networks that are based upon
the physical star topology connected together in a linear fashion – i.e., 'daisy-chained' –
with no central or top level connection point (e.g., two or more 'stacked' hubs, along
with their associated star connected nodes or 'spokes').
Ring
Ring network topology
A network topology that is set up in a circular fashion in which data travels around the
ring in one direction and each device on the ring acts as a repeater to keep the signal
strong as it travels. Each device incorporates a receiver for the incoming signal and a
transmitter to send the data on to the next device in the ring. The network is dependent on
the ability of the signal to travel around the ring.[4]
Mesh
The value of fully meshed networks is proportional to the exponent of the number of
subscribers, assuming that communicating groups of any two endpoints, up to and
including all the endpoints, is approximated by Reed's Law.
Fully connected
Fully connected mesh topology
The number of connections in a full mesh = n(n - 1) / 2.
Note: The physical fully connected mesh topology is generally too costly and complex
for practical networks, although the topology is used when there are only a small number
of nodes to be interconnected (see Combinatorial explosion).
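The n(n - 1)/2 formula above can be worked for a few node counts to show why full meshes become impractical quickly; the node counts are arbitrary examples.

    def full_mesh_links(n):
        """Number of point-to-point links needed to fully mesh n nodes: n(n - 1)/2."""
        return n * (n - 1) // 2

    for n in (4, 10, 50):
        print(n, "nodes ->", full_mesh_links(n), "links")   # 6, 45 and 1225 links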
Partially connected
Partially connected mesh topology
The type of network topology in which some of the nodes of the network are connected
to more than one other node in the network with a point-to-point link – this makes it
possible to take advantage of some of the redundancy that is provided by a physical fully
connected mesh topology without the expense and complexity required for a connection
between every node in the network.
Tree
Tree network topology
The type of network topology in which a central 'root' node (the top level of the
hierarchy) is connected to one or more other nodes that are one level lower in the
hierarchy (i.e., the second level) with a point-to-point link between each of the second
level nodes and the top level central 'root' node, while each of the second level nodes that
are connected to the top level central 'root' node will also have one or more other nodes
that are one level lower in the hierarchy (i.e., the third level) connected to it, also with a
point-to-point link, the top level central 'root' node being the only node that has no other
node above it in the hierarchy. (The hierarchy of the tree is symmetrical.) Each node in the network has a specific, fixed number of nodes connected to it at the next lower level in the hierarchy; this number is referred to as the 'branching factor' of the hierarchical tree. This tree has individual peripheral nodes.
1. A network that is based upon the physical hierarchical topology must have at least three levels in the hierarchy of the tree, since a network with a central 'root' node and only one hierarchical level below it would exhibit the physical topology of a star.
2. A network that is based upon the physical hierarchical topology and with a branching factor of 1 would be classified as a physical linear topology.
3. The branching factor, f, is independent of the total number of nodes in the network
and, therefore, if the nodes in the network require ports for connection to other nodes
the total number of ports per node may be kept low even though the total number of
nodes is large – this makes the effect of the cost of adding ports to each node totally
dependent upon the branching factor and may therefore be kept as low as required
without any effect upon the total number of nodes that are possible.
4. The total number of point-to-point links in a network that is based upon the physical
hierarchical topology will be one less than the total number of nodes in the network.
5. If the nodes in a network that is based upon the physical hierarchical topology are
required to perform any processing upon the data that is transmitted between nodes in
the network, the nodes that are at higher levels in the hierarchy will be required to
perform more processing operations on behalf of other nodes than the nodes that are
lower in the hierarchy. Such a type of network topology is very useful and highly
recommended.
Definition: Tree topology is a combination of Bus and Star topology.
Hybrid
Hybrid networks use a combination of any two or more topologies in such a way that the
resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring,
etc.). For example, a tree network connected to a tree network is still a tree network
topology. A hybrid topology is always produced when two different basic network
topologies are connected. Two common examples of hybrid networks are the star ring network and the star bus network:
 A Star ring network consists of two or more star topologies connected using a multistation access unit (MAU) as a centralized hub.
 A Star Bus network consists of two or more star topologies connected using a bus trunk (the bus trunk serves as the network's backbone).
While grid and torus networks have found popularity in high-performance computing
applications, some systems have used genetic algorithms to design custom networks that
have the fewest possible hops in between different nodes. Some of the resulting layouts
are nearly incomprehensible, although they function quite well.
A Snowflake topology is really a "Star of Stars" network, so it exhibits characteristics of
a hybrid network topology but is not composed of two different basic network topologies
being connected together. Definition: Hybrid topology is a combination of Bus, Star and Ring topology.
Daisy chain
Except for star-based networks, the easiest way to add more computers into a network is
by daisy-chaining, or connecting each computer in series to the next. If a message is
intended for a computer partway down the line, each system bounces it along in sequence
until it reaches the destination. A daisy-chained network can take two basic forms: linear
and ring.
 A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters.
 By connecting the computers at each end, a ring topology can be formed. An
advantage of the ring is that the number of transmitters and receivers can be cut in half,
since a message will eventually loop all of the way around. When a node sends a
message, the message is processed by each computer in the ring. If a computer is not the
destination node, it will pass the message to the next node, until the message arrives at its
destination. If the message is not accepted by any node on the network, it will travel
around the entire ring and return to the sender. This potentially results in a doubling of
travel time for data.
2. What is meant by transmission?
Transmission is the act of passing something on. In communication, it is the passing of data from a source to a destination, i.e., the act of transmitting messages over distances.
Transmission Medium:
Transmission media refers to the medium which carries the signal from the sender
to the receiver. The following are the types of Transmission medium
I) Guided: (i) Twisted Pair (ii) Coaxial Cable (iii) Optical Fiber
II) Unguided: (i) Radio waves (ii) Microwave (iii) Satellite
Guided:
Twisted Pair Cable
Twisted pair cabling is a form of wiring in which two conductors are wound
together for the purposes of canceling out electromagnetic interference (EMI) from
external sources.
Two types of twisted pair cables: (i) Shielded twisted pair (STP) and (ii) Unshielded twisted pair (UTP).
(i) Shielded twisted pair: STP is a special kind of copper telephone wiring used in some business installations. An outer covering or shield is added to the ordinary twisted pair telephone wires; the shield functions as a ground.
(ii) Unshielded twisted pair: UTP cables are not shielded. This lack of shielding results in a high degree of flexibility as well as rugged durability. UTP cables are found in many Ethernet networks and telephone systems.
Coaxial Cable
An electrical transmission line comprising an inner, central conductor
surrounded by a tubular outer conductor. The two conductors are separated by an
electrically insulating medium which supports the inner conductor and keeps it concentric
with the outer conductor.
Optical Fiber
An optical signal transmission medium consisting of (a) a glass fiber or plastic
fiber (or filament) surrounded by protective cladding, (b) strengthening material, and
(c) an outer jacket. Signals may be transmitted, along the cable, as light pulses
introduced into the fiber by a laser or light-emitting diode. Its advantages over the
transmission of electrical signals along (metal) wire cable include low attenuation along
the cable, freedom from electromagnetic interference and electrical grounding problems,
small physical size, light weight, and large transmission bandwidth.
An optical fiber (or fibre) is a glass or plastic fiber designed to guide light along
its length.
UNGUIDED MEDIA
Unguided media, or wireless communication, transport electromagnetic waves without using a physical conductor. Instead, signals are broadcast through air (or, in some cases, water), and thus are available to anyone who has a device capable of receiving them.
Radio Frequency Allocation
The section of the electromagnetic spectrum defined as radio communication is divided into eight ranges, called bands, each regulated by government authorities. These bands are rated from very low frequency (VLF) to extremely high frequency (EHF), spanning 3 kHz to 300 GHz overall. The eight bands and their acronyms are:
VLF - Very low frequency
LF - Low frequency
MF - Middle frequency
HF - High frequency
VHF - Very high frequency
UHF - Ultra high frequency
SHF - Super high frequency
EHF - Extremely high frequency
PROPAGATION OF RADIO WAVES
Types of Propagation
Radio wave transmission utilizes five different types of propagation: surface,
Tropospheric, Ionospheric, line-of-sight, and space.
Radio technology considers the earth as surrounded by two layers of atmosphere: the troposphere and the ionosphere. The troposphere is the portion of the atmosphere extending outward approximately 30 miles from the earth's surface (in radio technology, the troposphere includes the high-altitude layer called the stratosphere) and contains what we generally think of as air: clouds, wind, etc. The ionosphere is the layer of atmosphere above the troposphere but below space. It is beyond what we think of as atmosphere and contains free electrically charged particles (hence the name).
Surface Propagation:
In surface propagation, radio waves travel through the lowest portion of the
atmosphere, hugging the earth. At the lowest frequencies, signals emanate in all directions
from the transmitting antenna and follow the curvature of the planet. Distance depends on the
amount of power in the signal: the greater the power, the greater the distance. Surface
propagation can also take place in seawater.
Tropospheric Propagation:
Tropospheric propagation can work two ways. Either a signal can be directed in a
straight line from antenna to antenna (line-of-sight), or it can be broadcast at an angle into
the upper layers of the troposphere where it is reflected back down to the earth's surface. The
first method requires that the placement of the receiver and the transmitter be within line-of-sight distances, limited by the curvature of the earth in relation to the height of the antennas.
The second method allows greater distances to be covered.
Ionospheric Propagation
In ionospheric propagation, higher-frequency radio waves radiate upward into the
ionosphere where they are reflected back to earth. The density difference between the troposphere and the ionosphere causes each radio wave to speed up and change direction,
bending back to earth. This type of transmission allows for greater distances to be covered with
lower power output.
Line-of-Sight Propagation
In line-of-sight propagation, very high frequency signals are transmitted in straight
lines directly from antenna to antenna. Antennas must be directional facing each other and
either tall enough or close enough together not to be affected by the curvature of the earth.
Line-of-sight propagation is tricky because radio transmissions cannot be completely focused.
Waves emanate upward and downward as well as forward and can reflect off the surface of the
earth or parts of the atmosphere. Reflected waves that arrive at the receiving antenna later than
the direct portion of the transmission can corrupt the received signal.
Space Propagation
Space propagation utilizes satellite relays in place of atmospheric refraction. A
broadcast signal is received by an orbiting satellite, which rebroadcasts the signal to the
intended receiver back on the earth. Satellite transmission is basically line-of-sight with an
intermediary (the satellite). The distance of the satellite from the earth makes it the equivalent
of a super-high-gain antenna and dramatically increases the distance coverable by a signal.
PROPAGATION OF SPECIFIC SIGNALS
The type of propagation used in radio transmission depends on the frequency (speed)
of the signal. Each frequency is suited for a specific layer of the atmosphere and is most
efficiently transmitted and received by technologies adapted to that layer.
VLF: Very low frequency (VLF) waves are propagated as surface waves, usually through air
but sometimes through seawater. VLF waves do not suffer much attenuation in transmission
but are susceptible to the high levels of atmospheric noise (heat and electricity) active at low
altitudes. VLF waves are used mostly for long-range radio navigation and for submarine
communication. Frequency ranges from 3 KHz to 30 KHz
LF: Similar to VLF, low frequency (LF) waves are also propagated as surface waves. LF
waves are used for long-range radio navigation and for radio beacons or navigational locators
Attenuation is greater during the daytime, when absorption of waves by natural obstacles
increases. Frequency ranges from 30 KHz to 300 KHz.
MF: Middle frequency (MF) signals are propagated in the troposphere. These frequencies
are absorbed by the ionosphere. The distance they can cover is therefore limited by the angle
needed to reflect the signal within the troposphere without entering the ionosphere. Absorption
increases during the daytime, but most MF transmissions rely on line-of-sight antennas to
increase control and avoid the absorption problem altogether. Uses for MF transmissions include AM radio, maritime radio, radio direction finding (RDF), and emergency frequencies.
Frequency ranges from 300 KHz to 3 MHz. (AM Radio 535 KHz to 1.605 MHz)
HF: High frequency (HF) signals use ionospheric propagation. These frequencies move into
the ionosphere, where the density difference reflects them back to earth. Uses for HF signals
include amateur radio (ham radio), citizen's band (CB) radio, international broadcasting,
military communication, long-distance aircraft and ship communication, telephone, telegraph,
and facsimile. Frequency ranges from 3 MHz to 30 MHz.
VHF: Very high frequency (VHF) waves use line-of-sight propagation. Uses for VHF include
VHF television, FM radio, aircraft AM radio, and aircraft navigational aid. Frequency ranges
from 30 MHz to 300 MHz
UHF: Ultrahigh frequency (UHF) waves always use line-of-sight propagation. Uses for UHF
include UHF television, mobile telephone, cellular radio, paging, and microwave links. Note
that microwave communication begins at 1 GHz in the UHF band and continues into the SHF
and EHF bands. Frequency ranges from 300 MHz to 3 GHz.
SHF: Super High frequency (SHF) waves are transmitted using mostly line-of-sight and some
space propagation. Uses for SHF include terrestrial and satellite microwave and radar
communication. Frequency ranges from 3 GHz to 30 GHz.
EHF: Extremely high frequency (EHF) waves use space propagation. Uses for EHF are
predominantly scientific and include radar, satellite, and experimental communications.
Frequency ranges from 30 GHz to 300 GHz.
Terrestrial Microwave
Microwaves do not follow the curvature of the earth and therefore require
line-of-sight transmission and reception equipment. The distance coverable by a line-of-sight signal depends to a large extent on the height of the antenna: the taller the antenna, the longer the sight distance. Height allows the signal to travel farther
without being stopped by the curvature of the planet and raises the signal above many
surface obstacles, such as low hills and tall buildings that would otherwise block
transmission. Typically, antennas are mounted on towers that are in turn often mounted
on hills or mountains.
Microwave signals propagate in one direction at a time, which means that
two frequencies are necessary for two-way communication such as a telephone
conversation. One frequency is reserved for microwave transmission in one direction
and the other for transmission in the other. Each frequency requires its own transmitter
and receiver. Today, both pieces of equipment usually are combined in a single piece
of equipment called a transceiver, which allows a single antenna to serve both frequencies and functions.
Repeaters
To increase the distance served by terrestrial microwave, a system of repeaters
can be installed with each antenna. A signal received by one antenna can be
converted back into transmittable form and relayed to the next antenna. The
distance required between repeaters varies with the frequency of the signal and the
environment in which the antennas are found. A repeater may broadcast the
regenerated signal either at the original frequency or at a new frequency, depending on
the system. Terrestrial microwave with repeaters provides the basis for most
contemporary telephone systems worldwide.
Antennas
Two types of antennas are used for terrestrial microwave communications:
parabolic dish and horn.
A parabolic dish antenna is based on the geometry of a parabola: every line
parallel to the line of symmetry (line of sight) reflects off the curve at angles such that
they intersect in a common point called the focus. The parabolic dish works like a funnel,
catching a wide range of waves and directing them to a common point. In this way, more of the
signal is recovered than would be possible with a single-point receiver. Outgoing transmissions
are broadcast through a horn aimed at the dish. The microwaves hit the dish and are deflected
outward in a reversal of the receipt path.
A horn antenna looks like a gigantic scoop. Outgoing transmissions are broadcast up a
stem (resembling a handle) and deflected outward in a series of narrow parallel beams by the
curved head. Received transmissions are collected by the scooped shape of the horn, in a
manner similar to the parabolic dish, and are deflected down into the stem.
Satellite Communication
Satellite transmission is much like line-of-sight microwave transmission in which one of
the stations is a satellite orbiting the earth. The principle is the same as terrestrial microwave,
with a satellite acting as a super tall antenna and repeater. Although in satellite transmission
signals must still travel in straight lines, the limitations imposed on distance by the curvature of
the earth are reduced. In this way, satellite relays allow microwave signals to span continents
and oceans with a single bounce.
Satellite microwave can provide transmission capability to and from any location on
earth, no matter how remote. This advantage makes high-quality communication available to
undeveloped parts of the world without requiring a huge investment in ground-based
infrastructure. Satellites themselves are extremely expensive, of course, but leasing time or
frequencies on one can be relatively cheap.
Geosynchronous Satellites.
Line-of-sight propagation requires that the sending and receiving antennas be locked onto
each other's location at all times (one antenna must have the other in sight). For this reason, a
satellite that moves faster or slower than the earth's rotation is useful only for short periods of
time (just as a stopped clock is accurate twice a day). To ensure constant communication, the
satellite must move at the same speed as the earth so that it seems to remain fixed above a
certain spot. Such satellites are called geosynchronous.
Because orbital speed is based on distance from the planet, only one orbit can be
geosynchronous. This orbit occurs at the equatorial plane and is approximately 22,000 miles
from the surface of the earth. But one geosynchronous satellite cannot cover the whole earth.
One satellite in orbit has line-of-sight contact with a vast number of stations, but the curvature
of the earth still keeps much of the planet out of sight. It takes a minimum of three satellites
equidistant from each other in geosynchronous orbit to provide full global transmission. The figure shows three satellites, each 120 degrees from another, in geosynchronous orbit around the equator; the view is from the North Pole.
3. Describe about Modems and its types
Mobile modems and routers
Modems which use a mobile telephone system (GPRS, UMTS, HSPA, EVDO, WiMax, etc.),
are known as wireless modems (sometimes also called cellular modems). Wireless modems can
be embedded inside a laptop or appliance or external to it. External wireless modems are connect
cards, usb modems for mobile broadband and cellular routers. A connect card is a PC card
or ExpressCard which slides into a PCMCIA/PC card/ExpressCard slot on a computer. USB
wireless modems use a USB port on the laptop instead of a PC card or ExpressCard slot. A
cellular router may have an external datacard (AirCard) that slides into it. Most cellular routers
do allow such datacards or USB modems. Cellular Routers may not be modems per se, but they
contain modems or allow modems to be slid into them. The difference between a cellular router
and a wireless modem is that a cellular router normally allows multiple people to connect to it
(since it can route, or support multipoint to multipoint connections), while the modem is made
for one connection.
Types of modem
USB modem
While the term USB modem refers to any type of data/fax/voice modem device which can be
connected to a computer using USB, the term more commonly describes a specific portable USB
device that looks similar to a USB flash drive and can be as small as 100 x 35 x 23 mm in physical size and weigh only around 25 grams. These small portable USB fax modems do not
require a power source and can be plugged into any USB port on your PC, notebook, or
Macintosh computer and can also be disconnected from the computer without turning off the
system. One end of the portable USB modem will have a USB interface, while the other end
will have an RJ-11 port for connecting your phone line.
Wireless modem
A modem that accesses a private wireless data network or a wireless telephone system, such as
the CDPD system.
Fax modem
A device you can attach to a personal computer that enables you to transmit and receive
electronic documents as faxes. A fax modem is like a regular modem except that it is designed
to transmit documents to a fax machine or to another fax modem. Some, but not all, fax modems
do double duty as regular modems. As with regular modems, fax modems can be either internal
or external. Internal fax modems are often called fax boards.
Documents sent through a fax modem must already be in an electronic form (that is, in a disk
file), and the documents you receive are likewise stored in files on your disk. To create fax
documents from images on paper, you need an optical scanner.
Fax modems come with communications software similar to communications software for
regular modems. This software can give the fax modem many capabilities that are not available
with stand-alone fax machines. For example, you can broadcast a fax document to several sites
at once. In addition, fax modems offer the following advantages over fax machines:
price: fax modems are less expensive. In addition, they require less maintenance
because there are no moving parts. However, if you need to purchase an optical scanner
in addition to the fax modem, there is no price advantage.
convenience: fax modems are more convenient if the documents you want to send are
already in electronic form. With a fax machine, you would first need to print the
document. A fax modem lets you send it directly.
speed: fax modems can almost always transmit documents at the maximum speed of 9,600 bps, whereas not all fax machines support such high data-transmission rates.
image quality: The image quality of documents transmitted by fax modems is usually
superior because the documents remain in electronic form.
The principal disadvantage of fax modems is that you cannot fax paper documents unless you
buy a separate optical scanner, which eliminates any cost and convenience advantages of fax
modems. Another problem with fax modems is that each document you receive requires a large
amount of disk storage (about 100K per page). Not only does this eat up disk storage, but it
takes a long time to print such files.
Cable modem
A modem designed to operate over cable TV lines. Because the coaxial cable used by cable TV
provides much greater bandwidth than telephone lines, a cable modem can be used to achieve
extremely fast access to the World Wide Web. This, combined with the fact that millions of
homes are already wired for cable TV, has made the cable modem something of a holy grail for
Internet and cable TV companies.
There are a number of technical difficulties, however. One is that the cable TV infrastructure is
designed to broadcast TV signals in just one direction - from the cable TV company to people's
homes. The Internet, however, is a two-way system where data also needs to flow from the client
to the server. In addition, it is still unknown whether the cable TV networks can handle the
traffic that would ensue if millions of users began using the system for Internet access.
Despite these problems, cable modems that offer speeds up to 2 Mbps are already available in
many areas.
PCI modem
A modem that plugs into a PCI bus and is controlled by a device driver. Other types of modems
may use the older ISA bus slot.
External modem
A modem that resides in a self-contained box outside the computer system. Contrast with an
internal modem, which resides on a printed circuit board inserted into the computer.
External modems tend to be slightly more expensive than internal modems. Many experts
consider them superior because they contain lights that indicate how the modem is functioning.
In addition, they can easily be moved from one computer to another. However, they do use up
one COM port.
Software modem
A modem implemented entirely in software. Software modems rely on the computer's
processor to modulate and demodulate signals.
4. Explain about analog and digital transmission.
TRANSMISSION
Propagating signals through any medium. There are analog transmission systems and
digital transmission systems. In an analog transmission system, signals propagate through the
medium as continuously varying electromagnetic waves. In a digital system, signals propagate as
discrete voltage pulses (that is, a positive voltage represents binary 1, and a negative voltage
represents binary 0), which are measured in bits per second
ANALOG AND DIGITAL TRANSMISSION
An analog signal is a continuous signal whose amplitude, phase, or some other property
varies in a direct proportion to the instantaneous value of a physical variable.
Modulation:
Modulation is the process of varying some characteristic of a periodic wave with
external signals.
Analog to Analog Conversion: It is the representation of Analog information by an analog
signal.
Amplitude Modulation:
 Amplitude modulation is a type of modulation where the amplitude of the carrier signal is varied in accordance with the information-bearing signal.
Frequency Modulation:
 Frequency modulation is a type of modulation where the frequency of the carrier is varied in accordance with the modulating signal. The amplitude of the carrier remains constant.
Phase Modulation:
 Phase modulation (PM) is a form of modulation that represents information as variations in the instantaneous phase of a carrier wave.
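As a concrete illustration of analog-to-analog conversion, the snippet below generates a simple amplitude-modulated waveform with NumPy; the carrier frequency, message frequency and modulation index are arbitrary values chosen for the example.

    # Minimal amplitude-modulation sketch (analog-to-analog conversion).
    # Carrier frequency, message frequency and modulation index are illustrative.
    import numpy as np

    SAMPLE_RATE = 100_000                       # samples per second
    t = np.arange(0, 0.01, 1 / SAMPLE_RATE)     # 10 ms of signal

    carrier_freq = 10_000                       # Hz, the carrier being modulated
    message_freq = 500                          # Hz, the information-bearing signal
    modulation_index = 0.5

    message = np.sin(2 * np.pi * message_freq * t)
    carrier = np.sin(2 * np.pi * carrier_freq * t)
    am_signal = (1 + modulation_index * message) * carrier   # amplitude follows the message

    print(am_signal.min(), am_signal.max())     # envelope swings between about -1.5 and +1.5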
The frequency ranges of several analog transmission systems are listed here:
300-3,000 kHz: AM radio
3-30 MHz: Shortwave and CB radio
30-300 MHz: VHF television and FM radio
300-3,000 MHz: UHF television, cellular telephones, and microwave systems
A digital signal is a discontinuous signal that changes from one state to another in
discrete steps. A popular form of digital modulation is binary, or two-level, digital modulation.
Line coding is the process of arranging symbols that represent binary data in a particular pattern
for transmission.
The most common types of line coding used in communications include non-return-to-zero (NRZ), return-to-zero (RZ), and biphase.
Digital to Digital Conversion: It is the representation of digital information by a digital
signal.
NRZ:
 NRZ code represents binary 1s and 0s by two different levels that are constant during the bit duration.
 The presence of a high level (i.e., positive voltage) in the bit duration represents a binary 1, while a low level (negative voltage) represents a binary 0.
 NRZ codes make the most efficient use of system bandwidth.
 However, loss of timing may result if long strings of 1s and 0s are present, causing a lack of level transitions.
(Waveform diagram, not reproduced cleanly here: an NRZ signal plotted over bit intervals 0-7, holding +3 volts for each binary 1 and -3 volts for each binary 0.)
RZ:
 RZ coding uses only half the bit duration for data transmission. In RZ encoding, a half period pulse present in the first half of the bit duration represents a binary 1.
 While a pulse is present in the first half of the bit duration, the level returns to zero during the second half.
 A binary 0 is represented by the absence of a pulse during the entire bit duration.
 Because RZ coding uses only half the bit duration for data transmission, it requires twice the bandwidth of NRZ coding.
 Loss of timing can occur if long strings of 0s are present.
Biphase:
 The best existing solution to the problem of synchronization is biphase encoding. In this method the signal changes at the middle of the bit interval but does not return to zero. Instead, it continues to the opposite pole. There are two types of biphase encoding: 1. Manchester and 2. Differential Manchester.
1. Manchester: It uses the inversion at the middle of each bit interval for both synchronization and bit representation. A negative-to-positive transition represents binary 1 and a positive-to-negative transition represents binary 0.
2. Differential Manchester: In this method the inversion at the middle of the bit interval is used for synchronization, but the presence or absence of an additional transition at the beginning of the interval is used to identify the bit. A transition means binary 0 and no transition means binary 1. It requires two signal changes to represent binary 0 but only one to represent binary 1.
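To make the level patterns concrete, the small Python sketch below (not part of the original text; the +1/-1 levels and the transition convention follow the descriptions above) maps a bit string to NRZ and Manchester signal levels, emitting two half-bit levels per bit so the mid-bit Manchester transition is visible.

def nrz(bits):
    # High level for 1, low level for 0, held constant for the whole bit duration.
    return [(+1 if b == "1" else -1) for b in bits for _ in range(2)]

def manchester(bits):
    # Binary 1: negative-to-positive mid-bit transition; binary 0: positive-to-negative.
    return [level for b in bits for level in ((-1, +1) if b == "1" else (+1, -1))]

print(nrz("1011"))        # [1, 1, -1, -1, 1, 1, 1, 1]
print(manchester("1011")) # [-1, 1, 1, -1, -1, 1, -1, 1]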
Analog to Digital Conversion: It is the representation of Analog information by a Digital
signal.
Pulse Amplitude Modulation
 This type of modulation is used as the first step in converting an analog signal to a discrete signal or in cases where it may be difficult to change the frequency or phase of the carrier.
 Pulse-amplitude modulation, acronym PAM, is a form of signal modulation where the message information is encoded in the amplitude of a series of signal pulses.
Pulse Coded Modulation:
 Pulse-code modulation (PCM) is a digital representation of an analog signal where the magnitude of the signal is sampled regularly at uniform intervals.
 It is quantized to a series of symbols in a digital (usually binary) code.
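As a rough illustration of the two PCM steps just described, the following Python sketch samples an assumed analog signal at regular intervals and quantizes each sample to a 4-bit code word; the sample rate and number of levels are arbitrary choices for the example.

import math

def pcm_encode(signal, sample_rate=8, levels=16):
    """Sample signal(t) at sample_rate samples/second and quantize to `levels` steps."""
    codes = []
    for n in range(sample_rate):                # one second of samples
        t = n / sample_rate
        sample = signal(t)                      # assumed to lie in [-1.0, +1.0]
        step = 2.0 / (levels - 1)
        level = round((sample + 1.0) / step)    # nearest uniform quantization level
        codes.append(format(level, "04b"))      # 16 levels -> 4-bit binary code words
    return codes

# Example: a 1 Hz sine wave encoded with 8 samples/second and 4-bit code words.
print(pcm_encode(lambda t: math.sin(2 * math.pi * t)))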
Digital to Analog Conversion: It is the representation of digital information by an analog
signal.
Amplitude-shift keying:
 ASK is a form of modulation that represents digital data as variations in the amplitude of a carrier wave.
 The amplitude of an analog carrier signal varies in accordance with the bit stream (modulating signal), keeping frequency and phase constant.
 The level of amplitude can be used to represent binary logic 0s and 1s.
 Think of a carrier signal as an ON or OFF switch. In the modulated signal, logic 0 is represented by the absence of a carrier, thus giving OFF/ON keying operation and hence the name given.
Frequency-shift keying:
 FSK is a frequency modulation scheme in which digital information is transmitted through discrete frequency changes of a carrier wave.
 The most common form of frequency shift keying is 2-FSK.
 As suggested by the name, 2-FSK uses two discrete frequencies to transmit binary (0s and 1s) information.
 With this scheme, the "1" is called the mark frequency and the "0" is called the space frequency.
Phase-shift keying :
 PSK is a digital modulation scheme that conveys data by changing, or modulating, the phase of a reference signal (the carrier wave).
 Any digital modulation scheme uses a finite number of distinct signals to represent digital data.
 PSK uses a finite number of phases, each assigned a unique pattern of binary bits.
 Usually, each phase encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular phase.
Phase Shifting Table for Data
  Bit value    Amount of shift
  00           None
  01           1/4
  10           1/2
  11           3/4
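The sketch below is an illustrative rendering of the mapping in the table above, interpreting each "amount of shift" as a fraction of a full carrier cycle (that interpretation, the carrier frequency and the sample count are assumptions, not taken from the text).

import math

PHASE_SHIFT = {"00": 0.0, "01": 0.25, "10": 0.5, "11": 0.75}   # fraction of a full cycle

def psk_symbol(bits, carrier_hz=1.0, samples=4):
    """Return a few samples of a unit carrier whose phase encodes the 2-bit symbol."""
    phase = 2 * math.pi * PHASE_SHIFT[bits]
    return [math.cos(2 * math.pi * carrier_hz * (n / samples) + phase)
            for n in range(samples)]

for symbol in ("00", "01", "10", "11"):
    print(symbol, [round(s, 2) for s in psk_symbol(symbol)])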
5. Explain about Multiplexing :
 Multiplexing is sending multiple signals or streams of information on a carrier at the same
time in the form of a single, complex signal and then recovering the separate signals at the
receiving end.
 Three techniques for multiplexing
1) Frequency division multiplexing
2) Time division multiplexing
3) Wave division multiplexing
Frequency division multiplexing
 Frequency-division multiplexing (FDM) is a form of signal multiplexing where
multiple baseband signals are modulated on different frequency carrier waves
and added together to create a composite signal.
Time division multiplexing
 A technology that transmits multiple signals simultaneously over a single
transmission path.
 Time-Division Multiplexing (TDM) is a type of digital or (rarely) analog
multiplexing in which two or more signals or bit streams are transferred apparently
simultaneously as sub-channels in one communication channel, but physically are
taking turns on the channel.
 Two types 1) Synchronous TDM and 2) Asynchronous TDM
Synchronous TDM
 The term synchronous has a different meaning from that used in other areas of
communication. Here Synchronous means that the multiplexer allocates exactly the
same time slot to each device at all times.
 Time will be allocated whether or not a device has anything to transmit.
 It will not guarantee that the full capacity of a link is used.
Asynchronous TDM
 In synchronous TDM many slots are wasted.
 Statistical TDM allocates time slots dynamically based on demand.
 The multiplexer scans the input lines and collects data until a frame is full.
 Asynchronous TDM or statistical TDM is designed to avoid this wastage of capacity (see the sketch below).
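The following toy Python sketch (an assumption-level illustration, not a real multiplexer) contrasts the two approaches: synchronous TDM reserves one slot per input line per frame even when a line is idle, while statistical TDM fills a frame only with slots that actually carry data.

def synchronous_tdm(lines, rounds):
    frames = []
    for _ in range(rounds):
        # every line gets exactly one slot per frame, used or not ('-' marks a wasted slot)
        frames.append([q.pop(0) if q else "-" for q in lines])
    return frames

def statistical_tdm(lines, slots_per_frame):
    frames, frame = [], []
    while any(lines):
        for q in lines:                 # scan the input lines
            if q:
                frame.append(q.pop(0))  # only lines with data to send get a slot
            if len(frame) == slots_per_frame:
                frames.append(frame)
                frame = []
    if frame:
        frames.append(frame)
    return frames

inputs = [["A1", "A2"], [], ["C1"]]     # line B has nothing to transmit
print(synchronous_tdm([q[:] for q in inputs], rounds=2))          # wasted '-' slots
print(statistical_tdm([q[:] for q in inputs], slots_per_frame=3)) # no wasted slots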
Wave division multiplexing
 WDM is conceptually the same as FDM, except that the multiplexing and
demultiplexing involves light signals transmitted through fiber optic channels.
 In fiber-optic communications, wavelength-division multiplexing (WDM) is a
technology which multiplexes multiple optical carrier signals on a single optical
fiber by using different wavelengths (colours) of laser light to carry different
signals.
 This allows for a multiplication in capacity, in addition to enabling bidirectional
communications over one strand of fiber.
6. Explain about Error Detection and Control
Error: Error refers to a difference between actual behavior or measurement and the norms or
expectations for the behavior or measurement
Types of Errors:
1. Single Bit Error: The Term Single bit error means that only one bit of a given
data unit is changed from 1 to 0 or from 0 to 1.
2. Burst Error: The Term Burst error means that two or more bits in the data
unit have changed from 1 to 0 or from 0 to 1.
Detection: Error detection is the ability to detect the presence of errors caused by noise or other
impairments during transmission from the transmitter to the receiver.
Types of Detection (a combined code sketch follows this list)
1. Vertical redundancy check (VRC): Often called Parity Check. In this a redundant bit called
parity bit is appended to every data unit. Two types of parity check.
a) Even Parity: The parity-checking mode in which each set of transmitted bits
must have an even number of set bits including the parity bit.
b) Odd Parity: The parity-checking mode in which each set of transmitted bits must
have an Odd number of set bits including the parity bit.
2. Longitudinal redundancy check (LRC):
 A block of data organized in a table (rows and columns).
 For example instead of sending a block of 32 bits, a table made of four rows and eight
columns.
 Calculate the parity bit for each column and create a new row of eight bits, which are the
parity bits for the whole block.
3. Cyclic Redundancy Check (CRC):
 A cyclic redundancy check (CRC) is a type of function that takes as input a data stream of
any length and produces as output a value of a certain fixed size.
 The term CRC is often used to denote either the function or the function's output.
 A CRC can be used in the same way as a checksum to detect accidental alteration of data
during transmission or storage.
 A CRC is an error-detecting code whose computation resembles a long division
computation in which the quotient is discarded and the remainder becomes the result, with
the important distinction that the arithmetic used is the carry-less arithmetic of a finite field.
 The length of the remainder is always less than the length of the divisor, which therefore
determines how long the result can be.
4. CheckSum :
 A checksum of a message is an arithmetic sum of message code words of a certain word
length, for example byte values, and their carry value.
 The sum is negated by means of ones-complement, and stored or transferred as an extra code
word extending the message.
 On the receiver side, a new checksum may be calculated from the extended message. If the
new checksum is not 0, an error is detected.
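The following Python sketch gives minimal, illustrative versions of the four detection methods listed above; the block size, the CRC divisor (1011) and the 8-bit checksum width are example choices, not values taken from the text.

def even_parity_bit(bits):
    return bits.count("1") % 2          # appended bit that makes the total count of 1s even

def lrc(rows):
    # column-wise even parity over a block of equal-length rows
    return "".join(str(sum(int(r[i]) for r in rows) % 2) for i in range(len(rows[0])))

def crc_remainder(data_bits, divisor="1011"):
    # append len(divisor)-1 zero bits, then do carry-less (XOR) long division
    bits = list(data_bits + "0" * (len(divisor) - 1))
    for i in range(len(data_bits)):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-(len(divisor) - 1):])   # remainder transmitted with the data

def ones_complement_checksum(words, width=8):
    total = 0
    for w in words:
        total += w
        total = (total & ((1 << width) - 1)) + (total >> width)  # fold carries back in
    return (~total) & ((1 << width) - 1)          # the negated sum is transmitted

print(even_parity_bit("1011001"))                 # 0 -> the word already has an even count of 1s
print(lrc(["10110000", "01101001", "11100110", "00011101"]))
print(crc_remainder("11010011101100"))            # 3-bit remainder for divisor 1011
print(ones_complement_checksum([0x35, 0x2F, 0x90]))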
7. Explain about Channel capacity
In information theory, channel capacity is the tightest upper bound on the amount of
information that can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information
rate (in units of information per unit time) that can be achieved with arbitrarily small error
probability.[1] [2]
Information theory, developed by Claude E. Shannon during World War II, defines the
notion of channel capacity and provides a mathematical model by which one can compute it. The
key result states that the capacity of the channel, as defined above, is given by the maximum of
the mutual information between the input and output of the channel, where the maximization is
with respect to the input distribution.[3]
Formal definition
Let X represent the space of signals that can be transmitted, and Y the space of signals received,
during a block of time over the channel. Let p_{Y|X}(y|x) be the conditional distribution function
of Y given X. Treating the channel as a known statistical system, p_{Y|X}(y|x) is an inherent fixed
property of the communications channel (representing the nature of the noise in it). Then the joint
distribution p_{X,Y}(x,y) of X and Y is completely determined by the channel and by the choice of
p_X(x), the marginal distribution of signals we choose to send over the channel. The joint
distribution can be recovered by using the identity p_{X,Y}(x,y) = p_{Y|X}(y|x) p_X(x).
Under these constraints, next maximize the amount of information, or the message, that one can
communicate over the channel. The appropriate measure for this is the mutual information
I(X;Y), and this maximum mutual information is called the channel capacity and is given by
C = sup_{p_X} I(X;Y).
Noisy-channel coding theorem
The noisy-channel coding theorem states that for any ε > 0 and for any rate R less than the
channel capacity C, there is an encoding and decoding scheme that can be used to ensure that the
probability of block error is less than ε for a sufficiently long code. Also, for any rate greater
than the channel capacity, the probability of block error at the receiver goes to one as the block
length goes to infinity.
Example application
An application of the channel capacity concept to an additive white Gaussian noise (AWGN)
channel with B Hz bandwidth and signal-to-noise ratio S/N is the Shannon–Hartley theorem:
C = B log2(1 + S/N)
C is measured in bits per second if the logarithm is taken in base 2, or nats per second if the
natural logarithm is used, assuming B is in hertz; the signal and noise powers S and N are
measured in watts or volts^2, so the signal-to-noise ratio here is expressed as a power ratio, not in
decibels (dB); since figures are often cited in dB, a conversion may be needed. For example, 30
dB is a power ratio of 10^(30/10) = 10^3 = 1000.
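A small sketch of this calculation in Python (the 3 kHz bandwidth and 30 dB SNR below are illustrative values, not taken from the text):

import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)            # e.g. 30 dB -> power ratio of 1000
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz telephone-style channel at 30 dB SNR:
print(shannon_capacity(3000, 30))               # roughly 29,900 bits per second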
Channel capacity in wireless communications
This section[4] focuses on the single-antenna, point-to-point scenario. For channel capacity in
systems with multiple antennas, see the article on MIMO.
AWGN channel
If the average received power is P [W] and the noise power spectral density is N0 [W/Hz], the
AWGN channel capacity is
C_AWGN = B log2(1 + P/(N0 B))  [bits/s],
where P/(N0 B) is the received signal-to-noise ratio (SNR).
When the SNR is large (SNR >> 0 dB), the capacity is logarithmic in power and approximately
linear in bandwidth. This is called the bandwidth-limited regime.
When the SNR is small (SNR << 0 dB), the capacity is linear in power but insensitive to
bandwidth. This is called the power-limited regime.
The bandwidth-limited regime and power-limited regime are illustrated in the figure.
UNIT – II
SECTION – A
1. A data link control protocol converts noisy (error-prone) data links into communication
channels free of transmission errors.
2. Automatic Repeat reQuest (ARQ), also known as Automatic Repeat Query
3. Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail)
transmission across Internet Protocol (IP) networks.
4. OSI stands for Open Systems Interconnection
5. The physical layer defines electrical and physical specifications for devices
6. The data link layer provides the functional and procedural means to transfer data between
network entities
7. SMTP is a delivery protocol only.
8. Channel capacity is the tightest upper bound on the amount of information that can be
reliably transmitted over a communications channel
9. The data link layer transmits the IP packet in the data portion of a frame
10. The transport layer divides the data into TCP segments
SECTION – B
1. Explain about OSI model
OSI Model
  Data unit         Layer              Function
Host layers:
  Data              7. Application     Network process to application
  Data              6. Presentation    Data representation, encryption and decryption; convert machine dependent data to machine independent data
  Data              5. Session         Interhost communication
  Segments          4. Transport       End-to-end connections, reliability and flow control
Media layers:
  Packet/Datagram   3. Network         Path determination and logical addressing
  Frame             2. Data link       Physical addressing
  Bit               1. Physical        Media, signal and binary transmission
Some orthogonal aspects, such as management and security, involve every layer.
Security services are not related to a specific layer: they can be related to a number of layers, as
defined by ITU-T X.800 Recommendation.
These services are aimed to improve the CIA triad (confidentiality, integrity, and availability)
of transmitted data. Actually the availability of communication service is determined by
network design and/or network management protocols. Appropriate choices for these are
needed to protect against denial of service.
2. Explain about data link control protocol
Data link control protocol
Data link control protocol: A communication protocol that converts noisy (error-prone) data
links into communication channels free of transmission errors. Data is broken into frames, each
of which is protected by checksum. Frames are retransmitted as many times as needed to
accomplish correct transmission. A data link control protocol must prevent data loss caused by
mismatched sending/receiving capacities. A flow control procedure, usually a simple sliding
window mechanism, provides this function. Data link control protocols must provide
transparent data transfer. Bit stuffing or byte stuffing strategies are used to mask control
patterns that occur in the text being transmitted. Control frames are used to start/stop logical
connections over links. Addressing may be provided to support several virtual connections on
the same physical link.
3. Explain about ARQ
Automatic Repeat reQuest (ARQ), also known as Automatic Repeat Query, is an error-control method for data transmission that uses acknowledgements (messages sent by the
receiver indicating that it has correctly received a data frame or packet) and timeouts (specified
periods of time allowed to elapse before an acknowledgment is to be received) to achieve
reliable data transmission over an unreliable service. If the sender does not receive an
acknowledgment before the timeout, it usually re-transmits the frame/packet until the sender
receives an acknowledgment or exceeds a predefined number of re-transmissions.
4. Write about SMTP
Simple Mail Transfer Protocol
Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail)
transmission across Internet Protocol (IP) networks. SMTP was first defined by RFC 821
(1982, eventually declared STD 10),[1] and last updated by RFC 5321 (2008)[2] which includes
the extended SMTP (ESMTP) additions, and is the protocol in widespread use today. SMTP is
specified for outgoing mail transport and uses TCP port 25. The protocol for new submissions is
effectively the same as SMTP, but it uses port 587 instead. SMTP connections secured by SSL
are known by the shorthand SMTPS, though SMTPS is not a protocol in its own right.
While electronic mail servers and other mail transfer agents use SMTP to send and receive mail
messages, user-level client mail applications typically only use SMTP for sending messages to a
mail server for relaying. For receiving messages, client applications usually use either the Post
Office Protocol (POP) or the Internet Message Access Protocol (IMAP) or a proprietary
system (such as Microsoft Exchange or Lotus Notes/Domino) to access their mail box accounts
on a mail server
5. Explain about OSI architecture
OSI Network Architecture 7 Layers Model
Open Systems Interconnection (OSI) model is a reference model developed by ISO
(International Organization for Standardization) in 1984, as a conceptual framework of standards
for communication in the network across different equipment and applications by different
vendors. It is now considered the primary architectural model for inter-computing and internetworking communications.
The specific description for each layer is as follows:
Layer 7: Application Layer
 Defines interface-to-user processes for communication and data transfer in network
 Provides standardized services such as virtual terminal, file and job transfer and operations
Layer 6: Presentation Layer
 Masks the differences of data formats between dissimilar systems
 Specifies architecture-independent data transfer format
 Encodes and decodes data; encrypts and decrypts data; compresses and decompresses data
Layer 5: Session Layer
 Manages user sessions and dialogues
 Controls establishment and termination of logic links between users
 Reports upper layer errors
Layer 4: Transport Layer
 Manages end-to-end message delivery in network
 Provides reliable and sequential packet delivery through error recovery and flow control mechanisms
 Provides connectionless-oriented packet delivery
Layer 3: Network Layer
 Determines how data are transferred between network devices
 Routes packets according to unique network device addresses
 Provides flow and congestion control to prevent network resource depletion
Layer 2: Data Link Layer
 Defines procedures for operating the communication links
 Frames packets
 Detects and corrects packet transmission errors
Layer 1: Physical Layer
 Defines physical means of sending data over network devices
 Interfaces between network medium and devices
 Defines optical, electrical and mechanical characteristics
6. Explain about first layer of OSI
Layer 1: physical layer
The physical layer defines electrical and physical specifications for devices. In particular, it
defines the relationship between a device and a transmission medium, such as a copper or
optical cable. This includes the layout of pins, voltages, cable specifications, hubs, repeaters,
network adapters, host bus adapters (HBA used in storage area networks) and more.
The major functions and services performed by the physical layer are:
 Establishment and termination of a connection to a communications medium.
 Participation in the process whereby the communication resources are effectively shared among multiple users. For example, contention resolution and flow control.
 Modulation, or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These are signals operating over the physical cabling (such as copper and optical fiber) or over a radio link.
Parallel SCSI buses operate in this layer, although it must be remembered that the logical SCSI
protocol is a transport layer protocol that runs over this bus. Various physical-layer Ethernet
standards are also in this layer; Ethernet incorporates both this layer and the data link layer. The
same applies to other local-area networks, such as token ring, FDDI, ITU-T G.hn and IEEE
802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.
7. Explain about second layer of OSI
Layer 2: data link layer
The data link layer provides the functional and procedural means to transfer data between
network entities and to detect and possibly correct errors that may occur in the physical layer.
Originally, this layer was intended for point-to-point and point-to-multipoint media,
characteristic of wide area media in the telephone system. Local area network architecture,
which included broadcast-capable multiaccess media, was developed independently of the ISO
work in IEEE Project 802. IEEE work assumed sublayering and management functions not
required for WAN use. In modern practice, only error detection, not flow control using sliding
window, is present in data link protocols such as Point-to-Point Protocol (PPP), and, on local
area networks, the IEEE 802.2 LLC layer is not used for most protocols on the Ethernet, and on
other local area networks, its flow control and acknowledgment mechanisms are rarely used.
Sliding window flow control and acknowledgment is used at the transport layer by protocols
such as TCP, but is still used in niches where X.25 offers performance advantages.
The ITU-T G.hn standard, which provides high-speed local area networking over existing wires
(power lines, phone lines and coaxial cables), includes a complete data link layer which
provides both error correction and flow control by means of a selective repeat Sliding Window
Protocol.
Both WAN and LAN service arrange bits, from the physical layer, into logical sequences called
frames. Not all physical layer bits necessarily go into frames, as some of these bits are purely
intended for physical layer functions. For example, every fifth bit of the FDDI bit stream is not
used by the layer.
WAN protocol architecture
Connection-oriented WAN data link protocols, in addition to framing, detect and may correct
errors. They are also capable of controlling the rate of transmission. A WAN data link layer
might implement a sliding window flow control and acknowledgment mechanism to provide
reliable delivery of frames; that is the case for Synchronous Data Link Control (SDLC) and
HDLC, and derivatives of HDLC such as LAPB and LAPD.
8. Explain about transport layer of OSI
Layer 4: transport layer
The transport layer provides transparent transfer of data between end users, providing reliable
data transfer services to the upper layers. The transport layer controls the reliability of a given
link through flow control, segmentation/desegmentation, and error control. Some protocols are
state- and connection-oriented. This means that the transport layer can keep track of the
segments and retransmit those that fail. The transport layer also provides the acknowledgement
of the successful data transmission and sends the next data if no errors occurred.
OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is
also known as TP0 and provides the least features) to class 4 (TP4, designed for less reliable
networks, similar to the Internet). Class 0 contains no error recovery, and was designed for use
on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP
contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all
OSI TP connection-mode protocol classes provide expedited data and preservation of record
boundaries. Detailed characteristics of TP0-4 classes are shown in the following table:[4]
Feature Name                                                               TP0  TP1  TP2  TP3  TP4
Connection oriented network                                                Yes  Yes  Yes  Yes  Yes
Connectionless network                                                     No   No   No   No   Yes
Concatenation and separation                                               No   Yes  Yes  Yes  Yes
Segmentation and reassembly                                                Yes  Yes  Yes  Yes  Yes
Error Recovery                                                             No   Yes  Yes  Yes  Yes
Reinitiate connection (if an excessive number of PDUs are unacknowledged)  No   Yes  No   Yes  No
Multiplexing and demultiplexing over a single virtual circuit              No   No   Yes  Yes  Yes
Explicit flow control                                                      No   No   Yes  Yes  Yes
Retransmission on timeout                                                  No   No   No   No   Yes
Reliable Transport Service                                                 No   Yes  No   Yes  Yes
Perhaps an easy way to visualize the transport layer is to compare it with a Post Office, which
deals with the dispatch and classification of mail and parcels sent. Do remember, however, that a
post office manages the outer envelope of mail. Higher layers may have the equivalent of double
envelopes, such as cryptographic presentation services that can be read by the addressee only.
Roughly speaking, tunneling protocols operate at the transport layer, such as carrying non-IP
protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with
IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer
protocol, if the encapsulation of the payload takes place only at endpoint, GRE becomes closer to
a transport protocol that uses IP headers but contains complete frames or packets to deliver to an
endpoint. L2TP carries PPP frames inside transport packet.
9. Write short notes on SDLC
Synchronous Data Link Control (SDLC) Overview
The SYSTEMS NETWORK ARCHITECTURE (SNA), as it is defined and implemented by
IBM, defines the division of all of the network functions into clearly defined layers. These layers
provide many of the same functions as the Open Systems Interconnect (OSI) seven-layered
architecture defined by the International Standards Organization (ISO). The SNA layers are not
identical to the OSI layers and are not compatible or interoperable.
The DATA LINK CONTROL Layer provides the error-free movement of data between the
NETWORK ADDRESSABLE UNITS (NAUs) within a given communication network via the
SYNCHRONOUS DATA LINK CONTROL (SDLC) Protocol. The flow of information passes
down from the higher layers through the DATA LINK CONTROL Layer and is passed into the
PHYSICAL CONTROL Layer. It then passes into the communication links through some type
of interface.
All NETWORK ADDRESSABLE UNITS (NAUs) are interconnected by some type of physical
or logical link which is defined as the DATA LINK LAYER of the SNA Architecture. The
DATA LINK CONTROL LAYER provides the capability to pass information between these
associated NAUs using the SDLC Protocol. There are two basic types of links:
 I/O CHANNEL COMMUNICATION links are usually hard-wired between co-located nodes (in the same room or facility). They normally operate at a very high data rate and are usually error free.
 NON I/O CHANNEL COMMUNICATION links normally require some form of data communications equipment to pass information between nodes. SDLC Protocol is used to ensure error-free performance over these links, which may be either point-to-point (switched or non-switched) or multipoint configurations. Some specific equipment types use the SDLC loop transmission facilities to pass data.
10. Write about HTTP
People often say that HTTP is a synchronous, not asynchronous, protocol. They often offer SMTP
as an example of an asynchronous protocol. If SMTP is synchronous, HTTP is too. If SMTP is
asynchronous, HTTP is too.
Both of these protocols are synchronous in the sense that a client connects to a server and it
submits some information and waits for a response. SMTP is actually a much more "chatty"
protocol than HTTP:
S: MAIL FROM:<Smith@Alpha.ARPA>
R: 250 OK
S: RCPT TO:<Jones@Beta.ARPA>
R: 250 OK
S: RCPT TO:<Green@Beta.ARPA>
R: 550 No such user here
S: RCPT TO:<Brown@Beta.ARPA>
R: 250 OK
S: DATA
R: 354 Start mail input; end with <CRLF>.<CRLF>
S: Blah blah blah...
S: ...etc. etc. etc.
S: <CRLF>.<CRLF>
R: 250 OK
If the other server is slow your SMTP message send will be slow also. If your client-side process
is waiting for the OK then it will hang, just as it would with HTTP. That's why scalable clients
and servers are typically implemented in an asynchronous fashion, whether or not the actual
protocol is synchronous or asynchronous.
Now the SMTP software has the nice feature that it will take your message and forward it to
someone else, using a different SMTP server. HTTP software could do that too. Part of what
SMTP has that HTTP does not (built-in) is explicit message routing. It is really easy to build
this on top of HTTP but it is not part of HTTP. If we were using HTTP for a mail system of
course we would add routing (but not asynch). Any system that does an HTTP to mail gateway
does do some routing so it is clear that this isn't hard. It just isn't standardized. So that's a real
difference between HTTP and SMTP, but it isn't about asynchronous versus synchronous.
The SMTP software also has a nice feature that it will keep trying to deliver the message even if
the other service isn't online. This is store and forward. It's just a feature of the SMTP software,
not of the protocol spoken between the client and the server. HTTP software could implement
this just as easily and I've implemented HTTP software that does! It is no harder to write or
maintain than the equivalent SMTP software.
SMTP software also has the nice feature that it will contact you if it can't deliver your message
(though there is no way to ask it if it did!). This is a "callback." Callbacks are very common in
HTTP systems where both ends of the system are HTTP servers. For instance, you can register a
channel with the Meerkat RSS indexing service and it will "call you back" once a day (or
regularly, anyhow) to check for new information. That's a callback. Dave Winer proposes a
callback be built-into RSS.
All you need for a callback is for the client to register a callback address with the server and get a
message later. There is no magic involved! All SMTP software implements this callback
mechanism and some (but not all) HTTP software does also.
Store and forward is not that useful without callbacks (otherwise how will you know if your
message got delivered?). So callbacks are an important precondition for HTTP store
and forward. Why are HTTP callbacks rare?
 SMTP had callbacks before there was an HTTP so people are used to that.
 SMTP-delivered callbacks are compatible with existing mail clients. Of course this hardly matters for machine-to-machine applications but I guess it is nice to have a nice human-oriented user interface for debugging.
 HTTP has four main methods, GET, PUT, POST and DELETE. In most people's minds, HTTP "defaults" to GET in that GET is the simplest method to invoke in HTTP. SMTP, on the other hand, basically defaults to "POST". Therefore, people tend to use HTTP for GET callbacks and SMTP for POST ones. Meerkat's callback is logically a GET. Dave Winer's "Web bugs in XML" is logically a POST.
 If you use HTTP for GET and SMTP for POST then you don't have to say what method you are trying to invoke. SMTP can "only" do POST so using "mailto:" implies POST callback. HTTP strongly defaults to GET so using "http:" implies GET callback.
 HTTP doesn't have a standardized way to ask for callbacks. SMTP doesn't either, but the failure callbacks it provides are automatically implied by the act of sending mail. For HTTP it would be a logical thing to ask for callbacks on resource changes. Here's a proposal for how you might standardize this. Here is another very similar one.
I would agree that HTTP is not as "asynchronous" as protocols that do not allow responses to
requests. In those protocols you must build synchronous interfaces on top of the asynchronous
infrastructure and in HTTP or SMTP you build the asynchronous infrastructure on top of the
synchronous one. I can't see any killer advantage either way. I also don't see any benefit to
defining specially asynchronous protocols rather than just using HTTP in an asynchronous
manner.
11. Explain about SMTP
Protocol
SMTP is a connection-oriented, text-based protocol in which a mail sender communicates with
a mail receiver by issuing command strings and supplying necessary data over a reliable ordered
data stream channel, typically a Transmission Control Protocol (TCP) connection. An SMTP
session consists of commands originated by an SMTP client (the initiating agent, sender, or
transmitter) and corresponding responses from the SMTP server (the listening agent, or receiver)
so that the session is opened, and session parameters are exchanged. A session may include zero
or more SMTP transactions. An SMTP transaction consists of three command/reply sequences
(see example below.) They are:
1. MAIL command, to establish the return address, a.k.a. Return-Path, 5321.From, mfrom,
or envelope sender. This is the address for bounce messages.
2. RCPT command, to establish a recipient of this message. This command can be issued
multiple times, one for each recipient. These addresses are also part of the envelope.
3. DATA to send the message text. This is the content of the message, as opposed to its
envelope. It consists of a message header and a message body separated by an empty
line. DATA is actually a group of commands, and the server replies twice: once to the
DATA command proper, to acknowledge that it is ready to receive the text, and the
second time after the end-of-data sequence, to either accept or reject the entire message.
Besides the intermediate reply for DATA, each server's reply can be either positive (2xx reply
codes) or negative. Negative replies can be permanent (5xx codes) or transient (4xx codes). A
reject is a permanent failure by an SMTP server; in this case the SMTP client should send a
bounce message. A drop is a positive response followed by message discard rather than delivery.
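A minimal sketch of this transaction using Python's standard smtplib module; the host name and addresses below are hypothetical placeholders, and set_debuglevel(1) prints the MAIL/RCPT/DATA dialogue together with the server's reply codes.

import smtplib

msg = (
    "From: alice@example.org\r\n"
    "To: bob@example.net\r\n"
    "Subject: Test\r\n"
    "\r\n"
    "Hello via SMTP.\r\n"
)

with smtplib.SMTP("mail.example.net", 25) as server:    # opens the session (HELO/EHLO)
    server.set_debuglevel(1)                            # show the command/reply exchange
    # sendmail() issues MAIL FROM, one RCPT TO per recipient, then DATA
    server.sendmail("alice@example.org", ["bob@example.net"], msg)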
The initiating host, the SMTP client, can be either an end-user's email client, functionally
identified as a mail user agent (MUA), or a relay server's mail transfer agent (MTA), that is an
SMTP server acting as an SMTP client, in the relevant session, in order to relay mail. Fully
capable SMTP servers maintain queues of messages for retrying message transmissions that
resulted in transient failures.
A MUA knows the outgoing mail SMTP server from its configuration. An SMTP server acting
as client, i.e. relaying, typically determines which SMTP server to connect to by looking up the
MX (Mail eXchange) DNS resource record for each recipient's domain name. Conformant
MTAs (not all) fall back to a simple A record in case no MX record can be found. Relaying
servers can also be configured to use a smart host.
An SMTP server acting as client initiates a TCP connection to the server on the "well-known
port" designated for SMTP: port 25. MUAs should use port 587 to connect to an MSA. The main
difference between an MTA and an MSA is that SMTP Authentication is mandatory for the
latter only.
SMTP vs mail retrieval
SMTP is a delivery protocol only. It cannot pull messages from a remote server on demand.
Other protocols, such as the Post Office Protocol (POP) and the Internet Message Access
Protocol (IMAP) are specifically designed for retrieving messages and managing mail boxes.
However, SMTP has a feature to initiate mail queue processing on a remote server so that the
requesting system may receive any messages destined for it (see Remote Message Queue
Starting below). POP and IMAP are preferred protocols when a user's personal computer is only
intermittently powered up, or Internet connectivity is only transient and hosts cannot receive
messages during off-line periods.
Mail processing model
Email is submitted by a mail client (MUA, mail user agent) to a mail server (MSA, mail
submission agent) using SMTP on TCP port 587. Most mailbox providers still allow
submission on traditional port 25. From there, the MSA delivers the mail to its mail transfer
agent (MTA, mail transfer agent). Often, these two agents are just different instances of the same
software launched with different options on the same machine. Local processing can be done
either on a single machine, or split among multiple machines.
SECTION – C
1. Explain about functions of layers
OSI Network Architecture 7 Layers Model - 1
The seven OSI layers use various forms of control information to communicate with their peer
layers in other computer systems. This control information consists of specific requests and
instructions that are exchanged between peer OSI layers. Headers and Trailers of data at each
layer are the two basic forms to carry the control information.
Headers are prepended to data that has been passed down from upper layers. Trailers are
appended to data that has been passed down from upper layers. An OSI layer is not required to
attach a header or a trailer to data from upper layers.
Each layer may add a Header and a Trailer to its Data, which consists of the upper layer's
Header, Trailer and Data as it proceeds through the layers. The Headers contain information that
specifically addresses layer-to-layer communication. Headers, trailers and data are relative
concepts, depending on the layer that analyzes the information unit. For example, the Transport
Header (TH) contains information that only the Transport layer sees. All other layers below the
Transport layer pass the Transport Header as part of their Data. At the network layer, an
information unit consists of a Layer 3 header (NH) and data. At the data link layer, however, all
the information passed down by the network layer (the Layer 3 header and the data) is treated as
data. In other words, the data portion of an information unit at a given OSI layer potentially can
contain headers, trailers, and data from all the higher layers. This is known as encapsulation.
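A toy Python sketch of this encapsulation idea (the three-layer stack and the header/trailer strings are simplifications for illustration, not a real protocol implementation):

def encapsulate(data: bytes) -> bytes:
    segment = b"TH|" + data                 # transport layer prepends its header
    packet  = b"NH|" + segment              # network layer prepends its header
    frame   = b"DH|" + packet + b"|DT"      # data link layer adds a header and a trailer
    return frame

def decapsulate(frame: bytes) -> bytes:
    packet  = frame[len(b"DH|"):-len(b"|DT")]   # data link strips its header and trailer
    segment = packet[len(b"NH|"):]              # network strips its header
    data    = segment[len(b"TH|"):]             # transport strips its header
    return data

assert decapsulate(encapsulate(b"application data")) == b"application data"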
OSI Network Architecture 7 Layers Model - 2
For example, if computer A has data from a software application to send to computer B, the data
is passed to the application layer. The application layer in computer A then communicates any
control information required by the application layer in computer B by prepending a header to
the data. The resulting message unit, which includes a header, the data and maybe a trailer, is
passed to the presentation layer, which prepends its own header containing control information
intended for the presentation layer in computer B. The message unit grows in size as each layer
prepends its own header and trailer containing control information to be used by its peer layer in
computer B. At the physical layer, the entire information unit is transmitted through the network
medium.
The physical layer in computer B receives the information unit and passes it to the data link
layer. The data link layer in computer B then reads the control information contained in the
header prepended by the data link layer in computer A. The header and the trailer are then
removed, and the remainder of the information unit is passed to the network layer. Each layer
performs the same actions: The layer reads the header and trailer from its peer layer, strips it off,
and passes the remaining information unit to the next higher layer. After the application layer
performs these actions, the data is passed to the recipient software application in computer B, in
exactly the form in which it was transmitted by the application in computer A.
OSI Network Architecture 7 Layers Model - 3
One OSI layer communicates with another layer to make use of the services provided by the
second layer. The services provided by adjacent layers help a given OSI layer communicate with
its peer layer in other computer systems. A given layer in the OSI model generally communicates
with three other OSI layers: the layer directly above it, the layer directly below it and its peer
layer in other networked computer systems. The data link layer in computer A, for example,
communicates with the network layer of computer A, the physical layer of computer A and the
data link layer in computer B. The following chart illustrates this example.
The Open Systems Interconnect (OSI) model has seven layers. This article describes and
explains them, beginning with the 'lowest' in the hierarchy (the physical) and proceeding to the
'highest' (the application). The layers are stacked this way:
 Application
 Presentation
 Session
 Transport
 Network
 Data Link
 Physical
PHYSICAL LAYER
The physical layer, the lowest layer of the OSI model, is concerned with the transmission and
reception of the unstructured raw bit stream over a physical medium. It describes the
electrical/optical, mechanical, and functional interfaces to the physical medium, and carries the
signals for all of the higher layers. It provides:
 Data encoding: modifies the simple digital signal pattern (1s and 0s) used by the PC to better accommodate the characteristics of the physical medium, and to aid in bit and frame synchronization. It determines:
   o What signal state represents a binary 1
   o How the receiving station knows when a "bit-time" starts
   o How the receiving station delimits a frame
 Physical medium attachment, accommodating various possibilities in the medium:
   o Will an external transceiver (MAU) be used to connect to the medium?
   o How many pins do the connectors have and what is each pin used for?
 Transmission technique: determines whether the encoded bits will be transmitted by baseband (digital) or broadband (analog) signaling.
 Physical medium transmission: transmits bits as electrical or optical signals appropriate for the physical medium, and determines:
   o What physical medium options can be used
   o How many volts/dB should be used to represent a given signal state, using a given physical medium
DATA LINK LAYER
The data link layer provides error-free transfer of data frames from one node to another over the
physical layer, allowing layers above it to assume virtually error-free transmission over the link.
To do this, the data link layer provides:
 Link establishment and termination: establishes and terminates the logical link between two nodes.
 Frame traffic control: tells the transmitting node to "back-off" when no frame buffers are available.
 Frame sequencing: transmits/receives frames sequentially.
 Frame acknowledgment: provides/expects frame acknowledgments. Detects and recovers from errors that occur in the physical layer by retransmitting non-acknowledged frames and handling duplicate frame receipt.
 Frame delimiting: creates and recognizes frame boundaries.
 Frame error checking: checks received frames for integrity.
 Media access management: determines when the node "has the right" to use the physical medium.
NETWORK LAYER
The network layer controls the operation of the subnet, deciding which physical path the data
should take based on network conditions, priority of service, and other factors. It provides:
 Routing: routes frames among networks.
 Subnet traffic control: routers (network layer intermediate systems) can instruct a sending station to "throttle back" its frame transmission when the router's buffer fills up.
 Frame fragmentation: if it determines that a downstream router's maximum transmission unit (MTU) size is less than the frame size, a router can fragment a frame for transmission and re-assembly at the destination station.
 Logical-physical address mapping: translates logical addresses, or names, into physical addresses.
 Subnet usage accounting: has accounting functions to keep track of frames forwarded by subnet intermediate systems, to produce billing information.
Communications Subnet
The network layer software must build headers so that the network layer software residing in the
subnet intermediate systems can recognize them and use them to route data to the destination
address.
This layer relieves the upper layers of the need to know anything about the data transmission and
intermediate switching technologies used to connect systems. It establishes, maintains and
terminates connections across the intervening communications facility (one or several
intermediate systems in the communication subnet).
In the network layer and the layers below, peer protocols exist between a node and its immediate
neighbor, but the neighbor may be a node through which data is routed, not the destination
station. The source and destination stations may be separated by many intermediate systems.
TRANSPORT LAYER
The transport layer ensures that messages are delivered error-free, in sequence, and with no
losses or duplications. It relieves the higher layer protocols from any concern with the transfer of
data between them and their peers.
The size and complexity of a transport protocol depends on the type of service it can get from the
network layer. For a reliable network layer with virtual circuit capability, a minimal transport
layer is required. If the network layer is unreliable and/or only supports datagrams, the transport
protocol should include extensive error detection and recovery.
The transport layer provides:
 Message segmentation: accepts a message from the (session) layer above it, splits the message into smaller units (if not already small enough), and passes the smaller units down to the network layer. The transport layer at the destination station reassembles the message.
 Message acknowledgment: provides reliable end-to-end message delivery with acknowledgments.
 Message traffic control: tells the transmitting station to "back-off" when no message buffers are available.
 Session multiplexing: multiplexes several message streams, or sessions, onto one logical link and keeps track of which messages belong to which sessions (see session layer).
Typically, the transport layer can accept relatively large messages, but there are strict message
size limits imposed by the network (or lower) layer. Consequently, the transport layer must break
up the messages into smaller units, or frames, prepending a header to each frame.
The transport layer header information must then include control information, such as message
start and message end flags, to enable the transport layer on the other end to recognize message
boundaries. In addition, if the lower layers do not maintain sequence, the transport header must
contain sequence information to enable the transport layer on the receiving end to get the pieces
back together in the right order before handing the received message up to the layer above.
End-to-end layers
Unlike the lower "subnet" layers whose protocol is between immediately adjacent nodes, the
transport layer and the layers above are true "source to destination" or end-to-end layers, and are
not concerned with the details of the underlying communications facility. Transport layer
software (and software above it) on the source station carries on a conversation with similar
software on the destination station by using message headers and control messages.
SESSION LAYER
The session layer allows session establishment between processes running on different stations.
It provides:
 Session establishment, maintenance and termination: allows two application processes on different machines to establish, use and terminate a connection, called a session.
 Session support: performs the functions that allow these processes to communicate over the network, performing security, name recognition, logging, and so on.
PRESENTATION LAYER
The presentation layer formats the data to be presented to the application layer. It can be viewed
as the translator for the network. This layer may translate data from a format used by the
application layer into a common format at the sending station, then translate the common format
to a format known to the application layer at the receiving station.
The presentation layer provides:
 Character code translation: for example, ASCII to EBCDIC.
 Data conversion: bit order, CR-CR/LF, integer-floating point, and so on.
 Data compression: reduces the number of bits that need to be transmitted on the network.
 Data encryption: encrypt data for security purposes. For example, password encryption.
APPLICATION LAYER
The application layer serves as the window for users and application processes to access network
services. This layer contains a variety of commonly needed functions:
 Resource sharing and device redirection
 Remote file access
 Remote printer access
 Inter-process communication
 Network management
 Directory services
 Electronic messaging (such as mail)
 Network virtual terminals
2. Describe about ARQ and its types.
The types of ARQ protocols include
 Stop-and-wait ARQ
 Go-Back-N ARQ
 Selective Repeat ARQ
These protocols reside in the Data Link or Transport Layers of the OSI model.
Stop-and-wait ARQ
Stop-and-wait ARQ is a method used in telecommunications to send information between two
connected devices. It ensures that information is not lost due to dropped packets and that packets
are received in the correct order. It is the simplest kind of automatic repeat-request (ARQ)
method. A stop-and-wait ARQ sender sends one frame at a time; it is a special case of the
general sliding window protocol with both transmit and receive window sizes equal to 1. After
sending each frame, the sender doesn't send any further frames until it receives an
acknowledgement (ACK) signal. After receiving a good frame, the receiver sends an ACK. If
the ACK does not reach the sender before a certain time, known as the timeout, the sender sends
the same frame again.
The above behavior is the simplest Stop-and-Wait implementation. However, in a real life
implementation there are problems to be addressed.
Typically the transmitter adds a redundancy check number to the end of each frame. The
receiver uses the redundancy check number to check for possible damage. If the receiver sees
that the frame is good, it sends an ACK. If the receiver sees that the frame is damaged, the
receiver discards it and does not send an ACK -- pretending that the frame was completely lost,
not merely damaged.
One problem is where the ACK sent by the receiver is damaged or lost. In this case, the sender
doesn't receive the ACK, times out, and sends the frame again. Now the receiver has two copies
of the same frame, and doesn't know if the second one is a duplicate frame or the next frame of
the sequence carrying identical data.
Another problem is when the transmission medium has such a long latency that the sender's
timeout runs out before the frame reaches the receiver. In this case the sender resends the same
packet. Eventually the receiver gets two copies of the same frame, and sends an ACK for each
one. The sender, waiting for a single ACK, receives two ACKs, which may cause problems if it
assumes that the second ACK is for the next frame in the sequence.
To avoid these problems, the most common solution is to define a 1 bit sequence number in the
header of the frame. This sequence number alternates (from 0 to 1) in subsequent frames. When
the receiver sends an ACK, it includes the sequence number of the next packet it expects. This
way, the receiver can detect duplicated frames by checking if the frame sequence numbers
alternate. If two subsequent frames have the same sequence number, they are duplicates, and the
second frame is discarded. Similarly, if two subsequent ACKs reference the same sequence
number, they are acknowledging the same frame.
Stop-and-wait ARQ is inefficient compared to other ARQs, because the time between packets, if
the ACK and the data are received successfully, is twice the transit time (assuming the
turnaround time can be zero). The throughput on the channel is a fraction of what it could be. To
solve this problem, one can send more than one packet at a time with a larger sequence number
and use one ACK for a set. This is what is done in Go-Back-N ARQ and the Selective Repeat
ARQ.
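The following simplified Python simulation (an illustrative sketch: the lossy channel is modeled by a random drop probability, and timeouts are implied by the retransmission loop) shows stop-and-wait with the 1-bit alternating sequence number described above.

import random

def unreliable_send(frame, loss_rate=0.3):
    """Deliver a frame with some probability, returning None when it is 'lost'."""
    return None if random.random() < loss_rate else frame

def stop_and_wait(data_frames):
    received = []
    expected_seq = 0                       # receiver: sequence number it expects next
    seq = 0                                # sender: sequence number of the current frame
    for payload in data_frames:
        while True:                        # keep (re)transmitting until an ACK arrives
            frame = unreliable_send((seq, payload))
            if frame is None:              # frame lost: sender times out and resends
                continue
            f_seq, f_payload = frame
            if f_seq == expected_seq:      # new frame: deliver it and flip the expected bit
                received.append(f_payload)
                expected_seq ^= 1
            # a duplicate frame is discarded, but the receiver still sends an ACK
            ack = unreliable_send(("ACK", expected_seq))
            if ack is not None:            # sender sees the ACK and moves on
                break
        seq ^= 1                           # alternate the 1-bit sequence number
    return received

print(stop_and_wait(["A", "B", "C", "D"]))   # -> ['A', 'B', 'C', 'D'], in order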
Go-Back-N ARQ
Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol, in
which the sending process continues to send a number of frames specified by a window size
even without receiving an acknowledgement (ACK) packet from the receiver. It is a special
case of the general sliding window protocol with the transmit window size of N and receive
window size of 1.
The receiver process keeps track of the sequence number of the next frame it expects to receive,
and sends that number with every ACK it sends. The receiver will ignore any frame that does not
have the exact sequence number it expects – whether that frame is a "past" duplicate of a frame it
has already ACK'ed [1] or whether that frame is a "future" frame past the last packet it is waiting
for. Once the sender has sent all of the frames in its window, it will detect that all of the frames
since the first lost frame are outstanding, and will go back to sequence number of the last ACK it
received from the receiver process and fill its window starting with that frame and continue the
process over again.
Go-Back-N ARQ is a more efficient use of a connection than Stop-and-wait ARQ, since unlike
waiting for an acknowledgement for each packet, the connection is still being utilized as packets
are being sent. In other words, during the time that would otherwise be spent waiting, more
packets are being sent. However, this method also results in sending frames multiple times – if
any frame was lost or damaged, or the ACK acknowledging them was lost or damaged, then that
frame and all following frames in the window (even if they were received without error) will be
re-sent. To avoid this, Selective Repeat ARQ can be used. [2]
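A simplified sketch of this behaviour (illustrative assumptions: cumulative ACKs arrive instantly, the timeout is folded into the outer loop, and only the first transmission of frame 2 is lost):

def go_back_n(frames, window=4, lose_first_tx_of=frozenset({2})):
    delivered = []
    expected = 0                  # receiver: next in-order sequence number it will accept
    base = 0                      # sender: oldest unacknowledged frame
    transmitted_once = set()
    while base < len(frames):
        # the sender transmits every frame currently allowed by its window
        for seq in range(base, min(base + window, len(frames))):
            first_time = seq not in transmitted_once
            transmitted_once.add(seq)
            if first_time and seq in lose_first_tx_of:
                continue          # this transmission is lost in the channel
            if seq == expected:   # receiver accepts only the frame it expects
                delivered.append(frames[seq])
                expected += 1
            # out-of-order frames are simply discarded by the receiver
        base = expected           # cumulative ACK (or timeout) moves the window
    return delivered

print(go_back_n(["f0", "f1", "f2", "f3", "f4"]))  # all frames eventually delivered in order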
Selective Repeat ARQ
Selective Repeat ARQ / Selective Reject ARQ is a specific instance of the Automatic Repeat reQuest (ARQ) protocol used for communications.
Concept
It may be used as a protocol for the delivery and acknowledgement of message units, or it may
be used as a protocol for the delivery of subdivided message sub-units.
When used as the protocol for the delivery of messages, the sending process continues to send a
number of frames specified by a window size even after a frame loss. Unlike Go-Back-N ARQ,
the receiving process will continue to accept and acknowledge frames sent after an initial error;
this is the general case of the sliding window protocol with both transmit and receive window
sizes greater than 1.
The receiver process keeps track of the sequence number of the earliest frame it has not received,
and sends that number with every acknowledgement (ACK) it sends. If a frame from the sender
does not reach the receiver, the sender continues to send subsequent frames until it has emptied
its window. The receiver continues to fill its receiving window with the subsequent frames,
replying each time with an ACK containing the sequence number of the earliest missing frame.
Once the sender has sent all the frames in its window, it re-sends the frame number given by the
ACKs, and then continues where it left off.
The size of the sending and receiving windows must be equal, and half the maximum sequence
number (assuming that sequence numbers are numbered from 0 to n−1) to avoid
miscommunication in all cases of packets being dropped. To understand this, consider the case
when all ACKs are destroyed. If the receiving window is larger than half the maximum sequence
number, some, possibly even all, of the packages that are resent after timeouts are duplicates that
are not recognized as such. The sender moves its window for every packet that is
acknowledged.[1]
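A small worked example of this rule (assuming 3-bit sequence numbers, so the sequence space is 0-7): if the receiver has accepted a full window of frames but every ACK was lost, the sender retransmits the old window, and those retransmissions are only recognisable as duplicates when the old and new receive windows do not overlap.

SEQ_SPACE = 8                     # 3-bit sequence numbers: 0..7

def ambiguous_numbers(window):
    old_window = set(range(window))                                  # frames already accepted
    new_window = {(window + i) % SEQ_SPACE for i in range(window)}   # frames expected next
    return old_window & new_window        # sequence numbers that could be either old or new

print(ambiguous_numbers(4))   # set()  -> a window of 4 (= 8/2) is safe
print(ambiguous_numbers(5))   # {0, 1} -> resent frames 0 and 1 would be taken as new data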
When used as the protocol for the delivery of subdivided messages it works somewhat
differently. In non-continuous channels where messages may be variable in length, standard
ARQ or Hybrid ARQ protocols may treat the message as a single unit. Alternately selective
retransmission may be employed in conjunction with the basic ARQ mechanism where the
message is first subdivided into sub-blocks (typically of fixed length) in a process called packet
segmentation. The original variable length message is thus represented as a concatenation of a
variable number of sub-blocks. While in standard ARQ the message as a whole is either
acknowledged (ACKed) or negatively acknowledged (NAKed), in ARQ with selective
transmission the NAKed response would additionally carry a bit flag indicating the identity of
each sub-block successfully received. In ARQ with selective retransmission of sub-divided
messages each retransmission diminishes in length, needing to only contain the sub-blocks that
were NAKed.
In most channel models with variable length messages, the probability of error-free reception
diminishes in inverse proportion with increasing message length. In other words it's easier to
receive a short message than a longer message. Therefore standard ARQ techniques involving
variable length messages have increased difficulty delivering longer messages, as each repeat is
the full length. Selective retransmission applied to variable length messages completely
eliminates the difficulty in delivering longer messages, as successfully delivered sub-blocks are
retained after each transmission, and the number of outstanding sub-blocks in following
transmissions diminishes.
3. Explain about TCP/IP in detail
The Transmission Control Protocol (TCP)
TCP is a reliable stream delivery protocol. It establishes a virtual circuit between the two
applications, and sends a stream of bytes to the destination in exactly the same order as they left
the source.
Before transmission begins, the applications at both ends obtain a TCP port, similar to that used
by UDP.
TCP segments are encapsulated into an IP datagram. TCP buffers the stream by waiting for
enough data to fill a large datagram before sending it.
TCP is full duplex, and assigns each segment a sequence number, which the receiving end uses
to ensure all segments are received in the correct order. Upon arrival of the next segment, the
receiving end sends an acknowledgement to the sending node.
If the sending node does not receive an acknowledgement within a certain time, it retransmits the
segment.
TCP/IP is a common protocol suite used to interconnect computers, and it also serves as
the default protocol for accessing information over the Internet.
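The stream behaviour described above is what the ordinary sockets interface exposes to applications. The fragment below is a rough sketch in Python; the host name and port are placeholders, and all of the segmenting, sequencing, acknowledgement and retransmission is done by the TCP implementation underneath the two calls.

import socket

# Illustrative TCP client: the bytes written with sendall() are delivered to
# the peer in order; TCP handles segments, sequence numbers, ACKs and
# retransmission transparently.
with socket.create_connection(("example.com", 80)) as s:   # placeholder host and port
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = s.recv(4096)       # bytes arrive in the same order they were sent
    print(reply[:80])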
Objectives
At the end of this section you should be able to
 list three main features of TCP/IP
 identify on a given diagram the relationship between the OSI model and the various layers of TCP/IP
 explain the difference between a physical and logical address
 discuss reasons why TCP/IP hosts must have unique configurations
 given a set of TCP/IP addresses, determine which class the address belongs to
 given a specific type of application, select either UDP or TCP as the correct datagram to use
 describe the function of a Domain Name Service
 define the term "socket"
 given a list of standard network commands, describe their use and give an example
Inter Networking
UNIX systems are usually interconnected using TCP/IP (transmission control protocol, Internet
protocol). This is a protocol mechanism that is widely used by large networks world wide to
interconnect computers of different types.
A protocol is a set of rules that govern how computers talk to each other. TCP/IP is a widely
used and very popular protocol. With TCP/IP, different computer systems can reliably exchange
data on an interconnected network. It also provides a consistent set of application programming
interfaces (API's) to support application development. This means that software programs can
use TCP/IP to exchange data. An example of this is web servers and web browsers, software
applications which use TCP/IP to exchange data.
Features of TCP/IP
Below are a few of the common features of TCP/IP.
 File Transfer
The file transfer protocol (FTP) and remote copy (RCP) applications let users transfer files between their computer systems.
 Terminal Emulation
Telnet and rlogin provide a method for establishing an interactive connection between computer systems.
 Transparent distributed file access and sharing
The Network File System (NFS) uses the IP protocol to extend the file system to support access to directories and disks on other computer systems.
 Remote command execution
Using the remote shell (rsh) and remote execution (rexec) programs, users can run programs on remote computers and see the results on their own computer. This lets users of slow computers take advantage of faster computers by running their programs on the faster remote computer.
 Remote Printing
The UNIX command lpr provides remote printing services.
TCP/IP History
The concept of connecting dissimilar computers into a common network arose from research
conducted by the Defense Advanced Research Projects Agency (DARPA). DARPA developed
the TCP/IP suite of protocols, and implemented an internetwork called ARPANET, which has
evolved into the INTERNET.
TCP/IP and OSI
The protocols used closely resemble the OSI model. The Open Systems Inter-connect model is a
model of 7 layers, which deal with the exchange of data from one computer to another.
Applications developed for TCP/IP generally use several of the protocols. The sum of the layers
used is known as the protocol stack.
User Application programs communicate with the top layer in the protocol stack. This layer
passes information to the next lower layer of the stack, and so on until the information
is passed to the lowest layer, the physical layer, which transfers the information to the destination
network. The lower layers of the destination computer pass the received information to
its higher layers, which in turn pass the data to the destination application. Each protocol layer
performs various functions which are independent of the other layers. Each layer communicates
with equivalent layers on another computer, e.g., the session layer of two different computers
interact.
An application program, transferring files using TCP/IP, performs the following,
 the application layer passes the data to the transport layer of the source computer
 the transport layer
o divides the data into TCP segments
o adds a header with a sequence number to each TCP segment
o passes the TCP segments to the IP layer
 the IP layer
o creates a packet with a data portion containing the TCP segment
o adds a packet header containing the source and destination IP addresses
o determines the physical address of the destination computer
o passes the packet and destination physical address to the datalink layer
 the datalink layer transmits the IP packet in the data portion of a frame
 the destination computer's datalink layer
o discards the datalink header and passes the IP packet to the IP layer
 the destination's IP layer
o checks the IP packet header and checksum
o if okay, it discards the IP header and passes the TCP segment to the TCP layer
 the destination's TCP layer
o computes a checksum for the TCP segment data and header
o if okay, sends an acknowledgement to the source computer
o discards the TCP header and passes the data to the application
Physical Addresses and Internet Addresses
Each networked computer is assigned a physical address, which takes different forms on
different networks. For ETHERNET networks, the physical address is a 6 byte numeric (or 12
digit hexadecimal) value (e.g. 080BF0AFDC09). Each computer's Ethernet address is unique,
and corresponds to the address of the physical network card installed in the computer.
Internet addresses are logical addresses, and are independent of any particular hardware or
network component.
The TCP/IP protocol implements a logical network numbering, stored in configuration files,
which a machine identifies itself as. This logical numbering is important in sending information
to other users at other networks, or accessing machines remotely. An Internet address consists of a
4 byte (32-bit) numeric value which identifies the network number and the device number on the
network. The 4 byte IP address is represented in dotted decimal notation, where each byte
represents a value between 0 and 255, e.g., 127.46.6.11
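The relationship between the dotted decimal form and the underlying 32-bit value can be shown with a small illustrative helper (not a library routine):

def dotted_to_int(address):
    # "127.46.6.11" -> 32-bit integer, one byte per label
    a, b, c, d = (int(part) for part in address.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_dotted(value):
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(hex(dotted_to_int("127.46.6.11")))   # 0x7f2e060b
print(int_to_dotted(0x7F2E060B))           # 127.46.6.11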
When a computer wants to exchange data with another computer using TCP/IP, it first translates
the destination IP address into a physical address in order to send packets to other computers on
the network (this is called address resolution).
In addition, computers in a TCP/IP network each have unique logical names like
ICE.CIT.AC.NZ. These logical names are connected to their IP address, in this example, the IP
address of ice.cit.ac.nz is 156.59.20.50. The logical name is also referred to as the domain
name.
When a client computer wishes to communicate with the host computer ICE, it must translate its
logical name into its IP address. It does this via a domain name lookup query, which asks a
domain name server the IP address of the destination host. The domain name server has a set of
static tables that it uses to find the IP address. Notably, the domain name server is mission
critical: if it fails, lookup requests cannot be answered and you will not be able to connect to
any computer using its domain name. Once the IP address is
known, an address resolution is performed to return the physical address of the computer.
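Both lookups are available to programs through ordinary library calls. The sketch below uses the host name quoted in the text; it will only succeed on a network where that name is actually registered with a reachable domain name server.

import socket

# domain name -> IP address (the domain name lookup query described above)
ip = socket.gethostbyname("ice.cit.ac.nz")     # the text's example resolves to 156.59.20.50
print(ip)

# the reverse mapping, IP address -> domain name, is also provided
print(socket.gethostbyaddr(ip)[0])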
The IP logical numbering is comprised of a network number and a local number. For sites
connected to the Internet (a global computer network of universities, databases, companies and
US defence sites), the network portion is assigned by applying to a company responsible for
maintaining the Internet Domain Names.
The construction of an IP address is divided into three classes. Which class is used by an
organization depends upon the maximum number of work stations that is required by that
organization. Each node or computer using TCP/IP within the organization MUST HAVE a
unique host part of the IP address.
Class A Addressing
 first byte specifies the network portion
 remaining bytes specify the host portion
 the highest order bit of the network byte is always 0
 network values of 0 and 127 are reserved
 there are 126 class A networks
 there are more than 16 million host values for each class A network
Class B Addressing
 the first two bytes specify the network portion
 the last two bytes specify the host portion
 the highest order bits 6 and 7 of the network portion are 10
 there are more than 16 thousand class B networks
 there are 65 thousand nodes in each class B network
Class C Addressing
 the first three bytes specify the network portion
 the last byte specifies the host portion
 the highest order bits 5, 6 and 7 of the network portion are 110
 there are more than 2 million class C networks
 there are 254 nodes in each class C network
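A small sketch that applies the three sets of rules above to the first byte of an address (classful addressing is shown only because the text describes it; modern networks use CIDR instead):

def address_class(address):
    first = int(address.split(".")[0])
    if first < 128:        # highest-order bit is 0
        return "A"
    if first < 192:        # highest-order bits are 10
        return "B"
    if first < 224:        # highest-order bits are 110
        return "C"
    return "D/E (multicast or reserved)"

print(address_class("126.10.0.1"))     # A
print(address_class("156.59.20.50"))   # B
print(address_class("200.1.2.3"))      # C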
Reserved IP Addresses
The following IP addresses are reserved for special purposes, and must NOT be assigned to any
host.
 Network Addresses : The host portion is set to all zeros (129.47.0.0)
 Broadcast Address : The host portion is set to all ones (129.47.255.255)
 Loopback Addresses : 127.0.0.0 and 127.0.0.1
Internet to Physical Address Translation
When an IP packet is sent, it is encapsulated (enclosed) within the physical frame used by the
network. The IP address is mapped onto the physical address using the Address Resolution
Protocol (arp) for networks such as Ethernet, token-ring, and Arcnet.
When a node wants to send an IP packet, it determines the physical address of the destination
node by first broadcasting an ARP packet which contains the destination IP address. The
destination node responds by sending its physical address back to the requesting node.
The Internet Protocol (IP)
This defines the format of the packets and how to handle them when sending or receiving. The
form of the packets is called an IP datagram.
The Internet Control Message Protocol (ICMP)
ICMP packets contain information about failures on the network, such as inoperative nodes and
gateways, packet congestion etc. The IP software interprets ICMP messages. ICMP messages
often travel across many networks to reach their destination, so they are encapsulated in the data
portion of an IP datagram.
The User Datagram Protocol (UDP)
This permits users to exchange individual packets over a network. It defines a set of destinations
known as protocol ports. Ports are numbered, and TCP/IP reserves 1 to 255 for certain
applications. The UDP datagram is encapsulated into one or more IP datagrams.
Domain Name Servers
This is a hierarchical naming system for identifying hosts. Each host name is comprised of
domain labels separated by periods. If your machine is connected to the Internet, you assign local
domain names to host computers only, and your higher level domain name is assigned to you by
the organization that controls the domain names. Domain names must be registered, so they don't
conflict with an existing one.
For example, the domain name assigned to CIT is,
cit.ac.nz
For example, host computers at CIT are called cit1, cit2, and mail. Their host names in the
domain are
cit1.cit.ac.nz
cit2.cit.ac.nz
mail.cit.ac.nz
Users are also assigned names. Consider the user joe, who has an account on the host machine
mail. The domain name for this user is,
joe@mail.cit.ac.nz
Hosts in your domain can be referred to by host name only. One host acts as a name resolver
(host domain name server), which resolves machine names. For example, if you want to ftp into
the local host ftp.cit.ac.nz, it will send a request to the domain name server, which will send back
its IP address.
The domain name server uses a special file called hosts to resolve host names and their IP
addresses. This file is static and must be updated every time changes are made.
Simple Network Management Protocol (snmp)
This provides a means for managing a network environment. Each host, router or gateway
running SNMP can be interrogated for information related to the network.
Examples of information are
 host names
 packets transmitted and received
 errors
 routing information
Boot Protocol (bootp)
This service allows a local machine to get its Internet address from a designated boot server. The
bootp server has a list of Ethernet addresses and IP addresses stored in a file (bootptab). When it
receives a request from a machine, it looks at this file for a match and responds with the assigned
IP address. The bootp server uses static tables to maintain a link between the Ethernet addresses
and IP addresses for computers on the network. Obviously, this requires continual updating as
network cards are changed and computers moved within the organization.
Network Services
All of the above network services like snmp and ftp are enabled on the host machine by running
the system daemon process to support the service.
If the daemon process is not started, the service is not available at that host. In other words, you
cannot ftp into a host which is not running the ftp daemon service.
When a UNIX host starts up, it usually runs an inetd service, which reads the file inetd.lst which
contains a list of the networking services for the host to start.
Sockets
Sockets are an end to end numbered connection between two UNIX machines communicating
via TCP/IP. Standard packages are assigned special socket numbers (telnet is port 23). The
socket numbers for various protocols and services are found in /etc/services.
A programming socket interface provides calls for opening, reading, writing and closing a socket
to another host machine. In this way, the programmer need not be concerned with the underlying
protocol associated with the socket interface.
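A minimal example of the socket interface just described is shown below. The service name lookup mirrors the /etc/services mapping mentioned above; the host name is the one used elsewhere in this text and is only a placeholder, so the connection will succeed only where that host exists and runs the service.

import socket

port = socket.getservbyname("telnet", "tcp")   # /etc/services maps "telnet" to port 23
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("ice.cit.ac.nz", port))             # placeholder host from the text's example
s.sendall(b"\r\n")
print(s.recv(256))                              # whatever banner the remote service sends
s.close()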
Networking Commands
Below is a discussion of some of the more common networking commands.
arp (address resolution protocol)
This command displays and modifies the internet to physical hardware address translation tables.
Examples
arp -a               ; show all ARP entries on host kai
arp -d 156.59.20.50  ; delete an ARP entry for the host ice
arp -f               ; delete all ARP entries
netstat (network status)
This command displays the network status of the local host. It provides information about the
TCP connections, packet statistics, memory buffers and socket information.
Examples
netstat -s   ; show socket information
netstat -r   ; show routing tables
netstat -a   ; show addresses of network interfaces
netstat -?   ; show help
ping
This command sends an echo request to a host. It is a diagnostic tool for testing whether a host
can be found. When the request reaches the host, it is sent back to the originator.
Examples
ping ice.cit.ac.nz ; send an echo request to host ice.cit.ac.nz
ping 156.45.208.1 ; ping host at IP address 156.45.208.1
c:\winnt\system32> Ping ice.cit.ac.nz
Pinging ice.cit.ac.nz [156.59.20.50] with 32 bytes of data:
Reply from 156.59.20.50: bytes=32 time<10ms TTL=128
Reply from 156.59.20.50: bytes=32 time<10ms TTL=128
Reply from 156.59.20.50: bytes=32 time<10ms TTL=128
Reply from 156.59.20.50: bytes=32 time<10ms TTL=128
route
This command manually manipulates the network routing tables which are used to connect to
other hosts.
Examples
route add net 129.34.10.0 129.34.20.1 1
; add a new network 129.34.10.0 accessible via the gateway 129.34.20.1 and
; there is one metric hop to this destination
4. Explain about x modem, y modem and Kermit of asynchronous protocols in detail
Asynchronous protocol - technical definition
A communications protocol that controls an asynchronous transmission, for example, ASCII,
TTY, Kermit and Xmodem. Contrast with synchronous protocol.
DataExpress Implementation Of The XMODEM Protocol
The other file protocols (XMODEM, YMODEM, ZMODEM, KERMIT) provide more features
over ASCII. Each of these provide for transmitting data in frames rather than records, error
detecting, and re-transmitting bad frames. They provide greater reliability, and in some cases,
greater speed. Each may have some advantages. XMODEM is one of the earliest, and still one of
the simplest of the file transfer protocols. It is almost universally available. It uses short, fixed-length frames. For error detection, it provides either a simple one-byte checksum (CHKSUM), or
a two-byte CRC error check. XMODEM is only capable of re-transmitting the frame last
transmitted for error recovery. The frames are also numbered (1-255) to distinguish new frames
from re-transmitted frames. Because of the simple error recovery, the sending computer must
wait until the receiving computer acknowledges each frame before it will transmit the next. This
causes XMODEM to be relatively slow -- similar in this respect to paced ASCII, except that
XMODEM sends additional characters with each frame.
XMODEM and Flow Control
XMODEM requires an 8-bit channel, because it has no provision for translating binary data.
DataExpress automatically changes to 8-bit, no parity mode during XMODEM, even if the
conversational ASCII portion of the session uses a different mode. Because of this, XON/XOFF
(Software) Flow Control is not compatible with XMODEM. Because XMODEM blocks are 132
or 133 bytes long, and the sender waits for an acknowledgement, XMODEM does not require
any flow control, although RTS/CTS (Hardware) Flow Control is compatible with XMODEM.
Control Characters Used by XMODEM
XMODEM uses one control character at the beginning of each frame. This signifies the type of
frame. Except for data frames, the control character constitutes a one-byte frame.
The control characters are shown in this table:
<SOH>  01H  Start of data frame
<EOT>  04H  End of Transmission
<ACK>  06H  Positive Acknowledgement
<NAK>  15H  Negative Acknowledgement
<CAN>  18H  Cancel Transmission
XMODEM Frames
XMODEM data frames are 132 or 133 bytes long, depending on the type of error detection used.
They contain 128 bytes of file data, even if there are fewer bytes remaining in the file to be
transmitted.
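A sketch of how a 132-byte checksum-mode data frame could be assembled is shown below. The layout follows the common description of XMODEM (SOH, block number, its one's complement, 128 data bytes, a one-byte arithmetic checksum); the pad character 0x1A is an assumption for illustration and is not necessarily the File Term/Pad Char a given installation defines.

SOH = 0x01

def xmodem_frame(block_number, data):
    payload = data.ljust(128, b"\x1a")     # pad the final short block out to 128 bytes
    checksum = sum(payload) & 0xFF         # one-byte arithmetic checksum
    header = bytes([SOH, block_number & 0xFF, 0xFF - (block_number & 0xFF)])
    return header + payload + bytes([checksum])

frame = xmodem_frame(1, b"hello world")
print(len(frame))    # 132 bytes: 3 header bytes + 128 data bytes + 1 checksum byte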
XMODEM Transfer Frame Format
Considerations
The time-out is 10 seconds; retry 10 times.
A <CAN> will abort both a send and a receive.
The last frame sent is padded with the File Term/Pad Char (as defined on the ID05 Define Link Characteristics screen) until the frame contains 128 characters, regardless
of whether distribution Append = Y or N.
Refer to the configuration section later in this chapter for information concerning sysgen
parameters.
Data Express Implementation Of The XMODEM1K Protocol
XMODEM1K is based on the XMODEM protocol. It works basically the same, except that it
allows frames with 1029 bytes. These frames contain 1024 bytes of file data. This enhancement
improves the transfer speed, since the sending computer must wait fewer times for an
acknowledgement from the receiving computer. XMODEM1K still uses the same error detection
and recovery as XMODEM.
XMODEM1K also allows the transmission of the shorter XMODEM frames -- this reduces the
extra pad characters used after the end of the file has been transmitted. To distinguish between
the long and short frames, XMODEM1K uses an additional control character to indicate the long
frames.
Control Characters Used by XMODEM1K
<SOH>  01H  Start of short data frame
<STX>  02H  Start of long data frame
<EOT>  04H  End of Transmission
<ACK>  06H  Positive Acknowledgement
<NAK>  15H  Negative Acknowledgement
<CAN>  18H  Cancel Transmission
XMODEM1K and Flow Control
With XMODEM1K, flow control must be considered. The size of the frame, 1029 bytes, is
larger than the buffer contained in the Asynchronous Clip used by DataExpress. Thus, data
overrun can occur if flow control is not enabled. While this will not prevent the transmission
from completing successfully, the occasional errors will slow down the transmission. As with
XMODEM, XON/XOFF (Software) Flow Control is not compatible. Thus, it is recommended
that RTS/CTS (Hardware) Flow Control be enabled in both the HP NonStop controller using
CMI, and in the modem. The use of RTS for flow control is different from the ANSI standard for
RS-232 asynchronous communication, but is supported as an alternative by most modems.
XMODEM1k Transfer Frame Format
DataExpress Implementation Of The YMODEM Protocol
YMODEM is based on the XMODEM1K protocol. It works basically the same, except that it
allows multiple files to be sent in one session, and information about the files is sent.
DataExpress is not able to make use of the file information when receiving files (future
implementations might use this information), but it can be useful when sending files -- especially
the file name and modification date -- on the remote computer.
Multiple-File Batches
YMODEM uses special file-header frames before transmitting the data frames. These frames
allow multiple files to be transmitted in one session, and also transmit the names and other
information about the files. These file-header frames are indicated by using zero as the frame
number. The end of each file is transmitted the same as in XMODEM1K, using the EOT
character. The end of the entire session is indicated by sending a final file header frame
containing no file name or other file information, just a frame filled with nulls.
DataExpress has implemented the YMODEM file protocol so that it can receive multiple files.
However, all of the files must use the same file format as defined for the schedule, and the data
from these files is concatenated into a single file. When sending data using YMODEM format,
DataExpress always sends just one file.
YMODEM and Flow Control
As with XMODEM1K, YMODEM can result in data overrun if flow control is not enabled. See
the discussion of XON/XOFF vs RTS/CTS flow control for XMODEM1K.
DataExpress Implementation Of The ZMODEM
DataExpress Implementation Of The KERMIT
KERMIT is different from all of the X/Y/Z-MODEM protocols. It has all of the features of
YMODEM, and adds several more features, including some of the features of ZMODEM. One
of its advantages is that during error-recovery, it only re-transmits the bad frames. It does not
need to continue to re-transmit the subsequent frames as ZMODEM does. It also has repeated-character compression, which can reduce the transmission time when the data contains many pad
characters. On the other hand, Kermit does not inherently support recovery from previously
failed sessions, although it contains some extensions that could support that in theory. Kermit
can be used in either 8-bit or 7-bit channels, so the user can choose whether parity bits will be
translated using extra prefix characters, or transmitted as-is.
Options Supported
Both text and binary files are supported. Text files can contain either ASCII or EBCDIC. For
block checks - 6 or 12 bit checksum and 16 bit CRC are supported. The maximum packet size
and number of windows are set on the ID05 - Define Link Characteristics screen. DataExpress
will support the windowed (Full Duplex) version of Kermit. Alternative start of packet characters
will be supported (from the ID05 screen). Either seven or eight bit transmissions will be
supported. Run length compression is also supported.
Pad Characters
DataExpress will honor a request from the remote to transmit PAD characters, but will not
request PAD characters, and will ignore the receipt of them. The Kermit PAD characters precede
a packet; they are not related to the Pad Chars on ID05, or those used in XMODEM.
Packet Termination Character
DataExpress will request a carriage return (hex %HD) and will honor the character that is
requested by the remote.
File Information Transmitted
For Distributions, the remote Kermit software might use file attributes when storing the data. The
Physical File Name on the schedule will be used to create a file on the remote. The HP NonStop
file name of the Warehouse file will be translated into an eight byte name that could be used in
the case where the remote file name is not specified as the Physical File Name on the Schedule.
The file will be indicated as originating on HP NonStop systems. If records are fixed, the length
is transmitted, otherwise the record delimiters from the ID05 - Define Link Characteristics
screen are transmitted. An estimate of the total size of the file will be transmitted, based on the
size of the Warehouse file. A timestamp will be transmitted, using the last modification time of
the Warehouse file.
Kermit and Flow Control
Kermit allows the maximum frame size and maximum window size to be selected by the user.
Depending on the size selected, flow control may or may not be required. A frame size of less
than 504 bytes and a window size of one frame would not require flow control. Anything larger
than that could. Kermit is compatible with either XON/XOFF (Software) flow control or
RTS/CTS (Hardware) flow control, because it translates all control characters. RTS/CTS is
recommended, though, because it is faster, and is compatible with all other file protocols that
might be used on the same line. If flow control is not enabled, Kermit will tend to re-transmit
occasional frames due to data overrun.
Kermit and End-of-File
For a collection, DataExpress looks for the file-termination-character defined in the ID05 screen,
and when it finds it, all data after that point is discarded. If the Error-After-Eof indicator (also in
ID05) is set to “Y”, the collection will fail with an error. In all cases, if the file-termination character is not located in the final block of data preceding the end of transmission, the
collection will fail with an error.
5. Describe about FTP
File Transfer Protocol (FTP)
File Transfer Protocol (FTP) is a standard network protocol used to transfer files from one host
to another host over a TCP-based network, such as the Internet. FTP works in the same way as HTTP for
transferring Web pages from a server to a user's browser and SMTP for transferring electronic
mail across the Internet in that, like these technologies, FTP uses the Internet's TCP/IP protocols
to enable data transfer.
FTP is most commonly used to download a file from a server using the Internet or to upload a
file to a server (e.g., uploading a Web page file to a server).
FTP is built on a client-server architecture and utilizes separate control and data connections
between the client and server.[1] FTP users may authenticate themselves using a clear-text sign-in
protocol but can connect anonymously if the server is configured to allow it.
The first FTP client applications were interactive command-line tools, implementing standard
commands and syntax. Graphical user interface clients have since been developed for many of
the popular desktop operating systems in use today.[2][3]
Protocol overview
The protocol is specified in RFC 959,[2] which is summarized below.[6]
FTP operates on the application layer of the OSI model, and is used to transfer files using
TCP/IP.[3] In order to do this an FTP server needs to be running and waiting for incoming
requests.[3] The client computer is then able to communicate with the server on port 21.[3][7] This
connection, called the control connection,[8] remains open for the duration of the session, with a
second connection, called the data connection,[2][8] either opened by the server from its port 20
to a negotiated client port (active mode) or opened by the client from an arbitrary port to a
negotiated server port (passive mode) as required to transfer file data.[2][7] The control connection
is used for session administration (i.e., commands, identification, passwords)[9] exchanged
between the client and server using a telnet-like protocol. For example "RETR filename" would
transfer the specified file from the server to the client. Due to this two-port structure, FTP is
considered an out-of-band protocol, as opposed to an in-band protocol such as HTTP.[9]
The server responds on the control connection with three digit status codes in ASCII with an
optional text message, for example "200" (or "200 OK.") means that the last command was
successful. The number is the reply code and the optional text represents an explanation
(e.g., <OK>) or needed parameters (e.g., <Need account for storing file>).[1] A file transfer in
progress over the data connection can be aborted using an interrupt message sent over the control
connection.
FTP can be run in active or passive mode, which determines how the data connection is
established.[8] In active mode, the client sends the server the IP address and port number on
which the client will listen, and the server initiates the TCP connection.[7] In situations where the
client is behind a firewall and unable to accept incoming TCP connections, passive mode may be
used. In this mode the client sends a PASV command to the server and receives an IP address
and port number in return.[7][8] The client uses these to open the data connection to the server.[6]
Both modes were updated in September 1998 to add support for IPv6. Other changes were made
to passive mode at that time, making it extended passive mode.[10]
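From a client programmer's point of view the mode is usually a single switch. The sketch below uses Python's standard ftplib; the server name and file name are placeholders, and the anonymous login works only where the server allows it.

from ftplib import FTP

ftp = FTP("ftp.example.com")          # opens the control connection on port 21
ftp.login()                           # anonymous login, where permitted
ftp.set_pasv(True)                    # passive mode: the client opens the data connection
with open("README", "wb") as out:     # placeholder file name
    ftp.retrbinary("RETR README", out.write)
ftp.quit()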
While transferring data over the network, four data representations can be used:[2][3][5]
 ASCII mode: used for text. Data is converted, if needed, from the sending host's character representation to "8-bit ASCII" before transmission, and (again, if necessary) to the receiving host's character representation. As a consequence, this mode is inappropriate for files that contain data other than plain text.
 Image mode (commonly called Binary mode): the sending machine sends each file byte for byte, and the recipient stores the bytestream as it receives it. (Image mode support has been recommended for all implementations of FTP.)
 EBCDIC mode: used for plain text between hosts using the EBCDIC character set. This mode is otherwise like ASCII mode.
 Local mode: allows two computers with identical setups to send data in a proprietary format without the need to convert it to ASCII.
For text files, different format control and record structure options are provided. These features
were designed to facilitate files containing Telnet or ASA formatting.
Data transfer can be done in any of three modes:
 Stream mode: Data is sent as a continuous stream, relieving FTP from doing any processing. Rather, all processing is left up to TCP. No End-of-file indicator is needed, unless the data is divided into records.
 Block mode: FTP breaks the data into several blocks (block header, byte count, and data field) and then passes it on to TCP.[5]
 Compressed mode: Data is compressed using a single algorithm (usually run-length encoding).
Login
FTP login utilizes a normal username/password scheme for granting access.[2] The username is
sent to the server using the USER command, and the password is sent using the PASS
command.[2] If the information provided by the client is accepted by the server, the server will
send a greeting to the client and the session will be open.[2] If the server supports it, users may
log in without providing login credentials. The server will then limit access for that session
based on what the user is authorized to do.[2]
Security
FTP was not designed to be a secure protocol—especially by today's standards—and has many
security weaknesses.[11] In May 1999, the authors of RFC 2577 enumerated the following
flaws:[12]
 Bounce attacks
 Spoof attacks
 Brute force attacks
 Packet capture (sniffing)
 Username protection
 Port stealing
FTP was not designed to encrypt its traffic; all transmissions are in clear text, and user names,
passwords, commands and data can be easily read by anyone able to perform packet capture
(sniffing) on the network.[2][11] This problem is common to many Internet Protocol specifications
(such as SMTP, Telnet, POP and IMAP) designed prior to the creation of encryption
mechanisms such as TLS or SSL.[5] A common solution to this problem is use of the "secure",
TLS-protected versions of the insecure protocols (e.g. FTPS for FTP, TelnetS for Telnet, etc.) or
selection of a different, more secure protocol that can handle the job, such as the SFTP/SCP
tools included with most implementations of the Secure Shell protocol.
Secure FTP
There are several methods of securely transferring files that have been called "Secure FTP" at
one point or another.
FTPS (explicit)
Explicit FTPS is an extension to the FTP standard that allows clients to request that the FTP
session be encrypted. This is done by sending the "AUTH TLS" command. The server has the
option of allowing or denying connections that do not request TLS. This protocol extension is
defined in the proposed standard RFC 4217. Like plain FTP, explicit FTPS uses port 21 for the control connection.
FTPS (implicit)
Implicit FTPS is a deprecated standard for FTP that required the use of an SSL or TLS connection.
It was specified to use different ports than plain FTP.
SFTP
SFTP, the "SSH File Transfer Protocol," is not related to FTP except that it also transfers files
and has a similar command set for users.
SFTP, or secure FTP, is a program that uses SSH to transfer files. Unlike standard FTP, it
encrypts both commands and data, preventing passwords and sensitive information from being
transmitted in the clear over the network. It is functionally similar to FTP, but because it uses a
different protocol, you can't use a standard FTP client to talk to an SFTP server, nor can you
connect to an FTP server with a client that supports only SFTP.
FTP over SSH (not SFTP)
FTP over SSH (not SFTP) refers to the practice of tunneling a normal FTP session over an SSH
connection.[11]
Because FTP uses multiple TCP connections (unusual for a TCP/IP protocol that is still in use),
it is particularly difficult to tunnel over SSH. With many SSH clients, attempting to set up a
tunnel for the control channel (the initial client-to-server connection on port 21) will protect only
that channel; when data is transferred, the FTP software at either end will set up new TCP
connections (data channels), which bypass the SSH connection, and thus have no
confidentiality, integrity protection, etc.
Otherwise, it is necessary for the SSH client software to have specific knowledge of the FTP
protocol, and monitor and rewrite FTP control channel messages and autonomously open new
packet forwardings for FTP data channels. Version 3 of SSH Communications Security's
software suite, the GPL licensed FONC, and Co:Z FTPSSH Proxy are three software packages
that support this mode.
FTP over SSH is sometimes referred to as secure FTP; this should not be confused with other
methods of securing FTP, such as with SSL/TLS (FTPS). Other methods of transferring files
using SSH that are not related to FTP include SFTP and SCP; in each of these, the entire
conversation (credentials and data) is always protected by the SSH protocol.
List of FTP commands
Below is a list of FTP commands that may be sent to an FTP server, including all commands
that are standardized in RFC 959 by the IETF. All commands below are RFC 959 based unless
stated otherwise. Note that most command-line FTP clients present their own set of commands to
users. For example, GET is the common user command to download a file instead of the raw
command RETR.
Command  RFC       Description
ABOR               Abort an active file transfer.
ACCT               Account information.
ADAT     RFC 2228  Authentication/Security Data
ALLO               Allocate sufficient disk space to receive a file.
APPE               Append.
AUTH     RFC 2228  Authentication/Security Mechanism
CCC      RFC 2228  Clear Command Channel
CDUP               Change to Parent Directory.
CONF     RFC 2228  Confidentiality Protection Command
CWD                Change working directory.
DELE               Delete file.
ENC      RFC 2228  Privacy Protected Channel
EPRT     RFC 2428  Specifies an extended address and port to which the server should connect.
EPSV     RFC 2428  Enter extended passive mode.
FEAT     RFC 2389  Get the feature list implemented by the server.
LANG     RFC 2640  Language Negotiation
LIST               Returns information of a file or directory if specified, else information of the current working directory is returned.
LPRT     RFC 1639  Specifies a long address and port to which the server should connect.
LPSV     RFC 1639  Enter long passive mode.
MDTM     RFC 3659  Return the last-modified time of a specified file.
MIC      RFC 2228  Integrity Protected Command
MKD                Make directory.
MLSD     RFC 3659  Lists the contents of a directory if a directory is named.
MLST     RFC 3659  Provides data about exactly the object named on its command line, and no others.
MODE               Sets the transfer mode (Stream, Block, or Compressed).
NLST               Returns a list of file names in a specified directory.
NOOP               No operation (dummy packet; used mostly on keepalives).
OPTS     RFC 2389  Select options for a feature.
PASS               Authentication password.
PASV               Enter passive mode.
PBSZ     RFC 2228  Protection Buffer Size
PORT               Specifies an address and port to which the server should connect.
PROT     RFC 2228  Data Channel Protection Level.
PWD                Print working directory. Returns the current directory of the host.
QUIT               Disconnect.
REIN               Re-initializes the connection.
REST               Restart transfer from the specified point.
RETR               Transfer a copy of the file.
RMD                Remove a directory.
RNFR               Rename from.
RNTO               Rename to.
SITE               Sends site-specific commands to the remote server.
SIZE     RFC 3659  Return the size of a file.
SMNT               Mount file structure.
STAT               Returns the current status.
STOR               Accept the data and store the data as a file at the server site.
STOU               Store file uniquely.
STRU               Set file transfer structure.
SYST               Return system type.
TYPE               Sets the transfer type (ASCII/Binary).
USER               Authentication username.
FTP reply codes
Below is a summary of the reply codes that may be returned by an FTP server. These codes have
been standardized in RFC 959 by the IETF. The reply code is a three-digit value.
The first digit of the reply code is used to indicate one of three possible outcomes, 1) success, 2)
failure, and 3) error or incomplete:
 2xx - Success reply
 4xx or 5xx - Failure reply
 1xx or 3xx - Error or Incomplete reply
The second digit defines the kind of error:
 x0z - Syntax - These replies refer to syntax errors.
 x1z - Information - Replies to requests for information.
 x2z - Connections - Replies referring to the control and data connections.
 x3z - Authentication and accounting - Replies for the login process and accounting procedures.
 x4z - Not defined.
 x5z - File system - These replies relay status codes from the server file system.
The third digit of the reply code is used to provide additional detail for each of the categories
defined by the second digit.
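A small helper that applies the digit rules above to a raw reply code (illustrative only):

def describe_reply(code):
    outcome = {"1": "error or incomplete", "2": "success", "3": "error or incomplete",
               "4": "failure", "5": "failure"}[code[0]]
    kind = {"0": "syntax", "1": "information", "2": "connections",
            "3": "authentication and accounting", "4": "not defined",
            "5": "file system"}[code[1]]
    return code + ": " + outcome + " (" + kind + ")"

print(describe_reply("200"))   # 200: success (syntax)
print(describe_reply("530"))   # 530: failure (authentication and accounting)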
UNIT – III
SECTION – A
1. MAC stands for Media Access Control
2. Devices that forward data units based on network addresses are called routers.
3. CSMA/CD stands for Carrier Sense Multiple Access / Collision Detection
4. A common connection point for devices in a network is called HUB
5. The Serial Line Internet Protocol (SLIP) is an encapsulation of the Internet Protocol
designed to work over serial ports and modem connections.
6. Link Control Protocol (LCP) forms part of the Point-to-Point Protocol.
7. The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring
LAN using fiber-optic cable.
8. FDDI's primary fault-tolerant feature is the dual ring.
9. ALOHAnet, also known as the ALOHA System
10. Token Ring was a 4 Mbps Local Area Networking technology created by IBM to connect their terminals to IBM mainframes.
SECTION – B
1. Explain about LLC
LLC: Logic Link Control (IEEE 802.2)
Logic Link Control (LLC) is the IEEE 802.2 LAN protocol that specifies an implementation of
the LLC sublayer of the data link layer. IEEE 802.2 LLC is used in IEEE802.3 (Ethernet) and
IEEE802.5 (Token Ring) LANs to perform these functions:
1. Managing the data-link communication
2. Link Addressing
3. Defining Service Access Points (SAPs)
4. Sequencing
The LLC provides a way for the upper layers to deal with any type of MAC layer (e.g. Ethernet IEEE 802.3 CSMA/CD or Token Ring IEEE 802.5 Token Passing).
LLC originated from the High-Level Data-Link Control (HDLC) protocol and uses a subclass of
the HDLC specification. LLC defines three types of operation for data communication:
 Type 1: Connectionless. The connectionless operation is basically sending with no guarantee of receiving.
 Type 2: Connection Oriented. The connection-oriented operation for the LLC layer provides these 4 services: connection establishment; confirmation and acknowledgement that data has been received; error recovery by requesting received bad data to be resent; sliding windows (modulus: 128), which is a method of increasing the rate of data transfer.
 Type 3: Acknowledged connectionless service.
The Type 1 connectionless service of LLC specifies a static-frame format and allows network
protocols to run on it. Network protocols that fully implement a transport layer will generally use
Type 1 service.
The Type 2 connection-oriented service of LLC provides reliable data transfer. It is used in LAN
environments that do not invoke network and transport layer protocols.
Protocol Structure - LLC: Logic Link Control (IEEE 802.2)
LLC Header:
DSAP (8 bits) | SSAP (8 bits) | Control (8 or 16 bits) | LLC information (variable)
DSAP - The destination service access point structure is as follows:
bit 1: I/G; bits 2-8: Address bits
I/G: Individual/group address may be: 0 Individual DSAP; 1 Group DSAP.
SSAP - The source service access point structure is as follows:
bit 1: C/R; bits 2-8: Address bits
C/R: Command/response: 0 Command; 1 Response.
Control - The structure of the control field (bits 1-8 and 9-16) is as follows:
Information frame: bit 1 = 0; bits 2-8 = N(S); bit 9 = P/F; bits 10-16 = N(R)
Supervisory frame: bits 1-2 = 1 0; bits 3-4 = SS; bits 5-8 = XXXX; bit 9 = P/F; bits 10-16 = N(R)
Unnumbered frame: bits 1-2 = 1 1; bits 3-4 = MM; bit 5 = P/F; bits 6-8 = MMM
N(S)  Transmitter send sequence number.
N(R)  Transmitter receive sequence number.
P/F   Poll/final bit. Command LLC PDU transmission / response LLC PDU transmission.
S     Supervisory function bits: 00 RR (receive ready); 01 REJ (reject); 10 RNR (receive not ready).
X     Reserved and set to zero.
M     Modifier function bits.
2. Explain about MAC
MAC Address
In computer networking, the Media Access Control (MAC) address is every bit as important as
an IP address.
The MAC address is a unique value associated with a network adapter. MAC addresses are also
known as hardware address or physical addresses. They uniquely identify an adapter on a LAN.
MAC addresses are 12-digit hexadecimal numbers (48 bits in length). By convention, MAC
addresses are usually written in one of the following two formats:
MM:MM:MM:SS:SS:SS
MM-MM-MM-SS-SS-SS
The first half of a MAC address contains the ID number of the adapter manufacturer. These IDs
are regulated by an Internet standards body. The second half of a MAC address
represents the serial number assigned to the adapter by the manufacturer. In the example 00:A0:C9:14:C8:29, the prefix 00A0C9 indicates the manufacturer is Intel Corporation.
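A short sketch that splits a MAC address into its manufacturer and serial halves (the Intel prefix shown is the one quoted above):

def split_mac(mac):
    octets = mac.replace("-", ":").split(":")
    manufacturer = "".join(octets[:3]).upper()   # first 24 bits: adapter maker's ID
    serial = "".join(octets[3:]).upper()         # last 24 bits: maker-assigned serial number
    return manufacturer, serial

print(split_mac("00:A0:C9:14:C8:29"))   # ('00A0C9', '14C829')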
3.Explain about CSMA/CD
CSMA/CD
Short for Carrier Sense Multiple Access / Collision Detection, a set of rules determining how
network devices respond when two devices attempt to use a data channel simultaneously
(called a collision). Standard Ethernet networks use CSMA/CD to physically monitor the traffic
on the line at participating stations. If no transmission is taking place at the time, the particular
station can transmit. If two stations attempt to transmit simultaneously, this causes a collision,
which is detected by all participating stations. After a random time interval, the stations that
collided attempt to transmit again. If another collision occurs, the time intervals from which the
random waiting time is selected are increased step by step. This is known as exponential back
off.
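The step-by-step widening of the random waiting interval can be sketched as follows. The slot time shown is the classical 51.2 microsecond value for 10 Mbps Ethernet, and the cap of 10 doublings follows the usual description of binary exponential backoff; both are given only for illustration.

import random

SLOT_TIME = 51.2e-6   # seconds; classical 10 Mbps Ethernet slot time (illustrative)

def backoff_delay(collision_count):
    # pick a random slot count from an interval that doubles after each
    # collision, capped at 2**10 slots
    k = min(collision_count, 10)
    slots = random.randint(0, 2 ** k - 1)
    return slots * SLOT_TIME

for attempt in range(1, 5):
    print(attempt, backoff_delay(attempt))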
CSMA/CD is a type of contention protocol. Networks using the CSMA/CD procedure are
simple to implement but do not have deterministic transmission characteristics. The CSMA/CD
method is internationally standardized in IEEE 802.3 and ISO 8802.3.
Bridges, Routers, and Switches:
Data can be routed through an internetwork using the following three types of
information:
• The physical address of the destination device, found at the data link layer.
Devices that forward messages based on physical addresses generally are called
bridges.
• The address of the destination network, found at the network layer. Devices that
use network addresses to forward messages usually are called routers, although
the original name, still commonly used in the TCP/IP world, is gateway.
• The circuit that has been established for a particular connection. Devices that
route messages based on assigned circuits are called switches.
4. Write short notes on bridges and routers.
Bridges:
Figure 61 illustrates the protocol stack model for bridging in terms of the OSI Reference Model.
Bridges build and maintain a database that lists known addresses of devices and how to reach
those devices. When it receives a frame, the bridge consults its database to determine which of
its connections should be used to forward the frame.
A bridge must implement both the physical and data link layers of the protocol stack. Bridges are
fairly simple devices. They receive frames from one connection and forward them to another
connection known to be en route to the destination. When more than one route is possible,
bridges ordinarily can’t determine which route is most efficient. In fact, when multiple routes are
available, bridging can result in frames simply travelling in circles. Having multiple paths
available on the network is desirable, however, so that a failure of one path does not stop the
network. With Ethernet, a technique called the spanning-tree algorithm enables bridged
networks to contain redundant paths.
Token Ring uses a different approach to bridging. When a device needs to send to another
device, it goes through a discovery process to determine a route to the destination. The routing
information is stored in each frame transmitted and is used by bridges to forward the frames to
the appropriate networks. Although this actually is a data link layer function, the technique
Token Ring uses is called source routing.
The bridge must implement two protocol stacks, one for each connection. Theoretically, these
stacks could belong to different protocols, enabling a bridge to connect different types of
networks. However, each type of network, such as Ethernet and Token Ring, has its own
protocols at the data link layer. Translating data from the data link layer of an Ethernet to the
data link layer of a Token Ring is difficult, but not impossible. Bridges, which operate at the data
link layer, therefore, generally can join only networks of the same type. You see bridges
employed most often in networks that are all Ethernet or all Token Ring. A few bridges have
been marketed that can bridge networks that have different data link layers.
Routers:
Figure 62 illustrates the protocol stack model for routing in terms of the OSI Reference Model.
A different method of path determination can be employed using data found at the network layer.
At that layer, networks are identified by logical network identifiers. This information can be used
to build a picture of the network. This picture can be used to improve the efficiency of the paths
that are chosen. Devices that forward data units based on network addresses are called routers.
With TCP/IP, routing is a function of the internet layer. By convention, the network on which the
data unit originates counts as one hop. Each time a data unit crosses a router, the hop count
increases by one.
Figure 63 illustrates Hop-count routing.
A wide variety of paths could be identified between A and F:
• A-E-F (4 hops)
• A-E-D-F (5 hops)
• A-E-C-F (5 hops)
• A-B-C-F (5 hops)
By this method, A-E-F is the most efficient route. This assumes that all of the paths between the
routers provide the same rate of service. A simple hop-count algorithm would be misleading if
A-D and D-E were 1.5 Mbps lines while A-E was a 56 Kbps line. Apart from such extreme
cases, however, hop-count routing is a definite improvement over no routing planning at all.
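Hop-count route selection amounts to a shortest-path search in which every link costs the same. The sketch below runs a breadth-first search over a link list inferred from the routes given above (the link list itself is only an assumption made for illustration); counting the routers on the chosen path plus one for the originating network reproduces the hop counts listed above.

from collections import deque

# links inferred from the example routes above (illustrative only)
LINKS = {
    "A": ["B", "E"], "B": ["A", "C"], "C": ["B", "E", "F"],
    "D": ["E", "F"], "E": ["A", "C", "D", "F"], "F": ["C", "D", "E"],
}

def fewest_hops(source, destination):
    # breadth-first search: the first path that reaches the destination
    # uses the fewest links
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for neighbour in LINKS[path[-1]]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])

path = fewest_hops("A", "F")
print(path, len(path) + 1, "hops")   # ['A', 'E', 'F'] 4 hops, matching the count above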
Routing operates at the network layer. By the time data reach that layer, all evidence of the
physical network has been shorn away. Both protocol stacks in the router can share a common
network layer protocol. The network layer does not know or care if the network is Ethernet or
Token Ring. Therefore, each stack can support different data link and physical layers.
Consequently, routers possess a capability, fairly rare in bridges, to forward traffic between
dissimilar types of networks. Owing to that capability, routers often are used to connect LANs
to WANs.
Building routers around the same protocol stack as are used on the end-nodes is possible. TCP/IP
networks can use routers based on the same IP protocol employed at the workstation. However,
it is not required that routers and end-nodes use the same routing protocol. Because network
layers need not communicate with upper-layer protocols, different protocols may be used in
routers than are used in the end-nodes. Commercial routers employ proprietary network layer
protocols to perform routing. These custom protocols are among the keys to the improved
routing performance provided by the best routers.
5. Explain about hub
Hub
A common connection point for devices in a network. Hubs are commonly used to connect
segments of a LAN. A hub contains multiple ports. When a packet arrives at one port, it is
copied to the other ports so that all segments of the LAN can see all packets.
A passive hub serves simply as a conduit for the data, enabling it to go from one device (or
segment) to another. So-called intelligent hubs include additional features that enable an
administrator to monitor the traffic passing through the hub and to configure each port in the
hub. Intelligent hubs are also called manageable hubs.
A third type of hub, called a switching hub, actually reads the destination address of each packet
and then forwards the packet to the correct port.
6.Give details about gateway
Gateway (telecommunications)
Juniper SRX210 service gateway
In telecommunications, the term gateway has the following meaning:
 In a communications network, a network node equipped for interfacing with another network that uses different protocols.
o A gateway may contain devices such as protocol translators, impedance matching devices, rate converters, fault isolators, or signal translators as necessary to provide system interoperability. It also requires the establishment of mutually acceptable administrative procedures between both networks.
o A protocol translation/mapping gateway interconnects networks with different network protocol technologies by performing the required protocol conversions.
 Loosely, a computer or computer program configured to perform the tasks of a gateway. For a specific case, see default gateway.
Gateways, also called protocol converters, can operate at any network layer. The activities of a
gateway are more complex than that of the router or switch as it communicates using more than
one protocol.
7.What is meant by x.25 ?
X.25
X.25 network diagram.
X.25 is an ITU-T standard protocol suite for packet switched wide area network (WAN)
communication. An X.25 WAN consists of packet-switching exchange (PSE) nodes as the
networking hardware, and leased lines, Plain old telephone service connections or ISDN
connections as physical links. X.25 is a family of protocols that was popular during the 1980s
with telecommunications companies and in financial transaction systems such as automated
teller machines. X.25 was originally defined by the International Telegraph and Telephone
Consultative Committee (CCITT, now ITU-T) in a series of drafts[1] and finalized in a
publication known as The Orange Book in 1976.[2]
While X.25 has been, to a large extent, replaced by less complex protocols, especially the
Internet protocol (IP), the service is still used and available in niche and legacy applications.
8.Explain about SLIP
Serial Line Internet Protocol
The Serial Line Internet Protocol (SLIP) is an encapsulation of the Internet Protocol
designed to work over serial ports and modem connections. It is documented in RFC 1055. On
personal computers, SLIP has been largely replaced by the Point-to-Point Protocol (PPP),
which is better engineered, has more features and does not require its IP address configuration to
be set before it is established. On microcontrollers, however, SLIP is still the preferred way of
encapsulating IP packets due to its very small overhead.
SLIP modifies a standard TCP/IP datagram by appending a special "SLIP END" character to
it, which distinguishes datagram boundaries in the byte stream. SLIP requires a serial port
configuration of 8 data bits, no parity, and either EIA hardware flow control, or CLOCAL
mode (3-wire null-modem) UART operation settings.
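The framing itself is tiny. The sketch below follows the escaping described in RFC 1055 (END = 0xC0, ESC = 0xDB, with ESC_END = 0xDC and ESC_ESC = 0xDD substituted for any END or ESC bytes inside the datagram):

END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(datagram):
    # escape END/ESC bytes inside the datagram, then append the END character
    # so the receiver can find the real frame boundary
    out = bytearray()
    for byte in datagram:
        if byte == END:
            out += bytes([ESC, ESC_END])
        elif byte == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(byte)
    out.append(END)
    return bytes(out)

print(slip_encode(b"\x45\x00\xc0\xdb").hex())   # 4500dbdcdbddc0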
SLIP does not provide error detection, being reliant on upper layer protocols for this. Therefore
SLIP on its own is not satisfactory over an error-prone dial-up connection. It is however still
useful for testing operating systems' response capabilities under load (by looking at flood-ping
statistics).
SLIP is also currently used in the BlueCore Serial Protocol for communication between
Bluetooth modules and host computers.[1]
9. Explain about Link Control Protocol (LCP)
Link Control Protocol
In computing, the Link Control Protocol (LCP) forms part of the Point-to-Point Protocol. In
setting up PPP communications, both the sending and receiving devices send out LCP packets to
determine the standards of the ensuing data transmission. LCP is logically a Transport Layer
protocol according to the OSI model, however it occurs as part of the Data link layer in terms
of the actual implemented Network stack.
The LCP protocol:
 checks the identity of the linked device and either accepts or rejects the peer device
 determines the acceptable packet size for transmission
 searches for errors in configuration
 can terminate the link if requirements exceed the parameters
Devices cannot use PPP to transmit data over a network until the LCP packet determines the
acceptability of the link, but LCP packets are embedded into PPP packets and therefore a basic
PPP connection has to be established before LCP can reconfigure it. The LCP over PPP packets
have control code 0xC021 and their info field contains the LCP packet, which has four fields
(Code, ID, Length and Data).
 Code: Operation requested: configure link, terminate link, ... and acknowledge and deny codes.
 Data: Parameters for the operation
LCP was created to overcome the noisy and unreliable physical lines of the time (e.g. phone lines
on dial up modems), in a way which didn't lock the PPP protocol into specific proprietary vendor
protocols and physical transmission media.
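As a rough sketch of how those four fields line up on the wire (assuming the usual big-endian layout from RFC 1661: one byte Code, one byte Identifier, two bytes Length, then Data), the Python fragment below unpacks an LCP packet. It is illustrative only, not a full LCP implementation.

    import struct

    def parse_lcp(packet: bytes):
        """Split an LCP packet into Code, ID, Length and Data (RFC 1661 layout)."""
        code, ident, length = struct.unpack("!BBH", packet[:4])
        data = packet[4:length]          # Length covers the whole LCP packet
        return code, ident, length, data

    # Example: a Configure-Request (Code 1) with no options.
    code, ident, length, data = parse_lcp(bytes([1, 0x17, 0x00, 0x04]))
    print(code, ident, length, data)     # -> 1 23 4 b''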
10. Explain about SONET
SONET
Short for Synchronous Optical Network, a standard for connecting fiber-optic transmission
systems. SONET was proposed by Bellcore in the middle 1980s and is now an ANSI standard.
SONET defines interface standards at the physical layer of the OSI seven-layer model. The
standard defines a hierarchy of interface rates that allow data streams at different rates to be
multiplexed. SONET establishes Optical Carrier (OC) levels from 51.8 Mbps (OC-1) to 9.95
Gbps (OC-192). Prior rate standards used by different countries specified rates that were not
compatible for multiplexing. With the implementation of SONET, communication carriers
throughout the world can interconnect their existing digital carrier and fiber optic systems.
The international equivalent of SONET, standardized by the ITU, is called SDH.
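Because each OC-n rate is simply n times the OC-1 base rate of 51.84 Mbps, the rate hierarchy described above can be tabulated with a few lines of Python; this is just an arithmetic sketch of the rate ladder.

    OC1_MBPS = 51.84   # SONET base rate (STS-1 / OC-1)

    def oc_rate(n: int) -> float:
        """Line rate of OC-n in Mbps: n multiples of the OC-1 base rate."""
        return n * OC1_MBPS

    for n in (1, 3, 12, 48, 192):
        print(f"OC-{n}: {oc_rate(n):.2f} Mbps")
    # OC-1: 51.84, OC-3: 155.52, OC-12: 622.08, OC-48: 2488.32, OC-192: 9953.28 Mbps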
Synchronous optical networking
Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH) are
standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber
using lasers or highly coherent light from light-emitting diodes (LEDs). At low transmission
rates data can also be transferred via an electrical interface. The method was developed to
replace the Plesiochronous Digital Hierarchy (PDH) system for transporting large amounts of
telephone calls and data traffic over the same fiber without synchronization problems. SONET generic criteria are detailed in Telcordia Technologies Generic Requirements document GR-253-CORE.[1] Generic criteria applicable to SONET and other transmission systems (e.g., asynchronous fiber optic systems or digital radio systems) are found in Telcordia GR-499-CORE.[2]
SONET and SDH, which are essentially the same, were originally designed to transport circuit
mode communications (e.g., DS1, DS3) from a variety of different sources, but they were
primarily designed to support real-time, uncompressed, circuit-switched voice encoded in PCM
format.[3] The primary difficulty in doing this prior to SONET/SDH was that the synchronization
sources of these various circuits were different. This meant that each circuit was actually
operating at a slightly different rate and with different phase. SONET/SDH allowed for the
simultaneous transport of many different circuits of differing origin within a single framing
protocol. SONET/SDH is not itself a communications protocol per se, but a transport protocol.
Due to SONET/SDH's essential protocol neutrality and transport-oriented features, SONET/SDH
was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames. It
quickly evolved mapping structures and concatenated payload containers to transport ATM
connections. In other words, for ATM (and eventually other protocols such as Ethernet), the
internal complex structure previously used to transport circuit-oriented connections was removed
and replaced with a large and concatenated frame (such as OC-3c) into which ATM cells, IP
packets, or Ethernet frames are placed.
Both SDH and SONET are widely used today: SONET in the United States and Canada, and
SDH in the rest of the world. Although the SONET standards were developed before SDH, it is
considered a variation of SDH because of SDH's greater worldwide market penetration.
The SDH standard was originally defined by the European Telecommunications Standards
Institute (ETSI), and is formalized as International Telecommunications Union (ITU)
standards G.707,[4] G.783,[5] G.784,[6] and G.803.[7][8] The SONET standard was defined by
Telcordia[1] and American National Standards Institute (ANSI) standard T1.105.[9][8]
SECTION – C
1.Explain about IEEE standards
IEEE 802.1AE
802.1AE is the IEEE MAC Security standard (also known as MACsec) which defines
connectionless data confidentiality and integrity for media access independent protocols. It is
standardized by the IEEE 802.1 working group.
Key management and the establishment of secure associations is outside the scope of 802.1AE,
but is specified by 802.1X-2010.
The 802.1AE standard specifies the implementation of a MAC Security Entity (SecY) that can be thought of as part of the stations attached to the same LAN, providing secure MAC service to the client. The standard defines:
- MACsec frame format, which is similar to the Ethernet frame, but includes additional fields:
  o Security Tag, which is an extension of the EtherType
  o Message authentication code (ICV)
- Secure Connectivity Associations that represent groups of stations connected via unidirectional Secure Channels
- Security Associations within each secure channel. Each association uses its own key (SAK). More than one association is permitted within the channel for the purpose of key change without traffic interruption (the standard requires devices to support at least two).
- Default cipher suite (Galois/Counter Mode of the Advanced Encryption Standard cipher with a 128-bit key)
  o Galois/Counter Mode of the Advanced Encryption Standard cipher with a 256-bit key is being added to the standard.
The Security Tag inside each frame, in addition to the EtherType, includes:
- association number within the channel
- packet number, to provide a unique initialization vector for encryption and authentication algorithms as well as protection against replay attacks
- optional LAN-wide secure channel identifier (not required on point-to-point links)
The IEEE 802.1AE (MACsec) standard specifies a set of protocols to meet the security requirements for protecting data traversing Ethernet LANs. It assures correct network operation by identifying unauthorized actions on a LAN and preventing communication from the stations responsible for them.
MACsec allows unauthorised LAN connections to be identified and excluded from
communication within the network. In common with IPsec and SSL, MACsec defines a security
infrastructure to provide data confidentiality, data integrity and data origin authentication. By
assuring that a frame comes from the station that claimed to send it, MACsec can mitigate
attacks on Layer 2 protocols.
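To give a feel for the default cipher suite (AES in Galois/Counter Mode), the sketch below uses the third-party Python cryptography package to protect a frame payload with AES-128-GCM, authenticating a header as associated data. This only illustrates the cipher itself; it is not the actual MACsec frame processing, tag layout, or key agreement (which 802.1X-2010 handles), and the addresses shown are hypothetical.

    # pip install cryptography  -- illustrative use of AES-128-GCM, the MACsec default cipher
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)   # a SAK would normally come from 802.1X key agreement
    aead = AESGCM(key)

    header = b"\x01\x00\x5e\x00\x00\x01" + b"\x00\x11\x22\x33\x44\x55"  # hypothetical DA + SA
    payload = b"user data"
    nonce = os.urandom(12)                      # per-packet IV (MACsec derives this from the packet number)

    ciphertext = aead.encrypt(nonce, payload, header)   # ciphertext ends with the 16-byte tag (ICV role)
    assert aead.decrypt(nonce, ciphertext, header) == payload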
IEEE 802.1ag
IEEE 802.1ag (aka "CFM") IEEE Standard for Local and Metropolitan Area Networks Virtual
Bridged Local Area Networks Amendment 5: Connectivity Fault Management is a standard
defined by IEEE. It defines protocols and practices for OAM (Operations, Administration, and
Maintenance) for paths through 802.1 bridges and local area networks (LANs). It is an
amendment to IEEE 802.1Q-2005 and was approved in 2007.[1]
IEEE 802.1ag is largely identical with ITU-T Recommendation Y.1731, which additionally
addresses performance management.[2]
The standard:
- Defines maintenance domains, their constituent maintenance points, and the managed objects required to create and administer them
- Defines the relationship between maintenance domains and the services offered by VLAN-aware bridges and provider bridges
- Describes the protocols and procedures used by maintenance points to maintain and diagnose connectivity faults within a maintenance domain
- Provides means for future expansion of the capabilities of maintenance points and their protocols
Definitions
The document defines various terms:
Maintenance Domain
Maintenance Domains (MDs) are management space on a network, typically owned and
operated by a single entity. MDs are configured with Names and Levels, where the eight
levels range from 0 to 7. A hierarchal relationship exists between domains based on
levels. The larger the domain, the higher the level value. Recommended values of levels
are as follows:
- Customer Domain: Largest (e.g., 7)
- Provider Domain: In between (e.g., 3)
- Operator Domain: Smallest (e.g., 1)
Maintenance Association
Defined as a "set of MEPs, all of which are configured with the same MAID (Maintenance
Association Identifier) and MD Level, each of which is configured with a MEPID unique within
that MAID and MD Level, and all of which are configured with the complete list of MEPIDs.”
Maintenance End Point
Points at the edge of a domain that define the boundary of that domain. A MEP sends and receives CFM frames through the relay function and drops all CFM frames of its level or lower that come from the wire side.
Maintenance Intermediate Point
Points internal to a domain, not at the boundary. CFM frames received from MEPs and other MIPs are cataloged and forwarded; all CFM frames at a lower level are stopped and dropped. MIPs are passive points that respond only when triggered by CFM Link Trace and Loop-back messages.
CFM Protocols
IEEE 802.1ag Ethernet CFM (Connectivity Fault Management) protocols comprise three
protocols that work together to help administrators debug Ethernet networks. They are:
Continuity Check Protocol
"Heart beat" messages for CFM. The Continuity Check Message provides a means to detect
connectivity failures in an MA. CCMs are multicast messages. CCMs are confined to a domain
(MD). These messages are unidirectional and do not solicit a response. Each MEP transmits a
periodic multicast Continuity Check Message inward towards the other MEPs.
Link Trace
Link Trace messages, otherwise known as MAC Trace Route, are multicast frames that a MEP transmits to track the path (hop-by-hop) to a destination MEP, similar in concept to User
Datagram Protocol (UDP) Trace Route. Each receiving MEP sends a Trace route Reply directly
to the Originating MEP, and regenerates the Trace Route Message.
Loop-back
Loop-back messages, otherwise known as MAC ping, are unicast frames that a MEP transmits; they are similar in concept to Internet Control Message Protocol (ICMP) Echo (Ping) messages. Sending Loopback messages to successive MIPs can determine the location of a fault. Sending a
high volume of Loopback Messages can test bandwidth, reliability, or jitter of a service, which is
similar to flood ping. A MEP can send a Loopback to any MEP or MIP in the service. Unlike
CCMs, Loop back messages are administratively initiated and stopped.
IEEE 802.6
IEEE 802.6 is a standard governed by the ANSI for Metropolitan Area Networks (MAN). It is
an improvement of an older standard (also created by ANSI) which used the Fiber distributed
data interface (FDDI) network structure. The FDDI-based standard failed due to its expensive
implementation and lack of compatibility with current LAN standards. The IEEE 802.6 standard
uses the Distributed Queue Dual Bus (DQDB) network form. This form supports 150 Mbit/s
transfer rates. It consists of two unconnected unidirectional buses. DQDB is rated for a
maximum of 160 km before significant signal degradation over fiberoptic cable with an optical
wavelength of 1310 nm.
This standard has also failed, mostly for the same reasons that the FDDI standard failed. Most
MANs now use Synchronous Optical Network (SONET) or Asynchronous Transfer Mode
(ATM) network designs, with recent designs using native Ethernet or MPLS.
2. Explain about Ethernet
Ethernet
Ethernet /ˈiːθərnɛt/ is a family of computer networking technologies for local area
networks (LANs) commercially introduced in 1980. Standardized in IEEE 802.3, Ethernet has
largely replaced competing wired LAN technologies.
Systems communicating over Ethernet divide a stream of data into individual packets called
frames. Each frame contains source and destination addresses and error-checking data so that
damaged data can be detected and re-transmitted.
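A rough Python sketch of that idea is shown below: it assembles a minimal Ethernet II-style frame (destination MAC, source MAC, EtherType, payload) and appends a CRC-32 check value so the receiver can detect damage. The field layout is the standard Ethernet II one, but the CRC handling is simplified (real hardware deals with minimum frame sizes and exact bit ordering of the FCS).

    import struct, zlib

    def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
        """Assemble a simplified Ethernet II frame with a trailing CRC-32 check value."""
        header = dst + src + struct.pack("!H", ethertype)
        body = header + payload
        fcs = zlib.crc32(body) & 0xFFFFFFFF      # error-checking data appended to the frame
        return body + struct.pack("<I", fcs)

    def frame_ok(frame: bytes) -> bool:
        """Receiver side: recompute the CRC and compare with the transmitted value."""
        body, fcs = frame[:-4], struct.unpack("<I", frame[-4:])[0]
        return (zlib.crc32(body) & 0xFFFFFFFF) == fcs

    f = build_frame(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55", 0x0800, b"hello")
    corrupted = f[:14] + b"X" + f[15:]           # damage one payload byte in transit
    assert frame_ok(f) and not frame_ok(corrupted)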
The standards define several wiring and signaling variants. The original 10BASE5 Ethernet used
coaxial cable as a shared medium. Later the coaxial cables were replaced by twisted pair and
fiber optic links in conjunction with hubs or switches. Data rates were periodically increased
from the original 10 megabits per second to 100 gigabits per second.
Since its commercial release, Ethernet has retained a good degree of compatibility. Features such
as the 48-bit MAC address and Ethernet frame format have influenced other networking
protocols.
History
Ethernet was developed at Xerox PARC between 1973 and 1974.[1][2] It was inspired by
ALOHAnet, which Robert Metcalfe had studied as part of his PhD dissertation.[3] The idea was
first documented in a memo that Metcalfe wrote on May 22, 1972.[1][4] In 1975, Xerox filed a
patent application listing Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as
inventors.[5] In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a
seminal paper.[6][note 1]
Metcalfe left Xerox in June 1979 to form 3Com.[1] He convinced Digital Equipment
Corporation (DEC), Intel, and Xerox to work together to promote Ethernet as a standard. The
so-called "DIX" standard, for "Digital/Intel/Xerox" specified 10 Mbit/s Ethernet, with 48-bit
destination and source addresses and a global 16-bit Ethertype-type field. It was published on
September 30, 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical
Layer Specifications".[8] Version 2 was published in November, 1982 and defines what has
become known as Ethernet II. Formal standardization efforts proceeded at the same time.
Ethernet initially competed with two largely proprietary systems, Token Ring and Token Bus.
Because Ethernet was able to adapt to market realities and shift to inexpensive and ubiquitous
twisted pair wiring, these proprietary protocols soon found themselves competing in a market
inundated by Ethernet products and by the end of the 1980s, Ethernet was clearly the dominant
network technology.[1] In the process, 3Com became a major company. 3Com shipped its first
10 Mbit/s Ethernet 3C100 transceiver in March 1981, and that year started selling adapters for
PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers.[9]
This was followed quickly by DEC's Unibus to Ethernet adapter, which DEC sold and used
internally to build its own corporate network, which reached over 10,000 nodes by 1986, making
it one of the largest computer networks in the world at that time.[10]
Since then Ethernet technology has evolved to meet new bandwidth and market requirements.[11]
In addition to computers, Ethernet is now used to interconnect appliances and other personal
devices.[1] It is used in industrial applications and is quickly replacing legacy data transmission
systems in the world's telecommunications networks.[12] By 2010, the market for Ethernet
equipment amounted to over $16 billion per year.[13]
Standardization
Notwithstanding its technical merits, timely standardization was instrumental to the success of
Ethernet. It required well-coordinated and partly competitive activities in several standardization
bodies such as the IEEE, ECMA, IEC, and finally ISO.
In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project
802 to standardize local area networks (LAN).[14]
The "DIX-group" with Gary Robinson (DEC), Phil Arst (Intel), and Bob Printis (Xerox)
submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN
specification.[8] Since IEEE membership is open to all professionals, including students, the
group received countless comments on this technology.
In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and
henceforward supported by General Motors) were also considered as candidates for a LAN
standard. Due to the goal of IEEE 802 to forward only one standard and due to the strong
company support for all three designs, the necessary agreement on a LAN standard was
significantly delayed.
In the Ethernet camp, it put at risk the market introduction of the Xerox Star workstation and
3Com's Ethernet LAN products. With such business implications in mind, David Liddle
(General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal
of Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office
communication market, including Siemens' support for the international standardization of
Ethernet (April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved
broader support for Ethernet beyond IEEE by the establishment of a competing Task Group
"Local Networks" within the European standards body ECMA TC24. As early as March 1982
ECMA TC24 with its corporate members reached agreement on a standard for CSMA/CD based
on the IEEE 802 draft. The speedy action taken by ECMA decisively contributed to the
conciliation of opinions within IEEE and approval of IEEE 802.3 CSMA/CD by the end of 1982.
IEEE published the 802.3 standard as a draft in 1983[15] and as a standard in 1985.
Approval of Ethernet on the international level was achieved by a similar, cross-partisan action
with Fromm as liaison officer working to integrate International Electrotechnical
Commission, TC83 and International Organization for Standardization (ISO) TC97SC6, and
the ISO/IEEE 802/3 standard was approved in 1984.
3.Explain about token bus and token ring
Token bus network
Token passing in a Token bus network
Token bus is a network implementing the token ring protocol over a "virtual ring" on a coaxial
cable. A token is passed around the network nodes and only the node possessing the token may
transmit. If a node doesn't have anything to send, the token is passed on to the next node on the
virtual ring. Each node must know the address of its neighbour in the ring, so a special protocol
is needed to notify the other nodes of connections to, and disconnections from, the ring.
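The round-robin behaviour can be sketched in a few lines of Python: a token circulates over a list of node addresses (the "virtual ring"), and only the current holder may transmit, passing the token on when it has nothing to send. This is a toy model of the access method only, with hypothetical node names; it ignores ring maintenance, priorities and timing.

    from collections import deque

    def token_bus_round(ring, pending):
        """One full circulation of the token around the virtual ring.

        ring    -- node addresses in token-passing order (hypothetical names)
        pending -- dict mapping node -> queue of frames waiting to be sent
        """
        sent = []
        for node in ring:                       # the token visits each node in turn
            queue = pending.get(node)
            if queue:                           # only the token holder may transmit one frame
                sent.append((node, queue.popleft()))
            # otherwise the token is simply passed to the next node
        return sent

    ring = ["A", "B", "C", "D"]
    pending = {"A": deque(["frame1"]), "C": deque(["frame2", "frame3"])}
    print(token_bus_round(ring, pending))   # [('A', 'frame1'), ('C', 'frame2')]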
Token bus was standardized by IEEE standard 802.4. It is mainly used for industrial
applications. Token bus was used by GM (General Motors) for their Manufacturing
Automation Protocol (MAP) standardization effort. This is an application of the concepts used
in token ring networks. The main difference is that the endpoints of the bus do not meet to form
a physical ring. The IEEE 802.4 Working Group has since been disbanded. To guarantee bounded packet delay and reliable transmission in the Token bus protocol, a modified Token bus was proposed for Manufacturing Automation Systems and flexible manufacturing systems (FMS).
Token Bus
Token Bus was a 4 Mbps Local Area Networking technology created by IBM to connect their
terminals to IBM mainframes. Token bus utilized a copper coaxial cable to connect multiple end
stations (terminals, workstations, shared printers etc.) to the mainframe. The coaxial cable served
as a common communication bus and a token was created by the Token Bus protocol to manage
or 'arbitrate' access to the bus. Any station that holds the token packet has permission to transmit
data. The station releases the token when it is done communicating or when a higher priority
device needs to transmit (such as the mainframe). This keeps two or more devices from
transmitting information on the bus at the same time and accidentally destroying the transmitted
data.
Token Bus suffered from two limitations. Any failure in the bus caused all the devices beyond
the failure to be unable to communicate with the rest of the network. Second, adding more
stations to the bus was somewhat difficult. Any new station that was improperly attached was
unlikely to be able to communicate and all devices beyond it were also affected. Thus, token bus
networks were seen as somewhat unreliable and difficult to expand and upgrade.
Token Ring
Token Ring was created by IBM to compete with what became known as the DIX Standard of
Ethernet (DEC/Intel/Xerox) and to improve upon their previous Token Bus technology. Up until
that time, IBM had produced solutions that started from the mainframe and ran all the way to the
desktop (or dumb terminal), allowing them to extend their SNA protocol from the AS400's all
the way down to the end user. Mainframes were so expensive that many large corporations that
purchased a mainframe as far back as 30-40 years ago are still using these mainframe devices, so
Token Ring is still out there and you will encounter it. Token Ring is also still in use where high
reliability and redundancy are important--such as in large military craft.
Token Ring comes in standard 4 and 16 Mbps versions and high-speed Token Ring at 100 Mbps (IEEE 802.5t) and 1 Gbps (IEEE 802.5v). Many mainframes (and until recently, ALL IBM mainframes)
used a Front End Processor (FEP) with either a Line Interface Coupler (LIC) at 56kbps, or a
Token-ring Interface Coupler (TIC) at 16 Mbps. Cisco still produces FEP cards for their routers
(as of 2004).
Token Ring uses a ring based topology and passes a token around the network to control access
to the network wiring. This token passing scheme makes conflicts in accessing the wire unlikely
and therefore total throughput is as high as typical Ethernet and Fast Ethernet networks. The
Token Ring protocol also provides features for allowing delay-sensitive traffic to share the
network with other data, which is key to a mainframe's operation. This feature is not available in
any other LAN protocol, except Asynchronous Transfer Mode (ATM).
Token Ring does come with a higher price tag because token ring hardware is more complex and
more expensive to manufacture. As a network technology, token ring is passing out of use
because it has a maximum speed of 16 Mbps which is slow by today's gigabit Ethernet standards.
4.Explain about WAN
WIDE AREA NETWORKING (WAN)
Frame Relay
Sometimes referred to as Fast packet, it is designed for modern networks which do not need lots
of error recovery (unlike packet switching). Typical Frame relay connections range from 56Kbps
to 2Mbps. Frame relay is similar to packet switching X.25, but is more streamlined giving higher
performance and greater efficiency.
Frame relay, like X.25, implements multiple virtual circuits over a single connection, but does so
using statistical multiplexing techniques which yields a much more flexible and efficient use of
the available bandwidth. FR includes a cyclic redundancy check (CRC) for detecting corrupted
data, but does not include any mechanism for correcting corrupted data.
In addition, because many higher level protocols include their own flow control algorithms, FR
implements a simple congestion notification mechanism to notify the user when the network is
nearing saturation.
Frame Format
In the FR frame format, flags define a frame's start and end. The address field is 16 bits long, 10 of which comprise the actual circuit ID (the Data Link Connection Identifier). The DLCI identifies the logical connection that is multiplexed into the physical channel. Three bits of the address field are allocated to congestion control.
FR also supports multi-casting, the ability to send to more than one destination simultaneously.
Four reserved DLCI values (1019 to 1022) are designated as multicast groups.
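For illustration, the Python sketch below unpacks the common two-octet Q.922 address used by Frame Relay, extracting the 10-bit DLCI together with the FECN, BECN and DE congestion/discard bits. The bit positions assumed are those of the usual two-octet format; extended three- and four-octet addresses are ignored.

    def parse_q922_address(b1: int, b2: int):
        """Extract DLCI and congestion bits from a two-octet Frame Relay address."""
        dlci = ((b1 >> 2) << 4) | (b2 >> 4)     # 6 high-order bits + 4 low-order bits = 10-bit DLCI
        fecn = (b2 >> 3) & 1                    # forward explicit congestion notification
        becn = (b2 >> 2) & 1                    # backward explicit congestion notification
        de   = (b2 >> 1) & 1                    # discard eligibility
        return dlci, fecn, becn, de

    # Example: DLCI 16 with no congestion indications set.
    print(parse_q922_address(0x04, 0x01))       # -> (16, 0, 0, 0)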
Advantages: low incremental cost per connection (PVC); exploits recent advances in network technology; supports multicasting.
Disadvantages: relatively high initial cost.
Common usage: interconnecting many remote LANs.
Asynchronous Transfer Mode (ATM)
ATM breaks data into small chunks of fixed size cells (48 bytes of data plus a 5 byte overhead).
ATM is designed for handling large amounts of data across long distances using a high speed
backbone approach. Rather than allocating a dedicated virtual circuit for the duration of each
call, data is assembled into small packets and statistically multiplexed according to their traffic
characteristics.
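The fixed 48 + 5 byte cell structure can be sketched as follows in Python. The 5-byte header here is a placeholder rather than a real ATM header with VPI/VCI and HEC fields; the point is only to show how a larger payload is cut into equal-sized cells.

    CELL_PAYLOAD = 48    # bytes of data per ATM cell
    HEADER_LEN = 5       # bytes of overhead per cell

    def segment_into_cells(data: bytes, header: bytes = b"\x00" * HEADER_LEN):
        """Chop a byte stream into 53-byte cells: 5-byte header + 48-byte payload (padded)."""
        cells = []
        for i in range(0, len(data), CELL_PAYLOAD):
            chunk = data[i:i + CELL_PAYLOAD]
            chunk += b"\x00" * (CELL_PAYLOAD - len(chunk))   # pad the final cell
            cells.append(header + chunk)
        return cells

    cells = segment_into_cells(b"x" * 100)
    print(len(cells), len(cells[0]))   # 3 cells of 53 bytes each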
One problem with other protocols which implement virtual connections is that some time slots
are wasted if no data is being transmitted. ATM avoids this by dynamically allocating bandwidth
for traffic on demand. This means greater utilization of bandwidth and better capacity to handle
heavy load situations.
When an ATM connection is requested, details concerning the connection are specified which allow decisions to be made about the route and the handling of the data. Typical details are the type of traffic [video requires higher priority], the destination, peak and average bandwidth requirements [which the network can use to estimate resources and cost structures], a cost factor [which allows the network to choose a route that fits within the cost structure] and other parameters.
Typical ATM interface rates are 155 Mbps and 622 Mbps.
Digital Subscriber Line (xDSL)
xDSL is a high speed solution that allows megabit bandwidth from tele-communications to
customers over existing copper cable, namely, the installed telephone pair to the customers
premises (called the local loop). With the high penetration and existing infrastructure of copper
cable to virtually everyone's home (for providing a voice telephone connection), xDSL offers
significant increases in connection speed and data transfers for access to information.
In many cases, the cost of laying fiber optic cable to subscriber premises is prohibitive. As
access to the Internet and associated applications like multi-media, tele-conferencing and on
demand video become pervasive, the speed of the local loop (from the subscriber to the
telephone company) is now a limiting factor. Technology during the 1980s and most of the 1990s relied on the use of the analog modem, with connection rates up to 56 Kbps, which
is too slow for most applications except simple email.
xDSL is a number of different technologies that provide megabit speeds over the local loop,
without the use of amplifiers or repeaters. This technology works over non-loaded local loops
(loaded coils were added by telephone companies on some copper cable pairs to improve voice
quality). xDSL coexists with existing voice service over the same cable pair; the subscriber is still able to use their telephone at the same time. This technology is referred to as seamless.
To implement xDSL, a terminating device is required at each end of the cable, which accepts the
digital data and converts it to analogue signals for transmission over the copper cable. In this
respect, it is very similar to modem technology.
xDSL provides for both symmetric and asymmetric configurations.
Asymmetric: bandwidth is higher in one direction; suitable for Web browsing.
Symmetric: bandwidth is the same in both directions; suitable for video-conferencing.
Variations of xDSL
There are currently six variations of xDSL:
- DSL (Digital Subscriber Line): 2 x 64 Kbps circuit switched plus 1 x 16 Kbps packet switched (similar to ISDN-BRI)
- HDSL (High-bit-rate DSL): 2.048 Mbps over two pairs at a distance of up to 4.2 km
- S-HDSL/SDSL (Single-pair or Symmetric High-bit-rate DSL): 768 Kbps over a single pair
- ADSL (Asymmetric DSL): up to 6 Mbps in one direction
- RADSL (Rate Adaptive DSL): an extension of ADSL which supports a variety of data rates depending upon the quality of the local loop
- VDSL (Very High-bit-rate asymmetric DSL): up to 52 Mbps in one direction and 2 Mbps in the other direction
5.Explain about Point-to-Point Protocol (PPP)
The Point-to-Point Protocol (PPP) provides a standard method for transporting multi-protocol
datagrams over point-to-point links. PPP comprises three main components:
1. A method for encapsulating multi-protocol datagrams.
2. A Link Control Protocol (LCP) for establishing, configuring, and testing the data-link
connection.
3. A family of Network Control Protocols (NCPs) for establishing and configuring
different network-layer protocols.
In networking, the Point-to-Point Protocol (PPP) is a data link protocol commonly used in
establishing a direct connection between two networking nodes. It can provide connection
authentication, transmission encryption , and compression.
PPP is used over many types of physical networks including serial cable, phone line, trunk
line, cellular telephone, specialized radio links, and fiber optic links such as SONET. PPP is
also used over Internet access connections (now marketed as "broadband"). Internet service
providers (ISPs) have used PPP for customer dial-up access to the Internet, since IP packets
cannot be transmitted over a modem line on their own, without some data link protocol. Two
encapsulated forms of PPP, Point-to-Point Protocol over Ethernet (PPPoE) and Point-to-Point Protocol over ATM (PPPoA), are used most commonly by Internet Service Providers
(ISPs) to establish a Digital Subscriber Line (DSL) Internet service connection with customers.
PPP is commonly used as a data link layer protocol for connection over synchronous and
asynchronous circuits, where it has largely superseded the older Serial Line Internet Protocol
(SLIP) and telephone company mandated standards (such as Link Access Protocol, Balanced
(LAPB) in the X.25 protocol suite). PPP was designed to work with numerous network layer
protocols, including Internet Protocol (IP), TRILL, Novell's Internetwork Packet Exchange
(IPX), NBF and AppleTalk.
Description
PPP and the TCP/IP protocol stack:
- Application: FTP, SMTP, HTTP, DNS, ...
- Transport: TCP, UDP
- Internet: IP, IPv6
- Network access: PPP (PPPoE, PPPoA) over Ethernet or ATM
PPP was designed somewhat after the original HDLC specifications. The designers of PPP
included many additional features that had been seen only in proprietary data-link protocols up to
that time.
RFC 2516 describes Point-to-Point Protocol over Ethernet (PPPoE) as a method for
transmitting PPP over Ethernet that is sometimes used with DSL. RFC 2364 describes Point-to-Point Protocol over ATM (PPPoA) as a method for transmitting PPP over ATM Adaptation
Layer 5 (AAL5), which is also a common alternative to PPPoE used with DSL.
PPP is specified in RFC 1661.
Automatic self configuration
Link Control Protocol (LCP) initiates and terminates connections gracefully, allowing hosts to
negotiate connection options. It is an integral part of PPP, and is defined in the same standard
specification. LCP provides automatic configuration of the interfaces at each end (such as setting
datagram size, escaped characters, and magic numbers) and for selecting optional
authentication. The LCP protocol runs on top of PPP (with PPP protocol number 0xC021) and
therefore a basic PPP connection has to be established before LCP is able to configure it.
RFC 1994 describes Challenge-handshake authentication protocol (CHAP), which is
preferred for establishing dial-up connections with ISPs. Although deprecated, Password
authentication protocol (PAP) is still sometimes used.
Another option for authentication over PPP is Extensible Authentication Protocol (EAP)
described in RFC 2284.
After the link has been established, additional network (layer 3) configuration may take place.
Most commonly, the Internet Protocol Control Protocol (IPCP) is used, although
Internetwork Packet Exchange Control Protocol (IPXCP) and AppleTalk Control Protocol
(ATCP) were once very popular.[citation needed] Internet Protocol Version 6 Control Protocol
(IPv6CP) will see extended use in the future, when IPv6 replaces IPv4's position as the
dominant layer-3 protocol.
Multiple network layer protocols
PPP architecture:
- Link control: LCP
- Authentication: CHAP, PAP, EAP
- Network control: IPCP (carrying IP)
- PPP encapsulation: HDLC-like framing, PPPoE, PPPoA
- Underlying media: Ethernet, ATM, POS, RS-232, SONET/SDH
PPP permits multiple network layer protocols to operate on the same communication link. For
every network layer protocol used, a separate Network Control Protocol (NCP) is provided in
order to encapsulate and negotiate options for the multiple network layer protocols. It negotiates
network-layer information, e.g. network address or compression options, after the connection has
been established.
For example, Internet Protocol (IP) uses the IP Control Protocol (IPCP), and Internetwork Packet Exchange (IPX) uses the Novell IPX Control Protocol (IPXCP). NCPs include fields
containing standardized codes to indicate the network layer protocol type that the PPP
connection encapsulates.
Looped link detection
PPP detects looped links using a feature involving magic numbers. When the node sends PPP
LCP messages, these messages may include a magic number. If a line is looped, the node
receives an LCP message with its own magic number, instead of getting a message with the
peer's magic number.
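A toy version of that check is shown below in Python: each end picks a random magic number for its LCP Configure-Request, and a node that sees its own number come back can suspect a looped line. This is only the comparison logic, not real LCP negotiation.

    import secrets

    local_magic = secrets.randbits(32)           # advertised in our LCP Configure-Request

    def looks_looped(received_magic: int) -> bool:
        """A received magic number equal to our own suggests the link is looped back."""
        return received_magic == local_magic

    print(looks_looped(local_magic))             # True  -> probably a looped line
    print(looks_looped(secrets.randbits(32)))    # almost certainly False -> a real peer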
PPP Configuration Options
The previous section introduced the use of LCP options to meet specific WAN connection
requirements. PPP may include the following LCP options:
- Authentication - Peer routers exchange authentication messages. Two authentication choices are Password Authentication Protocol (PAP) and Challenge Handshake Authentication Protocol (CHAP). Authentication is explained in the next section.
- Compression - Increases the effective throughput on PPP connections by reducing the amount of data in the frame that must travel across the link. The protocol decompresses the frame at its destination. See RFC 1962 for more details.
- Error detection - Identifies fault conditions. The Quality and Magic Number options help ensure a reliable, loop-free data link. The Magic Number field helps in detecting links that are in a looped-back condition. Until the Magic-Number Configuration Option has been successfully negotiated, the Magic-Number must be transmitted as zero. Magic numbers are generated randomly at each end of the connection.
- Multilink - Provides load balancing over several interfaces used by PPP through Multilink PPP (see below).
PPP frame
Structure of a PPP frame (Name - Number of bytes - Description):
- Protocol: 1 or 2 - setting of protocol in data field
- Information: variable (0 or more) - datagram
- Padding: variable (0 or more) - optional padding
The Protocol field indicates the type of payload packet (e.g. LCP, NCP, IP, IPX, AppleTalk,
etc.).
The Information field contains the PPP payload; it has a variable length with a negotiated
maximum called the Maximum Transmission Unit. By default, the maximum is 1500 octets. It
might be padded on transmission; if the information for a particular protocol can be padded, that
protocol must allow information to be distinguished from padding.
Encapsulation
PPP frames are encapsulated in a lower-layer protocol that provides framing and may provide
other functions such as a checksum to detect transmission errors. PPP on serial links is usually
encapsulated in a framing similar to HDLC, described by IETF RFC 1662.
Name - Number of bytes - Description:
- Flag: 1 - indicates frame's begin or end
- Address: 1 - broadcast address
- Control: 1 - control byte
- Protocol: 1 or 2 - setting of protocol in information field
- Information: variable (0 or more) - datagram
- Padding: variable (0 or more) - optional padding
- FCS: 2 (or 4) - error check
The Flag field is present when PPP with HDLC-like framing is used.
The Address and Control fields always have the value hex FF (for "all stations") and hex 03 (for
"unnumbered information"), and can be omitted whenever PPP LCP Address-and-Control-FieldCompression (ACFC) is negotiated.
The Frame Check Sequence (FCS) field is used for determining whether an individual frame
has an error. It contains a checksum computed over the frame to provide basic protection against
errors in transmission. This is a CRC code similar to the one used for other layer two protocol
error protection schemes such as the one used in Ethernet. According to RFC 1662, it can be either 16 bits (2 bytes) or 32 bits (4 bytes) in size (the default is 16 bits, using the polynomial x^16 + x^12 + x^5 + 1).
The FCS is calculated over the Address, Control, Protocol, Information and Padding fields after the message has been escaped.
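A minimal sketch of the 16-bit FCS computation from RFC 1662 is given below in Python, using the bit-reversed polynomial 0x8408 for x^16 + x^12 + x^5 + 1, the initial value 0xFFFF and a final one's complement. It is intended only to show the calculation over the frame contents; the example bytes are hypothetical.

    def ppp_fcs16(data: bytes) -> int:
        """RFC 1662 FCS-16 over the Address, Control, Protocol, Information and Padding fields."""
        fcs = 0xFFFF
        for byte in data:
            fcs ^= byte
            for _ in range(8):
                fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
        return fcs ^ 0xFFFF                      # one's complement of the result

    # Example: FCS over a (hypothetical) Address + Control + Protocol + LCP payload.
    frame_body = bytes([0xFF, 0x03, 0xC0, 0x21]) + b"\x01\x01\x00\x04"
    print(hex(ppp_fcs16(frame_body)))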
PPP line activation and phases
The phases of the Point-to-Point Protocol according to RFC 1661 are listed below:
- Link Dead. This phase occurs when the link fails, or one side has been told not to connect (e.g. a user has finished his or her dialup connection).
- Link Establishment Phase. This phase is where Link Control Protocol negotiation is attempted. If successful, control goes either to the authentication phase or the Network-Layer Protocol phase, depending on whether authentication is desired.
- Authentication Phase. This phase is optional. It allows the sides to authenticate each other before a connection is established. If successful, control goes to the network-layer protocol phase.
- Network-Layer Protocol Phase. This phase is where each desired protocol's Network Control Protocol is invoked. For example, IPCP is used in establishing IP service over the line. Data transport for all protocols which are successfully started with their network control protocols also occurs in this phase. Closing down of network protocols also occurs in this phase.
- Link Termination Phase. This phase closes down the connection. This can happen if there is an authentication failure, if there are so many checksum errors that the two parties decide to tear down the link automatically, if the link suddenly fails, or if the user decides to hang up the connection.
PPP over several links
Multilink PPP
Multilink PPP (also referred to as MLPPP, MP, MPPP, MLP, or Multilink) provides a method
for spreading traffic across multiple distinct PPP connections. It is defined in RFC 1990. It can
be used, for example, to connect a home computer to an Internet Service Provider using two
traditional 56k modems, or to connect a company through two leased lines.
On a single PPP line frames cannot arrive out of order, but this is possible when the frames are
divided among multiple PPP connections. Therefore Multilink PPP must number the fragments
so they can be put in the right order again when they arrive.
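A crude sketch of that reordering is shown below in Python: fragments carrying sequence numbers are collected from several links and replayed in numeric order before reassembly. Real Multilink PPP also tracks begin/end flags and lost fragments, which this toy version ignores.

    def reorder_fragments(fragments):
        """fragments: iterable of (sequence_number, payload) collected from multiple PPP links."""
        return [payload for _, payload in sorted(fragments, key=lambda f: f[0])]

    # Fragments arriving out of order over two member links.
    arrived = [(2, b"lo"), (0, b"He"), (3, b"!"), (1, b"l")]
    print(b"".join(reorder_fragments(arrived)))   # b'Hello!' reassembled in sequence order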
Multilink PPP is an example of a link aggregation technology. Cisco IOS Release 11.1 and later
supports Multilink PPP.
Multiclass PPP
With PPP, one cannot establish several simultaneous distinct PPP connections over a single link, and that is not possible with Multilink PPP either. Multilink PPP uses contiguous numbers for all the fragments of a packet, and as a consequence it is not possible to suspend the sending of a sequence of fragments of one packet in order to send another packet. This prevents Multilink PPP from being run multiple times over the same links.
Multiclass PPP is a kind of Multilink PPP where each "class" of traffic uses a separate sequence
number space and reassembly buffer. Multiclass PPP is defined in RFC 2686.
PPP and tunnels
Simplified OSI protocol stack for an example SSH+PPP tunnel:
- Tunneled traffic - Application: FTP, SMTP, HTTP, DNS, ...; Transport: TCP, UDP; Network: IP; Data Link: PPP
- Carrier connection - Application: SSH; Transport: TCP; Network: IP; Data Link: Ethernet, ATM, ...; Physical: cables, NICs, and so on
PPP as a layer 2 protocol between both ends of a tunnel
Many protocols can be used to tunnel data over IP networks. Some of them, like SSL, SSH, or L2TP, create virtual network interfaces and give the impression of a direct physical connection between the tunnel endpoints. On a Linux host, for example, these interfaces would be called tun0.
As there are only two endpoints on a tunnel, the tunnel is a point-to-point connection and PPP is
a natural choice as a data link layer protocol between the virtual network interfaces. PPP can
assign IP addresses to these virtual interfaces, and these IP addresses can be used, for example,
to route between the networks on both sides of the tunnel.
IPsec in tunneling mode does not create virtual physical interfaces at the end of the tunnel, since
the tunnel is handled directly by the TCP/IP stack. L2TP can be used to provide these interfaces,
this technique is called L2TP/IPsec. In this case too, PPP provides IP addresses to the extremities
of the tunnel.
6. Explain about Fiber Distributed Data Interface (FDDI)
Fiber Distributed Data Interface (FDDI)
Background
The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring LAN
using fiber-optic cable. FDDI is frequently used as high-speed backbone technology because of
its support for high bandwidth and greater distances than copper. It should be noted that
relatively recently, a related copper specification, called Copper Distributed Data Interface
(CDDI) has emerged to provide 100-Mbps service over copper. CDDI is the implementation of
FDDI protocols over twisted-pair copper wire. This chapter focuses mainly on FDDI
specifications and operations, but it also provides a high-level overview of CDDI.
FDDI uses a dual-ring architecture with traffic on each ring flowing in opposite directions (called
counter-rotating). The dual-rings consist of a primary and a secondary ring. During normal
operation, the primary ring is used for data transmission, and the secondary ring remains idle.
The primary purpose of the dual rings, as will be discussed in detail later in this chapter, is to
provide superior reliability and robustness. Figure 8-1 shows the counter-rotating primary and
secondary FDDI rings.
Figure 8-1: FDDI uses counter-rotating primary and secondary rings.
Standards
FDDI was developed by the American National Standards Institute (ANSI) X3T9.5 standards
committee in the mid-1980s. At the time, high-speed engineering workstations were beginning to
tax the bandwidth of existing local area networks (LANs) based on Ethernet and Token Ring. A new LAN medium was needed that could easily support these workstations and their new
distributed applications. At the same time, network reliability had become an increasingly
important issue as system managers migrated mission-critical applications from large computers
to networks. FDDI was developed to fill these needs. After completing the FDDI specification,
ANSI submitted FDDI to the International Organization for Standardization (ISO), which created
an international version of FDDI that is completely compatible with the ANSI standard version.
FDDI Transmission Media
FDDI uses optical fiber as the primary transmission medium, but it also can run over copper
cabling. As mentioned earlier, FDDI over copper is referred to as Copper-Distributed Data
Interface (CDDI). Optical fiber has several advantages over copper media. In particular, security,
reliability, and performance all are enhanced with optical fiber media because fiber does not emit
electrical signals. A physical medium that does emit electrical signals (copper) can be tapped and
therefore would permit unauthorized access to the data that is transiting the medium. In addition,
fiber is immune to electrical interference from radio frequency interference (RFI) and
electromagnetic interference (EMI). Fiber historically has supported much higher bandwidth
(throughput potential) than copper, although recent technological advances have made copper
capable of transmitting at 100 Mbps. Finally, FDDI allows two kilometers between stations
using multi-mode fiber, and even longer distances using a single mode.
FDDI defines two types of optical fiber: single-mode and multi-mode. A mode is a ray of light
that enters the fiber at a particular angle. Multi-mode fiber uses LED as the light-generating
devices, while single-mode fiber generally uses lasers.
Multi-mode fiber allows multiple modes of light to propagate through the fiber. Because these
modes of light enter the fiber at different angles, they will arrive at the end of the fiber at
different times. This characteristic is known as modal dispersion. Modal dispersion limits the
bandwidth and distances that can be accomplished using multi-mode fibers. For this reason,
multi-mode fiber is generally used for connectivity within a building or within a relatively
geographically contained environment.
Single-mode fiber allows only one mode of light to propagate through the fiber. Because only a
single mode of light is used, modal dispersion is not present with single-mode fiber. Therefore,
single-mode is capable of delivering considerably higher performance connectivity and over
much larger distances, which is why it generally is used for connectivity between buildings and
within environments that are more geographically dispersed.
Figure 8-2 depicts single-mode fiber using a laser light source and multi-mode fiber using a
light-emitting diode (LED) light source.
Figure 8-2: Light sources differ for single-mode and multi-mode fibers.
FDDI Specifications
FDDI specifies the physical and media-access portions of the OSI reference model. FDDI is not
actually a single specification, but it is a collection of four separate specifications each with a
specific function. Combined, these specifications have the capability to provide high-speed
connectivity between upper-layer protocols such as TCP/IP and IPX, and media such as fiber-optic cabling.
FDDI's four specifications are the Media Access Control (MAC), Physical Layer Protocol
(PHY), Physical-Medium Dependent (PMD), and Station Management (SMT). The MAC
specification defines how the medium is accessed, including frame format, token handling,
addressing, algorithms for calculating cyclic redundancy check (CRC) value, and error-recovery
mechanisms. The PHY specification defines data encoding/decoding procedures, clocking
requirements, and framing, among other functions. The PMD specification defines the
characteristics of the transmission medium, including fiber-optic links, power levels, bit-error
rates, optical components, and connectors. The SMT specification defines FDDI station
configuration, ring configuration, and ring control features, including station insertion and
removal, initialization, fault isolation and recovery, scheduling, and statistics collection.
FDDI is similar to IEEE 802.3 Ethernet and IEEE 802.5 Token Ring in its relationship with the
OSI model. Its primary purpose is to provide connectivity between upper OSI layers of common
protocols and the media used to connect network devices. Figure 8-3 illustrates the four FDDI
specifications and their relationship to each other and to the IEEE-defined Logical-Link Control
(LLC) sublayer. The LLC sublayer is a component of Layer 2, the MAC layer, of the OSI
reference model.
Figure 8-3: FDDI specifications map to the OSI hierarchical model.
FDDI Station-Attachment Types
One of the unique characteristics of FDDI is that multiple ways actually exist by which to
connect FDDI devices. FDDI defines three types of devices: single-attachment station (SAS),
dual-attachment station (DAS), and a concentrator.
An SAS attaches to only one ring (the primary) through a concentrator. One of the primary
advantages of connecting devices with SAS attachments is that the devices will not have any
effect on the FDDI ring if they are disconnected or powered off. Concentrators will be discussed
in more detail in the following discussion.
Each FDDI DAS has two ports, designated A and B. These ports connect the DAS to the dual
FDDI ring. Therefore, each port provides a connection for both the primary and the secondary
ring. As you will see in the next section, devices using DAS connections will affect the ring if
they are disconnected or powered off. Figure 8-4 shows FDDI DAS A and B ports with
attachments to the primary and secondary rings.
Figure 8-4: FDDI DAS ports attach to the primary and secondary rings.
An FDDI concentrator (also called a dual-attachment concentrator [DAC]) is the building block
of an FDDI network. It attaches directly to both the primary and secondary rings and ensures that
the failure or power-down of any SAS does not bring down the ring. This is particularly useful
when PCs, or similar devices that are frequently powered on and off, connect to the ring. Figure
8-5 shows the ring attachments of an FDDI SAS, DAS, and concentrator.
Figure 8-5: A concentrator attaches to both the primary and secondary rings.
FDDI Fault Tolerance
FDDI provides a number of fault-tolerant features. In particular, FDDI's dual-ring environment,
the implementation of the optical bypass switch, and dual-homing support make FDDI a resilient
media technology.
Dual Ring
FDDI's primary fault-tolerant feature is the dual ring. If a station on the dual ring fails or is
powered down, or if the cable is damaged, the dual ring is automatically wrapped (doubled back
onto itself) into a single ring. When the ring is wrapped, the dual-ring topology becomes a
single-ring topology. Data continues to be transmitted on the FDDI ring without performance
impact during the wrap condition. Figure 8-6 and Figure 8-7 illustrate the effect of a ring
wrapping in FDDI.
Figure 8-6: A ring recovers from a station failure by wrapping.
Figure 8-7: A ring also wraps to withstand a cable failure.
When a single station fails, as shown in Figure 8-6, devices on either side of the failed (or
powered down) station wrap, forming a single ring. Network operation continues for the
remaining stations on the ring. When a cable failure occurs, as shown in Figure 8-7, devices on
either side of the cable fault wrap. Network operation continues for all stations.
It should be noted that FDDI truly provides fault-tolerance against a single failure only. When
two or more failures occur, the FDDI ring segments into two or more independent rings that are
unable to communicate with each other.
Optical Bypass Switch
An optical bypass switch provides continuous dual-ring operation if a device on the dual ring
fails. This is used both to prevent ring segmentation and to eliminate failed stations from the
ring. The optical bypass switch performs this function through the use of optical mirrors that pass
light from the ring directly to the DAS device during normal operation. In the event of a failure
of the DAS device, such as a power-off, the optical bypass switch will pass the light through
itself by using internal mirrors and thereby maintain the ring's integrity. The benefit of this
capability is that the ring will not enter a wrapped condition in the event of a device failure.
Figure 8-8 shows the functionality of an optical bypass switch in an FDDI network.
Figure 8-8: The optical bypass switch uses internal mirrors to maintain a network.
Dual Homing
Critical devices, such as routers or mainframe hosts, can use a fault-tolerant technique called
dual homing to provide additional redundancy and to help guarantee operation. In dual-homing
situations, the critical device is attached to two concentrators. Figure 8-9 shows a dual-homed
configuration for devices such as file servers and routers.
Figure 8-9: A dual-homed configuration guarantees operation.
One pair of concentrator links is declared the active link; the other pair is declared passive. The
passive link stays in back-up mode until the primary link (or the concentrator to which it is
attached) is determined to have failed. When this occurs, the passive link automatically activates.
FDDI Frame Format
The FDDI frame format is similar to the format of a Token Ring frame. This is one of the areas
where FDDI borrows heavily from earlier LAN technologies, such as Token Ring. FDDI frames
can be as large as 4,500 bytes. Figure 8-10 shows the frame format of an FDDI data frame and
token.
Figure 8-10: The FDDI frame is similar to that of a Token Ring frame.
FDDI Frame Fields
The following descriptions summarize the FDDI data frame and token fields illustrated in Figure 8-10.
- Preamble---A unique sequence that prepares each station for an upcoming frame.
- Start Delimiter---Indicates the beginning of a frame by employing a signaling pattern that differentiates it from the rest of the frame.
- Frame Control---Indicates the size of the address fields and whether the frame contains asynchronous or synchronous data, among other control information.
- Destination Address---Contains a unicast (singular), multicast (group), or broadcast (every station) address. As with Ethernet and Token Ring addresses, FDDI destination addresses are 6 bytes long.
- Source Address---Identifies the single station that sent the frame. As with Ethernet and Token Ring addresses, FDDI source addresses are 6 bytes long.
- Data---Contains either information destined for an upper-layer protocol or control information.
- Frame Check Sequence (FCS)---Filled by the source station with a calculated cyclic redundancy check value dependent on frame contents (as with Token Ring and Ethernet). The destination station recalculates the value to determine whether the frame was damaged in transit. If so, the frame is discarded.
- End Delimiter---Contains unique symbols, which cannot be data symbols, that indicate the end of the frame.
- Frame Status---Allows the source station to determine whether an error occurred and whether the frame was recognized and copied by a receiving station.
Copper Distributed Data Interface (CDDI)
Copper Distributed Data Interface (CDDI) is the implementation of FDDI protocols over
twisted-pair copper wire. Like FDDI, CDDI provides data rates of 100 Mbps and uses a dual-ring architecture to provide redundancy. CDDI supports distances of about 100 meters from
desktop to concentrator.
CDDI is defined by the ANSI X3T9.5 Committee. The CDDI standard is officially named the
Twisted-Pair Physical Medium Dependent (TP-PMD) standard. It is also referred to as the
Twisted-Pair Distributed Data Interface (TP-DDI), consistent with the term Fiber-Distributed
Data Interface (FDDI). CDDI is consistent with the physical and media-access control layers
defined by the ANSI standard.
The ANSI standard recognizes only two types of cables for CDDI: shielded twisted pair (STP)
and unshielded twisted pair (UTP). STP cabling has a 150-ohm impedance and adheres to
EIA/TIA 568 (IBM Type 1) specifications. UTP is data-grade cabling (Category 5) consisting of
four unshielded pairs using tight-pair twists and specially developed insulating polymers in
plastic jackets adhering to EIA/TIA 568B specifications.
Figure 8-11 illustrates the CDDI TP-PMD specification in relation to the remaining FDDI
specifications.
Figure 8-11: CDDI TP-PMD and FDDI specifications adhere to different standards.
Fiber Distributed Data Interface
Fiber Distributed Data Interface (FDDI) provides a 100 Mbit/s optical standard for data
transmission in a local area network that can extend in range up to 200 kilometers (120 mi).
Although FDDI logical topology is a ring-based token network, it does not use the IEEE 802.5
token ring protocol as its basis; instead, its protocol is derived from the IEEE 802.4 token bus
timed token protocol. In addition to covering large geographical areas, FDDI local area networks
can support thousands of users. As a standard underlying medium it uses optical fiber, although
it can use copper cable, in which case it may be referred to as CDDI (Copper Distributed Data
Interface). FDDI offers both a Dual-Attached Station (DAS), counter-rotating token ring
topology and a Single-Attached Station (SAS), token bus passing ring topology.
FDDI was considered an attractive campus backbone technology in the early to mid 1990s since
existing Ethernet networks only offered 10 Mbit/s transfer speeds and Token Ring networks only
offered 4 Mbit/s or 16 Mbit/s speeds. Thus it was the preferred choice of that era for a high-speed backbone, but FDDI has since been effectively made obsolete by Fast Ethernet, which offered
the same 100 Mbit/s speeds, but at a much lower cost and, since 1998, by Gigabit Ethernet due
to its speed, and even lower cost, and ubiquity.
FDDI, as a product of American National Standards Institute X3T9.5 (now X3T12), conforms
to the Open Systems Interconnection (OSI) model of functional layering of LANs using other
protocols. FDDI-II, a version of FDDI, adds the capability to add circuit-switched service to the
network so that it can also handle voice and video signals. Work has started to connect FDDI
networks to the developing Synchronous Optical Network SONET.
A FDDI network contains two rings, one as a secondary backup in case the primary ring fails.
The primary ring offers up to 100 Mbit/s capacity. When a network has no requirement for the
secondary ring to do backup, it can also carry data, extending capacity to 200 Mbit/s. The single
ring can extend the maximum distance; a dual ring can extend 100 km (62 mi). FDDI has a
larger maximum-frame size (4,352 bytes) than standard 100 Mbit/s Ethernet which only
supports a maximum-frame size of 1,500 bytes, allowing better throughput.
Designers normally construct FDDI rings in the form of a "dual ring of trees" (see network
topology). A small number of devices (typically infrastructure devices such as routers and
concentrators rather than host computers) connect to both rings - hence the term "dual-attached".
Host computers then connect as single-attached devices to the routers or concentrators. The dual
ring in its most degenerate form simply collapses into a single device. Typically, a computer room contains the whole dual ring, although some implementations have deployed FDDI as a
Metropolitan area network.
7. Explain about ALOHA
ALOHA
Aloha, also called the Aloha method, refers to a simple communications scheme in which each source (transmitter) in a network sends data whenever there is a frame to send. If the frame successfully reaches the destination (receiver), the next frame is sent. If the frame fails to be received at the destination, it is sent again. This protocol was originally developed at the University of Hawaii for use with satellite communication.
ALOHAnet
ALOHAnet, also known as the ALOHA System,[1][2] or simply ALOHA, was a pioneering
computer networking system[3] developed at the University of Hawaii. ALOHAnet became
operational in June, 1971, providing the first public demonstration of a wireless packet data
network.[4]
The ALOHAnet used a new method of medium access (ALOHA random access) and
experimental UHF frequencies for its operation, since frequency assignments for
communications to and from a computer were not available for commercial applications in the
1970s. But even before such frequencies were assigned there were two other media available for
the application of an ALOHA channel – cables and satellites. In the 1970s ALOHA random
access was employed in the widely used Ethernet cable based network[5] and then in the
Marisat (now Inmarsat) satellite network.[6]
In the early 1980s frequencies for mobile networks became available, and in 1985 frequencies
suitable for what became known as Wi-Fi were allocated in the US. These regulatory
developments made it possible to use the ALOHA random access techniques in both Wi-Fi and
in mobile telephone networks.
ALOHA channels were used in a limited way in the 1980s in 1G mobile phones for signaling
and control purposes.[7] In the 1990s, Matti Makkonen and others at Telecom Finland greatly
expanded the use of ALOHA channels in order to implement SMS message texting in 2G mobile
phones. In the early 2000s additional ALOHA channels were added to 2.5G and 3G mobile
phones with the widespread introduction of GPRS, using a slotted ALOHA random access
channel combined with a version of the Reservation ALOHA scheme first analyzed by a group at
BBN.[8]
Figure: Comparison of Pure ALOHA and Slotted ALOHA throughput versus offered traffic load.
Assuming frame arrivals form a Poisson process with an offered load of G frames per frame-time, the average number of transmission attempts in 2 consecutive frame-times is 2G. Hence, for any pair of consecutive frame-times, the probability of there being k transmission attempts during those two frame-times is:

    Prob(k) = ((2G)^k / k!) * e^(-2G)

Therefore, the probability (Prob_pure) of there being zero other transmission attempts between t-T and t+T (and thus of a successful transmission) is:

    Prob_pure = e^(-2G)

The throughput can be calculated as the rate of transmission attempts multiplied by the probability of success, and so we can conclude that the throughput (S_pure) is:

    S_pure = G * e^(-2G)
The maximum throughput is 0.5/e frames per frame-time (reached when G = 0.5), which is
approximately 0.184 frames per frame-time. This means that, in Pure ALOHA, only about 18.4%
of the time is used for successful transmissions.
Slotted ALOHA
Slotted ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which are in the
same slots.
An improvement to the original ALOHA protocol was "Slotted ALOHA", which introduced
discrete timeslots and increased the maximum throughput.[10] A station can send only at the
beginning of a timeslot, and thus collisions are reduced. In this case, we only need to worry
about the transmission-attempts within 1 frame-time and not 2 consecutive frame-times, since
collisions can only occur during each timeslot. Thus, the probability of there being zero
transmission-attempts in a single timeslot is:
    Prob_slotted = e^(-G)

The probability that a given frame requires exactly k transmission attempts (k-1 collisions followed by one success) is:

    Prob_slotted(k) = e^(-G) * (1 - e^(-G))^(k-1)

The throughput is:

    S_slotted = G * e^(-G)
The maximum throughput is 1/e frames per frame-time (reached when G = 1), which is
approximately 0.368 frames per frame-time, or 36.8%.
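These closed-form results are easy to check numerically. The short Python sketch below (illustrative only; the function names are not from any standard library) evaluates the two throughput formulas at their maxima:

    import math

    def pure_aloha_throughput(g):
        # S_pure = G * e^(-2G): a frame succeeds only if no other frame starts
        # in the two frame-times surrounding it.
        return g * math.exp(-2 * g)

    def slotted_aloha_throughput(g):
        # S_slotted = G * e^(-G): collisions are confined to a single slot.
        return g * math.exp(-g)

    print(round(pure_aloha_throughput(0.5), 3))     # 0.184, the Pure ALOHA maximum at G = 0.5
    print(round(slotted_aloha_throughput(1.0), 3))  # 0.368, the Slotted ALOHA maximum at G = 1.0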
Slotted ALOHA is used in low-data-rate tactical satellite communications networks by military
forces, in subscriber-based satellite communications networks, mobile telephony call setup, and
in the contactless RFID technologies.
In a wireless broadcast system or a half-duplex two-way link, Aloha works perfectly. But as networks become more complex, for example in an Ethernet system involving multiple sources and destinations that share a common data path, trouble occurs because data frames collide (conflict). The heavier the communications volume, the worse the collision problems become. The result is degradation of system efficiency, because when two frames collide, the data contained in both frames is lost.
To minimize the number of collisions, thereby optimizing network efficiency and increasing the number of subscribers that can use a given network, a scheme called slotted Aloha was developed. This system employs signals called beacons that are sent at precise intervals and tell each source when the channel is clear to send a frame. Further improvement can be realized by a more sophisticated protocol called Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
8. Explain about wireless local area network (WLAN)
Wireless LAN
Figure: A notebook computer connected to a wireless access point using a PC card wireless adapter - an example of a Wi-Fi network.
A wireless local area network (WLAN) links two or more devices using some wireless
distribution method (typically spread-spectrum or OFDM radio), and usually providing a
connection through an access point to the wider internet. This gives users the mobility to move
around within a local coverage area and still be connected to the network. Most modern WLANs
are based on IEEE 802.11 standards, marketed under the Wi-Fi brand name.
Wireless LANs have become popular in the home due to ease of installation and the increasing popularity of laptop computers. Many public businesses, such as coffee shops and malls, have also begun to offer wireless access to their customers, often for free. Large wireless network projects are
being put up in many major cities: New York City, for instance, has begun a pilot program to
provide city workers in all five boroughs of the city with wireless Internet access.[1]
Figure: An embedded RouterBoard 112 with U.FL-RSMA pigtail and R52 mini-PCI Wi-Fi card, widely used by wireless Internet service providers (WISPs).
History
Norman Abramson, a professor at the University of Hawaii, developed the world’s first
wireless computer communication network, ALOHAnet, using low-cost ham-like radios. The
system included seven computers deployed over four islands to communicate with the central
computer on the Oahu Island without using phone lines.[2]
"In 1979, F.R. Gfeller and U. Bapst published a paper in the IEEE Proceedings reporting an
experimental wireless local area network using diffused infrared communications. Shortly
thereafter, in 1980, P. Ferrert reported on an experimental application of a single code spread
spectrum radio for wireless terminal communications in the IEEE National Telecommunications
Conference. In 1984, a comparison between infrared and CDMA spread spectrum
communications for wireless office information networks was published by Kaveh Pahlavan in
IEEE Computer Networking Symposium which appeared later in the IEEE Communication
Society Magazine. In May 1985, the efforts of Marcus led the FCC to announce experimental
ISM bands for commercial application of spread spectrum technology. Later on, M. Kavehrad
reported on an experimental wireless PBX system using code division multiple access. These
efforts prompted significant industrial activities in the development of a new generation of
wireless local area networks and it updated several old discussions in the portable and mobile
radio industry.
The first generation of wireless data modems was developed in the early 1980s by amateur
radio operators, who commonly referred to this as packet radio. They added a voice band data
communication modem, with data rates below 9600-bit/s, to an existing short distance radio
system, typically in the two meter amateur band. The second generation of wireless modems was
developed immediately after the FCC announcement in the experimental bands for non-military
use of the spread spectrum technology. These modems provided data rates on the order of
hundreds of kbit/s. The third generation of wireless modem then aimed at compatibility with the
existing LANs with data rates on the order of Mbit/s. Several companies developed the third
generation products with data rates above 1 Mbit/s and a couple of products had already been
announced by the time of the first IEEE Workshop on Wireless LANs."[3]
Figure: 54 Mbit/s WLAN PCI card (802.11g).
"The first of the IEEE Workshops on Wireless LAN was held in 1991. At that time early
wireless LAN products had just appeared in the market and the IEEE 802.11 committee had just
started its activities to develop a standard for wireless LANs. The focus of that first workshop
was evaluation of the alternative technologies. By 1996, the technology was relatively mature, a
variety of applications had been identified and addressed and technologies that enable these
applications were well understood. Chip sets aimed at wireless LAN implementations and
applications, a key enabling technology for rapid market growth, were emerging in the market.
Wireless LANs were being used in hospitals, stock exchanges, and other in building and campus
settings for nomadic access, point-to-point LAN bridges, ad-hoc networking, and even larger
applications through internetworking. The IEEE 802.11 standard and variants and alternatives,
such as the wireless LAN interoperability forum and the European HiperLAN specification had
made rapid progress, and the unlicensed PCS Unlicensed Personal Communications Services
and the proposed SUPERNet, later on renamed as U-NII, bands also presented new
opportunities."[4]
WLAN hardware was initially so expensive that it was only used as an alternative to cabled LANs in places where cabling was difficult or impossible. Early development included industry-specific solutions and proprietary protocols, but at the end of the 1990s these were replaced by standards, primarily the various versions of IEEE 802.11 (in products using the Wi-Fi brand name). An alternative ATM-like 5 GHz standardized technology, HiperLAN/2, has so far not succeeded in the market, and with the release of the faster 54 Mbit/s 802.11a (5 GHz) and 802.11g (2.4 GHz) standards it almost certainly never will. A newer standard, 802.11n, has since been added to the 802.11 family; it operates on both the 2.4 GHz and 5 GHz bands at up to 300 Mbit/s. Most newer routers, including those manufactured by Apple Inc., can broadcast a wireless network on both bands simultaneously; this is called dual-band operation. A HomeRF group was formed in 1997 to promote a technology aimed at residential use, but it disbanded at the end of 2002.[5]
Types of wireless LANs
Peer-to-peer
Peer-to-Peer or ad-hoc wireless LAN
An ad-hoc network is a network where stations communicate only peer to peer (P2P). There is
no base and no one gives permission to talk. This is accomplished using the Independent Basic
Service Set (IBSS).
A peer-to-peer (P2P) network allows wireless devices to directly communicate with each other.
Wireless devices within range of each other can discover and communicate directly without
involving central access points. This method is typically used by two computers so that they can
connect to each other to form a network.
If a signal strength meter is used in this situation, it may not read the strength accurately and can
be misleading, because it registers the strength of the strongest signal, which may be the closest
computer.
Hidden node problem: Devices A and C are both communicating with B, but are unaware of
each other
IEEE 802.11 defines the physical layer (PHY) and MAC (Media Access Control) layers based
on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). The 802.11
specification includes provisions designed to minimize collisions, because two mobile units may
both be in range of a common access point, but out of range of each other.
The 802.11 standard has two basic modes of operation. Ad hoc mode enables peer-to-peer transmission between mobile units. Infrastructure mode, in which mobile units communicate through an access point that serves as a bridge to a wired network infrastructure, is the more common wireless LAN application and the one covered here. Since wireless communication uses a more open medium for
communication in comparison to wired LANs, the 802.11 designers also included shared-key
encryption mechanisms: Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA,
WPA2), to secure wireless computer networks.
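To make the collision-avoidance idea concrete, the sketch below shows a simplified 802.11-style random backoff: before transmitting, a station waits a random number of slots drawn from a contention window that doubles after each failed attempt. The slot time and window limits used here are illustrative assumptions, not values taken from this text.

    import random

    def backoff_delay_us(failed_attempts, slot_us=20, cw_min=15, cw_max=1023):
        # The contention window doubles with each failed attempt (binary
        # exponential backoff) and is capped at cw_max; the station then
        # defers for a random number of idle slots before transmitting.
        cw = min(cw_max, (cw_min + 1) * (2 ** failed_attempts) - 1)
        return random.randint(0, cw) * slot_us

    print(backoff_delay_us(0))  # first attempt: 0-15 slots of deferral
    print(backoff_delay_us(3))  # after three collisions: 0-127 slots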
Bridge
A bridge can be used to connect networks, typically of different types. A wireless Ethernet
bridge allows the connection of devices on a wired Ethernet network to a wireless network. The
bridge acts as the connection point to the Wireless LAN.
UNIT – IV
SECTION – A
1. Broadband ISDN can support data rates of 1.5 million bits per second (bps), but it has not
been widely implemented
2. Frame Relay is a standardized wide area network technology that specifies the physical
and logical link layers of digital telecommunications channels using a packet switching
methodology.
3. Integrated Services Digital Network (ISDN) is a set of communications standards for
simultaneous digital transmission of voice, video, data, and other network services over the
traditional circuits of the public switched telephone network.
4. LAPD works in the Asynchronous Balanced Mode (ABM).
5. Signalling System No. 7 (SS7) is a set of telephony signaling protocols which are used to
set up most of the world's public switched telephone network telephone calls.
6. Two types of ISDN are Basic Rate Interface (BRI) and Primary Rate Interface (PRI).
7. The U interface is a two-wire interface between the exchange and a network terminating unit,
which is usually the demarcation point in non-North American networks.
8. The S interface is a four-wire bus that ISDN consumer devices plug into; the S & T reference
points are commonly implemented as a single interface labeled 'S/T' on an Network
termination 1 (NT1).
9. The R interface defines the point between a non-ISDN device and a terminal adapter (TA) which provides translation to and from such a device.
10. GFC (Generic Flow Control) is used only in the UNI cell format. No general-purpose services have been assigned to this field, and it is significant only for the local site.
11. The T interface is a serial interface between a computing device and a terminal adapter, which is the digital equivalent of a modem.
SECTION – B
1.Explain about B-ISDN
Broadband ISDN (B-ISDN)
A standard for transmitting voice, video and data at the same time over fiber optic telephone
lines. Broadband ISDN can support data rates of 1.5 million bits per second (bps), but it has not
been widely implemented.
Broadband Integrated Services Digital Network
In the 1980s the telecommunications industry expected that digital services would follow much
the same pattern as voice services did on the public switched telephone network, and
conceived a grandiose end-to-end circuit-switched service, known as Broadband Integrated Services Digital Network (B-ISDN). This was designed as a logical extension of the end-to-end circuit-switched data service, Integrated Services Digital Network (ISDN).
Before B-ISDN, the original ISDN attempted to substitute the analog telephone system with a
digital system which was appropriate for both voice and non voice traffic. Obtaining worldwide
agreement on the basic rate interface standard was expected to lead to a large user demand for
ISDN equipment, hence leading to mass production and inexpensive ISDN chips. However, the
standardization process took years while computer network technology moved rapidly. Once
the ISDN standard was finally agreed upon and products were available, it was already obsolete.
For home use the largest demand for new services was video and voice transfer, but the ISDN
basic rate lacks the necessary channel capacity. For business, ISDN's 64 kbit/s data rate
compared unfavorably to 10 Mbit/s local area networks such as Ethernet.
This led to the introduction of B-ISDN, formed by adding the word broadband. Although the term had a
meaning in physics and engineering (similar to wideband), the CCITT defined it as: "Qualifying
a service or system requiring transmission channels capable of supporting rates greater than the
primary rate"[1] referring to the primary rate which ranged from about 1.5 to 2 Mbit/s. Services
envisioned included video telephone and video conferencing. Technical papers were published
in early 1988.[2] Standards were issued by the Comité Consultatif International Téléphonique
et Télégraphique, (CCITT, now known as ITU-T), and called "Recommendations". They
included G.707-709, and I.121 which defined the principal aspects of B-ISDN, with many others
following through the 1990s.[3][4]
The designated technology for B-ISDN was Asynchronous Transfer Mode (ATM), which was
intended to carry both synchronous voice and asynchronous data services on the same
transport.[5] The B-ISDN vision has been overtaken by other disruptive technologies used in the
Internet. The ATM technology survived as a low-level layer in most Digital Subscriber Line
(DSL) technologies, and as a payload type in some wireless technologies such as WiMAX. The
term "broadband" became a marketing term for any digital Internet access service.
2. Explain about frame relay
Frame Relay
Frame Relay is a standardized wide area network technology that specifies the physical and
logical link layers of digital telecommunications channels using a packet switching
methodology. Originally designed for transport across Integrated Services Digital Network
(ISDN) infrastructure, it may be used today in the context of many other network interfaces.
Network providers commonly implement Frame Relay for voice (VoFR) and data as an
encapsulation technique, used between local area networks (LANs) over a wide area network
(WAN). Each end-user gets a private line (or leased line) to a Frame Relay node. The Frame
Relay network handles the transmission over a frequently changing path that is transparent to all end users.
Frame Relay has become one of the most extensively used WAN protocols. Its relatively low cost (compared to leased lines) is one reason for its popularity; the extreme simplicity of configuring user equipment in a Frame Relay network is another.
With the advent of Ethernet over fiber optics, MPLS, VPN and dedicated broadband services
such as cable modem and DSL, the end may loom for the Frame Relay protocol and encapsulation. However, many rural areas still lack DSL and cable modem
services. In such cases the least expensive type of non-dial-up connection remains a 64-kbit/s
frame-relay line. Thus a retail chain, for instance, may use Frame Relay for connecting rural
stores into their corporate WAN.
3. Explain about Data Encryption
Data Encryption
Privacy is a great concern in data communications. Faxed business letters can be intercepted at
will through tapped phone lines or intercepted microwave transmissions without the knowledge
of the sender or receiver. To increase the security of this and other data communications,
including digitized telephone conversations, the binary codes representing data may be
scrambled in such a way that unauthorized interception will produce an indecipherable sequence
of characters. Authorized receive stations will be equipped with a decoder that enables the
message to be restored. The process of scrambling, transmitting, and descrambling is known as
encryption.
Custom integrated circuits have been designed to perform this task and are available at low cost.
In some cases, they will be incorporated into the main circuitry of a data communications device
and function without operator knowledge. In other cases, an external circuit is used so that the
device, and its encrypting/decrypting technique, may be transported easily.
SECTION – C
1. Explain about ISDN
ISDN
Abbreviation of integrated services digital network, an international communications standard
for sending voice, video, and data over digital telephone lines or normal telephone wires. ISDN
supports data transfer rates of 64 Kbps (64,000 bits per second).
There are two types of ISDN:
• Basic Rate Interface (BRI) -- consists of two 64-Kbps B-channels and one D-channel for transmitting control information.
• Primary Rate Interface (PRI) -- consists of 23 B-channels and one D-channel (U.S.) or 30 B-channels and one D-channel (Europe).
The original version of ISDN employs baseband transmission. Another version, called B-ISDN,
uses broadband transmission and is able to support transmission rates of 1.5 Mbps. B-ISDN
requires fiber optic cables and is not widely available.
Integrated Services Digital Network
Integrated Services Digital Network (ISDN) is a set of communications standards for
simultaneous digital transmission of voice, video, data, and other network services over the
traditional circuits of the public switched telephone network. It was first defined in 1988 in the
CCITT red book.[1] Prior to ISDN, the telephone system was viewed as a way to transport voice,
with some special services available for data. The key feature of ISDN is that it integrates speech
and data on the same lines, adding features that were not available in the classic telephone
system. There are several kinds of access interfaces to ISDN defined as Basic Rate Interface
(BRI), Primary Rate Interface (PRI) and Broadband ISDN (B-ISDN).
ISDN is a circuit-switched telephone network system, which also provides access to packet
switched networks, designed to allow digital transmission of voice and data over ordinary
telephone copper wires, resulting in potentially better voice quality than an analog phone can
provide. It offers circuit-switched connections (for either voice or data), and packet-switched
connections (for data), in increments of 64 kilobit/s. A major market application for ISDN in
some countries is Internet access, where ISDN typically provides a maximum of 128 kbit/s in
both upstream and downstream directions. Channel bonding can achieve a greater data rate;
typically the ISDN B-channels of 3 or 4 BRIs (6 to 8 64 kbit/s channels) are bonded.
ISDN should not be mistaken for its use with a specific protocol, such as Q.931 whereby ISDN
is employed as the network, data-link and physical layers in the context of the OSI model. In a
broad sense ISDN can be considered a suite of digital services existing on layers 1, 2, and 3 of
the OSI model. ISDN is designed to provide access to voice and data services simultaneously.
However, common use reduced ISDN to be limited to Q.931 and related protocols, which are a
set of protocols for establishing and breaking circuit switched connections, and for advanced
calling features for the user. They were introduced in 1986.[2]
In a videoconference, ISDN provides simultaneous voice, video, and text transmission between
individual desktop videoconferencing systems and group (room) videoconferencing systems.
ISDN elements
Integrated services refers to ISDN's ability to deliver at minimum two simultaneous connections,
in any combination of data, voice, video, and fax, over a single line. Multiple devices can be
attached to the line, and used as needed. That means an ISDN line can take care of most people's
complete communications needs (apart from broadband Internet access and entertainment
television) at a much higher transmission rate, without forcing the purchase of multiple analog
phone lines. It also refers to integrated switching and transmission[3] in that telephone switching
and carrier wave transmission are integrated rather than separate as in earlier technology.
Basic Rate Interface
The entry level interface to ISDN is the Basic Rate Interface (BRI), a 128 kbit/s service
delivered over a pair of standard telephone copper wires. The 144 kbit/s payload rate is broken
down into two 64 kbit/s bearer channels ('B' channels) and one 16 kbit/s signaling channel ('D'
channel or delta channel). This is sometimes referred to as 2B+D.
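The channel arithmetic behind 2B+D (and the PRI variants described later in this section) can be checked with a short worked calculation; this is illustration only, not part of any ISDN API:

    # ISDN channel arithmetic, all rates in kbit/s
    B = 64        # bearer channel
    D_BRI = 16    # BRI signaling channel
    D_PRI = 64    # PRI signaling channel

    bri = 2 * B + D_BRI       # 2B+D  = 144 kbit/s total
    pri_t1 = 23 * B + D_PRI   # 23B+D = 1536 kbit/s, carried on a 1544 kbit/s T1
    pri_e1 = 30 * B + D_PRI   # 30B+D = 1984 kbit/s, carried on a 2048 kbit/s E1
    print(bri, pri_t1, pri_e1)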
The interface specifies the following network interfaces:
• The U interface is a two-wire interface between the exchange and a network terminating unit, which is usually the demarcation point in non-North American networks.
• The T interface is a serial interface between a computing device and a terminal adapter, which is the digital equivalent of a modem.
• The S interface is a four-wire bus that ISDN consumer devices plug into; the S and T reference points are commonly implemented as a single interface labeled 'S/T' on a Network Termination 1 (NT1).
• The R interface defines the point between a non-ISDN device and a terminal adapter (TA) which provides translation to and from such a device.
BRI-ISDN is very popular in Europe but is much less common in North America. It is also
common in Japan - where it is known as INS64.
Primary Rate Interface
The other ISDN access available is the Primary Rate Interface (PRI), which is carried over an E1
(2048 kbit/s) in most parts of the world. An E1 is 30 'B' channels of 64 kbit/s, one 'D' channel of
64 kbit/s and a timing and alarm channel of 64 kbit/s.
In North America PRI service is delivered on one or more T1 carriers (often referred to as
23B+D) of 1544 kbit/s (24 channels). A PRI has 23 'B' channels and 1 'D' channel for signalling
(Japan uses a circuit called a J1, which is similar to a T1). Interchangeably but incorrectly, a PRI is referred to as a T1 because it uses the T1 carrier format. A true T1 (commonly called an 'Analog T1' to avoid confusion) uses 24 channels of 64 kbit/s with in-band signaling; each channel uses 56 kbit/s for data and voice and 8 kbit/s for signaling and messaging. PRI uses out-of-band signaling, which provides the 23 B channels with a clear 64 kbit/s for voice and data and one 64 kbit/s 'D' channel
for signaling and messaging. In North America, Non-Facility Associated Signalling allows two
or more PRIs to be controlled by a single D channel, and is sometimes called "23B+D + n*24B".
D-channel backup allows for a second D channel in case the primary fails. NFAS is commonly
used on a T3.
PRI-ISDN is popular throughout the world, especially for connecting PBXs to PSTN.
While the North American PSTN can use PRI or Analog T1 format from PBX to PBX, the
POTS or BRI can be delivered to a business or residence. North American PSTN can connect
from PBX to PBX via Analog T1, T3, PRI, OC3, etc...
Even though many network professionals use the term "ISDN" to refer to the lower-bandwidth
BRI circuit, in North America BRI is relatively uncommon whilst PRI circuits serving PBXs are
commonplace.
Data channel
The bearer channel (B) is a standard 64 kbit/s voice channel of 8 bits sampled at 8 kHz with
G.711 encoding. B-Channels can also be used to carry data, since they are nothing more than
digital channels.
Each one of these channels is known as a DS0.
Most B channels can carry a 64 kbit/s signal, but some were limited to 56 kbit/s because they traveled over robbed-bit signaling (RBS) lines. This was commonplace in the 20th century, but has since become less so.
Signaling channel
The signaling channel (D) uses Q.931 for signaling with the other side of the link.
X.25
X.25 can be carried over the B or D channels of a BRI line, and over the B channels of a PRI
line. X.25 over the D channel is used at many point-of-sale (credit card) terminals because it
eliminates the modem setup, and because it connects to the central system over a B channel,
thereby eliminating the need for modems and making much better use of the central system's
telephone lines.
X.25 was also part of an ISDN protocol called "Always On/Dynamic ISDN", or AO/DI. This
allowed a user to have a constant multi-link PPP connection to the internet over X.25 on the D
channel, and brought up one or two B channels as needed.
Frame Relay
In theory, Frame Relay can operate over the D channel of BRIs and PRIs, but it is seldom, if
ever, used.
Consumer and industry perspectives
There are two points of view into the ISDN world. The most common viewpoint is that of the
end user, who wants to get a digital connection into the telephone network from home, whose
performance would be better than a 20th century analog 56K modem connection. Discussion on
the merits of various ISDN modems, carriers' offerings and tariffs (features, pricing) are from
this perspective. Since the principal consumer application is for Internet access, ISDN was
mostly superseded by DSL in the early 21st century. Inexpensive ADSL service offers speeds up
to 384 kbit/s, while more expensive versions are improving in speed all the time. As of fall 2005,
standard ADSL speeds are in millions of bits per second.
There is a second viewpoint: that of the telephone industry, where ISDN is a core technology. A
telephone network can be thought of as a collection of wires strung between switching systems.
The common electrical specification for the signals on these wires is T1 or E1. Between
telephone company switches, the signaling is performed via SS7. Normally, a PBX is connected
via a T1 with robbed bit signaling to indicate on-hook or off-hook conditions and MF and
DTMF tones to encode the destination number. ISDN is much better because messages can be
sent much more quickly than by trying to encode numbers as long (100 ms per digit) tone
sequences. This results in faster call setup times. Also, a greater number of features are available
and fraud is reduced.
ISDN is also used as a smart-network technology intended to add new services to the public
switched telephone network (PSTN) by giving users direct access to end-to-end circuit-switched digital services, and as a backup or failsafe circuit solution for critical data circuits.
2.Explain about ATM concepts and its architecture
ATM - Concepts and Architecture
Cell-Relay provides a compromise between fixed synchronous allocation mechanisms and
bursty, routable packet interfaces.
The Asynchronous Transfer Mode (ATM) protocols and architecture have managed to gather an
impressive amount of market and media attention over the last several years. Intended as a
technique to achieve a working compromise between the rigidity of the telecommunication
synchronous architecture and packet network's unpredictable load behavior, ATM products are
appearing for everything from high-speed switching to local area networking. ATM has caught
the interest of both the telecommunications community as a broadband carrier for Integrated
Services Digital Network (ISDN) networks as well as the computer industry, who view ATM as
a strong candidate for high-speed Local Area Networking. This article covers the basic concepts
involved in the ATM architecture.
At the core of the ATM architecture is a fixed length "cell." An ATM cell is a short, fixed length
block of data that contains a short header with addressing information, followed by the upper
layer traffic, or "payload." The cell structure, shown in Figure 1, is 53 octets long, with a 5 octet
header, followed by 48 bytes of payload. While the short packet may seem to be somewhat
inefficient in its ratio of overhead to actual data, it does have some distinct advantages over the
alternatives. By fixing the length of each cell, the timing characteristics of the links and the
corresponding network are regular and relatively easy to predict; predicting the dynamics of
variable length packet switched networks isn't always easy. By using short cells, hardware based
switching can be accomplished. Finally, the use of short cells provides an ability to transfer
isochronous information with a short delay.
Figure 1 - ATM Cell Structure (UNI Format)
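As a quick check on the overhead trade-off just described, the fixed 53-octet cell carries 48 octets of payload, so a little over 9 percent of every cell is header:

    header_octets, payload_octets = 5, 48
    cell_octets = header_octets + payload_octets                        # 53-octet cell
    print(f"payload efficiency: {payload_octets / cell_octets:.3f}")    # ~0.906
    print(f"header overhead:    {header_octets / cell_octets:.3f}")     # ~0.094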
The information contained in the header of each cell is used to identify the circuit (in the context of the local link), carries local flow control information, and includes error detection to prevent cells from being mis-routed. The remaining 48 octets are routed through the network to the
destination using the circuit.
ATM has evolved over the last 5-10 years to include a wide range of support protocols. Routing
and congestion management have been particular areas of research. The early concepts of cell
transfer networks revolved around the thought that users could "reserve" a pre-specified amount
of traffic through a circuit on the network. Some amount of guaranteed throughput would be
provided with an additional amount only as needed. Then, through this contract, traffic in excess
of the pre-allocated bandwidth could be arbitrarily dropped if congestion problems occurred.
However, the complexities of implementation have proven these techniques to be far too
difficult. Several vendors have proposed flow control architectures that involve more active
windowing protocols between the switches for data traffic.
ATM Architecture
As in the case of many large systems, there are a range of components and connections involved
in the ATM networks. Figure 2 shows an example network architecture. All connections in the
ATM network are point-to-point, with traffic being switched through the network by the
switching nodes. Two types of networks are included in the ATM architecture, Public Networks
and Private Networks. Private Networks, often referred to as Customer Premises Networks, are
typically concerned with end-user connections, or bridging services to other types of networks
including circuit switched services, frame relay, and voice subsystems. The interface between the
components in the Private Networks is referred to as the Private User Network Interface (UNI).
ATM also extends into the wider area Public Networks.
Interfaces between the Public and Private network switches conform to the Public UNI.
Interfaces between the switches within the Public network are the Network Node Interface
(NNI). Specifications for both the Public and Private UNI can be found in the ATM Forum's
publication "ATM User-Network Interface (UNI) Specification." The private networks often
permit the use of lower speed short haul interconnects that are useful in LAN environments, but
not of great use in wider area public networks. Three types of NNI have been developed: the NNI-ISSI, which connects switches in the same Local Access and Transport Area (LATA); the NNI-ICI, which connects ATM networks of different carriers (InterCarrier); and finally, a Private NNI, which permits the connection of different switches in a private network.
Figure 2 - ATM Sample Network Architecture
Protocol Reference Model
There is more to the ATM standards than the ATM cell format alone. Specifications exist to
describe acceptable physical signaling, call control, and upper layer payload formats. Figure 3
shows the hierarchy of protocols involved in ATM. Mapping roughly to layers 1 and 2 of the
OSI model, ATM is broken into 3 distinct layers. At the bottom, several classes of physical
layers have been adapted to support the different types of ATM applications. The ATM layer
provides the cell-switching and routing services. Application services rely on the ATM
Adaptation Layer (AAL) that serves two purposes, to provide a common framework for the
segmentation and reassembly of larger data sets into the ATM cells and to provide service
specific mechanisms for the transport of different types of data. Four different classes of traffic
are supported by the AAL ranging from straight circuit switched data through packet mode
applications. Many of the early implementations of ATM have been focused on the packet mode
services, often as a backbone for Frame Relay services. Typically, the AAL should be viewed as
an internal, software interface to bridge end-user services over ATM. There is typically a good
bit of work required to bind other protocols to the ATM stack.
Figure 3 - ATM Protocol Architecture
Traffic Flow Through The Network
A two-tiered addressing scheme is used, with the following elements involved in the addressing assignments:
• Virtual Channel: A virtual channel represents the data flow of a single network connection between two ATM end users. The ATM standards define this as a unidirectional connection between two end-points on the network.
• Virtual Path: A virtual path is used to carry one or more virtual channels through the network. It is represented as a bundle of channels between the two end-points.
This logical grouping of paths and channels provides some flexibility in managing the addressing
of the flow of information through an ATM network. Figure 4 shows a sample configuration of a
network with an assortment of Virtual Paths and Virtual Channels. As can be seen in the figure,
each virtual path contains one or more virtual channels. It is important to note that the actual
numbers assigned to each of the paths and channels are used only to represent a Virtual Path or
Channel segment that exists between two adjacent nodes of the network. These values are
established when the actual Virtual Channel Connections (VCC) are established. The number of
Paths and Channels over a single link are limited by the ATM cell format. This limitation helps
to explain why there are differences between the UNI and NNI formats. The NNI format replaces
the 4 bits of the Generic Flow Control indication with additional VPI bits, extending the number
of possible paths over the NNI from 256 to 4096.
Figure 4 - Example ATM Circuit and Path Connections
Figure 5 shows the formats for the UNI and NNI cells. The fields in the ATM cells are:
• GFC (Generic Flow Control) - used only in the UNI format. No general-purpose services have been assigned to this field, and it is significant only for the local site. This flow control information is not carried from end to end. Two modes have been used for GFC-based flow control, "uncontrolled access" and "controlled access." In uncontrolled access, this field is set to all zeroes. In controlled access mode, this field is set when congestion has occurred. The receiving equipment will report instances in which the GFC has been set a significant number of times to Layer Management.
• VPI/VCI (Virtual Path Identifier/Virtual Channel Identifier) - the distribution of bits between these two fields can be negotiated between the user and network equipment. Referring back to Figure 4, these identifiers tag only the portion of the path/circuit connection over a single link; it is the combination of all of the individual paths and circuits that comprises the connection.
• PT (Payload Type) - indicates whether the cell contains user information or Layer Management information. It also carries implicit congestion information.
• CLP (Cell Loss Priority) - indicates the cell's priority in the ATM selective loss algorithm. Set by the initiating equipment; when this is set to 0, the cell is given preference over cells with CLP set to 1.
• HEC (Header Error Control) - provides the capability to correct all single-bit errors in the cell header as well as to detect the majority of multiple-bit errors. The use of this field is up to the interpretation of the equipment designers. If most errors are likely to be single-bit errors, it can be used for error correction. Using the field for error correction does carry some risk of introducing unwanted errant traffic onto the network should a mistake be made in the correction process.
Note that the circuit and path identification fields are used to indicate the path that each cell is to
take through the network. The identifiers carried in the cells carry only the information required
to identify the cell's route to the receiving switch or end-point; they are not network addresses as found in IP or OSI networks.
Figure 5 - ATM Cell Formats
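To tie the field descriptions above to the bit layout in Figure 5, here is a small Python sketch that packs a UNI cell header and computes its HEC. The field widths (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8) follow the UNI format described in this section; the CRC-8 generator and the 0x55 coset are the ITU-T I.432 conventions and are assumptions not spelled out in the text above.

    def hec(first_four: bytes) -> int:
        # CRC-8 over the first four header octets, generator x^8 + x^2 + x + 1,
        # then XORed with the assumed I.432 coset value 0x55.
        crc = 0
        for byte in first_four:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc ^ 0x55

    def pack_uni_header(gfc, vpi, vci, pt, clp):
        # First four octets: GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1)
        word = ((gfc & 0xF) << 28) | ((vpi & 0xFF) << 20) | ((vci & 0xFFFF) << 4) \
               | ((pt & 0x7) << 1) | (clp & 0x1)
        head = word.to_bytes(4, "big")
        return head + bytes([hec(head)])

    print(pack_uni_header(gfc=0, vpi=1, vci=32, pt=0, clp=0).hex())  # 5-octet UNI header

An NNI header would drop the GFC field and widen the VPI to 12 bits, which is exactly the 256-to-4096 path expansion noted earlier.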
This article covers some of the important general concepts in the ATM architecture, but only scratches the surface. Other important areas of the ATM architecture include how it is mapped to the various physical interfaces, the ATM Adaptation Layer, signaling protocols, and layer management, along with switching strategies.
3.Explain about ISDN protocol
ISDN protocols and topics described here include:
• LAPD
• International Variants of ISDN
• ISDN Frame Structure
• ISDN Terminology
ISDN (Integrated Services Digital Network) is an all digital communications
line that allows for the transmission of voice, data, video and graphics, at very
high speeds, over standard communication lines. ISDN provides a single,
common interface with which to access digital communications services that
are required by varying devices, while remaining transparent to the user. Due
to the large amounts of information that ISDN lines can carry, ISDN
applications are revolutionizing the way businesses communicate. ISDN is not
restricted to public telephone networks alone; it may be transmitted via packet
switched networks, telex, CATV networks, etc.
The ISDN is illustrated here in relation to the OSI model:
Figure: ISDN applications.
LAPD
The LAPD (Link Access Protocol - Channel D) is a layer 2 protocol which is
defined in CCITT Q.920/921. LAPD works in the Asynchronous Balanced
Mode (ABM). This mode is totally balanced (i.e., no master/slave
relationship). Each station may initialize, supervise, recover from errors, and
send frames at any time. The protocol treats the DTE and DCE as equals.
The format of a standard LAPD frame is as follows:

    Flag | Address field | Control field | Information | FCS | Flag

LAPD frame structure
Flag
The value of the flag is always (0x7E). In order to ensure that the bit pattern of
the frame delimiter flag does not appear in the data field of the frame (and
therefore cause frame misalignment), a technique known as Bit Stuffing is
used by both the transmitter and the receiver.
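A minimal sketch of the bit-stuffing rule (insert a 0 after every run of five consecutive 1s so the 0x7E flag pattern can never appear inside the frame body); the function names are illustrative:

    def bit_stuff(bits):
        # Transmitter side: append a 0 after any run of five consecutive 1s.
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                out.append(0)
                run = 0
        return out

    def bit_destuff(bits):
        # Receiver side: drop the 0 that follows any run of five consecutive 1s.
        out, run, skip = [], 0, False
        for b in bits:
            if skip:
                skip, run = False, 0
                continue
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                skip = True
        return out

    data = [0, 1, 1, 1, 1, 1, 1, 0]
    assert bit_destuff(bit_stuff(data)) == data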
Address field
The first two bytes of the frame after the header flag are known as the address field. The format of the address field is as follows (bit 8 transmitted first in each octet):

    Octet 1:  SAPI (bits 8-3) | C/R (bit 2) | EA1 (bit 1)
    Octet 2:  TEI (bits 8-2) | EA2 (bit 1)

LAPD address field
EA1 - First Address Extension bit, which is always set to 0.
C/R - Command/Response bit. Frames from the user with this bit set to 0 are command frames, as are frames from the network with this bit set to 1. Other values indicate a response frame.
EA2 - Second Address Extension bit, which is always set to 1.
TEI - Terminal Endpoint Identifier. Valid values are as follows:
    0-63      Used by non-automatic TEI assignment user equipment.
    64-126    Used by automatic TEI assignment equipment.
    127       Used for a broadcast connection meant for all Terminal Endpoints.
A short decoding sketch for the address field follows.
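A minimal decoding sketch for the two address octets, following the layout above (the function name is illustrative):

    def decode_lapd_address(octet1: int, octet2: int):
        return {
            "SAPI": octet1 >> 2,        # bits 8-3 of octet 1
            "C/R":  (octet1 >> 1) & 1,  # bit 2: command/response
            "EA1":  octet1 & 1,         # bit 1: always 0
            "TEI":  octet2 >> 1,        # bits 8-2 of octet 2
            "EA2":  octet2 & 1,         # bit 1: always 1
        }

    # SAPI 0 (call-control signalling), broadcast TEI 127
    print(decode_lapd_address(0x00, 0xFF))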
Control field
The field following the Address Field is called the Control Field and serves to
identify the type of the frame. In addition, it includes sequence numbers,
control features and error tracking according to the frame type.
FCS
The Frame Check Sequence (FCS) enables a high level of physical error
control by allowing the integrity of the transmitted frame data to be checked.
The sequence is first calculated by the transmitter using an algorithm based on
the values of all the bits in the frame. The receiver then performs the same
calculation on the received frame and compares the result with the received check sequence.
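As a sketch of the check described above: HDLC-family links such as LAPD commonly use a 16-bit CRC for the FCS. The CRC-16/X-25 parameters used below (reflected polynomial 0x8408, initial value 0xFFFF, final complement) are an assumption for illustration, not a detail stated in this text.

    def fcs16(data: bytes) -> int:
        # Bit-reflected CRC-16 with polynomial 0x1021 (0x8408 reflected),
        # initial value 0xFFFF and a final one's complement.
        crc = 0xFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
        return crc ^ 0xFFFF

    frame_body = b"\x00\xff\x03\x08\x02\x00\x01\x05"   # arbitrary example octets
    print(hex(fcs16(frame_body)))
    # The receiver recomputes fcs16 over the same octets and compares the
    # result with the FCS field carried in the received frame.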
Window size
LAPD supports an extended window size (modulo 128) where the number of
possible outstanding frames for acknowledgement is raised from 8 to 128. This
extension is generally used for satellite transmissions where the
acknowledgement delay is significantly greater than the frame transmission
times. The type of the link initialization frame determines the modulo of the
session and an "E" is added to the basic frame type name (e.g., SABM
becomes SABME).
Frame types
The following are the Supervisory frame types in LAPD:
    RR    - Information frame acknowledgement and indication to receive more.
    REJ   - Request for retransmission of all frames after a given sequence number.
    RNR   - Indicates a state of temporary occupation of the station (e.g., window full).
The following are the Unnumbered frame types in LAPD:
    DISC  - Request disconnection.
    UA    - Acknowledgement frame.
    DM    - Response to DISC indicating disconnected mode.
    FRMR  - Frame reject.
    SABM  - Initiator for asynchronous balanced mode; no master/slave relationship.
    SABME - SABM in extended mode.
    UI    - Unnumbered Information.
    XID   - Exchange Identification.
There is one Information frame type in LAPD:
    Info  - Information transfer frame.
International Variants of ISDN
The organization primarily responsible for producing the ISDN standards is the
CCITT. The CCITT study group responsible for ISDN first published a set of
ISDN recommendations in 1984 (Red Books). Prior to this publication, various
geographical areas had developed different versions of ISDN. This resulted in
the CCITT recommendation of a common ISDN standard for all countries, in
addition to allocated variants definable for each country.
The use of nation-specific information elements is enabled by using the
Codeset mechanism which allows different areas to use their own information
elements within the data frames.
Following is a description of most ISDN variants:
National ISDN-1 (Bellcore)
This variant is used in the USA by Bellcore. It has four network-specific
message types. It does not have any single octet information elements. In
addition to Codeset 0 elements it has four Codeset 5 and five Codeset 6
information elements.
National ISDN-2 (Bellcore)
The main difference between National ISDN-1 and ISDN-2 is parameter
downloading via components (a component being a sub-element of the
Extended Facility information element). These components are used to
communicate parameter information between ISDN user equipment, such as an
ISDN telephone, and the ISDN switch.
Other changes are the addition of the SEGMENT, FACILITY and REGISTER
message types and the Segmented Message and Extended Facility information
elements. Also, some meanings of field values have changed and some new
accepted field values have been added.
5ESS (AT&T)
This variant is used in the USA by AT&T. It is the most widely used of the
ISDN protocols and contains 19 network-specific message types. It has no
Codeset 5, but does have 18 Codeset 6 elements and an extensive information
management element.
Euro ISDN (ETSI)
This variant is to be adopted by all of the European countries. Presently, it
contains single octet message types and has five single octet information
elements. Within the framework of the protocol there are no Codeset 5 and
Codeset 6 elements, however each country is permitted to define its own
individual elements.
VN3, VN4 (France)
These variants are prevalent in France. The VN3 decoding and some of its
error messages are translated into French. It is a sub-set of the CCITT
document and only has single octet message types. The more recent VN4 is not
fully backward compatible but closely follows the CCITT recommendations.
As with VN3, some translation has taken place. It has only single octet
message types, five single octet information elements, and two Codeset 6
elements.
1TR6 (Germany)
This variant is prevalent in Germany. It is a sub-set of the CCITT version, with
minor amendments. The protocol is part English and part German.
ISDN 30 [DASS-2] (England)
This variant is used by British Telecom in addition to ETSI (see above). At
layers 2 and 3 this standard does not conform to CCITT structure. Frames are
headed by one octet and optionally followed by information. However most of
the information is IA5 coded, and therefore ASCII decoded.
Australia
In 1989 Australian ISDN was introduced. This used Telecom Australia
specified protocols TPH 1856 for PRI and TPH 1962 for BAI. These were
adopted by the Regulator Austel as Australian Technical Standards in 1990, TS 014 and TS 013 respectively. These protocols were developed from CCITT
Red Book ISDN recommendations.
In 1996, a new ISDN was established using EuroISDN protocols. The
Regulator (Austel) issued new Standards, these being TS031 for BAI and TS
038 for PRI. These were replaced by new industry Standards in 2001, these
being AS/ACIF S.031 and AS/ACIF S.038 for BAI and PRI respectively.
There are currently no Australian ISDN BAI (TS 013) services in operation,
while there are a small and declining number of Australian ISDN PRI (TS 014)
in service.
All Australian carrier networks are EuroISDN capable, but there may be some
differences in Supplementary Services offered. Some smaller carrier networks
are also Australian ISDN (TS 014) capable.
The major carrier only provides EuroISDN based services.
NTT-Japan
The Japanese ISDN service provided by NTT is known as INS-Net and its
main features are as follows:
• Provides a user-network interface that conforms to the CCITT Recommendation Blue Book.
• Provides both basic and primary rate interfaces.
• Provides a packet mode using Case B.
• Supported by Signalling System No. 7 ISDN User Part within the network.
• Offered as a public network service.
ARINC 746
In passenger airplanes today there are phones in front of each passenger. These
telephones are connected in a T1 network and the conversation is transferred
via a satellite. The signalling protocol used is based on Q.931, but with a few
modifications and is known as ARINC 746. The leading companies in this area
are GTE and AT&T. In order to analyze ARINC, the LAPD variant should
also be specified as ARINC.
ARINC 746 Attachment 11
ARINC (Aeronautical Radio, INC.) Attachment 11 describes the Network
Layer (layer 3) message transfer necessary for equipment control and circuit
switched call control procedures between the Cabin Telecommunications Unit
(CTU) and SATCOM system, North American Telephone System (NATS),
and Terrestrial Flight Telephone System (TFTS). The interface described in
this attachment is derived from the CCITT recommendations Q.930, Q.931 and
Q.932 for call control and the ISO/OSI standards DIS 9595 and DIS 9596 for
equipment control. These Network Layer messages should be transported in
the information field of the Data Link Layer frame.
ARINC 746 Attachment 17
ARINC (Aeronautical Radio, INC.) Attachment 17 represents a system which
provides passenger and cabin crew access to services provided by the CTU and
intelligent cabin equipment. The distribution portion of the CDS transports the
signalling and voice channels from headend units to the individual seat units.
Each zone within the aircraft has a zone unit that controls and services seat
units within that zone.
Northern Telecom - DMS 100
This variant represents Northern Telecom's implementation of National ISDN-1. It provides ISDN BRI user-network interfaces between the Northern
Telecom ISDN DMS-100 switch and terminals designed for the BRI DSL. It is
based on CCITT ISDN-1 and Q Series Recommendations and the ISDN Basic
Interface Call Control Switching and Signalling Requirements and
supplementary service Technical References published by Bellcore.
DPNSS1
DPNSS1 (Digital Private Network Signalling System No. 1) is a common-channel signalling system used in Great Britain. It extends facilities normally
only available between extensions on a single PBX to all extensions on PBXs
that are connected together in a private network. It is primarily intended for use
between PBXs in private networks via time-slot 16 of a 2048 kbit/s digital
transmission system. Similarly it may be used in time-slot 24 of a 1.544 kbit/s
digital transmission system. Note that the LAPD variant should also be
selected to be DPNSS1.
Swiss Telecom
The ISDN variant operated by the Swiss Telecom PTT is called SwissNet. The
DSS1 protocol for SwissNet is fully based on ETS. Amendments to this
standard for SwissNet fall into the category of definitions of various options in
the standard and of missing requirements. They also address SwissNet-specific
conditions, e.g., assuring compatibility between user equipment and SwissNet
exchanges of different evolution steps.
QSIG
QSIG is a modern, powerful and intelligent inter-private PABX signalling
system. QSIG standards specify a signalling system at the Q reference point
which is primarily intended for use on a common channel; e.g. a G.703
interface. However, QSIG will work on any suitable method of connecting the
PINX equipment. The QSIG protocol stack is identical in structure to the DSS1
protocol stack. Both follow the ISO reference model. Both can have an
identical layer 1 and layer 2 (LAPD), however, at layer 3 QSIG and DSS1
differ.
ISDN Frame Structure
Shown below is the general structure of the ISDN frame (bit 8 transmitted first in each octet):

    Octet 1:   Protocol discriminator
    Octet 2:   0 0 0 0 (bits 8-5) | Length of call reference value (bits 4-1)
    Octet 3+:  Flag (bit 8) | Call reference value
    Next:      0 (bit 8) | Message type (bits 7-1)
    Then:      Other information elements as required

ISDN frame structure
Protocol discriminator
The protocol used to encode the remainder of the Layer 3 message.
Length of call reference value
Defines the length of the next field. The Call reference may be one or two
octets long depending on the size of the value being encoded.
Flag
Set to zero for messages sent by the party that allocated the call reference
value; otherwise set to one.
Call reference value
An arbitrary value that is allocated for the duration of the specific session,
which identifies the call between the device maintaining the call and the ISDN
switch.
Message type
Defines the primary purpose of the frame. The message type may be one octet
or two octets (for network specific messages). When there is more than one
octet, the first octet is coded as eight zeros. A complete list of message types is
given in ISDN Message Types below.
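The header layout above can be walked with a few lines of Python. This is an illustrative sketch only (the example octets are made up for demonstration), not a complete Q.931 decoder:

    def parse_q931_header(frame: bytes):
        protocol_discriminator = frame[0]          # 0x08 identifies Q.931 call control
        cr_length = frame[1] & 0x0F                # length of the call reference value
        call_ref = frame[2:2 + cr_length]
        flag = (call_ref[0] >> 7) if cr_length else None   # bit 8 of the first CR octet
        message_type = frame[2 + cr_length]        # first message-type octet
        return protocol_discriminator, cr_length, call_ref, flag, message_type

    # Example: protocol discriminator 0x08, 1-octet call reference 0x01, SETUP (0x05)
    print(parse_q931_header(bytes([0x08, 0x01, 0x01, 0x05])))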
ISDN Information Elements
There are two types of information elements: single octet and variable length.
Single octet information elements
The single octet information element appears as follows:

    Bit 8: 1 | Bits 7-5: Information element identifier | Bits 4-1: Information element contents

Single octet information element
Following are the available single octet information elements:
    1 000 ----    Reserved
    1 001 ----    Shift
    1 010 0000    More data
    1 010 0001    Sending Complete
    1 011 ----    Congestion Level
    1 101 ----    Repeat indicator
Variable length information elements
The following is the format of the variable length information element:

    Octet 1:   0 (bit 8) | Information element identifier (bits 7-1)
    Octet 2:   Length of information element contents
    Octet 3+:  Information element contents (multiple bytes)

Variable length information element
The information element identifier identifies the chosen element and is unique
only within the given Codeset. The length of the information element informs
the receiver of the number of following octets belonging to each information element. Following are possible variable length information elements (a short parsing sketch follows the list):
    0 0000000    Segmented Message
    0 0000100    Bearer Capability
    0 0001000    Cause
    0 0010000    Call identity
    0 0010100    Call state
    0 0011000    Channel identification
    0 0011100    Facility
    0 0011110    Progress indicator
    0 0100000    Network-specific facilities
    0 0100111    Notification indicator
    0 0101000    Display
    0 0101001    Date/time
    0 0101100    Keypad facility
    0 0110100    Signal
    0 0110110    Switchhook
    0 0111000    Feature activation
    0 0111001    Feature indication
    0 1000000    Information rate
    0 1000010    End-to-end transit delay
    0 1000011    Transit delay selection and indication
    0 1000100    Packet layer binary parameters
    0 1000101    Packet layer window size
    0 1000110    Packet size
    0 1101100    Calling party number
    0 1101101    Calling party subaddress
    0 1110000    Called party number
    0 1110001    Called party subaddress
    0 1110100    Redirecting number
    0 1111000    Transit network selection
    0 1111001    Restart indicator
    0 1111100    Low layer compatibility
    0 1111101    High layer compatibility
    0 1111110    User-user
    0 1111111    Escape for extension
    Other values Reserved
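The sketch below walks an information-element list using the two formats just shown: bit 8 set marks a single-octet element, bit 8 clear marks an identifier followed by a length octet and that many content octets. It is illustrative only; the example contents are arbitrary.

    def parse_information_elements(data: bytes):
        i, elements = 0, []
        while i < len(data):
            ident = data[i]
            if ident & 0x80:
                # Single octet element: identifier and contents share one octet.
                elements.append((ident, b""))
                i += 1
            else:
                # Variable length element: identifier, length, then contents.
                length = data[i + 1]
                elements.append((ident, data[i + 2:i + 2 + length]))
                i += 2 + length
        return elements

    # Sending Complete (0xA1) followed by a 3-octet Bearer Capability element (0x04)
    print(parse_information_elements(bytes([0xA1, 0x04, 0x03, 0x80, 0x90, 0xA3])))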
ISDN Message Types
Following are possible ISDN message types:
Call Establishment
    000 00001    Alerting
    000 00010    Call Proceeding
    000 00011    Progress
    000 00101    Setup
    000 00111    Connect
    000 01101    Setup Acknowledge
    000 01111    Connect Acknowledge
Call Information Phase
    001 00000    User Information
    001 00001    Suspend Reject
    001 00010    Resume Reject
    001 00100    Hold
    001 00101    Suspend
    001 00110    Resume
    001 01000    Hold Acknowledge
    001 01101    Suspend Acknowledge
    001 01110    Resume Acknowledge
    001 10000    Hold Reject
    001 10001    Retrieve
    001 10011    Retrieve Acknowledge
    001 10111    Retrieve Reject
Call Clearing
    010 00101    Disconnect
    010 00110    Restart
    010 01101    Release
    010 01110    Restart Acknowledge
    010 11010    Release Complete
Miscellaneous
    011 00000    Segment
    011 00010    Facility
    011 00100    Register
    011 01110    Notify
    011 10101    Status inquiry
    011 11001    Congestion Control
    011 11011    Information
    011 11101    Status
ISDN Terminology
BRI
The Basic Rate Interface is one of the two services provided by ISDN. BRI is
comprised of two B-channels and one D-channel (2B+D). The B-channels each
operate at 64 Kbps and the D-channel operates at 16 Kbps. It is used by single
line business customers for typical desk-top type applications.
C/R
C/R refers to Command or Response. The C/R bit in the address field defines
the frame as either a command frame or a response frame to the previous
command.
Codeset
Three main Codesets are defined. In each Codeset, a section of the information
elements are defined by the associated variant of the protocol:
Codeset 0
The default code, referring to the CCITT set of information
elements.
Codeset 5
The national specific Codeset.
Codeset 6
The network specific Codeset.
The same value may have different meanings in various Codesets. Most
elements usually appear only once in each frame.
In order to change Codesets two methods are defined:
Shift
This method enables a temporary change to another Codeset.
Also termed as non-locking shift, the shift only applies to the
next information element.
Shift Lock
This method implements a permanent change until indicated
otherwise. Shift-Lock may only change to a higher Codeset.
CPE
Customer Premises Equipment - refers to all ISDN compatible equipment
connected at the user site. Examples of devices are telephone, PC, Telex,
Facsimile, etc. The exception is the FCC definition of NT1. The FCC views
the NT1 as CPE because it is on the customer site, but the CCITT views
NT1 as part of the network. Consequently, the reference point regarded as
the network boundary depends on the variant in use.
ISDN Channels B, D and H
The three logical digital communication channels of ISDN perform the
following functions:
B-Channel
Carries user service information including: digital data, video,
and voice.
D-Channel
Carries signals and data packets between the user and the
network
H-Channel
Performs the same function as B-Channels, but operates at
rates exceeding DS-0 (64 Kbps).
ISDN Devices
Devices connecting a CPE and a network. In addition to facsimile, telex, PC,
telephone, ISDN devices may include the following:
TA
Terminal Adapters - devices that are used to portray non-ISDN
equipment as ISDN compatible.
LE
Local Exchange - ISDN central office (CO). The LE implements the
ISDN protocol and is part of the network.
LT
Local Termination - used to express the LE responsible for the
functions associated with the end of the Local Loop.
ET
Exchange Termination - used to express the LE responsible for the
switching functions.
NT
Network Termination equipment exists in two forms and is referred to
accordingly. The two forms are each responsible for different
operations and functions.
• NT1 - the termination of the connection between the user site and the
LE. NT1 is responsible for performance monitoring, power transfer,
and multiplexing of the channels.
• NT2 - may be any device that is responsible for providing user site
switching, multiplexing, and concentration: LANs, mainframe computers,
terminal controllers, etc. In ISDN residential environments there is no NT2.
TE
Terminal Equipment - any user device, e.g. telephone or facsimile.
There are two forms of terminal equipment:
• TE1 - equipment that is ISDN compatible.
• TE2 - equipment that is not ISDN compatible.
ISDN Reference Points
Reference points define the communication points between different devices
and suggest that different protocols may be used at each side of the point. The
main points are as follows:
R
A communication reference point between a non-ISDN compatible
TE and a TA.
S
A communication reference link between the TE or TA and the NT
equipment.
T
A communication reference point between user switching
equipment and a Local Loop Terminator.
U
A communication reference point between the NT equipment and
the LE. This reference point may be referred to as the network
boundary when the FCC definition of the Network terminal is used.
The following diagram illustrates the ISDN Functional Devices and Reference
Points:
LAPD
The Link Access Protocol on the D-channel. LAPD is a bit orientated protocol
on the data link layer of the OSI reference model. Its prime function is
ensuring the error free transmission of bits on the physical layer (layer 1).
PRI
The Primary Rate Interface is one of the two services provided by ISDN. PRI
is standard dependent and thus varies according to country. In North America,
PRI has twenty-three B-channels and one D-channel (23B+D). In Europe, PRI
has thirty B-channels and one D-channel (30B+D).
The American B- and D-channels operate at an equal rate of 64 Kbps.
Consequently, the D-channel is sometimes not activated on certain interfaces,
thus allowing the time slot to be used as another B-channel. The 23B+D PRI
operates at the CCITT designated rate of 1544 Kbps.
The European PRI is comprised of thirty B-channels and one D-channel
(30B+D). As in the American PRI all the channels operate at 64 Kbps.
However, the 30B+D PRI operates at the CCITT designated rate of 2048 Kbps.
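As a quick check on the rates quoted above: a T1 carries 24 timeslots of 64 Kbps (the 23B+D) plus 8 Kbps of framing, while an E1 carries 32 timeslots of 64 Kbps (30B+D plus one framing/synchronization slot). The snippet below is only illustrative arithmetic, not part of any standard text.

```python
# Illustrative arithmetic for the PRI rates quoted above.
SLOT_RATE_KBPS = 64

t1_payload = 24 * SLOT_RATE_KBPS   # 23 B-channels + 1 D-channel = 1536 Kbps
t1_total = t1_payload + 8          # plus 8 Kbps of T1 framing   = 1544 Kbps
e1_total = 32 * SLOT_RATE_KBPS     # 30 B + 1 D + 1 framing slot = 2048 Kbps

print(t1_payload, t1_total, e1_total)   # 1536 1544 2048
```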
SAPI
Service Access Point Identifier - the first part of the address of each frame.
TEI
Terminal End Point Identifier - the second part of the address of each frame.
4. Explain about SS7
Signalling System No. 7
Signalling System No. 7 (SS7) is a set of telephony signaling protocols which are used to set up
most of the world's public switched telephone network telephone calls. The main purpose is to
set up and tear down telephone calls. Other uses include number translation, local number
portability, prepaid billing mechanisms, short message service (SMS), and a variety of other
mass market services.
It is usually referenced as Signalling System No. 7 or Signalling System #7, or simply abbreviated
to SS7. In North America it is often referred to as CCSS7, an abbreviation for Common Channel
Signalling System 7. In some European countries, specifically the United Kingdom, it is
sometimes called C7 (CCITT number 7) and is also known as number 7 and CCIS7 (Common
Channel Interoffice Signaling 7).
There is only one international SS7 protocol defined by ITU-T in its Q.700-series
recommendations. There are however, many national variants of the SS7 protocols. Most
national variants are based on two widely deployed national variants as standardized by ANSI
and ETSI, which are in turn based on the international protocol defined by ITU-T. Each national
variant has its own unique characteristics. Some national variants with rather striking
characteristics are the China (PRC) and Japan (TTC) national variants.
The Internet Engineering Task Force (IETF) has also defined level 2, 3, and 4 protocols that
are compatible with SS7:
• MTP2 (M2UA and M2PA)
• MTP3 (M3UA)
• Signalling Connection Control Part (SCCP) (SUA)
but use a Stream Control Transmission Protocol (SCTP) transport mechanism. This suite of
protocols is called SIGTRAN.
History
Common Channel Signaling protocols have been developed by major telephone companies and
the ITU-T since 1975 and the first international Common Channel Signaling protocol was
defined by the ITU-T as Signalling System No. 6 (SS6) in 1977.[2] Signalling System No. 7 was
defined as an international standard by ITU-T in its 1980 (Yellow Book) Q.7XX-series
recommendations.[1] SS7 was designed to replace SS6, which had a restricted 28-bit signal unit
that was both limited in function and not amenable to digital systems.[2] SS7 has substantially
replaced SS6, Signalling System No. 5 (SS5), R1 and R2, with the exception that R1 and R2
variants are still used in numerous nations.
SS5 and earlier systems used in-band signaling, in which the call-setup information was sent by
playing special multi-frequency tones into the telephone lines, known as bearer channels in the
parlance of the telecom industry. This led to security problems with blue boxes. SS6 and SS7
implement out-of-band signaling protocols, carried in a separate signaling channel,[3] explicitly
keep the end-user's audio path—the so-called speech path—separate from the signaling phase to
eliminate the possibility that end users may introduce tones that would be mistaken for those
used for signaling. See falsing. SS6 and SS7 are referred to as Common Channel
Interoffice Signalling Systems (CCIS) or Common Channel Signaling (CCS) due to their hard
separation of signaling and bearer channels. This required a separate channel dedicated solely to
signaling, but the greater speed of signaling decreased the holding time of the bearer channels,
and the number of available channels was rapidly increasing anyway at the time SS7 was
implemented.
The common channel signaling paradigm was translated to IP via the SIGTRAN protocols as
defined by the IETF. While running on a transport based upon IP, the SIGTRAN protocols are
not an SS7 variant, but simply transport existing national and international variants of SS7.[4]
Functionality
The term signaling, when used in telephony, refers to the exchange of control information
associated with the establishment of a telephone call on a telecommunications circuit.[5] An
example of this control information is the digits dialed by the caller, the caller's billing number,
and other call-related information.
When the signaling is performed on the same circuit that will ultimately carry the conversation
of the call, it is termed channel associated signaling (CAS). This is the case for earlier analogue
trunks, MF and R2 digital trunks, and DSS1/DASS PBX trunks.
In contrast, SS7 signaling is termed Common Channel Signaling (CCS) in that the path and
facility used by the signaling is separate and distinct from the telecommunications channels that
will ultimately carry the telephone conversation. With CCS, it becomes possible to exchange
signaling without first seizing a facility, leading to significant savings and performance increases
in both signaling and facility usage.
Because of the mechanisms used by signaling methods prior to SS7 (battery reversal, multifrequency digit outpulsing, A- and B-bit signaling), these older methods could not
communicate much signaling information. Usually only the dialed digits were signaled, and only
during call setup. For charged calls, dialed digits and charge number digits were outpulsed. SS7,
being a high-speed and high-performance packet-based communications protocol, can
communicate significant amounts of information when setting up a call, during the call, and at
the end of the call. This permits rich call-related services to be developed. Some of the first such
services were call management related services that many take for granted today: call
forwarding (busy and no answer), voice mail, call waiting, conference calling, calling name
and number display, call screening, malicious caller identification, busy callback.[6]
The earliest deployed upper layer protocols in the SS7 signaling suite were dedicated to the
setup, maintenance, and release of telephone calls.[7] The Telephone User Part (TUP) was
adopted in Europe and the Integrated Services Digital Network (ISDN) User Part (ISUP)
adapted for public switched telephone network (PSTN) calls was adopted in North America.
ISUP was later used in Europe when the European networks upgraded to the ISDN. (North
America never accomplished full upgrade to the ISDN and the predominant telephone service is
still the older POTS.) Due to its richness and the need for an out-of-band channel for its
operation, SS7 signaling is mostly used for signaling between telephone switches and not for
signaling between local exchanges and customer-premises equipment (CPE).
Because SS7 signaling does not require seizure of a channel for a conversation prior to the
exchange of control information, non-facility associated signalling (NFAS) became possible.
NFAS is signaling that is not directly associated with the path that a conversation will traverse
and may concern other information located at a centralized database such as service subscription,
feature activation, and service logic. This makes possible a set of network-based services that do
not rely upon the call being routed to a particular subscription switch at which service logic
would be executed, but permits service logic to be distributed throughout the telephone network
and executed more expediently at originating switches far in advance of call routing. It also
permits the subscriber increased mobility due to the decoupling of service logic from the
subscription switch. Another characteristic of ISUP made possible by SS7 with NFAS is the
exchange of signaling information during the middle of a call.[5]
Also possible with SS7 is Non-Call-Associated Signaling, which is signaling that is not directly
related to the establishment of a telephone call.[8] An example of this is the exchange of the
registration information used between a mobile telephone and a home location register (HLR)
database: a database that tracks the location of the mobile. Other examples include Intelligent
Network and local number portability databases.[9]
Signaling modes
As well as providing for signaling with these various degrees of association with call set up and
the facilities used to carry calls, SS7 is designed to operate in two modes: associated mode and
quasi-associated mode.[10]
When operating in the associated mode, SS7 signaling progresses from switch to switch through
the PSTN following the same path as the associated facilities that carry the telephone call. This
mode is more economical for small networks. The associated mode of signaling is not the
predominant choice of modes in North America.[11]
When operating in the quasi-associated mode, SS7 signaling progresses from the originating
switch to the terminating switch, following a path through a separate SS7 signaling network
composed of signal transfer points. This mode is more economical for large networks with
lightly loaded signaling links. The quasi-associated mode of signaling is the predominant choice
of modes in North America.[12]
Physical network
SS7 is an out-of-band signaling protocol, i.e. separate from the bearer channels that carry the
voice or data. This separation extends onto the physical network by having circuits that are solely
dedicated to carrying the SS7 links, clearly splitting the signaling plane from the voice circuits. An
SS7 network has to be made up of SS7-capable equipment from end to end in order to provide its
full functionality. The network is made up of several link types (A, B, C, D, E, and F) and three
signaling nodes - Service switching point (SSPs), signal transfer point (STPs), and service
control point (SCPs). Each node is identified on the network by a number, a point code.
Extended services are provided by a database interface at the SCP level using the SS7 network.
The links between nodes are full-duplex 56, 64, 1,536, or 1,984 kbit/s graded communications
channels. In Europe they are usually one (64 kbit/s) or all (1,984 kbit/s) timeslots (DS0s) within
an E1 facility; in North America one (56 or 64 kbit/s) or all (1,536 kbit/s) timeslots (DS0As or
DS0s) within a T1 facility. One or more signaling links can be connected to the same two
endpoints that together form a signaling link set. Signaling links are added to link sets to increase
the signaling capacity of the link set.
In Europe, SS7 links normally are directly connected between switching exchanges using F-links. This direct connection is called associated signaling. In North America, SS7 links are
normally indirectly connected between switching exchanges using an intervening network of
STPs. This indirect connection is called quasi-associated signaling. Quasi-associated signaling
reduces the number of SS7 links necessary to interconnect all switching exchanges and SCPs in
an SS7 signaling network.[13]
SS7 links at higher signaling capacity (1.536 and 1.984 Mbit/s, simply referred to as the 1.5
Mbit/s and 2.0 Mbit/s rates) are called high speed links (HSL) in contrast to the low speed (56
and 64 kbit/s) links. High speed links are specified in ITU-T Recommendation Q.703 for the
1.5 Mbit/s and 2.0 Mbit/s rates, and ANSI Standard T1.111.3 for the 1.536 Mbit/s rate. There are
differences between the specifications for the 1.5 Mbit/s rate. High speed links utilize the entire
bandwidth of a T1 (1.536 Mbit/s) or E1 (1.984 Mbit/s) transmission facility for the transport of
SS7 signaling messages.[14]
SIGTRAN provides signaling using SCTP associations over the Internet Protocol.[15] The
protocols for SIGTRAN are M2PA, M2UA, M3UA and SUA.
SS7 protocol suite
OSI layer      SS7 protocols
Application    INAP, MAP, IS-41, ...; TCAP, CAP, ISUP, ...
Network        MTP Level 3 + SCCP
Data link      MTP Level 2
Physical       MTP Level 1
The SS7 protocol stack borrows partially from the OSI Model of a packetized digital protocol
stack. OSI layers 1 to 3 are provided by the Message Transfer Part (MTP) and the Signalling
Connection Control Part (SCCP) of the SS7 protocol (together referred to as the Network
Service Part (NSP)); for circuit related signaling, such as the Telephone User Part (TUP) or the
ISDN User Part (ISUP), the User Part provides layer 7. Currently there are no protocol
components that provide OSI layers 4 through 6.[1] The Transaction Capabilities Application
Part (TCAP) is the primary SCCP User in the Core Network, using SCCP in connectionless
mode. SCCP in connection oriented mode provides the transport layer for air interface protocols
such as BSSAP and RANAP. TCAP provides transaction capabilities to its Users (TC-Users),
such as the Mobile Application Part, the Intelligent Network Application Part and the
CAMEL Application Part.
The Message Transfer Part (MTP) covers a portion of the functions of the OSI network layer
including: network interface, information transfer, message handling and routing to the higher
levels. Signalling Connection Control Part (SCCP) is at functional Level 4. Together with MTP
Level 3 it is called the Network Service Part (NSP). SCCP completes the functions of the OSI
network layer: end-to-end addressing and routing, connectionless messages (UDTs), and
management services for users of the Network Service Part (NSP).[16] Telephone User Part
(TUP) is a link-by-link signaling system used to connect calls. ISDN User Part (ISUP) is the
key user part, providing a circuit-based protocol to establish, maintain, and end the connections
for calls. Transaction Capabilities Application Part (TCAP) is used to create database queries and
invoke advanced network functionality, or links to Intelligent Network Application Part (INAP)
for intelligent networks, or Mobile Application Part (MAP) for mobile services.
5.Explain about ISDN D- CHANNEL
ISDN D-Channel Operation
OSI Model Conformance
ISDN operations conform to the general layers specified in the OSI model as indicated in the
diagram below:
Layer 1 (Physical Services) are the actual framing and line coding used by the Basic Rate and
Primary Rate ISDN lines. This layer is described in other ISDN sections.
Layer 2 (Data Link) is described herein. The D-Channel utilizes an HDLC (High-level Data Link
Control) protocol called LAPD, the "Link Access Procedure D-channel". This is somewhat
analogous to the Data Link layer in X.25 communications, known as LAPB (Link Access
Protocol Balanced) and Frame Relay's equivalent: (LAPF). Operation of LAPD is detailed in
ITU Recommendation Q.921.
Layer 3 (Network) is also described herein. This is known as DSS 1 (Digital Subscriber
Signalling System #1) and is detailed in ITU Recommendation Q.931. This function is
responsible for the actual setup and teardown of ISDN calls.
Layer 2 Description
• Flag
The Flag character is part of the Header information and is used to maintain
synchronization between transmitter and receiver. The Flag character is a hexadecimal
"7E", or binary "01111110". The Flag character may be appended to the end of a block
(trailer) as well. In most point-to-point HDLC applications, each end will "idle" flags
(transmit flags continuously) between frames.
• Address
The Address is two bytes (octets) long and consists of three fields: (1) the Service
Access Point Identifier (SAPI), (2) a Command/Response (C/R) bit, and (3) the Terminal
Endpoint Identifier (TEI). (A short parsing sketch for this address field is given at the
end of this list.)
• Control
The Control field may be either one or two octets (bytes) in length. The Control field
indicates one of three frame formats: (1) Information format, (2) Supervisory Format, or
(3) Unnumbered format. The Control field is two bytes long for the Information and
Supervisory formats, and a single byte in length for the Unnumbered format. Only the
Information and certain types of Unnumbered frames may have an information field
associated.
The Information format contains Layer 3 data, but some information may be sent in
Unnumbered Information frames or XID frames. The Unnumbered Information frame is
useful because it is often used when requesting TEI assignment from the ISDN switch.
The SABME (Set Asynchronous Balanced Mode Extended) is a type of Unnumbered
frame that reinitializes the data link layer. The XID frame is often used to exchange
parameters, such as window sizes, timer values, and frame sizes after TEI acquisition.
RR (Receiver Ready), RNR (Receiver Not Ready), and REJ (Reject) messages utilize the
Supervisory frame format.
• Information
The Information field is used to carry Layer 3 Call Control (Q.931) data. In certain cases,
it may carry Unnumbered Information data (TEI assignment) or XID (Connection
Management/parameter negotiation) information.
• CRC Error Check
Each frame is suffixed with two octets (bytes) of a CRC code (Cyclic Redundancy
Check). This is commonly called the FCS (Frame Check Sequence). The CRC is
generated by the transmitting entity and compared at the receiver for errors.
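The two address octets described above pack the SAPI, C/R bit and TEI together with address-extension (EA) bits. The sketch below splits them following the Q.921 layout (SAPI in bits 8-3 of the first octet, C/R in bit 2, EA in bit 1; TEI in bits 8-2 of the second octet, EA in bit 1); treat the exact bit positions as an assumption drawn from Q.921 rather than from this text.

```python
# Sketch: split a LAPD address field (two octets) into SAPI, C/R and TEI,
# assuming the Q.921 layout noted in the lead-in.

def decode_lapd_address(octet1: int, octet2: int):
    sapi = (octet1 >> 2) & 0x3F   # bits 8-3 of first octet
    c_r = (octet1 >> 1) & 0x01    # command/response bit
    ea0 = octet1 & 0x01           # 0 = address field continues
    tei = (octet2 >> 1) & 0x7F    # bits 8-2 of second octet
    ea1 = octet2 & 0x01           # 1 = last address octet
    return {"SAPI": sapi, "C/R": c_r, "TEI": tei, "EA0": ea0, "EA1": ea1}

# Example: SAPI 0 (call control), C/R 0, TEI 64
print(decode_lapd_address(0x00, (64 << 1) | 1))
```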
Layer 2 Bit Stuffing
The Flag character is a unique bit-sequence consisting of 01111110 (hexadecimal "7E"). Within
an HDLC frame, the flag character is not allowed to appear, so a "bit stuffing" algorithm is
employed.
Between Flag characters, the transmitter will insert a "0"-bit after any sequence of 5 consecutive
"1"s.
At the receiving end, if 6 consecutive "1"s are received, the character is interpreted to be a Flag
character. If 7 or more consecutive "1"s are received, the receiver interprets this as an abort
sequence and ignores the received frame. If the receiver gets five consecutive "1"s followed by a
"0", the zero is simply deleted.
TIP: This operation can be used to provide a unique "fix". For example, if your digital facility is
sensitive to consecutive zeroes (e.g. AMI), it's possible to run with an "inverted D-Channel". Most
of the carrier-class PRI ISDN switches have some type of support for this operation. In this
manner, the T1 line will never have more than 6 consecutive "0"s on the line except in cases of
aborted frames!
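The zero-insertion rule described above is easy to express directly on a string of bits. The sketch below is a deliberately simple illustration of the transmitter-side stuffing and the receiver-side deletion; real implementations work on hardware shift registers rather than Python strings, and flag/abort detection is left out.

```python
# Sketch of HDLC/LAPD zero-bit stuffing between flags.

def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed zero
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Delete the '0' that follows every run of five consecutive '1's."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        i += 1
        if run == 5:
            # next bit should be the stuffed '0'; six or more '1's would mean
            # a flag or an abort, which this sketch does not handle
            if i < len(bits) and bits[i] == "0":
                i += 1
            run = 0
    return "".join(out)

data = "0111111101111101"
assert bit_destuff(bit_stuff(data)) == data
print(bit_stuff(data))
```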
Layer 3 Description
• Protocol Discriminator
The Protocol Discriminator is part of the Layer 3 header information. It is a single byte
(octet) that is usually set to a value of 00001000 (hexadecimal "08"), meaning Q.931
Call Maintenance. It has been observed that AT&T (now Lucent) PRI systems have also
used the value 00000011 (hexadecimal "03") for maintenance purposes (putting
B-Channels in-service/out-of-service).
• Call Reference Value
The Call Reference value consists of either two or three bytes (octets). BRI systems have
a 7-bit Call Reference value (127 references) and use two bytes. PRI systems have
a 15-bit Call Reference value (32767 references) and use three bytes. The Call Reference
value has no particular end-to-end significance; either end can assign an arbitrary value.
The Call Reference value is used to associate messages with a particular channel connection.
It is possible to have two different calls using the same Call Reference value. This is
permissible because the Call Reference Flag bit indicates which end of the link
originated the call. The following diagram depicts a BRI Call Reference field:
• Message Type
The Message Type is a single byte (octet) that indicates what type of message is being
sent/received:
There are four general categories of messages that might be present: Call Establishment,
Call Information, Call Clearing, and Miscellaneous. Generally, the most useful messages
to understand are the Call Establishment and Call Clearing messages.
• Information Elements
Each type of message has Mandatory and Optional Information Elements associated
with it. The Information Element is identified with a single byte (octet). While there are
a few single octet (byte) Information Elements, most have multiple octets associated with
them.
When Information Elements consist of multiple octets, the byte (octet) immediately
following the Information Element Identifier describes how many bytes (octets)
are in the Information Element. Therefore, the receiver knows where to start looking for
the next Information Element in a message. (A short parsing sketch that walks a complete
Layer 3 message is given after the list of elements below.)
Some of the Information Elements are listed below:
1. Bearer Capability (identifies transport requirements of the requested B-Channel)
2. Cause (identifies reasons for disconnect or incomplete calls)
3. Channel Identification (identifies type and number of B-Channel(s) requested)
4. Progress Indicator (indicates status of outgoing call)
5. Network Specific Facilities (useful for North American PRI calls - identifies
network type, Carrier ID, Carrier Service Type [WATS/SDN/ASDS, etc.])
6. Calling Party Number (identifies caller)
7. Calling Party Number Subaddress
8. Called Party Number (destination number, type of number [unknown, etc.],
numbering plan [national, etc.])
9. Called Party Number Subaddress
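Putting the Layer 3 pieces above together (protocol discriminator, call reference, message type, then a sequence of information elements, each carrying its own length octet), a receiver can walk a Q.931 message with very little logic. The Python sketch below is only an illustration of the octet layout described in this text, not a validated Q.931 decoder (codeset shifting and element-specific contents are not interpreted), and the example message bytes are hypothetical.

```python
# Sketch: walk the octets of a Q.931 message as laid out above.
# Assumes the message starts at the protocol discriminator (Layer 2
# framing already removed).

def parse_q931(msg: bytes):
    pd = msg[0]                          # protocol discriminator (0x08 = Q.931)
    cr_len = msg[1] & 0x0F               # length of the call reference value
    call_ref = msg[2:2 + cr_len]         # 1 octet on BRI, 2 octets on PRI
    msg_type = msg[2 + cr_len]           # e.g. 0x05 = Setup, 0x45 = Disconnect
    elements, i = [], 3 + cr_len
    while i < len(msg):
        ie_id = msg[i]
        if ie_id & 0x80:                 # single octet information element
            elements.append((ie_id, b""))
            i += 1
        else:                            # identifier, length, then contents
            length = msg[i + 1]
            elements.append((ie_id, msg[i + 2:i + 2 + length]))
            i += 2 + length
    return pd, call_ref, msg_type, elements

# Hypothetical Setup message carrying a Called party number IE (0x70):
setup = bytes([0x08, 0x01, 0x05, 0x05, 0x70, 0x03, 0x81, 0x31, 0x32])
print(parse_q931(setup))
```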
UNIT – V
SECTION – A
1. Data compression, source coding or bit-rate reduction is the process of encoding
information using fewer bits than the original representation
2. Lossy image compression is used in digital cameras, to increase storage capacities with
minimal degradation of picture quality.
3. The presentation layer is layer 6 of the seven-layer OSI model of computer networking
4. File transfer is a generic term for the act of transmitting files over a computer network or
the Internet.
5. The translation of data into a secret code is Encryption
6. Two types of file transfers are Pull-based, Push-based
7. The session layer is layer 5 of the seven-layer OSI model of computer networking.
8. Authentication is the act of confirming the truth of an attribute of a datum or entity
9. The Common Management Information Protocol (CMIP) is the OSI specified network
management protocol.
10. Compression is useful because it helps reduce the consumption of expensive resources, such
as hard disk space or transmission bandwidth.
SECTION – B
1.Explain about Data Compression
Data compression
In computer science and information theory, data compression, source coding or bit-rate
reduction is the process of encoding information using fewer bits than the original
representation would use.
Compression is useful because it helps reduce the consumption of expensive resources, such as
hard disk space or transmission bandwidth. On the downside, compressed data must be
decompressed to be used, and this extra processing may be detrimental to some applications. For
instance, a compression scheme for video may require expensive hardware for the video to be
decompressed fast enough to be viewed as it is being decompressed (the option of
decompressing the video in full before watching it may be inconvenient, and requires storage
space for the decompressed video). The design of data compression schemes therefore involves
trade-offs among various factors, including the degree of compression, the amount of distortion
introduced (if using a lossy compression scheme), and the computational resources required to
compress and uncompress the data. Compression was one of the main drivers for the growth of
information during the past two decades[1].
Lossless versus lossy compression
Lossless compression algorithms usually exploit statistical redundancy in such a way as to
represent the sender's data more concisely without error. Lossless compression is possible
because most real-world data has statistical redundancy. For example, in English text, the letter
'e' is much more common than the letter 'z', and the probability that the letter 'q' will be followed
by the letter 'z' is very small. Another kind of compression, called lossy data compression or
perceptual coding, is possible if some loss of fidelity is acceptable. Generally, a lossy data
compression will be guided by research on how people perceive the data in question. For
example, the human eye is more sensitive to subtle variations in luminance than it is to
variations in color. JPEG image compression works in part by "rounding off" some of this less-important information. Lossy data compression provides a way to obtain the best fidelity for a
given amount of compression.
Lossy
Lossy image compression is used in digital cameras, to increase storage capacities with
minimal degradation of picture quality. Similarly, DVDs use the lossy MPEG-2 Video codec for
video compression.
In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or
less audible) components of the signal. Compression of human speech is often performed with
even more specialized techniques, so that "speech compression" or "voice coding" is sometimes
distinguished as a separate discipline from "audio compression". Different audio and speech
compression standards are listed under audio codecs. Voice compression is used in Internet
telephony for example, while audio compression is used for CD ripping and is decoded by audio
players.
Lossless
The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless
storage. DEFLATE is a variation on LZ which is optimized for decompression speed and
compression ratio, but compression can be slow. DEFLATE is used in PKZIP, gzip and PNG.
LZW (Lempel–Ziv–Welch) is used in GIF images. Also noteworthy are the LZR (LZ–Renau)
methods, which serve as the basis of the Zip method. LZ methods utilize a table-based
compression model where table entries are substituted for repeated strings of data. For most LZ
methods, this table is generated dynamically from earlier data in the input. The table itself is
often Huffman encoded (e.g. SHRI, LZX). A current LZ-based coding scheme that performs
well is LZX, used in Microsoft's CAB format.
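To make the "table entries substituted for repeated strings" idea concrete, here is a compact LZW-style encoder in Python. It builds its string table on the fly from earlier input, exactly as described above; it is a teaching sketch (byte-oriented, table never reset), not the exact variant used by GIF or any particular tool.

```python
# Minimal LZW-style encoder: the table of previously seen strings grows
# dynamically and output codes stand in for repeated substrings.

def lzw_compress(data: bytes):
    table = {bytes([i]): i for i in range(256)}   # start with all single bytes
    next_code = 256
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # keep extending the current match
        else:
            out.append(table[w])      # emit code for the longest known prefix
            table[wc] = next_code     # remember the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```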
The very best modern lossless compressors use probabilistic models, such as prediction by
partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of
statistical modelling.
In a further refinement of these techniques, statistical predictions can be coupled to an algorithm
called arithmetic coding. Arithmetic coding, invented by Jorma Rissanen, and turned into a
practical method by Witten, Neal, and Cleary, achieves superior compression to the better-known Huffman algorithm, and lends itself especially well to adaptive data compression tasks
where the predictions are strongly context-dependent. Arithmetic coding is used in the bilevel
image-compression standard JBIG, and the document-compression standard DjVu. The text
entry system, Dasher, is an inverse-arithmetic-coder.
Theory
The theoretical background of compression is provided by information theory (which is closely
related to algorithmic information theory) for lossless compression, and by rate–distortion
theory for lossy compression. These fields of study were essentially created by Claude
Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s.
Coding theory is also related. The idea of data compression is deeply connected with statistical
inference.
Machine learning
There is a close connection between machine learning and compression: a system that predicts
the posterior probabilities of a sequence given its entire history can be used for optimal data
compression (by using arithmetic coding on the output distribution), while an optimal
compressor can be used for prediction (by finding the symbol that compresses best, given the
previous history). This equivalence has been used as justification for data compression as a
benchmark for "general intelligence".[2]
2.Explain about presentation layer
Presentation layer
The OSI model
7 Application layer
6 Presentation layer
5 Session layer
4 Transport layer
3 Network layer
2 Data link layer
• LLC sublayer
• MAC sublayer
1 Physical layer
The presentation layer is layer 6 of the seven-layer OSI model of computer networking and
serves as the data translator for the network.[1][2] It is sometimes called the syntax layer.[3]
Description
The presentation layer is responsible for the delivery and formatting of information to the
application layer for further processing or display.[4] It relieves the application layer of concern
regarding syntactical differences in data representation within the end-user systems. An
example of a presentation service would be the conversion of an EBCDIC-coded text computer
file to an ASCII-coded file.
The presentation layer is the lowest layer at which application programmers consider data
structure and presentation, instead of simply sending data in form of datagrams or packets
between hosts. This layer deals with issues of string representation - whether they use the Pascal
method (an integer length field followed by the specified amount of bytes) or the C/C++ method
(null-terminated strings, e.g. "thisisastring\0"). The idea is that the application layer should be
able to point at the data to be moved, and the presentation layer will deal with the rest.
Serialization of complex data structures into flat byte-strings (using mechanisms such as TLV
or XML) can be thought of as the key functionality of the presentation layer.
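As a tiny illustration of the representation issues mentioned above, the snippet below serializes the same string the "Pascal" way (length prefix) and the "C" way (null terminator), plus a minimal TLV record of the kind the presentation layer might carry. The TLV field layout chosen here (1-byte tag, 1-byte length) is only an assumption for the example, not a standard encoding.

```python
# Two wire representations of the same string, plus a toy TLV record.
import struct

text = "thisisastring"

pascal_style = struct.pack("B", len(text)) + text.encode("ascii")  # length-prefixed
c_style = text.encode("ascii") + b"\x00"                           # null-terminated

# Toy TLV: 1-byte tag, 1-byte length, then the value (layout assumed for the example)
TAG_STRING = 0x01
tlv = bytes([TAG_STRING, len(text)]) + text.encode("ascii")

print(pascal_style, c_style, tlv, sep="\n")
```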
Encryption is typically done at this level too, although it can be done on the application,
session, transport, or network layers, each having its own advantages and disadvantages.[1]
Decryption is also handled at the presentation layer. For example, when logging off bank
account sites the presentation layer will decrypt the data as it is received.[1] Another example is
representing structure, which is normally standardized at this level, often by using XML. As well
as simple pieces of data, like strings, more complicated things are standardized in this layer. Two
common examples are 'objects' in object-oriented programming, and the exact way that
streaming video is transmitted.
In many widely used applications and protocols, no distinction is made between the presentation
and application layers. For example, HyperText Transfer Protocol (HTTP), generally regarded
as an application-layer protocol, has presentation-layer aspects such as the ability to identify
character encoding for proper conversion, which is then done in the application layer.
Within the service layering semantics of the OSI network architecture, the presentation layer
responds to service requests from the application layer and issues service requests to the session
layer.
3.Explain about Encryption
Encryption
The translation of data into a secret code. Encryption is the most effective way to achieve data
security. To read an encrypted file, you must have access to a secret key or password that
enables you to decrypt it. Unencrypted data is called plain text; encrypted data is referred to as
cipher text.
There are two main types of encryption: asymmetric encryption (also called public-key
encryption) and symmetric encryption.
In cryptography, encryption is the process of transforming information (referred to as
plaintext) using an algorithm (called a cipher) to make it unreadable to anyone except those
possessing special knowledge, usually referred to as a key. The result of the process is
encrypted information (in cryptography, referred to as ciphertext). In many contexts, the word
encryption also implicitly refers to the reverse process, decryption (e.g. “software for
encryption” can typically also perform decryption), to make the encrypted information readable
again (i.e. to make it unencrypted).
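As a hedged, minimal example of the plaintext-to-ciphertext round trip described above, the snippet below uses symmetric (secret key) encryption via the widely used third-party Python `cryptography` package and its Fernet recipe; the package and key handling are assumptions of the example, not something prescribed by this text.

```python
# Symmetric encryption round trip using the third-party "cryptography" package
# (pip install cryptography). Fernet bundles AES-based encryption with an
# integrity check.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the shared secret key
cipher = Fernet(key)

plaintext = b"transfer 100 to account 42"
ciphertext = cipher.encrypt(plaintext)     # unreadable without the key
recovered = cipher.decrypt(ciphertext)

assert recovered == plaintext
print(ciphertext)
```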
Encryption has long been used by militaries and governments to facilitate secret communication.
Encryption is now commonly used in protecting information within many kinds of civilian
systems. For example, the Computer Security Institute reported that in 2007, 71% of
companies surveyed utilized encryption for some of their data in transit, and 53% utilized
encryption for some of their data in storage.[1] Encryption can be used to protect data "at rest",
such as files on computers and storage devices (e.g. USB flash drives). In recent years there
have been numerous reports of confidential data such as customers' personal records being
exposed through loss or theft of laptops or backup drives. Encrypting such files at rest helps
protect them should physical security measures fail. Digital rights management systems which
prevent unauthorized use or reproduction of copyrighted material and protect software against
reverse engineering (see also copy protection) are another somewhat different example of
using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks
(e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom
systems, Bluetooth devices and bank automatic teller machines. There have been numerous
reports of data in transit being intercepted in recent years.[2] Encrypting data in transit also helps
to secure it as it is often difficult to physically secure all access to networks.
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still
needed to protect the integrity and authenticity of a message; for example, verification of a
message authentication code (MAC) or a digital signature. Standards and cryptographic
software and hardware to perform encryption are widely available, but successfully using
encryption to ensure security may be a challenging problem. A single slip-up in system design or
execution can allow successful attacks. Sometimes an adversary can obtain unencrypted
information without directly undoing the encryption. See, e.g., traffic analysis, TEMPEST, or
Trojan horse.
One of the earliest public key encryption applications was called Pretty Good Privacy (PGP).
It was written in 1991 by Phil Zimmermann and was purchased by Symantec in 2010.[3]
Digital signature and encryption must be applied at message creation time (i.e. on the same
device it has been composed) to avoid tampering. Otherwise any node between the sender and
the encryption agent could potentially tamper with it.
4. Explain about File transfer
File transfer
File transfer is a generic term for the act of transmitting files over a computer network or the
Internet. There are numerous ways and protocols to transfer files over a network. Computers
which provide a file transfer service are often called file servers. Depending on the client's
perspective the data transfer is called uploading or downloading. File transfer for the enterprise
now increasingly is done with Managed File Transfer.
There are 2 types of file transfers:
• Pull-based file transfers, where the receiver initiates a file transmission request
• Push-based file transfers, where the sender initiates a file transmission request.
File transfer can take place over a variety of levels:
• Transparent file transfers over network file systems
• Explicit file transfers from dedicated file transfer services like FTP or HTTP
• Distributed file transfers over peer-to-peer networks like BitTorrent or Gnutella
• In IBM Systems Network Architecture, LU 6.2 peer-to-peer file transfer programs such
as CA, Inc.'s XCOM Data Transport
• File transfers over instant messaging or LAN messenger
• File transfers between computers and peripheral devices
• File transfers over direct modem or serial (null modem) links, such as XMODEM,
YMODEM and ZMODEM.
Protocols
A file transfer protocol is a convention that describes how to transfer files between two
computing endpoints. They are meant solely to send the stream of bits stored as a single unit in a
file system, plus any relevant metadata such as the filename, file size and timestamp. File
transfer protocols usually operate on top of a lower-level protocol in a protocol stack. For
example, the HTTP protocol operates at the topmost application layer of the TCP/IP stack,
whereas XMODEM, YMODEM, and ZMODEM typically operate across RS-232 serial
connections.
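As a small, concrete example of an "explicit file transfer over HTTP" of the kind listed above, the snippet below pulls a file using only Python's standard library; the URL and local filename are placeholders for the example.

```python
# Pull-based file transfer over HTTP using only the standard library.
import urllib.request

url = "https://example.com/somefile.txt"   # placeholder URL
local_name = "somefile.txt"                # placeholder local filename

with urllib.request.urlopen(url) as response, open(local_name, "wb") as out:
    out.write(response.read())             # the stream of bits that makes up the file

print("saved", local_name)
```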
5. Explain about virtual terminal
Virtual terminal
In open systems, a virtual terminal (VT) is an application service that:
1. Allows host terminals on a multi-user network to interact with other hosts regardless of
terminal type and characteristics,
2. Allows remote log-on by local area network managers for the purpose of management,
3. Allows users to access information from another host processor for transaction
processing,
4. Serves as a backup facility.
PuTTY is an example of a virtual terminal.
ITU-T defines a virtual terminal protocol based on the OSI application layer protocols.
However, the virtual terminal protocol is not widely used on the Internet.
SECTION - C
1. Explain about session layer protocols
Session layer
The OSI model
7 Application layer
6 Presentation layer
5 Session layer
4 Transport layer
3 Network layer
2 Data link layer
• LLC sublayer
• MAC sublayer
1 Physical layer
OSI Session Layer Protocol (X.225, ISO 8327)
The OSI Session Layer Protocol (ISO-SP) provides the session management, e.g. opening and
closing of sessions. In case of a connection loss it tries to recover the connection. If a connection
is not used for a longer period, the session layer may close it down and re-open it for next use.
This happens transparently to the higher layers. The Session layer provides synchronization
points in the stream of exchanged packets.
The Session Protocol Machine (SPM), an abstract machine that carries out the procedures
specified in the session layer protocol, communicates with the session service user (SS-user)
through a session-service-access-point (SSAP) by means of the service primitives. Service
primitives will cause or be the result of session protocol data unit exchanges between the peer
SPMs using a transport connection. These protocol exchanges are effected using the services of
the transport layer.
Session connection endpoints are identified in end systems by an internal, implementation
dependent, mechanism so that the SS-user and the SPM can refer to each session connection.
The functions in the Session Layer are those necessary to bridge the gap between the services
available from the Transport Layer and those offered to the SS-users.
The functions in the Session Layer are concerned with dialogue management, data flow
synchronization, and data flow resynchronization.
These functions are described below; the descriptions are grouped into those concerned with the
connection establishment phase, the data transfer phase, and the release phase.
Protocol Structure - ISO-SP: OSI Session Layer Protocol (X.225, X.215, ISO 8327, 8326)
ISO Session Protocol (ISO-SP) Messages:
Functional unit             SPDU code   SPDU name
Kernel                      CN          CONNECT
                            OA          OVERFLOW ACCEPT
                            CDO         CONNECT DATA OVERFLOW
                            AC          ACCEPT
                            RF          REFUSE
                            FN          FINISH
                            DN          DISCONNECT
                            AB          ABORT
                            AA          ABORT ACCEPT
                            DT          DATA TRANSFER
                            PR          PREPARE
Negotiated release          NF          NOT FINISHED
                            GT          GIVE TOKENS
                            PT          PLEASE TOKENS
Half-duplex                 GT          GIVE TOKENS
                            PT          PLEASE TOKENS
Duplex                      No additional associated SPDUs
Expedited data              EX          EXPEDITED DATA
Typed data                  TD          TYPED DATA
Capability data exchange    CD          CAPABILITY DATA
                            CDA         CAPABILITY DATA ACK
Minor synchronize           MIP         MINOR SYNC POINT
                            MIA         MINOR SYNC ACK
                            GT          GIVE TOKENS
                            PT          PLEASE TOKENS
Symmetric synchronize       MIP         MINOR SYNC POINT
                            MIA         MINOR SYNC ACK
Data separation             No additional associated SPDUs
Major synchronize           MAP         MAJOR SYNC POINT
                            MAA         MAJOR SYNC ACK
                            PR          PREPARE
                            GT          GIVE TOKENS
                            PT          PLEASE TOKENS
Resynchronize               RS          RESYNCHRONIZE
                            RA          RESYNCHRONIZE ACK
                            PR          PREPARE
Exceptions                  ER          EXCEPTION REPORT
                            ED          EXCEPTION DATA
Activity management         AS          ACTIVITY START
                            AR          ACTIVITY RESUME
                            AI          ACTIVITY INTERRUPT
                            AIA         ACTIVITY INTERRUPT ACK
                            AD          ACTIVITY DISCARD
                            ADA         ACTIVITY DISCARD ACK
                            AE          ACTIVITY END
                            AEA         ACTIVITY END ACK
                            PR          PREPARE
                            GT          GIVE TOKENS
                            PT          PLEASE TOKENS
                            GTC         GIVE TOKENS CONFIRM
                            GTA         GIVE TOKENS ACK
The session layer is layer 5 of the seven-layer OSI model of computer networking.
The session layer provides the mechanism for opening, closing and managing a session between
end-user application processes, i.e., a semi-permanent dialogue. Communication sessions consist
of requests and responses that occur between applications. Session-layer services are commonly
used in application environments that make use of remote procedure calls (RPCs).
An example of a session-layer protocol is the OSI protocol suite session-layer protocol, also
known as X.225 or ISO 8327. In case of a connection loss this protocol may try to recover the
connection. If a connection is not used for a long period, the session-layer protocol may close it
and re-open it. It provides for either full duplex or half-duplex operation and provides
synchronization points in the stream of exchanged messages.[1]
Other examples of session layer implementations include Zone Information Protocol (ZIP) –
the AppleTalk protocol that coordinates the name binding process, and Session Control Protocol
(SCP) – the DECnet Phase IV session-layer protocol.
Within the service layering semantics of the OSI network architecture, the session layer responds
to service requests from the presentation layer and issues service requests to the transport
layer.
2. Explain about authentication
Authentication
Authentication (from Greek: αὐθεντικός; real or genuine, from αὐθέντης authentes; author) is
the act of confirming the truth of an attribute of a datum or entity. This might involve confirming
the identity of a person, tracing the origins of an artifact, ensuring that a product is what its
packaging and labeling claims to be, or assuring that a computer program is a trusted one.
Authentication methods
In art, antiques, and anthropology, a common problem is verifying that a person is who they
claim to be, that a given artifact was produced by a certain person, or that it was produced in a
certain place or period of history.
There are three types of techniques for doing this.
The first type of authentication is accepting proof of identity given by a credible person who has
first-hand evidence that the identity is genuine, or that the object under assessment is the
originator's own artifact.
The second type of authentication is comparing the attributes of the object itself to what is known
about objects of that origin. For example, an art expert might look for similarities in the style of
painting, check the location and form of a signature, or compare the object to an old photograph.
An archaeologist might use carbon dating to verify the age of an artifact, do a chemical
analysis of the materials used, or compare the style of construction or decoration to other
artifacts of similar origin. The physics of sound and light, and comparison with a known physical
environment, can be used to examine the authenticity of audio recordings, photographs, or
videos.
Attribute comparison may be vulnerable to forgery. In general, it relies on the fact that creating a
forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are
easily made, or that the amount of effort required to do so is considerably greater than the
amount of money that can be gained by selling the forgery.
In art and antiques certificates are of great importance, authenticating an object of interest and
value. Certificates can, however, also be forged and the authentication of these pose a problem.
For instance, the son of Han van Meegeren, the well-known art-forger, forged the work of his
father and provided a certificate for its provenance as well; see the article Jacques van
Meegeren.
Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for
falsification, depending on the risk of getting caught.
The third type of authentication relies on documentation or other external affirmations. For
example, the rules of evidence in criminal courts often require establishing the chain of custody
of evidence presented. This can be accomplished through a written evidence log, or by testimony
from the police detectives and forensics staff that handled it. Some antiques are accompanied by
certificates attesting to their authenticity. External records have their own problems of forgery
and perjury, and are also vulnerable to being separated from the artifact and lost.
Currency and other financial instruments commonly use the first type of authentication method.
Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or
engraving, distinctive feel, watermarks, and holographic imagery, which are easy for receivers
to verify.
Consumer goods such as pharmaceuticals, perfume, and fashion clothing can use any of these
authentication methods to prevent counterfeit goods from taking advantage of a popular brand's
reputation (damaging the brand owner's sales and reputation). A trademark is a legally
protected marking or other identifying feature which aids consumers in the identification of
genuine brand-name goods.
Authentication factors and identity
The ways in which someone may be authenticated fall into three categories, based on what are
known as the factors of authentication: something you know, something you have, or something
you are. Each authentication factor covers a range of elements used to authenticate or verify a
person's identity prior to being granted access, approving a transaction request, signing a
document or other work product, granting authority to others, and establishing a chain of
authority.
Security research has determined that for a positive identification, elements from at least two,
and preferably all three, factors should be verified.[1] The three factors (classes) and some of the elements of
each factor are:
• the ownership factors: Something the user has (e.g., wrist band, ID card, security
token, software token, phone, or cell phone)
• the knowledge factors: Something the user knows (e.g., a password, pass phrase, or
personal identification number (PIN), challenge response (the user must answer a
question))
• the inherence factors: Something the user is or does (e.g., fingerprint, retinal pattern,
DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice,
unique bio-electric signals, or other biometric identifier).
Two-factor authentication
When elements representing two factors are required for identification, the term two-factor
authentication is applied, e.g. a bankcard (something the user has) and a PIN (something the
user knows). Business networks may require users to provide a password (knowledge factor) and
a pseudorandom number from a security token (ownership factor). Access to a very high
security system might require a mantrap screening of height, weight, facial, and fingerprint
checks (several inherence factor elements) plus a PIN and a day code (knowledge factor
elements), but this is still a two-factor authentication.
Product authentication
A Security hologram label on an electronics box for authentication
Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer
goods such as electronics, music, apparel, and Counterfeit medications have been sold as being
legitimate. Efforts to control the supply chain and educate consumers to evaluate the packaging
and labeling help ensure that authentic products are sold and used. Even security printing on
packages, labels, and nameplates, however, is subject to counterfeiting.
Information content
The authentication of information can pose special problems (especially man-in-the-middle
attacks), and is often wrapped up with authenticating identity.
Literary forgery can involve imitating the style of a famous author. If an original manuscript,
typewritten text, or recording is available, then the medium itself (or its packaging - anything
from a box to e-mail headers) can help prove or disprove the authenticity of the document.
However, text, audio, and video can be copied into new media, possibly leaving only the
informational content itself to use in authentication.
Various systems have been invented to allow authors to provide a means for readers to reliably
authenticate that a given message originated from or was relayed by them. These involve
authentication factors like:
• A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special
stationery, or fingerprint.
• A shared secret, such as a passphrase, in the content of the message (a minimal sketch
of this approach is given after this list).
• An electronic signature; public key infrastructure is often used to cryptographically
guarantee that a message has been signed by the holder of a particular private key.
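For the shared-secret case in the list above, a message authentication code is the usual realization: both sides hold the same secret, and the receiver recomputes the MAC to check that the message was produced by someone holding it. A minimal sketch with Python's standard hmac module follows; the key and message are of course placeholders.

```python
# Shared-secret message authentication with an HMAC (standard library only).
import hmac, hashlib

secret = b"placeholder shared secret"
message = b"meet at the usual place at 10:00"

tag = hmac.new(secret, message, hashlib.sha256).hexdigest()   # sent along with the message

# Receiver side: recompute the tag and compare in constant time.
expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
print("authentic" if hmac.compare_digest(tag, expected) else "rejected")
```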
The opposite problem is detection of plagiarism, where information from a different author is
passed off as a person's own work. A common technique for proving plagiarism is the discovery
of another copy of the same or very similar text, which has different attribution. In some cases
excessively high quality or a style mismatch may raise suspicion of plagiarism.
Factual verification
Determining the truth or factual accuracy of information in a message is generally considered a
separate problem from authentication. A wide range of techniques, from detective work to fact
checking in journalism, to scientific experiment might be employed.
Video authentication
It is sometimes necessary to authenticate the veracity of video recordings used as evidence in
judicial proceedings. Proper chain-of-custody records and secure storage facilities can help
ensure the admissibility of digital or analog recordings by the Court.
History and state-of-the-art
Historically, fingerprints have been used as the most authoritative method of authentication, but
recent court cases in the US and elsewhere have raised fundamental doubts about fingerprint
reliability. Outside of the legal system as well, fingerprints have been shown to be
easily spoofable, with British Telecom's top computer-security official noting that "few"
fingerprint readers have not already been tricked by one spoof or another.[2] Hybrid or two-tiered
authentication methods offer a compelling solution, such as private keys encrypted by fingerprint
inside of a USB device.
In a computer data context, cryptographic methods have been developed (see digital signature
and challenge-response authentication) which are currently not spoofable if and only if the
originator's key has not been compromised. That the originator (or anyone other than an
attacker) knows (or doesn't know) about a compromise is irrelevant. It is not known whether
these cryptographically based authentication methods are provably secure since unanticipated
mathematical developments may make them vulnerable to attack in future. If that were to occur,
it may call into question much of the authentication in the past. In particular, a digitally signed
contract may be questioned when a new attack on the cryptography underlying the signature is
discovered.
Access control
One familiar use of authentication and authorization is access control. A computer system
that is supposed to be used only by those authorized must attempt to detect and exclude the
unauthorized. Access to it is therefore usually controlled by insisting on an authentication
procedure to establish with some degree of confidence the identity of the user, granting
privileges established for that identity. Common examples of access control involving
authentication include:
• Asking for photo ID when a contractor first arrives at a house to perform work.
• Using captcha as a means of asserting that a user is a human being and not a computer
program.
• A computer program using a blind credential to authenticate to another program
• Entering a country with a passport
• Logging in to a computer
• Using a confirmation E-mail to verify ownership of an e-mail address
• Using an Internet banking system
• Withdrawing cash from an ATM
In some cases, ease of access is balanced against the strictness of access checks. For
example, the credit card network does not require a personal identification number for
authentication of the claimed identity; and a small transaction usually does not even require a
signature of the authenticated person for proof of authorization of the transaction. The
security of the system is maintained by limiting distribution of credit card numbers, and by
the threat of punishment for fraud.
Security experts argue that it is impossible to prove the identity of a computer user with
absolute certainty. It is only possible to apply one or more tests which, if passed, have been
previously declared to be sufficient to proceed. The problem is to determine which tests are
sufficient, and many such are inadequate. Any given test can be spoofed one way or another,
with varying degrees of difficulty.
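As a rough illustration of the authenticate-then-authorize pattern described above, the Python sketch below checks a salted password hash and then looks up the privileges granted to that identity. The user table, privilege names and passwords are invented for the example.

import hashlib, hmac, os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 slows down brute-force guessing of captured hashes.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical user database: identity -> (salt, password hash, privileges)
_salt = os.urandom(16)
USERS = {
    "alice": (_salt, hash_password("correct horse", _salt), {"login", "withdraw_cash"}),
}

def authenticate(user: str, password: str) -> bool:
    record = USERS.get(user)
    if record is None:
        return False
    salt, stored, _ = record
    return hmac.compare_digest(stored, hash_password(password, salt))

def authorize(user: str, privilege: str) -> bool:
    # Privileges are granted to the authenticated identity, not to the session.
    return privilege in USERS.get(user, (None, None, set()))[2]

if __name__ == "__main__":
    if authenticate("alice", "correct horse") and authorize("alice", "withdraw_cash"):
        print("access granted")
    else:
        print("access denied")

The point of the separation is the one made in the text: passing the authentication test only establishes an identity; what that identity may do is a separate authorization decision.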
3. Explain about Message Handling System (MHS)
Message Handling System
The MH Message Handling System is a free, open source e-mail client. It is different from
almost all other mail reading systems in that, instead of a single program, it is made from several
different programs which are designed to work from the command line provided by the shell on
Unix-like operating systems. Another difference is that rather than storing multiple messages in
a single file, messages each have their own separate file in a special directory. Taken together,
these design choices mean that it is very easy and natural to script actions on mail messages
using the normal shell scripting tools. A descendant of MH continues to be developed under the
name of nmh.
Design
MH is made up of separate programs such as show (to view a message), scan (to see message
titles) and rmm (to remove messages). By using the pick program, it is possible to select messages,
based on sender for example, for the other programs to act on.
Because the different programs are run separately and at different times, communication between
them has to be arranged specially. Information such as the mail which is currently selected is
stored in files (in this case .mh_profile in the user's own directory).
MH follows the Unix philosophy of "write programs that do one thing and do it well; write
programs to work together; write programs to handle text streams, because that is a universal
interface" (Doug McIlroy).
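Because MH keeps each message in its own file, ordinary scripting tools can operate on mail directly. The Python sketch below imitates a tiny scan-like listing over a folder containing one message per file; the folder path is an assumption and this is not MH's own code.

import os
from email import message_from_binary_file

MAIL_FOLDER = os.path.expanduser("~/Mail/inbox")   # assumed MH-style folder

def scan(folder: str):
    """Yield (number, sender, subject) for every message file in the folder."""
    for name in sorted(os.listdir(folder), key=lambda n: int(n) if n.isdigit() else 0):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as fp:
            msg = message_from_binary_file(fp)
        yield name, msg.get("From", "?"), msg.get("Subject", "(no subject)")

if __name__ == "__main__":
    for num, sender, subject in scan(MAIL_FOLDER):
        print(f"{num:>4}  {sender:<30}  {subject}")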
Message Handling System
IC R3 supports two messaging servers: a multi-protocol message transfer agent (MTA), and a
multi-protocol message store (MS).
The MTA implements both the X.400/MHS 1988 and Internet standards for messaging and
includes support for the following:
• Protocol channels for message exchange using X.400 (1984, 1988, and 1992 versions), SMTP, DECnet Mail, UUCP, and fax.
• Class 2 Fax Access Unit providing delivery of fax messages.
• Submission and delivery of messages in a wide range of formats including X.400 (P3), RFC822 submission/delivery, P3-file, shell and structured delivery.
• Body-part and content-type conversion facilities including MIME-MHS.
• Distribution-list processing.
• Comprehensive authorization support.
• An API for the development of additional channels is provided.
The above functionality includes numerous major enhancements since the last release.
The Message Store provides a message store database with a defined API, a P7 server [X.413],
an LMAP server, and a message delivery channel for the MTA. Indirect message submission is
provided via the P7 and LMAP servers.
Multi-Protocol Message Transfer Agent
The ISODE Consortium MTA is a high-performance, high-capability multi-protocol message
transfer agent. It implements both the X.400/MHS 1988 and SMTP standards for messaging
including their related content-types and addressing. It supports sophisticated and extensible
content and body-part-type conversion capabilities. This provides the capability to gateway
between different types of mail system, in particular between X.400 and Internet mail.

X.400 Conformance
The capabilities of the MTA are based on the service elements and protocols defined by the
MHS (MOTIS) series of standards [X.400, ISO/IEC 10021] for the Message Transfer Service.
All the Basic Services are supported. All the Essential Optional User Facilities are supported. All
the Additional Optional User Facilities are supported with the exception of deferred delivery
cancellation and explicit conversion.

Functional Groups
The following functional groups are supported:
• Conversion (CV)
• Distribution List (DL)
• Redirection (RED)
• Latest Delivery (LD)
• Return of Contents (RoC)
• Use of Directory (DIR) (excluding use of Directory Name when O/R Address does not identify a user and use of Directory to determine a recipient's capabilities)
• 1984 Interworking (84IW) (excluding internal trace information)
The Physical Delivery (PD) and Security (SEC) functional groups are not supported.

Queue Manager
The operation of an MTA is based around a queue. The queue is managed by a Queue Manager
process (qmgr) which invokes various so-called channels to process messages. Processing can
include conversion when necessary but mostly involves relaying messages to the next MTA
along their route or performing final delivery for local users.
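The queue-manager idea can be pictured with a small sketch: queued messages are handed, one at a time, to the channel registered for their next hop. The channel names, routing rule and data structures below are purely illustrative, not the product's implementation.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class Message:
    recipient: str
    body: str
    eits: set = field(default_factory=set)   # encoded information types

# Hypothetical channel table: channel name -> handler function
def smtp_channel(msg):   print(f"relaying to {msg.recipient} via SMTP")
def x400_channel(msg):   print(f"relaying to {msg.recipient} via X.400 P1")
def local_channel(msg):  print(f"delivering to {msg.recipient} locally")

CHANNELS = {"smtp": smtp_channel, "x400": x400_channel, "local": local_channel}

def route(msg: Message) -> str:
    # Toy routing rule standing in for the MTA's table/Directory lookup.
    domain = msg.recipient.split("@")[-1]
    if domain == "example.com":
        return "local"
    return "x400" if domain.endswith(".x400.example") else "smtp"

def queue_manager(queue: deque):
    # Repeatedly pull a queued message and invoke the appropriate channel.
    while queue:
        msg = queue.popleft()
        CHANNELS[route(msg)](msg)

if __name__ == "__main__":
    q = deque([Message("bob@example.com", "hi"), Message("carol@other.org", "hello")])
    queue_manager(q)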

Security
In IC R3, the message security element type definitions have been aligned with the most recent
X.400 specifications.
Message Transfer
Protocol channels are used to exchange messages with adjacent MTAs. Different channels are
used to support different standards and network environments. Most channels support both
inbound and outbound message transfer. Inbound transfer is generally invoked by the remote
system and the appropriate channel is started by a network daemon upon reception of the
connection. Outbound channels are run under the control of the queue manager.

X.400 and RTSE
The X.400 channels support both the 1984 and 1988 recommendations including the mts-transfer
(P1-1988, RTSE normal-mode), mts-transfer-protocol (P1-1988, RTSE X.410(84)-mode) and
mts-transfer-protocol-1984 (P1-1984, RTSE X.410(84)-mode) application contexts.
Full RTSE recovery is supported for both inbound and outbound transfers.
All three application contexts can be supported by a receiving channel on a single Session
Address.

X.400 Downgrade
The sending channel will perform downgrading of P1-1988 to P1-1984 when sending using mts-transfer-protocol-1984. A message will be non-delivered if downgrading fails (according to the
rules of X.411, Annex B).
RTSE recovery in the mts-transfer and mts-transfer-protocol application contexts, P1-1988 to
P1-1984 downgrading, and single-address multi-context responder channel capability are
significant enhancements in IC R3.0.
MTA-MTA authentication is password-based with an access control table, and message
authentication and authorization remain the same in IC R3 as in IC R2.
MTA-provided X.400 strong authentication services may be added in a future release.

SMTP
The Simple Mail Transfer Protocol (SMTP) [RFC821], conforming to the host requirements for
messaging [RFC 1123], is supported. The inbound channel can operate in stand-alone mode or
from the standard TCP network daemon (usually inetd).

DECnet
Interworking with DECnet Mail is available for Sun systems with Sunlink DNI.

UUCP
The Unix-to-Unix-Copy subsystem can be used with the UUCP channels.

Fax Access Unit
The fax access unit supports outgoing faxes via fax modems with Class 2 capability.
Drivers for two other (almost obsolete) modems are also available. These are the Panasonic
Systemfax 250 and the Fujitsu dexNet200.
Fax headers may be customized with a logo, and facsimile (graphics) and text body parts may be
mixed.
Submission and Delivery
Message submission and delivery are independent mechanisms. Local submission and delivery
may be achieved by various supplied channels and programs and an API is available.

P3 Submission and Delivery
A channel pair implements P3 protocol access to the ISODE Consortium MTA.

P3 File-based Delivery
The p3-file channel provides local submission and delivery of X.400 P2 messages.
Messages are exchanged via files in the format of P3 protocol submission and delivery PDUs.
This mechanism is compatible with that of certain other MTAs.

RFC822 Submission
Programs that look like the standard mail and sendmail programs are provided and these can be
used as substitutes for the normal UNIX utilities subject to a few restrictions.
The SMTP channel can also be used for RFC822 submission and this is a common mechanism
employed by user agents, particularly MH and PC agents.

RFC822 Delivery
The 822-local channel provides for local delivery of messages in RFC822 format. Sophisticated
local (per-system and per-user) tailoring is provided and both standard mailbox formats are
supported. Tools are provided to give users asynchronous notification of the delivery of new
messages.

Program Delivery
The shell channel can be used to deliver messages to programs.

Application Programming Interface
A proprietary API provides full control over the submission mechanism and can be used for both
the MTS (UA to MTA message submission) and MTA (MTA to MTA message transfer)
services.
Message Conversion Facilities
The MTA provides both content-conversion and body-part conversion facilities. It also provides
for normalization of message header information.

Body-part Conversion
Body parts are more correctly called encoded information types (EITs). Individual body-parts
can be converted from one to another using conversion filters. Typically this is used for
converting a text body part from one character set to another: for example, from T.61 (Teletex) to
IA5 (international alphabet 5, similar to ASCII) as specified in [X.408]. Such conversions are
often necessary as part of content-type conversion.
The necessary conversions are calculated when a message is first submitted and they may be reevaluated as processing progresses.
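A body-part (EIT) conversion filter can be thought of as a small function keyed by a (source, target) pair of types. The Python sketch below converts a Latin-1 text body to ASCII as a stand-in for the T.61-to-IA5 case (Python has no T.61 codec); the EIT names and filter table are assumptions for the example.

# Hypothetical EIT conversion filters keyed by (source EIT, target EIT).
def latin1_to_ascii(data: bytes) -> bytes:
    # Stand-in for a T.61 -> IA5 filter: decode, then drop non-ASCII characters.
    return data.decode("latin-1").encode("ascii", errors="replace")

FILTERS = {
    ("text/latin1", "text/ia5"): latin1_to_ascii,
}

def convert_body_part(data: bytes, source: str, target: str) -> bytes:
    if source == target:
        return data
    try:
        return FILTERS[(source, target)](data)
    except KeyError:
        raise ValueError(f"no conversion filter from {source} to {target}")

if __name__ == "__main__":
    body = "café".encode("latin-1")
    print(convert_body_part(body, "text/latin1", "text/ia5"))   # b'caf?'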

Content-type Conversion
Content-type conversion is used to convert between the formats of different messaging
standards: for example, between X.400 P2 and RFC822. It is done by exploding a message
content into its individual elements (usually a header and one or more body parts). These
elements are then individually converted as if they were normal EITs. Finally, the resultant parts
are flattened to produce the output content type.

X.400-Internet Interoperability
Channels to convert between RFC822 and X.400 P2 according to the rules of [RFC1327] are
provided. Conversion of message headers is performed according to [RFC1494], [RFC1495] and
[RFC1496].

MIME <--> MHS Content Conversion
Channels are provided for the conversion of the message body-part types of X.400 and those of
MIME [RFC1521, RFC1522].

Header Normalization
It is often desirable to rewrite header information, in particular to normalize addresses by
rewriting them in some canonical form. Header normalization is provided and uses the
generic EIT conversion capabilities. Channels for the normalization of RFC822 and P2 headers
are provided.
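Header normalization can be illustrated by rewriting addresses into one canonical form. The sketch below lower-cases the domain part of each address in selected headers; this is only one possible normalization rule, not the MTA's actual table-driven behaviour.

from email.utils import formataddr, getaddresses

def canonical(display: str, addr: str) -> str:
    """Rewrite one address into a canonical form (illustrative rule only)."""
    if "@" in addr:
        local, domain = addr.rsplit("@", 1)
        addr = f"{local}@{domain.lower()}"   # canonicalize the domain part
    return formataddr((display, addr))

def normalize_headers(headers: dict) -> dict:
    out = dict(headers)
    for field in ("From", "To", "Cc"):
        if field in out:
            pairs = getaddresses([out[field]])
            out[field] = ", ".join(canonical(d, a) for d, a in pairs)
    return out

if __name__ == "__main__":
    print(normalize_headers({"From": "Alice <Alice@Example.COM>",
                             "To": "bob@EXAMPLE.com, Carol <carol@Example.Com>"}))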

File-transfer Body Part (FTBP) Submission, Conversion & Spooling
The X.420(1992) Recommendation adds the specification of a file-transfer body part (FTBP),
based on FTAM document types. IC R3.0 includes functionality to support the submission,
conversion and delivery of such body parts.
A conversion filter is provided that will encode an FTBP as text (using uuencode) so that it may
be relayed to destinations not capable of receiving FTBPs. This will probably be replaced with a
MIME-compatible encoding in a future release.
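The idea of re-encoding a binary file-transfer body part as text can be sketched with Python's uuencoding primitive in binascii; this only illustrates the principle and is not the Consortium's filter. The file name and framing lines are assumptions.

import binascii

def uuencode_body(data: bytes, name: str = "ftbp.bin") -> str:
    """Encode binary body-part data as uuencoded text, 45 bytes per line."""
    lines = [f"begin 644 {name}"]
    for i in range(0, len(data), 45):
        lines.append(binascii.b2a_uu(data[i:i + 45]).decode("ascii").rstrip("\n"))
    lines += ["`", "end"]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(uuencode_body(b"binary file contents that cannot travel as an FTBP"))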
A delivery mechanism is provided for FTBPs which cannot be handled directly by a user's user
agent. This is in the form of a filter which stores the transferred FTBP in a spool area together
with catalogue information about it, and then replaces the FTBP in the message itself with a text
body part describing the replaced FTBP.
The user may retrieve the stored file and catalogue information with the fex command. This filter
and the fex program, specifically the location of each user's spool area, are configured using a
table with the same format as that used by the 822-local channel.
A simple user-agent program is provided to enable a user to send a file as an FTBP.
Distribution List Processing
Expansion of distribution lists is provided by two channels, one which uses plain files for the
specification of list membership, and another which uses the Directory.
The list expansion facilities are content-type independent. They can work with any type of
message content which can be handled by the MTA.
Tools are provided for list maintenance.
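List expansion amounts to replacing a list address with its members before routing. A minimal file-based sketch follows; the membership directory, file format and addresses are assumptions for the example.

import os

LIST_DIR = "/etc/mail/lists"   # assumed directory of membership files

def expand_recipients(recipients, list_dir: str = LIST_DIR):
    """Replace any recipient that names a list file with the list's members."""
    expanded = []
    for rcpt in recipients:
        member_file = os.path.join(list_dir, rcpt.split("@")[0])
        if os.path.isfile(member_file):
            with open(member_file) as fp:
                # One member address per line; blank lines and comments ignored.
                expanded += [line.strip() for line in fp
                             if line.strip() and not line.startswith("#")]
        else:
            expanded.append(rcpt)
    return expanded

if __name__ == "__main__":
    print(expand_recipients(["staff@example.com", "bob@example.com"]))

Note that, as in the text, the expansion logic is independent of the message content type: it works purely on recipient addresses.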
Configuration
The heart of the MTA configuration is a straightforward tailor file. This is used to specify:
• the available channels (protocol, submission & delivery, conversion, and various housekeeping ones);
• the tables used for routing, address-format conversion (according to RFC1327), and those used by individual channels;
• the EITs (body parts and header types) understood by the various channels;
• many other individual items of configuration.
Outside the tailor file, tables are used for many aspects of configuration. Some of these can be
replaced or augmented by use of the X.500 Directory and the Domain Name System (DNS).

X.500 Configuration
A large part of configuration, in particular addressing and routing, may be performed using data
stored in the X.500 Directory according to the Internet MHS-DS specification. Local MHS users,
typically Message Store users, may be entirely configured using the Directory. A GUI is
provided for management.

Addressing and Routing
The main elements of configuration concern addressing and the consequent message routing.
The system is extensible, so that simple use of the MTA requires only simple configuration;
each of the more sophisticated capabilities requires additional, incremental configuration
elements.
The following are the main elements related to addressing and routing:
• local users with no interconnection to other MTAs;
• configuration as a pure X.400 MTA with a small number (possibly one) of bi-lateral connections;
• configuration as a pure RFC822 Internet-connected MTA with DNS-based configuration of communications links;
• a gateway between X.400 and RFC822 mail systems;
• mail hub operation supporting multiple addressing domains;
• support for separate internal and external addressing styles;
• additional facilities such as the fax access unit and distribution lists.
Authorization
Comprehensive authorization allows for control on security and accounting grounds and enables
policy-based routing. Overall control is based upon a 3-level hierarchy of pairwise (sender and
receiver) elements: channels, adjacent MTAs, users. This can be refined on the basis of message
content and size.
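The pairwise, three-level authorization model can be pictured as a series of (sender, receiver) checks at the channel, MTA and user levels, optionally refined by message size. The policy table, names and limits below are invented for the example.

# Hypothetical policy: (level, sender, receiver) -> maximum message size in bytes
POLICY = {
    ("channel", "smtp-in", "x400-out"): 10_000_000,
    ("mta", "mta.partner.example", "local-mta"): 5_000_000,
    ("user", "alice@example.com", "bob@example.com"): 1_000_000,
}

def authorized(checks, size: int) -> bool:
    """Every pairwise check must exist in the policy and permit the size."""
    for level, sender, receiver in checks:
        limit = POLICY.get((level, sender, receiver))
        if limit is None or size > limit:
            return False
    return True

if __name__ == "__main__":
    checks = [("channel", "smtp-in", "x400-out"),
              ("mta", "mta.partner.example", "local-mta"),
              ("user", "alice@example.com", "bob@example.com")]
    print(authorized(checks, 200_000))     # True
    print(authorized(checks, 2_000_000))   # False: exceeds the user-level limit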
Management
The IC R3 MTA supports a rich set of management facilities.
Graphics and terminal-based console programs allow the state of an MTA to be monitored. This
can provide an overview or examine the system in progressively greater detail down to the level
of an individual message.
In privileged mode, connections can be enabled or disabled at the levels of system, channel, peer
MTA, message and recipient.

SNMP
An SNMP agent permits integration of MTA monitoring into a general network-management
system.
Comprehensive logging facilities allow the tracking and accounting of all messages and provide
extensive trouble-shooting and debugging capabilities.
Message Store
Overview
The Multi-Protocol Message Store was introduced in IC R2. It serves as an intermediary between
User Agents and the Message Transfer Agent, accepting delivery of messages on the user's
behalf and storing them for subsequent retrieval. The Message Store also provides an indirect
submission service and facilities for searching, listing and deleting messages. The additional
functionality of the Multi-Protocol Message Store, as compared with a standard X.413 Message
Store, allows partial and complete messages to be stored and manipulated within the Message
Store prior to submission.
The key to the Multi-Protocol Message Store is its internal API, which makes the Message Store
multi-protocol in two senses:
• It stores multiple message formats in a single abstract format. For IC R3, P2 and MIME message formats are supported.
• It allows access by multiple protocols. For IC R3.0, P7(88), P3(88), and LMAP are provided.
The Lightweight Mail Access Protocol (LMAP), an added-value ISODE Consortium proprietary
protocol, defines both the service specification of the API and the access protocol used to
provide remote access for native client applications. LMAP takes many of its design cues from
the successful Lightweight Directory Access Protocol (LDAP). The protocol's simplicity is
intended to facilitate the wide-scale deployment of X.400 User Agents on workstations and PCs.

Message Store Components
The Message Store itself consists of several components:
• a database that implements the LMAP protocol semantics;
• a P7 process which uses the database to provide an X.413 Message Store service;
• a Message Store delivery channel which allows the PP Message Transfer Agent to deliver messages into the Message Store.
Security
LMAP and P7 are defined to have simple and strong authentication. Simple authentication is
available from the server, P7 and LMAP libraries in IC R3. In IC R4, strong authentication will
be provided by the P7 and LMAP servers and protocol libraries.
The appropriate X.400(88) P1 security protocol elements will be accessible through both
protocols and in the MTA and MS library routines.
LMAP Protocol
The Lightweight Mail Access Protocol provides facilities for accessing a message store,
including message submission, and access to and management of folders, messages and
components of messages.

Design Goals
Some of the design goals for the Lightweight Mail Access Protocol were:
• Provision of a lightweight X.400 access protocol, in the style of LDAP, to facilitate the implementation and deployment of X.400 UAs on small machines.
• Support for RFC 822 messaging, to facilitate mixed RFC 822/X.400 working and to provide an RFC 822 message access protocol superior to others available.
• Provision of a means to give protocol access to various LAN mail systems. A particular goal was to design a protocol which could be implemented as a MAPI DLL (providing integration with various PC network mail systems).
In addition, the protocol design aimed to provide:
• A core protocol that is simple but with good extensibility.
• An abstraction that is a useful basis for a User Agent.
• A protocol that can easily and efficiently map on to a range of underlying message database technologies.
• Integration with LDAP, so that combined mail and directory operations can be handled over the same association.
• Facilities to allow a message to be represented as either an atomic object or in a structured manner, as required by client applications.
Why Not Use an Existing Protocol?
The primary reason for designing a new messaging protocol, instead of using the existing P7 or
IMAP (Interactive Mail Access Protocol) protocols, as the basis for the ISODE Consortium's
Message Store is that the very flexible information and access model offered by P7 makes
building a database which is cleanly aligned to the protocol very difficult. Using such a database
to support other protocols (such as RFC 822) is still more problematical.
Other reasons for the design of a new protocol included:
• Both P7 and IMAP have dissimilar client models for submission and access. This increases the complexity of both the Message Store and User Agents.
• The ASN.1 describing P7 is complex, necessitating extensive libraries for the manipulation of the resulting data structures. This could be a particular problem on small machine implementations.
• P7 does not provide any general facilities for folder and message manipulation.
• P7's generalized treatment of body parts as attributes is awkward to implement efficiently for large body parts.
The LMAP Information Model
LMAP represents information (messages) in two ways, as objects and as attributes:
Objects are typed entities which are named within an LMAP Message Store, including folders,
messages, message headers, message envelopes and atomic body parts.
Objects are (potentially large) collections of attributes. The protocol transfer of objects is
designed to allow fragmented reception or transmission, removing the need for applications to
hold the whole of a large object in memory.
Attributes are typed entities which are found within objects. They are identified by their type,
rather than by a name.
An attribute consists of a type/value pair; the value of an attribute is either a plain text encoding
(e.g. an rfc822-mailbox attribute might be stored as "ic.info@isode.com") or a text wrapping
around an ASN.1 encoding (e.g. a P7 Content Identifier of "12345", represented in ASN.1 as
"[APPLICATION 10] 12345", would be stored as "6A053132333435").

LMAP Protocol Operations
The abstract operations provided by the LMAP protocol are as follows:
• Bind: associate a client with the LMAP Message Store.
• Unbind: terminate an association.
• Create: create a new instance of a specified object type (folder, message or body part).
• Remove: delete a particular instance of an object.
• Modify: change an existing object (e.g., add attributes, change a body part).
• Move: move a message or a range of messages from one folder to another.
• Copy: copy an atomic body part or message to make a new instance.
• Modify Annotation: alter the annotations associated with an object. Annotations are attributes associated with an object which may be modified by the user but are not accessed when a message is submitted.
• Retrieve: transfer objects from the Message Store to the user. The mode of transfer of objects can be controlled.
• Sort: sort messages in a folder keyed by a selected attribute.
• Scan: provide ordered listings of folders, messages and body parts.
• Find: search collections of messages for particular attribute values.
• Abort: terminate a partially completed Retrieve, Search or Scan operation.
• Verify: verify an address to the maximum extent possible by the Message Store prior to submission.
• Submit: submit a particular message or probe to the MTA.
• Delivery: identify to the Message Store that a message under construction is complete. This operation is typically used by a delivery channel to indicate that the message concerned can be "seen" by a UA accessing the same mailbox.
4. Explain about CMIP
Common Management Information Protocol
The Common Management Information Protocol (CMIP) is the OSI-specified network
management protocol, defined in ITU-T Recommendation X.711 (ISO/IEC International
Standard 9596-1). It
provides an implementation for the services defined by the Common Management
Information Service (CMIS) specified in ITU-T Recommendation X.710, ISO/IEC
International Standard 9595, allowing communication between network management
applications and management agents. CMIS/CMIP is the network management protocol
specified by the ISO/OSI Network management model and is further defined by the ITU-T in
the X.700 series of recommendations.
CMIP models management information in terms of managed objects and allows both the
modification of managed objects and the performance of actions on them. Managed objects are described using
GDMO (Guidelines for the Definition of Managed Objects), and can be identified by a
distinguished name (DN), from the X.500 directory.
CMIP also provides good security (supporting authorization, access control, and security logs) and
flexible reporting of unusual network conditions.
Services implemented
The management functionality implemented by CMIP is described under CMIS services.
In a typical Telecommunications Management Network, a network management system will
make use of the management operation services to monitor network elements. Management
agents found on network elements will make use of the management notification services to
send notifications or alarms to the network management system.
Deployment
CMIP is implemented in association with the ACSE and ROSE protocols. Both are Layer 7 OSI
protocols (Application Layer). ACSE is used to manage associations between management
applications (i.e. manage connections between CMIP agents). ROSE is employed for all data
exchange interactions. Besides the presence of these Layer 7 protocols, CMIP assumes the
presence of all OSI layers at lower levels but does not explicitly specify what these should be.
There have been some attempts to adapt CMIP to the TCP/IP protocol stack. Most notable is
CMOT (CMIP over TCP/IP), specified in RFC 1189. Other possibilities include RFC
1006 (which provides an ISO transport service on top of TCP), and CMIP over LPP (a
presentation layer protocol that can run on top of TCP or UDP).
There is also a form of CMIS developed to operate directly on top of the IEEE 802 LLC sublayer.
It is called the LAN/MAN Management Protocol (LMMP), formerly known as Common
Management Information Services and Protocol over IEEE 802 Logical Link Control (CMOL).
This protocol does away with the need for the full OSI stack that CMIP requires.