JRE SCHOOL OF ENGINEERING
CLASS TEST-II EXAMINATIONS, MARCH 2014
(For CS and IT branches only)

Subject Name: Computer Network          Subject Code: ECS601
Date: 28 Mar 2014                       Max Marks: 20
Time: 09:20 to 10:20 AM                 Max Duration: 60 min
Roll No. of Student: ______________

SOLUTIONS
SECTION – A (2 marks × 2 questions = 4 marks)
NOTE: ATTEMPT ANY TWO QUESTIONS
1. What is a bridge?
Ans: A bridge is a device that filters data traffic at a network boundary. Bridges reduce the
amount of traffic on a local area network (LAN) by dividing it into two segments.
Bridges operate at the data link layer (Layer 2) of the OSI model. Bridges inspect
incoming traffic and decide whether to forward or discard it. An Ethernet bridge, for
example, inspects each incoming Ethernet frame, including the source and
destination MAC addresses (and sometimes the frame size), when making individual
forwarding decisions.
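As a rough sketch of the filtering decision described above (not any real bridge's implementation), the following Python fragment models a learning bridge with two hypothetical segments, LAN1 and LAN2; the MAC strings and the handle_frame helper are illustrative only:

    # Hypothetical two-segment learning bridge: learn which segment each source MAC
    # lives on, then forward a frame only if the destination is on the other segment.

    mac_table = {}  # learned mapping: MAC address -> segment name ("LAN1" or "LAN2")

    def handle_frame(src_mac, dst_mac, arrived_on):
        mac_table[src_mac] = arrived_on          # learn the sender's location
        known_segment = mac_table.get(dst_mac)
        if known_segment == arrived_on:
            return "discard (destination is on the same segment)"
        if known_segment is None:
            return "flood to the other segment (destination not yet learned)"
        return "forward to " + known_segment

    print(handle_frame("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", "LAN1"))  # flood
    print(handle_frame("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", "LAN1"))  # discard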
2. What is bit stuffing and where is it used?
Ans: Bit stuffing is used by framing methods that allow data frames to contain an arbitrary
number of bits and allow character codes with an arbitrary number of bits per character. At the start and
end of each frame is a flag byte consisting of the special bit pattern 01111110.
Whenever the sender's data link layer encounters five consecutive 1s in the data, it
automatically stuffs a zero bit into the outgoing bit stream. This technique is called bit
stuffing. When the receiver sees five consecutive 1s in the incoming data stream,
followed by a zero bit, it automatically de-stuffs the 0 bit. The boundary between two
frames can be determined by locating the flag pattern.
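A minimal Python sketch of the stuffing and de-stuffing rules described above, operating on strings of '0' and '1' characters for readability; the function names are illustrative:

    FLAG = "01111110"  # flag pattern that delimits each frame

    def bit_stuff(bits):
        """Insert a 0 after every run of five consecutive 1s in the payload."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:
                out.append("0")  # stuffed bit
                run = 0
        return "".join(out)

    def bit_destuff(bits):
        """Remove the 0 that follows every run of five consecutive 1s."""
        out, run, skip_next = [], 0, False
        for b in bits:
            if skip_next:        # this is the stuffed 0 inserted by the sender
                skip_next = False
                run = 0
                continue
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:
                skip_next = True
                run = 0
        return "".join(out)

    payload = "0111110111111"
    frame = FLAG + bit_stuff(payload) + FLAG   # what actually goes on the wire
    assert bit_destuff(bit_stuff(payload)) == payload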
3. What is Hamming distance? Differentiate between congestion control and flow control.
Ans: In information theory, the Hamming distance between two strings of equal
length is the number of positions at which the corresponding symbols are different. Put
another way, it measures the minimum number of substitutions required to change
one string into the other, or the minimum number of errors that could have
transformed one string into the other (a short sketch appears at the end of this answer).
The difference between congestion control and flow control is as follows:
Flow control is controlled by the receiving side. It ensures that the sender only
sends what the receiver can handle. Think of a situation where someone with a fast
fiber connection is sending to someone on a dial-up link. The sender could transmit
packets very quickly, but that would be useless to the receiver on dial-up, so the
receiver needs a way to throttle what the sending side can send. Flow control deals
with the mechanisms available to ensure that this communication goes smoothly.
Congestion control is a method of ensuring that everyone across a network has a
"fair" amount of access to network resources, at any given time. In a mixed-network
environment, everyone needs to be able to assume the same general level of
performance. A common scenario that helps in understanding this is an office LAN. You have
a number of LAN segments in an office, all doing their own thing within the LAN, but
they may all need to go out over a WAN link that is slower than the constituent LAN
segments. Picture having 100 Mbps connections within the LAN that ultimately go out
through a 5 Mbps WAN link. Some kind of congestion control needs to be in place
there to ensure there are no issues across the greater network.
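The sketch referred to in the Hamming distance part of this answer: a minimal Python illustration assuming two equal-length strings (the function name is illustrative):

    def hamming_distance(a, b):
        """Number of positions at which two equal-length strings differ."""
        if len(a) != len(b):
            raise ValueError("Hamming distance is defined only for equal-length strings")
        return sum(x != y for x, y in zip(a, b))

    print(hamming_distance("10110", "10011"))  # 2 (the strings differ in two positions)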
SECTION – B (3 marks × 2 questions = 6 marks)
NOTE: ATTEMPT ANY TWO QUESTIONS
1. List and explain the flow control methods used in the data link layer (DLL).
Ans: Mechanisms for Flow Control:
Stop and Wait Protocol: This is the simplest flow control protocol, in
which the sender transmits a frame and then waits for an
acknowledgement, either positive or negative, from the receiver before
proceeding. If a positive acknowledgement is received, the sender
transmits the next packet; else it retransmits the same frame. However,
this protocol has one major flaw in it. If a packet or an acknowledgement
is completely destroyed in transit due to a noise burst, a deadlock will
occur because the sender cannot proceed until it receives an
acknowledgement. This problem may be solved using timers on the
sender's side. When the frame is transmitted, the timer is set. If there is
no response from the receiver within a certain time interval, the timer
goes off and the frame may be retransmitted.
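As an illustration only, a rough Python sketch of such a stop-and-wait sender over UDP; the 1-bit sequence number, the 2-second timeout, and the assumption that a cooperating receiver echoes the sequence number back as an ACK are choices made for the sketch:

    import socket

    ACK_TIMEOUT = 2.0   # assumed: seconds to wait for an ACK before retransmitting

    def stop_and_wait_send(sock, dest, frames):
        """Send one frame at a time; retransmit whenever the ACK timer goes off."""
        sock.settimeout(ACK_TIMEOUT)
        for seq, payload in enumerate(frames):
            seq_bit = seq % 2                      # alternating 1-bit sequence number
            while True:
                sock.sendto(bytes([seq_bit]) + payload, dest)
                try:
                    ack, _ = sock.recvfrom(16)
                    if ack and ack[0] == seq_bit:  # positive ACK for this frame
                        break                       # proceed to the next frame
                except socket.timeout:
                    pass                            # timer went off: retransmit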
Sliding Window Protocols: In spite of the use of timers, the stop and
wait protocol still suffers from a few drawbacks. Firstly, if the receiver
has the capacity to accept more than one frame, its resources are being
underutilized. Secondly, if the receiver is busy and does not wish to
receive any more packets, it may delay the acknowledgement. However,
the timer on the sender's side may go off and cause an unnecessary
retransmission. These drawbacks are overcome by the sliding window
protocols.
In sliding window protocols the sender's data link layer maintains a
'sending window' which consists of a set of sequence numbers
corresponding to the frames it is permitted to send. Similarly, the
receiver maintains a 'receiving window' corresponding to the set of
frames it is permitted to accept. The window size is dependent on the
retransmission policy and it may differ in values for the receiver's and
the sender's window. The sequence numbers within the sender's window
represent the frames sent but not yet acknowledged. Whenever a new
packet arrives from the network layer, the upper edge of the window is
advanced by one. When an acknowledgement arrives from the receiver
the lower edge is advanced by one. The receiver's window corresponds
to the frames that the receiver's data link layer may accept. When a frame
with sequence number equal to the lower edge of the window is received,
it is passed to the network layer, an acknowledgement is generated and
the window is rotated by one. If, however, a frame falling outside the
window is received, the receiver's data link layer has two options. It may
either discard this frame and all subsequent frames until the desired
frame is received or it may accept these frames and buffer them until the
appropriate frame is received and then pass the frames to the network
layer in sequence.
For example, with a 4-byte sliding window, the window "slides" from left
to right as bytes in the stream are sent and acknowledged.
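A small sketch of the sender-side window bookkeeping described above; the SenderWindow class, the window size of 4, and the cumulative-ACK handling are illustrative assumptions:

    from collections import deque

    WINDOW_SIZE = 4  # assumed sending-window size for the sketch

    class SenderWindow:
        """Track sequence numbers that have been sent but not yet acknowledged."""

        def __init__(self):
            self.lower = 0                # oldest unacknowledged sequence number
            self.upper = 0                # next sequence number to hand out
            self.outstanding = deque()

        def can_send(self):
            return self.upper - self.lower < WINDOW_SIZE

        def next_frame(self):
            """A new packet arrived from the network layer: advance the upper edge."""
            assert self.can_send()
            seq = self.upper
            self.outstanding.append(seq)
            self.upper += 1
            return seq

        def on_ack(self, acked_seq):
            """A cumulative acknowledgement arrived: advance the lower edge."""
            while self.outstanding and self.outstanding[0] <= acked_seq:
                self.outstanding.popleft()
                self.lower += 1

    w = SenderWindow()
    print([w.next_frame() for _ in range(4)])  # [0, 1, 2, 3]; the window is now full
    w.on_ack(1)                                # frames 0 and 1 acknowledged
    print(w.can_send())                        # True: the window slid forward by two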
Most sliding window protocols also employ an ARQ (Automatic Repeat
reQuest) mechanism. In ARQ, the sender waits for a positive
acknowledgement before proceeding to the next frame. If no
acknowledgement is received within a certain time interval, it retransmits
the frame. ARQ is of two types:
1. Go Back 'n': If a frame is lost or received in error, the receiver
may simply discard all subsequent frames, sending no
acknowledgments for the discarded frames. In this case the
receive window is of size 1. Since no acknowledgements are being
received the sender's window will fill up, the sender will
eventually time out and retransmit all the unacknowledged frames
in order starting from the damaged or lost frame. The maximum
window size for this protocol can be obtained as follows. Assume
that the window size of the sender is w. So the window will
initially contain the frames with sequence numbers from 0 to (w-1).
Consider that the sender transmits all these frames and the
receiver's data link layer receives all of them correctly. However,
the sender's data link layer does not receive any
acknowledgements as all of them are lost. So the sender will
retransmit all the frames after its timer goes off. However the
receiver's window has already advanced to w. Hence, to avoid
overlap, the sum of the two windows should not exceed the
sequence number space.
w + 1 <= Sequence Number Space
i.e., w < Sequence Number Space
Maximum Window Size = Sequence Number Space - 1
2. Selective Repeat: In this protocol, rather than discarding all the
subsequent frames following a damaged or lost frame, the
receiver's data link layer simply stores them in buffers. When the
sender does not receive an acknowledgement for the first frame,
its timer goes off after a certain time interval and it retransmits
only the lost frame. Assuming error-free transmission this time,
the sender's data link layer will have a sequence of many correct
frames which it can hand over to the network layer. Thus there is
less overhead in retransmission than in the case of the Go Back n
protocol.
In the case of the selective repeat protocol, the window size may be
calculated as follows. Assume that the size of both the sender's
and the receiver's window is w. So initially both of them contain
the values 0 to (w-1). Consider that sender's data link layer
transmits all the w frames, the receiver's data link layer receives
them correctly and sends acknowledgements for each of them.
However, all the acknowledgements are lost and the sender does
not advance its window. The receiver's window at this point
contains the values w to (2w-1). To avoid overlap when the
sender's data link layer retransmits, the sum of these two windows
must not exceed the sequence number space. Hence, we get the
condition
2w <= Sequence Number Space
i.e., Maximum Window Size = Sequence Number Space / 2
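As a quick check of the two limits derived above, a small Python sketch for m-bit sequence numbers (the function name is illustrative):

    def max_window_sizes(seq_bits):
        """Largest usable send window for each ARQ scheme with m-bit sequence numbers."""
        space = 2 ** seq_bits              # size of the sequence number space
        go_back_n = space - 1              # from w + 1 <= Sequence Number Space
        selective_repeat = space // 2      # from 2w <= Sequence Number Space
        return go_back_n, selective_repeat

    print(max_window_sizes(3))  # (7, 4): 3-bit sequence numbers give windows of 7 and 4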
2. What are the issues that a routing algorithm has to deal with?
Ans: The routing algorithm must deal with the following issues:
- Correctness and simplicity: networks are never taken down; individual
parts (e.g., links, routers) may fail, but the whole network should not.
- Stability: if a link or router fails, how much time elapses before the
remaining routers recognize the topology change? (Some never do.)
- Fairness and optimality: an inherently intractable problem. The definition of
optimality usually does not consider fairness. Do we want to maximize
channel usage? Minimize average delay?
3. Explain the function of the Network layer in the OSI model.
SECTION – C (5 marks × 2 questions = 10 marks)
NOTE: ATTEMPT ANY TWO QUESTIONS
1. Differentiate between centralized and distributed routing algorithms.
The centralized routing model carries out routing using a centralized database, while the
distributed routing model carries out routing using a distributed database.
In simple terms, one central node holds the routing table in the centralized model,
while each node keeps its own routing table in the distributed model. Because much of the
information does not change frequently, many believe that this information is better
suited to residing in a centralized database. The pre-computations needed for
restoration can take advantage of the global information available in a centralized
database. But, unlike the distributed routing system, the centralized system cannot be
relied upon to bear the responsibility of the on-demand computation of recovery
paths for each of the light-paths that have failed (at the time a failure is
detected). Unlike the centralized approach, the distributed routing model is
highly consistent with the existing Internet’s own distributed routing philosophy.
2. Explain link state routing; also explain the count-to-infinity problem.
A link-state routing protocol is one of the two main classes of routing
protocols used in packet-switching networks for computer communications (the other
is the distance-vector routing protocol). Examples of link-state routing protocols
include Open Shortest Path First (OSPF) and Intermediate System to Intermediate
System (IS-IS).
The link-state protocol is performed by every switching node in the network (i.e.,
nodes that are prepared to forward packets; in the Internet, these are called routers).
The basic concept of link-state routing is that every node constructs a map of the
connectivity of the network, in the form of a graph, showing which nodes are
connected to which other nodes. Each node then independently calculates the next
best logical path from it to every possible destination in the network. The collection of
best paths will then form the node's routing table.
This contrasts with distance-vector routing protocols, which work by having each
node share its routing table with its neighbors. In a link-state protocol the only
information passed between nodes is connectivity related.
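To make the per-node computation concrete, here is a minimal Python sketch of the shortest-path step using Dijkstra's algorithm over a link-state map; the four-router topology and its link costs are invented for the illustration:

    import heapq

    def shortest_paths(graph, source):
        """Least-cost distance and next hop from `source` over a link-state map.
        `graph` maps each node to a dict of {neighbour: link cost}."""
        dist = {source: 0}
        next_hop = {}
        queue = [(0, source, source)]        # (cost so far, node, first hop used)
        while queue:
            cost, node, hop = heapq.heappop(queue)
            if cost > dist.get(node, float("inf")):
                continue                     # stale queue entry
            for neigh, link_cost in graph[node].items():
                new_cost = cost + link_cost
                if new_cost < dist.get(neigh, float("inf")):
                    dist[neigh] = new_cost
                    next_hop[neigh] = neigh if node == source else hop
                    heapq.heappush(queue, (new_cost, neigh, next_hop[neigh]))
        return dist, next_hop

    # hypothetical topology learned from link-state advertisements
    topology = {
        "A": {"B": 1, "C": 4},
        "B": {"A": 1, "C": 2, "D": 5},
        "C": {"A": 4, "B": 2, "D": 1},
        "D": {"B": 5, "C": 1},
    }
    costs, routing_table = shortest_paths(topology, "A")
    print(routing_table)  # {'B': 'B', 'C': 'B', 'D': 'B'}: everything from A goes via B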
The Bellman–Ford algorithm does not prevent routing loops from happening and
suffers from the count-to-infinity problem. The core of the count-to-infinity problem
is that if A tells B that it has a path somewhere, there is no way for B to know if the
path has B as a part of it. To see the problem clearly, imagine a subnet connected
like A–B–C–D–E–F, and let the metric between the routers be "number of jumps".
Now suppose that A is taken offline. In the vector-update process, B notices that the
route to A, which was distance 1, is down, because B no longer receives the vector update
from A. The problem is that B also gets an update from C, and C is still not aware of the
fact that A is down; so it tells B that A is only two jumps away from C (C to B to A), which
is false. This slowly propagates through the network until the metric reaches infinity (at which
point the algorithm corrects itself, due to the relaxation property of Bellman–Ford).
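A toy Python simulation of this behavior, reduced to just routers B and C from the chain and using a RIP-style "infinity" of 16 hops; both simplifications are assumptions made for the sketch:

    INF = 16             # assumed: metric value the protocol treats as "unreachable"

    # Distances to A right after A goes down: B has lost its direct link,
    # while C still believes its old two-hop route through B.
    dist_b, dist_c = INF, 2

    round_no = 0
    while dist_b < INF or dist_c < INF:
        round_no += 1
        dist_b = min(INF, 1 + dist_c)   # B hears C's vector: one hop to C, plus C's claim
        dist_c = min(INF, 1 + dist_b)   # C hears B's vector: one hop to B, plus B's claim
        print(f"round {round_no}: B thinks {dist_b}, C thinks {dist_c}")

    # The metrics climb 3, 4, 5, ... until both hit the protocol's "infinity".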
3. Explain the concept of choke packets used to control congestion.
A choke packet is used in network maintenance and quality management to
inform a specific node or transmitter that its transmitted traffic is creating
congestion over the network. This forces the node or transmitter to reduce its output
rate.
Choke packets are used for congestion and flow control over a network. The
source node is addressed directly by the router, forcing it to decrease its sending
rate. The source node acknowledges this by reducing its sending rate by some
percentage.
An Internet Control Message Protocol (ICMP) source quench packet is a type of
choke packet normally used by routers.
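A hypothetical Python sketch of this interaction; the queue threshold, the 50% rate cut, and the helper names are assumptions for illustration, not values specified by ICMP source quench:

    QUEUE_THRESHOLD = 50   # assumed: queued packets before the router sends a choke packet
    RATE_CUT = 0.5         # assumed: the source halves its rate per choke packet received

    def router_check_queue(queue_length, source_address):
        """Return a choke packet addressed to the source when the queue is too long."""
        if queue_length > QUEUE_THRESHOLD:
            return ("CHOKE", source_address)   # stand-in for an ICMP source quench message
        return None

    class Source:
        def __init__(self, rate_pps):
            self.rate_pps = rate_pps           # current sending rate, packets per second

        def on_choke_packet(self):
            self.rate_pps *= RATE_CUT          # acknowledge by reducing the sending rate

    sender = Source(rate_pps=1000.0)
    choke = router_check_queue(queue_length=80, source_address="10.0.0.5")
    if choke:
        sender.on_choke_packet()
    print(sender.rate_pps)                     # 500.0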
4. How does routing work for mobile hosts?
Mobile IP enables routing of IP datagrams to mobile nodes. The mobile node's home address
always identifies the mobile node, regardless of its current point of attachment to the Internet or an
organization's network. When away from home, a care-of address associates the mobile node with
its home address by providing information about the mobile node's current point of attachment to
the Internet or an organization's network. Mobile IP uses a registration mechanism to register the
care-of address with a home agent.
The home agent redirects datagrams from the home network to the care-of address by constructing
a new IP header that contains the mobile node's care-of address as the destination IP address.
This new header then encapsulates the original IP datagram, causing the mobile node's home
address to have no effect on the encapsulated datagram's routing until it arrives at the care-of
address. This type of encapsulation is also called tunneling. After arriving at the care-of address,
each datagram is de-encapsulated and then delivered to the mobile node.
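A minimal Python sketch of the encapsulation and de-encapsulation steps just described, using a made-up IPDatagram type rather than real packet formats; the care-of address is assumed, while the home address 128.226.3.30 is taken from the example below:

    from dataclasses import dataclass

    @dataclass
    class IPDatagram:                 # hypothetical stand-in for a real IP packet
        src: str
        dst: str
        payload: object               # user data, or an inner datagram when tunneled

    HOME_ADDRESS = "128.226.3.30"     # the mobile node's permanent home address
    CARE_OF_ADDRESS = "192.0.2.99"    # assumed care-of address on the foreign network

    def home_agent_encapsulate(original):
        """Wrap an intercepted datagram in a new header addressed to the care-of address."""
        return IPDatagram(src="home-agent", dst=CARE_OF_ADDRESS, payload=original)

    def care_of_deencapsulate(tunneled):
        """Strip the outer header and recover the original datagram for the mobile node."""
        return tunneled.payload

    original = IPDatagram(src="198.51.100.7", dst=HOME_ADDRESS, payload=b"hello")
    tunneled = home_agent_encapsulate(original)
    print(tunneled.dst)                            # 192.0.2.99: routed to the care-of address
    print(care_of_deencapsulate(tunneled).dst)     # 128.226.3.30: original destination intact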
The following illustration shows a mobile node residing on its home network, Network A, before the
mobile node moves to a foreign network, Network B. Both networks support Mobile IP. The mobile
node is always associated with its home network by its permanent IP address, 128.226.3.30.
Though Network A has a home agent, datagrams destined for the mobile node are delivered
through the normal IP process.
Figure 1–2 Mobile Node Residing on Home Network
The following illustration shows the mobile node moving to a foreign network, Network B.
Datagrams destined for the mobile node are intercepted by the home agent on the home network,
Network A, encapsulated, and sent to the foreign agent on Network B. Upon receiving the
encapsulated datagram, the foreign agent strips off the outer header and delivers the datagram to
the mobile node visiting Network B.
Figure 1–3 Mobile Node Moving to a Foreign Network
The care-of address might belong to a foreign agent, or might be acquired by the mobile node
through Dynamic Host Configuration Protocol (DHCP) or Point-to-Point Protocol (PPP). In the latter
case, a mobile node is said to have a co-located care-of address.
The mobile node uses a special registration process to keep its home agent informed about its
current location. Whenever a mobile node moves from its home network to a foreign network, or
from one foreign network to another, it chooses a foreign agent on the new network and uses it to
forward a registration message to its home agent.
Mobility agents (home agents and foreign agents) advertise their presence using agent
advertisement messages. A mobile node can optionally solicit an agent advertisement message
from any locally attached mobility agents through an agent solicitation message. A mobile
node receives these agent advertisements and determines whether they are on its home network or
a foreign network.
When the mobile node detects that it is located on its home network, it operates without mobility
services. If returning to its home network from being registered elsewhere, the mobile
node deregisters with its home agent.
************