Computer Network - Model Test Paper

Computer network
Q1 (a) A host with IP address 128.23.67.3 sends a message to a host
with IP address 193.45.23.7. Does the message travel through any
router? Assume no subnetting.
Ans)
Yes. The message must travel through a router because the netids of the two addresses
are different (128.23 versus 193.45.23): the address 128.23.67.3 lies in the range of
class B, while the address 193.45.23.7 lies in the range of class C.
Q1 (b) Why do ATM cells have a small and fixed size?
Ans)
1. Allows better statistical multiplexing of information on a given medium than does
the use of larger packets of variable length.
2. With longer variable-length packets, high-speed switching would be more complex.
3. The limited amount of information in the header allows ATM cells to be processed
at a very high rate from 150 Mbps to several Gbps.
4. Provides speed, flexibility and compatibility with many different applications and
physical media.
5. The reduced size of the internal buffers guarantees a minimal delay and delay jitter.
This is why ATM can handle both constant rate traffic (audio, video) and variable
rate traffic (data) easily.
6. To limit the queuing delays in internal buffers.
Q1 (c) A channel has a bit rate of 4kbps and a propagation delay of 20
msec. For what range of frame sizes does stop-and-wait give an
efficiency of at least 50 percent?
Ans) Use the parameter a = propagation delay / transmission time and the stop-and-wait
efficiency formula 1/(1+2a).
tprop is given in the question: 20 x 10^-3 s.
ttrans = L / (4 x 10^3) s for a frame of L bits.
Therefore a = tprop / ttrans = (20 x 10^-3) / (L / (4 x 10^3)) = 80/L.
(The powers of ten cancel, so you just multiply 20 x 4.)
Now require an efficiency of at least 50 percent:
1/(1 + 2a) >= 0.5
1/(1 + 160/L) >= 0.5
1 >= 0.5 (1 + 160/L)
0.5 >= 80/L
L >= 160 bits
Efficiency will be 50% when the time to transmit the frame equals the round trip
propagation delay. At a transmission rate of 4 bits/ms, 160 bits takes 40 ms. For
frame sizes above 160 bits, stop-and-wait is reasonably efficient.
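For a quick check of this result, here is a short Python sketch (the function name and the sample frame sizes are illustrative, not part of the original answer):

    # Stop-and-wait efficiency for a 4 kbps link with 20 ms one-way propagation delay.
    def stop_and_wait_efficiency(frame_bits, bit_rate_bps, prop_delay_s):
        # Efficiency = t_trans / (t_trans + 2 * t_prop)
        t_trans = frame_bits / bit_rate_bps
        return t_trans / (t_trans + 2 * prop_delay_s)

    for L in (80, 160, 320):
        print(L, round(stop_and_wait_efficiency(L, 4000, 0.020), 2))
    # Prints 80 -> 0.33, 160 -> 0.5, 320 -> 0.67: frames of 160 bits or more give at least 50%.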
Q1(d) Compare LAN standards 802.3, 802.4, 802.5.
Ans)
802.3 Ethernet
802.3 is the standard which Ethernet operates by. It is the standard for CSMA/CD
(Carrier Sense Multiple Access with Collision Detection). This standard encompasses
both the MAC and Physical Layer standards.
CSMA/CD is what Ethernet uses to control access to the network medium (network
cable). If there is no data, any node may attempt to transmit, if the nodes detect a
collision, both stop transmitting and wait a random amount of time before
retransmitting the data.
The original 802.3 standard is 10 Mbps (Megabits per second). 802.3u defined the 100
Mbps (Fast Ethernet) standard, 802.3z/802.3ab defined 1000 Mbps Gigabit Ethernet,
and 802.3ae defines 10 Gigabit Ethernet.
Commonly, Ethernet networks transmit data in packets, or small bits of information. A
packet can be a minimum size of 72 bytes or a maximum of 1518 bytes.
The most common topology for Ethernet is the star topology.
802.5 Token Ring
Token ring is designed to use the ring topology and utilizes a token to control the
transmission of data on the network.
The token is a special frame which is designed to travel from node to node around the
ring. When it does not have any data attached to it, a node on the network can modify
the frame, attach its data and transmit. Each node on the network checks the token as it
passes to see if the data is intended for that node; if it is, it accepts the data and
transmits a new token. If it is not intended for that node, it retransmits the token on to
the next node.
The token ring network is designed in such a way that each node on the network is
guaranteed access to the token at some point. This equalizes the data transfer on the
network. This is different from an Ethernet network where each workstation has equal
access to grab the available bandwidth, with the possibility of one node using more
bandwidth than other nodes. Originally, token ring operated at speeds of 4 Mbps
and 16 Mbps. 802.5t allows for 100 Mbps speeds and 802.5v provides for 1 Gbps over
fiber.
Token ring can be run over a star topology as well as the ring topology. There are three
major cable types for token ring: unshielded twisted pair (UTP), shielded twisted pair
(STP), and fiber.
802.4 Token Bus
The IEEE 802.4 standard describes a token bus.
Token bus is a network implementing the token ring protocol over a "virtual ring" on
a coaxial cable. A token is passed around the network nodes and only the node possessing
the token may transmit. If a node doesn't have anything to send, the token is passed on to
the next node on the virtual ring. Each node must know the address of its neighbor in the
ring, so a special protocol is needed to notify the other nodes of connections to, and
disconnections from, the ring. Token bus combines the physical configuration of Ethernet
and the collision free feature of Token ring.
This is an application of the concepts used in token ring networks. The main difference is
that the endpoints of the bus do not meet to form a physical ring.
Q1(e) Explain ARP and RARP
Ans)
Address Resolution Protocol: A TCP/IP protocol used to obtain a node's physical address.
A client station broadcasts an ARP request onto the network with the IP address of the
target node it wishes to communicate with, and the node with that address responds by
sending back its physical address so that packets can be transmitted. ARP returns the layer 2
address for a layer 3 address.
ARP'ing
The IP protocol broadcasts the IP address of the destination station onto the network, and the
node with that address responds. ARP ("Address Resolution Protocol") is
used to map IP network addresses to the hardware (Media Access Control sublayer)
addresses used by the data link protocol. The ARP protocol operates between the
network layer and the data link layer in the Open Systems Interconnection (OSI) model.
RARP- RARP (Reverse Address Resolution Protocol) is a protocol by which a physical
machine in a local area network can request to learn its IP address from a gateway server's
Address Resolution Protocol (ARP) table or cache. A network administrator creates a table in
a local area network's gateway router that maps the physical machine (or Media Access
Control - MAC address) addresses to corresponding Internet Protocol addresses. When a new
machine is set up, its RARP client program requests from the RARP server on the router to
be sent its IP address. Assuming that an entry has been set up in the router table, the RARP
server will return the IP address to the machine which can store it for future use. RARP is
available for Ethernet, Fiber Distributed-Data Interface, and token ring LANs.
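As a rough sketch of the request/cache idea in Python (the host table and function name below are purely illustrative, not part of any real ARP implementation):

    # Toy ARP: resolve an IP address (layer 3) to a MAC address (layer 2).
    # lan_hosts stands in for the hosts that would answer a real ARP broadcast.
    lan_hosts = {"192.168.1.7": "aa:bb:cc:00:11:22",
                 "192.168.1.9": "aa:bb:cc:33:44:55"}
    arp_cache = {}

    def arp_resolve(ip):
        # Return the MAC for ip, "broadcasting" a request if it is not cached.
        if ip not in arp_cache:
            answer = lan_hosts.get(ip)     # every host sees the broadcast; the owner replies
            if answer is None:
                return None                # no host on this LAN owns the address
            arp_cache[ip] = answer         # cache the reply for later frames
        return arp_cache[ip]

    print(arp_resolve("192.168.1.7"))      # triggers the "broadcast", then caches the result
    print(arp_resolve("192.168.1.7"))      # answered directly from the ARP cache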
Q2(a) Explain the following with their advantages and disadvantages
(1) Star Topology
Advantages:
1. Good performance
2. Reliable (if one connection fails, it doesn't affect others)
3. Easy to replace or remove hosts or other devices
4. Easy to install and wire
5. No disruptions to the network when connecting or removing devices
6. Easy to detect faults and to remove parts
Disadvantages of Star Network Topology
1. In star network topology, data communication depends on HUB. If central
hub fails, then whole network fails.
2. Since each computer will be connected with HUB by means of a separate
wire, star network topology needs more cable to connect computers.
3. It is more expensive due to more wires.
4. More expensive than linear bus topologies because of the cost of the hubs
5. Expensive to install
6. Extra hardware required
(2) Bus Topology
Advantages of Bus Network Topology
1. It is very simple topology.
2. It is easy to use.
3. It needs small amount of wire for connecting computers.
4. It is less expensive due to small wire needed.
5. If one computer fails, it does not disturb the other computers in network.
Other computers will continue to share information and other resources with
other connected computers.
Disadvantages of Bus Network Topology
1. Only a small number of computers can be connected in a bus network.
2. Network speed slows down as the number of computers increases in bus topology.
3. Finding a fault is difficult in bus topology.
4. The entire network shuts down if there is a break in the main cable.
(3) Ring Topology
Advantages of Ring Network Topology
1. It is relatively less expensive than a star topology network.
2. In a Ring topology, every computer has an equal access to the network.
3. Performs better than a bus topology under heavy network load
Disadvantages of Ring Network Topology
1. Failure of one computer in the ring can affect the whole network.
2. It is difficult to find faults in a ring network topology.
3. Adding or removing computers will also affect the whole network since every
computer is connected with previous and next computer.
4. Sending a message from one computer to another takes time according to the
number of nodes between the two computers. Communication delay is directly
proportional to number of nodes in the network.
(4) Mesh Topology
Advantages of Mesh Network Topology
1. Since, there are many links to transfer data, Mesh topology gets rid of the traffic problem.
Data may be transferred through different links.
2. If one link becomes unusable, it does not disturb the whole system. Other links can be
used for communication.
3. Since each node has physical connection with other nodes, therefore, one node can
transfer data to many nodes at the same time.
Disadvantages of Mesh Network Topology
1. It is very expensive due to the implementation of multiple links for each node.
2. It is difficult to install and reconfigure.
3. Adding or removing a computer is difficult
Q 2(b) Why cell switching is used in ATM? Explain ATM reference model.
Ans) ATM transfers information in fixed-size units called cells. Each cell consists of
53 octets, or bytes as shown in Fig. 4.6.4. The first 5 bytes contain cell-header
information, and the remaining 48 contain the payload (user information). Small,
fixed-length cells are well suited to transfer voice and video traffic because such
traffic is intolerant to delays that result from having to wait for a large data packet to
download, among other things.
An ATM cell header can be one of two formats: UNI or NNI. The UNI header is
used for communication between ATM endpoints and ATM switches in private
ATM networks. The NNI header is used for communication between ATM
switches. Unlike the UNI, the NNI header does not include the Generic Flow
Control (GFC) field. Additionally, the NNI header has a Virtual Path Identifier
(VPI) field that occupies the first 12 bits, allowing for larger trunks between public
ATM switches.
ATM Cell Header Fields
The following descriptions summarize the ATM cell header fields:
• Generic Flow Control (GFC)—Provides local functions, such as identifying
multiple stations that share a single ATM interface. This field is typically not used
and is set to its default value of 0 (binary 0000).
• Virtual Path Identifier (VPI)—In conjunction with the VCI, identifies the next
destination of a cell as it passes through a series of ATM switches on the way to
its destination.
• Virtual Channel Identifier (VCI)—In conjunction with the VPI, identifies the
next destination of a cell as it passes through a series of ATM switches on the way
to its destination.
• Payload Type (PT)—Indicates in the first bit whether the cell contains user data
or control data. If the cell contains user data, the bit is set to 0. If it contains
control data, it is set to 1. The second bit indicates congestion (0 = no congestion,
1 = congestion), and the third bit indicates whether the cell is the last in a series of
cells that represent a single AAL5 frame (1 = last cell for the frame).
• Cell Loss Priority (CLP)—Indicates whether the cell should be discarded if it
encounters extreme congestion as it moves through the network. If the CLP bit
equals 1, the cell should be discarded in preference to cells with the CLP bit equal
to 0.
• Header Error Control (HEC)—A checksum calculated only on the first 4 bytes of
the header. HEC can correct a single bit error in these bytes, thereby preserving
the cell rather than discarding it.
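As an illustration of the layout described above (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8 for the UNI format), the 5 header bytes could be unpacked in Python roughly as follows (a sketch only, not a production ATM parser; the sample bytes are made up):

    def parse_uni_header(header):
        # Unpack the 5-byte ATM UNI cell header into its fields.
        b = int.from_bytes(header, "big")   # 40 bits, most significant bit first
        return {
            "GFC": (b >> 36) & 0xF,         # 4-bit Generic Flow Control
            "VPI": (b >> 28) & 0xFF,        # 8-bit Virtual Path Identifier
            "VCI": (b >> 12) & 0xFFFF,      # 16-bit Virtual Channel Identifier
            "PT":  (b >> 9) & 0x7,          # 3-bit Payload Type
            "CLP": (b >> 8) & 0x1,          # 1-bit Cell Loss Priority
            "HEC": b & 0xFF,                # 8-bit Header Error Control
        }

    print(parse_uni_header(bytes([0x00, 0x10, 0x00, 0x51, 0x9A])))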
The ATM reference model is composed of the following ATM layers:
• Physical layer - Analogous to the physical layer of the OSI reference model, the
ATM physical layer manages the medium-dependent transmission.
• ATM layer - Combined with the ATM adaptation layer, the ATM layer is roughly
analogous to the data link layer of the OSI reference model. The ATM layer is
responsible for the simultaneous sharing of virtual circuits over a physical link (cell
multiplexing) and passing cells through the ATM network (cell relay). To do this, it uses
the VPI and VCI information in the header of each ATM cell.
• ATM adaptation layer (AAL) - Combined with the ATM layer, the AAL is
roughly analogous to the data link layer of the OSI model. The AAL is responsible for
isolating higher-layer protocols from the details of the ATM processes. The adaptation
layer prepares user data for conversion into cells and segments the data into 48-byte cell
payloads.
Finally, the higher layers residing above the AAL accept user data, arrange it into
packets, and hand it to the AAL.
The ATM Physical Layer
The ATM physical layer has four functions: Cells are converted into a bitstream, the
transmission and receipt of bits on the physical medium are controlled, ATM cell
boundaries are tracked, and cells are packaged into the appropriate types of frames for the
physical medium. For example, cells are packaged differently for SONET than for DS-3/E3 media types.
The ATM physical layer is divided into two parts: the physical medium-dependent (PMD)
sublayer and the transmission convergence (TC) sublayer.
The PMD sublayer provides two key functions. First, it synchronizes transmission and
reception by sending and receiving a continuous flow of bits with associated timing
information. Second, it specifies the physical media for the physical medium used,
including connector types and cable.
The TC sublayer has four functions: cell delineation, header error control (HEC) sequence
generation and verification, cell-rate decoupling, and transmission frame adaptation. The
cell delineation function maintains ATM cell boundaries, allowing devices to locate cells
within a stream of bits. HEC sequence generation and verification generates and checks the
header error control code to ensure valid data. Cell-rate decoupling maintains
synchronization and inserts or suppresses idle (unassigned) ATM cells to adapt the rate of
valid ATM cells to the payload capacity of the transmission system. Transmission frame
adaptation packages ATM cells into frames acceptable to the particular physical layer
implementation.
ATM Adaptation Layers: AAL1
AAL1, a connection-oriented service, is suitable for handling constant bit rate sources
(CBR), such as voice and videoconferencing. ATM transports CBR traffic using circuit-emulation services. Circuit-emulation service also accommodates the attachment of
equipment currently using leased lines to an ATM backbone network. AAL1 requires
timing synchronization between the source and the destination. For this reason, AAL1
depends on a medium, such as SONET, that supports clocking.
ATM Adaptation Layers: AAL2
Another traffic type has timing requirements like CBR but tends to be bursty in nature. This
is called variable bit rate (VBR) traffic. This typically includes services characterized as
packetized voice or video that do not have a constant data transmission speed but that do
have requirements similar to constant bit rate services. AAL2 is suitable for VBR traffic.
The AAL2 process uses 44 bytes of the cell payload for user data and reserves 4 bytes of the
payload to support the AAL2 processes.
VBR traffic is characterized as either real-time (VBR-RT) or as non-real-time (VBR-NRT).
AAL2 supports both types of VBR traffic.
ATM Adaptation Layers: AAL3/4
AAL3/4 supports both connection-oriented and connectionless data. It was designed for
network service providers and is closely aligned with Switched Multimegabit Data Service
(SMDS). AAL3/4 is used to transmit SMDS packets over an ATM network.
An AAL 3/4 SAR PDU header consists of Type, Sequence Number, and Multiplexing
Identifier fields. Type fields identify whether a cell is the beginning, continuation, or end of
a message. Sequence number fields identify the order in which cells should be reassembled.
The Multiplexing Identifier field determines which cells from different traffic sources are
interleaved on the same virtual circuit connection (VCC) so that the correct cells are
reassembled at the destination.
ATM Adaptation Layers: AAL5
AAL5 is the primary AAL for data and supports both connection-oriented and
connectionless data. It is used to transfer most non-SMDS data, such as classical IP over
ATM and LAN Emulation (LANE). AAL5 also is known as the simple and efficient
adaptation layer (SEAL) because the SAR sublayer simply accepts the CS-PDU and
segments it into 48-octet SAR-PDUs without reserving any bytes in each cell.
Q 2(c) What are the advantages of using fiber optic cable compared to copper cable?
Ans)
Thinner - Optical fibers can be drawn to smaller diameters than copper wire.
Higher carrying capacity - Because optical fibers are thinner than copper wires, more
fibers can be bundled into a given-diameter cable than copper wires. This allows more
phone lines to go over the same cable or more channels to come through the cable into your
cable TV box.
Less signal degradation - The loss of signal in optical fiber is less than in copper wire
Light signals – Unlike electrical signals in copper wires, light signals from one fiber do not
interfere with those of other fibers in the same cable. This means clearer phone
conversations or TV reception
Low power - Because signals in optical fibers degrade less, lower-power transmitters can
be used instead of the high-voltage electrical transmitters needed for copper wires. Again,
this saves your provider and you money.
Digital signals - Optical fibers are ideally suited for carrying digital information, which is
especially useful in computer networks.
Non-flammable - Because no electricity is passed through optical fibers, there is no fire
hazard
Lightweight - An optical cable weighs less than a comparable copper wire cable. Fiber-optic cables take up less space in the ground.
Flexible - Because fiber optics are so flexible and can transmit and receive light, they are
also used in many flexible digital cameras.
Speed: Fiber optic networks operate at speeds up to 10 gigabits per second or higher, as
opposed to 1.54 megabits per second for copper. A fiber optic system is now capable of
transmitting the equivalent of an entire encyclopedia (24 volumes) of information in one
second. Fiber can carry information so fast that you could transmit three television episodes
in one second.
Bandwidth: Taken in bulk, it would take 33 tons of copper to transmit the same amount of
information handled by 1/4 pound of optical fiber.
Resistance: Fiber optic cables have a greater resistance to electromagnetic noise such as
radios, motors or other nearby cables. Because optical fibers carry beams of light, they are
free of electrical noise and interference.
Q 3(a) In class A, the first address is 20.0.0.0. What is the 220,000th address?
Ans) The number 220,000 in x.y.z.t (base-256) notation is 0.3.91.96. Adding this to 20.0.0.0 gives
20.3.91.96, which is the 220,001st address, since 20.0.0.0 itself is the first.
So, the 220,000th address is 20.3.91.95.
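The same arithmetic can be done in Python by treating the dotted address as a 32-bit integer (the helper names are illustrative):

    def ip_to_int(ip):
        a, b, c, d = (int(x) for x in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    def int_to_ip(n):
        return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

    first = ip_to_int("20.0.0.0")        # 20.0.0.0 is counted as the 1st address
    print(int_to_ip(first + 220000 - 1)) # 220,000th address -> 20.3.91.95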
Q 3(b) Give an advantage and disadvantage of NT12 in an ISDN network.
NT12
Q 3(c) Compare the ISO-OSI and TCP/IP reference models.
Ans)
SIMILARITIES
The main similarities between the two models include the following:
They share similar architecture. - Both of the models share a similar architecture.
This can be illustrated by the fact that both of them are constructed with layers.
They share a common application layer.- Both of the models share a common
"application layer". However in practice this layer includes different services depending
upon each model.
Both models have comparable transport and network layers. - This can be illustrated
by the fact that whatever functions are performed between the presentation and network
layers of the OSI model, similar functions are performed at the transport layer of the
TCP/IP model.
Knowledge of both models is required by networking professionals. - Networking
professionals need to know both models.
Both models assume that packets are switched.- Basically this means that individual
packets may take differing paths in order to reach the same destination.
DIFFERENCES
The main differences between the two models are as follows:
TCP/IP Protocols are considered to be standards around which the internet has
developed. The OSI model however is a "generic, protocol- independent
standard." (www.netfact.com/crs)
TCP/IP combines the presentation and session layer issues into its application layer.
TCP/IP combines the OSI data link and physical layers into the network access layer.
TCP/IP appears to be a simpler model, mainly because it has fewer layers.
TCP/IP is considered to be the more credible model - This is mainly because
TCP/IP protocols are the standards around which the internet was developed, and it
gains credibility for this reason. In contrast, networks are not
usually built around the OSI model, which is used mainly as a guidance tool.
The OSI model consists of 7 architectural layers whereas the TCP/IP only has 4 layers.
Q 4(a) Explain the working of 3-bit sliding window protocol with suitable example.
The simplex stop and wait ARQ protocol using sequence numbers and sequence
number acknowledgments will ensure an error free communications channel for
higher levels. However, waiting for an acknowledgment for each frame in turn is
very wasteful and a more efficient alternative is to use a sliding window protocol
which enables a number of frames to be transmitted and separately
acknowledged.
In a sliding window protocol each outbound frame contains a sequence number in
the range 0 to some maximum (MaxSeq). If n bits are allocated in the header to
store a sequence number, the numbers range from 0 to 2^n - 1; e.g. if a 3-bit
number is used, the sequence numbers range from 0 to 7. The sender and
receiver maintain a window:
Sending window
is a list of consecutive frame sequence numbers that can be sent by the sender,
or that have been sent and are awaiting acknowledgment. When
an ack arrives and all previous frames have already been acknowledged, the
window can be advanced and a new message obtained from the host to be
transmitted with the next highest available sequence number. If an ack arrives
for a frame that is not within the 'window' it is discarded, e.g. an 'extra' ack for
a frame that has already been acknowledged.
Receiving window
is a list of sequence numbers for frames that can be accepted by the receiver.
When a valid frame arrives and all previous frames have already arrived, the
window is advanced. If a frame arrives that is not within the 'window' it is
discarded.
Although the IMPs (interface message processor) now have more freedom in the
order in which frames are sent or received the higher layers of the destination host
must get the messages in the same order that the source host supplied them. In
addition, the physical layer must still appear to be a ‘piece of wire’ to the data
link layer.
Since frames currently within the sender's window could be lost, the sender has to
have sufficient buffer space to store that many frames. When all the buffers are full
(waiting for acks) the IMP must stop the host from passing more messages until a
buffer is free. We must make sure that no more than 2^n - 1 frames are un-acked at any
point, not 2^n frames (even though there are 2^n distinct frame numbers).
Consider 3 bit frame number, where frames numbered 0..7 (8 distinct
numbers)
1. 8 un-ack-ed frames bad:
transmit 8 frames numbered 0..7
If received ok, will get ack=7
If error, will get ack=7 too! (Ack up to last ok frame)
Next transmission 8 frames 0..7
ack=7 comes back
is that ack-ing the new batch, or just re-ack-ing the old?
2. 7 un-ack-ed frames ok:
If transmit 7 frames 0..6
If error, ack=7
If received, ack=6
Then transmit 7 frames 7,0..5
If error, ack=6
If received, ack=5
That is, after transmitting 7 packets, there are 8 possible results: Anywhere from 0 to 7 packets
could have been received successfully. This is 8 possibilities, and the transmitter needs enough
information in the acknowledgment to distinguish them all. If the transmitter sent 8 packets
without waiting for acknowledgment, it could find itself in a quandary similar to the stop-and-wait
case: does the acknowledgment mean that all 8 packets were received successfully, or none of
them?
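The ambiguity can be shown with a few lines of Python (illustrative only):

    # With n sequence-number bits there are 2^n distinct numbers, but at most
    # 2^n - 1 frames may be outstanding, or the acknowledgement becomes ambiguous.
    n = 3
    seq_space = 2 ** n                       # sequence numbers 0..7

    def ack_if_all_arrive(first_seq, count):
        # Sequence number acknowledged if all `count` frames starting at first_seq arrive.
        return (first_seq + count - 1) % seq_space

    # Send 8 frames starting at 0: "all arrived" acks 7, while "none arrived" would also
    # re-ack the previous frame 7 -- the two cases cannot be told apart.
    print(ack_if_all_arrive(0, 8))           # 7
    # Send only 7 frames: "all arrived" acks 6, "none arrived" re-acks 7 -- distinguishable.
    print(ack_if_all_arrive(0, 7))           # 6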
Q 4(b) Consider an error-free 64-kbps satellite channel used to send 512-byte data
frames in one direction with very short acknowledgements coming back the other way.
What is the maximum throughput for window sizes of 1, 7, 15 and 127?
A 512 byte (4096 bits) data frame has a duration of 4096/64000 seconds – that is 64 msec.
Assume that the satellite is at 36000 km distance. This leads to roundtrip propagation time
of 240 msec. We should also add 64 msec to this to account for transmission time. Hence
with a window size of 1, 4096 bits can be sent every 240+64=304 msec. This equates to the
throughput of 4096 bits/304 msec, or about 13.5 kbps. For a window size of 5 or greater, the full 64
kbps is used, so window sizes of 7, 15 and 127 all achieve the full 64 kbps under this assumption.
OR
Sliding window – send specified amount of frames, wait for one ack for that set of frames
512 bytes x 8 bits/B = 4096 bits per frame
4096/64000 bps = 64 msec to send one frame
Round trip delay = 540 msec
Window size 1: send 4096 bits per 540msec
4096 bits / 540 msec = 7.585 x 103 bps throughput
Window size 7: 7585 x 7 = 53096 bps
Window size 9 and greater: 7585 x 9 = 68,265 bps, but the maximum capacity is 64 kbps, so
for window sizes of 9 or greater (including 15 and 127) the maximum throughput is 64 kbps.
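The second calculation above can be reproduced directly in Python (the 540 ms round-trip figure is the assumption already used in that calculation):

    FRAME_BITS = 512 * 8        # 4096 bits per frame
    LINK_BPS = 64000            # channel capacity
    RTT = 0.540                 # assumed round-trip delay in seconds

    for w in (1, 7, 15, 127):
        raw = w * FRAME_BITS / RTT                 # offered rate for this window size
        print(w, round(min(raw, LINK_BPS) / 1000, 1), "kbps")
    # window 1 -> ~7.6 kbps, window 7 -> ~53.1 kbps, windows 15 and 127 -> 64 kbps (saturated)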
Q 5(a) Why framing is necessary? Explain framing techniques.
Framing determines the start and end of packets (all the physical layer delivers is a string of bits)
and their order by inserting packet headers. The main problem is to decide where successive packets start and
end. Therefore, the DLC encapsulates the packet into a frame by adding its own header and
trailer.
Moreover, if there is a period of idle fill between successive frames, it also becomes necessary to
separate idle fill from frames. Even when idle fills are replaced by dead periods, the
problem is not simplified, because often there are no dead periods.
Types of Framing:
1 Character based framing: Character based codes, such as ASCII (7 bits and 1 parity
bit), provide binary representation for keyboard characters and terminal control characters.
The idea in character based framing is that such codes can also provide representations for
various communication control characters.
One disadvantage of character based framing is that the frame must contain an integer
number of characters; otherwise, the characters cannot be read correctly.
2 Fixed length packets/frames: In networks with fixed length packets, the length field is
implicit (not needed) because all packets (and hence frames) have the same size. An
example of such networks is ATM, where all packets are 53 bytes long. This framing
technique requires synchronization upon initialization. Another drawback is that the message
length is often not a multiple of the packet size.
3 Bit oriented framing
In bit oriented framing, a special binary flag indicates the end of the frame, and this pattern is
avoided within the frame itself by using a technique called bit stuffing. The
concept is the same as in character based framing, but the main difference is that the delimiter
is a bit pattern (flag) rather than a special character, as sketched below.
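A minimal bit-stuffing sketch in Python, assuming the usual 01111110 flag (the names and example payload are illustrative):

    FLAG = "01111110"

    def bit_stuff(bits):
        # Insert a 0 after every run of five consecutive 1s, so the flag pattern
        # can never appear inside the frame body.
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:
                out.append("0")
                run = 0
        return "".join(out)

    payload = "0111111101111110"
    frame = FLAG + bit_stuff(payload) + FLAG
    print(frame)    # the stuffed body no longer contains the 01111110 pattern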
Q 5(b) Explain Sliding window protocol. Also explain error control at DLL.
A sliding window protocol is a feature of packet-based data transmission protocols. Sliding
window protocols are used where reliable in-order delivery of packets is required, such as in
the Data Link Layer (OSI model) as well as in the Transmission Control Protocol (TCP).
Conceptually, each portion of the transmission (packets in most data link layers, but bytes in
TCP) is assigned a unique consecutive sequence number, and the receiver uses the numbers
to place received packets in the correct order, discarding duplicate packets and identifying
missing ones. The problem with this is that there is no limit on the size of the sequence
numbers that can be required.
By placing limits on the number of packets that can be transmitted or received at any given
time, a sliding window protocol allows an unlimited number of packets to be communicated
using fixed-size sequence numbers. The term "window" on transmitter side represents the
logical boundary of the total number of packets yet to be acknowledged by the receiver. The
receiver informs the transmitter in each acknowledgment packet the current maximum
receiver buffer size (window boundary). The TCP header uses a 16 bit field to report the
receive window size to the sender. Therefore, the largest window that can be used is
2^16 - 1 = 65,535 bytes (about 64 KB). In slow-start mode, the transmitter starts with a low packet count and
increases the number of packets in each transmission after receiving acknowledgment
packets from receiver. For every ack packet received, the window slides by one packet
(logically) to transmit one new packet. When the window threshold is reached, the
transmitter sends one packet for one ack packet received. If the window limit is 10 packets
then in slow start mode the transmitter may start transmitting one packet followed by two
packets (before transmitting two packets, one packet ack has to be received), followed by
three packets and so on until 10 packets. But after reaching 10 packets, further
transmissions are restricted to one packet transmitted for one ack packet received. In a
simulation this appears as if the window is moving by one packet distance for every ack
packet received. On the receiver side also the window moves one packet for every packet
received. The sliding window method ensures that traffic congestion on the network is
avoided. The application layer will still be offering data for transmission to TCP without
worrying about the network traffic congestion issues as the TCP on sender and receiver side
implement sliding windows of packet buffer. The window size may vary dynamically
depending on network traffic.
For the highest possible throughput, it is important that the transmitter is not forced to stop
sending by the sliding window protocol earlier than one round-trip delay time (RTT). The
limit on the amount of data that it can send before stopping to wait for
an acknowledgment should be larger than the bandwidth-delay product of the
communications link. If it is not, the protocol will limit the effective bandwidth of the link.
1. Sender window might grow as it receives more frames to send and still has ones
un-ack'ed. Starts with nothing to send, then NA gives it frames to send.
Later, window may shrink as frames are ack-ed and NA has no more.
2. Receiver window constant size Receiver window size 1 means will only accept
them in order. Size n means will receive out of order (e.g. receive later ones after
earlier frame is lost) and then must buffer them before sending to NB (must send to
NB in order).
Examples of Error Detecting methods:
• Parity bit: A simple example of an error detection technique is the parity bit. The parity bit
is chosen so that the number of 1 bits in the code word is either even (for even parity)
or odd (for odd parity). For example, when 10110101 is transmitted, then for even
parity a 1 will be appended to the data and for odd parity a 0 will be appended.
This scheme can detect only single-bit errors (more generally, any odd number of bit errors);
if two or more bits are changed together, that cannot be detected.
• Longitudinal Redundancy Checksum: Longitudinal Redundancy Checksum is an
error detecting scheme which overcomes the problem of two erroneous bits. In this scheme
the concept of a parity bit is used but with slightly more intelligence. With each byte we
send one parity bit, and then send one additional byte which holds the parity
corresponding to each bit position of the sent bytes. So a parity bit is set in
both the horizontal and vertical directions. If one bit gets flipped, we can tell which row
and column have the error, find the intersection of the two, and determine the
erroneous bit. If two bits are in error and they are in different columns and rows, they
can be detected. If the errors are in the same column then the rows will
differentiate them, and vice versa. Parity can detect only an odd number of errors. If an
even number of errors is distributed across both directions, then LRC may not be able
to find the error.
• Cyclic Redundancy Checksum (CRC):
We have an n-bit message. The sender adds a k-bit Frame Check Sequence (FCS) to
this message before sending. The resulting (n+k) bit message is divisible by some
(k+1) bit number. The receiver divides the message ((n+k)-bit) by the same (k+1)-bit
number and, if there is no remainder, assumes that there was no error. How do we choose this number?
For example, if k=12 then 1000000000000 (a 13-bit number) can be chosen, but this
is a poor choice, because it will result in a zero remainder for all (n+k)-bit
messages whose last 12 bits are zero. Thus, any bits flipping beyond the last 12 go
undetected. If k=12, and we take 1110001000110 as the 13-bit number (incidentally,
in decimal representation this turns out to be 7238). This will be unable to detect
errors only if the corrupt message and original message have a difference of a
multiple of 7238. The probability of this is low, much lower than the probability that
anything beyond the last 12-bits flips. In practice, this number is chosen after
analyzing common network transmission errors and then selecting a number which
is likely to detect these common errors.
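The modulo-2 long division described above can be sketched in a few lines of Python over bit strings (purely illustrative; practical implementations use table-driven byte-wise arithmetic and standard generator polynomials such as CRC-32):

    def crc_remainder(message, divisor):
        # Modulo-2 long division: returns the k-bit FCS for a (k+1)-bit divisor.
        k = len(divisor) - 1
        padded = list(message + "0" * k)            # append k zero bits
        for i in range(len(message)):
            if padded[i] == "1":
                for j, d in enumerate(divisor):
                    padded[i + j] = str(int(padded[i + j]) ^ int(d))
        return "".join(padded[-k:])

    msg = "11010011101100"
    gen = "1011"                                    # (k+1) = 4-bit divisor, so k = 3-bit FCS
    fcs = crc_remainder(msg, gen)
    print(fcs)                                      # transmit msg followed by fcs
    print(crc_remainder(msg + fcs, gen))            # receiver's remainder is all zeros -> no error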
Q 6(a) Explain network devices repeaters, switches, hubs, and bridges.
Repeaters and hubs
A repeater is an electronic device that receives a signal, cleans it of unnecessary noise,
regenerates it, and retransmits it at a higher power level, or so that the signal can cover
longer distances without degradation. In most twisted pair Ethernet configurations, repeaters
are required for cable that runs longer than 100 meters. A repeater with multiple ports is
known as a hub. Repeaters work on the Physical Layer of the OSI model. Repeaters require
a small amount of time to regenerate the signal. This can cause a propagation delay which
can affect network communication when there are several repeaters in a row. Many network
architectures limit the number of repeaters that can be used in a row (e.g. Ethernet's 5-4-3
rule).
Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of
the OSI model. Bridges broadcast to all ports except the port on which the broadcast was
received. However, bridges do not promiscuously copy traffic to all ports, as hubs do, but
learn which MAC addresses are reachable through specific ports. Once the bridge associates
a port and an address, it will send traffic for that address to that port only.
Bridges learn the association of ports and addresses by examining the source address of
frames that it sees on various ports. Once a frame arrives through a port, its source address
is stored and the bridge assumes that MAC address is associated with that port. The first
time that a previously unknown destination address is seen, the bridge will forward the
frame to all ports other than the one on which the frame arrived.
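The learning behaviour can be modelled with a small table keyed by MAC address (a toy model in Python, not any particular bridge's implementation):

    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports
            self.table = {}                      # learned MAC address -> port

        def receive(self, in_port, src_mac, dst_mac):
            self.table[src_mac] = in_port        # learn where the sender lives
            out = self.table.get(dst_mac)
            if out is None:                      # unknown destination: flood all other ports
                return [p for p in self.ports if p != in_port]
            return [] if out == in_port else [out]

    bridge = LearningBridge(ports=[1, 2, 3])
    print(bridge.receive(1, "A", "B"))   # B unknown -> flood to ports 2 and 3
    print(bridge.receive(2, "B", "A"))   # A was learned on port 1 -> forward only there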
Bridges come in three basic types:
• Local bridges: Directly connect LANs.
• Remote bridges: Can be used to create a wide area network (WAN) link between
LANs. Remote bridges, where the connecting link is slower than the end networks,
largely have been replaced with routers.
• Wireless bridges: Can be used to join LANs or connect remote stations to LANs.
Switches
A network switch is a device that forwards and filters OSI layer 2 datagrams (chunks of
data communication) between ports (connected cables) based on the MAC addresses in the
packets.[15] A switch is distinct from a hub in that it only forwards the frames to the ports
involved in the communication rather than all ports connected. A switch breaks the collision
domain but represents itself as a broadcast domain. Switches make forwarding decisions of
frames on the basis of MAC addresses. A switch normally has numerous ports, facilitating a
star topology for devices, and cascading additional switches.[16] Some switches are capable
of routing based on Layer 3 addressing or additional logical levels; these are called multilayer switches. The term switch is used loosely in marketing to encompass devices
including routers and bridges, as well as devices that may distribute traffic on load or by
application content (e.g., a Web URL identifier).
Repeater
• Forwards every frame it receives
• It is a regenerator, not an amplifier (i.e. it removes noise and regenerates the signal)
• Bi-directional in nature
• Useful in increasing Ethernet size/length
• Maximum of 5 repeaters in an Ethernet
Hub
• Basically a multiport repeater
• Can be used to divide a single LAN into multiple levels of hierarchy
Bridge
• Connects similar/dissimilar LANs
• Designed to store and forward frames
• Protocol independent
• Transparent to end stations
• Operates in Layer-1 & Layer-2
• Uses a table for filtering/routing
• Does not change the MAC address in the frame
Q 6(b) Explain MAC sublayer of token ring.
802.5 Token Ring protocols specify that the MAC sub-layer must supply a 48-bit (6 byte)
address. The MAC address is most frequently represented as 12 hexadecimal digits. The MAC
address uniquely identifies a specific network device, and MAC addresses must be unique on a
given LAN (a network of computing devices in a single subnet of IP addresses). The first 24-bit
portion of the MAC address identifies the vendor of the network device, and the last 24-bit portion
identifies the unique id of the device itself. When looking at a hexadecimal representation of the
MAC address, the first six hexadecimal digits identify the vendor and the last
six hexadecimal digits identify the specific network interface card.
Token Ring and IEEE 802.5 support two basic frame types: tokens and data/command
frames. Tokens are 3 bytes in length and consist of a start delimiter, an access control
byte, and an end delimiter. Data/command frames vary in size, depending on the size of
the Information field. Data frames carry information for upper-layer protocols, while
command frames contain control information and have no data for upper-layer protocols.
Token Frame Fields: Start Delimiter | Access Control | End Delimiter
A token frame contains three fields, each of which is 1 byte in length:
• Start delimiter (1 byte): Alerts each station of the arrival of a token (or
data/command frame). This field includes signals that distinguish the byte from
the rest of the frame by violating the encoding scheme used elsewhere in the
frame.
• Access-control (1 byte): Contains the Priority field (the most significant 3 bits)
and the Reservation field (the least significant 3 bits), as well as a token bit (used
to differentiate a token from a data/command frame) and a monitor bit (used by
the active monitor to determine whether a frame is circling the ring endlessly).
• End delimiter (1 byte): Signals the end of the token or data/command frame.
This field also contains bits to indicate a damaged frame and identify the frame
that is the last in a logical sequence.
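For illustration, the access-control byte described above (3 priority bits, a token bit, a monitor bit, 3 reservation bits) can be decoded in Python like this (a sketch; the function name and sample value are made up):

    def decode_access_control(byte):
        # Split the token ring access-control byte: PPP T M RRR.
        return {
            "priority":    (byte >> 5) & 0b111,  # most significant 3 bits
            "token_bit":   (byte >> 4) & 0b1,    # distinguishes a token from a data/command frame
            "monitor_bit": (byte >> 3) & 0b1,    # set by the active monitor
            "reservation": byte & 0b111,         # least significant 3 bits
        }

    print(decode_access_control(0b01010010))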
Data/Command Frame Fields: Start Delimiter | Access Control | Frame Control | Destination Address | Source Address | Data | Frame Check Sequence | End Delimiter | Frame Status
Data/command frames have the same three fields as token frames, plus several others.
The data/command frame fields are described below:
• Frame-control byte (1 byte)—Indicates whether the frame contains data or control
information. In control frames, this byte specifies the type of control information.
• Destination and source addresses (6 bytes each)—Consists of two 6-byte address
fields that identify the destination and source station addresses.
• Data (up to 4500 bytes)—The length of this field is limited by the ring
token holding time, which defines the maximum time a station can hold the token.
• Frame-check sequence (FCS, 4 bytes)—Is filled in by the source station with a
calculated value dependent on the frame contents. The destination station recalculates
the value to determine whether the frame was damaged in transit. If so, the frame is
discarded.
• Frame Status (1 byte)—This is the terminating field of a command/data frame. The
Frame Status field includes the address-recognized indicator and frame-copied
indicator.
Q 6(c) Explain FDDI
Fiber distributed data interface (FDDI), an optical data communication standard used
for long distance networks, provides communication over fiber optic lines up to 200 kilometers at
a speed of 100 megabits per second (Mbps).
FDDI uses a dual-ring architecture with traffic on each ring flowing in opposite directions
called counter-rotating. The dual-rings consist of a primary and a secondary ring. The primary
ring offers up to 100 Mbps capacity. During normal operation, the primary ring is used for data
transmission, and the secondary ring remains idle and available for backup. The primary purpose
of the dual rings is to provide superior reliability and robustness. FDDI was later extended to
FDDI-2 for long distance voice and multimedia communication.
FDDI can have two types of Network Interface Cards, A and B, that connect to it.
1) Class A Network Interface Cards connect to both rings while class B Network Interface Cards
connect to only one ring.
2) Only class A cards can be used to heal broken rings. Thus the number of class A cards define
the fault tolerant characteristics of the network.
3) When an error occurs the nearest computer routes frames from the inner ring to the outer ring.
Q 7 (a) Imagine two LAN bridges, both connecting a pair of 802.4 networks . The first
bridge is faced with 1000 512-byte frames per second that must be forwarded. The second
is faced with 200 4096-byte frames per second. Which bridge do you think will need the
faster CPU? Discuss.
Ans) The bridge with 1000 512-byte frames per second will require the faster CPU, since it has to do 1000
forwarding lookup and learning operations per second compared to 200 for the other bridge. It is the number
of frame headers, not the size of the frames, which governs how busy the bridge processor is.
Although the other bridge has a higher throughput in bytes, the 1000 frames/sec bridge has
more interrupts, more process switches, more frames passed and more of everything that needs the CPU.
Q 7(b) Explain the disadvantages of CDMA?
Ans) The main disadvantage of CDMA is that quality degrades as the number of users on an RF
channel increases, because in CDMA only around 10 users can access a channel at a time.
One major problem in CDMA technology is channel pollution, where signals from too many cell
sites are present in the subscriber’s phone but none of them is dominant. When this situation
arises the quality of the audio degrades. Another disadvantage in this technology when compared
to GSM is the lack of international roaming capabilities. The ability to upgrade or change to
another handset is not easy with this technology because the network service information for the
phone is stored in the phone itself, unlike GSM, which uses a SIM card for this. Another
disadvantage is the limited variety of handsets, because at present the major mobile companies
use GSM technology. Some other drawbacks:
• Due to its proprietary nature, all of CDMA's flaws are not known to the engineering
community.
• CDMA is relatively new, and the network is not as mature as GSM.
• CDMA cannot offer international roaming, a large GSM advantage.
Q 7(c) Explain ALOHA
Q 8(a) What is the maximum burst length on a 155.2 Mbps ATM ABR connection whose
PCR value is 200,000 and whose L value is 25 microseconds?
Q 8(b) Give three examples of protocol parameters that might be negotiated when a
connection is set up.
The negotiation can be on setting the following:
1) Window size
2) Maximum packet size (MTU or MSS)
3) Timer values.
The protocol parameters that may be negotiated during the setup of a connection
could be
A) Maximum packet size
B) Maximum transmission/reception speed
C) Quality of service standards.
Q 9 (a) Give two examples for which connection oriented service is appropriate. Now give
two examples for which connectionless service is best.
Ans) Connection-oriented communication is a data communication mode in telecommunications
whereby the devices at the end points use a protocol to establish an end-to-end logical or
physical connection before any data may be sent. Connection-oriented protocol services are
often but not always reliable network services that provide acknowledgment after successful
delivery, and automatic repeat request functions in case of missing data or detected bit-errors.
Circuit mode communication, for example the public switched telephone network, ISDN,
SONET/SDH and optical mesh networks, are examples of connection-oriented communication.
Circuit mode communication provides guarantees that data will arrive with constant
bandwidth and at constant delay.
Packet mode communication may also be connection-oriented, which is called virtual
circuit mode communication. Due to the packet switching, the communication may suffer from
variable bit rate and delay, due to varying traffic load and packet queue lengths. Examples of
connection-oriented packet mode communication, i.e. virtual circuit mode communication:
• The Transmission Control Protocol (TCP) is a connection-oriented reliable protocol
that is based on a datagram protocol (the IP protocol).
• X.25 is a connection-oriented reliable network protocol.
• Frame relay is a connection-oriented unreliable data link layer protocol.
• GPRS
• Asynchronous Transfer Mode
• Multiprotocol Label Switching
The alternative to connection-oriented transmission is connectionless communication, also
known as datagram communication, in which data is sent in form of packets from one end point
to another without prior arrangement or signaling. Each data packet must contain complete
address information, since packets are routed individually and independently of each other,
possibly transmitted along different network paths.
Connectionless protocols are usually described as stateless because the end points have no
protocol-defined way to remember where they are in a "conversation" of message exchanges.
List of connectionless protocols:
• Hypertext Transfer Protocol
• IP
• UDP
• ICMP
• IPX
• TIPC
• NetBEUI
• DNS
• SNMP
Q 9(b) Give an argument why the leaky bucket algorithm should allow just one packet per
tick, independent of how large the packet is.
Ans) When variable-sized packets are being used, it is often better to allow a fixed
number of bytes per tick rather than just one packet. The leaky bucket consists of a
finite queue. When a packet arrives, if there is room on the queue it is appended
to the queue; otherwise it is discarded. At every clock tick, one packet is
transmitted, so the host is allowed to put one packet per clock tick onto the network.
Again, this can be enforced by the interface card or by the OS; the algorithm is mostly
used for traffic shaping. If the leaky bucket allowed more than one packet per tick,
congestion could occur.
If we send more than one packet per tick, independent of packet size, then
eventually you might see a potential increase of processing going on at the
nearest router by the leaky bucket. The more packets you are putting on the line
per tick for the router to handle, the more time it has to give to handle multiple
packets per clock tick. The router might get behind in its work and become
congested.
Therefore, we would only want to put out one packet per clock tick.
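A minimal one-packet-per-tick leaky bucket sketch in Python (the queue capacity and tick loop are illustrative):

    from collections import deque

    class LeakyBucket:
        def __init__(self, capacity):
            self.capacity = capacity
            self.queue = deque()

        def arrive(self, packet):
            # Append the packet if there is room on the finite queue, else discard it.
            if len(self.queue) < self.capacity:
                self.queue.append(packet)
                return True
            return False

        def tick(self):
            # At every clock tick, exactly one queued packet is put onto the network.
            return self.queue.popleft() if self.queue else None

    bucket = LeakyBucket(capacity=3)
    for pkt in ["p1", "p2", "p3", "p4"]:   # p4 is discarded: the queue is full
        bucket.arrive(pkt)
    for _ in range(4):
        print(bucket.tick())               # p1, p2, p3, None -- one packet per tick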