Computer Networks - Theory and Practice
CSE 434 / 598
Spring 2001
Sourav Bhattacharya
Computer Science & Engineering
Arizona State University
Sourav@asu.edu
1
Class Objectives

Technical Goals:
- Provide basic training in the area of “Computer and Communication Networks”
- A comprehensive protocol/algorithm-level understanding of the “essentials” of a network
- Concept-driven, not implementation/package-driven
- Focus on core communication aspects, and not on cosmetics
- Achieve a level where you are ready to learn about specific network implementations

Other Goals:
- Learn to learn, a job well done, intellectual honesty, mutual “good wish”, promote research careers, ...

Success Criteria

- At the end of the class:
  - Class does well in the tests and projects
  - Class has learned the subject matter from the instructor
  - Instructor has inspired a few (at least!!) career advancements
  - Instructor has improved the class material ...
- Don’t-Do List:
  - Instructor: demonstration of “research”
  - Class: inhibitions, being shy to ask questions, interrupting ...

Text and Syllabus

- Computer Networks, by Andrew S. Tanenbaum, 3rd ed., Prentice Hall, 1996
- Flow of Discussion:
  - Chapters 1 and 2 - background, assumed!!
    - You are graduate students or undergrad seniors!!
  - Chapter 4 - Medium Access Sublayer
  - Chapter 3 - Data Link Layer
  - Chapter 5 - Network Layer
  - Chapter 6 - Transport Layer
  - Sporadic coverage: security and encryption, network management, multimedia, WWW, ... (as time permits)

References

- High-Speed Networks: TCP/IP and ATM Design Principles, by William Stallings, Prentice Hall
- Network Analysis with Applications, by William D. Stanley, Prentice Hall
- Local and Metropolitan Area Networks, by William Stallings, Prentice Hall
- Protocol Design for Local and Metropolitan Area Networks, by Pawel Gburzynski, Prentice Hall
- Introduction to Data Communications: A Practical Approach, by Larry Hughes, Jones and Bartlett Publishers
- High-Speed LANs Handbook, by Stephen Saunders, McGraw-Hill

The Network Design Problem: At a Glance

- Design analogy: N persons can successfully and efficiently communicate among themselves, sharing individual, group, and global views
- Step 1: Two remote persons can communicate
- Step 2: Three or more remote persons can efficiently share a common medium to exchange distinct views (but these people have to do the entire co-ordination by themselves)
- Step 3: Increasingly convenient ways of doing Step 2
  - Abstraction
  - Quality of Service
  - Value-added features ...

Layered Protocol Hierarchies

- Basic data transfer occurs at the lowest layer
- The rest is merely solving “human problems”:
  - Abstraction and convenience of access
  - Inter-operability
  - Making sure that multiple users do not fight
    - Or, if they do, at least gracefully, and with a recourse

[Diagram: two peer protocol stacks, Layer 1 up to Layer N, connected at the bottom through the Physical Medium]

Quality of Connections

- Issues:
  - Layered protocol interfaces
  - Protocol header, and body
  - Network architecture
  - Connection types:
    - Simplex vs. duplex
    - Connection-oriented vs. connection-less (datagram)
      - Life of a connection vs. delay of setting up a new connection
  - QoS of connections

OSI Model

- Open Systems Interconnection (OSI) Model
- Data header at each layer
- Real data transfer at the lowest layer
- Logical data flow at upper layers

[Diagram: two peer seven-layer stacks - Application, Presentation, Session, Transport, Network, Data Link, Physical - connected through the Physical Medium]

TCP / IP Model

[Diagram: two peer stacks connected through the Physical Medium - Application (Telnet, Ftp, Smtp, DNS, ...), Transport (TCP, or UDP), Network (IP), Data Link / Physical (ArpaNet, NSFNet, various LANs, ...)]

- The Application layer controls everything above the Transport layer (theme: “reduce the overhead”)

Network Standardization

- International Organization for Standardization (ISO)
  - Various TCs, and Working Groups
- ANSI (American National Standards Institute)
- NIST
- IEEE
- Internet Engineering Task Force (IETF)
  - Produces a stream of RFCs

Medium Access Control
Chapter 4 of the Text
Problem Introduction

- Two or more contenders for a common medium
- Contenders: independent nodes or stations, each with its own data/information to distribute
  - Distribute: one-to-one, one-to-many, one-to-all (routing, multicast, broadcast)
  - Data/Information: anything from a bit to a long message stream
- Common medium:
  - Fiber, cable, radio frequency channel, ...
  - Characteristics of the medium -- refer to Chapter 2

The Most Obvious Solution

- N cars share a common road
- Two approaches:
  - Slice the road width into N parallel parts, i.e., lanes (hopefully each part will still be wide enough for a car)
    - Each car drives in its own lane
  - Regulate the cars to drive on a rotation basis, i.e., one after the other
    - Careful co-ordination is critical
    - No width restriction; each car can enjoy the entire road width!!
- Problems:
  - Naive, and simplistic
  - Opportunity for resource wastage

The Two Naive Solutions...

- Frequency Division Multiplexing (FDM):
  - For N user stations, partition the bandwidth into N (equally sized?) frequency bands
  - Each user transmits in its own frequency band
  - No contention, but likely under-utilization of bandwidth
- Time Division Multiplexing (TDM):
  - For N user stations, create a cycle of N (equally sized?) time slots
  - Each user takes its turn, and transmits only during the corresponding time slot
  - No contention, but likely under-utilization of the time slots

Channel Allocation on an “As Needed” Basis

- Instead of a priori partitioning of the channel resource (bandwidth, time), employ dynamic resource management
- Advantages include: reduced channel resource wastage
- Disadvantages:
  - Requires explicit (or implicit) co-ordination of transmission schedules
  - Co-ordination can be of several categories:
    - Detection and correction
    - Avoidance
    - Prevention (contention-free!)

Model and Assumptions

- User stations, or nodes:
  - Probability of a frame being generated in an interval T is L*T, where L is a constant for a particular user
  - Independent in their transmissions; can transmit a frame at any time
  - Concern: this model is not valid for correlated transmissions (e.g., performance analysis for a set of parallel/distributed programs or threads)
- Single-channel assumption:
  - No second medium is available among the stations to communicate (data, and/or control information)
  - Concern: this assumption is not true for many environments, where the control information may be carried on a second channel

Model (contd...)

- Carrier sense, or no carrier sense:
  - Before transmission, nodes can (or cannot) sense whether the channel is currently busy with another user’s message
  - Protocols can be a lot more efficient if carrier sensing is available
  - Issue: it is hardware- and analog-device-specific
- Activation instants:
  - Continuous time: a message transmission can be attempted at any time; there is no master clock
  - Slotted time: a message can be delivered only at a fixed set of points in time; the time axis is discretized; requires a master clock

ALOHA - A Simple Multiple Access Protocol

- N user stations, randomly generating data frames:
  - Any time data is ready ---> transmit on the medium (without care for collision)
  - Listen to the channel, and find out if there is/was a collision
  - If collision, then wait for a random time and go to step 1
- Collision vulnerability period:
  - If frame time = t, then vulnerability period = 2t
  - Reason: two frames can collide (head, tail) or (tail, head) at the extreme ends
  - Refer to Figure 4-2


[Insert Figure 4-2 here]

Performance of ALOHA

- A lot of nodes suddenly jump onto the shared, common channel - what can you expect about the performance?
- G = # frame transmission attempts per frame time (including new frames, and retransmissions)
  - Probability that k frames are generated during a given vulnerability period = ((2G)^k * e^(-2G)) / k!
    - Thus, during a 2-frame-time vulnerability period (refer to Fig. 4-2) an average of 2G frames are generated
    - Probability that no frame is generated, i.e., k=0: e^(-2G)
  - Successful transmissions, or throughput S = rate * prob(no one else transmits) = G * e^(-2G)

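The throughput expression above is easy to check numerically; a minimal sketch (the search grid over G is arbitrary):

```python
import math

def pure_aloha_throughput(G):
    # S = G * e^(-2G): G attempts per frame time, each succeeding
    # only if no other frame starts in the 2-frame vulnerability period
    return G * math.exp(-2 * G)

# Scan offered loads to locate the maximum throughput
best_G = max((g / 1000 for g in range(1, 2001)), key=pure_aloha_throughput)
print(best_G, pure_aloha_throughput(best_G))
# Maximum is at G = 0.5, where S = 1/(2e), nearly 18%
```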
ALOHA => Slotted ALOHA

- Best-case performance of ALOHA:
  - G = 0.5, throughput = 1/(2e), nearly 18%
  - What else can you expect from a purely random, no-carrier-sense protocol?
- Slotted ALOHA:
  - Like ALOHA in every sense, except in when a transmission attempt can originate
  - Discretize the time axis into slots, 1 slot = 1 frame width
  - A node can only begin transmitting a frame at a slot boundary
  - Requires a master clock, typically one node transmitting a special control signal at the beginning of each slot
  - Issue: is clock synchronization that easy?

Performance of Slotted ALOHA

- Effect of restricting the transmission start instants:
  - Vulnerability period is reduced from 2t to t, where t is the frame width (refer to Figure 4-2, and explain why)
  - Probability of no other transmission during one frame time = e^(-G)
  - Thus, throughput S = G * e^(-G)
- Best throughput is at G = 1, with nearly 37% throughput
  - 37% utilization, 37% empty slots, and 26% collisions
  - About twice as good as pure ALOHA
- Exercise:
  - Increasing G would reduce the # of empty slots. Why will that not increase the throughput?
  - Work out a few examples...

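The 37/37/26 split quoted above follows directly from the Poisson model; a quick check at the optimum G = 1:

```python
import math

G = 1.0  # offered load at the slotted-ALOHA optimum
success = G * math.exp(-G)        # P(exactly one transmission in a slot)
empty = math.exp(-G)              # P(no transmission in a slot)
collision = 1 - success - empty   # everything else: two or more senders
print(round(success, 2), round(empty, 2), round(collision, 2))
# 0.37 0.37 0.26 - utilization, empty slots, collisions
```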
ALOHA ==> Slotted ALOHA
[Insert Figure 4-3 here]

Carrier Sense Protocols

- Best performance of slotted ALOHA = 1/e
  - Since nodes cannot sense the carrier prior to transmission
  - In other words, they cannot avoid collisions, only detect them
- Carrier sense protocols:
  - Can listen for the carrier, i.e., for the shared channel to become idle, and then transmit
  - Carrier Sense Multiple Access (CSMA) class of protocols
- Persistent CSMA:
  - Also called 1-persistent, since it transmits with probability = 1
  - A node with ready data:
    - Listens for an idle channel; if the line is busy, it WAITS persistently
    - When the channel is free, transmits the packet, and then listens for a collision
    - If collision, sleeps for a random time and goes to step 1

Persistent CSMA

- How does contention resolution occur?
  - It depends on the “randomness” of the wait periods
  - If a set of random wait periods, one per user, is in effect, then eventually everyone will get through...
- Role of propagation delay:
  - Collision detection time depends on the propagation delay
  - If d is the propagation delay, then the worst-case collision detection time = 2d
  - Even at d = 0, there may still be some collisions
  - Analogous to round-table conference discussions among human users
- Improvement over ALOHA:
  - Nodes do not jump in in the middle of another node’s transmission

Non-Persistent CSMA

- Persistent CSMA:
  - When looking for an idle channel, it keeps a continuous watch
  - A greedy, “seize ASAP” mode
  - Consequence: multiple contenders, each in “seize ASAP” mode, will lead to follow-up collisions
- Non-persistent CSMA:
  - If an idle channel is not found, the node desiring to transmit does not wait in a “grab as soon as available” mode
  - Instead, the node goes into a random wait period; it wakes up at the end of the random wait, and re-tries for an idle channel
  - Benefit: reduced contention (note: it includes a 2-level randomness)
    - Random wait, if no idle channel was found
    - Random wait, if an idle channel was found and the transmission collided

Non-Persistent CSMA => p-Persistent CSMA

- Contention reduction strategy:
  - Involve more and more random delays in each user’s activities
  - Throughput will increase, but individual user delays will also increase
- p-Persistent CSMA:
  - Channel is time-slotted, similar to slotted ALOHA
  - A node with ready data:
    - Looks for an idle channel; if the channel is busy, waits for the next slot
    - If an idle channel is found, transmits with probability p (i.e., defers until the next slot with probability 1-p)
    - If the next slot is also idle, transmits with probability p, and defers to the following slot with probability 1-p
    - Continues until the data is transmitted, or some other node starts transmitting
      - If so, waits for a random time and goes to step 1

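The per-slot decision procedure above can be sketched as follows; `channel_idle` is a hypothetical stand-in for carrier sensing, and collision handling (the random wait and re-try of step 1) is omitted:

```python
import random

def p_persistent_send(p, channel_idle, max_slots=1000):
    """One node's p-persistent CSMA decision loop (sketch).

    channel_idle(slot) -> bool tells whether the shared channel is
    idle in a given slot; in a real system this is carrier sensing.
    Returns the slot in which the node transmits, or None.
    """
    for slot in range(max_slots):
        if not channel_idle(slot):
            continue              # busy: wait for the next slot
        if random.random() < p:
            return slot           # transmit with probability p
        # otherwise defer to the next slot with probability 1 - p
    return None

random.seed(1)
print(p_persistent_send(0.5, lambda s: True))  # idle channel: sends soon
```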
Why p-Persistent CSMA?

- The more probabilistic events and randomness => the less contention, and the higher the throughput
- Degrees of uncertainty:
  - Persistent CSMA = 1: random delay when a collision occurs
  - Non-persistent CSMA = 2: random delay both at channel seek, and at collision
  - p-Persistent CSMA = 2 (but of a different kind from non-persistent):
    - Random delay at collision (as in non-persistent)
    - Deterministic seizure attitude at channel-seek time (like persistent)
    - Slotted time (like slotted ALOHA)
    - But non-deterministic transmission even when the channel is idle
      - An additional level of uncertainty beyond persistent CSMA

Performance of the CSMA Class of Protocols

- Throughput and individual user delays work against each other
- Throughput:
  - Non-persistent is better than persistent
  - Non-persistent vs. p-persistent:
    - Depends on the value of p
    - Both have 2 degrees of uncertainty, but of different kinds
  - Refer to Figure 4-4 for an aggregate performance depiction
- In increasing throughput:
  - Pure ALOHA
  - Slotted ALOHA
  - 1-persistent, or persistent, CSMA
  - 0.5-persistent CSMA
  - (Non-persistent, 0.1-persistent) CSMA
  - 0.01-persistent CSMA


[Insert Figure 4-4 here]

CSMA with Collision Detection

- CSMA does not abort a transmission when a collision occurs:
  - Colliding transmissions continue (until frame completion)
  - A fair (!!) amount of garbage is generated once a collision occurs
  - Why not abort the transmission as soon as a collision is detected?
- CSMA with Collision Detection (CSMA/CD):
  - IEEE 802.3, the Ethernet protocol
  - Quickly terminates damaged frames
  - Contention periods are a single slot each, not a frame width (Fig. 4-5)
  - Resource wastage = the width of the slots (and not of the frames)
  - Slot width = worst-case signal propagation delay
    - Actually, twice that
    - Includes the delay of the analog devices as well

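The “twice the propagation delay” rule turns directly into numbers; a sketch with illustrative parameters (the cable length, signal speed, and bit rate below are assumptions, not the 802.3 budget, which also charges for repeater and analog-device delays):

```python
# Worst case: A starts sending, the signal crosses the full cable,
# B starts just before it arrives, and B's collision must travel all
# the way back before A can notice it -> a full round trip.
cable_m = 2500          # assumed end-to-end cable length
speed_mps = 2e8         # assumed propagation speed in copper
bitrate_bps = 10e6      # assumed 10 Mbps link

one_way_s = cable_m / speed_mps
slot_s = 2 * one_way_s                  # collision-detection window
min_frame_bits = slot_s * bitrate_bps   # frame must outlast the window
print(slot_s * 1e6, min_frame_bits)     # about 25 microsec, 250 bits
```

A station must therefore keep transmitting for at least the slot time, which is why CSMA/CD networks impose a minimum frame size.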

[Insert Figure 4-5 here]

Collision-Free Protocols

- Channel co-ordination can be of several categories:
  - Detection and correction
  - Avoidance
  - Prevention (contention-free!)
- Static MAC policies:
  - Collision-free by design, i.e., avoidance
  - Resource utilization may be questionable
- Dynamic MAC with collision detection:
  - Like CSMA/CD
- Dynamic MAC with contention prevention:
  - Protocol does a few extra steps at run time to prevent collisions

Reservation-Based Dynamic MAC Protocols

- Protocols consist of two phases:
  - Reservation, or bidding, process
  - Actual usage, after the bidding process
- Reservation phase:
  - All nodes with data to transmit go through the reservation phase
  - Result: one or more winners ==> implicit reservations
- Transmission phase:
  - The winner channel(s) transmit (one after another)
- Bit-Map Protocol - one reservation policy:
  - Basic idea stems from a linked-list approach
  - Refer to Figure 4-6


[Insert Figure 4-6 here]

Bit-Map Protocol

- N contention slots for N stations:
  - Node i transmits a “1” in slot i iff node i has data to send
  - The collection of 1’s in the contention slots indicates which stations have data (to transmit)
- Followed by the transmission phase:
  - Allocate frames only for those nodes with a 1 in the contention slots
- Performance:
  - Low load:
    - Time spent in data frames << time spent in contention slots
    - Contention-slot delay for a low-numbered station -- 1.5N (why?)
    - Contention-slot delay for a high-numbered station -- 0.5N (why?)
    - Average wait = N slots (sloppy analysis!!)
  - For d-bit data frames, efficiency = d / (d + N)

Performance of the Bit-Map Protocol

- At high load:
  - Multiple (k) frames per group of N contention slots
  - Efficiency = k*d / (N + k*d)
  - For k ==> N, efficiency = d/(d+1)
- Questions:
  - Is this a realistic analysis?
  - Can you do a queueing analysis for this protocol?
  - Is there any fundamental bottleneck?

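Both efficiency formulas can be evaluated directly; the station count and frame size below are illustrative:

```python
def bitmap_efficiency(d, N, k=1):
    """Channel efficiency of the bit-map protocol: k data frames of
    d bits each are sent per group of N one-bit contention slots."""
    return (k * d) / (N + k * d)

N = 64    # stations (illustrative)
d = 1000  # bits per data frame (illustrative)
print(bitmap_efficiency(d, N))       # low load: d / (d + N)
print(bitmap_efficiency(d, N, k=N))  # high load: approaches d / (d + 1)
```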
Binary Countdown Protocol

- 2-phase protocol: reservation followed by transmission
- Reservation phase:
  - Each station with ready data transmits its binary address, msb-to-lsb
  - At each bit position, the channel carries the binary OR of the respective bits from all contending nodes
  - If a node that sent a 0 bit observes a 1 after the OR operation, it withdraws from the competition; the last surviving node is the winner
- Transmission phase: the (single) winner transmits its data
- Example: nodes 3, 4, and 6 have data to transmit
  - Node ids (0011), (0100), and (0110) get transmitted
    - First bit: 0, 0, and 0
    - Second bit: 0, 1, and 1 ==> node 3 withdraws
    - Third bit: none, 0, and 1 ==> node 4 withdraws
  - Node 6 is the winner; node 6 transmits its data frame

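The arbitration steps above map naturally onto bitwise operations; a sketch in which the wired-OR channel is modeled as an integer OR:

```python
def binary_countdown(contenders, addr_bits):
    """Resolve one arbitration round: stations broadcast their
    addresses msb-first onto a wired-OR channel; a station that sent
    a 0 but observes a 1 withdraws. Returns the winner, which is
    always the highest contending address."""
    survivors = set(contenders)
    for bit in range(addr_bits - 1, -1, -1):
        channel = 0
        for station in survivors:
            channel |= (station >> bit) & 1  # wired-OR of this bit
        if channel == 1:
            # stations whose current bit is 0 withdraw
            survivors = {s for s in survivors if (s >> bit) & 1}
    return survivors.pop()

print(binary_countdown({3, 4, 6}, 4))  # 6 wins, as in the slide's example
```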
Performance of the Binary Countdown Protocol

- Note: only a single winner in this approach:
  - The node with the highest binary address
  - This approach may starve the lower-numbered users
- For N nodes, log2(N) address bits are transmitted
- For d-bit frames ==> efficiency = d / (d + log2(N))
- Enhancements:
  - Bit orderings different from the msb --> lsb type
  - Parallelized version of binary countdown, instead of serial
  - Efficiency can reach up to 100%

Limited Contention Protocols

- Design features:
  - Low traffic load - collision-detection approaches are better: they offer low delay, and not much collision occurs anyway
  - High traffic load - collision-free protocols are better: they have higher delay, but at least the channel efficiency is much better...
- What if we combine the advantages of the two?
  - Limited contention protocols
  - Idea: do not let every station compete for the channel with equal probability; allow different groups of nodes to compete at different times...
  - Refer to Figure 4-8, for success probability = f(# ready stations)
  - Question: give an analogy of this idea using the car/road domain...


[Insert Figure 4-8 here]

Adaptive Tree Walk - A Limited Contention Protocol

- Group the N nodes as the leaves of a log2(N)-height binary tree
- Starting phase, or immediately after a successful transmission:
  - All N nodes can compete for the channel
  - If exactly one node acquires the channel, then repeat with all N nodes as the contenders’ list
  - Else, if collision, narrow the contenders’ list to the left subgroup of nodes
    - If one node then acquires the channel, shift to the right sibling group of nodes for the next slot
    - Else, if there is a further collision, narrow the contenders’ list down to the leftward children subtree (repeat...)
- Refer to Figure 4-9; essentially, walk around the tree using various subgroups of the leaves at each time as the contenders’ list


[Insert Figure 4-9 here]

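The walk can be sketched as a recursive probe over station ranges. This simplified version descends into both halves after a collision and records each probe, rather than scheduling real slots, so it only illustrates how contention narrows:

```python
def tree_walk(ready, lo, hi, slots=None):
    """Probe the station range [lo, hi); 'ready' is the set of
    stations with a frame. Each probe costs one contention slot:
    an idle probe or a single sender resolves it, while a collision
    (two or more senders) splits the range in half (sketch)."""
    if slots is None:
        slots = []
    senders = [s for s in ready if lo <= s < hi]
    slots.append((lo, hi, len(senders)))
    if len(senders) > 1 and hi - lo > 1:   # collision: descend
        mid = (lo + hi) // 2
        tree_walk(ready, lo, mid, slots)   # left subtree first
        tree_walk(ready, mid, hi, slots)   # then the right sibling
    return slots

# Stations 2 and 5 are ready among 8 leaves: the root probe collides,
# then each half resolves with a single sender
for probe in tree_walk({2, 5}, 0, 8):
    print(probe)
```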
Wavelength Division Multiplexed MAC Protocols

- Analogous to FDM; used popularly in optical networks
- Partition the wavelength spectrum into (equal?) slices:
  - One slice for each node/user
  - Can apply TDM in conjunction as well
- Useful for implementing broadcast topologies:
  - Refer to Figure 4-10; each wavelength slice has two parts - one for control information, and one for data values
  - Can also implement point-to-point network topologies (how?)
  - Collectively, this is called the TWDM (time-wavelength-division multiplexed) MAC protocol
- Key design issue: # transmitters, and # receivers at each node
  - Frequencies and tunability of the transceivers...


[Insert Figure 4-10 here]

WDMA - A Particular WDM MAC

- WDMA - a broadcast-based protocol
- Each node is assigned two channels, one for control and one for data:
  - The data channel is slotted
  - The control channel is also slotted:
    - One slot for every other node
    - One slot for status information of the host node itself
- Supports three classes of traffic:
  - Constant-data-rate connection-oriented traffic
  - Variable-data-rate connection-oriented traffic
  - Datagram traffic, e.g., UDP packets
- Each node has two receivers (one fixed-frequency, one tunable) and two transmitters (one fixed-frequency, one tunable)

Arbitrary Topology Configurations using WDM and TDM

- Consider any graph topology
- Replace every bi-directional edge with two back-to-back simplex edges
- Assign each simplex edge of the graph topology to one slot in the (frequency, time) grid
- Select the # of time slots just large enough that #frequencies * #time slots >= #simplex edges
- Work out an example

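The four steps above can be sketched as follows; the round-robin slot assignment and the triangle topology are illustrative choices, not part of the slides:

```python
from itertools import product

def assign_slots(edges, num_freqs):
    """Map each simplex edge of a graph onto a (time, frequency)
    slot, using just enough time slots (sketch)."""
    # Step 2: split every bi-directional edge into two simplex edges
    simplex = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    # Step 4: #freqs * #times must cover all simplex edges
    num_times = -(-len(simplex) // num_freqs)   # ceiling division
    # Step 3: hand out (time, frequency) pairs round-robin
    grid = product(range(num_times), range(num_freqs))
    return {edge: slot for edge, slot in zip(simplex, grid)}

# A triangle: 3 bi-directional edges -> 6 simplex edges on 3 frequencies
table = assign_slots([("a", "b"), ("b", "c"), ("c", "a")], num_freqs=3)
print(len(table))  # 6 simplex edges, fitting in 2 time slots
```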
Wireless LAN Protocols

- Consider a cellular network, with cell sizes anywhere from a few meters to several miles
- Frequency reuse is adopted, as a feature of the cellular system
- What could a typical MAC be? Can CSMA work?
  - No, since there is no common broadcast channel that everyone eventually listens to
  - Refer to Figure 4-11
  - Design difficulty: how to detect interference at the receiver?
- Hidden station problem: two nodes transmit to a common receiver located in the middle
  - The competitor station is too far away to be heard
- Exposed station problem: two adjacent nodes transmitting in opposite directions; a false sense of competition...


[Insert Figure 4-11 here]

MACA - Multiple Access with Collision Avoidance

- Idea: have the sender and receiver acknowledge each other, stating the length of the upcoming transmission
  - Consequently, neighbors around both the sender and the receiver will be aware of the transmission activity and its duration (from the # of bits in the transmission)
  - Figure 4-12
- Protocol:
  - Sender: sends a request-to-send (RTS) signal to the receiver, with the # of bits in the upcoming data frame
  - Receiver: acknowledges to the sender with a clear-to-send (CTS), if there is no collision
  - Sender: starts transmitting upon receiving the CTS
- Where is the catch? Both the sender’s and the receiver’s neighbors can hear the message initiation, along with its size!!


[Insert Figure 4-12 here]

MACA and MACAW

- Collisions in MACA:
  - Still possible, but the chances are much reduced
  - E.g., if two nodes initiate an RTS simultaneously
  - Collision ==> back off and re-try later (like CSMA)
  - The backoff approach is based on a binary exponential scheme
- MACAW - an enhanced MACA protocol:
  - ACK signal at the MAC layer, after each data frame
  - Includes carrier sensing to further reduce collisions (although the carrier can only be sensed locally)
  - Random wait and re-try transmission at the message level, instead of at the node level
  - Congestion information exchange between pairwise stations, leading to better congestion control and backoff approaches

Protocols for Digital Cellular Radio

- Significant usage for mobile telephony:
  - Each connection lasts longer than a few msec
  - Hence, channel allocation per call is better than per frame (why?)
- Preferably use digital coding, instead of analog:
  - Allows compression of data/speech
  - Allows integration of voice, data, fax, ...
  - Can include error-correcting codes (for reliability) and encryption (for security)
- GSM - Global System for Mobile Communication:
  - Allocated in the 900 MHz band, later re-shuffled to the 1800 MHz range as well (called DCS 1800)
  - Employs 124 bi-directional frequency channels within each cell
  - Refer to Figure 4-13


[Insert Figure 4-13 here]

GSM - Details

- Each cell has 124 (base station --> user nodes) frequency channels + 124 (user nodes --> base station) frequency channels:
  - These are used for data in/out and control signals
  - Each frequency channel is 200 KHz wide, allowing a fair bit rate!!
  - Each frequency channel is 8-way TDM-slotted
  - Thus, a total of 992 (= 124 * 8) logical connections are possible
- Not all of the 992 connections are implemented:
  - To avoid frequency conflicts with neighboring cells
  - Also, to enhance the bps within each logical connection
- Format of the TDM slots:
  - 148 bits in each slot, 8 slots per frame for time-division multiplexing, and 26 frames create a multiframe

Data Format of GSM Frames

- Refer to Fig. 4-14
- Each TDM slot, of 148 bits, consists of:
  - 3 start bits
  - 57 information bits
  - 1 voice/data toggle bit
  - 26 synchronization bits
  - 1 voice/data toggle bit
  - 57 information bits
  - 3 stop bits
- 8 TDM slots create a TDM frame:
  - Slots are separated by a 30-microsec guard time (worth 8.25 bits)
  - Guard times accommodate lack of sync, and data overflow


[Insert Figure 4-14 here]

GSM (contd...)

- 26 TDM frames constitute a TDM multiframe:
  - 24 frames are for data use, 1 frame is for control, and 1 is left for future use
  - The time spent on a TDM multiframe is 120 millisec
  - The effective data rate of each logical connection is 9600 bps
- Other GSM channels:
  - Apart from the GSM framing structure, it also supports other special-purpose channels
- Broadcast Control Channel:
  - A continuous stream of output from the base station to all the nodes, describing the base station id
  - Mobile nodes check the strength of this signal to detect their cellular parenthood

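The slot and multiframe numbers above fit together arithmetically; a small check. The 22800 bps gross figure is derived here, and attributing the gap down to 9600 bps to error-correction coding and signalling overhead is an assumption, not something stated on the slides:

```python
# Bit accounting for one GSM TDM slot, as listed on the frame-format slide
slot_bits = 3 + 57 + 1 + 26 + 1 + 57 + 3   # should total 148
payload_bits = 2 * 57                      # only the information fields

data_frames = 24          # of the 26 frames: 1 control, 1 future use
multiframe_s = 0.120      # 120 millisec per multiframe

# One logical connection owns one slot in each data frame
gross_bps = payload_bits * data_frames / multiframe_s
print(slot_bits, gross_bps)   # 148 bits per slot; 22800 bps gross
# The slides' 9600 bps effective rate is what presumably remains
# after error-correction coding and signalling overhead.
```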
Other GSM Channels (contd...)

- Dedicated Control Channel:
  - For location updating, registration, and call setup
  - Each base station maintains a data structure with all intra-cell mobile nodes; the control channel exchanges information to keep this data structure updated
- Common Control Channel:
  - Paging Channel:
    - The base station uses this for announcing incoming calls
    - Mobile nodes listen to this for answering incoming calls
  - Random Access Channel:
    - Slotted ALOHA to set up a call on the Dedicated Control Channel
    - A node can set up a call using this channel
  - Access Grant Channel:
    - Response to the Random Access Channel

GSM vs. CDPD (Cellular Digital Packet Data)

- GSM:
  - Circuit-switched, not packet-switched
  - Not friendly to cellular handoffs; each handoff can miss some data
  - Increased error rate
- CDPD:
  - A packet-switched, digital datagram service
  - Using 30 KHz channels, it can offer 19.2 Kbps links (excluding protocol overhead ==> 9600 bps data channels)
- CDPD system architecture:
  - Three kinds of nodes: mobile end systems, base stations, and base interface stations (which connect between base stations and to the Internet)
  - Refer to Figure 4-15


[Insert Figure 4-15 here]

CDPD Details

- Uses three types of interfaces:
  - E-Interface: connects a CDPD network to outside-world networks, e.g., the Internet
  - I-Interface: connects between multiple CDPD areas (basically, between multiple cells)
  - A-Interface: between the base station and the mobile nodes
    - One downlink part, from the base station to the mobile nodes
      - Not difficult to manage, since it has only one user (the base station)
    - One uplink channel, shared by all the mobile end users
      - Digital Sense Multiple Access protocol adopted by the mobile end nodes
      - Similar to slotted, p-persistent CSMA
      - Data is packetized, the time axis is slotted, and re-entry attempts are spread out to non-consecutive time slots
      - Combines the benefits of slotted ALOHA and p-persistent CSMA

Collisions in CDPD

- Possible, when two or more mobile end nodes start on the same time slot:
  - Mobile hosts may not immediately detect a collision (sensing delay due to RF propagation)
- Microblock transmission is faster than the rate of detection of a failure:
  - Correct/incorrect reception of microblock n is not known until microblock n+2
  - In between, the mobile node just goes ahead and continues transmission
  - If a failure is detected (later), it stops - otherwise transmission continues
- Voice data has higher priority; data transmission is next

Code Division Multiple Access

- CDMA - a completely new line of MAC approach:
  - MAC approaches so far: TDM, FDM, WDMA, slotted ALOHA, ...
- CDMA - each user transmits across the entire spectrum:
  - However, nobody collides with anyone else
  - Each node has a unique code, called a chip sequence, which it uses to transmit
  - The uniqueness of the chip sequences ensures no eventual collision
- Analogy - multiple people speaking in a room:
  - TDM: everyone takes turns speaking
  - FDM: separate clusters of people, each person speaking within their own cluster, yet not being overheard at other clusters
  - CDMA: everybody speaks loud and clear to everybody else, but using different languages

CDMA - Summary

- Each node has a unique sequence, called a chip sequence:
  - Usually a 64- or 128-bit pattern, but we demonstrate using an 8-bit chip
- Example: A’s chip = (0, 0, 0, 1, 1, 0, 1, 1)
  - If A wants to transmit a “1”, it sends the above chip
  - If A wants to transmit a “0”, it sends the 1’s complement of the chip
- Another node, B, will have a different chip sequence:
  - Orthogonal to every other node’s chip
  - Normalized inner product of any pair of chip sequences = 0
  - Thus, A’s chip <normalized inner product> B’s chip = 0
  - By definition, A’s chip <norm. inner prod.> complement(B’s chip) = 0
- Bit sequences within the chips are transmitted across the entire spread spectrum

CDMA - Bandwidth Usage

- Consider 100 nodes, and a 1 MHz spectrum carrying 1 Mbps
- FDM allocates 10 KHz per station:
  - Each station has a 10 Kbps data rate
- CDMA, with m-bit chips:
  - Allocates the entire 1 MHz to each station
  - Thus, each station’s data rate = 1000/m Kbps
  - When m is smaller than 100, CDMA gives better bandwidth utilization
- Where is the catch?
  - CDMA has to treat the RF medium in an analog fashion
  - Voltages (RF transmission powers) are expected to be additive in value
  - It can get more noisy, and is likely to be more error-prone

CDMA - Example (refer to Figure 4-16)

- Four nodes, A, B, C, and D, each with a unique 8-bit chip:
  - 0-bits in the chip sequence can be treated as -1 from a voltage or transmission-power point of view
  - Two or more nodes transmitting together simply add their voltages (addition of negative values indicates voltage or RF power reduction -> this is a major source of error in the analog handling)
- The design of the chip sequences ensures that:
  - A <norm. inner prod.> B = 0
  - A <norm. inner prod.> (complement of B) = 0
- Suppose A and C transmit a 1, while B transmits a 0:
  - T = (A + not(B) + C) is transmitted; everyone receives this
  - Receiver node D, trying to listen to C, computes C <norm. inner prod.> T
    - = C.A + C.(not(B)) + C.C = 1, where “1” is what C transmitted


[Insert Figure 4-16 here]

CDMA Example (contd...)

- Suppose C had transmitted a 0 in the previous example:
  - T = (A + not(B) + not(C))
  - The receiving node D will compute:
    - C . T = C.A + C.(not(B)) + C.(not(C)) = 0 + 0 + (-1) = -1
    - A 0-bit is assumed to have a value of -1
- Efficiency of CDMA:
  - Theoretically, it can be arbitrarily large
  - In practice, the noise level, analog value handling, and # bits per chip pose limitations
  - Design rule: if you want to enhance bandwidth, and can live with some noise, go for CDMA (Korean Telecom)
- Question: why the name “chip”?

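Both worked cases can be replayed in code. A's chip is taken from the slides; B's and C's chips below are illustrative mutually orthogonal sequences, not necessarily the actual Figure 4-16 values:

```python
def to_bipolar(chip):
    # 0-bits are treated as -1, as on the example slide
    return [1 if b else -1 for b in chip]

def nip(x, y):
    # normalized inner product
    return sum(a * b for a, b in zip(x, y)) / len(x)

A = to_bipolar([0, 0, 0, 1, 1, 0, 1, 1])  # from the slides
B = to_bipolar([0, 0, 1, 0, 1, 1, 1, 0])  # illustrative, orthogonal to A, C
C = to_bipolar([0, 1, 0, 1, 1, 1, 0, 0])  # illustrative, orthogonal to A, B
neg = lambda v: [-x for x in v]           # the 1's-complement chip

# Case 1: A and C send a 1, B sends a 0; voltages add on the channel
T = [a + b + c for a, b, c in zip(A, neg(B), C)]
print(nip(C, T))   # 1.0  -> the receiver recovers C's 1
print(nip(B, T))   # -1.0 -> and B's 0

# Case 2: C sends a 0 instead
T2 = [a + b + c for a, b, c in zip(A, neg(B), neg(C))]
print(nip(C, T2))  # -1.0 -> C's 0
```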
Theory to Practice

- CSMA/CD MAC protocols with various degrees of persistence:
  - IEEE 802.3 is a specific implementation
  - The random delay after a collision is based on a binary exponential backoff algorithm
  - Average-case performance: moderate
  - However, no worst-case delay guarantee for individual stations
- Token Bus and Token Ring protocols:
  - Worst-case bounded delay; may be useful for real-time applications
  - IEEE 802.4 and 802.5 LAN standards
- LAN to MAN, and fairness issues:
  - Distributed Queue Dual Bus (DQDB), IEEE 802.6
- IEEE 802.2: Logical Link Control

Ethernet 802.3

Essentially, it is a 1-persistent CSMA/CD Protocol

Looking for an idle channel


If not found, i.e., Channel=busy, station waits in a greedy mode
If Channel = idle, station immediately attempts to transmit data



If no collision, then successful transmission
If collision, stop transmission immediately, wait for a random delay,
and try again
Requires broadcast mode cable topology



Linear, Backbone, Tree, Segments with Repeaters
Figure 4-19
Worst case delay in broadcast transmission affects performance
(Efficiency, for example)

figure 4-19
Binary Exponential Backoff
Algorithm for Random Delay Wait

Motivation:

Random delay to ensure that collisions will eventually be resolved



If few stations compete, the range of random delays should be smaller


Minimize the probability that two (or more) colliding stations will keep
colliding again and again
Once done so, then minimize the absolute ranges of delay periods during
these random wait cycles
Chances of consecutive collisions are low, hence keep the random
delay period small
If collisions occur in consecutive attempts, then the range of random
delays should be increased (perhaps, rapidly) to quickly resolve the
colliding stations

Here, two or more stations are repeatedly colliding. Hence, most
immediate priority is to resolve the conflict between them.
Binary Exponential Backoff (contd...)

After the first collision

random wait period is either 0 (i.e., re-try next slot) or 1
After the second consecutive collision

random wait period is in the range {0, 1, 2, 3}
After the i-th consecutive collision, i <= 10

random wait period is in the range {0, 1, 2, ..., 2^i - 1}
For 11 <= i <= 15, the random wait period range is frozen: {0, 1, ..., 1023}
For i = 16, an abnormal transmission event interrupt is sent to the message
source
Features


For fewer stations, and fewer collisions ==> average random wait is small
For many stations, and lots of collisions ==> collisions get resolved quickly
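The backoff rules above fit in a few lines. A minimal sketch (the function name and the use of a uniform draw per attempt are assumptions of this sketch):

```python
import random

def backoff_slots(i):
    """Slots to wait after the i-th consecutive collision
    (truncated binary exponential backoff, as in 802.3)."""
    if i >= 16:
        # abnormal transmission event: give up and report failure upward
        raise RuntimeError("excessive collisions")
    k = min(i, 10)                       # range stops doubling after i = 10
    return random.randrange(2 ** k)      # uniform over {0, ..., 2^k - 1}

# i = 1 -> {0, 1};  i = 2 -> {0..3};  11 <= i <= 15 -> {0..1023}
```

Freezing the range at 2^10 bounds the worst-case wait while still letting the range grow fast enough to separate many repeatedly colliding stations.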
Ethernet Addressing

Frame Format


Transmission occurs in frame-sized quantums (which also gives the
collision-detection advantage)
Preamble: 7 Bytes







Each byte = 10101010; with Manchester encoding this yields a 10-MHz
square wave for 5.6 microsec
Used for clock synchronization
Start Delimiter: 1 Byte (10101011)
Destination (and Source) Address: 2 or 6 Bytes
Data Length: 2 Bytes; Actual Data: 0 to 1500 Bytes
Pad: 0 to 46 Bytes (used for ensuring >= 64 bytes after dest. addr.)
Checksum: 4 Bytes (32-bit CRC)

Bytes:     7        1        2 or 6       2 or 6      2      0-1500   0-46      4
Field:  Preamble  Start   Dest. Addr  Source Addr  Length    Data     Pad   Checksum
802.3 Frame Format
Insert Figure 4-21 here
Ethernet Addressing (contd.)


Data length: 0 to 1500 Bytes
Effects of short data frames






Too small data length can confuse the receiver
Is it a collided frame, or real (short) data ?
Also, two frames may start at distant ends of the cable
Answer: Each frame must be at least 64 bytes from the destination
address onward
If the actual data size is small, then add a Pad (up to 46 bytes)
Why 64 Bytes ?
10-Mbps LAN, 2.5 km cable (specs), and a 2*tau collision window
Minimum frame time = 51.2 microsec ==> 64 Bytes length
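The 64-byte arithmetic can be made concrete. A minimal sketch (function names are assumptions of this sketch):

```python
def min_frame_bytes(rate_mbps=10.0, collision_window_us=51.2):
    """A frame must last at least 2*tau, so the sender is still
    transmitting when news of a collision gets back to it."""
    bits = rate_mbps * collision_window_us   # 10 Mbps * 51.2 us = 512 bits
    return int(bits) // 8                    # 512 bits = 64 bytes

def pad_bytes(data_len):
    """Pad so that data + pad >= 46 bytes, giving
    6 + 6 + 2 + 46 + 4 = 64 bytes from dest. address through checksum."""
    return max(46 - data_len, 0)

print(min_frame_bytes())   # 64
print(pad_bytes(10))       # 36
```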
Broadcast and Multicast Addresses

Destination Address



Msb: 1 for “group” (multicast or broadcast), 0 for “unicast”
Address = all 1’s: indication of Broadcast
How does multicast work ?


2nd Msb: Local vs. Global addresses


Group addr. id = programmed to listen at individual nodes
Useful for address filtering, and flooding control
Uniqueness of Node Addresses




Total 46-bit addressing (6 bytes minus the 2 msb flag bits)
Approx. 7 * 10^13 addresses
Can provide unique address to every node !!
Manufacturers procure a bulk of address ranges
Broadcast, Multicast, and Unicast


Each transmitted frame is listened to by every adapter
Adapters act as filters


Frames that are ok-ed by the filter are sent to the backend host
computer
Filter Modes




Listen to self-address only: Unicast
Promiscuous: Listen to all addresses (useful for gateway design)
Listen to addresses with all 1’s: Broadcast
Listen to specific group-ID: Multicast
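The four filter modes above amount to a short predicate. A minimal sketch, assuming string MAC addresses and a set of programmed group-IDs (names are illustrative):

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"   # destination address of all 1's

def adapter_accepts(dst, my_addr, my_groups, promiscuous=False):
    """Does the adapter pass this frame up to the backend host?"""
    if promiscuous:                # listen to everything (gateway design)
        return True
    if dst == BROADCAST:           # broadcast: all-1's address
        return True
    if dst == my_addr:             # unicast to self
        return True
    return dst in my_groups        # programmed multicast group-IDs
```

Everything the filter rejects never reaches the host, which is exactly the traffic-reduction role the slide describes.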
ARP vs. RARP

Issue: Upper Layer Address vs. Ethernet Address


Address Resolution Protocol





Forward and Reverse Mapping
32-bit IP address => 48 bit Ethernet address
Naive Approach: Configuration Files (IP address vs. Ethernet
address)
ARP Algorithm: Broadcast IP address and seek a response
ARP records can be cached, optimized for locality
Reverse Address Resolution Protocol

Host machine (at boot time) transmits ethernet address and seeks
IP address (from RARP server)
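The ARP caching idea can be sketched as a small class. This is a sketch, not the protocol itself: the resolver callback stands in for the real broadcast-and-reply exchange, and all names are assumptions:

```python
class ArpCache:
    """Sketch of ARP with caching: IP -> Ethernet address,
    resolving misses by a (simulated) broadcast query."""
    def __init__(self, broadcast_resolve):
        self.table = {}                      # ip -> mac, cached for locality
        self.broadcast_resolve = broadcast_resolve
        self.broadcasts = 0                  # how often we had to ask

    def lookup(self, ip):
        if ip not in self.table:             # miss: "who has <ip>?" broadcast
            self.broadcasts += 1
            self.table[ip] = self.broadcast_resolve(ip)
        return self.table[ip]

# Stand-in resolver (hypothetical MAC value)
arp = ArpCache(lambda ip: "08:00:2b:00:00:01")
arp.lookup("129.219.0.7")
arp.lookup("129.219.0.7")        # second lookup is served from the cache
assert arp.broadcasts == 1
```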
Ethernet Connectors

10Base 5 (Thick Ethernet)


10Base 2 (Thin Ethernet)


Flexible Connector
10Base T (Central Hub)


Vampire Tap
Nodes connect twisted pair cable to a switch
10Base F

Version for optical fiber
Worst Case Collision Detection
Insert Fig 4-22 here
Performance of 802.3

Simplistic analysis



Assume a fixed number of, k, stations always with data to transmit
p = probability with which each station transmits during a contention slot
Then, the probability that one of those k stations will successfully acquire
the channel is A = k * p * (1-p)^{k-1}



k ways, one for each station being the channel winner
The winner transmits while the remaining (k-1) stations stay silent ==>
Probability = p * (1-p)^{k-1}
Probability that the contention interval is exactly j slots =
A * (1-A)^{j-1}
No success during the first (j-1) slots ==> (1-A)^{j-1}
Success at the j-th slot ==> A * (1-A)^{j-1}
Performance of 802.3 (contd...)

Mean number of slots per contention
= sum (from j=1 to infinity) [ j * A * (1-A)^{j-1} ]
= 1/A
Each slot has a duration of 2*tau, where tau is the worst-case propagation
delay
Hence, mean contention interval = 2*tau / A
If the average frame takes P time units to transmit, then the total
time per frame = P + mean contention interval = P + 2*tau/A
Hence, Channel efficiency = P / ( P + 2*tau/A )
Refer Figure 4-23, for channel efficiency as a function of the #stations
trying to send data
Large P ==> higher efficiency, but increased frame fragmentation
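Plugging the formulas together gives the curve of Figure 4-23. A minimal sketch, assuming each station uses the optimal transmit probability p = 1/k (function name and example numbers are illustrative):

```python
def channel_efficiency(P, tau, k):
    """Simplistic 802.3 model: k always-ready stations, each transmitting
    in a contention slot with the optimal probability p = 1/k."""
    p = 1.0 / k
    A = k * p * (1 - p) ** (k - 1)   # P(exactly one station transmits)
    return P / (P + 2 * tau / A)

# One station never loses a contention slot: efficiency = P / (P + 2*tau)
print(channel_efficiency(P=25.6, tau=5.0, k=1))
# Efficiency falls as stations are added (cf. Figure 4-23)
print(channel_efficiency(P=25.6, tau=5.0, k=32))
```

As k grows, A tends to 1/e, so the contention overhead approaches 2*e*tau per frame regardless of how many more stations join.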

figure 4-23
Switched Ethernet

Switched Ethernet





Intelligent processing allows packet filtering
Useful for traffic reduction, containment
Example: multicast filtering, broadcast filtering, …
Other usage: security, workgroup establishment
Design Paradox



Ethernet had not been initially meant to be point-to-point
However, design needs led it to becoming point-to-point
It's still called Ethernet, and behaves like Ethernet - for compliance,
and the ability to (still !!) use existing ethernet adapter cards

Sometimes, backward compatibility is an expensive legacy to carry !!
Full Duplex Ethernet

Design Rationale



Ethernet does not scale well
# Connect Points, also bandwidth...
Solution: Several 802.3 LAN connected via a faster switch



FDSE Architecture



Each 802.3 LAN is in reality a plug-in card at the switch
Full Duplex Switched Ethernet
Not a shared bus LAN
Instead, a point-to-point protocol around a fast switch
Switch has several (<=32) “Plug-in Cards”


Each Plug-in Card has few (<=8) Connectors
Each connector is a 10Base T link to a host computer
FDSE Block Diagram
Insert Fig 4-24 here
FDSE Structure
(Diagram: hosts attach via 802.3 LAN hubs to the FDSE switch,
which also connects to other FDSEs)
FDSE Design


Identical frame format, addressing, ....
On-Card LAN



If a frame is addressed to another node on the same card, then the
frame is locally copied
Else, it is transmitted over the high-speed backbone bus to another
on-card LAN
Input Buffering



Collision resolution with on-card LAN
(Btw, collision never occurs across multiple cards)
Approach 1: adopt CSMA/CD within each card
Approach 2: Input packet buffering + scheduling

Whoa !! Feasibility for Packet Prioritization, Periodic traffic support...
Packet Priority in FDSE LAN

802.3 has no support for priority


However, FDSE has diverged considerably from the initial
"ethernet"



It is point-to-point, instead of shared media
It is input buffered, and scheduled, instead of collision and re-try
Hence, packet priority establishment is feasible in FDSE




802.4 and 5 evolved precisely for these reasons
Priority implementation in the scheduling of input buffer
Still, ethernet frame format does not accommodate priority values
One way to accommodate priority is as part of the data field
Priority support from upper OSI layers (e.g., TCP) is always
feasible
Periodic Traffic Support in FDSE
LAN



Not directly supported
But, can always be implemented from TCP or IPX layer
Admission Control Stage


Dynamic Scheduling Stage



At TCP or upper application layer
At FDSE input buffer scheduling algorithm
Upper OSI Layer Connection-Oriented Virtual Circuit can
solve this problem
Aperiodic RT Traffic Support

Use placeholder (i.e., stub) periodic traffic
FDSE LAN of LANs

FDSE as a switch easily lends itself to hierarchical
construction as a LAN or MAN / WAN (a LAN of LANs)
FDSE Flow Control



Prevent over-bandwidth situations, and recover from
congestions and hot spots
Objective: Forward packets from in=>out ports without any
loss of packets and minimum (=0 ?) latency
TCP or Window Based Protocol


Several packets may already be in flight before a "destination port
overloaded" message can be acknowledged back to the sender
Solution: a modest-sized buffer; the time to fill it (due to
destination-port jamming) is adequate to inform the sender node


Disadvantage: Large buffer ==> large (individual) packet latency
Another solution: reduce the window size of the upper layer
protocol (e.g., TCP or IPX)
Learn Table (Address Mapping)


Learn table: a table of information associating 48-bit
Ethernet addresses with ports
New frame arrival:



Look up the port address, from (destination’s) ethernet address
If port address unavailable, then broadcast (unfortunate situation,
cannot be helped) - “are you out there, please respond” type
Learn Table: updated by current lookup information


Recent failures in lookup, and eventual resolution (by broadcast)
Old entries are flushed in a cache-page update manner

LRU, or FIFO
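The learn-table behavior can be sketched with an ordered dictionary; the slide leaves the flush policy open (LRU or FIFO), so this sketch assumes LRU, and the class and method names are illustrative:

```python
from collections import OrderedDict

class LearnTable:
    """Sketch: 48-bit Ethernet address -> port, learned from frame
    arrivals, with LRU-style flushing of old entries."""
    def __init__(self, capacity=1024):
        self.table = OrderedDict()
        self.capacity = capacity

    def learn(self, src_addr, in_port):
        self.table[src_addr] = in_port       # the source teaches its location
        self.table.move_to_end(src_addr)     # mark as recently used
        if len(self.table) > self.capacity:
            self.table.popitem(last=False)   # flush the oldest entry

    def port_for(self, dst_addr):
        # Unknown destination: broadcast - "are you out there?" (flood)
        return self.table.get(dst_addr, "FLOOD")
```

Note that lookups never fail outright; an unknown address just degrades to the broadcast case the slide calls unfortunate but unavoidable.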
FDSE and Fast Ethernet Connectors

100 Base - Fx




100 Base - T4



Specs for 100 Mbps Fast Ethernet over fiber
Similar to FDDI specs
Signals are unscrambled, 4B5B encoded
same as above, except for category 3 or better twisted pair cabling
Full duplex not supported under T4
100 Base - TX


same as above, except for category 5 twisted pair cabling
Similar to CDDI specs, signals are scrambled, 4B5B encoded
Limitations of 802.3





No worst case delay bound for any given station
No notion of priorities among the nodes/stations
Focuses on the overall channel efficiency, not on the
individual user station needs
Certainly not good for time-critical traffic
IEEE 802.4 evolves from 802.3




Token Bus structure, logically
Each one of the N nodes takes turn in sending their respective frames
If each node takes T time units, then no node will have to wait more
than NT time units
Figure 4-25 as an example Token Bus

figure 4-25
IEEE 802.4: Token Bus

Logical linear connection







Each node has a predecessor and a successor node
The Token arrives from the predecessor node, and is destined to the
successor node after usage by the current node
The highest numbered station sends the first frame
If a node has no data to send, it passes the Token immediately
Logically the nodes are organized as a Ring (Fig. 4-25)
Collision avoidance by mutual exclusion in Token ownership
Physically, the nodes may be in any connection pattern



Tree, Bus, ...
Essentially, a broadcast transmission medium is needed
Logical ordering of the stations is independent of the physical locations
Priority in Token Bus

Worst case response time for each node < N*T time units, for
N nodes and T time units per node (i.e., per Token)




This prevents unbounded response-delay situations
Yet it may not satisfy hard real-time guarantees
How to assign priorities to the traffic within each node ?
Token Bus defines four priority classes, 0, 2, 4 and 6


Priority 6 is the highest, Priority 0 is the least
When a node acquires the Token, say for T time units


First, it allocates transmission from Priority 6 messages
After all the data from Priority 6 set is exhausted, if any more time is still
left ==> allocate traffic from Priority 4 messages

After all Priority 4 messages are over, if still some time is left, then use for
Priority 2 messages, and so on
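The drain-by-priority rule above can be sketched in a few lines. Assumptions of this sketch: queues map priority class to a list of frame transmission times, and a frame is sent only if it fits in the remaining hold time:

```python
def serve_token(queues, hold_time):
    """Sketch: drain Priority 6 traffic first, then 4, 2, 0,
    while token-hold time remains."""
    sent = []
    for prio in (6, 4, 2, 0):
        for t in list(queues.get(prio, [])):
            if t > hold_time:
                break                 # next frame does not fit; stop here
            hold_time -= t
            queues[prio].remove(t)
            sent.append(prio)
    return sent

q = {6: [2, 2], 4: [3], 2: [5]}
print(serve_token(q, hold_time=8))   # [6, 6, 4]: the priority-2 frame waits
```

With hold time 8, both priority-6 frames (2+2) and the priority-4 frame (3) go out; the priority-2 frame must wait for the next token rotation.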
Synchronous Traffic in Token Bus

The bandwidth for at least one of the Priority 6 messages is
guaranteed



“<=T” (as much as desired) time units of transmission per every
N*T time units
Synchronous traffic, e.g., live video, multimedia, automated
factories and production environments, are supported
Limitations



Ranges of deadline that can be honored
No notion of periodic traffic support
Fault-Tolerance


What if a node/station goes down while holding a Token ?
A max-time parameter for claiming tokens
Token Ring: IEEE 802.5

Token Bus

Requires a broadcast channel




Large delay
Analog characteristics
Enjoys the freedom of logical predecessor/successor assignment
Token Ring




A set of point-to-point connections
Most typically digital connections
Suitable for most physical media, e.g., twisted pair, co-ax, fiber
Predecessor/Successor defined by the physical topology


In contrast to 802.4, where it was a Logical relationship
Refer Fig. 4-28

figure 4-28
Token Ring Operations


The media is no longer a Broadcast bus
Each point-to-point element of the Ring must now



Transmit data bits in/out for the speedy operation of the Ring
Cater both to the originating/destined traffic, as well as the traffic passing by
the node
Circulating Token



A 3-byte pattern, called Token, circulates around in the Ring
The endless circulation ends anytime one (or, more) node(s) has data to send
The transmitting node seizes the Token, changes a single, particular bit in the
Token



The interpretation of the 3-byte immediately changes from Token to Data
The station starts to pour its bit-stream on the Ring
Length of the data (i.e., message frame) can be much longer than 3 bytes
Token Ring Operations (contd...)

A node can be in any one of two modes


Listen: copy input bits to the output bits, with a 1-bit delay
Transmit:

Break the connection between Input to the Output




Enter the node’s own data into the Output bit-stream
Remove the (previously) transmitted data bits from the Input bit-stream
The entire frame may never have to appear in the Ring, hence no
limitation on the Frame length


Be able to do so, i.e., switch from the “Listen” mode, within 1-bit delay
Unless the maximum Token-Hold time is exceeded
After a node has finished transmission of all the frame’s bits


Must re-generate the Token’s 3-byte pattern
Switch back to “Listen” mode instantaneously after the last bit of the
Token has been generated and inserted into the Ring
Minimum Ring Delay

Consider a worst case: none of the nodes is transmitting



The 3-byte Token pattern must be circulating around
The Ring delay must be long enough to accommodate the 3-bytes
Ring delay includes



Point-to-point transmission delay of each one of the Links
1-bit copying and re-transmission delay introduced at each station
Ring Length vs. Ring Data Rate




For R Mbps data rate, each bit lasts 1/R microsec
Signal propagation speed is typically 200 meters per microsec
Each bit therefore occupies about 200/R meters on the Ring
A ring with 8 nodes: eight 1-bit delays, one per node
The additional 2-byte = 16-bit delay (NB: Token = 3 bytes) must come from the Ring itself
Hence, the Ring must be at least 16 * 200/R = 3200/R meters long (= 3.2 km, for 1 Mbps)
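The arithmetic above generalizes directly. A minimal sketch (function and parameter names are assumptions):

```python
def min_ring_length_m(rate_mbps, n_stations, token_bits=24,
                      speed_m_per_us=200):
    """Minimum ring length (meters) so the 3-byte token 'fits':
    each station holds 1 bit; the remaining token bits must be
    in flight on the cable."""
    bits_on_cable = max(token_bits - n_stations, 0)
    metres_per_bit = speed_m_per_us / rate_mbps   # each bit lasts 1/R us
    return bits_on_cable * metres_per_bit

print(min_ring_length_m(rate_mbps=1, n_stations=8))   # 3200.0 metres
```

In practice an artificial delay can be inserted at one station instead of stretching the cable, but the slide's point stands: faster rings need proportionally less cable to hold the token.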
Ring Delay and Contention Resolution

Effect of Node Withdrawal from the Ring

What if one or more nodes withdraw from the Ring ?


The “Listen” mode must be honored by passive devices



Due to failure, or un-willingness to participate for the time being
Retain the “copy Input to Output” feature
Maintain the 1-bit delay
Contention Resolution


Mutually exclusive ownership of the Token by the nodes
Once a node starts to transmit, i.e., has modified the Token and is in the
middle of data transmission -- no other station can acquire the Token


A higher priority node can make reservations, in the special Reservation
Fields of the transmitting frame
But, no interruption until the Token-Holding-Time expires
Performance of Token Ring

Light traffic

Idle circulation of the Token
Occasional seizure by a transmitting node, transmission of a frame (of
arbitrary size), and re-generation/insertion of the Token
Heavy traffic load

Nodes wait to transmit, each with its own input Queue
A node currently transmitting will either finish its frame or hit a
time-out; the next waiting node (in priority order, then round-robin
Ring order) acquires the Token
Can lead to nearly 100% channel efficiency under heavy traffic load
Some implementation notions


Wire center, to better accommodate broken cables
Centralized monitor station, elected to one of the nodes (i.e., de-centralized)
Priority in Token Ring



Supports multiple priority frames
The second byte of the 3-byte Token contains a "priority" field
A node with Priority=n data to transmit




Must wait and obtain a Token whose priority <= n
May make a reservation on the current transmission, but only if no
other higher priority traffic has already made reservation
When the current frame transmission is over, the new Token generated
will have a priority = highest priority reservation being waited
Fairness in Priority Management


Left unchecked, the Token's priority would ratchet upward until some
node explicitly lowers the priority
A node raising the priority is responsible for lowering the priority
again, after it is done with its transmission
Comparison of 802.3, 802.4 and 802.5

Expect similar performances, overall


802.3 - Advantages




They all use similar LAN technologies
Popular in usage, simple; Passive cable, no modem
Nodes can be added, deleted without any re-work (i.e., scalable)
Little delay at low load
802.3 - Disadvantages





Lots of analog stuff, including analog “Carrier-Sense”
Cable length restricted due to sensing delays (affects the Channel efficiency)
Minimum frame-size restriction, leading to frame padding and wastage
Non-deterministic, not for RT applications, no notion of priorities
Lot of collisions at high traffic load ==> decreasing utilization
Comparison (contd...)

802.4 - Advantages






Reliable, usage of cable TV equipment
More deterministic than 802.3, yet may not be for tight deadline
RT applications
Can support priorities, handle fixed bandwidth synchronous traffic
At high traffic load, it becomes close to TDM ==> good
throughput and efficiency
Short frames are possible
802.4 - Disadvantages



Still analog devices, including amplifiers and modems
Substantial delay at low traffic load
Complex protocol
Comparisons (contd...)

802.5 - Advantages

Fully digital, and flexible/cheap connectors (e.g., twisted pairs)
Handles priorities, despite the fairness issue
Both short and arbitrarily long frames are possible

Limited only by the Max-Token-Holding time
Good throughput and efficiency at high traffic load

Like 802.4, but unlike 802.3
802.5 - Disadvantages

Usage of a (floating) centralized monitor
Relatively high delay at low load, due to waiting for the Token
Which one is best ?

Depends on your traffic model
DQDB: IEEE 802.6

Distributed Queue Dual Bus - evolving from LAN to MAN



2 Bus structure, leftward (Bus B) and rightward (Bus A) directions
Parallel, unidirectional Buses spanning through the metropolitan area
Each bus, with a Head-end, generates a steady stream of 53-byte cells




ATM cells ? AAL compatibility...
Cells (empty, or with data) travel from the Head-end to the Tail-end
Cells fall off after exceeding the Tail-end
Cell format


44 byte payload
Protocols Bits: Busy (= cell is occupied, or not), and Request (a third
party station with data to transmit can set this bit on)
Transmission Sequence in DQDB

Station P has a data/cell to send, to station Q



What a Naive Sequence Could Do




If Q is rightward of P, then use Bus A
If Q is leftward of P, then use Bus B
Station P seeks (in a greedy mode) for an empty cell
Since cells are originated from the Head-end, stations near the
Head-end will get preference in receiving empty cells
Stations far away from the Head-end can lead to starvation
A Key Design Objective of DQDB


Implement FIFO (i.e., fairness ? ) among the transmitting stations
Issue: how to implement a FIFO ordering, when the transmission
requests are really generated in a distributed manner
Distributed FIFO Ordering


A node with data to transmit, does not immediately try to seize an
empty cell and proceed with the transmission
Instead, the node checks how many, if any, downstream nodes have
made prior transmission requests







Downstream in regard to the intended transmission direction
Why downstream ?
Because the downstream nodes are likely victims of “unfairness”
Note: a node can never be unfair to another upstream node
If there had been k prior downstream transmission requests, then the
node will wait (i.e., skip) k empty cells
Next the node will transmit its own cell (assume, for now, that the node
has only 1 cell to transmit)
Finally, the node will wait (i.e., skip) for m additional cells, where m
fresh transmission requests might have arrived while waiting for the k
cell skips (3rd bullet above)
Distributed FIFO: Implementation

How does a station (S) know about how many downstream nodes
would have made prior transmission requests ?

Have (all) the prior transmission requests explicitly notify every upstream node


Node S, when its data becomes ready to transmit, begins to skip (i.e., wait
for) k empty cells passing by towards downstream




Require a counter, called Request Counter (RC), at each node to sum up the number
of such prior requests (= parameter k, in the previous slide)
This step will ensure that all the prior-requesting downstream nodes will be served
before S gets served
Will it ??? (Hint: transient effects)
During this wait (for k empty cells to pass by) time, node S will also count how
many additional RC requests arrive (=parameter m, in the previous slide). (Node
S swaps RC=k value with an alternate counter, CD.)
Transmission schedule (for S): k cell skips, transmit its own cell, m cell skips
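The RC/CD counter mechanics for one bus direction can be sketched as a tiny state machine; the class and method names are assumptions of this sketch, and it handles only the single-cell case the slide assumes:

```python
class DqdbStation:
    """Sketch of one DQDB station's counters for a single bus direction.
    RC accumulates downstream requests; when our own cell becomes ready,
    the count moves to a countdown (CD) of empty cells we must let pass."""
    def __init__(self):
        self.rc = 0          # prior downstream requests (parameter k)
        self.cd = 0          # empty cells still to skip
        self.ready = False   # do we have a cell queued?

    def on_request_bit(self):          # REQUEST bit seen on the reverse bus
        self.rc += 1

    def queue_own_cell(self):          # our data becomes ready
        self.cd, self.rc = self.rc, 0  # k skips; RC restarts, counting m
        self.ready = True

    def on_empty_cell(self):
        """Return True iff this station seizes the passing empty cell."""
        if not self.ready:
            return False
        if self.cd > 0:
            self.cd -= 1               # let an earlier requester have it
            return False
        self.ready = False             # transmit our single cell
        return True

s = DqdbStation()
s.on_request_bit(); s.on_request_bit()         # k = 2 downstream requests
s.queue_own_cell()
print([s.on_empty_cell() for _ in range(3)])   # [False, False, True]
```

The m "late" requests counted while skipping are exactly the requests RC keeps accumulating after `queue_own_cell()` swapped its old value into CD.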
FIFO Transmission - Example

Refer Figure 4-32






Essentially, stations are operating in a “polite” mode
Instead of seizing cells asap, they allow transmission requests from the
potential victims to take place first
Node D had an earlier request, which got notified to Node B
Later, when node B had a data (i.e., cell) to send, B will wait (i.e., skip) an
empty cell to let node D finish
Beyond this, node B is free to use an empty cell, but it will keep track of late
arriving requests from downstream, and plan for a subsequent wait period as
well
Questions


Is this truly implementing a FIFO ?
How can you extend this scheme, for multi-cell data from a particular node ?

figure 4-32
LAN Bridges


Bridges are used to connect multiple LANs
Rationale for Bridge design






A set of previously designed LANs need to connect up at a later date, due
to evolving network infrastructure...
For geographically spread organizations, localized LANs and bridges
connecting them can be a lot cheaper than a single large LAN running
across the entire organization
For load sharing, it may be wise to split a LAN into multiple LANs and
interconnect them using bridges
A single LAN may not handle a long-distance networking need -- multiple
LANs and bridges connecting them can be a wise solution
Multiple LANs (and, Bridges interconnecting them) could be more reliable
than a single (large!!) LAN
Bridges can conduct information filtering/screening ==> more secure
LAN Bridges (Example)
Insert Fig. 4-38 here
LAN Bridges (Example)
Insert Fig. 4-40 here
FDDI - High-Speed LAN


802.3/4/5/6 LAN/MANs are meant for low speed and short distances
For higher speed, and longer spread




Fiber is recommended
It has higher bandwidth, thin/lightweight, no electro-magnetic interference, and
enhanced security feature
FDDI (Fiber Distributed Data Interface) is one such fiber based LAN
FDDI




Token ring LAN operating at 100 Mbps
Commonly used to interconnect LANs (refer Fig. 4-44)
FDDI-II is an updated version, which can handle synchronous traffic (with
reservation, i.e., circuit-switching)
NB: sense a blend of 802.5 with the synchronous-traffic feature of 802.4...

figure 4-44
FDDI - Physical Layer

Multimode fiber


Uses ordinary-spectrum light from LEDs, rather than lasers

Singlemode fiber (thinner, applicable for longer distances) is not
necessary here, and is a lot more expensive...
Also safer, in case the fiber is cut open and viewed by eye
Two fiber rings (Ring I and II), running parallel to each other


Bit error < 10^(-9) range
Two classes of stations: A and B



Stations of type A connect to both the Rings, i.e., Ring I and II
Stations of type B (cheaper) connect to only one ring
Ring failure


If one fails, the other serves as a backup
If both fail ==> join the two into a single new ring (twice the length, Fig. 4-45)

figure 4-45
FDDI - Protocol

Similar to 802.5

Node wants to transmit (asynchronous) data




Node wants to transmit (synchronous) data


First, capture the Token
Then transmit frame(s), and keep removing the frame(s) when they cycle back
Unlike 802.5: One can generate the Token immediately after transmission ends
(since the Ring is longer, and it is wasteful to wait until the last frame re-cycles
back)
Handled similar to 802.4
Synchronization



All clocks are stable (per hardware design) within 0.005 percent
Thus, around 2000 bytes transmission ==> 1% clock error
Re-synchronization (using a preamble bit-pattern) in <= 4500 bytes
FDDI - Synchronous Data and Priority
Handling

Synchronous frames generated every 125 microsec


Provides 8k samples per second for PCM or ISDN data
Synchronous frame includes 96 Byte data



Synchronous traffic is guaranteed bandwidth


Once allocated, stays connected until the node transmits the last frame
Remaining bandwidth (= 96 Bytes * 8000 - load offered by Synchronous
traffic) is allocated on demand


Can accommodate up to 4 T1 data lines
24 byte * 8000 frames * 8 bits = T1 line’s bandwidth (1.544 Mbps)
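The "4 T1 lines" claim follows from the frame arithmetic. A minimal sketch (note the nominal T1 rate of 1.544 Mbps includes framing overhead; the 24-byte payload per 125-microsec frame accounts for the 1.536 Mbps computed here):

```python
frames_per_s = 8000                      # one synchronous frame per 125 us

sync_bps = 96 * 8 * frames_per_s         # 96-byte payload -> 6.144 Mbps
t1_payload_bps = 24 * 8 * frames_per_s   # 24 bytes/frame  -> 1.536 Mbps

print(sync_bps)                    # 6144000
print(sync_bps // t1_payload_bps)  # 4 -> up to 4 T1 lines fit
```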
Priority assignment similar to 802.4 (i.e., within node)
System Parameters


Token holding timer - maximum token holding time
Token rotation timer - check for a long-absent token (NB: fault detection)
Switched Architecture - Way to Go !!
Insert Figure 4-48 here