CS 475 Networks - Lecture 21
Tuesday, November 8, 2011


Reminders: Homework 6 due today,
Programming Project 4 due on Thursday

Questions?

Current event: BGP router glitch on Nov. 7
http://money.cnn.com/2011/11/07/technology/juniper_internet_outage/
Outline
Chapter 6 - Congestion Control and Resource
Allocation
6.1 Issues in Resource Allocation
6.2 Queuing Disciplines
6.3 TCP Congestion Control
6.4 Congestion-Avoidance Mechanisms
6.5 Quality of Service
6.6 Summary
Issues in Resource Allocation
Congestion control refers to the efforts made by
network nodes to prevent or respond to overload
conditions. (Congestion control is different from
flow control, but they share some of the same
control mechanisms.)
Resource allocation refers to the process by
which network elements try to meet the competing
demands that applications have for network
resources – primarily link bandwidth and buffer
space in routers.
Issues in Resource Allocation
Resource allocation is partially implemented in the
routers or switches inside the network and
partially in the transport protocol.
End systems use signaling protocols to convey
resource requirements to network nodes. The
nodes respond with information about resource
availability.
Network Model
We will discuss resource allocation with respect to
a network that is packet switched, connectionless,
and best-effort.
Network Model
Packet Switched Network
Links in a packet-switched network will typically
have different bandwidths.
Although a given source may have enough capacity
on its outgoing link to send a packet, a slow link
somewhere in the middle of the path can become a
bottleneck.
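As a rough illustration (the link speeds below are made up, in Mbps), the usable capacity of an end-to-end path can be no better than that of its slowest link:

```python
# A minimal sketch, assuming hypothetical link speeds in Mbps: the
# capacity available to an end-to-end path is limited by its slowest link.
link_bandwidths_mbps = [1000, 100, 10, 622]   # links along a made-up path

bottleneck = min(link_bandwidths_mbps)
print(f"Path throughput is limited to about {bottleneck} Mbps by the slowest link")
```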
Network Model
Connectionless Flows
Connectionless networks do not use setup
messages to reserve network resources prior to
transmitting data.
Although our network is connectionless and
packet-switched we will see that the concept of a
flow is important when discussing resource
allocation. A flow is a sequence of packets
traveling between a source and destination along
the same path.
Network Model
Connectionless Flows
[Figure: Multiple flows passing through a network]
Network Model
Connectionless Flows
Routers may maintain soft state about a flow so
that they can forward its packets more efficiently.
(Routers in connection-oriented networks maintain
hard state.)
A flow can be detected implicitly by inspecting
packet source and destination addresses or
explicitly via flow setup messages.
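As a minimal sketch of implicit flow detection (the addresses and packets below are made up, and real routers often key on the full 5-tuple of addresses, ports, and protocol), a router can group packets by their source/destination pair:

```python
from collections import defaultdict

packets = [
    {"src": "10.0.0.1", "dst": "192.168.1.5", "size": 1500},
    {"src": "10.0.0.2", "dst": "192.168.1.9", "size": 400},
    {"src": "10.0.0.1", "dst": "192.168.1.5", "size": 1500},
]

flows = defaultdict(list)   # soft state: rebuilt simply by watching packets arrive
for pkt in packets:
    flows[(pkt["src"], pkt["dst"])].append(pkt)

for key, pkts in flows.items():
    total = sum(p["size"] for p in pkts)
    print(key, "->", len(pkts), "packets,", total, "bytes")
```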
Network Model
Service Model
We will initially assume that our network is a
best-effort network. No guarantees are made on
either throughput or delay.
Some networks can guarantee certain levels of
performance (throughput, delay, etc). Such a
network is said to provide quality of service (QoS).
This type of network is covered in Section 6.5,
which we will be skipping.
Taxonomy
Router-Centric vs Host-Centric
Resource allocation mechanisms can be
categorized as being either router-centric or
host-centric.
In a router-centric design, the router decides
when packets are forwarded and dropped.
Routers inform sending hosts of how much data
they can send.
In a host-centric design, hosts observe the
network and adjust their behavior accordingly.
Taxonomy
Reservation vs. Feedback
In a reservation-based resource allocation
scheme a host asks the network for capacity at
the time a flow is established. If a router cannot
satisfy the request, the flow may be rejected. (A
router-centric design is implied.)
In a feedback-based approach hosts begin
sending data and adjust sending rates based on
feedback. Feedback may be explicit (in the form
of messages sent back from routers) or implicit.
Taxonomy
Window-Based vs. Rate-Based
In a window-based resource allocation method,
windows are used to indicate to the sender how
much data may be transmitted. Both TCP flow
control and TCP congestion control are
window-based.
Alternatively, a sender could be asked to transmit
at a particular rate (bits per second). Rate-based
requests are common in networks that support
Quality of Service.
Evaluation Criteria
Effective Resource Allocation
We want to allocate resources so that we
maximize throughput while minimizing delay.
Many allocation methods seek to optimize the
power of the network:
Power = Throughput/Delay
For a given allocation scheme, power is optimized
at a particular load value.
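As a toy illustration of why power peaks at an intermediate load (this assumes a simple M/M/1-style model where delay = 1/(capacity - load); the model and the numbers are not from the text):

```python
# A minimal sketch: power = throughput / delay under a toy queuing model.
capacity = 1.0

def power(load):
    throughput = load                    # below capacity, throughput tracks offered load
    delay = 1.0 / (capacity - load)      # delay grows sharply as load nears capacity
    return throughput / delay

loads = [i / 100 for i in range(1, 100)]
best = max(loads, key=power)
print(f"Power is maximized near load = {best:.2f}")   # about 0.50 for this model
```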
Evaluation Criteria
Fair Resource Allocation
Unless there is a statement to the contrary, we
will assume that fair resource allocation implies
that all flows using a link should have equal
throughput.
Given a set of flow throughputs (x1, x2, ..., xn) one
proposed fairness index is:
f(x1, x2, ..., xn) = (∑i=1..n xi)² / (n ∑i=1..n xi²)
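A minimal sketch of this index (often called Jain's fairness index) in code, with made-up example throughputs: it equals 1 when every flow gets the same throughput and approaches 1/n when a single flow gets nearly everything.

```python
def fairness_index(throughputs):
    # (sum of x_i)^2 / (n * sum of x_i^2)
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

print(fairness_index([10, 10, 10, 10]))      # 1.0: perfectly fair
print(fairness_index([40, 0.1, 0.1, 0.1]))   # ~0.25, i.e., close to 1/n
```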
Queuing Disciplines
A router's queuing discipline governs how packets
are buffered while waiting to be transmitted.
The queuing algorithm affects throughput (by
determining which packets get transmitted) and
buffer space (by deciding which packets get
dropped).
We will examine the first-in-first-out (FIFO) and
fair queuing (FQ) algorithms.
FIFO
FIFO is the simplest of all queuing algorithms. It
is currently also the most widely used algorithm
on Internet routers.
[Figure: (a) FIFO queue, (b) tail drop]
FIFO
FIFO is a scheduling discipline. Routers also use
a drop policy to determine which packets are
dropped when the queue is full. FIFO routers
typically use a drop policy called tail drop.
A simple variation on FIFO queuing is to have
multiple priority queues. Packets could contain a
field that indicates the packet priority (the IP Type
of Service or TOS field could be used for this
purpose).
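A minimal sketch of FIFO with tail drop (the buffer size and packet names are made up): arrivals are appended in order, and an arrival that finds the buffer full is simply discarded.

```python
from collections import deque

class FifoTailDropQueue:
    def __init__(self, max_packets):
        self.buffer = deque()
        self.max_packets = max_packets

    def enqueue(self, packet):
        if len(self.buffer) >= self.max_packets:
            return False                  # tail drop: the newest arrival is discarded
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # FIFO scheduling: transmit in arrival order
        return self.buffer.popleft() if self.buffer else None

q = FifoTailDropQueue(max_packets=3)
for pkt in ["p1", "p2", "p3", "p4"]:
    print(pkt, "queued" if q.enqueue(pkt) else "dropped (tail drop)")
print("transmit order:", [q.dequeue() for _ in range(3)])
```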
Fair Queuing
FIFO queues do not separate packets by flow. It
is possible for an ill-behaved source to use up
most of the available bandwidth.
We will see that TCP uses host-centric congestion
control. It is possible for an application to use
UDP instead and bypass TCP's congestion
control mechanism.
Fair Queuing
With fair queuing (FQ) a queue is maintained for
each flow and the flows are serviced in round-robin
fashion.
FQ would prevent a UDP flow from using more than
its fair share of network bandwidth.
Fair Queuing
Fair queuing is not quite that simple due to
different packet lengths. We want to ensure equal
bit rates for all flows rather than equal packet
rates.
To ensure equal bit rates, we compute the time at
which the router would finish transmitting packet i
of a flow as
Fi = max(F(i-1), Ai) + Pi
where Fi is the finish time, Ai is the arrival time,
and Pi is the time needed to transmit the packet.
Fair Queuing
The Fi values are treated as time stamps and we
transmit packets in order of increasing time
stamp. An arriving packet is not permitted to
interrupt a transmission already in progress, even
if the arriving packet has a smaller Fi value.
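A minimal sketch of this bookkeeping (the flow names, arrival times, and transmit times are made up, and real FQ implementations track a virtual clock rather than wall-clock arrival times): compute Fi per flow, then always transmit the queued packet with the smallest Fi.

```python
import heapq

last_finish = {}   # per-flow F_(i-1)
ready = []         # (finish_time, flow, packet) waiting to be transmitted

def arrive(flow, packet, arrival_time, transmit_time):
    # F_i = max(F_(i-1), A_i) + P_i
    finish = max(last_finish.get(flow, 0.0), arrival_time) + transmit_time
    last_finish[flow] = finish
    heapq.heappush(ready, (finish, flow, packet))

arrive("A", "a1", arrival_time=0.0, transmit_time=3.0)   # F = 3
arrive("B", "b1", arrival_time=0.0, transmit_time=1.0)   # F = 1
arrive("A", "a2", arrival_time=0.5, transmit_time=3.0)   # F = 6
arrive("B", "b2", arrival_time=0.5, transmit_time=1.0)   # F = 2

# Transmit in order of increasing finish time: b1, b2, a1, a2
while ready:
    finish, flow, packet = heapq.heappop(ready)
    print(f"transmit {packet} from flow {flow} (F = {finish})")
```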
Fair Queuing
FQ is work-conserving: the link is never idle as
long as there is at least one packet in a queue. If one
active flow is sharing a link with several idle flows,
the active flow will use the full link capacity.
If there are n active flows then each flow will get
no more than 1/nth of the link capacity. If a source
tries to send data at a faster rate, the queue will
fill and packets will be dropped.
Fair Queuing
A variation is weighted fair queuing (WFQ) in
which each flow is given a weight. With three
flows with weights of 1, 2, and 3, the flows would
get 1/6, 1/3, and 1/2 of the link capacity
respectively.
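A minimal sketch of how those shares fall out of the weights (the flow names are made up; the weights are the ones in the example above): each flow's fraction of the link is its weight divided by the sum of all weights.

```python
weights = {"flow1": 1, "flow2": 2, "flow3": 3}
total = sum(weights.values())            # 6

for flow, w in weights.items():
    print(f"{flow} gets {w}/{total} of the link capacity")   # 1/6, 2/6 (=1/3), 3/6 (=1/2)
```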
WFQ could be used on classes of traffic instead
of on flows. The IP TOS field could be used to
identify classes. The Differentiated Services
architecture (Section 6.5) uses this approach.
In-class Exercises

To be turned in as part of Homework 7:

- Problem 6.7 on pages 564-565
- Problem 6.10(a) on page 565