Outline
New Reno
SACK
TCP Vegas
RED
Fall, 2001 CS 640 1
• Recall from the Internet traffic lecture that packets are often transmitted in bursts
• Thus, losses also often happen in bursts
– Due to FIFO (drop-tail) queues in routers
• Problem with TCP Reno: if the new ACK after a Fast Retransmit does not cover ALL data outstanding when the duplicate ACKs arrived, Fast Recovery is NOT performed
– The only remaining mechanism for recognizing the other drops is a timeout
• Simple observation…
• Recognized that losses happen in bursts
• Upon receipt of a partial ACK (ACK for some but not all outstanding data), retransmit next data packet in sequence
• Same congestion control mechanism as Reno
– But Fast Recovery is no longer abandoned when multiple drops occur
• Enables recovery from multiple packet losses in sequence without timeout
– Timeouts are still used…
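The partial-ACK behavior above can be sketched as follows. This is a hypothetical illustration, not code from any real TCP stack; the names (`on_ack`, `recover`, `unacked`) are invented for the example.

```python
# Hypothetical sketch of New Reno's ACK handling during Fast Recovery.
# recover = highest sequence number outstanding when Fast Retransmit fired.

def on_ack(ack_seq, recover, unacked):
    """Handle a new ACK received while in Fast Recovery.

    unacked: sorted sequence numbers of segments still in flight.
    Returns the action New Reno takes for this ACK.
    """
    if ack_seq >= recover:
        # Full ACK: everything sent before the loss is acknowledged.
        return ("exit_fast_recovery", None)
    # Partial ACK: some but not all outstanding data is acknowledged,
    # so the next segment in sequence must also have been lost.
    # Retransmit it and stay in Fast Recovery -- no timeout needed.
    next_missing = min(s for s in unacked if s >= ack_seq)
    return ("retransmit", next_missing)
```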
• Guess what? Losses do not always happen in bursts – sometimes things get a little crazy
– How about telling the sender what has arrived (!)
• Selective Acknowledgements (SACK)
– Same congestion control mechanisms as TCP Reno
– Uses TCP options fields
– When out of order data arrives, tell sender (via SACK block) which block of data has been received
• Enables sender to maintain image of receiver’s queue
– Sender then resends all missing segments without waiting to timeout
• Doesn’t send beyond CWND
– When no old data needs to be resent, then send new data
– Timeouts are still used…
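The "image of the receiver's queue" above can be sketched as a hole-finding computation over the SACK blocks. This is an illustrative sketch only; the names (`missing_blocks`, `snd_una`, `snd_nxt`) are assumptions, and real stacks track this incrementally in a scoreboard rather than recomputing it.

```python
# Hypothetical sketch: derive the missing byte ranges from SACK blocks.

def missing_blocks(snd_una, snd_nxt, sacked):
    """Return the byte ranges the receiver has NOT reported.

    snd_una..snd_nxt is the outstanding window; sacked is a list of
    (start, end) byte ranges the receiver reported via SACK blocks.
    """
    holes = []
    cursor = snd_una
    for start, end in sorted(sacked):
        if cursor < start:
            holes.append((cursor, start))  # gap before this SACK block
        cursor = max(cursor, end)
    if cursor < snd_nxt:
        # Tail beyond the highest SACK block: not yet reported
        # (it may simply still be in flight).
        holes.append((cursor, snd_nxt))
    return holes
```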
• TCP’s strategy
– control congestion once it happens
– repeatedly increase load in an effort to find the point at which congestion occurs, and then back off
• Alternative strategy
– predict when congestion is about to happen
– reduce rate before packets start being discarded
– call this congestion avoidance, instead of congestion control
• Two possibilities
– host-centric: TCP Vegas
– router-centric: RED Gateways
• Idea: source watches for some sign that the router’s queue is building up and congestion will soon happen; e.g.,
– RTT grows
– sending rate flattens
[Figure: three plots vs. time (seconds) for one connection — congestion window, avg. source send rate, and buffer space at the router. In the shaded region we expect throughput to increase, but it cannot increase beyond the available bandwidth.]
• Vegas tries not to send at a rate that causes router buffers to fill
• Let BaseRTT be the minimum of all measured RTTs
• If the connection is not overflowing the path, then
ExpectedRate = CongestionWindow/BaseRTT
(Assume CWND = number of bytes in transit)
• Source calculates its sending rate (ActualRate) once per RTT
– Pick one packet per RTT, timestamp its send and ACK-receive times, and divide the number of bytes in transit by that sample RTT
• Source compares ActualRate with ExpectedRate
– Diff = ExpectedRate – ActualRate
– if Diff < α
• increase CongestionWindow linearly
– else if Diff > β
• decrease CongestionWindow linearly
– else
• leave CongestionWindow unchanged
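The per-RTT comparison above can be sketched as follows. This is an illustrative sketch, assuming windows are measured in segments; the function and variable names are invented, and real Vegas implementations differ in detail.

```python
# Hypothetical sketch of TCP Vegas's once-per-RTT window adjustment.

ALPHA = 1  # lower threshold, in segments queued inside the network
BETA = 3   # upper threshold

def vegas_adjust(cwnd, base_rtt, sample_rtt, mss=1):
    """Return the new congestion window after one RTT's measurement.

    cwnd and mss are in segments; base_rtt is the minimum RTT observed,
    sample_rtt is this RTT's timestamped measurement.
    """
    expected_rate = cwnd / base_rtt   # rate if no queuing at the bottleneck
    actual_rate = cwnd / sample_rtt   # measured sending rate this RTT
    # Diff, re-expressed as segments queued inside the network:
    diff = (expected_rate - actual_rate) * base_rtt
    if diff < ALPHA:
        return cwnd + mss   # too little data queued: increase linearly
    elif diff > BETA:
        return cwnd - mss   # queue building up: decrease linearly
    return cwnd             # between the thresholds: leave unchanged
```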
• Parameters
– α = 1 buffer
– β = 3 buffers
[Figure: rates and congestion window vs. time (seconds). Black line = actual rate, green line = expected rate; shaded = region between α and β.]
• Note: the linear decrease in Vegas does not violate AIMD, since it happens before packet loss
• Congestion notification is implicit
– dropped packets indicate congestion
• Could make explicit by marking the packet (ECN)
– Current standard proposal for explicit notification (old method: DECbit)
– Mark packet as queue gets full, reduce sending rate when marks are seen
• Early random drop to force sources to back off
– rather than wait for queue to become full, drop each arriving packet with some drop probability whenever the queue length exceeds some drop level
• Compute average queue length
AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
0 < Weight < 1 (typically 0.002)
SampleLen is the instantaneous queue length, sampled each time a packet arrives
(Exactly the same as the EWMA calculation for RTT)
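The EWMA update above is a one-liner; a minimal sketch (function name invented for illustration):

```python
# Hypothetical sketch of RED's average queue length update (EWMA).

WEIGHT = 0.002  # typical RED weight, per the slide

def update_avg_len(avg_len, sample_len, weight=WEIGHT):
    """One EWMA step: blend the old average with the new sample.

    With a small weight, AvgLen reacts slowly, smoothing out
    transient bursts in the instantaneous queue length.
    """
    return (1 - weight) * avg_len + weight * sample_len
```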
[Figure: FIFO queue with AvgLen marked against the MinThreshold and MaxThreshold levels.]
• Two queue length thresholds:
if AvgLen <= MinThreshold then
enqueue the packet
if MinThreshold < AvgLen < MaxThreshold then
calculate probability P
drop arriving packet with probability P
if MaxThreshold <= AvgLen then
drop arriving packet
• Computing probability P
TempP = MaxP * (AvgLen - MinThreshold)/(MaxThreshold - MinThreshold)
P = TempP/(1 - count * TempP)
count = number of newly arriving packets enqueued while AvgLen has stayed between the two thresholds (P increases with count)
• Drop probability is a function of both AvgLen and how long it has been since the last drop
– TempP grows linearly as AvgLen moves from MinThreshold toward MaxThreshold
– count is the number of packets enqueued since the last drop
– Scaling by count spreads drops out evenly and prevents clusters of drops
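Putting the thresholds and the count-scaled probability together, the per-packet decision can be sketched as follows. This is an illustrative sketch; the names and the exact count-reset convention are assumptions, and real RED implementations differ in such details.

```python
import random

# Hypothetical sketch of RED's per-packet drop decision.

def red_decision(avg_len, min_th, max_th, max_p, count, rand=random.random):
    """Return (drop, new_count) for one arriving packet.

    count: packets enqueued since the last drop while AvgLen has been
    between the two thresholds.
    """
    if avg_len <= min_th:
        return (False, 0)   # queue short: always enqueue
    if avg_len >= max_th:
        return (True, 0)    # queue long: always drop
    # Between the thresholds: drop with probability P, which grows both
    # with AvgLen (via TempP) and with time since the last drop (via count).
    temp_p = max_p * (avg_len - min_th) / (max_th - min_th)
    p = temp_p / (1 - count * temp_p)
    if rand() < p:
        return (True, 0)
    return (False, count + 1)
```

Passing `rand` as a parameter makes the random choice easy to pin down when experimenting with the algorithm.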
• Thresholds are hard to determine
• Drop probability curve: P(drop) is 0 below MinThresh, rises linearly from 0 to MaxP as AvgLen goes from MinThresh to MaxThresh, and jumps to 1.0 above MaxThresh
• RED is good at keeping avg. queue size steady
• There are new protocols proposed all the time
• Datagram Congestion Control Protocol (DCCP): a congestion-controlled, unreliable transport protocol
• Application can specify one of two congestion control mechanisms
– AIMD-like or equation-based
• A minimal protocol for apps that use UDP today