CS514: Intermediate Course in Operating Systems Lecture 17 Oct. 24


CS514: Intermediate Course in Operating Systems

Professor Ken Birman

Ben Atkin: TA

Lecture 17 Oct. 24

Internet Quality of Service

• The term quality of service, or QoS, is used to talk about properties associated with communication links

– Telephone connections (virtual circuits) are slow but guarantee

• Low latency

• Steady 56 Kbps throughput

• Low jitter (variability in latency)

• Relatively good isolation and noise properties

– But Internet lacks QoS guarantees

Why is Internet QoS hard?

• How does it work now?

– Recall that Internet itself is based on packet model

– But routers do have a forwarding policy

• Currently, “weighted fair queuing”

• Also, depends on routing policy

– A reasonable goal is to measure behavior of the network

Why is Internet QoS so hard?


• Life of a router: packets show up, are stored, then forwarded

• Problem: how does router impact dynamics of a “flow”?

Routers and flow properties

• Suppose that process A using connection A-B sends 50 8KB messages per second

• And process C on connection C-D sends 25 per second

• Would we expect that B sees 50 per second, and D sees 25 per second?

Weighted Fair Queuing

• Implemented by most routers

• Treats each (source, dest) IP pair as a “flow”

– Ignores port numbers

• Normally, the router forwards what it receives

• But congested router gives equal share of resources to each flow, no matter what load it presents

• Idea is to protect against flow that hogs resources

For our example?

• Router sees

– 50 msgs/sec from flow “x”

– 25 msgs/sec from flow “y”

• And has capacity to send 50 messages per second right now

• If congested…

– Each gets an equal share

– Hence “y” sees no loss, but “x” might see 50% loss rates

• Actually, with RED, “y” would lose some, too
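A minimal sketch of the equal-share computation described above (illustrative Python, not real router code; the flow names, rates, and capacity are just the ones from this example):

```python
# Max-min fair sharing: split capacity equally among flows, redistributing
# any share a flow does not need. Purely illustrative, not router code.
def fair_shares(offered, capacity):
    shares = {f: 0.0 for f in offered}
    unmet = dict(offered)                 # demand still unmet per flow
    while capacity > 1e-9 and unmet:
        equal = capacity / len(unmet)
        for f in list(unmet):
            grant = min(equal, unmet[f])
            shares[f] += grant
            capacity -= grant
            unmet[f] -= grant
            if unmet[f] <= 1e-9:
                del unmet[f]
    return shares

print(fair_shares({"x": 50, "y": 25}, capacity=50))
# {'x': 25.0, 'y': 25.0}: flow y loses nothing, flow x sees 50% loss
```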

What about real time issues?

• Life of a router is to

– Copy incoming messages from input links into storage

– Copy outgoing messages from storage to outgoing links

– Drop packets (RED) if overloaded

• Router is largely indifferent to packet “dynamics”

Life of a router

• Router could receive 50 msgs/sec from A

– But perhaps they sit on a queue because the link these must follow is busy

– So 10 or 15 pile up

• Finally router gets a chance to send packets on this busy link

– Now the router sends 10 or 15 as a burst

– Effective rate was zero for a while, then perhaps a few hundred per second
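A toy illustration of this queue-then-burst behavior (the tick length, arrival rate, and busy period are made-up numbers, not measurements):

```python
# One packet arrives every 20 ms (50/sec); the outgoing link is assumed busy
# for the first 300 ms, so packets pile up and then leave as a single burst.
queue = []
for tick in range(25):                    # 25 ticks = 500 ms of simulated time
    queue.append(f"pkt-{tick}")           # steady arrivals, one per tick
    link_busy = tick < 15                 # link unavailable for first 300 ms
    if not link_busy and queue:
        burst, queue = queue, []          # drain everything queued so far
        print(f"t={tick * 20:3d} ms: sent {len(burst)} packet(s)")
```

The first send, at t=300 ms, pushes out about 16 packets at once; after that the output settles back to one packet per tick.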

Can a router preserve packet flow dynamics?

• A very hard open problem

• The answer is probably

– Yes with infinite resources

– No with finite resources

• But in any case, modern routers don’t actually try to do so!

The Internet is a QoS randomizer!

• Whatever properties the input flow may have had…

• The Internet probably mixes things up in ways that can disrupt those properties

• The more hops taken by a packet through the network, the more chance for such disruption to occur

Behavior of the Internet

• Studies published mostly in SIGCOMM and INFOCOM, the top networking conferences

• People seek to

– Accurately understand traffic patterns and QoS of the network

– Develop a “model” that describes what they observe

No luck!

• Studies have repeatedly found:

– That the Internet is pretty chaotic

– Routing is surprisingly unstable

– Random periods of high loss rate

– Latencies vary wildly

– Most distributions are “heavy tailed”

• Idea is to graph percentage of messages having each latency value or loss rate

• Ideally, want a nice clean graph

• But in practice get graphs with very long tails

What does this tell us?

• We can sample, for example, the round-trip time between A and B

– But we can’t assume it will be steady

– And even if we average many samples the result may not be very meaningful

• Heavy-tailed distributions may have enormous or even infinite variance

– E.g., “2 ms +/- 2,500 ms”

• Makes it hard to even write down the properties of an Internet connection!
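A small illustration of that point (the latencies below are drawn from an assumed Pareto distribution, purely to show the effect; nothing here is measured Internet data):

```python
# Heavy-tailed "round-trip times": Pareto with shape 1.5, so variance is
# infinite. The running mean keeps lurching as rare huge samples arrive.
import random
from statistics import mean

random.seed(1)
rtts = [2.0 * random.paretovariate(1.5) for _ in range(10_000)]   # "2 ms" scale

for n in (10, 100, 1_000, 10_000):
    print(f"mean of first {n:>6} samples: {mean(rtts[:n]):8.2f} ms,"
          f" max so far: {max(rtts[:n]):10.1f} ms")
```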

Other options?

• Email and other TCP applications don’t really care

– They adapt rapidly to conditions

– No real effort to track or model the distributions associated with various Internet properties

• In this approach, Internet lacks guarantees and is proud of it!

Alternatives?

• Much talk about how to build a better Internet

• Current IP protocols are based on IPv4

• Proposed IPv6 would

– Extend address lengths to 128 bits

– Add security to DNS, routing, IGMP

– Provide user-level QoS features

Addressing

• Issue is that we are running out of IPv4 addresses

– The field is only 32 bits long

– And a big chunk is reserved for multicast addresses

• Despite this, Internet multicast is generally not available

• Problems with load and with charging for costs led ISPs to disable the feature

– What can we do?

Addressing

• IP leasing is part of the answer

– Idea is that machines can

• Share a small pool of IP addresses, allocating on demand

• Even share a single IP address, like when you connect a home wireless network to roadrunner

– Trick is to remap on the fly

– Gets you pretty far but not far enough

• We’ll exhaust the address pool “soon”
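A minimal sketch of the address-sharing trick (a toy translation table, not a real NAT implementation; the addresses are placeholders):

```python
# Many internal hosts share one public address: each internal (host, port)
# pair is remapped on the fly to its own port on the shared address, and the
# table lets replies be mapped back. Addresses here are placeholders.
PUBLIC_IP = "203.0.113.7"

nat_table = {}                  # (internal_ip, internal_port) -> public_port
next_port = 40000

def outbound(internal_ip, internal_port):
    global next_port
    key = (internal_ip, internal_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

print(outbound("192.168.1.10", 5000))    # ('203.0.113.7', 40000)
print(outbound("192.168.1.11", 5000))    # ('203.0.113.7', 40001)
```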

Security

• Issue here is that Internet is too easy to attack

• Any machine can claim to be a router

– Then its DNS and DHCP packets are trusted

– In effect, any machine can take control of Internet routing and naming!

• IPSec secures IP w/ cryptography

IPSec

• Idea is that a public key hierarchy is used to obtain triple DES keys for use by IP

• Lets us secure IP packets with signatures (“HMAC”) or encryption

• DNS and the routing protocols use this to secure themselves
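A tiny sketch of the signature side of this (a standard-library HMAC over a packet body; the key and packet layout are placeholders, not the actual IPSec AH/ESP formats or key exchange):

```python
# Authenticate a packet body with an HMAC, the kind of integrity check IPSec
# provides. In real IPSec the key comes from a key-exchange protocol; here it
# is just a placeholder constant.
import hmac, hashlib

key = b"session-key-from-key-exchange"      # placeholder only
payload = b"IP packet body goes here"

tag = hmac.new(key, payload, hashlib.sha256).digest()

# The receiver recomputes the tag and compares it in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest())
print(ok)    # True if the payload was not tampered with
```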

Before and After

• Without IPSec

– Hackers can easily “clear a route” for dedicated use by their applications and games

– Impossible to track problems to origin!

• With IPSec

– Only authorized routers, listed in central administrative tables, can originate such packets

– And if a router cheats we can easily detect the source of the problem

Also in the pipeline

• More and more ISPs are checking that the return address makes sense

• Right now a packet can list any IP address you like as the source or return address

• Effect is to hide true sender
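A sketch of what such a check amounts to (often called ingress filtering; the prefix and addresses below are invented for illustration):

```python
# A packet arriving on a customer link is dropped if its claimed source
# address is not inside the prefix assigned to that link. Illustrative only.
from ipaddress import ip_address, ip_network

customer_prefix = ip_network("198.51.100.0/24")    # assumed assignment

def accept(source_ip: str) -> bool:
    return ip_address(source_ip) in customer_prefix

print(accept("198.51.100.42"))   # True: a plausible return address
print(accept("10.0.0.1"))        # False: the sender is hiding its identity
```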

More issues

• What about network fault-tolerance?

– We’ve focused on application fault-tolerance

– If a network link or router fails, routing adapts

– But it adapts slowly

• Users see fairly flaky behavior!

Network fault-tolerance


• In principle, there is a second route

• But starting in 1980, the Internet rerouting mechanism was recognized as too expensive

• So routing is done less often now

– Result is that it can take a long time for routes to adjust after a problem

– A tradeoff: TCP backoff and RED work well enough, so we accept slow route updates as a kind of tax for the desired outcome

Network fault-tolerance

• Dual IP address concept

– Suppose one computer lives on two networks

– In this case the single machine will have two IP addresses, one for each network

– This is needed to make routing work

Dual IP addresses

[Figure: swift.cs.cornell.edu, one host with two IP addresses: 128.84.96.33 and 128.84.77.61]

Idea: Use Dual IP to get network fault-tolerance

• Suppose that we work with computers that have dual IP addresses

• Thus: A,A’ and B,B’

• Will messages from A to B take different routes than those from A’ to B’?

Network fault-tolerance with dual IP addresses

[Figure: dual-homed hosts A/A’ and B/B’ connected across the network]

Idea: Use Dual IP to get network fault-tolerance

• Will messages from A to B take different routes than those from A’ to B’?

– Unfortunately, no

– Right now there is no way to get routing to work in this manner

– Even with redundancy in the network, we can’t exploit it for network fault-tolerance

Network fault-tolerance with dual IP addresses

[Figure: the routes taken from A to B and from A’ to B’ partially coincide]

Network fault-tolerance with dual-IP addresses

• Here, although parts of the two routes are independent, other parts happen to coincide

• Perhaps one link is lightly loaded and hence “popular”

• But consequence is that it becomes a single point of failure for both (A,B) and (A’,B’) communication

Goals for a future network

• If the network is multiply connected there is a way to exploit it

• Routers minimally impact flow dynamics

• Can exploit these to build fault-tolerant flows…

Will IPv6 be deployed?

• So far, we’ve had IPv6 capability in many routers for about 6 years

• No evidence that anyone plans to turn on this feature

– But by most estimates, > 90% of the network will be IPv6 capable soon

– Ability to have a big “turn on IPv6 day” is growing

• Points to lack of centralized control for the Internet… ruled by consensus!

IPv6 Issues

• Presumes a greater degree of centralized administrative control

– Who pays the administrators?

– What if an ISP doesn’t want to disclose its routing information?

• How to deal with pockets that still run IPv4?

IPv6 and QoS

• Many people mistakenly think that IPv6 has QoS guarantees built in

• But this is inaccurate

• In fact, the major proposals for supporting QoS are separate

– RSVP

– Diffserv

• These are only associated with IPv6 as a sort of accident of history… all were proposed at the same time

QoS via RSVP, Diffserv

• Will be our topic on Thursday

• Basically

– Flow pre-specifies its goal

– Network reserves resources

– Goal is that if reservation is accepted, properties will hold

• But as we saw, network can mess up flow QoS properties

• This seems to be a fatal flaw for QoS mechanisms in a store-and-forward packet network
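As a preview of the reservation idea (details Thursday), a minimal admission-control sketch; the link capacity and flow rates are invented, and this shows only the accept/reject bookkeeping, not RSVP or Diffserv themselves:

```python
# A flow pre-specifies the rate it needs; the link accepts the reservation
# only if enough capacity remains. Numbers are illustrative only.
class Link:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = 0

    def reserve(self, flow, rate_kbps):
        if self.reserved + rate_kbps > self.capacity:
            print(f"rejected {flow}: would exceed capacity")
            return False
        self.reserved += rate_kbps
        print(f"accepted {flow} at {rate_kbps} kbps "
              f"({self.capacity - self.reserved} kbps left)")
        return True

link = Link(capacity_kbps=1500)
link.reserve("audio", 64)        # accepted
link.reserve("video", 1200)      # accepted
link.reserve("bulk", 500)        # rejected
```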

Summary?

• IPv6 is coming along but slowly

– Technology widely available

– But ISPs don’t want to be first to enable it

– And interoperation with IPv4 remains a big issue

– Won’t answer fault-tolerance needs

• Probably we’ll get it within a few years, in bits and pieces

Summary?

• But IPv6 has many aspects

– Big IP addresses

– Security standards

– New monitoring capabilities…

• Within this list, QoS mechanisms are being debated

• So IPv6 deployment won’t give us reliable, predictable networks
