The Limits of Traffic Management

There is a quote, “data consumption rates are increasing faster than Moore’s law” – posed by
David Clark as an open idea to think about.
Comcast is working on CDNI – Content Delivery Networks Interconnection
o How can we build an economy around CDNs?
The existence of Akamai proves the need for CDNs
History Lesson
o ~10 years ago, Google said “We really need a CDN”
o Network operators saw this and tried to join in – maybe there is something we can do
to improve network bandwidth
o And then the effort died (at Comcast)
- Due to the recession
- Due to bandwidth cost drops
- Due to the success of Akamai as a global solution
The pipes are getting so large that the routers must grow with them. They are becoming
increasingly expensive, difficult to build, and power hungry.
“Is a peer to peer relationship actually sustainable for content distribution?”
o Peer-to-peer servers are evolving – and evolving in different ways
o The evolution of these distribution systems makes for difficult design choices about
where you actually draw content from
“There is an idea that there are these ‘backbone networks’. It’s not clear that that model is
sustainable as a business model. Being a company that is simply a backbone may be an economic
error (as seen in telecom companies that acted as the backbone for telephony)”
o Much of the debate has been about which end of the link is most valuable – the middle link/backbone, or the last mile?
A major challenge is that many European consumers are consuming content from the US – and
that gets really expensive.
A clear problem is arising for companies: the cost of distribution is growing faster than (and will
soon outpace) the revenue gained from providing distribution
o If usage grows the same as bandwidth, then revenue and cost scale together, but that is
clearly not happening.
o Costs may be rising because the capacity of routers in the middle of the network is
growing more slowly than traffic demand.
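The cost/revenue divergence above can be sketched with a toy model. The growth rates here are illustrative assumptions, not operator data: traffic demand growing ~40%/yr, cost-per-bit falling ~20%/yr, and revenue nearly flat because subscription prices do not track usage.

```python
# Toy model (assumed rates, not real operator figures).
def project(years, traffic_growth=0.40, unit_cost_decline=0.20,
            revenue_growth=0.02):
    """Return (cost, revenue) indices, both starting at 1.0."""
    cost = revenue = 1.0
    for _ in range(years):
        # total cost = bits carried * $/bit
        cost *= (1 + traffic_growth) * (1 - unit_cost_decline)
        revenue *= 1 + revenue_growth
    return cost, revenue

cost, revenue = project(10)
# Under these assumed rates, delivery cost roughly triples over a decade
# (1.12^10 ~ 3.1) while revenue grows only ~22%. Cost outpaces revenue
# whenever (1 + traffic_growth)*(1 - unit_cost_decline) > 1 + revenue_growth.
```

The point of the sketch is the inequality in the last comment: if usage grew only as fast as the cost-per-bit fell, cost and revenue would scale together, which is exactly the condition the notes say is no longer holding.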
We’ve never run out of demand for internet traffic – so we shouldn’t bank on that happening.
Instead, how do we change the way we grow the supply?
o Part of the problem is that ISPs keep offering high bandwidths for competitive reasons,
which pushes people to create applications that consume all of that available
bandwidth.
What is the value of caching, and how do we design a system that takes advantage of the
strengths it offers?
o Is it a one-time benefit that postpones the inevitable? Or does it change the rates at
which demand/supply change?
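One way to frame that question: under the simplest assumptions (hypothetical numbers, constant cache hit rate), caching lowers upstream traffic by a fixed factor but leaves its growth rate untouched – a one-time level shift rather than a rate change.

```python
# Sketch with assumed parameters: demand grows 40%/yr, cache hit rate 60%.
def upstream(year, demand0=1.0, growth=0.40, hit_rate=0.6):
    """Upstream traffic after caching: only the cache misses leave the edge."""
    demand = demand0 * (1 + growth) ** year
    return demand * (1 - hit_rate)

# Year-over-year growth of upstream traffic is still (1 + growth),
# regardless of hit_rate -- the ratio below equals 1.4 up to float rounding.
ratio = upstream(5) / upstream(4)
```

For caching to change the rate rather than just the level, the hit rate itself would have to improve over time – which is one way to read the design question in the notes.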
The (current) traffic problem seems to have been caused by video, so do we carve out a special
exception for video traffic and build a second infrastructure solely for video?
o Most likely not: video may be only a temporary spike in demand, and it is not a
sustainable model to build separate systems for whichever bits are currently in highest demand.
There seems to be a trend of ‘let’s get something up and then later we’ll optimize it’ – which
perhaps implies there is a fundamental problem in the original design, because there is no
guarantee that it was actually optimized once it was established.
Akamai’s killer advantage was the global footprint it was able to produce.





There are no rules to interconnecting in the US – it’s all done through negotiation between
consenting parties.
There is an old fashioned hypothesis that on-net traffic is cheaper than off-net traffic, but that is
not necessarily true depending on the distribution architecture and path.
Cost of bandwidth to consumers tends to be a hard number to settle on
o Most wireless providers have decided that all-you-can-eat wireless broadband is a thing
of the past
Under what circumstances is wireless a reasonable substitute?
o There tends to be a ~5-year delay between the capacity of wireless and fixed-line.
o The difference between running fiber to the cabinet vs. to the home is about a factor of 5:
roughly $1000 to the home, $200 to the cabinet.
o Can we offload the data from the cabinet?
Many of the problems affecting distribution are caused by the fact that the systems tend to be
negligently engineered. It’s a problem of ‘get something out there and optimize later’, except
that the optimization part never happens because of other pressing challenges.