Internet Interrupted: Why Architectural Limitations Will Fracture the 'Net

Table of Contents
1 Executive Summary
2 Volume 1: Introduction
3 Review of Overall Framework
3.1.1 Dynamics of Internet Demand
3.1.2 Dynamics of Internet Supply
3.1.3 Nemertes Internet Model Influence Diagram
3.2 Global Capacity and Demand - Five-Year View
3.2.1 High-Level Analysis
3.3 Review and Updates to Demand Curve
3.3.1 The Potential Impact of Economic Conditions On Demand
3.3.2 Looking at the Supply-Side Equation
3.3.3 Looking at the Demand Side of the Equation
3.3.4 The Rise of the Virtual Worker
3.4 Review and Updates to Supply Curves
3.4.1 Access Lines
3.4.2 Core Switching and Connectivity Layers
3.4.3 Optical Switching Capacity Updates
4 Key Finding: The Crunch is Still Coming
4.1 Assessing Trends and Dynamics in 2008
4.1.1 BBC's iPlayer Debut in 2007
4.1.2 NBC's Olympic Internet Streaming Gold Medal
4.2 Internet Peering and Why it Matters
4.2.1 The Internet Peering Architecture
4.2.2 Welcome to the Carrier Hotel California
4.3 The Mystery of the "Missing" Traffic
4.3.1 Monitoring Traffic by Looking at the Toll Booths
4.3.2 A Closer Look at Public Peering Point Traffic
5 The Great Flattening
5.1.1 Peering Through the Clouds
5.1.2 Why The Flattening?
5.2 The Shifting Internet Hierarchy: From Oligarchy to City-State Fiefdoms
6 Volume 1: Conclusion and Summary
7 Volume 2: Introduction
8 Addressing and Routing: How it All Works
8.1 How the Internet Works: Routing
8.1.1 Routing in the Core
8.1.2 Routing in the Edge
8.2 Border Gateway Protocol
8.2.1 BGP and IPv4
9 IPv4: Why Address Proliferation Matters
9.1 Everything Internet: Everything Wants An Address
9.1.1 The Growth of Machine-to-Machine Communications
9.1.2 Telemetry and the Impact on IP Addressing
9.2 NAT and IPv4
10 Address Assignment and Exhaustion
11 IPv6: Not a Panacea
11.1.1 What IPv6 Doesn't Do
11.1.2 Interoperability Between IPv4 and IPv6
11.1.3 Multihoming and NAT
11.1.4 IPv6 Adoption
11.2 Other Options
11.3 Where Are We Headed?
11.4 Why Does This Matter?
12 Volume 2: Conclusions
13 Bibliography and Sources
13.1 Sources
13.2 Bibliography and Endnotes
Table of Figures
Figure 1: 2007 Nemertes Internet Model Influence Diagram
Figure 2: 2008 Nemertes Internet Model Influence Diagram
Figure 3: 2008 World Capacity and Demand Projections
Figure 4: North America Capacity Versus Demand - 2008
Figure 5: Impact of Delayed Access Line Supply
Figure 6: Impact of Delayed Demand
Figure 7: Percent of Workers Virtual
Figure 8: Total Access Lines - 2007 versus 2008
Figure 9: Total Access Line Bandwidth 2007 Versus 2008
Figure 10: Optical Bandwidth Projections: 2007 versus 2008
Figure 11: PlusNet Internet Traffic 11/07 - 02/08
Figure 12: The Internet as Medusa
Figure 13: BGP State Diagram
Figure 14: Active BGP Entries
Figure 15: Total Internet Connected Devices
Figure 16: IANA /8 Address Block Allocations
Figure 17: IPv4/IPv6 Dual Stack
Table of Tables
Table 1: Access Technology Line Rates
Table 2: 2008 Projected Broadband Lines North America
Table 3: NBC Olympic Coverage Internet Load
Table 4: MINTS Historical Internet Traffic Data
1 Executive Summary
In 2007, Nemertes Research conducted the first-ever study to independently model
Internet and IP infrastructure (which we call "capacity") and current and projected traffic
(which we call "demand") with the goal of evaluating how each changes over time. In
that study, we concluded that if current trends were to continue, demand would outstrip
capacity before 2012. Specifically, access bandwidth limitations would throttle back innovation as users become increasingly frustrated by their inability to run sophisticated applications over primitive access infrastructure. This year, we revisit our original study,
update the data and our model, and extend the study to look beyond physical bandwidth
issues to assess the impact of potential logical constraints. Our conclusion? The situation
is worse than originally thought!
We continue to project that capacity in the core, connectivity, and fiber layers will
outpace all conceivable demand for the near future. However, demand will exceed access
line capacity within the next two to four years. Even factoring in the potential impact of a
global economic recession on both demand (users purchasing fewer Internet-attached
devices and services) and capacity (providers slowing their investment in infrastructure)
shifts the projected crossing point by only about a year (either delaying or accelerating it, depending on which effect is assumed to be greater).
We've also reconciled a puzzling contradiction in last year's analysis, which is that some
of the best-available direct measurements of public Internet traffic indicate a slowdown in
the rate of growth at the same time that many other credible data points indicate an
acceleration in demand. The root cause of the conundrum, we believe, is that traffic is
increasingly being routed off the public Internet onto paid or private "overlay" networks
(for example, Google's recent purchases of undersea fiber, or NBC's use of Limelight
Networks to broadcast the Olympics). The result? An increasing trend towards content
providers who "pay-to-play" investing in technologies to accelerate traffic to their sites
ahead of that on the "regular Internet."
Delivering high-quality service requires that a content provider maintain control over the
delivery mechanism. This pressure to manage service delivery is forcing many service
and content providers to migrate their traffic off the "public" or Tier-1 peered Internet in
favor of dedicated pipes to the ISP aggregation points. We refer to this type of
fragmentation as the flattening of the Internet.
Additionally, as demand growth accelerates we are quickly depleting logical addresses
that identify destinations on the Internet. And, IPv6, largely touted as the answer, doesn't
appear to be poised to fill the gap. Not only is deployment woefully lagging (for reasons
we detail at length), but the protocol itself has inherent limits, particularly when it comes
to multihoming and mobility. Other potential solutions exist, but are so far primarily
experimental.
In this study, we assess the current challenges facing the Internet on two fronts. In the
volume on physical infrastructure, we examine the impact of recent major demands for
capacity on Internet performance. We also assess early indications of "flattening" of the
Internet. In the volume on logical infrastructure, we look at the impact of address exhaust
and the factors driving it. We look at alternatives being proposed and show why at least
one of them, IPv6, is probably too little too late.
Finally, we conclude with a few notions of the likely direction Internet evolution will
take as we approach 2012. The bottom line: The Internet continues to be bedeviled by
infrastructure issues that, if left untreated, will dramatically curtail application innovation
in the coming years.
2 Volume 1: Introduction
Last year, Nemertes issued a landmark research study that was - and still is - the only
study to independently model Internet and IP infrastructure (which we call "capacity")
and current and projected traffic (which we call "demand") with the goal of evaluating
how each changes over time, and determining if there will ever be a point at which
demand exceeds supply.
To assess infrastructure capacity, we reviewed details of carrier expenditures and vendor
revenues, and compared these against publicly available market research. To compute
demand, we took a unique approach. Instead of modeling user behavior based on
measuring the application portfolios users had deployed, and projecting deployment of
applications in the future, we looked directly at how user consumption of available
bandwidth has changed over time. The key component of this research is that it
independently modeled capacity and demand, which allowed us to decouple the impact of
each on the other. We found that if current trends continue, demand will outstrip capacity
before 2012 because of access limitations, and the Internet will show signs of stress as
early as 2010. One of the goals in this year's research project is to revisit these projections
and determine if anything had changed. As discussed, we find that very little has
changed, and our fundamental assertion that Internet demand exceeds Internet supply at
the access layer of the network still stands and still occurs within the coming years.
Further, this year we examined two events that occurred since last year's report: NBC's
coverage of the Olympics and the release of BBC's iPlayer. Both events are directly
relevant to our analysis because they are indicative of broader issues with the physical Internet
infrastructure: the flattening of the Internet and the corresponding migration of Internet
traffic from public to private.
3 Review of Overall Framework
3.1.1 Dynamics of Internet Demand
A challenge of projecting Internet demand is the unknown, and difficult-to-estimate,
effects of Internet externalities. In classic economic terms, a network externality is the
difference between private costs or benefits and social costs or benefits. Usually used in
relation to phone networks, the total value of the network increases at a much higher rate
than the actual cost of adding users: Adding Aunt Sally to the network brings benefit to
Sally and everyone else in the family that is already on the net, even though only Sally
bore the cost of joining.
So many Internet applications, such as YouTube, Facebook and MySpace, depend upon these network externalities for their success. But there are really two issues at play: access and performance. It's not enough to have only access to the Internet. We also must take into account the Internet experience itself. Anyone who has downloaded high-definition YouTube videos via cable modem at 4:30 p.m. when kids are home from school can attest to the direct effect of performance on user experience.
3.1.2 Dynamics of Internet Supply
On the supply-side of the equation, there are differences in the economics, physics, and
performance characteristics depending on the particular area of network infrastructure.
The core (backbone) and connectivity (metropolitan region) parts of the network seem to
be scaling well to meet demand. The reasons for this are primarily that core and
connectivity routing/switching technology is already in fixed plant, and adding new
functionality and performance is related to upgrading hardware, optics and lighting new
fiber strands.
In contrast, the edge network (the last mile) does not benefit from these same conditions.
If there is copper or cable plant in the ground, electronics can be upgraded but limitations
related to the physics of moving electrons through copper limit the performance
upgrades. If there is no plant in the ground, upgrades require extensive capital and
operational costs that dramatically change the economics of providing Internet access.
3.1.3 Nemertes Internet Model Influence Diagram
Last year, the first step to modeling supply and demand was to build an influence model
that directed our data gathering. Since publishing the report last year, we received
numerous requests from clients to perform "what-if" analysis, using the Nemertes
Internet Model. For example: "What if demand for 4G wireless devices doubled, what
would be the effect on the supply/demand equation?" To better support these requests,
this year we have realigned and refocused the model to streamline ongoing modeling of
Internet supply and demand. We also have modified the model to support the potential
linkage of the physical Internet issues discussed in this volume and the logical Internet
issues discussed in volume 2.0 of this report. (Please see Figure 2: 2008 Nemertes
Internet Model Influence Diagram.)
We realigned the 2008 Nemertes Internet Model into four functional layers: input,
analysis, summary and output. All of these components were in the original model, but
the functional segmentation enhances our ability to do more granular "what-if" analysis.
Like last year, global users (input layer) drive the demand side of the model. We grouped
them geographically: Europe, North America, Latin America, Asia Pacific and Africa
Middle East (summary layer). Each of these groups has different levels of access to a
series of Internet-attached devices, each of which runs a range of applications: PC,
Internet-enabled mobile device, game console, and IPTV. The degree to which a device
can generate load is proportional to the amount of time the user desires to use the device,
as well as the degree to which the device in question is physically capable of pushing
data. Each of these devices, then, drives data through the five geographic areas and
ultimately generates a combined load felt by the Internet as a whole (world Internet
summary). We are then able to break down the demand into specific geographic regions.
Since our clients are located primarily in North America, the focus of this report is on
U.S.-North America demand.
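To make the structure of the demand side concrete, the sketch below rolls load up from users, devices, and usage hours into regional and global totals, in the spirit of the model just described. All device categories, counts, usage hours, and throughputs here are illustrative placeholders, not figures from the Nemertes model.

# Illustrative sketch of the demand roll-up described above: users in each region own a
# mix of Internet-attached devices; each device contributes load in proportion to the
# hours it is used and the rate at which it can push data. All values are placeholders.

DEVICE_PROFILES = {
    # device -> (assumed hours of use per day, assumed sustained throughput in Mbps)
    "pc":            (1.5, 0.5),
    "mobile_device": (0.5, 0.1),
    "game_console":  (0.5, 1.0),
    "iptv":          (1.0, 2.0),
}

REGIONS = {
    # region -> (assumed Internet users, assumed devices per user by category)
    "North America": (250e6, {"pc": 0.9, "mobile_device": 0.7, "game_console": 0.3, "iptv": 0.2}),
    "Europe":        (350e6, {"pc": 0.8, "mobile_device": 0.8, "game_console": 0.2, "iptv": 0.1}),
}

def regional_demand_pb_per_month(users, devices_per_user):
    """Monthly demand for one region, in petabytes (30-day month)."""
    bits = 0.0
    for device, per_user in devices_per_user.items():
        hours_per_day, mbps = DEVICE_PROFILES[device]
        bits += users * per_user * hours_per_day * 3600 * 30 * mbps * 1e6
    return bits / 8 / 1e15

for name, (users, devices) in REGIONS.items():
    print(f"{name}: {regional_demand_pb_per_month(users, devices):,.0f} PB/month")
print(f"All modeled regions: {sum(regional_demand_pb_per_month(u, d) for u, d in REGIONS.values()):,.0f} PB/month")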
Since release of our last report, vendors and organizations have published several studies
on broadband in North America.
These include:
• Educause: "Blueprint for Big Broadband"
• ITIF: "Explaining International Broadband Leadership"
• Cisco: Visual Networking Index, "Approaching the Zettabyte Era"
• Akamai: "State of the Internet" - Q4/07, Q1/08
Unfortunately, as we found last year, most Internet modeling ignores the supply side
entirely, or simplifies it considerably, not without reason. True capacity is defined as the
maximum throughput measured over some period. This is a complex undertaking,
considering Internet throughput is characterized by billions of nodes with billions of
potential paths from one node to another node. What's more, because the Internet uses so
many routing protocols, it's nearly impossible to determine what path is being used.
Virtually all the available research literature that attempts to model such a problem is
concerned with deriving an algorithm that models Internet routing, rather than one that
works in a practical sense for sizing the Internet. As a consequence, in every case that we
examined, the algorithm was far too complex to solve. Instead, we continue to opt for a
more simplistic approach that fundamentally treats the Internet as a series of containers
for holding and moving bits. This approach lets us count the devices that generate and
move bits, multiply by their estimated throughput (device capacity), and add all the
capacities to determine the total capacity of the Internet.
This simplistic approach is not ideal to model exact bandwidth capacities, though it does
provide a framework to compare bandwidth capacity among different connectivity layers:
core, connectivity and access. Ideally, service providers someday will start sharing their
actual implemented and projected bandwidth. Until this time, we believe that our
approach is sufficient to assess Internet supply as an independent function of Internet
demand.
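A minimal sketch of the "containers of bits" approach described above: count the devices that move bits at each layer, multiply by an estimated per-device throughput, and sum. The device counts and throughputs below are placeholders, not values from the Nemertes Internet Capacity Model; the conversion to petabytes per month assumes a 30-day month of sustained utilization.

# Sketch of the container-style capacity estimate: per layer, capacity is the number of
# bit-moving devices times an assumed per-device throughput. Placeholder values only.

LAYERS = {
    # layer -> list of (device type, assumed count, assumed throughput in Gbps)
    "access":       [("dsl_line",          30e6,  0.000768),
                     ("cable_modem",       35e6,  0.004),
                     ("fttp_line",         1.5e6, 0.015)],
    "connectivity": [("metro_edge_router", 50e3,  40.0)],
    "core":         [("core_router",       5e3,   640.0)],
}

def layer_capacity_pb_per_month(devices):
    """Aggregate capacity of one layer, expressed as petabytes per 30-day month."""
    gbps = sum(count * rate_gbps for _name, count, rate_gbps in devices)
    return gbps * 1e9 * 30 * 24 * 3600 / 8 / 1e15

for layer, devices in LAYERS.items():
    print(f"{layer:12s}: {layer_capacity_pb_per_month(devices):>12,.0f} PB/month")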
3.2 Global Capacity and Demand - Five-Year View
In this year's study, we keep the same five-year time horizon for our analysis; shifting
from 2007-2012 to 2008-2013. Keeping the time horizon constant is important for the
following reasons:
1. The accuracy and accountability of the projections are inversely related to the
length of the time horizon. Five years seems to be an accepted balance between
reasonability and believability. It's a long enough timeframe to account for the
fact that Internet infrastructure is a long-term investment with build-outs that take
years for planning, capitalization, implementation and activation, yet a short
enough timeframe to be considered plausible and not based on crystal-ball
analysis.
2. As discussed in volume 2.0 of this report, there also are looming logical Internet
issues. These issues appear within the same five-year time horizon as the physical
issues discussed initially in last year's report and updated in this volume of this
year's report.
3.2.1 High-Level Analysis
The big question: What has changed in a year? Fundamentally, nothing has occurred in
the past 12 months that leads us to alter our near-term or long-term projections for
Internet supply and demand. Of course, the total impact of the Fall 2008 financial crisis is
still to be determined, which we will discuss in the following section.
We still project that capacity in the core, connectivity, and fiber layers outpaces demand, and that demand crosses access-line capacity within the next five years. (Please refer to Figure 3: 2008 World Capacity and Demand Projections.) Globally, we still project this intersection to occur by 2012. In North America, things are looking slightly worse, as the intersection between capacity and demand has shifted from 2011/2012 to 2011, based on updated data on North American broadband capacity. (Please see Figure 4: North America Capacity Versus Demand - 2008.)
3.3 Review and Updates to Demand Curve
3.3.1 The Potential Impact of Economic Conditions On Demand
Given the Fall 2008 market turmoil, one must wonder how a global recession may affect
our projections of supply and demand. Though we are not economists or fortunetellers,
we are able to use the Nemertes Internet Model to postulate how dramatic shifts in the
North American economy might affect projected supply or demand.
Clearly, a recession affects both the supply and the demand sides of the equation. On the
supply side, higher interest rates, restricted access to capital and reduced demand can hurt
a service provider's ability to expand access-line capacity, thus accelerating the timing of
a potential bandwidth crunch. On the demand side, users could elect to put off purchasing
the next-generation cell phone, game console, or Wi-Fi router - thus pushing the
anticipated bandwidth crunch out for some period of time.
3.3.2 Looking at the Supply-Side Equation
The greatest potential effect of these economic issues on the supply side of the equation is a service provider reducing or delaying new broadband-access deployment. As discussed in last year's
report, the last mile of access is the most expensive part of the broadband supply chain,
partly because of volume and partly because Moore's Law does not apply to digging
trenches and running fiber loops.
The fundamental question: How will a credit crunch affect service providers' ability to
invest the capital necessary for broadband expansion?
In a recent report from UBS Investment Research, it's clear that a credit crunch puts
pressure on telecom companies. However, assuming the credit restrictions are not long
term, it appears that the majority of U.S. broadband access providers are financially
solvent to weather the economic storm. As UBS states:
"In general, the investment-grade companies should not have an issue. However,
Verizon and Time Warner Cable have to finance transactions that could put
incremental pressure on EPS (see company sections). Verizon in particular has to
issue $32B in debt between now and the end of 2009 ($22B in new debt from the
Alltel deal and $10B in refinancing), likely on more-expensive terms... The
RLECs do not have significant maturities due before the end of 2010."i
The only area of caution UBS raised related to broadband access providers is with Qwest,
which has longer-term financial issues that could be negatively affected by capital
constriction. As UBS states:
"Qwest's problems are longer-term in nature and stem from the expiration of its
NOL's by 2012, coupled with rising maturities in 2010 and beyond, which may
force it to take a more conservative approach to buybacks."ii
Of course, this does not mean service providers will or will not pull back on broadband-access expansion. It just indicates that a restriction in credit markets does not
automatically negatively affect broadband-access expansion.
Access line capacity is the component of supply that is crossed by demand. As an
exercise, we model service providers delaying North American access line capacity one
and two years. (Please see Figure 5: Impact of Delayed Access Line Supply.)
If North American access-line growth is delayed one year, demand crosses access-line
supply in about late 2010, rather than 2011. A two-year delay in supply shifts the
crossing point to about mid-2010. The effect isn't greater because of inherent dynamics of
access-line capacity versus demand: Demand is growing at a much faster rate.
3.3.3 Looking at the Demand Side of the Equation
Predicting how the Fall 2008 economic challenges may affect demand is extremely
difficult because there are so many factors affecting demand, many of which have
nothing to do with technology. However, given that we have a unique model for
predicting demand, we hypothesize that a recession would have the effect of delaying
demand as opposed to eliminating it. After all, people's desire for more bandwidth based
on more applications available on the Internet and the continual increase in means to
access these applications drives our demand model. This desire does not go away; it is
just delayed. Given this, we project what delayed demand may look like. (Please see
Figure 6: Impact of Delayed Demand.)
To project delayed demand, we shifted demand to the right one and two years to generate
the curves. This projection assumes that demand expansion is delayed but current
demand levels do not drop. Delaying demand means demand stays constant for the
adjustment period, thus the flattening of the demand curves in 2009 and 2010. The
difference between delayed demand and reduced demand, for example, is the difference
between a user delaying the upgrade of the PC from one with a 100-Mbps Ethernet port
to one with a 1-Gbps port for a year, versus dropping high-speed Internet and going back
to the modem in the old PC for dial-up service.
There is not enough information available at the time of writing to even attempt modeling
the potential of Internet demand reductions (or increases) until more is known about the
real-world impact of near-term and long-term economic conditions. Yet, our projections
show that a one-year delay in demand expansion shifts the point where demand crosses
access line capacity in North America to the later part of 2011. A two-year shift in
demand pushes the demand curve out to about 2012. Because of the fundamental nature of the curves, even a dramatic shift in demand that results in a two-year delay in demand growth moves the point where demand crosses supply only about a year later than current projections.
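The mechanics of these delay exercises can be illustrated with a toy version: build a demand curve and an access-supply curve, optionally hold one of them flat for a year or two, and report the first year in which demand exceeds access capacity. The starting values and growth rates below are arbitrary placeholders; the actual shifts depend entirely on the shapes of the real Nemertes curves, so the output shows only the mechanics, not the report's figures.

# Toy illustration of the delay exercises: hold a curve flat for a delay period, then
# let it resume growth, and find the first year demand exceeds access capacity.
# Growth rates and starting values are arbitrary placeholders.

YEARS = range(2008, 2016)

def curve(start, annual_growth, delay_years=0):
    """Exponential curve, held flat for `delay_years` before growth resumes."""
    return {y: start * annual_growth ** max(0, y - 2008 - delay_years) for y in YEARS}

def crossing_year(demand, supply):
    return next((y for y in YEARS if demand[y] > supply[y]), None)

supply = curve(start=400, annual_growth=1.25)                 # assumed access-line capacity
for delay in (0, 1, 2):
    demand = curve(start=160, annual_growth=1.9, delay_years=delay)   # assumed demand
    print(f"demand delayed {delay} year(s): curves cross in {crossing_year(demand, supply)}")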
3.3.4 The Rise of the Virtual Worker
So what are the factors that increase demand? Last year, we referenced virtual workers,
defined as employees who are not physically located near their colleagues or supervisors.
Virtual workers drive increased demand because they typically are located remotely from
corporate resources, such as servers and applications. They expect seamless
communications, regardless of where they conduct business. And they often require more
advanced communication and collaboration tools than those who work at headquarters.
Organizations use technologies such as videoconferencing and Web conferencing to cut
down on travel and keep these virtual workers connected to the rest of their teams,
according to Nemertes' Unified Communications and Collaboration research report.
As organizations increase the number of branch offices and become more flexible about
where employees conduct business, a larger percentage of the total workforce will
become virtual. The issue of virtual-workforce expansion directly ties to Internet demand.
In our current model, enterprise Internet demand comes from two sources: the number of
PCs at work and estimated bandwidth capacity required from various-sized corporations.
We do not account for the potential impact of more workers shifting their demand from
the corporate LAN to the home/hotel WiFi access point. Essentially, what this shift to the
virtual worker does is push demand closer to the edges of the network; exactly the point
of greatest contention.
The movement of workers from corporate inhabitants to virtual workers is substantial. In
the aforementioned benchmark, 89% of companies identify their workplace as "virtual".
Among them, 29.8% of their employees are virtual, on average. This number is up
slightly from last year's 27%. Slightly more than half (55%) of organizations classify
25% or less of their total workforce as virtual workers. (Please see Figure 7: Percent of
Workers Virtual.) On the other end of the spectrum, 10% of organizations have
between 76% and 100% of their employees working in a virtual environment. The
majority of those are large organizations with revenues in excess of $1 million.
We can point to a few reasons for this increase. More companies (23%) are implementing
green policies. To cut down on energy consumption, they let employees work from home
either full- or part-time.
Also, as mentioned previously, offering employees the opportunity to work from home or
closer to home helps to attract and retain high-value employees.
Finally, large companies recruit heavily from colleges and universities. Today's college
graduates are accustomed to working virtually. They typically have had experience with
projects that involve working remotely from professors and their peers. The possibility of
a virtual work environment is not an issue for them; indeed, it is a perk they expect to receive.
3.4 Review and Updates to Supply Curves
3.4.1 Access Lines
In our report last year, we calculated access lines based on the following components:
• Cable Modem
• DSL
• Wireless
• Dial-up
• FTTP
• Wireless Mobile
• Enterprise Connectivity
This year we have updated the Internet Capacity Model to reflect new information from
the United States Federal Communications Commission (FCC) on re-classification of
broadband. In June 2008, the FCC recharacterized its definition of broadband. Previously, any connection over 200 kbps (in either direction) was considered broadband. As noted in last year's report, this resulted in inflated estimates of North American broadband lines. (Please see Figure 8: Total Access Lines - 2007 versus 2008.)
This year the FCC reclassified broadband as follows:
• "First Generation data:" 200 Kbps up to 768 Kbps
• "Basic Broadband:" 768 Kbps to 1.5 Mbps
• 1.5 Mbps to < 3.0 Mbps
• 3.0 Mbps to < 6.0 Mbps
• 6.0 Mbps to < 10.0 Mbps
• 10.0 Mbps to < 25.0 Mbps
• 25.0 Mbps to < 100.0 Mbps
• 100.0 Mbps and beyond
Based on the FCC data and other sources, we have matched the access-line data to the typical modalities for broadband available around the world: cable modem, DSL (ADSL & SDSL), FTTP, wireless mobile and dial-up. (Please see Table 1: Access Technology Line Rates.)
Access Technology         Upstream Bandwidth (Kbps)   Downstream Bandwidth (Kbps)
Cable Modem               384                         4000
SDSL                      128                         384
ADSL                      200                         768
Dial-up                   56                          56
FTTP                      2000                        15000
Wireless Mobile (2.5G)    57.6                        56.6
Wireless Mobile (3G)      384                         768
Table 1: Access Technology Line Rates
When the FCC first reclassified broadband, we assumed the radical shift in calculations
of broadband lines would affect our estimates of broadband access capacity. But our
original interpretation of FCC numbers and estimates of the percentages of cable, xDSL
and wireless data originally "hiding" in those numbers was very close. This increase in
accuracy does not bode well for North American projections. The slight decrease in
broadband access bandwidth accelerates the crossing of demand and access capacity into
2011, as discussed in section 3.3.1. (Please see Figure 9: Total Access Line Bandwidth
2007 Versus 2008.)
To illustrate why the recharacterization doesn't change the reality of broadband in North America: only 2.24% of total broadband subscribers in North America are on fiber-to-the-premises (FTTP) broadband, and about 45% are still on various flavors of DSL. (Please see Table 2: 2008 Projected Broadband Lines North America.)
Modality       NA Lines      Percent of Broadband
Cable Modem    37,278,700    51.35%
ADSL           31,310,283    43.13%
SDSL           1,182,423     1.63%
Wireless       1,200,000     1.65%
FTTP           1,623,097     2.24%
Total          72,594,503
Table 2: 2008 Projected Broadband Lines North America
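Combining Table 1 and Table 2 gives a rough sense of how the access-layer supply figure is assembled: multiply the line count for each modality by its downstream rate and sum. The sketch below does exactly that with the published table values; the wireless entry reuses the 3G rate from Table 1 as an assumption, since Table 2 does not break wireless out by generation, and treating every line as simultaneously saturated is of course a ceiling, not a traffic forecast.

# Rough roll-up of North American downstream access capacity from Tables 1 and 2.

DOWNSTREAM_KBPS = {            # from Table 1
    "Cable Modem": 4000,
    "ADSL": 768,
    "SDSL": 384,
    "Wireless": 768,           # assumption: use the 3G rate for the wireless category
    "FTTP": 15000,
}

LINES = {                      # from Table 2
    "Cable Modem": 37_278_700,
    "ADSL": 31_310_283,
    "SDSL": 1_182_423,
    "Wireless": 1_200_000,
    "FTTP": 1_623_097,
}

total_bps = sum(LINES[m] * DOWNSTREAM_KBPS[m] * 1000 for m in LINES)
print(f"Aggregate downstream access capacity: {total_bps / 1e12:.0f} Tbps")
for m in LINES:
    share = LINES[m] * DOWNSTREAM_KBPS[m] * 1000 / total_bps
    print(f"  {m:12s} {share:6.1%} of aggregate capacity")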
3.4.2 Core Switching and Connectivity Layers
Last year, we projected that core and connectivity capacities are increasing at roughly
50% per year. This was based on a historical estimate from 2000 to 2006, calculated by estimating bandwidth as a function of core- and connectivity-layer routing and switching
equipment shipments. We recognize that adding 100 core routers to the network doesn't
raise the total number of core routers by 100. Some equipment replaces older equipment.
We assume, however, that capacity never decreases. Even when a new router replaces an
old router, the aggregate capacity of the new router is greater than the capacity of the
existing router since the main reason for upgrade has been, and will continue to be, to
increase capacity. Our calculation of incremental capacity for an upgraded node will
initially be higher than actual. However, as carriers add additional trunks and higher-speed trunks, our initial over-calculation nets out.
Ideally, the best way to estimate capacity is to reference service provider-deployed
capacity, but as mentioned in last year's report, this information is not readily available.
Based on our approach, a number of factors indicate that our estimates of core and edge capacity growing by 50% per year are still reasonable and possibly even conservative:
• Cisco, Juniper and Alcatel-Lucent still make up the bulk of carrier core and edge-routing equipment.
• Experts say core and edge revenue grew to $11.2 billion in 2007, a 23% annual increase. Last year, we projected investment increases of 10% year-over-year in carrier switching/routing gear. Combining a 10% increase in investment with a doubling of capacity every 18 months (Moore's Law) gives us high confidence of capacity increasing by 50% per year.
• Cisco shipped $5.08B in service provider router/switch gear, equating to 63,616 units, from Q3 2007 through Q2 2008. Cisco's high-end router revenue (CRS-1, 12000, and 7600 Series) to service providers increased by $855 million in Cisco's fiscal year 2007 over the prior fiscal year and by $925 million in Cisco fiscal year 2008 over Cisco fiscal year 2007. Since 2007 growth over 2006 represented roughly a 50% increase in capacity based on revenue alone, it seems reasonable that growth is still progressing at at least a 50% rate.
• Our calculation for core router trunk speeds in 2006 was 1.99 x 10^10 bps (OC-384). Verizon announced in 2007 that it connects its core via 3.98 x 10^10 bps (OC-768), and AT&T announced in April 2007 that it is upgrading its core routing with Cisco CRS-1 routers - replacing Avici - and OC-768 links. In our capacity model, a doubling of the trunk speed equates to a doubling of effective core capacity, since it is the trunk rates that are the limiting factors for effective throughput, not the backplanes of the core routers themselves. Based on these figures, 50% growth is a conservative estimate of core capacity increases.
• Vendors are providing router architectures better suited to meet the scale demands of the Internet. For example, the Juniper T1600 (announced June 2007) offers a fully scalable architecture, with each switch supporting 1.6 Tbps of throughput (1.6 x 10^12 bps), 100 Gbps (1.0 x 10^11 bps) of throughput per slot, and the ability to extend the switch platform to 2.5 Terabits of throughput (2.5 x 10^12 bps). As a reference point, 1.6 Tbps equates to a potential throughput of 518 petabytes/month (see the conversion sketch following this list). The T1600 is the follow-on to Juniper's previous flagship announcement, the T640, in 2002. The T640 has 640 Gbps of throughput (6.4 x 10^11 bps), or 25% of the performance of the fully extended T1600 platform. Put another way, Moore's Law is living, though a bit lethargically, in core routing equipment, with a four-fold increase in performance in 60 months. Juniper shipped more than 1,000 T-Series units in 2007. By the end of the first quarter of 2008, Juniper had shipped 77 T1600 units. Just counting these units could add 39,917 petabytes/month of core capacity to the Internet. The increase in capability is more than enough to drive 50% growth in capacity.
• Meanwhile, for Cisco, the service provider core router/switch platforms are primarily the Cisco CRS-1, 7600 and 12000 series. Based on a scalable architecture that can support multiple shelves with multiple multi-stage switching fabrics, the CRS-1 can scale up to 92 Tbps in a single system. This means a fully loaded CRS-1 can add 29,808 petabytes/month to the Internet infrastructure. In other words, 10 fully loaded CRS-1 systems can meet the entire global core capacity increase we are projecting for 2008. Once again, the increase in capability is more than enough to drive 50% growth in capacity.
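The petabytes-per-month figures quoted in the bullets above follow from a straightforward unit conversion, sketched below. The platform throughputs are the vendor numbers already cited; the conversion assumes a 30-day month at sustained line rate, which is why these are ceiling figures rather than traffic estimates.

# Unit conversion used in the bullets above: sustained throughput in bits per second
# over a 30-day month, expressed in petabytes. Assumes continuous line-rate utilization.

SECONDS_PER_MONTH = 30 * 24 * 3600

def pb_per_month(bits_per_second):
    return bits_per_second * SECONDS_PER_MONTH / 8 / 1e15

print(f"Juniper T1600 (1.6 Tbps):            {pb_per_month(1.6e12):>10,.0f} PB/month")   # ~518
print(f"77 T1600 units:                      {pb_per_month(77 * 1.6e12):>10,.0f} PB/month")  # ~39,917
print(f"Cisco CRS-1, fully loaded (92 Tbps): {pb_per_month(92e12):>10,.0f} PB/month")    # ~29,808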
The bottom line for core and edge capacities: We see no indication of a change in growth
for core/edge equipment that requires an update to our predictions of core and edge
capacity continuing to grow at 50% per year. Last year we based our projections on
historical growth over multiple years. Although we could increase our estimation of
capacity growth this year, based on an apparent increase in investment, a more
conservative estimate based on historical data is to maintain our estimate of 50%.
3.4.3 Optical Switching Capacity Updates
In last year's report, we projected the growth in optical fiber capacity to follow a slow
growth curve with a flattening of capacity beginning in 2010.iii This year, we revise those
projections based on two factors: accelerated introduction of OC-768 data rates in the
backbone and an increase in projected deployment of OC-192 data rates in the metro
fiber environment. (Please see Figure 10: Optical Bandwidth Projections: 2007 versus
2008, Page 1.) Both of these factors indicate that fiber capacity is increasing more steeply
than we projected last year and shows no signs of flattening.
Fiber capacity is defined more by the electronics that illuminate the fiber than the fiber
itself. Consequently, a sharply increasing capacity curve is reasonable since fiber
interfaces will follow a Moore's Law dynamic with increasing capability per dollar spent.
According to industry reports, this will have the effect of increasing the bits per second
per dollar by as much as 20% a year through 2012.
Additionally, more of the optical investment is going to increase the capability of existing
metro-fiber installations, where carriers are retrofitting DWDM technology using higher
data rates.
The bottom line: In 2007, we projected fiber capacity would be more than sufficient to
carry projected Internet loads. In 2008, we see no reason to change that conclusion. But
we do see a rapid introduction of new technology that will significantly improve the data-carrying capacity of the Internet backbone and edge networks.
4 Key Finding: The Crunch is Still Coming
4.1 Assessing Trends and Dynamics in 2008
One of the greatest challenges we face is the perception that our model predicts an
Internet failure. This is incorrect. Rather, it predicts the potential for Internet brownouts
when access demand is greater than supply. To follow the power grid analogy, most
people experience minor disturbances, such as lights flickering, fans slowing, or
computers freezing, long before a system-wide brownout or outage occurs. In the
Internet, these disturbances may already be occurring, though they are yet to be systemic
or even predictable. For example, some access operators are putting caps on usage. The
justification is better bandwidth management so the 2% of the population that downloads
the equivalent of the Porn Library of Congress each week does not negatively affect the
98% of "normal" users.
Since last year's report, for example, Comcast has implemented a monthly usage cap of
250 gigabytes. Though 250 gigabytes sounds like a lot of traffic, we have interviewed
Comcast users who have exceeded the cap twice in the past six months and come within
15% of the cap in three of the other four months. Porn? No! It was downloading video
from the Olympics, the Democratic and Republican national conventions, and daily
online system backups.
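For a sense of scale, the arithmetic below converts a 250-gigabyte monthly cap into hours of streamed video at a few bit rates. The bit rates are assumptions for illustration (the 314 and 600 kbps values echo rates cited later in this report; 5 Mbps stands in for a nominal HD stream), not Comcast's figures.

# How far a 250 GB monthly cap goes at various streaming bit rates. Bit rates are
# illustrative assumptions, not figures from Comcast or the Nemertes model.

CAP_BYTES = 250e9

for label, kbps in [("~YouTube-quality stream", 314),
                    ("higher-quality SD stream", 600),
                    ("nominal HD stream (assumed)", 5000)]:
    hours = CAP_BYTES * 8 / (kbps * 1000) / 3600
    print(f"{label:28s} {kbps:>5} kbps -> about {hours:,.0f} hours/month under the cap")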
Though this traffic load is more than typical, it certainly isn't exceptional. This type of
usage will become typical over the next three to five years. The fact that Comcast's
network is, by the company's own admission, not able to cope with such usage patterns is
a clear indication that the crunch we predicted last year is beginning to occur. Keep in
mind that biological systems, such as user adoption of new technologies, tend to follow
Gaussian (bell-shaped) curves. So today's exceptional users are tomorrow's mainstream
users. And Comcast's network clearly can't handle an influx of such users. Stated more
succinctly and crudely by Cisco, "Today's bandwidth hog is tomorrow's average user."iv
Our model is, by design, independent of specific applications. In fact, we explicitly state
we're not in the business of predicting precisely which bandwidth-hungry applications
will be developed; only that by a Moore's Law of innovation, they surely will be.
However, for illustrative purposes in this year's report, we look closely at two Internet
events of 2008 that are indicative of the direction of Internet demand going forward, both
driven by video: BBC's iPlayer rollout and the Beijing 2008 Olympics.
4.1.1 BBC's iPlayer Debut in 2007
Much discussion regarding online video focuses on the growth of YouTube. In fact, last
year, we highlighted YouTube's phenomenal traffic growth:
"YouTube, for example, which emerged in 2005, and which Cisco says was already
responsible for roughly 27 petabytes/month in 2006 - about as much traffic as traveled on
the Internet in total in the year 2000."v
YouTube continues to grow at a fast rate, estimated to be 100 petabytes per month in
2008.vi This equates to a CAGR of approximately 100% per year. Yet, YouTube traffic
pales in comparison to potential demands placed on the Internet by higher-bandwidth
video services, including YouTube HD and BBC's iPlayer.
Christmas 2007 moved the iPlayer, an application for downloading broadcast video
content over the Internet, from the BBC into millions of U.K. homes. By April 2008, the
BBC estimated that iPlayer accounted for 3% to 5% of all U.K. Internet traffic. For a closer look at the impact of iPlayer, consider PlusNet, a U.K. ISP that has been closely tracking iPlayer traffic. (Please see Figure 11: PlusNet Internet Traffic 11/07 - 02/08.)
PlusNet saw phenomenal growth of iPlayer traffic, characterized as follows:
• 5% growth in total average usage since December 1
• 66% growth in volume of streaming traffic since December 1
• 2% growth in the number of customers using their connection for streaming since December 1
• 72% growth in the number of customers using over 250 MB of streaming in a month since December
• 100% growth in the number of customers using over 1 gigabyte of streaming in a month since December.viii
Tiscali, another U.K. ISP, says iPlayer traffic is accounting for ever-increasing demand
on its network: 10% of all traffic in March 2008 was from iPlayer. Considering this was
only three months post iPlayer introduction, this is phenomenal growth.
To put the potential of higher-bandwidth video services in perspective, 4 million
households downloading two high-definition movies per month drives the same Internet
demand as 50 million households watching 50 YouTube videos per month.
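The equivalence claimed above can be sanity-checked with the per-video figures cited later in this report (a YouTube clip of roughly 2.9 minutes at about 314 kbps); the high-definition movie size is our assumption, chosen at roughly 2 GB per streamed HD movie, not a number from the report.

# Back-of-the-envelope check of the comparison above. The YouTube clip length and bit
# rate echo figures cited later in this report; the 2 GB-per-HD-movie value is an
# assumption for a compressed streamed movie.

youtube_clip_bytes = 2.9 * 60 * 314e3 / 8           # ~6.8 MB per clip
youtube_total = 50e6 * 50 * youtube_clip_bytes       # 50M households, 50 clips/month
hd_movie_bytes = 2e9                                 # assumed ~2 GB per streamed HD movie
hd_total = 4e6 * 2 * hd_movie_bytes                  # 4M households, 2 movies/month

print(f"YouTube scenario:  {youtube_total / 1e15:.1f} PB/month")
print(f"HD movie scenario: {hd_total / 1e15:.1f} PB/month")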
As an Internet event in 2008, the BBC iPlayer has caused some consternation in the U.K.,
mostly driven by local ISPs who have to pay much higher transit fees because of the
increased demand by their customers. In terms of supply and demand, it appears that the
U.K. is supporting current iPlayer growth. BBC claims that the overall impact on the
U.K. Internet is "negligible." However, the fact that a new application can account for 3%
to 5% of all Internet traffic in the U.K. within four months of launch underscores the
premise of our research: New applications will drive increased demand, and restrictions
in capacity will limit the success of these new applications.
Further, as discussed in section 5.1.1, BBC is considering building out its own content
delivery network (CDN) to skirt much of the U.K. ISP infrastructure in an effort to move
content ever closer to the consumer.
4.1.2 NBC's Olympic Internet Streaming Gold Medal
Another Internet event of 2008 was the NBC Olympic coverage on the Internet. The
Olympics provide a nice contrast to the BBC iPlayer discussion. The BBC iPlayer is
largely a U.K. phenomenon with long-term implications for bandwidth demand. In
contrast, the Olympic coverage on the Internet was a global phenomenon with short-term
implications for bandwidth demand: 17 days. Yet, both examples comprise high-quality
video over the Internet, on-demand and streamed.
There was much speculation on the potential impact on the Internet leading up to both the
iPlayer launch and the coverage of the Beijing Olympics. In fact, there was a buzz in the
blogosphere that the demand posed for online video of Olympic events would cause
Internet outages. Yet, the Olympics came and went, and the Internet appears to be no
worse for wear. The catastrophe was averted for two reasons. First, Olympic Internet
coverage did not generate nearly as much traffic as people feared. Second, the Internet
backbone did not carry most of the burden of Olympic Internet coverage. This sounds
like a contradiction: Internet coverage not on the Internet. But as discussed in section
4.2.2.1, a growing amount of traffic is shifting away from "the public Internet." Before
discussing this point, let's take a look at the reality of Olympic Internet bandwidth
demands.
NBC had the exclusive rights to stream Olympic coverage over the Internet. Over the
course of the 17-day event, NBC had 3 million broadband-based viewers consuming 4
million hours of online streaming. An additional 4 million broadband-based viewers
watched on-demand events, equating to 3.5 million hours of video. Therefore, the total
count was 7 million broadband-based viewers and 7.5 million hours of online video.
There were 30 million streams, so the average length of any one video was 15 minutes.
Over the course of the event, NBC streamed an average of 0.44 million hours of video
over the Internet per day.
In July 2008, YouTube experienced 91 million viewers watching 5 billion videos with an
average duration of 2.9 minutes. This equates to 241.7 million hours per month or 7.8
million hours per day. In other words, the daily load of Beijing Olympic video on the
Internet is only 5.6% of a typical YouTube day. From another perspective, NBC
delivered 30.0 million streams, or 1.8 million streams/day on average. In July 2008,
YouTube streamed 5.0 billion videos, or 166.7 million streams/day.
Nevertheless, it is not as simple as this. After all, NBC proudly touted the high quality of
its videos, certainly higher than YouTube's. For the Olympics, NBC encoded video using
Digital Rapids DRC Stream technology combined with Microsoft Silverlight. To
compare actual bandwidth, we assume an equal distribution of viewers across the different resolution tiers NBC offered for streaming and Video-on-Demand (VoD). (Please see Table 3: NBC Olympic Coverage Internet Load.)
Service                               Viewers   Streams   Hours     %      Size        Kbps   TB/Day
YouTube (July)                        91M       5B        241.7M    100%   320 x 240   314    1101.7
NBC Live Streams                      3M        15M       4M
  NBC streaming (high resolution)                                   50%    492 x 336   600    31.8
  NBC streaming (regular resolution)                                50%    320 x 176   300    15.9
NBC VoD                               4M        15M       3.5M
  NBC VoD (lowest resolution)                                       25%    320 x 176   350    8.1
  NBC VoD (low resolution)                                          25%    424 x 240   600    13.9
  NBC VoD (medium resolution)                                       25%    592 x 336   1050   24.3
  NBC VoD (high resolution)                                         25%    848 x 480   1450   33.6
NBC Total                                                                                      128
Table 3: NBC Olympic Coverage Internet Load
As shown, a typical July 2008 YouTube day drives 1101.7 terabytes of traffic compared
with only 128 terabytes per day of online Olympics traffic, despite YouTube running at a
lower bit rate. In other words, the online Olympics drove 12% as much traffic as
YouTube on a daily basis.
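The per-tier terabytes-per-day values in Table 3 follow from the hours of video, the audience split, and the bit rates shown there; the short sketch below reproduces them (along with the YouTube baseline), assuming the 17-day Olympic event and a 31-day July as described in the text.

# Reproduces the TB/day figures in Table 3 from hours of video, audience split, and
# bit rate. Assumes a 17-day Olympic event and a 31-day July.

def tb_per_day(hours_per_day, share, kbps):
    return hours_per_day * share * 3600 * kbps * 1000 / 8 / 1e12

stream_hours_per_day  = 4.0e6 / 17     # NBC live streaming: 4M hours over 17 days
vod_hours_per_day     = 3.5e6 / 17     # NBC video-on-demand: 3.5M hours over 17 days
youtube_hours_per_day = 241.7e6 / 31   # July 2008 YouTube viewing

tiers = [
    ("YouTube (July)",          youtube_hours_per_day, 1.00, 314),
    ("NBC streaming (high)",    stream_hours_per_day,  0.50, 600),
    ("NBC streaming (regular)", stream_hours_per_day,  0.50, 300),
    ("NBC VoD (lowest)",        vod_hours_per_day,     0.25, 350),
    ("NBC VoD (low)",           vod_hours_per_day,     0.25, 600),
    ("NBC VoD (medium)",        vod_hours_per_day,     0.25, 1050),
    ("NBC VoD (high)",          vod_hours_per_day,     0.25, 1450),
]

nbc_total = 0.0
for name, hours, share, kbps in tiers:
    tb = tb_per_day(hours, share, kbps)
    if name.startswith("NBC"):
        nbc_total += tb
    print(f"{name:26s} {tb:8.1f} TB/day")
print(f"{'NBC total':26s} {nbc_total:8.1f} TB/day")   # ~128 TB/day vs ~1,102 for YouTube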
Worst-case, even if 100% of the NBC viewers were able to run their Silverlight viewers
at the highest resolution, the total load would have been 198 terabytes per day, which
equates to 18% of a YouTube July 2008 day. In other words, part of the reason the
Olympics did not bring the Internet to its knees as speculated is because the actual traffic
load on the Internet for broadband viewers was not nearly as great as people had feared.
Yet, there is another reason that ties directly to the discussion on the Internet peering
hierarchy. Before we can address this issue, we first must take a short dive into the murky
innards of ISP relationships: peering, transit and content-delivery networks.
4.2 Internet Peering and Why it Matters
4.2.1 The Internet Peering Architecture
To better understand why Internet Olympic coverage was only partially on the Internet, it
is valuable to review the history of peering and transit relationships. Fundamentally, no
one network directly reaches every point of the Internet so there must be a way for two
ISPs to transfer traffic. This approach is called "peering." Peering requires three
components:
• Physical interconnect of the networks
• An exchange of routing information through the Border Gateway Protocol (BGP)
• An agreement (whether formal or informal) regarding the terms and conditions under which traffic is exchanged.
Historically, ISPs peered at publicly operated Internet exchanges, which became known
as "network access points" or NAPs, and are now called "exchange points." These
facilities were by definition public; any provider with enough bandwidth to connect at a
site was (at least at first) welcome.
Over time, this open policy hardened increasingly into an arrangement in which providers
only peered with "like" providers. At the public exchanges, anyone could join. The
problem was that these exchanges were not secure (MAE East was in a parking garage,
and the door was often opened), and they were becoming increasingly non-resilient and
congested. So companies started forming private exchanges with their peers. If X and Y
generated roughly equivalent amounts of traffic, they would agree to peer with each
other, but not if either believed they were carrying more traffic than the other provider
was. The largest providers began to designate themselves as "Tier-1" players. Yet, even
Tier-1 definitions differ and many ISPs use the term quite loosely. For example, some
make distinctions between "Global Tier 1" and "Regional Tier 1" as a means to claim
membership. For this report, the only Tier 1 is a Tier 1 that has direct global reach.
Tier-1 internetworking is essentially a cartel system (or an oligarchy) where a Tier-1
provider is only a Tier-1 provider if it interconnects with all other Tier-1 providers, and
transports all of those providers' traffic in exchange for the transport of one's own, at no
cost to either party. Therefore, to become a Tier-1 provider one needs unanimous
acceptance into the club. As might be expected, these dynamics keep the number of Tier-
1 providers low: AT&T, Global Crossing, Level3, NTT, Qwest, Sprint, Verizon and
Savvis.
Providers that aren't among the Tier-1 players need to pay for some or all of their
connectivity to the Tier-1 players. So-called Tier-2 providers then peer with other Tier-2
providers for free. Tier-3 providers are, in essence, resellers of the backbone services of
Tier-1 or Tier-2 players.
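The tiering described above is enforced through routing policy rather than any central authority: a provider conventionally prefers routes learned from paying customers over routes from settlement-free peers, and peer routes over paid transit, and it advertises only customer routes to its peers and transit providers. The sketch below illustrates that conventional policy in schematic form; it is not a description of any specific carrier's configuration, and the AS numbers and prefix are placeholders.

# Schematic illustration of conventional peering/transit routing policy: prefer
# customer routes over peer routes over transit routes (e.g., via BGP local preference),
# and export only customer routes to peers and transit providers. Illustrative only.

PREFERENCE = {"customer": 300, "peer": 200, "transit": 100}   # higher value wins

def best_route(routes):
    """routes: list of (prefix, next_hop_as, relationship) for the same prefix."""
    return max(routes, key=lambda r: PREFERENCE[r[2]])

def exportable_to(neighbor_relationship, route_relationship):
    """Customers get every route; peers and transit providers see only customer routes."""
    return neighbor_relationship == "customer" or route_relationship == "customer"

routes_for_prefix = [
    ("203.0.113.0/24", 64501, "transit"),
    ("203.0.113.0/24", 64502, "peer"),
    ("203.0.113.0/24", 64503, "customer"),
]
print("selected:", best_route(routes_for_prefix))                                  # customer route wins
print("advertise customer route to a peer?", exportable_to("peer", "customer"))    # True
print("advertise peer route to a peer?    ", exportable_to("peer", "peer"))        # False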
4.2.2 Welcome to the Carrier Hotel California
Where do these physical (and logical) peering arrangements take place? There are
essentially two venues: Public peering points, at which multiple ISPs maintain a
presence, and private peering points, at which carriers connect one-on-one with each
other and negotiate terms and conditions of their relationships privately. Private peering
points typically occur in carrier hotels where a third party provides the physical
interconnections across which providers can communicate.
A large percentage of Internet traffic traverses private peering points, even for non-Tier-1
players. In many cases, this is simply because of Tier-2 providers connecting as peers
versus establishing transit relationships with Tier-1 providers. In others, it has to do with
the type of traffic carried or the service expectations of the customers whose traffic they
carry. For example, Cogent Networks (a Tier-2 player) operates a massive ISP network
with points of presence in 110 markets in the U.S. and Europe. Cogent's backbone
operates at 80 Gbps to 200 Gbps, and can scale to more than 1 Tbps. Cogent states that
95% of its Internet traffic goes across private peering connections.
Moreover, the percentage of Internet traffic that traverses private peering points is
increasing, which has a dramatic impact on how Internet traffic is measured. The main
reason providers opt for private peering is performance. And, as we'll discuss, it's part of
a broader trend toward "flattening" and fragmenting the Internet.
Before we discuss the implications of private peering, it's valuable to discuss the
relationship between peering and pole vaulting.
4.2.2.1 Peering and Pole Vaulting: Did Olympic Coverage Vault the Internet?
A closer look at the NBC infrastructure shows that the majority of the traffic traversed
private networks versus the public Internet. First, NBC transported video from Beijing to
NBC facilities in Los Angeles and New York City across three dedicated 150 Mbps (OC-3) connections. For distribution of video (streams and VoD) to Web users, NBC relied
upon content-distribution services from Limelight Networks.
Limelight Networks is a global content-delivery network provider that has built a high-speed backbone network that "boasts direct connections to nearly 900 user access
networks around the world."ix Upon closer examination, it is clear that Limelight is
building a high-performance "overlay" network that is partially interconnected with ISPs
and partially private, with dedicated switches on leased/owned fiber. In 2006, Limelight
announced plans to build a 60 Gbps dense wave division multiplexing (DWDM) fiber
optic transport system between San Francisco, San Jose and Oakland, Calif., with support
for 10-Gbps interfaces. The dedicated optical connections interconnect Limelight's 18
regional data centers.
In the United States, Limelight has an agreement with Switch and Data to interconnect
with local ISPs and Internet aggregation points. Switch and Data offers carrier hotels at
12 locations around the country. In these facilities, Limelight interconnects with high-speed (10 Gbps) connections. In Europe, Limelight relies on PacketExchange, providing
26 Points of Presence (PoP) for direct connectivity from the U.S.
The goal of a CDN is to distribute content as close to the end-user (eyeballs) as possible
to ensure maximum control of the end-to-end traffic path. Our assertion is that NBC's
Olympic coverage over Limelight's CDN is indicative of a fundamental shifting of traffic
from "public Internet" to a mostly private network. Or more accurately, content providers
are no longer relying on the public Internet infrastructure to deliver their content. Instead,
they're purchasing or building overlay networks to ensure their content reaches users first.
Another example, though different in terms of bandwidth requirements, is the
relationship between Amazon and Sprint to deliver content to the Kindle electronic
books. Kindle users can download books simply by clicking a button on the device. The
button automatically launches a wireless link across Sprint's network to Amazon.
Amazon buys services in bulk from Sprint, negotiates service agreements, and pays for
the cost of transport end-to-end (that is, from Amazon's site across Sprint's network to the
user). The benefit to users is that the delivery is instantaneous. The tradeoff? Freedom of
choice. Users can't opt for, say, Verizon's wireless network instead.
4.3 The Mystery of the "Missing" Traffic
What does all this discussion about peering have to do with measuring Internet traffic?
Quite a lot, as it turns out. There is very little public information available about the
happenings inside service-provider networks. At best, we know of expansion of capacity
through press releases, annual reports and confidential interviews with the engineers and
architects inside carriers, cable companies, and other provider networks.
This means that the only practical way to catch a glimpse of real Internet traffic is to
monitor traffic at public peering points. This approach is effective to the extent that one
can safely assume that most traffic, or at least a constant percentage of Internet traffic, is
going across public peering points. If that assumption does not hold, monitoring public peering points doesn't necessarily capture what's actually going on in the Internet.
4.3.1 Monitoring Traffic by Looking at the Toll Booths
If the percentage of traffic that is going through public peering points is consistent over
time, you can simply monitor the traffic through public peering points, then scale that figure by the appropriate factor, and have a good approximation of overall Internet
traffic. Additionally, the growth rates for both (traffic through public peering points and
private network traffic) should be identical.
It's as if you had two types of roads: freeways and toll roads. If you can assume that the
percentage of traffic that goes through both toll roads and freeways is constant, you can
effectively monitor growth rates on both sorts of roads by looking just at toll-road traffic.
But what if an increasing percentage of cars opt to travel through the freeways? Counting
toll way traffic then becomes an unreliable mechanism, and you can't even measure how
unreliable it might be, because there's no clear way to determine what percentage of cars
is forsaking toll roads for freeways. And if there's a compelling reason for drivers to
eschew the toll roads, for instance to avoid paying the tolls, it's reasonable to assume that is exactly what is occurring.
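In estimation terms, the toll-booth method amounts to dividing measured public-peering traffic by the share of total traffic assumed to cross public peering points; the sketch below shows how a drifting share quietly turns into an apparent growth slowdown. All numbers are invented for illustration.

# If total traffic doubles but the share crossing public peering points shrinks,
# growth observed at the "toll booths" understates real growth. Invented numbers only.

true_total = {2007: 100.0, 2008: 200.0}      # real traffic (arbitrary units), ~100% growth
public_share = {2007: 0.50, 2008: 0.35}      # assumed share crossing public peering points

measured = {y: true_total[y] * public_share[y] for y in true_total}
observed_growth = measured[2008] / measured[2007] - 1
naive_estimate_2008 = measured[2008] / public_share[2007]   # assumes the share never moved

print(f"observed growth at public peering points: {observed_growth:.0%}")   # 40%, not 100%
print(f"total traffic inferred from 2007's share: {naive_estimate_2008:.0f} "
      f"(actual: {true_total[2008]:.0f})")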
That's exactly what's happening with today's Internet traffic. Based on extensive research
(including confidential interviews with enterprise organizations, content providers, telcos
and other carriers), we're seeing IP traffic growth continuing, and the rate of increase
growing. If traffic is shifting away from the public peering points, we would still expect traffic at the public peering points to increase, just at a declining rate: a shrinking slice of overall traffic that continues to grow even as total traffic accelerates. In fact, the best available data on the subject of
Internet traffic tends to support our conclusions.
Dr. Andrew Odlyzko, a researcher at the University of Minnesota, is one of the accepted
authorities for measurement of Internet traffic. He and his team at the University of
Minnesota monitor Internet traffic flow at public peering points around the world. In
2007, Odlyzko said Internet traffic was increasing at approximately 60% year over year,
while our assessment was closer to 100%. Although we both agreed that growth was
extreme, where we really differed was in the rate of change in the growth. Odlyzko sees it
slowing down. This year, for example, he says, "There is not a single sign of an
unmanageable flood of traffic. If anything, a slowdown is visible more than a speedup." x
We propose that Dr. Odlyzko's findings are not an indication of reduced demand, but
rather an indication that Internet traffic is shifting away from the very points he is
measuring, a hypothesis that Odlyzko's own numbers actually validate, as we'll discuss
shortly.
Why? The "tollbooth" analogy is imperfect because traffic through the public peering
points is free. However, there is an "overhead" associated with transit through the public
peering points - a performance impact. As we'll discuss shortly, there are clear indications
that content providers in particular, and all ISPs generally, are migrating traffic toward
"flatter" networks (those with fewer routing and peering transit points) in an attempt to
optimize performance.
4.3.2 A Closer Look at Public Peering Point Traffic
Odlyzko's MINTS (Minnesota Internet Traffic Studies) team is tracking Internet traffic at
111 sites around the world. In 2008, the team analyzed 74 of those sites, of which 71
provide reliable data. (Please see Table 4: MINTS Historical Internet Traffic Data xi,
Page 1.)
Year | Sites Analyzed | Reliable Sites | Mean Annual Growth Rate | Median Annual Growth Rate | Volume-Rated Mean Annual Growth
2008 | 74 | 71 | 1.665 | 1.167 | 1.314
2007 | 85 | 81 | 1.747 | 1.322 | 3.533
2006 | 75 | 58 | 3.357 | 1.578 | 2.078
2005 | 41 | 31 | 2.150 | 1.513 | 1.927
Table 4: MINTS Historical Internet Traffic Data xi
What this summary doesn't show, and what is fascinating about the analysis, is that of the
71 peering points observed through mid-2008, 26 show an annual growth rate of less
than 1.0 (a rate of 1.0 would mean no change). In other words, more than one-third of the
peering points show declining traffic. These points include major international gateways,
including the Korea Internet Exchange (KINX), the Amsterdam Internet Exchange
(AMS-IX), and the London Internet Providers Exchange (LIPEX). It is hard to imagine
that these figures reflect broader traffic trends, especially given that other organizations
estimate global Internet traffic increased at a rate between 50% and 100% from mid-2007
to mid-2008, and that MINTS itself still shows global traffic increasing at a mean
annual growth rate of 66%.
The most compelling explanation for this discrepancy is that traffic is shifting from
public peering points to private peering points and increasingly, to private or semi-private
"overlay" networks.
5 The Great Flattening
As discussed in section 4.2.2.1, Limelight Networks has been building a CDN that
combines high-speed private links with direct interconnects to regional ISPs and
aggregation points. Limelight uses Switch and Data's PAIX peering points. Notably,
Switch and Data estimates its traffic grew 112% over the past year (10/07 - 10/08),
nearly twice the average rate calculated by looking at public peering points.
If Limelight Networks is typical of other CDNs, this is an indication that the Internet is
flattening: Content providers are bypassing the traditional multiple-interconnected
Internet architecture by connecting directly with access providers.
Limelight is only one example. What about the gorillas of content: Google, Yahoo and
Microsoft? There is much speculation on the extent of their private networks since the
companies rarely disclose details. We've found strong indications that they are investing
in direct network connectivity, and interconnecting these networks themselves:
"Yahoo!,Microsoft and Google, for example, have built out substantial networks
and are peering at exchange points around the world. xiiYahoo! currently has over
640 peering sessions and a multiple- OC-192 (10Gbps) global backbone to
distribute its content.xiii"
In other words, content providers Google, Yahoo and Microsoft, as well as
Amazon and NBC, are investing in network infrastructure, whether directly in the
form of optical fiber or in the form of bulk purchases of carrier-provided
dedicated bandwidth, to deliver content as directly as possible from their sites
to end-users.
5.1.1 Peering Through the Clouds
Given the opacity of these content providers' private networks, the only way to
estimate the extent of their infrastructure is to use network-analysis tools along with
extensive inference. A study by Gill et al. from the University of Calgary performed
traceroute analysis of the top 20 content providers (a minimal sketch of this style of
measurement appears after the findings below). They looked at four metrics:
• Average number of hops on Tier-1 networks
• Number of paths that involve no Tier-1 networks
• Number of different ISPs to which a content provider is connected
• Number of geographic locations in which a content provider's routers appear.
The analysis concludes: xiv
• Google and Yahoo routes show an average of only one Tier-1 ISP hop.
• Microsoft and YouTube (Google) show an average of only two Tier-1 ISP hops. The researchers also indicate that they see a migration of YouTube traffic onto the Google infrastructure.
• For Google and Microsoft, nearly 35% of connections never hit a Tier-1 network; Yahoo is at 30% and YouTube is at about 25%.
• Microsoft, Google, and Yahoo are extremely well connected, connecting to 27, 27 and 20 different autonomous systems (AS's), respectively.
• Geographically, Microsoft, Google and Yahoo show networks that span the U.S. with access points in major metropolitan regions; Google's network covers the globe.
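The sketch below illustrates, in a simplified way, the style of measurement such a study relies on. It is not the researchers' actual methodology: it shells out to the standard traceroute tool and matches hop hostnames against a hypothetical list of Tier-1 name fragments, whereas a real study would map hop addresses to autonomous systems using routing-registry data.

import subprocess

# Hypothetical list of name fragments that identify Tier-1 backbones in
# reverse-DNS hostnames; a real study would map hop addresses to AS numbers
# using routing-registry data instead of string matching.
TIER1_HINTS = ["level3", "cogent", "gtt", "att.net", "sprintlink", "ntt"]

def tier1_hops(destination):
    """Run traceroute and count hops whose hostnames suggest a Tier-1 network."""
    out = subprocess.run(["traceroute", "-q", "1", destination],
                         capture_output=True, text=True, timeout=120).stdout
    hops = [line for line in out.splitlines()[1:] if line.strip()]
    tier1 = [h for h in hops if any(hint in h.lower() for hint in TIER1_HINTS)]
    return len(hops), len(tier1)

if __name__ == "__main__":
    for site in ["www.example.com"]:      # substitute real content-provider hosts
        total, t1 = tier1_hops(site)
        print(f"{site}: {total} hops, {t1} on Tier-1 networks")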
There are issues with traceroute analysis, since it is not sensitive to distance or
performance. However, the analysis makes clear that the three largest content providers in
the U.S. have built out dedicated infrastructures. We believe the flattening of the Internet
is a trend. The BBC, for example, is considering building out its own content-delivery
network, much as Google is building out a CDN for YouTube and its other video
services. As discussed in section 4.1.1, the BBC has seen explosive traffic growth driven
by its launch of iPlayer in late 2007.
5.1.2 Why The Flattening?
If this flattening is happening, what's the driver? In a word: Performance. Performance is
the basis of successful consumption of Internet content. Content producers know that if
performance is poor, end-users will turn elsewhere for their entertainment, Web
searching, social networking and applications. This drives content producers to flatten
their networks as much as possible, and this flattening means controlling as much of the
end-to-end transport as possible.
Guaranteeing performance in networks is a complex task, tied to control and
management. As content shifts to more real-time delivery (video and Voice over IP) the
performance tolerances shrink, resulting in a need for even tighter control. From a
content-delivery perspective, the level of control rises as the number of peers and
networks that the traffic traverses drops. Essentially, the need for ever-increasing control
is flattening the Internet: Fewer hops, fewer peers, fewer relationships equates to greater
control over performance.
5.2 The Shifting Internet Hierarchy: From Oligarchy to
City-State Fiefdoms
If the Internet is getting "flatter" and content providers are increasingly seeking
proprietary infrastructure to ensure delivery of their applications and content, what
impact does that have on capacity? In essence, the overall trend appears to be away from
the old model of the Internet, in which multiple-interconnected backbones carried a mix
of traffic destined for all users, toward an increasingly fragmented architecture, in which
certain applications and content have specialized network infrastructure and other
applications and content don't. The oligarchy, in other words, is devolving into individual
city-states.
Going forward, the danger is that the playing field becomes increasingly tilted in favor of
larger and more established content providers, who have the market muscle to procure
proprietary networks to ensure their content receives priority delivery. If Google can
purchase (or build) its own backbone fiber-optic network and use its market power to
negotiate favorable terms from access providers, it can ensure that future Google
competitors - those without the wherewithal to purchase or negotiate massive amounts of
bandwidth - can be nipped in the bud. The barrier to entry goes up. No longer is it enough
to have an attractive site. Potential content providers also must supply the bandwidth to
ensure users can reach their sites effectively.
This fragmentation also has an impact directly on end-user access, because of how
carriers deliver access bandwidth to them. In the Nemertes Internet Model, we noted that
the typical Internet-attached user has multiple Internet access connections, and we
assessed the bandwidth available to each user by summing up the bandwidth available
across each access connection.
That's a good first-order approach, and it was enough to demonstrate the core finding
(reaffirmed in this year's report) that user demand would exceed aggregate access
capacity within three to four years. However, what that approach doesn't take into
consideration is the impact on users of receiving this access bandwidth in the form of
multiple low-speed connections that can't be aggregated.
Consider a user with a Blackberry, broadband wireless card, work and home wired
connections, and perhaps an Amazon Kindle. Even if, in the aggregate, the user has
access to 10 Mbps or more in both upstream and downstream bandwidth, the usable
bandwidth is fragmented across multiple disparate connections. (You cannot, for
example, connect a Kindle to a Blackberry to improve the network performance of either
device.)
So even though users may have, in theory, enough bandwidth across the various
connections to enable interactive videoconferencing, in practice the bandwidth is
unavailable for that application. It's like having many barrels full of water, but no pool
large enough for swimming.
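A small sketch, using hypothetical connection speeds, makes the arithmetic behind this point explicit: the sum of a user's connections can comfortably exceed an application's requirement even though no single connection can carry it.

# Hypothetical downstream speeds, in Mbps, for one user's separate connections.
connections = {
    "smartphone (3G)": 1.5,
    "wireless broadband card": 2.0,
    "home DSL": 5.0,
    "office LAN share": 3.0,
    "e-reader radio": 0.4,
}

aggregate = sum(connections.values())          # what a summed model sees: ~11.9 Mbps
usable_for_one_app = max(connections.values()) # what one application can use: 5.0 Mbps

videoconference_needs = 8.0  # Mbps, an assumed requirement for interactive HD video
print(f"aggregate access bandwidth: {aggregate:.1f} Mbps")
print(f"largest single connection:  {usable_for_one_app:.1f} Mbps")
print("HD videoconference possible?",
      usable_for_one_app >= videoconference_needs)  # False despite ample aggregate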
This is obviously a simplistic example, and it overlooks the distinction between wired
and mobile connectivity, as well as the fact that bandwidth may be delivered to vastly
different locations, such as work and home. But if the overall trend is toward content
providers owning (or controlling) the connection end-to-end, including the access
component, fragmentation will increasingly limit available bandwidth.
6 Volume 1: Conclusion and Summary
To sum up, this year's research highlights several key points:
• Demand will continue to grow to a point at which it outpaces capacity, with the gating factor continuing to be the access layer. Nemertes still projects this point to occur by 2012, plus or minus a year or two (and dependent on how the current economic state affects provider buildout and user demand). In fact, since last year, indications are that sophisticated users already are experiencing capacity limitations, and that the percentage of these users will increase rapidly in coming years.
• Internet traffic measurements that assess overall growth rates based on public peering point data are insufficient. Traffic continues to migrate away from public peering points and increasingly onto private and semiprivate backbones and overlay networks. To the best of our ability to ascertain, this is indeed occurring, and at an accelerating pace.
• Demand continues to accelerate. The growth of new applications, such as BBC iPlayer, is indicative of the speed with which demand can rise when innovation is delivered.
• Content providers are driving much of the trend toward the flattening and fragmentation of the Internet. The result for users is continued high quality of service for favored content (whose providers can afford to invest in bandwidth for their content).
• Over time, the performance distinction between "favored" content (whose providers can afford to procure bandwidth for their content) and "general" content will increase.
Once again, none of this means the Internet will abruptly stop working (as some media
and industry experts inaccurately portrayed our findings last year). Instead, the
"slowdown" will be in the area of innovation. New content and application providers
will be handicapped by the (relatively) poorer performance of their offerings vis-à-vis
those created by the established players. Users will find access bandwidth limitations
hampering their deployment of next-generation applications, ranging from software-as-a-service
(SaaS) to interactive video. And people will wonder why it's taking so long for
the next Google, YouTube or eBay to arise.
7 Volume 2: Introduction
When Nemertes issued The Internet Singularity Delayed, its landmark report on Internet
growth, capacity and demand in 2007, the principal issue that we sought to address was
whether there was enough Internet bandwidth to support the increasing application
demands that we could see developing. Although bandwidth is generally a function of the
absolute transmission rates that a given connection can sustain, frequently raw
transmission speed doesn't deliver a high quality of service. In spite of broadband access,
some applications just seem to work better than others do over the Internet. There is also
another dynamic at work here - one which we noted, but kept out of scope in the initial
research in the interests of maintaining a tight focus on the issue of bandwidth. That
dynamic is user performance, often referred to as "quality of experience."
It may be easy to blame a lack of bandwidth for slow responses or timeouts, but often
logical issues are as much to blame. Logical problems on the Internet, ranging from
routing loops, to slow DNS responses, to long paths between end-points, contribute to a
phenomenon we call "perceived bandwidth," meaning how much bandwidth the user
perceives to be available.
Perceived bandwidth depends on such things as how long a transmission rate is sustained
and how long it takes to make a connection. Users experience these factors in the form of
wait time to connect. With sensitive applications, such as voice and video, users
experience transmission problems, such as brief interruptions in the call or video or even
session drops. These issues have as much to do with the Internet's logical layer as with
its physical-layer limitations.
As we define it, the Internet's logical layer refers to the routing algorithms and address
schemas that enable the Internet to find and deliver data packets across the network. This
fabric of what is essentially software constitutes the logical Internet and is at least as
important as adequate bandwidth in ensuring the network transports data accurately and
quickly.
As with the physical Internet, the logical Internet has its own issues and there are many
contending opinions on how to address them. Unlike the physical Internet, the issues in
the logical Internet are less subject to simple applications of physical devices. For
example, you can't necessarily fix the impact of address fragmentation by adding bigger
routers. And, whereas physical constraints can be addressed locally, problems in the
logical layer may be systemic. Because there's no global owner of the Internet, that
makes logical-Internet issues singularly difficult to address and remedy.
In this volume, we review the logical infrastructure of the Internet and describe the
challenges in the ability of the logical layer to meet present and future demands. We look
at several potential scenarios to overcome logical layer challenges. And finally, we finish
with an estimate of potential impacts as the logical layer of the Internet is pressed to
support the demand developing for Internet capacity.
8 Addressing and Routing: How it All
Works
The logical Internet is composed of logically defined nodes communicating through a
complex series of virtual connections. Because it is hard to visualize, it tends to remain in
the domain of engineers and theorists. Nevertheless, there are those who have attempted
to generate pictures of what the Internet would look like if you could actually see the
logical space. One of the more interesting attempts, based on telemetry gleaned from
monitors across the Internet, is the map developed by Hebrew University in Jerusalem
(used with permission). (Please see Figure 12: The Internet as Medusa, Page 1.)
Visualized in this way, the Internet is a concentrated core of primary nodes connected
through progressively more dispersed logical connections. In other words, just as in the
physical architecture, the logical architecture consists of a core Internet surrounded by an
edge of more finely defined sub-networks. But the logical network does not map precisely
to the physical network, since logical locations are related by the proximity of their
addresses, not by their physical proximity. Let's see how this works.
8.1 How the Internet Works: Routing
Addresses define logical locations. These addresses, much like telephone numbers in the
public switched telephone network, serve as locators for the various resources attached to
the Internet. As the applications and users generate packets of data, each packet is
associated with an address header that defines the sender and the intended receiver.
Internet routers then read these addresses to determine where to send the packets. The
process is dynamic and can change from packet to packet, depending on network
conditions at any given time.
Within the traditional layered model for network communications, switching and routing
take place at layer two and layer three, respectively (layer one is the physical layer
discussed in Volume 1). At layer two, the data-link layer, switches transfer data
between devices on the same logical network by reading the MAC (Media Access
Control) address from the frame header. Switching allows two or more devices to "see"
each other and exchange information. But what if the devices are not on the same logical
network? This is where routing comes in.
Layer three, the network layer, allows different networks to exchange information.
Routers take data generated by a device on one network and find a path to a target device
on another network. Both routing and switching are important to the Internet, but routing
is what ultimately makes a collection of networks like the Internet work, and routers are
the key to this process.
Routers are special-purpose computers whose software and hardware usually are
customized for the tasks of routing and forwarding information. Routers generally
contain a specialized operating system as well as many processors and multiple network
interfaces. Routers connect to one or more logical subnets, which do not necessarily map
one-to-one to the physical interfaces of the router.
Router operating systems are subdivided into two logical domains: a control plane and a
data plane. The control plane manages the assembly of a forwarding table and handles the
tasks associated with determining the most appropriate path for forwarding any given
packet. The data plane (also known as the forwarding plane) reads packet header
information and routes incoming packets to the appropriate outgoing interface.
The forwarding process looks up the destination address of an incoming packet and
makes a decision based on information contained in the forwarding table. Vendors and
carriers may apply various enhancements to speed this process, such as grouping similar
packets into "flows" that are forwarded the same way, or caching recent look-ups to
improve router performance. But at its heart, routing requires making a unique decision
for each incoming packet.
Forwarding requires that the router not only maintain an up-to-date forwarding table as
network conditions change, but also quickly find entries in the table so it can forward
packets to the right interface. The closest analogy is looking up a telephone number
manually. First, you need to know whom you are trying to call. Then, you need to scan
the book alphabetically to find the destination. The entry gives you a telephone number,
which you manually dial to connect. Likewise, a router is required to match a logical
address to a specific connection and then send the packet along to the next router in the
direction of the intended destination. In the case of the Internet, this function is repeated
at transmission speeds for every packet. The complexity of the process is not the same,
though, for each layer of the Internet.
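Conceptually, the lookup is a longest-prefix match against the forwarding table, a point the report returns to later in this volume. The sketch below, with a hypothetical four-entry table, shows the essential logic; production routers implement this in specialized hardware rather than by scanning a list.

import ipaddress

# A toy forwarding table: prefix -> outgoing interface. Entries are hypothetical.
forwarding_table = {
    ipaddress.ip_network("0.0.0.0/0"):         "if0",  # default route
    ipaddress.ip_network("203.0.113.0/24"):    "if1",
    ipaddress.ip_network("198.51.100.0/22"):   "if2",
    ipaddress.ip_network("198.51.100.128/25"): "if3",  # a more-specific route
}

def lookup(destination):
    """Return the interface for the longest (most specific) matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, iface) for net, iface in forwarding_table.items() if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)  # longest prefix wins

print(lookup("198.51.100.200"))  # matches the /25, not the covering /22
print(lookup("192.0.2.7"))       # falls through to the default route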
8.1.1 Routing in the Core
Routing in the core takes much more horsepower than routing at the edge of the network.
This is because of the resolution required for route discrimination. In the core, routers
need a full "telephone book" of Internet addresses so they can accurately route packets
through peered networks. Consequently, the process of resolving an address can be
compute- and memory-intensive. As the number of entries increases, so does the routing
horsepower needed, not just to keep up with growing route tables, but also to recalculate
forwarding tables based on changing network conditions such as link failures, logical
failures, or new routes being introduced into the global routing table.
Although carriers can use a number of techniques to reduce the complexity of route and
destination resolution, ultimately the core of the Internet requires very large routers with
very fast processing. Such routers are expensive, and Tier-1 carriers maintain them at
peering points. Core routers also require long lead times to develop, test and deploy,
since they depend on special-purpose Application Specific Integrated Circuits (ASICs).
Routing in the core typically takes place at peering points between Tier-1 carriers, the
largest providers. Peering points are where the routers of one carrier hand off traffic to
the routers of another.
8.1.2 Routing in the Edge
Routing in the edge can be much simpler. Rather than using a complete "telephone book,"
such routers often need to know only the addresses of local resources plus a path toward
the upstream (Tier-1) ISP. It is enough simply to know where to forward a request for a
connection.
For routers that live on local or enterprise networks, the routing task can be simpler still.
Such routers can use much less-complex routing algorithms that require less overhead.
Using the telephone analogy once again, such routers, like PBXs, only need to know the
extension, not the complete telephone number.
The catch with this approach is that it's strictly hierarchical. It assumes the user is
connected to one, and only one, ISP. Many enterprise organizations (and end-users) seek
connectivity with multiple providers, a practice called multihoming. Multihoming
increases reliability (if one ISP goes down, the other continues to provide connectivity),
but it makes routing considerably harder. In order for multihoming to work, routers must
advertise multiple paths to their networks to multiple providers, adding complexity and
increasing the number of routes in the global routing table.
8.2 Border Gateway Protocol
Border Gateway Protocol, or BGP, is the primary routing protocol for the Internet. BGP
determines and manages reachability between networks (known as autonomous
systems). BGP is a simple yet elegant routing protocol that makes forwarding
decisions based on the shortest path to the destination network, while providing network
operators with a myriad of options for manually controlling how traffic flows across their
networks.
Internet routing converted to BGP Version 4 in 1994. BGP defines a logical state
machine that sets up and manages a table of IP networks. (Please see Figure 13: BGP
State Diagram, Page 1.)
As such, BGP is concerned with routing a packet over a specific connection based on the
IP address. As the number of routing-table entries expands, along with the associated IP
address and connection information, the time to resolve a connection increases. One way
to reduce the number of entries, and consequently the time required to resolve an address,
is to aggregate groups of similar addresses into larger blocks and forward any packet
carrying the block address to the same location for further resolution. This approach
works only if addresses remain in non-fragmented blocks; currently, the integrity of
address blocks is maintained at the service-provider level. Returning once again to our
telephone book analogy, it is precisely like having telephone directories for each area
code rather than a single directory for all area codes; without that separation, the single
directory would be massive and unusable.
Furthermore, the growth of multihoming means that addresses are increasingly
fragmented, and not easily aggregated into higher-level blocks. Why? Because with
multihoming, multiple providers must advertise smaller and smaller blocks of addresses
to ensure that end-points are reachable in the event of a failure of any one single provider.
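The following sketch, using Python's ipaddress module and hypothetical documentation-range prefixes, shows both sides of this dynamic: contiguous provider-assigned blocks collapse into a single aggregate route, while a multihomed customer's more-specific announcement re-inflates the table.

import ipaddress

# Four contiguous /24s assigned by one provider (hypothetical prefixes).
customer_blocks = [ipaddress.ip_network(f"203.0.{i}.0/24") for i in range(112, 116)]

# With purely provider-based assignment, they aggregate into a single entry.
aggregated = list(ipaddress.collapse_addresses(customer_blocks))
print(aggregated)  # [IPv4Network('203.0.112.0/22')] -- one routing-table entry

# If one customer multihomes, its /24 must also be announced on its own (via the
# second provider) so it stays reachable when the first provider fails; the
# aggregate alone no longer suffices and the table grows.
announcements = aggregated + [ipaddress.ip_network("203.0.114.0/24")]
print(announcements)  # two entries now stand where one stood before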
8.2.1 BGP and IPV4
As noted, the BGP routing process is tied intrinsically to resolving connections for
specific addresses. The current addressing schema, IPV4 (for Internet Protocol Version
4), uses a 32-bit address, providing 4,294,967,296 potential addresses. At the extreme,
BGP could maintain a table of more than 4 billion entries, each assigning specific routing
instructions for a single endpoint or node. That is clearly impractical when packet transit
time across the network may be measured in milliseconds.
The reason there aren't more than 4 billion entries is that addresses are typically assigned
to networks in blocks. In the early days of the Internet, this process was rigid, with three
possible block designations: Class A, B or C. As Internet architects realized this approach
was inefficient, they created something called "CIDR" or classless inter-domain routing,
enabling addresses to be assigned into smaller blocks. CIDR led to an increasing number
of routes in the global routing table, but at a gradual rate that was easy for operators to
digest. As long as routing tables can maintain addresses in blocks, it is possible to resolve
a connection without examining the entire address. Core routers only need table entries
that tell them where a block of addresses resides. Again using our area-code analogy,
think of being able to route all calls based on only looking at an area code, versus having
to look at the entire phone number. As noted above, however, multihoming works against
this aggregation: providers must advertise ever-smaller blocks of addresses to keep
end-points reachable if any single provider fails.
In addition to the growth of multihoming, the sheer growth of the Internet is leading to
increasing BGP table entries, as well. (Please see Figure 14: Active BGP Entries, Page 1.)
Since 2000, there has been a 167% increase in routing table entries, and there are now
more than 200,000 entries. Projecting a linear growth dynamic, there will be more than
400,000 entries by the end of 2013.
But this holds only if table-entry growth remains linear. With linear growth, a Moore's
Law-style exponential doubling of routing capacity would easily keep up. As we will see,
that is probably not going to be the case. The problem is that the very architecture of IP
and IP routing (which relies on longest-prefix matching to make forwarding decisions)
cannot accommodate the economic reality that available IPv4 address space is becoming
exhausted just as end-users increasingly demand multihoming.
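A back-of-the-envelope sketch makes the comparison explicit. The linear table-growth figures follow the numbers cited above; the starting router capacity and the 24-month doubling period are illustrative assumptions, not vendor data.

# Back-of-the-envelope comparison; the starting router capacity (500,000 routes)
# and the 24-month doubling period are illustrative assumptions.
years = range(2008, 2019)

def table_entries(year):
    # Linear growth: ~200,000 entries in 2008, ~400,000 by the end of 2013.
    return 200_000 + 40_000 * (year - 2008)

def router_capacity(year, base=500_000, doubling_years=2):
    # Moore's-Law-style doubling of route-processing capacity.
    return base * 2 ** ((year - 2008) / doubling_years)

for y in years:
    need, have = table_entries(y), router_capacity(y)
    print(f"{y}: need {need:,.0f} routes, capacity ~{have:,.0f}")
# With linear table growth, doubling capacity stays comfortably ahead; the
# report's concern is that fragmentation makes the growth super-linear.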
9 IPV4: Why Address Proliferation
Matters
IPV4 is an addressing schema that assigns a 32-bit address, carried in the header of each
data packet, to every Internet-connected device (with blocks of addresses allocated to
networks, or autonomous systems). As previously noted, this gives a potential
4,294,967,296 addresses. It's a large number, indeed, but it may be small compared to the
potential demand. At this point, including reserved blocks, the Internet Assigned Numbers
Authority (IANA) has assigned nearly 85% of possible addresses (more on this later).
Several factors are likely to deplete the remaining addresses. Chief among these are the
proliferation of active devices that want or need general Internet visibility, the rise of
machine-to-machine communications, and the migration of telemetry to the Internet.
9.1 Everything Internet: Everything Wants An Address
Nemertes examined the growth of network-attached devices in preparation for this study
and built the projected increases into the model on which this report is based. Our
projections indicate there easily could be more than 5 billion devices in operation by the
end of the current study period. (Please see Figure 15: Total Internet Connected Devices,
Page 1.)
We based this projection on the number of human-usable devices, such as PCs, laptops,
wireless data devices and gaming devices. Although each device does not require a
globally unique IP address, the trend ultimately will generate a truly immense number of
connected devices. However, this is only the beginning.
9.1.1 The Growth of Machine-to-Machine Communications
Until recently, technology has largely been about enabling human capabilities, a point the
well-known futurist Ray Kurzweil emphasizes in his book, The Singularity Is Near. He
notes an increasing trend toward machine-enabling technology; that is, technology that
enables computers to function better together. This is clearly happening in networking,
where machine-to-machine communication is on the rise. By some estimates, as many as
110 million devices could be using wireless-enabled Internet connections to communicate
with other machines. Although not a huge figure compared with the number of human
users right now, it will grow and compete for a dwindling number of IP addresses as
machine-to-machine communication increases.
As more of our dwellings, appliances and transport are network-enabled, more IP
addresses are assigned to enable those connections. Yet machine-to-machine
communication doesn't account for a potentially staggering number of additional
connections: those generated by telemetric devices and sensors.
9.1.2 Telemetry and the Impact on IP Addressing
The IP Smart Objects (IPSO) alliance advocates that all telemetric sensors be IP-enabled
and communicate over IP-based networks. Examples of sensors that could be IP-enabled
include video cameras, pressure sensors, motion sensors, biometric sensors, chemical
sensors, and temperature sensors, to name a few. Applications include medical telemetry,
security systems, and anti-terrorism technologies. A Nemertes client is constructing a
new smart building in which it projects that the number of devices unrelated to the
business will equal the number of business-related devices on the network. Because these
sensors must be accessible from anywhere on the Internet by anyone on the Internet, they
will all compete for IPV4 addresses. IPSO, while supporting IPV4 addressing, is fully
aware that IP-based sensors cannot be effectively supported within the current
address-space limitations. With the potential for billions of sensors in just a few years,
addressing is becoming a critical concern.
From where will all these new addresses come?
9.2 NAT and IPV4
Fortunately, not every device needs a globally unique address. Using Network Address
Translation (NAT) together with the private address space defined in RFC 1918, it is
possible to subdivide the network into addressable chunks and effectively reuse
addresses. The IETF originally developed NAT to hide private addresses and to stretch
the available address space. Many enterprises continue to deploy it as a security measure
(to avoid exposing internal company addresses to the global Internet). NAT enables a
single IP address, or a small block of addresses, to connect a large number of privately
addressed devices to the public Internet. The complexity of the privately addressed
networks behind the NAT is hidden from the Internet.
This approach allows a far greater number of connected devices to exist within a limited
number of public IP addresses, which would seem to solve address depletion. But by
hiding end-points behind NAT gateways, NAT causes problems for applications that rely
on peer-to-peer communications (VoIP, gaming, file sharing and video conferencing).
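Conceptually, a port-based NAT gateway maintains a translation table like the one sketched below. The addresses, ports and data structures are hypothetical simplifications; real gateways also track protocol, timers and connection state.

import itertools

# A conceptual sketch of port-based NAT state; values are invented.
PUBLIC_IP = "203.0.113.10"
next_port = itertools.count(40000)
nat_table = {}   # (private_ip, private_port) -> public_port

def outbound(private_ip, private_port, dest_ip, dest_port):
    """Rewrite an outgoing packet's source to the shared public address."""
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(next_port)          # allocate a public port
    return (PUBLIC_IP, nat_table[key], dest_ip, dest_port)

def inbound(public_port):
    """Map a reply arriving at the public port back to the private host."""
    for (priv_ip, priv_port), pub_port in nat_table.items():
        if pub_port == public_port:
            return (priv_ip, priv_port)
    return None   # unsolicited traffic has no mapping and is dropped -- this is
                  # exactly what frustrates unsolicited peer-to-peer sessions

print(outbound("192.168.1.20", 51515, "198.51.100.5", 80))
print(inbound(40000))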
Because NAT effectively masks one network from another, applications that manipulate
header information in each packet may not work as planned, or at all. For example, IPsec
can encrypt or authenticate port information in the header, denying the NAT device the
information it needs to perform translation. The Session Initiation Protocol (SIP), as
initially designed, carried the IP address of the node initiating a communication session,
meaning that if that node were behind a NAT device, the receiver of a SIP request would
have no way to respond. Additionally, NAT introduces delay as it translates addresses
across the private/public boundary.
NAT workarounds are numerous. Approaches such as STUN (Simple Traversal of UDP
through NATs) and TURN (Traversal Using Relay NAT) let applications behind a NAT
discover how they appear from the outside and adjust application-level headers
accordingly. Still, NAT creates problems and additional headaches for developers of
peer-to-peer applications.
10 Address Assignment and Exhaustion
So when do addresses run out? The question seems simple and intuitive, but the answer is
complex, and the consequences of address exhaustion are hotly debated. One conclusion
that seems unmistakable, however, has to do with address-block depletion.
IANA, which is operated by ICANN (the Internet Corporation for Assigned Names and
Numbers), issues IP addresses. It typically issues these addresses in blocks of contiguous
numbers to service providers, enterprises and governments. After adoption of IPV4 as the
addressing schema, these blocks tended to be large, with a typical block representing
16,777,216 addresses. IANA soon observed that enterprises and carriers were consuming
these "/8" blocks inefficiently, and it began issuing smaller allocations. To date, IANA
has assigned more than 200 of the available 256 /8 blocks.
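The arithmetic behind these block sizes is straightforward, as the short sketch below shows (the count of assigned blocks is taken from the text above; everything else is simple powers of two):

# A "/8" block fixes the first 8 bits of a 32-bit IPv4 address, leaving 24 bits
# of host space; there are 256 possible /8 blocks in total.
addresses_per_slash8 = 2 ** (32 - 8)       # 16,777,216
total_slash8_blocks = 2 ** 8               # 256
assigned_blocks = 200                      # "more than 200" per the text above

remaining = total_slash8_blocks - assigned_blocks
print(f"addresses per /8 block: {addresses_per_slash8:,}")
print(f"unassigned /8 blocks:   {remaining} "
      f"(~{remaining * addresses_per_slash8:,} addresses, including reserved space)")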
The reason large blocks are desirable is that the larger the block, the fewer BGP
route-table entries are required to resolve an address. In the core, a router merely needs to
know where the blocks are located and can leave it to edge routers to resolve the discrete
address. When blocks become smaller, more routing-table entries are needed to identify
the connection to a particular address. As noted, if every block were a single address, the
BGP core routing tables would need more than 4 billion entries. Over time, routers will
have to process routes ever faster to keep up with expanding tables.
Adding to the impact of route-table growth from more fragmented block assignment is
the rapid growth of multihoming, meaning providers must advertise individual blocks for
each customer, leading to multiple routing table entries for the same block of addresses.
As we shall see, the Moore's Law effect may not be enough to ensure that routers can
keep up.
Based on current projections by Nemertes using data from Geoff Huston (Asia Pacific
Regional Internet Registry), address blocks will be exhausted sometime in 2011. (Please
see Figure 16: IANA /8 Address Block Allocations, Page 1.)
Address consumption is increasing steadily, and projections show demand accelerating
from 2008 onward as developing markets, such as China, consume more addresses and as
newer uses for IP addresses contend for the same space. This is likely to force a crisis in
the next several years that will cause address-block fragmentation, which in turn will
degrade router efficiency as routing tables expand exponentially.
Internet architects have proposed several approaches to deal with address exhaustion.
Those who received large amounts of address space when Internet address blocks were
plentiful are being encouraged by IANA to return their unused space to the pool. This
approach dates back at least to 1996 and RFC 1917.
Others have proposed setting up a market, whereby those who have extra space can sell it
to those who need it. Proponents say this idea could delay address depletion by a decade
or more. But it carries its own risks as the cost of IP address space could price small
providers out of the market or lead to larger providers hoarding available space. In
addition, a public market for address space would likely exacerbate the routing-table
scalability issue by further fragmenting address blocks.
Finally, many within the Internet engineering community argue that the answer is a new
addressing approach that would vastly expand the pool of available address space. This
approach is known as IPv6.
11 IPv6: Not a Panacea
Developed in the mid-1990s in response to address-exhaustion concerns, IPv6 increases
the address field to 128 bits. This yields roughly 3.4 x 10^38 potential addresses (about
340 billion billion billion billion). IPv6 contains several other enhancements, including a
simplified header, built-in support for encryption, and more efficient class-of-service
designations.
The potential for nearly limitless address space generates excitement among IPv6
proponents, who have argued that IPv6 eliminates the need for NAT and provides a
hierarchical routing model that would address routing table scalability concerns.
The IPv6 address pool is large enough, in principle, to assign a unique address to every
atom on the surface of the Earth with addresses to spare. In theory, IPv6's hierarchical
allocation of addresses and efficient route aggregation allow more efficient routing by
rolling blocks of addresses up into larger blocks, assuming that blocks are assigned
geographically. Routers wouldn't have to maintain knowledge of multiple fragmented
address blocks; in effect, this is a return to routing by area code rather than by entire
phone number.
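A short sketch (using the documentation prefix 2001:db8::/32 purely for illustration) conveys both the scale of the space and the hierarchical containment that makes aggregation possible in principle:

import ipaddress

# The 128-bit address space.
print(f"IPv6 addresses: 2**128 = {2**128:.3e}")   # about 3.4e38

# Hierarchical allocation in miniature (hypothetical, documentation-range prefixes):
# a registry block contains provider blocks, which contain customer blocks, so a
# distant core router needs only the top-level entry.
registry_block = ipaddress.ip_network("2001:db8::/32")
provider_block = ipaddress.ip_network("2001:db8:a000::/36")
customer_block = ipaddress.ip_network("2001:db8:a0f0::/48")

print(provider_block.subnet_of(registry_block))   # True
print(customer_block.subnet_of(provider_block))   # True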
During the last several years, many device manufacturers have been building IPv6
capabilities into their products. Governments around the world have promoted IPv6 and
mandated that networks become IPv6-compliant. In the United States, for example, the
Office of Management and Budget proposed in 2004 that all U.S. federal networks run
IPv6 by 2008. That mandate was later softened to "IPv6-compliant by June 2008" because
of concerns about IPv6's deployment readiness.
The fact that IPv6 isn't entirely deployment-ready is a major concern, given the
imminence of address exhaustion. But it's not the specification's biggest problem. The
real issue with IPv6 is what it doesn't fix. In essence, implementing IPv6 pushes the
existing Internet infrastructure past its limits. It's a bit like taking a car that has inherent
design flaws but runs fine at 20 miles per hour, and accelerating it to 120 miles per hour.
The likely outcome isn't pretty.
11.1.1 What IPv6 Doesn't Do
To understand the problems IPv4 and IPv6 don't solve, it's important to understand the
role that addressing plays in a network (or operating system).
There are three types of names/addresses necessary for a complete architecture:
• Application names, which are location-independent and indicate what is to be accessed
• Network-node addresses, which are location-dependent and route-independent, and indicate where the accessed application is
• Point-of-attachment addresses, which may or may not be location-dependent but are route-dependent, and have a role in how to get there.
In addition to these three address elements, there must be mappings among them. For
example, the function that maps application names to network-node addresses is a
directory function. The function that maps network nodes to points of attachment is part
of the routing function.
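The sketch below is purely conceptual, using invented names and values to illustrate the three naming levels and the two mapping functions described above; it is not an implementation of any real protocol.

# Conceptual sketch only: hypothetical names and values illustrating the three
# naming levels and the mappings between them.

# What:  location-independent application names.
# Where: location-dependent, route-independent node addresses.
# How:   route-dependent point-of-attachment addresses.

directory = {                       # application name -> node address ("directory function")
    "payroll-service": "node-42",
}
attachments = {                     # node address -> points of attachment (part of routing)
    "node-42": ["isp-a:10.1.7.9", "isp-b:172.16.3.9"],   # a multihomed node
}

def resolve(app_name):
    node = directory[app_name]                 # what -> where
    return node, attachments[node]             # where -> how (any attachment will do)

print(resolve("payroll-service"))
# Because the node address is independent of its attachments, gaining or losing
# an attachment (mobility, multihoming) changes only the second mapping.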
A problem with the Internet architecture is that it names the same thing twice: MAC
addresses and IP addresses both name the point of attachment. But there are no defined
mechanisms for creating either network-node or application addresses. In
computer-architecture terms, IPv4 or IPv6 is like building a system with only physical
addresses. Moreover, one of these two addresses, the IP address, is assigned by the provider.
Why does any of this matter? In the Internet, the only handle for anything is the IP
address. All we have is the how; but as noted above, you also need the where and the
what. Even a URL must first resolve to an IP address, then to a well-known port. If a
system has multiple interfaces, i.e. is multihomed, it has multiple provider-assigned,
aggregable IP addresses. However, routers can't tell that these different addresses lead to
the same place, so the system must instead be given a non-aggregable address that
increases the size of everyone's routing tables.
If there were node addresses, the point-of-attachment addresses would still be
provider-based, but the node addresses would be provider-independent and aggregable
within that address space. Hence, there would be no increase in routing-table size.
This makes things especially difficult for systems or applications that move. If all you
have is the how, mobility is hard: the Internet's answer is a "home" router that knows
when you move and builds a tunnel to the router where you now are so it can forward
your traffic. This is complex, distorts traffic characteristics and doesn't scale.
This is why mobility and multihoming pose such significant challenges to both IPv4 and
to IPv6. Ultimately in a network, the three types of addresses have to be completely
independent to preserve path independence. If this doesn't happen, the network can't
handle change.
That's why the failure of the Internet to include all three independent address types is so
fundamental. Because it's based on half an architecture, IP (regardless of version) is
fundamentally extremely brittle and intolerant of change.
Yet the one thing that continues to occur regularly, and at a dramatically increasing rate
in today's Internet, is precisely that: change. Both mobility and multihoming, in essence,
inject ongoing, dynamic, real-time change into the Internet. That's why the current
architecture has such difficulty supporting both, and IPv6 does nothing to fix that. In fact,
by scaling the Internet up to handle billions more mobile devices, it threatens to push the
Internet past its capacity to function. As noted Internet architect John Day puts it,
"The Internet architecture has been fundamentally flawed from the beginning. It's a demo
that was never finished."
11.1.2 Interoperability Between IPv4 and IPv6
If IPv6 were widely deployed, there might be a window of opportunity during which
Internet architects could work to upgrade the 'net without the threat of imminent address
exhaustion. But despite predictions of adoption dating back to the mid-1990s, IPv6
continues not to be widely deployed. The primary reason is that it's not compatible with
IPv4. In fact, it requires a forklift upgrade.
In order to integrate IPv6 into the existing IPv4 architecture, networks must either run
both protocols in parallel ("dual stack") or translate between them. These transition
approaches introduce some of the same issues that NAT introduces, with the added
liability of increased router-table entries. (Please see Figure 17: IPv4/IPv6 Dual Stack,
Page 1.)
This means that Internet routers, already struggling under the growing load of IPv4 route
expansion, would have to carry IPv6 routes as well. Although IPv6 routing is far more
scalable in theory, practical considerations around NAT and multihoming mean that much
of that promise goes unrealized in the real world.
11.1.3 Multihoming and NAT
As noted, multihoming is difficult with IPv4. It's equally bad with IPv6. The initial vision
presented by IPv6 developers was that they would assign IPv6 address blocks to regions.
Providers within those regions would allocate addresses to their customers, enabling the
IPv6 routing table to maintain a hierarchy, thus limiting route-table entries. IPv6
developers also envisioned an end to NAT, since IPv6 would provide a near limitless
supply of available address space.
The reality is that enterprises continue to demand multihoming, and so far enabling
multihoming with IPv6 is as difficult as with IPv4. There are several proposed
workarounds, but none has gained widespread traction, mostly because of the complexity
required.
Several proposals, such as shim6, which would allow devices to maintain multiple IP
addresses, have met with strong resistance from the service-provider community. In
response to multihoming requirements, the Internet address registries recently made
provider-independent IPv6 space available, but fragmentation of the IPv6 space presents
an even bigger nightmare scenario for Internet operators, who are already struggling to
support growth in the IPv4 routing table.
Finally, enterprises are reluctant to give up on NAT, which many view as much a security
tool as a way to minimize address-management concerns. NAT was not part of the
original IPv6 design, but IPv6 standards bodies have since defined an approach for
private IPv6 address space.
11.1.4 IPv6 Adoption
The final challenge for IPv6 is the chicken-and-egg argument. Vendors are reluctant to
deliver IPv6 products and services absent a market, and enterprises and service providers
aren't deploying IPv6 because of the aforementioned concerns over multihoming and
NAT. Also dampening their interest is the fact that IPv6 security, management and
WAN-optimization products are few and far between. In addition, many routing
platforms forward IPv6 traffic in software, and as such aren't equipped to handle high
data volumes on IPv6-enabled interfaces.
Perhaps most important is the lack of an economic argument that would justify rapid
migration to IPv6. Aside from government, there are few organizations or enterprises that
see any virtue in adopting IPv6 prior to any compelling need to do so. Only 1% of IT
executives participating in the Nemertes benchmark Advanced Communication Services
2008 say they are deploying IPv6, and in those cases, it is only to meet government
mandates for interconnectivity with government customer networks. And for carriers,
adopting IPv6 doesn't really offer any advantages over IPv4 that would justify extensive
investments in the technology.
With the current difficulties in the global economy, funding for major network upgrades
to IPv6 is understandably low on the typical enterprise investment list.
11.2 Other Options
Day advocates a reworking of the way routing and addressing interact, yielding a much
more dynamic and flexible infrastructure. Day's approach, as outlined in his book,
Patterns in Network Architecture, develops a complete architecture with independent
addresses, in which NATs do not create the problems found in the Internet, and in which
a single global address space like IPv6 is unnecessary.
Intriguingly, this approach is recursive. There's no upper limit to the number of
independent address layers that could exist, but no one system would have any more
layers than it does now, and the routing mechanisms at each address layer are identical,
meaning that routing devices could be highly efficient. Theoretically, therefore, this
architecture would resolve the challenges of multihoming, mobility, and address
exhaustion in one fell swoop, without replacing IP.
The problem? The approach is still highly theoretical, although early indications are that
equipment and operational costs would be much lower than those of existing Internet
technology. Day is currently working on the specifications for a possible hardware
architecture that embodies this approach.
Other engineers and architects are pursuing alternative approaches, such as carrier-grade
NAT, that would let providers simplify their addressing architectures by hiding their
customers behind a large NAT. Again, widespread use of NAT presents problems for
peer-to-peer applications and those that rely on IP-header information to match
destination addresses. In addition, carrier-grade NAT may not be able to scale to support
growing application architectures that establish large numbers of sessions between clients
and servers. An application requiring 100 or more separate connections per session could
easily overwhelm a NAT that must support millions of hosts and potentially billions of
simultaneous sessions.
Another approach, known as LISP (Locator/ID Separation Protocol), attempts to replace
longest-match routing with a simpler architecture that groups destination networks under
a common location identifier, removing the need to consider an entire IP address when
routing. Again, though supported by some leading Internet architects and luminaries,
LISP remains theoretical.
11.3 Where Are We Headed?
Address consumption and multihoming will continue to accelerate routing-table
expansion. The principal effect is likely to be on routers in the core. As routing tables
expand, there comes a point where a router is unable to resolve an address in the time it
has to do so. This point represents another kind of singularity, where performance begins
to degrade for the user.
Of course, router performance is not static over time. Vendors are constantly upgrading
routing efficiency and horsepower. Within the next couple of years, Moore's Law likely
will increase active electronic packing densities by 200% or more. If this Moore's Law
dynamic extends to router architectures, we can expect routers to keep up, at least until
some time after address blocks fragment.
There is, though, some question of whether Moore's Law actually extends to large core
routers, since these complex devices depend on specially designed ASICs. Lead times to
deliver a new ASIC often are much longer than a Moore's Law cycle (18 to 24 months).
For example, it took Cisco four years and nearly $500 million to develop its CRS-1, and
it took Juniper more than five years to bring its T1600 to market.
The principal effect of IPV4 address exhaustion, however, is not likely to be the collapse
of routers. It's simply the effect of introducing scarcity into a system never designed to
manage scarcity. As addresses become scarce, developing markets that need additional
address space will find it increasingly hard to obtain it. Scarcity will likely give rise to
some sort of open or clandestine market for addresses, driving prices up. Even applying
NAT to stretch the current address pool doesn't prevent depletion; it just delays it, and it
introduces conversion overhead into services that don't tolerate such overhead easily. In
fact, as Huston notes, NAT may simply increase the price of an IPv4 address by
increasing its leverage to generate service revenues.
There are likely to be performance-related issues, as well, because of route instability.
Applications dependent on reliable network infrastructure will struggle (voice, real-time
video). Connections such as VPN tunnels could break as paths become unreachable.
In any case, as logical-layer difficulties compound, it will become more attractive in a
variety of situations to think in terms of separate networks.
As the companion report on physical capacity and demand notes, there is reason to
suppose that as content providers increasingly require service quality, they may just move
their traffic away from the public peering points and use private transport. Likewise, as
logical addressing limitations introduce complexity into the process of network transport,
there could be increasing pressure on content providers to deliver their content over
logically independent networks rather than the IPv4-dependent Internet. These networks
could use IPv6.
This is easy to imagine. If Microsoft were to offer its office-productivity applications as a
service delivered over a network, and it wanted to ensure quality of service, it might
choose to deliver this service to the ISP head end over dedicated transport. In such a case,
it would be easier to mandate the use of IPv6 addressing and simply provide subscribers
with a thin agent embedding an IPv6 stack. Although such a dedicated network would not
be able to talk to the IPv4 Internet, the service could act as a magnet for additional
services that also use IPv6 addressing. Over time, that network would begin to rival the
IPv4-based Internet as a source of information and services.
And even if we stay with IPv4 as the overall addressing standard, some form of super
NAT doesn't necessarily solve our problems. It is reasonable to expect that networks
masked behind the NAT would enjoy better performance intra-network than
extra-network, since the NAT would introduce translation delays across the interface. For
example, if there were a North American NAT box and an American enterprise were
using VPNs across the Internet to connect virtual employees to the company network,
this would probably work well within the United States, but not so well for employees
located in Europe. In fact, the delay might be so unacceptable as to make the VPN
worthless for company business.
Finally, carrier-grade NAT would significantly increase the cost of managing IPv4 for
carriers while diminishing the security of IPv4 for users. The reason is that certain
services such as point-to-point applications and FTP could require the NAT to translate
secondary port numbers carried inside the packets. In such a case, the NAT would need to
examine each packet in detail for port-identification data. NATs can probably be designed
to do this,
but it is costly and complex and would involve some form of deep-packet inspection.
Security in such a situation becomes problematic since the user would need to use some
form of signaling to tell the NAT when to do this type of translation. And signaling requires
authentication to enable the NAT to do such data inspection. However, all of this
overhead merely opens another door to hacking and requires that carriers maintain
authentication databases. The net impact is that the NAT would introduce much more
complexity and expense into the carrier network.
In the absence of more effective solutions than NAT, the logical Internet will begin to
degrade in mid-2012. Application performance will suffer as a consequence of
ratcheted-up stresses such as address exhaustion, increased multihoming, mobility, router
congestion and translation-induced inefficiencies. The problems are likely to manifest
themselves in a variety of ways. Route recalculations might not happen fast enough to
account for dynamic changes in route availability, meaning network operators could find
themselves in a perpetual state of re-convergence, or they may disconnect themselves
from peers generating large numbers of route advertisements.
In effect, the Internet could fracture into groups of networks. Haves (large providers) will
continue to peer, while have-nots (smaller providers) could find themselves out in the
cold or forced to contract connection arrangements with the larger carriers, if the
regulators will allow it. Enterprises that multihome will find a dwindling set of choices to
maintain connectivity, or they may have to connect to even more providers to ensure that
they can reach other users.
11.4 Why Does This Matter?
As noted, the logical Internet is primarily a software artifact defined in the programs
running in large, complex special-purpose computers (routers and NATs). As such, it is
likely that some combination of NAT along with more capable routers will keep the
network going. However, just as in the physical domain, as demand increasingly stresses
the network and threatens performance impacts, there will be pressure on carriers and
users to come up with solutions that preserve service quality. As suggested, such
solutions could result in network fragmentation and an increased cost of networking.
In particular, logical constraints might push the delivery of premium or high-value
content onto separate networks. It is easy to imagine consumers using an IPv4 network
(enabled by NAT) to identify the services they wish to consume, but using a separate
content network to access those services.
The most likely scenario is that around 2012, IPv4 addresses exhaust, and carriers and
large users negotiate some form of IPv4 re-use scheme. This will complicate routing, but
routers will keep up, at least initially. As demand further increases, the migration of
premium-content traffic off the Internet to private networks, potentially enabled by IPv6
or other more efficient addressing and routing schemes, will accelerate. To obtain that
content, or higher service quality, consumers will pay to use those networks. Those who
can't afford to do so will be left on an increasingly irrelevant public Internet, one that
contains lower-quality content and offers limited access.
12 Volume 2: Conclusions
It is easy to think of the Internet as a collection of transmission lines or data pipes. As our
original study shows, most analysis focuses on the capacity of these pipes, and seeks to
determine when the pipes fill up. The Internet, though, is much more than a collection of
pipes. It is a complex hierarchy of switching, routing, and addressing that directs the flow
through those pipes to do useful work.
To this point, the influence of Moore's Law on networking has generally been so
effective that we have forgotten that network capacity depends not only on the things we
can see, but also on the things we can't. Now, though, it appears that the Internet's
inherent logical limitations, combined with the kind of demand that is developing, will
conspire to affect reliability beginning in the 2012 timeframe.
An Internet that can't reliably deliver content is at best a toy. For business purposes, it is
useless. Content provision to consumers using the Internet, in particular, begins to look
dicey. And enterprise use of public Internet facilities as a way to transport business
communications and data ceases to be an option.
Nemertes believes enterprises planning to utilize the public Internet for business purposes
must carefully consider its limitations as they make their plans. For example, it may be
desirable to build Web-enabled customer care capabilities, but it may be necessary to
backstop that function with more traditional telephone-enabled services. Likewise,
enterprises may need to augment their plans for running private VPNs over the Internet
with private circuits.
In any case, it is clear that, absent major new efforts to adjust the logical fabric of the
Internet, its utility is likely to erode in the near future. This will lead inevitably to a
fragmented Internet: one piece that provides access to low-quality content at a low
standard of service quality, and another, more capable piece that provides access to
premium content at higher standards of service. The former will be free; the latter will
not.
13 Bibliography and Sources
13.1 Sources
The bibliography includes only sources directly cited in the text of this report. As noted
in the acknowledgements, data for the model and charts were drawn from a wide variety
of sources, including:
• Research data and Internet traffic statistics collected by academic organizations such as CAIDA and MINTS
• User demand data from a variety of sources, such as Pew Research and the Center for The Digital Future at the USC Annenberg School
• IP address tracking data from the Asia Pacific Regional Internet Registry
• Access line and broadband data from the U.S. Federal Communications Commission (FCC)
• Interviews with trade organizations such as the IP Smart Objects Forum (IPSO)
• 70+ confidential interviews with enterprise organizations, equipment vendors, service providers, and investment companies
• Interviews with leading Internet technology experts
• Interviews with the several hundred IT executives who regularly participate in Nemertes' enterprise benchmarks
• Investment figures from service providers and telecom equipment manufacturers
13.2 Bibliography and Endnotes
Bibliography
"Assessing Risk from the Credit Crunch," UBS Telecoms, JC Hodulik, B Levi, L
Friedman, October 8, 2008.
"State of the Internet," Akamai
http://www.akamai.com/stateoftheinternet/
Berstein, Marc (2005), The Essential Components of IPTV, Converge!
http://www.convergedigest.com/bp-ttp/bp1.asp?ID=229&ctgy
Broadband Reports.com (2007), Ask DSLReports.com: Why Can't I Get Comcast 16Mbps,
http://www.broadbandreports.com/shownews/Ask-DSLReportscom-Why-Cant-I-Get-Comcast-16Mbps-88281
Cisco (2008), Global IP Traffic Forecast and Methodology, 2007-2012,
http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481374_ns827_Networking_Solutions_White_Paper.html
Cisco provides NBC network backbone for Olympics,
http://www.cisco.com/en/US/solutions/ns341/ns525/ns537/ns705/C36-49147900_NBC_External_CS.pdf
Cisco Annual Reports, 2007 and 2008
Cisco Quarterly Shipment Summaries, 2007 and 2008
ClickZ (2006), Internet Users Show Their Age, http://www.clickz.com/3575136
Coffman, K.G. & Odlyzko (2001), Growth of the Internet, AT&T Labs Research
Cogent states that 95% of traffic goes across private peering points,
http://www.cogentco.com/us/network_peering.php?Private
Congress of the United States (2007), Discussion Draft on a Bill to Provide for a
Comprehensive Nationwide Inventory of Existing Broadband Service, and for Other
Purposes, http://www.benton.org/benton_files//broadbandcensus.pdf
Tiscali (UK) sees 10% of all traffic in March from BBC iPlayer,
http://www.connectedtv.eu/plusnet-reveals-new-iplayer-data-145/
YouTube had 91 million viewers in July 2008
http://www.comscore.com/press/release.asp?press=2444
The Cook Report (2007), Kick Starting the Commons and Preserving the Wealth of
Networks, http://cookreport.com/16.02.shtml
BBC considering building out own CDN,
http://www.datacenterknowledge.com/archives/2008/03/19/bbc-may-build-its-own-cdn-to-support-iplayer/
Dell'Oro Group (2005), Ethernet Alliance Presentation on 100 Gigabit Ethernet,
http://www.ethernetalliance.org/technology/presentations/DesignCon_2006_100G_ND.pdf
NBC uses Digital Rapids DRC with Microsoft Silverlight for Olympics,
http://www.digital-rapids.com/News/Press+Releases/2008NBCOlympics.aspx
Digital TV News (2007), IPTV to reach 103M Homes by 2011,
http://dtg.org.uk/news/news.php?id=2357
"Blueprint for Big Broadband," Educause, 2008,
http://net.educause.edu/ir/library/pdf/EPO0801.pdf
Europa (2005), Employment in Europe,
http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/05/383&format=HTML&aged=0&language=EN&guiLanguage=en
Gantz, J., Reinsel, D., Chute, C., Schlicting, W., McArthur, J., Minton, S., Xheneti, I.,
Toncheva, A., Manfrediz, A. (2007), The Expanding Digital Universe: A Foretaste of
World Information Growth Through 2010, an IDC Paper Sponsored by EMC,
http://www.emc.com/about/destination/digital_universe/pdf/Expanding_Digital_Universe_IDC_WhitePaper_022507.pdf
Gill, Arlett, Li and Mahanti, "The Flattening Internet Topology: Natural Evolution,
Unsightly Barnacles or Contrived Collapse?," University of Calgary, 2008
Internet World Stats (2007), Internet Usage in Europe,
http://www.internetworldstats.com/stats4.htm
ITFacts (2007), US Console Market to generate $66 bln by 2012,
http://www.itfacts.biz/index.php?id=P8689
"Explaining International Broadband Leadership," ITIF, 2008,
http://www.itif.org
http://nemertes.com/files//ExplainingBBLeadership.pdf
Juniper Networks Annual Report, 2007
Kurzweil, Ray (2006), The Singularity is Near: When Humans Transcend Biology,
Penguin Group
Limelight connects to 900 user access networks around the world
http://www.limelightnetworks.com/network.htm
Limelight Networks carries NBC Olympic streaming and VoD
http://www.limelightnetworks.com/press/2008/07_29_nbc_beijing_olympic_games.html
Limelight Networks Builds Out private Net in Bay Area
http://www.broadbandproperties.com/2006issues/june06issues/news_june.pdf
Limelight Networks contracts with Packet Exchange in Europe
http://www.packetexchange.net/content/limelight-networks.htm
Minnesota Internet Traffic Studies (MINTS)
http://www.dtc.umn.edu/mints/home.php
Minnesota Internet Traffic Studies (MINTS) Historical Data
http://www.dtc.umn.edu/mints/2005/analysis-2005.html
NationMaster (2007), Average Size of Households by Country,
http://www.nationmaster.com/graph/peo_ave_siz_of_hou-people-average-size-of-households
Netcraft (October, 2007), www.netcraft.com
O'Brien, Kevin (2007), In Europe, a Push by Phone Companies into TV, The New York Times,
http://www.nytimes.com/2007/08/29/business/worldbusiness/29tele.html
"Internet Singularity Delayed: Why Limits in Internet Capacity Will Stifle Innovation on
the Web," Page 24, Nemertes Research, 2007
NBC had 7 million viewers and 7.5 million hours of online video,
http://www.multichannel.com/article/CA6600332.html
Andrew Odlyzko interview with Ars Technica, 08062008
PVCForum (2007), Game Sales Charts: Computer and Video Game Market Sales,
http://forum.pcvsconsole.com/viewthread.php?tid=15831
Carmi, Shai; Havlin, Shlomo; Kirkpatrick, Scott; Shavitt, Yuval; and Shir, Eran (2007), A
Model of Internet Topology Using k-Shell Decomposition, Proceedings of the National
Academy of Sciences
PlusNet sees phenomenal growth of iPlayer traffic,
http://community.plus.net/blog/2008/02/08/iplayer-usage-effect-a-bandwidth-explosion/
Switch and Data estimates 112% traffic growth,
http://www.switchanddata.com/press.asp?rls_id=143
Mid-2007 to mid-2008 traffic up 50-100%,
http://www.telegeography.com/cu/article.php?article_id=24888
United States Federal Communications Commission, High Speed Services for Internet
Access: Status as of June 30, 2007 (March 2008)
Point Topic, World Broadband Statistics: Q4 2006 (March 2007)
Footnotes
i - "Assessing Risk from the Credit Crunch," UBS Telecoms, JC Hodulik, B Levi, L Friedman, October 8, 2008.
ii - "Assessing Risk from the Credit Crunch," UBS Telecoms, JC Hodulik, B Levi, L Friedman, October 8, 2008.
iii - "Internet Singularity Delayed: Why Limits in Internet Capacity Will Stifle Innovation on the Web," page 24, Nemertes Research, 2007.
iv - http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481374_ns827_Networking_Solutions_White_Paper.html
v - "Internet Singularity Delayed: Why Limits in Internet Capacity Will Stifle Innovation on the Web," Nemertes Research, 2007.
vi - Cisco (2008), Global IP Traffic Forecast and Methodology, 2007-2012
vii - http://community.plus.net/blog/2008/02/08/iplayer-usage-effect-a-bandwidth-explosion/
viii - http://community.plus.net/blog/2008/02/08/iplayer-usage-effect-a-bandwidth-explosion/
ix - http://www.limelightnetworks.com/network.htm
x - Andrew Odlyzko interview with Ars Technica, 08062008
xi - http://www.dtc.umn.edu/mints/2005/analysis-2005.html
xii - http://www.khirman.com/files/image/ppt/Internet+Video+Next+Wave+of+Disruption+v1.5.pdf
xiii - Brokaw Price (Yahoo!) presentation, Sydney Peering Forum, 2005.
xiv - Gill, Arlett, Li and Mahanti, "The Flattening Internet Topology: Natural Evolution,
Unsightly Barnacles or Contrived Collapse?," University of Calgary, 2008.