
Data Center Fundamentals

What is a data center?
A data center is a space or building designed to house the servers that keep a
company’s technology running. This can range from a small closet with a handful of
servers in an office to entire buildings dedicated to housing thousands of servers.
All companies utilize data center infrastructure. Anything happening online runs through
a data center - whether it be credit card transactions, sending emails, posting to social
media, streaming sports, or storing medical information. Even the cloud is located in
data center facilities across the world.
Our world today relies on this IT infrastructure more than ever and the need is
continuing to increase. According to Infiniti Research, the global data center market is
anticipated to grow by over $270 billion between 2020 and 2024.
Why are data centers critical to our life?
Given the wide range of companies a data center can support, these facilities need to
be more than just a building with servers inside.
If a data center has downtime or an outage, then the technology it supports could stop
working. If you lose a photo because it couldn’t back up to the cloud, it might not be a
big deal. But if you lose the medical records of 10,000 patients or can’t relay an
emergency message then those outages could cost lives.
From a company perspective, an outage is no small cost. The average cost of a data
center outage in 2015 was more than $700,000. That equates to more than $9,000 per
minute of downtime, not to mention the hit a company would take from a brand equity or
customer trust perspective.
What are the important aspects of a data
center?
Given the critical nature of data centers to a company's operations, data center operators
have invested heavily to ensure their facilities stay online. Several
aspects go into keeping a data center online.
1. Safe from natural disasters
A data center’s location is important. Every region comes with its own hazards, and data
centers are designed to mitigate the risks associated with that region. Most are built to
withstand winds of 125+ mph and high-magnitude earthquakes, and are located outside of flood
plains.
2. Highly secure
Planning to reduce risk also means securing a data center from man-made hazards.
Most of these facilities are surrounded by high-grade perimeter fencing with controlled
gate access. Inside, 24/7 security personnel, man-trap entrances, biometric scanners,
card key access and floor to ceiling steel caging all help to ensure a company’s data
center environment is well protected.
3. Powered
Data centers consume 3% of our world’s electricity, and the best way to understand a
data center’s size is to recognize how much power is being utilized at the site. Data
centers can be measured in square footage but are more accurately measured in
power. Servers consume power at a relatively consistent rate, meaning the overall
power needs of a data center are a more accurate indicator of the size of a facility. A
single rack of servers in a data center consumes between 2-10 kilowatts (kW) of power,
while the entire data center facility can consume between 5-75 megawatts (MW),
depending on the user.
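Since rack power draw is fairly consistent, you can roughly size a facility from its rack count and density. Below is a minimal sketch of that arithmetic; the rack counts and per-rack densities are hypothetical, chosen to fall inside the 2-10 kW/rack and 5-75 MW ranges mentioned above.

```python
def facility_size_mw(rack_count: int, kw_per_rack: float) -> float:
    """Estimate a facility's total power footprint in megawatts."""
    return rack_count * kw_per_rack / 1000  # 1 MW = 1,000 kW

print(facility_size_mw(1_000, 5))   # 5.0  -> a smaller 5 MW facility
print(facility_size_mw(7_500, 10))  # 75.0 -> a 75 MW hyperscale-sized facility
```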
4. Redundant
Having systems in place to handle a negative equipment event is critical for data center
success. Data centers typically have redundant transformers, uninterruptible power
supply (UPS), backup generators, and cooling systems to keep the facility online at all
times.
5. Connected
A well-connected data center includes a high number of fiber providers located at the
site, which gives a company flexibility in choosing the fiber providers it wants to use for its
business operations. Data centers have low-latency access to the surrounding area,
the rest of the country, and even international markets.
6. Cool environments
Servers produce a significant amount of heat. Temperature control is one of the primary
limiting factors on how large a data center can be. In theory, power providers can
deliver hundreds of megawatts to a data center, but a data center’s size is limited by the
amount of power it can cool. It’s standard for a data center’s cooling capacity to range
from 2-10 kW per rack, but new technology is now allowing organizations to achieve
higher densities with their footprint.
7. Compliant
Most industries have strict methods, procedures, and standards around operations. This
also applies to how they store their data, meaning a data center must meet that
industry’s requirements for a company to use it. Data center compliances focus on a
range of subjects, but often involve the security, redundancy, and operational risk of a
data center.
8. Expensive
Data centers require a high degree of specialization and design, which leads to a higher
cost to build and operate. Traditional estimates put data centers at roughly 10 times the
cost of traditional office space.
Data centers require many different components and a high degree of operational
expertise. While some companies choose to operate their data center infrastructure
themselves, most now outsource their data center needs to a data center colocation or
cloud provider. Regardless of the strategy, companies’ data center needs are only
projected to increase.
What is colocation? A guide for new
industry professionals
What is colocation?
Colocation is a strategic path organizations take that includes leasing data
center power and space from a data center provider. This often involves
sharing space with other companies, called users or tenants.
Colocation looks different for different
companies
Colocation leases can range in size from several servers to an entire data
center. Data center providers prefer to structure leases in different ways
depending on user needs and lease size.
Leases of 50 kilowatts (kW) and less
Smaller footprints are usually All-In leases, where the user pays a set price
per month with little variation. The price includes both the rental rate and
power cost.
Leases of 50 kW – 5 megawatts (MW)
These leases are often Gross + Electric, where the user pays a set price per
kW of data center infrastructure they lease per month, plus the cost of the
power they use.
Leases of 5 MW and higher
Larger leases are often Triple Net (NNN), meaning the user pays the provider
to use the space, but manages a larger portion of the operations and utilities
themselves.
Because power consumption costs are a major part of most leases, the power
rate is a large factor in a user’s final bill.
For a 250 kW deployment, a power cost of $0.07 per kilowatt hour (kWh) is
more than $1,500 more expensive per month than a $0.06/kWh rate. For a 5
MW deployment, that difference would be more than $25,000 per month.
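A quick way to sanity-check those figures is to multiply load, hours, and rate. The sketch below assumes the deployment draws its full load around the clock; real bills depend on actual utilization, which is why the article states the differences as lower bounds.

```python
HOURS_PER_MONTH = 24 * 30  # 720 hours in a 30-day month

def monthly_power_cost(load_kw: float, rate_per_kwh: float) -> float:
    """Monthly power cost for a constant load at a given $/kWh rate."""
    return load_kw * HOURS_PER_MONTH * rate_per_kwh

for load_kw in (250, 5_000):  # the 250 kW and 5 MW deployments above
    diff = monthly_power_cost(load_kw, 0.07) - monthly_power_cost(load_kw, 0.06)
    print(f"{load_kw:,} kW: ${diff:,.0f} more per month at $0.07 vs $0.06/kWh")
# 250 kW:   $1,800 more per month
# 5,000 kW: $36,000 more per month
```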
Data center providers also offer a variety of services tailored to the user’s
needs, which has become increasingly important as users grow more
dependent on data center operators.
Benefits of colocation vs on-premise
data centers
Many companies prefer colocation to owning and operating their own data
center for the following reasons:
1. Specialized expertise in data center operations
Data center operation requires a level of expertise that many companies often
lack. While it’s possible for companies to develop a staff to fill this role, it’s
often faster, less expensive, and more efficient to outsource the requirement.
Data center providers are experts in colocation and can provide specialized
solutions that best fit their customers’ needs.
2. Increased flexibility
Because IT strategy can change quickly, companies value fluidity with their
data center infrastructure. A company’s data center may fit their needs today,
but could be inefficient later. Colocating provides flexibility and helps users
avoid getting stuck in a solution that doesn’t fit their needs.
3. Cost savings from provider’s scale
Data center providers are experts in designing and building data centers and
often do it in a more cost-efficient manner. Large providers can also leverage
their size to lower construction and power costs, and these lower costs are
passed on to the user, creating lower operating expenses than owning the
data center themselves.
4. Ease of customization
Data center providers offer a variety of services to meet their users’ needs.
They can also use their scale to attract third-party service providers, which
creates a valuable ecosystem hard for single users to replicate.
5. More fiber connections
A colocation data center often has stronger fiber infrastructure and easier
access to cloud service providers, giving users low latency to their cloud
environments and the end-customer.
6. Easier path for growth
Growing your data center presence is easier with a data center provider. The
relationship between a user and data center provider is typically seen as a
long-term partnership. Should a company need a data center in a new market,
they can often deploy infrastructure in their provider’s facility in that region.
Providers like Digital Realty, Equinix, and CyrusOne report the vast majority of
their customers have deployments in more than one of their data centers and
many in more than one country.
Colocation data centers are built to high security standards. The data center
space is often surrounded by over seven levels of physical security, including
perimeter fencing, metal caging, biometric and key card locks, on-site
security, and CCTV cameras.
More and more demand is consistently being placed in third-party data center
facilities. Even large tech companies like Microsoft, Facebook, and Amazon
find leasing data center space to be a viable strategy in addition to owning
and operating the infrastructure themselves.
The benefits of colocation make it an essential aspect of many companies’ IT
strategy.
What is the cloud?
In the last ten years, cloud computing has transformed into a necessary
aspect of almost every company’s IT strategy.
Cloud computing, or the Cloud, is a data storage and processing solution
where the user utilizes virtual servers instead of physical servers. The virtual
servers are supported by physical servers, often in multiple locations, but the
physical servers combine to host the entire virtual ecosystem. Since the virtual
ecosystem is distributed across those physical servers, the user’s data isn’t tied to a
specific physical location.
Advantages of the cloud compared to
colocation
Operating in the cloud has distinct advantages, primarily driven by the
absence of physical infrastructure.
Instead of physically commissioning and installing new servers as you would
with colocation, cloud servers can be deployed almost immediately and at a
lower cost. Users also have easier access to their cloud ecosystem and can
interact online instead of physically managing the servers from inside the data
center.
Users also don’t need to lease the infrastructure required to support their
servers. This provides ample flexibility to scale up, down, or reconfigure based
on user needs.
When virtual servers are shared across various physical locations, cloud
computing can provide a more highly available solution. Most cloud
infrastructure includes geographic diversity, with multiple locations able to
keep operations running if one location suffers downtime.
Disadvantages of the cloud compared to
colocation
Transitioning existing systems from physical servers into the cloud is
complicated. In some cases, users find their systems aren’t as efficient in the
cloud as they were in physical servers. In cases like this, migrating out of the
cloud back to their own servers can be even more complicated.
Since the physical servers aren’t made available to end users of the cloud,
users have less control or ability to respond when downtime occurs. When
colocated, you can physically send someone to work on a resolution. With the
cloud, you’ll likely communicate with a support team who will respond within
the time frame stated in your service level agreement, which could be
anywhere from 15 minutes to 12 hours.
Operating in the cloud can be less expensive for many users, but some that
scale up find it to be more expensive than their physical infrastructure. Cloud
services are typically offered in a “pay what you use” model, and companies
can quickly end up using and paying more than they anticipated.
While the cloud can be very helpful for companies, it requires thorough
planning to execute the transition.
Cloud’s impact on the data center
industry
The mainstream adoption of the cloud created ripples in the data center
industry the same way colocation did.
Almost every company utilizes the cloud in some capacity. It solves problems
that leasing physical servers can’t. Instead of housing a handful of racks on
premise or going to colocation, now most small companies deploy their
systems straight to the cloud and large companies utilize the cloud in various
operations.
The three largest providers of cloud services are Amazon, Microsoft,
and Google. Each company has spent billions of dollars over the last five
years to develop their cloud offerings.
However, these companies have found they can’t build fast enough to keep
up with demand, so they have leased capacity from data center providers.
The term “hyperscale leasing” came about due to the demand generated by
cloud providers and the large leases they executed with data center providers.
Some of the larger data center providers have even adjusted their traditional
design to build data centers that more closely match the needs of cloud
providers in an effort to capture the demand they generate.
The myth of cloud takeover
The popularity of the cloud led many in the industry to believe the cloud was
the next stage of digital transformation and would replace colocation.
Cloud computing is certainly a necessary aspect of a modern company’s IT
strategy, but hasn’t functioned as a replacement for colocation across the
board. The cloud can be a complicated solution and doesn’t meet everyone’s
needs all the time. Many data center providers report an uptick in absorption
from users that migrated to an all-cloud solution and found it didn’t fit their
needs. Hybrid solutions are popular today, where users have a mix of some
systems in the cloud and others in physical servers.
While the cloud has taken some requirements away from colocation, it has
substantially increased the demand for colocation overall. In fact, cloud
adoption is one of the biggest sources of data center absorption, based on our
analysis and discussion with top providers, and we expect to see that trend
continue.
Intro to Data Center Infrastructure
Electrical Infrastructure (Power)
Backup generators are on hand at data centers in the event of a
regional power outage or disruption.
Data centers consume far more power than a typical office building or
warehouse. As such, the power infrastructure is one of the most critical
components.
Electricity travels along what’s called the power chain, which is how electricity
gets from the utility provider all the way to the server inside the data center. A
traditional power chain starts at the substation and eventually makes its way
through a building transformer, a switching station, an uninterruptible power
supply (UPS), a power distribution unit (PDU) and a remote power panel
(RPP) before finally arriving at the racks and servers. Data centers also utilize
on-site generators to power the facility if there is an interruption in the power
supply from the substation.
Each step of the process has a distinct purpose, whether it be transforming
the power to a usable voltage, charging backup systems, or distributing power
to where it is needed. We’ll be breaking down what each component does and
why it’s important in future articles.
Mechanical Infrastructure (Cooling)
Many data centers blow air underneath raised floors to keep servers
cool.
Servers produce substantial heat when operating and cooling them is critical
to keeping systems online.
The amount of power a data center can consume is often limited by the
amount of power consumption per rack that can be kept cool, typically
referred to as density. In general, the average data center can cool at
densities between 5-10 kW per rack, but some can go much higher.
The most common way to cool a data center involves blowing cool air up
through a raised floor, which is pictured above. In this setup, racks are placed
on a raised floor with removable tiles, usually three feet above the concrete
slab floor. Cool air is fed underneath the raised floor and is forced up through
perforated tiles in the floor around the racks. The warmer air coming out of the
servers rises and is pulled away from the data hall, run through chilled-water
chillers, and fed back beneath the raised floor to cool the servers
again.
While raised floor is a common solution, it isn’t always necessary.
Some data centers utilize isolation, where physical barriers are placed to
direct cool air toward the servers and hot air away. It’s common to see high
ceilings in newer data centers as well. By simply increasing the volume of air
in a data hall, it’s easier to keep the room from getting too hot.
Another less common solution is liquid cooling. The servers are stored on
racks that are submerged in a special non-conductive fluid. This method is the
most efficient, enabling the data center to operate at extremely high densities
and prolong the lifetime of the equipment.
In certain climates, data centers can also take advantage of “free cooling”
where they use the outside air to cool the servers. Instead of taking the hot air
and cooling it to be used again, they allow the heat to escape and pull in the
cool air from outside. This process is, as expected, much cheaper and more energy
efficient than operating man-made cooling infrastructure.
Connectivity Infrastructure
A meet-me-room provides a single location for all the servers in the
data center to connect to fiber providers.
A data center’s connectivity infrastructure is also important. Without it, a data
center would just be a building full of computers that can’t communicate with
anyone outside the building.
As data centers are the primary foundation for activities happening online, the
buildings themselves need to be highly connected. Access to a variety of fiber
providers connects a data center to a wide network able to provide low latency
connections and reach more customers.
Fiber traditionally runs into a data center through secured “vaults” and into the
building’s meet-me-room or directly to a user’s servers. A meet-me-room is a
location where fiber lines from different carriers can connect and exchange
traffic.
Redundancy
Given the critical nature of data center infrastructure, it isn’t sufficient to only
have the systems necessary for operations. Data center users also care about
the additional equipment a data center has on hand to ensure that no single
system failure can take the data center, and the users’ servers, offline. This
measure is called redundancy.
For example, a data center may need 10 chillers to cool their servers, but will
have a total of 11 chillers on-site. The extra chiller is redundant and used in
the event of another chiller failing.
Redundancy is communicated by the “need” or “N” plus the number of extra
systems. The example above would be considered N+1. The data center
needs 10 chillers and has one extra, thus it would be labeled as N+1. If the
data center above had 10 extra chillers in addition to the 10 it needed
to operate, its redundancy would be double its need, or 2N.
The closer to N, the less redundant a data center is.
In an N+1 scenario, a data center could lose one chiller and still operate
because of the one extra chiller, but they would not have an extra available if
a second chiller went down. In a 2N scenario, all of the operational chillers
could break and the data center would have enough to replace them all. Today,
most data center providers find N+1 is sufficient to avoid downtime, though
some industries require their data centers to be more redundant.
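The "N plus extras" notation is easy to express mechanically. Here is a minimal sketch with hypothetical chiller counts; the same labeling applies to generators, UPS systems, and other redundant equipment.

```python
def redundancy_label(needed: int, total: int) -> str:
    """Express redundancy as N+x, or as a multiple of N when extras divide evenly."""
    extra = total - needed
    if extra >= needed and extra % needed == 0:
        return f"{1 + extra // needed}N"  # e.g. a full second set -> 2N
    return f"N+{extra}"

print(redundancy_label(needed=10, total=11))  # N+1: one spare chiller
print(redundancy_label(needed=10, total=20))  # 2N: double the need
```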
Redundancy applies to most aspects of a data center, including power
supplies, generators, cooling infrastructure, and UPS systems. Some data
centers have multiple power lines entering the building, or are fed from
multiple substations to ensure uptime in the event a line is damaged
somewhere. The same approach can be taken with fiber lines. Generators are
also used as a back-up power source should the supply be interrupted.
Data centers support the internet ecosystem that more and more of the world
relies on today. As such, they require robust infrastructure to ensure there’s
no interruption in the services they provide.
Types of data centers
All types of data centers provide power, space, and connectivity. Yet the size,
location, and connectivity of a data center change the types of customers it
can serve.
At datacenterHawk, we break data centers down into five categories:
1. Colocation data centers
2. Enterprise data centers
3. Hyperscale data centers
4. Edge data centers
5. Carrier hotels
Colocation Data Centers
Customers lease power and space from a data center provider in colocation
data centers. The facilities themselves come in a variety of forms, but most
can meet a wide range of needs.
Most colocation data centers are located in the suburbs of larger cities,
although there are others located near the city's core. The suburbs offer larger
parcels of land and greater setbacks, and provide data center providers with a
larger pathway to grow.
The emergence of colocation over 20 years ago created the competitive data
center industry we see today. Leasing data center colocation infrastructure
after the 2008 US recession allowed customers to save their capital and pay
over time. Since then, we've seen continued growth in the colocation market.
Digital Realty, Equinix, CyrusOne, CoreSite, and QTS are some of the largest
colocation data center providers.
Enterprise Data Centers
Some companies choose to own and operate their own data center facilities.
We call these facilities "enterprise data centers".
Companies have different reasons for owning and operating their own data
center infrastructure. Some choose this path because they want the privacy
and security of their own facility. Companies with larger requirements might
choose this strategy to control costs.
All major markets have enterprise data centers. Areas like Chicago, Dallas,
and Phoenix have enterprise data centers because of the businesses located
in the region. Areas like Quincy, Omaha, Des Moines and New Albany have
enterprise data centers because of the tax incentives and low power costs
they provide.
Because data center users are using more colocation and cloud today instead
of building their own, enterprise data centers are less common than they
were ten years ago.
Hyperscale Data Centers
Hyperscale data centers are large, scalable facilities. Companies with
massive infrastructure needs find them attractive. Data center operators first
built these facilities for large, public cloud providers. While these companies
can build data centers themselves, leasing provides flexibility and speed to
market, which created some of the largest requirements in the industry.
Northern Virginia, Northern California, Phoenix, Dallas, and Chicago have all
seen hyperscale development and leasing.
Digital Realty, CyrusOne, QTS and CloudHQ are some of the providers
focused on meeting hyperscale demand.
Edge Data Centers
Edge data centers emerged recently in the data center industry and provide
data center services where the end-users are.
Instead of a single large data center servicing a large region, edge data
centers are smaller facilities spread out to serve smaller areas of a region.
This provides lower latency connections from the end-user to the data center
infrastructure and reduces traffic on major fiber lines.
A great use case is Netflix. People want quick access to Netflix content. It
doesn't make sense to route them to a data center in a major market 200
miles away. With a closer edge data center, they can get the content much
faster.
VaporIO, EdgeConneX, TierPoint, Cyxtera, and Compass are a few providers
that focus on edge services.
Carrier Hotels
Carrier hotels are the primary internet exchange point for all data traffic in their
area. They emphasize having more telecom and fiber providers in the building
than a typical colocation data center.
One of the largest carrier hotels in the world is the One Wilshire building in
Los Angeles. Over 200 carriers connect to the building, making it the primary
connectivity hub for all traffic on the West Coast.
Carrier hotels are typically located downtown where fiber infrastructure is the
most mature. Larger carrier hotels are also located in coastal cities like Los
Angeles, Seattle, New York, or Miami to act as an anchor point for subsea
cables. Creating such a dense fiber ecosystem at a facility takes a lot of time
and effort. As a result, most markets have only 1-3 true carrier hotels.
Carrier hotels will increase in value as network maturity grows more important
over time.
Is data center location important?
Economics
Data centers are significantly more expensive than other types of real estate,
and the economic considerations have ramifications for all data center
projects.
Power cost is one of the most important factors when choosing
a data center location and can vary widely. Areas like Quincy or Montreal are
$0.02–0.03 per kilowatt-hour (kWh), while locations in the northeast US run $0.15–0.16/kWh.
You can find markets with low power costs with Hawk Zoom.
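Over a year, that rate spread adds up quickly. Below is a rough comparison for a hypothetical 1 MW IT load running continuously; actual bills vary with utilization and cooling overhead.

```python
HOURS_PER_YEAR = 8_760

def annual_power_cost(load_kw: float, rate_per_kwh: float) -> float:
    """Annual power cost for a constant load at a given $/kWh rate."""
    return load_kw * HOURS_PER_YEAR * rate_per_kwh

cheap = annual_power_cost(1_000, 0.03)   # a Quincy/Montreal-style rate
pricey = annual_power_cost(1_000, 0.15)  # a northeast-US-style rate
print(f"${cheap:,.0f} vs ${pricey:,.0f} per year")  # $262,800 vs $1,314,000
```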
Most states offer tax incentives tailored to data center development in order to
attract end users. Larger data center investments often receive exemptions
from sales tax on equipment or power consumption.
The market’s climate can impact cost as well. In cooler markets, you can use
the cool air outside to cool servers instead of air conditioning units. This can
help keep power consumption costs down. In warmer markets, summers can
have higher power costs due to peaks in demand.
Colocation competition in the market can also shift costs. Heavy competition
tends to lower the rental rates data center providers can charge. This is what
we're seeing in the Dallas and Chicago data center markets right now.
Hazards
A data center’s location can also be influenced by geographic hazards.
Natural hazards like earthquakes or hurricanes are important to consider
when performing a site evaluation. Markets like Phoenix and Chicago are
relatively safe from natural hazards. But other markets like New Orleans have
hazards that usually eliminate them from data center development.
Even with natural hazards, some markets are so strategic that providers build
there anyway. They just build their facilities to be more resilient against the
hazards of their region.
For example, Northern California, Los Angeles, and Seattle are areas of high
seismic risk, but are also three areas of substantial data center investment. To
account for natural hazards, data centers can be designed to absorb
earthquake vibrations or withstand winds of 150+ mph.
Man-made hazards also influence a data center’s location within a
market. Proximity to railroads, highways, airports, and nuclear power
plants is often considered when selecting a data center location.
You can see which markets have high hazard risks with Hawk Zoom.
Strategy
A company’s internal strategy matters most when placing IT infrastructure in
the proper region.
For example, Northern California is the second largest data center market
because it’s strategic for some companies to have a portion of their IT
infrastructure located in Silicon Valley despite the higher power costs and
seismic risk.
Companies also prefer to be close to their data center infrastructure. If a
company is headquartered in Houston, it might not make much sense to put
its data center in Montreal, especially if there's a service interruption. Larger
companies might have good reason to distribute their infrastructure globally.
But the added security might come at a convenience cost to smaller
companies.
The infrastructure of a market will also influence how strategic it is. Dense
fiber infrastructure, subsea cables, and renewable energy can be attractive to
companies.
Regulations
A market’s government can also carry a lot of weight when choosing a data
center location. Laws and regulations can make it easier or harder to operate
in different states or nations. For example, building a data center in California
can take longer due to their regulations on water usage and emissions.
For companies that operate on an international scale, data privacy is a major
concern. The Patriot Act and other privacy-focused legislation and concerns
have pushed some demand away from the US and into Canada, particularly
for international companies that need data centers in North America.
Other government movements like green-energy initiatives or Brexit can have
a major impact on a company’s data center strategy.
Just because data centers can be built anywhere doesn’t mean they are. The
largest data center markets are the size they are for multiple reasons. Their costs,
risks, strategic benefits, and regulations all make sense for companies to
invest in the region.
What is connectivity?
Without connectivity a data center is just a warm building of computers that
can talk to themselves.
It's a game that’s won and lost on speed. The faster traffic can get to a data
center and back, the more valuable the data center can be to people outside
its four walls.
If this is the first article you’re reading about connectivity, then you’re in the
right spot. We’ll unpack everything you need to get up to speed below.
Customers Don’t Like Slow Delivery
Times
Imagine you run a logistics company. The success of your business depends
on how quickly you deliver packages. As your operation grows, you build
warehouses to swap packages on and off trucks that are going to different
areas. Sometimes trucks have to stop at multiple warehouses to get
everything they need to take to their destination.
Ideally your trucks can take interstate freeways directly to each warehouse
instead of slogging through dense downtown traffic or winding through miles
of dirt farm roads. Fast roads directly to a warehouse are better than slow
roads that require a roundabout route. Traveling on slower roads or taking
roundabout routes ultimately compounds into slower delivery times.
Customers don’t like slow delivery times.
This is what data center connectivity is all about.
You probably caught on that the warehouses in our example are data centers.
The roads in this example are the connectivity, which in the data center
industry is usually accomplished via fiber lines.
And just like the roads, if we want to get traffic where it needs to be as quickly
as possible, it’s better to use the fastest, biggest, most direct fiber line
possible. We’ll still need to make stops every now and again to pick up
packages or data, but the concept remains the same.
The Importance of Connectivity
In simpler days of the internet, one computer would talk directly to another
and get everything it needed. Today companies are using increasingly
complex systems to support their customers and their internal needs.
It’s not uncommon for a company to deploy part of their IT resources to the
cloud and keep part in house, whether physically in their building or distributed
across colocation data centers. Even within those nodes, they may have a
multitude of microservices spread across different servers. As the number of
points of communication increases, so does the importance of keeping those
communications as fast as possible.
From a user experience perspective, all this operational speed is typically
taken for granted, until something goes wrong. In terms of user experience,
human-factors studies have consistently shown over 30 years that delays of 1
second interrupt the user’s flow of thought, while delays of more than 10
seconds lose their attention. Users consistently bemoan the slow speeds of
websites and apps.
In the earlier days of the internet, it was understood that as companies were
growing there would be some hiccups. Twitter’s fail whale, which indicated a
service outage, even became a cultural icon.
However today, as consumer choices on the internet proliferate, a slow load
will ultimately become a no load as customers go elsewhere. All the more
reason to focus on speed.
Connectivity Solutions Overview
So how does a company actually ensure their data gets to its destination as
fast as possible? That’s what good connectivity helps ensure.
Fiber, Carrier Neutral Data Centers, and Being On-Net
Frequently, what the industry refers to as fiber is the fiber line that connects you to
the world outside the four walls of the data center. You may have to deal with
delays like potential congestion but it’s the primary method to reach
geographically dispersed facilities.
In the earlier days of colocation, data centers would only have a single fiber
provider like AT&T, Verizon, or Comcast. If you wanted to use the data center,
you were going to have to use their fiber provider. Think something akin to
you moving into an apartment where the complex signed an exclusive deal
with a single internet provider.
Over time though, the data center industry evolved to building data centers
with multiple fiber providers. We call these facilities carrier neutral data
centers. This means that if you have most of your infrastructure on Cogent,
Zayo, CenturyLink or another fiber provider then you can go to a carrier
neutral data center and expect to bring your fiber provider with you.
With the advent of carrier neutral data centers, several fiber providers have
preemptively built out connections at many of these facilities so that when
their customers show up, they can get service right away.
This is what it means for a fiber provider to be on-net at a data center.
It means they are already connected and live in the building so that customers
don’t have to wait for them to build out a point of presence there.
Cross-connect
This might be the simplest type of connectivity. If a company has servers in
one area of a data center and needs to quickly communicate with a server in
another area of the data center, they would request a cross-connect.
This is a direct line between servers that doesn’t even have to go out to the
public internet. Think something akin to you connecting two laptops via a USB
or ethernet cable. There’s nothing between the computers except for the wire.
No chance of latency. No chance of traffic congestion. No need for switching
or routing. No chance of having the connection severed by an unknowing
construction crew. It’s very fast and very reliable. But it also doesn’t go outside
the four walls of the data center.
Campus Cross-connect
Instead of single isolated data centers, some larger providers opt to build
several data center facilities in close proximity on the same parcel of land.
These groups of data centers are called a campus. If a company has a server
in one of these buildings and needs to quickly communicate with a server in
the building next door, they would request a campus cross-connect.
Similar to a cross-connect, the campus cross-connect is internal to the
campus. It doesn’t need to go out to the public internet and reaps many of the
same benefits as a result. It’s very reliable with very little latency. The
provider essentially hard wires a cross-connect to a single point in Facility A,
which connects to a giant private fiber optic pipe to Facility B, and then hard
wires a cross-connect to the server in Facility B.
Cloud Direct Connect
Over the past ten years, companies started to move more IT resources over
to cloud providers like Amazon Web Services, Google Cloud Platform or
Microsoft Azure. If a company has material resources in the cloud, they may
want to reduce their exposure risk by avoiding connecting to the cloud over
the public internet. In this case, they would request a direct connect from
their data center to the cloud provider.
As it implies, a direct connect is a private, direct connection from the
company's premises to the cloud provider. Compared to internet based
connections, this can help increase bandwidth and consistency, all while
reducing network costs.
Carrier Hotel
As an example, some companies want to connect to certain fiber providers
but find that the providers aren’t physically connected to their data center.
Instead of waiting for all of their fiber providers to build out points of presence
at their facility, they may opt to connect to the market's carrier hotel.
While most data centers brag about scalability and security, carrier hotels
pride themselves on the ecosystem they create through fiber and network
providers. They want to be the single location you can go to connect to almost
anyone.
Most markets will only have a single carrier hotel. On the coasts, the carrier
hotel normally sits where the subsea cables come ashore. In non-coastal
markets, you’ll find them near the densest infrastructure in the city.
We recently dove deep into carrier hotels on one of our podcasts if you’d like
to learn more.
Redundancy and Easy Mistakes To Make
Power infrastructure redundancy is critical to the data center industry and
connectivity redundancy is too.
Connectivity can sometimes be forgotten. If there’s only a single line
connecting your critical systems, it doesn’t matter how redundant the nodes
are if they can’t talk to each other. Without a consistent connection, these
nodes go back to just being computers in a warm building talking to
themselves.
It’s also good to remember that different people have different views of
connectivity.
A company that operates an enterprise data center to solely support their own
company operations probably doesn’t care how many fiber providers they
have on-net as long as they have the one or two they have contracts with. A
colocation data center wants to have enough fiber providers to attract
customers. But they don’t want to go through the expense of trying to compete
with a carrier hotel. For carrier hotels, the more fiber providers that are on-net
the better.
We also dove into the different types of data centers on one of our recent
podcasts if you’d like to learn more.
If you’re trying to find data centers that have specific fiber providers on-net, our search filters can help with that.
How to Measure the Data Center Market
No one makes good decisions with bad data. At
the same time, measuring and reading the data
center market is mysterious and data is hard to
get. But it doesn’t have to be that way.
Below we’ll lay out exactly how we do it at datacenterHawk. At the end of this
article, you’ll be able to take our approach and put it to use to help you
succeed in your new role.
This is part eight of Data Center Fundamentals, datacenterHawk’s guide to
getting up to speed on the data center market. If you’re a new participant in
the industry, then this is for you. Instead of analyzing deep market trends,
we’ll be covering the basics one step at a time. Be sure to subscribe to our
monthly update to know when we release future topics.
What Market Research Can Do For You
For some, market research might be a bit of an ill-defined, fuzzy term. Put
simply, market research helps people understand how a certain product or
industry is performing.
Market research can help answer questions like:
1. How much are people selling?
2. How much are people buying?
3. What price are they paying?
4. What trends should I care about to help me make better decisions?
5. Where should I take my business next to give me the best chance of
success?
As the size and risk of your decision increases, so does the amount of due
diligence you’re likely to give to making a good decision. You spend a lot less
time weighing the outcomes of putting together a $10 lemonade stand than
you do when buying a $200,000 house. Even more so when you’re building a
$30M data center or a $400M campus.
This is where market research can really be helpful - when you’re about to
make a large outlay of cash that can be hard to take back. In the data center
industry, it helps different people in different ways.
1. As a data center provider or operator, you can figure out where to
build your next facility by looking at which regions have a lot of
demand but not much supply. You can also see which regions are
under supplied compared to their historical average.
2. As an investor placing capital into an investment decision or asset, you
can make an educated guess as to if the market is going to grow faster,
slower, or not at all.
3. As a consultant or broker, you can know when to press your leverage
in negotiations for your client based on your knowledge of how hot or
cold the market might be.
4. As a vendor, you can forecast future revenues based on assumptions
you can make on the growth of the industry globally, nationally, and
regionally.
Of course, we think Hawk Insight is great for guiding these types of decisions.
But regardless of where you get your market research, once you're making big
enough bets, it’s generally wise to get a third party view of what’s going on.
It’s All About Supply & Demand
Let’s go back to our lemonade stand example.
If you’re the only stand on your block, chances are fair that a car would stop
and buy your lemonade. But if 10 other kids on your block also set up
lemonade stands, your chances drop precipitously. This is the concept
of market supply. The more lemonade that people are offering to sell, the
more supply is on the market.
Let’s say you relocate your lemonade stand. You’ll probably do better setting
up on a busy neighborhood street or walking path instead of a sleepy cul-de-sac, especially if you set up on a day when people are likely to buy
lemonade, like on the first warm Saturday of summer. This is the concept
of market demand. The more people there are and the more they’re willing to
pay, the higher the demand is in the market.
Now if you’re an enterprising young girl or boy, you might realize you’d do
even better setting up where there are lots of people, who really want
lemonade, when there’s no one else around to sell it.
This is how people use market data to guide their decisions. By understanding
the supply and demand of a market they can make a more informed decision
on where the market will go and where they want to be.
Measuring Data Center Market Supply
Let’s map this to the data center industry.
On the supply side of the market, the lemonade stands are data centers. The
family that runs one or more lemonade stands is the data center provider or
operator. Instead of selling lemonade, data center providers sell capacity.
Instead of measuring in cups, the primary measure of capacity is electricity
consumption, specifically kilowatts (kW) and megawatts (MW).
Capacity is a tricky thing to understand for people new to the industry. In
commercial real estate, the main driver of the price of a lease is the square
footage. It’s fairly straightforward: the more space you lease or build, the
more it’s going to cost.
With data centers though, you can stack lots of servers vertically into the
same 2'x3' footprint. The better a data center can cool the servers, the more
closely it can stack them and the more use it can get out of its square footage.
This is the concept of density.
To tie it back together, let’s say we have two data centers: DC#1 and DC#2.
Each facility has the same square footage. DC#1 though has invested heavily
in cooling technology and can fit twice as many servers in their facility
compared to DC#2. A server is going to draw the same amount of power
regardless of which facility it’s in. So it makes more sense to measure things
based on the electricity it consumes rather than the square footage it
occupies. As such, DC#1 will probably be seeing double the revenue of DC#2.
This is why capacity is primarily measured in electricity consumption, not
square footage. It provides a better apples to apples comparison across data
centers. It also lets data centers invest in better cooling technology and
suddenly have more “lemonade” even though they didn’t expand the building.
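Here is a minimal sketch of the DC#1 vs DC#2 comparison: same floor plate, different densities, so measuring in power captures the real difference. The rack count and per-rack densities are hypothetical.

```python
RACKS = 2_000  # hypothetical rack count for a fixed floor plate

dc1_kw = RACKS * 10  # DC#1: better cooling supports 10 kW per rack
dc2_kw = RACKS * 5   # DC#2: same footprint, but only 5 kW per rack

# Same square footage, but DC#1 has double the sellable capacity.
print(dc1_kw / 1000, "MW vs", dc2_kw / 1000, "MW")  # 20.0 MW vs 10.0 MW
```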
To determine the market size, you basically add up all the capacity at all the
data centers that is currently leased or is available to be leased. This is what’s
called commissioned power. If you want to size a specific market like
Northern Virginia or London, you just add up the commissioned power of the
data centers in those markets.
By our math at the end of 4Q 2019, Northern Virginia had 1,209.43 MW commissioned power.
Smaller markets like Portland and Quincy come in at 57.81 MW and 73.53 MW respectively.
Measuring Data Center Market Demand
Let’s go back to the lemonade example and talk about the demand side of
the market. Instead of people driving by in their cars, data center customers are
called tenants or users. These tenants might be one person who’s mining
bitcoin or it might be Facebook, Amazon, or Netflix supporting an entire region
of the US. Just like on the supply side, tenants are purchasing capacity that’s
primarily measured by electricity consumption in kilowatts
(kW) or megawatts (MW).
To measure demand, we want to know how much capacity was leased up by
customers over a specific period of time. At datacenterHawk we calculate this
quarterly. The resulting number is what’s called absorption.
Let’s say DC#1 has 10 MW commissioned. 9 MW are currently leased and 1
MW is available. Over the course of a quarter DC#1 leases up that last MW to
a few tenants. Their absorption for the quarter would be 1 MW. It can get a
little more complicated but that’s the basic concept.
Similar to sizing the market on the supply side, to size the demand side of the
market you simply add up all the absorption from all the facilities in the market
for the period.
By our math in 4Q 2019, Northern Virginia absorbed 19.19 MW while Portland did 2.25 MW.
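To pull the supply and demand metrics together, here is a minimal sketch of the rollup using hypothetical facilities: market supply is total commissioned power, and quarterly absorption is the capacity newly leased over the period.

```python
facilities = [
    # (name, commissioned MW, leased MW at start of quarter, leased MW at end)
    ("DC#1", 10.0, 9.0, 10.0),   # leases its last available MW this quarter
    ("DC#2", 20.0, 12.0, 13.5),  # leases 1.5 MW this quarter
]

# Supply: add up commissioned power across all facilities in the market.
market_supply = sum(commissioned for _, commissioned, _, _ in facilities)

# Demand: add up each facility's newly leased capacity for the period.
market_absorption = sum(end - start for _, _, start, end in facilities)

print(f"Market supply: {market_supply} MW commissioned")  # 30.0 MW
print(f"Quarterly absorption: {market_absorption} MW")    # 2.5 MW
```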
Gathering, Standardizing, and Verifying
the Data
So where can you get all this data to analyze?
We think Hawk Insight is great because you can jump to the analysis in just a
few clicks. But you can actually gather the data yourself from lots of places. It
just takes some legwork to gather it, interpret it, and standardize it before you
begin to analyze it.
See how you can skip everything in this blog and get straight to what you need with Hawk Insight.
Gathering Data
You can find data from lots of different sources.
Start with information from data center provider websites. They’ll typically list
the facilities they operate, where they are, their capacity, and other
infrastructure details. Brokerage reports from companies like CBRE or
Cushman & Wakefield are also great sources of market data.
There are also market research firms dedicated to providing data and
analysis. Many firms offer purchasable reports that they produce on a yearly
cadence. Other companies have built online platforms that are easy to use
and are constantly updated with real-time data. This is what we’re continuing
to build at datacenterHawk.
There’s nothing wrong with using good old Google to find out what’s going on.
Frequently there will be press releases or announcements whenever a
provider is expanding a facility or campus. You can also find announcements
across social media.
Standardizing the Data
As you research capacity data, you’ll want to be pressure testing each data
point as it comes in.
For example, it’s fairly easy to find press releases about a provider adding a
megawatt to a specific data center. But having 1 MW commissioned or
available right now is very different from it being under construction or even
just planned. Frequently this portion of the announcement is left off and you
need to do some additional digging to figure out what they’re really saying.
We recommend you only count commissioned and available power for the
most accurate view of the market like we do at datacenterHawk.
Verifying the Data
As you piece together your market data, you’ll also want to pressure test how
reliable the data is.
Some firms will extrapolate their numbers with a top-down approach. This
means they gather a few sample data points and assume the rest of the
market shapes up along the same lines as their sample data. This is great for
getting a quick read on the market but isn’t as bulletproof as a bottom-up
approach.
With a bottom-up approach you go facility by facility in every market and
roll up those capacity figures to the national level. This takes a lot more time
than going top-down but it’s much more reliable.
At datacenterHawk we take the bottom-up approach to building our market
calculations and we do it that way each and every quarter.
Just Talk to People
Some data just isn’t on the internet. It takes going to conferences, traveling to
see people, and helping everyone you talk to along the way. The great part
about this is you get to build deep relationships with the people in the
space. We’ve even had lots of them on our podcast to talk about what’s
transpiring in the industry.
If you’re new, we’d love to talk to you too and see what we can do to point you
in the right direction. You can also subscribe to our monthly update to get
more great content like this along with hard data on the market.
Talk to you soon!