2014 State of Enterprise Storage
Data stores are big, with 83% reporting 10% or better annual growth; 27% are wrangling 25% or more yearly increases. The main culprit: enterprise databases and data warehouses. Yet money's still tight, with 25% saying they lack the budget to meet demand, never mind optimize for performance and capacity.
By Kurt Marko
The Root Of The Problem
Business users worry about storage
growth like the NSA worries about
your privacy. Sure, they may pay lip
service to the virtue of restraint, but
when it comes down to it, they want
their stuff. And their stuff? It’s digital,
decoupled from the physical devices
used to store and distribute it. Size,
and sometimes cost, are no longer attributes users associate with information, so there is no limit to its accumulation. And of course, any time you
make something easier and cheaper
to consume, you increase demand.
In the case of storage, you increase it
by a lot, to the tune of continued double-digit growth, according to our 2014
InformationWeek State of Enterprise
Storage Survey, which we limited to respondents from organizations with 50
or more employees who are involved with storage strategy
or operations. Most, 83%, report 10% or better annual increases; 27% are wrangling 25% or more yearly growth. An
unfortunate 10% report growth of 50% or more, effectively doubling their capacity in under two years. The main culprits: enterprise databases, email systems, productivity software documents, and collaboration systems like SharePoint.
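For readers who want the arithmetic behind that doubling claim: at a steady annual growth rate r, capacity doubles in ln 2 / ln(1 + r) years, so 50% yearly growth doubles capacity in about 21 months, and 75% growth in about 15.

\[
T_{\text{double}} = \frac{\ln 2}{\ln(1+r)}, \qquad T_{\text{double}}(0.50) = \frac{0.693}{0.405} \approx 1.7\ \text{years} \approx 21\ \text{months}
\]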
Money’s still tight, of course, with 25% saying they lack
the cash even to meet demand, much less optimize performance by loading up on solid state drives.
In a research note, 451 Group storage analyst Marco Coulter points out that over the past decade, enterprise storage
capacity demand has grown faster than Moore’s Law, more
than doubling every year. In 2002, only 18% of respondents
to 451 Group’s storage survey had more than 80 TB of SAN
capacity. Today, many manage well over 10 PB.
Meanwhile, just having enough space no longer guarantees performance. “In recent years, the consistent top
two pain points are dealing with rapid capacity growth
and the high cost of storage,” says Coulter. “Yet the selections of each dropped in 2013, so something is increasingly painful for storage professionals. It is delivering storage performance.”
All this is setting up a tough “pick two” conflict for IT
among capacity, cost, and performance.
Where’s The Money?
On the vendor side, business isn’t booming. Storage system commodification and technology improvements mean
growth in enterprise storage capacity doesn’t translate to
higher sales. IDC estimates that external disk storage systems factory revenue posted a year-over-year decline of
3.5%, totaling $5.7 billion in the third quarter — the third
consecutive decline. Overall for the last 12 months, storage
systems revenue is down 1.3% year over year, with only EMC
and NetApp eking out slight gains. IDC’s definition of “systems” includes disks and components, either in a standalone
cabinet or within servers, but it generally translates to the
type of consolidated disk systems enterprises use for most
of their important applications.
As in the past, the capacity each system holds keeps growing. IDC estimates total storage systems capacity
shipped reached nearly 8.4 exabytes, up 16.1% year over
year.
Mirroring overall industry revenue, sales are generally flat for EMC, NetApp, IBM, Hewlett-Packard, and Hitachi. Only NetApp defied the third-quarter revenue slowdown, with sales increasing almost 6%
from the third quarter of 2012. An analysis of the last four quarters of IDC estimates (see Figure 3) shows total storage industry revenue down about 1%, with EMC and NetApp squeezing out small increases while the remaining three of the top five vendors all suffered declines, led by Hitachi, down almost 9%.
While the macro view paints a stagnating storage business, it’s not a uniform picture across product categories
and price points. As Chuck Hollis, chief strategist at
VMware, points out in a blog post, the largest product segment in IDC’s analysis is block-based network storage,
things like Fibre Channel SAN, FCoE, and iSCSI arrays, which
are actually doing better than the storage market overall.
Almost half of the respondents in our survey use FC SANs,
and one-third use iSCSI for 25% or more of their storage.
Our respondents are more inclined to buy storage from
vertically integrated IT vendors, namely HP and IBM, than
the storage specialists like EMC and NetApp that dominate
the industry rankings, although EMC does effectively tie
Dell for third place among the vendors used for Tier 1 and
2 storage. In fact, the share using EMC or NetApp among
our demographic at organizations with 50 or more employees dropped nine points each this year, while IBM
gained six points. HP recorded a precipitous drop of seven
points in our survey, likely a hangover from the turmoil
that embroiled the company a few years ago and uncertainty as it integrated major acquisitions, 3PAR and LeftHand, into its product lines.
We suspect that the preference for large, established IT
vendors with broad infrastructure portfolios at least partially
stems from the fact that two-thirds of our respondents have
completed or are planning for consolidated storage designs, in which a few centrally managed systems hold the
bulk of their important data. Putting most of your eggs into
a few baskets tends to reduce your risk tolerance and increase the appeal of a familiar vendor from which you already buy servers, management software, and services. The
same preference for one-stop, vertically integrated IT superstores also explains the use of vendor-supplied storage
management software and comprehensive IT management
systems over point products.
But there’s no room for complacency. Big storage vendors need to watch for a phenomenon we discussed in our
latest State of Servers report. Specifically, cloud service
providers are more open to innovative hardware architectures and less tied to big-name vendors. As they constitute
a larger share of the market, CSPs begin to shift industry
dynamics and technology direction, upsetting the established vendor pecking order. CSPs are more likely to build
distributed, scale-out storage systems using either white-box components or appliances from less-costly startups
or Asian OEMs and could provide an opportunity for
smaller firms, like Coraid, Nimble, and Nasuni.
While our survey shows market penetration for all but
the largest storage vendors still only in the single digits,
much like in the server and networking markets, the disruption caused by the shift to public and private cloud infrastructure provides opportunities for some fresh faces.
Tale Of Two Markets
Although the overall storage market is in the doldrums,
one corner is absolutely on fire: solid state. Looking at component shipments for both consumer and enterprise hardware, IHS iSuppli found that in the first quarter, SSD shipments were up across the board. Its SSD market estimate
reports that SSD deployment rose not only in the enterprise segment but also in desktop and notebook PCs and industrial markets, such as aerospace, automotive, and medical electronics.
“All told, SSD shipments in the first quarter amounted to
11.5 million units, up 92 percent from 6.0 million the same
time a year ago,” writes storage analyst Fang Zhang. “The
shipments include standalone SSDs as well as the NAND
flash component used together with hard-disk drives to
form cache SSDs or hybrid drives.”
In contrast, iSuppli found that hard disk drive (HDD) shipments increased only 7%, to 16 million units in the quarter,
fueled largely by enterprises. Consumer shipments fell because of less demand for PCs and displacement by SSDs.
While mobile devices still account for the majority of
flash memory demand, DRAMeXchange estimates that
SSDs, an increasing number of which end up in servers and
storage systems, will account for about one-quarter of the
flash market in 2014, with higher growth than any other
flash memory segment. Overall, it predicts that the total
market for flash memory chips will increase by more than
13% this year, on top of a similar double-digit increase in
2013. Micron Technology, now the world’s second largest
memory manufacturer, noted in its fourth-quarter earnings conference call in October that, measured by total
bits, flash sales increased 23%, driven primarily by the enterprise space. It forecast an enviable five-year compound
annual growth rate in the “low to mid-40% range” in its
latest first-quarter conference call.
Translated: Shipped SSD capacity could double every 22 to 25 months for the next few years.
This is apparent in our survey results, where the percentage of respondents using or evaluating solid state in
servers, storage arrays, or both rose eight points, to 68%.
Forty percent already use SSDs in disk arrays, and 39% run
them within servers, up eight and 10 points, respectively,
since 2013.
Still, solid state penetration isn’t deep. Only 20% of respondents have deployed SSDs in more than 40% of their
arrays, while just 13% have them in more than 40% of their
servers. Even so, both figures represent a four-point increase
since our 2013 survey.
Storage Growth Spurt
It’s not just enterprises in a capacity bind. In its “The Digital Universe in 2020” report, IDC says the overall volume
of digital bits created, replicated, and consumed across the
United States will hit 6.6 zettabytes by 2020. That represents a doubling of volume about every three years. For
those not up on their Greek numerical prefixes, a zettabyte is 1,000 exabytes; 6.6 ZB works out to more than 1.6 billion 4 TB drives.
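For the skeptical, the unit conversion is a few lines of Python (decimal prefixes, so 1 ZB = 10^21 bytes):

    ZB, TB = 10**21, 10**12         # decimal zettabyte and terabyte, in bytes
    drives = 6.6 * ZB / (4 * TB)    # IDC's 6.6 ZB expressed as 4 TB drives
    print(f"{drives:,.0f} drives")  # 1,650,000,000 -> more than 1.6 billion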
As to what’s driving demand, greater use of cloud services and social networks and the proliferation of smartphones as information clients play a part. Migration of all
media, particularly TV, from analog to digital formats is a
culprit, too. But what’s really coming at us like a freight
train is machine-generated data, notably security images
and “information about information.” This last bucket includes everything from the Internet of things (IoT), in
which devices generate information about their operations and environments, to analytics software that must
crunch vast troves of raw data to produce the insights
businesses crave.
In its report, IDC says that “one of the ironies of the digital universe is that as soon as information is created or captured and enters the digital cosmos, much of it is lost.” It
estimates that 18% (an amazingly precise measure for
such a vague concept) of the aggregate data would be
valuable if it were tagged and analyzed, although less than
0.5% actually is.
This digital data universe doesn’t
just fuel the need for storage. Intel executives like to point out that 600
smartphones generate enough traffic
and data to occupy one server. Add a
few hundred million smartphones,
and you’ve just created the need for
hundreds of thousands of servers and
petabytes of storage.
Double-digit capacity growth,
along with the increasing diversity of
data sources (IoT), clients (mobile),
storage repositories (on-premises
consolidated arrays, scale-out nodes,
cloud services) and applications
(legacy, mobile, PaaS/SaaS), creates
problems and opportunities for IT:
• Applications, especially for mobile
devices, are getting more sensitive to
variances in storage performance,
which means storage architectures
must be optimized for both performance and capacity.
Mobile applications also create new challenges in providing remote and cloud access to data repositories.
• Scaling capacity without equivalent upgrades in capital
equipment or operational (personnel) budgets drives interest in less-costly and easier-to-manage scale-out storage designs or deduplicated distributed object stores (systems like Ceph and Actifio); see the sketch after this list.
• Use of cloud services can create data silos, hence the need
to integrate cloud-based storage, applications, and services
into an on-premises enterprise storage infrastructure.
• More data, as well as more sources of data, creates the
need for better consolidated information management
software and (big) data analysis tools.
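To make the deduplication mentioned in the scale-out bullet above concrete, here is a minimal sketch of content-addressed block storage, the core idea behind such systems; the 4 KB block size and in-memory dictionaries are illustrative stand-ins, not any vendor's design:

    import hashlib

    BLOCK = 4096   # illustrative block size
    store = {}     # content hash -> one physical copy of each unique block
    recipes = {}   # object name -> ordered list of block hashes

    def put(name, data):
        """Store an object, keeping only one copy of each unique block."""
        hashes = []
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # duplicate blocks cost nothing extra
            hashes.append(digest)
        recipes[name] = hashes

    def get(name):
        """Reassemble an object from its block recipe."""
        return b"".join(store[h] for h in recipes[name])

    put("vm1.img", b"x" * 8192)
    put("vm2.img", b"x" * 8192)   # identical content, no new blocks stored
    print(len(store))              # 1 unique block backs 4 logical blocks
    assert get("vm2.img") == b"x" * 8192

Real systems add persistence, collision handling, and distribution of blocks across nodes, but the capacity savings come from exactly this one-copy-per-unique-block trick.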
These challenges aren’t insurmountable even on tight
budgets, thanks to the proliferation of solid state, scale-out
systems, cloud gateways, and storage virtualization. Still,
it’s a balancing act because these days, compromise is a
dirty word — just look at Washington. In the storage world,
this means app developers and users want it all: blazing
performance and unlimited, dirt-cheap capacity. Solid
state storage has done more to improve performance than
any other technology since the hard disk, particularly for
random I/O. In fact, Radhika Krishnan, VP of marketing at
Nimble Storage, calls flash the most disruptive digital storage technology ever. We agree, and would add that the
benefits of flash mean more workloads are moving to silicon as the price per bit declines. But Krishnan says storage
buyers tell her they want both performance and capacity,
and given the capacity growth rates we cited above, it’s
hard to argue.
As flash densities increase and costs plummet, some industry experts, like John Scaramuzzo, general manager
and VP of SanDisk’s enterprise group, argue that we’re on
the verge of all-flash datacenters. Although flash memory prices have declined to less than $1 per gigabyte for consumer SSDs, enterprise-class solid state drives still cost two to three times more, writes Howard Marks, founder of storage consultancy and independent test lab DeepStorage, in a blog post.
But aside from the price differential, Scaramuzzo says
the hard drive scaling technology is “running out of gas.”
He contends we’ll see 2.5-inch (SFF) SDDs in the range of
4 TB to 8 TB by year’s end and 16 GB soon afterward. According to Kevin Dibelius, Micron’s director of enterprise
storage, that figure is conservative — 25-TB drives are on
Micron’s near-term road map using existing 16-nanometer
process technology.
Eye-opening claims to be sure. But does flash capacity
still come at the cost of reliability and data protection? Yes.
Krishnan points out that high-density flash designs
achieve capacity at the cost of media endurance (how
many times a memory cell can be written to before failing)
and data resilience (the ability to tolerate random bit errors and dying memory cells). Essentially, solid state can
give you hundreds of terabytes of capacity or disk-like
longevity, but not both at the same time. To get technical,
this is because of an inherent wear-out mechanism in flash
technology caused by repeated tunneling of electrons
through an insulating layer at relatively (at least for semiconductors) high voltages. The primary means of improving flash density, and hence capacity, has been through
tighter fabrication geometries and multilevel cell (MLC) designs, in which each memory location can store more than 1 bit of information. However, both approaches
compromise reliability and durability.
This is why most major hardware makers are betting on
hybrid arrays. It’s an optimization exercise that attempts
to deliver the best of flash and hard disks in one box, since
the cost factor still favors the latter and will for the foreseeable future.
“If we assume that SSD prices will fall at their historical
35% annual rate and hard drive prices will fall at a more
conservative 15%, by 2020 the enterprise SSD will cost almost 13 cents a gigabyte, more than the hard drive costs
today, while the 20 TB drives the hard drive vendors are
promising for 2020 will cost under 3 cents a gigabyte,” says
Marks. “The price difference will have shrunk from 30-to-1
to around 5-to-1.”
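Marks' projection is plain compound-decline arithmetic. A quick sketch, with starting prices back-solved from the 30-to-1 ratio he cites (so treat them as assumptions; only the decline rates are his):

    # Project $/GB under constant annual price declines, ~2013 to 2020.
    ssd, hdd = 2.70, 0.09   # assumed starting prices, roughly 30:1
    for year in range(7):
        ssd *= 1 - 0.35     # SSDs fall 35% a year
        hdd *= 1 - 0.15     # hard drives fall 15% a year
    print(f"SSD ${ssd:.2f}/GB, HDD ${hdd:.3f}/GB, ratio {ssd / hdd:.0f}:1")

Run it and you land within rounding of his numbers: about 13 cents, under 3 cents, and a ratio near 5-to-1.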
Is that enough parity to make Scaramuzzo’s vision of a
disk-free datacenter a reality anytime soon? Probably not.
However, flash will earn a growing role in storage architectures by delivering much faster I/O (particularly for reads)
and lower power consumption compared with spinning
disks. Adding capacity doesn’t significantly increase power
draw, particularly since intelligent SSD controllers can automatically idle inactive NAND chips and instantly light them
up when needed. This rapid access to cold data is another
factor Scaramuzzo cites for flash in Tier 2 and 3 data stores.
“Flash provides instant access,” he says. “You don’t need to
keep an HDD spinning all the time.”
Micron already sees high demand for lower-durability
drives for what Dibelius terms “read often, write few” applications. Enterprise SSDs can withstand a lot of write cycles, typically about 10 fills a day. This equates to rewriting
every memory cell 10 times a day, or almost 4,000 times per year.
But Dibelius says customers, particularly big cloud service
providers, are asking for less-expensive devices good for
as little as one fill per day, a write rate well within the specifications of consumer-grade MLC, or even TLC (triple-level
cell) flash chips. He says high-durability (10-fill) drives using much more
costly SLC (single-level cell) devices
now account for only about 10% of
Micron’s demand. Reiterating Scaramuzzo’s point about idle data, Dibelius attributes much of the growth
in read-intensive applications to the
use of flash as cold storage.
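The endurance ratings behind those fill counts are usually quoted as drive writes per day (DWPD) over a warranty period, and converting a rating to a total write budget is simple arithmetic. A minimal sketch, using a hypothetical 800 GB drive:

    def terabytes_written(capacity_gb, dwpd, years):
        """Total write budget (TBW) implied by a DWPD endurance rating."""
        return capacity_gb * dwpd * 365 * years / 1000

    # Hypothetical 800 GB drive over a five-year warranty:
    print(terabytes_written(800, 10, 5))  # 10 fills/day: 14,600 TBW
    print(terabytes_written(800, 1, 5))   # 1 fill/day: 1,460 TBW

The order-of-magnitude gap between those two write budgets is why cloud buyers asking for one-fill drives can accept much cheaper MLC or TLC flash.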
Most of our respondents are using
solid state selectively. Of the 63% deploying or evaluating SSDs, 60% use or will use them to improve overall server performance. The majority of those using
flash within servers, 83%, opt for SSDs,
with only 14% using PCIe cards, unchanged since last year. Just 20% have
SSDs or flash cards installed in more
than 30% of their servers, also unchanged, a fact we find surprising
given price erosion of 30% to 50% in
flash drives over the past year.
Although we didn’t ask about allsolid state systems, in light of other data, we suspect most
respondents are opting for a hybrid approach, in which flash
acts as either a fast cache or automatically tiered repository
for hot data and I/O-intensive applications. A 30/30 rule applies to the 62% of respondents using or evaluating flash in
disk arrays: 30% have deployed or will deploy flash in more
than 30% of their systems, up two points. Our data’s distribution implies that flash penetration in enterprise arrays is
around 25% of all arrays across respondents, little changed
in the past year.
Of course, these arrays probably have more flash (and
disk) capacity, but the data underscores that IT organizations are using flash judiciously. Indeed, one of the advantages of hybrid storage systems is the ability to easily and
nondisruptively vary the mix between flash and disk in a
particular system. Krishnan says that somewhere between
10% and 40% of typical workloads need data in a hot
(flash) tier, but by tuning the ratio for a particular application mix, IT can make hybrids perform like an all-flash system. She says Nimble’s systems can migrate data from
HDD to flash in less than a millisecond.
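The tuning Krishnan describes is, at bottom, a cache hit-rate calculation: if a fraction h of I/Os lands in the flash tier, blended access time is a weighted average of the two media. With illustrative figures of 0.5 ms for flash and 8 ms for disk, a 95% hit rate gets a hybrid within a whisker of all-flash latency:

\[
t_{\text{eff}} = h\,t_{\text{flash}} + (1-h)\,t_{\text{disk}} = 0.95 \times 0.5\ \text{ms} + 0.05 \times 8\ \text{ms} \approx 0.88\ \text{ms}
\]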
Good News, Bad News
The good news for vendors is that, while SSDs accounted for just 4% of capacity sold in 2012, that share is growing at 60% per year versus 10% to 20% for disk-based systems,
according to Coulter of the 451 Group. Keep that pace for
a few years, and solid state becomes an appreciable fraction of purchased storage capacity.
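To see how quickly "appreciable" arrives, compound Coulter's rates for five years; the sketch below assumes disk grows at 15%, the midpoint of his 10% to 20% range:

    # SSD share of capacity sold, starting from 4% in 2012.
    ssd, disk = 4.0, 96.0   # relative capacity sold in 2012
    for year in range(5):
        ssd *= 1.60         # solid state grows 60% a year
        disk *= 1.15        # disk grows at the assumed 15% midpoint
    print(f"SSD share after 5 years: {ssd / (ssd + disk):.0%}")  # ~18%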
The bad news for standalone flash startups? We’re already seeing signs of market disruption, notably the disastrous reception Violin Memory’s IPO received on Wall
Street (down 22% from its initial price on the first day of
trading). It went on to post terrible first-quarter financial
results, which led to the sacking of its CEO and defection
of several other C-level executives. Violin's stock now sells for less than half its late-September IPO price, and
the company appears to be in a death spiral. And it’s not
the only one: OCZ filed for bankruptcy in December, and
Fusion-io, hit with declining revenue and quarterly losses,
is also playing musical chairs in the C-suite.
The problem for pure-plays is that big storage vendors
have made major flash acquisitions in the last couple of
years, solidifying their solid state portfolios. Cisco nabbed
Whiptail, EMC bought XtremIO and ScaleIO, IBM scooped
up Texas Memory, SanDisk bought Smart Storage (which
is how Scaramuzzo got there), and Western Digital added
STEC and Virident. HP is also beefing up its 3PAR line with
all-flash arrays.
The only notable names left off this list are NetApp and
Seagate, both of which are regularly rumored to be sniffing around the remaining point-product companies —
Kaminario, Nimbus Data, Pure Storage, Skyera, and SolidFire — for a possible buyout.
Jay Kidd, senior VP and CTO at NetApp, calls it the
“Hunger Games for the all-flash market” and says the coming year will be a much more difficult environment for
flash startups. An exception: companies that promote
hybrid architectures, where flash systems are important
acceleration components between application servers and
bulk disk (or cloud) storage tiers. This category includes
Nimble, which has gone public, and privately held Avere
and Tegile. While they appear to have more sustainable
niches, 2014 will be a pivotal year for flash specialists of all
stripes. Can they compete with big storage vendors that
have built or acquired substantial flash portfolios? Do they
venture into the public equity markets? Or do they try to
hold out for another year as independents, hoping their
VC funding and unique technology sustain them and catalyze growth in customers and sales?
Our take is that enterprise buyers will be wary of the orphan risk and inclined to stay with established vendors that
have plugged the flash gaps in their portfolios. Whether
cloud and other service providers, which often value vendor
innovation more than stability, will make up for that lost
business remains to be seen.
Bridge To The Cloud
Speaking of the cloud, in many shops, enterprise IT itself
is starting to look a lot like a cloud service provider, aiming
to compete in cost and agility. NetApp’s Kidd sees this phenomenon as one of the top IT trends in 2014: “The bar of
service is being set by external providers,” he says.
Now IT needs to up its game, and as more organizations
build private clouds with infrastructure shared across multiple business units and applications, enterprise storage
architectures will start to emulate those of public cloud
providers. This has several ramifications:
• Scale-out systems will displace monolithic arrays.
• Distributed file systems running on cloud stacks will
turn compute nodes into storage nodes.
• Use of object storage will increase.
• Cloud storage will become an extension of on-premises systems.
Scale-out storage has been a feature in our past few
storage reports, and like solid state, it's gone from niche to mainstream. All the major storage vendors offer some form of scale-out hardware/software system. Coraid CEO Dave Kresse says scale-out is particularly popular with service providers because they have massive volume requirements, unpredictable scale and rates of growth, widely variable and unpredictable workloads, and a need to tightly control opex costs and administrative overhead.
We see all of these as challenges in enterprise storage environments, too. While only 41% of our respondents have deployed scale-out systems, we expect that number to increase in the coming years as the architectural pendulum swings from monolithic storage systems to distributed pools composed of locally attached storage to support private and hybrid clouds.
A major catalyst is the growing interest in, and maturation of, OpenStack, fueled by a combination of factors: widespread vendor support, particularly endorsement by major open source arms merchants like Red Hat; rapid and predictable upgrade cycles; a growing ecosystem of commercially supported, enterprise-grade products, including Cloudscaling, Mirantis, Piston Cloud, Red Hat, and SUSE; and maturing distributed storage software like Ceph, Gluster, and Lustre.
Market Watch: Storage
All is not bleak in the storage business. IDC's estimates show success varies widely depending on which end of the storage market a vendor is in. The firm segments storage products into three price ranges: less than $25,000, $25,000 to $250,000, and more than $250,000.
"In 2012, the entry category was responsible for [approximately] $4.7 billion but has good growth — over 10%," says Chuck Hollis, chief strategist at VMware. "The midrange category is larger but growing more slowly: $11.4 billion, with a humble 1.6% growth rate. And big iron is responsible for $7.6 billion and a very modest growth rate of under 1%. EMC has a big lead in the top two price bands, and is currently behind Dell and fighting it out with HP for the No. 2 spot in the entry-level category."
Looking just at the network-attached SAN, NAS, and unified storage products, Gartner paints a more encouraging picture, particularly for NAS filers. Its May estimate for the 2012 market calculated revenue growth of almost 20%: "The pure NAS market continues to grow at a much faster rate (15.9%) than the overall external controller-based (ECB) block-access market (2.3%), in large part due to the expanding NAS support of growing vertical applications and virtualization."
Breaking the market down by protocol, NAS filers represent almost two-thirds of the business, with FC SAN hardware just under 30% but growing about 13 points faster, at 28.7% year over year. Our data corroborates the trend, as the share of respondents using NAS for 50% or more of their storage rose three points, to 26%. Still, Gartner's estimates point to a slowdown in 2013, with its first-quarter figures showing virtually no revenue growth over the prior year.
Gartner's findings largely mirror IDC's standings, which place EMC, NetApp, IBM, and HP, in that order; however, Oracle and Netgear occupy the fifth and sixth positions. Illustrative of strength at the low end of the market, Netgear leads, by far, in total capacity and units sold, with over 3 PB and 137,000 units shipped in 2012, about 7% more in capacity and 67% more in units than EMC.
According to Gartner's analysis, the raw capacity growth rate, at 15.8%, was the lowest in recent years, largely because of budget constraints and the necessity to find greater storage efficiencies throughout the infrastructure. However, the firm expects that capacity growth will continue in the range of 35% to 45%, and it may yet exceed 50% in future years.
Note that the Gartner and IDC estimates for capacity growth are almost identical, at around 16%, and align with our data. Taking a weighted average of our data on storage under active management and using the midpoint of our capacity ranges as a proxy for all responses, we calculate capacity growth at about 20%.
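For transparency, here is the shape of that calculation as a minimal sketch; the buckets and shares are illustrative placeholders, not the actual survey tabulation:

    # Midpoint-weighted growth estimate from bucketed survey answers.
    # Buckets: (low %, high %, share of respondents) -- illustrative only.
    buckets = [(0, 10, 0.17), (10, 25, 0.56), (25, 50, 0.17), (50, 100, 0.10)]
    growth = sum((lo + hi) / 2 * share for lo, hi, share in buckets)
    print(f"weighted average growth: {growth:.1f}%")  # 24.5% with these inputs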
Cloud Control
Enterprise use of cloud storage is arguably already mainstream, as 47% of our respondents, up eight points since last
year, use these services. Primary uses are application-specific
needs, backup and recovery, and archiving. Data security remains the biggest concern, identified by 86% of our respondents. The four-point uptick is understandable given the
steady stream of NSA spying revelations. Adoption is broad
but not deep. Of those with knowledge of their organizations’ cloud storage allocations, 88% say half or less of their
incremental storage capacity will go to the cloud.
Today, most cloud storage is tied to specific applications
designed to work with cloud services, like a SaaS backup
and DR product, or custom PaaS app with a cloud back
end. However, as IT becomes more comfortable with the
technology and economics of cloud storage, most will
want to integrate it with existing on-premises storage
pools using a cloud gateway.
So far, only 21% of our respondents overall have deployed gateways. Of those using cloud storage, half use
some form of gateway, either a software appliance like the AWS Storage Gateway or hardware boxes such as those from Ctera Networks, Panzura, Riverbed, or TwinStrata. For the rest, this is
one area to address in 2014.
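A gateway makes cloud capacity look like a local NFS, SMB, or iSCSI target, so existing applications need no changes. Shops skipping the gateway and integrating directly write to the provider's API instead; a minimal sketch with the era's boto library against S3, with bucket and key names being hypothetical:

    import boto  # classic Python SDK for AWS

    conn = boto.connect_s3()  # credentials from the environment or ~/.boto
    bucket = conn.get_bucket("example-archive-bucket")   # hypothetical bucket
    key = bucket.new_key("backups/2014-02/db-dump.tgz")  # hypothetical key
    key.set_contents_from_filename("db-dump.tgz")        # upload a local file

The trade-off is that direct API integration ties the application to one provider, which is exactly the lock-in a gateway abstracts away.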
The Year Of Software-Defined Storage?
If 2013 was the year of software-defined networks, this
could be the year storage follows suit. There are compelling commercial examples of comprehensive storage
virtualization, including Dell’s Compellent Fluid Data Architecture, EMC’s ViPR, Hitachi’s VSP, IBM’s SmartCloud Virtual Storage Center, and NetApp’s Clustered Data OnTap.
Some vendors incrementally turned block or file virtualization products into comprehensive systems; others, like
EMC’s ViPR, are grandiose visions that have only recently
materialized into actual products.
Half of our respondents virtualize some storage, though
legacy apps mean few can put all their resources in a single virtual pool. As the technology matures and more IT
organizations see the benefits of infrastructure independence and information mobility, where applications and
their data aren’t tied to a specific piece of hardware, that
number should increase.
Key SDS drivers are private clouds, already in production
or testing by 77% of respondents to our 2014 Private
Cloud Survey, and more complex data management associated with big data analytics applications. Both fuel the
need for more efficient and flexible data management and
in turn increase interest in SDS.
If you’re a VMware shop, start incorporating virtual volumes and VSAN. These can allow you to turn VMs into distributed storage nodes for virtualized applications. Also,
test the SDS software your incumbent storage provider offers. For EMC customers, this means ViPR, an admittedly
big, complex product. Start small — identify a few usage
scenarios and subsets of ViPR features, like vCenter integration using VASA or centralized storage management.
HP, IBM, and NetApp shops should take a similarly incremental approach. Don’t try to boil the ocean with your first
SDS deployment, but do turn up the heat.
OpenStack users should start testing distributed file systems and use OpenStack’s storage services to make
servers do double duty as both compute and storage
nodes.
Although the heady days of double-digit revenue
growth won’t be returning to the storage business any
time soon, that doesn’t mean the industry is stuck in a rut.
Far from it. The move from magnetic to solid state storage
is as significant and disruptive to enterprise IT as the mobile device revolution has been to the consumer side. But
solid state isn’t the only big shift in storage. The translation
of storage into an abstract virtual resource is following the
same model that upended the server business.
The big-picture goal for 2014: Redesign your storage infrastructure into a dynamic, resilient, scalable, cloud-like
pool that incorporates solid state, storage virtualization,
and cloud services in ways that are transparent to applications and end users. It’s a tall order and a multiyear project, but there’s no time like the present to get started.
Posted with permissions from the February 2014 issue of InformationWeek, United Business Media LLC. Copyright 2014. All rights reserved.
For more information on the use of this content, contact Wright’s Media at 877-652-5295.