
Introducing AutoCache 2.0
December 2013
Company Profile
• Team
– Rory Bolt, CEO - NetApp, EMC, Avamar, Quantum
– Clay Mayers, Chief Scientist - Kofax, EMC
– Rich Pappas, VP Sales/Bus-Dev – DDN, Storwize, Emulex, Sierra Logic
• Vision
– I/O intelligence in the hypervisor is a universal need
– Near term value is in making use of flash in virtualized servers
• Remove I/O bottlenecks to increase VM density, efficiency, and performance
– Must have no impact on IT operations; no risk to deploy
– A modest amount of flash in the right place makes a big difference
• Product
– AutoCache™ hypervisor-based caching software for virtualized servers
Solution: AutoCache
– I/O caching software that plugs into ESXi in seconds
– Inspects all I/O
– Uses a PCIe flash card or SSD to store hot I/O
– Read cache with write-through and write-around
– Transparent to VMs
• No Guest OS agents
– Transparent to the storage infrastructure
[Diagram: AutoCache in the VMware ESXi host, with CPU utilization shown]
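As a rough illustration of the caching modes named above, here is a minimal sketch of a read cache with write-through and write-around write policies. The class, method names, and block-address granularity are illustrative assumptions, not AutoCache's actual implementation; `backend` and `flash` are assumed to expose simple `read`/`write` calls.

```python
# Minimal sketch of a host-side read cache with write-through / write-around
# write policies. Illustrative only; not the actual AutoCache implementation.

class ReadCache:
    def __init__(self, backend, flash, policy="write-around"):
        self.backend = backend   # shared datastore (always the authoritative copy)
        self.flash = flash       # local flash device used as the cache
        self.policy = policy     # "write-through" or "write-around"
        self.index = {}          # block address -> location on flash

    def read(self, addr):
        if addr in self.index:                      # hit: serve from flash
            return self.flash.read(self.index[addr])
        data = self.backend.read(addr)              # miss: go to shared storage
        self.index[addr] = self.flash.write(data)   # keep a copy for next time
        return data

    def write(self, addr, data):
        self.backend.write(addr, data)              # writes always reach storage
        if self.policy == "write-through":
            self.index[addr] = self.flash.write(data)  # refresh the cached copy
        else:                                       # write-around
            self.index.pop(addr, None)              # just drop any stale copy
```

In both modes the datastore always holds the authoritative copy, which is what keeps the cache transparent to VMs and to the storage infrastructure.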
AutoCache Results
• Up to 2-3X VM density improvement
• Business Critical Apps accelerated
• Transparent to ESXi value add like vMotion, DRS, etc.
• Converts a modest flash investment into huge value
Simple to deploy
1. Buy flash device, download PD software
2. Single 'vib' to install AutoCache
3. Install the flash-based device
– Turn off the server to install a PCIe card, then power on
– or partition the SSD
• Global cache relieves the I/O bottleneck in minutes
– All VMs accelerated regardless of OS, without the use of agents
– Reporting engine displays the results over time
[Chart annotation: "Proximal Data turned on"]
AutoCache in vCenter
Uniquely Designed for Cloud Service Providers
• Broad support
– Any backend datastore
– Effectively any flash device, plus Hyper-V soon
• Adaptive caching optimizes over time
– Takes “shared” environment into consideration
– Latencies and cache access affect other guests
• Easy retrofit
– No reboot required
• Pre-warm to maintain performance SLA on vMotion
• Role Based Administration
PD vMotion Innovation
1. AutoCache detects the vMotion request
[Diagram: source host and target host, each running VMware ESXi with AutoCache, connected to shared storage]
PD vMotion Innovation
2. Pre-warm VM metadata is sent to the target host to fill its cache in parallel from shared storage
Key Benefit: Minimized time to accelerate the moved VM on the target host
[Diagram: source host sends pre-warm metadata to the target host; the target host fills its cache from shared storage]
PD vMotion Innovation
3. Upon the vMotion action, AutoCache atomically and instantly invalidates the VM's metadata on the source host
Key Benefits: Eliminates the chance of cache coherency issues and frees up source host cache resources
[Diagram: after vMotion, the source host's cache entries for the moved VM are invalidated]
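To make the pre-warm sequence on these slides concrete, here is a minimal sketch of the idea. The function names, metadata shape, and transport are illustrative assumptions, not Proximal Data's actual protocol.

```python
# Illustrative sketch of cache pre-warm on vMotion; not the actual AutoCache
# protocol. Only metadata crosses between hosts, never cached data.

def on_vmotion_detected(vm, source_cache, target_host, shared_storage):
    # 1. The source host detects the vMotion request and collects the VM's
    #    cache metadata: which blocks are hot, not the blocks themselves.
    hot_blocks = source_cache.blocks_for_vm(vm)

    # 2. The target host fills its own cache in parallel from shared storage,
    #    so the VM arrives on an already-warm cache.
    target_host.prewarm(vm, hot_blocks, shared_storage)

def on_vmotion_completed(vm, source_cache):
    # 3. When the move completes, the source host atomically invalidates its
    #    entries for that VM: no coherency risk, and the flash space is freed
    #    for the VMs that remain on the source host.
    source_cache.invalidate_vm(vm)
```

Because the data itself is always re-read from shared storage, the source and target caches never hold conflicting copies of a block.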
Role Based Administration
• Creates specific access rights for the AutoCache vCenter plug-in
• Enables the customer to modify:
– Host-level cache settings
– VM cache settings
• AutoCache retains statistics for a month
[Diagram: Customer A and Customer B virtual machines on shared CSP infrastructure, with AutoCache in the VMware ESXi host]
RBA in practice
• CSP creates a vCenter account for the customer
• With RBA, the CSP can also grant AutoCache rights to customer accounts, allowing the customer to control caching for their VMs
• Enables varying degrees of rights for the customer (see the sketch below)
– One user at the customer might see all VMs
– Another might see a subset of VMs
– Yet another might see some VMs, but only have rights to certain aspects
• For example, a user could turn the cache on or off, but not change caching on a device
• Usage statistics are available for the last month, and may be exported for billing purposes
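As a rough sketch of how such graded rights could be modeled, the snippet below maps hypothetical roles to permitted AutoCache operations; the role and operation names are assumptions for illustration, not the actual AutoCache/vCenter permission model.

```python
# Hypothetical role-to-rights mapping for per-customer cache control.
# Names are illustrative; not the actual AutoCache permission model.

ROLES = {
    "customer_admin":  {"view_all_vms", "toggle_vm_cache", "change_host_cache"},
    "customer_user":   {"view_own_vms", "toggle_vm_cache"},
    "customer_viewer": {"view_own_vms"},
}

def is_allowed(role: str, operation: str) -> bool:
    """Return True if the given role may perform the operation."""
    return operation in ROLES.get(role, set())

# A "customer_user" can toggle caching for a VM but cannot touch the
# host-level cache device settings:
assert is_allowed("customer_user", "toggle_vm_cache")
assert not is_allowed("customer_user", "change_host_cache")
```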
Pricing and Availability
• AutoCache 2.0 is available now from resellers
– CMT, Sanity Solutions, Champion, CDW, AllSystems, BuyOnline, Pact Informatique, Commtech, etc.
– Direct reps support channel in target markets in US
– OEM Partnerships coming in Sept
• Support for
– ESXi 4.1, 5.0, 5.1, 5.5 (when available), Hyper-V in 2013
– PCIe cards from LSI, Micron, Intel, and server vendors
– SSD from Micron, Intel, and server vendors
• AutoCache Suggested Retail Price
– Prices start at $1000 per host for cache sizes under 500GB
Summary
The Proximal Data Difference
• Innovative I/O Caching Solution
– Specifically Designed for Virtualized Servers & FLASH
– Dramatically improved VM Density and Performance
– Fully Integrated into VMware Utilities and Features
– Transparent to IT Operations
– Simple to Deploy
– Low Risk
– Cost Effective
The simplest, most cost-effective
use of Enterprise Flash
Thank You
Outline
• Brief Proximal Data Overview
• Introduction to FLASH
• Introduction to Caching
• Considerations for Caching with FLASH in a Hypervisor
• Conclusions
Considerations…
• Most caching algorithms were developed for RAM caches
– No consideration for device asymmetry
• Placing data in a read cache that is never read again has negative effects on performance and device lifespan (see the admission sketch below)
• Hypervisors have very dynamic I/O patterns
– vMotion affects I/O load and raises coherency issues
– Adaptive algorithms are very beneficial
– Must consider the "shared" environment
• Latencies and cache access affect other guests
• Quotas/allocations may have unexpected side effects
• Hypervisors are I/O blenders
– The individual I/O patterns of guests are aggregated; devices see a blended average
• Write-Around provides the best performance/wear trade-off
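Flash has limited write endurance, so one common way to avoid caching data that is read only once is to admit a block to flash only on its second miss. The sketch below illustrates that idea; it is a generic two-touch admission filter, not AutoCache's actual adaptive algorithm.

```python
# Two-touch admission: a block is written to flash only on its second miss,
# so single-touch data costs neither flash writes nor cache space.
# Generic illustration; not AutoCache's actual adaptive algorithm.

class TwoTouchAdmission:
    def __init__(self):
        self.seen_once = set()   # block addresses that have missed exactly once

    def should_cache(self, addr):
        if addr in self.seen_once:
            self.seen_once.discard(addr)
            return True           # second miss: worth spending a flash write
        self.seen_once.add(addr)
        return False              # first miss: serve from disk, don't cache yet

policy = TwoTouchAdmission()
assert policy.should_cache(42) is False   # first access: not admitted
assert policy.should_cache(42) is True    # second access: admitted to flash
```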
Write-Around Cache: Cost-Benefit Analysis
• Accelerates ~70% of I/O (based on data supplied by Microsoft to IEEE/SNIA): the majority of the benefit with the fewest changes and the best use of FLASH; maintains disk coherency (must support vMotion)
• Modest investment, 300–400GB is usually enough: ~$1K of flash per server, far less than another ESXi host
• No disruption to IT operations: minimal operational cost to install, configure, or maintain; no Guest OS agents when caching is in the hypervisor
• Low risk: no need to re-think data protection schemes in HA environments; operational transparency is possible
• Minimal disruption to performance possible: requires a fast cache load, and pre-warm upon vMotion
• Quick ROI: within minutes, you can start adding more VMs to your environment
Complications of Write-Back Caching
• Writes from VMs fill the cache
• Cache wear is increased
• The cache ultimately flushes to disk
• The cache withstands write bursts
• The cache overruns when disk flushes can't keep up
• If you are truly write bound, a cache will not help (see the arithmetic below)
• A Write-Back cache handles write bursts and benchmarks well, but it is not a panacea
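A quick back-of-the-envelope calculation, using purely illustrative numbers, shows why a write-back cache only postpones the problem when the sustained write rate exceeds what the backing disks can absorb.

```python
# Illustrative arithmetic: time until a write-back cache overruns when the
# sustained write rate exceeds the flush rate to disk. All numbers are made up.

cache_gb    = 400    # flash set aside for write-back caching
ingest_mb_s = 500    # sustained writes arriving from the VMs
flush_mb_s  = 200    # what the backing datastore can absorb

net_fill_mb_s   = ingest_mb_s - flush_mb_s         # cache grows at this rate
seconds_to_full = cache_gb * 1024 / net_fill_mb_s  # ~1365 s

print(f"Cache is full after about {seconds_to_full / 60:.0f} minutes; "
      "after that, writes are once again limited by the disks.")
```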
Complications of Write-Back Caching (continued)
1. Write I/O is mirrored to the destination host (over a new, dedicated I/O channel for write-back cache sync)
2. The write is acknowledged by the mirrored host
In either case, network latency limits performance
[Diagram: source host mirroring its write-back cache to a mirrored host over a dedicated I/O channel, alongside the existing HA storage infrastructure (shared storage with a performance tier)]
Disk Coherency…
• Cache flushes MUST preserve write ordering to preserve disk coherency (see the sketch below)
• A hardware copy must first flush the cache
• Hardware snapshots do not reflect the current system state without a cache flush
• Consistency groups must now take the write-back cache state into account
• How is backup affected?
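To illustrate the write-ordering requirement above, here is a minimal sketch of a flusher that drains dirty data strictly in the order the guest wrote it, so that stopping at any point leaves the disk in a state the guest actually passed through. The data structures are hypothetical, not a production write-back implementation.

```python
# Illustrative write-back cache that flushes in write order. Hypothetical
# structures; a real implementation would coalesce, batch, and journal.

class OrderedWriteBackCache:
    def __init__(self, disk):
        self.disk = disk
        self.log = []        # (addr, data) in the exact order the guest wrote
        self.latest = {}     # addr -> newest data, used to serve reads

    def write(self, addr, data):
        self.log.append((addr, data))
        self.latest[addr] = data

    def flush(self):
        # Drain oldest-first: interrupting the flush part-way still leaves the
        # disk crash-consistent, because no newer write lands before an older one.
        while self.log:
            addr, data = self.log.pop(0)
            self.disk.write(addr, data)
        self.latest.clear()
```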
Hypervisor Write-Back Cache: Cost-Benefit Analysis
• Accelerates up to 100% of I/O, some of the time: unless writes are truly bursty, you can't flush the data to disk fast enough
• Higher flash investment, 1TB or more is common: ~$4K/server for flash, mirrored, plus the bandwidth to replicate
• Some disruption to IT operations: operational cost to install/configure replication; impact on backup/DR procedures and on shutdowns
• High availability: adds complexity and risk for recovery from failure
• Minimal disruption to performance: requires a fast cache load; pre-warm is no longer necessary if the cache is distributed
• Quick ROI: operational within minutes, but higher hardware costs
Outline
• Brief Proximal Data Overview
• Introduction to FLASH
• Introduction to Caching
• Considerations for Caching with FLASH in a Hypervisor
• Conclusions
Evaluating caching
• Results are entirely workload dependent
• Benchmarks are good for characterizing devices
• It is VERY hard to simulate production with benchmarks
• Run your real workloads for meaningful results
• Run your real storage configuration for meaningful results
• Steady state is different from initialization
– Large caches can take days to fill (see the estimate below)
• Beware caching claims of 100s or 1000s of times improvement
– It is possible, just not probable
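For a sense of scale on fill time, here is a rough estimate with purely illustrative numbers; the real fill rate is set by how much unique data the workload actually touches.

```python
# Illustrative estimate of how long a large cache takes to reach steady state.
# Numbers are made up; only first-touch (unique) misses fill the cache.

cache_gb         = 800   # flash cache size
unique_miss_mb_s = 5     # average rate of first-touch reads filling the cache

hours_to_fill = cache_gb * 1024 / unique_miss_mb_s / 3600
print(f"About {hours_to_fill:.0f} hours ({hours_to_fill / 24:.1f} days) to fill")
```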
FLASH Caching in Perspective
• Flash will be pervasive in the Enterprise
– Ten years in the making, but deployment is just beginning now
• Choose the right amount in the right location
– Modest flash capacity in the host as a read cache: the best price/performance and the lowest risk/impact
– More flash capacity in the host as a write-back cache can help for specific workloads, but at substantial cost/complexity/operational impact
– Large-scale, centralized write-back flash cache in arrays that leverages existing HA infrastructure and operations: highest cost, highest performance, medium complexity, low impact to IT