EMC CLARiiON Best Practices For
Capacity and Performance Planning
Jonie Chen
Sr. Technology Consultant
EMC Corporation
© Copyright 2008 EMC Corporation. All rights reserved.
Agenda
• Choose the Right System
• Choose Disk Type(s)
• Choose RAID Type(s) and RAID Size(s)
• How Many Disks Can I Add to the Array?
• Global Hot Spares
• Tools for Capacity Planning
• Performance Monitoring / Planning
• Tools for Performance Tuning
• Other Useful Tools
• Summary
Choose the Right System
What we need to know
• Total Capacity
• Total Host Connections
• Connectivity (iSCSI and/or FC)
• I/O Profile
  – Primarily random or sequential I/O
  – Predominant I/O request sizes
  – Read/write ratio
  – Total workload -> total throughput (IOPS) / bandwidth (MB/s)
    ▪ Convert host load to disk load based on your RAID type (see the sketch below)
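As a minimal sketch of that conversion, assuming the common rule-of-thumb small-block write penalties (2 for RAID 1/0, 4 for RAID 5, 6 for RAID 6) and a made-up workload:

```python
# Sketch: convert a host I/O load into back-end disk I/O, assuming the
# usual small-block write penalties (RAID 1/0 = 2, RAID 5 = 4, RAID 6 = 6).
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def disk_iops(host_iops, read_ratio, raid_type):
    """Back-end IOPS = host reads + write penalty x host writes."""
    reads = host_iops * read_ratio
    writes = host_iops * (1.0 - read_ratio)
    return reads + WRITE_PENALTY[raid_type] * writes

# Hypothetical workload: 5,000 host IOPS, 70% reads.
for raid in ("raid10", "raid5", "raid6"):
    print(raid, round(disk_iops(5000, 0.70, raid)))   # 6500, 9500, 12500
```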
Choose the Right System
What we need to know… cont’d
• Determine the total disk workload for random IOPS
• Planned future growth for capacity, connectivity, and workload
Choose the Right System
EMC CLARiiON Model Comparisons
| Description | AX4 Basic | AX4 Expanded | CX3-10 | CX3-20 | CX3-20C | CX3-40 | CX3-40C | CX3-80 |
| Max host connections per array | 20 | 128 | 128 | 256 | 256 | 256 | 256 | 512 |
| Memory per array | 1 GB (1 SP) / 2 GB (2 SPs) | 2 GB | 2 GB | 4 GB | 4 GB | 8 GB | 8 GB | 16 GB |
| Front-end FC ports per array | 2 x 4 Gb per SP | 4 x 4 Gb | 4 x 4 Gb | 12 x 4 Gb | 4 x 4 Gb | 8 x 4 Gb | 4 x 4 Gb | 8 x 4 Gb |
| Front-end iSCSI ports | 2 x 1 Gb per SP | 4 x 1 Gb | 4 x 1 Gb | N/A | 8 x 1 Gb | N/A | 8 x 1 Gb | N/A |
| Back-end ports per array | 1 x SAS expansion port per SP | 2 x SAS expansion ports | 2 x 4 Gb | 2 x 4 Gb | 2 x 4 Gb | 8 x 4 Gb | 4 x 4 Gb | 8 x 4 Gb |
| Maximum # drives | 12 | 60 | 60 | 120 | 120 | 240 | 240 | 480 |
| Maximum capacity | 12 TB | 60 TB | 60 TB | 114 TB | 114 TB | 234 TB | 234 TB | 474 TB |
| Max LUNs | 512 | 512 | 512 | 1024 | 1024 | 2048 | 2048 | 2048 |
Choose the Right System
Fibre Channel Drives
• IOPS workloads
  – Choose the system that can service the number of drives you need
  – Sweet spots below are at the rule-of-thumb 70% disk utilization for sustained workload (a selection sketch follows the table)
  – Expect lower performance for iSCSI systems
• Bandwidth workloads
  – Ranges below are for actively concurrent drives, 64–512 KB I/O
| | CLARiiON CX3-10 | CLARiiON CX3-20 | CLARiiON CX3-40 | CLARiiON CX3-80 |
| IOPS | 60 disks, 10,500 IOPS | 120 disks, 21,500 IOPS | 180 disks, 32,000 IOPS | 300 disks, 68,000 IOPS |
| Read bandwidth | 40 disks, 600 MB/s | 40–60 disks, 650 MB/s | 60–80 disks, 1100 MB/s | 60–80 disks, 1400 MB/s |
| Write bandwidth | 40 disks, 200 MB/s | 40 disks, 400 MB/s | 40 disks, 460 MB/s | 60 disks, 530 MB/s |
| Write bandwidth (cache bypass) | 60 disks, 350 MB/s | 60 disks, 470 MB/s | 80 disks, 880 MB/s | 120 disks, 1120 MB/s |
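A small sketch of how the IOPS sweet spots above might be used to shortlist a platform for a given back-end IOPS requirement; the dictionary simply restates the table values (already derated to roughly 70% disk utilization), and the example requirement is hypothetical:

```python
# Sketch: shortlist CX3 models whose FC-drive IOPS "sweet spot" (from the
# table above) meets or exceeds the required back-end IOPS.
SWEET_SPOT_IOPS = {            # model: (disks at sweet spot, sustained IOPS)
    "CX3-10": (60, 10_500),
    "CX3-20": (120, 21_500),
    "CX3-40": (180, 32_000),
    "CX3-80": (300, 68_000),
}

def candidate_models(required_disk_iops):
    """Return models whose sweet-spot IOPS covers the requirement."""
    return [model for model, (_, iops) in SWEET_SPOT_IOPS.items()
            if iops >= required_disk_iops]

print(candidate_models(25_000))   # ['CX3-40', 'CX3-80']
```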
Choose the Right System
SATA Drives
• IOPS workloads
  – We do not suggest ATA drives for sustained random IOPS
  – Sweet spots below are at the rule-of-thumb 70% disk utilization for sustained workload
• Bandwidth workloads
  – Ranges below are for actively concurrent drives, 64–512 KB I/O
| | CLARiiON CX3-10 | CLARiiON CX3-20 | CLARiiON CX3-40 | CLARiiON CX3-80 |
| Read bandwidth | 60 disks, 600 MB/s | 40–60 disks, 600 MB/s | 80–100 disks, 1100 MB/s | 80–100 disks, 1400 MB/s |
| Write bandwidth | 60 disks, 200 MB/s | 60 disks, 400 MB/s | 60 disks, 460 MB/s | 80 disks, 500 MB/s |
| Write bandwidth (cache bypass) | 60 disks, 300 MB/s | 60 disks, 450 MB/s | 120 disks, 800 MB/s | 180 disks, 1100 MB/s |
Choose the Right System
Design Recommendations
• Evenly distribute your workload across the available spindles, back-end loops, and storage processors as much as possible (see the sketch below)
• Create a separate set of RAID groups (with different drive types when required) for each access profile
  – Physically separate mostly sequential workloads from mostly random workloads
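As a trivial sketch of that even-distribution idea (the LUN names, RAID-group IDs, and SP labels below are made up for illustration):

```python
# Sketch: spread new LUNs round-robin across RAID groups and alternate the
# default SP owner, so no single group or SP carries a disproportionate load.
from itertools import cycle

def assign_luns(lun_names, raid_groups, sps=("SPA", "SPB")):
    rg_cycle, sp_cycle = cycle(raid_groups), cycle(sps)
    return [(lun, next(rg_cycle), next(sp_cycle)) for lun in lun_names]

# Hypothetical example: eight LUNs over four RAID groups and two SPs.
for lun, rg, sp in assign_luns([f"lun_{i}" for i in range(8)],
                               ["RG0", "RG1", "RG2", "RG3"]):
    print(lun, rg, sp)
```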
Choose Disk Type(s)
Disk-Drive Characteristics
| | Fibre Channel 15K | Fibre Channel 10K | SATA |
| RPM | 15,000 | 10,000 | 7,200 |
| Interface speed (Gb/s) | 4 | 2 | 4 |
| Service time (ms) | 5.5 | 7 | 14 |
| IOPS, low concurrency | 180 | 140 | 70 |
| IOPS, high concurrency | 300 | 200 | 100 |
| MB/s range | 15–40 | 12–30 | 8–20 |
| Optimizing ability | *** | *** | ** |
| Capacity (GB) | * | ** | *** |
| Cost/GB | * | ** | *** |
| Cost/IOPS | *** | ** | * |
| Power/GB | * | ** | *** |

*** The drive with the most stars wins that category!
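Combining the per-drive IOPS figures above with the 70% utilization rule of thumb gives a quick spindle-count estimate; a minimal sketch, with a hypothetical workload:

```python
# Sketch: estimate the spindle count needed for a random workload from the
# low-concurrency per-drive IOPS figures in the table above, derated to the
# rule-of-thumb 70% sustained utilization. Workload numbers are hypothetical.
import math

DRIVE_IOPS = {"fc15k": 180, "fc10k": 140, "sata": 70}

def drives_needed(disk_iops, drive_type, utilization=0.70):
    """Drive count so that each drive runs at or below the target utilization."""
    per_drive = DRIVE_IOPS[drive_type] * utilization
    return math.ceil(disk_iops / per_drive)

print(drives_needed(9_500, "fc15k"))   # 76 drives for 9,500 back-end IOPS
```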
Choose Disk Type(s)
• Fibre Channel drives are enterprise class
  – Rated for 100% duty cycle, 24x7, even in the most demanding environments
  – Many design factors contribute to high performance and reliability
    ▪ Reinforced chassis and higher-precision mechanics
    ▪ Dual-CPU design allows real-time adjustments for disk tracking, magnetic fields, etc.
    ▪ Better filtering, lubrication, surface preparation, and materials
  – USE THESE DRIVES for mission-critical and sustained random-access workloads
• SATA drives are best for replication, streaming, and backup
  – Modest random workloads are OK
  – Not the place for the mission-critical billing application
Choose Disk Type(s)
Drives by Application
• Database
  – OLTP, DSS
  – High performance
• Media
  – Ingest, playback
  – Lower performance
• Backup
  – To disk, from disk
  – Lower performance
• Clones, mirrors
  – Match primary drives if possible for the best, most consistent performance
• FC 15K rpm: does everything well, lower capacity
• FC 10K rpm: a good choice for many applications
• SATA: cost effective, high capacity, good sequential bandwidth, but too slow for time-critical applications and heavy loads
Choose RAID Type(s)
• Random I/O, mostly reads
  – For an equal number of disks, RAID 5 and RAID 6 perform very similarly to RAID 1/0 in read-heavy (80%) environments
  – At equal capacity, RAID 1/0 offers higher performance because it provides more drives
  – With the same number of data disks, RAID 6 performs slightly better than RAID 5: 4+2 RAID 6 outperforms 4+1 RAID 5 because it has an extra disk to service reads
• Random I/O, mostly writes
  – RAID 1/0 is the performance champ for random write I/O
  – RAID 6 offers higher resiliency to disk failure than RAID 5
  – RAID 5 performs better than RAID 6
  – At equal capacity, RAID 1/0 offers much higher performance
Choose RAID Type(s)
Choose a RAID Type, cont’d
• High bandwidth (large, sequential I/O)
  – RAID 5 and RAID 6 perform slightly faster than RAID 1/0
    ▪ RAID 1/0 is good, but RAID 5 and 6 have fewer drives to synchronize: N+1 or N+2, not N+N
• Recommended RAID widths (see the sketch below)
  – 4+1 or 8+1 for RAID 5
  – 8+2 or 10+2 for RAID 6
  – 4+4 for RAID 1/0
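To make the capacity trade-off behind these widths concrete, here is a small sketch comparing usable versus raw capacity for the recommended widths, assuming a hypothetical 300 GB drive:

```python
# Sketch: usable vs. raw capacity for the recommended RAID widths, assuming
# a hypothetical 300 GB drive. RAID 5 gives up one drive to parity, RAID 6
# gives up two, and RAID 1/0 mirrors half of its drives.
DRIVE_GB = 300

layouts = {                      # name: (data drives, protection drives)
    "RAID 5 (4+1)":   (4, 1),
    "RAID 5 (8+1)":   (8, 1),
    "RAID 6 (8+2)":   (8, 2),
    "RAID 6 (10+2)":  (10, 2),
    "RAID 1/0 (4+4)": (4, 4),
}

for name, (data, protection) in layouts.items():
    raw = (data + protection) * DRIVE_GB
    usable = data * DRIVE_GB
    print(f"{name}: {usable} GB usable of {raw} GB raw")
```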
How Many Disks Can I Add to an Array?
• Is performance limited by the drives or by the array?
  – Does each added drive add more total performance?
• Sweet spot
  – The "perfect" number of drives for maximum total array performance
  – Assuming an evenly spread load
Global Hot Spares
• How many hot spare drives should I have?
  – Rule of thumb: one hot spare for every 30 drives (see the sketch below)
• Where should I put them?
  – There are no dedicated slots for hot spares on a CLARiiON array; hot spares can be in any location provided they follow the general rules for drives and DAEs
  – It is recommended to spread hot spare drives across the back-end loops
• Can different types of drives hot spare for each other?
  – Fibre Channel and SATA-II drives can hot spare for other Fibre Channel and SATA-II drives
  – Only an ATA drive can be a hot spare for other ATA drives; FC and SATA-II drives may not hot spare for ATA drives, and vice versa
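A one-line helper captures the rule of thumb; the 30-drive ratio comes from the slide, and the example drive count is illustrative:

```python
# Sketch: rule-of-thumb hot-spare count, one spare per 30 drives (rounded up).
import math

def hot_spares(total_drives, drives_per_spare=30):
    return math.ceil(total_drives / drives_per_spare)

print(hot_spares(120))   # 4 hot spares for a 120-drive array
```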
Tools for Capacity Planning
Navisphere Task Bar: Reporting
• Provides configuration reports
  – Array
  – Server
  – CLARiiON-platform applications
• Offers array-capacity utilization reports
• Report results can be viewed via the Web or saved as a CSV/XML file

Available for all CLARiiON CX3 UltraScale and CX300/CX500/CX700 arrays with FLARE 24 and up
Tools for Capacity Planning
Navisphere Task Bar: Report Sample
• Provides information on capacity utilization
• Enables visualization of the relationship between DAEs, disks, and RAID groups
Tools for Capacity Planning
Example Report
Performance Monitoring / Tuning
Performance Monitoring – Navisphere Analyzer
• Collect real-time and historical CLARiiON performance data
• Monitor key statistics for trending analysis and performance planning
• What do we look at? (see the sketch below)
  – Resource utilization
    ▪ SP
    ▪ Cache
    ▪ Drive
  – Queue length
  – Response time
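As an illustration of the kind of trending check this enables, here is a sketch that scans exported performance samples and flags busy objects; the CSV column names and the 70% / 20 ms thresholds are assumptions for the example, not Analyzer's actual export format:

```python
# Sketch: flag objects (LUNs, SPs) whose average utilization or response time
# trends high. The CSV columns (object, utilization_pct, response_time_ms)
# and the thresholds are illustrative assumptions.
import csv
from collections import defaultdict

def flag_hot_objects(path, util_limit=70.0, rt_limit_ms=20.0):
    samples = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            samples[row["object"]].append(
                (float(row["utilization_pct"]), float(row["response_time_ms"])))
    flagged = {}
    for obj, vals in samples.items():
        avg_util = sum(u for u, _ in vals) / len(vals)
        avg_rt = sum(r for _, r in vals) / len(vals)
        if avg_util > util_limit or avg_rt > rt_limit_ms:
            flagged[obj] = (round(avg_util, 1), round(avg_rt, 1))
    return flagged

# Example: print(flag_hot_objects("analyzer_samples.csv"))
```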
Performance Monitoring / Tuning
Cache Optimization – Watermarks & SP Dirty Pages Percentage
• Watermarks: your tool for overall cache optimization
  – Global setting; defaults to 80 (high) / 60 (low)
  – High watermark
    ▪ Sets the amount of reserve cache space
    ▪ Acts as the trigger for watermark flushing
  – Low watermark
    ▪ Where watermark flushing stops
    ▪ Sets the number of dirty pages retained for re-hits

[Figure: SP dirty-page percentage over time, showing forced flush at 100%, high-watermark flush starting at 80%, and idle flush below the 60% low watermark]

To absorb bursts, we reserve some cache space: the area above the high watermark.
Set the watermark lower if you hit too many forced flushes (a behavior sketch follows).
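A toy sketch of the flushing behavior described above; the watermark values follow the slide's defaults, while the dirty-page changes are invented purely to show the state transitions:

```python
# Toy model of the flushing states: idle flushing below the low watermark,
# watermark flushing from the high watermark (80) down to the low (60), and
# a forced flush when dirty pages reach 100%. Rates are invented.
HIGH_WM, LOW_WM = 80.0, 60.0

def flush_mode(dirty_pct, flushing):
    if dirty_pct >= 100.0:
        return "forced flush"        # cache full: host writes wait on flushes
    if dirty_pct >= HIGH_WM:
        return "watermark flush"     # triggered at the high watermark
    if flushing and dirty_pct > LOW_WM:
        return "watermark flush"     # keep flushing until the low watermark
    if dirty_pct < LOW_WM:
        return "idle flush"          # opportunistic, keeps dirty pages for re-hits
    return "idle"

dirty, flushing = 55.0, False
for delta in (10, 15, 15, 10, -25, -20, -5):      # net change in dirty-page %
    dirty = max(0.0, min(100.0, dirty + delta))
    mode = flush_mode(dirty, flushing)
    flushing = mode in ("watermark flush", "forced flush")
    print(f"dirty pages {dirty:5.1f}%  ->  {mode}")
```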
Tools for Performance Tuning
• Navisphere Quality of Service Manager (NQM)
• MetaLUN Provisioning (LUN Expansion)
• Virtual LUN Technology (LUN Migration within an array)
Tools and Features for Performance Tuning
Navisphere Quality of Service Manager (NQM)
Array-based tool
• Control application I/O at a granular level
• Monitor application and system performance
  – Response time
  – Bandwidth
  – Throughput
• Use archive logs to view performance over time
• Set and achieve specific performance targets
  – LUN by LUN (or MetaLUN)
  – I/O type (read or write)
  – I/O size
• Limit resources given to lower-priority applications
• Schedule different policies to be enforced at different times

[Figure: available throughput shared by high-, medium-, and low-priority applications, with and without Quality of Service Manager]
Tools for Performance Tuning
MetaLUN Provisioning (LUN Expansion)
Improves capacity utilization
• Provision capacity just as needed
• Pay as you grow, and expand while keeping the application online

Enables storage expansion via two modes (see the sketch below)
• Concatenate for capacity
• Stripe for both performance and capacity
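A rough sketch of why the two modes differ for performance: capacity grows the same either way, but striping engages every component's spindles while concatenation tends to drive one component at a time. The per-drive IOPS figure and component sizes are illustrative assumptions:

```python
# Sketch: a striped metaLUN spreads I/O across all component spindles, while
# a concatenated one is roughly limited to the component currently being hit.
PER_DRIVE_IOPS = 180   # hypothetical FC 15K rule-of-thumb figure

def metalun_estimate(component_spindles, mode):
    capacity_spindles = sum(component_spindles)
    if mode == "stripe":
        active = capacity_spindles              # I/O striped over every spindle
    else:                                       # "concatenate"
        active = component_spindles[0]          # roughly one component at a time
    return capacity_spindles, active * PER_DRIVE_IOPS

for mode in ("stripe", "concatenate"):
    total, iops = metalun_estimate([5, 5, 5], mode)
    print(f"{mode}: {total} spindles of capacity, ~{iops} IOPS")
```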
Tools for Performance Tuning
Virtual LUN Technology (LUN Migration within an array)
• Solves capacity and performance provisioning at the same time
  – Move data to a larger volume, different types of disks, different RAID levels, etc., to solve both capacity and performance problems
  – Optimize performance up or down
• Moves data within a CLARiiON CX system without application disruption
  – The new LUN assumes the identity of the source LUN
Other Useful Tools
Standardized Procedures: CLARiiON Procedure Generator
• Available through Powerlink
  – Install new hosts
  – Add storage capacity
  – Install or replace an HBA
  – Install or upgrade software
• Updated frequently
Other Useful Tools / Features
CLARiiON Tools on Powerlink
Customized Documentation – Planning
Summary
Planning/Designing vs. Tuning
• Understand your workload
• Consider both capacity and performance requirements when planning and designing the storage solution
• Match the drive type to the I/O profile (workload characteristics)
• Evenly distribute the workload over all resources in the array and over time
• EMC "out of the box" settings are designed to satisfy the majority of workloads, provided the design fits the workload

Tuning comes after implementation on a sound design
• Monitor array performance and adjust parameters when tuning is needed
  – For example, read/write cache and watermarks
• Fine-tune with Navisphere Quality of Service Manager (NQM)
• Use Virtual LUN technology to move data to a different RAID type, disk type, or loop
• Use MetaLUN technology to add spindles for additional capacity and/or workload
References
• EMC CLARiiON Best Practices for Fibre Channel Storage: FLARE Release 26 Firmware Update – Best Practices Planning
• Navisphere Quality of Service Manager (NQM) Applied Technology
• EMC CLARiiON Global Hot Spares and Proactive Hot Sparing – Best Practices Planning
• CLARiiON CX3 Performance Best Practices – Dave Zeryck
• CLARiiON Best Practices Guide Explained: Key Performance Topics – John Freeman