FAST Cache configuration best practices
Article Number:000073184 Version:21
Key Information
Audience: Level 30 = Customers
Article Type: Break Fix
Last Published: Tue May 13 10:56:34 GMT 2014
Validation Status: Final Approved
Summary: Rules of thumb for configuring FAST Cache for best performance and highest availability.
Article Content
Impact: FAST Cache configuration best practices.
How many Enterprise Flash drives (EFDs) can be added to FAST Cache?
How to minimize the chances of dual drive failures in FAST Cache.
Issue: Performance
Not all available EFDs can necessarily be added into FAST Cache.
Environment: Product: VNX Series
Product: VNX Series with MCx
Product: CLARiiON CX4 Series
Product: Celerra Network Server (CNS)
EMC Firmware: FLARE Release 30 or later
EMC SW: FAST Cache
Hardware: Enterprise Flash Drive (EFD)
Hardware: Solid State Drive (SSD)
Cause:
Care must be taken when locating the FAST Cache drives, because each flash drive can sustain much higher I/O levels than an FC or SAS drive. Placing all
of these drives together in a single DAE can therefore create a bottleneck on that bus and would also decrease the availability of the system.
Only limited combinations of Enterprise Flash drives are supported by FAST Cache on the CX4 Series.
Unisphere and Naviseccli will not allow drives to be added to or removed from an existing FAST Cache configuration. To reconfigure FAST Cache, it must
first be disabled/destroyed (see article 14561), which flushes any "dirty" data back to disk (data where the block in FAST Cache is more recent than the version on the
original LUN). FAST Cache can then be created again with the new configuration.
For FAST Cache to be used, a FAST Cache enabler must be present on the array. Under the software list in Unisphere, this will appear as '-FASTCache'.
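For example, the enabler can be confirmed and an existing configuration removed from the Secure CLI. This is only a sketch: the SP address shown is a placeholder, and article 14561 should be followed for the full destroy procedure.
naviseccli -h <SP IP> ndu -list
(check the list of installed packages for the -FASTCache enabler)
naviseccli -h <SP IP> cache -fast -destroy
(flushes any dirty blocks back to disk and removes the existing FAST Cache, after which it can be created again with the new drive set)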
Change: FAST Cache enabler must be present on the array.
Resolution: Best Practices considerations for which LUNs and Pools to enable FAST Cache on:
The FAST Cache driver has to track every I/O in order to calculate whether a block needs to be promoted to FAST Cache. This adds to the SP CPU
utilization. Disabling FAST Cache for LUNs unlikely to need it will reduce this overhead and can therefore improve overall performance levels. This is
especially true for secondary mirror and clone destination LUNs, which would gain no significant performance benefit from using FAST Cache. In the
case of Thin or Thick LUNs, FAST Cache has to be disabled at the Pool level, so it is best to have separate Pools for LUNs which do
not need FAST Cache. This in turn gives a higher performance gain to the LUNs that really do need FAST Cache.
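As a sketch of the Pool-level setting (the SP address and Pool ID below are placeholders, and the exact switch names should be confirmed against the CLI reference for the installed release):
naviseccli -h <SP IP> storagepool -modify -id 2 -fastcache off
The equivalent setting is also available in the Pool properties in Unisphere.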
After a few hours, FAST Cache will be using nearly all of the available capacity of its flash drives. For every new block that is added into FAST
Cache, another must be removed; the blocks removed are those that have been least recently accessed. If there is not much FAST Cache
capacity (in comparison to LUN capacity), blocks that are frequently accessed at certain times of day may have been removed by the time they are
accessed again the next day. Restricting the use of FAST Cache to the LUNs, or Pools, which need it the most increases the probability of a
frequently accessed block still being in FAST Cache the next time it is needed.
For VNX, upgrading to Release 32 (05.32.000.5.xxx) can improve the performance of FAST Cache. Release 32 recovers performance significantly
faster after a FAST Cache drive failure, compared to Release 31. Release 32 also avoids promoting small-block sequential I/O to FAST Cache. These
workloads access the same 64 KB block multiple times (due to the small writes), making it incorrectly appear to be a frequently accessed block, which then gets
unnecessarily promoted into FAST Cache (in Releases 30 and 31).
LUNs which have small-block sequential I/O (such as SnapSure dvol access) are not recommended for FAST Cache, but upgrading to Release 32
(05.32.000.5.xxx) will reduce the chances of this I/O causing performance issues for FAST Cache. FAST Cache should also not be used for the private
LUNs of layered applications, or for secondary mirrors. These include the Reserved LUN Pool (RLP; see article 53261); clone private
LUNs (CPL); RecoverPoint Journal LUNs; Write Intent Logs (WIL); SnapView Clones and MirrorView Secondary Mirrors (see article 76584). Again
though, the upgrade to Release 32 can prevent secondary mirrors from causing unnecessary FAST Cache promotions.
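The release an array is currently running can be checked before planning such an upgrade, for example (the SP address is a placeholder):
naviseccli -h <SP IP> getagent
The Revision field in the output shows the FLARE / VNX OE version (for example 05.32.000.5.xxx).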
For VNX Series arrays with MCx (Release 33), the best practices for drive placement are as follows (an example command is shown after this list):
Spread the drives as evenly as possible across the available backend busses.
Add no more than 8 FAST Cache flash drives per bus, including any unused flash drives for use as hot-spares.
Use DAE 0 on each bus when possible for flash drives, as this has the lowest latency (see below).
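As an illustration of these rules only (the drive IDs are assumed, and the same cache -fast -create syntax shown later in this article is used), four FAST Cache drives on an array with at least three backend buses could be placed in Enclosure 0 of buses 1 and 2:
naviseccli -h <SP IP> cache -fast -create -disks 1_0_0 2_0_0 1_0_1 2_0_1 -mode rw -rtype r_1
This keeps well under 8 flash drives per bus and uses the first DAE on each bus.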
Best Practices considerations for VNX and CX4 FAST Cache drives on arrays running Release 30 to 32:
On a CX4, avoid adding more than four FAST Cache drives per bus. It is acceptable to use bus 0 for some, but preferably not all, of the FAST Cache
drives (unless the CLARiiON only has a single backend bus).
The FAST Cache drives can sustain very heavy workloads, but if they are all on the same bus, they could completely saturate this bus with I/O. This
would especially be an issue if the drives were all on bus 0, because this is used to access the vault drives. Spread the FAST Cache drives over as many
buses as possible. This is less of a problem for a VNX series array, because it has six times more backend bandwidth per bus than the CX4 series.
All the drives in FAST Cache must be of the same capacity; otherwise the workload on each drive would rise proportionately with its capacity. In other
words, a 200 GB drive would have double the workload of a 100 GB drive.
On a VNX, try to put Flash drives in the first DAE on any bus (i.e. Enclosure 0). This is because the drive I/O has to pass through the LCC of each DAE
between the drive and the SP and each extra LCC it passes through will add a small amount of latency (under 1ms). While this latency would be
negligible for a normal hard disk, it is more significant for flash drives and so the flash drives are usually placed in the first slots on any bus. This was
not the case for CX4, because for Fibre Channel, all I/O had to pass through every LCC on a bus, regardless of which DAE the drive was in. Therefore
the enclosure number for flash drives on a CLARiiON did not affect performance.
Avoid placing drives in the DPE or DAE-OS enclosure (0_0) that will be mirrored with drives in another enclosure. For example, do not mirror a disk in
0_0 with a disk in 1_0. Please see article 82823 for further information about the use of Bus 0 Enclosure 0.
The order that the drives are added into FAST Cache is important, because that will dictate which drives are Primary and which are Secondary. The first
drive added is the first Primary, the next drive added is its Secondary, the third drive is the Primary for the second mirror and so on.
The highest level of availability can be achieved by ensuring that the Primary and Secondary of each RAID 1 pair are on different buses (i.e. not just different
enclosures on the same bus). However, this does add to the complexity of configuring FAST Cache, because the configuration would need to be done
using the Secure CLI. This is not considered necessary for most implementations, but it further reduces the chances of multiple drive failures
(which are already unlikely).
For example, in a VNX with four backend buses and eight drives to be added to FAST Cache, one example layout would be:
1_0_0 (P1); 2_0_1 (S1); 2_0_0 (P2); 3_0_1 (S2); 3_0_0 (P3); 1_0_1 (S3); 1_0_2 (P4); 2_0_2 (S4) (drives added in this order)
Where the notation P2 means the Primary and S2 means the Secondary in the second RAID 1 etc. (Do not include the labels such as (S1) in
a bind command.)
The above example would ensure that the Primary and Secondary mirrored drives share a minimum number of components. It is easiest to be
certain of the order in which these drives are bound by using the following CLI command to bind the FAST Cache: Naviseccli cache -fast -create
-disks disksList -mode rw -rtype r_1
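For example, assuming a placeholder SP address, the eight-drive layout above would be bound with the drives listed in that order (bus_enclosure_slot, separated by spaces, without the (P)/(S) labels):
naviseccli -h <SP IP> cache -fast -create -disks 1_0_0 2_0_1 2_0_0 3_0_1 3_0_0 1_0_1 1_0_2 2_0_2 -mode rw -rtype r_1
The order of the -disks list determines which drives become Primary and Secondary, as described above.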
One way to check which RAID 1 pairs are being used by FAST Cache is to use Unisphere Service Manager to open an
arrayconfig.xml file (contained within an SPcollect). The Available Storage tab will show the private RAID 1 Groups used by FAST Cache, as well as all
the other drives in the array. The Private RAID Group number is shown there, and the two RAID 1 drives which share the same RAID Group
number are paired with each other.
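The current FAST Cache configuration can also be reviewed directly from the Secure CLI, for example (the SP address is a placeholder):
naviseccli -h <SP IP> cache -fast -info
The output shows the current state of FAST Cache, including the drives in use.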
Here are some general rules of thumb for using FAST Cache:
Do upgrade a VNX to at least 05.32.000.5.207, to improve the performance and resiliency of FAST Cache and FAST VP (see Release Notes for details
on support.emc.com).
Do not enable FAST Cache on any reserved / private LUNs, apart from metaLUN components. This includes the Clone Private LUNs and the Write
Intent Logs.
Do think about the type of data on a LUN and consider whether FAST Cache is needed. For example, log files are generally written and read sequentially across
the whole LUN, so these would not be good candidates for FAST Cache. Avoiding the use of FAST Cache on unsuitable LUNs reduces the
overhead of tracking I/O for promotion to FAST Cache.
Do not enable FAST Cache on SnapView clones or the RLP (Reserved LUN Pool). This will involve disabling FAST Cache for Pools which contain
these LUNs (see the example below).
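To review which Pools currently have FAST Cache enabled, the Pool listing can be checked. This is a sketch: the SP address is a placeholder and the exact output fields depend on the installed release.
naviseccli -h <SP IP> storagepool -list
With the FAST Cache enabler installed, the details reported for each Pool include whether FAST Cache is enabled for that Pool.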
Notes: For details about which workloads do not work well on FAST Cache, see article 15075.
Instructions for analyzing FAST Cache performance using NAR files and Unisphere can be found in article 15606.
FAST Cache configuration options for Next Generation VNX (MCx VNX OE Release 33)
System Model | Maximum FAST Cache Capacity | Recommended minimum drive count | Permissible Flash Drive Count for FAST Cache
VNX5200 | 600 GB* | 2 | Up to 6 (in multiples of 2) (100 or 200 GB EFD)
VNX5400 | 1000 GB* | 4 | Up to 10 (in multiples of 2) (100 or 200 GB EFD)
VNX5600 | 2000 GB* | 4 | Up to 20 (in multiples of 2) (100 or 200 GB EFD)
VNX5800 | 3000 GB* | 8 | Up to 30 (in multiples of 2) (100 or 200 GB EFD)
VNX7600 | 4200 GB* | 8 | Up to 42 (in multiples of 2) (100 or 200 GB EFD)
VNX8000 | 4200 GB* | - | Up to 42 (in multiples of 2) (100 or 200 GB EFD)
* Assuming 200 GB EFDs are used for FAST Cache. For 100 GB FAST Cache flash drives, the maximum capacity would be half this amount.
FAST Cache configuration options for VNX (VNX OE Release 31 and 32)
System Model | Maximum FAST Cache Capacity | Recommended minimum drive count | Permissible Flash Drive Count for FAST Cache
VNX5100 | 100 GB | 2 | 2
VNX5300 | 500 GB | 4 | 2*, 4, 6, 8 or 10 (100 GB EFD); or 2* or 4 (200 GB EFD)
VNX5500 | 1000 GB | 4 | All VNX5300 configurations, plus: 12, 14, 16, 18 or 20 (100 GB EFD); 6, 8 or 10 (200 GB EFD)
VNX5700 | 1500 GB | 8 | All VNX5500 configurations, plus: 22, 24, 26, 28 or 30 (100 GB EFD); 12 or 14 (200 GB EFD)
VNX7500 | 2100 GB | 8 | All VNX5700 configurations, plus: 32, 34, 36, 38, 40 or 42 (100 GB EFD); 16, 18 or 20 (200 GB EFD)
* Not recommended
The above configurations are all RAID 1 Read/Write FAST Cache.
FAST Cache configuration options for CX4
System Model | Drive Count (RW) | EFD type (GB) | Listed Cache Capacity (GB)
CX4-120 / NS-120 | 2 | 100 | 100
CX4-240 | 2 | 100 | 100
CX4-240 | 4 | 100 | 200
CX4-480 / NS-480 | 2 | 100 | 100 *
CX4-480 / NS-480 | 4 | 100 | 200
CX4-480 / NS-480 | 4 | 200 | 400 *
CX4-480 / NS-480 | 8 | 100 | 400
CX4-480 / NS-480 | 8 | 200 | 800
CX4-960 / NS-960 | 2 | 100 | 100 *
CX4-960 / NS-960 | 4 | 100 | 200 *
CX4-960 / NS-960 | 4 | 200 | 400 *
CX4-960 / NS-960 | 8 | 100 | 400
CX4-960 / NS-960 | 8 | 200 | 800 *
CX4-960 / NS-960 | 10 | 200 | 1000
CX4-960 / NS-960 | 20 | 200 | 2000
RW = Read/Write RAID 1 FAST Cache, which is the only supported FAST Cache configuration. Read Only mode for FAST Cache is only available with an RPQ.
* Not available in the initial release of FAST Cache, but available in current versions of FLARE. FLARE 04.30.000.5.525 is the recommended FLARE version for all
CX4 arrays.
73 GB drive options are not listed here, as these drives are no longer available.
Article Metadata
Product: CLARiiON CX4 Series, VNX FAST Suite, Unisphere, Navisphere Agents/CLI, VNX5100, VNX5200, VNX5300, VNX5400, VNX5500, VNX5600, VNX5700,
VNX5800, VNX7500, VNX7600, VNX8000, VNX2 Series
Subject: VNX FAST Cache best practice
Hardware Platform: VNX Block VNX File VNX Unified
Software: FAST Cache
Connectivity Type: NAS; SAN Attach Fibre; iSCSI
Keywords: FAST Cache, Performance, Best Practice, EFD, Flash, Cache
External Source: Primus
Primus/Webtop solution ID: emc251589