What's new in V7.3 Software

Storwize Family
V7.3 Technical Update
May 8, 2014
Byron Grossnickle
Consulting I/T Specialist
NA Storage Specialty Team
Storage Virtualization
Bill Wiegand
Consulting I/T Specialist
NA Storage Specialty Team
Storage Virtualization
© 2014 IBM Corporation
Agenda
 Next Generation Storwize V7000 Hardware
 What’s new in V7.3 Software
Storwize V7000 Hardware Refresh: 2076-524 Control Enclosure



 Control enclosure is 2U, the same physical size as the previous model
 Front view looks identical to the V5000; the control enclosure comes only in the 24-drive SFF
configuration, while expansion enclosures come in both SFF and LFF configurations
 Back layout is very different to make room for the more powerful canisters
[Rear view callouts: Canister 1, Canister 2, PSU 1, PSU 2]
Storwize V7000 Hardware Refresh: Rear View
[Rear view callouts: dual controller/node canisters, SAS expansion ports, 1GbE ports, Technician port, host interface slots, compression accelerator slot, two PSUs]
Storwize V7000 Hardware Refresh: Exploded View
[Exploded view: canisters, PSUs, fan cage, enclosure chassis, midplane, drive cage, drives]
Storwize V7000 Hardware Refresh: Block Diagram of Node Canister
[Block diagram: Intel Ivy Bridge E5-2628L-V2 CPU at 1.9GHz with four 16GB DIMMs; PCIe Gen3 (1GB/s full duplex, 8 lanes) through PLX switches to the HBA slots (8Gb FC or 10GbE), the standard Coleto Creek compression engine and an optional 2nd Compression Acceleration card; standard quad 1GbE plus USB and TPM; a SAS expander with 12Gb/phy, 4-phy links through the mezzanine connector to the control enclosure drives on SAS chain 0 and out to expansion enclosure drives on SAS chains 1 and 2; a 128GB SSD boot drive; and high-speed cross-card communications to the partner canister]
Storwize V7000 Hardware Refresh: Built-in Ports per Node Canister
 There are four 1Gb Ethernet ports which are numbered as shown in the picture
 The T port is the Technician port used for initial configuration of the system
 There are two external 12Gb SAS ports for expansion
– SAS host attach and SAS virtualization are not supported
 There are two USB ports
 There are three slots for expansion cards
Storwize V7000 Hardware Refresh: Expansion Card Options
 There are three expansion slots numbered 1-3 left to right when viewed from the rear
 Ports on a particular card are numbered top to bottom starting with 1
 Supported expansion cards
– Compression pass-through comes standard with the system to enable the on-board compression engine
Slot   Supported cards
1      Compression pass-through, Compression Acceleration card
2      None, 8Gb FC*, 10GbE**
3      None, 8Gb FC*, 10GbE**
* Statement of Direction for 16Gb FC announced
** Only one 10GbE card supported per node canister
Storwize V7000 Hardware Refresh: 8Gb FC Card
 Same adapter as used in current Storwize V7000 Models
– PMC-Sierra Tachyon QE8
– SW SFPs included
– LW SFPs optional
 Up to two can be installed in each node canister for a total of 16 FC ports in the control
enclosure
 16Gb FC Statement of Direction announced
Storwize V7000 Hardware Refresh: 10GbE Card
 The new 4 port 10GbE adapter supports both FCoE and iSCSI
– Can be used for IP replication too
 In V7.3.0 we will only support one 10GbE adapter in each node canister of the 2076-524
 Support for IBM 10Gb optical SFP+ only
 Each adapter port has amber and green coloured LEDs to indicate port status
– Fault LED is not used in V7.3
Green LED   Meaning
On          Link established
Off         No link
 FCoE frame routing (FCF) is performed by a CEE switch or passed through to an FC switch
– No direct attach of hosts or storage to these ports
 Software allows using the FCoE/iSCSI protocols simultaneously as well as IP replication on the
same port
– Best practice is to separate these protocols onto different ports on the card (see the sketch below)
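As an illustration of dedicating a port to one protocol, a hedged sketch of assigning an iSCSI IP address to a single 10GbE port with cfgportip; the node, addresses and port ID used here are hypothetical:

IBM_Storwize:FAB1_OOB:superuser>svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 3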
Storwize V7000 Hardware Refresh: Compression Accelerator Card
 New Storwize V7000 model has one on-board compression accelerator standard and
supports volume compression without any additional adapter installed
– This configuration will have a pass-through adapter in slot 1 to allow the on-board compression
hardware to be utilized
 One additional Compression Accelerator card (see picture) can optionally be installed in
slot 1, replacing the pass-through adapter, for a total of two Compression Accelerator cards
per node canister
Storwize V7000 Hardware Refresh: Memory/CPU Core Allocation – RtC
 For this initial release there will be fixed memory sizes assigned for RtC use based on how
much memory is installed in each node canister
 An additional 32GB of memory can be installed in each node canister
– Currently can only be used by RtC code
– Statement of direction announced to allow use of this extra memory in non-RtC environment
 Memory Allocation when RtC enabled:
Installed RAM   RtC Allocation
32 GB           6 GB
64 GB           6 GB + optional 32 GB upgrade
 CPU Core Allocation when RtC enabled:
                       SVC cores   RtC cores
Compression Disabled   8           0
Compression Enabled    4           4
 This gives a balanced configuration between SVC and RtC performance
– Recommendation for serious RtC use is to add the extra 32GB of memory per node canister
– A second Compression Accelerator is also recommended and requires the extra 32GB of memory
Storwize V7000 Hardware Refresh: Max Performance (One I/O Group)
Uncompressed
                   Previous Storwize V7000   New Storwize V7000
Read Hit IOPS      850,000                   1,300,000
Read Miss IOPS     125,000                   238,000
Write Miss IOPS    25,000                    50,000
“DB-like”          52,000                    100,000

Compressed
                   Previous Storwize V7000   New Storwize V7000
Read Miss IOPS     2,000-44,000              39,000-149,000
Write Miss IOPS    1,100-17,000              22,500-78,000
“DB-like”          1,500-32,700              41,000-115,000
 Compressed performance shows a range depending on I/O distribution
 Compressed performance is better than uncompressed in some cases because of fewer
I/Os to drives and additional cache benefits
Preliminary data: Subject to change before GA
Storwize V7000 Hardware Refresh: Fan Module
 Each control enclosure contains two fan modules for cooling, one per node canister
 Each fan module contains 8 individual fans in 4 banks of 2
 The fan module as a whole is a replaceable component, but the individual fans are not
 There is a new CLI view lsenclosurefanmodule
IBM_Storwize:FAB1_OOB:superuser>svcinfo lsenclosurefanmodule
enclosure_id  fan_module_id  status
1             1              online
1             2              online
Storwize V7000 Hardware Refresh: Internal Battery (1)
 The battery is located within the node canister rather than the PSU in the new model
– Provides independent protection for each node canister
 A 5-second AC power loss ride-through is provided
– After this period, if power is not restored, we initiate a graceful shutdown
– If power is restored during the ride-through period, the node will revert to main power and the
battery will return to the 'armed' state
– If power is restored during the graceful shutdown, the system will revert to main power and
the node canisters will shut down and automatically reboot
 A one second full-power test is performed at boot before the node canister comes online
 A periodic test of the battery (one at a time) is performed within the node canister, only if
both nodes are online and redundant, to check whether the battery is functioning properly
Storwize V7000 Hardware Refresh: Internal Battery (2)
 Power Failure
– If power to a node canister fails, the node canister uses battery power to write cache and
state data to its boot drive
– When the power is restored to the node canister, the system restarts without operator
intervention
– How quickly it restarts depends on whether there is a history of previous power failures
– The system restarts only when the battery has sufficient charge to power the node canister
for the duration of saving the cache and state data again
– If the node canister has experienced multiple power failures, and the battery does not have
sufficient charge to save the critical data, the system starts in service state and does not
permit I/O operations to be restarted until the battery has sufficient charge
 Reconditioning
– Reconditioning ensures that the system can accurately determine the charge in the battery.
As a battery ages, it loses capacity. When a battery no longer has the capacity to protect against
two power-loss events, it reports a battery end-of-life event and should be replaced.
– A reconditioning cycle is automatically scheduled to occur approximately once every three
months, but reconditioning is rescheduled or cancelled if the system loses redundancy. In
addition, a two-day delay is imposed between the recondition cycles of the two batteries in
one enclosure.
Storwize V7000 Hardware Refresh: 2076-24/12F Expansion Enclosure
 The expansion enclosure front looks just like the V5000 enclosures
 The expansion enclosure back looks pretty much like the V5000 enclosures too
Storwize V7000 Hardware Refresh: 2076-24/12F Expansion Enclosure
 Available in 2.5- and 3.5-inch drive models
– 2076 Models 24F and 12F respectively
 Attach to new control enclosure using 12Gbps SAS
 Mix drive classes within enclosure including different drive SAS interface speeds
 Mix new enclosure models in a system even on same SAS chain
 All drives dual ported and hot swappable
Storwize V7000 Hardware Refresh: Expansion Enclosure Cabling
Storwize V7000 Hardware Refresh: SAS Chain Layout
 Each control enclosure supports two
expansion chains and each can connect up
to 10 enclosures
 Unlike previous Storwize V7000 the control
enclosure drives are not on either of these
two SAS chains
– There is a double-width high-speed link to
the control enclosure and SSDs should be
installed in control enclosure
– There is as much SAS bandwidth dedicated to these 24 slots as there is to the
other two chains combined
– The control enclosure internal drives are
shown as being on ‘port 0’ where this
matters
 SSDs can also go in other enclosures if
more than 24 are required for capacity reasons
 HDDs can go in control enclosure if desired
 Mix of SSDs and HDDs is fine too
[Diagram: the node canister's internal SAS links ('SAS port 0', chain 0) connect the control enclosure drives; a SAS adapter provides SAS port 1 (chain 1) and SAS port 2 (chain 2), each attaching expansion enclosures (two shown plus 8 more per chain)]
Clustered System Example – 2 IOGs and Max of 40 SFF Expansion Enclosures
[Diagram: two I/O groups; each control enclosure (internal drives on SAS chain 0) attaches expansion enclosures on SAS chain 1 and SAS chain 2]
© 2014 IBM Corporation
Clustered System Example – 4 IOGs and Max of 40 SFF Expansion Enclosures
[Diagram: four I/O groups; each control enclosure (internal drives on SAS chain 0) attaches expansion enclosures on SAS chain 1 and SAS chain 2]
Clustered System Example – 4 IOGs and Max of 80 LFF Expansion Enclosures
[Diagram: four I/O groups; each control enclosure (internal drives on SAS chain 0) attaches LFF expansion enclosures on its SAS chains]
Technician Port
 Technician port is used for the initial configuration of the system in lieu of a USB stick
– Technician port is marked with a T (Ethernet port 4)
– As soon as the system is installed and the user connects a laptop Ethernet cable to the Technician
port, the welcome panel will appear (same as on SVC DH8)
– The Init tool will not be displayed if there is a problem which prevents the system from
clustering
• e.g. a node canister is in Service state because of an error, or there is a stored system ID because the system was
set up before and the ID was not removed with chenclosurevpd -resetclusterid (see the sketch after this list)
– If there is a problem, the Service Assistant GUI will be shown instead, where the customer can
log on and check the node canister's status
* If the user's laptop has DHCP configured (nearly all do), it will automatically configure itself and bring up the initialization screen
* If DHCP is not configured, set the laptop's Ethernet adapter IP to an address in the range 192.168.0.2 – 192.168.0.20
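A hedged sketch of clearing a stored system ID before re-initialization, as referenced above; running it as a service task (the satask prefix) is an assumption here:

satask chenclosurevpd -resetclusterid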
Hardware Compatibility within the Storwize family
 Expansion Enclosures
– The V7000 Gen2 expansion enclosures can only be used with a V7000 Gen2 control
enclosure
– The V7000 Gen1 expansion enclosures can only be used with a V7000 Gen1 control
enclosure
– The V3x00/V5000/SVC-DH8 and Flex System V7000 expansion enclosures cannot be used
with a V7000 Gen2 control enclosure, and drives cannot be swapped between models either
• Note that Flex System V7000 will not support V7.3
 Control Enclosures
– V7000 Gen2 control enclosures can cluster with V7000 Gen1 control enclosures
– Allows for non-disruptive migration from Gen1 to Gen2 or long-term system growth
– No clustering between V7000 Gen2 and V3x00/V5000 or Flex System V7000
 Remote Copy
– No remote-copy restrictions as we can replicate amongst any of the SVC/Storwize models
 Virtualization
– Fibre Channel and FCoE external storage virtualization with appropriate HBAs
– No SAS host support or SAS storage support with 2076-524
 File Modules
– V7000 Unified will support V7000 Gen2 control enclosures when IFS 1.5 GAs
Agenda
 Next Generation SAN Volume Controller Hardware
 Next Generation Storwize V7000 Hardware
 What’s new in V7.3 Software
Storwize Family Software Version 7.3
 Required for new Storwize V7000 model and new SVC node model
– Existing Storwize V3700/5000/7000 and SVC nodes supported too
 Supports additional expansion for Storwize V3700 and Storwize V5000
– Both systems now support up to 9 expansion enclosures
 Improved licensing model for Storwize V7000 and Storwize V5000
– SVC and Storwize V3700 licensing is unchanged
 New cache design
 Easy Tier v3
 Storage Pool Balancing
Cache Re-Architecture
Why re-architect?
 More scalable for the future
– Required to support more than 8K volumes
– Required to support clusters with more than 8 nodes
– Required for 64-bit user addressing beyond 28 GB
• SVC code only uses 28 GB max today
 Required for larger memory sizes in nodes/canisters
 Required for more CPU cores
 Reduces the number of IOPS that copy services drive directly to the back-end storage
 Required for flush-less FlashCopy prepare
– Allows near-CDP-like capability
 RtC benefits from the cache underneath
 Algorithmic independence
– Allows changes to pre-fetch and destage algorithms without touching the rest of the cache
 Improved debugging capability
Cache Architecture pre-V7.3.x
FWL = Forwarding Layer
[Diagram: pre-V7.3 I/O stack; host I/O enters the Front End and passes through Remote Copy, the single Cache, FlashCopy and Volume Mirror, then down each copy's leg through TP/RtC, Virtualization and RAID 1/5/6/10 to the Backend, with forwarding layers (FWL) between the upper components]
Cache Architecture V7.3.x
FWL = Forwarding Layer
[Diagram: V7.3 I/O stack; host I/O enters the Front End and passes through Remote Copy, the Upper Cache, FlashCopy and Volume Mirror, then down each copy's leg through TP/RtC, a Lower Cache per copy, Virtualization and RAID 1/5/6/10 to the Backend, with forwarding layers (FWL) between the upper components]
Upper Cache
 Simple 2-way write cache between node pair of the I/O group
– This is its primary function
• Receives write
• Transfers to secondary node of the I/O group
• Destages to lower cache
 Very limited read cache
– This is mainly provided by the lower cache
 Same sub-millisecond response time
 Partitioned the same way as the original cache
Lower Cache
 Advanced 2-way write cache between the node pair of an I/O group
– Primary read cache
– Write caching for host I/O as well as advanced-function I/O
 Read/write caching sits beneath the copy services, giving vastly improved performance for FlashCopy, Thin
Provisioning, RtC and Volume Mirroring
SVC Stretch Cluster – Old Cache Design
[Diagram: the write received by the preferred node at site 1 is mirrored over the ISL into the non-preferred node's cache at site 2; on destage, mirror copy 1 is written to storage at site 1 and copy 2 to storage at site 2, so the data is replicated twice over the ISL]
SVC Enhanced Stretch Cluster – New Cache Design
[Diagram: write data with location information is sent once across the ISL into the upper caches (UC) of both nodes, which reply with location; the lower caches (LC_1/LC_2) exchange token write-data messages with location, and each node destages the mirror copy it is preferred for, copy 1 to storage at site 1 and copy 2 to storage at site 2]
Stretch Cluster – Old Cache with compression at both sites
[Diagram: uncompressed write data is mirrored between the node caches (CA); on destage it passes through compression (Cmp) below the cache and compressed write data is written to the MDisks, so the data is replicated twice over the ISL, once uncompressed and once compressed]
Enhanced Stretch Cluster with compression at both sites
[Diagram: uncompressed write data is mirrored into the upper caches (UCA) of both nodes; RtC changes the buffer location and invalidates the UCA location, and compressed write data for copy 1 and copy 2 lands in the lower caches (LCA1/LCA2) on both nodes; each site then destages its local copy, so the data is replicated three times over the ISL, once uncompressed and twice compressed]
Other features/benefits of cache re-design
 Read-Only cache mode
– In addition to the read/write or none modes available today (see the CLI sketch after this list)
 Redesigned volume statistics that are backward compatible with TPC
 Per-volume copy statistics
– Enables drill-down on each of the two copies of a volume
 Switch the preferred node of a volume easily and non-disruptively with a simple command
– Prior to 7.3 you had to use NDVM to try to change the preferred node non-disruptively
– Had to change I/O group and back again
– Available from the command line only (as of this writing)
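A minimal CLI sketch of the read-only cache mode mentioned in the first bullet, assuming the per-volume cache mode is set with chvdisk and that readwrite, readonly and none are the accepted values (the volume name is hypothetical):

IBM_Storwize:FAB1_OOB:superuser>svctask chvdisk -cache readonly Vol_0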
Changing preferred node in 7.3
 In 7.3 the movevdisk command can be used to change the preferred node within the I/O group
– If no new I/O group is specified, the volume stays in the same I/O group but moves to the
specified preferred node
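A brief sketch of the command; the volume and node names are hypothetical:

IBM_Storwize:FAB1_OOB:superuser>svctask movevdisk -node node2 Vol_0

This keeps Vol_0 in its current I/O group and simply makes node2 its preferred node.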
Upper Cache Allocation - Fixed
 V3700 with 4GB of memory – 128MB
 All other platforms – 256MB
 The rest of the cache is designated to the lower cache
Lower Cache Allocation - BFN
 Attempts to use all cache left after the upper cache and other components have been initialized
 32 GB – RTC not supported
– 28 GB for SVC (the extra 4 GB is used for the Linux kernel, etc.)
• 12 GB used for write cache, the rest for read
 64 GB
– 28 GB for SVC (26 GB if compression is on)
• 12 GB for write cache, remaining for read
– 36 GB for compression
Lower Cache Allocation – Next Gen V7000
 Attempts to use all cache left after the upper cache and other components have been initialized
 32 GB
– 4 GB for compression
– 28 GB for SVC
• 12 GB used for write cache, the rest for read
 64 GB
– 28 GB for SVC
• 12 GB for write cache, remaining for read
– 36 GB for compression
Software Upgrade to 7.3
 Upgrade from 6.4.0 and onwards only
 All volumes (VDisks) are cache-disabled from the beginning of the upgrade until the upgrade commit
– Not a big issue on the SVC since the back-end arrays have cache
– More of a challenge on the V7000, V5000 and V3700 since all reads and writes will go directly to the
back end
• Choose a time of lower activity to upgrade
– Manual upgrade is supported
• Must use applysoftware -prepare
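A hedged sketch of starting a manual upgrade; the package file name is hypothetical and combining -prepare with -file in a single invocation is an assumption:

IBM_Storwize:FAB1_OOB:superuser>svctask applysoftware -prepare -file IBM2076_INSTALL_7.3.0.0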
Easy Tier v3 and Automated Storage Pool Balancing
Easy Tier v3: Support for up to 3 Tiers
 Support any combination of 1-3 tiers
 Flash/SSD is always Tier 0 and only Flash/SSD can be Tier 0
 Note that ENT is always Tier-1 but NL can be Tier-1 or Tier-2
– ENT is Enterprise 15K/10K SAS or FC and NL is NL-SAS 7.2K or SATA
47
Tier 0
Tier 1
Tier2
SSD
ENT
NL
SSD
ENT
NONE
SSD
NL
NONE
NONE
ENT
NL
SSD
NONE
NONE
NONE
ENT
NONE
NONE
NONE
NL
© 2014 IBM Corporation
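Where an externally virtualized MDisk's class is not detected automatically, its tier can be assigned manually; a hedged sketch, assuming chmdisk accepts these tier keywords in this release and using a hypothetical MDisk name:

IBM_Storwize:FAB1_OOB:superuser>svctask chmdisk -tier nearline mdisk3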
Easy Tier v3: Planning
 Deploy flash and enterprise disk for performance
 Grow capacity with low cost disk
 Moves data automatically between tiers
 New volumes will use extents from Tier 1 initially
– If there is no free Tier 1 capacity then Tier 2 will be used if available, otherwise capacity comes from Tier 0
[Diagram: active data migrates up toward the flash arrays; less active data migrates down toward the HDD arrays]
 Best to keep some free extents in the pool; Easy Tier will attempt to keep some free per tier
– Plan for one extent times the number of MDisks in Tiers 0 and 2, plus sixteen extents times the number
of MDisks in Tier 1
– e.g. 2 Tier-0, 10 Tier-1 and 20 Tier-2 MDisks means 182 free extents in the pool
• (2*1) + (10*16) + (20*1) = 182
Easy Tier v3: Automated Storage Pool Balancing
 Any storage medium has a performance threshold:
– The performance threshold means that once IOPS on an MDisk exceed this threshold, I/O response
time will increase significantly
 Knowing the performance threshold we can:
– Avoid overloading MDisks by migrating extents
– Protect an upper tier's performance by demoting extents when the upper tier's MDisks are overloaded
– Balance workload within tiers based on utilization
– Use an XML file to record each MDisk's threshold and make intelligent migration decisions automatically
Easy Tier v3: Automated Storage Pool Balancing
 XML files have stanzas for various drive classes, RAID types/widths and workload
characteristics to determine MDisk thresholds
– Internal drives on Storwize systems are known to the code, so there are more specific stanzas for them
– For externally virtualized LUNs we don't know what is behind them, so thresholds are based on the controller
Easy Tier v3: Automated Storage Pool Balancing
 Configuration:
Drive: 24 x 300GB 15K RPM drives
MDisk: 3 RAID-5 arrays
Volume: Vol_0, Vol_1, Vol_2, Vol_3, each 32GB capacity
Comments: Total MDisk size 5.44TB; total volume size 128GB; all volumes are created on MDisk0 initially
 Performance is improved by balancing the workload across all 3 MDisks
 Provided as basic storage functionality, no requirement for an Easy Tier license
Easy Tier v3: STAT Tool
 Provides recommendations on adding additional tier capacity and performance impact
– Tier 0: Flash
– Tier 1: “Enterprise” disk (15K and 10K)
– Tier 2: Near-line disk (7.2K)
Easy Tier v3: Workload Skew Curve
 Generates a skew report of the workload
– The workload skew report can be read directly by Disk Magic
Easy Tier v3: Workload Categorization
[Chart: number of extents per pool and tier (pools 0x0000, 0x0001, 0x0004, 0x0005), categorized as Active, ActiveLG, Low, Inactive and Unallocated]
Easy Tier v3: Data Movement Daily Report
 Generates a daily (24-hour) CSV-formatted report of Easy Tier data movements
Miscellaneous Enhancements
User Controlled GUI for Advanced Storage Pool Settings
 With the introduction of the new GUI for Storwize V7000 in 2010, we hid the “Advanced Pool
Settings” from users to simplify things, while still presenting this option in the SVC GUI in V6.1
 These settings allow the choice of the extent size for the pool and the capacity warning
threshold for the pool
– The goal was that Storwize users would not have to understand extent sizes
– SVC users were used to these options and continued to see them via the “Advanced Pool
Settings”
 In V7.1 we introduced 4TB NL-SAS drives and had to change the default extent size from
256MB to 1GB to address a limitation on the number of extents required to build a default
RAID-6 10+P+Q array in the Storwize family of systems
– This change was a concern for some customers who wanted to keep the pool extent size
consistent at 256MB for volume migration, etc., which in turn drove a late change in V7.2
that provides the ability in the GUI to configure the extent size for a storage pool at creation
 This V7.3 change will allow GUI users on all products to set extent sizes if they so desire
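For reference, the CLI has long exposed the same settings when creating a pool; a minimal sketch, assuming -ext takes the extent size in MB and -warning a percentage (the pool name is hypothetical):

IBM_Storwize:FAB1_OOB:superuser>svctask mkmdiskgrp -name Pool0 -ext 256 -warning 80%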
Modify RAID Sparing Behaviour
 The chain balancing rules are changed in V7.3
– Many insufficient-spare errors and “unbalanced” errors will auto-fix on upgrade, as a member on the
wrong chain no longer unbalances the configuration
 The chain-balanced presets are now more easily trackable and will continue to demand
spares on the correct chain
 The member goal view will highlight when a chain balanced array has a member on the
wrong chain
Restrictions and Limitations Update
 Storwize V3500/3700 with V7.3 installed will support up to nine expansion enclosures per
control enclosure
– V3700: Single SAS chain includes controller enclosure and up to nine expansion enclosures
– V3x00: Drive limit is 240 SFF or 120 LFF disks
 Storwize V5000 with V7.3 installed will support up to nine expansion enclosures per control
enclosure
– V5000: Dual SAS chains with control enclosure and up to four expansion enclosures on one
and up to five expansion enclosures on the other (Same as today's V7000)
– V5000: Up to 20 expansion enclosures per clustered system
– V5000: Drive limit is 240 SFF or 120 LFF disks per I/O Group or 480 SFF or 240 LFF disks
per I/O Group for a clustered system
 For Real-time Compression pre-V7.3, the SVC and Storwize V7000 systems have a limit of
200 compressed volumes per I/O Group
– SVC DH8 with second CPU, extra 32GB of memory and both compression accelerator cards
installed in each node will support 512 compressed volumes per I/O Group
– The jury is still out on whether the new Storwize V7000 with the extra 32GB of memory and
the second compression accelerator card in each node canister will allow for more than 200
compressed volumes per I/O Group
• We won't know until testing is completed and V7.3 and the new hardware GA on June 6th
• Info on status of this change will be posted on support matrix under “Restrictions and Limitations” where we
list the maximums for various functions of the system