
VSP Architecture Overview
HDS Technical Training
© 2010 Hitachi Data Systems. All rights reserved.
VSP Introduction
 VSP is a completely new, highly scalable enterprise array
 VSP is the first “3D Array”
– Scales up within a single chassis by adding logic boards (I/O processors,
cache, host ports, disk controllers), disk containers and disks (to 1024 disks)
– Scales out by adding a second fully integrated chassis to double the cache,
disk capacity and host connectivity of a single chassis (to 2048 disks)
– Scales deep with external storage
 VSP continues support of Hitachi Dynamic Provisioning and Universal
Volume Manager (virtualized storage), as well as most other Hitachi
Program Products available on the USP V
 VSP has a new feature within HDP, named Hitachi Dynamic Tiering, that migrates data among the different storage tiers (SSD, SAS, SATA) within a single HDP Pool based on historical usage patterns (a minimal sketch follows after this slide)
 VSP provides up to 40% better power efficiency than USP V and a much
smaller footprint
2
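The tiering behavior described above can be pictured with a short sketch: page-level access statistics gathered over a monitoring cycle decide which tier a page should live in, and pages whose current tier no longer matches are queued for migration. The thresholds, page granularity, and function names below are illustrative assumptions, not the actual HDT algorithm.

```python
# Minimal sketch of tier placement by historical access frequency.
# Thresholds and page granularity are illustrative, not actual HDT values.

def choose_tier(iops_per_page: float) -> str:
    """Map a page's observed I/O rate to a target tier (assumed thresholds)."""
    if iops_per_page >= 5.0:
        return "SSD"
    elif iops_per_page >= 0.5:
        return "SAS"
    return "SATA"

def plan_migrations(page_stats: dict[str, float], current: dict[str, str]) -> list[tuple[str, str, str]]:
    """Return (page, from_tier, to_tier) moves for pages whose tier should change."""
    moves = []
    for page, iops in page_stats.items():
        target = choose_tier(iops)
        if current.get(page) != target:
            moves.append((page, current.get(page, "unallocated"), target))
    return moves

# Example: one hot page promoted to SSD, one cold page demoted to SATA.
stats = {"page-001": 12.0, "page-002": 0.01}
placement = {"page-001": "SAS", "page-002": "SAS"}
print(plan_migrations(stats, placement))
```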
VSP Changes Overview
 The VSP shares no hardware with the USP V
 The VSP architecture is 100% changed from the USP V
 VSP does reuse much of the USP V software, such as HDP and other
Program Products
 Major changes from the USP V include:
– The previous Universal Star Network switch layer (PCI-X, 1064MB/s paths)
has been upgraded to a new HiStar-E grid (PCI-e, 2048MB/s paths)
– The MP FED/BED processors have been replaced with Intel Xeon quad-core
CPUs located on a new Virtual Storage Director I/O processor board
– The discrete Shared Memory system has been replaced by a Control Memory
(CM) system. This uses processor board local memory plus a master copy in
a region of cache that is updated by the individual VSDs
– Each VSD board manages a discrete group of LDEVs that may be accessed
from any port, and has a reserved partition in cache to use for these LDEVs
– Individual processes on each VSD Xeon core dynamically execute tasks for
the different modes: Target, External, BED (disk), HUR Initiator, HUR Target,
various mainframe modes, and various internal housekeeping modes
3
VSP Configurations Overview
 A single chassis array can include up to:
– 3 racks and one logic box
– 4 VSD boards
– 64 8Gbps FC or FICON ports (no ESCON) – 8 FED boards*
– 256GB cache – 8 DCA boards (using 4GB DIMMs)
– 1024 2.5” disks (or 640 3.5” disks) – 64 HDUs
– 32 6Gbps back-end SAS links – 4 BED boards
– 65,280 Logical Devices
 A dual chassis array can have up to:
– 6 racks and two logic boxes
– 8 VSD boards
– 128 8Gbps FC or FICON ports – 16 FED boards*
– 512GB of cache – 16 DCA boards (using 4GB DIMMs)
– 2048 2.5” drives (or 1280 3.5” drives) – 128 HDUs
– 64 6Gbps back-end SAS links – 8 BED boards
– 65,280 Logical Devices
* More if some DKAs are deleted
4
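One way to read the maxima above is as a simple validation rule set. The sketch below checks a proposed configuration against the single- and dual-chassis limits quoted on this slide; the limit values come from the slide, while the helper function itself is a hypothetical illustration.

```python
# Hypothetical helper that checks a proposed VSP configuration against the
# single- and dual-chassis maxima listed on this slide.

LIMITS = {
    "single": {"vsd": 4, "fed": 8, "bed": 4, "cache_gb": 256, "sff_disks": 1024, "ldev": 65280},
    "dual":   {"vsd": 8, "fed": 16, "bed": 8, "cache_gb": 512, "sff_disks": 2048, "ldev": 65280},
}

def check_config(chassis: str, **counts) -> list[str]:
    """Return a list of limit violations; an empty list means the config fits."""
    limits = LIMITS[chassis]
    return [
        f"{key}={value} exceeds {chassis}-chassis max {limits[key]}"
        for key, value in counts.items()
        if key in limits and value > limits[key]
    ]

print(check_config("single", vsd=4, fed=8, cache_gb=256))   # [] -> fits
print(check_config("single", vsd=6, sff_disks=1200))        # two violations
```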
VSP Disk Choices
 2.5” SFF Disks (SFF DKU):
– 200 GB SSD (3 Gbps**)
– 146 GB 15K RPM SAS (6 Gbps)
– 300 GB 10K RPM SAS (6 Gbps)
– 600 GB 10K RPM SAS (6 Gbps)
 3.5” LFF Disks (LFF DKU):
– 400 GB SSD (3 Gbps**)
• (~20% slower on writes than the 200 GB SSD)
– 2 TB 7.2K SATA (3 Gbps)
** In the future, the SSDs will have the 6 Gbps interface.
 Disks of different interface speeds may be intermixed in the DKUs as the
BEDs drive each “conversation” at the speed of the individual drive over
the switched SAS back-end.
5
VSP Design
 Each FED board has a Data Accelerator chip (“DA”, or “LR” for local router)
instead of 4 MPs. The DA routes host I/O jobs to the VSD board that owns that
LDEV and performs DMA transfers of all data blocks to/from cache.
 Each BED board has 2 Data Accelerators instead of 4 MPs. They route disk I/O
jobs to the owning VSD board and move data to/from cache. Each BED board has
2 SAS SPC Controller chips that drive 8 SAS 6Gbps switched links (over four 2-Wide cable ports).
 Most MP functions have been moved from the FED and BED boards to new multipurpose VSD boards. No user data passes through the VSD boards! Each VSD
has a 4-core Intel Xeon CPU and local memory. Each VSD manages a private
partition within global cache.
 Unlike the previous Hitachi Enterprise array designs, the FED board does not
decode and execute I/O commands. In the simplest terms, a VSP FED accepts
and responds to host requests by directing the host I/O requests to the VSD
managing the LDEV in question. The VSD processes the commands, manages
the metadata in Control Memory, and creates jobs for the Data Accelerator
processors in FEDs and BEDs. These then transfer data between the host and
cache, virtualized arrays and cache, disks and cache, or HUR operations and
cache. The VSD that owns an LDEV tells the FED where to read or write the data
in cache.
6
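The request flow described above (the DA on the FED routes the command to the owning VSD, the VSD decodes it, updates Control Memory, and issues jobs back to the DAs, which then move the data blocks to or from that VSD's cache partition) can be summarized in a small sketch. The class and method names below are assumptions for illustration only, not firmware interfaces.

```python
# Illustrative sketch of the VSP front-end I/O flow described above.
# Class and method names are assumptions, not actual firmware APIs.

class VirtualStorageDirector:
    """VSD board: decodes commands, updates Control Memory, issues DA jobs."""
    def __init__(self, name):
        self.name = name
        self.next_slot = 0

    def process_command(self, cmd):
        self.next_slot += 1                       # stand-in for cache-directory update
        return {"op": cmd["op"], "ldev": cmd["ldev"], "cache_slot": self.next_slot}

class DataAccelerator:
    """FED/BED chip: routes jobs to the owning VSD and DMAs the data blocks."""
    def __init__(self, ldev_routing_table):
        self.routes = ldev_routing_table          # ldev_id -> owning VSD board

    def handle_host_command(self, cmd):
        vsd = self.routes[cmd["ldev"]]            # no command decode on the FED
        job = vsd.process_command(cmd)            # VSD owns metadata and cache slots
        return self.dma_transfer(job)             # DA moves the data blocks

    def dma_transfer(self, job):
        return f"DMA {job['op']} LDEV {job['ldev']} at cache slot {job['cache_slot']}"

vsd0 = VirtualStorageDirector("VSD-0")
fed = DataAccelerator({0x1000: vsd0})
print(fed.handle_host_command({"op": "read", "ldev": 0x1000, "lba": 0, "len": 8}))
```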
VSP LDEV Management
 In VSP, VSDs manage unique sets of LDEVs, and their data is contained
within that VSD’s cache partition. Requests are routed to the VSDs by the
Data Accelerator chips on the FED and BED boards using their local
LDEV routing tables.
 LDEV ownership can be viewed in Storage Navigator, and may be manually changed to another VSD board.
 When creating new LDEVs, they are round-robin assigned to the installed
VSDs in that array.
 If additional VSDs are installed, groups of LDEVs are automatically reassigned to the new VSDs, leaving a roughly even distribution across all VSDs. This is a fast process (see the sketch after this slide).
 LDEV ownership by VSD means that VSP arrays don’t have an LDEV
coherency protocol overhead. There is only one VSD board that can
manage all I/O jobs for any given LDEV, but any core on that Xeon CPU
may execute those processes.
7
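The assignment behavior above (round-robin placement of new LDEVs, followed by an even redistribution when VSDs are added) can be sketched as below. The modulo redistribution is only an assumed stand-in for "roughly even"; it is not the actual reassignment logic.

```python
# Sketch of round-robin LDEV ownership and rebalancing when VSDs are added.
# The modulo redistribution is an assumed stand-in for "roughly even" spreading.

from itertools import cycle

def assign_round_robin(ldev_ids, vsds):
    """New LDEVs are handed to the installed VSDs in round-robin order."""
    owners = {}
    vsd_cycle = cycle(vsds)
    for ldev in ldev_ids:
        owners[ldev] = next(vsd_cycle)
    return owners

def rebalance(owners, vsds):
    """Spread existing LDEVs roughly evenly across the (possibly larger) VSD set."""
    return {ldev: vsds[i % len(vsds)] for i, ldev in enumerate(sorted(owners))}

owners = assign_round_robin(range(8), ["VSD-0", "VSD-1"])
owners = rebalance(owners, ["VSD-0", "VSD-1", "VSD-2", "VSD-3"])   # after scale-out
print(owners)   # each VSD now owns 2 of the 8 LDEVs
```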
Paths Per LDEV
 VSP should be relatively insensitive to how many different active paths
are configured to an LDEV.
 On the USP V, we generally advise 2 paths for redundancy, and 4 paths
where performance needs to be maintained across maintenance actions,
but never use more than 4 active ports because of the LDEV coherency
protocol “bog-down” in Shared Memory that happens as you increase the
number of paths.
8
VSP I/O Operations
 Note that a VSD controls all I/O operations for an LDEV, whether it is processing a host I/O, a disk I/O, an external I/O, or a Copy Product operation.
 Copy Product PVOLs and SVOLs must be on the same VSD, as the data has to be available from the same cache partition.
9
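Because a pair's data must be served from one cache partition, the rule above lends itself to a simple pre-flight check. The helper below is hypothetical; it only assumes an LDEV-to-VSD ownership view like the one Storage Navigator exposes.

```python
# Hypothetical pre-flight check for the rule above: the PVOL and SVOL of a
# copy pair must be owned by the same VSD (same cache partition).

def same_vsd(ownership: dict[int, str], pvol: int, svol: int) -> bool:
    """ownership maps LDEV id -> owning VSD board name."""
    return ownership[pvol] == ownership[svol]

ownership = {0x10: "VSD-0", 0x11: "VSD-0", 0x20: "VSD-1"}
print(same_vsd(ownership, 0x10, 0x11))   # True  -> valid pair
print(same_vsd(ownership, 0x10, 0x20))   # False -> move LDEV ownership first
```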
Performance on VSP
 Basically, on the USP V, we know that
– Small block I/O is limited by MP busy rate (FED-MP or BED-MP busy)
– Large block I/O is limited by path saturation (port MB/s or cache switch path MB/s, etc.)
 On VSP, the “MPs” are separated from the ports.
– Where there are multiple LDEVs on a port, these can be owned by different
VSD boards.
– Where there are multiple LDEVs on a port that are owned by a single VSD
board, the 4 cores in the VSD board can be processing I/Os for multiple
LDEVs in parallel.
 VSP can achieve very high per-port cache-hit IOPS rates. In tests using 100% 8KB random reads, 32 15K disks, RAID-10 (2+2), we saw:
– USP V: 1 port, about 16,000 IOPS (2 ports-2MPs, 31,500 IOPS)
– VSP: 1 port, about 67,000 IOPS (2 ports, 123,000 IOPS)
10
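Worked out from the figures quoted above, the per-port gain and the one-to-two-port scaling can be computed directly; the numbers below are the ones on this slide.

```python
# Speedup arithmetic using the cache-hit IOPS figures quoted above.

usp_v = {"1_port": 16_000, "2_ports": 31_500}
vsp   = {"1_port": 67_000, "2_ports": 123_000}

for cfg in ("1_port", "2_ports"):
    gain = vsp[cfg] / usp_v[cfg]
    print(f"{cfg}: VSP is about {gain:.1f}x the USP V")   # ~4.2x and ~3.9x

# Scaling from 1 to 2 ports on VSP: 123,000 / 67,000 ~ 1.84x (near-linear)
print(f"VSP 1->2 port scaling: {vsp['2_ports'] / vsp['1_port']:.2f}x")
```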
VSP Architecture Overview
11
Fully populated Dual Chassis VSP has 6 racks
2 DKC racks, each with a DKC box and 2 DKU boxes
4 DKU racks, each with 3 DKU boxes
                | Single chassis (1 DKC module) | Dual chassis (2 DKC modules)
VSDs            | 4                             | 8
HDD (SFF)       | 1,024                         | 2,048
FED ports       | 64 (80/96*1)                  | 128 (160*2)
Cache           | 256GB (512GB)*3               | 512GB (1,024GB)*3

*1 80 ports with 1 BED pair; 96 ports in a diskless (all-FED) configuration
*2 160 ports with 1 BED pair per DKC module (diskless is not supported on 2-module configurations)
*3 Enhanced (V02)

[Diagram: six racks side by side, RK-12, RK-11, RK-10, RK-00, RK-01, RK-02; overall footprint roughly 11.8 ft wide by 3.6 ft deep, 6.5 ft tall]
12
VSP Single Chassis Architecture w/ Bandwidths
[Diagram: single-chassis data paths over the HiStar-E grid. 8 FED boards (8 DA processors; 8 x 8Gbps FC ports per FED, 64 ports total), 4 VSD boards, and 4 BED boards (8 SAS controller processors and 8 DA processors; 8 x 6Gbps SAS links per BED, 32 links total) connect through 4 Grid Switches (96 GSW links) to the DCA boards holding 256GB of cache plus the Control Memory (CM) master copy. Grid path bandwidth labels in the diagram: groups of 16 x 1GB/s send and 16 x 1GB/s receive paths per board group, plus one group of 32 x 1GB/s send and 32 x 1GB/s receive paths.]
13
VSP Single Chassis Grid Overview
[Diagram: single-chassis grid layout. 8 FEDs and 4 BEDs connect through the HiStar-E network of 4 PCIe Grid Switches (96 ports) to 4 VSDs (16 CPU cores total) and 8 cache (DCA) boards.]
14
Dual Chassis Arrays
 The VSP can be configured as a single or dual chassis array. It is still a
single homogeneous array.
 A VSP might be set up as a dual chassis array from the beginning, with a
distribution of boards across the two chassis.
 A single chassis VSP can be later expanded (Scale Out) with a second
chassis.
 The second chassis may be populated with boards in any of these
scenarios:
– Adding 2 or 4 Grid Switches and 4-8 Cache boards to provide larger amounts
of cache
– Adding 2 or 4 Grid Switches and 2-4 VSDs to add I/O processing power (for
random I/O)
– Adding 2 or 4 Grid Switches and 2-8 FEDs to add host, HUR, or external ports
– Adding 2 or 4 Grid Switches and 1-2 BEDs to add disks and SAS paths
– Any combinations of the above
15
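The expansion scenarios above boil down to a few board-count ranges for the second chassis. The sketch below encodes them as a simple check; the ranges are taken from this slide, while the helper itself is a hypothetical illustration.

```python
# Hypothetical check that a planned second-chassis population matches the
# board-count ranges listed above.

SECOND_CHASSIS_RANGES = {
    "gsw": (2, 4),   # 2 or 4 Grid Switches are always part of the expansion
    "dca": (0, 8),   # 4-8 cache boards when adding cache, otherwise 0
    "vsd": (0, 4),   # 2-4 VSDs when adding processing power, otherwise 0
    "fed": (0, 8),   # 2-8 FEDs when adding ports, otherwise 0
    "bed": (0, 2),   # 1-2 BEDs when adding disks, otherwise 0
}

def validate_expansion(plan: dict[str, int]) -> list[str]:
    """Return any board counts that fall outside the allowed ranges."""
    problems = []
    for board, count in plan.items():
        low, high = SECOND_CHASSIS_RANGES[board]
        if not (low <= count <= high):
            problems.append(f"{board}: {count} outside {low}-{high}")
    return problems

print(validate_expansion({"gsw": 4, "dca": 8, "vsd": 0, "fed": 0, "bed": 0}))  # []
print(validate_expansion({"gsw": 1, "bed": 3}))  # two problems
```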
VSP Second Chassis - Uniform Expansion
[Diagram: second chassis populated uniformly with 8 FEDs, 4 BEDs, 4 VSDs (16 CPU cores total), and 8 cache (DCA) boards on its own HiStar-E network of 4 PCIe Grid Switches (96 ports), plus 4 GSW paths to Chassis-1.]
16
VSP and USP V Table of Limits
Table of Limits                           | VSP Single Chassis | VSP Dual Chassis | USP V Maximum
Data Cache (GB)                           | 32-256 (512)       | 32-512 (1024)    | 512
Raw Cache Bandwidth                       | 64GB/s             | 128GB/s          | 68GB/s
Control Memory (GB)                       | 8-48               | 8-48             | 24
Cache Directories (GB)                    | 2 or 4             | 6 or 8           | -
SSD Drives                                | 128                | 256              | 128
2.5" Disks (SAS and SSD)                  | 1024               | 2048             | -
3.5" Disks (SATA, SSD)                    | 640                | 1280             | -
3.5" Disks (FC, SATA)                     | -                  | -                | 1152
Logical Volumes                           | 65,280             | 65,280           | 65,280
Logical Volumes per VSD                   | 16,320             | 16,320           | -
Max Internal Volume Size                  | 2.99TB             | 2.99TB           | 2.99TB
Max CoW Volume Size                       | 4TB                | 4TB              | 4TB
Max External Volume Size                  | 4TB                | 4TB              | 4TB
IO Request Limit per Port                 | 2048               | 2048             | 2048
Nominal Queue Depth per LUN               | 32                 | 32               | 32
HDP Pools                                 | 128                | 128              | 128
Max Pool Capacity                         | 1.1PB              | 1.1PB            | 1.1PB
Max Capacity of All Pools                 | 1.1PB              | 1.1PB            | 1.1PB
LDEVs per Pool (pool volumes)             | 1024               | 1024             | 1024
Max Pool Volume Size (internal/external)  | 2.99/4TB           | 2.99/4TB         | 2.99/4TB
DP Volumes per Pool                       | ~62k               | ~62k             | 8192
DP Volume Size Range (No SI/TC/UR)        | 46MB-60TB          | 46MB-60TB        | 46MB-4TB
DP Volume Size Range (with SI/TC/UR)      | 46MB-4TB           | 46MB-4TB         | 46MB-4TB
17
VSP Board Level Details
18
Logic Box Board Layout
DKC-0 Front: MPA-0 (1MA), MPA-1 (1MB), ESW-0 (1SA), ESW-1 (1SB), DCA-0 (1CA), DCA-1 (1CB), DCA-2 (1CE), DCA-3 (1CF), DKA-0 (1AU), DKA-1 (1AL), CHA-0 (1EU), CHA-1 (1EL), CHA-2 (1FU), CHA-3 (1FL), SVP-0
DKC-0 Rear: MPA-0 (2MC), MPA-1 (2MD), ESW-0 (2SC), ESW-1 (2SD), DCA-0 (2CC), DCA-1 (2CD), DCA-2 (2CG), DCA-3 (2CH), DKA-0 (2MU), DKA-1 (2ML), CHA-0 (2QU), CHA-1 (2QL), CHA-2 (2RU), CHA-3 (2RL), SVP-1 / HUB-01
(Location codes beginning with 1 belong to Cluster 1; codes beginning with 2 belong to Cluster 2.)
19
FED Port Labels (FC or FICON)
DKC-0 Rear Slots
CHA Name | Slot | Port 1 | Port 2 | Port 3 | Port 4 | Port 5 | Port 6 | Port 7 | Port 8
CHA-3    | 2RL  | 2G | 4G | 6G | 8G | 2H | 4H | 6H | 8H
CHA-2    | 2RU  | 2E | 4E | 6E | 8E | 2F | 4F | 6F | 8F
CHA-1    | 2QL  | 2C | 4C | 6C | 8C | 2D | 4D | 6D | 8D
CHA-0    | 2QU  | 2A | 4A | 6A | 8A | 2B | 4B | 6B | 8B
CHA-4    | 2ML  | AA | CA | EC | GC | AD | CD | ED | GD
CHA-5    | 2MU  | AA | CA | EA | GA | AB | CB | EB | GB
CHA-4    | 1AL  | 9C | BC | DC | FC | 9D | BD | DD | FD
CHA-5    | 1AU  | 9A | BA | DA | FA | 9B | BB | DB | FB
CHA-1    | 1EL  | 1C | 3C | 5C | 7C | 1D | 3D | 5D | 7D
CHA-0    | 1EU  | 1A | 3A | 5A | 7A | 1B | 3B | 5B | 7B
CHA-3    | 1FL  | 1G | 3G | 5G | 7G | 1H | 3H | 5H | 7H
CHA-2    | 1FU  | 1E | 3E | 5E | 7E | 1F | 3F | 5F | 7F
DKC-1 Rear Slots
20
CHA Name | Slot | Port 1 | Port 2 | Port 3 | Port 4 | Port 5 | Port 6 | Port 7 | Port 8
CHA-9    | 2UL  | 2Q | 4Q | 6Q | 8Q | 2R | 4R | 6R | 8R
CHA-8    | 2UU  | 2N | 4N | 6N | 8N | 2P | 4P | 6P | 8P
CHA-7    | 2TL  | 2L | 4L | 6L | 8L | 2M | 4M | 6M | 8M
CHA-6    | 2TU  | 2J | 4J | 6J | 8J | 2K | 4K | 6K | 8K
CHA-10   | 2XL  | AL | CL | EL | GL | AM | CM | EM | GM
CHA-11   | 2XU  | AJ | CJ | EJ | GJ | AK | CK | EK | GK
CHA-10   | 1LL  | 9L | BL | DL | FL | 9M | BM | DM | FM
CHA-11   | 1LU  | 9J | BJ | DJ | FJ | 9K | BK | DK | FK
CHA-7    | 1GL  | 1L | 3L | 5L | 7L | 1M | 3M | 5M | 7M
CHA-6    | 1GU  | 1J | 3J | 5J | 7J | 1K | 3K | 5K | 7K
CHA-9    | 1HL  | 1Q | 3Q | 5Q | 7Q | 1R | 3R | 5R | 7R
CHA-8    | 1HU  | 1N | 3N | 5N | 7N | 1P | 3P | 5P | 7P
DKU and HDU Overviews
21
DKU and HDU Map – Front View, Dual Chassis
[Diagram: front view of a dual-chassis VSP. Chassis #0 (racks RK-00, RK-01, RK-02) holds the DKC-0 logic box and DKU-00 through DKU-07; Chassis #1 (racks RK-10, RK-11, RK-12) holds the DKC-1 logic box and DKU-10 through DKU-17. Each DKU exposes HDUs x-0, x-1, x-4, and x-5 at the front. DKC-0 BEDs 1 and 2 serve DKU 00-07; DKC-1 BEDs 3 and 4 serve DKU 10-17.]
22
DKU and HDU Map – Rear View, Dual Chassis
[Diagram: rear view of the same dual-chassis layout. Each DKU exposes HDUs x-2, x-3, x-6, and x-7 at the rear; the rack, DKC, and DKU arrangement mirrors the front view (DKC-0 BEDs 1 and 2 serving DKU 00-07, DKC-1 BEDs 3 and 4 serving DKU 10-17).]
23
BED to DKU Connections (Single Chassis)
[Diagram: DKC-0 BED-0 and BED-1 in Rack-00 fan out 32 6Gbps SAS links (16 2-Wide cables) to the eight SFF DKUs, DKU-00 through DKU-07: 2 DKUs (16 HDUs) in Rack-00 alongside the DKC, and 3 DKUs (24 HDUs) in each of Rack-01 and Rack-02.]
• Up to 1024 SFF (shown) or 640 LFF disks, 32 600MB/s SAS links (16 2W ports), 8 DKUs, 64 HDUs
24
Q and A
25