Module 9: PS-M4110 Overview

Module Objectives:
• Describe the PS-M4110 solution
• Describe the features of the PS-M4110
• Install the PS-M4110 in the M1000e chassis
• Operate the drawer of the PS-M4110
• Describe the airflow features of the M1000e
• Identify whether the M1000e chassis has a version 1.1 midplane
• Describe how the PS-M4110 connects to the servers via the fabric modules

Introduction
The PS Series line has evolved continuously since 2003: early releases (PS100, PS3000), the PS5000 and PS5500, the PS6000 and PS6500, the PS4000, the PS6010 and PS6510, the PS4100/4110 and PS6100/6110, the PS-M4110 blade array, and the FS7500. Alongside the hardware, the inclusive software feature set has advanced steadily: thin provisioning, Auto-Snapshot Manager/VMware® Edition with VMware® Site Recovery Manager, SAN Headquarters, ASM/ME for Hyper-V™, VMware® vStorage, core file capability, Data Center Bridging (DCB), synchronous replication, IPsec, and self-encrypting drives (SED).

Dell PS-M4110 Blade Array
PS-M4110:
• EqualLogic iSCSI SAN for the Dell M1000e blade chassis
• 14 hot-plug 2.5" disk drives
• 10G controllers
• PS Series firmware / Group Manager
M1000e:
• The CMC provides M1000e chassis management
• The M1000e chassis provides power, cooling, integrated networking, and density

PS-M4110 Blade Array
• Installed in the M1000e blade chassis
• Two control modules
• Two 10 GbE ports
• Configurable management port
• 14 hot-pluggable 2.5" disks
• 6 Gb/s SAS drives

Dell PS-M4110 and the PowerEdge M1000e 12G Enclosure
• The PS-M4110 inserts into two M1000e slots.
• The M1000e chassis provides power, cooling, and management for the PS-M4110 and the server blades.
• The array's two 10 GbE iSCSI ports are wired internally to the M-series Ethernet fabrics.

M-series Blade Server and M4110 Chassis Environment
• The chassis provides power, cooling, and Ethernet switch configuration.
• The CMC provides chassis management, configuration, and monitoring.
• 10G Ethernet fabric selection: the PS-M4110 defaults to Fabric B and can be placed on Fabric A with a version 1.1 midplane (a scripted midplane check is sketched at the end of this overview).
• Supports DCB (Data Center Bridging).
• PS-M4110 performance is similar to the PS6100E.

The Power to Do More in Less Space: M1000e and the PS-M4110 EqualLogic Blade Array
• Traditional rack build (32U): 4 switches, 2 PS Series arrays, 24 servers.
• M1000e with PS-M4110 (10U): 2 PS-M4110 arrays, 4 switches, 24 servers.
• Physical convergence = roughly 3x space savings.

Installation and Drawer Operation

Installing the PS-M4110
• The PS-M4110 installs in the top slots using the rail guides on the top.
• The PS-M4110 installs in the bottom slots using the slot guides on the bottom.
• Slide the array into the available M1000e slots.
• Seat the array by sliding the handle firmly into place.

PS-M4110 Drawer
• To open the array's inner drawer, push the array's front panel and release it quickly to unlatch the drawer.
• When the drawer is unlatched, a "Caution" label is visible.
• When the PS-M4110 is out of the M1000e enclosure, the array's drawer cannot be opened unless its safety locking mechanism is released.
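Because Fabric A placement depends on the chassis having a version 1.1 midplane, the check can be scripted. The snippet below is a minimal sketch, not a Dell-supplied tool: it assumes remote racadm access to the CMC, that `racadm getsysinfo` output contains a midplane version field (the exact field name can vary by CMC firmware), and it uses placeholder hostname and credentials.

```python
"""Sketch: query the M1000e CMC for its midplane version via remote racadm.

Assumptions (not from the module text): remote racadm is installed, the CMC is
reachable at the placeholder address below, and `getsysinfo` output contains a
line naming the midplane version. Adjust the parsing for your CMC firmware.
"""
import subprocess


def midplane_version(cmc_host: str, user: str, password: str) -> str | None:
    """Return the reported midplane version string, or None if not found."""
    out = subprocess.run(
        ["racadm", "-r", cmc_host, "-u", user, "-p", password, "getsysinfo"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "midplane" in line.lower():          # e.g. "Chassis Midplane Version = 1.1"
            return line.split("=")[-1].strip()
    return None


if __name__ == "__main__":
    version = midplane_version("cmc.example.local", "root", "calvin")  # placeholders
    if version and version.startswith("1.1"):
        print("Version 1.1 midplane detected: the PS-M4110 may use Fabric A.")
    else:
        print(f"Midplane version reported: {version}; use the default Fabric B.")
```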
PS-M4110 / M1000e: EqualLogic PS-M4110 Ecosystem

PS-M4110 Components
• Dual, hot-pluggable 10GbE controllers
• 4GB of memory per controller
• 1 x dedicated 10/100 management port, accessible through the CMC
• 6Gb/s SAS backend
• 14 x 2.5" drives

PS-M4110 Design
• Drawer-in-drawer design
• Double-wide, half-height storage blade
• Operates and interoperates with servers inside or outside the chassis

Ecosystem Components
• M1000e chassis with Dell Force10 or PowerConnect switches
• PowerEdge 11G or 12G blade servers
• EqualLogic host software: Auto-Snapshot Manager (Microsoft®, VMware®, Linux®), Multi-Path I/O, PowerShell Tools, Datastore Manager, SAN HQ
• EqualLogic array software: peer storage architecture, advanced load balancing, snapshots, cloning, replication, thin provisioning, thin clones

PS-M4110 Scalability
• Up to 4 PS-M4110 arrays per blade chassis
• Up to 2 PS-M4110 arrays per EqualLogic group
• Scale up to 16 EqualLogic arrays per group by joining arrays outside the chassis
(The placement and scalability rules are illustrated in the sketch after this section.)

PS-M4110 Configuration Option #1 – Fabric B Switch (Default)
Diagram: The PS-M4110 fabric interface modules (CM0 active, CM1 passive) connect through the midplane to the Fabric B1 and B2 switches, which provide the external fabric connections. The half-height blade servers (up to 16) reach the array through their Fabric B mezzanine cards; the Fabric A LOMs and Fabric C mezzanine cards remain available for other traffic.

PS-M4110 Configuration Option #2 – Fabric A Switch
Diagram: The PS-M4110 fabric interface modules connect through the midplane to the Fabric A1 and A2 Ethernet switches. The blade servers reach the array through their Fabric A LOMs; the Fabric B and C I/O modules provide the external fabric connections for other traffic.

PS-M4110 Configuration Option #3/4 – Fabric A/B Pass-Through to an External Switch Stack
Diagram: The PS-M4110 fabric interface modules connect through the Fabric A1/A2 (or B1/B2) pass-through Ethernet modules to an external switch stack, which provides the external fabric connections.

PS-M4110 Configuration
The PS-M4110 provides a single 10Gb Ethernet port that can be connected to one of the two redundant fabrics (A or B) in the M1000e chassis.
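The fabric-selection and scalability rules above reduce to a few simple constraints. The snippet below is an illustrative sketch only (the class and function names are invented for this example and are not part of any Dell tooling): Fabric B is the default, Fabric A requires a 1.1 midplane, and the limits are 4 PS-M4110 per chassis, 2 per EqualLogic group, and 16 arrays per group.

```python
"""Sketch: check a proposed PS-M4110 layout against the rules in this module."""
from dataclasses import dataclass

MAX_M4110_PER_CHASSIS = 4
MAX_M4110_PER_GROUP = 2
MAX_ARRAYS_PER_GROUP = 16


@dataclass
class Chassis:
    midplane: str        # "1.0" or "1.1"
    m4110_count: int     # PS-M4110 arrays installed in this chassis


def validate(chassis: Chassis, fabric: str,
             group_m4110: int, group_arrays: int) -> list[str]:
    """Return a list of rule violations (an empty list means the layout is allowed)."""
    problems = []
    if fabric == "A" and chassis.midplane != "1.1":
        problems.append("Fabric A requires a version 1.1 midplane; use the default Fabric B.")
    if chassis.m4110_count > MAX_M4110_PER_CHASSIS:
        problems.append("No more than 4 PS-M4110 arrays per M1000e chassis.")
    if group_m4110 > MAX_M4110_PER_GROUP:
        problems.append("No more than 2 PS-M4110 arrays per EqualLogic group.")
    if group_arrays > MAX_ARRAYS_PER_GROUP:
        problems.append("A group scales to 16 arrays, reached by joining arrays outside the chassis.")
    return problems


# Example: a 1.0-midplane chassis cannot put the PS-M4110 on Fabric A.
print(validate(Chassis(midplane="1.0", m4110_count=2), fabric="A",
               group_m4110=2, group_arrays=10))
```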
PS-M4110 Fabric Connections
Diagram: The active and passive controllers of the PS-M4110 connect through fixed I/O modules on the bottom plane, across the system midplane, to fabric switches A1/A2 or B1/B2. On the server blade, the Fabric A LOM and the Fabric B mezzanine card connect to the same fabrics through the midplane.

M1000e I/O Module Placement Options – Single Fabric and Split Fabric
Diagram: Rear view of the M1000e showing the A1/A2, B1/B2, and C1/C2 I/O module bays, the CMC modules, and the iKVM in single-fabric and split-fabric placements (Cisco WS-CBS3130G-S blade switches shown).

10G IOM Fabric
• The 10G KR fabric supports the PS-M4110:
  – KR pass-through with a top-of-rack (ToR) PowerConnect 8024F or Nexus 5020
  – Force10 MXL (Navasota)
  – M8024-K 10GbE switch (Lavaca)
  – Brocade M8428-K (Brazos)
• Other M-series IOM fabrics are not supported for the PS-M4110:
  – 10G XAUI
  – 1GbE
  – Fibre Channel
  – InfiniBand
(A compatibility sketch based on this list follows the module summary.)

Switch Configurations
• Stack or create a LAG between switches.
Diagram: SAN modules in a single enclosure – rear view of the M1000e with the LAN and SAN I/O modules in separate fabric bays and the SAN switch pair stacked or connected by a LAG.

Multiple M1000e Chassis Switch IOMs
• For multiple chassis, stacking should be done as two rings: a left-side ring and a right-side ring.
• LAGs should be connected between the rings so that each port on the storage devices can communicate with any other port on the storage devices.

DCB Support Requires:
• A DCB-capable external switch (B8000)
• IOM: Navasota (Force10 MXL) only
• An Intel or Brocade CNA mezzanine module

Module Summary
Now that you have completed this module, you should be able to:
• Describe the PS-M4110 solution
• Describe the features of the PS-M4110
• Install the PS-M4110 in the M1000e chassis
• Operate the drawer of the PS-M4110
• Describe the airflow features of the M1000e
• Identify whether the M1000e chassis has a version 1.1 midplane
• Describe how the PS-M4110 connects to the servers via the fabric modules

Questions?
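As a companion to the IOM and DCB lists above, the following sketch encodes the supported and unsupported options as simple lookups. It is illustrative only; the strings mirror the slide text (including the codenames), and the function names are invented for this example rather than taken from any Dell API.

```python
"""Sketch: encode the PS-M4110 10G IOM compatibility and DCB rules from this module."""

SUPPORTED_10G_KR_IOMS = {
    "10GbE KR pass-through (with ToR PowerConnect 8024F or Nexus 5020)",
    "Force10 MXL (Navasota)",
    "M8024-K (Lavaca)",
    "Brocade M8428-K (Brazos)",
}

UNSUPPORTED_FABRIC_TYPES = {"10G XAUI", "1GbE", "Fibre Channel", "InfiniBand"}


def supports_ps_m4110(iom: str) -> bool:
    """True if the named IOM is on the supported 10G KR list from the slide."""
    return iom in SUPPORTED_10G_KR_IOMS


def dcb_ready(external_switch: str, iom: str, cna_vendor: str) -> bool:
    """DCB per the slide: DCB external switch (B8000), Navasota IOM, Intel or Brocade CNA."""
    return (external_switch == "B8000"
            and "Navasota" in iom
            and cna_vendor in {"Intel", "Brocade"})


print(supports_ps_m4110("M8024-K (Lavaca)"))                   # True
print(supports_ps_m4110("1GbE"))                               # False
print(dcb_ready("B8000", "Force10 MXL (Navasota)", "Intel"))   # True
```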