Welcome to the Unified Datacentre
G.I. Joe Movie Premiere – August 7th, 2009

What we do: Unified Computing and Scalar Decisions
– Offices in Toronto, Vancouver, Calgary, Ottawa, London, Kitchener, and Guelph; over 50 employees across Canada
– Certifications and partnerships:
  – Cisco Silver Partner with broad product expertise: DCNI (Data Center Networking Infrastructure) and UCS ATP (Unified Computing Solutions Advanced Technology Partner)
  – VMware Enterprise VIP Partner and Gold-Level VMware Authorized Consultants (VAC)
  – EMC Velocity Solution Provider
– Technically led, specializing in advanced IT infrastructure
– Transform your data center fabric with Scalar Decisions:
  – by engaging with our architect team for technology deep dives
  – by assessing your infrastructure and building a strategic plan

What we see in the datacentre
– Existing architecture is colliding with new paradigms:
  – Mass consolidation highlights I/O bottlenecks and process inefficiencies
  – x64 virtualization may reduce the physical footprint, but not the management overhead
  – Multiple slower, discrete fabrics (Ethernet, FC), storage arrays, and complex cabling build up
  – It is getting awfully hard to do more with less!
– Emerging solutions simplify and save:
  – Unified Fabric on lossless 10 GigE
  – Unified Storage systems with FCoE
  – Large-memory Unified Computing server blades that are one with the Unified Fabric
  – Distributed virtual switching for x64 hypervisors

Scalar Labs – Customer Demo Centre
Hassle-free access to the technologies you need:
– 21 vendors' products on display with remote access
– Product demonstrations and hands-on time
– Customer proof-of-concepts and interoperability testing
– Access to direct vendor assistance as needed
– Remote labs for F5 Authorized Training
– Conveniently located in downtown Toronto

Events @ Scalar – 2009 Calendar

Thank you…

Fibre Channel over Ethernet (FCoE), iSCSI and the Converged Data Center
Joe Rabasca – Solutions Lead, EMC Corporation

Objectives
After this session you will be able to:
– Understand FCoE and iSCSI and how they fit into existing storage and networking infrastructures
– Compare and contrast the structure and functionality of the FCoE and iSCSI protocol stacks
– Understand how FCoE and iSCSI solutions provide storage networking options for Ethernet, including 10 Gb Ethernet

Rack Server Environment Today
– Many environments today are still 1 Gigabit Ethernet
– Rack-mounted servers connect to the Ethernet LAN, NAS, and iSCSI SAN with NICs, and to the Fibre Channel SAN with FC HBAs
– Multiple server adapters and multiple cables drive up power and cooling costs
– Storage is a separate network (including iSCSI)
– Note: NAS will continue to be part of the solution
– Everywhere that you see Ethernet or 10 Gb Ethernet in this presentation, NAS can be considered part of the unified storage solution

10 Gb Ethernet Allows for a Converged Data Center
– Maturation of 10 Gigabit Ethernet:
  – 10 GbE allows replacement of n x 1 Gb adapters with a much smaller number (start with two) of 10 Gb adapters
  – Many storage applications require more than 1 Gb of bandwidth
– A single wire carries the 10 GbE network and storage traffic (SAN and LAN)
– 10 Gigabit Ethernet simplifies server, network, and storage infrastructure:
  – Reduces the number of cables and server adapters
  – Lowers capital expenditures and administrative costs
  – Reduces server power and cooling costs
  – Blade servers and server virtualization drive consolidated bandwidth
– 10 Gigabit Ethernet is the answer; iSCSI and FCoE both leverage this inflection point

Why iSCSI?
– The initiator and target each run the same stack, SCSI over iSCSI over TCP over IP (optionally with IPsec) over the link layer, connected across an IP network:
  – iSCSI delivers the iSCSI Protocol Data Unit (PDU) for SCSI functionality (initiator, target, data read/write, etc.)
  – TCP provides reliable data transport and delivery (TCP windows, ACKs, ordering, etc.)
  – IP provides routing (Layer 3) capability so packets can find their way through the network
  – The link layer provides physical network capability (Layer 2 Ethernet, Cat 5, MAC, etc.)

Why a New Option for FC Customers?
– FC has a large and well-managed install base:
  – We want a solution that is attractive for customers with FC expertise and investment
  – Previous convergence options did not allow for incremental adoption
– There is a requirement for a data center solution that can provide I/O consolidation, and 10 Gigabit Ethernet makes this option available
– Leveraging Ethernet infrastructure and skill sets has always been attractive
– FCoE allows an Ethernet-based SAN to be introduced into the FC-based data center without breaking existing administrative tools and workflows

Protocol Comparisons
All of these options carry SCSI block storage for applications; they differ in the encapsulation layer and the base transport:
– iSCSI: SCSI over TCP/IP over Ethernet – block storage with TCP/IP
– iFCP and FCIP: FC over TCP/IP – FC replication over IP and FC management
– FCoE: FC over Ethernet, with no TCP/IP
– SRP: SCSI over InfiniBand – a new transport and drivers; low latency, high bandwidth
– Native FC: FC frames over the FC transport

FCoE Extends FC on a Single Network
– The server sees storage traffic as FC, and the SAN sees the host as FC
– Two server connection options over lossless Ethernet links: an FCoE software stack (network driver plus FC driver) on a standard 10G NIC, or a Converged Network Adapter (CNA)
– A Converged Network Switch carries the traffic onward to the Ethernet network and to the FC network and FC storage

iSCSI and FCoE Framing
– iSCSI is SCSI functionality transported using TCP/IP for delivery and routing in a standard Ethernet/IP environment
  – iSCSI frame layout: Ethernet header | IP | TCP | iSCSI | data | CRC
  – TCP/IP and iSCSI require CPU processing
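To put rough numbers on that layering, the short Python sketch below adds up nominal per-layer header sizes for an iSCSI PDU carried in a single standard Ethernet frame. The 48-byte Basic Header Segment and the header sizes are well-known values, but the helper function and the simplifications (no TCP/IP options, no iSCSI digests, no jumbo frames) are illustrative assumptions rather than anything taken from the slides.

```python
# Illustrative only: nominal per-layer overhead for one iSCSI PDU carried
# in a single standard (1500-byte MTU) Ethernet frame. TCP/IP options,
# iSCSI digests, and jumbo frames are ignored to keep the arithmetic simple.
ETH_HEADER = 14   # destination MAC + source MAC + EtherType
IP_HEADER = 20    # IPv4 header without options
TCP_HEADER = 20   # TCP header without options
ISCSI_BHS = 48    # iSCSI Basic Header Segment
ETH_FCS = 4       # Ethernet frame check sequence (the trailing CRC)
ETH_MTU = 1500    # maximum standard Ethernet payload

def max_scsi_data_per_frame(mtu: int = ETH_MTU) -> int:
    """SCSI data bytes left over after the TCP/IP and iSCSI headers."""
    return mtu - IP_HEADER - TCP_HEADER - ISCSI_BHS

print(max_scsi_data_per_frame())       # 1412 bytes of SCSI data
print(ETH_HEADER + ETH_MTU + ETH_FCS)  # 1518 bytes on the wire
```

The arithmetic underlines the slide's point: every one of those frames passes through the TCP/IP and iSCSI layers, which is the CPU processing noted above unless it is offloaded to the adapter.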
– FCoE is FC frames encapsulated in Layer 2 Ethernet frames, designed to utilize a lossless Ethernet environment
  – FCoE frame layout: Ethernet header | FCoE header | FC header | FC payload | CRC | EOF | FCS
  – The large maximum size of FC frames requires Ethernet jumbo frames
  – There is no TCP, so a lossless environment is required
  – There is no IP routing

FCoE Frame Formats
– Ethernet frames give a 1:1 encapsulation of FC frames:
  – No segmenting of FC frames across multiple Ethernet frames
  – FCoE flow control is Ethernet based: BB_Credit/R_RDY is replaced by the PAUSE/PFC mechanism
– FC frames are large and require jumbo frames (a frame-size sketch follows the DCBX section below):
  – The maximum FC payload size is 2112 bytes
  – The maximum FCoE frame size is 2180 bytes
– FCoE frame fields (32 bits per row): destination MAC address, source MAC address, IEEE 802.1Q tag, EtherType = FCoE, version, reserved fields, SOF, the encapsulated FC frame (including the FC CRC), EOF, reserved bytes, and the Ethernet FCS
– An FCoE Initialization Protocol (FIP) was also created for:
  – Discovery
  – Login
  – Determining whether the MAC address is server-provided (SPMA) or fabric-provided (FPMA)

Lossless Ethernet
– FCoE is Layer 2 only, which limits the environment to the data center
– IEEE 802.1 Data Center Bridging (DCB) is the standards task group
– Converged Enhanced Ethernet (CEE) is an industry consensus term covering three link-level features:
  – Priority Flow Control (PFC, IEEE 802.1Qbb)
  – Enhanced Transmission Selection (ETS, IEEE 802.1Qaz)
  – Data Center Bridging Exchange (DCBX, currently part of IEEE 802.1Qaz, which leverages IEEE 802.1AB LLDP)
– Data Center Ethernet is a Cisco term for CEE plus additional functionality, including Congestion Notification (IEEE 802.1Qau)
– Enhanced Ethernet provides the lossless infrastructure that will enable FCoE and augment iSCSI storage traffic

PAUSE and Priority Flow Control
– PAUSE transforms Ethernet into a lossless fabric, but classical 802.3x PAUSE is rarely implemented since it stops all traffic on the link
– Priority Flow Control (PFC), formerly known as Per Priority PAUSE (PPP) or Class Based Flow Control, is a new PAUSE function that can halt traffic for one priority tag while allowing traffic at other priority levels to continue
  – Creates lossless virtual lanes
  – Per-priority, link-level flow control: it affects only the traffic that needs it and can be enabled per priority
  – It is not simply eight copies of 802.3x PAUSE
  – PFC will be limited to the data center
  – [Figure: per-priority PAUSE on a link between Switch A and Switch B]

Enhanced Transmission Selection and the Data Center Bridging Exchange Protocol (DCBX)
– Enhanced Transmission Selection (ETS) provides a common management framework for bandwidth management:
  – Allows HPC and storage traffic to be configured with appropriately higher priority
  – When a given load in a class does not fully utilize its allocated bandwidth, ETS allows other traffic classes to use the available bandwidth
  – Maintains low-latency treatment of certain traffic classes
  – [Figure: offered versus realized traffic for the HPC, storage, and LAN classes sharing a 10 GE link at times t1, t2, and t3]
– The Data Center Bridging Exchange Protocol (DCBX) is responsible for configuring link parameters for the DCB functions and determines which devices support the Enhanced Ethernet functions
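To make the ETS sharing behaviour concrete, here is a minimal sketch of bandwidth redistribution on a 10 Gb/s link: each class is guaranteed its configured share, and whatever one class leaves unused is handed to classes that still have demand. The class names, the 30/30/40 split, and the greedy redistribution loop are assumptions for the example; the real ETS scheduler defined in IEEE 802.1Qaz is considerably more involved.

```python
# Minimal illustration of ETS-style sharing on a 10 Gb/s link: each class
# gets its configured share, and bandwidth left unused by one class is
# redistributed to classes that still have demand. Not the real scheduler.
LINK_GBPS = 10.0
SHARES = {"HPC": 0.3, "Storage": 0.3, "LAN": 0.4}   # assumed ETS config

def allocate(offered: dict[str, float]) -> dict[str, float]:
    granted = {c: min(offered[c], SHARES[c] * LINK_GBPS) for c in SHARES}
    spare = LINK_GBPS - sum(granted.values())
    # Hand spare capacity to classes that still have unmet demand.
    for c in SHARES:
        extra = min(offered[c] - granted[c], spare)
        granted[c] += extra
        spare -= extra
    return granted

# Example: LAN offers more than its share while HPC offers less.
print(allocate({"HPC": 2.0, "Storage": 3.0, "LAN": 6.0}))
# {'HPC': 2.0, 'Storage': 3.0, 'LAN': 5.0}
```

In the example call, LAN traffic borrows the gigabit per second that HPC leaves idle, which mirrors the offered-versus-realized behaviour in the slide's figure.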
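As referenced from the FCoE Frame Formats bullets earlier, the sketch below restates the 1:1 encapsulation arithmetic behind the 2112-byte payload and 2180-byte frame figures. The individual field widths follow the commonly published FC-BB-5 layout and should be read as assumptions for illustration, not as content from the deck.

```python
# Illustrative only: the 1:1 encapsulation arithmetic behind the slide's
# "maximum FCoE frame size is 2180 bytes". Field widths follow the commonly
# published FC-BB-5 layout and are assumptions here, not deck content.
FC_HEADER = 24           # FC frame header
FC_MAX_PAYLOAD = 2112    # maximum FC payload (from the slide)
FC_CRC = 4               # FC CRC carried inside the encapsulated frame
ETH_MACS_AND_TYPE = 14   # destination MAC + source MAC + EtherType
DOT1Q_TAG = 4            # IEEE 802.1Q tag (carries the PFC priority)
FCOE_HEADER = 14         # version + reserved bits + SOF byte
FCOE_TRAILER = 4         # EOF byte + reserved bytes
ETH_FCS = 4              # Ethernet frame check sequence

def max_fcoe_frame_size() -> int:
    encapsulated_fc = FC_HEADER + FC_MAX_PAYLOAD + FC_CRC        # 2140
    return (ETH_MACS_AND_TYPE + DOT1Q_TAG + FCOE_HEADER
            + encapsulated_fc + FCOE_TRAILER + ETH_FCS)

print(max_fcoe_frame_size())   # 2180 bytes
```

Because 2180 bytes exceeds the standard 1518-byte Ethernet frame, links carrying FCoE must support jumbo frames, which is exactly why the slides call for jumbo frame support.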
30 40 & 100 Gigabit Ethernet IEEE P802.3ba Task Force states that bandwidth requirements for computing and networking applications are growing at different rates, which necessitates two distinct data rates, 40 Gb/s and 100 Gb/s IEEE target for standard completion of 40 GbE & 100 GbE is 2010 40 GbE products shipping today supporting existing fiber plant and plan is for 100 GbE to also support 10m copper, 100m MMF (use OM4 for extended reach) and SMF Cost of 40 GbE or 100 GbE is currently 5 – 10 x 10 GbE – Adoption will become more economically attractive at 2.5x which will take a couple of years © Copyright 2009 EMC Corporation. All rights reserved. 32 Deployments - FCoE and iSCSI FCoE FC expertise / install base iSCSI Ethernet No FC expertise needed FC management Layer 2 Ethernet Leverage Ethernet/IP expertise Use FCIP for distance 10 Gigabit Ethernet Supports distance connectivity (L3 IP routing) Strong virtualization affinity Lossless Ethernet Standards in process © Copyright 2009 EMC Corporation. All rights reserved. Standards since 2003 37 iSCSI Deployment iSCSI grew to > 10% of SAN market revenue in 2008 * Many deployments are small environments, which replace DAS – Strong affinity in SMB/commercial markets Seeing strong growth of Unified Storage – Supports iSCSI, FC, and NAS iSCSI with 10 Gigabit Ethernet becoming available Ethernet iSCSI SAN * According to IDC, 3/09 © Copyright 2009 EMC Corporation. All rights reserved. 38 FCoE Server Phase (Today) FCoE with direct attach of server to Converged Network Switch at top of rack or end of row Tightly controlled solution Server 10 GE adapters may be CNA or NIC Ethernet LAN Storage is still a separate network Converged Network Switch FC Attach 1 Gb NICs 10 GbE CNAs Fibre Channel SAN FC HBAs Storage Ethernet FC © Copyright 2009 EMC Corporation. All rights reserved. Rack Mounted Servers 39 FCoE Network Phase (2009 / 2010) Converged Network Switches move out of the rack from a tightly controlled environment into a unified network Maintains existing LAN and SAN management Ethernet LAN Overlapping domains may compel cultural adjustments Ethernet Network (IP, FCoE) and CNS Converged Network Switch Fibre Channel SAN FC Attach 10 GbE CNAs Ethernet FC © Copyright 2009 EMC Corporation. All rights reserved. Storage Rack Mounted Servers 40 Convergence at 10 Gigabit Ethernet Two paths to a Converged Network – iSCSI purely Ethernet – FCoE allows for mix of FC and Ethernet (or all Ethernet) Ethernet LAN FC that you have today or buy tomorrow will plug into this in the future Choose based on scalability, management, and skill set Converged Network Switch iSCSI/FCoE Storage 10 GbE CNAs FC SAN Ethernet FC © Copyright 2009 EMC Corporation. All rights reserved. Rack Mounted Servers 43 Time To Widespread Adoption 1980 1990 2000 2010 10 Gigabit Ethernet Ethernet 73 Defined 83 Standard 93 Widespread 02 Standard 09? Widespread iSCSI 00 02 Defined Standard 08 Widespread Fibre Channel 85 Defined 94 Standard 03 Widespread FCoE 07 09 ?? Defined Standard? © Copyright 2009 EMC Corporation. All rights reserved. 
Summary
– A converged data center environment can be built using 10 Gb Ethernet; the Ethernet enhancements are required for FCoE and will assist iSCSI
– Choosing between FCoE and iSCSI will be based on the customer's existing infrastructure and skill set
– 10 Gigabit Ethernet solutions will take time to mature:
  – Active industry participation is creating standards that allow solutions to integrate into existing data centers
  – FCoE and iSCSI will follow the Ethernet roadmap to 40 and 100 Gigabit in the future
– The converged data center allows storage and networking to leverage operational and capital efficiencies

Office of the CTO, EMC Corporation