INTRODUCING QFABRIC
REINVENTING THE DATA CENTER NETWORK
Simon Gordon – Juniper Networks
Senior Product Line Manager - FSG/DCBU
sgordon@juniper.net
+1-408-242-2524
QFABRIC IS REAL
QF/Node (QFX3500)
QF/Interconnect
QF/Director
TRENDS IN THE DATA CENTER
Server Trends – Consolidation
• Multi-core (8 -> 16 -> 32 … 128 …)
• Virtualization and VMs
DC Scale
• Mega DCs: 400K sq ft
• 4K racks, 200K servers
Application Trends
• SOA, Web 2.0
• MapReduce, Hadoop, grids
Interconnect Trends
• Convergence to 10GE
• Enhancements to Ethernet
• 10/40/100 GE
• Growing East-West traffic
QFabric answers these trends: any service, any port, low oversubscription (O/S)
TODAY’S ARCHITECTURE IS NON-TRANSPARENT
[Figure: two charts for a multi-tier Ethernet network. “Scale vs. Latency”: latency climbs with scale as traffic crosses more tiers (1 to 4 hops). “Scale vs. Bandwidth”: bandwidth between servers and NAS drops as scale grows. Adjacencies between nodes are location dependent.]
QFABRIC – 1 TIER
• Single, scalable fabric
• Virtual Control
• SRX5800 with SRX and vGW for security
• MX Series for inter-DC connectivity (MPLS and VPLS) to remote data centers
One large, seamless resource pool of servers, NAS, and FC storage
QFABRIC BENEFIT
[Figure: the same two charts, traditional multi-tier design vs. QFabric. With QFabric, latency stays flat and bandwidth between servers and NAS stays high as scale grows.]
A REVOLUTIONARY NEW ARCHITECTURE
3 Design Principles
Management Plane – N=1: the operational model of a single switch
Director Plane – Federated intelligence: the only way to scale with resilience
Data Plane – Rich edge, simple core: everything is one hop away
DATA PLANE IN A SINGLE SWITCH
Data Plane
1. All ports are directly connected to every other port
2. A single “full lookup” processes packets
DIRECTOR PLANE IN A SINGLE SWITCH
Director Plane
• Single consciousness
• Centralized shared table(s) have information about all ports
Management Plane
• All ports are managed from a single point
SINGLE SWITCH DOES NOT SCALE
Ports can be added to a single switch fabric…
…but eventually it runs out of real estate. After this, the network cannot be flat.
Choice: sacrifice simplicity or… change the scaling model.
SCALING THE SWITCH
[Diagram: QF/Director, QF/Interconnect, QF/Node]
Disaggregate:
So, we separate the line cards and supervisor cards from the fabric, and replace the copper traces with fiber links.
For redundancy, add multiple devices. This enables large scale.
SCALING THE DATA PLANE
Data Plane
1. All ports are directly connected to every other port (QF/Node to QF/Node through the QF/Interconnect)
2. A single “full lookup” at the ingress QF/Node device
3. Blazingly fast: always under 5 µs, 3.71 µs with short cables
QFabric is faster than any Ethernet chassis switch ever built.
SCALING THE DIRECTOR PLANE
Director Plane – Old Model: Active/Backup
The single active instance limits scalability.
SCALING THE DIRECTOR PLANE
Director Plane – New Model: Services Oriented
The intelligence and state are federated, distributed across the fabric. Director and management services use a scale-out model.
[Diagram: a new host address is learned at one QF/Node and propagated across the QF/Director cluster.]
SCALING THE MANAGEMENT PLANE
QF/Director
Management Plane
• Single point of management
• Extensive use of automation
• Familiar operational model
Managed as a single switch – N=1
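As an illustration of the N=1 model, here is a minimal sketch that manages the whole fabric through one NETCONF session using the Python ncclient library. The hostname and credentials are hypothetical placeholders; this is not Juniper sample code.

```python
# Illustrative only: one NETCONF session to the fabric's single management
# address covers every QF/Node, as if the fabric were one switch.
from ncclient import manager

with manager.connect(
    host="qfabric-director.example.net",  # hypothetical management address (N=1)
    port=830,                             # standard NETCONF-over-SSH port
    username="admin",
    password="secret",
    hostkey_verify=False,
    device_params={"name": "junos"},      # Junos-flavored NETCONF session
) as conn:
    # A single get-config returns the configuration of the entire fabric.
    print(conn.get_config(source="running"))
```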
QFABRIC CONVERGENCE – THE END VIEW
Storage convergence: a fully blended fabric
• Fibre Channel services
• Flexible ports: FC / FCoE / Ethernet
• Fully converged, unified network
[Diagram: FC and FCoE servers and FC storage attached to one fabric.]
QFABRIC CONVERGENCE – 2011
Storage/SAN convergence, 2011:
FCoE Transit Switch
• Converged Enhanced Ethernet – standards based (CEE or DCB)
• Provides perimeter protection with FIP snooping (a conceptual sketch follows)
FCoE-FC Gateway
• Ethernet or Fibre Channel gateway with FC ports at the QF/Node
• Interoperates with existing SANs
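To make “perimeter protection with FIP snooping” concrete, here is a conceptual Python sketch of the rule a transit switch enforces. It is not Juniper's implementation; only the EtherType values (FIP = 0x8914, FCoE = 0x8906) are standard.

```python
# Conceptual model of FIP snooping on an FCoE transit switch.
FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol frames
FCOE_ETHERTYPE = 0x8906  # FCoE data frames

# (ENode MAC, FCF MAC) pairs learned by watching successful FIP logins.
allowed_sessions = set()

def snoop_fip(enode_mac: str, fcf_mac: str, login_accepted: bool) -> None:
    """Watch FIP exchanges: a successful fabric login authorizes FCoE
    between that server CNA (ENode) and that Fibre Channel Forwarder."""
    if login_accepted:
        allowed_sessions.add((enode_mac, fcf_mac))

def permit_fcoe_frame(src_mac: str, dst_mac: str) -> bool:
    """Perimeter protection: FCoE frames pass only between pairs whose
    login was snooped; everything else is dropped at the edge port."""
    return (src_mac, dst_mac) in allowed_sessions or \
           (dst_mac, src_mac) in allowed_sessions
```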
Hardware
QFABRIC HARDWARE
QF/Interconnect – connects all the QF/Node devices
QF/Node – media-independent I/O ToR device; can run in independent or fabric mode
QF/Director – 2 RU fixed configuration, x86-based system architecture
QFABRIC HARDWARE – INTERCONNECT
QF/Interconnect
• 21 RU high, 8 slot chassis
• 128 QSFP 40G ports – wire speed
• 8 fabric cards (10.24 Tbps/chassis)
• Dual redundant Director boards
• Redundant AC power supplies
• Front to back air flow
[Front and rear views]
QFABRIC HARDWARE – QF/NODE
QF/Node
• 1 RU high fixed configuration
• 48 SFP+/36 SFP ports; 12 FC-capable (2/4/8G) ports
• 4 x 40G QSFP+ fabric uplink ports (can also operate in 10G mode)
• Redundant AC power supplies
• Front to back air flow
Will also operate as a standalone switch: the QFX3500.
[Front and rear views]
QFABRIC HARDWARE – DIRECTOR
QF/Director
• 2 RU device
• GE ports to connect to QF/Node and QF/Interconnect devices
• Based on x86 architecture
System Design
QFABRIC CONFIGURATION FOR SMALL DEPLOYMENT
[Diagram: QF/Director cluster (1GE links) and QF/Interconnects (40GE links) connecting QF/Node #1 through #16 with redundant uplinks.]
• Solution for 768 10GE/1GE ports (16 nodes x 48 ports)
• 2 fabric cards per Interconnect chassis (25% fill rate)
• Redundant uplinks
QFABRIC CONFIGURATION FOR LARGE DEPLOYMENT
[Diagram: QF/Director cluster and fully populated QF/Interconnects connecting QF/Node #1 through #128.]
• Solution for 6,000 10GE/1GE ports (128 nodes x 48 ports = 6,144)
• 40G uplinks from each Node to the Interconnects
• 1GE connections to the Director cluster
A back-of-the-envelope sizing sketch follows.
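The port counts above follow directly from the per-node numbers quoted on the hardware slides. A quick sketch, assuming 48 usable access ports and 4 x 40G fabric uplinks per QF/Node:

```python
# Back-of-the-envelope QFabric sizing; per-node figures taken from the
# QF/Node hardware slide (48 access ports, 4 x 40G fabric uplinks).
ACCESS_PORTS_PER_NODE = 48
UPLINK_GBPS_PER_NODE = 4 * 40  # 160G of fabric uplink per node

def fabric_access_ports(nodes: int) -> int:
    return nodes * ACCESS_PORTS_PER_NODE

def access_oversubscription() -> float:
    # 48 x 10G of access vs 160G of uplink = 3:1 if every port runs hot.
    return (ACCESS_PORTS_PER_NODE * 10) / UPLINK_GBPS_PER_NODE

print(fabric_access_ports(16))    # 768  -> the small deployment
print(fabric_access_ports(128))   # 6144 -> the "6,000 port" large deployment
print(access_oversubscription())  # 3.0
```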
QFabric Software
SYSTEM ARCHITECTURE EVOLUTION
• Pizza Box: one management plane, one Route Engine, one Forwarding Engine
• Chassis: one management plane, redundant Route Engines, multiple Forwarding Engines
• Matrix / Virtual Chassis: one management plane, a master Route Engine with slave Route Engines, many Forwarding Engines
• QFabric: one management plane, peer Route Engines (no master/slave), many Forwarding Engines
QFabric SOFTWARE STACK
Centralized (Fabric Administrator): management views and APIs – inventory, topology, fabric control, fault, connectivity, troubleshooting
Distributed: every node runs a full L2/L3 switch stack – control plane, data plane, platform
Management
FABRIC DIRECTOR
SINGLE POINT for signaling and configuration: CLI, SNMP, NETCONF/DMI (XML), SMI-S
Control and management interfaces for QFabric (Director RE, Partition RE, Fabric RE) run virtualized over a hypervisor on the Director cluster.
• Simplicity: single-box management paradigm
• Standard: SNMP, Syslog, NETCONF, CIM, SMI-S
• Virtualized: hides CP and DP components; views; scale
• Automation: JUNOS built-in automation capabilities
QFABRIC MANAGEMENT STACK
From top to bottom:
• BSS (Juniper + partners): business services
• Data center orchestration: WSDL/SOAP and REST APIs
• Junos Space EMS/NMS/apps: Space SDK
• Director management – signaling and configuration (CLI, SNMP, NETCONF/DMI (XML), SMI-S): views, APIs, inventory, topology, fault, connectivity, troubleshooting; extensible via the JUNOS SDK
• Per node: control plane, data plane, platform
FABRIC SYSTEM CONTROL SCALE OUT
[Diagram: SFC routing engines (Director RE, Partition REs, Fabric REs for Node 0 … Node N) virtualized over a hypervisor on the Director cluster, driven by the fabric administrator.]
N-way compute cluster
• Automatic balancing of compute load and connection traffic (both south- and northbound)
• No redundant nodes / hot spares – all resources are available for computation
• Graceful degradation upon failure
• Scale out: nodes are added in service and automatically discover each other
• SFC application logic is dynamically upgradable while running
A minimal sketch of this scale-out assignment pattern follows.
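The pattern described above (every node does useful work, and a failure only re-spreads that node's share) can be illustrated with rendezvous hashing. This is a generic sketch of the concept, not Juniper's SFC implementation:

```python
# Generic scale-out sketch: keys (e.g., host addresses) are deterministically
# assigned across all live nodes; no hot spares, and losing a node only
# moves the keys that node owned (graceful degradation).
import hashlib

class ScaleOutCluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def owner(self, key: str) -> str:
        # Rendezvous hashing: score every (key, node) pair, pick the max.
        def score(node: str) -> str:
            return hashlib.sha256(f"{key}:{node}".encode()).hexdigest()
        return max(self.nodes, key=score)

    def fail(self, node: str) -> None:
        self.nodes.remove(node)  # survivors absorb only that node's keys

cluster = ScaleOutCluster(["director-0", "director-1", "director-2"])
print(cluster.owner("host-00:11:22:33:44:55"))
cluster.fail("director-1")
print(cluster.owner("host-00:11:22:33:44:55"))  # still served by a survivor
```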
A REVOLUTIONARY NEW ARCHITECTURE
Performance and simplicity of a single switch.
Scalability and resiliency of a network.
Migration to QFabric
Introducing QFX3500
MIGRATING TO QFABRIC
[Diagram: two data center pods behind an MX Series core and SRX5800, with EX8216 aggregation and EX4200 access; QFX3500 ToRs deployed in Pod 1 and Pod 2 provide the migration path into QFabric.]
INTRODUCING QFX3500
[Front and rear views]
• Wirespeed switching: Layer 2 and Layer 3; 1.28 Tbps, 960 Mpps
• FCoE & Fibre Channel support: FCoE Transit Switch & FCoE-FC Gateway
• Ultra low latency: sub-microsecond (see the cut-through arithmetic below)
• Low power: 5 Watts/port
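Why sub-microsecond latency implies cut-through switching: a store-and-forward switch must buffer an entire frame before forwarding it, and at 10GbE that alone exceeds the quoted figure. The arithmetic, as a quick sketch:

```python
# Serialization delay: time just to receive a full frame at a given line
# rate. Standard arithmetic, no vendor specifics.
def serialization_delay_us(frame_bytes: int, link_gbps: float) -> float:
    return frame_bytes * 8 / (link_gbps * 1e3)  # bits / (bits per microsecond)

print(serialization_delay_us(1500, 10))  # 1.2 us for a full-size frame
print(serialization_delay_us(9000, 10))  # 7.2 us for a jumbo frame
# A cut-through switch forwards once the header is read, so its latency
# can stay below 1 us regardless of frame size.
```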
PORTS
[Rear view: 4 QSFP+ ports; 48 SFP+/SFP ports, of which 12 are FC-capable and 36 GbE-capable]
Selectable port configurations:
• 48 port 10GbE + 4 port 40GbE
• 63 port 10GbE (40GbE ports in 4 x 10GbE breakout mode)
• 6 port FC + 42 port 10GbE + 4 port 40GbE
• 12 port FC + 36 port 10GbE + 4 port 40GbE
• 12 port FC + 48 port 10GbE + 1 port 40GbE
Some of these configurations are roadmap items (not available at FRS).
TRANSCEIVER SUPPORT
[Rear view]
• FC SFP: 2/4G or 8G FC-SW optical transceivers
• GbE SFP: copper (1000BaseT) and optical (SR, LR)
• 10GbE SFP+: direct-attached/twinax copper (1, 3, 5, 7 meter) and optical (USR, SR, LR)
63 PORT 10GBE IN 1RU (2Q2011)
[Diagram: the four QSFP+ ports run in 4 x 10GbE breakout mode toward servers (4 + 4 + 4 + 3 = 15 ports), alongside the 48 x 10GbE SFP+ ports (12 FC-capable), for 63 ports total.]
• Direct-attached/twinax SFP+ copper: 1, 3, 5, 7 meter
• SFP+ optical transceivers: USR, SR, LR
Roadmap item (not available at FRS).
PERFORMANCE & SCALE
[Front and rear views]
Feature                  Scale
Throughput               1.28 Tbps
Forwarding               960 Mpps
Latency                  900 nanoseconds
Packet buffer            9 MB shared
MAC addresses            960K
IPv4 routes              20K
Multicast groups         4K
Firewall filters         1,500
Maximum power            320 Watts
Nominal power            200 Watts
Nominal power per port   ~4 Watts
Depth                    28"
Air flow                 Front to back
www.juniper.net
SECURITIES TECHNOLOGY ANALYSIS CENTER (STAC)
TEST RESULTS
Simulates trading transactional performance (STAC-M2 Benchmarks™ v1.0 highlights; supply-to-receive latency, 1 producer to 5):

Metric                          QFX3500 / IBM LLM   Cisco 4900M / 29West   Cisco Nexus 5010 / 29West   Voltaire IB / IBM LLM
Highest supply rate (msg/sec)   1,500,000           1,300,000              1,300,000                   1,000,000
Mean (microseconds)             9                   15                     14                          8
Max (microseconds)              23                  30                     33                          47
Standard deviation (jitter)     0                   1                      1                           1

The Juniper QFX3500, in combination with IBM server and middleware with SolarFlare NICs, delivered the best performance to date for product combinations with 10GE switches. This product combination delivered more messages faster, with lower jitter, than any other audited report in the STAC library.
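For readers unfamiliar with the metrics in the table, this small sketch shows how mean, max, and jitter (standard deviation) relate, using made-up latency samples rather than STAC data:

```python
# Illustrative only: the three latency metrics from the STAC table,
# computed over hypothetical per-message latencies (microseconds).
import statistics

latency_us = [8, 9, 9, 10, 9, 23, 9, 8]

print(statistics.mean(latency_us))    # "Mean (microseconds)"
print(max(latency_us))                # "Max (microseconds)"
print(statistics.pstdev(latency_us))  # "Standard deviation (jitter)"
```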
QFX3500 – UNIVERSAL TOR
• Ultra low latency: <1 µs, cut-through, 40G
• Feature rich: full L3, VirtualControl, FC Gateway, HA, VPN
• Converged I/O: DCB, FCoE-FC Gateway, FCoE Transit Switch
• Fabric attach: unique value add to scale
[Diagram: one ToR serving Ethernet/IP, FC/FCoE, and FC SAN attachment.]
Certify once; deploy everywhere.
CONVERGED I/O
Converged I/O with CNA & FCoE – the QFX3500 solution:
• FCoE Transit Switch: FIP snooping installs FIP ACLs on the DCB ports between the FCoE servers (VN_Ports on their CNAs) and the FC/FCoE switch (VF_Ports)
• FCoE-FC Gateway: an NPIV proxy; VF_Ports face the FCoE servers, N_Ports face the FC switch's F_Ports
• Feature rich DCB (PFC, ETS, DCBX)
• 10GbE/FCoE standard
• FIP Snooping
FABRIC ATTACH – QFX3500 SOLUTION
[Diagram: two deployment scenarios.
Scenario 1 – FCoE Transit: servers with CNAs run FCoE to QFX3500 access switches, which carry it over LAGs through an EX82XX/MX Series core toward the FC SAN.
Scenario 2 – FCoE-FC Gateway: the QFX3500 terminates FCoE at the access layer and attaches natively (FC) to the FC SAN, with Ethernet/IP carried over LAGs to the EX82XX/MX Series core.]
FCOE TRANSIT SWITCH USE CASE (QFX3500)
Requirements
• 10GbE server access, including blade servers with pass-through or an embedded DCB switch
• Copper and/or fiber cabling
• High availability; dual homed to the aggregation layer (MX Series MCLAG or VC, or EX8200 VC; FCoE-enabled SAN)
• >40 ports per ToR switch
• DCB support with FIP snooping
QFX3500 solution – FCoE Transit Switch
• 48 (63) ports of wirespeed 10GbE
• Copper DAC and SFP+ fiber support
• Hardware & software HA
• DCB & FCoE transit switch support: FCoE is standard on all ports; PFC, ETS, DCBX support; FIP snooping support; interoperability with QLogic and Emulex CNAs
[Diagram: rack or blade servers with CNAs, dual homed over LAGs to QFX3500 ToRs.]
FCOE-FC GATEWAY USE CASE (QFX3500)
Requirements
• 10GbE server access, including blade servers with pass-through or an embedded DCB switch
• Copper and/or fiber cabling
• High availability; dual homed to the aggregation layer (MX Series MCLAG or VC, or EX8200 VC; FC SAN)
• >40 ports per ToR switch
• DCB & FCoE-FC Gateway support
QFX3500 solution – FCoE-FC Gateway
• 48 (63) ports of wirespeed 10GbE
• Copper DAC and SFP+ fiber support
• Hardware & software HA
• DCB & FCoE-FC Gateway support: FCoE is standard on all ports; PFC, ETS, DCBX support; 12 port FC (2/4/8G) with FC license; interoperability with QLogic and Emulex CNAs and Cisco and Brocade FC switches
[Diagram: rack or blade servers with CNAs, dual homed over LAGs to QFX3500 gateways with native FC to the SAN.]
FCOE TRANSIT & GATEWAY SWITCH USE CASE
Requirements
• 10GbE server access, including blade servers with pass-through or an embedded DCB switch
• Separation of management between the LAN & SAN teams: the FCoE-FC Gateway is administered by the SAN team, the ToR by the LAN team
• Support for blade servers with pass-through or an embedded DCB switch
QFX3500 solution
• FCoE Transit Switch at the ToR
• FCoE-FC Gateway at the EoR
• EX4500 as Transit Switch
• 3rd-party transit switches, in particular blade-shelf embedded switches
[Diagram: rack or blade servers with CNAs connect over LAGs to FCoE transit switches at the ToR; an FCoE-FC gateway at the EoR (behind MX Series MCLAG/VC or EX8200 VC) attaches to the FC SAN.]
FCOE TRANSIT & GATEWAY SWITCH USE CASE (QFABRIC & QFX3500)
The same requirements and design apply when the ToR is part of a QFabric: FCoE Transit Switch at the ToR, FCoE-FC Gateway at the EoR, with EX4500 or 3rd-party (blade-shelf embedded) transit switches supported.