
The IRIS Optical Packet Router
A DARPA / MTO Project
International Workshop on the Future of Optical Networking (FON)
OFC 2006
Jürgen Gripp
Bell-Labs, Crawford Hill Laboratories
jurgen@lucent.com
March 5, 2006
Acknowledgements
Lucent Bell Labs
  – Jürgen Gripp, Jesse Simsarian, Pietro Bernasconi, Dimitrios Stiliadis, Liming Zhang, Larry Buhl, Jane Le Grange, Nick Sauer, Weiguo Yang, Jeff Sinsky, Martin Zirngibl
Our IRIS partners
  – Lehigh University: Prof. Tom Koch
  – University of California, Santa Cruz: Prof. Anujan Varma
  – Agility Communications: Dr. Mike Larson
Overview
■ Why electronic routers don't scale and how the IRIS optical packet router fixes the problem
■ IRIS architecture and technology
■ Experimental results
Scaling Today’s Electronic Packet Routers
■ Router growth
  – Capacity has traditionally scaled at 2× per 18 months
  – At that rate, 100 Tb/s core routers are needed in 5-6 years (see the extrapolation below); single-rack routers of that size are not possible
■ Today's electronic routers
  – Power and heat dissipation limit density
  – Rack capacity is power-density limited
    • Single-rack capacity will scale at only 10-20% per year in the future
■ Multi-rack electronic router scaling
  – Requires multiple shelves and racks of line cards
    • Nonlinear scaling
    • Increases the number of electrical-optical-electrical (E-O-E) conversions

Projected scaling of multi-rack electronic routers (from figure):
  Year   Capacity   Power    Racks   E-O-E conversions
  2001   640 Gb/s   6 kW     1       -
  2004   2.5 Tb/s   30 kW    5       2
  2007   10 Tb/s    160 kW   28      4
  2010   40 Tb/s    850 kW   160     6
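The 5-6 year figure follows from extrapolating the historical doubling rate, starting from the 640 Gb/s router of 2001. A minimal sketch of that arithmetic (my own illustration, assuming the trend simply continues):

```python
# Extrapolate the "2x per 18 months" capacity trend, starting from the
# 640 Gb/s single-rack router of 2001 (numbers from the table above).
import math

def years_to_reach(target_gbps, start_gbps=640.0, doubling_months=18.0):
    doublings = math.log2(target_gbps / start_gbps)
    return doublings * doubling_months / 12.0

# ~10.9 years after 2001, i.e. roughly 5-6 years after this 2006 talk.
print(years_to_reach(100_000))
```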
Why Do Current Packet Routers Not Scale?
■ Power and heat dissipation limit density
  – Requires multiple shelves and racks of line cards
  – Increases the distance between line cards and switch fabric
■ Centralized switch fabric
  – The switch fabric alone pushes power-density and heat-dissipation limits
  – Massive wiring density between line cards and switch fabric
■ Centralized functions scale poorly
  – Control and scheduling complexity grows nonlinearly

[Diagram: line cards (optics, packet processing, buffer, I/O) connect to the network on one side and, through massive wiring, to a centralized switch fabric and central scheduler on the other]
Distributed Optical Switch Fabric
■ Optical fibers can eliminate the massive wiring density and allow higher bandwidths
■ The central switch fabric is replaced by a passive optical device that directs signals according to their wavelength
  – Arrayed Waveguide Grating (AWG)
■ Switching uses fast wavelength-tunable transmitters (T-TX) distributed on each line card
  – Fixes hardware scaling: deployed hardware scales with deployed capacity
  – Passive optical backplane

[Diagram: line cards with buffers and T-TX optics connect over fiber to an AWG switch fabric; the passive optical backplane replaces the central switch fabric, while the central scheduler remains]
Distributed Scheduler
■ Eliminates the central scheduler without sacrificing throughput
  – Traffic is distributed uniformly over the line cards
  – This technique is called "load balancing" (see the sketch below)
■ Each line card works like a small router with its own scheduler
  – Scales gracefully, since the complexity of scheduling is constant
  – Deployed hardware scales with deployed capacity
■ Combined with the distributed optical switch fabric, this allows scaling to very large routers (>100 Tb/s)

[Diagram: the central scheduler is eliminated; each line card (optics, packet processing, buffer, I/O) has its own local scheduler and connects through the AWG; the router scales to >100 Tb/s]
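To make the load-balancing idea concrete, here is a minimal sketch (my own illustration, not IRIS code) of how a fixed round-robin first stage spreads any input pattern uniformly over the line cards:

```python
# Load balancing with a deterministic round-robin first stage: each
# line card receives an equal share of the traffic, no matter how
# skewed the input pattern is, so local scheduling complexity stays
# constant as the router grows.
from collections import Counter

N = 4  # line cards

def middle_card(t, in_port):
    # Round robin over time slots, independent of the packet's destination.
    return (in_port + t) % N

# Worst-case input: all packets arrive on port 0. The load is still
# spread evenly, so each local scheduler sees ~1/N of the traffic.
load = Counter(middle_card(t, in_port=0) for t in range(1000))
print(load)  # ~250 packets per line card
```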
Overview
■ Why electronic routers don't scale and how the IRIS optical packet router fixes the problem
■ IRIS architecture and technology
■ Experimental results
Load-Balanced Router Architecture
■ Three-stage architecture (space - time - space)
■ All-optical data plane with SOA-based wavelength converters
■ N² router ports (for N AWG ports); scales to 256 Tb/s for 80×80 AWGs at a 40 Gb/s data rate (see the arithmetic below)
■ Round-robin schedule for the space switches, deterministic schedule for the time buffers
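The 256 Tb/s figure follows directly from the port count and line rate; a quick check of the slide's numbers (my own arithmetic):

```python
# N AWG ports give N**2 router ports; aggregate capacity is the port
# count times the per-port line rate.
def router_capacity_tbps(awg_ports, line_rate_gbps):
    return awg_ports**2 * line_rate_gbps / 1000.0

print(router_capacity_tbps(80, 40))  # 256.0 Tb/s, as on this slide
print(router_capacity_tbps(2, 40))   # 0.16 Tb/s: the 2x2 demo later in the talk
```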
N×N Arrayed Waveguide Grating (AWG)
Provides strictly non-blocking cross-connectivity between N input and N output ports while using only a single set of N wavelengths. No wavelength conflict is possible.
40-channel packaged device; dimensions: 5 × 2.5 cm²

[Plot: insertion loss (dB) vs. wavelength for the packaged device, 1530-1565 nm]
N×N AWG Switch Fabric
■ Arrayed Waveguide Grating
  – Integrated N×N multi-port grating
  – Fabricated as a Planar Lightwave Circuit (PLC)
  – Converts spectral switching to spatial switching with a cyclic function (sketched below)
■ Switch fabric
  – Fast tunable lasers (T-TX) on the N input fibers form an N×N cross-connect
  – Strictly non-blocking cross-connect
[Figure: N×N periodic AWG]
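The cyclic routing function is what turns a wavelength choice into a port choice. A minimal sketch (illustrative; actual port/channel numbering conventions vary from device to device):

```python
N = 8  # AWG port count; IRIS considers devices up to 80x80

def awg_output_port(input_port, channel):
    # Cyclic function: the output port is set by the input port and
    # the wavelength channel of the signal.
    return (input_port + channel) % N

def channel_for_route(input_port, output_port):
    # A fast tunable laser picks the wavelength that makes the passive
    # AWG deliver the signal to the desired output port.
    return (output_port - input_port) % N

# Strictly non-blocking: for any single channel, all N inputs land on
# N distinct outputs, so no wavelength conflict is possible.
for ch in range(N):
    assert len({awg_output_port(p, ch) for p in range(N)}) == N
```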
Photonic Integration in IRIS
Photonic integration is used everywhere, and for everything!

[Diagram: three-stage data plane. First stage (packet interconnect): wavelength switches WS 1..M, built from fast tunable lasers and wavelength converters, feed an M×N AWG carrying λ1, λ2, ... λN. Second stage (memory): header detectors (HD) and time buffers (TB 1..N) with integrated delay lines and fast tunable multi-frequency lasers, each followed by a wavelength switch. Third stage (packet interconnect): wavelength switches WS 1..M into an N×M AWG. Integrated parts include a header-detector array and an integrated 2×4 wavelength switch.]
Wavelength Switch Module
■ The wavelength switch (WS) module consists of an array of tunable wavelength converters (TWC), each containing a tunable multi-frequency laser (MFL) and a wavelength converter (WC).
■ The N incoming wavelengths are first demultiplexed and individually converted. The converted signals are then power-combined and forwarded (modeled below).

[Diagram: wavelength switch module; inputs λ1..λN are demultiplexed onto an array of TWCs (each an MFL driving a WC, e.g. converting λk to λj) and recombined at the output]
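A minimal dataflow model of the module as just described (names and data types are my own illustration, not from the IRIS design):

```python
# Model of the WS module: demultiplex per-channel signals, convert
# each to the wavelength chosen by its TWC, and power-combine.

def wavelength_switch(inputs, targets):
    """inputs:  channel -> payload carried on that wavelength
    targets: channel -> output wavelength chosen by the TWC's MFL"""
    output = {}
    for ch, payload in inputs.items():
        out_ch = targets[ch]
        # The passive combiner cannot carry two signals on one wavelength.
        assert out_ch not in output, "wavelength conflict at combiner"
        output[out_ch] = payload
    return output

# Two packets retargeted so a downstream AWG routes them to different ports.
print(wavelength_switch({0: "pkt A", 1: "pkt B"}, {0: 3, 1: 5}))
```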
Monolithic Integration Structure
■ InP-based material structure with quaternary InGaAsP
  – Low propagation losses (<0.5 dB/cm) in passive ridge waveguides
  – 600 μm bend radius at an index step of Δ ~ 0.8%
■ Active PLCs consist mainly of SOAs acting as amplifiers, detectors, modulators, etc.
  – MQW or bulk, with active overlay or butt joint
  – Low reflections at the passive-active interface

[Cross-sections: passive section (buried rib waveguide, passive rib waveguide) and active section (MQW-loaded channel amplifier, butt-joint channel amplifier, active channel waveguide); layer stacks of p:InP / i:MQW / i:InP / i:Q1.2 / i:Q1.6 or MQW / n:InGaAsP / n:InP with SiO2 cladding and metal contacts]
Wavelength Conversion at 40 Gb/s
Two different approaches to wavelength conversion:
■ SOA + FIR filter
  – Single SOA, but sensitive to the output wavelength if operated in non-inverting mode
  – 3 ps input pulses
  – 3-5.5 dB penalty at BER = 10⁻⁹
■ MZI with differential input
  – Two SOAs, but the MZI itself acts as the wavelength filter, so input and output are not wavelength-filtered
  – 8 ps input pulses
  – 6.5 dB penalty at BER = 10⁻⁹
Both approaches achieve error-free wavelength conversion.

[Diagrams: SOA-WC with FIR filter (TL and data into a single SOA followed by the filter); differential-input MZI with two SOAs]
Multi-Frequency Lasers
■ 8-channel MFL with double-chirped 1×8 AWG
  – Channel spacing: 100-200 GHz
  – Threshold currents: ~35-40 mA
  – Operating currents: ~2× 90 mA
  – On-chip power: ~5-7 dBm
  – SMSR: ~40 dB
  – Switching time: rise ~250 ps, fall ~290-460 ps

[Diagram: 8-channel MFL with laser cavity between an HR coating and a κ / 1-κ output coupler, double-chirped AWG with FSR = 900 GHz and 100 GHz channel spacing; switching trace on a 1 ns scale]
Integrated 1×8 Wavelength Switch
■ Monolithically integrated 1×8-channel wavelength switch comprising
  – an 8-channel MFL with double-chirped 1×8 AWG
  – a wavelength converter (SOA + FIR filter)
■ Error-free conversion from 1559.0 nm to 1550.1-1555.7 nm
■ Penalty: 3-5.5 dB at BER = 10⁻⁹
  – Partly due to an unmatched filter before the receiver and OSNR degraded by high coupling losses
■ Operating currents: WC-SOA 350 mA; MFL SOAs 90-150 mA

[Photo: 4.0 mm × 6.5 mm InP chip with the laser, WC-SOA, SOA array, and κ / 1-κ coupler; MZI output carries λsig and λj, j = 1..8; inset: converted optical eye diagram at 40 Gb/s (20 ps/div)]
2×8 Wavelength Switch
■ Monolithically integrated 2×8-channel wavelength switch comprising
  – two 8-channel MFLs (λn, λm with n, m = 1..8)
  – two differential Mach-Zehnder wavelength converters (WC-MZI)
  – a 2-channel demux interleaver (first time ever in InP)
■ MFL: channel spacing 100 GHz; operating currents 80-95 mA
■ Interleaver: 3-dB bandwidth ~180 GHz; insertion losses ~6-8 dB; ER > 15 dB (not tuned)

[Diagram: InP chip with SOA array, two WC-MZIs fed by inputs A and B (λsig A-B), and κ / 1-κ couplers]
Overview
■ Why electronic routers don't scale and how the IRIS optical packet router fixes the problem
■ IRIS architecture and technology
■ Experimental results
System Setup
[Figure: experimental system setup]
Operation of the Router (Animation)
■ First stage: round-robin schedule
■ Second stage: time-buffer schedule
■ Third stage: round-robin schedule
■ All stages switch based on a schedule that simulates random traffic with adjustable traffic load and imbalance (see the sketch below)

[Animation, frames T=1..4: packets from Tx1 and Tx2, labeled with their routes (1→1, 1→2, 2→1), traverse the three stages; the time buffers delay contending packets by 0-3 slots before they reach Rx1/Rx2, and one packet is dropped]
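A toy end-to-end schedule in the same spirit (a sketch of the load-balanced three-stage idea, not the IRIS control logic; port count and buffer depth are arbitrary):

```python
# 2x2 three-stage router: round-robin space switches around a
# slot-addressable time buffer per middle-stage line card.
N, DEPTH = 2, 4
buffers = [[] for _ in range(N)]   # buffered packets, stored as destinations
dropped = delivered = 0

def step(t, arrivals):
    """arrivals: list of (in_port, out_port) packets for time slot t."""
    global dropped, delivered
    for i, dst in arrivals:
        mid = (i + t) % N          # stage 1: round robin (load balancing)
        if len(buffers[mid]) < DEPTH:
            buffers[mid].append(dst)   # stage 2: wait in the time buffer
        else:
            dropped += 1               # buffer overflow -> packet drop
    for m in range(N):
        out = (m + t) % N          # stage 3: round robin over outputs
        if out in buffers[m]:      # shift a matching packet into this slot
            buffers[m].remove(out) # (this shifting causes re-ordering)
            delivered += 1

import random
random.seed(1)
for t in range(10_000):            # random traffic at ~80% load
    step(t, [(i, random.randrange(N)) for i in range(N)
             if random.random() < 0.8])
print(delivered, dropped)
```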
BER Performance of the Wavelength Converters
[Plot: log(BER) vs. received power (dBm, -40 to -20), measured before WC1 (a), before WC3 (b), and at Out 1 (c); inset: differential-input MZI converter with TL, data input, and two SOAs]
■ 4 dB penalty for the first conversion and AWG filtering in the time buffers
■ 6.5 dB penalty for the second conversion and the final AWG pass
■ The second converter is aligned for inverted pulses
■ Reduced coupling losses and better filter matching should decrease the penalties
Packet Switching with Time Buffers & Drop
■ 1 μs packets, 50 ns guard band, 25 ns headers
■ Random schedule
  – 20 packets long
  – 80% traffic load
  – 6 packets are dropped
■ Time buffers shift packets into later time slots to avoid contention
■ The BERT gates on packets traveling along one path
■ Packet re-ordering is observed
Packet Loss Rate and Latency
[Plots: log(PLR) and average latency (μs) vs. traffic load (0-100%) for four traffic patterns: random balanced, bursty, unbalanced cross, and unbalanced straight; at low load the PLR curves flatten onto BER-limited floors at BER = 10⁻⁷ and 10⁻⁸, at high load they follow the buffer depth]
■ At high traffic load, the PLR is dominated by buffer overflow
■ At low traffic load, the PLR is dominated by the BER (see the check below); the graph is shown for 1 μs packets at 40 Gb/s (40,000 bits)
■ The schedule is ~4000 packets long
■ Bursty traffic uses a bimodal traffic-load distribution (50% at TL - 18%, 50% at TL + 18%)
■ Unbalanced straight (cross) traffic sends 90% of the TL from In1 to Out1 (Out2)
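The BER-limited floor can be sanity-checked with a one-line calculation (my own arithmetic, assuming independent bit errors and no FEC):

```python
# A packet is lost if any one of its bits is in error.
def plr_from_ber(ber, bits_per_packet):
    return 1.0 - (1.0 - ber) ** bits_per_packet

bits = 40_000                      # 1 us packet at 40 Gb/s
print(plr_from_ber(1e-7, bits))    # ~4e-3, i.e. log(PLR) ~ -2.4
print(plr_from_ber(1e-8, bits))    # ~4e-4, i.e. log(PLR) ~ -3.4
```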
Summary
■ Today's electronic routers do not scale; optical packet routers can help
  – Power dissipation, fabric, and scheduler scalability problems
  – IRIS provides an optical solution to these scalability problems
  – Fewer O-E-O conversions, a passive optical backplane, load balancing
■ Demonstrated a load-balanced all-optical packet router
  – 2×2 ports at 40 Gb/s → a 160 Gb/s router
  – Complete data plane, including two switching stages and buffering
  – Time buffers handle random traffic; control is based on simulation
  – Switching with 50 ns dead time, a 25 ns training pattern, and 1 μs payload
  – Data can pass through two wavelength converters without errors
  – Tested the router with arbitrary traffic distributions and determined packet loss rates and average latencies