Progress and Challenges toward 100Gbps Ethernet

Joel Goergen
VP of Technology / Chief Scientist
Abstract: This technical presentation will focus on the progress and challenges in developing technology and standards for 100 GbE. Joel is an active contributor to the IEEE 802.3 and Optical Internetworking Forum (OIF) standards processes. Joel will discuss design methodology, enabling technologies, emerging specifications, and crucial considerations for performance and reliability for this next iteration of LAN/WAN technology.
Overview
• Network Standards Today
• Available Technology Today
• Feasible Technology for 2009
• The Push for Standards within IEEE and OIF
• Anatomy of a 100Gbps or 160Gbps Solution
• Summary
• Backup Slides
Network Standards Today:
The Basic Evolution
• 10 Mb (1983)
• 100 Mb (1994)
• 1 GbE (1996)
• 10 GbE (2002)
• 100 GbE ??? (2010???)
Network Standards Today:
The Basic Structure
[Diagram: Internet at the top, Core Switches below it, Aggregation / Access Switches below that, Servers / PCs at the bottom.]
Network Standards Today:
The Desktop
• 1Gbps Ethernet
– 10/100/1000 copper ports have been shipping with most desktop and laptop machines for a few years.
– Fiber SMF/MMF
• IEEE 802.11a/b/g Wireless
– Average useable bandwidth reaching 50Mbps
Network Standards Today:
Clusters and Servers
• 1Gbps Ethernet
– Copper
• 10Gbps Ethernet
– Fiber
– CX-4
Network Standards Today:
Coming Soon
• 10Gbps LRM
– Multi-mode fiber to 220 meters.
• 10Gbps Base-T
– 100 meters at more than 10 Watts per port ???
– 30 meters short reach at 3 Watts per port ???
• 10Gbps Back Plane
– 1Gbps, 4x3.125Gbps, 1x10Gbps over 1 meter improved FR-4 material.
Available Technology Today:
System Implementation A+B
[Diagram: two line cards, each with a 10G/1G front end and SPIx interfaces to both the A and B paths, connected across a passive copper backplane to redundant Fabric A and Fabric B.]
Available Technology Today:
System Implementation N+1
[Diagram: line cards L1 through Ln+1, each with a 10G/1G front end and SPIx interface, connected across a passive copper backplane to the 1st through Nth switch fabrics plus an N+1 redundant switch fabric.]
Available Technology Today:
Zoom to Front-end
[Same N+1 system diagram as the previous slide, highlighting the 10G/1G front end blocks on each line card.]
Available Technology Today:
Front-end
• Copper
– RJ45
– RJ21 (mini … carries 6 ports)
• Fiber
– XFP and variants (10Gbps)
– SFP and variants (1Gbps)
– XENPAK
– LC/SC bulkhead for WAN modules
Available Technology Today:
Front-end System Interfaces
• TBI
– 10bit Interface. Max speed 3.125Gbps.
• SPI-4 / SXI
– System Packet Interface. 16bit interface. Max speed 11Gbps.
• SPI-5
– System Packet Interface. 16bit interface. Max speed 50Gbps.
• XFI
– 10Gbps Serial Interface.
Available Technology Today:
Front-end Pipe Diameter
• 1Gbps
– 1Gbps doesn't handle a lot of data anymore.
– Non-standard parallel also available based on OIF VSR.
• 10Gbps LAN/WAN or OC-192
– As port density increases, using 10Gbps as an upstream pipe will no longer be effective.
• 40Gbps OC-768
– Not effective port density in an asynchronous system.
– Optics cost close to 30 times 10Gbps Ethernet.
Available Technology Today:
Front-end Distance Requirements
• x00 m (MMF)
– SONET/SDH (Parallel): OIF VSR-4, VSR-5
– Ethernet: 10GBASE-SR, 10GBASE-LX4, 10GBASE-LRM
• 2-10 km
– SONET/SDH: OC-192/STM-64 SR-1/I-64.1, OC-768/STM-256 VSR2000-3R2/etc.
– Ethernet: 10GBASE-LR
• ~40 km
– SONET/SDH: OC-192/STM-64 IR-2/S-64.2, OC-768/STM-256
– Ethernet: 10GBASE-ER
• ~100 km
– SONET/SDH: OC-192/STM-64 LR-2/L-64.2, OC-768/STM-256
– Ethernet: 10GBASE-ZR
• DWDM
– OTN: ITU G.709 OTU-2, OTU-3
• Assertion
– Each of these applications must be solved for ultra high data rate interfaces.
Available Technology Today:
Increasing Pipe Diameter
• 1Gbps LAN by 10 links parallel
• 10Gbps LAN by x-links WDM
• 10Gbps LAN by x physical links
• Multiple OC-192 or OC-768 Channels
Available Technology Today:
Zoom to Back Plane
[Same N+1 system diagram as before, highlighting the passive copper backplane between the line cards and the switch fabrics.]
Available Technology Today:
Back Plane
[Chassis diagram: line cards (GbE / 10 GbE), RPMs, SFMs, and power supplies plug into the backplane; SERDES carry the data packet over the backplane traces.]
Available Technology Today:
Making a Back Plane
Simple! It's just multiple sheets of glass with copper traces and copper planes added for electrical connections.
Available Technology Today:
Back Plane Pipe Diameter
• 1.25Gbps
– Used in systems with five to ten year old technology.
• 2.5Gbps/3.125Gbps
– Used in systems with five year old or less technology.
• 5Gbps/6.25Gbps
– Used within the last 12 months.
Available Technology Today:
Increasing Pipe Diameter
• Can't WDM copper
• 10.3Gbps/12.5Gbps
– Not largely deployed at this time.
• Increasing the pipe diameter on a back plane with assigned slot pins can only be done by changing the glass construction.
Available Technology Today:
Pipe Diameter is NOT Flexible
Once the pipe is designed and built to a certain pipe speed, making the pipe faster is extremely difficult, if not impossible.
[Figure: measured SDD21 for 3-connector channels in N4000-13 across several via/trace geometries (5_5_20_10, 3_3_20_7, 3_3_15_10, 3_3_15_7, 5_5_10_10, 3_3_10_7, 5_5_3_10, 3_3_3_7), plus old and new ad hoc limit lines; 0 to -75 dB over 0-15000 MHz.]
Available Technology Today:
Gbits Density per Slot with Front End and Back Plane Interfaces Combined
Year System Introduced / Slot density:
• 2000: 40Gbps
• 2004: 60Gbps
• 2006/7 (in design now): 120Gbps
Based on max back plane thickness of 300mils, 20TX and 20RX differential pipes.
Feasible Technology for 2009:
Defining the Next Generation
• The overall network architecture for next generation ultra high (100, 120 and 160Gbps) data rate interfaces should be similar in concept to the successful network architecture deployed today using 10Gbps and 40Gbps interfaces.
• The internal node architectures for ultra high (100, 120 and 160Gbps) data rate interfaces should follow similar concepts in use for 10Gbps and 40Gbps interfaces.
• All new concepts need to be examined, but there are major advantages to scaling current methods with new technology.
Feasible Technology for 2009:
Front-end Pipe Diameter
• 80Gbps … not enough Return On Investment
• 100Gbps
• 120Gbps
• 160Gbps
• Reasonable Channel Widths
– 10λ by 10-16 Gbps
– 8λ by 12.5-20 Gbps
– 4λ by 25-40 Gbps
– 1λ by 100-160 Gbps
• Suggest starting at an achievable channel width while pursuing a timeline to optimize the width in terms of density, power, feasibility, and cost - depending on optical interface application/reach.
Feasible Technology for 2009:
Front-end Distance Requirements
• x00 m (MMF)
– SONET/SDH: OC-3072/STM-1024 VSR
– Ethernet: 100GBASE-S
• 2-10 km
– SONET/SDH: OC-3072/STM-1024 SR
– Ethernet: 100GBASE-L
• ~40 km
– SONET/SDH: OC-3072/STM-1024 IR-2
– Ethernet: 100GBASE-E
• ~100 km
– SONET/SDH: OC-3072/STM-1024 LR-2
– Ethernet: 100GBASE-Z
• DWDM (OTN)
– SONET/SDH: Mapping of OC-3072/STM-1024
– Ethernet: Mapping of 100GBASE
• Assertion
– These optical interfaces are defined today at the lower speeds. It is highly likely that industry will want these same interface specifications for the ultra high speeds.
– Optical interfaces, with the exception of VSR, are not typically defined in OIF. In order to specify the system level electrical interfaces, some idea of what industry will do with the optical interface has to be discussed. It is not the intent of this presentation to launch these optical interface efforts within OIF.
Feasible Technology for 2009:
Front-end System Interfaces
• Reasonable Channel Widths (SPI-?)
– 16 lane by 6.25-10Gbps
– 10 lane by 10-16Gbps
– 8 lane by 12.5-20Gbps
– 5 lane by 20-32Gbps
– 4 lane by 25-40Gbps
• Port Density is impacted by channel width. Fewer lanes translates to higher Port Density and less power.
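To make the lane arithmetic concrete, here is a minimal sketch (an illustrative check only, assuming the aggregate target is 100Gbps at the low end of each per-lane range):

```python
# Minimal check: each proposed channel width reaches 100 Gbps
# aggregate at the low end of its per-lane rate range.
widths = {  # lanes -> Gbps per lane (low end of the range above)
    16: 6.25,
    10: 10.0,
    8: 12.5,
    5: 20.0,
    4: 25.0,
}

for lanes, rate in sorted(widths.items(), reverse=True):
    print(f"{lanes:2d} lanes x {rate:5.2f} Gbps = {lanes * rate:.0f} Gbps")
```

Running the same numbers at the top of each range (16x10, 10x16, 8x20, 5x32, 4x40) gives the 160Gbps point used elsewhere in this deck.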
Feasible Technology for 2009:
Back Plane Pipe Diameter
• Reasonable Channel Widths
– 16 lane by 6.25-10Gbps
– 10 lane by 10-16Gbps
– 8 lane by 12.5-20Gbps
– 5 lane by 20-32Gbps
– 4 lane by 25-40Gbps
• Port Density is impacted by channel width. Fewer lanes translates to higher Port Density and less power.
Feasible Technology for 2009:
Pipe Diameter is NOT Flexible
New Back Plane designs will have to have pipes that can handle 20Gbps to 25Gbps.
[Figure: same measured SDD21 plot as before (3 connectors in N4000-13, multiple via/trace geometries and ad hoc limit lines, 0 to -75 dB over 0-15000 MHz).]
Feasible Technology for 2009:
Gbits Density per Slot with Front End and Back Plane Interfaces Combined
Year System Introduced / Slot density:
• 2000: 40Gbps
• 2004: 60Gbps
• 2006/7 (in design now): 120Gbps
• 2009: 500Gbps
Based on max back plane thickness of 300mils, 20TX and 20RX differential pipes.
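A quick sanity check on these figures, assuming the full slot bandwidth rides on the 20 TX / 20 RX differential pipes stated above:

```python
# Per-pipe rate implied by each slot-density generation, assuming
# 20 differential pipes in each direction per slot.
slot_density_gbps = {2000: 40, 2004: 60, 2007: 120, 2009: 500}
pipes_per_direction = 20

for year, gbps in slot_density_gbps.items():
    print(f"{year}: {gbps:3d} Gbps/slot -> "
          f"{gbps / pipes_per_direction:5.2f} Gbps per pipe")
```

The 2009 row lands at 25Gbps per differential pipe, which matches the 20Gbps-to-25Gbps back plane requirement argued on the previous slide.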
Feasible Technology for 2009:
100Gbps Options
[Matrix slide mapping optical widths (10λ by 10, 8λ by 12.5, 4λ by 25, 1λ by 100) against chip-to-chip interconnect widths (16 lane by 6.25, 10 lane by 10, 8 lane by 12.5, 5 lane by 20, 4 lane by 25) at each device in the path: optics, Framer or MAC, NPU, and mux to the Back Plane. X marks show possible conversion or mux points to a more efficient lane width as required.]
Assessments noted on the matrix:
• Not efficient design; mux to 5Lane (or 8Lane) for better back plane efficiency
• More efficient, but not usual multiples; power ???
• Scalable; power ???, back plane efficiency ???
• Efficient and scalable; needs power feasibility study
• Efficient and scalable
Path legend: Possible Path / Best Path / Scalable Path based on ASIC technology / Possible Path but not efficient.
Bit rate shown above is based on 100Gbps. Scale the bit rate accordingly to achieve 160Gbps.
The Push for Standards:
Interplay Between the OIF & IEEE
• OIF defines multi-source agreements within the Telecom Industry.
– Optics and EDC for LAN/WAN
– SERDES definition
– Channel models and simulation tools
• IEEE 802 covers LAN/MAN Ethernet
– 802.1 and 802.3 define Ethernet over copper cables, fiber cables, and back planes.
– 802.3 leverages efforts from OIF.
• Membership in both bodies is important for developing next generation standards.
The Push for Standards:
OIF
• Force10 Labs introduced three efforts within OIF to drive 100Gbps to 160Gbps connectivity.
– Two interfaces for interconnecting optics, ASICs, and backplanes.
– A 25Gbps SERDES
– Updates of design criteria to the Systems User Group
Case Study: Standards Process
P802.3ah – Nov 2000 / Sept 2004
• Call for Interest: by a member of 802.3; 50% WG vote
• Study Group: open participation; 75% WG PAR vote, 50% EC & Stds Bd
• Task Force: open participation; 75% WG vote
• Working Group Ballot: members of 802.3; 75% WG ballot, EC approval
• Sponsor Ballot: public ballot group; 75% of ballot group
• Standards Board Approval: RevCom & Stds Board; 50% vote
• Publication: IEEE Staff, project leaders
Case Study: Standards Process
10GBASE_LRM: 2003 / 2006
Optical Power Budget (OMA):
• Launch power (min): -4.5 dBm
• 0.5 dB: Transmitter implementation
• 0.4 dB: Fiber attenuation
• 0.3 dB: RIN
• 0.2 dB: Modal noise
• 4.4 dB: TP3 TWDP and connector loss @ 99% confidence level
• 0.9 dB: Unallocated power
• Required effective receiver sensitivity: -11.2 dBm
10GBASE-LRM Innovations:
• TWDP: software reference equalizer; determines EDC penalty of transmitter
• Dual Launch: centre and MCP; maximum coverage for minimum EDC penalty
• Stress Channels: precursor, split and post-cursor; canonical tests for EDC
Time line: CFI Nov03 → Study Group Jan04 → Taskforce May04 → TF Ballot Nov04 → WG Ballot Mar05 → Sponsor Ballot Dec05 → Standard Mid-06
Case Study: Standards Process
10GBASE_LRM
Specified optical power levels (OMA), power budget starting at TP2:
• Launch power minimum: -4.5 dBm
• Attenuation (2 dB): connector losses = 1.5 dB, fiber attenuation = 0.4 dB, interaction penalty = 0.1 dB → -6.5 dBm
• Noise (0.5 dB): modal noise = 0.2 dB, RIN = 0.3 dB
• Dispersion (4.2 dB): ideal EDC power penalty, PIE_D = 4.2 dB
• Stressed receiver sensitivity: -11.2 dBm
Optical input to receiver (TP3) compliance test allocation:
• Transmit implementation allowance = 0.5 dB
• Fiber attenuation = 0.4 dB; modal noise = 0.2 dB; RIN = 0.3 dB
• TWDP and connector loss at 99th percentile: 4.4 dB
• Unallocated margin: 0.9 dB
Effective maximum unstressed 10GBASE-LRM receiver sensitivity: -11.2 dBm
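The two decompositions tie out to the same floor; a minimal check of the arithmetic (values taken from the budget above):

```python
# Check: both 10GBASE-LRM budget decompositions land on -11.2 dBm.
launch_min_dbm = -4.5

tp2_penalties_db = {        # power budget starting at TP2
    "connector losses": 1.5,
    "fiber attenuation": 0.4,
    "interaction penalty": 0.1,
    "modal noise": 0.2,
    "RIN": 0.3,
    "ideal EDC penalty (PIE_D)": 4.2,
}

tp3_penalties_db = {        # TP3 compliance-test allocation
    "transmit implementation": 0.5,
    "fiber attenuation": 0.4,
    "modal noise": 0.2,
    "RIN": 0.3,
    "TWDP + connector loss (99%)": 4.4,
    "unallocated margin": 0.9,
}

for name, pens in [("TP2 budget", tp2_penalties_db),
                   ("TP3 allocation", tp3_penalties_db)]:
    floor = launch_min_dbm - sum(pens.values())
    print(f"{name}: {floor:.1f} dBm")   # both print -11.2 dBm
```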
Case Study: Standards Process
10GBASE_T: 2002 / 2006
• Techno-babble
– 64B/65B encoding (similar to 10GBASE-R)
– LDPC(1723,2048) framing
– DSQ128 constellation mapping (PAM16 with ½ the code points removed)
– Tomlinson-Harashima precoder
• Reach
– Cat 6 up to 55 m with the caveat of meeting TIA TSB-155
– Cat 6A up to 100 m
– Cat 7 up to 100 m
– Cat 5 and 5e are not specified
• Power
– Estimates for worst case range from 10 to 15 W
– Short reach mode (30 m) has a target of sub 4 W
Case Study: Standards Process
10GBASE_T
• Noise and EMI
– Alien crosstalk has the biggest impact on UTP cabling
– Screened and/or shielded cabling has better performance
• Power
– Strong preference for copper technologies, even though higher power
– Short reach and better performance cable reduce power requirement
• Timeline
– The standard is coming… products in the market end of '06, early '07
[Timeline chart: Tutorial & CFI (Nov 2002), 1st Technical Presentation (Mar 2003), PAR, Task Force review, D1.0 (2004), 802.3 Ballot with D2.0 and D3.0 (2005), Sponsor Ballot, STD (2006).]
Birth of A Standard
It Takes About 5 Years
• Ideas from industry
• Feasibility and research
• Call for Interest (CFI) – 100 GbE EFFORT IS HERE
• Marketing / Sales potential, technical feasibility
• Study Group
• Work Group
• Drafts
• Final member vote
The Push for Standards:
IEEE
• Force10 introduces a Call for Interest (CFI) within IEEE 802 in July 2006, together with Tyco Electronics.
– Meetings will be held in the coming months to determine the CFI and the efforts required.
– We target July 2006 because of resources within IEEE.
– Joel Goergen and John D'Ambrosia will chair the CFI effort. The anchor team is composed of key contributors from Force10, Tyco, Intel, Quake, and Cisco. It has since broadened to include over 30 companies.
The Ethernet Alliance
Promoting All Ethernet IEEE Work
• Launch January 10, 2006
• Key IEEE 802 Ethernet projects include
– 100 GbE
– Backplane
– 10 GbE LRM / MMF
– 10 G Base-T
• Force10 is on the BoD, principal member
• 20 companies at launch
– Sun, Intel, Foundry, Broadcom. . .
– Now approaching 40 companies
• Opportunity for customers to speak on behalf of 100 GbE
Anatomy of a 100Gbps Solution:
Architectural Disclaimers
• There are many ways to implement a system
– This section covers two basic types.
– Issues facing 100Gbps ports are addressed in basic form.
• Channel Performance or 'Pipe Capacity' is difficult to measure
• Two Popular Chassis Heights
– 24in to 34in Height (2 or 3 Per Rack)
– 10in to 14in Height (5 to 8 Per Rack)
Anatomy of a 100Gbps Solution:
What is a SERDES?
• Device that attaches to the 'channel' or 'pipe'
• Transmitter:
– Parallel to serial
– Tap values
– Pre-emphasis
• Receiver:
– Serial to Parallel
– Clock and Data Recovery
– DFE
• Circuits are very sensitive to power noise and low Signal to Noise Ratio (SNR)
Reference: Altera
Anatomy of a 100Gbps Solution:
Interfaces that use SERDES
• TBI
– 10bit Interface. Max speed 3.125Gbps across all 10 lanes. This is a parallel interface that does not use SERDES technology.
• SPI-4 / SXI
– System Packet Interface. 16bit interface. Max speed 11Gbps. This is a parallel interface that does not use SERDES technology.
• SPI-5
– System Packet Interface. 16bit interface. Max speed 50Gbps. This uses 16 SERDES interfaces at speeds up to 3.125Gbps.
• XFI
– 10Gbps Serial Interface. This uses 1 SERDES at 10.3125Gbps.
• XAUI
– 10Gbps 4 lane Interface. This uses 4 SERDES devices at 3.125Gbps each.
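A small sketch of the lane arithmetic behind the SERDES-based interfaces above (lane counts and rates as listed; the XAUI payload figure assumes its 8B10B coding):

```python
# Lane arithmetic for the SERDES-based interfaces listed above.
# (name, serdes_lanes, gbps_per_lane)
interfaces = [
    ("SPI-5", 16, 3.125),    # 16 SERDES lanes at up to 3.125 Gbps
    ("XFI",    1, 10.3125),  # single serial lane
    ("XAUI",   4, 3.125),    # 4 lanes at 3.125 Gbps
]

for name, lanes, rate in interfaces:
    print(f"{name}: {lanes} x {rate} Gbps = {lanes * rate:.3f} Gbps raw")

# XAUI carries 10 Gbps of payload because 8B10B spends 2 of every
# 10 line bits on coding: 12.5 Gbps raw * 8/10 = 10 Gbps.
print("XAUI payload:", 4 * 3.125 * 8 / 10, "Gbps")
```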
Anatomy of a 100Gbps Solution:
Power Noise thought …
• Line Card SERDES Noise Limits
– Analog target 60mVpp ripple
– Digital target 150mVpp ripple
• Fabric SERDES Noise Limits
– Analog target 30mVpp ripple
– Digital target 100mVpp ripple
• 100Gbps interfaces won't operate well if these limits cannot be met.
Anatomy of a 100Gbps Solution:
Memory Selection
• Advanced Content-Addressable Memory (CAM)
– Goal: Less power per search
– Goal: 4 times more performance
– Goal: Enhanced flexible table management schemes
• Memories
– Replacing SRAMs with DRAMs, when performance allows, to conserve cost
– Quad Data Rate III SRAMs for speed
– SERDES based DRAMs for buffer memory
• Need to drive JEDEC for serial memories that can be easily implemented in a communication system.
– The industry is going to have to work harder to get high speed memories for Network Processing in order to reduce latency.
• Memory chips are usually the last thought! This will need to change for 100Gbps sustained performance.
Anatomy of a 100Gbps Solution:
ASIC Selection
• High Speed Interfaces
– Interfaces to MACs, Backplane, and Buffer Memory are all SERDES based. SERDES all the way. Higher gate counts with internal memories target 3.125 to 6.25 SERDES; higher speeds are difficult to design in this environment.
– SERDES used to replace parallel busing for reduced pin and gate count
• Smaller Process Geometry
– Definitely 0.09 micron or lower
– More gates (100% more gates over 0.13 micron process)
– Better performance (25% better performance)
– Lower power (1/2 the 0.13 micron process power)
– Use power optimized libraries
• Hierarchical Placement and Layout of the Chips
– Flat placement is no longer a viable option
• To achieve cost control, ASIC SERDES speed is limited to 6.25Gbps in high density applications.
Anatomy of a 100Gbps Solution:
N+1 Redundant Fabric - BP
[Diagram: line cards L1 through Ln+1, each with a front end and SPIx interface, connected across a passive copper backplane to the 1st through Nth switch fabrics plus an N+1 redundant switch fabric.]
Anatomy of a 100Gbps Solution:
N+1 Redundant Fabric – MP
[Diagram: line cards L1 through Ln+1, each with a front end and dual SPIx interfaces, connected across a passive copper midplane to the 1st through Nth switch fabrics plus an N+1 redundant switch fabric.]
Anatomy of a 100Gbps Solution:
N+1 High Speed Channel Routing
[Diagram: high speed channel routing from four line cards to the 1st, 2nd, ... Nth, and N+1 switch fabrics.]
Anatomy of a 100Gbps Solution:
A/B Redundant Fabric - BP
[Diagram: line cards, each with a front end and SPIx interfaces to both the A and B paths, connected across a passive copper backplane to redundant Fabric A and Fabric B.]
Anatomy of a 100Gbps Solution:
A/B Redundant Fabric – MP
[Diagram: line cards, each with a front end and dual SPIx interfaces per path, connected across a passive copper midplane to redundant Fabric A and Fabric B.]
Anatomy of a 100Gbps Solution:
A/B High Speed Channel Routing
[Diagram: high speed channel routing from four line cards to the A and B switch fabrics.]
Anatomy of a 100Gbps Solution:
A Quick Thought …..
• Looking at both Routing and Connector Complexity designed into the differential signaling ….
– Best Case: N+1 Fabric in a Back Plane.
– Worst Case: A/B Fabric in a Mid Plane.
• All implementations need to be examined for best possible performance over all deployed network interfaces. Manufacturability and channel (Pipe) noise are two of the bigger factors.
Anatomy of a 100Gbps Solution:
Determine Trace Lengths
• After careful review of possible Line Card, Switch Fabric, and Back Plane architectural blocks, determine the range of trace lengths that exist between a SERDES transmitter and a SERDES receiver. 30 inches or 0.75 meters total should do it.
• Several factors stem from trace length.
– Band Width
– Reflections from via and/or thru-holes
– Circuit board material
– BER
– Coding
• Keep in mind that the goal is to target one or both basic chassis dimensions.
Anatomy of a 100Gbps Solution:
Channel Model Description
• A "Channel" or "Pipe" is a high speed single-ended or differential signal connecting the SERDES transmitter to the SERDES receiver. The context of "Channel" or "Pipe" from this point is considered differential.
• Develop a channel model based on the implications of architectural choices and trace lengths.
– Identifies a clean launch route to a BGA device.
– Identifies design constraints and concerns.
– Includes practical recommendations.
– Identifies channel Bandwidth.
Anatomy of a 100Gbps Solution:
Channel Simulation Model
[Diagram: transmitter (XMTR) at TP1 drives FR4+ trace, a connector plug/jack pair, the FR4+ back plane channel, a second jack/plug pair, FR4+ trace, and a DC block (modeled as an equivalent cap circuit) into the receiver's RCV filter and slicer on the line card, observed at TP4 and TP5 (informative). TP2 and TP3 are not used.]
Anatomy of a 100Gbps Solution:
Channel: Back Plane
[Cross-section: a 24mil signal trace connecting press-fit connector pins (13mil drills, 24mil pads) across the back plane, passing power plane and digital GND layers with clearance pads; via stubs may be back drilled.]
Shows signal trace connecting pins on separate connectors across a back plane.
Anatomy of a 100Gbps Solution:
Channel: Line Card Receiver
[Cross-section: a 24mil trace from the back plane connector through 13mil drill vias (power and DGND layers, optionally back drilled) and a blocking capacitor (24milx32mil pads) to a 21mil BGA pad.]
Shows a signal trace connecting the back plane to a SERDES in a Ball Grid Array (BGA) package.
Channel Model Definition:
Back Plane Band Width
How do we evaluate the signal speed that can be placed on a channel?
• 2GHz to 3GHz Band Width
– Supports 2.5Gbps NRZ – 8B10B
• 2GHz to 4GHz Band Width
– Supports 3.125Gbps NRZ – 8B10B
• 2GHz to 5GHz Band Width (4GHz low FEXT)
– Supports 6.25Gbps PAM4
– Supports 3.125Gbps NRZ – 8B10B or Scrambling
• 2GHz to 6.5GHz
– Supports 6.25Gbps NRZ – 8B10B
– Limited Scrambling Algorithms
• 2GHz to 7.5GHz
– Supports 12Gbps
– Limited Scrambling Algorithms
• 2GHz to 9GHz
– Supports 25Gbps multi-level
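As a rough cross-check of these pairings, a minimal sketch (assumptions: the usable band must reach the Nyquist frequency, i.e. half the symbol rate; 8B10B inflates the line rate by 10/8; PAM4 carries 2 bits per symbol):

```python
# Rough Nyquist check: required bandwidth = half the baud rate.
def nyquist_ghz(payload_gbps, coding_overhead=1.0, bits_per_symbol=1):
    line_rate = payload_gbps * coding_overhead   # Gbps on the wire
    baud = line_rate / bits_per_symbol           # Gbaud
    return baud / 2                              # GHz

print(nyquist_ghz(2.5,  10 / 8))                      # ~1.56 GHz, 2.5G NRZ 8B10B
print(nyquist_ghz(6.25, 10 / 8))                      # ~3.91 GHz, 6.25G NRZ 8B10B
print(nyquist_ghz(6.25, 10 / 8, bits_per_symbol=2))   # ~1.95 GHz, 6.25G PAM4
```

The fundamentals land inside the quoted bands, with the remaining band providing margin for spectral content above the Nyquist frequency.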
Channel Model Definition –
IEEE 802.3ae XAUI Limit
• b1 = 6.5e-6
• b2 = 2.0e-10
• b3 = 3.30e-20
• SDD21 = -20*log10(e)*(b1*sqrt(f) + b2*f + b3*f^2)
• f = 50MHz to 15000MHz
Channel Model Definition –
IEEE 802.3ap A(min)
• b1 = 2.25e-5
• b2 = 1.20e-10
• b3 = 3.50e-20
• b4 = 1.25e-30
• SDD21 = -20*log10(e)*(b1*sqrt(f) + b2*f + b3*f^2 - b4*f^3)
• f = 50MHz to 15000MHz
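A minimal sketch that evaluates both limit lines exactly as written above (frequency in Hz, result in dB; coefficient names match the slides):

```python
import math

def sdd21_limit_db(f_hz, b1, b2, b3, b4=0.0):
    """Evaluate an SDD21 limit line of the form used above (negative dB)."""
    return -20 * math.log10(math.e) * (
        b1 * math.sqrt(f_hz) + b2 * f_hz + b3 * f_hz**2 - b4 * f_hz**3
    )

XAUI = dict(b1=6.5e-6, b2=2.0e-10, b3=3.30e-20)
AP_AMIN = dict(b1=2.25e-5, b2=1.20e-10, b3=3.50e-20, b4=1.25e-30)

for f_mhz in (50, 1000, 5000, 15000):
    f = f_mhz * 1e6
    print(f"{f_mhz:5d} MHz: XAUI {sdd21_limit_db(f, **XAUI):7.2f} dB, "
          f"802.3ap A(min) {sdd21_limit_db(f, **AP_AMIN):7.2f} dB")
```

At 5 GHz this gives roughly -20 dB for the XAUI line and -25 dB for the 802.3ap A(min) line, consistent with the plotted limit lines on the next slide.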
Anatomy of a 100Gbps Solution:
Channel Model Limit Lines
[Figure: measured SDD21/SDD11/SDD22 for channel CH12 AGGR2 in N4000-13, plotted against the XAUI limit and the IEEE P802.3ap limit; 0 to -75 dB over 0-15000 MHz.]
Anatomy of a 100Gbps Solution:
Comments on Limit Lines
• IEEE802.3ae XAUI is a 5-year-old channel model limit line.
• The IEEE P802.3ap channel model limit is based on a mathematical representation of improved FR-4 material properties and closely matches "real life" channels. This type of modeling will be essential for 100Gbps interfaces.
• A real channel is shown with typical design violations common in the days of XAUI. Attention to specific design techniques in the channel launch conditions can eliminate the violations to the defined channel limits.
Anatomy of a 100Gbps Solution:
Receiver Conditions – Case 1
[Cross-section, Case 1: route from the back plane trace through 13mil drill vias and the blocking capacitor (24mil traces, 24milx32mil pads, 34mil anti pads, 12mil dogbone) to the 21mil BGA pad; TP4/TP5 informative.]
Anatomy of a 100Gbps Solution:
Constraints & Concerns – Case 1
• Poor Signal Integrity – SDD11/22/21
• Standard CAD Approach
• Easiest / Lowest Cost to Implement
• Approach will not have the required performance for SERDES implementations used in 100Gbps interfaces.
Anatomy of a 100Gbps Solution:
Receiver Conditions – Case 4
[Cross-section, Case 4: the same route with two vias eliminated: back plane trace through a single 13mil drill via and dogbone to the 21mil BGA pad (24mil traces, 24milx32mil pads, 34mil anti pad); TP4/TP5 informative.]
Anatomy of a 100Gbps Solution:
Constraints & Concerns – Case 4
• Ideal Signal Integrity
– Eliminates two VIAS
– Increases pad impedance to reduce SDD11/22
• High speed BGA pins must reside on the outer pin rows
• Crosstalk to traces routed under the open ground pad is an issue for both the BGA and the Capacitor footprint
• Requires 50mil pitch BGA packaging to avoid ground plane isolation on the ground layer under the BGA pads
• Potential to require additional routing layer
Available Technology Today:
Remember this Slide ?
Circuit board material is just multiple sheets of glass with copper traces and copper planes added for electrical connections.
Anatomy of a 100Gbps Solution:
Channel Design Considerations
• Circuit Board Material Selection is Based on the Following:
– Temperature and Humidity effects on Df (Dissipation Factor) and Dk (Dielectric Constant)
– Required mounting holes for mother-card mounting, shock and vibration
– Required number of times a chip or connector can be replaced
– Required number of times a pin can be replaced on a back plane
– Aspect ratio (Drilled hole size to board thickness)
– Power plane copper weight
– Coding / Signaling scheme
Anatomy of a 100Gbps Solution:
Materials in Perspective
[Chart: Dielectric Constant vs Dissipation Factor (Loss Tangent) for laminate families: IS402, N4000-2, FR-4/E; Modified FR-4 (IS620); Mod Epoxy/E (N4000-13/FR408); PPO-Epoxy/E (GETEK); CE-Epoxy/E; CE/E; PTFE/E; CE/PTFE; Epoxy/NE (N4000-13SI); PPO-Epoxy/NE (GETEK-IIX, Megtron-5); CE-Epoxy/NE. Most materials are flat from 100Hz - 2GHz. Graph provided by Zhi Wong zwong@altera.com.]
in Reference to IEEE P802.3ap

Improved FR-4 (Mid Resolution Signal Integrity):
–
–
–
–
–
–

100Mhz: Dk ≤ 3.60; Df ≤ .0092
1Ghz: Dk ≤ 3.60; Df ≤ .0092
2Ghz: Dk ≤ 3.50; Df ≤ .0115
5Ghz: Dk ≤ 3.50; Df ≤ .0115
10Ghz: Dk ≤ 3.40; Df ≤ .0125
20Ghz: Dk ≤ 3.20; Df ≤ .0140
Temperature and Humidity Tolerance
(0-55degC, 10-90% non-condensing):
– Dk:+/- .04
– Df: +/- .001

Resin Tolerance (standard +/-2%):
– Dk:+/- .02
– Df: +/- .0005
71
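To see what these Df numbers buy, a rough sketch using the common dielectric-loss approximation (alpha ≈ 2.3 · f[GHz] · Df · sqrt(Dk) dB/inch; the constant folds in pi/c and the neper-to-dB conversion, and this ignores copper loss and connectors):

```python
import math

def dielectric_loss_db_per_inch(f_ghz, dk, df):
    """Rough dielectric loss: 2.3 * f[GHz] * Df * sqrt(Dk) dB/inch."""
    return 2.3 * f_ghz * df * math.sqrt(dk)

# Improved FR-4 numbers from the slide above.
for f_ghz, dk, df in [(1, 3.60, 0.0092), (5, 3.50, 0.0115), (10, 3.40, 0.0125)]:
    loss = dielectric_loss_db_per_inch(f_ghz, dk, df)
    print(f"{f_ghz:2d} GHz: {loss:.3f} dB/in -> {loss * 30:.1f} dB over a 30in channel")
```

At 5 GHz this is roughly 0.25 dB/inch, or about 7 dB of dielectric loss alone over the 30 inch trace-length budget discussed earlier, before copper and connector losses are added.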
Anatomy of a 100Gbps Solution:
Channel or Pipe Considerations
• Channel Constraints Include the Following:
– Return Loss
– Thru-hole reflections
– Routing reflections
– Insertion Loss based on material category
– Insertion Loss based on length to first reflection point
– Define coding and baud rate based on material category
– Connector hole crosstalk
– Trace to trace crosstalk
– DC blocking Capacitor at the SERDES to avoid shorting DC between cards
– Temperature and Humidity losses/expectations based on material category
Channel Model Starting Point:
Materials in Perspective
[Same Dielectric Constant vs Dissipation Factor (Loss Tangent) chart as before, with a Target Area marked among the lower-loss laminate families (Epoxy/NE, PPO-Epoxy/NE, CE-Epoxy/NE). Graph provided by Zhi Wong zwong@altera.com.]
Channel Model Starting Point:
Real Channels
[Figure: measured SDD21 for 2-connector channels in N4000-13 (10_20_10, 10_15_10, and 10_10_10 geometries), plotted against the proposed P802.3ap SDD21 limit and the Starting Point Jan06 SDD21; 0 to -75 dB over 0-15000 MHz.]
Channel Model Starting Point:
Equation
• b1 = 1.25e-5
• b2 = 1.20e-10
• b3 = 2.50e-20
• b4 = 0.95e-30
• SDD21 = -20*log10(e)*(b1*sqrt(f) + b2*f + b3*f^2 - b4*f^3)
• f = 50MHz to 15000MHz
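The same evaluator shape as the earlier limit lines applies; a standalone sketch with the Jan06 starting-point coefficients:

```python
import math

b1, b2, b3, b4 = 1.25e-5, 1.20e-10, 2.50e-20, 0.95e-30

def starting_point_sdd21_db(f_hz):
    # Jan06 starting-point limit line, in dB (negative = loss).
    return -20 * math.log10(math.e) * (
        b1 * math.sqrt(f_hz) + b2 * f_hz + b3 * f_hz**2 - b4 * f_hz**3
    )

for f_mhz in (1000, 5000, 10000, 15000):
    print(f"{f_mhz:5d} MHz: {starting_point_sdd21_db(f_mhz * 1e6):7.2f} dB")
```

As expected for an improved-material starting point, it allows less loss at a given frequency than the 802.3ap A(min) line (about -17 dB vs -25 dB at 5 GHz).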
Anatomy of a 100Gbps Solution:
Channel Design Considerations
• Channel BER
– Data transmitted across the back plane channel is usually done in a frame with header and payload
– The frame size can be anywhere from a few hundred bytes to 16Kbytes, typical
– A typical frame contains many PHY-layer packets
– BER of 10E-12 will result in a frame error of 10E-7 or less, depending on distribution
– That is a lot of frame loss
Anatomy of a 100Gbps Solution:
Channel Design Considerations
• Channel BER
– Customers want to see a frame loss of zero
– Systems architects want to see a frame loss of zero
– Zero error is difficult to test and verify … none of us will live that long
– The BER goal should be 10E-15
– It can be tested and verified at the system design level
– Simulate to 10E-17
– Any frame loss beyond that will have minimal effect on current packet handling/processing algorithms
– Current SERDES do not support this. Effective 10E-15 is obtained by both power noise control and channel model loss
• This will be tough to get through, but without this tight requirement, 100Gbps interfaces will need to run faster by 3% to 7%. Or worse, pay a latency penalty for using FEC or DFE.
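To put numbers on both BER slides, a small sketch (assumptions: a 16Kbyte frame, a 100Gbps link, and the small-BER approximation FER ≈ BER × bits per frame):

```python
# Frame loss and test time implied by the BER targets above.
frame_bits = 16 * 1024 * 8          # 16 Kbyte frame = 131072 bits
link_gbps = 100

for ber in (1e-12, 1e-15):
    fer = ber * frame_bits          # approximation, valid for small BER
    # Bits (hence seconds) needed to expect a single error at this BER:
    seconds_per_error = (1 / ber) / (link_gbps * 1e9)
    print(f"BER {ber:.0e}: frame error rate ~{fer:.1e}, "
          f"~{seconds_per_error:,.0f} s per expected error at {link_gbps} Gbps")
```

At 10E-12 this reproduces the ~10E-7 frame error figure above; at 10E-15 even a 100Gbps link takes on the order of 10,000 seconds per expected error, which is why verification is pushed to the system design level and simulation to 10E-17.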
Anatomy of a 100Gbps Solution:
Remember Interface Speeds …
• Reasonable Channel Widths for 100Gbps:
– 16 lane by 6.25Gbps *BEST
– 10 lane by 10Gbps
– 8 lane by 12.5Gbps
– 5 lane by 20Gbps
– 4 lane by 25Gbps *BEST
Anatomy of a 100Gbps Solution:
Channel Signaling Thoughts
• Channel Signaling:
– NRZ: in general, breaks down after 12.5Gbps; 8B10B, 64B66B, and scrambling are not going to work at 25Gbps
– Duo-Binary: demonstrated to 33Gbps
– PAM4 or PAMx: demonstrated to 33Gbps
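A hedged sketch of why the NRZ codings struggle at 25Gbps: the line rate, and therefore the Nyquist frequency the channel must support, grows with coding overhead, while multi-level signaling shrinks it (assumes a 25Gbps payload per lane and 2 bits/symbol for PAM4):

```python
# Baud rate and Nyquist frequency for a 25 Gbps payload lane
# under different codings / modulations.
cases = [
    ("NRZ + 8B10B",    10 / 8,  1),  # 25% coding overhead
    ("NRZ + 64B66B",   66 / 64, 1),  # ~3.1% overhead
    ("NRZ scrambled",  1.0,     1),  # no rate overhead
    ("PAM4 scrambled", 1.0,     2),  # 2 bits per symbol
]

payload_gbps = 25.0
for name, overhead, bits_per_symbol in cases:
    baud = payload_gbps * overhead / bits_per_symbol
    print(f"{name:15s}: {baud:6.3f} GBd, Nyquist {baud / 2:6.3f} GHz")
```

Against the 2GHz to 9GHz usable bands quoted earlier, NRZ at 25Gbps sits at or beyond the band edge for every coding, while the multi-level option halves the requirement.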
Anatomy of a 100Gbps Solution:
Designing for EMI Compatibility
• Treat each slot as a unique chamber
– Shielding Effectiveness determines the maximum number of 1GigE ports, 10GigE ports, or 100GigE ports before saturating emissions requirements.
– Requires top and bottom seal using honeycomb
• Seal the back plane / mid plane
– Cross-hatch chassis ground
– Chassis ground edge guard and not edge plate
– Digital ground sandwich for all signal layers
• Provide carrier mating surface
• EMI follows wave equations. Signaling spectrum must be considered.
Anatomy of a 100Gbps Solution:
Power Design
• Power Routing Architecture from Inputs to All Cards
– Bus bar
– Power board
– Cabling harness
– Distribution through the back plane / mid plane using copper foil
• Design the Input Filter for Maximum Insertion Loss and Return Loss
– Protects your own equipment
– Protects all equipment on the power circuit
• Design Current Flow Paths for 15degC Max Rise, 5degC Typical
• Design all Distribution Thru-holes to Support 200% Loading at 60degC
– Provides for the case when the incorrect drill size is selected in the drilling machine and escapes computer comparison. Unlikely case but required in carrier applications
• Power Follows Ohm's Law. It Cannot Be Increased without Major Changes or Serious Thermal Concerns
Summary
• Industry has been successful scaling speed since 10Mbps in 1983.
• The efforts in 1GigE and 10GigE have taught us many aspects of interfaces and interface technology.
• 100Gbps and 160Gbps success will depend on useable chip and optics interfaces.
• Significant effort is underway in both IEEE and OIF to define and invent interfaces to support the next generation speeds.
• Systems designers will need to address many new issues to support 100Gbps port densities of 56 or more per box.
Thank You
Backup Slides
• The following slides provide additional detail to support information provided within the base presentation.
Acronym Cheat Sheet
• CDR – Clock and Data Recovery
• CEI – Common Electrical Interface
• CGND / DGND – Chassis Ground / Digital Ground
• EDC – Electronic Dispersion Compensation
• MAC – Media Access Control
• MDNEXT / MDFEXT – Multi Disturber Near / Far End Cross Talk
• MSA – Multi Source Agreement
• NEXT / FEXT – Near / Far End Cross Talk
• OIF – Optical Internetworking Forum
• PLL – Phase Locked Loop
• SERDES – Serialize / De-serialize
• SFI – SERDES Framer Interface
• SMF / MMF – Single Mode Fiber / Multi Mode Fiber
• XAUI – 10Gig Attachment Unit Interface
Anatomy of a 100Gbps Solution:
Basic Line Card Architecture
[Block diagram: PMD → PHY/Framer → Network Processor → Fabric Interface, with a non-"wire speed" µP running protocol stacks, APPs, and applications.]
Anatomy of a 100Gbps Solution:
Basic Line Card Architecture 1
[Layout: optical or copper media with media SERDES at the front edge; forwarding engine and network processor mid-card, each with its own SERDES; separate SERDES at the backplane edge; one region reserved for power.]
• Architecture:
– Long trace lengths.
– Poor power noise control means worse than...
– Analog target 60mVpp ripple
– Digital target 150mVpp ripple
– Poor SERDES to connector signal flow will maximize ground noise.
– This layout is not a good choice for 100Gbps.
Anatomy of a 100Gbps Solution:
Basic Line Card Architecture 2
[Layout: optical or copper media, forwarding engine, and network processor in line, with the SERDES grouped in columns directly at the backplane connector; one region reserved for power.]
• Architecture:
– Clean trace routing.
– Good power noise control means better than...
– Analog target 60mVpp ripple
– Digital target 150mVpp ripple
– Excellent SERDES to connector signal flow to minimize ground noise.
– Best choice for 100Gbps systems.
Anatomy of a 100Gbps Solution:
Basic Line Card Architecture 3
[Layout: forwarding engine and network processor with a SERDES column at the midplane connector; the optical or copper media sit on the far side of the midplane; one region reserved for power.]
• Architecture:
– Clean trace routing.
– Good power noise control means better than...
– Analog target 60mVpp ripple
– Digital target 150mVpp ripple
– Difficult SERDES to connector signal flow because of the Mid Plane.
– This layout is not a good choice for 100Gbps.
Anatomy of a 100Gbps Solution:
Basic Switch Fabric Architecture
[Block diagram: line card interfaces on both sides of a digital or analog X bar, managed by a non-"wire speed" µP.]
Anatomy of a 100Gbps Solution:
Basic Switch Fabric Architecture 1
[Layout: SERDES blocks placed away from the connectors, digital cross bar mid-card; one region reserved for power.]
• Architecture:
– Long trace lengths.
– Poor power noise control means worse than...
– Analog target 30mVpp ripple
– Digital target 100mVpp ripple
– Poor SERDES to connector signal flow will maximize ground noise.
– This layout is not a good choice for 100Gbps.
Anatomy of a 100Gbps Solution:
Basic Switch Fabric Architecture 2
[Layout: digital cross bar mid-card with SERDES columns directly at the connectors on both edges; one region reserved for power.]
• Architecture:
– Clean trace routing.
– Good power noise control means better than...
– Analog target 30mVpp ripple
– Digital target 100mVpp ripple
– Excellent SERDES to connector signal flow to minimize ground noise.
Anatomy of a 100Gbps Solution:
Basic Switch Fabric Architecture 3
[Layout: analog cross bar mid-card, no SERDES; one region reserved for power.]
• Architecture:
– Clean trace routing.
– Good power noise control means better than...
– Analog target 30mVpp ripple
– Digital target 100mVpp ripple
– Excellent SERDES to connector signal flow to minimize ground noise.
– True Analog Fabric is not used anymore.
Anatomy of a 100Gbps Solution:
Back Plane or Mid Plane
• Redundancy
– N+1 Fabric
– A / B Fabric
• Connections
– Back Plane
– Mid Plane
Anatomy of a 100Gbps Solution:
Trace Length Combinations - Max
• 24in to 34in height (2 or 3 per rack). All dimensions in inches; each table entry is the end-to-end total (line card + back plane + switch fabric trace).

N+1 Fabric, Position: Top or Bottom of Line Cards (Back Plane = 22)
Case       | LC-1 (16) | LC-2 (6) | LC-3 (4)
SF-1 (18)  | 56        | 46       | 44
SF-2 (12)  | 50        | 40       | 38

A/B Fabric, Position: Top or Bottom of Line Cards (Back Plane = 22)
Case       | LC-1 (16) | LC-2 (6) | LC-3 (4)
SF-3 (14)  | 52        | 42       | 40

N+1 Fabric, Position: Middle of Line Cards (Back Plane = 18)
Case       | LC-1 (16) | LC-2 (6) | LC-3 (4)
SF-1 (18)  | 52        | 42       | 40
SF-2 (12)  | 46        | 36       | 34

A/B Fabric, Position: Middle of Line Cards (Back Plane = 18)
Case       | LC-1 (16) | LC-2 (6) | LC-3 (4)
SF-3 (14)  | 48        | 38       | 36
Anatomy of a 100Gbps Solution:
Trace Length Combinations - Min
• 24in to 34in height (2 or 3 per rack). All dimensions in inches; each table entry is the end-to-end total (line card + back plane + switch fabric trace).

N+1 Fabric, Position: Top or Bottom of Line Cards (Back Plane = 5)
Case       | LC-1 (16) | LC-2 (6) | LC-3 (4)
SF-1 (18)  | 39        | 29       | 27
SF-2 (12)  | 33        | 23       | 21

A/B Fabric, Position: Top or Bottom of Line Cards (Back Plane = 5)
Case       | LC-1 (16) | LC-2 (6) | LC-3 (4)
SF-3 (14)  | 35        | 25       | 23

N+1 Fabric, Position: Middle of Line Cards (Back Plane = 2)
Case       | LC-1 (16) | LC-2 (6) | LC-3 (4)
SF-1 (18)  | 36        | 26       | 24
SF-2 (12)  | 30        | 20       | 18

A/B Fabric, Position: Middle of Line Cards (Back Plane = 2)
Case       | LC-1 (16) | LC-2 (6) | LC-3 (4)
SF-3 (14)  | 32        | 22       | 20
Anatomy of a 100Gbps Solution:
Trace Length Combinations - Max
• 10in to 14in height (5 to 8 per rack). All dimensions in inches; each table entry is the end-to-end total (line card + back plane + switch fabric trace).

N+1 Fabric, Position: Top or Bottom of Line Cards (Back Plane = 17)
Case       | LC-1 (9) | LC-2 (5) | LC-3 (n/a)
SF-1 (15)  | 41       | 37       | n/a
SF-2 (9)   | 35       | 31       | n/a

A/B Fabric, Position: Top or Bottom of Line Cards (Back Plane = 17)
Case       | LC-1 (9) | LC-2 (5) | LC-3 (n/a)
SF-3 (n/a) | n/a      | n/a      | n/a

N+1 Fabric, Position: Middle of Line Cards (Back Plane = 14)
Case       | LC-1 (9) | LC-2 (5) | LC-3 (n/a)
SF-1 (15)  | 38       | 34       | n/a
SF-2 (9)   | 32       | 28       | n/a

A/B Fabric, Position: Middle of Line Cards (Back Plane = 14)
Case       | LC-1 (9) | LC-2 (5) | LC-3 (n/a)
SF-3 (n/a) | n/a      | n/a      | n/a
Anatomy of a 100Gbps Solution:
Trace Length Combinations - Min
• 10in to 14in height (5 to 8 per rack). All dimensions in inches; each table entry is the end-to-end total (line card + back plane + switch fabric trace).

N+1 Fabric, Position: Top or Bottom of Line Cards (Back Plane = 2)
Case       | LC-1 (9) | LC-2 (5) | LC-3 (n/a)
SF-1 (15)  | 26       | 22       | n/a
SF-2 (9)   | 20       | 16       | n/a

A/B Fabric, Position: Top or Bottom of Line Cards (Back Plane = 2)
Case       | LC-1 (9) | LC-2 (5) | LC-3 (n/a)
SF-3 (n/a) | n/a      | n/a      | n/a

N+1 Fabric, Position: Middle of Line Cards (Back Plane = 2)
Case       | LC-1 (9) | LC-2 (5) | LC-3 (n/a)
SF-1 (15)  | 26       | 22       | n/a
SF-2 (9)   | 20       | 16       | n/a

A/B Fabric, Position: Middle of Line Cards (Back Plane = 2)
Case       | LC-1 (9) | LC-2 (5) | LC-3 (n/a)
SF-3 (n/a) | n/a      | n/a      | n/a
Anatomy of a 100Gbps Solution:
Receiver Conditions – Case 2
[Cross-section, Case 2: back plane trace through 13mil drill vias, surface routing past the blocking capacitor (24mil traces, 24milx32mil pads, 34mil anti pad) to a 12mil dogbone at the 21mil BGA pad; TP4/TP5 informative.]
Anatomy of a 100Gbps Solution:
Constraints & Concerns – Case 2
• Crosstalk to Traces Routed Under the Open Ground Pad is an Issue
• Allows Good Pin Escape from the BGA
• Poor Signal Integrity - Has High SDD11/22/21 at the BGA
• Potential to Require Additional Routing Layer
• Approach will not have the required performance for SERDES implementations used in 100Gbps interfaces.
Anatomy of a 100Gbps Solution:
Receiver Conditions – Case 3
[Cross-section, Case 3: back plane trace through 13mil drill vias and the blocking capacitor, with an extra via pair in the break-out to reach inner BGA pads (24mil traces, 24milx32mil pads, 34mil anti pad, 12mil dogbone, 21mil BGA); TP4/TP5 informative.]
Anatomy of a 100Gbps Solution:
Constraints & Concerns – Case 3
• Allows for Inner High Speed Pad Usage within the BGA
• Medium Poor Signal Integrity – SDD11/22/21. Has the Extra VIAS to Contend with in the Break Out
– Increases pad impedance to reduce SDD11/22
• Crosstalk to Traces Routed Under the Open Ground Pad is an Issue for Both the BGA and the Capacitor Footprint
• Allows Good Pin Escape from the BGA
• Potential to Require Additional Routing Layer
• Requires 50mil Pitch BGA Packaging to Avoid Ground Plane Isolation on the Ground Layer Under the BGA Pads
Anatomy of a 100Gbps Solution:
Stack-up Detail
[24-layer back plane stack-up table, N4000-13 material throughout, 1 oz. Cu on every layer. Outer layers L01/L24 are pads-only foil-and-plating layers under mask. Eight high speed signal layers (HS1-HS8) are each sandwiched between GND layers, with four power Plane layers mid-stack. Cores are 2x1080 glass (resin content 58.4%); prepregs are 2x1080 (rc 65%) or 1x1080/2x106/1x1080 (rc 65%/75%/65%). Specified per-layer thicknesses (mils) total 202.1 over plating and mask, 200.7 after copper plating, and 194.1 of finished dielectric. Impedance targets: outer layers 98ohm+-4 differential / 56ohm+-5 single-ended at 10on10 geometry; inner high speed layers 100ohm+-3 differential / 51ohm+-3 single-ended at 6on14.]
Requirements to Consider when
Increasing Channel Speed
• Signaling Scheme vs Available Bandwidth
• NEXT/FEXT Margins
• Average Power Noise as Seen by the Receive Slicing Circuit and the PLL
• Insertion Loss (SDD21) Limits