EIBONE Working Group
Transmission Technologies
White Paper
100 Gbit/s Ethernet
April 2008
P. M. Krummrich, Technische Universität Dortmund
W. Weiershausen, Deutsche Telekom
J.-P. Elbers, Ericsson
J. Leuthold, University of Karlsruhe
R.-P. Braun, Deutsche Telekom
M. Düser, Deutsche Telekom
R. Ries, Deutsche Telekom
H. Haunstein, Alcatel-Lucent
E.-D. Schmidt, Nokia Siemens Networks
A. Mattheus, Deutsche Telekom
H. Richter, Deutsche Telekom
A. Richter, VPIsystems
S. Vorbeck, Deutsche Telekom
M. Schneiders, Deutsche Telekom
N. Hanik, Technical University of Munich
W. Rosenkranz, University of Kiel
H. Bülow, Alcatel-Lucent
C. Schäffer, Technical University of Dresden
E. Lach, Alcatel-Lucent
G. Veith, Alcatel-Lucent
K. Schuh, Alcatel-Lucent
H.-G. Bach, FhG HHI
K.-O. Velthaus, FhG HHI
C. Schubert, FhG HHI
Executive Summary
In the framework of the BMBF EIBONE project, a working group on optical transmission
technology has been established. Several partners from various sub-projects have identified
100 Gbit/s Ethernet (100 GbE) as one of the core technologies for future optical networks.
This white paper has been compiled to convey a shared view of the partners in the project. It
contains a brief introduction with a discussion of the motivation to develop 100 GbE and a
description of standardization activities. The core part of the paper deals with concepts and
options for the realization of 100 GbE. In the last section, a summary, conclusions, and
recommendations for future activities are given.
The main motivation for the development of 100 GbE stems from the predicted growth of
bandwidth demand in data transport networks in the coming years. Due to the lack of efficient
packet based link aggregation mechanisms in the Ethernet protocol, the capacity of router
and Ethernet switch interface ports has to be increased to accommodate the growing data
traffic. As soon as 100 GbE interfaces are deployed in edge and core routers, transport
technologies will be required for the interconnection of these routers via cost efficient metro
and long haul networks.
Standardization activities concerning 100 GbE are currently underway. The IEEE Higher
Speed Study Group (HSSG) as well as the ITU-T Study Group 15 are working on solutions for
a higher data-rate Ethernet. However, serious efforts are still required to evaluate the
proposed alternatives from a scientific and technological point of view as well as from the
system development and, finally, the economic perspective. In particular, the most appropriate
data rate for the transport layer is currently a debated topic.
From a physical layer or transmission perspective, several options exist for short reach as
well as longer reach transport in metro and backbone networks. In the short reach case,
comprising intra-office connections between switches and routers, inverse multiplexing may
provide a viable option. The 100 GbE signal is split and transmitted using multiple coarsely
spaced wavelengths (CWDM) in a single fiber with lower bit rate per wavelength. Due to the
higher importance of fiber cost and more stringent bandwidth efficiency requirements in metro
and backbone networks, multiple wavelength transmission of a 100 GbE signal is not
desirable in long haul applications. However, the most suitable modulation format and
transmission technology for single wavelength transmission still has to be identified.
In the area of optical components several promising results have been achieved. Photodiodes
as well as receivers could be demonstrated with sufficient bandwidth for the detection of serial
binary 100 Gbit/s data streams. Devices for the electrical multiplexing and demultiplexing of
these signals have also been realized. Major improvements are still required in the field of
optical modulator bandwidth and the packaging and integration of components with sufficient bandwidth.
On a subsystem level, integration for reduction of size, power consumption and cost is a task
still in its infancy. Major efforts have to be spent in this area. Signal processing functions such
as equalization and forward error correction already implemented at lower bitrates stretch the
limits of currently available electronic processing capabilities. Finding a compromise between
speed limits and complexity or the number of required gates will be a demanding endeavour.
In conclusion, several intermediate results concerning potential solutions for 100 GbE
transport have been achieved, while the implementation of product grade solutions still
requires substantial research and pre-development work before the actual product
development can start.
Table of contents
Introduction and motivation
  What do we need 100 GbE for, what are the drivers?
  Short overview of Ethernet, historical perspective
    Ethernet – a brief history
    The Ethernet physical layer
    The Ethernet frame structure
  Techno-economical boundary conditions, retrospective concerning developments in TDM and data networks
Standardisation considerations
  Standards for networks with optical transport interfaces
    IEEE Ethernet
    International Telecommunications Union
  Ethernet transport
    The IEEE 802.3 standard
    IP/Ethernet over WDM
    Ethernet over SDH/SONET/OTN
    Carrier Grade Ethernet
  Evolution of high-speed interfaces for optical transmission
Concepts and implementation options
  State of the art in general, resulting transmission and data transport
  Dominating limiting effects of the transmission link
    Limitations due to PMD impairments
    Limitation of transmission distance due to the temperature dependence of chromatic dispersion
  Modulation formats – advantages and disadvantages
  Equalization, compensation and mitigation of signal distortions resulting from transmission
    Optical equalization
    Electronic equalization
  Electronics and subsystems
    ETDM multiplexer
    Modulator driver amplifier
    Electronic demultiplexers
    Clock recovery
    Subsystems and integrated ETDM electronics
  Opto-electrical components
    Optical modulators
    Optical detectors and receivers for serial transmission at 100 Gbit/s
  Overview of reports on transmission experiments
    OTDM/ETDM experiments
    Fully ETDM experiments
Summary, conclusions and recommendations
1 Introduction and motivation
Optical transmission technologies constitute the basis of current and future communication
networks. To support the development of sustainable solutions, the German federal ministry
of education and research (BMBF) has set up the national funded project framework EIBONE.
In several EIBONE projects, partners from industry, research institutions and universities
investigate new technologies for efficient, integrated backbone networks. Across all projects,
a working group has been defined to focus on transmission aspects in the physical layer. As a
part of its activities, this working group generates white papers to define a shared view of the
partners in the EIBONE project. The group has identified 100 Gbit/s Ethernet (100 GbE) as
one of the core technologies for future optical networks.
Consequently, this white paper focuses on 100 GbE from a transmission/physical layer
perspective. In three major parts, it covers aspects of 100 GbE from a techno-economical
perspective for its introduction, over standardization considerations to concepts and
implementation considerations. It also provides directives for further research and
development needed to implement 100 GbE into optical communication networks.
1.1 What do we need 100 GbE for, what are the drivers?
The development of technologies for 100 GbE is influenced by a number of external driving
forces. Market drivers are expected from IPTV and video services. The forecast traffic
growth will be in the order of 30 – 100 % per year, even on the basis of very conservative use
cases. Broadband access will become as standard as the PSTN is today. The convergence of
historically very different services and technologies on a common packet base is considered
in the framework of the ‘Next Generation Network (NGN)’. As a consequence, still specialized
networks, e. g. for telephony and TV distribution, will migrate to a common packet platform to
efficiently carry the entire traffic generated by new and migrated IP services.
Besides pure technical scaling, economical scaling becomes a serious issue. The market
expects incumbents and service providers to handle the steadily growing transport volumes
at the same or even declining production costs to keep the Internet and online business
growing. This can only be achieved by very efficient operation and low incremental
investments on top of existing facilities and infrastructure.
Quality requirements will also rise beyond the current best-effort Internet experience. Service
specific QoS will become a standard. For example, the transport of TV signals to the
customers with well-defined QoS (e. g. bounded time delays) will require considerable effort.
Consequently, in metro and regional networks traffic loads will reach new scales, driven by
broadband market penetration, new broadband multimedia services and customer behaviour
from a market perspective, as well as by production optimization through higher aggregation
rates and new content delivery models. So-called single mode, dark fibre production at SDH,
1 GbE or 10 GbE rates won’t be adequate. The next step could be 100 GbE.
Even though content delivery of multimedia is planned to be done in the aggregation network,
total traffic growth in the core will result in link loads exceeding the capacity of 10 Gbit/s and
even 40 Gbit/s interfaces. Even worse, in cases of a high unicast rate of media streams through
the backbone, the link load in the core network will exceed 40 Gbit/s and probably even
100 Gbit/s within the next 10 years.
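The impact of such compound growth rates on interface capacity can be illustrated with a short calculation; the starting load of 10 Gbit/s below is a hypothetical example, while the 30 – 100 % growth range is the forecast cited above:

```python
def years_until(load_gbps: float, capacity_gbps: float, annual_growth: float) -> int:
    """Return the number of whole years until a link load growing at the
    given annual rate (e.g. 0.3 for 30 %) first exceeds the capacity."""
    years = 0
    while load_gbps <= capacity_gbps:
        load_gbps *= 1.0 + annual_growth
        years += 1
    return years

# With a hypothetical 10 Gbit/s starting load:
# at 30 %/year growth, 100 Gbit/s is exceeded after 9 years,
# at 100 %/year growth, after only 4 years.
```

Even at the conservative end of the forecast, a fully loaded 10 Gbit/s link crosses the 100 Gbit/s mark in about nine years; at the upper end, in only four.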
A next technological step towards this future must be started now in the suppliers’
laboratories and development units.
1.2 Short overview of Ethernet, historical perspective
The term Ethernet refers to the family of local-area network (LAN) products covered by the
IEEE 802.3 standard. Ethernet defines physical layer standards such as wiring and signaling
but is also a Media Access Control (MAC) layer standard. Other technologies and protocols
have been touted as likely replacements, but the market has spoken. Ethernet has survived
as the major LAN technology (it is currently used for approximately 85 percent of the world's
LAN-connected PCs and workstations) [1]. Meanwhile Ethernet has evolved to the extent that
operators consider using it as “Carrier Grade Ethernet” in Access and Metro networks. As a
matter of fact, the Ethernet of the early 70s bears only partial resemblance to today’s
implementation of 10 Gbit/s Ethernet and will have even less similarity to the 100 Gbit/s
Ethernet of the future. But – as pointed out by Bob Metcalfe, the inventor of the Ethernet:
“Whenever people introduced a new network, they preferred calling it ‘Ethernet’ again”. And
thus - the legend lives on.
Ethernet’s success is probably due to some of the following characteristics of the protocol:
• It is easy to understand, implement, manage, and maintain,
• allows low-cost network implementations,
• provides extensive topological flexibility for network installation,
• guarantees successful interconnection and operation of standards-compliant products, regardless of manufacturer.
1.2.1 Ethernet – a brief history
Ethernet was developed as an experimental LAN network in the 1970s by Xerox Corporation
to interconnect one of the first laser printers to other computers within a building. Its first
implementation relied on a carrier sense multiple access collision detect (CSMA/CD) protocol
for LANs with sporadic but occasionally heavy traffic requirements [1], [2]. Success with that
project attracted early attention and led to the 1980 joint development of the 10 Mbit/s
Ethernet Version 1.0 specification by the three-company consortium: Digital Equipment
Corporation, Intel Corporation, and Xerox Corporation.
There are many events that contributed to the invention of Ethernet. Yet, a couple of
important dates should be mentioned at this point:
In 1970, Norman Abramson of the University of Hawaii developed ALOHAnet. The goal of
the ALOHA network was to interconnect terminals of the University of Hawaii at campuses
located on different islands to the host computer on the main campus. The ALOHA idea is to
transmit a message as soon as it becomes available, thus producing the smallest possible
delay. From time to time frame transmissions will collide, but these can be treated as
transmission errors, and recovery can take place by retransmission after random times.
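The price of this simplicity is a standard result from the literature (not stated above, but worth recalling): for pure ALOHA the throughput S at offered load G is S = G·e^(−2G), because a frame survives only if no other frame starts within a two-frame vulnerability window. A small sketch:

```python
import math

def pure_aloha_throughput(offered_load: float) -> float:
    """Classical pure-ALOHA throughput: a frame succeeds only if no other
    frame starts within a two-frame-time vulnerability window."""
    return offered_load * math.exp(-2.0 * offered_load)

# The maximum sits at an offered load of G = 0.5,
# giving a peak throughput of 1/(2e), roughly 18 %.
peak = pure_aloha_throughput(0.5)
```

This 18 % ceiling is exactly the inefficiency that carrier sensing and collision detection in Ethernet were designed to overcome.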
On May 22nd, 1973 Robert Metcalfe writes a 13 page description of what will become Ethernet
as part of his Harvard PhD thesis. The thesis is inspired by the ALOHAnet introduced a
couple of years earlier.
On March 21st, 1975 Xerox Corporation files United States Patent 4,063,220 entitled
“Multipoint data communication system with collision detection”.
Bob Metcalfe, now working for Xerox, is the lead inventor. The patent was written while
Metcalfe was building a networking system for Xerox-Parc’s computers. Xerox's motivation for
the computer network was to integrate their world's first laser printer into a network in order to
be able to print with a single printer [4].
In June 1979, Bob Metcalfe founds 3COM Corporation – a manufacturer of network
equipment and in particular Ethernet Network Interface cards.
Subsequently, many Ethernet standards were ratified. Some milestones are:
• Standardization of the IEEE 802.3 (Ethernet, 10 Mbit/s),
• Standardization of the IEEE 802.3u (Fast Ethernet, 100 Mbit/s),
• Standardization of the IEEE 802.3z (Gigabit Ethernet, 1 Gbit/s),
• Standardization of the IEEE 802.3ae (10 Gigabit Ethernet over fiber, 10 Gbit/s),
• Standardization of the IEEE 802.3an (10 Gigabit Ethernet, 10 Gbit/s).
The word “Ethernet” was coined on May 22nd, 1973 by Bob Metcalfe at the Xerox Palo Alto
Research Center. He writes: “The word ether came from lumeniferous ether - the omnipresent
passive medium once theorized to carry electromagnetic waves through space, in particular
light from the Sun to the Earth. Around the time of Einstein's Theory of Relativity, the lightEIBONE_AK_Uebertragungstechnik_Positionspapier_100GbE.doc
page 5 of 54
White Paper 100 GbE
bearing ether was proven not to exist. So, looking to name our LAN's omnipresent passive
medium, then a coaxial cable, which would propagate electromagnetic waves, namely data
packets, I chose to recycle ether. Hence, Ethernet.”
1.2.2 The Ethernet physical layer
The Ethernet physical layer defines transmission rate, modulation method and media type as
well as signal encoding, the connectors and many other physical layer aspects.
Four data rates are currently defined for operation over optical fiber and twisted-pair cables:
• 10 Mbit/s: 10Base-T Ethernet
• 100 Mbit/s: Fast Ethernet
• 1000 Mbit/s: Gigabit Ethernet
• 10 Gbit/s: 10-Gigabit Ethernet
The many implementations of Ethernet at the various bit-rates and in various media have
resulted in a confusing variety of Ethernet standards. The naming convention of these
standards is a concatenation of three terms indicating the transmission rate, the transmission
method, and the media type/signal encoding. For instance:
10GBase-LR = 10 Gbit/s, 1310 nm wavelength transmission over optical fiber cable with 64B66B encoding.
The various suffixes are:
“X”: denotes use of 8B10B code,
“R”: denotes use of 64B66B code,
“W”: indicates encapsulation of 64B66B coded data into SONET STS-192 payload, thus implementing a WAN PHY,
“S”: indicates a multimode fiber operating at 850 nm ("S" stands for "short range"),
“L”: indicates a single-mode fiber at 1310 nm ("L" stands for "long range"),
“E”: indicates a single-mode fiber at 1550 nm ("E" stands for "extended range"),
“L4”: indicates two single/multimode fibers in the 1310 nm WDM window,
"T": indicates transmission over Cat 6 and Cat 7 Twisted Pair Cable.
Ethernet has adopted the 1-persistent mode of the CSMA-CD protocol. This means, that
when a channel goes silent, the station with data transmits immediately. During the
transmission, the station continues to listen for a collision that can occur. If a collision occurs,
the station aborts the transmission and schedules a random time when it will reattempt to
transmit its frame. With increasing bit-rate the Ethernet standard has undergone major
changes – to the extent that the original idea of the CSMA-CD protocol is disabled and one
might question whether there is sufficient justification to still call it Ethernet.
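The random retransmission delay mentioned above is, in IEEE 802.3, drawn by truncated binary exponential backoff; a minimal sketch (the doubling per collision and the cap after ten collisions follow the standard, the helper name is ours):

```python
import random

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential backoff of IEEE 802.3: after the n-th
    collision of a frame, wait a random number of slot times drawn
    uniformly from 0 .. 2**min(n, 10) - 1."""
    k = min(attempt, 10)
    return random.randint(0, 2 ** k - 1)

# After the first collision: 0 or 1 slots; after the third: 0..7 slots;
# from the tenth collision on, the window stays at 0..1023 slots.
```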
While the 10Base-T Ethernet was a CSMA-CD with a minimum frame size of 64 bytes, the
increase in speed to 1 Gbit/s resulted in a transmission being completed before the sending
stations could sense a collision. For this reason the slot time was extended to 512 bytes for
Gigabit Ethernet. Frames smaller than 512 bytes must now be extended with padding bytes.
Also an approach called frame bursting was introduced to address this scaling problem.
Stations are allowed to transmit a burst of small frames, in effect to improve the transmission
efficiency. The IEEE standard for 10-Gigabit Ethernet led to other changes. For instance,
because the ratio of the round-trip propagation delay and the frame transmission time
becomes very small, 10 Gigabit Ethernet is defined only for full-duplex mode providing a
point-to-point Ethernet connectivity service with the CSMA-CD algorithm disabled.
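The 512-byte slot time follows directly from the round-trip argument; a short calculation of the largest collision domain a given slot time still covers (the signal speed of 2·10^8 m/s is a typical approximation, and repeater delays are ignored):

```python
def max_diameter_m(slot_bits: int, bit_rate_bps: float,
                   propagation_m_per_s: float = 2.0e8) -> float:
    """Maximum network diameter for which a collision is still detected
    within the slot time: the slot must cover one full round trip."""
    slot_time_s = slot_bits / bit_rate_bps
    return propagation_m_per_s * slot_time_s / 2.0

# 10 Mbit/s,  64-byte slot: ~5120 m -- comfortable for a building LAN.
# 1 Gbit/s,   64-byte slot: ~51 m   -- too small, hence the extension:
# 1 Gbit/s,  512-byte slot: ~410 m.
```

At 10 Mbit/s a 64-byte slot covers a diameter of several kilometres; at 1 Gbit/s the same slot would allow only about 50 m, which is why the slot was extended to 512 bytes.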
1.2.3 The Ethernet frame structure
There are two Ethernet frame formats: the IEEE 802.3 standard and the Ethernet (DIX) or
Ethernet II standard. These two frame formats are identical except for the 2 byte “Length”
field, which in Ethernet II is replaced by a “Type” field. The Ethernet II frame structure is shown
in Fig. 1.
The fields in the frame have the following meaning:
• Preamble: A seven-byte sequence that repeats the octet “10101010”.
• Start Frame Delimiter: This is the pattern “10101011”.
• Destination and Source Field Addresses: The addresses are six bytes long.
The first bit distinguishes between single addresses and group addresses that are
used to multicast a frame to a group of users.
The 2nd bit indicates whether the address is a local address or a global address.
Thus there are 2^46 global addresses left for use. These are burnt into the ROM of the
NIC card. The first 3 bytes specify the NIC vendor and are called the Organizationally
Unique Identifiers (OUI).
• Type Field: This field identifies the upper layer protocol. For example, type field values
are defined for IP, Address Resolution Protocols (ARP), and Reverse ARP.
• Pad: The pad field ensures that the frame size is always at least 64 bytes long. The data
field is at most 1500 bytes long.
• FCS (Frame Check Sequence): A 32 bit cyclic redundancy check sequence. It
covers the address, type, data and pad fields. It does not correct errors, it only enables
the detection of errors.
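The field layout above can be made concrete in a few lines; the sketch assembles a minimal Ethernet II frame, pads the data field to 46 bytes and appends the FCS (Ethernet uses the same CRC-32 polynomial as zlib, transmitted least significant byte first; the addresses and payload here are illustrative):

```python
import struct
import zlib

def build_ethernet_ii_frame(dst: bytes, src: bytes, eth_type: int,
                            payload: bytes) -> bytes:
    """Assemble destination/source addresses (6 bytes each), the 2-byte
    type field, the payload padded to the 46-byte minimum, and the FCS."""
    if len(payload) < 46:                  # pad field: keep the frame >= 64 bytes
        payload = payload + bytes(46 - len(payload))
    header = dst + src + struct.pack("!H", eth_type)
    fcs = struct.pack("<I", zlib.crc32(header + payload))
    return header + payload + fcs

frame = build_ethernet_ii_frame(
    dst=bytes.fromhex("ffffffffffff"),     # broadcast address (all ones)
    src=bytes.fromhex("020000000001"),     # locally administered address
    eth_type=0x0800,                       # IP, per the type-field examples above
    payload=b"hello")
# 6 + 6 + 2 + 46 + 4 = 64 bytes: exactly the minimum frame size.
```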
Fig. 1: Ethernet II frame structure (64 to 1518 bytes). A destination address beginning with ‚0' is a single address, one beginning with ‚1' a group address (the broadcast address is all ones); the 2nd bit distinguishes locally (‚0') from universally (‚1') administered addresses, leaving 2^46 possible global addresses.
1.3 Techno-economical boundary conditions, retrospective
concerning developments in TDM and data networks
Fuelled by the rollout of broadband residential and mobile access services, network traffic is
continuously growing. Even conservative estimates predict an annual growth rate of 30 –
100 % for the transport network. The traffic growth is primarily driven by IP services. In
response to this development, major network operators world-wide are currently upgrading
their core and metro networks to support 40 Gbit/s wavelength services. Large Internet
exchanges and other users reach aggregated traffic volumes which already today could
benefit from higher data rates (see Fig. 2). Consequently, both network operators and
equipment manufacturers foster the development of next generation network equipment going
beyond interface card speeds of 40 Gbit/s.
With its LAN dominance, simplicity and high volume/low cost, Ethernet is emerging as a
ubiquitous packet interface standard also for MAN and WAN networks. As the Ethernet
hierarchy calls for a 10-fold increase in capacity for each new generation, the natural logical
step after the introduction of 1 GbE and 10 GbE is consequently 100 GbE. 100 GbE has a
wide range of applications:
Fig. 2: Internet traffic statistics in German and European Internet exchanges: Deutscher Commercial Internet Exchange (DE-CIX) and Amsterdam Internet Exchange (AMS-IX).
Consumer broadband access & aggregation networks: Applications such as voice-over-IP (VoIP), interactive high-definition IPTV, video-on-demand, multimedia streaming, and
internet data, bundled to a single service known as triple-play, are driving the need for
bandwidth in the residential area. The services are delivered through a variety of access
technologies, including cable, ADSL2/VDSL, and passive optical networks (PON). Whilst
current network roll-outs support data rates of 1-100 Mbit/s and moderate numbers of users, it
is expected that both access speeds and numbers of users will grow rapidly. Estimates of the
traffic growth in the access network predict aggregated bandwidths exceeding 100 Gbit/s as
early as in 2008-2010.
Enterprise Networks: Server and data centres, as well as cluster computing, are driving the
need for higher bandwidth in the enterprise sector. Servers are commonly connected to
aggregation switches. As 10 GbE interfaces become more widely deployed at the backend of
large server farms and storage arrays, congestion occurs at aggregation points and will drive
the need for 100 GbE at key points in the network.
Research, Education, and Government Facilities: They typically operate supercomputing
centres for data visualization, modelling, and processing that handle terabits and even
petabits of data. These high performance computing clusters are comprised of hundreds,
sometimes thousands, of nodes and may be geographically dispersed. In these networks link
aggregation of 10 GbE is commonly used, making an upgrade to 100 GbE important in the near future.
Content providers: Google, Yahoo, and other large content providers have announced an
immediate need for 100 GbE.
Internet exchange providers: Currently, the largest Internet exchange carriers in Asia, the
U.S. and Western Europe all support 80 to >100 Gbit/s of aggregate bandwidth. AMS-IX, an
Amsterdam based Internet data exchange, predicts that at the end of 2007 Inter Switch
Link (ISL) capacity will need to reach 80-100 Gbit/s, which would require an upgrade to a
12-16 x 10 GbE LAG (Link Aggregation Group). LAG is not preferred for this application due
to the complexity and limitations associated with flow segregation and aggregation on layer 2.
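The flow segregation problem stems from how a LAG distributes frames: a hash of the address pair pins every flow to one member link, so a single large flow can never use more than one member's 10 Gbit/s. A toy version of such a distribution function (real hash inputs and functions are vendor specific):

```python
def lag_member(src_mac: bytes, dst_mac: bytes, n_links: int) -> int:
    """Pick a LAG member link by hashing the address pair. All frames of
    one flow map to the same link, preserving frame order -- but also
    capping any single flow at one member's capacity."""
    h = 0
    for byte in src_mac + dst_mac:
        h = (h * 31 + byte) & 0xFFFFFFFF
    return h % n_links

# Two different flows may land on the same link (uneven load), and one
# elephant flow stays pinned to a single 10 GbE member however busy it is.
link = lag_member(b"\x02\x00\x00\x00\x00\x01", b"\x02\x00\x00\x00\x00\x02", 12)
```

A single 100 GbE interface avoids both the ordering constraint and the per-flow capacity cap, which is why a 12-16 member LAG is seen as a stopgap.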
The IEEE 802.3 HSSG is working on the definition of 100 GbE for LAN and short range
applications. The IEEE approach is to adhere to existing Ethernet standards wherever
possible and define 100 GbE in an incremental way. Areas of new developments are in
particular the electronic and optical interfaces. Optical interface options under discussion
include both parallel and serial realisations. It is likely that a parallel (C)WDM option (probably
4x25 Gbit/s) will emerge for 10-40 km single channel applications on standard single-mode
fiber (SSMF). Standard Ethernet economics dictate that 100 GbE must not be more expensive
than twice a 10 GbE interface. It is clear that achieving this target requires both large volumes and
a low cost base (e.g. by a high level of optical integration).
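The parallel option implies an inverse-multiplexing step: the serial 100 Gbit/s stream is striped over four lanes and recombined at the receiver. A toy round-robin sketch (the real 100 GbE lane distribution operates on 66-bit blocks with alignment markers and skew compensation, all omitted here):

```python
def stripe(blocks: list, n_lanes: int) -> list:
    """Distribute a sequence of data blocks round-robin over n_lanes.
    Assumes the block count is a multiple of the lane count."""
    lanes = [[] for _ in range(n_lanes)]
    for i, block in enumerate(blocks):
        lanes[i % n_lanes].append(block)
    return lanes

def recombine(lanes: list) -> list:
    """Reassemble the original block order, assuming lane skew between
    the parallel wavelengths has already been compensated."""
    out = []
    total = sum(len(lane) for lane in lanes)
    for i in range(total):
        out.append(lanes[i % len(lanes)][i // len(lanes)])
    return out

data = list(range(8))
assert recombine(stripe(data, 4)) == data   # round trip preserves order
```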
How 100 GbE signals can be transported reliably over optical MAN/WAN networks is a
question which is being addressed by the ITU-T. A liaison between IEEE and ITU-T aims at
ensuring consistency between the different standardisation activities. The ITU-T SG15 Q11 is
working on the choice of a transmission frame structure compatible with the Optical Transport
Network (OTN) standard. Although the standardisation processes are far from complete, the
current proposal in the ITU SG15 is that the aggregate physical line rate will be in the
111 - 130 Gbit/s range, depending on the exact structure agreed. At 111 Gbit/s, the 100 GbE LAN
PHY signal would be mapped with minimum overhead into an OTN-like container.
The lower data rate has the advantage that such a solution can potentially be realised earlier
and more easily and that it possesses a narrower optical spectrum. At 130 Gbit/s, the data rate is
optimised for the most efficient multiplexing of 2.5 Gbit/s, 10 Gbit/s and 40 Gbit/s tributaries
into a new OTN container, but requires more overhead and optical bandwidth and is less
efficient in mapping native 100 GbE signals. Traditionally, both SDH and OTN have used a
factor of four between different hierarchy levels. All current proposals for the next OTN
hierarchy level break this rule.
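The trade-off between the two line rates can be reduced to a mapping-efficiency figure; the client rate of 103.125 Gbit/s below assumes a 64B66B-coded 100 GbE signal, one of several possibilities under discussion at the time:

```python
def mapping_efficiency(client_rate_gbps: float, line_rate_gbps: float) -> float:
    """Fraction of the optical line rate that carries client payload."""
    return client_rate_gbps / line_rate_gbps

# Assumed 100 GbE client of ~103.125 Gbit/s (64B66B-coded):
eff_111 = mapping_efficiency(103.125, 111.0)   # ~0.93: lean mapping
eff_130 = mapping_efficiency(103.125, 130.0)   # ~0.79: more overhead, but
                                               # efficient 10G/40G muxing
```

The 111 Gbit/s proposal buys roughly 14 percentage points of efficiency and a narrower spectrum; the 130 Gbit/s proposal buys cleaner multiplexing of legacy tributaries.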
Whilst the ITU-T will standardise the digital frame format for transporting 100 GbE, the
physical interface for 100 GbE WDM transmission is not likely to be standardised any time
soon and will be the choice of the transport equipment vendor. First technology
demonstrations have already shown the feasibility of 100 GbE serial transmission over SSMF
and components for 100 GbE transmission, such as high speed photodiodes, are becoming
available. The technological hurdles for transporting 100 GbE serially over long distances
present a significant challenge and will likely make 100 GbE serial interfaces appear first in
the aggregation network and on short link lengths.
Depending on the processing required in the aggregation nodes, 100 GbE flows may initially
be decomposed into subrate flows which can then be transported, for example, as virtually
concatenated 40 Gbit/s streams in the core and metro domain. It is, however, expected that
for transport applications, serial transmission will win over parallel transmission. The reasons
are the better cost structure and higher spectral efficiency offered by serial formats. The
physical challenges of 100 GbE transport over long distances call for methods such as multi-level/multi-carrier modulation formats, adaptive distortion compensation and electronic signal processing.
A high level of optical integration and a convergence to a limited number of solutions are key
for reaching the target cost of 100 Gbit/s when compared to 10 Gbit/s and 40 Gbit/s (2.5 times
the cost for 4 times the data rate is commonly cited for transport networks). A commonality of
100 Gbit/s with existing 10 and 40 Gbit/s component technologies is desirable for increasing
volumes and improving economies of scale.
The requirements on the transmission technology can be summarised as follows:
• Supporting metro (~250 km) and core/LH DWDM (~1500 km) applications,
• compatible to today’s fiber infrastructure,
• compatible to 50 GHz channel grid (core/LH),
• compatible to 100 GHz channel grid (metro),
• ≥ 2x increase of spectral efficiency compared to 10 Gbit/s.
For core DWDM network applications, QPSK with polarisation multiplexing and coherent
reception seems to form an emerging industry consensus, although a variety of technical and
implementation problems remain to be more thoroughly investigated. In the metro domain,
the exploitation of multi-level/multi-carrier modulation formats (e. g. OFDM) attracts much
interest, though an industry consensus is less obvious at the moment.
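Why PDM-QPSK fits the 50 GHz grid follows from simple bits-per-symbol arithmetic; the 112 Gbit/s line rate below is an assumed example that includes FEC and framing overhead:

```python
def symbol_rate_gbaud(line_rate_gbps: float, bits_per_symbol: int,
                      polarisations: int) -> float:
    """Symbol rate needed to carry the line rate with the given number of
    bits per symbol on each polarisation."""
    return line_rate_gbps / (bits_per_symbol * polarisations)

# Binary OOK, single polarisation: 112 Gbaud -- far beyond a 50 GHz slot.
baud_ook = symbol_rate_gbaud(112.0, 1, 1)
# QPSK (2 bit/symbol) with polarisation multiplexing: only 28 Gbaud.
baud_pdm_qpsk = symbol_rate_gbaud(112.0, 2, 2)
```

A 28 Gbaud symbol rate is comparable to today's 40 Gbit/s signals and leaves the optical spectrum narrow enough for 50 GHz channel spacing, which is the quantitative basis of the consensus mentioned above.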
2 Standardisation considerations
2.1 Standards for networks with optical transport interfaces
2.1.1 IEEE Ethernet
Ethernet is defined by the IEEE 802.3 Working Group within the IEEE Standardization
Process. The standard comprises the aspects of the physical layer and data link layer (layer 1
and 2 in the OSI model) of local area networks (LANs), i.e. data transmission over multiple
types of wired media.
During its evolution since the 1980s, Ethernet has moved more and more from the pure LAN
application (10 Mbit/s, 100 Mbit/s, 1 GbE) over copper cable towards the metropolitan network
area, providing high-speed transport (10 GbE) over optical fiber. Recently the IEEE has
launched a Higher Speed Study Group (HSSG) to investigate the future upgrade towards
100 Gbit/s Ethernet.
2.1.2 International Telecommunications Union
At the ITU-T, standardization recommendations of the G series cover the working area
"Transmission systems and media, digital systems and networks". The optical interfaces for
SDH/SONET as well as OTN transport networks are specified in several of those documents.
Additionally network operation and control is specified in this context. SDH/SONET provides a
mechanism to insert various data rates from different traffic sources/client interfaces by GFP,
while OTN extends the application of those techniques to operate and manage high data
rates over transport networks based on DWDM technology.
2.2 Ethernet transport
2.2.1 The IEEE 802.3 standard
Fig. 3 shows the OSI reference model and its relation to the IEEE 802 reference model for
LAN standards. It shows how the LAN functions are placed within the two lower layers of the
OSI reference model (a). The data link layer is divided into two sublayers: a common logical
link control (LLC) sublayer and various medium access control (MAC) sublayers. The MAC
sublayer deals with the problem of coordinating access to the shared physical medium.
Fig. 3 (b) shows that the IEEE has defined several MAC standards, including IEEE 802.3.
Fig. 3: OSI reference model and LAN standards as developed by the IEEE 802 committee.
The original IEEE 802.3 standard was approved by the 802.3 working group in 1983 and was
subsequently published as an official standard in 1985 (ANSI/IEEE Std. 802.3-1985). Since
then, a number of supplements to the standard have been defined to take advantage of
improvements in the technologies and to support additional network media and higher data
rate capabilities, plus several new optional network access control features.
Ethernet has adopted the 1-persistent mode of the CSMA/CD protocol. This means that
when the channel goes silent, a station with data to send transmits immediately. During the
transmission, the station continues to listen for a collision that may occur. If a collision occurs,
the station aborts the transmission and schedules a random time at which it will reattempt to
transmit its frame.
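The access rule described above can be sketched as a simple control loop; `channel_busy` and `detect_collision` are hypothetical callbacks standing in for the physical-layer carrier sense and collision detection functions, and the backoff follows the truncated binary exponential scheme:

```python
import random

def transmit_frame(channel_busy, detect_collision, max_attempts=16):
    """1-persistent CSMA/CD transmit attempt (illustrative sketch)."""
    slot_time = 51.2e-6  # classic 10 Mbit/s Ethernet slot time in seconds
    for attempt in range(1, max_attempts + 1):
        while channel_busy():       # 1-persistent: as soon as the channel
            pass                    # goes silent, transmit immediately
        if not detect_collision():  # keep listening during transmission
            return True             # frame delivered without collision
        # collision: abort, then wait a random number of slot times
        k = min(attempt, 10)        # truncated binary exponential backoff
        delay = random.randrange(2 ** k) * slot_time
        _ = delay                   # a real MAC would pause for `delay` here
    return False                    # excessive collisions: give up
```

The sketch omits inter-frame gaps and timing; it only mirrors the decision logic of the paragraph above.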
2.2.2 IP/Ethernet over WDM
As mentioned above, Ethernet covers network layers 1 and 2 only; the network aspect is
therefore covered by the application of IP as the layer 3 protocol. For 10GbE, the PHY
(physical layer interface) can be used for direct transport over a WDM infrastructure by
utilizing coloured optics. This method leaves all routing functions to the IP router and does
not support traffic engineering on lower layers, e.g. by pseudo-wire techniques.
2.2.3 Ethernet over SDH/SONET/OTN
Ethernet transport is also possible by GFP mapping of 10 Mbit/s … 1 Gbit/s Ethernet data
flows into VCs of the SDH/SONET hierarchy. There is an ongoing debate about the economic
transport of 10GbE, due to the existence of two flavours for the physical layer interface with
different data rates, i.e. LAN PHY with 10.3 Gbit/s and WAN PHY with 9.95 Gbit/s. Operators
prefer mapping of LAN traffic directly into the transport network. However, due to the data rate
mismatch, this is not supported in transparent mode. For this reason it would be beneficial to
find an agreed transport strategy during the definition phase of the new standard. For a higher
data rate Ethernet stream possible mapping options would be multiples of ODU2 or ODU3
signals, which can be handled by an OTN efficiently. The proposed value of 100 Gbit/s does
not fit well.
2.2.4 Carrier Grade Ethernet
Carrier Grade Ethernet has strong and global momentum, with evolving services and
technologies which promise scalable networks and cost-effective networking. It comprises
Carrier Grade Ethernet Services for residential and business customers of the mass and
business markets, respectively. These service demands can be served over Carrier Grade
Ethernet Networks using Carrier Grade Ethernet Technologies, standardized e.g. in IEEE 802,
ITU-T, and IETF recommendations. Besides that, Carrier Grade Ethernet Networking is a
prerequisite to operate the networks cost-effectively.
Carrier Grade Ethernet Services and related functionalities are specified in the Metro Ethernet
Forum (MEF). Besides the definition of Carrier Grade Ethernet in terms of standardized
services, scalability, service management, reliability, and quality of service, an expanded set
of services is discussed in the MEF Ethernet service definition phase 2, shown in Table 1.
Table 1: MEF defined expanded set of Carrier Grade Ethernet Services

Service Type | Port-based (All to One Bundling) | VLAN-based (Service Multiplexed)
E-Line       | Ethernet Private Line            | Ethernet Virtual Private Line
E-LAN        | Ethernet Private LAN             | Ethernet Virtual Private LAN
E-Tree       | Ethernet Private Tree            | Ethernet Virtual Private Tree
These services may be categorized as Layer 2 virtual private network (VPN) services. The
Ethernet Tree service may be useful for the mass market connecting residential customers to
their application service providers supporting multiple play services for voice, video, data, and
mobile applications over a Metro Ethernet Network.
The Ethernet Line and LAN services may be useful for business customers and backhaul
services supporting the advanced requirements for enterprise networks and Carrier Grade
Ethernet networks up to the national and global scale.
Carrier Grade Ethernet Technologies are needed to produce the services over a Carrier
Grade Ethernet network. Due to the momentum of the development, recently the MEF
expanded its scope from Carrier Grade Ethernet Access via Metro Carrier Grade Ethernet up
to a Global & National Carrier Grade Ethernet, as shown in Fig. 4.
Fig. 4: MEF Carrier Grade Ethernet Scope and Expansion
Carrier Grade Ethernet supports the different areas of access, metro, and backbone networks,
resulting in a multi-domain Carrier Grade Ethernet Network. It is a converged network which
can integrate the different IP layer 3 applications as well as layer 2 VPN and wireless services.
Carrier Grade Ethernet has to support the needed functionalities of CoS/QoS, reliable,
resilient, scalable, and secure networking, which are going to be standardized. Due to the
Ethernet expansion, hierarchical networking in terms of Provider Bridge (IEEE802.1ad) and
Provider Backbone Bridge (IEEE802.1ah), and higher bitrates i. e. 100 GE interfaces
(IEEE802.3 HSSG) will become necessary. Furthermore, OAM functionalities in terms of Loop
Back, Link Trace, Performance Monitoring, Alarm Indication and Remote Defect Indication
signalling are required for fault management.
For the global and national Carrier Grade Ethernet an interworking with the OTN (optical
transport network) is necessary to support long haul and ultra long haul Ethernet
transmissions since the interface reach is standardized to 40 km.
The development is under way, but there is still work to do towards the specification and
standardization of the advanced functionalities for Carrier Grade Ethernet.
Carrier Grade Ethernet Networking is a further prerequisite for cost-effective operation of the
growing and scalable networks. This includes seamless network control for multi-domain
Carrier Grade Ethernet networks, e.g. using control planes and/or network management
systems, to support the provisioning of end-to-end Carrier Grade Ethernet services.
Furthermore, there is a multi-layer environment including layer 1 OTN (optical transport
network), layer 2 Ethernet, and layer 3 IP networking functionalities, resulting in respective
node architectures of next generation networks as shown in Fig. 5.
Fig. 5: Multi-Layer Node Architecture
For cost-effective networking, the traffic should be switched at the layer at which the
functionality is needed. If no switching is needed, the node should be passed by using the
layer 1 transit traffic path. Otherwise the node traffic is led to the respective layer 2 or
layer 3 functional elements.
For such seamless layer interworking, a node control is necessary which comprises all three
layers. It should result in one-hop solutions for the higher layers in a network domain if
possible. This reduces the traffic in the higher layers and saves ports as well as CAPEX and
OPEX.
2.3 Evolution of high-speed interfaces for optical transport
IEEE has decided to establish a Higher Speed Study Group to discuss potential alternative
solutions for a higher data rate Ethernet. A couple of potential interface approaches have
been presented at the constitutional meeting; among them are parallel transmission over
multiple wavelengths or multiple fibers (multi-lane approach) as well as serial transport, either
directly binary modulated or applying multi-level signalling. All of these concepts have their
advantages as well as drawbacks. Although there will be a number of different applications
like router inter-connection, metro-transport and long-haul transport, it would be beneficial to
define a common set of adaptation functions (ASIC/FPGA) to avoid several parallel physical
layer interface developments. This activity could probably be triggered by collaboration
between different standardization organizations, with the objective to propose a consistent
end-to-end transmission scenario for 100Gb Ethernet data streams.
The following objectives were agreed for the HSSG:
Support full-duplex operation only,
Preserve the 802.3/Ethernet frame format at the MAC Client service interface,
Preserve minimum and maximum Frame Size of the current 802.3 Std.,
Support a speed of 100 Gbit/s at the MAC/PLS service interface,
Support at least 40 km on SMF,
Support at least 10 km on SMF,
Support at least 100 meters on OM3 MMF.
IEEE does not consider transport solutions for longer distances at this time. Collaboration
with ITU-T Study Group 15 was proposed to elaborate on potential transport alternatives
applying OTN data rates.
Current proposals for an OTU4 bit rate focus around 112Gb/s, which does not follow the
former strategy to increase transport data rates by factors of 4. In order to limit the
technological challenge, the data rate could be selected close to the raw 100 Gbit/s Ethernet
data rate plus additional overhead for OAM functions and FEC. Although this bit rate does not
continue to allow multiplexing of 4 times OTU3, it still can be used to combine multiple OTU2
and OTU3 bit streams.
Before final conclusions can be drawn, a serious effort is required to evaluate the proposed
alternatives from a scientific and technological point of view as well as from system
development and, finally, economic standpoints.
3 Concepts and implementation options
The realization of 100 Gbit/s Ethernet interfaces comes with several challenges. Among them
are packet buffering and processing for ultra high line rates as well as the physical layer
interface. The following chapter focuses on options for the implementation of such interfaces.
As optical fiber based transmission offers the only viable solution with sufficient reach for
metro, regional and core networks, requiring transmission distances up to several hundred
(long haul, LH) or even a few thousand (ultra long haul, ULH) kilometers, it was chosen as the
technology that will be discussed. Fig. 6 shows a selection of options for fiber based
100 Gbit/s interfaces in the form of a decision tree.
[Figure: decision tree of options for 100 Gbit/s interfaces, with three decisions]
Number of fibers:
- 10 fibers, 10 Gbit/s each
- 4 fibers, 25 Gbit/s each
- ...
Number of wavelengths:
- 10 x 10 Gbit/s CWDM
- 4 x 25 Gbit/s CWDM
- 10 x 10 Gbit/s DWDM
- 4 x 25 Gbit/s DWDM
- ...
Number of bits per symbol:
- 100 Gbit/s NRZ ETDM
- 100 Gbit/s RZ OTDM
- 100 Gbit/s duobinary
- 2 x 50 Gbit/s DQPSK
- 2 x 2 x 25 Gbit/s PolMUX DQPSK
- 100 x 1 Gbit/s OFDM
- ...
Fig. 6: Implementation options for optical fiber transmission based 100 Gbit/s interfaces
First deployment of 100 Gbit/s interfaces will most likely occur in large carrier offices or large
enterprise data centers. The easiest way to implement these interfaces with currently
available technology is to transmit multiple lower bitrate data streams over parallel fibers. For
short range, i. e. intra-office links, fiber ribbon technology provides a feasible solution. The
total data stream can be split by inverse multiplexing, launched into 10 fibers carrying
10 Gbit/s signals each and reassembled after detection. Increasing the bitrate per fiber
enables reduction of the fiber count.
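The split-and-reassemble step can be illustrated with a byte-wise round-robin sketch; the lane count of 10 and the byte granularity are illustrative assumptions (real systems work at bit or block level and must also compensate the skew between fibers before reassembly):

```python
def inverse_mux(data: bytes, lanes: int = 10) -> list:
    """Distribute one serial stream round-robin over `lanes` parallel fibers."""
    return [data[i::lanes] for i in range(lanes)]

def inverse_demux(parts: list) -> bytes:
    """Reassemble the original stream; assumes the lanes arrive in order
    and have already been resynchronized (inter-fiber skew compensated)."""
    out = bytearray()
    for i in range(max(len(p) for p in parts)):
        for p in parts:
            if i < len(p):
                out.append(p[i])
    return bytes(out)

payload = bytes(range(100))
assert inverse_demux(inverse_mux(payload)) == payload  # round trip is lossless
```

The receiver-side resynchronization hinted at in the comment is exactly the delay-compensation issue discussed for multi-channel transport later in this chapter.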
Despite the easy implementation of this option, it is not very desirable from a logistical point of
view. Some carriers prefer to avoid clogging of fiber trays by cutting off excess length of patch
cables after installation and attaching new connectors on-site. The required procedures are
well established for single fibers (one fiber per direction). Fiber ribbon cables would require
more complex procedures and new equipment for the installation of connectors.
But even without the need for on-site installation of connectors, carriers prefer to avoid fiber
ribbons, because they add another cable type. Managing patch cables is already a
challenging task in large offices due to different cable lengths. Planning the installation of
patch panels would turn into a much more difficult process if 100 Gbit/s ports required fiber
ribbon connections between them. Having the option to install a large number of single fiber
connections and decide later, whether a given patch panel port will be used for a 10 Gbit/s or
a 100 Gbit/s link, provides a much more flexible and hence desirable solution. In
consequence, the single fiber option has to be considered as the preferable solution for short
reach interfaces. It provides the only viable solution for longer reach links.
Another technology that enables keeping the speed requirements per channel low is
wavelength division multiplexing (WDM). Again, the total data stream is split into lower
bitrate substreams.
In case of WDM, each one is transmitted using a separate wavelength in one single fiber
before it is detected and recombined with the other substreams in the receiver.
For short range connections, deploying another patch cable is no big issue. A dedicated patch
cable can be reserved for each service. As a result, a given fiber has to carry only one
100 Gbit/s channel. The WDM channels can be distributed over a broad wavelength range,
enabling wide channel spacings. Coarse wavelength division multiplexing (CWDM) offers a
cost efficient solution for this application, because filter and wavelength accuracy
requirements can be relaxed.
In metro, regional and core networks, fibers are a valuable and usually scarce resource.
Bandwidth efficiency has to be kept high to provide large transport capacity per fiber. Narrow
WDM channel spacing helps to enable transmission of more channels in a given wavelength
range, leading to dense wavelength division multiplexing (DWDM). Transporting a 100 Gbit/s
service in a multiple channel group together with channels from other services in a single fiber
is still an option, but not a desirable one. It provides less bandwidth efficiency than other
options and it increases required port count in transparent optical networks.
A 100 Gbit/s service transported in N channels needs the same number of available ports in
an optical add drop multiplexer (OADM) or photonic cross-connect (PXC), whereas a service
transported in a single wavelength needs only one. As the internal complexity of OADMs and
PXCs grows over-proportionally with increasing port count, requiring multiple ports per service
is not a desirable solution. In addition, guaranteeing that the channel group belonging to one
service will follow the same path through the network increases the complexity of the network.
But even if the group of channels carrying the substreams follows the same path through the
network, there will be propagation delay differences between the channels, for example due
to chromatic dispersion. Special measures have to be taken to keep these differences small.
Additional measures are required in the receiver to resynchronize the bit streams.
Resynchronization increases the complexity of the receiver, which has an impact on the cost.
In consequence, transport of 100 Gbit/s services should not require more than one
wavelength in metro, regional and core networks. For short range connections, CWDM can
be considered a viable option in case it can be realized at sufficiently low cost.
The third decision deals with the number of bits per symbol. Transmission of one bit per
symbol, especially in the form of amplitude shift keying (ASK) non-return to zero (NRZ)
electrical time division multiplexing (ETDM), provides the most straightforward approach for
100 Gbit/s interfaces. It is also the most challenging one with respect to high speed
electronics and component bandwidth requirements.
Optical time division multiplexing (OTDM) enables relaxing the speed requirements for
electronic components by providing multiplexing and demultiplexing functions in the optical
domain. Bitrates of 100 Gbit/s can be realized easily today using this technology. However,
the need for multiple optical transmitters and receivers at the lower bitrate in combination with
the optical components for multiplexing and demultiplexing does not lead to a very cost
efficient solution.
Once realizable, NRZ ETDM potentially provides a very low cost approach. For short reach
interfaces, its cost advantages and low complexity (single fiber, single wavelength, small
number of optical components with low complexity) create a lot of attractiveness. In case of
LH and ULH applications, other criteria will dominate. One of them is bandwidth efficiency.
Metro and regional networks installed today typically operate with channel spacings of
100 GHz, whereas spacings down to 50 GHz can be found in core networks. A transmission
format compatible with these channel spacings would be highly desirable.
For channel spacings of 100 GHz, optical duobinary modulation may provide a solution due to
its higher filtering tolerance compared to NRZ. Another advantage of this format is the lower
bandwidth requirement for the optical modulator. Channel spacings of 50 GHz can potentially
be realized with differential quaternary phase shift keying (DQPSK). This multi-level format
features transmission of two bits per symbol. The resulting symbol rate of 50 Gsymbol/s
reduces the speed requirements for the electronic components and leads to a higher
tolerance towards linear signal distortions such as chromatic dispersion (CD) and polarization
mode dispersion (PMD).
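The symbol-rate arithmetic behind these formats is straightforward; a small sketch (the bits-per-symbol values follow from the format descriptions above, with polarisation multiplexed DQPSK assumed to carry 2 bits on each of 2 polarisations):

```python
def symbol_rate_gbaud(bitrate_gbps: float, bits_per_symbol: int) -> float:
    """Symbol rate required to carry a given aggregate bit rate."""
    return bitrate_gbps / bits_per_symbol

# 100 Gbit/s carried by the formats discussed above:
for name, bps in [("NRZ-ASK", 1), ("DQPSK", 2), ("PolMUX-DQPSK", 4)]:
    print(f"{name}: {symbol_rate_gbaud(100, bps):.0f} Gsymbol/s")
# NRZ-ASK: 100, DQPSK: 50, PolMUX-DQPSK: 25 Gsymbol/s
```

Lower symbol rates directly relax the electronics bandwidth and, as noted above, the CD and PMD tolerances.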
Tolerance towards linear and nonlinear signal distortions plays a major role in case of LH and
ULH links. Orthogonal frequency division multiplexing (OFDM) potentially provides even more
linear distortion tolerance. However, it may suffer severely from nonlinear distortions. In
addition, the implementation of this format imposes demanding requirements on high speed
electronic as well as optical components.
The following sections discuss some of the technologies mentioned in this brief introduction
and their characteristics in more detail.
3.1 State of the art in general, resulting transmission and
data transport requirements
A 100 GbE carrier grade transmission system should fulfil the following requirements:
BER < 10^-12 at the MAC/PLS,
PMD-induced outage probability Pout ≤ 10^-5,
Compatibility with the OTN according to ITU-T G.709. The development of a
standardized Ethernet over OTN (EoOTN) solution is a must for application in metro
and backbone networks.
Mapping of the 100GE signal into ODU containers including inverse multiplexing:
most advantageous: standardising a data rate of 120 Gbit/s. This would
enable mapping into both virtually concatenated 40G containers and virtually
concatenated 10G containers (i.e. ODU3-3v and also ODU2-12v);
possible compromise: standardising a data rate of 100 Gbit/s into ODU2-10v
or ODU2-11v, or mixed virtual concatenation of ODU3 and ODU2 containers.
Direct mapping of a serial 100GE signal into an ODU container requires an
update of the present G.709 standard (e.g. by defining a suitable ODU4
container).
Compatibility with installed fiber base (G.652 in most cases), respecting the fiber
parameters (in particular PMD) of the installed fibers, and also the parameters (in
particular PMD) of recently standardized fibers.
Interfaces with span lengths of 40 km and 80 km (for metro and backbone networks).
Optically transparent transmission with in-line EDFAs over some hundreds of
kilometres is most desirable (to enable future-proof optically transparent systems
with EDFAs, OADMs, and other ONEs). WDM solutions in combination with the OTH
hierarchy (inverse multiplexing) may be very helpful.
Introduction of OAM functionalities required for carrier grade networks, in particular
Configuration Management (CM), Performance Management (PM) and Fault
Management (FM).
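The container counts in the mapping requirement above can be sanity-checked with rounded G.709 payload rates; the OPU2/OPU3 figures below are approximations (about 9.995 and 40.15 Gbit/s), and the 103.125 Gbit/s serial rate assumes 64B/66B line coding of the 100 Gbit/s MAC stream, so the results are illustrative only:

```python
import math

OPU2_GBPS = 9.995   # approximate OPU2 payload rate (ITU-T G.709)
OPU3_GBPS = 40.150  # approximate OPU3 payload rate

def containers_needed(client_gbps: float, payload_gbps: float) -> int:
    """Number of virtually concatenated containers for a client signal."""
    return math.ceil(client_gbps / payload_gbps)

print(containers_needed(100.0, OPU2_GBPS))    # 100G MAC rate -> ODU2-11v
print(containers_needed(103.125, OPU2_GBPS))  # 64B/66B serial rate -> 11
print(containers_needed(120.0, OPU3_GBPS))    # 120 Gbit/s -> ODU3-3v
```

With these rounded rates a 100 Gbit/s client just misses ODU2-10v (10 x 9.995 = 99.95 Gbit/s), which is why the text lists ODU2-11v as the alternative.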
3.2 Dominating limiting effects of the transmission link
Optical transmission is limited by a number of well-known impairments, such as
Accumulation of Amplified Spontaneous Emission (ASE) noise,
Residual Chromatic Dispersion (CD),
Polarisation Mode Dispersion (PMD),
Accumulated Polarisation Dependent Loss (PDL) effects,
Optical non-linearity,
Spectral imperfections of in-line Optical Network Elements (ONEs), e.g. accumulated gain
ripple of cascaded EDFAs or spectral narrowing of cascaded optical filters.
ASE noise can be minimized by applying Forward Error Correction (FEC), low-noise EDFAs,
low-noise optical receivers, etc. Thus, many mature technologies are available to cope
with ASE noise impairments.
Several degradation effects limit the reach of signal propagation over optical fiber spans.
Typically, the larger the symbol rate, the larger the impact of degrading propagation effects.
Loss due to scattering effects in optical fibers is uniformly distributed over distance, and
referred to as attenuation. Even though fiber attenuation is a frequency-dependent effect, it
can be regarded as constant over the signal bandwidth of interest. Optical amplifiers that are placed
between fiber spans compensate for the loss, but add also optical noise to the signal. One
important performance parameter is the Optical Signal-to-Noise Ratio (OSNR), which is given
as the ratio of optical signal power and optical noise power within a certain measurement
bandwidth. The OSNR can be directly translated to the Bit-Error Ratio (BER) for systems that
are limited by optical noise processes only. Typically, the bandwidth of optical noise sources
is much wider than the signal rate. Hence, it can be assumed that the OSNR is proportional to
the signal rate if optical filtering effects can be neglected. So when increasing the signal rate
from 10 Gbit/s to 100 Gbit/s using the same modulation format and filter bandwidths that are
proportional to the signal rate, the OSNR requirement rises by 10 dB.
Important linear propagation effects in single-mode optical fibers are chromatic dispersion
(CD) and polarization mode dispersion (PMD). Chromatic dispersion describes the frequency
dependence of the propagation velocity of optical signals along the fiber. As a criterion for CD
limitation one can consult the dispersion length (LD), which defines the distance over which a
chirp-free Gaussian pulse broadens by a factor of √2. LD is inversely proportional to the square
of the signal bandwidth. This means that when increasing the signal rate from 10 Gbit/s to
100 Gbit/s using the same modulation format and filter bandwidths that are proportional to the
signal rate, the CD requirement rises by a factor of 100. Table 2 shows the scaling of
maximum transmission distances with respect to CD limitation for different signal rates and
fiber type. For 100 Gbit/s even slight variations of accumulated dispersion, e.g., due to
manufacturing imperfections, its frequency dependence, or temporal temperature changes
along the fiber, may lead to performance impairments if no adequate compensation
mechanisms are employed.
Table 2: Limits of fiber propagation due to chromatic dispersion

Fiber type                                                     | 10 Gbit/s | 40 Gbit/s | 100 Gbit/s
Standard Single Mode Fiber, D = 16.6 ps/(nm·km)                | 62.9 km   | 3.9 km    | 0.6 km
Non-Zero Dispersion-Shifted Fiber (Type 1), D = 7.7 ps/(nm·km) | 135.6 km  | 8.5 km    | 1.4 km
Non-Zero Dispersion-Shifted Fiber (Type 2), D = 4.3 ps/(nm·km) | 242.9 km  | 15.2 km   | 2.4 km
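Since the CD-limited reach scales as 1/B², the 40 and 100 Gbit/s columns of Table 2 follow from the 10 Gbit/s column; a quick check of that scaling:

```python
def scale_cd_limit(length_km: float, rate_old: float, rate_new: float) -> float:
    """CD-limited reach scales with the inverse square of the signal rate."""
    return length_km * (rate_old / rate_new) ** 2

# 10 Gbit/s reaches from Table 2 (SSMF, NZDSF type 1, NZDSF type 2):
for l10 in (62.9, 135.6, 242.9):
    print(f"{l10} km -> {scale_cd_limit(l10, 10, 40):.1f} km at 40G, "
          f"{scale_cd_limit(l10, 10, 100):.1f} km at 100G")
# 62.9 -> 3.9 / 0.6, 135.6 -> 8.5 / 1.4, 242.9 -> 15.2 / 2.4 (km)
```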
Polarization mode dispersion describes the dependence of the propagation velocity of optical
signals on the state of polarization along the fiber. The Differential Group Delay (DGD)
defines the difference between propagation velocity of the slowest and fastest signal
polarization state at one distinct frequency. PMD is stochastically varying in time, frequency
and from fiber to fiber. The PMD characteristic of optical single-mode fibers is typically
described by an average DGD value per square-root distance. Table 3 lists fiber propagation
limitations due to PMD for different signal rates and fiber types. Due to the reduced bit
duration (and thus, pulse width), PMD represents a significant limitation for 100 Gbit/s signal
rate transmission.
Table 3: Limits of fiber propagation due to polarization mode dispersion

Fiber type                        | 10 Gbit/s | 40 Gbit/s | 100 Gbit/s
Old Fiber, PMD = 0.5 ps/√km       | 400 km    | 25 km     | 4 km
Modern Fiber, PMD = 0.1 ps/√km    | 10,000 km | 625 km    | 100 km
Advanced Fiber, PMD = 0.05 ps/√km | 40,000 km | 2,500 km  | 400 km
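The entries of Table 3 can be reproduced from the NRZ tolerance rules derived later in this chapter (DGDmax · B = 300 ps · Gbit/s, and 3 · PMDmax = DGDmax for an outage probability of 10^-5) together with the square-root growth of accumulated PMD with distance:

```python
def pmd_limited_length_km(rate_gbps: float, pmd_coef: float) -> float:
    """PMD-limited reach of a standard NRZ interface.

    rate_gbps: channel data rate B in Gbit/s
    pmd_coef:  fiber PMD coefficient in ps/sqrt(km)
    """
    dgd_max_ps = 300.0 / rate_gbps       # DGDmax * B = 300 ps*Gbit/s
    pmd_max_ps = dgd_max_ps / 3.0        # outage probability <= 1e-5
    return (pmd_max_ps / pmd_coef) ** 2  # invert PMD = coef * sqrt(L)

for coef in (0.5, 0.1, 0.05):            # old / modern / advanced fiber
    print([round(pmd_limited_length_km(b, coef)) for b in (10, 40, 100)])
# [400, 25, 4], [10000, 625, 100], [40000, 2500, 400] (km)
```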
Finally, several nonlinear propagation effects that depend on the local optical powers along
the fiber may affect the pulse propagation. While inter-channel interactions dominate for
signal rates of 10 Gbit/s, nonlinear intra-channel interactions predominantly limit systems with
40 Gbit/s and beyond due to the increased signal bandwidth.
It is the task of optical engineers designing WDM transmission systems to find mechanisms
that overcome the interacting impact of all those effects. The challenge here is to find cost-efficient
solutions that are robust, applicable to a wide variety of applications, and stable in
time and frequency.
3.2.1 Limitations due to PMD impairments
The widely deployed systems with up to 10 Gbit/s channel rate and standard modulation
formats based on 10 Gbaud transmit well over legacy fiber plants. With baud rates increasing
beyond 10 Gbaud, polarisation mode dispersion represents a crucial obstacle to maintaining
the optically transparent transmission distances of 300 to 1000 km that are used in most
European backbone networks. Since 10 Gbit/s DWDM systems shall be upgraded with
additional 40 Gbit/s channels, the given transparent distances are a fixed network parameter;
furthermore, the introduction of additional OEO repeaters would raise the investment costs by
an inadmissible amount. As a consequence, an area-wide deployment of systems with
40 Gbaud is not possible in some European backbone networks if no elaborate PMD
compensators are applied. Alternatively, the baud rate of a logical 40 Gbit/s channel has to be
decreased by use of multi-level modulation formats, e.g. DQPSK.
[Figure: PMD coefficient in ps/√km versus cable installation year; Region 1 (before PMD
specification was introduced as a standard process): mean value 0.32; Region 2: mean value
0.13; Region 3: mean value 0.052]
Fig. 7: Mean and variance of PMD coefficient as a function of cable installation year [5]
Fig. 7 shows the PMD values of the installed fiber base in a typical national fiber network.
Especially the fiber cables that have been installed until the early 1990s limit the employment
of 40 Gbit/s systems. Reinstallation of the cable segments with extraordinarily high PMD
values is very costly, and operators desist from it in most cases.
For 40 Gbaud implementation, DT’s core network links were divided into four different
categories (see Fig. 8). Category 1 (PMD < 2.5 ps/link) describes the case that the optically
transparent routes can be utilized without any adaptations to the available system technology;
standard modulation formats like NRZ are possible. For links of category 2
(2.5 ps/link < PMD < 5 ps/link), systems with enhanced receiver robustness are needed,
equipped with especially robust modulation formats and additional efficient FEC coding. For
category 3 (5 ps/link < PMD < 7.5 ps/link), specially dedicated high-performance PMD
compensators or concepts with equivalent PMD robustness are needed. Category 4
(PMD > 7.5 ps/link) cannot be managed yet, perhaps only by future technologies. At some
locations even links of category 4 cannot be avoided, but their number is very small, so fiber
reinstallation is still an economic solution here. The PMD compensators (PMDC) of the
needed efficiency available today are not yet mature enough to perfectly compensate for the
stochastic properties of PMD. Especially the partly very fast time variance [6] of the
differential group delay (DGD = individual channel related PMD) represents a problem for the
adaptive control systems of PMDCs.
[Figure: histogram of PMD per link, mean value 0.079 ps/√km; Class 1: < 2.5 ps,
Class 2: < 5 ps, Class 3: < 7.5 ps, Class 4: > 7.5 ps]
Fig. 8: Histogram of PMD in the core network for links with average fiber lengths of 400 km.
Categories 1 through 4 for different classes of robust 40 Gbit/s systems.
For 100 Gbit/s Ethernet systems based on 100 Gbaud transmission, the classes have to be
redefined: category 1 holds for PMD < 1 ps/link, category 2 for 1 ps/link < PMD < 2 ps/link,
category 3 for 2 ps/link < PMD < 3 ps/link, and category 4 for PMD > 3 ps/link. A more general
description of the PMD limit in 100 Gbaud systems follows.
Polarisation Mode Dispersion (PMD) is an inherent feature of optical fibers and Optical
Network Elements (ONEs), and PMD may cause severe problems due to its statistical nature.
In this clause, a simple quantitative estimation of PMD-induced limits is illustrated using
well-established results on PMD characteristics and PMD specifications of optical interfaces.
Let us consider an optical transmission system composed of an optical transmitter (Tx),
transmission fiber segments, ONEs and a receiver (Rx), as indicated in Fig. 9.
[Figure: transmission chain Tx - fibre - ONE (OA or OADM) - fibre - ... - Rx]
Fig. 9: Optical transmission system used to model the PMD-induced impairments
Looking at standard NRZ-based optical interfaces (as given e.g. in G.959.1, G.691, G.698.1,
etc.), the product of the channel data rate B and the maximum Differential Group Delay
DGDmax of the optical path between the Tx and Rx (i.e. between MPI-S and MPI-R) is
constant, i.e. we have
DGDmax • B = 300 ps • Gbit/s.   (1)
An NRZ 2.5G interface tolerates a maximum DGD of 120 ps, an NRZ 10G interface is specified
by DGDmax = 30 ps, and an NRZ 40G interface can cope with DGDmax = 7.5 ps.
Assuming a maximum PMD-induced outage probability of Pout ≤ 10^-5, the relation between
the maximum PMD, PMDmax, between MPI-S and MPI-R and DGDmax is given by
3 • PMDmax = DGDmax.   (2)
As an example, an NRZ 10G interface tolerates up to 30 ps of DGD, i.e. up to 10 ps of PMD.
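Relations (1) and (2) give the interface tolerances directly; extrapolating the same rules to 100 Gbit/s (an illustration only, not a standardized interface value) yields DGDmax = 3 ps and PMDmax = 1 ps:

```python
def dgd_max_ps(rate_gbps: float) -> float:
    """Relation (1): DGDmax * B = 300 ps*Gbit/s (standard NRZ interfaces)."""
    return 300.0 / rate_gbps

def pmd_max_ps(rate_gbps: float) -> float:
    """Relation (2): 3 * PMDmax = DGDmax, for Pout <= 1e-5."""
    return dgd_max_ps(rate_gbps) / 3.0

for b in (2.5, 10, 40, 100):
    print(f"{b:g} Gbit/s: DGDmax = {dgd_max_ps(b):g} ps, "
          f"PMDmax = {pmd_max_ps(b):g} ps")
# 2.5 -> 120/40, 10 -> 30/10, 40 -> 7.5/2.5, 100 -> 3/1 (ps)
```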
A transmission system using advanced modulation formats and compensation technologies
may cope better with PMD-induced impairments. Let us define this improvement of the PMD
tolerance by a PMD tolerance factor p. Using (1), (2) and the PMD tolerance factor p we get
PMDmax • B = p • const   (3)
with const = 100 ps • Gbit/s. We have p=1 for standard NRZ-based optical interfaces. The use
of RZ modulation can improve the PMD tolerance by about 50% (i.e. p≈1.5), and a
combination of highly sophisticated modulation techniques in combination with PMD
compensation may provide p=3 (current experiences with 40G transmission have proved that
this is achievable, in principle, although not simple). Aiming for p>3.3 is quite ambitious
because the corresponding value of DGDmax would be larger than the bit length.
The PMD of the optical path is composed of the fiber PMD (PMDfibre) and the PMD of the
in-line ONEs (PMDONE,j, j=1...n), i.e. we have
PMDmax = PMDfibre + PMDONE.   (4)
The fiber PMD is given by
PMDfibre = PMDQ • √L   (5)
with the link design value PMDQ of the fiber and the maximum (optically transparent)
transmission fiber length L.
Using (3), (4) and (5) we get
L = [(p • const / B − PMDONE) / PMDQ]².   (6)
The last equation (6) is used to illustrate the maximum (optically transparent) transmission
fiber length L versus channel data rate B by some examples. Results of these calculations are
shown in Fig. 10 assuming Optical Line Amplifiers (OLAs) with dispersion compensation
(DC) after LA = 40 km amplifier spacing.
[Figure: maximum transmission fibre length L versus channel data rate B in Gbit/s]
Fig. 10: Examples of maximum transmission fiber length L between Tx output and Rx input for
typical G.652 transmission fiber and LA = 40 km amplifier spacing:
p=1; PMDQ = 0.2 ps/√km; PMDONE = 0
p=1; PMDQ = 0.25 ps/√km; PMDONE = 0.5 ps for each OLA with DC
p=2.5; PMDQ = 0.25 ps/√km; PMDONE = 0.5 ps for each OLA with DC
The general tendency is quite clear: the maximum optical transmission length is
approximately inversely proportional to the square of the channel data rate B! Thus, a smaller
channel data rate (i.e. a larger bit length Tbit = 1/B) considerably relaxes the PMD-induced
limitations, and a smaller channel data rate can be achieved by parallel optical interfaces, in
particular by means of WDM. Inverse multiplexing of the 100 GbE data stream into virtually
concatenated ODU containers and transporting each (lower data rate) ODU container
separately is a suitable way forward.
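For illustration, equation (6) can be evaluated numerically. The following Python sketch is not part of the referenced work; it assumes one in-line OLA with dispersion compensation per 40 km span and the linear PMD accumulation of Eq. (4), and solves for the maximum transparent length by bisection:

```python
import math

def max_pmd_limited_length(bitrate_gbps, p, pmd_q, pmd_one=0.0, span_km=40.0):
    """Largest transparent length L satisfying the PMD budget of Eq. (6):
    PMD_Q*sqrt(L) + n_OLA*PMD_ONE <= p*const/B, with one OLA per span."""
    const = 100.0                          # ps * Gbit/s, from Eq. (3)
    budget = p * const / bitrate_gbps      # total allowed PMD in ps

    def total_pmd(length_km):
        n_olas = length_km / span_km       # in-line OLAs with DC, one per span
        return pmd_q * math.sqrt(length_km) + n_olas * pmd_one

    lo, hi = 0.0, 1.0e6                    # bisection on the monotonic budget
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if total_pmd(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```

With the first parameter set of Fig. 10 (p = 1, PMDQ = 0.2 ps/√km, PMDONE = 0) this yields about 2500 km at 10 Gbit/s but only 25 km at 100 Gbit/s, reproducing the 1/B² tendency discussed above.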
There is another way to increase the bit (symbol) length, namely the use of a multi-state
modulation format (instead of a binary modulation format) in the same spectral range. A
binary modulation format is characterized by just two channel states, namely “0” and “1”. A
multi-state modulation format generates an integer number of M channel states (where M > 2
holds!) in the same spectral channel range. A state can be a certain amplitude, a phase, a
state of polarisation, etc. Thus the 0-1 bit sequence of the data stream can be mapped onto
these M channel states, which is quite straightforward if M = 2^k (k = 2, 3, ...) holds. The
relation between the bit length Tbit of a binary format and the symbol length T’bit(M) of a
multi-state modulation format is given by

T’bit(M) = log₂(M) · Tbit.

Thus both WDM and multi-state modulation formats can help to relax the PMD-induced
limitations.
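A small worked example may help. The sketch below (illustrative only) uses the bits-per-symbol relation log₂(M), which coincides with M/2 for the common M = 4 case:

```python
import math

def symbol_duration_ps(bitrate_gbps, num_states):
    """Symbol duration of an M-ary format carrying log2(M) bits per symbol
    at an aggregate data rate B (log2(M) equals M/2 for the M = 4 case)."""
    bits_per_symbol = math.log2(num_states)
    t_bit = 1000.0 / bitrate_gbps          # binary bit duration in ps
    return bits_per_symbol * t_bit
```

At 100 Gbit/s, a binary format has a 10 ps bit slot, while a four-state format (e.g. DQPSK) doubles the symbol duration to 20 ps, proportionally relaxing the DGD budget.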
3.2.2 Limitation of transmission distance due to the temperature
dependence of chromatic dispersion
Together with PMD, chromatic dispersion is one of the most severe degrading linear effects in
optical high-speed transmission systems. For state-of-the-art 10 Gbit/s transmission systems,
the residual accumulated dispersion before the receiver should be compensated accurately to
within ±1000 ps/nm, which corresponds to about 60 km of SMF. Since the dispersion
tolerance decreases with the square of the data rate [7], for 100 Gbit/s signals the dispersion
tolerance drops to only ±10 ps/nm, corresponding to less than 1 km equivalent SSMF length.
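The 1/B² scaling of the dispersion tolerance can be checked with a few lines of Python (a sketch; the 17 ps/(nm·km) SSMF dispersion value is an assumed typical figure for 1550 nm):

```python
def dispersion_tolerance_ps_nm(bitrate_gbps, ref_tol=1000.0, ref_rate=10.0):
    """Scale the +/-1000 ps/nm tolerance of a 10 Gbit/s signal by the
    inverse square of the data rate (tolerance ~ 1/B^2)."""
    return ref_tol * (ref_rate / bitrate_gbps) ** 2

def equivalent_ssmf_km(tolerance_ps_nm, d_ssmf=17.0):
    """Equivalent SSMF length, assuming D = 17 ps/(nm*km) near 1550 nm."""
    return tolerance_ps_nm / d_ssmf
```

At 100 Gbit/s this gives ±10 ps/nm, i.e. roughly 0.6 km of uncompensated SSMF, matching the figures quoted above.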
Therefore the accumulated dispersion must be compensated accurately to avoid severe
degradation of the signal quality. Since measurements and experiments have shown that the
chromatic dispersion depends on the environmental temperature [8], [9], [10], the impact of
temperature variations on 100 Gbit/s signal quality should be taken into account.
In Europe, optical cables are generally buried at a depth of 0.8 – 1.2 meters. The soil
temperature variations depend on the climate and on soil conditions like moisture content,
heat capacity and vegetation. To estimate the soil temperature variations, we analyzed air
and soil temperature measurements at Frankfurt Airport in summer and winter. In
Fig. 11 and Fig. 12, the measured data over a two-week timeframe are depicted. In Fig. 11
(upper part), the temporal behavior of the air temperature in winter shows a slight
increase of the mean temperature with low day/night temperature variation. The soil
temperature (Fig. 11, lower part) follows the small daily temperature variations only down to a
depth of 0.4 meters. At the depth where optical cables are buried, the temperature stays stable
at around two degrees below the mean daily air temperature. The soil temperature
follows an increase of the mean daily air temperature with a reaction time of about five days.
Fig. 11: Air- and soil temperature profile
at Frankfurt Airport from 30.12.04 –
12.01.05 [11]
Fig. 12: Air- and soil temperature profile at
Frankfurt Airport from 07.07 – 19.07.06
In Fig. 12 (upper part), the temporal behavior of the air temperature in summer shows a
stable mean temperature with a high day/night temperature variation of about 14 degrees
peak to peak. The soil temperature (Fig. 12, lower part) follows the high daily temperature
variations only down to a depth of 0.6 meters. At the depth where optical cables are buried, the
temperature stays stable at around two degrees below the mean daily air temperature.
According to these results, daily temperature variations have no impact on the variation of
chromatic dispersion. Nevertheless, the seasonal temperature changes reach depths of
several meters and must be taken into account. Based on the above example, temperature
variations of about 20 degrees between summer and winter are not unusual.
From the knowledge of the typical temperature variations, the resulting dispersion variations
must be deduced. A temperature-dependent calculation of the dispersion parameter is shown
in [10], [13]. Independent of the fiber type, the variation of the chromatic dispersion with
temperature is mainly determined by the dispersion slope of the fiber. Therefore, transmission
fibers with high dispersion slope values suffer from higher temperature-induced dispersion
variations.
Furthermore, since the dispersion slope values of transmission and dispersion compensating
fibers have different signs, the dispersion variations can cancel out if the temperatures of the
transmission and compensating fiber change in the same direction. If the temperature
variations point in opposite directions for transmission and compensating fiber, the dispersion
variation of the complete transmission section and the residual dispersion before the receiver
can become even worse. The temperature variations of the buried transmission fiber and the
DCFs are uncoupled.
The temperature variation of the transmission fiber is determined by the soil temperature
variations. The DCFs are generally placed between the two stages of the EDFA amplifiers.
Some of the amplifier huts are equipped with air conditioning, so their temperature is relatively
stable. Other small huts are not equipped with air conditioning or heating, and their temperature
follows the air temperature with a small delay. Moreover, the temperature in the huts also
depends on the number of personnel present. Measurements over a period of three months
at sites of Deutsche Telekom showed a temperature variation of more than 8 degrees.
Taking into account the seasonal air temperature, the fluctuations could be even higher. From
experiments, the dispersion variation of the DCFs can be estimated at approximately
DDCF = 0.01 ps/nm/km/°C. Together with the measured dispersion fluctuations [10] for an
SMF, an NZDSF with low slope (NZDSFls) and an NZDSF with high slope (NZDSFhs), the
overall residual dispersion fluctuation of the transmission line can be estimated. For our
calculations we assumed a transmission distance of five spans, each 80 km long, resulting in a
reasonable transmission distance of 400 km. We presumed a temperature variation of ±10 K
for both the transmission fiber and the DCF.
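The order of magnitude of such a residual dispersion fluctuation can be sketched as follows. This is not the calculation from [10]; the DCF coefficient is taken from the text, while the transmission fiber coefficients and the 5:1 SMF/DCF length ratio are illustrative assumptions, and the worst case of uncoupled temperature drifts is modeled by adding the two contributions:

```python
# Assumed temperature coefficients of dispersion in ps/(nm*km*K).
# "DCF" is the value quoted in the text; the fiber values are illustrative.
D_T = {"SMF": -0.0025, "NZDSFls": -0.0015, "NZDSFhs": -0.0035, "DCF": 0.01}

def residual_dispersion_fluctuation(fiber, n_spans=5, span_km=80.0,
                                    dcf_km_per_span=16.0, dt_fiber=10.0,
                                    dt_dcf=10.0):
    """Worst-case residual dispersion change (ps/nm) of a fully compensated
    link when transmission fiber and DCF temperatures drift independently."""
    d_fiber = abs(D_T[fiber]) * n_spans * span_km * dt_fiber
    d_dcf = abs(D_T["DCF"]) * n_spans * dcf_km_per_span * dt_dcf
    return d_fiber + d_dcf   # worst case: the two variations add up
```

For the 5x80 km SMF link with ±10 K drifts, the sketch yields a worst-case fluctuation of roughly 18 ps/nm, the same order as the ±12 ps/nm found in the detailed calculation and well beyond the ±10 ps/nm tolerance of a 100 Gbit/s signal.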
Fig. 13: Calculation of the residual dispersion due to temperature fluctuations after
transmission of 5x80 km SMF.
Fig. 14: Calculation of the residual dispersion due to temperature fluctuations after
5x80 km NZDSFls with low dispersion slope.
Fig. 15: Calculation of the residual dispersion due to temperature fluctuations after
5x80 km NZDSFhs with high dispersion slope.
The results of the calculation are depicted in Fig. 13 - Fig. 15. Depending on the fiber
parameters [10], a fluctuation of the residual dispersion of about ±12 ps/nm seems possible.
Considering the tight dispersion tolerance of about ±10 ps/nm for 100 Gbit/s signals, the
seasonal variation of the accumulated dispersion should be compensated adaptively. Since
the fluctuations occur on a seasonal and not on a daily or hourly time base, the speed
requirement for the compensator is very low. Without any adaptive compensation, 100 Gbit/s
transmission is limited to below 400 km. This limit can be overcome by realizing the
100 Gbit/s signal as 10 channels with 10 Gbit/s line rate each. The lower data rate relaxes the
dispersion compensation requirement, but has other disadvantages, such as reduced
bandwidth efficiency.
3.3 Modulation formats – advantages and disadvantages
Talking about novel modulation formats first requires a classification of the multitude of
amplitude and phase modulation formats, or even mixtures of both, discussed in optical
communications today. An overview is given in the following figure.
Fig. 16: Classification of the most relevant modulation schemes (intensity, phase and hybrid
modulation; multilevel, pseudo-multilevel and partial-response formats)
Generally one has to distinguish between binary and multilevel modulation formats. Using
binary formats, only two different levels are encoded onto the amplitude or phase of the
optical carrier. Using multilevel modulation, log2(M) data bits are encoded onto M symbol levels.
The data is therefore transmitted at a reduced symbol rate of R/log2(M), R being the initial
data rate. Within this class of multilevel modulation formats, two distinct sub-classes
deserve special attention: pseudo-multilevel modulation and correlative coding. Here, the
symbol alphabet is not enlarged to reduce the symbol rate at a fixed data rate; instead, the
additional degrees of freedom are used to shape the spectrum in some favourable way. If, in
this respect, the assignment of redundant symbols to transmitted bits is data-independent,
such schemes are called pseudo-multilevel modulation formats. If the symbol assignment
depends on the transmitted data, these types are summarized as correlative coding, or partial
response, modulation schemes.
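Duobinary coding is a canonical example of such a correlative (partial-response) scheme, and a minimal sketch makes the data-dependent symbol assignment concrete. The code below is illustrative only and not taken from the referenced work:

```python
def duobinary_encode(bits):
    """Precoded duobinary (a correlative-coding / partial-response scheme):
    each three-level symbol depends on the current and the previous bit."""
    precoded, prev = [], 0
    for d in bits:                 # differential precoder: b_k = d_k XOR b_{k-1}
        prev = d ^ prev
        precoded.append(prev)
    symbols, b_prev = [], 0
    for b in precoded:             # partial-response encoder: c_k = b_k + b_{k-1}
        symbols.append(b + b_prev)
        b_prev = b
    return symbols                 # three amplitude levels: 0, 1, 2

def duobinary_decode(symbols):
    """Thanks to the precoder, symbol-by-symbol detection suffices."""
    return [s % 2 for s in symbols]
```

The three-level signal occupies roughly half the spectral width of binary NRZ at the same data rate, which is exactly the spectrum-shaping benefit described above.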
Pseudo-multilevel modulation and correlative coding have advantages at moderate bit rates
of 10 Gbit/s. When increasing the data rate to 40 Gbit/s and beyond, real multilevel modulation
that reduces the transmitted symbol rate appears more attractive, as these schemes reduce
the spectral width and are therefore less subject to bandwidth-dependent perturbations such
as those generated by chromatic fiber dispersion and polarisation mode dispersion. However,
this advantage comes at the cost of increased complexity of the transmitters and receivers
required to modulate and demodulate the data. As a second aspect, the interaction of linear
and nonlinear transmission effects is not easily predictable and has to be investigated for every
modulation scheme. Depending on the selected modulation scheme, some perturbations may
be dominant while others are negligible. Finding an optimised link design is also a task that has
to be carried out separately for every investigated modulation scheme.
Fig. 17: Evaluated modulation schemes
In the following, the performance of the 6 different modulation schemes depicted in Fig. 17 is
compared at channel data rates of 40 Gbit/s and 60 Gbit/s. For demodulation, direct detection
using a photodiode combined with balanced receivers is used, except for pure RZ-ASK
[14]-[18]. The results are preliminary in the sense that only single-channel transmission is
considered and the effect of PMD is neglected; PMD and WDM transmission are currently
under investigation. To evaluate the transmission performance, the maximum reach at a BER
of 10⁻⁹ is calculated by numerical computer simulations including all relevant linear and
nonlinear transmission effects, except for PMD. For every single modulation scheme, link
design and system parameters have been optimised separately. The optimum performance is
shown in the following figure. Here, the effect of nonlinear phase noise, which has an
additional degrading effect on phase-modulated systems, is not considered; it is evaluated
separately in a following step.
Fig. 18: Performance results (excluding Nonlinear Phase Noise)
From Fig. 18 it is obvious that pure RZ-ASK (also called On-Off Keying) and hybrid schemes
using amplitude and phase modulation show suboptimum performance compared to their
purely phase-modulated alternatives. This is due to the fact that different amplitudes
generate different nonlinear phase shifts through self-phase modulation, which severely
degrade the phase-modulated part of the signal. Therefore, merging amplitude and phase
modulation in one signal never achieves the optimum performance of both modulation
schemes, but is always a compromise, as increasing the sensitivity of the phase-modulation
part always degrades the amplitude-modulation part, and vice versa. Considering the
modulation schemes in Fig. 18 and balanced detection, the best performance is obtained by
pure phase modulation.
However, it is known that phase modulation is subject to additional perturbations due to
nonlinear phase noise. Therefore, extensive Monte Carlo simulations were carried out to
include this effect. The results depicted in the following figures reveal that the maximum
reach of the schemes including ASK is not significantly affected by nonlinear phase noise,
partly because their reach is already relatively short. Pure phase modulation is
significantly degraded. However, an acceptable BER is still obtained even for DPSK, DQPSK,
and OD8PSK at the maximum reach identified without considering nonlinear phase noise,
i.e. enhanced FEC could help to compensate for the degradation due to nonlinear phase
noise. This aspect is also subject to future investigations.
Fig. 19: Additional performance degradation due to nonlinear phase noise. The evaluation
using the Karhunen-Loève expansion method serves as a reference, as it neglects nonlinear
phase noise.
Modulation formats in which 4 bits are mapped onto one symbol are of special interest within
the SDH/SONET hierarchy, as e.g. 40 Gbit/s transmission can be achieved by using 10 Gbit/s
equipment. For a 100 Gbit/s Ethernet transmission, such a format would relax the speed
requirements to only a 25 Gbit/s equivalent in most subsystems of the transmitter and
receiver.
An experimental implementation of 16-ary Inverse-RZ-QASK-DQPSK at 42.8 Gbit/s, using
four-level inverse return-to-zero amplitude modulation in combination with four-level
differential phase modulation has been shown by a joint experiment of the University of Kiel
and University of Denmark [18]. Transmission over a 75 km fiber span with only minor
degradations has been demonstrated. Inverse-RZ [24] is used in combination with QASK and
DQPSK modulation. Thus, for every symbol period, two bits encoded as different amplitude
levels, and two bits encoded as different phase changes are transmitted. With this approach,
the additional amplitude modulation degrades the optical signal-to-noise ratio (OSNR)
performance of the DQPSK part by less than 2 dB, whereas the QASK tributary determines
the overall system performance with a substantial OSNR degradation due to the reduced
distance of the amplitude levels in the QASK modulator.
Fig. 20 shows the measured BER performance for back-to-back operation as well as for
75 km transmission for both the QASK and the DQPSK parts. A 21.4 Gbit/s RZ-DQPSK
system (obtained by omitting the QASK part of the transmitter) was used as a reference in
the figure.
Fig. 20: BER curves for 16-ary Inverse RZ-QASK-DQPSK modulation in the back-to-back
case and after transmission of 75 km standard fiber.
The spectral efficiency of the 16-ary format is compared with a fairly spectrally efficient binary
RZ-ASK format, both at 42.8 Gbit/s, in the measurement result of Fig. 21. The 20 dB spectral
width of Inverse-RZ-QASK-DQPSK (44 GHz) is more than three times narrower than that of
CS-RZ-ASK (145 GHz), resulting in superior spectral efficiency.
Fig. 21: Optical power spectra of 42.8 Gbit/s 16-ary Inverse-RZ-QASK-DQPSK (a, 20 dB
width 44 GHz) and binary 42.8 Gbit/s CS-RZ-ASK (b, 20 dB width 145 GHz).
3.4 Equalization, compensation and mitigation of signal
distortions resulting from transmission
Since 100 GbE solutions are expected to run preferably over existing infrastructure, i.e.
installed single mode fibers and amplifier spans with their inherent residual chromatic
dispersion, PMD, optical filter bandwidths, fiber nonlinearity, etc., adaptive distortion
compensation or electronic equalization schemes need to be incorporated. These adaptive
subsystems, often implemented in the transmitter or receiver itself or placed in the line card
together with the transmitter/receiver, enable reliable operation of the transmission system
close to the physical limits.
Different schemes of adaptive optical compensators and electronic equalizers are already
under discussion or already implemented in products for lower-rate transmission at
10 Gbit/s or 40 Gbit/s. These known optical and electronic equalization schemes, as well as
new mitigation and robust transmission concepts, will be even more essential for robust
100 Gbit/s transmission.
3.4.1 Optical equalization
Optical equalizers are an enabling technology for high-speed serial 100 GbE systems,
performing spectral monitoring, gain equalization, dispersion compensation and dispersion
slope compensation. Multiplex systems like 10 x 10 GbE are not considered here.

Fixed elements
In high data rate optical transmission systems, chromatic dispersion is one of the limiting
effects. Nowadays, often only fixed elements are used to compensate dispersion. Dispersion
compensating fibers (DCFs) are single mode fibers with negative dispersion. They can be
used over a wide wavelength range. Usually, the length ratio between single mode fiber
(SMF) and DCF is about five to one. Because of its smaller core diameter, the DCF has a
higher loss and is more susceptible to nonlinear effects. Nevertheless, using DCFs is the
most common method to compensate dispersion.
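The required DCF length follows directly from the dispersion balance D_SMF·L_SMF + D_DCF·L_DCF = 0. A small sketch (the dispersion values are assumed typical figures, not taken from the text):

```python
def dcf_length_km(smf_length_km, d_smf=17.0, d_dcf=-85.0):
    """DCF length that nulls the accumulated dispersion of an SMF section;
    D values in ps/(nm*km). D_DCF = -85 yields the ~5:1 length ratio."""
    return -d_smf * smf_length_km / d_dcf
```

For an 80 km SMF span this gives 16 km of DCF, consistent with the roughly five-to-one length ratio mentioned above.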
Other fixed elements may be few-mode fibers and chirped fiber Bragg gratings. Fiber Bragg
gratings can also be made tunable.

Tunable filters
Depending on various factors (e.g. temperature fluctuations), dispersion may change over
time [27]. For that reason, compensating dispersion with fixed elements like dispersion
compensating fibers may not be sufficient for systems faster than 10 Gbit/s. Up to now, a
great variety of tunable dispersion compensating elements have been proposed to guarantee
disturbance-free and efficient operation of these wide area networks [28]. Another type of
distortion is polarization mode dispersion (PMD).
For dispersion compensation, free space optics have been used. VIPAs (virtually imaged
phased arrays [29], [30]) use a movable mirror with a position-dependent geometry. Thus,
the phase can be influenced and different dispersion profiles can be achieved.
Dispersion-compensating fiber Bragg gratings have also been reported [31], [32], [33]. Fiber
Bragg gratings can be treated as many cascaded Fabry-Perot resonators; at each Fabry-Perot
resonator, light of a certain wavelength is reflected or transmitted. In chirped fiber Bragg
gratings, the grating period changes with position, so the spectral fractions of the light are
reflected at different positions. This can be used to compensate dispersion [34]. Note that
group delay ripple is still an issue.
By spectrally resolving an optical signal, the phase of the individual spectral fractions can be
influenced directly. It has been shown that waveguide grating routers (AWGs) can apply this
approach to compensate dispersion [35].
Optical delay line filters are another promising approach to compensate time-variant
dispersion effects. In analogy to digital FIR and IIR filters, one can distinguish between filters
with and without feedback [36]. FIR filters can be realized, for example, as Mach-Zehnder
interferometers (MZIs) [37], [38], and IIR filters as ring resonators [39], [40]. Many such filters
have been presented in different realizations, both for residual dispersion compensation and
for dispersion slope compensation [41], [42]. Independent of their realization, they can all be
described using the Z-transform, just like digital filters [36]. The coefficients of these adaptive
filters are commonly synthesized by iterative methods. Generally, some kind of minimum
square error optimization is used, e.g. to obtain a small eye opening penalty [43] or a desired
phase response [36], [44], [45]. Especially for higher-order filters, it is very complicated to
compute the filter coefficients iteratively; numerical methods are in general not fast enough
and do not always converge [37], [46]. Analytical methods thus have the advantage that filter
coefficients can be computed directly from their input values, such as the desired dispersion
and the desired bandwidth. The values obtained by these deterministic calculations are
reliable and available immediately. The first approach working that way was presented in
[47].
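The Z-transform description mentioned above can be sketched in a few lines. The code below is a minimal illustration (not from the cited works): it evaluates the transfer function of an FIR delay-line filter on the unit circle and derives the group delay, which is the quantity such a filter shapes to compensate dispersion:

```python
import cmath
import math

def fir_response(coeffs, f_norm):
    """Transfer function H(z) = sum_k c_k z^-k of an optical delay-line (FIR)
    filter, evaluated on the unit circle at z = exp(j*2*pi*f/FSR)."""
    z_inv = cmath.exp(-2j * math.pi * f_norm)
    return sum(c * z_inv ** k for k, c in enumerate(coeffs))

def group_delay(coeffs, f_norm, df=1e-6):
    """Group delay (in units of the unit delay T = 1/FSR) from the numerical
    derivative of the phase response."""
    phase = lambda f: cmath.phase(fir_response(coeffs, f))
    return -(phase(f_norm + df) - phase(f_norm - df)) / (2 * math.pi * 2 * df)
```

A single-stage MZI corresponds to coeffs = [0.5, 0.5] and shows the expected transmission null at half the free spectral range; a frequency-dependent group delay, i.e. dispersion, appears once several unbalanced stages are cascaded.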
3.4.2 Electronic equalization
Electronic equalizers are based on processing the detected signal in the electrical domain.
Processing by analog electronics has already been demonstrated at 43 Gbit/s with
feed-forward equalizers (FFE) and decision feedback equalizers (DFE) [48]. They increase
the tolerance of the system to many kinds of distortions by some tens of percent.
Recently, increasing attention has been paid to digital signal processing (DSP) schemes
based on numerical calculation on detected signal samples digitized by analog-to-digital
converters (ADC). Complex 10 Gbit/s equalizers (maximum likelihood sequence estimators,
MLSE, also referred to as Viterbi equalizers) with one ADC operating at 22 GSample/s are
already introduced in products. These MLSEs are suitable for distortions corresponding to an
inter-symbol interference (ISI) of up to 3 bits (4-state MLSE) [49] or 5 bits (16 states).
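The principle of such an MLSE can be illustrated with a toy Viterbi equalizer. The sketch below is illustrative only (a hypothetical 2-tap ISI channel, not a product implementation): two trellis states track the previous bit, and Euclidean branch metrics select the most likely transmitted sequence.

```python
def mlse_viterbi(received, h=(1.0, 0.5)):
    """Toy MLSE (Viterbi) for a 2-tap ISI channel: y_k = h0*b_k + h1*b_{k-1}.
    Two trellis states (previous bit 0 or 1); Euclidean branch metrics."""
    INF = float("inf")
    metrics = {0: 0.0, 1: INF}          # start by assuming the bit before the
    paths = {0: [], 1: []}              # stream was 0
    for y in received:
        new_metrics, new_paths = {}, {}
        for state in (0, 1):            # state = value of the new bit b_k
            best, best_path = INF, None
            for prev in (0, 1):         # prev = b_{k-1}
                expected = h[0] * state + h[1] * prev
                m = metrics[prev] + (y - expected) ** 2
                if m < best:
                    best, best_path = m, paths[prev] + [state]
            new_metrics[state], new_paths[state] = best, best_path
        metrics, paths = new_metrics, new_paths
    return paths[min(metrics, key=metrics.get)]
```

A real 4-state MLSE extends the state to the last two bits to cover an ISI span of 3 bits, exactly as described above, but the add-compare-select structure is the same.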
In general, with single photo diode detection the performance suffers from the loss of phase
and polarization information. This gap can, at least in part, be overcome by using two or more
photo diodes which see different spectrally or polarization filtered parts of the detected signal
[50], [51]. The basic structure is sketched out in Fig. 22. Digitised signal samples of each
photo diode are processed by the signal processing scheme implemented in the CMOS chip.
The DSP processing scheme can either be a generic algorithm like the Viterbi algorithm (VA)
or it can be an algorithm specialized for a specific modulation format or distortion such as
chromatic dispersion [52].
The coherent detection scheme, also referred to as intradyne detection, can be considered
the ultimate equalizer solution since it maps the phase and polarization information of the
signal completely into the electrical domain. Linear field distortions like chromatic dispersion
and PMD can be compensated completely, and intra-channel distortions induced by fiber
nonlinearity can be mitigated by DSP. However, this performance potential requires a higher
effort in optical pre-processing (see Fig. 22) due to the need for a local oscillator laser diode,
two optical hybrids and polarization splitters, 4 photo diodes as well as 4 ADCs. So far, the
applicability to high bit-rate (40 Gbit/s) operation has only been demonstrated by
post-processing of recorded photo diode samples digitized in an oscilloscope [53].
Fig. 22: Structure of an electronic equalizer based on digital signal processing (DSP). Some
schemes take advantage of optical pre-processing in waveguide or fiber structures.
Besides adaptive equalization in the receiver only, two alternative schemes based on DSP
processing also in the transmitter have recently gained strong attention: electronic
pre-compensation and optical OFDM (orthogonal frequency division multiplexing).
Electronic pre-compensation is based on an optical field synthesizer with electrical DSP
calculation of the complex transmitter field in a CMOS ASIC. The processing is tuned to the
actual link dispersion or fiber nonlinearity. After digital-to-analog conversion (DAC), the
electrical representations of the I and Q field components are converted to the optical domain
by an external modulator. Though product versions for chromatic dispersion mitigation exist
at 10 Gbit/s [54], the investigation of 40 Gbit/s applications focuses on the mitigation of fiber
nonlinearity in compensated links [55].
Optical OFDM relies on fixed electronic DSP processing in both transmitter and receiver,
using fast and efficient realizations of inverse/forward Fourier transform circuits, with parallel
but very simple adaptive processing of the individual subcarriers at the discrete Fourier
transformer output of the OFDM receiver [56]. Different “flavours” of OFDM schemes have
been proposed and are in an early stage of investigation. They range from less complex
solutions with only one photo diode and analog RF frequency conversion of the detected
subcarrier signal [57] up to complex approaches requiring an intradyne receiver frontend with
4 photo diodes, a local oscillator laser diode and 4 ADCs, which enable transmission even in
the presence of very high PMD [56].
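The OFDM principle described above can be sketched in a few lines. This is illustrative only: a plain O(N²) DFT stands in for the fast Fourier transform circuits mentioned in the text, and the 8-subcarrier QPSK mapping is an arbitrary example:

```python
import cmath

def idft(symbols):
    """Inverse DFT: maps N subcarrier symbols onto N time samples (OFDM Tx)."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n for t in range(n)]

def dft(samples):
    """Forward DFT: recovers the subcarrier symbols (OFDM Rx)."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(samples)) for k in range(n)]

# QPSK symbols on 8 subcarriers, through the IDFT (Tx) and DFT (Rx)
tx = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]
rx = dft(idft(tx))
```

In a real link, each recovered subcarrier would additionally be equalized by a single complex multiplication, which is exactly the "parallel but very simple adaptive processing" referred to above.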
3.6 Electronics and subsystems
3.6.1 Introduction
Within the last years, the speed of electronics has increased dramatically through the
development and implementation of new production processes for electronic chip
manufacturing, enabling higher transistor transit and maximum frequencies applicable for
very high electrical time-division-multiplexing (ETDM) channel rates.
Currently two alternative semiconductor material systems, i.e. Silicon-Germanium (SiGe) and
Indium Phosphide (InP), have the potential for the realization of ultra-high-speed electronic
circuits at 80 Gbit/s to more than 100 Gbit/s.
Today, a transit frequency fT of 225 GHz and an fmax of 300 GHz are state of the art for
SiGe-based electronic circuits. Advanced Indium Phosphide (InP) electronic circuits exhibit an
fT of 400 GHz and an fmax of 400 GHz.
This section gives an overview on advanced electronic circuits for application in high speed
ETDM systems with the focus on 100 Gbit/s binary formats (i. e. symbol rate is equal to the
channel line rate) representing a considerable technological challenge with respect to the
speed requirements.
3.6.2 ETDM multiplexer
The multiplexer part of an ETDM transmitter aggregates the incoming electrical data
tributaries to the line rate. In transmission products, integrated circuits can be operated with
different electrical interfaces and comprise many functions like multiplexing (4:1 or 16:1),
retiming, reshaping and monitoring. In the current research phase, various multiplexers have
been realized to achieve serial binary bit rates of up to 100 Gbit/s.
Several electronic time division multiplexers for 100 Gbit/s application and beyond have been
designed and fabricated in InP processes. Using InP HEMTs, 2:1 multiplexers have been
realized for maximum output data rates of up to 144 Gbit/s [58]. In ref. [59] a 100 Gbit/s
2:1 multiplexer chip with a high output swing of more than 1 V was reported. The possibility of
such a high output swing is a strong argument for InP multiplexers because of their potential
to directly drive modulators with low modulation voltage. A modulator driver amplifier would
further reduce the ETDM transmitter bandwidth, add costs, and increase the power
consumption and size of line cards.
Using the latest SiGe technologies, various ETDM multiplexers have been realized for bit
rates of 100 Gbit/s and beyond. For SiGe, the record operating speed of 132 Gbit/s was
reported for a 4:1 ETDM multiplexer realized in 0.13 µm SiGe bipolar technology [60].
However, these results were obtained by on-wafer measurements. As the high-frequency
packaging of such very high speed circuits is very challenging and not yet mature, the
operating speeds of packaged multiplexer circuits (modules) reported in the literature are
usually lower.
Most 85-107 Gbit/s ETDM multiplexer modules reported so far are based on SiGe circuits.
These modules were used e.g. for generating electrical data signals at line rates of
85.4 Gbit/s [61] and 100 Gbit/s [62], respectively (see Fig. 23 for an output eye diagram).
The latter module was later used in an ETDM transmission experiment over 320 km at an
ETDM bit rate of 107 Gbit/s [63]. Another 2:1 multiplexer module was first used for 80 Gbit/s
[64] and later for 107 Gbit/s signal generation [65].
Fig. 23: 100 Gbit/s electrical output eye diagram of a SiGe multiplexer (100 mV/div,
5 ps/div)
Only one high-speed multiplexer module based on InP technology has been reported in the
literature so far, containing an InP/InGaAs 2:1 multiplexer chip using DHBTs. With this
module, operation at 80 Gbit/s was demonstrated [66].
3.6.3 Modulator driver amplifier
In a standard ETDM transmitter, the signal is electrically amplified after electronic multiplexing
to the line rate. High-speed Lithium Niobate (LiNbO3) Mach-Zehnder modulators, however,
require a large voltage swing in the range of several volts (typically 5 V or more) for
modulation. The realization of modulator drivers for data rates beyond 40 Gbit/s is a difficult
task. Commercially available 40 Gbit/s modulator driver amplifiers are usually realized in
Gallium Arsenide (GaAs) technology. Recently, specially selected modules were applied for
the amplification of 80 Gbit/s [64] and 85.4 Gbit/s signals [62].
For higher data rates, e.g. 100 Gbit/s, these amplifiers lack sufficient bandwidth and exhibit
considerable signal distortions due to group delay ripple. An experimental example is given in
Fig. 24, comparing the 85.4 Gbit/s eye diagrams after amplification by two different driver
amplifiers, both having roughly the same specified 3 dB bandwidth of 50 GHz.
Fig. 24: 85.4 Gbit/s eye diagrams at the output of two different modulator driver
amplifiers each with an electrical bandwidth of ~50 GHz
Many publications on high-speed amplifier chips can be found in the literature. Most of them
are realized as distributed amplifiers based on InP. In [67], distributed amplifier circuits
realized with InP/InGaAs DHBTs were tested at 80 Gbit/s, showing a gain of 14.5 dB and a
good eye opening. Already in 1996, an amplifier IC with 90 GHz bandwidth and 10 dB gain
was demonstrated [68].
An InP-HEMT-based amplifier IC with an electrical bandwidth of 92 GHz and 13 dB gain
measured on chip was reported in [69]. Amplifier ICs fabricated using InGaAs/GaAs HBT
distributed amplifier technology show 16 dB gain and a bandwidth of 80 GHz [70].
In [71] InP HEMT amplifier chips designed for flip-chip bonding are presented. The driver
circuits have chip gains of 14.5 dB and 7.5 dB at bandwidths of 94 GHz and 125 GHz,
respectively. Both amplifier chips have a flat gain characteristic. After flip-chip bonding the
second amplifier had a reduced bandwidth of roughly 85 GHz and a smooth roll-off.
A commercially available high-speed modulator driver prototype has been used for data
modulation at a symbol rate of 107 Gbaud using a LiNbO3 Mach-Zehnder modulator in a
serial 107 Gbit/s ETDM transmission experiment [63].
Broadband packaging of electrical amplifier chips while maintaining the bandwidth and
sufficient gain of the module assembly is highly challenging, so it will take some time until
broadband driver amplifier modules with 100 GHz bandwidth become commercially available.
It is likely that the first commercial driver amplifier modules specified for very high speeds up
to 100 Gbit/s will be based on single-stage amplifiers with an output voltage swing of up to
2 Vpp.
3.6.4 Electronic demultiplexers
Electronic demultiplexers are used to convert the serial high-speed data stream into several
parallel data streams at lower speed. In commercial transmission systems, only integrated
demultiplexer solutions are applied, which usually comprise further functionality such as
clock recovery and performance monitoring.
In current early 100/107 Gbit/s ETDM system experiments, receivers based on discrete setups of high-speed
electronic building blocks are mostly used, comprising e. g. a chain of high-speed decision
flip-flops (DFF) or a combination of DFFs with ETDM demultiplexers.
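The serial-to-parallel conversion such a demultiplexer performs can be sketched in software. A minimal bit-interleaved 1:4 demultiplexer and its inverse multiplexer, assuming simple round-robin bit distribution (the actual ICs of course operate on the physical bit stream):

```python
def demux(serial_bits, n=4):
    """Round-robin 1:n bit demultiplexer: serial stream -> n tributaries."""
    return [serial_bits[i::n] for i in range(n)]

def mux(tributaries):
    """Inverse operation: bit-interleave the tributaries back into one stream."""
    return [bit for group in zip(*tributaries) for bit in group]

stream = [1, 0, 1, 1, 0, 0, 1, 0]
tribs = demux(stream)              # four streams at a quarter of the line rate
assert tribs == [[1, 0], [0, 0], [1, 1], [1, 0]]
assert mux(tribs) == stream        # mux and demux are exact inverses
```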
The fastest electronic decision flip-flop ICs to date were fabricated in InP technology. They
were tested for half-rate decision of an electrical 110 Gbit/s data signal and full-rate decision of
70 Gbit/s data [73]. Recently, an InP-based DFF IC with full-rate operation at 90 Gbit/s
was reported [72]. However, operation at 90 Gbit/s required an input voltage swing of
roughly 1 V.
An InP-based 80 Gbit/s clock-data-recovery IC including a 1:2 demultiplexer was
demonstrated in [74]. The chip was fabricated using an InP HBT process.
The devices above were tested only on chip level using electrical output signals of an
electronic multiplexer for the measurements. A first demultiplexing experiment of an 80 Gbit/s
optical data signal using a photodiode and a fast InP DFF module was already reported in
2003 [75].
It has been shown in several 80-100 Gbit/s system experiments that electronic demultiplexing
circuits can be realized using an advanced SiGe HBT technology (e. g. Infineon
B7HF200). Electronic demultiplexing of 80 Gbit/s data signals by SiGe-based demultiplexer
modules is reported in [64] and [76], including a 1:2 demultiplexer circuit
integrated with a clock and data recovery designed for 86 Gbit/s [76]. This integrated circuit has
proven to operate also at 100 Gbit/s [77].
In other ETDM transmission system experiments at 100 Gbit/s [78] and 107 Gbit/s [63],
electronic demultiplexing in the receiver has been achieved by SiGe-based fast decision flip-flops operated at half the data rate.
3.6.5 Clock recovery
In ETDM receivers a synchronous electrical clock signal has to be recovered from the
incoming data signal for signal processing, e. g. electronic decision and electronic
demultiplexing. In general two different types of clock recovery, namely phase-locked loop
(PLL) and filter clock recovery, can be used in ETDM systems. Different realizations of
electronic clock recovery have been demonstrated for high speed transmission system
experiments operating at data rates of 80-107 Gbit/s.
The first reported 80 Gbit/s clock recovery was based on a discrete electronic PLL consisting
of a high-speed photodiode, a harmonic mixer, a voltage-controlled oscillator and a loop filter.
This PLL was applied at 80 Gbit/s already in the year 2000 [79] and was later also
successfully tested at 100 Gbit/s [80].
The 85.4 Gbit/s clock recovery in [81] was an RZ filter-based clock recovery. It consisted of a
frequency-tuned photodiode and a high-Q waveguide filter. The 85.4 GHz clock line was
amplified by waveguide amplifiers and divided by a SiGe frequency divider.
In [74], an integrated clock-and-data-recovery chip including a PLL was realized in InP technology
and tested in an 80 Gbit/s ETDM experiment.
A first 100 Gbit/s clock-data-recovery (CDR) based on SiGe technology was reported in [77].
It included a PLL with a bang-bang phase detector. The CDR chip was originally designed for
85.4 Gbit/s application. For operation at 100 Gbit/s data rate an external 100 GHz oscillator
was used. A similar CDR circuit has been used recently for PLL-clock recovery in a 107 Gbit/s
ETDM transmission experiment [63].
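A bang-bang phase detector, as used in the CDR above, reports only the sign of the phase error, so the loop corrects the sampling phase in fixed steps. A toy model of this behaviour (an illustration of the principle, not a model of the actual circuit in [77]):

```python
def bang_bang_pll(phase_in, step=0.01, n_iter=2000):
    """Track an input phase with a bang-bang (binary) phase detector.

    The detector reports only the sign of the phase error, so the local
    oscillator phase is nudged by a fixed step in that direction; after
    locking, the loop dithers within one step of the input phase."""
    phi = 0.0
    for _ in range(n_iter):
        err_sign = 1.0 if (phase_in - phi) > 0 else -1.0  # bang-bang decision
        phi += step * err_sign
    return phi

locked = bang_bang_pll(1.3)
assert abs(locked - 1.3) < 0.02    # settled to within ~one step of the input
```

The fixed-step correction is what produces the characteristic limit-cycle jitter of bang-bang loops around the lock point.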
3.6.6 Subsystems and integrated ETDM electronics
Currently, most research work on 100/107 Gbit/s ETDM systems is based on hybrid setups of
transmitter and receiver subsystems, including first research prototypes of packaged
electronic circuits in combination with advanced high-speed prototypes of commercially
available opto-electronic components. Some of the high-speed circuits were originally designed
for 80/85 Gbit/s and are thus operated beyond their target speed.
Fig. 25 shows the setup of an advanced 107 Gbit/s ETDM NRZ lab system [63] including
electronic and opto-electronic components.
Fig. 25: Scheme of 107 Gbit/s transmitter and receiver subsystems
In the 107 Gbit/s ETDM transmitter, a 4×13.4 Gbit/s (4:1) and a 2×53.5 Gbit/s (2:1) ETDM
multiplexer as well as a 107 Gbit/s driver amplifier are applied. The 107 Gbit/s ETDM
receiver comprises DFF decision circuits, a 1:4 ETDM demultiplexer and an integrated PLL circuit,
which allows operation at 107 Gbit/s in combination with an external oscillator. This setup
represents the first complete serial 107 Gbit/s ETDM NRZ lab system reported so far.
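The rates in this setup are tied together by the 7% FEC overhead; a quick arithmetic check:

```python
client_rate = 100.0                  # Gbit/s Ethernet payload
line_rate = client_rate * 1.07       # 7% FEC overhead -> 107 Gbit/s line rate
tributary_rate = line_rate / 8       # eight electrical tributaries

assert round(line_rate) == 107
assert round(tributary_rate, 1) == 13.4   # the 8 x 13.4 Gbit/s tributary inputs
```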
In other published results, 50/53 Gbit/s measurement equipment is used in the 100/107 Gbit/s
transmitter as well as in the receiver, and the LiNbO3 modulator is directly driven by the
electrical output of a 2:1 ETDM multiplexer [65], [77].
An integrated clock and data recovery (CDR) circuit designed for 85 Gbit/s represents the only
high-speed circuit of higher complexity so far. This module could only be applied in 100 Gbit/s
lab systems by using external 100 GHz components [77].
3.6.7 Conclusions
For research on next-generation ETDM transmission systems working at data rates of
100/107 Gbit/s, various electronic circuits and subsystems based on advanced high-speed
InP or SiGe technologies were demonstrated. Serial transmission at symbol rates of
100/107 Gbaud might be the most cost-effective solution for 100 Gbit/s Ethernet, but at the
same time this approach poses severe technological challenges regarding the speed
requirements of components.
It was already shown that serial 100/107 Gbaud transmission is feasible, but development of
mature electronic and opto-electronic components is needed for 100 Gbit/s Ethernet products.
Two material systems, namely InP and SiGe, have proven to offer promising options for the
realization of 100/107 Gbit/s electronics, each exhibiting different technical features, benefits
and drawbacks.
SiGe HBT electronics seem to be a good option for circuits of higher complexity. In
contrast, InP-based circuits potentially offer higher operation speed and a larger output voltage
swing, which is especially beneficial for multiplexer and modulator driver applications.
SiGe electronics can be processed on large standard silicon wafers on the basis of a well-established
industrialized process. Furthermore, SiGe has a better potential to integrate
additional functionalities on the same wafer, e. g. by implementing lower-speed functions
and microprocessors in CMOS (Bi-CMOS).
3.7 Opto-electrical components
This chapter focuses on optical modulator and detector/receiver components for 100 Gbit/s
serial transmission in sections 3.7.1 and 3.7.2, respectively.
3.7.1 Optical modulators
For high-speed and long-reach applications, the modulator is one of the key components to
realize different modulation formats and optimum system performance. In current 40 Gbit/s
systems, LiNbO3-based Mach-Zehnder modulators dominate the market. While being
the most mature technology, LiNbO3 suffers from an inherently high driving voltage (typically
> 5 Vpp) and a limited 3 dB bandwidth of about 25-35 GHz. New designs (e. g. from Fujitsu)
provide lower drive voltages of about 3.5 Vpp at the cost of a lower 3 dB bandwidth of 25 GHz.
The high voltage requirement is a significant cost driver for current systems. Other drawbacks
are the large device size and the inherent drift of the operation point, implying the need for
highly accurate closed-loop temperature and operation parameter control, which again adds to
costs. In recent papers on first “hero” 100 Gbit/s ETDM experiments, LiNbO3
modulators with a 3 dB bandwidth of 35 GHz have been used due to the lack of suitable
alternatives [78], [82]. It is well known, however, that this is not an option for commercial 100 Gbit/s
systems.
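The drive-voltage figures above can be related to the ideal Mach-Zehnder intensity transfer function T(V) = cos²(πV/(2Vπ)); a small sketch, assuming an ideal, chirp-free device:

```python
import math

def mzm_transmission(v, v_pi):
    """Ideal Mach-Zehnder intensity transfer: T(V) = cos^2(pi*V / (2*Vpi))."""
    return math.cos(math.pi * v / (2.0 * v_pi)) ** 2

v_pi = 5.0  # V, of the order of the LiNbO3 drive voltages quoted above

assert abs(mzm_transmission(0.0, v_pi) - 1.0) < 1e-12       # on state
assert mzm_transmission(v_pi, v_pi) < 1e-12                 # one full Vpi swing: off state
assert abs(mzm_transmission(v_pi / 2, v_pi) - 0.5) < 1e-12  # quadrature bias point
```

Full extinction requires a swing of one Vπ around the quadrature point, which is why a high Vπ directly translates into high driver amplifier requirements.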
As an alternative to ETDM systems with NRZ or duobinary [65] modulation, several phase
modulation schemes are under investigation to achieve 100 Gbit/s in one wavelength
channel. Complex phase modulation formats like DQPSK or even polarisation-multiplexed
RZ-QPSK [83] are currently gaining interest, not only because of their proven high system
performance and the possibility to use only slightly advanced 40 Gbit/s electronics, but also
because of the need to fit these future 100 Gbit/s channels into existing DWDM systems with a
channel spacing of 50 or 100 GHz.
IQ-Modulators with two nested Mach-Zehnder Modulator structures are the basic building
block for most of these phase modulation schemes (see Fig. 26).
Fig. 26: IQ-Modulator with two parallel nested MZ-Modulators with 90° phase shift section
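Assuming ideal components, each child MZM biased at its null and driven by ±Vπ contributes a field of ±1, and the 90° section places the second arm in quadrature, so every bit pair maps to one QPSK constellation point. A sketch of this mapping:

```python
def iq_qpsk_symbol(bit_i, bit_q):
    """Each nested MZM, biased at its null and driven by +/-Vpi, contributes
    a field of +/-1; the 90 deg section places the Q arm in quadrature."""
    i = 1.0 if bit_i else -1.0
    q = 1.0 if bit_q else -1.0
    return complex(i, q) / 2 ** 0.5    # normalised optical field

symbols = {(a, b): iq_qpsk_symbol(a, b) for a in (0, 1) for b in (0, 1)}
assert len(set(symbols.values())) == 4                           # four distinct phases
assert all(abs(abs(s) - 1.0) < 1e-12 for s in symbols.values())  # constant envelope
```

Two bits per symbol is what halves the symbol rate relative to OOK and relaxes the bandwidth requirements on the electronics.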
Such a versatile modulator has recently been shown with a 3 dB bandwidth of 30 GHz [84].
The same modulator, with a drive voltage of Vπ = 5 V, has been used in recent 100 Gbit/s
experiments [85], [86]. The 3 dB bandwidth of this device does not meet the requirements for
data rates over 50 Gbit/s, but at the time it was the only device available for these
experiments.
Alternative material systems to the mature LiNbO3 modulators are polymer-, GaAs- and InP-based modulators.
Polymer modulators have shown good frequency and voltage response. However, their
long-term stability problems have remained unsolved for several years. It seems that the polymer
poling required to generate the electro-optical activity is inherently unstable at elevated
operating temperatures.
GaAs was an early III-V material of choice for fast MZ modulators based on a travelling-wave
concept, but Bookham stopped production in 2006 to focus on the InP-based MZ technology acquired from Nortel.
While being less mature than LiNbO3, the InP material system allows for a significantly lower
driving voltage, a factor of 10 smaller chip size, and electrical long-term stability, thus driving
down both driver costs and closed-loop control costs. On a longer timescale, InP offers the
possibility of monolithic integration with a laser source. 40 Gbit/s InP Mach-Zehnder
modulators with standard NRZ on-off keying operation have been reported by HHI [87], NTT
Photonics [88] and Fujitsu [89], whereas 80 Gbit/s NRZ OOK operation has been
demonstrated once for an MZM [87] and once for an electro-absorption (EA) modulator [90].
These EA modulators are good candidates for short-reach 100 Gbit/s applications, but cannot
provide zero-chirp operation for long-haul transmission or phase modulation for advanced
modulation schemes.
The reported bandwidth of NTT's InP Modulator is 40 GHz with a DC switching voltage of Vπ
= 2.2 V whereas the required Vpp for 40 Gbit/s operation was as high as 4.2 V [88]. Fujitsu's
Modulator has a reported bandwidth of 28 GHz with a required Vpp = 3 V for 40 Gbit/s
operation [89].
At HHI, current modules show a 3 dB bandwidth of 45 GHz at a low driving voltage of Vπ = 2.8 V
(Fig. 32, lower part). This leads to an excellent eye diagram at 43 Gbit/s, shown in Fig. 27 [87].
The corresponding 80 Gbit/s eye diagram of this modulator is shown in Fig. 28.
Fig. 27: NRZ 40 Gbit/s eye diagram of a modulator module;
3-dB bandwidth = 45 GHz, PRBS 2^31-1;
extinction ratio 14 dB;
eye signal-to-noise: 23 dB
Fig. 28: 80 Gbit/s NRZ eye diagram;
module 3-dB bandwidth = 45 GHz;
PRBS 2^31-1; 3 dBeo;
extinction ratio 8 dB
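The PRBS test patterns quoted in these measurements (2^31-1 here, 2^7-1 and 2^20-1 elsewhere in this chapter) are generated by linear feedback shift registers. A minimal PRBS-7 generator, using the standard ITU-T O.150 polynomial x^7 + x^6 + 1:

```python
def prbs7(seed=0x7F):
    """PRBS 2^7-1 sequence from a 7-bit LFSR, polynomial x^7 + x^6 + 1."""
    state = seed & 0x7F
    bits = []
    while True:
        new = ((state >> 6) ^ (state >> 5)) & 1   # feedback from stages 7 and 6
        bits.append(state & 1)
        state = ((state << 1) | new) & 0x7F
        if state == seed:                          # full period traversed
            return bits

seq = prbs7()
assert len(seq) == 127      # period 2^7 - 1
assert sum(seq) == 64       # maximal-length sequences contain 2^(n-1) ones
```

Longer patterns such as 2^31-1 work the same way with a 31-stage register; their longer runs of identical bits are what exposes the low-frequency and filtering imperfections discussed later in this paper.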
The modulator developed at HHI has implemented optical spot size converters (SSC) for
efficient, low-cost and alignment tolerant fiber-chip-coupling. Fig. 29 shows a modulator
module with GPPO connector and integrated 50 Ω resistor.
Fig. 29: 43 Gbit/s MZI-Modulator module
with GPPO RF connector
Fig. 30: Schematic layout of an MZI based
modulator with capacitively loaded
travelling wave electrodes (TWEs).
To achieve high data rates, the modulator is equipped with capacitively loaded travelling wave
electrodes (TWE) (cf. Fig. 30), which are designed as microstrip lines with an overall
impedance of 50 Ω.
43 Gbit/s NRZ operation of a packaged modulator with Vpp = 2.8 V has been successfully
demonstrated over a 30 nm tuning range in the C-band (cf. Fig. 7) and at temperatures between
20°C and 70°C, as well as error-free transmission (bit error ratio: 10^-10) over 320 km of
dispersion-compensated fiber at 1550 nm.
Fig. 31: Module performance of an InP MZ-modulator within the C-band
Fig. 32: Electro-optic response of two selected modulators with 2 mm and 4 mm TWE length
The electro-optical (EO) response of the highest-frequency modulator chip is shown in Fig. 32.
The current 3 dBeo bandwidth is 63 GHz, which is only a few GHz below the 70 GHz limit
required for 100 Gbit/s operation. The drive voltage of this chip is Vπ = 5.5 V.
Detailed high-frequency simulations of the current device layout reveal that, with some design
changes, a bandwidth of up to 75 GHz at a driving voltage of about Vπ = 3 V is possible. With
this adapted design concept, 100 Gbit/s on-off keying operation with a driving voltage Vπ ≤ 3 V
should be possible in the near future.
3.7.2 Optical detectors and receivers for serial transmission at
100 Gbit/s
Ultrafast photodetectors and photoreceivers based on evanescently coupled photodiodes
have been shown to operate up to 100 Gbit/s and beyond. These waveguide-integrated
detectors are monolithically integrated with HEMTs, employing semi-insulating optical
waveguides on a semi-insulating InP:Fe substrate. The integration scheme is explained and
demonstrated for broadband photodetectors and pinTWA photoreceivers, focussing on
applications at data rates from 80 to 160 Gbit/s. Furthermore, an example of a 107 Gbit/s
opto-electronic receiver with a hybridly integrated demultiplexer is given.
Introduction
Leading system providers are upgrading their long-haul optical communication systems to single-channel
bit rates of 40 Gbit/s, aiming to exploit the transport capacity of the SMF of about
10 Tbit/s by combining TDM and WDM techniques. System developers are
currently starting to experiment with 80 Gbit/s data rates. High-bit-rate (40 Gbit/s)
photodetectors and photoreceivers are key components for optical network extensions on all
system levels (transport, metro, storage area networks). Before the upgrade by a factor of
four to 160 Gbit/s, 100 Gbit/s Ethernet will likely become a transmission standard. This chapter
describes the state of the art for low-cost, high-performance components on the receiver side.
High-speed waveguide-integrated photodetectors
Side-illuminated detectors avoid the inherent compromise between high quantum efficiency and
transit-time-limited bandwidth, which limits the quantum efficiency-bandwidth product of
perpendicularly illuminated long-wavelength photodetectors to around 15 GHz. In side-illuminated
detectors, the optical absorption path is oriented perpendicular to the electric field
lines of the pin junction diode, as the light is injected from the side into the multimode
absorption region [91]. Side-illuminated detectors can be built by applying different schemes:
waveguide photodiodes and waveguide-integrated photodiodes. A waveguide photodiode
contains a multimode waveguide absorption region embedded between two higher-bandgap
cladding layers. The light is coupled directly from the side into the absorption layer, using a
tapered or lensed fiber [94]. The waveguide-integrated photodiode employs a rib waveguide,
from which the light is coupled evanescently from the lower side into the small-gap absorption
layer. This type of detector, also comprising a monolithic taper at its input waveguide facet,
can be monolithically integrated into more complex types of detectors (twin [92], balanced [93]).
Side-illuminated photodetectors show an improved high-power behaviour compared to
perpendicularly illuminated detectors, because the absorption is distributed laterally over a
larger length of a thinner absorption layer in a controlled manner.
The recently fabricated photodetector chips are based on InP and comprise an evanescently
coupled mesa photodiode of 5x20 μm size, a spot-size converter for increased fiber
alignment tolerances, a biasing network and a 50 Ω matching resistor. An optimized
impedance connecting the p-mesa to the electrical output line of the detector leads to an
increase of the cut-off frequency to >100 GHz [95].
Advanced detector chips with a miniaturized mesa size of 5x7 μm² achieve bandwidths of up
to 145 GHz [97][98][99].
100 Gbit/s photodetector module
The chip is assembled into a housing equipped with a 1 mm coaxial output connector and a
fiber pigtail (Fig. 33, left, inset). A cleaved fiber is fixed directly at the chip’s antireflection-coated
waveguide facet. A responsivity of 0.73 A/W with a polarization-dependent loss (PDL)
of only 0.4 dB at 1.55 µm wavelength was obtained. The electrical output signal is provided
by a short coplanar waveguide (CPW) which is connected by multiple short bonding wires to a
following low-loss CPW on quartz substrate leading to the output connector. All fabricated
modules are routinely tested for robustness against vibration (10 min, 60 g, 50 Hz) and
thermal cycling from 10°C to 50°C and showed no degradation. The frequency characteristic
of the PD module was determined by an optical heterodyne measurement setup employing a
fixed and a tunable laser around 1.55 µm. The RF signal was measured by a power meter
(hp437B) with three different power sensors for the respective RF bands 0-50 GHz, V- and
W-band. In the F-band (90-140 GHz) we used a high-frequency power meter (PM3 Erickson
Instr.). An excellent agreement of the measured characteristics was observed in the
overlapping frequency ranges. Fig. 33, left, shows the calibrated frequency response of the
photodetector module at 2 V reverse bias (Vbias). A -3 dB bandwidth of 100 GHz is obtained.
At 120 GHz signal frequency the response has decreased by 6 dB. The dip around 135 GHz
can be attributed to a higher order mode arising in the coaxial connector, which limits the
performance of the 1 mm coaxial interface. The PD modules have been evaluated in several
back-to-back OTDM transmission experiments. Fig. 33 (right) shows the received electrical
80 Gbit/s RZ eye patterns at different optical input power levels using the setup described in
Fig. 33: (left) Relative frequency response of the PD module (+2.3 dBm optical input power)
measured with HP437B (black) and PM3 (red),
(inset): photograph of the PD module, right: electrical 80 Gbit/s RZ eye pattern at 3, 6, 9 and
12 dBm optical input power detected by the detector module at -2.5 V bias (x: 5ps/div).
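The peak voltages of these eye patterns are consistent with simple detector arithmetic based on the 0.73 A/W responsivity and 50 Ω termination quoted above (RZ peak-to-average differences are ignored in this rough sketch):

```python
def dbm_to_watt(p_dbm):
    """Convert optical power from dBm to watts."""
    return 10 ** (p_dbm / 10.0) / 1000.0

responsivity = 0.73        # A/W, measured module responsivity
load = 50.0                # ohm termination

i_photo = responsivity * dbm_to_watt(12.0)   # photocurrent at +12 dBm input
v_out = i_photo * load                       # voltage across the 50 ohm load
assert 0.5 < v_out < 0.65                    # ~0.58 V, close to the ~0.6 V peaks
```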
Fig. 34: (left) 100 Gbit/s RZ eye pattern at +8 dBm input power, Vbias: -2 V, 2^20-1 PRBS
(courtesy of L. Moeller, Lucent Techn., USA),
(center): detected eye pattern under 160 Gbit/s RZ excitation, Vbias = -2.5 V,
(right): inner eye height vs. average optical input power at 160 Gbit/s.
All measured eye patterns are widely opened, with a peak voltage of up to 0.6 V, revealing only
negligible saturation effects at +12 dBm. Preliminary measurements at 100 Gbit/s RZ were
performed using a 100 GHz sampling scope (LeCroy SDA with SE-100 standard time base),
demonstrating the highest available RZ bit rate at which o/e conversion could be performed
without any reduction of the modulation depth (Fig. 34, left).
Fig. 34, center, shows the detected 160 Gbit/s RZ data stream at +12 dBm optical input
power. Due to the insufficient bandwidth of the sampling head (70 GHz, Agilent 86118A) and
the PD module, we observe an RZ-to-NRZ conversion. Nevertheless, the eye amplitude is still
remarkable and the inner eye opening reaches 160 mV (Fig. 34, right).
80 Gbit/s pin-TWA photoreceiver
The concept of combining a waveguide-integrated photodiode with a spot-size converter and
a HEMT-based amplifier, allowing the independent optimization of each of the components,
has been widely discussed in earlier publications [100], [101]. For 80 Gbit/s
system applications, a new circuit design has been developed, using HEMTs with modified
layer structures and shorter gate lengths. The biasing configuration is accomplished with a
negative bias supply at the common source electrode with an MIM capacitor [102]. The circuit
diagram of the photoreceiver with a negative bias and a ground (GND)-isolated output port is
depicted in Fig. 35. The amplifier’s DC current is fed into the terminal Vdd. The HEMTs, with
0.18 µm gate length, exhibit cut-off frequencies fT/fmax of typically 140/300 GHz.
Fig. 35: Circuit diagram of the photoreceiver with negative bias and GND-isolated output port
Fig. 36: Simulated ZT and S22 of the traveling wave amplifier (left) and partial view of the
integrated photoreceiver OEIC (right).
The simulated transimpedance ZT and S22 of the traveling wave amplifier (TWA) are illustrated
in Fig. 36 (left). The 3-dB bandwidth and transimpedance (ZT) are 65 GHz and 40 dBΩ,
respectively. The reflection coefficient S22 stays below -10 dB over almost the whole frequency
range up to 80 GHz. A partial view of the chip is given in Fig. 36 (right), which shows the
photodiode at the left being connected to the input of the traveling wave amplifier via an air
bridge. The amplifier characteristic, defined by the transimpedance (ZT), is derived from the
measured S-parameters. The transimpedance amounts to 39 dBΩ (71 Ω) with a
bandwidth of 72 GHz. The packaged photoreceiver is shown in Fig. 37, with a 1 mm RF output,
a DC supply and an optical input. The frequency characteristic of the module is shown in Fig.
38. The 3 dB bandwidth exceeds 70 GHz, which is comparable to the OEIC characteristics
and proves high-quality RF packaging. The overall conversion gain of the photoreceiver
module is 45.4 V/W, which is high enough for 80 Gbit/s ETDM systems as well as for high-frequency
measurement equipment. Time-domain measurements with the module
demonstrated well-opened 80 Gbit/s RZ and 85 Gbit/s NRZ eye patterns at 7 dBm optical
input power [103] (see Fig. 39).
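The quoted figures are linked by two simple relations: transimpedance in dBΩ is 20·log10(ZT/Ω), and the module conversion gain in V/W is the product of photodiode responsivity and transimpedance. A consistency check (the responsivity value below is inferred from the other two numbers, not quoted in the text):

```python
import math

z_t = 71.0                          # ohm, measured transimpedance
z_t_db = 20 * math.log10(z_t)       # 71 ohm corresponds to ~37 dB-ohm
assert abs(z_t_db - 37.0) < 0.1

conversion_gain = 45.4              # V/W, overall module conversion gain
responsivity = conversion_gain / z_t
assert 0.6 < responsivity < 0.7     # ~0.64 A/W, a plausible pin photodiode value
```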
Fig. 37: A photograph of a pig-tailed
OEIC module with 1mm RF output.
Fig. 38: Opto-electronic frequency response of the
pinTWA photoreceiver module.
Fig. 39: 80 Gbit/s RZ (left) and 85 Gbit/s NRZ (right) eye pattern at +7 dBm optical input
power detected by the pinTWA receiver module [ref.: Alcatel].
107 Gbit/s opto-electronic receiver with hybrid integration of a photodetector and demultiplexer
A recently demonstrated complete ETDM photoreceiver for 107 Gbit/s operation comprises
three active circuit components: a 100 GHz 3-dB bandwidth InP photodiode with 0.6 A/W
responsivity [95], a SiGe 85+ Gbit/s 1:2 electrical demultiplexer, and a SiGe traveling wave
clock amplifier with a 3-dB bandwidth of about 55 GHz [104]. The optical input is a single
mode fiber, the microwave inputs and outputs are integrated V connectors, and the DC power
is provided through a high-density multi-pin connector, see Fig. 40.
Fig. 40: Integrated Demultiplexing 107 Gbit/s Photoreceiver
(a) outside view (b) inside view.
Full details of the architecture are given in [104]. The photodiode is directly coupled to one of
the inputs of the differential demultiplexer, thereby minimizing microwave parasitics and
improving performance over that which would result from trying to AC couple data from
several hundred kHz to nearly 100 GHz. Since all 100 GHz interfaces are inside the package,
design parameters can be tightly controlled resulting in improved performance. All receiver
electrical inputs and outputs operate at half the input data rate, greatly simplifying the external
interfaces, which makes this design approach inherently superior to separately packaged
solutions with 100 Gbit/s interfaces.
A cleaved fiber couples light into the diode, and a ruby ring is used to hold the fiber in place.
The receiver’s power requirements for 107 Gbit/s operation are as follows: +5.1 dBm nominal
optical input power (+2.6 dBm minimum); 2 V @1.77 mA, nominal (0.975 mA minimum) for
the photodiode; -5.2 V @ 167 mA for the traveling wave amplifier (TWA); and -4.3 V @
235 mA for the demux. The clock input requires a single ended 53.5 GHz clock with an
amplitude of 200 mVpp, and the tributary data outputs are 500 mVpp differential.
Fig. 41: 107 Gbit/s optical input eye from transmitter and 53.5 Gbit/s output eye from the
integrated demultiplexing receiver for the NRZ modulation format.
Fig. 41 illustrates the eye pattern performance of the 1:2 demultiplexing receiver. For the long
(2^31-1) PRBS, a BER floor of ~2×10^-5 for NRZ and ~8×10^-7 for CSRZ signalling was
observed, a consequence of imperfections in both the transmitter and receiver. For
a 2^31-1 PRBS and a BER of 10^-3 (the threshold of enhanced forward error
correction, FEC), a required OSNR of 24 dB for NRZ and 21 dB for CSRZ in a 0.1 nm
resolution bandwidth was measured. This represents the best reported required OSNR for a
fully ETDM system at 107 Gbit/s to date [104].
Summary
A highly efficient 100 GHz photodetector module for the detection of single-channel bit rates
up to 160 Gbit/s RZ has been reported. The detected eye diagrams are well opened, with
peak voltages sufficient to drive any available demux electronics directly. PinTWA
photoreceivers were demonstrated for O/E conversion at 80/85 Gbit/s data rates, applying the
RZ and NRZ modulation formats. These results demonstrate the potential of the integration
concept of the evanescently coupled pin photodiode for use in RF microwave links and
ultra-high-speed systems at data rates of up to 160 Gbit/s, especially serving the component
needs of serial 100 Gbit/s Ethernet transmission schemes.
A 107 Gbit/s integrated demultiplexing opto-electronic receiver was demonstrated. Novel
hybrid integration of a photodiode, demux, and clock amplifier enabled ultra-high-speed
performance in a compact package. Combining advanced microwave and optical packaging
techniques with emerging InP and SiGe integrated circuit technology, an OSNR of 21 dB in a
0.1 nm bandwidth was achieved for an ETDM system operating at 107 Gbit/s at a BER of
10^-3 with a long (2^31-1) bit sequence.
To the best of our knowledge, the results described here are the worldwide state of the art.
3.8 Overview of reports on transmission experiments
The transmission of 100 and more gigabit per second on a single wavelength channel poses
no fundamental problem and has been demonstrated over recent years by a number of research
groups using optical time division multiplexing (OTDM) techniques. However, the key
components of an OTDM system, in particular the demultiplexer and clock recovery, have up
to now been more expensive than the corresponding devices based on electrical time division
multiplexing (ETDM). Therefore, ETDM remains the preferred solution once the components
are available with sufficient bandwidth, even more for 100 Gbit/s Ethernet where a major
focus is on cost-efficient systems.
The key electronic and optoelectronic subcomponents for ETDM systems capable of
operating at 100 Gbit/s and more (denoted 100+ Gbit/s in the following) have only recently
become available. Several research groups have started to investigate 100+ Gbit/s ETDM
systems, and first “hero” transmission experiments have been reported. In the following, these
experiments, which are summarized in Table 4, are discussed briefly.
Table 4: Summary of recent experiments using systems and sub-systems at 100+ Gbit/s.
The table lists, per experiment: conference (OFC 2006 or OFC 2007), bit rate, data format,
number of λ-channels with spacing in GHz, transmission distance, and system type. The
reported experiments are: Winzer et al. (Lucent) [65]; Doerr et al. (Lucent) [105]; Daikoku et
al.; Raybon et al. (Lucent) [107]; Derksen et al. (Siemens, HHI, Micram) [108]; Sano et al.
(NTT) [109]; Winzer et al. (Lucent, NIST, ...); Schubert et al. (HHI, Siemens, Micram) [110];
Winzer et al. (Lucent) [82]; Schuh et al. (Alcatel) [78]; Masuda et al. (NTT) [114]; Fludger et
al.; Schuh et al. (Alcatel-Lucent) [116]; Winzer et al. (Alcatel-Lucent) [117]; Sinsky et al.
(Alcatel-Lucent) [102]; Jansen et al. (..., HHI, Micram, IBM) [118]. The entries cover
single-channel as well as 10-channel WDM configurations (spacings of 50-200 GHz),
CSRZ-DQPSK experiments with 70 and 102 λ-channels at 100 GHz spacing (the latter with
polMUX), distances from back-to-back up to 2375 km, and system types ranging from fully
ETDM and OTDM Tx/Rx with transmitted clock to coherent detection with off-line BER
evaluation, a co-packaged receiver, and a 160 km field transmission.
The transmission experiments reported so far can be split into two categories, the
"OTDM/ETDM Experiments" and the "Fully ETDM Experiments". The first category contains
experiments where part of the system (either transmitter or receiver) is based on OTDM
technology. These experiments aim at demonstrating the capabilities of some pure ETDM
subcomponents (e. g. receiver or transmitter) rather than showing complete 100+ Gbit/s
systems. The experiments in the second category demonstrate transmission using systems
fully based on ETDM.
3.8.1 OTDM/ETDM experiments
The first ETDM optical transmitter operating at 107 Gbit/s was demonstrated by Lucent
Technologies [65], [111] in 2005. This data rate corresponds to a 100 Gbit/s information data
rate plus a 7% overhead for forward error correction (FEC). Due to the unavailability of
sufficiently broadband amplifiers and modulators, they used the duobinary modulation format. The
single wavelength channel duobinary signal was generated by a Mach-Zehnder modulator
(MZM) with a 3-dB bandwidth of about 30 GHz, biased at minimum transmission, which acted
simultaneously as a low-pass filter and as a modulator. The receiver in this experiment
contained an optical time division demultiplexer based on a second MZM. Since no
transmission experiments were performed, no clock recovery was needed. Error-free
performance (bit error ratio (BER) < 10^-9) was demonstrated for a word length of 2^7-1, with a
required OSNR of 34 dB and 25 dB for BER 10^-9 and 10^-3, respectively. For a long word
length (2^31-1), a clear error floor was observed, which was attributed to the group delay and
amplitude ripple of the modulator's nonideal low-pass filter characteristics.
Fig. 42: Eye diagram of a 107 Gbit/s electrical driving signal (left), a 107 Gbit/s duobinary
signal [65] (middle) and a 107 Gbit/s NRZ data signal after optical equalization [105] (right).
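The BER targets used throughout these experiments (10^-9 for error-free operation, 10^-3 at the enhanced-FEC threshold) map to Q factors through the Gaussian-noise approximation BER = ½·erfc(Q/√2):

```python
import math

def ber_from_q(q):
    """Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

assert ber_from_q(6.0) < 1e-9     # Q of ~6 corresponds to "error-free" 1e-9
assert ber_from_q(3.1) < 1e-3     # Q of ~3.1 meets the enhanced-FEC threshold
```

The roughly 9 dB gap between the two Q requirements is what the several-dB OSNR difference between the 10^-9 and 10^-3 thresholds quoted above reflects.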
A second method to overcome the modulator bandwidth limitations for 100+ Gbit/s serial
transmission is an optical equalizer, which was shown by Lucent Technologies in the same
year [105]. This optical equalizer was an integrated single-chip device, consisting of a series
arrangement of two main Mach-Zehnder interferometers on a silica-on-silicon planar
lightwave circuit. By employing this equalizer, they reported the first 107 Gbit/s ETDM
non-return-to-zero (NRZ) on-off keying (OOK) transmitter. In a back-to-back configuration with
an OTDM receiver similar to the one described above, they achieved error-free performance
(BER < 10^-9) at a word length of 2^31-1, with a required OSNR of about 21 dB for BER 10^-3.
Based on this system Lucent Technologies reported the first wavelength division multiplexing
(WDM) transmission at 107 Gbit/s channel rate in 2006 [107]. They transmitted 10
wavelength channels in the C-band modulated with 107 Gbit/s NRZ-OOK each, over four
100 km spans of nonzero dispersion shifted fiber (NZDSF). In this experiment the optical
equalizer was used in the transmitter to simultaneously compensate for the limited modulator
bandwidth across all wavelength channels. For optimum compensation the optical frequency
periodicity in the optical equalizer was 144 GHz. This defined the frequency grid for the WDM
channels and resulted in a spectral efficiency of 0.7 bits/s/Hz. The span loss was
compensated using hybrid EDFA/Raman amplification. For this experiment no electrical
demultiplexer or clock and data recovery circuit was available. Therefore the receiver
incorporated an optical time division demultiplexer based on two cascaded MZMs, and the
clock signal to synchronize the receiver was transmitted on a separate wavelength channel. Lucent
Technologies extended the transmission distance to 1000 km by using the same setup in a
recirculating loop configuration later in the same year [82].
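The rate and spectral-efficiency figures quoted above follow from a few lines of arithmetic. A sketch with the values taken from the text (the variable names are ours):

```python
info_rate = 100.0                            # Gbit/s Ethernet information rate
fec_overhead = 0.07                          # 7 % FEC overhead
line_rate = info_rate * (1 + fec_overhead)   # 107 Gbit/s serial line rate

grid = 144.0                                 # GHz WDM channel spacing in [107]
spectral_efficiency = line_rate / grid       # ~0.74 bit/s/Hz, quoted as 0.7

print(f"line rate: {line_rate:.0f} Gbit/s")
print(f"spectral efficiency: {spectral_efficiency:.2f} bit/s/Hz")
```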
Fig. 43: Photo of an integrated SiGe ETDM receiver chip for 107 Gbit/s transmission [108].
The first 100 Gbit/s ETDM based receiver in a transmission experiment was shown in a
collaborative experiment between Siemens AG, Fraunhofer Heinrich-Hertz-Institut and
Micram Microelectronic in 2006 [108]. The main building block of the receiver was a single
SiGe chip incorporating the 1:2 electrical demultiplexer and the clock & data recovery. The
receiver chip was initially designed for 86 Gbit/s and incorporated a 43 GHz voltage controlled
oscillator (VCO), which could not be used for the experiments at 100 Gbit/s. Instead an
external 50 GHz VCO was used. Using this ETDM receiver, error-free transmission
(BER < 10⁻⁹) of a single wavelength over six 80 km spans of dispersion managed fiber
(a combination of super-large area fiber and inverse dispersion fiber) was demonstrated for a
word length of 2⁷−1. The span loss was compensated by EDFAs only. The transmitter in this
experiment was based on optical time division multiplexing and generated a low duty cycle
100 Gbit/s return-to-zero (RZ) data signal. Later in 2006 the authors reported that the ETDM
receiver is capable of operating up to 107 Gbit/s [110]. They demonstrated error-free
performance (BER < 10⁻⁹) at a word length of 2⁷−1 in a transmission experiment using a setup
almost identical to the one described above. The required back-to-back OSNR for a BER of 10⁻⁹
(10⁻³) was about 27.5 dB (20.5 dB) for 100 Gbit/s and 32 dB (23.5 dB) for 107 Gbit/s. At the
longer word length of 2³¹−1 an error floor was observed at both bit rates. This significant
performance degradation was attributed to limitations in the receiver chip.
Fig. 44: Eye diagram of 107 Gbit/s RZ-OOK signal from an optical time division
multiplexing transmitter (left) and 53.5 Gbit/s electrical signal after demultiplexing with
an integrated SiGe receiver chip [110].
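The 1:2 electrical demultiplexing performed by the SiGe chip can be modelled in a few lines: the full-rate bit stream is split into two half-rate tributaries on alternating slots of the half-rate clock. A behavioural sketch only, with names of our choosing:

```python
def demux_1to2(serial):
    """Split a full-rate bit stream into two half-rate tributaries,
    one per edge of the half-rate clock (even and odd bit slots)."""
    return serial[0::2], serial[1::2]

def mux_2to1(even, odd):
    """Inverse operation, as performed in an ETDM transmitter."""
    out = []
    for a, b in zip(even, odd):
        out += [a, b]
    return out

bits = [1, 0, 1, 1, 0, 0, 1, 0]      # stand-in for a 107 Gbit/s stream
even, odd = demux_1to2(bits)         # two 53.5 Gbit/s tributaries
assert mux_2to1(even, odd) == bits
```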
3.8.2 Fully ETDM experiments
The first transmission experiment at 100 Gbit/s using a fully ETDM based system was
reported by KDDI R&D Laboratories, the National Institute of Information and Communications
Technology (NICT) and Sumitomo Osaka Cement Co. in 2006 [106]. In a collaborative
experiment they transmitted 100 Gbit/s NRZ - differential quadrature phase shift keying
(DQPSK) on a single wavelength over 50 km standard single-mode fiber. To overcome the
bandwidth limitation in the modulator they used DQPSK, a phase modulation format which
encodes two bits per symbol, thus enabling a reduced symbol rate of 50 Gbaud. The key
component in this experiment was a novel integrated LiNbO3 nested MZM to generate the
DQPSK data signal. They achieved error-free performance (BER < 10⁻⁹).
Fig. 45: Optical 107 Gbit/s RZ-DQPSK eye diagram after demodulation [85]
In the same year the nested MZM was used by Lucent Technologies in cooperation with
NICT and Sumitomo to demonstrate 107 Gbit/s WDM transmission over 2000 km in a
recirculating loop experiment [85]. The transmitter comprised 10 wavelength channels in the
C-band on a 150 GHz grid, resulting in a spectral efficiency of 0.7 bit/s/Hz. Each wavelength
was modulated with 107 Gbit/s (53.5 Gbaud) and subsequent pulse carving was used to
generate 50% duty cycle RZ-DQPSK. A single nested MZM was used for modulation of even
and odd channels, which prevented the demonstration of a narrower channel spacing. The
loop testbed was similar to the one described in the experiments above. The loop consisted of
four 100 km spans NZDSF. Chromatic dispersion in each span was compensated by
dispersion compensating fiber (DCF). The span loss was compensated using hybrid
EDFA/Raman amplification. In the experiment the bit error ratio test set (BERT) had to be
programmed to the expected bit pattern. This limited the word length to 2 -1, which was the
longest sequence length tractable for programming the BERT. The single-channel back-to-back required OSNR was 18.1 dB.
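DQPSK encodes each bit pair as a phase difference between successive symbols, which is what halves the symbol rate to 53.5 Gbaud at 107 Gbit/s. A behavioural sketch of the differential encoding and delay-interferometer detection (the Gray mapping and function names are our illustrative choices):

```python
import cmath
import math

# Gray-coded mapping of bit pairs to differential phase increments
PHASE_INC = {(0, 0): 0.0, (0, 1): math.pi / 2,
             (1, 1): math.pi, (1, 0): 3 * math.pi / 2}
INC_BITS = {v: k for k, v in PHASE_INC.items()}

def dqpsk_modulate(bits):
    """Encode bits pairwise as phase *differences* between successive
    symbols; 107 Gbit/s thus needs only 53.5 Gsymbol/s."""
    phase, symbols = 0.0, []
    for i in range(0, len(bits), 2):
        phase = (phase + PHASE_INC[(bits[i], bits[i + 1])]) % (2 * math.pi)
        symbols.append(cmath.exp(1j * phase))
    return symbols

def dqpsk_demodulate(symbols):
    """Delay-interferometer style detection: each symbol is compared
    with its predecessor to recover the phase increment."""
    bits, prev = [], 1 + 0j
    for s in symbols:
        d = cmath.phase(s / prev) % (2 * math.pi)
        prev = s
        # snap the measured difference to the nearest nominal increment
        inc = min(INC_BITS,
                  key=lambda p: min(abs(d - p), 2 * math.pi - abs(d - p)))
        bits += list(INC_BITS[inc])
    return bits

data = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1]
assert dqpsk_demodulate(dqpsk_modulate(data)) == data
```

Because only phase differences carry information, the receiver needs no absolute phase reference, which is what makes direct detection with a delay interferometer possible.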
The DQPSK modulation format at 100+ Gbit/s in a WDM transmitter was used by NTT Corporation
in the same year to demonstrate a record total capacity of 14 Tbit/s. They transmitted 70
wavelength channels in the extended L-band over 160 km with 111 Gbit/s line rate per
channel and polarization-division multiplexing (PDM) [99]. The modulators were integrated
nested MZMs and subsequent pulse carving was used to generate carrier-suppressed RZ-DQPSK (CS-RZ-DQPSK) data signals. The transmitter comprised two nested MZMs, one for the even and
one for the odd channels. The channel spacing was 100 GHz, which resulted in a spectral
efficiency of 2 bit/s/Hz with the aid of PDM. The increased line rate of 111 Gbit/s
accommodates FEC and additional overhead potentially required by Ethernet. The
transmission link consisted of two 80 km spans G.656 fiber. The span loss was compensated
using phosphorous co-doped silica fiber amplifiers (P-EDFAs) and backward Raman
pumping. In the experiment the BERT had to be programmed to the expected bit pattern. The
word length was limited to 2 -1. A polarization beam splitter was used in the transmitter to
separate the polarization multiplexed WDM channels. The single-channel back-to-back
required OSNR in the experiment was about 19 dB and 27.5 dB for BER 10⁻³ and 10⁻⁹, respectively.
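The capacity and spectral-efficiency figures of this experiment follow directly from the channel count, the grid spacing and the doubling by PDM. A sketch with the values from the text, assuming the quoted 2 bit/s/Hz counts the net 100 Gbit/s information rate per polarization:

```python
channels = 70                # wavelengths in the extended L-band
grid = 100.0                 # GHz channel spacing
net_rate = 100.0             # Gbit/s information rate per polarization
                             # (111 Gbit/s line rate incl. FEC/Ethernet overhead)

# polarization-division multiplexing carries two such signals per wavelength
total_capacity = channels * 2 * net_rate / 1000.0   # Tbit/s
spectral_efficiency = 2 * net_rate / grid           # bit/s/Hz

print(total_capacity, spectral_efficiency)          # 14.0 2.0
```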
The first fully ETDM NRZ-OOK system (without clock recovery) operating up to 100 Gbit/s
was demonstrated by Alcatel SEL AG in 2006 [78]. They showed error-free performance
(BER < 10⁻⁹) in a back-to-back configuration with a required OSNR of 36 dB for a BER of 10⁻⁹.
At OFC 2007, a number of papers reported new record transmission capacities and distances
using various modulation formats at 100+ Gbit/s. These papers are also
listed in Table 4.
4 Summary, conclusions and recommendations
In the preceding sections, a detailed analysis was presented concerning the state of the art in
100 GbE technology from a physical layer perspective. Here, we summarize these results and
provide conclusions. In addition, several areas are identified where further efforts are required.
The foreseeable main drivers for deployment of 100 GbE are router interfaces. Traffic per port
in some large internet or enterprise data network hubs is already approaching 40 Gbit/s
today. Facing traffic growth rates of 30 - 100 % per year in these networks, demand for
capacities up to 100 Gbit/s can be predicted for the coming years. Due to a lack of efficient
packet based link aggregation mechanisms in the Ethernet protocol, adding another port does
not provide a satisfactory option to increase capacity. Therefore, router manufacturers are
already working on 100 Gbit/s interface cards to offer a solution for growing bandwidth
demands. These cards represent a natural next step after 10 Gbit/s based on the Ethernet
tradition of a tenfold capacity increase for each new generation. Physical layer interface
options have to be found to enable short range intra-office and long haul inter-office
interconnection of routers with 100 GbE interface cards.
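The growth figures above translate into a concrete time horizon. A short calculation under the stated assumptions (40 Gbit/s per port today, exponential growth at the quoted annual rates):

```python
import math

port_rate = 40.0     # Gbit/s, traffic per port in large hubs today
target = 100.0       # Gbit/s

for growth in (0.30, 1.00):                       # 30 % and 100 % per year
    years = math.log(target / port_rate) / math.log(1.0 + growth)
    print(f"{growth:.0%} annual growth: about {years:.1f} years to 100 Gbit/s")
```

Even at the lower 30 % growth rate, ports reach 100 Gbit/s within roughly three to four years, which explains the urgency behind the 100 GbE interface work.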
Standardization activities concerning 100 GbE are currently underway. The IEEE Higher
Speed Study Group (HSSG) as well as the ITU-T Study Group 15 are working on solutions for
a higher data-rate Ethernet. However, serious efforts are still required to evaluate the
proposed alternatives from a scientific and technological point of view as well as from a
system development and, ultimately, an economic perspective. In particular, the most appropriate
data rate for the transport layer is currently a heavily debated topic. Finalization of
standards is not expected before the year 2009.
Good results have already been achieved in the area of components. Research groups were
able to demonstrate devices for multiplexing of parallel data streams in the electrical domain
to serial binary bitrates of 100 Gbit/s and above. Some of these devices provide sufficient
output voltage to potentially drive the modulator directly. However, in many applications,
driver amplifiers will be required. Realization of packaged modules with sufficient bandwidth
and output voltage for 100 Gbit/s signals is a huge challenge; such modules could not be realized so far.
A similar conclusion has to be drawn for modulators for binary signals with symbol rates
beyond 100 Gsymbol/s. Mach-Zehnder-interferometer (MZI) type as well as electroabsorption modulator (EAM) based devices are difficult to realize with sufficient bandwidth
and low drive voltage. Considerable research and development efforts have to be spent
before devices suitable for product applications can be expected. Modulators for multi-level
formats such as DQPSK are easier to implement due to their relaxed bandwidth
requirements. The challenge for these devices lies in the more complex structure and stable
operation over wide temperature ranges.
On the receiver side, good results could be achieved concerning photodiodes. Chips with
bandwidths exceeding 100 GHz were demonstrated as well as packaged modules with a
bandwidth of 100 GHz. Combining the photodiode with an electrical amplifier reduces the cutoff frequency of the module to 70 GHz, still leaving sufficient bandwidth for NRZ receivers.
Together with already demonstrated integrated clock and data recovery chips for bitrates up
to 107 Gbit/s and demultiplexers up to 110 Gbit/s, all major building blocks for ETDM
receivers seem to be available with maturity levels close to product requirements.
Major remaining challenges on a component level are given by the packaging and integration
of components with sufficient bandwidth. Promising results could be demonstrated for some
specific components, but in many other cases, the maturity has to be improved significantly to
meet product requirements.
On a subsystem level, integration for reduction of size, power consumption and cost is a task
still in its infancy. Major efforts have to be spent in this area. Signal processing functions such
as equalization and forward error correction already implemented at lower bitrates stretch the
limits of currently available electronic processing capabilities. Finding a compromise between
speed limits and complexity or the number of required gates will be a demanding endeavour.
Several open issues remain to be addressed from a transmission perspective. Increased
OSNR requirements of receivers at the higher bitrate in combination with power limits due to
nonlinear effects turn the design of the system optical power budget into a more difficult task.
Cost efficient solutions have to be found that achieve a reach comparable to that of current
lower bitrate systems, i.e. 10 and 40 Gbit/s.
The shorter symbol duration of 100 Gbit/s signals significantly reduces the tolerance towards
linear signal distortions such as chromatic dispersion, polarization mode dispersion and
deviations of filters and other components from a linear phase response. For long haul
systems, variation of CD due to temperature changes can probably no longer be accounted
for by system margins and has to be compensated adaptively. A similar comment applies to
PMD, but with much stronger requirements on the response times of adaptive controllers,
since PMD changes much faster than CD.
Wider bandwidth of the higher bitrate signals decreases filtering tolerance. Solutions have to
be found to maintain or even increase bandwidth efficiency. Ideally, 100 Gbit/s signals should
be compatible with the currently installed infrastructure using channel spacings on a 100 GHz
or even 50 GHz grid.
Advanced modulation formats offer several options to address linear distortion tolerance as
well as bandwidth efficiency issues. However, an ideal solution could not be identified so far.
Multi-level formats such as DQPSK offer advantages with respect to these requirements. But
these advantages usually come with disadvantages in other areas, for example tolerance
towards nonlinear effects. Compatibility of 100 Gbit/s signals added as an upgrade with lower
bitrate signals already present in the fiber is a desirable feature, which cannot be
implemented easily, especially in the case of PSK formats. The most suitable modulation format
for long haul and ultra long haul transmission still has to be identified.
For short haul interconnects, a single NRZ channel with the full bitrate in a serial binary format
potentially offers the most cost efficient solution. However, components for this symbol rate
have not yet reached required maturity and cost levels. Transmission of multiple lower bitrate
channels, using CWDM, for example, may offer an intermediate solution. But these multiple-channel approaches will probably no longer be deployed in new installations once a suitable
serial solution becomes available.
Reports of successful transmission of 107 Gbit/s ETDM binary signals demonstrate
promising results for short range applications. The same applies for long haul and ultra long
haul transmission of DQPSK signals. These examples represent a good characterization of
the status of 100 Gbit/s physical layer technology. Several promising results could be
achieved, but a lot of effort has to be spent in many areas before these early results can be
turned into viable products. The following list summarizes topics where activities are required:
- increased bandwidth and drive voltage of packaged modulator driver amplifiers,
- MZI and EAM based optical modulators for serial binary datastreams with increased
  bandwidth and low drive voltage,
- improved packaging and maturity of components,
- increased level of integration for size, power consumption and cost reduction,
- enhanced signal processing capabilities,
- design of control concepts for adaptive elements,
- enhanced OSNR budget to increase reach,
- more tolerance towards linear and nonlinear signal distortions,
- identification of the most suitable transmission format for short haul and long haul
  transmission.
5 References
A. Leon-Garcia, I. Widjaja: "Communication Networks, Fundamental Concepts and Key
Architectures", McGraw Hill 2004
H.-J. Tessmann, D. Breuer, H.-M. Foisel, H. Reiner, H. Cremer: ”Investigations on the
Relation Between PMD Coefficient and Cable Installation Year in Deutsche Telekom
Fiber Plant”, Proceedings of SOFM 2002, Boulder, USA, 2002
P. M. Krummrich, E.-D. Schmidt, W. Weiershausen, A. Mattheus: “Field Trial Results
on Statistics of Fast Polarization Changes in Long Haul WDM Transmission Systems”,
Proceedings of OFC’05, OThT, Anaheim, CA, USA, 2005
G. P. Agrawal: "Fiber-Optic Communication Systems", John Wiley & Sons, New York,
A. Walter, G. Schaefer: "Chromatic Dispersion Variations in Ultra-Long-Haul
Transmission Systems Arising from Seasonal Soil Temperature Variations“, Proc. of
OFC 2002, WU4, Anaheim, USA.
T. Kato, Y. Koyano, M. Nishimura: "Temperature dependence of chromatic dispersion
in various types of optical fiber“, Optics Letters, vol. 25, no. 16, August 2000.
M. Hamp, J. Wright, M. Hubbard, B. Brimacombe: "Investigation into the Temperature
Dependence of Chromatic Dispersion in Optical Fiber", Journal of Lightwave
Technology, vol. 20, no. 11, November 2002.
www.agrowetter.de, retrieved 12 Jan. 2005.
www.agrowetter.de, retrieved 19 July 2006.
S. Vorbeck and R. Leppla: "Dispersion and Dispersion Slope Tolerance of 160 Gb/s
Systems, considering the Temperature Dependence of Chromatic Dispersion", IEEE
Photonics Technology Letters, Vol. 15, No. 10, October 2003.
M. Rohde, C. Caspar, N. Heimes, M. Konitzer, E.-J. Bachus and N. Hanik: "Robustness
of DPSK direct detection transmission format in standard fiber WDM systems”, in IEEE
Electronics Letters, Vol. 36, No. 17, 2000, pp. 1483-1484
R.A. Griffin and A.C. Carter, “Optical Differential Quadrature Phase-Shift Key
(oDQPSK) for High Capacity Optical Transmission”, in Proc. OFC 2002, paper WX6,
M. Ohm and J. Speidel, “Optimal Amplitudes Ratios and Chromatic Dispersion
Tolerances of Optical Quaternary ASK-DPSK and 8-ary ASK-DQPSK”, in Proc.
(APOC), 2004
H. Yoon, D. Lee and N. Park, “Performance Comparison of Optical 8-ary Differential
Phase-Shift Keying Systems with Different Electrical Decision Schemes ”, Optics
Express, vol. 13, No. 2, 2005, pp. 317-376
M. Serbay, T. Tokle, P. Jeppesen, W. Rosenkranz: "42.8 Gbit/s, 4 Bits per Symbol
16-ary Inverse-RZ-QASK-DQPSK Transmission Experiment without Polmux", in Proc.
OFC 2007, paper OThL2.
S. L. Jansen et al.: “10,200km 22x2x10Gbit/s RZ-DQPSK Dense WDM Transmission
without Inline Dispersion Compensation through Optical Phase Conjugation”, in Proc.
OFC 2005, post deadline paper PDP28.
A. H. Gnauck et al.: "Spectrally efficient (0.8 b/s/Hz) 1-Tb/s (25x42.7 Gb/s) RZ-DQPSK
transmission over 28 100-km SSMF spans with 7 optical add/drops”, in Proc. ECOC
2004, post deadline paper Th4.4.1, Vol. 6, pp. 40-41.
T. Tokle et al.: “Penalty-free Transmission of Multilevel 240 Gbit/s RZ-DQPSK-ASK
using only 40 Gbit/s Equipment”, in Proc. ECOC 2005, post deadline paper Th4.1.6,
Vol. 6, pp. 11-12.
M. Serbay et al.: “Experimental Investigation of RZ-8DPSK at 3x10.7Gb/s”, in Proc.
LEOS 2005, paper We3, pp. 483-484.
K. Sekine et al, “40 Gbit/s 16-ary (4 bit/symbol) optical modulation / demodulation
scheme”, in IEE Electronic Letters., Vol. 41, No. 7, 2005, pp. 430-432.
T. Miyazaki, et al.: “Superposition of DQPSK over inverse-RZ for 3-bit/symbol
modulation-demodulation”, in IEEE Photonics Technology Letters, Vol. 16, No. 12,
2004, pp. 2643-2645.
N. Kikuchi, K. Sekine, S. Sasaki: “Proposal of Inter-Symbol Interference (ISI)
Suppression Technique for Optical Multilevel Signal Generation”, in Proc. ECOC 2006,
paper Tu4.2.1, Vol. 2, pp. 117-118.
S. K. Ibrahim et al.: “Performance of 20 Gbit/s Quaternary Intensity Modulation Based
on Binary or Duobinary Modulation in Two Quadratures With Unequal Amplitudes”, in
IEEE J. Selected Topics Quantum Electronics, Vol. 12, No. 4, 2006, pp 596-602.
S. Vorbeck, R. Leppla: "Dispersion and Dispersion Slope Tolerance of 160-Gb/s
Systems, Considering the Temperature Dependence of Chromatic Dispersion", IEEE
Photonics Technology Letters, Vol. 15, No. 10, 2003
V. Srikant: "Broadband dispersion and dispersion slope compensation in high bit rate
and ultra long haul systems", OFC 2001, Anaheim.
M. Shirasaki: "Compensation of chromatic dispersion and dispersion slope using a
virtually imaged phased array", OFC 2001, Anaheim.
S. Cao et al.: "Dynamically tunable dispersion slope compensation using a virtually
imaged phased array (VIPA)", 2001 Digest of the LEOS Summer Topical Meetings,
Copper Mountain, 2001.
Y. Painchaud et al.: "Multi-channel fiber Bragg gratings for dispersion and slope
compensation", OFC 2002, Anaheim.
W. H. Loh et al.: "Sampled Fiber Grating Based Dispersion Slope Compensator", IEEE
Photonics Technology Letters, Vol. 11, No. 10, 1999
J. A. R. Williams et al.: "Fiber Bragg Grating Fabrication for Dispersion Slope
Compensation", IEEE Photonics Technology Letters, Vol. 8, No. 9, 1996
R. Kashyap: "Fiber Bragg Gratings", Academic Press, Optics and Photonics, San Diego,
London, Boston
A. M. Marom et al.: "Compact Colorless Tunable Dispersion Compensator With
1000-ps/nm Tuning Range for 40-Gb/s Data Rates", Journal of Lightwave Technology,
Vol. 24, No. 1, 2006
C. K. Madsen, J. H. Zhao: "Optical filter design and analysis, a signal processing
approach", John Wiley & Sons Inc., New York, 1999
C. R. Doerr et al.: "Two Mach-Zehnder-Type Tunable Dispersion Compensators
Integrated in Series to Increase Bandwidth and/or Range While Maintaining
Single-Knob Control", IEEE Photonics Technology Letters, Vol. 17, No. 4, 2005
F. Horst et al.: "Compact Tunable FIR Dispersion Compensator in SiON Technology",
IEEE Photonics Technology Letters, Vol. 15, No. 11, 2003
G. Lenz, C. K. Madsen: "General Optical All-Pass Filter Structures for Dispersion
Control in WDM Systems", Journal of Lightwave Technology, Vol. 17, No. 7, 1999
C. K. Madsen, G. Lenz: "Optical All-Pass Filters for Phase Response Design with
Applications for Dispersion Compensation", IEEE Photonics Technology Letters, Vol.
10, No. 7, 1998
K. Takiguchi et al.: "Dispersion Slope Equalizer for Dispersion Shifted Fiber Using a
Lattice-Form Programmable Optical Filter on a Planar Lightwave Circuit", Journal of
Lightwave Technology, Vol. 16, No. 9, 1998
C. K. Madsen et al.: "Integrated All-Pass Filters for Tunable Dispersion and Dispersion
Slope Compensation", IEEE Photonics Technology Letters, Vol. 11, No. 12, 1999
M. Bohn et al.: "Adaptive Distortion Compensation With Integrated Optical Finite
Impulse Response Filters in High Bitrate Optical Communication Systems", IEEE
Journal of selected topics in Quantum Electronics, Vol. 10, No. 2, 2004
R. Kumaresan, A. Rao: "On Designing Stable Allpass Filters Using AR Modeling", IEEE
Transactions on Signal Processing, Vol. 47, No. 1, 1999
M. Lang, T. I. Laakso: "Simple and Robust Method for the Design of Allpass Filters
Using Least-Squares Phase Error Criterion", IEEE Transactions on circuits and
systems - II: analog and digital signal processing, Vol. 41, No. 1, 1994
M. Lang: "Allpass Filter Design and Applications", IEEE Transactions on Signal
Processing, Vol. 46, No. 9, 1998
T. Duthel et al.: "Quasi-analytic Synthesis of Non-Recursive Optical Delay Line Filters
for Optimal Compensation of Dispersion Effects", IEEE Journal of Lightwave
Technology, Vol. 24, No. 11, 2006
B. Franz, D. Rösener, F. Buchali, H. Bülow: ”Adaptive Electronic Feed-Forward
Equaliser and Decision Feedback Equaliser for the Mitigation of Chromatic Dispersion
and PMD in 43 Gbit/s Optical Transmission Systems”, ECOC 2006, Sept. 24-28, 2006
Cannes, France, We1.5.1
A. Färbert, S. Langenbach, N. Stojanovic, C. Dorschky, T. Kupfer, C. Schulien, J. P.
Elbers, H. Wernz, H. Griesser, C. Glingener: “Performance of a 10.7 Gb/s Receiver
with Digital Equaliser using Maximum Likelihood Sequence Estimation”, ECOC 2004,
Stockholm, post-deadline, Th4.1.5
H. Haunstein, K. Sticht, A. Dittrich, W. Sauer-Greff, R. Urbansky: “Design of near
optimum electrical equalizers for optical transmission in the presence of PMD”, Tech.
Dig., OFC 2001, Anaheim, CA, 2001, WAA4
M. Cavallari, C. R. S. Fludger, P. J. Anslow: "Electronic Signal Processing for
Differential Phase Modulation Formats", Tech. Dig. OFC 2004, Los Angeles, TuG2
X. Liu et al.: “DSP-enabled compensation of demodulator phase error and sensitivity
improvement in direct-detection 40-Gb/s DQPSK", ECOC 2006, Cannes, France, post-deadline paper Th4.4.5
S. J. Savory, A. D. Stewart, S. Wood, G. Gavioli, M. G. Taylor, R. I. Killey, P. Bayvel:
“Digital Equalisation of 40Gbit/s per Wavelength Transmission over 2480km of
Standard Fibre without Optical Dispersion Compensation“, ECOC 2006, Cannes,
France, Th2.5.5
D. McGhan, C. Laperle, A. Savchenko, C. Li, G. Mak, M. O'Sullivan: "5120 km RZ-DPSK
over G.652 fibre at 10G with no dispersion compensation", OFC/NFOEC 2005,
post-deadline PDP27
C. Weber, J.K. Fischer, C.-A. Bunge, K. Petermann: “Electronic Precompensation of
Intra-Channel Nonlinearities at 40 Gbit/s” ECOC 2006, Cannes, France, W1.5.4
W. Shieh, C. Athaudage: “Coherent optical orthogonal frequency division multiplexing”,
El. Lett., 2006 Vol. 42 No. 10
M. Mayrock, H. Haunstein: “Impact of Implementation Impairments on the Performance
of an Optical OFDM Transmission System“, ECOC 2006, Th3.2.1
T. Suzuki, Y. Nakasha, T. Takahashi, K. Makiyama, T. Hirose and M. Takikawa:
"144-Gbit/s selector and 100-Gbit/s 4:1 multiplexer using InP HEMTs", in Microwave
Symposium Digest, 2004 IEEE MTT-S International, Vol. 1, p. 117-120.
K. Murata, T. Enoki, H. Sugahara, and M. Tokumitsu: ”ICs for 100 Gbit/s Data
Transmission”, in Proc. 11th GaAs Symposium Munich 2003, p. 457-460.
M. Meghelli: ”A 132-Gb/s 4:1 Multiplexer in 0.13µm SiGe-Bipolar Technology”, in IEEE
Journal of Solid-State Circuits, December 2004, Vol. 39, No. 12, pp. 2403-2407.
K. Schuh, B. Junginger, H. Rempp, P. Klose, D. Rösener and E. Lach: ”85.4 Gbit/s
ETDM Transmission over 401 km SSMF Applying UFEC", in Proc. ECOC 2005, post-deadline paper Th4.1.4.
E. Lach, K. Schuh, B. Junginger, G. Veith, J. Lutz, M. Möller: “Challenges for 100Gbit/s
ETDM transmission and implementation“, in Proc. OFC 2007, invited paper OWE1
K. Schuh, B. Junginger, E. Lach, G. Veith, J. Lutz, M. Möller: “107 Gbit/s ETDM NRZ
Transmission over 320km SSMF “, in Proc. OFC 2007, paper OWE2
W. S. Lee, V. Filsinger, L. Klapproth, H.-G. Bach and A. Behling: ”Implementation of an
80 Gbit/s Full ETDM Multi-format ASK Optical Transmitter”, in Proc. ECOC 2005, paper
P. J. Winzer, G. Raybon and M. Duelk: ”107 Gb/s optical ETDM transmitter for 100G
Ethernet transport”, in Proc. ECOC 2005, Glasgow, post-deadline paper Th4.1.1.
R. E. Makon, M. Lang, R. Driad, K. Schneider, M. Ludwig, R. Aidam, R. Quay, M.
Schlechtweg and G. Weimann: “Over 80 Gbit/s 2:1 multiplexer and low power selector
ICs using InP/InGaAs DHBTs”, in IEE Electronics Letters, May 2005, Vol. 41, No. 11.
K. Schneider, R. Driad, R. E. Makon, H. Maßler, M. Ludwig, R. Quay, M. Schlechtweg,
and G. Weimann: “Comparison of InP/InGaAs DHBT Distributed Amplifiers as
Modulator Drivers for 80-Gbit/s Operation”, in IEEE TRANSACTIONS ON
MICROWAVE THEORY AND TECHNIQUES, November 2005, Vol. 53, No. 11.
S. Kimura, Y. Imai, Y. Umeda, T. Enoki: “Loss-compensated distributed baseband
amplifier IC's for optical transmission systems”, in IEEE Transactions on Microwave
Theory and Techniques, October 1996, Vol. 44, No. 10, p. 1688-1693.
C. Meliani, G. Post, J. Decobert, W. Mouzannar, G. Rondeau, E. Dutisseuil and R.
Lefèvre: “92 GHz cut-off frequency InP double channel HEMT based Coplanar
Distributed Amplifier for 40 Gbit/s applications and beyond”, in Proc. ESSCIRC 2002, p.
Y. Arayashiki, Y. Ohkubo, Y. Amano, A. Takagi, M. Ejima, Y. Matsuoka: “16 dB 80 GHz
InGaP/GaAs HBT distributed amplifier”, in IEE Electronics Letters, February 2004, Vol.
40, No. 4, p. 244- 245.
S. Masuda, T. Hirose, T. Takahashi, M. Nishi, S. Yokokawa, S. Iijima, K. Ono, N. Hara,
and K. Joshin: “An Over 110-GHz InP HEMT Flip-chip Distributed Baseband Amplifier
with Inverted Microstrip Line Structure for Optical Transmission Systems”, in 2002 IEEE
GaAs Digest, p. 99-102.
I. I. Y. Suzuki, Z. Yamazaki, Y. Amamiya, S. Wada, H. Uchida, C. Kurioka, S. Tanaka
and H. Hida: "120-Gb/s Multiplexing and 110-Gb/s Demultiplexing ICs", in IEEE Journal
of Solid-State Circuits, December 2004, Vol. 39, No. 12, pp. 2397-2401.
K. Ishii, H. Nosaka, K. Sano, K. Murata, M. Ida, K. Kurishima, M. Hirata, T. Shibata and
T. Enoki: ”High-bit-rate low-power decision circuit using InP-InGaAs HBT technology”,
in IEEE Journal of Solid-State Circuits, July 2005, Vol. 40, No. 7, p. 1583- 1588.
R.-E. Makon, R. Driad, K. Schneider, M. Ludwig, R. Aidam, R. Quay, M. Schlechtweg,
and G. Weimann: “80 Gbit/s Monolithically Integrated Clock and Data Recovery Circuit
With 1:2 DEMUX using InP-based DHBTs”, in CISC 2005 digest, p. 268-271.
A. Konczykowska, F. Jorge, A. Kasbari, W. Idler, L. Giraudet, K. Schuh, B. Junginger
and J. Godin: ”High-sensitivity decision circuit in InP/InGaAs DHBT technology and 40-
80 Gbit/s optical experiments”, in IEE Electronics Letters, October 2003, Vol. 39, No.
21, p. 1532-1533.
U. Dümler, M. Möller, A. Bielik, T. Ellermeyer, H. Langenhagen, W. Walthes and J.
Mejri: ”86 Gbit/s SiGe receiver module with high sensitivity for 160×86 Gbit/s DWDM
system”, in IEE Electronics Letters, January 2006, Vol. 42, No. 1, p. 21-22.
R.H. Derksen, G. Lehmann, C.-J. Weiske, C. Schubert, R. Ludwig, S. Ferber, C.
Schmidt-Langhorst, M. Möller and J. Lutz: "Integrated 100 Gbit/s ETDM Receiver in a
Transmission Experiment over 480 km DMF", in Proc. OFC 2006, post-deadline paper
K. Schuh, E. Lach, B. Junginger: “100 Gbit/s ETDM transmission system based on
electronic multiplexing transmitter and demultiplexing receiver”, in Proc. ECOC 2006,
paper We3.P.124
I. D. Phillips, A. D. Ellis, T. Widdowson, D. Nesset, A. E. Kelly and D. Trommer: "80
Gbit/s optical clock recovery using an electrical phase locked loop, and commercially
available components.", in Proc. OFC 2000, Baltimore, post-deadline paper ThP4.
I. D. Phillips, A. D. Ellis, T. Widdowson, D. Nesset, A. E. Kelly and D. Trommer:
"100 Gbit/s optical clock recovery using an electrical phase locked loop consisting of
commercially available components.", IEE Electronics Letters March 2000, Vol. 36, No.
7, p. 650-652.
K. Schuh, B. Junginger, E. Lach, A. Klekamp and E. Schlag: ”85.4 Gbit/s ETDM
receiver with full rate electronic clock recovery circuit", Proc. ECOC 2004, post-deadline paper Th4.1.1.
P. J. Winzer, G. Raybon, and C. R. Doerr: ”10 x 107 Gb/s electronically multiplexed
NRZ transmission at 0.7 bits/s/Hz over 1000 km non-zero dispersion fiber”, ECOC
2006, Tu1.5.1
C. R. S. Fludger, T. Duthel, T. Wuth, C. Schulien: “Uncompensated Transmission of
86Gbit/s Polarization Multiplexed RZ-QPSK over 100km of NDSF Employing Coherent
Equalisation”, ECOC 06 PDP Th4.3.3
T. Kawanishi, K. Higuma, T. Fujita, S. Mori, S. Oikawa, J. Ichikawa, T. Sakamoto, M
Izutsu: ”40Gbit/s Versatile LiNbO3 Lightwave Modulator”, ECOC 2005, Th2.2.6
P. J. Winzer, G. Raybon, C. R. Doerr, L. L. Buhl, T. Kawanishi, T. Sakamoto, M. Izutsu,
K. Higuma: ”2000-km WDM Transmission of 10 x 107-Gb/s RZ-DQPSK”, ECOC 2006,
M. Daikoku, I. Morita, H. Taga, H. Tanaka, T. Kawanishi, T. Sakamoto, T. Miyazaki, T.
Fujita: ”100Gbit/s DQPSK Transmission Experiment without OTDM for 100G Ethernet
Transport”, OFC 2006, PDP36
H. N. Klein et al.: “1.55μm Mach-Zehnder Modulators on InP for optical 40/80 Gbit/s
transmission networks”, TuA2.4, Proceed. 18th Int. Conf. InP and Related Materials,
IPRM, Princeton, 7-11 May 2006
K. Tsuzuki et al.: “40 Gbit/s n–i–n InP Mach–Zehnder modulator with a π voltage of 2.2
V”, IEE Elect. Lett., vol. 39(20), 1464- 1466, 2003
S. Akiyama et al.: “40 Gb/s InP-based Mach-Zehnder modulator with a driving voltage
of 3 Vpp”, ThA1-4, Proceed. 16th Int. Conf. InP and Related Materials, IPRM,
Kagoshima, 31 May-4 June 2004
U. Westergren et al.: “Travelling-wave Electroabsorption Modulators for Integrated
Transmitters at 100Gb/s and Beyond”, Tu3.1.3, 30th European Conference on Optical
Communication, ECOC, Stockholm, 5-9 September 2004
Y. Muramoto, K. Yoshino, S. Kodama, Y. Hirota, H. Ito and T. Ishibashi: “100 and
160 Gbit/s operation of uni-travelling-carrier photodiode module,” Electron. Lett., 40 (6),
pp. 378-379, 2004.
A. Beling, D. Schmidt, H.-G. Bach, G. G. Mekonnen, R. Ziegler, V. Eisner, M. Stollberg,
G. Jacumeit, E. Gottwald, C.-J. Weiske, A. Umbach: "High-power 1550 nm twin-photodetector modules with 45 GHz bandwidth based on InP," Tech. Digest Optical
Fiber Communication (OFC 2002), pp. 274–275.
A. Beling, H.-G. Bach, D. Schmidt, G.G. Mekonnen, R. Ludwig, S. Ferber, C. Schubert,
C. Boerner, B. Schmauss, J. Berger, C. Schmidt, U. Troppenz and H. G. Weber:
“Monolithically integrated balanced photodetector and its application in OTDM
160 Gbit/s DPSK transmission,” Electron. Lett., 39 (16), pp. 1204-1205, 2003.
K. Kato, A. Kozen, Y. Muramoto, Y. Itaya, T. Nagatsuma, M. Yaita: “110-GHz, 50%efficiency mushroom-mesa waveguide p-i-n photodiode for a 1.55-μm wavelength,”
IEEE Photon. Technol. Lett., vol. 6, no. 6, pp. 719–721, June 1994.
H.-G. Bach, A. Beling, G. G. Mekonnen, R. Kunkel, D. Schmidt, W. Ebert, A. Seeger,
M. Stollberg, and W. Schlaak: “InP-Based Waveguide-Integrated Photodetector with
100 GHz Bandwidth,” IEEE J. Selected Topics Quantum Electron., vol. 10, no.4, pp.
668–672, July/August 2004.
A. Beling, H.-G. Bach, G. G. Mekonnen, T. Eckhardt, R. Kunkel, D. Schmidt, and
C. Schubert: “Highly Efficient PIN Photodetector Module for 80 Gbit/s and Beyond,” in
Tech. Digest Optical Fiber Communication (OFC 2005), paper OFM1.
Beling A., H.-G. Bach, G. G. Mekonnen, R. Kunkel, D. Schmidt: “High-Speed
Miniaturized Photodiodes and Parallel-fed Traveling Wave Photodetector based on
InP”, IEEE Journal of Selected Topics in Quantum Electronics, special issue on HighSpeed Photonic Integration, Vol. 13, No. 1 (2007), pp. 15-21
A. Beling, H.-G. Bach, G. G. Mekonnen, R. Kunkel, and D. Schmidt: “Miniaturized
Waveguide-Integrated p-i-n Photodetector With 120-GHz Bandwidth and High
Responsivity” , IEEE Photonics Technology Letters, VOL. 17, NO. 10, October 2005,
pp. 2152-2154
A. Beling: “InP-Based 1.55 μm Waveguide-Integrated Photodetectors for High-Speed
Applications“, Proc. SPIE, Vol. 6123, Photonics West 2006, San Jose, CA, USA,
January 21-26, 2006 (invited).
[100] H.-G. Bach, A. Umbach, S. van Waasen, R.M. Bertenburg, G. Unterbörsch: "Ultrafast
monolithically integrated InP-based photoreceiver: OEIC-design, fabrication, and
system application," Special Issue of IEEE J. Select. Topics Quantum Electron. and
Integrated Optics (JSTQE) Vol. 2, No. 2, 1996, pp. 418 – 423.
[101] W. Schlaak, G. G. Mekonnen, H.-G. Bach, C. Bornholdt, C. Schramm, A. Umbach,
R. Steingrüber, A. Seeger, G. Unterbörsch, W. Passenberg, and P. Wolfram: “40 Gbit/s
Eyepattern of a Photoreceiver OEIC with Monolithically Integrated Spot Size
Converter”, Technical Digest, OFC 2001, Mar 17-22, 2001, Anaheim, CA, USA, paper
WQ4, pp. 1-3.
[102] G. G. Mekonnen, H.-G. Bach, W. Schlaak, R. Steingrüber, A. Seeger, W. Passenberg,
W. Ebert, G. Jacumeit, Th. Eckhardt, R. Ziegler, and A. Beling: “40 Gbit/s
Photoreceiver with DC-Coupled Output and Operation without Bias-T”, 14 Intern.
Conf. on InP and Related Materials (IPRM 2002), May 12-16, 2002, Stockholm,
Sweden, paper A8-1.
[103] G. G. Mekonnen, H.-G. Bach, A. Beling, R. Kunkel, D. Schmidt, W. Schlaak: “80-Gb/s
InP-Based Waveguide-Integrated Photoreceiver”, IEEE J. Select. Topics Quantum
Electron. vol. 11, Issue 2, 2005, pp. 356-360.
[104] J. H. Sinsky, A. Adamiecki, L. Buhl, G. Raybon, P. Winzer, O. Wohlgemuth, M. Duelk,
C. R. Doerr, A. Umbach, H.-G. Bach, D. Schmidt: "107-Gbit/s Opto-Electronic Receiver
with Hybrid Integrated Photodetector and Demultiplexer“, in Proc. OFC 2007, post
deadline paper PDP30
[105] C. R. Doerr, P. J. Winzer, G. Raybon, L. L. Buhl, M. A. Cappuzzo, A. Wong-Foy, E. Y.
Chen, L. T. Gomez, and M. Duelk: “A Single-Chip Optical Equalizer Enabling 107-Gb/s
Optical Non-Return-to-Zero Signal Generation”, ECOC 2005, Th4.2.1
[106] M. Daikoku, I. Morita, H. Taga, H. Tanaka, T. Kawanishi, T. Sakamoto, T. Miyazaki,
and T. Fujita: "100 Gbit/s DQPSK Transmission Experiment without OTDM for 100G
Ethernet Transport", OFC 2006, PDP36
[107] G. Raybon, P. J. Winzer, and C. R. Doerr: ”10 x 107-Gbit/s Electronically Multiplexed
and Optically Equalized NRZ Transmission over 400 km”, OFC 2006, PDP 32
[108] R. H. Derksen, G. Lehmann, C.-J. Weiske, C. Schubert, R. Ludwig, S. Ferber, C.
Schmidt-Langhorst, M. Möller, J. Lutz: ”Integrated 100 Gbit/s ETDM Receiver in a
Transmission Experiment over 480 km DMF”, OFC 2006, PDP 37
[109] A. Sano, H. Masuda, Y. Kisaka, S. Aisawa, E. Yoshida, Y. Miyamoto, M. Koga, K.
Hagimoto, T. Yamada, T. Furuta, and H. Fukuyama: "14-Tb/s (140 x 111-Gb/s
PDM/WDM) CSRZ-DQPSK Transmission over 160 km Using 7-THz Bandwidth
Extended L-band EDFAs", ECOC 2006, Th4.1.1
[110] C. Schubert, R. H. Derksen, M. Möller, R. Ludwig, C.-J. Weiske, J. Lutz, S. Ferber, C.
Schmidt-Langhorst: "107 Gbit/s Transmission Using An Integrated ETDM Receiver“,
ECOC 2006, Tu1.5.5
[111] P. J. Winzer, G. Raybon, C. R. Doerr, M. Duelk, and C. Dorrer: "107-Gb/s Optical
Signal Generation Using Electronic Time-Division Multiplexing", Journal of Lightwave
Technology, vol. 24, no. 8, August 2006