wireless communication notes

Directional antenna
A directional antenna or beam antenna is an antenna which radiates greater power in one or
more directions, allowing for increased performance on transmit and receive and reduced
interference from unwanted sources. Directional antennas like Yagi-Uda antennas provide
increased performance over dipole antennas when a greater concentration of radiation in a
certain direction is desired.
All practical antennas are at least somewhat directional, although usually only the direction in
the plane parallel to the earth is considered, and practical antennas can easily be omnidirectional
in one plane.
The most common types are the Yagi-Uda antenna, the log-periodic antenna, and the corner
reflector, which are frequently combined and commercially sold as residential TV antennas.
Cellular repeaters often make use of external directional antennas to give a far greater signal than
can be obtained on a standard cell phone. Satellite television receivers usually use parabolic
antennas. For long and medium wavelengths, tower arrays are used in most cases as directional
antennas.
Antenna types
Isotropic radiator
Batwing antenna
Biconical antenna
Cage aerial
Choke ring antenna
Coaxial antenna
Crossed field antenna
Dielectric resonator antenna
Discone antenna
Folded unipole antenna
Franklin antenna
Ground-plane antenna
Halo antenna
Helical antenna
J-pole antenna
Mast radiator
Monopole antenna
Random wire antenna
Rubber ducky antenna
Turnstile antenna
T2FD antenna
Umbrella antenna
Whip antenna
Adcock antenna
AS-2259 Antenna
AWX antenna
Beverage antenna
Cassegrain antenna
Collinear antenna array
Conformal antenna
Corner reflector antenna
Curtain array
Dipole antenna
Doublet antenna
Folded inverted conformal antenna
Fractal antenna
G5RV antenna
Helical antenna
Horn antenna
Horizontal curtain
HRS antenna
Inverted vee antenna
Log-periodic antenna
Loop antenna
Microstrip antenna
Offset dish antenna
Patch antenna
Phased array
Parabolic antenna
Plasma antenna
Quad antenna
Reflective array antenna
Regenerative loop antenna
Rhombic antenna
Sector antenna
Short backfire antenna
Sloper antenna
Slot antenna
Sterba antenna
Vivaldi antenna
Yagi-Uda antenna
Evolved antenna
Ground dipole
Reconfigurable antenna
Antenna Efficiency
The efficiency of an antenna relates the power delivered to the antenna and the power
radiated or dissipated within the antenna. A high efficiency antenna has most of the
power present at the antenna's input radiated away. A low efficiency antenna has most
of the power absorbed as losses within the antenna, or reflected away due to
impedance mismatch.
[Side Note: Antenna impedance is discussed in a later section. Impedance mismatch
is simply power reflected from an antenna because its impedance is not the correct
value; hence, "impedance mismatch".]
The losses associated within an antenna are typically the conduction losses (due to
finite conductivity of the antenna) and dielectric losses (due to conduction within a
dielectric which may be present within an antenna).
The antenna efficiency (or radiation efficiency) can be written as the ratio of the
radiated power to the input power of the antenna:

efficiency = P_radiated / P_input    [Equation 1]
Efficiency is ultimately a ratio, giving a number between 0 and 1. Efficiency is very
often quoted in terms of a percentage; for example, an efficiency of 0.5 is the same as
50%. Antenna efficiency is also frequently quoted in decibels (dB); an efficiency of
0.1 is 10% or (-10 dB), and an efficiency of 0.5 or 50% is -3 dB.
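These linear-to-dB conversions are easy to check with a short sketch (the helper name is ours, not from the text):

```python
import math

def efficiency_to_db(efficiency):
    """Convert a linear antenna efficiency (0..1) to decibels."""
    return 10 * math.log10(efficiency)

print(round(efficiency_to_db(0.1), 1))  # -10.0 -> 10% efficiency
print(round(efficiency_to_db(0.5), 1))  # -3.0  -> 50% efficiency
```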
Equation [1] is sometimes referred to as the antenna's radiation efficiency. This
distinguishes it from another sometimes-used term, the antenna's "total
efficiency". The total efficiency of an antenna is the radiation efficiency multiplied by
the impedance mismatch loss of the antenna when it is connected to a transmission line or
receiver (radio or transmitter). This can be summarized in Equation [2], where e_T is
the antenna's total efficiency, M_L is the antenna's loss due to impedance mismatch,
and e_R is the antenna's radiation efficiency.

e_T = M_L * e_R    [Equation 2]

Since M_L is always a number between 0 and 1, the total antenna efficiency is always
less than the antenna's radiation efficiency. Said another way, the radiation efficiency
is the same as the total antenna efficiency if there is no loss due to impedance mismatch.
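As a sketch of the total-efficiency idea, the mismatch loss can be computed from the reflection coefficient magnitude |gamma| (the function names and sample values are ours, for illustration):

```python
def mismatch_loss(gamma):
    """Fraction of incident power accepted by the antenna,
    given the reflection coefficient magnitude |gamma|."""
    return 1 - gamma ** 2

def total_efficiency(radiation_eff, gamma):
    # total efficiency = mismatch loss * radiation efficiency
    return mismatch_loss(gamma) * radiation_eff

# A 70%-efficient antenna with |gamma| = 0.5 loses a quarter
# of the incident power to reflection:
print(round(total_efficiency(0.7, 0.5), 3))  # 0.525
```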
Radiation Pattern
A radiation pattern defines the variation of the power radiated by an antenna as a function of the
direction away from the antenna. This power variation as a function of the arrival angle is
observed in the antenna's far field.
As an example, consider the 3-dimensional radiation pattern in Figure 1, plotted in decibels (dB).
Figure 1. Example radiation pattern for an Antenna (generated with FEKO software).
This is an example of a donut shaped or toroidal radiation pattern. In this case, along the z-axis,
which would correspond to the radiation directly overhead the antenna, there is very little power
transmitted. In the x-y plane (perpendicular to the z-axis), the radiation is maximum. These plots
are useful for visualizing which directions the antenna radiates.
Typically, because it is simpler, radiation patterns are plotted in 2-d. In this case, the patterns
are given as "slices" through the 3-d pattern. The same pattern from Figure 1 is plotted in Figure 2.
Standard spherical coordinates are used, where theta is the angle measured off the z-axis, and
phi is the angle measured counterclockwise off the x-axis.
Figure 2. Two-dimensional Radiation Patterns.
If you're unfamiliar with radiation patterns or spherical coordinates, it may take a while to see
that Figure 2 represents the same radiation pattern as shown in Figure 1. The radiation pattern
on the left in Figure 2 is the elevation pattern, which represents the plot of the radiation pattern
as a function of the angle measured off the z-axis (for a fixed azimuth angle). Observing Figure
1, we see that the radiation pattern is minimum at 0 and 180 degrees and becomes maximum
broadside to the antenna (90 degrees off the z-axis). This corresponds to the plot on the left in
Figure 2.
The radiation pattern on the right in Figure 2 is the azimuthal plot. It is a function of the
azimuthal angle for a fixed polar angle (90 degrees off the z-axis in this case). Since the
radiation pattern in Figure 1 is symmetrical around the z-axis, this plot appears as a constant in
Figure 2.
A pattern is "isotropic" if the radiation pattern is the same in all directions. Antennas with
isotropic radiation patterns don't exist in practice, but are sometimes discussed as a means of
comparison with real antennas.
Some antennas may also be described as "omnidirectional", which for an actual antenna means
that the radiation pattern is isotropic in a single plane (as in Figure 1 above for the x-y plane, or
the radiation pattern on the right in Figure 2). Examples of omnidirectional antennas include the
dipole antenna and the slot antenna.
The third category of antennas is "directional"; these do not have symmetry in the radiation
pattern. These antennas typically have a single peak direction in the radiation pattern; this is the
direction where the bulk of the radiated power travels. These antennas are very common;
examples of antennas with highly directional radiation patterns include the dish antenna and the
slotted waveguide antenna. An example of a highly directional radiation pattern (from a dish
antenna) is shown in Figure 3:
Figure 3. Directional Radiation Pattern for the Dish Antenna.
In summary, the radiation pattern is a plot which allows us to visualize where the antenna
transmits or receives power.
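The donut pattern of Figure 1 is characteristic of an ideal (Hertzian) dipole; assuming the textbook pattern P(theta) = sin^2(theta), the two extremes described above can be checked numerically:

```python
import math

def power_pattern(theta_deg):
    """Normalized power pattern of an ideal dipole along the z-axis:
    P(theta) = sin^2(theta), theta measured off the z-axis."""
    return math.sin(math.radians(theta_deg)) ** 2

print(power_pattern(0))   # 0.0 -> no radiation directly overhead (z-axis)
print(power_pattern(90))  # 1.0 -> maximum broadside, in the x-y plane
```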
In the field of antenna design the term radiation pattern (or antenna pattern or far-field
pattern) refers to the directional (angular) dependence of the strength of the radio waves from
the antenna or other source.[1][2][3]
Particularly in the fields of fiber optics, lasers, and integrated optics, the term radiation pattern
may also be used as a synonym for the near-field pattern or Fresnel pattern.[4] This refers to
the positional dependence of the electromagnetic field in the near-field, or Fresnel region of the
source. The near-field pattern is most commonly defined over a plane placed in front of the
source, or over a cylindrical or spherical surface enclosing it.[1][4]
The far-field pattern of an antenna may be determined experimentally at an antenna range, or
alternatively, the near-field pattern may be found using a near-field scanner, and the radiation
pattern deduced from it by computation.[1] The far-field radiation pattern can also be calculated
from the antenna shape by computer programs such as NEC. Other software, like HFSS can also
compute the near field.
The far field radiation pattern may be represented graphically as a plot of one of a number of
related variables, including: the field strength at a constant (large) radius (an amplitude pattern
or field pattern), the power per unit solid angle (power pattern), and the directive gain. Very
often, only the relative amplitude is plotted, normalized either to the amplitude on the antenna
boresight, or to the total radiated power. The plotted quantity may be shown on a linear scale, or
in dB. The plot is typically represented as a three-dimensional graph (as at right), or as separate
graphs in the vertical plane and horizontal plane. This is often known as a polar diagram.
In wireless communications, fading is deviation of the attenuation affecting a signal over certain
propagation media. The fading may vary with time, geographical position or radio frequency, and is
often modeled as a random process. A fading channel is a communication channel that experiences fading. In
wireless systems, fading may either be due to multipath propagation, referred to as multipath induced
fading, or due to shadowing from obstacles affecting the wave propagation, sometimes referred to as
shadow fading.
Key concepts
The presence of reflectors in the environment surrounding a transmitter and receiver create
multiple paths that a transmitted signal can traverse. As a result, the receiver sees the
superposition of multiple copies of the transmitted signal, each traversing a different path. Each
signal copy will experience differences in attenuation, delay and phase shift while travelling
from the source to the receiver. This can result in either constructive or destructive interference,
amplifying or attenuating the signal power seen at the receiver. Strong destructive interference is
frequently referred to as a deep fade and may result in temporary failure of communication due
to a severe drop in the channel signal-to-noise ratio.
A common example of deep fade is the experience of stopping at a traffic light and hearing an
FM broadcast degenerate into static, while the signal is re-acquired if the vehicle moves only a
fraction of a meter. The loss of the broadcast is caused by the vehicle stopping at a point where
the signal experienced severe destructive interference. Cellular phones can also exhibit similar
momentary fades.
Fading channel models are often used to model the effects of electromagnetic transmission of
information over the air in cellular networks and broadcast communication. Fading channel
models are also used in underwater acoustic communications to model the distortion caused by
the water.
Slow versus fast fading
The terms slow and fast fading refer to the rate at which the magnitude and phase change
imposed by the channel on the signal changes. The coherence time is a measure of the minimum
time required for the magnitude change or phase change of the channel to become uncorrelated
from its previous value.
Slow fading arises when the coherence time of the channel is large relative to the delay
constraint of the channel. In this regime, the amplitude and phase change imposed by the
channel can be considered roughly constant over the period of use. Slow fading can be caused
by events such as shadowing, where a large obstruction such as a hill or large building obscures
the main signal path between the transmitter and the receiver. The received power change
caused by shadowing is often modeled using a log-normal distribution with a standard deviation
according to the log-distance path loss model.
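A minimal simulation of log-normal shadowing on top of the log-distance path loss model (all numeric parameters here - reference loss, path loss exponent, sigma - are illustrative, not from the text):

```python
import math
import random

def path_loss_db(d_m, pl0_db=40.0, d0_m=1.0, n=3.5, sigma_db=8.0, rng=random):
    """Log-distance path loss plus a zero-mean Gaussian (in dB)
    shadowing term, i.e. log-normal shadowing."""
    return pl0_db + 10 * n * math.log10(d_m / d0_m) + rng.gauss(0.0, sigma_db)

rng = random.Random(1)
samples = [path_loss_db(200.0, rng=rng) for _ in range(20000)]
mean_db = sum(samples) / len(samples)

# The sample mean should sit near the deterministic log-distance value:
print(round(40.0 + 35 * math.log10(200.0), 1))  # 120.5 dB
print(round(mean_db, 1))                        # close to 120.5 dB
```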
Fast fading occurs when the coherence time of the channel is small relative to the delay
constraint of the channel. In this regime, the amplitude and phase change imposed by the
channel varies considerably over the period of use.
In a fast-fading channel, the transmitter may take advantage of the variations in the channel
conditions using time diversity to help increase robustness of the communication to a temporary
deep fade. Although a deep fade may temporarily erase some of the information transmitted, use
of an error-correcting code coupled with successfully transmitted bits during other time instances
(interleaving) can allow for the erased bits to be recovered. In a slow-fading channel, it is not
possible to use time diversity because the transmitter sees only a single realization of the channel
within its delay constraint. A deep fade therefore lasts the entire duration of transmission and
cannot be mitigated using coding.
The coherence time of the channel is related to a quantity known as the Doppler spread of the
channel. When a user (or reflectors in its environment) is moving, the user's velocity causes a
shift in the frequency of the signal transmitted along each signal path. This phenomenon is
known as the Doppler shift. Signals traveling along different paths can have different Doppler
shifts, corresponding to different rates of change in phase. The difference in Doppler shifts
between different signal components contributing to a single fading channel tap is known as the
Doppler spread. Channels with a large Doppler spread have signal components that are each
changing independently in phase over time. Since fading depends on whether signal components
add constructively or destructively, such channels have a very short coherence time.
In general, coherence time is inversely related to Doppler spread, typically expressed as

T_c ≈ 1 / D_s

where T_c is the coherence time and D_s is the Doppler spread. This equation is just an
approximation; to be exact, see Coherence time.
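A quick numeric sketch of the T_c ≈ 1/D_s rule of thumb, using the maximum Doppler shift f_d = v·f_c/c for a moving user (the speed and carrier frequency are chosen for illustration):

```python
def doppler_spread_hz(speed_mps, carrier_hz, c=3.0e8):
    # Maximum Doppler shift: f_d = v * f_c / c
    return speed_mps * carrier_hz / c

def coherence_time_s(doppler_hz):
    # Rule-of-thumb approximation: T_c ~ 1 / D_s
    return 1.0 / doppler_hz

fd = doppler_spread_hz(30.0, 900e6)  # 30 m/s (108 km/h) at 900 MHz
print(round(fd, 1))                          # 90.0 Hz
print(round(coherence_time_s(fd) * 1e3, 1))  # 11.1 ms
```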
Selective fading
Selective fading or frequency selective fading is a radio propagation anomaly caused by partial
cancellation of a radio signal by itself — the signal arrives at the receiver by two different paths,
and at least one of the paths is changing (lengthening or shortening). This typically happens in
the early evening or early morning as the various layers in the ionosphere move, separate, and
combine. The two paths can both be skywave or one be groundwave.
Selective fading manifests as a slow, cyclic disturbance; the cancellation effect, or "null", is
deepest at one particular frequency, which changes constantly, sweeping through the received
passband.
As the carrier frequency of a signal is varied, the magnitude of the change in amplitude will vary.
The coherence bandwidth measures the separation in frequency after which two signals will
experience uncorrelated fading.
In flat fading, the coherence bandwidth of the channel is larger than the bandwidth of the
signal. Therefore, all frequency components of the signal will experience the same magnitude of
fading.
In frequency-selective fading, the coherence bandwidth of the channel is smaller than the
bandwidth of the signal. Different frequency components of the signal therefore experience
uncorrelated fading.
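The flat vs. frequency-selective distinction can be sketched with the usual rule of thumb B_c ≈ 1/(delay spread) (the numbers below are illustrative):

```python
def is_flat_fading(signal_bw_hz, delay_spread_s):
    """Flat fading if the signal bandwidth fits inside the coherence
    bandwidth, approximated as B_c ~ 1 / (delay spread)."""
    coherence_bw_hz = 1.0 / delay_spread_s
    return signal_bw_hz < coherence_bw_hz

# 200 kHz narrowband signal, 1 us urban delay spread -> B_c ~ 1 MHz:
print(is_flat_fading(200e3, 1e-6))  # True  (flat fading)
# A 20 MHz wideband signal over the same channel:
print(is_flat_fading(20e6, 1e-6))   # False (frequency-selective)
```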
Fading is a key consideration in any RF design; let us examine what fading is and how it
affects a link.
The communication between the base station and mobile station in mobile systems is
mostly non-LOS.
The LOS path between the transmitter and the receiver is affected by terrain and
obstructed by buildings and other objects.
The mobile station is also moving in different directions at different speeds.
The RF signal from the transmitter is scattered by reflection and diffraction and reaches
the receiver through many non-LOS paths.
These non-LOS paths cause long-term and short-term fluctuations in the form of log-normal fading and Rayleigh and Rician fading, which degrade the performance of the RF link.
Terrain configuration and the man-made environment cause long-term fading.
Due to various shadowing and terrain effects, the signal level measured on a circle around the
base station shows random fluctuations around the mean value of the received signal strength.
The long-term fades in signal strength, r, caused by the terrain configuration and man-made
environments form a log-normal distribution, i.e. the mean received signal strength,
r, varies log-normally in dB when the signal strength is measured over a sufficiently long
distance.
Experimentally it has been determined that the standard deviation, s, of the mean
received signal strength, r, lies between 8 and 12 dB, with the higher values of s generally
found in large urban areas.
This phenomenon is due to multipath propagation of the signal.
Rayleigh fading is applicable to obstructed propagation paths.
All the signals are NLOS signals and there is no dominant direct path.
Signals from all paths have comparable signal strengths.
The instantaneous received power seen by a moving antenna becomes a random variable
depending on the location of the antenna.
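A minimal Monte-Carlo sketch of a Rayleigh envelope: with no dominant path, the received field is modeled as a complex Gaussian, and the envelope |X + jY| is Rayleigh distributed (the parameter values and function name are ours):

```python
import math
import random

def rayleigh_envelope(n, sigma=1.0, seed=0):
    """Envelope samples |X + jY| with X, Y ~ N(0, sigma^2), modeling
    many comparable-strength NLOS paths and no dominant component."""
    rng = random.Random(seed)
    return [math.hypot(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            for _ in range(n)]

env = rayleigh_envelope(100000)
mean = sum(env) / len(env)
# Theoretical mean of a Rayleigh variable: sigma * sqrt(pi/2) ~ 1.2533
print(round(mean, 2))
```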
Rician fading is likewise due to multipath propagation of the signal, but in this case there is
a partially scattered field: one dominant signal, with the other components weaker.
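A matching sketch for the Rician case: one dominant component of amplitude A plus weaker scattered components, with K = A^2/(2·sigma^2) the Rician K-factor (names and values are ours); K = 0 collapses back to Rayleigh:

```python
import math
import random

def rician_envelope(n, k_factor, sigma=1.0, seed=0):
    """Envelope |A + X + jY|: a dominant component of amplitude A
    plus scattered Gaussian components; K = A^2 / (2 sigma^2)."""
    rng = random.Random(seed)
    a = math.sqrt(2.0 * k_factor) * sigma  # dominant-path amplitude
    return [math.hypot(a + rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            for _ in range(n)]

# Large K -> little fading, envelope clusters near the dominant amplitude:
env = rician_envelope(50000, k_factor=10.0)
print(round(sum(env) / len(env), 1))  # near sqrt(2 * 10) ~ 4.5
```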
Understanding Cellular Phone Technology
Cellular phones work much the same way as do the other wireless devices we’ve been
discussing. Signals carrying voice, text, and digital data are transmitted via radio waves from one
device to another. In the case of cellular networks, the data is transmitted not to a central hub in a
small network of devices (as it is with Wi-Fi) or even directly from device to device (as it is with
Bluetooth), but through a global network of transmitters and receivers.
Cells in a Network
What’s interesting about mobile phone networks is their cellular design. (Hence the terms
“cellular network” and “cellular phone.”) By that, I mean that a mobile phone network is divided
into thousands of overlapping geographic areas, or cells. A typical cellular network can be
envisioned as a mesh of hexagonal cells, as shown in Figure 4.1, each with its own base station
at the center. The cells slightly overlap at the edges to ensure that users always remain within
range of a base station. (You don't want a dropped call when you're driving between base
stations.)
Figure 4.1. Cells in a cellular network.
The cells in a cellular network vary in size, depending on how many calls are conducted within
that geographic area. The smallest cells, which might cover only a few city blocks, are those
where there’s the heaviest population density, and thus the largest demand for service. The
largest cells are most often in rural areas with a smaller population per square mile.
The base station at the center of each group of cells functions as the hub for those cells—not of
the entire network, but of that individual piece of the network. RF signals are transmitted by an
individual phone and received by the base station, where they are then re-transmitted from the
base station to another mobile phone. Transmitting and receiving are done over two slightly
different frequencies.
Base stations are connected to one another via central switching centers which track calls and
transfer them from one base station to another as callers move between cells; the handoff is
(ideally) seamless and unnoticeable. Each base station is also connected to the main telephone
network, and can thus relay mobile calls to landline phones.
Carrying a Two-Way Radio
All this transmission within a cellular network originates with the handheld cell phone. A mobile
phone is actually a two-way radio, containing both a low-power transmitter (to transmit data) and
a receiver (to receive data).
When I say low power, I mean low power—really low power. The typical cell phone includes a
dual-strength transmitter, capable of transmitting either 0.6-watt or 3-watt signals. In
comparison, a larger AM radio station will typically broadcast a 50,000-watt signal; even smaller
AM stations broadcast 5,000-watt signals. A cell phone’s 3-watt signal is puny in comparison.
The reason mobile phones can get by with such low-power transmitters is that they’re
transmitting within a relatively limited range—within the current network cell. It’s not necessary
or desirable for a phone’s signal to extend beyond the current cell; this way, the same broadcast
frequencies can be used by multiple cells without danger of interference.
How do cell phones work?
You've seen them, perhaps you've used one. Each day something like 100,000 people
around the world sign up for and start using a cellular phone. Therefore it is likely that
you or someone you know has a cell phone and uses it on a regular basis. They are
such great gadgets - with a cell phone you can talk to anyone on the planet from just
about anywhere (80% of the U.S. has coverage).
But have you ever wondered how a cell phone works? Or what happens when you talk
on one? Or how a call gets routed to a regular phone? We will discuss the technology
behind cell phones so that you can see how amazing they really are.
One of the most interesting things about a cell phone is that it is really a radio - an
extremely sophisticated radio, but a radio nonetheless. A good way to understand the
sophistication of a cell phone is to compare it to a CB radio or a walkie-talkie. A CB
radio is a simplex device. That is, two people communicating on a CB radio use the
same frequency, so only one person can talk at a time. A cell phone is a duplex device,
so it uses one frequency for talking and a second, separate frequency for listening. A
CB radio has 40 channels. A cell phone can communicate on 1,664 channels. Cell
phones also operate within cells and they can switch cells as they move around. Cells
give cell phones incredible range. A walkie-talkie can transmit perhaps a mile. A CB
radio, because it has much higher power, can transmit perhaps 5 miles. Someone using
a cell phone, on the other hand, can drive clear across a city and maintain a
conversation the entire time. Cells are what give a cell phone its incredible range.
The MAC level (link layer)
This section of the document focuses on the next layer up, the link layer. This mostly comprises the MAC
(Medium Access Control) protocol. Different MAC protocols and techniques are presented.
5.1 Main channel access mechanisms
The main job of the MAC protocol is to regulate the usage of the medium, and this is done through a
channel access mechanism. A channel access mechanism is a way to divide the main resource, the
radio channel, between the nodes by regulating its use. It tells each node when it can transmit and when it
is expected to receive data. The channel access mechanism is the core of the MAC protocol. In this
section, we describe TDMA, CSMA and polling, which are the three main classes of channel access
mechanisms for radio.
5.1.1 TDMA
In this chapter, we discuss TDMA as a channel access mechanism and not its applications and protocols
based on it.
TDMA (Time Division Multiple Access) is very simple. A specific node, the base station, has
the responsibility to coordinate the nodes of the network. The time on the channel is divided into
time slots, which are generally of fixed size. Each node of the network is allocated a certain
number of slots where it can transmit. Slots are usually organised in a frame, which is repeated
on a regular basis.
The base station specifies in the beacon (a management frame) the organisation of the frame. Each
node just needs to blindly follow the instructions of the base station. Very often, the frame is
organised into downlink (base station to node) and uplink (node to base station) slots, and all the
communications go through the base station. A service slot allows a node to request the
allocation of a connection, by sending a connection request message in it (see chapter 5.2.4). In
some standards, uplink and downlink frames are on different frequencies, and the service slots
might also be on a separate channel.
TDMA channel access mechanism:
TDMA suits phone applications very well, because those applications have very predictable needs
(a fixed and identical bit rate). Each handset is allocated a downlink and an uplink slot of a fixed
size (the size of the voice data for the duration of the frame). It is no surprise that TDMA is
used in all cellular phone standards (GSM in Europe, TDMA and PCS in the USA) and
cordless phone standards (DECT in Europe). TDMA is also very good at achieving low latency
and guaranteed bandwidth (where CSMA/CA is quite bad).
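The slot-and-frame organisation described above can be sketched as a small data structure (the slot layout and node names are invented for illustration):

```python
# One TDMA frame as announced in the beacon: fixed slots, repeated
# every frame. The layout below is a made-up example.
FRAME = [
    ("beacon",   None),       # base station announces the frame layout
    ("downlink", "node-A"),   # base station -> node-A
    ("downlink", "node-B"),
    ("uplink",   "node-A"),   # node-A -> base station
    ("uplink",   "node-B"),
    ("service",  None),       # contention slot for connection requests
]

def transmit_slots(node):
    """Slot indices in which a given node may transmit (its uplink slots)."""
    return [i for i, (kind, owner) in enumerate(FRAME)
            if kind == "uplink" and owner == node]

print(transmit_slots("node-A"))  # [3]
print(transmit_slots("node-B"))  # [4]
```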
TDMA is not well suited for data networking applications, because it is very strict and inflexible.
IP is connectionless and generates bursty traffic which is very unpredictable by nature, while
TDMA is connection oriented (so it has to suffer the overhead of creating connections for single
IP packets). TDMA uses fixed-size packets and usually a symmetric link, which doesn't suit IP
that well (variable-size packets).
TDMA is very much dependent on the quality of the frequency band. In a dedicated clean band,
as is the case for cellular phone standards, TDMA is fine. But, because of its inflexibility, and
because it doesn't really take care of what's happening on the channel, TDMA can't cope with and
adapt to the bursty interference sources found in the unlicensed bands (unless a retry mechanism
is put on top of it).
5.1.2 CSMA/CA
CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance) is the channel access mechanism used by
most wireless LANs in the ISM bands. A channel access mechanism is the part of the protocol which
specifies how the node uses the medium: when to listen, when to transmit...
The basic principles of CSMA/CA are listen before talk and contention. This is an asynchronous
message passing mechanism (connectionless), delivering a best effort service, but no bandwidth
or latency guarantee (are you still following?). Its main advantages are that it is suited for
network protocols such as TCP/IP, adapts quite well to the variable conditions of traffic and is
quite robust against interference.
CSMA/CA is fundamentally different from the channel access mechanism used by cellular
phone systems (see TDMA in chapter 5.1.1).
CSMA/CA is derived from CSMA/CD (Collision Detection), which is the basis of Ethernet. The
main difference is the collision avoidance: on a wire, the transceiver has the ability to listen
while transmitting and so to detect collisions (with a wire, all transmissions have approximately
the same strength). But, even if a radio node could listen on the channel while transmitting, the
strength of its own transmissions would mask all other signals on the air. So, the protocol can't
directly detect collisions as Ethernet does and only tries to avoid them.
CSMA/CA channel access mechanism:
The protocol starts by listening on the channel (this is called carrier sense), and if it is found to
be idle, it sends the first packet in the transmit queue. If the channel is busy (either with another
node's transmission or interference), the node waits for the end of the current transmission and then
starts the contention (waiting a random amount of time). When its contention timer expires, if the
channel is still idle, the node sends the packet. The node having chosen the shortest contention delay
wins and transmits its packet. The other nodes just wait for the next contention (at the end of this
packet). Because the contention delay is a random number drawn for every packet, each node is
given an equal chance to access the channel (on average - it is statistical).
As we have mentioned, we can't detect collisions on the radio, and because the radio needs time
to switch from receive to transmit, this contention is usually slotted (a transmission may start
only at the beginning of a slot: 50 µs in 802.11 FH and 20 µs in 802.11 DS). This makes the
average contention delay larger, but significantly reduces collisions (we can't totally avoid
them).
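The slotted contention can be illustrated with a toy simulation: every backlogged node draws a random slot, the unique earliest slot wins, and a shared earliest slot is a collision (the node and slot counts are arbitrary):

```python
import random

def contention_round(n_nodes, n_slots, rng):
    """One slotted contention: each node picks a random backoff slot;
    a unique earliest slot wins, a shared earliest slot is a collision."""
    picks = [rng.randrange(n_slots) for _ in range(n_nodes)]
    earliest = min(picks)
    return "collision" if picks.count(earliest) > 1 else "success"

rng = random.Random(0)
outcomes = [contention_round(5, 16, rng) for _ in range(20000)]
success_rate = outcomes.count("success") / len(outcomes)
print(round(success_rate, 2))  # roughly 0.85 for 5 nodes over 16 slots
```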
5.1.3 Polling MAC
Polling is the third major channel access mechanism, after TDMA and CSMA/CA (see chapter 5.1.1 and
chapter 5.1.2 respectively - there is also Token Ring, but I guess that nobody would be crazy enough
to implement it on a radio link). The most successful networking standard using polling is 100vg (IEEE
802.12), but some wireless standards also use it. For example, 802.11 offers a polling channel
access mechanism (the Point Coordination Function) in addition to the CSMA/CA one.
Polling is in fact in between TDMA and CSMA/CA. The base station retains total control over
the channel, but the frame content is no longer fixed, allowing variable-size packets to be sent.
The base station sends a specific packet (a poll packet) to trigger the transmission by the node.
The node just waits to receive a poll packet, and upon reception sends what it has to transmit.
Polling can be implemented as a connection-oriented service (very much like TDMA, but with
higher flexibility in packet size) or a connectionless service (asynchronous, packet based). The
base station can either permanently poll all the nodes of the network just to check if they have
something to send (which is workable only with a very limited number of nodes), or the protocol
can use reservation slots (see chapter 5.2.4) where each node can request a connection or request
to transmit a packet (depending on whether the MAC protocol is connection oriented or not).
Polling channel access mechanism:
In the case of 100vg, the polling mechanism doesn't use any bandwidth (it's done out of band
through tones), leading to a very efficient use of the channel (over 96% user throughput). For
802.11 and wireless LANs, all the polling packets have to be transmitted over the air, generating
much more overhead. More recent systems use reservation slots, which are more flexible but still
require significant overhead.
As CSMA/CA offers ad-hoc networking (no need for a base station) and similar performance, it is
usually preferred in most wireless LANs. For example, most 802.11 vendors prefer to use the
distributed mode (CSMA/CA) over the coordinated mode (polling).
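The poll/response exchange described above can be sketched in a few lines of Python. This is an illustrative model only: the node names, the per-node queue, and the one-packet-per-poll policy are assumptions for the sketch, not taken from any standard.

```python
from collections import deque

# Minimal sketch of poll-based channel access: the base station owns the
# channel and grants each node a turn; a node transmits only when polled.
# Node names and queue contents are invented for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.queue = deque()          # packets waiting to be sent

    def on_poll(self):
        """Called when a poll packet arrives: send one queued packet, if any."""
        return self.queue.popleft() if self.queue else None

def polling_round(nodes):
    """Base station polls every node once; returns the packets received."""
    received = []
    for node in nodes:
        pkt = node.on_poll()          # poll packet -> node answers
        if pkt is not None:
            received.append((node.name, pkt))
    return received

a, b = Node("A"), Node("B")
a.queue.extend(["pkt1", "pkt2"])
print(polling_round([a, b]))          # -> [('A', 'pkt1')]
```

Note how the base station spends a poll on node B even though B has nothing to send; that wasted poll is exactly the overhead the reservation-slot variants try to reduce.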
5.1.4 Reservation protocols and WATM
The most interesting feature of protocols based on TDMA or Polling mechanism is that the Base Station
has absolute control of the traffic and can guarantee bandwidth and latency for applications that require
it. Sceptics might wonder what can be guaranteed anyway in an environment open to interferers and
without deployment control (see chapter 4.1), but that's another topic of discussion.
The guarantee of bandwidth is essential for people deploying Wireless Distribution Systems
(also called Last Mile Delivery Systems), like replacing the cable between your house and your
ISP with wireless. Those people want to be able to restrict and segregate users and guarantee
fairness. Standards such as HiperLan II (Broadband Radio Access Network project - see
chapter 6.4) are aimed at those usages.
The basic idea is to put ATM (Asynchronous Transfer Mode) over radio, as ATM implements all
the Quality of Service features that they are dreaming of. The network is centrally managed (so
it uses a TDMA or polling mechanism with reservation slots), and the base station implements call
admission control (accepting or rejecting new ATM circuits) and a scheduler (prioritising and sending
ATM cells) to guarantee the quality of service requested. On top of the MAC, all the usual ATM layers
are needed (virtual circuits, segmentation/reassembly, IP adaptation...), as well as some specific
mobile features (to manage roaming).
Unfortunately, radio transmission has a lot of overhead (like large synchronisation fields and
headers), which is somewhat incompatible with the small ATM cells. The main benefit of ATM's
small cells is to allow very efficient switching, but this is not needed over radio. At the end of the
day, WATM doesn't resemble ATM at all: ATM uses an individual channel for each node and is
asynchronous, whereas WATM uses a shared medium and is totally synchronous.
5.2 MAC techniques
We have described the main principle of CSMA/CA (see chapter 5.1.2), but most MAC protocols use
additional techniques to improve the performance of CSMA/CA.
5.2.1 MAC retransmissions
As we have seen in the previous chapter, the main problem of the CSMA/CA protocol is that the
transmitter can't detect collisions on the medium. There is also a higher error rate on the air than on a
wire (see chapter 4.8), so a higher chance of packets being corrupted. TCP doesn't cope well with packet
losses at the MAC layer (see the TCP and packet losses problem - chapter 5.4.5). Because of that, most MAC
protocols also implement positive acknowledgement and MAC-level retransmissions to avoid losing
packets on the air.
The principle is quite simple : each time a node receives a packet, it immediately sends back a
short message (an ack) to the transmitter to indicate that it has successfully received the packet
without errors. If after sending a packet the transmitter doesn't receive an ack, it knows that the
packet was lost, so it will retransmit the packet (after contending again for the medium, like for
the initial transmission).
Most MAC protocols use a stop-and-go mechanism : they transmit the next packet of the queue
only if the current packet has been properly acknowledged (no sliding window mechanism like
in TCP). The rationale is that it makes the protocol simpler, minimises latency and avoids
re-ordering packets (something that TCP doesn't like either).
MAC retransmissions in CSMA/CA :
The acks are "embedded" in the MAC protocol, so they are guaranteed not to collide (the
contention starts after the ack - see figure). These acks are very different from the TCP acks,
which work at a different level (and on a different time frame). Of course, broadcast and
multicast packets are not acknowledged, so they are more likely to fail...
While all modern Wireless LAN protocols implement this essential feature, some older products may
lack it. Wireless WAN protocols (like satellite links) don't implement it either, because the
round-trip delay in their case is so long that by the time they would receive the ack they could
have sent another packet. If your Wireless LAN doesn't implement MAC-level retransmissions,
all is not lost : students at Berkeley have created a protocol called snoop (see at
ftp://daedalus.cs.berkeley.edu/pub/snoop/) which filters the TCP acks and retransmits the lost
packets before TCP even notices that they are lost (this is still a link-level retransmission, but
done just above the MAC).
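A minimal sketch of this stop-and-go ack/retransmit behaviour, assuming an invented random-loss channel model and an arbitrary retry limit (real MACs such as 802.11 also bound retries, but the numbers here are illustrative only):

```python
import random

# Sketch of MAC-level positive acknowledgement with stop-and-go
# retransmission: the next packet is sent only once the current one is
# acked. The lossy channel and retry limit are invented for illustration.

def send_over_lossy_channel(packet, loss_rate, rng):
    """Returns True if an ack comes back, False if packet or ack was lost."""
    return rng.random() > loss_rate

def transmit(packets, loss_rate=0.3, max_retries=7, seed=42):
    rng = random.Random(seed)
    attempts = 0
    for pkt in packets:                  # stop-and-go: one packet at a time
        for retry in range(max_retries):
            attempts += 1
            if send_over_lossy_channel(pkt, loss_rate, rng):
                break                    # ack received, move to the next packet
        else:
            raise RuntimeError(f"{pkt} dropped after {max_retries} tries")
    return attempts

print(transmit(["p1", "p2", "p3"]))      # total transmissions, retries included
```

Because the sender never moves on until the current packet is acked, packets are delivered in order, which is the de-sequencing avoidance mentioned above.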
5.2.2 Fragmentation
The radio medium has a higher error rate than a wire. We explained in the previous chapter that this
is why most products include MAC-level retransmissions to avoid losing packets.
MAC-level retransmissions solve this problem, but are not very efficient. If the packet to
transmit is long and contains only one error, the node needs to retransmit it entirely. If the error
rate is significantly high, we could end up in a situation where the probability of error in a large
packet is dangerously close to 1 (we can't fit a packet between the bursts of errors due to fading
or interferers), so we can't get any packet through.
This is why some products use fragmentation, which consists of sending a big packet in small
pieces over the medium. Of course, this adds some overhead, because it duplicates the packet
headers in every fragment. Each fragment is individually checked and retransmitted if
necessary. The first advantage is that in case of error, the node only needs to retransmit one small
fragment, which is faster. The second advantage is that if the medium is very noisy, a small packet
has a higher probability of getting through without errors, so the node increases its chance of success
in bad conditions.
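The second advantage can be made concrete with a little arithmetic. Assuming independent bit errors at rate BER (a simplification; real radio errors are bursty), an n-bit frame survives intact with probability (1 - BER)^n, so shorter fragments fare much better. The BER and frame sizes below are illustrative, not from any standard.

```python
# With an independent bit error rate BER, an n-bit frame gets through
# intact with probability (1 - BER)**n. Splitting a long packet into
# fragments raises each piece's success probability, at the cost of
# duplicated headers. Numbers below are illustrative.

def frame_success(bits, ber):
    return (1 - ber) ** bits

ber = 1e-4                                # a plausibly noisy radio link
whole = frame_success(12000, ber)         # 1500-byte packet in one frame
fragment = frame_success(2000, ber)       # one 250-byte fragment

print(f"whole packet : {whole:.3f}")      # ~0.30
print(f"one fragment : {fragment:.3f}")   # ~0.82
```

With these numbers the unfragmented packet fails more than two times out of three, while each fragment succeeds over 80 % of the time, so far less airtime is wasted per retransmission.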
5.2.3 RTS/CTS
In the chapter about range (chapter 4.6), we saw that the main effect of transmission on radio
waves is the attenuation of the signal. Because of this attenuation, we very commonly have a problem
of hidden nodes.
The hidden node problem comes from the fact that all nodes may not hear each other because the
attenuation is too strong between them. Because transmissions are based on the carrier sense
mechanism, those nodes ignore each other and may transmit at the same time. Usually, this is a
good thing because it allows frequency reuse (they are effectively in different cells).
But, for a node placed in between, these simultaneous transmissions have a comparable strength
and so collide (in its receiver). This node could be impossible to reach because of these
collisions.
The fundamental problem with carrier sense is that the transmitter can only sense the medium at
its own location, whereas collisions happen at the receiver. RTS/CTS works around this: the
transmitter first sends a short Request To Send and waits for the receiver to answer with a Clear
To Send before transmitting the data, so nodes in range of the receiver (including hidden nodes)
hear the CTS and defer.
Fundamentals of Communications Access
Technologies: FDMA, TDMA, CDMA, OFDMA, and SDMA
Access methods are multiplexing techniques that provide communications services to multiple users in a
single-bandwidth wired or wireless medium. Communications channels, whether they’re wireless
spectrum segments or cable connections, are expensive. Communications services providers must
engage multiple paid users over limited resources to make a profit. Access methods allow many users to
share these limited channels to provide the economy of scale necessary for a successful communications
business. There are five basic access or multiplexing methods: frequency division multiple access
(FDMA), time division multiple access (TDMA), code division multiple access (CDMA), orthogonal
frequency division multiple access (OFDMA), and spatial division multiple access (SDMA).
FDMA is the process of dividing one channel or bandwidth into multiple individual bands, each
for use by a single user (Fig. 1). Each individual band or channel is wide enough to
accommodate the signal spectra of the transmissions to be propagated. The data to be transmitted
is modulated on to each subcarrier, and all of them are linearly mixed together.
1. FDMA divides the shared medium bandwidth into individual channels. Subcarriers modulated
by the information to be transmitted occupy each subchannel.
The best example of this is the cable television system. The medium is a single coax cable that is
used to broadcast hundreds of channels of video/audio programming to homes. The coax cable
has a useful bandwidth from about 4 MHz to 1 GHz. This bandwidth is divided up into 6-MHz
wide channels. Initially, one TV station or channel used a single 6-MHz band. But with digital
techniques, multiple TV channels may share a single band today thanks to compression and
multiplexing techniques used in each channel.
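The cable TV figures above translate directly into a channel count. This is a toy calculation; real channel plans reserve parts of this spectrum for other purposes.

```python
# FDMA channel budgeting for the cable TV example above: a medium
# usable from 4 MHz to 1 GHz carved into 6-MHz-wide channels.

def fdma_channel_count(low_mhz, high_mhz, channel_mhz):
    return int((high_mhz - low_mhz) // channel_mhz)

print(fdma_channel_count(4, 1000, 6))   # -> 166
```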
This technique is also used in fiber optic communications systems. A single fiber optic cable has
enormous bandwidth that can be subdivided to provide FDMA. Different data or information
sources are each assigned a different light frequency for transmission. Light generally isn’t
referred to by frequency but by its wavelength (λ). As a result, fiber optic FDMA is called
wavelength division multiple access (WDMA) or just wavelength division multiplexing (WDM).
One of the older FDMA systems is the original analog telephone system, which used a hierarchy
of frequency multiplexing techniques to put multiple telephone calls on a single line. The analog
300-Hz to 3400-Hz voice signals were used to modulate subcarriers in 12 channels from 60 kHz to
108 kHz. Modulator/mixers created single sideband (SSB) signals, both upper and lower
sidebands. These subcarriers were then further frequency multiplexed on subcarriers in the
312-kHz to 552-kHz range using the same modulation methods. At the receiving end of the system,
the signals were sorted out and recovered with filters and demodulators.
Original aerospace telemetry systems used an FDMA system to accommodate multiple sensor
data on a single radio channel. Early satellite systems shared individual 36-MHz bandwidth
transponders in the 4-GHz to 6-GHz range with multiple voice, video, or data signals via FDMA.
Today, all of these applications use TDMA digital techniques.
TDMA is a digital technique that divides a single channel or band into time slots. Each time slot
is used to transmit one byte or another digital segment of each signal in sequential serial data
format. This technique works well with slow voice data signals, but it’s also useful for
compressed video and other high-speed data.
A good example is the widely used T1 transmission system, which has been used for years in the
telecom industry. T1 lines carry up to 24 individual voice telephone calls on a single line (Fig.
2). Each voice signal usually covers 300 Hz to 3000 Hz and is digitized at an 8-kHz rate, which
is just a bit more than the minimal Nyquist rate of two times the highest-frequency component
needed to retain all the analog content.
2. This T1 digital telephony frame illustrates TDM and TDMA. Each time slot is allocated to one
user. The high data rate makes the user unaware of the lack of simultaneity.
The digitized voice appears as a stream of serial bytes at a 64-kbit/s rate, and 24 of these
byte streams are interleaved, producing one T1 frame of 192 bits with an aggregate payload rate
of 1.536 Mbits/s (24 by 64 kbits/s). A single synchronizing bit is added to each frame for timing
purposes, for an overall data rate of 1.544 Mbits/s. At the receiving end, the individual voice
streams are recovered at the 64-kbit/s rate and passed through a digital-to-analog converter (DAC)
that reproduces the analog voice.
The basic GSM (Global System for Mobile Communications) cellular phone system is TDMA-based.
It divides up the radio spectrum into 200-kHz bands and then uses time division
techniques to put eight voice calls into one channel. Figure 3 shows one frame of a GSM TDMA
signal. The eight time slots can be voice signals or data such as texts or e-mails. The frame is
transmitted at a 270-kbit/s rate using Gaussian minimum shift keying (GMSK), which is a form
of frequency shift keying (FSK) modulation.
3. This GSM digital cellular method shows how up to eight users can share a 200-kHz channel in
different time slots within a frame of 1248 bits.
CDMA is another pure digital technique. It is also known as spread spectrum because it takes the
digitized version of an analog signal and spreads it out over a wider bandwidth at a lower power
level. This method is called direct sequence spread spectrum (DSSS) as well (Fig. 4). The
digitized and compressed voice signal in serial data form is spread by processing it in an XOR
circuit along with a chipping signal at a much higher frequency. In the IS-95 CDMA standard, a
1.2288-Mbit/s chipping signal spreads the digitized compressed voice at 13 kbits/s.
4. Spread spectrum is the technique of CDMA. The compressed and digitized voice signal is
processed in an XOR logic circuit along with a higher-frequency coded chipping signal. The
result is that the digital voice is spread over a much wider bandwidth that can be shared with
other users using different codes.
The chipping signal is derived from a pseudorandom code generator that assigns a unique code
to each user of the channel. This code spreads the voice signal over a bandwidth of 1.25 MHz.
The resulting signal is at a low power level and appears more like noise. Many such signals can
occupy the same channel simultaneously. For example, using 64 unique chipping codes allows
up to 64 users to occupy the same 1.25-MHz channel at the same time. At the receiver, a
correlating circuit finds and identifies a specific caller’s code and recovers it.
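The XOR spread/despread round trip described above is easy to demonstrate. The 8-chip code below is a toy stand-in for the much longer codes real systems like IS-95 use.

```python
# Sketch of direct-sequence spreading: each data bit is XORed with a
# higher-rate chipping code; XORing with the same code again recovers
# the original bit. The 8-chip code is a toy example, far shorter than
# real chipping codes.

CODE = [0, 1, 1, 0, 1, 0, 0, 1]        # one user's chipping code

def spread(bits, code):
    """Replace each data bit with len(code) chips (bit XOR chip)."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    """Recover the data bits by XORing each chip block with the code."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        block = [ch ^ c for ch, c in zip(chips[i:i + n], code)]
        bits.append(block[0])          # all chips in the block agree
    return bits

data = [1, 0, 1, 1]
chips = spread(data, CODE)
print(len(chips))                      # 4 bits -> 32 chips (8x bandwidth)
print(despread(chips, CODE) == data)   # -> True
```

The 8x expansion in chip count is exactly the bandwidth spreading the text describes; a receiver XORing with a *different* user's code would get noise-like output instead of the data.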
The third generation (3G) cell-phone technology called wideband CDMA (WCDMA) uses a
similar method with compressed voice and 3.84-Mbit/s chipping codes in a 5-MHz channel to
allow multiple users to share the same band.
OFDMA is the access technique used in Long-Term Evolution (LTE) cellular systems to
accommodate multiple users in a given bandwidth. Orthogonal frequency division multiplexing
(OFDM) is a modulation method that divides a channel into multiple narrow orthogonal bands
that are spaced so they don’t interfere with one another. Each band is divided into hundreds or
even thousands of 15-kHz wide subcarriers.
The data to be transmitted is divided into many lower-speed bit streams and modulated onto the
subcarriers. Time slots within each subchannel data stream are used to package the data to be
transmitted (Fig. 5). This technique is very spectrally efficient, so it provides very high data
rates. It also is less affected by multipath propagation effects.
5. OFDMA assigns a group of subcarriers to each user. The subcarriers are part of the large
number of subcarriers used to implement OFDM for LTE. The data may be voice, video, or
something else, and it’s assembled into time segments that are then transmitted over some of the
assigned subcarriers.
To implement OFDMA, each user is assigned a group of subchannels and related time slots. The
smallest group of subchannels assigned is 12 and called a resource block (RB). The system
assigns the number of RBs to each user as needed.
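The resource-block arithmetic works out as follows; the 10-MHz / 50-RB example is a standard LTE configuration, used here only to check the numbers.

```python
# LTE-style resource-block arithmetic: 15-kHz subcarriers grouped
# 12 at a time into resource blocks (RBs).

SUBCARRIER_KHZ = 15
SUBCARRIERS_PER_RB = 12

rb_width_khz = SUBCARRIER_KHZ * SUBCARRIERS_PER_RB
print(rb_width_khz)                    # -> 180 (kHz per resource block)

# The 50 RBs of a 10-MHz LTE carrier occupy 9 MHz; the rest is guard band.
print(50 * rb_width_khz / 1000)        # -> 9.0 (MHz)
```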
SDMA uses physical separation methods that permit the sharing of wireless channels. For
instance, a single channel may be used simultaneously if the users are spaced far enough from
one another to avoid interference. Known as frequency reuse, the method is widely used in
cellular radio systems. Cell sites are spaced from one another to minimize interference.
In addition to spacing, directional antennas are used to avoid interference. Most cell sites use
three antennas to create 120° sectors that allow frequency sharing (Fig. 6a). New technologies
like smart antennas or adaptive arrays use dynamic beamforming to shrink signals into narrow
beams that can be focused on specific users, excluding all others (Fig. 6b).
6. SDMA separates users on shared frequencies by isolating them with directional antennas.
Most cell sites have three antenna arrays to separate their coverage into isolated 120° sectors
(a). Adaptive arrays use beamforming to pinpoint desired users while ignoring any others on the
same frequency (b).
One unique variation of SDMA, polarization division multiple access (PDMA), separates signals
by using different polarizations of the antennas. Two different signals then can use the same
frequency, one transmitting a vertically polarized signal and the other transmitting a horizontally
polarized signal.
The signals won’t interfere with one another even if they’re on the same frequency because
they’re orthogonal and the antennas won’t respond to the oppositely polarized signal. Separate
vertical and horizontal receiver antennas are used to recover the two orthogonal signals. This
technique is widely used in satellite systems.
Polarization is also used for multiplexing in fiber optic systems. The new 100-Gbit/s systems use
dual polarization quadrature phase shift keying (DP-QPSK) to achieve high speeds on a single
fiber. The high-speed data is divided into two slower data streams, one using vertical light
polarization and the other horizontal light polarization. Polarization filters separate the two
signals at the transmitter and receiver and merge them back into the high-speed stream.
Other Methods
A unique and widely used method of multiple access is carrier sense multiple access with
collision detection (CSMA-CD). This is the classical access method used in Ethernet local-area
networks (LANs). It allows multiple users of the network to access the single cable for
transmission. All network nodes listen continuously. When they want to send data, they listen
first and then transmit if no other signals are on the line; each transmission is one packet or
frame, and then the process repeats. If two or more transmissions occur
simultaneously, a collision occurs. The network interface circuitry can detect a collision, and
the nodes will then wait a random time before retransmitting.
A variation of this method is called carrier sense multiple access with collision avoidance
(CSMA-CA). This method is similar to CSMA-CD. However, a special scheduling algorithm is
used to determine the appropriate time to transmit over the shared channel. While the CSMA-CD
technique is mostly used in wired networks, CSMA-CA is the preferred method in wireless
networks.
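The listen-transmit-collide-backoff cycle can be modelled very crudely. The two-station setup, the slot counts, and the round-based timing below are invented for illustration and much simpler than any real MAC.

```python
import random

# Toy model of carrier-sense contention: stations whose backoff has
# expired transmit; if more than one transmits, they collide and each
# picks a fresh random backoff before retrying.

def contend(rng, n_stations=2, max_slots=8):
    """Returns the number of rounds until exactly one station transmits."""
    rounds = 0
    backoff = [0] * n_stations
    while True:
        rounds += 1
        ready = [i for i, b in enumerate(backoff) if b == 0]
        if len(ready) == 1:
            return rounds                      # successful transmission
        # collision (or idle round): every ready station draws a new backoff
        for i in ready:
            backoff[i] = rng.randrange(max_slots)
        backoff = [max(0, b - 1) for b in backoff]

print(contend(random.Random(1)))
```

Because the stations draw independent random backoffs, repeated collisions become rapidly less likely, which is the whole point of the random-wait step described above.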
Cellular network
A cellular network or mobile network is a wireless network distributed over land areas called
cells, each served by at least one fixed-location transceiver, known as a cell site or base station.
In a cellular network, each cell uses a different set of frequencies from neighboring cells, to
avoid interference and provide guaranteed bandwidth within each cell.
When joined together these cells provide radio coverage over a wide geographic area. This
enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to
communicate with each other and with fixed transceivers and telephones anywhere in the
network, via base stations, even if some of the transceivers are moving through more than one
cell during transmission.
Cellular networks offer a number of desirable features:
More capacity than a single large transmitter, since the same frequency can be used for
multiple links as long as they are in different cells
Mobile devices use less power than with a single transmitter or satellite since the cell
towers are closer
Larger coverage area than a single terrestrial transmitter, since additional cell towers can
be added indefinitely and are not limited by the horizon
Major telecommunications providers have deployed voice and data cellular networks over most
of the inhabited land area of the Earth. This allows mobile phones and mobile computing devices
to be connected to the public switched telephone network and public Internet. Private cellular
networks can be used for research[1] or for large organizations and fleets, such as dispatch for
local public safety agencies or a taxicab company.[2]
Example of frequency reuse factor or pattern 1/4
In a cellular radio system, a land area to be supplied with radio service is divided into regular
shaped cells, which can be hexagonal, square, circular or some other regular shapes, although
hexagonal cells are conventional. Each of these cells is assigned multiple frequencies (f1 – f6)
which have corresponding radio base stations. The group of frequencies can be reused in other
cells, provided that the same frequencies are not reused in adjacent neighboring cells as that
would cause co-channel interference.
The increased capacity in a cellular network, compared with a network with a single transmitter,
comes from two things: the mobile communication switching system developed by Amos Joel of
Bell Labs, which permitted multiple callers in the same area to use the same frequency by
switching calls to the nearest available cellular tower having that frequency available; and the
fact that the same radio frequency can be reused in a different area for a completely different
transmission. If there is a single plain transmitter, only one transmission can
be used on any given frequency. Unfortunately, there is inevitably some level of interference
from the signal from the other cells which use the same frequency. This means that, in a standard
FDMA system, there must be at least a one-cell gap between cells which reuse the same frequency.
In the simple case of the taxi company, each radio had a manually operated channel selector
knob to tune to different frequencies. As the drivers moved around, they would change from
channel to channel. The drivers knew which frequency covered approximately what area. When
they did not receive a signal from the transmitter, they would try other channels until they found
one that worked. The taxi drivers would only speak one at a time, when invited by the base
station operator (this is, in a sense, time division multiple access (TDMA)).
Cell signal encoding
To distinguish signals from several different transmitters, frequency division multiple access
(FDMA) and code division multiple access (CDMA) were developed.
With FDMA, the transmitting and receiving frequencies used in each cell are different from the
frequencies used in each neighbouring cell. In a simple taxi system, the taxi driver manually
tuned to a frequency of a chosen cell to obtain a strong signal and to avoid interference from
signals from other cells.
The principle of CDMA is more complex, but achieves the same result; the distributed
transceivers can select one cell and listen to it.
Other available methods of multiplexing such as polarization division multiple access (PDMA)
and time division multiple access (TDMA) cannot be used to separate signals from one cell to
the next since the effects of both vary with position and this would make signal separation
practically impossible. Time division multiple access, however, is used in combination with
either FDMA or CDMA in a number of systems to give multiple channels within the coverage
area of a single cell.
Frequency reuse
The key characteristic of a cellular network is the ability to re-use frequencies to increase both
coverage and capacity. As described above, adjacent cells must use different frequencies,
however there is no problem with two cells sufficiently far apart operating on the same
frequency. The elements that determine frequency reuse are the reuse distance and the reuse factor.
The reuse distance, D, is calculated as

D = R √(3N)

where R is the cell radius and N is the number of cells per cluster. Cells may vary in radius from
1 to 30 kilometres (0.62 to 18.64 mi). The boundaries of the cells can also overlap between
adjacent cells and large cells can be divided into smaller cells.[4]
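Plugging illustrative numbers into the standard reuse-distance formula D = R √(3N) (the 2-km radius and 7-cell cluster below are example values, not from any particular deployment):

```python
import math

# Reuse distance: D = R * sqrt(3 * N), where R is the cell radius and
# N the number of cells per cluster.

def reuse_distance(radius_km, cells_per_cluster):
    return radius_km * math.sqrt(3 * cells_per_cluster)

# e.g. 2-km cells in the common 7-cell cluster
print(round(reuse_distance(2, 7), 2))   # -> 9.17 (km)
```

So with 2-km cells and a 7-cell cluster, two co-channel cells must keep their centres about 9 km apart.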
The frequency reuse factor is the rate at which the same frequency can be used in the network. It
is 1/K (or K according to some books) where K is the number of cells which cannot use the same
frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9
and 1/12 (or 3, 4, 7, 9 and 12 depending on notation).[5]
In the case of N sector antennas on the same base station site, each with a different direction, the base
station site can serve N different sectors. N is typically 3. A reuse pattern of N/K denotes a
further division in frequency among N sector antennas per site. Some current and historical reuse
patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM).
If the total available bandwidth is B, each cell can only use a number of frequency channels
corresponding to a bandwidth of B/K, and each sector can use a bandwidth of B/NK.
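The B/K and B/(NK) split above can be checked with example numbers; the 25-MHz total allocation is illustrative.

```python
# Bandwidth budgeting: with total bandwidth B, reuse factor 1/K and
# N sectors per site, each cell gets B/K and each sector B/(N*K).
# The 25-MHz allocation is an example value.

def per_cell_bw(total_mhz, k):
    return total_mhz / k

def per_sector_bw(total_mhz, n, k):
    return total_mhz / (n * k)

B = 25.0                                # MHz of total spectrum
print(per_cell_bw(B, 4))                # -> 6.25 (MHz per cell, K = 4)
print(per_sector_bw(B, 3, 4))           # ~2.08 MHz per sector (N = 3)
```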
Code division multiple access-based systems use a wider frequency band to achieve the same
rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse
factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites
use the same frequencies, and the different base stations and users are separated by codes rather
than frequencies. While N is shown as 1 in this example, that does not mean the CDMA cell has
only one sector, but rather that the entire cell bandwidth is also available to each sector individually.
Depending on the size of the city, a taxi system may not have any frequency-reuse in its own
city, but certainly in other nearby cities, the same frequency can be used. In a large city, on the
other hand, frequency-reuse could certainly be in use.
Recently also orthogonal frequency-division multiple access based systems such as LTE are
being deployed with a frequency reuse of 1. Since such systems do not spread the signal across
the frequency band, inter-cell radio resource management is important to coordinate resource
allocation between different cell sites and to limit the inter-cell interference. There are various
means of Inter-Cell Interference Coordination (ICIC) already defined in the standard.[6]
Coordinated scheduling, multi-site MIMO or multi-site beam forming are other examples for
inter-cell radio resource management that might be standardized in the future.
Directional antennas
Cellular telephone frequency reuse pattern. See U.S. Patent 4,144,411
Cell towers frequently use a directional signal to improve reception in higher traffic areas. In the
United States, the FCC limits omni-directional cell tower signals to 100 watts of power. If the
tower has directional antennas, the FCC allows the cell operator to broadcast up to 500 watts of
effective radiated power (ERP).[7]
Cell phone companies use this directional signal to improve reception along highways and inside
buildings like stadiums and arenas.[7] As a result, a cell phone user may be standing in sight of a
cell tower, but still have trouble getting a good signal because the directional antennas point a
different direction.[7]
Although the original cell towers were located at the centers of the cells and created an even,
omni-directional signal, a cellular map can be redrawn with the cellular telephone
towers located at the corners of the hexagons where three cells converge.[8] Each tower has three
sets of directional antennas aimed in three different directions with 120 degrees for each cell
(totaling 360 degrees) and receiving/transmitting into three different cells at different
frequencies. This provides a minimum of three channels, and three towers for each cell and
greatly increases the chances of receiving a usable signal from at least one direction.
The numbers in the illustration are channel numbers, which repeat every 3 cells. Large cells can
be subdivided into smaller cells for high volume areas.[9]
Broadcast messages and paging
Practically every cellular system has some kind of broadcast mechanism. This can be used
directly for distributing information to multiple mobiles. Commonly, for example in mobile
telephony systems, the most important use of broadcast information is to set up channels for one-
to-one communication between the mobile transceiver and the base station. This is called paging.
The three different paging procedures generally adopted are sequential, parallel and selective paging.
The details of the process of paging vary somewhat from network to network, but normally we
know a limited number of cells where the phone is located (this group of cells is called a
Location Area in the GSM or UMTS system, or Routing Area if a data packet session is
involved; in LTE, cells are grouped into Tracking Areas). Paging takes place by sending the
broadcast message to all of those cells. Paging messages can be used for information transfer.
This happens in pagers, in CDMA systems for sending SMS messages, and in the UMTS system
where it allows for low downlink latency in packet-based connections.
Movement from cell to cell and handover
In a primitive taxi system, when the taxi moved away from a first tower and closer to a second
tower, the taxi driver manually switched from one frequency to another as needed. If a
communication was interrupted due to a loss of a signal, the taxi driver asked the base station
operator to repeat the message on a different frequency.
In a cellular system, as the distributed mobile transceivers move from cell to cell during an
ongoing continuous communication, switching from one cell frequency to a different cell
frequency is done electronically without interruption and without a base station operator or
manual switching. This is called the handover or handoff. Typically, a new channel is
automatically selected for the mobile unit on the new base station which will serve it. The mobile
unit then automatically switches from the current channel to the new channel and communication
continues.
The exact details of the mobile system's move from one base station to the other vary
considerably from system to system (see the example below for how a mobile phone network
manages handover).
Mobile phone network
GSM network architecture
The most common example of a cellular network is a mobile phone (cell phone) network. A
mobile phone is a portable telephone which receives or makes calls through a cell site (base
station) or transmitting tower. Radio waves are used to transfer signals to and from the cell phone.
Modern mobile phone networks use cells because radio frequencies are a limited, shared
resource. Cell-sites and handsets change frequency under computer control and use low power
transmitters so that the usually limited number of radio frequencies can be simultaneously used
by many callers with less interference.
A cellular network is used by the mobile phone operator to achieve both coverage and capacity
for their subscribers. Large geographic areas are split into smaller cells to avoid line-of-sight
signal loss and to support a large number of active phones in that area. All of the cell sites are
connected to telephone exchanges (or switches), which in turn connect to the public telephone network.
In cities, each cell site may have a range of up to approximately 1⁄2 mile (0.80 km), while in rural
areas, the range could be as much as 5 miles (8.0 km). It is possible that in clear open areas, a
user may receive signals from a cell site 25 miles (40 km) away.
Since almost all mobile phones use cellular technology, including GSM, CDMA, and AMPS
(analog), the term "cell phone" is in some regions, notably the US, used interchangeably with
"mobile phone". However, satellite phones are mobile phones that do not communicate directly
with a ground-based cellular tower, but may do so indirectly by way of a satellite.
There are a number of different digital cellular technologies, including: Global System for
Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne,
CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution
(EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless
Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced
Network (iDEN).
Structure of the mobile phone cellular network
A simple view of the cellular mobile-radio network consists of the following:
A network of radio base stations forming the base station subsystem.
The core circuit switched network for handling voice calls and text
A packet switched network for handling mobile data
The public switched telephone network to connect subscribers to the wider telephony network
This network is the foundation of the GSM system network. There are many functions that are
performed by this network in order to make sure customers get the desired service including
mobility management, registration, call set up, and handover.
Any phone connects to the network via an RBS (Radio Base Station) at a corner of the
corresponding cell which in turn connects to the Mobile switching center (MSC). The MSC
provides a connection to the public switched telephone network (PSTN). The link from a phone
to the RBS is called an uplink while the other way is termed downlink.
Radio channels effectively use the transmission medium through the use of the following
multiplexing and access schemes: frequency division multiple access (FDMA), time division
multiple access (TDMA), code division multiple access (CDMA), and space division multiple
access (SDMA).
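Of the access schemes above, TDMA is the simplest to sketch in code: each user owns a repeating time slot within a frame, so one radio channel carries several conversations. The sketch below is purely illustrative; the user names and slot counts are invented, not taken from any standard.

```python
# A toy sketch of TDMA-style channel sharing: each user is assigned a
# repeating time slot within a frame, so one radio channel carries
# several conversations in turn. Slot counts and users are illustrative.

def tdma_schedule(users, slots_per_frame, frames):
    """Return the transmission order, frame after frame: each slot
    carries the user assigned to it (None if the slot is unused)."""
    schedule = []
    for _ in range(frames):
        for slot in range(slots_per_frame):
            schedule.append(users[slot] if slot < len(users) else None)
    return schedule

order = tdma_schedule(["A", "B", "C"], slots_per_frame=4, frames=2)
# each frame carries A, B, C and one idle slot, repeated per frame
```

FDMA would instead assign each user a fixed frequency, and CDMA a spreading code; the scheduling idea shown here is specific to TDMA.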
Cellular handover in mobile phone networks
As the phone user moves from one cell area to another cell while a call is in progress, the mobile
station will search for a new channel to attach to in order not to drop the call. Once a new
channel is found, the network will command the mobile unit to switch to the new channel and at
the same time switch the call onto the new channel.
With CDMA, multiple CDMA handsets share a specific radio channel. The signals are separated
by using a pseudonoise code (PN code) specific to each phone. As the user moves from one cell
to another, the handset sets up radio links with multiple cell sites (or sectors of the same site)
simultaneously. This is known as "soft handoff" because, unlike with traditional cellular
technology, there is no one defined point where the phone switches to the new cell.
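The code separation that makes soft handoff possible can be illustrated with a toy example. Real systems use long pseudonoise sequences; the 4-chip orthogonal codes below are a deliberate simplification to show the principle of spreading and despreading.

```python
# A toy illustration of CDMA code separation: two handsets spread their
# bits with orthogonal +/-1 codes, the signals add on the air, and each
# receiver recovers its own bit by correlating with its own code.
# Real PN codes are far longer; these 4-chip codes are illustrative.

CODE_A = [1, 1, 1, 1]
CODE_B = [1, -1, 1, -1]

def spread(bit, code):             # bit is +1 or -1
    return [bit * c for c in code]

def despread(signal, code):
    corr = sum(s * c for s, c in zip(signal, code))
    return 1 if corr > 0 else -1

# both transmissions occupy the channel at the same time
on_air = [a + b for a, b in zip(spread(1, CODE_A), spread(-1, CODE_B))]
bit_a = despread(on_air, CODE_A)   # recovers +1
bit_b = despread(on_air, CODE_B)   # recovers -1
```

Because each receiver correlates against its own code, the other phone's signal averages out to zero, which is why multiple handsets can share one radio channel.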
In IS-95 inter-frequency handovers and older analog systems such as NMT it will typically be
impossible to test the target channel directly while communicating. In this case other techniques
have to be used such as pilot beacons in IS-95. This means that there is almost always a brief
break in the communication while searching for the new channel followed by the risk of an
unexpected return to the old channel.
In cellular telecommunications, the term handover or handoff refers to the process of transferring an
ongoing call or data session from one channel connected to the core network to another channel. In
satellite communications it is the process of transferring satellite control responsibility from one earth
station to another without loss or interruption of service.
Handover or handoff
American English uses the term handoff, which is most commonly used within some American
organizations such as 3GPP2 and in American-originated technologies such as CDMA2000. In
British English the term handover is more common, and is used within international and
European organisations such as ITU-T, IETF, ETSI and 3GPP, and standardised within
European-originated standards such as GSM and UMTS. The term handover is more common
than handoff in academic research publications and literature, while handoff is slightly more
common within the IEEE and ANSI organisations.
In telecommunications there may be different reasons why a handover might be conducted:
when the phone is moving away from the area covered by one cell and entering the area
covered by another cell the call is transferred to the second cell in order to avoid call
termination when the phone gets outside the range of the first cell;
when the capacity for connecting new calls of a given cell is used up and an existing or new call
from a phone, which is located in an area overlapped by another cell, is transferred to that cell
in order to free-up some capacity in the first cell for other users, who can only be connected to
that cell;
in non-CDMA networks when the channel used by the phone becomes interfered by another
phone using the same channel in a different cell, the call is transferred to a different channel in
the same cell or to a different channel in another cell in order to avoid the interference;
again in non-CDMA networks when the user behaviour changes, e.g. when a fast-travelling user,
connected to a large, umbrella-type of cell, stops then the call may be transferred to a smaller
macro cell or even to a micro cell in order to free capacity on the umbrella cell for other
fast-traveling users and to reduce the potential interference to other cells or users (this works in
reverse too, when a user is detected to be moving faster than a certain threshold, the call can be
transferred to a larger umbrella-type of cell in order to minimize the frequency of the handovers
due to this movement);
in CDMA networks a handover (see further down) may be induced in order to reduce the
interference to a smaller neighboring cell due to the "near-far" effect even when the phone still
has an excellent connection to its current cell;
The most basic form of handover is when a phone call in progress is redirected from its current
cell (called source) to a new cell (called target). In terrestrial networks the source and the target
cells may be served from two different cell sites or from one and the same cell site (in the latter
case the two cells are usually referred to as two sectors on that cell site). Such a handover, in
which the source and the target are different cells (even if they are on the same cell site) is called
inter-cell handover. The purpose of inter-cell handover is to maintain the call as the subscriber is
moving out of the area covered by the source cell and entering the area of the target cell.
A special case is possible, in which the source and the target are one and the same cell and only
the used channel is changed during the handover. Such a handover, in which the cell is not
changed, is called intra-cell handover. The purpose of intra-cell handover is to change one
channel, which may be interfered or fading with a new clearer or less fading channel.
Types of handover
In addition to the above classification of handovers into inter-cell and intra-cell, they
can also be divided into hard and soft handovers:
A hard handover is one in which the channel in the source cell is released and only then the
channel in the target cell is engaged. Thus the connection to the source is broken before or 'as'
the connection to the target is made; for this reason such handovers are also known as
break-before-make. Hard handovers are intended to be instantaneous in order to minimize the
disruption to the call. A hard handover is perceived by network engineers as an event during the
call, and it requires the least processing by the network providing service. When the mobile is
between base stations, it can attach to either one, so the link may bounce back and forth
between the base stations; this is called ping-ponging.
A soft handover is one in which the channel in the source cell is retained and used for a while in
parallel with the channel in the target cell. In this case the connection to the target is
established before the connection to the source is broken, hence this handover is called
make-before-break. The interval, during which the two connections are used in parallel, may be brief
or substantial. For this reason the soft handover is perceived by network engineers as a state of
the call, rather than a brief event. Soft handovers may involve using connections to more than
two cells: connections to three, four or more cells can be maintained by one phone at the same
time. When a call is in a state of soft handover, the signal of the best of all used channels can be
used for the call at a given moment or all the signals can be combined to produce a clearer copy
of the signal. The latter is more advantageous, and when such combining is performed both in
the downlink (forward link) and the uplink (reverse link) the handover is termed as softer. Softer
handovers are possible when the cells involved in the handovers have a single cell site.
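The two ways a soft-handover receiver can use its parallel legs can be sketched numerically: pick the strongest leg ("selection"), or combine all legs (here modeled as a simple sum of per-leg SNRs, in the spirit of maximal-ratio combining). The SNR values below are invented for illustration.

```python
# A sketch of selection vs. combining during soft handover. Each "leg"
# is a parallel radio link to a different cell site (or sector), with
# an illustrative linear SNR value.

def selection(snrs):
    """Use only the best of the parallel channels."""
    return max(snrs)

def combine(snrs):
    """Combine all channels; never worse than selecting one."""
    return sum(snrs)

legs = [2.0, 1.5, 0.5]        # SNRs on three cell-site legs
best = selection(legs)        # strength when picking the best leg
merged = combine(legs)        # strength when combining all legs
```

The gap between `merged` and `best` is why the text calls combining "more advantageous": the combined copy of the signal is cleaner than any single leg.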
Comparison of handovers
An advantage of the hard handover is that at any moment in time one call uses only one channel.
The hard handover event is indeed very short and usually is not perceptible by the user. In the
old analog systems it could be heard as a click or a very short beep; in digital systems it is
unnoticeable. Another advantage of the hard handoff is that the phone's hardware does not need
to be capable of receiving two or more channels in parallel, which makes it cheaper and simpler.
A disadvantage is that if a handover fails the call may be temporarily disrupted or even
terminated abnormally. Technologies which use hard handovers, usually have procedures which
can re-establish the connection to the source cell if the connection to the target cell cannot be
made. However re-establishing this connection may not always be possible (in which case the
call will be terminated) and even when possible the procedure may cause a temporary
interruption to the call.
One advantage of the soft handovers is that the connection to the source cell is broken only when
a reliable connection to the target cell has been established and therefore the chances that the call
will be terminated abnormally due to failed handovers are lower. However, a far bigger
advantage comes from the fact that channels in multiple cells are maintained
simultaneously, so the call can only fail if all of the channels are interfered with or fade at the same
time. Fading and interference in different channels are unrelated and therefore the probability of
them taking place at the same moment in all channels is very low. Thus the reliability of the
connection becomes higher when the call is in a soft handover. Because in a cellular network the
majority of the handovers occur in places of poor coverage, where calls would frequently
become unreliable when their channel is interfered or fading, soft handovers bring a significant
improvement to the reliability of the calls in these places by making the interference or the
fading in a single channel not critical. This advantage comes at the cost of more complex
hardware in the phone, which must be capable of processing several channels in parallel.
Another price to pay for soft handovers is use of several channels in the network to support just a
single call. This reduces the number of remaining free channels and thus reduces the capacity of
the network. By adjusting the duration of soft handovers and the size of the areas in which they
occur, the network engineers can balance the benefit of extra call reliability against the price of
reduced capacity.
Possibility of handover
While theoretically speaking soft handovers are possible in any technology, analog or digital, the
cost of implementing them for analog technologies is prohibitively high and none of the
technologies that were commercially successful in the past (e.g. AMPS, TACS, NMT, etc.) had
this feature. Of the digital technologies, those based on FDMA also face a higher cost for the
phones (due to the need to have multiple parallel radio-frequency modules) and those based on
TDMA or a combination of TDMA/FDMA allow, in principle, a less expensive implementation
of soft handovers. However, none of the 2G (second-generation) technologies have this feature
(e.g. GSM, D-AMPS/IS-136, etc.). On the other hand, all CDMA based technologies, 2G and 3G
(third-generation), have soft handovers. On one hand, this is facilitated by the possibility of
designing relatively inexpensive CDMA phone hardware that supports soft handovers, and on the other
hand, this is necessitated by the fact that without soft handovers CDMA networks may suffer
from substantial interference arising due to the so-called near-far effect.
An Introduction to Handoff in Mobile Cellular
When a mobile user travels from one area of coverage or cell to another cell within a
call’s duration the call should be transferred to the new cell’s base station. Otherwise,
the call will be dropped because the link with the current base station becomes too
weak as the mobile recedes. Indeed, this transfer capability is a central matter in
mobile cellular system design and is called handoff.
Two basic types of handoff are defined -- viz. hard handoff and soft handoff.
With hard handoff, the link to the prior base station is terminated before or as the user
is transferred to the new cell’s base station. That is to say that the mobile is linked to
no more than one base station at a given time. Initiation of the handoff may begin
when the signal strength at the mobile received from base station 2 is greater than that
of base station 1. The signal strength measures are really signal levels averaged over a
chosen amount of time. This averaging is necessary because of the Rayleigh fading
nature of the environment in which the cellular network resides. A major problem
with this approach to handoff decision is that the received signals of both base stations
often fluctuate. When the mobile is between the base stations, the effect is to cause the
mobile to switch wildly between the two links: the base stations bounce the
link with the mobile back and forth, hence the phenomenon is called ping-ponging.
Besides ping-ponging, this simple approach allows too many handoffs. [1] Early studies
showed that much of the time the previous link was still adequate and that handoffs
occurred unnecessarily. A better method is to use the averaged signal
levels relative to a threshold and hysteresis margin for handoff decision. Furthermore,
the condition should be imposed that the target base station’s signal level should be
greater than that of the current base station.
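The averaged-signal, threshold-plus-hysteresis rule described above can be sketched directly: hand off only when the serving signal has dropped below a threshold and the averaged target signal beats the averaged serving signal by the hysteresis margin. All dB values, the window length, and the sample sequences below are illustrative, not taken from any standard.

```python
# A minimal sketch of the handoff decision rule described above.
# Averaging smooths out Rayleigh fading; the hysteresis margin h and
# the threshold suppress ping-ponging and unnecessary handoffs.

def moving_average(samples, window):
    recent = samples[-window:]
    return sum(recent) / len(recent)

def should_handoff(serving_dbm, target_dbm, window=4,
                   threshold=-90.0, hysteresis=5.0):
    s = moving_average(serving_dbm, window)
    t = moving_average(target_dbm, window)
    # serving signal must be weak AND target must beat it by margin h
    return s < threshold and t > s + hysteresis

serving = [-80, -85, -92, -95, -97, -99]   # fading away from BS 1
target  = [-95, -90, -88, -86, -84, -82]   # approaching BS 2
decision = should_handoff(serving, target)
```

Raising the hysteresis margin or lengthening the window reduces spurious handoffs but delays the decision, which is exactly the trade-off the following paragraphs analyze.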
In the accompanying figure (not reproduced here), the handoff should take place at point A for
the choice of Threshold 1 or Threshold 2, and at point B for Threshold 3. It has now been shown in
practice that using the hysteresis margin greatly reduces the number of unneeded
handoffs. However, there is a delay factor involved here. It will be shown later that
one may set up optimum trade-off values for the threshold and hysteresis parameters
to obtain a tolerable delay.
Because of the increasing demand for wireless services, the available channels within
the cells become insufficient to support the growing number of users. To increase the
system capacity, techniques such as cell splitting and sectoring may be implemented.
Using microcells also improves cellular system capacity, and it is an attractive
alternative to the two aforementioned techniques. [2] While the group of cells may
maintain a particular area of coverage, the co-channel interference is reduced.
Decreasing the co-channel interference increases the system capacity without trunking
inefficiency degradation inherent to sectoring. However, innate to microcells is the
increase in the frequency of handoffs. So we seek efficient decision algorithms that achieve
fewer unnecessary handoffs, yet more reliable handoffs with low blocking probability
and low probability of lost calls. Mobiles moving around in microcells will face line
of sight (LOS) handoffs and non line of sight (NLOS) handoffs. In the case of NLOS,
completely reliable handoffs are difficult to achieve. A problem with microcells is the
so called corner effect. When a mobile station moves around a corner such as at a
street intersection, there can be a sudden drop in the received signal level. It loses its
LOS component with the serving base station. Now if the mobile user does not link up
with the new base station fast enough, the call gets dropped. Furthermore, the
mobile can cause interference to the new base station. The base station is unable to
regulate the power of the mobile and users within this cell are blocked.
In the cellular design, one could carefully plan so that this cannot happen, that is, lay
out the cells in such a way that no corner effect is ever encountered. This cannot
always be practically done. Measures are taken in handoff design to help alleviate this
problem. We may use a fast forward handoff as the old base station is dropped.
Decentralizing the handoff decision, as in the case of mobile-assisted handoff,
often achieves this fast forward handoff decision. [1] With decentralizing, it is
advantageous that the central switch does not have to make the handoff decisions for
every mobile user. This amounts to a saving in system resources.
A problem with faster handoff is that we lose the benefits associated with signal
averaging and hysteresis. As was mentioned before, this was helpful in mitigating
unnecessary handoffs and ping ponging. However as is now clear, the time of handoff
is critical in microcellular systems and we may not tolerate the delay that comes with
hysteresis windows. The handoff must be fast. Now recall that in order to initiate a
handoff, the movement of the mobile station from one cell to another must be
detected. A reliable method to make this detection and to accommodate the movement
is to measure the signal strengths received at the user from the candidate base stations. In
order to avoid excessive and inaccurate handoffs, an averaging of the received signal
levels is performed as well as implementing a hysteresis margin. The total handoff
delay is the sum of the signal averaging delay and the hysteresis delay. We seek to
make this delay small. [3] develops an analytic approach to select the signal averaging
time and hysteresis delay in order to obtain an optimum tradeoff between those two
parameters as well as a tradeoff between the total delay time and the number of
allowable unnecessary handoffs. In [3], some important parameters are given
mathematical expressions. It is shown that the probability of an unnecessary handoff
is given by an expression in terms of a function f(x) (the equations themselves are not
reproduced here). In these expressions, ΔL is the difference between the two received
signal levels due to the path-loss difference from the two base stations involved in the
handoff, and h is the hysteresis margin. Expressions are likewise given for the total
delay time: for macrocells, in terms of T (the signal-averaging window), K2 (a path-loss
constant), and Krv (the normalized distance from the mobile station to the base station);
for microcells, additionally in terms of dcor, the drop in signal level experienced at a
street corner, which is determined experimentally. The analysis shows that there exist
compromises between the parameters of averaging time and hysteresis delay. It is evident
that for microcells we may wish to choose a short averaging time and a larger hysteresis;
the converse is clear for macrocells. The main point here is that optimum parameter
values may be selected for a tolerable delay in conjunction with some tolerable
probability of unnecessary handoffs.
By using alternate antenna heights working at different power levels, it is possible to
have different-sized cells coexist within a single geographic area. This allows the
so-called umbrella cell, consisting of two or more levels: a macrocell which contains,
perhaps, a grid of microcells.
Users in this system are assigned to a particular cellular level based on the mobile
transmitter’s speed. [4] The umbrella cell system is used to minimize the many
handoffs incurred by high speed mobiles while providing capacity that is inherent to
microcells for slow moving users. Handoffs from microcell to microcell can be
avoided. The slow-moving users are placed into the service of a microcell and
retain that service until an increase in the mobile transmitter's speed exceeds some
deciding threshold, at which point the call is moved into a macrocell. However, in a real cellular
system, the speed is not directly known. Fortunately, there are a couple of common
ways to estimate the user speed. One way is to use the statistics of the dwell time. The
dwell time is defined as the amount of time over which a call is maintained within a
particular cell. Obviously, the dwell time is dependent upon the mobile station’s
speed. Based on information of the dwell time, a rough estimate of the user speed may
be obtained. Further, the estimate may be improved if the mobile’s past behavior is
memorized and accounted for. [3100] Given n dwell times, mobile station speed may
be estimated as follows. First assume that the mobile speed is uniformly distributed over an
interval [a, b]. The maximum likelihood (ML) estimate and the minimum mean squared error
(MMSE) estimate are then given by closed-form expressions (not reproduced here), in which a
constant ci depends on the type of handoff, i.e. micro to micro, micro to macro, etc.
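Since the ML and MMSE estimators from [3] are not reproduced above, the sketch below uses the simplest stand-in for the same dwell-time idea: if the crossing distance for each cell is known, each dwell time gives a speed sample, and the samples are averaged. The distances and times are invented for illustration and are not the estimators from the paper.

```python
# A simplified dwell-time speed estimate (NOT the ML/MMSE estimators
# of [3]): each dwell time t_i over a known crossing distance d_i
# yields one speed sample d_i / t_i, and the samples are averaged.

def estimate_speed(distances_m, dwell_times_s):
    samples = [d / t for d, t in zip(distances_m, dwell_times_s)]
    return sum(samples) / len(samples)

# three cells crossed at a steady 10 m/s
speed = estimate_speed([400.0, 500.0, 450.0], [40.0, 50.0, 45.0])
```

A network using this estimate would assign the user to a microcell below some speed threshold and to the umbrella macrocell above it, as described in the text.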
Cellular network
A cellular network or mobile network is a wireless network distributed over land areas called
cells, each served by at least one fixed-location transceiver, known as a cell site or base station.
In a cellular network, each cell uses a different set of frequencies from neighboring cells, to
avoid interference and provide guaranteed bandwidth within each cell.
When joined together these cells provide radio coverage over a wide geographic area. This
enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to
communicate with each other and with fixed transceivers and telephones anywhere in the
network, via base stations, even if some of the transceivers are moving through more than one
cell during transmission.
Cellular networks offer a number of desirable features:
More capacity than a single large transmitter, since the same frequency can be used for
multiple links as long as they are in different cells
Mobile devices use less power than with a single transmitter or satellite since the cell
towers are closer
Larger coverage area than a single terrestrial transmitter, since additional cell towers can
be added indefinitely and are not limited by the horizon
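The capacity advantage in the first bullet comes from frequency reuse, which can be sketched with a quick calculation. The channel counts and cluster size below are illustrative, not from any specific system.

```python
# A sketch of why frequency reuse raises capacity: with a reuse cluster
# of N cells, each cell gets total_channels / N channels, but the
# pattern repeats across the whole network, so many cells use the same
# frequencies at once. All numbers are illustrative.

def network_capacity(total_channels, cluster_size, num_cells):
    per_cell = total_channels // cluster_size
    return per_cell * num_cells

single_tx = 280                                  # one large transmitter
cellular = network_capacity(280, cluster_size=7, num_cells=70)
# 40 channels per cell across 70 cells: far more simultaneous links
# than the 280 a single transmitter could carry
```

Shrinking the cells (and so raising `num_cells` over the same area) is exactly the cell-splitting and microcell strategy discussed earlier in these notes.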
Major telecommunications providers have deployed voice and data cellular networks over most
of the inhabited land area of the Earth. This allows mobile phones and mobile computing devices
to be connected to the public switched telephone network and public Internet. Private cellular
networks can be used for research[1] or for large organizations and fleets, such as dispatch for
local public safety agencies or a taxicab company.[2]
Adjacent and Co-Channel Congestion
Hopefully you have learned why channels 1, 6, and 11 are the best choices for operating your
wireless network (this is because 1, 6, and 11 are the only Wi-Fi channels in the 2.4 GHz
spectrum that don’t overlap with one another). An open Wi-Fi channel is one of the best gifts to
someone setting up a wireless network, but you may not have that option. Recall that overlap is
bad because of the “conversational” way that Wi-Fi operates, and often times you will find
yourself having to manage congestion. In this newsletter, you’ll learn about adjacent and
co-channel congestion, using conversation as a metaphor.
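The 1, 6, 11 claim can be checked with a few lines of arithmetic: 2.4 GHz Wi-Fi channel n is centered at 2407 + 5n MHz, and a classic 802.11 transmission occupies roughly 22 MHz, so two channels overlap when their centers are less than 22 MHz apart. The sketch below applies that rule; the 22 MHz figure is the usual rule of thumb for legacy DSSS channels.

```python
# Checking which 2.4 GHz Wi-Fi channels overlap. Channel n (1..13) is
# centered at 2407 + 5*n MHz; a transmission occupies roughly 22 MHz,
# so centers closer than 22 MHz apart interfere with each other.

CHANNEL_WIDTH_MHZ = 22

def center_mhz(channel):
    return 2407 + 5 * channel

def overlap(ch_a, ch_b):
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < CHANNEL_WIDTH_MHZ

# 1, 6, 11 are 25 MHz apart pairwise, so none of them overlap...
clear = not any(overlap(a, b) for a, b in [(1, 6), (6, 11), (1, 11)])
# ...while in-between choices like channel 4 overlap channel 6
adjacent = overlap(4, 6)
```

This is why an access point parked on channel 4 degrades both the channel 1 and channel 6 networks around it, which is the adjacent channel congestion described next.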
Adjacent channel congestion is the worst type of Wi-Fi interference. To illustrate, think about
being at a concert – there’s a band playing really loud, and tons of people, each with their own
group of friends. With this much going on, it’s difficult to talk to your friends, and when you
start to talk louder, the person next to you has to raise their voice to talk to their own group. You’re
hearing multiple conversations happening, as well as music from the band, and it seems
impossible to communicate.
The diagram below shows a theoretical model of how the above conversation scenario looks
when access points on channel 4 (red), channel 6 (green), and channel 5 (blue) are all active at
the same time. As one of these APs tries to talk to its clients, its transmissions become garbled
because of the transmissions of the other two. This drives down the performance of all of the
networks involved.
An illustration of adjacent channel congestion
In order to explain co-channel congestion, we’ll move our imaginary conversation from a concert
venue to a classroom. Think back to your school days – chances are you can think of at least one
class that had a student who would talk slower than the other kids, and everyone else would have
to wait for their turn to ask a question. Co-channel congestion works in a similar manner: the
performance is hindered by the wait times, but the bandwidth is managed, and every device will
get a chance to talk to its associated AP. The diagram below depicts a wireless access point and
its associated clients, which can only talk one at a time.
An illustration of co-channel congestion
Co-channel congestion is preferable to adjacent channel congestion because of the way the
wireless conversations are managed. As mentioned in our previous newsletter, when choosing a
channel that has other networks active, try to keep at least 20dB between the RSSI levels of the
networks, as illustrated below:
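The 20 dB guideline can be turned into a quick check: when you must share a channel, prefer one where your AP's RSSI exceeds every co-channel neighbor's RSSI by at least 20 dB. The RSSI values below are invented examples.

```python
# A sketch of the 20 dB rule of thumb for sharing a channel: your AP's
# RSSI should exceed each co-channel neighbor's RSSI by the margin.
# All dBm values are illustrative.

def separation_ok(my_rssi_dbm, neighbor_rssis_dbm, margin_db=20):
    return all(my_rssi_dbm - n >= margin_db for n in neighbor_rssis_dbm)

ok = separation_ok(-45, [-70, -82])   # 25 dB and 37 dB of headroom
crowded = separation_ok(-45, [-60])   # only 15 dB: too close to share
```

When no channel passes this check, the classroom analogy above still applies: co-channel sharing costs you airtime but stays orderly, whereas adjacent channel overlap garbles everyone.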
To recap, an open channel will always be best when deploying your wireless network, but if you
have to share a channel, that’s okay too. Adjacent channel congestion is the one you’ll want to
avoid if at all possible.
The image below shows what adjacent channel and co-channel congestion look like in inSSIDer.
Having a visual representation of where neighboring wireless access points are active is an
invaluable tool when planning your own network. It’s easy to see how chaotic adjacent channel
congestion is compared to co-channel!
Hopefully you now have a good understanding of how Wi-Fi congestion is caused, and how to
best deal with it in the 2.4 GHz Band.