WIRELESS NETWORK
CS-8303
Unit 1
Introduction of Wireless Networks, Different Generations of Wireless Networks.
Characteristics of the Wireless Medium: Radio Propagation Mechanisms, Path Loss Modeling
and Signal Coverage, Effect of Multipath and Doppler, Channel Measurement and Modeling
Techniques.
Unit 2
Network Planning: Introduction, Wireless Network Topologies, Cellular Topology, Cell
Fundamentals, Signal-to-Interference Ratio Calculations, Network Planning for CDMA
Systems.
Wireless Network Operations: Mobility Management, Radio Resources and Power Management
INDEX

S.NO.  CONTENT                                                                          PAGE NO.
1      Introduction of Wireless Networks, Different Generations of Wireless Networks.   1-5
2      Characteristics of the Wireless Medium: Radio Propagation Mechanisms,
       Path Loss Modeling and Signal Coverage                                           5-12
3      Effect of Multipath and Doppler, Channel Measurement and Modeling Techniques.    12-15
4      Network Planning: Introduction, Wireless Network Topologies, Cellular Topology,
       Cell Fundamentals, Signal-to-Interference Ratio Calculations                     16-23
5      Network Planning for CDMA Systems.                                               24-26
6      Wireless Network Operations: Mobility Management                                 27-31
7      Radio Resources and Power Management                                             32-42
Unit 1
A wireless network is any type of computer network that uses wireless data connections for
connecting network nodes.
Wireless networking is a method by which homes, telecommunications networks and enterprise
(business) installations avoid the costly process of introducing cables into a building or
running cables between various equipment locations.[1] Wireless telecommunications networks are
generally implemented and administered using radio communication. This implementation takes
place at the physical level (layer) of the OSI model network structure.
How Wireless Networks Work
Moving data through a wireless network involves three separate elements: the radio signals, the
data format, and the network structure. Each of these elements is independent of the other two,
so you must define all three when you design a new network.
In terms of the OSI reference model, the radio signal operates at the physical layer, and the data
format controls several of the higher layers. The network structure includes the wireless network
interface adapters and base stations that send and receive the radio signals. In a wireless network,
the network interface adapters in each computer and base station convert digital data to radio
signals, which they transmit to other devices on the same network, and they receive and convert
incoming radio signals from other network elements back to digital data. Each of the broadband
wireless data services uses a different combination of radio signals, data formats, and network
structure. We’ll describe each type of wireless data network in more detail later in this chapter,
but first, it’s valuable to understand some general principles.
Different Generations of Wireless Networks
1G, which stands for "first generation," refers to the first generation of wireless
telecommunication technology, more popularly known as cell phones. A set of wireless
standards developed in the 1980s, 1G technology replaced 0G technology, which featured
mobile radio telephones and such technologies as Mobile Telephone System (MTS), Advanced
Mobile Telephone System (AMTS), Improved Mobile Telephone Service (IMTS), and Push to
Talk (PTT).
Unlike its successor, 2G, which made use of digital signals, 1G wireless networks used analog
radio signals. In 1G, a voice call is modulated to a higher carrier frequency, of about 150 MHz
and up, as it is transmitted between radio towers. This is done using a technique called
Frequency-Division Multiple Access (FDMA).
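To make the FDMA idea concrete, here is a minimal Python sketch that slices a band into fixed-width channels and gives each call its own channel. The band edge, channel width and call names are illustrative assumptions, not values taken from any particular 1G standard.

```python
# Minimal FDMA sketch: each call gets its own slice of spectrum (illustrative values only).
BAND_START_MHZ = 150.0     # assumed lower band edge
CHANNEL_WIDTH_MHZ = 0.03   # assumed 30 kHz analog voice channel

def fdma_assign(calls):
    """Assign each active call a disjoint frequency channel."""
    plan = {}
    for index, call in enumerate(calls):
        low = BAND_START_MHZ + index * CHANNEL_WIDTH_MHZ
        plan[call] = (round(low, 3), round(low + CHANNEL_WIDTH_MHZ, 3))
    return plan

if __name__ == "__main__":
    for call, (low, high) in fdma_assign(["call-A", "call-B", "call-C"]).items():
        print(f"{call}: {low}-{high} MHz")
```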
Second generation (2G) telephone technology is based on GSM, the Global System for Mobile
Communications. Second generation networks were first launched in Finland in 1991.

How 2G works and what it is used for (Second Generation technology)

2G networks allow for much greater penetration. 2G technologies enabled the various
mobile phone networks to provide services such as text messages, picture messages and
MMS (multimedia messages). 2G technology is also more efficient and provides sufficient
security for both the sender and the receiver: all text messages are digitally encrypted, so
data is transferred in such a way that only the intended receiver can receive and read it.
3G Technology
If you want augmented bandwidth, multiple mobile applications and clarity of digital signals,
then 3G (Third Generation Technology) is your gateway. GSM technology was only able to transfer
circuit-switched data over the network; 3G technology is also able to transmit packet-switched
data efficiently at better and increased bandwidth. 3G mobile technology offers more advanced
services to mobile users and can support many multimedia services. The spectral efficiency of
3G technology is better than that of 2G technologies; spectral efficiency is the measurement of
the rate of information transfer over any communication system. 3G is also known as IMT-2000.
4G Technology
4G is short for Fourth (4th) Generation Technology. 4G technology is basically an extension of
3G technology, with more bandwidth and more services than 3G offers. At the time of writing there
is no single, exact definition of 4G; it is often described as a set of future technologies that
are still maturing. The expectation for 4G technology is basically high-quality audio/video
streaming over end-to-end Internet Protocol. If the Internet Protocol (IP) multimedia sub-system
movement achieves what it sets out to do, the underlying access technology may matter little:
WiMAX or cellular architectures will become progressively more transparent, and the adoption of
several architectures by a particular network operator will become ever more common.
4G technology offers high data rates that will generate new trends for the market and
prospects for established as well as for new telecommunication businesses. 4G networks, when
tied together with mobile phones with built-in higher-resolution digital cameras and also High
Definition capabilities, will facilitate video blogging.
After successful implementation, 4G technology is likely to enable ubiquitous computing that
simultaneously connects to numerous high-speed data networks and offers seamless handoffs
across geographical regions. Many network operators may utilize technologies such as wireless
mesh networks and cognitive radio networks to guarantee secure connections and to allocate
network traffic and bandwidth efficiently and evenly.
Some companies have trialled 4G mobile communication at 100 Mbps for mobile users and
up to 1 Gbps over fixed stations, and planned on publicly launching their first commercial
wireless networks around 2010.
Mobile communication is burdened with particular propagation complications, making reliable
wireless communication more difficult than fixed communication between carefully
positioned antennas. The antenna height at a mobile terminal is usually very small, typically less
than a few meters. Hence, the antenna is expected to have very little 'clearance', so obstacles and
reflecting surfaces in the vicinity of the antenna have a substantial influence on the
characteristics of the propagation path. Moreover, the propagation characteristics change from
place to place and, if the terminal moves, from time to time.
Radio Propagation Models
Statistical propagation models
In generic system studies, the mobile radio channel is usually evaluated from 'statistical'
propagation models: no specific terrain data is considered, and channel parameters are modelled
as stochastic variables. Three mutually independent, multiplicative propagation phenomena can
usually be distinguished: multipath fading, shadowing and 'large-scale' path loss.

Multipath propagation
Fading leads to rapid fluctuations of the phase and amplitude of the signal if the vehicle
moves over a distance in the order of a wavelength or more. Multipath fading thus has a
'small-scale' effect.

Shadowing
This is a 'medium-scale' effect: field strength variations occur if the antenna is displaced
over distances larger than a few tens or hundreds of metres.

Path loss
The 'large-scale' effects cause the received power to vary gradually due to signal
attenuation determined by the geometry of the path profile in its entirety. This is in
contrast to the local propagation mechanisms, which are determined by terrain features in
the immediate vicinity of the antennas.
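As a rough illustration of how these three multiplicative effects are often combined in statistical simulations, the Python sketch below draws one received-power sample as (distance-dependent path loss) x (log-normal shadowing) x (Rayleigh multipath fading). The path-loss exponent, shadowing standard deviation and reference power are illustrative assumptions, not values from any specific measurement campaign.

```python
import math
import random

def received_power_dbm(distance_m,
                       p_ref_dbm=-30.0,    # assumed received power at 1 m
                       path_loss_exp=3.5,  # assumed urban path-loss exponent
                       shadow_sigma_db=8.0):
    """One statistical sample of received power combining the three effects."""
    # Large-scale path loss: power falls off as 10*n*log10(d).
    path_loss_db = 10.0 * path_loss_exp * math.log10(distance_m)
    # Medium-scale shadowing: log-normal, i.e. Gaussian when expressed in dB.
    shadowing_db = random.gauss(0.0, shadow_sigma_db)
    # Small-scale multipath: Rayleigh envelope, so |h|^2 is exponentially distributed.
    rayleigh_gain = max(random.expovariate(1.0), 1e-12)
    fading_db = 10.0 * math.log10(rayleigh_gain)
    return p_ref_dbm - path_loss_db + shadowing_db + fading_db

if __name__ == "__main__":
    samples = [received_power_dbm(500.0) for _ in range(5)]
    print([round(s, 1) for s in samples])
```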
Path Loss Modeling and Signal Coverage
Radio signal path loss is a particularly important element in the design of any radio
communications system or wireless system. The radio signal path loss will determine many
elements of the radio communications system in particular the transmitter power, and the
antennas, especially their gain, height and general location. The radio path loss will also affect
other elements such as the required receiver sensitivity, the form of transmission used and
several other factors.
As a result, it is necessary to understand the reasons for radio path loss, and to be able to
determine the levels of the signal loss for a given radio path.
The signal path loss can often be determined mathematically and these calculations are often
undertaken when preparing coverage or system design activities. These depend on a knowledge
of the signal propagation properties.
Accordingly, path loss calculations are used in many radio and wireless survey tools for
determining signal strength at various locations. These wireless survey tools are being
increasingly used to help determine what radio signal strengths will be, before installing the
equipment. For cellular operators radio coverage surveys are important because the investment in
a macrocell base station is high. Also, wireless survey tools provide a very valuable service for
applications such as installing wireless LAN systems in large offices and other centres because
they enable problems to be solved before installation, enabling costs to be considerably reduced.
Accordingly there is an increasing importance being placed onto wireless survey tools and
software.
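One widely used way to turn these ideas into numbers is the log-distance path loss model, PL(d) = PL(d0) + 10·n·log10(d/d0), sketched below in Python. The reference distance, path-loss exponent, transmitter power and receiver sensitivity used here are illustrative assumptions chosen only to show the shape of a coverage calculation.

```python
import math

def path_loss_db(d_m, d0_m=1.0, pl_d0_db=40.0, n=3.0):
    """Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0)."""
    return pl_d0_db + 10.0 * n * math.log10(d_m / d0_m)

def rx_power_dbm(tx_power_dbm, d_m, **model):
    """Received power is transmit power minus the modelled path loss."""
    return tx_power_dbm - path_loss_db(d_m, **model)

if __name__ == "__main__":
    # Rough coverage check: how far can we go before dropping below the sensitivity?
    tx_dbm, sensitivity_dbm = 30.0, -100.0   # assumed transmitter power and receiver sensitivity
    for d in (100, 1000, 5000, 20000):
        rx = rx_power_dbm(tx_dbm, d)
        status = "OK" if rx >= sensitivity_dbm else "out of coverage"
        print(f"{d:>6} m: {rx:6.1f} dBm  {status}")
```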
Measurement techniques: narrowband measurements
- Sine wave transmitter
- Envelope detector
- The envelope includes the large-scale and small-scale fading
- Phase measurement requires I and Q demodulation of the signal

Continuous wave methods
- Energy is transmitted continuously, so less peak power is required in the transmitter
- The delays are found by correlating the received signal with the transmitted waveform
- The waveforms can be:
  - frequency sweep
  - frequency step (network analyzer)
  - direct sequence (TKK sounder, PROPsound by Elektrobit, Finland)
  - multitone (RUSK sounder by MEDAV, Germany)
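The delay-estimation step mentioned above can be illustrated with a tiny direct-sequence example: correlate the received samples against the known transmitted code and look for correlation peaks. The code, delays and gains below are made-up toy values, and the sketch ignores noise, pulse shaping and Doppler entirely.

```python
import numpy as np

# Known transmitted pseudo-noise code (toy +/-1 sequence).
code = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)

# Toy channel: two echoes at delays of 0 and 3 samples with different gains.
rx = np.zeros(32)
rx[0:8] += 1.0 * code
rx[3:11] += 0.5 * code

# Correlate the received signal with the transmitted waveform.
corr = np.correlate(rx, code, mode="valid")
delays = np.argsort(corr)[-2:]            # the two strongest correlation lags
print("estimated echo delays (samples):", sorted(delays.tolist()))
```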
Network analyzer
- Too short a delay range causes aliasing: long-delayed components appear in the beginning of the impulse response
- The impulse response is sinc-shaped; different window functions can be used to reduce the sidelobes, which makes the main lobe of the impulse response wider
- A cable is needed between TX and RX
- The Doppler range is very limited
- Phase continuity in virtual arrays is a major concern:
  - phase stability of the measurement system
  - stability of the environment
- Various measurement campaigns have been reported with network analyzers; generally the number of measurement points is limited
Effect of Multipath and Doppler
In wireless communications, fading is the deviation of the attenuation affecting a signal over certain
propagation media. The fading may vary with time, geographical position or radio frequency,
and is often modeled as a random process. A fading channel is a communication channel that
experiences fading. In wireless systems, fading may either be due to multipath propagation,
referred to as multipath induced fading, or due to shadowing from obstacles affecting the wave
propagation, sometimes referred to as shadow fading.
Key concepts
The presence of reflectors in the environment surrounding a transmitter and receiver creates
multiple paths that a transmitted signal can traverse. As a result, the receiver sees the
superposition of multiple copies of the transmitted signal, each traversing a different path. Each
signal copy will experience differences in attenuation, delay and phase shift while travelling
from the source to the receiver. This can result in either constructive or destructive interference,
amplifying or attenuating the signal power seen at the receiver. Strong destructive interference is
frequently referred to as a deep fade and may result in temporary failure of communication due
to a severe drop in the channel signal-to-noise ratio.
A common example of deep fade is the experience of stopping at a traffic light and hearing an
FM broadcast degenerate into static, while the signal is re-acquired if the vehicle moves only a
fraction of a meter. The loss of the broadcast is caused by the vehicle stopping at a point where
the signal experienced severe destructive interference. Cellular phones can also exhibit similar
momentary fades.
Fading channel models are often used to model the effects of electromagnetic transmission of
information over the air in cellular networks and broadcast communication. Fading channel
models are also used in underwater acoustic communications to model the distortion caused by
the water.
Slow versus fast fading
The terms slow and fast fading refer to the rate at which the magnitude and phase change
imposed by the channel on the signal changes. The coherence time is a measure of the minimum
time required for the magnitude change or phase change of the channel to become uncorrelated
from its previous value.

Slow fading arises when the coherence time of the channel is large relative to the delay
constraint of the channel. In this regime, the amplitude and phase change imposed by the
channel can be considered roughly constant over the period of use. Slow fading can be
caused by events such as shadowing, where a large obstruction such as a hill or large
building obscures the main signal path between the transmitter and the receiver. The
received power change caused by shadowing is often modeled using a log-normal
distribution with a standard deviation according to the log-distance path loss model.

Fast fading occurs when the coherence time of the channel is small relative to the delay
constraint of the channel. In this case, the amplitude and phase change imposed by the
channel varies considerably over the period of use.
In a fast-fading channel, the transmitter may take advantage of the variations in the channel
conditions using time diversity to help increase robustness of the communication to a temporary
deep fade. Although a deep fade may temporarily erase some of the information transmitted, use
of an error-correcting code coupled with successfully transmitted bits during other time instances
(interleaving) can allow for the erased bits to be recovered.
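As a toy illustration of the time-diversity idea, the Python sketch below spreads a repetition-coded message across widely separated time slots, erases the slots hit by a deep fade, and still recovers the original bits. The code, interleaving depth and fade pattern are all invented for the example.

```python
# Toy time diversity: 3x repetition code plus interleaving across time slots.
def encode(bits, reps=3):
    return [b for b in bits for _ in range(reps)]

def interleave(symbols, depth):
    # Write row by row, read column by column, so adjacent code bits
    # end up in time slots that are 'depth' apart.
    rows = [symbols[i:i + depth] for i in range(0, len(symbols), depth)]
    return [row[c] for c in range(depth) for row in rows if c < len(row)]

def deinterleave(symbols, depth):
    out = [None] * len(symbols)
    positions = interleave(list(range(len(symbols))), depth)
    for received, original_index in zip(symbols, positions):
        out[original_index] = received
    return out

def decode(symbols, reps=3):
    # Majority vote over each repetition group, ignoring erased (None) slots.
    bits = []
    for i in range(0, len(symbols), reps):
        group = [s for s in symbols[i:i + reps] if s is not None]
        bits.append(1 if sum(group) * 2 >= len(group) else 0)
    return bits

message = [1, 0, 1, 1]
tx = interleave(encode(message), depth=4)
rx = [None if 3 <= t <= 5 else s for t, s in enumerate(tx)]   # deep fade erases slots 3-5
print(decode(deinterleave(rx, depth=4)) == message)           # True: erasures are recoverable
```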
In a slow-fading channel, it is not possible to use time diversity because the transmitter sees only
a single realization of the channel within its delay constraint. A deep fade therefore lasts the
entire duration of transmission and cannot be mitigated using coding.
The coherence time of the channel is related to a quantity known as the Doppler spread of the
channel. When a user (or reflectors in its environment) is moving, the user's velocity causes a
shift in the frequency of the signal transmitted along each signal path. This phenomenon is
known as the Doppler shift. Signals traveling along different paths can have different Doppler
shifts, corresponding to different rates of change in phase. The difference in Doppler shifts
between different signal components contributing to a signal fading channel tap is known as the
Doppler spread. Channels with a large Doppler spread have signal components that are each
changing independently in phase over time. Since fading depends on whether signal components
add constructively or destructively, such channels have a very short coherence time.
In general, coherence time is inversely related to Doppler spread, typically expressed as

    Tc ≈ 1 / Ds

where Tc is the coherence time and Ds is the Doppler spread. This equation is just an
approximation;[1] to be exact, see the definition of coherence time.
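A quick numeric sketch of that relationship, assuming a mobile speed and carrier frequency picked only for illustration: the maximum Doppler shift is fd = v·fc/c, and a commonly used rule of thumb estimates the coherence time as roughly 0.423/fd (the simpler Tc ≈ 1/fd approximation is also shown).

```python
# Coherence time from Doppler spread (illustrative numbers).
C = 3.0e8            # speed of light, m/s
fc = 2.0e9           # assumed carrier frequency: 2 GHz
v = 30.0             # assumed mobile speed: 30 m/s (~108 km/h)

fd = v * fc / C                  # maximum Doppler shift, Hz
tc_simple = 1.0 / fd             # coarse approximation Tc ~ 1/Ds
tc_rule_of_thumb = 0.423 / fd    # commonly quoted tighter estimate

print(f"Doppler shift: {fd:.1f} Hz")
print(f"coherence time ~ 1/fd:     {tc_simple * 1e3:.2f} ms")
print(f"coherence time ~ 0.423/fd: {tc_rule_of_thumb * 1e3:.2f} ms")
```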
Block fading
Block fading is where the fading process is approximately constant for a number of symbol
intervals. A channel can be 'doubly block-fading' when it is block fading in both the time and
frequency domains.
Selective fading
Selective fading or frequency selective fading is a radio propagation anomaly caused by partial
cancellation of a radio signal by itself — the signal arrives at the receiver by two different paths,
and at least one of the paths is changing (lengthening or shortening). This typically happens in
the early evening or early morning as the various layers in the ionosphere move, separate, and
combine. The two paths can both be skywave or one be groundwave.
Selective fading manifests as a slow, cyclic disturbance; the cancellation effect, or "null", is
deepest at one particular frequency, which changes constantly, sweeping through the received
audio.
As the carrier frequency of a signal is varied, the magnitude of the change in amplitude will vary.
The coherence bandwidth measures the separation in frequency after which two signals will
experience uncorrelated fading.

In flat fading, the coherence bandwidth of the channel is larger than the bandwidth of the
signal. Therefore, all frequency components of the signal will experience the same
magnitude of fading.

In frequency-selective fading, the coherence bandwidth of the channel is smaller than
the bandwidth of the signal. Different frequency components of the signal therefore
experience uncorrelated fading.
Since different frequency components of the signal are affected independently, it is highly
unlikely that all parts of the signal will be simultaneously affected by a deep fade.
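The flat-versus-selective distinction can be checked numerically: estimate the coherence bandwidth from the RMS delay spread (a common rule of thumb is Bc ≈ 1/(5·στ)) and compare it with the signal bandwidth. The delay spread and signal bandwidths below are illustrative assumptions.

```python
# Flat vs. frequency-selective check (illustrative numbers).
rms_delay_spread_s = 1.0e-6                            # assumed RMS delay spread: 1 microsecond
coherence_bw_hz = 1.0 / (5.0 * rms_delay_spread_s)     # rule-of-thumb coherence bandwidth

for name, signal_bw_hz in [("narrowband voice carrier", 30e3),
                           ("wideband data carrier", 5e6)]:
    kind = "flat fading" if signal_bw_hz < coherence_bw_hz else "frequency-selective fading"
    print(f"{name}: {signal_bw_hz / 1e3:.0f} kHz vs Bc = {coherence_bw_hz / 1e3:.0f} kHz -> {kind}")
```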
Certain modulation schemes such as orthogonal frequency-division multiplexing (OFDM) and
code division multiple access (CDMA) are well-suited to employing frequency diversity to
provide robustness to fading. OFDM divides the wideband signal into many slowly modulated
narrowband subcarriers, each exposed to flat fading rather than frequency selective fading. This
can be combated by means of error coding, simple equalization or adaptive bit loading. Intersymbol interference is avoided by introducing a guard interval between the symbols. CDMA
uses the rake receiver to deal with each echo separately.
Frequency-selective fading channels are also dispersive, in that the signal energy associated with
each symbol is spread out in time. This causes transmitted symbols that are adjacent in time to
interfere with each other. Equalizers are often deployed in such channels to compensate for the
effects of the intersymbol interference.
The echoes may also be exposed to Doppler shift, resulting in a time varying channel model.
The effect can be counteracted by applying some diversity scheme, for example OFDM (with
subcarrier interleaving and forward error correction), or by using two receivers with separate
antennas spaced a quarter-wavelength apart, or a specially designed diversity receiver with two
antennas. Such a receiver continuously compares the signals arriving at the two antennas and
presents the better signal.
Fading models
Examples of fading models for the distribution of the attenuation are:
- Dispersive fading models, with several echoes, each exposed to different delay, gain and phase shift, often constant. This results in frequency selective fading and inter-symbol interference. The gains may be Rayleigh or Rician distributed. The echoes may also be exposed to Doppler shift, resulting in a time varying channel model.
- Nakagami fading
- Log-normal shadow fading
- Rayleigh fading
- Rician fading
- Weibull fading
Mitigation
Fading can cause poor performance in a communication system because it can result in a loss of
signal power without reducing the power of the noise. This signal loss can be over some or all of
the signal bandwidth. Fading can also be a problem as it changes over time: communication
systems are often designed to adapt to such impairments, but the fading can change faster than
the adaptations can be made. In such cases, the probability of experiencing a fade (and associated
bit errors as the signal-to-noise ratio drops) on the channel becomes the limiting factor in the
link's performance.
The effects of fading can be combated by using diversity to transmit the signal over multiple
channels that experience independent fading and coherently combining them at the receiver. The
probability of experiencing a fade in this composite channel is then proportional to the
probability that all the component channels simultaneously experience a fade, a much more
unlikely event. Diversity can be achieved in time, frequency, or space, and the common techniques
used to overcome signal fading rely on one or more of these forms of diversity.
Unit 2
Network Planning: Introduction, Wireless Network Topologies
Cellular network
A cellular network or mobile network is a wireless network distributed over land areas called
cells, each served by at least one fixed-location transceiver, known as a cell site or base station.
In a cellular network, each cell uses a different set of frequencies from neighboring cells, to
avoid interference and provide guaranteed bandwidth within each cell.
When joined together these cells provide radio coverage over a wide geographic area. This
enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to
communicate with each other and with fixed transceivers and telephones anywhere in the
network, via base stations, even if some of the transceivers are moving through more than one
cell during transmission.
Cellular topology
- Frequency reuse is adopted in cellular topology to increase capacity.
- By cellular radio, we mean deploying a large number of base stations for transmission, each having a limited coverage area.
- Available capacity is increased each time a new base station is set up.

Cellular topology is based on the following principles:
- Divide the coverage area into a number of contiguous smaller areas called cells, each served by its own base station.
- Allocate radio channels to these cells so as to minimize interference.
- Group cells into clusters; each cluster utilizes the entire frequency spectrum.
- Adjacent cells cannot use the same spectrum.
- Two types of interference arise:
  - Co-channel interference: due to using the same frequencies in different cells.
  - Adjacent channel interference: interference from different (neighbouring) frequency channels.

An example of the cellular concept
One large cell with an area of about 100 km², served by a high-power base station, has 35 channels. Splitting it into 7 smaller cells of about 14.3 km² each, with each small cell allocated roughly 30% of the channels, makes roughly 80 channels available in total because channels can be reused: cells 1 and 4 use the same channels, and so do cells 3 and 6.
Cells {1, 2, 5, 6, 7} form a cluster; they use disjoint channels.
Cells {3, 4} form another cluster.
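The capacity gain in that example comes purely from reuse, and the back-of-the-envelope arithmetic can be written out as below; the channel count and reuse percentage are the ones quoted above, and the result lands in the same ballpark as the "roughly 80 channels" figure.

```python
# Back-of-the-envelope capacity gain from cell splitting and reuse
# (numbers taken from the example above; everything else is illustrative).
single_cell_channels = 35          # one big high-power base station
num_small_cells = 7
share_per_small_cell = 0.30        # each small cell gets ~30% of the channel pool

channels_per_small_cell = single_cell_channels * share_per_small_cell
total_after_splitting = num_small_cells * channels_per_small_cell

print(f"channels per small cell: {channels_per_small_cell:.1f}")
print(f"total usable channels:   {total_after_splitting:.0f}  (vs {single_cell_channels} before)")
```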
Cell Fundamentals
Principles of Cellular Networks
- Underlying technology for mobile phones, personal communication systems, wireless networking, etc.
- Developed for mobile radio telephone service
- Replaces the earlier high-power transmitter/receiver systems, which typically supported about 25 channels over an 80 km radius
- Uses lower power, shorter range, and more transmitters
Cell signal encoding
To distinguish signals from several different transmitters, frequency division multiple access
(FDMA) and code division multiple access (CDMA) were developed.
With FDMA, the transmitting and receiving frequencies used in each cell are different from the
frequencies used in each neighboring cell. In a simple taxi system, the taxi driver manually tuned
to a frequency of a chosen cell to obtain a strong signal and to avoid interference from signals
from other cells.
The principle of CDMA is more complex, but achieves the same result; the distributed
transceivers can select one cell and listen to it.
Frequency reuse
The key characteristic of a cellular network is the ability to re-use frequencies to increase both
coverage and capacity. As described above, adjacent cells must use different frequencies;
however, there is no problem with two cells sufficiently far apart operating on the same
frequency. The elements that determine frequency reuse are the reuse distance and the reuse
factor.
The reuse distance D is calculated as

    D = R √(3N)

where R is the cell radius and N is the number of cells per cluster. Cells may vary in radius from
1 to 30 kilometres (0.62 to 18.64 mi). The boundaries of the cells can also overlap between
adjacent cells and large cells can be divided into smaller cells.
The frequency reuse factor is the rate at which the same frequency can be used in the network. It
is 1/K (or K according to some books) where K is the number of cells which cannot use the same
frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9
and 1/12 (or 3, 4, 7, 9 and 12 depending on notation).
In case of N sector antennas on the same base station site, each with different direction, the base
station site can serve N different sectors. N is typically 3. A reuse pattern of N/K denotes a
further division in frequency among N sector antennas per site. Some current and historical reuse
patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM).
If the total available bandwidth is B, each cell can only use a number of frequency channels
corresponding to a bandwidth of B/K, and each sector can use a bandwidth of B/NK.
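The reuse-distance formula and the per-cell and per-sector bandwidth shares described above can be combined in a small calculation; the total bandwidth, cell radius and cluster size below are illustrative assumptions.

```python
import math

# Illustrative frequency-reuse arithmetic (all input values are assumptions).
R_km = 2.0          # cell radius
K = 7               # cells per cluster (reuse factor 1/7)
N_sectors = 3       # sector antennas per site
B_total_mhz = 25.0  # total available bandwidth

reuse_distance_km = R_km * math.sqrt(3 * K)         # D = R * sqrt(3N)
bw_per_cell_mhz = B_total_mhz / K                   # B/K
bw_per_sector_mhz = B_total_mhz / (N_sectors * K)   # B/(N*K)

print(f"reuse distance D    : {reuse_distance_km:.2f} km")
print(f"bandwidth per cell  : {bw_per_cell_mhz:.2f} MHz")
print(f"bandwidth per sector: {bw_per_sector_mhz:.2f} MHz")
```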
Code division multiple access-based systems use a wider frequency band to achieve the same
rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse
factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites
use the same frequencies, and the different base stations and users are separated by codes rather
than frequencies. While N is shown as 1 in this example, that does not mean the CDMA cell has
only one sector, but rather that the entire cell bandwidth is also available to each sector
individually.
Broadcast messages and paging
Practically every cellular system has some kind of broadcast mechanism. This can be used
directly for distributing information to multiple mobiles. Commonly, for example in mobile
telephony systems, the most important use of broadcast information is to set up channels for one
to one communication between the mobile transceiver and the base station. This is called paging.
The three different paging procedures generally adopted are sequential, parallel and selective
paging.
The details of the process of paging vary somewhat from network to network, but normally we
know a limited number of cells where the phone is located (this group of cells is called a
Location Area in the GSM or UMTS system, or Routing Area if a data packet session is
involved; in LTE, cells are grouped into Tracking Areas). Paging takes place by sending the
broadcast message to all of those cells. Paging messages can be used for information transfer.
This happens in pagers, in CDMA systems for sending SMS messages, and in the UMTS system
where it allows for low downlink latency in packet-based connections.
Movement from cell to cell and handover
In a primitive taxi system, when the taxi moved away from a first tower and closer to a second
tower, the taxi driver manually switched from one frequency to another as needed. If a
communication was interrupted due to a loss of a signal, the taxi driver asked the base station
operator to repeat the message on a different frequency.
In a cellular system, as the distributed mobile transceivers move from cell to cell during an
ongoing continuous communication, switching from one cell frequency to a different cell
frequency is done electronically without interruption and without a base station operator or
manual switching. This is called the handover or handoff. Typically, a new channel is
automatically selected for the mobile unit on the new base station which will serve it. The mobile
unit then automatically switches from the current channel to the new channel and communication
continues.
The exact details of the mobile system’s move from one base station to the other vary
considerably from system to system (see the example below for how a mobile phone network
manages handover).
Cellular handover in mobile phone networks
As the phone user moves from one cell area to another cell while a call is in progress, the mobile
station will search for a new channel to attach to in order not to drop the call. Once a new
channel is found, the network will command the mobile unit to switch to the new channel and at
the same time switch the call onto the new channel.
With CDMA, multiple CDMA handsets share a specific radio channel. The signals are separated
by using a pseudo noise code (PN code) specific to each phone. As the user moves from one cell
to another, the handset sets up radio links with multiple cell sites (or sectors of the same site)
simultaneously.
This is known as "soft handoff" because, unlike with traditional cellular technology, there is no
one defined point where the phone switches to the new cell.
In IS-95 inter-frequency handovers and older analog systems such as NMT it will typically be
impossible to test the target channel directly while communicating. In this case other techniques
have to be used such as pilot beacons in IS-95. This means that there is almost always a brief
break in the communication while searching for the new channel followed by the risk of an
unexpected return to the old channel.
If there is no ongoing communication or the communication can be interrupted, it is possible for
the mobile unit to spontaneously move from one cell to another and then notify the base station
with the strongest signal.
Signal-to-Interference Ratio Calculations
In information theory and telecommunication engineering, the signal-to-interference-plus-noise
ratio (SINR) (also known as the signal-to-noise-plus-interference ratio (SNIR)) is a quantity
used to give theoretical upper bounds on channel capacity (or the rate of information transfer) in
wireless communication systems such as networks. Analogous to the SNR used often in wired
communications systems, the SINR is defined as the power of a certain signal of interest divided
by the sum of the interference power (from all the other interfering signals) and the power of
some background noise. If the power of the noise term is zero, then the SINR reduces to the signal-to-interference ratio (SIR). Conversely, zero interference reduces the SINR to the signal-to-noise
ratio (SNR), which is used less often when developing mathematical models of wireless
networks such as cellular networks.
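As a small worked example of the definition, the sketch below computes SINR = S / (sum of interference powers + noise power) and converts it to decibels; the power levels are made-up values for illustration.

```python
import math

def sinr_db(signal_mw, interference_mw, noise_mw):
    """SINR = S / (sum of interferers + noise), returned in dB."""
    sinr_linear = signal_mw / (sum(interference_mw) + noise_mw)
    return 10.0 * math.log10(sinr_linear)

# Illustrative values: one wanted signal, two co-channel interferers, thermal noise.
print(f"SINR = {sinr_db(1e-6, [1e-8, 5e-9], 1e-9):.1f} dB")
```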
The complexity and randomness of certain types of wireless networks and signal propagation has
motivated the use of stochastic geometry models in order to model the SINR, particularly for
cellular or mobile phone networks.
Network Planning for CDMA Systems
Code division multiple access (CDMA) is a channel access method used by various radio
communication technologies.
CDMA is an example of multiple access, which is where several transmitters can send
information simultaneously over a single communication channel. This allows several users to
share a band of frequencies (see bandwidth). To permit this without undue interference between
the users, CDMA employs spread-spectrum technology and a special coding scheme (where each
transmitter is assigned a code).
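To show the "special coding scheme" idea in miniature, the Python sketch below spreads two users' bits with orthogonal codes, adds the chips on a shared channel, and despreads each user by correlating with its own code. The codes and bits are toy values; real CDMA systems add many more mechanisms (power control, long PN codes, rake reception).

```python
# Toy direct-sequence CDMA: two users share the channel via orthogonal codes.
CODE_A = [+1, +1, +1, +1]      # Walsh-style orthogonal spreading codes (toy length 4)
CODE_B = [+1, -1, +1, -1]

def spread(bits, code):
    # Map bit 0/1 to -1/+1 and multiply by the user's chip sequence.
    return [(1 if b else -1) * c for b in bits for c in code]

def despread(chips, code):
    # Correlate each code-length block with the code and take the sign.
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(x * c for x, c in zip(chips[i:i + n], code))
        out.append(1 if corr > 0 else 0)
    return out

bits_a, bits_b = [1, 0, 1], [0, 0, 1]
channel = [a + b for a, b in zip(spread(bits_a, CODE_A), spread(bits_b, CODE_B))]

print(despread(channel, CODE_A) == bits_a)   # True: user A recovered
print(despread(channel, CODE_B) == bits_b)   # True: user B recovered
```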
Challenges in Radio Network Planning
- Characteristics of the WCDMA technique: DS-CDMA, FDD/TDD
- Rake reception, power control, soft and softer handover
- Different services (data rates, Eb/N0 requirements)
- Spreading / de-spreading, with different spreading gains
- Mutual influence of coverage and capacity: coverage is limited by the uplink and capacity by the downlink
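These factors feed directly into the usual CDMA dimensioning arithmetic. The sketch below computes the processing (spreading) gain W/R for a couple of services and a crude single-cell user-count estimate from the Eb/N0 requirement; the chip rate, bit rates and Eb/N0 targets are illustrative assumptions, and real planning tools also include other-cell interference, activity factors and load margins.

```python
import math

CHIP_RATE = 3.84e6   # WCDMA chip rate, chips/s

def processing_gain_db(bit_rate):
    """Spreading gain W/R expressed in dB."""
    return 10.0 * math.log10(CHIP_RATE / bit_rate)

def rough_pole_capacity(bit_rate, ebno_db):
    """Very crude single-cell user estimate: N ~ 1 + (W/R) / (Eb/N0)."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 1.0 + (CHIP_RATE / bit_rate) / ebno

# Illustrative services: 12.2 kbps voice and 384 kbps data.
for name, rate, ebno_db in [("voice 12.2 kbps", 12.2e3, 5.0),
                            ("data 384 kbps", 384e3, 2.0)]:
    print(f"{name}: Gp = {processing_gain_db(rate):.1f} dB, "
          f"~{rough_pole_capacity(rate, ebno_db):.0f} users per cell (upper bound)")
```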
Coverage and capacity are significant issues in the planning process for cellular mobile
networks. Here we focus on calculations of capacity and coverage for a Code Division
Multiple Access (CDMA) cell in Universal Mobile Telecommunication System (UMTS)
networks in different propagation environments.
Present day wireless communications is all pervasive, influencing every area of modern life,
reaching anywhere, anytime, and in any form. The escalation of wireless communications in
recent years has been exponential and the telecommunications landscape is changing daily. Next
generation wireless communications are being designed to facilitate high speed data
communications in addition to voice calls. The evolution of cost-effective, high-quality mobile
networks requires flexible utilization of the available spectrum. With the need for high-speed
wireless data and increased frequency congestion, there is considerable interest in a proper understanding of the
radio channel. Knowledge of radio wave propagation characteristics is vital for designing any
mobile/wireless communication system in a given region. Radio wave propagation models are
necessary to determine propagation characteristics for any arbitrary installation. The predictions
are required for proper coverage planning, the determination of multipath effects, as well as
for interference and cell calculations, which are the basis for any network planning process.
Radio link is interference limited, when the interference levels are well above the receiver
sensitivity level. An interference-limited network is usually considered to be capacity limited,
i.e., the interference level sets the limits of the network's spectral efficiency. On the other hand,
noise-limited networks are considered to be coverage limited, i.e., the network has cell-range
rather than spectral-efficiency limitations. In this scenario, the interference containment aspect
of CDMA facilitates an increase in network capacity. UMTS radio networks are based on
CDMA technology and are currently offered in several countries.
The aim of the technology is to realize the user requirement for new services such as enhanced
and multimedia messaging through high-speed data channels. The performance of the radio
interface in cellular CDMA systems is difficult to analyze, due to the trade-off between coverage
and capacity, caused by the interference limited nature of these systems.
CDMA based mobile networks need efficient network planning. The network planning process
will allow the maximum number of users with adequate signal strength in a CDMA cell. With
proper analysis of capacity- and coverage-related issues, the key differences that arise between 2G
(GSM) and 3G (UMTS) networks due to the different levels of service offered can be
minimized. The simple expression derived in the present work using equations (6) and (7),
describing the relationship between coverage, capacity, data rates and number of users, can be
used in CDMA cellular system planning to set limits on the maximum number of users that can
be admitted into the cell in order to meet coverage and capacity requirements. An attempt is
made in the present study to characterize a CDMA system in different propagation environments
using propagation models and systems parameters for downlink and uplink configurations.
However, there are limitations in our simulations, as we assumed perfect power control.
Wireless Network Operations:
Mobility Management
Mobility management is one of the major functions of a GSM or a UMTS network that allows mobile
phones to work. The aim of mobility management is to track where the subscribers are, allowing calls,
SMS and other mobile phone services to be delivered to them.
Types of area
Location area
A "location area" is a set of base stations that are grouped together to optimize signaling.
Typically, tens or even hundreds of base stations share a single Base Station Controller (BSC) in
GSM, or a Radio Network Controller (RNC) in UMTS, the intelligence behind the base stations.
The BSC handles allocation of radio channels, receives measurements from the mobile phones,
and controls handovers from base station to base station.
To each location area, a unique number called a "location area code" is assigned. The location
area code is broadcast by each base station, known as a "base transceiver station" BTS in GSM,
or a Node B in UMTS, at regular intervals.
If the location areas are very large, there will be many mobiles operating simultaneously,
resulting in very high paging traffic, as every paging request has to be broadcast to every base
station in the location area. This wastes bandwidth and power on the mobile, by requiring it to
listen for broadcast messages too much of the time. If on the other hand, there are too many
small location areas, the mobile must contact the network very often for changes of location,
which will also drain the mobile's battery. A balance has therefore to be struck.
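The balance described above can be made concrete with a toy signalling-cost model: paging cost grows with the number of cells in a location area, while location-update cost shrinks as areas get larger. The per-message costs, call rates and mobility rates below are invented solely to show the shape of the tradeoff.

```python
# Toy location-area sizing tradeoff (all rates and costs are invented).
PAGING_RATE_PER_HOUR = 2.0    # incoming calls/SMS that trigger paging, per mobile
COST_PER_PAGE_MSG = 1.0       # relative cost of one paging message in one cell
BOUNDARY_CROSS_RATE = 6.0     # area-boundary crossings per hour for a small area
COST_PER_UPDATE = 20.0        # relative cost of one location-update procedure

def signalling_cost(cells_per_area):
    paging = PAGING_RATE_PER_HOUR * cells_per_area * COST_PER_PAGE_MSG
    # Larger areas are crossed less often; roughly scale with the square root of the area size.
    updates = COST_PER_UPDATE * BOUNDARY_CROSS_RATE / (cells_per_area ** 0.5)
    return paging + updates

for size in (1, 4, 16, 64, 256):
    print(f"{size:>3} cells/area -> relative signalling cost {signalling_cost(size):7.1f}")
```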
Routing area
The routing area is the PS domain equivalent of the location area. A "routing area" is normally a
subdivision of a "location area". Routing areas are used by mobiles which are GPRS-attached.
GPRS is optimized for "bursty" data communication services, such as wireless internet/intranet,
and multimedia services. It is also known as GSM-IP ("Internet Protocol") because it will
connect users directly to Internet Service Providers.
The bursty nature of packet traffic means that more paging messages are expected per mobile,
and so it is worth knowing the location of the mobile more accurately than it would be with
traditional circuit-switched traffic. A change from routing area to routing area (called a "Routing
Area Update") is done in an almost identical way to a change from location area to location area.
The main difference is that the "Serving GPRS Support Node" (SGSN) is the network element
involved.
Tracking area
The tracking area is the LTE counterpart of the location area and routing area. A tracking area is
a set of cells. Tracking areas can be grouped into lists of tracking areas (TA lists), which can be
configured on the User Equipment (UE). Tracking area updates are performed periodically or
when the UE moves to a tracking area that is not included in its TA list.
Operators can allocate different TA lists to different UEs. This can avoid signaling peaks in some
conditions: for instance, the UEs of passengers of a train may not perform tracking area updates
simultaneously.
On the network side, the involved element is the Mobility Management Entity (MME). MME
configures TA lists using NAS messages like Attach Accept, TAU Accept or GUTI Reallocation
Command.
Location update procedure
A GSM or UMTS network, like all cellular networks, is basically a radio network of individual
cells, known as base stations. Each base station covers a small geographical area which is part of
a uniquely identified location area. By integrating the coverage of each of these base stations, a
cellular network provides a radio coverage over a much wider area. A group of base stations is
named a location area, or a routing area.
The location update procedure allows a mobile device to inform the cellular network, whenever
it moves from one location area to the next. Mobiles are responsible for detecting location area
codes. When a mobile finds that the location area code is different from its last update, it
performs another update by sending to the network, a location update request, together with its
previous location, and its Temporary Mobile Subscriber Identity (TMSI).
There are several reasons why a mobile may provide updated location information to the
network. Whenever a mobile is switched on or off, the network may require it to perform an
IMSI attach or IMSI detach location update procedure. Also, each mobile is required to regularly
report its location at a set time interval using a periodic location update procedure. Whenever a
mobile moves from one location area to the next while not on a call, a random location update
is required. This is also required of a stationary mobile that reselects coverage from a cell in a
different location area, because of signal fade. Thus a subscriber has reliable access to the
network and may be reached with a call, while enjoying the freedom of mobility within the
whole coverage area.
When a subscriber is paged in an attempt to deliver a call or SMS and the subscriber does not
reply to that page then the subscriber is marked as absent in both the Mobile Switching Center /
Visitor Location Register (MSC/VLR) and the Home Location Register (HLR) (Mobile not
reachable flag MNRF is set). The next time the mobile performs a location update the HLR is
updated and the mobile not reachable flag is cleared.
Many transit agencies are embracing the concept of ‘mobility management’, which is a
strategic approach to service coordination and customer service that is becoming a worldwide
trend in the public transportation sector.
When implemented, mobility management will move transit agencies away from their roles as
fixed-route service operators, and toward collaboration with other transportation providers. The
idea behind this approach is to create a full range of well synchronized mobility services within a
community.
Mobility management starts with the creation of partnerships among transportation providers in a
particular region, so as to expand the range of viable options that communities have for
transportation. Communication is also a critical component of mobility management, as the
general public must be made aware of these options.
With the mobility management approach, transit resources are efficiently coordinated, enabling
customers to make better decisions and improving customer service.
Radio Resource Management
Radio resource management (RRM) is the system-level control of co-channel interference and other radio transmission
characteristics in wireless communication systems, for example cellular networks, wireless
networks and broadcasting systems. RRM involves strategies and algorithms for controlling
parameters such as transmit power, user allocation, beamforming, data rates, handover criteria,
modulation scheme, error coding scheme, etc. The objective is to utilize the limited radio-frequency spectrum resources and radio network infrastructure as efficiently as possible.
RRM concerns multi-user and multi-cell network capacity issues, rather than the point-to-point
channel capacity. Traditional telecommunications research and education often dwell upon
channel coding and source coding with a single user in mind, although it may not be possible to
achieve the maximum channel capacity when several users and adjacent base stations share the
same frequency channel. Efficient dynamic RRM schemes may increase the system spectral
efficiency by an order of magnitude, which often is considerably more than what is possible by
introducing advanced channel coding and source coding schemes. RRM is especially important
in systems limited by co-channel interference rather than by noise, for example cellular systems
and broadcast networks homogeneously covering large areas, and wireless networks consisting
of many adjacent access points that may reuse the same channel frequencies.
The cost for deploying a wireless network is normally dominated by base station sites (real estate
costs, planning, maintenance, distribution network, energy, etc.) and sometimes also by
frequency license fees. The objective of radio resource management is therefore typically to
maximize the system spectral efficiency in bit/s/Hz/area unit or Erlang/MHz/site, under some
kind of user fairness constraint, for example, that the grade of service should be above a certain
level. The latter involves covering a certain area and avoiding outage due to co-channel
interference, noise, attenuation caused by path losses, fading caused by shadowing and
multipath, Doppler shift and other forms of distortion. The grade of service is also affected by
blocking due to admission control, scheduling starvation or inability to guarantee quality of
service that is requested by the users.
While classical radio resource management primarily considered the allocation of time and
frequency resources (with fixed spatial reuse patterns), recent multi-user MIMO techniques
enable adaptive resource management also in the spatial domain. In cellular networks, this
means that the fractional frequency reuse in the GSM standard has been replaced by a universal
frequency reuse in LTE standard.
The primary goal of Radio Resource Management (RRM) is to control the use of radio resources
in the system while also ensuring that the Quality of Service (QoS) requirements of the
individual radio bearers are met and the overall usage of radio resources on the system level is
minimized. The objective of RRM is to satisfy the service requirements at the smallest possible
cost to the system, ensuring optimized use of spectrum.
Some of the main functions of RRM include the following:
- Radio Admission Control (RAC)
- Radio Bearer Control (RBC)
- Connection Mobility Control
- Dynamic allocation of resources to UEs in both uplink and downlink (DRA)
- Inter-Cell Interference Co-ordination (ICIC)
- Load Balancing (LB)
Power Management
Many differences exist between wireless networks and traditional wired ones. The most notable
difference between these networks is the absence of a wired medium for communication. The
promise of a truly wireless network is to have the freedom to roam around anywhere within the
range of the network and not be bound to a single location. Without proper power management
of these roaming devices, however, the energy required to keep these devices connected to the
network over extended periods of time quickly dissipates. Users are left searching for power
outlets rather than network ports, and becoming once again bound to a single location.
A plethora of power management schemes have been developed in recent years in order to
address this problem. Solutions exist at every layer of the traditional network protocol stack, and
each of them promises to provide its own level of energy savings. This section takes a look at
the different techniques used at each layer and examines the standards that have emerged as well
as products being developed that are based on them. It focuses on the subset of wireless
networking that deals with Wireless Local Area Networks (WLANs) and Wireless Personal Area
Networks (WPANs). Techniques used in Wireless Sensor Networks (WSNs), which fall under a
subset of WPANs known as LR-WPANs and require very low power operation at very low data
rates, are given particular focus.
Power Constrained Wireless Networks
Wireless networks have been a hot topic for many years. Their potential was first realized with
the deployment of cellular networks for use with mobile telephones in the late 1970's. Since this
time, many other wireless wide are networks (WWANs) have begun to emerge, along with the
introduction of wireless Metropolitan Area Networks (WMANs), wireless Local Area Network
(WLANs), and wireless Personal Area Networks (WPANs). Fig. shows a number of standards
that have been developed for each of these types of networks.
Power Management Techniques
The previous section discussed WLANs and WPANs and the various standards that exist for
them. The differences between each type of network were introduced, with an emphasis on the
power management requirements that each of them has. This section
discusses the various power management techniques used by these standards for reducing the
power consumed in each type of network. Many of the techniques introduced in this section do
not appear in any of these standards, but are used in common practice to reduce the power of
devices in both WLANs and WPANs. These techniques exist from the application layer all the
way down to the physical layer of a traditional networking protocol stack. Techniques specific to
a particular type of network are annotated as appropriate.
Application Layer
At the application layer a number of different techniques can be used to reduce the power
consumed by a wireless device. A technique known as load partitioning allows an application to
have all of its power intensive computation performed at its base station rather than locally. The
wireless device simply sends the request for the computation to be performed, and then waits for
the result. Another technique uses proxies in order to inform an application to changes in battery
power. Applications use this information to limit their functionality and only provide their most
essential features. This technique might be used to suppress certain "unnecessary" visual effects
that accompany a process. While these techniques may be adapted to work with any application
that wishes to support them, a number of techniques also exist for specific classes of
applications.
Some applications are so common that it is worth exploring techniques that specifically deal with
reducing the power consumed while running them. Two of the most common such applications
include database operations and video processing. For database systems, techniques are explored
that are able to reduce the power consumed during data retrieval, indexing, as well as querying
operations. In all three cases, energy is conserved by reducing the number of transmissions
needed to perform these operations. For video processing applications, energy can be conserved
using compression techniques to reduce the number of bits transmitted over the wireless
medium. Since performing the compression itself may consume a lot of power, however, other
techniques that allow the video quality to become slightly degraded have been explored in order
to reduce the power even further. Please refer to the references for a more complete list of application-specific techniques.
Transport Layer
The various techniques used to conserve energy at the transport layer all try to reduce the number
of retransmissions necessary due to packet losses from a faulty wireless link. In a traditional
(wired) network, packet losses are used to signify congestion and require back off mechanisms to
account for this. In a wireless network, however, losses can occur sporadically and should not
immediately be interpreted as the onset of congestion.
The TCP-Probing and Wave and Wait Protocols have been developed with this knowledge in
mind. They are meant as replacements for traditional TCP, and are able to guarantee end-to-end
data delivery with high throughput and low power consumption.
Network Layer
Power management techniques existing at the network layer are concerned with performing
power efficient routing through a multi-hop network. They are typically backbone based,
topology control based, or a hybrid of them both. In a backbone based protocol (sometimes also
referred to as Charge Based Clustering), some nodes are chosen to remain active at all times
(backbone nodes), while others are allowed to sleep periodically. The backbone nodes are used
to establish a path between all source and destination nodes in the network. Any node in the
network must therefore be within one hop of at least one backbone node, including backbone
nodes themselves. Energy savings are achieved by allowing non-backbone nodes to sleep
periodically, as well as by periodically changing which nodes in fact make up the backbone.
Data Link Layer
The two most common techniques used to conserve energy at the link layer involve reducing the
transmission overhead during the Automatic Repeat Request (ARQ) and Forward Error
Correction (FEC) schemes. Both of these schemes are used to reduce the number of packet
errors at a receiving node. By enabling ARQ, a router is able to automatically request the
retransmission of a packet directly from its source without first requiring the receiver node to
detect that a packet error has occurred. Results have shown that sometimes it is more energy
efficient to transmit at a lower transmission power and have to send multiple ARQs than to send
at a high transmission power and achieve better throughput. Integrating the use of FEC codes to
reduce the number of retransmissions necessary at the lower transmission power can result in
even more energy savings. Power management techniques exist that exploit these observations.
Other power management techniques existing at the link layer are based on some sort of packet
scheduling protocol. By scheduling multiple packet transmission to occur back to back (i.e. in a
burst), it may be possible to reduce the overhead associated with sending each packet
individually. Preamble bytes only need to be sent for the first packet in order to announce its
presence on the radio channel, and all subsequent packets essentially "piggyback" this
announcement. Packet scheduling algorithms may also reduce the number of retransmissions
necessary if a packet is only scheduled to be sent during a time when its destination is known to
be able to receive packets. By reducing the number of retransmissions necessary, the overall
power consumption is consequently reduced as well.
MAC Layer
Power saving techniques existing at the MAC layer consist primarily of sleep scheduling
protocols. The basic principle behind all sleep scheduling protocols is that lots of power is
wasted listening on the radio channel while there is nothing there to receive. Sleep schedulers are
used to duty cycle a radio between its on and off power states in order to reduce the effects of
this idle listening. They are used to wake up a radio whenever it expects to transmit or receive
packets and sleep otherwise. Other power saving techniques at this layer include battery aware
MAC protocols (BAMAC) in which the decision of who should send next is based on the battery
level of all surrounding nodes in the network. Battery level information is piggy-backed on each
packet that is transmitted, and individual nodes base their decisions for sending on this
information.
Sleep scheduling protocols can be broken up into two categories: synchronous and
asynchronous. Synchronous sleep scheduling policies rely on clock synchronization between
all nodes in a network. As seen in Fig. 5, senders and receivers are aware of when each
other should be on and only send to one another during those time periods. They go to sleep
otherwise.
Asynchronous sleep scheduling, on the other hand, does not rely on any clock synchronization
between nodes whatsoever. Nodes can send and receive packets whenever they please, according
to the MAC protocol in use. Fig. shows how two nodes running asynchronous sleep schedulers are able
to communicate. Nodes wake up and go to sleep periodically in the same way they do for
synchronous sleep scheduling. Since there is no time synchronization, however, there must be a
way to ensure that receiving nodes are awake to hear the transmissions coming in from other
nodes. Normally preamble bytes are sent by a packet in order to synchronize the starting point of
the incoming data stream between the transmitter and receiver. With asynchronous sleep
scheduling, a significant number of extra preamble bytes are sent per packet in order to guarantee
that a receiver has the chance to synchronize to it at some point. In the worst case, a packet will
begin transmitting just as its receiver goes to sleep, and preamble bytes will have to be sent for a
time equal to the receiver's sleep interval (plus a little more to allow for proper synchronization
once it wakes up). Once the receiver wakes up, it synchronizes to these preamble bytes and remains
on until it receives the packet. Unlike with the power-efficient routing protocols introduced earlier, it
doesn't make sense to have a hybrid sleep scheduling protocol based on each of the two
techniques. The energy savings achieved using each of them varies from system to system and
application to application. One technique is not "better" than the other in this sense, so efforts are
being made to define exactly when each type should be used.
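A rough duty-cycle calculation illustrates why sleep scheduling matters so much, and also the preamble penalty that asynchronous scheduling pays. The radio current draws, sleep interval and listen timings below are invented round numbers, not figures from any specific radio or standard.

```python
# Toy duty-cycle energy model for a sleep-scheduled radio (all numbers illustrative).
I_ACTIVE_MA = 20.0   # current while the radio is active (transmitting or listening)
I_SLEEP_MA = 0.02    # current while sleeping
WAKE_PERIOD_S = 1.0  # node wakes once per second
LISTEN_S = 0.01      # synchronous case: listen for 10 ms per wakeup

def avg_current_ma(active_time_s, period_s):
    """Average current over one period given how long the radio stays active."""
    return (active_time_s * I_ACTIVE_MA + (period_s - active_time_s) * I_SLEEP_MA) / period_s

print(f"always-on radio            : {I_ACTIVE_MA:.2f} mA average")
print(f"synchronous sleep schedule : {avg_current_ma(LISTEN_S, WAKE_PERIOD_S):.2f} mA average")

# Asynchronous worst case: the sender must transmit preamble for a whole sleep
# interval to guarantee the receiver wakes up and hears it at some point.
print(f"async sender, worst-case packet: "
      f"{avg_current_ma(WAKE_PERIOD_S, WAKE_PERIOD_S):.2f} mA during that interval")
```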
Physical Layer
At the physical layer, techniques can be used to not only preserve energy, but also generate it.
Proper hardware design techniques allow one to decrease the level of parasitic leak currents in an
electronic device to almost nothing. These smaller leakage currents ultimately result in longer
lifetimes for these devices, as less energy is wasted while idle. Variable clock CPUs, CPU
voltage scaling, flash memory, and disk spin down techniques can also be used to further reduce
the power consumed at the physical layer. A technique known as Remote Access Switch (RAS)
can be used to wake up a receiver only when it has data destined for it. A low power radio circuit
is run to detect a certain type of activity on the channel.
Only when this activity is detected does the circuit wake up the rest of the system for reception
of a packet. A transmitter has to know what type of activity needs to be sent on the channel to
wake up each of its receivers. Energy harvesting techniques allow a device to actually gather
energy from its surrounding environment.
Ambient energy is all around in the form of vibration, strain, inertial forces, heat, light, wind,
magnetic forces, etc. Energy harvesting techniques allow one to harness this energy and either
convert it directly into usable electric current or store it for later use within an electrical system.
Technological advances continue to be made in both low-power design and in energy harvesting
techniques.
Existing Standards
In the previous section, various techniques were explored that enable energy to be conserved at
various layers within the wireless networking protocol stack. Some techniques were looked at in
greater detail than others, and some techniques existing at the overall system level were not
discussed at all. These power management schemes involve controlling the power state for
peripheral devices such as the display or hard disk on a laptop computer. Others include cycling
through the use of multiple batteries on a device in order to increase the overall lifetime of each
individual one. Since these techniques do not explicitly exist at any single layer within the
wireless networking protocol stack itself, they have been left out of this discussion. For more
information on these and other power management techniques not discussed in the previous
section, please refer to chapter eleven of the reference text and its corresponding list of references.
The following section focuses on the use of the techniques introduced in the previous section for
defining the various power management schemes built into the IEEE 802 standards. As the IEEE
standards body only concerns itself with defining the various MAC layer protocols
for the 802 family of wireless networks, the standards discussed in this section only make use of
the sleep scheduling protocols discussed previously. The standards that exist for WLANs
(802.11-PSM), WPANs (Bluetooth), and WSNs (802.15.4/Zigbee) are all introduced separately.
Wireless LANs
The IEEE 802.11 standard specifies how communication is achieved for wireless nodes existing
in a Wireless Local Area Network (WLAN). Part of this standard is dedicated to describing a
feature known as Power Save Mode (PSM) that is available for nodes existing in an
infrastructure based 802.11 WLAN. PSM is based on a synchronous sleep scheduling policy, in
which wireless nodes (stations) are able to alternate between an active mode and a sleep mode.
When a wireless station using PSM first joins an infrastructure-based WLAN, it must notify its
access point that it has PSM enabled. The access point then synchronizes with the PSM station
allowing it to begin running its synchronous sleep schedule. When packets arrive for each of
these PSM stations, the access point buffers them until their active period comes around again.
At the beginning of each active period, a beacon message is sent from the access point to each
wireless station in order to notify them of these buffered packets. PSM stations then request these
packets and they are forwarded from the access point. Once all buffered frames have been
received, a PSM station resumes with its sleep schedule wherever it left off. Whenever a PSM
station has data to send, it simply wakes up, sends its packet, and then resumes its sleep schedule
protocol as appropriate.
Although this feature of 802.11 networks is readily available on all devices implementing the full
802.11 specification, it is not very widely used. Many studies have been done to investigate the
effects of using PSM and other power saving techniques for WLANs. They all conclude that the
throughput achieved with these techniques is significantly less than with them disabled. While
PSM may significantly reduce the energy consumed by a wireless station, many users prefer to
sacrifice these power savings for an increase in performance.
Wireless PANs
The 802.15.1 standard [Bluetooth] provides provisions for power management as well. Wireless
nodes in a Bluetooth network are organized into groups known as piconets, with one node
dedicated as the master node and all others as slave nodes. Up to seven active nodes can exist in
a piconet at any given time, with up to 256 potential members (249 inactive). All nodes operate
using a synchronous sleep scheduling policy in order to exchange data. A beacon messaging
system similar to the one described above for 802.11-based networks is used to exchange messages
between slave nodes and their master. All nodes are able to communicate with all other nodes
within the Piconet, but messages between slaves must be sent exclusively through the master
node.
Bluetooth defines eight different operational states, 3 of which are dedicated to low power
operations. These three low power states are known as Sniff, Hold, and Park. While in the Sniff
state, an active Bluetooth device simply lowers its duty cycle and listens to the piconet at a
reduced rate. When switching to the Hold state, a device will shut down all communication
capabilities it has with the piconet, but remain "active" in the sense that it does not give up its
access to one of the seven active slots available for devices within the piconet. Devices in the
Park state disable all communication with the piconet just as in the Hold state, except that they
also relinquish their active node status.
Wireless Sensor Networks
The 802.15.4 wireless networking standard provides low data rate, low power communication
that is ideal for wireless sensor networking applications. It too is based on a synchronous sleep
scheduling policy that periodically wakes nodes up and puts them to sleep in order to exchange
data. The difference between this standard and the others is in the frequency with which nodes
wake up, and the data rate (and correspondingly the required transmission power) with which
they transmit data. Many products for WSNs are being developed in industry
with "Zigbee" compatibility as a very strong marketing point.