School of Electrical, Electronics and
Computer Engineering
University of Newcastle-upon-Tyne
Noise in Communication
Systems
Prof. Rolando Carrasco
Lecture Notes
Newcastle University
2008/2009
Noise in Communication Systems
1. Introduction
2. Thermal Noise
3. Shot Noise
4. Low Frequency or Flicker Noise
5. Excess Resistor Noise
6. Burst or Popcorn Noise
7. General Comments
8. Noise Evaluation – Overview
9. Analysis of Noise in Communication Systems
   • Thermal Noise
   • Noise Voltage Spectral Density
   • Resistors in Series
   • Resistors in Parallel
10. Matched Communication Systems
11. Signal-to-Noise
12. Noise Factor – Noise Figure
13. Noise Figure / Factor for Active Elements
14. Noise Temperature
15. Noise Figure / Factor for Passive Elements
16. Review – Noise Factor / Figure / Temperature
17. Cascaded Networks
18. System Noise Figure
19. System Noise Temperature
20. Algebraic Representation of Noise
21. Additive White Gaussian Noise
1. Introduction
Noise is a general term used to describe an unwanted signal that affects a wanted signal. These unwanted signals arise from a variety of sources, which may be considered in one of two main categories:
• Interference, usually from a human source (man-made)
• Naturally occurring random noise

Interference
Interference arises, for example, from other communication systems (crosstalk), 50 Hz supplies (hum) and their harmonics, switched-mode power supplies, thyristor circuits, ignition systems (car spark plugs), motors, etc.
1. Introduction (Cont’d)
Natural Noise
Naturally occurring external noise sources include atmospheric disturbances (e.g. electric storms, lightning, ionospheric effects, etc.) and so-called 'sky noise' or cosmic noise, which includes noise from the galaxy, solar noise and 'hot spot' noise due to oxygen and water vapour resonance in the earth's atmosphere.
2. Thermal Noise (Johnson Noise)
This type of noise is generated by all resistances (e.g. a resistor,
semiconductor, the resistance of a resonant circuit, i.e. the real part of the
impedance, cable etc).
Experimental results (by Johnson) and theoretical studies (by Nyquist) give the mean square noise voltage as

    Vn² = 4 k T B R   (volt²)

where k = Boltzmann's constant = 1.38 × 10⁻²³ joules per kelvin
      T = absolute temperature (K)
      B = noise bandwidth measured in (Hz)
      R = resistance (ohms)
2. Thermal Noise (Johnson Noise) (Cont’d)
The law relating noise power, N, to the temperature and bandwidth is
N = k TB watts
Thermal noise is often referred to as ‘white noise’ because it has a
uniform ‘spectral density’.
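Both relations are easy to evaluate numerically. A minimal Python sketch; the 50 Ω / 290 K / 1 MHz values below are illustrative assumptions, not figures from the notes:

```python
import math

k = 1.38e-23  # Boltzmann's constant, J/K

def thermal_noise_power(T, B):
    """Available thermal noise power N = kTB (watts)."""
    return k * T * B

def thermal_noise_rms_voltage(T, B, R):
    """RMS thermal noise voltage across a resistance: Vn = sqrt(4kTBR)."""
    return math.sqrt(4 * k * T * B * R)

# Illustrative example: 50-ohm resistor at 290 K over a 1 MHz bandwidth
N = thermal_noise_power(290, 1e6)             # ~4.0e-15 W
Vn = thermal_noise_rms_voltage(290, 1e6, 50)  # ~0.9 microvolts
print(f"N = {N:.3e} W, Vn = {Vn:.3e} V")
```

Note how small these quantities are: thermal noise only matters because receivers amplify microvolt-level signals.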
3. Shot Noise
• Shot noise was originally used to describe noise due to random
fluctuations in electron emission from cathodes in vacuum tubes
(called shot noise by analogy with lead shot).
• Shot noise also occurs in semiconductors due to the liberation of
charge carriers.
• For pn junctions the mean square shot noise current is

    In² = 2 (I_DC + 2 I_o) q_e B   (amps²)

where I_DC is the direct current through the pn junction (amps)
      I_o is the reverse saturation current (amps)
      q_e is the electron charge = 1.6 × 10⁻¹⁹ coulombs
      B is the effective noise bandwidth (Hz)
• Shot noise is found to have a uniform spectral density, as for thermal noise.
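As a check on units and magnitudes, the shot-noise expression can be evaluated directly; the 1 mA / 25 nA / 1 MHz figures below are illustrative assumptions, not values from the notes:

```python
import math

def shot_noise_rms_current(I_dc, I_o, B, q_e=1.6e-19):
    """RMS shot noise current of a pn junction:
    In = sqrt(2 * (I_DC + 2*I_o) * q_e * B), in amps."""
    return math.sqrt(2 * (I_dc + 2 * I_o) * q_e * B)

# Illustrative example: 1 mA forward current, 25 nA reverse saturation
# current, 1 MHz effective noise bandwidth
In = shot_noise_rms_current(1e-3, 25e-9, 1e6)
print(f"In = {In:.3e} A")  # on the order of tens of nanoamps
```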
4. Low Frequency or Flicker Noise
Active devices, integrated circuits, diodes, transistors, etc. also exhibit a low frequency noise, which is frequency dependent (i.e. non-uniform), known as flicker noise or 'one-over-f' (1/f) noise.
5. Excess Resistor Noise
Thermal noise in resistors does not vary with frequency, as previously noted, but many resistors also generate an additional frequency dependent noise referred to as excess noise.
6. Burst Noise or Popcorn Noise
Some semiconductors also produce burst or popcorn noise with a spectral density which is proportional to (1/f)².
7. General Comments
For frequencies below a few kHz (low frequency systems), flicker and popcorn noise are the most significant, but these may be ignored at higher frequencies, where 'white' noise predominates.
8. Noise Evaluation
The essence of calculations and measurements is to determine the signal power to noise power ratio, i.e. the (S/N) ratio or the (S/N) expression in dB:

    (S/N) ratio = S / N

    (S/N) dB = 10 log₁₀ (S/N)

Also recall that

    S dBm = 10 log₁₀ ( S(mW) / 1 mW )

and

    N dBm = 10 log₁₀ ( N(mW) / 1 mW )

i.e.

    (S/N) dB = 10 log₁₀ S − 10 log₁₀ N

    (S/N) dB = S dBm − N dBm
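The dB and dBm identities above can be verified numerically; a small Python sketch (the 2 mW and 0.02 mW powers are illustrative):

```python
import math

def snr_db(S, N):
    """(S/N) in dB from signal and noise powers in the same units."""
    return 10 * math.log10(S / N)

def dbm(P_mW):
    """Power in dBm, i.e. dB relative to 1 mW, from power in mW."""
    return 10 * math.log10(P_mW / 1.0)

S, N = 2.0, 0.02  # mW (illustrative)
# (S/N) in dB computed directly equals S_dBm - N_dBm
print(snr_db(S, N))                                 # 20.0
print(math.isclose(snr_db(S, N), dbm(S) - dbm(N)))  # True
```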
8. Noise Evaluation (Cont’d)
The amplitude of noise at any frequency, or in any band of frequencies (e.g. 1 Hz, 10 Hz … 100 kHz, etc.), follows a Gaussian distribution.
8. Noise Evaluation (Cont’d)
Noise may be quantified in terms of the noise power spectral density, p₀ watts per Hz, from which the noise power N may be expressed as

    N = p₀ Bn   (watts)

where Bn is the equivalent noise bandwidth.

Ideal low pass filter: Bn = bandwidth B Hz, and N = p₀ Bn watts.

Practical LPF: the 3 dB bandwidth is shown, but noise does not suddenly cease at B₃dB. Therefore Bn > B₃dB; Bn depends on the actual filter. Again N = p₀ Bn.

In general the equivalent noise bandwidth is > B₃dB.
9. Analysis of Noise In Communication Systems
Thermal Noise (Johnson noise)
This thermal noise may be represented by an equivalent circuit as shown below:

    Vn² = 4 k T B R   (volt²)   (mean square value, power)

then V_RMS = √(Vn²) = 2 √(k T B R) = Vn, i.e. Vn is the RMS noise voltage.

A) System bandwidth = B Hz:   N = constant × B (watts) = K B
B) System bandwidth = 2B Hz:  N = constant × 2B (watts) = K 2B

For A,  S/N = S / (K B)
For B,  S/N = S / (K 2B)

i.e. doubling the bandwidth doubles the noise power and halves the signal-to-noise ratio.
9. Analysis of Noise In Communication Systems (Cont’d)
Resistors in Series
Assume that R1 is at temperature T1 and R2 at temperature T2. Then

    Vn² = Vn1² + Vn2²

    Vn1² = 4 k T1 B R1
    Vn2² = 4 k T2 B R2

    Vn² = 4 k B (T1 R1 + T2 R2)

If T1 = T2 = T,

    Vn² = 4 k T B (R1 + R2)

i.e. resistors in series at the same temperature behave as a single resistor of value R1 + R2.
9. Analysis of Noise In Communication Systems (Cont’d)
Resistors in Parallel
Each noise generator is attenuated by the potential divider formed by the other resistor:

    Vo1 = Vn1 R2 / (R1 + R2)
    Vo2 = Vn2 R1 / (R1 + R2)

    Vn² = Vo1² + Vo2²

    Vn² = 4 k B (T1 R1 R2² + T2 R2 R1²) / (R1 + R2)²

    Vn² = 4 k B R1 R2 (T1 R2 + T2 R1) / (R1 + R2)²

If T1 = T2 = T,

    Vn² = 4 k T B ( R1 R2 / (R1 + R2) )

i.e. resistors in parallel at the same temperature behave as a single resistor of value R1 R2 / (R1 + R2).
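The series and parallel results can be checked numerically. A short sketch; the resistor and temperature values are illustrative assumptions:

```python
import math

k = 1.38e-23  # Boltzmann's constant, J/K

def series_noise_msv(T1, R1, T2, R2, B):
    """Mean square noise voltage of two resistors in series:
    4kB(T1*R1 + T2*R2)."""
    return 4 * k * B * (T1 * R1 + T2 * R2)

def parallel_noise_msv(T1, R1, T2, R2, B):
    """Mean square noise voltage of two resistors in parallel:
    4kB * R1*R2*(T1*R2 + T2*R1) / (R1 + R2)**2."""
    return 4 * k * B * R1 * R2 * (T1 * R2 + T2 * R1) / (R1 + R2) ** 2

# At a common temperature T the pair behaves as a single resistor:
T, B = 290, 1e6
R1, R2 = 100, 400
assert math.isclose(series_noise_msv(T, R1, T, R2, B),
                    4 * k * T * B * (R1 + R2))
assert math.isclose(parallel_noise_msv(T, R1, T, R2, B),
                    4 * k * T * B * (R1 * R2 / (R1 + R2)))
```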
10. Matched Communication Systems
In communication systems we are usually concerned
with the noise (i.e. S/N) at the receiver end of the system.
The transmission path may take several forms, as shown below.
An equivalent circuit, when the line is connected to the receiver, is shown below.
10. Matched Communication Systems (Cont’d)
11. Signal to Noise
The signal to noise ratio is given by

    S/N = Signal Power / Noise Power

The signal to noise ratio in dB is expressed by

    (S/N) dB = 10 log₁₀ (S/N)

    (S/N) dB = S dBm − N dBm

for S and N measured in mW.
12. Noise Factor – Noise Figure
Consider the network shown below.
12. Noise Factor – Noise Figure (Cont'd)
• The amount of noise added by the network is embodied in the noise factor F, which is defined by

    Noise factor F = (S/N)_IN / (S/N)_OUT

• F equals 1 for a noiseless network, and in general F > 1. The noise figure is the noise factor quoted in dB, i.e.

    Noise Figure F dB = 10 log₁₀ F,   F ≥ 0 dB

• The noise figure / factor is a measure of how much a network degrades the (S/N)_IN; the lower the value of F, the better the network.
13. Noise Figure – Noise Factor for Active Elements
For active elements with power gain G > 1, we have

    F = (S/N)_IN / (S/N)_OUT = (S_IN N_OUT) / (N_IN S_OUT)

But S_OUT = G S_IN. Therefore

    F = (S_IN N_OUT) / (N_IN G S_IN) = N_OUT / (G N_IN)

Since in general F > 1, N_OUT is increased by the noise due to the active element, i.e.

    N_OUT = G N_IN + Na

    F = (G N_IN + Na) / (G N_IN)

Na represents the 'added' noise measured at the output. This added noise may be referred to the input as extra noise, i.e. an equivalent diagram is shown below.
13. Noise Figure – Noise Factor for Active Elements (Cont’d)
Ne is the extra noise due to the active element referred to the input; the element itself is then treated as noiseless.
14. Noise Temperature

15. Noise Figure – Noise Factor for Passive Elements

16. Review of Noise Factor – Noise Figure – Temperature
17. Cascaded Network
A receiver system usually consists of a number of passive or active elements connected in series. A typical receiver block diagram is shown below, with an example.
In order to determine the (S/N) at the input, the overall receiver noise figure or noise temperature must be determined. In order to do this, all the noise must be referred to the same point in the receiver, for example to A, the feeder input, or B, the input to the first amplifier.
Te or Ne is the noise referred to the input.
18. System Noise Figure
Assume that a system comprises the elements shown below. Assume that these are now cascaded and connected to an aerial at the input, with N_IN = N_ae from the aerial.

Now,

    N_OUT = G3 (N_IN3 + Ne3) = G3 N_IN3 + G3 (F3 − 1) N_IN

since Ne = (F − 1) N_IN. Similarly,

    N_IN3 = G2 (N_IN2 + Ne2) = G2 N_IN2 + G2 (F2 − 1) N_IN

    N_IN2 = G1 (N_ae + (F1 − 1) N_IN)
18. System Noise Figure (Cont’d)
    N_OUT = G3 G2 G1 N_ae + G3 G2 G1 (F1 − 1) N_IN + G3 G2 (F2 − 1) N_IN + G3 (F3 − 1) N_IN

The overall system noise factor is

    Fsys = N_OUT / (G N_IN) = N_OUT / (G1 G2 G3 N_ae)

    Fsys = 1 + (F1 − 1) N_IN / N_ae + (F2 − 1) N_IN / (G1 N_ae) + (F3 − 1) N_IN / (G1 G2 N_ae)

With N_IN = N_ae, this gives in general

    F = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 G2) + … + (Fn − 1)/(G1 G2 … G(n−1))

This equation is called the Friis formula.
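The Friis formula translates directly into code. A minimal sketch; the stage noise factors and gains in the example are illustrative assumptions, not values from the notes:

```python
def friis_noise_factor(stages):
    """Overall noise factor of cascaded stages by the Friis formula:
    F = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ...
    stages: list of (F, G) pairs, linear (not dB) noise factor and gain."""
    F_sys, gain_product = 0.0, 1.0
    for i, (F, G) in enumerate(stages):
        F_sys += F if i == 0 else (F - 1) / gain_product
        gain_product *= G
    return F_sys

# Illustrative 3-stage receiver: low-noise amp, lossy mixer, IF amplifier
stages = [(1.3, 100), (10.0, 0.5), (5.0, 1000)]
F = friis_noise_factor(stages)
# F = 1.3 + 9/100 + 4/(100*0.5) = 1.47; the first stage dominates
print(F)
```

As the formula shows, each later stage's contribution is divided by the gain ahead of it, which is why a low-noise, high-gain first stage dominates the system noise figure.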
19. System Noise Temperature
20. Algebraic Representation of Noise
Phasor Representation of Signal and Noise
The general carrier signal Vc cos(ωc t) may be represented as a phasor at any instant in time, as shown below.
If we now consider a carrier with a noise voltage of "peak" value Vn superimposed, we may represent this as shown.
Both Vn and φn are random variables; the above phasor diagram represents a snapshot at some instant in time.
20. Algebraic Representation of Noise (Cont’d)
We may draw, for a single instant, the phasor with noise resolved into two components, which are:
a) x(t), in phase with the carrier:  x(t) = Vn cos φn
b) y(t), in quadrature with the carrier:  y(t) = Vn sin φn
20. Algebraic Representation of Noise (Cont’d)
Considering the general phasor representation below:
20. Algebraic Representation of Noise (Cont’d)
From the diagram,

    θ = tan⁻¹ [ Vn sin φn / (Vc + Vn cos φn) ]

    θ = tan⁻¹ [ (Vn/Vc) sin φn / (1 + (Vn/Vc) cos φn) ]
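The exact phase expression can be compared with the small-noise approximation θ ≈ (Vn/Vc) sin φn. A quick numerical check; the amplitude and angle values are illustrative:

```python
import math

def resultant_phase(Vc, Vn, phi_n):
    """Phase of the carrier-plus-noise phasor:
    theta = arctan( Vn*sin(phi_n) / (Vc + Vn*cos(phi_n)) )."""
    return math.atan2(Vn * math.sin(phi_n), Vc + Vn * math.cos(phi_n))

# For Vn << Vc the phase deviation is approximately (Vn/Vc)*sin(phi_n)
Vc, Vn, phi_n = 1.0, 0.01, math.pi / 4
theta = resultant_phase(Vc, Vn, phi_n)
approx = (Vn / Vc) * math.sin(phi_n)
print(abs(theta - approx) < 1e-4)  # True
```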
21. Additive White Gaussian Noise
Additive
Noise is usually additive, in that it adds to the information bearing signal. A model of the received signal with additive noise is shown below.
White
White noise: p₀(f) = constant, i.e. the power spectral density is flat.
Gaussian
We generally assume that noise voltage amplitudes have a Gaussian or normal distribution.
School of Electrical, Electronics and
Computer Engineering
University of Newcastle-upon-Tyne
Error Control Coding
Prof. Rolando Carrasco
Lecture Notes
University of Newcastle-upon-Tyne
2005
Error Control Coding
• In digital communication, errors occur due to noise.
• Bit error rate p = (number of errors in N bits) / N, for large N (N → ∞)
• Error rates typically range from 10⁻¹ to 10⁻⁵ or better.
• In order to counteract the effect of errors, Error Control Coding is used:
a) Detect Errors – Error Detection
b) Correct Errors – Error Correction
Channel Coding in Communication

Automatic Repeat Request (ARQ)

Automatic Repeat Request (ARQ) (Cont'd)

Forward Error Correction (FEC)
Block Codes
• A block code is a coding technique which generates C check bits for M message bits to give a stand-alone block of M + C = N bits.
• The code rate is given by

    Rate = M / (M + C) = M / N

• A single parity bit (C = 1 bit) applied to a block of 7 bits gives a code rate

    Rate = 7 / (7 + 1) = 7/8
Block Codes (Cont’d)
• A (7,4) cyclic code has N = 7, M = 4, so the code rate is R = 4/7.
A repetition-m code, in which each bit of the message is transmitted m times and the receiver carries out a majority vote on each bit, has a code rate

    Rate = M / (m M) = 1/m
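The repetition-m code is simple enough to demonstrate end to end. A minimal sketch of the encoder and majority-vote decoder (the function names are mine, not from the notes):

```python
from collections import Counter

def repetition_encode(bits, m):
    """Repetition-m code: transmit each message bit m times (rate 1/m)."""
    return [b for bit in bits for b in [bit] * m]

def repetition_decode(received, m):
    """Majority vote over each group of m received bits."""
    return [Counter(received[i:i + m]).most_common(1)[0][0]
            for i in range(0, len(received), m)]

msg = [1, 0, 1, 1]
tx = repetition_encode(msg, 3)   # rate = 1/3
tx[1] ^= 1                       # corrupt one bit of the first group
print(repetition_decode(tx, 3) == msg)  # True: one error per group is corrected
```

Odd m avoids ties in the vote; with m = 3 any single error per group is corrected, at the cost of tripling the transmitted bits.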
Message Transfer
It is required to transfer the contents of Computer A to Computer B.
COMPUTER A
COMPUTER B
• Of the messages transferred to Computer B, some may be rejected (lost) and some will be accepted; accepted messages will be either true (successful transfer) or false.
• Obviously the requirement is for a high probability of successful transfer (ideally = 1), a low probability of false transfer (ideally = 0) and a low probability of lost messages.
Message Transfer (Cont’d)
Error control coding may be considered further in two
main ways
1. In terms of system performance, i.e. the probabilities of successful, false and lost message transfer. We need to know the code's ability to detect and correct errors (which depends on the Hamming distance).
2. In terms of the error control code itself, i.e. the structure, operation, characteristics and implementation of various types of codes.
System Performance
In order to determine system performance in terms of successful, false and lost message transfers it is necessary to know:
• the probability of error, or b.e.r., p
• the number of bits in the message block, N
• the ability of the code to detect/correct errors, usually expressed as a minimum Hamming distance dmin for the code

    P(R) = [ N! / ((N − R)! R!) ] p^R (1 − p)^(N−R)

This gives the probability of R errors in an N-bit block subject to a bit error rate p.
System Performance (Cont’d)
Hence, for an N-bit block we can determine the probability of no errors in the block (R = 0), i.e.
• An error-free block:

    P(0) = [ N! / ((N − 0)! 0!) ] p⁰ (1 − p)^(N−0) = (1 − p)^N

• The probability of 1 error in the block (R = 1):

    P(1) = [ N! / ((N − 1)! 1!) ] p¹ (1 − p)^(N−1) = N p (1 − p)^(N−1)

• The probability of 2 errors in the block (R = 2):

    P(2) = [ N! / ((N − 2)! 2!) ] p² (1 − p)^(N−2)
Minimum Hamming distance
• A parameter which indicates the worst case ability of the code to
detect /correct errors.
Let
    dmin = minimum Hamming distance
    l = number of bit errors detected
    t = number of bit errors corrected

    dmin = l + t + 1,  with t ≤ l

For example, suppose a code has dmin = 6. We have as options:
1) 6 = 5 + 0 + 1  {detect up to 5 errors, no correction}
2) 6 = 4 + 1 + 1  {detect up to 4 errors, correct 1 error}
3) 6 = 3 + 2 + 1  {detect up to 3 errors, correct 2 errors}
Minimum Hamming distance (Cont’d)
• For option 3, for example, if 4 or more errors occurred, these would not be detected, and these messages would be accepted but would be false messages.
• Fortunately, the higher the number of errors, the lower the probability that they will occur for reasonable values of p.
Message transfers are successful if no errors occur or if t errors occur which are corrected, i.e.

    Probability of success = P(0) + Σ_{i=1}^{t} P(i)

Message transfers are lost if up to l errors are detected which are not corrected, i.e.

    Probability of lost = P(t+1) + P(t+2) + … + P(l) = Σ_{i=t+1}^{l} P(i)
Minimum Hamming distance (Cont’d)
Message transfers are false if l + 1 or more errors occur:

    Probability of false = P(l+1) + P(l+2) + … + P(N) = Σ_{i=l+1}^{N} P(i)

Example
Using dmin = 6, option 2 (t = 1, l = 4):
Probability of successful transfer = P(0) + P(1)
Probability of lost messages = P(2) + P(3) + P(4)
Probability of false messages = P(5) + P(6) + … + P(N)
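The success / lost / false partition above can be computed directly from the binomial block-error formula. A sketch; the N = 16, p = 10⁻² figures echo the worked example later in the notes:

```python
from math import comb

def p_errors(N, p, R):
    """Probability of exactly R errors in an N-bit block with bit error rate p."""
    return comb(N, R) * p**R * (1 - p)**(N - R)

def transfer_probabilities(N, p, t, l):
    """Success / lost / false message-transfer probabilities for a code
    that corrects up to t errors and detects up to l errors (t <= l)."""
    success = sum(p_errors(N, p, i) for i in range(0, t + 1))
    lost = sum(p_errors(N, p, i) for i in range(t + 1, l + 1))
    false = sum(p_errors(N, p, i) for i in range(l + 1, N + 1))
    return success, lost, false

# dmin = 6 with t = 1, l = 4, on a 16-bit block at p = 1e-2
s, lo, f = transfer_probabilities(16, 1e-2, 1, 4)
print(round(s, 6))                    # 0.989067
print(abs(s + lo + f - 1.0) < 1e-12)  # True: the three outcomes are exhaustive
```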
Probability of Error
• Each bit has a probability of error p, i.e. the probability that a transmitted '0' is received as a '1' or a transmitted '1' is received as a '0'.
• This probability is called the single-bit error rate, or bit error rate (b.e.r.).
• For example, if p = 0.1, the probability that any single bit is in error is '1 in 10', or 0.1.
• If there were 5 consecutive bits in error, the probability that the 6th bit will be in error is still 0.1, i.e. it is independent of the previous bits in error.
Probability of Error (Cont’d)
Consider the typical message block below, made up of a synchronization bit pattern, address bits, data/information bits and error control coding bits.
• The first requirement for the receiver/decoder is to identify the synchronization pattern (SYNC) in the received bit stream; then the address and data bits etc. may be relatively easily extracted.
• Because of errors, the sync pattern may not be found exactly.
Probability of Error (Cont’d)
• Synchronization is required for error control coding (ECC) to be applied.
• When synchronization is achieved, the EC bits which apply to the ADD (address) and DATA bits need to be carefully chosen in order to achieve a specified performance.
• To clarify the synchronization and ECC requirements, it is necessary to understand block error rates.
• For example, what is the probability of three errors in a 16-bit block if the b.e.r. is p = 10⁻²?
Probability of Error (Cont’d)
Let N be the number of bits in a block. Consider an N = 3 block.
• Probability that a bit is in error = p (denote by Error, E)
• Probability that a bit is not in error = (1 − p) (denote by Good, G)
• An error-free block requires G G G, i.e. Good, Good and Good.
• Let R = the number of errors; in this case R = 0.
Hence we may write:
Probability of an error-free block = Probability that R = 0, or
P(R=0) = P(0) = P(Good, Good and Good)
Probability of Error (Cont’d)
• Since the probability of a good bit is (1 − p), and the bits are independent,

    P(0) = P(G and G and G) = (1 − p)(1 − p)(1 − p) = (1 − p)³

For 1 error in any position, the probability of one error P(R=1) = P(1):

    E G G  →  Prob(E and G and G)
    G E G  →  Prob(G and E and G)
    G G E  →  Prob(G and G and E)

    P(1) = p(1 − p)(1 − p) + (1 − p) p (1 − p) + (1 − p)(1 − p) p
    P(1) = 3 p (1 − p)²
Probability of Error (Cont’d)
For 2 errors in any combination, the probability of two errors P(R=2) = P(2):

    E E G  →  Prob(E and E and G)
    E G E  →  Prob(E and G and E)
    G E E  →  Prob(G and E and E)

    P(2) = p p (1 − p) + p (1 − p) p + (1 − p) p p
    P(2) = 3 p² (1 − p)

For 3 errors:

    E E E  →  Prob(E and E and E)

    P(3) = p p p = p³
Probability of Error (Cont’d)
In general, it may be shown that the probability of R errors in an N-bit block subject to a bit error rate p is

    P(R) = C(N,R) p^R (1 − p)^(N−R)

where

    C(N,R) = N! / ((N − R)! R!)

is the number of ways of getting R errors in N bits. In the expression for P(R):
(1 − p)^(N−R) is the probability of the (N − R) good bits,
p^R is the probability of the R bits in error,
C(N,R) is the number of ways of getting R errors in N bits,
and P(R) is the probability of R errors.
Probability of Error (Example 1)
An N = 8 bit block is received with a bit error rate p = 0.1. Determine the probability of an error-free block, of a block with 1 error, and of a block with 2 or more errors.

Prob. of error-free block, P(R=0) = P(0):

    P(0) = C(8,0) p⁰ (1 − p)^(8−0) = (1 − 0.1)⁸ = (0.9)⁸
    P(0) = 0.4304672

Prob. of 1 error, P(R=1) = P(1):

    P(1) = C(8,1) p¹ (1 − p)^(8−1) = 8 (0.1) (1 − 0.1)⁷
    P(1) = 0.3826375
Probability of Error (Example 1)
Prob. of two or more errors = P(2) + P(3) + … + P(8), i.e.

    Σ_{R=2}^{8} P(R)

It would be tedious to work this out term by term, but since

    Σ_{R=0}^{N} P(R) = 1,  we have  P(0) + P(1) + P(R ≥ 2) = 1

i.e.

    P(R ≥ 2) = 1 − (P(0) + P(1))
    P(R ≥ 2) = 1 − (0.4304672 + 0.3826375) = 0.1868953
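The arithmetic in this example is easy to confirm with Python's `math.comb`:

```python
from math import comb

N, p = 8, 0.1
P = [comb(N, R) * p**R * (1 - p)**(N - R) for R in range(N + 1)]

print(round(P[0], 7))             # 0.4304672  (error-free block)
print(round(P[1], 7))             # 0.3826375  (exactly one error)
print(round(1 - P[0] - P[1], 7))  # 0.1868953  (two or more errors)
```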
Probability of Error (Example 2)
A coin is tossed to give heads or tails. What is the probability of 5 heads in 5 throws?
Since the probability of a head is p = 0.5, the probability of a tail (1 − p) is also 0.5, and N = 5, then

Prob. of 5 heads:

    P(5) = C(5,5) p⁵ (1 − p)^(5−5) = C(5,5) (0.5)⁵
    P(5) = (0.5)⁵ = 3.125 × 10⁻²

Similarly, the probability of 3 heads in 5 throws (3 in any sequence) is

    P(3) = C(5,3) p³ (1 − p)^(5−3) = C(5,3) (0.5)³ (0.5)²
    P(3) = 0.3125
Synchronization
One method of synchronization is to compare the received bits with a
‘SYNC’ pattern at the receiver decoder.
In a general sense, synchronization will be:
• successful if the sync bits are received error-free, enabling an exact match
• lost if one or more errors occur.
Synchronization (Cont’d)
Let S denote the number of sync bits. To illustrate, let S = 4 bits and let the sync pattern be 0 1 1 0.

    Probability of successful sync:  Psucc = P(0) = (1 − p)^S
    Probability of lost sync:  Plost = 1 − P(0)
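A one-line check of the sync probabilities, using S = 4 and an illustrative b.e.r. of p = 0.01:

```python
def sync_probabilities(S, p):
    """Probability of successful vs lost synchronization for S sync bits,
    assuming sync succeeds only when all S bits arrive error-free."""
    p_succ = (1 - p) ** S
    return p_succ, 1 - p_succ

succ, lost = sync_probabilities(4, 0.01)
print(round(succ, 8))  # 0.96059601
```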
Error Detection and Correction
Given that the synchronization has been successful, the message may be extracted as shown below.

    Probability of successful transfer = Σ_{R=0}^{t} P(R)

where t is the number of errors the code can correct.
Error Detection and Correction (Cont’d)
A message, after synchronization, contains N = 16 bits, with a b.e.r. p = 10⁻². If the ECC can correct 1 error, determine the probability of successful message transfer.

    P(0) = (1 − p)^N = (1 − 0.01)¹⁶ = 0.851458

    P(1) = C(N,R) p^R (1 − p)^(N−R) = C(16,1) p¹ (1 − p)^(16−1)
    P(1) = 16 (0.01) (1 − 0.01)¹⁵ = 0.137609

    Psucc = Σ_{R=0}^{1} P(R) = P(0) + P(1) = 0.989067