Legendre-FLANN-based Nonlinear Channel Equalization in Wireless Communication System

J. C. Patra, W. C. Chin, P. K. Meher
School of Computer Engineering
Nanyang Technological University, Singapore
Email: {aspatra, y050045, aspkmeher}@ntu.edu.sg

G. Chakraborty
Faculty of Software and Information Science
Iwate Prefectural University, Japan
Email: goutam@soft.iwate-pu.ac.jp
Abstract—In this paper, we present the results of our study on the application of artificial neural networks (ANNs) to adaptive channel equalization in a digital communication system using a 4-quadrature amplitude modulation (QAM) signal constellation. We propose a novel single-layer Legendre functional-link ANN (L-FLANN) that uses Legendre polynomials to expand the input space into a higher dimension. A performance comparison was carried out with extensive computer simulations between different ANN-based equalizers, such as the radial basis function (RBF) network, the Chebyshev neural network (ChNN) and the proposed L-FLANN, along with a linear least mean square (LMS) finite impulse response (FIR) adaptive filter-based equalizer. The performance indicators include the mean square error (MSE), bit error rate (BER) and computational complexity of the different architectures, as well as the eye patterns of the various equalizers. It is shown that the L-FLANN exhibits excellent results in terms of the MSE, BER and the computational complexity of the networks.
Keywords—Legendre functional link artificial neural network, nonlinear channel equalization.
I. INTRODUCTION
In wireless communication systems, transmission bandwidth is one of the most precious resources. To make efficient utilization of this resource, signals are usually transmitted
through band-limited channels. Some of the inherent properties
of the frequency-selective channels are that they are nonlinear
and dispersive, and possess long delay spread. Due to these
properties, the channel introduces inter-symbol interference
(ISI) which reduces the data transmission rate. If the duration
of the transmitted pulse is much smaller than the multipath
delay spread, then each of the multipath components cannot be
resolved in time at the receiver. Therefore, the currently
transmitted pulse interferes with the previously and
subsequently transmitted pulses, resulting in undesirable ISI
and irreducible error floors at the receiver end of digital
communication systems [1].
To mitigate the adverse effects of nonlinear channels,
usually channel equalization is carried out in the digital system.
Equalization refers to a signal processing technique used at the
front-end of the receiver to combat ISI in dispersive channels in
the presence of additive noise. Traditionally, linear adaptive
filters (AFs) are used to implement the equalizer. However, the
performance of AF equalizers severely deteriorates when the channel is nonlinear and highly dispersive [2]. Therefore, in
order to improve the performance of equalizers in nonlinear
channels, new equalizer structures are needed.
Artificial neural networks (ANNs) can perform complex mappings between their input and output spaces and are capable of
forming complex decision regions with nonlinear decision
boundaries [3]. Further, because of the nonlinear characteristics of ANNs, networks of different architectures have found successful application in the channel equalization problem. Siu et
al. [4] proposed a multilayer perceptron (MLP) structure for
channel equalization with decision feedback and have shown
that the performance of this network is superior to that of a
linear equalizer trained with LMS algorithm. A radial basis
function (RBF)-based equalizer structure with satisfactory
performance has been reported [5]-[6].
The functional link-ANN (FLANN) was first introduced by
Pao [7]. In FLANN, the original input pattern undergoes a
pattern enhancement by using some nonlinear functions. Then
the enhanced patterns are applied to a single-layer perceptron.
Due to the absence of hidden layers, the computational
complexity of FLANN is drastically reduced. In order to reduce
the computational complexity, efficient functional-link ANN
(FLANN)-based equalizer structures have been proposed [8], [9]. The functional expansion in these networks was carried out using orthogonal trigonometric functions. Recently, a reduced-decision feedback FLANN channel equalizer has also been proposed [10]. Another computationally efficient network, i.e., the Chebyshev
neural network (ChNN) has been proposed for pattern
classification [11], functional approximation [12], nonlinear
dynamic system identification [13]-[14] and nonlinear channel
equalization [15]. In these networks the expansion of input
pattern is carried out using Chebyshev polynomials. ChNN
provides similar, and in some cases, better performance than an
MLP network but with much reduced computational load.
Similar to the ChNN, the Legendre function-based neural network, i.e., the Legendre functional-link ANN (L-FLANN), provides a computational advantage while promising better performance. In this paper, we propose a novel L-FLANN-based nonlinear channel equalization technique. By taking several channels and different nonlinearities, with extensive simulations we have shown the effectiveness of the L-FLANN-based equalizer. We have shown that the proposed equalizer performs much better than the RBF-based and linear FIR-based equalizers, while its performance is similar to that of the ChNN [15]. We have compared the performance of the four
equalizers under different noisy nonlinear channels in terms of
computational complexity, bit error rate (BER), mean square
error (MSE) and eye patterns, which indicate the true performance of an equalizer.
II. DIGITAL COMMUNICATION CHANNEL EQUALIZATION
Consider the system in Fig. 1 which depicts a commonly
used wireless digital communication channel with an equalizer
placed right at the front-end of the receiver. The transmitted
symbols, represented by t(k), where k denotes the discrete time index, traverse through the channel, which can be linear or nonlinear in nature.
Figure 1. A typical wireless digital communication system with equalizer.
The channel output at time instant k, which is a convolution between the transmitter filter, the transmission medium and the receiver filter, can be modeled mathematically with an FIR filter and is given by:

a(k) = \sum_{i=0}^{N_f - 1} h(i)\, t(k-i),     (1)

where h(i) represents the channel tap values and N_f is the length of the FIR channel.
The “Nonlinearity” block represents the type of
nonlinearity present in the channel that may cause distortion of
the transmitted symbols and thus can be expressed as:
b(k) = \Psi\{t(k),\, t(k-1),\, t(k-2),\, \ldots,\, t(k-N_f+1);\; h(0),\, h(1),\, h(2),\, \ldots,\, h(N_f-1)\},     (2)

where \Psi(\cdot) represents the nonlinear function present in the channel block.
Since, practically, noise prevails in all channels, the channel output after passing through the nonlinearity block, b(k), is corrupted with noise. Assuming the noise to be additive white Gaussian noise (AWGN), represented by q(k), with variance \sigma^2, the channel output with the noise is given by:

r(k) = b(k) + q(k),     (3)

which is the corrupted signal received by the receiver.

At the receiver, the received signal is first passed into the equalizer to reconstruct the transmitted symbols based on the noisy channel observations r(k). The output of the adaptive equalizer, y(k), is then compared with a delayed version of the desired signal d(k) to produce an error e(k). This error is used to update the weights of the network according to some learning algorithm [3].
III. ANN STRUCTURES FOR CHANNEL EQUALIZATION

In this section, we present the three different ANN structures, namely the L-FLANN, the ChNN and the RBF, used in this study. A brief illustration of each of the neural networks is given below.

A. The Legendre-FLANN

The L-FLANN structure is shown in Fig. 2. The input pattern is expanded into a nonlinear high-dimensional space using Legendre polynomials, and the enhanced pattern is then applied to a single-layer perceptron network and used for the channel equalization process. The network is trained using the popular backpropagation (BP) algorithm [3]. The Legendre polynomials are denoted by P_n(x), where n = 0, 1, 2, \ldots is the order of the polynomial and -1 \le x \le 1. They are a set of orthogonal polynomials defined as the solution to the differential equation:

\frac{d}{dx}\left[(1 - x^2)\, \frac{dP_n(x)}{dx}\right] + n(n+1)\, P_n(x) = 0.     (4)
The first few Legendre polynomials are as follows:
P_0(x) = 1
P_1(x) = x
P_2(x) = \frac{1}{2}(3x^2 - 1)
P_3(x) = \frac{1}{2}(5x^3 - 3x)
P_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3).     (5)
The higher order Legendre polynomials can be derived
from the following recursion formula:
P_{n+1}(x) = \frac{1}{n+1}\left[(2n+1)\, x\, P_n(x) - n\, P_{n-1}(x)\right].     (6)
Since the input pattern undergoes a nonlinear enhancement
process by the use of the Legendre polynomials, there is no need for a hidden layer as in an MLP. Note that due to
pattern enhancement, the dimension of original pattern is
increased to a higher dimension. It is expected that the patterns
that are nonlinearly separable in the original pattern space will be linearly separable in the high-dimensional pattern space.
Consider an input pattern given by:

X = [x_1, x_2].     (7)
Using the Legendre polynomials, this pattern can be
enhanced as:
X' = [\, 1 \;\; P_1(x_1) \;\; P_2(x_1) \;\ldots\; P_1(x_2) \;\; P_2(x_2) \;\ldots\, ].     (8)
This enhanced pattern is applied to a single-layer
perceptron as shown in Fig. 2.
Figure 2. Structure of an L-FLANN.
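As an illustration of the expansion in (6)-(8), a minimal sketch follows. The exact composition of the paper's 18-dimensional expanded vector (which polynomial orders are kept per input, and whether the bias term is counted) is not spelled out, so the grouping below is an assumption.

```python
import numpy as np

def legendre_polys(x, order):
    """Return [P_0(x), ..., P_order(x)] via the recursion of eq. (6)."""
    P = [np.ones_like(x), x]
    for n in range(1, order):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return P[:order + 1]

def legendre_expand(X, order):
    """Enhance X = [x_1, ..., x_M] as in eq. (8): a bias term followed by
    P_1 ... P_order of every input component, grouped per component."""
    X = np.asarray(X, dtype=float)
    P = legendre_polys(X, order)                  # P[n] applied elementwise
    per_input = np.stack(P[1:], axis=-1).ravel()  # [P_1(x_1)..P_k(x_1), P_1(x_2)..]
    return np.concatenate([[1.0], per_input])

# [1, P_1(x_1), P_2(x_1), P_1(x_2), P_2(x_2)] for the pattern of eq. (7)
print(legendre_expand([0.3, -0.7], order=2))
```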
B. Chebyshev Neural Network - ChNN
The ChNN employs Chebyshev polynomials, which are a set of orthogonal polynomials defined on the interval (-1, 1) as the solution to the Chebyshev differential equation, and are denoted by T_n(x). The network structure is similar to that of
the L-FLANN except that it uses Chebyshev polynomials in the
functional expansion block. The Chebyshev polynomials for
this defined range of values can be found with the recursive
formula which is given by:
T_{n+1}(x) = 2\, x\, T_n(x) - T_{n-1}(x).     (9)
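The only change relative to the Legendre expansion is the recursion. A small sketch of (9), assuming the standard initial terms T_0(x) = 1 and T_1(x) = x (the paper does not list them), with a sanity check against the closed form T_n(x) = cos(n arccos x):

```python
import numpy as np

def chebyshev_polys(x, order):
    """Return [T_0(x), ..., T_order(x)] via the recursion of eq. (9)."""
    T = [np.ones_like(x), x]
    for n in range(1, order):
        T.append(2 * x * T[n] - T[n - 1])
    return T[:order + 1]

# Closed form holds on (-1, 1), the interval on which the ChNN is defined
x = np.linspace(-0.99, 0.99, 7)
assert np.allclose(chebyshev_polys(x, 4)[4], np.cos(4 * np.arccos(x)))
```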
C. The Radial Basis Function - RBF
Fig. 3 shows the structure of an RBF network. Let the network output be the weighted sum of the basis functions {φ(·)} of N neurons. Therefore, the output can be represented by:

y = \sum_{i=1}^{N} w_i\, \phi_i(\mathbf{x}),     (10)
where \mathbf{x} is the input vector and w_i are the weight values.
RBF networks have become increasingly popular due to their simple structure and efficient learning; the RBF network is a universal approximator. The network structure consists of two layers, each performing a specific task. The input contains the source symbols. In the hidden layer, the input space is expanded into a high-dimensional space with a set of basis functions by applying nonlinear transformations. The output layer forms a linear combination of the outputs generated by the basis functions.

Figure 3. Structure of an RBF network.

The basis function involves a Gaussian kernel that uses the Euclidean distance between its own reference, called the centre, and the network input, and is given by:

\phi(\mathbf{x}) = \exp\left(-\|\mathbf{x} - \mathbf{c}\|^2 / 2\sigma^2\right),     (11)

where \mathbf{x} is the input vector, \mathbf{c} is the centre of the node and σ is the spread parameter. Training of the RBF network involves the selection of suitable centres of the nodes and finding the weights of the output layer. Usually, the selection of the centres is carried out with the K-means algorithm. Thereafter, the training patterns and their corresponding target patterns are applied to the network and, using some algorithm (e.g., least squares (LS) or orthogonal least squares (OLS)), the weights are determined. It has been shown that RBF-based equalizers are quite effective in channel equalization and provide near-optimal solutions [5]-[6].
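A minimal sketch of the RBF forward pass of (10)-(11) follows, with the output weights fitted by plain least squares, one of the options mentioned above; centre selection (e.g., by K-means) is left out, and all names and sizes are illustrative.

```python
import numpy as np

def rbf_design_matrix(X, centres, sigma):
    """phi_i(x) of eq. (11) for every pattern (rows of X) and centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rbf_fit_weights(X, targets, centres, sigma):
    """Least-squares fit of the output-layer weights (the LS option)."""
    phi = rbf_design_matrix(X, centres, sigma)
    w, *_ = np.linalg.lstsq(phi, targets, rcond=None)
    return w

def rbf_forward(X, centres, sigma, w):
    """y = sum_i w_i phi_i(x), eq. (10)."""
    return rbf_design_matrix(X, centres, sigma) @ w
```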
IV. COMPUTATIONAL COMPLEXITY
In this section, we compare the computational complexity
among the different ANN structures of the RBF, ChNN and L-FLANN. Let us consider an RBF structure with M inputs and N hidden units. As for the ChNN and L-FLANN networks, the
number of nodes in the input and output layers are represented
by D and K, respectively. The ChNN and L-FLANN
architectures are trained using the BP algorithm. In the case of
the RBF, it will require an additional division operation since
the squared distance between the centre and the network input
vector is divided by a width parameter [5], [15]. The respective
computational complexities are shown in Table I.
TABLE I. CALCULATION OF COMPUTATIONAL COMPLEXITY

Operation        RBF            ChNN      L-FLANN
Addition (+)     2MN+M+N+1      3DK+3K    3DK+3K
Multiply (*)     NM+2N+M+1      6DK+6K    6DK+6K
Division (/)     M+N            -         -
exp(.)           N              -         -
tanh(.)          -              K         K
Since the ChNN and L-FLANN networks do not have any hidden layer, as opposed to the RBF, the computational complexity of these networks is considerably lower than that of the RBF neural network.
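As a worked example, the expressions of Table I can be evaluated for the architectures of Table II (Section V): M = 8 inputs and N = 10 hidden nodes for the RBF; D = 18 nonlinear dimensions and K = 2 output nodes for the ChNN and L-FLANN.

```python
# Per-iteration operation counts from Table I, with the network sizes
# of Table II: RBF M = 8, N = 10; ChNN/L-FLANN D = 18, K = 2.
M, N = 8, 10
D, K = 18, 2

rbf   = {"add": 2*M*N + M + N + 1, "mul": N*M + 2*N + M + 1,
         "div": M + N, "exp": N}
flann = {"add": 3*D*K + 3*K, "mul": 6*D*K + 6*K, "tanh": K}

print(rbf)    # {'add': 179, 'mul': 109, 'div': 18, 'exp': 10}
print(flann)  # {'add': 114, 'mul': 228, 'tanh': 2}
```

Note that the single-layer networks trade a larger multiplication count for the complete absence of the costly division and exponential operations required by the RBF.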
V. SIMULATION STUDIES
In order to study the channel equalization problem as
depicted in Fig. 1, in-depth investigations with extensive
simulations were initiated using a linear FIR equalizer that is
trained with the LMS algorithm as well as the three ANN
architectures as discussed in the previous sections. For the
purpose of our study, we have used the following channel impulse response [2]:
h(i) = \begin{cases} \frac{1}{2}\left\{1 + \cos\left[\frac{2\pi}{\Lambda}(i-2)\right]\right\}, & i = 1, 2, 3 \\ 0, & \text{otherwise.} \end{cases}     (12)
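A short sketch evaluating (12) for the four values of Λ used in the study is given below; the paper then normalizes these responses to obtain (13), but since the normalization convention is not stated, only the raw taps of (12) are computed here.

```python
import numpy as np

def raised_cosine_taps(lam):
    """Channel impulse response of eq. (12): h(i) for i = 1, 2, 3."""
    i = np.arange(1, 4)
    return 0.5 * (1 + np.cos(2 * np.pi * (i - 2) / lam))

for lam in (2.9, 3.1, 3.3, 3.5):      # the four EVR settings used below
    print(lam, np.round(raised_cosine_taps(lam), 4))
```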
Together with a linear equalizer of order eight trained with the LMS algorithm, all four network architectures were simulated. The learning rate and momentum rate for the linear equalizer were both set to 0.01, whereas for the ChNN and L-FLANN networks, these parameters were set to 0.7 and 0.5, respectively.
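For reference, a minimal complex-valued LMS update for the order-eight linear FIR equalizer might look as follows; the momentum term mentioned above is omitted for brevity, and the pairing of r(k) with the delayed desired symbol d(k) follows Fig. 1.

```python
import numpy as np

def lms_equalize(r, d, order=8, mu=0.01):
    """Train a linear FIR equalizer on received r(k) against desired d(k)."""
    w = np.zeros(order + 1, dtype=complex)
    e = np.zeros(len(r), dtype=complex)
    for k in range(order, len(r)):
        x = r[k - order:k + 1][::-1]       # tap-delay-line contents
        y = np.vdot(w, x)                  # equalizer output y(k) = w^H x
        e[k] = d[k] - y                    # error e(k)
        w += mu * x * np.conj(e[k])        # complex LMS weight update
    return w, e
```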
In our study, we have used four different channels with the
normalized impulse response as follows:
CH = 1: 0.209 + 0.995 z^{-1} + 0.209 z^{-2}
CH = 2: 0.260 + 0.930 z^{-1} + 0.260 z^{-2}
CH = 3: 0.304 + 0.903 z^{-1} + 0.304 z^{-2}
CH = 4: 0.341 + 0.876 z^{-1} + 0.341 z^{-2}     (13)
Since our input symbols are drawn from the 4-QAM signal constellation, the transmitted message will be of the form {±1 ± j1}, consisting of both real and imaginary components of the signal. In addition, AWGN is added to the channel output to simulate the actual conditions of the channel. The variable Λ was varied from 2.9 to 3.5 in increments of 0.2 in order to investigate the performance of each of the equalizers under different eigenvalue ratio (EVR) conditions of the channel. The variations of Λ produce EVR values of 6.08, 11.12, 21.71 and 46.82, respectively [2].
Extensive simulations were carried out with different test cases on the various parameters of the equalizers under test. These include the spread of the RBF, the learning rate, the momentum rate, the number of expansion levels used in the functional block of the orthogonal networks, etc.
The architectures of the three networks are shown in Table II.
These four channels correspond to values of the parameter Λ of 2.9, 3.1, 3.3 and 3.5 for channels 1 to 4, respectively.
To further analyze the effect of the degree of nonlinearity on
the performance of the equalizers, three different nonlinear
channel models with the following types of nonlinearities are
chosen:
NL = 0: b(k) = a(k)
NL = 1: b(k) = tanh(a(k))
NL = 2: b(k) = a(k) + 0.2\, a^2(k) - 0.1\, a^3(k)
NL = 3: b(k) = a(k) + 0.2\, a^2(k) - 0.1\, a^3(k) + 0.5\, \cos(\pi\, a(k)).     (14)
Note that NL=0 represents a purely linear channel model, since the output of the nonlinearity block is the same as its input (refer to Fig. 1). NL=1 may occur in systems impaired by the distortion caused by the saturation of amplifiers in the transceivers. The other NL channel models are arbitrarily selected.
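The four channel models of (14) can be written directly as functions of a(k); a small sketch follows (the nonlinearities are applied elementwise, and complex-valued a(k) is assumed to pass through the same expressions, since the paper does not state otherwise).

```python
import numpy as np

NL_MODELS = {
    0: lambda a: a,                                          # purely linear
    1: np.tanh,                                              # amplifier saturation
    2: lambda a: a + 0.2 * a**2 - 0.1 * a**3,
    3: lambda a: a + 0.2 * a**2 - 0.1 * a**3 + 0.5 * np.cos(np.pi * a),
}

a = np.array([0.5, -0.25, 1.0])
for nl, psi in NL_MODELS.items():
    print(f"NL={nl}:", np.round(psi(a), 3))
```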
TABLE II. NEURAL NETWORK ARCHITECTURES

ANN          No. of input   No. of hidden nodes (N)/    No. of output
Structure    dimensions     nonlinear dimensions (D)    nodes (K)
RBF          8              10 (N)                      2
ChNN         8              18 (D)                      2
L-FLANN      8              18 (D)                      2
For the RBF equalizer, we have utilized the newrb function available in MATLAB for the simulation. The spread parameter was set at 50. As for the ChNN and L-FLANN networks, their input vectors were expanded into an 18-dimensional space with the Chebyshev and Legendre polynomials, respectively, and the BP algorithm was used for updating the weights.
VI. PERFORMANCE EVALUATION
We evaluated the performance of the three ANN structures, along with a linear FIR LMS-based equalizer, using the four channel models and the four NL models.
A. The Computational Complexity
The training times for one iteration of each network were measured on a 1.86 GHz Athlon XP computer. The training times recorded, in seconds, were 8.0, 6.5 and 6.4 for the RBF, ChNN and L-FLANN, respectively.
B. The Convergence Characteristics
Fig. 4 shows the convergence characteristics of the various equalizers for Channel 3 at SNR = 15 dB. Clearly, it can
be seen that all the three neural network architectures
outperformed the linear-based equalizer. Also, the ChNN and
L-FLANN achieved a much faster convergence as compared to
the RBF network.
Next, a closer comparison is made between the ChNN and the L-FLANN. At NL=0, the ChNN performed slightly better than the L-FLANN. But as the channel complexity increases, the L-FLANN performance is comparable to that of the ChNN. This shows that the L-FLANN is also able to work well under harsh channel conditions.
Figure 4. Convergence characteristics of the four architectures for CH=3 using different nonlinear model settings at SNR=15dB. (a) NL=0. (b) NL=1. (c) NL=2. (d) NL=3.
Figure 5. BER performance of the four architectures for CH=2 with varying SNR values from 10 to 18 dB. (a) NL=0. (b) NL=1. (c) NL=2. (d) NL=3.
C. BER Performance
The BER represents the ratio of the number of bits received in error to the total number of bits sent in a given transmission. It serves
as an indication of how often the data has to be retransmitted
due to an error; hence it determines the true performance of the
various equalizers. Computation and comparison of the BER
was carried out with the 4-QAM signal constellation of the
linear-based equalizer and the three neural network
architectures.
In order to observe the effect that variations of the SNR have on the BER, graphs were plotted with MATLAB for CH=2 across all the linear and nonlinear channel models, as
shown in Fig. 5. From the plots, we can see that the neural
networks performed far more efficiently than the LMS
equalizer with increased SNR. On a closer observation, we also
note that the L-FLANN performed slightly better than the other
two artificial neural networks. This again reaffirmed the
efficiency of L-FLANN for use in the channel equalization
problem.
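For the 4-QAM constellation used here, the BER can be computed by slicing each equalizer output to the nearest {±1 ± j1} symbol and comparing the in-phase and quadrature bit decisions with the transmitted ones. A hypothetical helper, assuming one independent bit per rail:

```python
import numpy as np

def ber_4qam(y, t):
    """Bit error rate: slice y(k) to {+-1 +- j1}, compare bits with t(k)."""
    errors = (np.sign(y.real) != np.sign(t.real)).sum() \
           + (np.sign(y.imag) != np.sign(t.imag)).sum()
    return errors / (2 * len(t))   # two bits per 4-QAM symbol
```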
D. The Eye Patterns
The eye patterns, or the equalizer output values, determine
how well the equalizers can perform the equalization process.
In obtaining the eye pattern, each equalizer undergoes 5000 iterations during the training phase and is then tested with 1000 data samples, where the output is captured. Fig. 6 shows the eye pattern for CH=1 at NL=1 with SNR=15dB. It can be observed that the three neural networks showed similar classification of the output. This further justifies the use of the L-FLANN architecture for channel equalization where high precision in classification is needed. Similar observations were also made for the different channels under different nonlinearity settings.
Figure 6. Eye patterns of the four architectures for CH=1 at NL=1 with SNR=15dB with 1000 data symbols. (a) LMS equalizer. (b) RBF equalizer. (c) ChNN equalizer. (d) L-FLANN equalizer.
VII. CONCLUSIONS
Three different artificial neural network structures have been applied to solve the adaptive channel equalization problem with a 4-QAM signal constellation. The proposed L-FLANN, in particular, employs a novel functional expansion model to cast the input vectors into a higher-dimensional space with a single-layer
perceptron network. Through simulation results, we have shown that, in terms of MSE and BER, the L-FLANN outperforms the traditional linear equalizer and the RBF network, with comparable results obtained with the ChNN. In addition, the L-FLANN offers significantly lower
computational complexity than the RBF due to the fact that
RBF is a two-layer structure unlike the single layer structure of
the L-FLANN. With such an advantage, L-FLANN may find
its way into many applications in the areas of signal processing.
REFERENCES

[1] S. Haykin, Communication Systems, 4th ed., Wiley, New York, USA, 2001.
[2] S. Haykin, Adaptive Filter Theory, 2nd ed., Prentice Hall, Englewood Cliffs, NJ, 1991.
[3] S. Haykin, Neural Networks, 2nd ed., Prentice Hall, Upper Saddle River, NJ, USA, 1999.
[4] S. Siu, G. J. Gibson, and C. F. N. Cowan, "Decision feedback equalization using neural network structures and performance comparison with standard architecture," IEE Proceedings I - Communications, Speech and Vision, vol. 137, part 1, pp. 221-225, Aug. 1990.
[5] S. Chen, B. Mulgrew, and P. M. Grant, "A clustering technique for digital communications channel equalization using radial basis function networks," IEEE Trans. Neural Networks, vol. 4, no. 4, pp. 570-579, Jul. 1993.
[6] P. C. Kumar, P. Saratchandran, and N. Sundararajan, "Minimal radial basis function neural networks for nonlinear channel equalisation," IEE Proceedings - Vision, Image and Signal Processing, vol. 147, no. 5, pp. 428-435, Oct. 2000.
[7] Y.-H. Pao, Adaptive Pattern Recognition and Neural Networks. Reading, MA: Addison-Wesley, 1989.
[8] J. C. Patra and R. N. Pal, "A functional link artificial neural network for adaptive channel equalization," Signal Processing, vol. 43, pp. 181-195, May 1995.
[9] J. C. Patra, R. N. Pal, R. Baliarsingh, and G. Panda, "Nonlinear channel equalization for QAM signal constellation using artificial neural networks," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 29, no. 2, pp. 262-271, Apr. 1999.
[10] W. D. Weng and C. T. Yen, "Reduced-decision feedback FLANN nonlinear channel equalizer for digital communication systems," IEE Proceedings - Communications, vol. 151, no. 4, pp. 305-311, Aug. 2004.
[11] A. Namatane and N. Uema, "Pattern classification with Chebyshev neural network," Int. J. Neural Networks, vol. 3, pp. 23-31, Mar. 1992.
[12] T. T. Lee and J. T. Teng, "The Chebyshev-polynomials-based unified model neural network for function approximation," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 28, pp. 925-935, Dec. 1998.
[13] J. C. Patra and A. C. Kot, "Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 32, pp. 505-511, Aug. 2002.
[14] S. Purwar, I. N. Kar, and A. N. Jha, "On-line system identification using Chebyshev neural networks," Applied Soft Computing, vol. 7, no. 1, pp. 364-372, Jan. 2007.
[15] J. C. Patra, W. B. Poh, N. S. Chaudhari, and A. Das, "Nonlinear channel equalization with QAM signal using Chebyshev artificial neural network," in Proc. IEEE Int. Joint Conf. Neural Networks (IJCNN 2005), Montreal, Canada, pp. 3214-3219, Jul.-Aug. 2005.