Research Journal of Applied Sciences, Engineering and Technology 6(10): 1794-1798, 2013
ISSN: 2040-7459; e-ISSN: 2040-7467
© Maxwell Scientific Organization, 2013
Submitted: November 12, 2012
Accepted: January 01, 2013
Published: July 20, 2013
Improved Fast ICA Algorithm Using Eighth-Order Newton’s Method
Tahir Ahmad, Norma Alias, Mahdi Ghanbari and Mohammad Askaripour
Ibnu Sina Institute for Fundamental Sciences, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
Abstract: Independent Component Analysis (ICA) is a computational method for solving the Blind Source Separation (BSS) problem. In this study, an improved FastICA algorithm based on eighth-order Newton's method is proposed to solve BSS problems. The eighth-order Newton's method for finding solutions of nonlinear equations is much faster than the ordinary Newton iteration. The improved FastICA algorithm is applied to separate sound signals. The simulation results show that the method requires fewer iterations and converges faster.
Keywords: FastICA, Independent Component Analysis (ICA), Newton’s method
INTRODUCTION
Independent Component Analysis (ICA), or Blind Source Separation (BSS), is a signal processing method that extracts statistically independent components from a set of measured signals (Hyvarinen, 1998; Lee et al., 1999; Ye et al., 2006). There are various ICA approaches, including maximum likelihood estimation (Shi et al., 2004; Cardoso, 1998), mutual information minimization (Amari et al., 1996; Bell and Sejnowski, 1995; Pham, 2004) and negentropy maximization (Hyvarinen et al., 2001). The FastICA algorithm was presented by Hyvarinen (1999). A Newton-type algorithm was used in FastICA; it is a fixed-point iteration that maximizes non-Gaussianity as a measure of statistical independence. In the FastICA algorithm, the Newton iteration with second-order convergence is employed in a fixed-point algorithm for estimating the separation matrix.
In this study, we first briefly introduce ICA with the kurtosis and negentropy contrast functions. Then, an improved FastICA based on an eighth-order convergent Newton's method (Li et al., 2011) is proposed. Using this new method, the update equations inside the FastICA algorithm are developed so that the separation matrix is estimated with fewer iterations. The new algorithm is then applied to separate sound signals. Finally, its performance is compared with that of the FastICA algorithm in terms of performance index and convergence rate.
INDEPENDENT COMPONENT ANALYSIS
AND FASTICA
Independent Component Analysis (ICA) has become a powerful method for signal processing in the last decade (Cichocki and Amari, 2004; Hyvarinen et al., 2001; Choi et al., 2005). The basic linear mixture model (without noise) for ICA is as follows:

$$X = AS \quad (1)$$
where S = (S_1, …, S_m)^T denotes the original independent sources, X = (X_1, …, X_n)^T denotes the mixtures of the original sources and A is the unknown n × m constant mixing matrix. ICA aims to recover or estimate the independent components (ICs) from their mixtures. An estimated IC is denoted by y and a separation vector is denoted by w, such that:

$$y = w^T X = w^T AS \quad (2)$$
The signal mixtures tend to have Gaussian probability density functions (pdfs), while the source signals have non-Gaussian pdfs (Stone, 1999). By the central limit theorem (CLT), a mixture of non-Gaussian signals is closer to Gaussian than the sources themselves (Stone, 1999). Given such a set of mixture signals X = (X_1, …, X_n)^T, the task of ICA is therefore to recover the source signals by finding a separation vector w that extracts a signal y which is maximally non-Gaussian. One of the classic measures of the non-Gaussianity of a random variable is kurtosis. We denote the kurtosis of y by Kurt(y); it is defined by:

$$\mathrm{Kurt}(y) = E[y^4] - 3(E[y^2])^2 \quad (3)$$

where y is a random variable with zero mean. If y is normalized, its variance equals one, i.e., E[y^2] = 1. Then, (3) simplifies to:

$$\mathrm{Kurt}(y) = E[y^4] - 3 \quad (4)$$
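As a quick numerical illustration of Eq. (3) and (4), the following Python sketch (our own illustration; the paper's experiments were done in MATLAB) estimates the sample kurtosis of a Gaussian signal and of a sub-Gaussian uniform signal:

```python
import numpy as np

def kurt(y):
    # Sample version of Eq. (3): Kurt(y) = E[y^4] - 3(E[y^2])^2, y zero-mean
    return np.mean(y**4) - 3.0 * np.mean(y**2)**2

rng = np.random.default_rng(0)
g = rng.standard_normal(100_000)                    # Gaussian: Kurt close to 0
u = rng.uniform(-np.sqrt(3), np.sqrt(3), 100_000)   # unit-variance uniform: Kurt close to -1.2
print(kurt(g), kurt(u))                             # near zero vs. clearly negative (sub-Gaussian)
```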
Corresponding Author: Tahir Ahmad, Department of Mathematics, Faculty of Science, Universiti Teknologi Malaysia 81310
Skudai, Johor, Malaysia
Negentropy for non-Gaussianity: An important measure of non-Gaussianity is negentropy. The concept of negentropy is closely tied to entropy, another fundamental concept of information theory. Given a discrete-valued random variable X, the entropy H of X is defined by:

$$H(X) = -\sum_i P(X = x_i) \ln P(X = x_i) \quad (5)$$

In Eq. (5), the x_i are the values of the random variable X and P is the probability function of X (Cover and Thomas, 1991; Papoulis, 1991).
The definition of entropy for a continuous random variable is obtained by generalizing (5). Given a continuous random variable X with density function P_X(·), the differential entropy H of X is defined by:

$$H(X) = -\int P_X(\gamma) \ln P_X(\gamma)\, d\gamma \quad (6)$$
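A minimal numerical check of Eq. (5), using an assumed probability vector (hypothetical values, chosen only for illustration):

```python
import numpy as np

def entropy(p):
    # Discrete entropy of Eq. (5): H(X) = -sum_i p_i ln p_i, in nats
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                 # terms with p_i = 0 contribute nothing to the sum
    return -np.sum(p * np.log(p))

print(entropy([0.5, 0.5]))   # ln 2 ~ 0.693, the maximum for two outcomes
print(entropy([0.9, 0.1]))   # ~ 0.325: a more predictable variable has lower entropy
```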
Let X be a vector random variable. The negentropy of X, denoted J(X), is defined by:

$$J(X) = H(X_{gauss}) - H(X) \quad (7)$$
where X_gauss is a Gaussian random variable with the same covariance as X. Negentropy is another measure of non-Gaussianity; however, estimating it requires first estimating the pdf of the vector random variable, which is computationally very hard. A method to approximate negentropy by nonpolynomial functions was proposed by Hyvarinen (1998). Related to this, a method using an expansion of the pdf to approximate negentropy was given by Hyvarinen et al. (2001). Let y be a whitened random variable. Then an approximation of J(y) is given by:

$$J(y) \propto (E[G(y)] - E[G(\vartheta)])^2 \quad (8)$$

where G is a nonquadratic function and ϑ is a Gaussian variable with unit variance and zero mean (the symbol ∝ means "proportional to"). In order to obtain a robust estimator, G should be chosen to grow slowly; two common choices are:
$$G_1(y) = \frac{1}{a_1} \log\cosh(a_1 y) \quad (9)$$

$$G_2(y) = -\exp\left(-\frac{y^2}{2}\right) \quad (10)$$

where 1 ≤ a_1 ≤ 2 (Hyvarinen et al., 2001).
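A sketch of the approximation (8) with the contrast G_1 of Eq. (9); the Monte Carlo estimate of E[G(ϑ)], the Laplacian test signal and all names are our own illustrative choices, and the proportionality constant is dropped:

```python
import numpy as np

def negentropy_approx(y, a1=1.0):
    # Eq. (8) with G1 from Eq. (9); y is assumed whitened (zero mean, unit variance)
    G = lambda u: np.log(np.cosh(a1 * u)) / a1
    v = np.random.default_rng(1).standard_normal(1_000_000)  # Gaussian reference variable
    return (np.mean(G(y)) - np.mean(G(v)))**2

y = np.random.default_rng(2).laplace(size=100_000)
y = (y - y.mean()) / y.std()     # normalize to zero mean, unit variance
print(negentropy_approx(y))      # clearly positive: the Laplacian signal is non-Gaussian
```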
A fixed-point algorithm using negentropy: A FastICA algorithm based on maximizing negentropy was introduced by Hyvarinen (1999). In this algorithm, a fixed-point iteration and Newton's method with second-order convergence are used to find the separation vector w as follows:

$$w \leftarrow w - \frac{E\{z g(w^T z)\} - \beta w}{E\{g'(w^T z)\} - \beta} \quad (11)$$

where w is the separation vector, g is the derivative of a function G chosen as in Eq. (9) and (10), z is the whitened observation vector, and β = E{w_opt^T z g(w_opt^T z)}, where w_opt corresponds to the optimum w. Equation (11) can be simplified by multiplying both sides by β − E{g'(w^T z)}, which gives:

$$w \leftarrow E\{z g(w^T z)\} - E\{g'(w^T z)\} w \quad (12)$$

Let X be a vector random variable and let z be given by the whitening of X. The fixed-point algorithm using negentropy is given as follows (Hyvarinen et al., 2001):
Step 1: Let k = 0 and let w(0) be an initial random vector with unit norm.
Step 2: Let:

$$w(k) = E\{z g(w(k-1)^T z)\} - E\{g'(w(k-1)^T z)\} w(k-1) \quad (13)$$

Step 3: Let:

$$w(k) = \frac{w(k)}{\|w(k)\|} \quad (14)$$

Step 4: If the algorithm has not converged, set k = k + 1 and go back to Step 2.
This algorithm yields one component; in order to estimate several independent components, the algorithm needs to be run several times.
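The one-unit iteration of Eq. (13) and (14) can be sketched in Python as follows (a minimal illustration under our own assumptions: g = G_1' = tanh with a_1 = 1, a toy two-source mixture, and function names of our choosing):

```python
import numpy as np

def fastica_one_unit(z, g, g_prime, max_iter=200, tol=1e-8):
    # One-unit fixed point of Eqs. (13)-(14); z is whitened data, shape (dim, samples)
    rng = np.random.default_rng(0)
    w = rng.standard_normal(z.shape[0])
    w /= np.linalg.norm(w)                       # Step 1: random unit-norm start
    for _ in range(max_iter):
        wz = w @ z                               # projections w^T z
        w_new = (z * g(wz)).mean(axis=1) - g_prime(wz).mean() * w   # Eq. (13)
        w_new /= np.linalg.norm(w_new)           # Eq. (14)
        if 1.0 - abs(w_new @ w) < tol:           # converged (up to sign)
            return w_new
        w = w_new
    return w

# Toy data: two independent uniform sources, randomly mixed, then whitened
rng = np.random.default_rng(1)
s = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, 5000))
x = rng.standard_normal((2, 2)) @ s
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d**-0.5) @ E.T @ x               # whitening: cov(z) ~ I
w = fastica_one_unit(z, np.tanh, lambda u: 1.0 - np.tanh(u)**2)   # g = G1', a1 = 1
```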
EIGHTH-ORDER NEWTON'S METHOD

The classic Newton's method for a single nonlinear equation can be expressed as follows:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \quad (15)$$
This is an important and basic method, which converges quadratically. Potra and Ptak (1984) proposed a modified Newton's method with third-order convergence, defined by:

$$x_{n+1} = x_n - \frac{f(x_n) + f\left(x_n - \frac{f(x_n)}{f'(x_n)}\right)}{f'(x_n)} \quad (16)$$
King (1973) developed a one-parameter family of fourth-order methods, which is written as follows:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)} \quad (17)$$

$$x_{n+1} = y_n - \frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2) f(y_n)} \cdot \frac{f(y_n)}{f'(x_n)} \quad (18)$$
where β ∈ R. Many methods have been proposed to improve the order of convergence (Kou, 2007; Chun, 2007; Parhi and Gupta, 2008; Bi et al., 2008).
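A compact Python sketch of one King step, Eqs. (17)-(18); the test equation and the choice β = 3 are ours, purely for illustration:

```python
def king_step(f, fp, x, beta=3.0):
    # One iteration of King's fourth-order family, Eqs. (17)-(18)
    y = x - f(x) / fp(x)                                             # Eq. (17)
    factor = (f(x) + beta * f(y)) / (f(x) + (beta - 2.0) * f(y))
    return y - factor * f(y) / fp(x)                                 # Eq. (18)

f  = lambda x: x**3 - 2.0        # root at 2**(1/3) = 1.2599...
fp = lambda x: 3.0 * x**2
x = 1.5
for _ in range(3):
    x = king_step(f, fp, x)
print(x)
```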
Li et al. (2011) proposed an eighth-order convergent Newton-type method, defined by:

$$u_{n+1} = x_n - \frac{f(x_n) + f\left(x_n - \frac{f(x_n)}{f'(x_n)}\right)}{f'(x_n)} \quad (19)$$

$$v_{n+1} = u_{n+1} - \frac{f(u_{n+1})}{f'\left(x_n - \frac{f(x_n)}{f'(x_n)}\right)} \quad (20)$$

$$x_{n+1} = v_{n+1} - \frac{f(v_{n+1})}{f'(v_{n+1})} \quad (21)$$

It is observed that this method requires fewer iterations than the traditional Newton's method.
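Based on the reconstruction of Eqs. (19)-(21) above, one iteration of the scheme can be sketched as follows (the test function is an arbitrary choice of ours; note that Eq. (19) is exactly the Potra-Ptak step of Eq. (16)):

```python
def eighth_order_step(f, fp, x):
    # One iteration of the eighth-order scheme, Eqs. (19)-(21)
    y = x - f(x) / fp(x)              # plain Newton point
    u = x - (f(x) + f(y)) / fp(x)     # Eq. (19): Potra-Ptak third-order step
    v = u - f(u) / fp(y)              # Eq. (20): reuses the derivative at y
    return v - f(v) / fp(v)           # Eq. (21): final Newton correction

f  = lambda x: x**3 - 2.0
fp = lambda x: 3.0 * x**2
x = 1.5
for _ in range(2):                    # a couple of iterations reach machine precision
    x = eighth_order_step(f, fp, x)
print(x)                              # ~ 1.2599210498948732
```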
IMPROVED FASTICA USING EIGHTH-ORDER NEWTON'S METHOD

With regard to Eq. (11), new update equations for estimating the separation matrix (un-mixing matrix) can be derived by using Eq. (19) to (21), as follows:

$$w \leftarrow w_v - \frac{E\{z g(w_v^T z)\} - \beta w_v}{E\{g'(w_u^T z)\} - \beta} \quad (22)$$

such that:

$$w_v \leftarrow w_u - \frac{E\{z g(w_u^T z)\} - \beta w_u}{E\{g'(w_y^T z)\} - \beta} \quad (23)$$

$$w_u \leftarrow w - \frac{E\{z g(w^T z)\} - \beta w + E\{z g(w_y^T z)\} - \beta w_y}{E\{g'(w^T z)\} - \beta} \quad (24)$$

$$w_y \leftarrow w - \frac{E\{z g(w^T z)\} - \beta w}{E\{g'(w^T z)\} - \beta} \quad (25)$$

Equations (22) to (25) can be simplified to:

$$w \leftarrow E\{z g(w_v^T z)\} - E\{g'(w_u^T z)\} w_v \quad (26)$$

$$w_v \leftarrow E\{z g(w_u^T z)\} - E\{g'(w_y^T z)\} w_u \quad (27)$$

$$w_u \leftarrow E\{z g(w^T z)\} + E\{z g(w_y^T z)\} - E\{g'(w^T z)\} w + \beta w_y \quad (28)$$

$$w_y \leftarrow E\{z g(w^T z)\} - E\{g'(w^T z)\} w \quad (29)$$

The parameter β ∈ R in (28) is updated in every iteration by β = E{w^T z g(w^T z)}. Hence, we propose the following fixed-point algorithm for estimating the separation matrix:

Step 1: Let n = 0 and let w_0 be an initial random vector with unit norm. Assume β = E{w_0^T z g(w_0^T z)}.
Step 2: Let:

$$w_y = E\{z g(w_n^T z)\} - E\{g'(w_n^T z)\} w_n \quad (30)$$

$$w_u = E\{z g(w_n^T z)\} + E\{z g(w_y^T z)\} - E\{g'(w_n^T z)\} w_n + \beta w_y \quad (31)$$

$$w_v = E\{z g(w_u^T z)\} - E\{g'(w_y^T z)\} w_u \quad (32)$$

$$w_{n+1} = E\{z g(w_v^T z)\} - E\{g'(w_u^T z)\} w_v \quad (33)$$

Step 3: Let w_{n+1} = w_{n+1} / ‖w_{n+1}‖.
Step 4: If the algorithm has not converged, set β = E{w_{n+1}^T z g(w_{n+1}^T z)}, n = n + 1 and go back to Step 2.
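A sketch of one pass of Step 2 in Python (our own translation of the reconstructed Eqs. (30)-(33); all names are illustrative and z is assumed to be whitened data):

```python
import numpy as np

def improved_update(w, z, g, g_prime, beta):
    # One pass of Step 2, Eqs. (30)-(33); z: whitened data (dim, samples)
    Ezg = lambda v: (z * g(v @ z)).mean(axis=1)   # E{z g(v^T z)}
    Egp = lambda v: g_prime(v @ z).mean()         # E{g'(v^T z)}
    w_y = Ezg(w) - Egp(w) * w                                 # Eq. (30)
    w_u = Ezg(w) + Ezg(w_y) - Egp(w) * w + beta * w_y         # Eq. (31)
    w_v = Ezg(w_u) - Egp(w_y) * w_u                           # Eq. (32)
    w_new = Ezg(w_v) - Egp(w_u) * w_v                         # Eq. (33)
    return w_new / np.linalg.norm(w_new)          # Step 3: renormalize

# Step 4, outside the function: update beta and loop until convergence, e.g.
#   w = improved_update(w, z, np.tanh, lambda u: 1.0 - np.tanh(u)**2, beta)
#   beta = np.mean((w @ z) * np.tanh(w @ z))
```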
SIMULATION RESULTS

We used three sound signals from the MATLAB database, as shown in Fig. 1. Each sample signal has a length of 1000. These source signals were randomly mixed; the mixed signals are shown in Fig. 2. After whitening the mixed signals, we ran the proposed improved FastICA algorithm to separate them. The results are shown in Fig. 3.
Fig. 1: Three source signals
Fig. 2: Three mixed signals

Fig. 3: The separated signals

We can also get the same result when using the FastICA algorithm (Hyvarinen et al., 2001). In order to measure the accuracy of the separation, we calculated the performance index (Amari et al., 1996; Shi and Zhang, 2006):

$$PI = \frac{1}{n^2}\left[\sum_{i=1}^{n} rPI_i + \sum_{j=1}^{n} cPI_j\right] \quad (34)$$

where

$$rPI_i = \sum_{j=1}^{n}\frac{|p_{ij}|}{\max_k |p_{ik}|} - 1 \qquad \text{and} \qquad cPI_j = \sum_{i=1}^{n}\frac{|p_{ij}|}{\max_k |p_{kj}|} - 1$$

in which p_ij is the ijth element of the n × n matrix P = wVA, where w is the separation matrix, V is the whitening matrix and A is the mixing matrix. The larger the value of PI, the poorer the statistical performance of the BSS algorithm (Amari et al., 1996; Shi and Zhang, 2006).
These algorithms were run six times to evaluate their performance. The results are presented in Table 1, whereby the separation performance of the proposed algorithm is almost the same as that of the FastICA algorithm. Furthermore, we compared the convergence speed of the two algorithms: we ran each method 15 times consecutively, calculated the average iteration numbers and present them in Table 2.

Table 1: Comparison of the separation performance (performance index over six runs)
Run                  1        2        3        4        5        6
FastICA              0.2156   0.1934   0.1873   0.1721   0.1964   0.2143
Improved algorithm   0.2155   0.1935   0.1871   0.1724   0.1961   0.2146

Table 2: Comparison of iteration numbers and CPU time
                     FastICA       Improved algorithm
The 1st component    23.11         2.35
The 2nd component    9.45          4.71
The 3rd component    4             3.2
Total iterations     36.56         10.26
CPU time             0.2412 sec.   0.1381 sec.

As a result, the improved FastICA algorithm runs with fewer iterations than FastICA, while maintaining comparable separation performance. This is because the eighth-order Newton's method converges much faster than the traditional Newton's method with second-order convergence when solving a nonlinear equation. To measure the CPU time, the two algorithms were each run 15 times using MATLAB R2010 on an Intel(R) Core(TM)2 Duo CPU T6400 2.00 GHz processor with 2.00 GB RAM. Table 2 reports the iteration numbers and the CPU times required to separate the sound signals from the three mixed sound signals; the average CPU times are 0.2412 s and 0.1381 s, respectively. This demonstrates that the proposed method exhibits faster convergence with fewer iterations than the FastICA algorithm, with comparable separation performance.

CONCLUSION

In this study, an improved FastICA algorithm using eighth-order Newton's method was proposed for BSS. The separation performance of the proposed algorithm was almost the same as that of the FastICA algorithm. Moreover, the mixed sound signals were separated by the new algorithm with fewer iterations and faster convergence than FastICA.
ACKNOWLEDGMENT

The researchers would like to thank the School of Postgraduate Studies (SPS) of Universiti Teknologi Malaysia (UTM) for the Fellowship (IDF).
REFERENCES
Amari, S., A. Cichocki and H. Yang, 1996. A new
learning algorithm for blind separation of sources.
Adv. Neural Inform. Process. Syst., 8: 757-763.
Bell, A. and T. Sejnowski, 1995. An information-maximization approach to blind separation and blind deconvolution. Neural Comp., 7: 1129-1159.
Bi, W., H. Ren and Q. Wu, 2008. New family of
seventh-order methods for nonlinear equations.
Appl. Math. Comp., 203: 408-412.
Cardoso, J.F., 1998. Blind signal separation: Statistical principles. Proceedings of the IEEE, 86: 2009-2025.
Chun, C., 2007. Some improvements of Jarratt’s
method with sixth-order convergence. Appl. Math.
Comp., 190: 1432-1437.
Cover, T.M. and J.A. Thomas, 1991. Elements of
Information Theory. Publisher John Wiley & Sons,
Hoboken, pp: 640, ISBN: 0471748811.
Cichocki, A. and S. Amari, 2004. Adaptive Blind
Signal and Image Processing: Learning Algorithms
and Applications. Wiley, New York.
Choi, S., A. Cichocki and H.M. Park, 2005. Blind
source separation and independent component
analysis: A review. Neural Inform. Process. Lett.
Rev., 6(1): 1-57.
Hyvarinen, A., 1998. New approximations of
differential entropy for independent component
analysis and projection pursuit. Adv. Neural
Inform. Proc. Syst., 10: 273-279.
Hyvarinen, A., 1999. Fast and robust fixed-point
algorithms for independent component analysis.
IEEE T. Neural Networ., 10(3): 626-634.
Hyvarinen, A., J. Karhunen and E. Oja, 2001.
Independent Component Analysis. John Wiley &
Sons, New York.
Stone, J.V., 1999. Independent Component Analysis: A Tutorial Introduction. The MIT Press, Cambridge, Mass, pp: 193, ISBN: 0262693151.
King, R., 1973. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal., 10: 876-879.
Kou, J., 2007. The improvements of modified Newton’s
method. Appl. Math. Comp., 189: 602-609.
Lee, T.W., M. Girolami and T.J. Sejnowski, 1999. Independent component analysis using an extended infomax algorithm for mixed sub-Gaussian and super-Gaussian sources. Neural Comp., 11: 417-441.
Li, Z., C. Peng, T. Zhou and J. Gao, 2011. A new Newton-type method for solving nonlinear
equations with any integer order of convergence. J.
Comp. Inform. Syst., 7: 2371-2378.
Parhi, S.K. and D.K. Gupta, 2008. A sixth order method
for nonlinear equations. Appl. Math. Comput., 203:
50-55.
Pham, D.T., 2004. Fast algorithms for mutual
information based independent component
analysis. IEEE T. Signal Proc., 52(10).
Potra, F.A. and V. Ptak, 1984. Nondiscrete Induction
and Iterative Processes. Pitman, Boston, pp: 207,
ISBN: 0273086278.
Papoulis, A., 1991. Probability, Random Variables and
Stochastic Processes. 3rd Edn., McGraw-Hill, New
York.
Shi, Z., H. Tang and Y. Tang, 2004. A new fixed-point algorithm for independent component analysis. Neurocomputing, 56: 467-473.
Shi, Z. and C. Zhang, 2006. Energy predictability to
blind source separation. Elec. Lett., 42(17):
1006-1008.
Ye, Y., Z.L. Zhang, S. Wu and X. Zhou, 2006.
Improved multiplicative orthogonal-group based
ICA algorithm for separating mixed sub-Gaussian
and super-Gaussian sources. Proceeding of
International Conference on Communications,
Circuits and Systems, Guilin, China (ICCCAS), pp:
340-343.