BLIND SEPARATION OF OPTICAL TRACKER RESPONSES INTO INDEPENDENT
COMPONENTS DISCRIMINATES OPTICAL SOURCES
Ivica Kopriva†, Antun Peršin‡
†Institute for Defense Studies, R&D, Bijenička 46, 10000 Zagreb, Croatia
‡Institute "Ruđer Bošković", Laser R&D Department, Bijenička 56, 10000 Zagreb, Croatia
e-mail: ivica.kopriva@public.srce.hr
ABSTRACT
An optical system based on a rotating reticle is used to determine
the polar co-ordinates of a primarily infrared optical source. Such
a system fails to discriminate several simultaneously present optical
sources. We demonstrate experimentally, for the case of two optical
sources, that this drawback can in principle be overcome by
applying blind signal separation (BSS) algorithms to data recorded
at the outputs of an appropriately modified optical system.
Separation of the modified optical system responses into
independent components yields modulating functions that carry
information about the polar co-ordinates of the corresponding single
optical sources.
1. INTRODUCTION
Potential applications of the BSS theory lie in many areas, such
as speaker separation in speech recognition [7], communication
signal processing by adaptive blind equalization [1], [6], recovery
of biomedical signals (EEG, ECG, MEG) [13], etc. We believe that
the BSS theory is here applied for the first time to a new problem:
the discrimination of optical sources.

An optical system based on a rotating reticle is used to detect and
determine the position of an object from which some form of
primarily infrared energy emanates [5], [9], see figure 1.
Figure 1: Optical tracker basic construction — lens, rotating reticle (ω_M), incident flux Φ(λ, t), photodetector, amplifier, BP filter, output x(t)
For convenience, the optical system shown in figure 1 will be called
an optical tracker in the rest of the paper. The rotating reticle, also
called a modulating disk, modulates the incident optical flux Φ(λ, t)
and is located at the focal plane of the optical imaging system. Here
λ stands for wavelength and t for time. Depending on the shape of the
clear and opaque segments of the reticle, the optical flux after the
reticle, i.e. at the output of the photodetector, is modulated in an
appropriate way. It has been pointed out that the optical tracker
fails to discriminate several optical sources simultaneously [14].
The subject of this paper is to show that this serious limitation of
the optical tracker can in principle be overcome by combined use of
BSS algorithms and an appropriate modification of the optical tracker
construction.
In section two a qualitative mathematical model of the optical
tracker output signal according to figure 1, and of the output
signals of the modified optical tracker, figure 3, is derived.

In section three a brief overview of the fundamentals of the BSS
theory is given. The three main assumptions on which all BSS
algorithms are based (statistical independence of the source signals,
non-Gaussianity of the source signals and non-singularity of the
mixing matrix) are interpreted in the context of the optical source
discrimination problem.

In the fourth section experimental results are described. It is
demonstrated that, in the case of two optical sources, discrimination
between them is possible by using the modified optical tracker and
adaptive blind separation algorithms. The statistical independence
assumption of the source signals is also verified experimentally in
that section.

The conclusion is given in section five, while the quoted references
are listed in section six.
2. DERIVATION OF THE SIGNAL MODEL
The polar coordinates (r, θ) of the optical source projection on the
reticle are defined relative to the center of the reticle, which
coincides with the center of the optical tracker field of view. It
can be shown [10] that the speed of rotation of the optical source
projection around the center of the reticle is given by:

ω(t) = ω_M [1 + ε cos(ω_M t + θ)]    (1)

where ε = r / r₀ is the relative distance between the center of the
circle with radius r₀ and the center of the reticle, and ω_M is the
speed of rotation of the reticle, see figure 1. For a reticle with a
fan-bladed pattern, such as the one shown in figure 2, with n pairs
of clear and opaque segments, the instantaneous frequency of the
optical source radiating flux after the reticle is given by [9], [10]:

ω(t) = ω₀ + Δω_m cos(ω_M t + θ)    (2)

where:

ω₀ = n ω_M,    Δω_m = n ε ω_M    (3)

Figure 2: Reticle with fan-bladed pattern

The time function whose instantaneous frequency obeys rule (2) has
the form:

s(r, θ, t) = cos[ω₀ t + β sin(ω_M t + θ)]    (4)

which is the canonical representation of a frequency modulated (FM)
signal [18], with modulation index β = Δω_m / ω_M = nε. This waveform
is the fundamental term of the photodetector response to the incident
optical flux. Spectral terms around the frequencies 2ω₀, 3ω₀, ... also
exist, but the selective amplifier, figure 1, removes them [9], [14].
Consequently, the incident optical flux at the detector area can be
approximately described by:

Φ(λ, t) ≅ Φ₀(λ, t) s(r, θ, t)    (5)

The deviation of the FM signal (4), i.e. the magnitude of the
modulation, is given by:

Δf = n ε f_M = n (r / r₀) f_M = k r    (6)

where k, i.e. n and r₀, are construction constants. The deviation (6)
is directly proportional to the radial distance of the image
projection from the axis of rotation [14].
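To make the geometry concrete, the following short Python sketch (not from the paper; the reticle parameters n, f_M, r₀ and the radii are assumed purely for illustration) generates the waveform (4) and checks numerically that its frequency deviation follows (6), i.e. grows linearly with r.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100e3                   # sampling frequency [Hz], assumed
t = np.arange(0, 0.2, 1 / fs)
f_M = 25.0                   # reticle rotation rate [Hz], assumed
n = 900                      # pairs of clear/opaque segments, assumed
r0 = 1.0                     # reference radius r_0, construction constant, assumed

def s_of_t(r, theta=0.0):
    """Eq. (4): s = cos(w0 t + beta sin(w_M t + theta)),
    with w0 = n*w_M (eq. 3) and beta = delta_f / f_M = n*r/r0 (eqs 3, 6)."""
    w_M = 2 * np.pi * f_M
    beta = n * r / r0
    return np.cos(n * w_M * t + beta * np.sin(w_M * t + theta))

for r in (0.02, 0.05, 0.10):
    x = s_of_t(r)
    # instantaneous frequency via the analytic signal (edges trimmed)
    inst_f = np.diff(np.unwrap(np.angle(hilbert(x)))) * fs / (2 * np.pi)
    measured = 0.5 * (inst_f[500:-500].max() - inst_f[500:-500].min())
    predicted = n * r / r0 * f_M      # eq. (6): delta_f = k*r, k = n*f_M/r0
    print(f"r = {r:0.2f}  predicted df = {predicted:7.1f} Hz  measured df ~ {measured:7.1f} Hz")
```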
Using elementary semiconductor theory [16] it can be shown that the
current response of the PIN photodiode to the incident optical flux
(5) is given by:

i(λ, t) = A (λ / hc) R(λ) Φ(λ, t)    (7)

where A is the detector sensing area, c the speed of light, h
Planck's constant, R(λ) the photodiode responsivity and λ the
wavelength. The overall photocurrent is obtained by integrating
expression (7) over the detector sensitivity range:

i(t) = (A / hc) [ ∫ λ R(λ) Φ₀(λ, t) dλ ] s(r, θ, t)    (8)

The selective amplifier output signal x(t), see figure 1, is obtained
by convolving the selective amplifier impulse response g(t) with the
photocurrent i(t). Assuming that the photon flux Φ₀(λ, t) is a slowly
varying function relative to s(r, θ, t), the following expression for
x(t) is obtained:

x(t) = g̃(t) * s(r, θ, t)    (9)

where * denotes convolution and g̃(t) is given by:

g̃(t) = g(t) · (A / hc) ∫ λ R(λ) Φ₀(λ, t) dλ    (10)

When two optical sources S1 and S2 are simultaneously present in the
optical tracker field of view, at coordinates (r₁, θ₁) and (r₂, θ₂)
respectively, the selective amplifier output signal takes the form:

x(t) = g̃₁(t) * s₁(r, θ, t) + g̃₂(t) * s₂(r, θ, t)    (11)

where:

g̃₁(t) = g(t) · (A / hc) ∫ λ R(λ) Φ₁(λ, t) dλ
g̃₂(t) = g(t) · (A / hc) ∫ λ R(λ) Φ₂(λ, t) dλ    (12)

Expression (11) shows that, when photo-detector linearity is assumed,
the optical tracker output signal is a convolved combination of the
modulating functions, which carry information about the polar
co-ordinates of the corresponding optical sources, and of
non-stationary impulse responses of the form (12). It has been shown
analytically in [14] that in such a case the optical tracker follows
a centroid whose coordinates are functions of the effective
brightness of the two sources. In the case of two collinear sources
of equal brightness located at the points (r₁, 0) and (r₂, 0), the
optical tracker will see a fictitious optical source with coordinates
((r₁ + r₂)/2, 0). The point is that the optical tracker fails to
determine the accurate coordinates of either of the two sources. In
order to resolve this drawback the optical tracker construction was
modified as shown in figure 3.
Figure 3: Modified optical tracker — lens, rotating reticle (ω_M), beam splitter dividing the modulated flux into s₁(r, θ, t)Φ₁(λ, t) and s₂(r, θ, t)Φ₂(λ, t), photodetectors 1 and 2 followed by amplifiers and BP filters, outputs x₁(t) and x₂(t)
The main point is the introduction of a beam splitter after the
modulating disk, so that the modulated optical flux is detected with
two photodetectors. This agrees with the BSS theory, which requires
M ≥ N detectors for blind separation of N sources [2], [17],
[19]-[21]. In accordance with the exposed material the selective
amplifier output signals are:

x₁(t) = g₁₁(t) * s₁(r, θ, t) + g₁₂(t) * s₂(r, θ, t)
x₂(t) = g₂₁(t) * s₁(r, θ, t) + g₂₂(t) * s₂(r, θ, t)    (13)

where the impulse responses g₁₁(t), g₁₂(t), g₂₁(t) and g₂₂(t) are
given by:

g₁₁(t) = g₁(t) · (A₁ / hc) ∫ λ τ(λ) R₁(λ) Φ₁(λ, t) dλ
g₁₂(t) = g₁(t) · (A₁ / hc) ∫ λ τ(λ) R₁(λ) Φ₂(λ, t) dλ
g₂₁(t) = g₂(t) · (A₂ / hc) ∫ λ ρ(λ) R₂(λ) Φ₁(λ, t) dλ
g₂₂(t) = g₂(t) · (A₂ / hc) ∫ λ ρ(λ) R₂(λ) Φ₂(λ, t) dλ    (14)

Expressions (13) and (14) represent the mathematical model of the
modified optical tracker output signals. In (14), τ(λ) and ρ(λ) are
the beam splitter transmission and reflection coefficients
respectively, while R₁(λ) and R₂(λ) are the photo-detector spectral
responsivities. The convolutive signal model (13), suitable for
application of the BSS algorithms, is illustrated in figure 4, where
G₁₁(z, k), G₁₂(z, k), G₂₁(z, k) and G₂₂(z, k) are Z transforms of the
related impulse responses (14).

Figure 4: Convolutive signal model — sources s₁(r, θ, k) and s₂(r, θ, k) filtered by G₁₁(z, k), G₁₂(z, k), G₂₁(z, k), G₂₂(z, k) and summed into the observations x₁(k) and x₂(k)
By using on-line BSS algorithms it should be possible to recover the
source signals s₁(r, θ, t) and s₂(r, θ, t) on the basis of the
observed signals x₁(t) and x₂(t) only.
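A minimal numerical sketch of the convolutive model (13)/figure 4 may help fix ideas. The FIR impulse responses below are arbitrary stand-ins for the physically derived responses (14), and the FM parameters loosely follow the values quoted later in section 4; nothing here is the paper's actual data.

```python
import numpy as np

fs = 100e3                                   # sampling rate, assumed
t = np.arange(0, 0.1, 1 / fs)
f_M = 25.0                                   # assumed reticle rotation rate

# two independent FM-type sources of the form (4); deviations ~20 Hz and ~4 kHz
s1 = np.cos(2 * np.pi * 22.5e3 * t + (20 / f_M) * np.sin(2 * np.pi * f_M * t))
s2 = np.cos(2 * np.pi * 22.5e3 * t + (4e3 / f_M) * np.sin(2 * np.pi * f_M * t + 1.0))

# short FIRs standing in for g11, g12, g21, g22 of (14)
g11, g12 = np.r_[1.0, 0.3, 0.1], np.r_[0.6, 0.2, 0.05]
g21, g22 = np.r_[0.5, 0.25, 0.1], np.r_[1.0, 0.35, 0.05]

# eq. (13): each observation is a convolved combination of both sources
x1 = np.convolve(s1, g11, "same") + np.convolve(s2, g12, "same")
x2 = np.convolve(s1, g21, "same") + np.convolve(s2, g22, "same")
print(np.corrcoef(x1, x2)[0, 1])             # the two observations are correlated
```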
3. INTERPRETATION OF THE BSS THEORY
REQUIREMENTS
Real-world situations, which often include time delays between the
sources in the mixture, are described by the convolutive mixing model
x(t) = G * s(t), where s(t) is the vector of source signals, G is the
mixing matrix containing the channel impulse responses, x(t) is the
vector of observed signals and * denotes convolution. The goal is to
reconstruct the source signals from the observed signals only, i.e.
the mixing matrix G is assumed to be unknown. Solutions for this
problem are given in [1], [4], [7], [8], [11], [12], [15], [17],
[19]-[21]. However, some of the proposed solutions are iterative by
nature [2], [7], [12], [21]. For real-time source separation an
adaptive, i.e. sequential, version of the algorithms is necessary.
Such algorithms are described in [1], [4], [8], [11], [15], [17],
[19], [20]. Algorithms of this kind are of interest for the
discrimination of optical sources, which is a real-time problem.
There are three fundamental assumptions on which all BSS algorithms
are based: statistical independence of the source signals,
non-Gaussianity of the source signals and non-singularity of the
mixing matrix in the model of the observed signals. Here it is
briefly examined whether these assumptions are fulfilled for the
model of the modified optical tracker output signals (13). The
statistical independence assumption for the source signals
s₁(r, θ, t) and s₂(r, θ, t) is logical since they are generated by
two different (independent) optical sources. It will be demonstrated
experimentally in the fourth section that this assumption holds. The
second assumption, non-Gaussianity of the source signals, is also
fulfilled for the signals s₁(r, θ, t) and s₂(r, θ, t). As shown in
expression (4) they are frequency modulated. This type of signal,
like most communication signals, belongs to the sub-Gaussian class of
signals having negative kurtosis. The third assumption is
non-singularity of the mixing matrix when the convolutive model (13)
is transformed into the frequency domain. It can then be written as:

X(ω) = G(ω) S(ω)    (15)
where X(ω) and S(ω) are vectors whose components are Discrete Fourier
Transforms (DFTs) of the observed and source signals respectively,
while G(ω) is the matrix whose components are DFTs of the related
impulse responses given by (14). The non-singularity requirement on
the mixing matrix over some frequency region of interest is formally
expressed as:

det G(ω) ≠ 0    ∀ω ∈ [ω₁, ω₂]    (16)

where ω₁ and ω₂ are frequencies determined by the design of the
selective amplifiers. Satisfaction of condition (16) is important
since it ensures that there is a benefit from using two sensors.
Otherwise, both observed signals would deliver the same information
and one of them would be useless. Assuming that the optical flux
Φ(λ, t) is piecewise stationary, at least over the time interval
determined by the length of the DFT, the DFTs of the impulse
responses (14) can be written as:

G_ij(ω) = G_i(ω) · k_ij    i, j = 1, 2    (17)

where G_i(ω) is the DFT of the related selective amplifier impulse
response g_i(t), while the k_ij can easily be identified from (14).
Condition (16) can then be rewritten as:

G₁(ω) G₂(ω) [k₁₁ k₂₂ − k₂₁ k₁₂] ≠ 0    ∀ω ∈ [ω₁, ω₂]    (18)

which reduces to:

k₁₁ k₂₂ − k₂₁ k₁₂ ≠ 0    (19)
since in (18) G₁(ω) and G₂(ω) represent the frequency responses of
the selective amplifiers and their product is always different from
zero. Assuming for the photo-detector spectral responsivities in (14)
that R₁(λ) = R₂(λ) = R(λ), and taking into account ρ(λ) = 1 − τ(λ),
expression (19) is transformed into:

∫ λ τ(λ) R(λ) Φ₁(λ) dλ · ∫ λ R(λ) Φ₂(λ) dλ  ≠  ∫ λ τ(λ) R(λ) Φ₂(λ) dλ · ∫ λ R(λ) Φ₁(λ) dλ    (20)
Condition (20), and consequently condition (16), will be fulfilled
whenever τ(λ) is not constant over the spectral region of interest.
When real beam splitter devices are taken into consideration this is
always true. Hence, the role of the beam splitter device is twofold:
firstly, it ensures that both detectors see the optical sources in
the same system of co-ordinates, and secondly it ensures
non-singularity of the mixing matrix in the frequency domain. The
above physically based reasoning gives the framework for performing
the experiment with the modified optical tracker.
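Condition (16)/(18) can also be checked numerically. The sketch below (with the same kind of arbitrary stand-in impulse responses as in the earlier sketch) evaluates det G(ω) on a frequency grid and verifies that it stays away from zero over the band passed by the selective amplifiers; the band limits are borrowed from the experimental values in section 4 and are assumptions of this sketch.

```python
import numpy as np

# arbitrary FIRs standing in for the impulse responses g_ij(t) of (14)
g11, g12 = np.r_[1.0, 0.3, 0.1], np.r_[0.6, 0.2, 0.05]
g21, g22 = np.r_[0.5, 0.25, 0.1], np.r_[1.0, 0.35, 0.05]

nfft, fs = 512, 100e3
G11, G12 = np.fft.rfft(g11, nfft), np.fft.rfft(g12, nfft)
G21, G22 = np.fft.rfft(g21, nfft), np.fft.rfft(g22, nfft)

det_G = G11 * G22 - G12 * G21                 # determinant appearing in (18)
freqs = np.fft.rfftfreq(nfft, 1 / fs)
band = (freqs >= 18e3) & (freqs <= 27.5e3)    # selective amplifier band, assumed
print("min |det G(omega)| over the band:", np.abs(det_G[band]).min())  # want > 0
```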
For reconstruction of the source signals the feedback separation
network shown in figure 5 is applied, with W₁₁(z) = W₂₂(z) = 1.0. The
reason for using a feedback network with W₁₁(z) and W₂₂(z) set to
unity is to avoid the whitening effect, which is nicely explained in
[20].

Figure 5: Feedback separation network

To reconstruct the modified optical tracker source signals, two
time-domain on-line separation algorithms are used: the algorithm
based on the infomax criterion [12], [20] and the algorithm that
minimizes the instantaneous [8] or generalized instantaneous energy
[11]. As pointed out in [12], [20], separation based on the infomax
criterion is performed by minimizing the mutual information between
the components of the random vector z(t) = g(y(t)), where g is a
nonlinear function approximating the cumulative density function
(cdf) of the sources. The sigmoids most often in use (g(y) = tanh(y),
g(y) = 1/(1 + e^−y)) approximate well the cdfs of positively kurtotic
signals like speech. Nevertheless, we managed to separate the optical
tracker signals, which belong to the sub-Gaussian class of signals.
As reported in the next section, this was probably due to the fact
that one source signal had negative kurtosis very close to zero. The
infomax learning rules with z = g(y) = tanh(y) for the feedback
separation network, figure 5, are given by:

w_ij(k+1, l) = w_ij(k, l) + 2μ tanh(y_i(k)) y_j(k − l)    (21)

Minimization of the energy based criterion [11] for an odd value of p:

ξ_i(k) = E{ y_i^p(k) sign(y_i(k)) }    i = 1, 2    (22)

yields the following learning rule:

w_ij(k+1, l) = w_ij(k, l) + μ y_i^(p−1)(k) sign(y_i(k)) y_j(k − l)    (23)

where in (21) and (23) i, j = 1, 2, μ is a small positive adaptation
gain, and l = 1, 2, ..., L, where L is the order of the cross-filters
in the feedback separation network.
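The sketch below is a minimal Python rendering of the feedback network of figure 5 with W11 = W22 = 1 and adaptive cross-filters of order L. The step size μ, the initialization, and the sign convention of the gradient step are assumptions of this sketch (reconstructed from the garbled rules above), not the authors' implementation; switching `rule` selects between the infomax-style update (21) and the energy-minimization update (23).

```python
import numpy as np

def feedback_separate(x1, x2, L=61, mu=1e-4, rule="infomax", p=3):
    """Feedback separation network (figure 5): y1 = x1 - w12*y2, y2 = x2 - w21*y1."""
    N = len(x1)
    w12 = np.zeros(L)          # cross-filter from y2 to y1, lags 1..L
    w21 = np.zeros(L)          # cross-filter from y1 to y2, lags 1..L
    y1, y2 = np.zeros(N), np.zeros(N)
    for k in range(N):
        # past outputs y(k-1) ... y(k-L) feeding back through the cross-filters
        y1_past = y1[max(k - L, 0):k][::-1]
        y2_past = y2[max(k - L, 0):k][::-1]
        y1[k] = x1[k] - np.dot(w12[:len(y2_past)], y2_past)
        y2[k] = x2[k] - np.dot(w21[:len(y1_past)], y1_past)
        if rule == "infomax":          # rule (21), z = tanh(y)
            w12[:len(y2_past)] += mu * 2 * np.tanh(y1[k]) * y2_past
            w21[:len(y1_past)] += mu * 2 * np.tanh(y2[k]) * y1_past
        else:                          # rule (23), odd p (generalized energy)
            w12[:len(y2_past)] += mu * y1[k] ** (p - 1) * np.sign(y1[k]) * y2_past
            w21[:len(y1_past)] += mu * y2[k] ** (p - 1) * np.sign(y2[k]) * y1_past
    return y1, y2
```

In use, `y1, y2 = feedback_separate(x1, x2, L=61)` would mirror the cross-filter order reported in section 4; the step size will generally need tuning to the signal level.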
4. EXPERIMENTAL RESULTS
Two real-world signals were recorded at the outputs of the modified
optical tracker, see figure 3, with a sampling frequency of 100 kHz.
Figure 6 shows the power spectrum of the observed signal x1(t)
recorded at the output of one photoamplifier when both optical
sources were present in the field of view. The spectrum of the second
measured signal looks similar. Information about the positions of
both optical sources is present in the recorded signal. The first
light source, with the smaller radius coordinate, has a deviation of
approximately 20 Hz and was located in the center of the field of
view, which corresponds to the carrier frequency of 22.5 kHz. The
second signal, with the larger radius, has a deviation of
approximately 4 kHz and occupies the spectrum from 18 to 27.5 kHz.
The length of the recorded blocks of the observed signals x1(t) and
x2(t) was 32768 samples.
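For readers who want to reproduce the qualitative look of figure 6, the following sketch synthesizes a mixture with the carrier and deviation values just quoted (22.5 kHz carrier, roughly 20 Hz and 4 kHz deviations, fs = 100 kHz, 32768 samples) and estimates its power spectrum; the mixing weights are arbitrary and the synthetic signal is only a stand-in for the recorded x1(t).

```python
import numpy as np
from scipy.signal import welch

fs = 100e3
t = np.arange(32768) / fs
f_M = 25.0                                   # assumed reticle rotation rate
s1 = np.cos(2 * np.pi * 22.5e3 * t + (20 / f_M) * np.sin(2 * np.pi * f_M * t))
s2 = np.cos(2 * np.pi * 22.5e3 * t + (4e3 / f_M) * np.sin(2 * np.pi * f_M * t))
x1 = 0.8 * s1 + 0.6 * s2                     # crude instantaneous stand-in mix

f, Pxx = welch(x1, fs=fs, nperseg=4096)
band = (f > 18e3) & (f < 27.5e3)
print("fraction of power in the 18-27.5 kHz band:", Pxx[band].sum() / Pxx.sum())
```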
Figure 6: Power spectrum of the observed signal x1(t)
Power spectra of the separated signals y1(t) and y2(t) are shown in
figures 7 and 8 respectively. Compared with figure 6, the separation
of the two signals is obvious. After separation, determination of the
co-ordinates of the optical sources is a matter of demodulating the
recovered source signals, which are of the form (4). In order to
measure the separation quality and compare different separation
algorithms, the separated signals y1(t) and y2(t) as well as the
source signals s1(t) and s2(t) were demodulated by means of a
quadrature FM demodulator. According to expressions (4) and (6), the
amplitude of the demodulated signal is directly proportional to the
polar coordinate r of the corresponding optical source.
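A quadrature FM demodulator of the kind referred to here can be sketched as follows; the filter order and cutoff are assumptions, and this is an illustrative implementation rather than the one used in the experiment. By (4) and (6), the amplitude of the oscillation of the recovered instantaneous frequency at the reticle rotation rate is k·r, i.e. proportional to the radial coordinate of the source.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def quadrature_fm_demod(y, fs, f0):
    """Mix y = cos(2*pi*f0*t + phi(t)) to baseband in quadrature, low-pass,
    and recover the instantaneous frequency deviation from the phase."""
    t = np.arange(len(y)) / fs
    i = y * np.cos(2 * np.pi * f0 * t)           # in-phase mixing
    q = -y * np.sin(2 * np.pi * f0 * t)          # quadrature mixing
    b, a = butter(4, 0.2)                        # assumed cutoff (~10 kHz at fs = 100 kHz)
    i, q = filtfilt(b, a, i), filtfilt(b, a, q)
    phase = np.unwrap(np.angle(i + 1j * q))      # recovered phi(t)
    return np.diff(phase) * fs / (2 * np.pi)     # deviation in Hz, amplitude ~ k*r by (6)
```

For example, `dev = quadrature_fm_demod(y1, fs=100e3, f0=22.5e3)`; the amplitude of the component of `dev` at the reticle rotation frequency estimates the deviation Δf and hence r.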
Figure 7: Power spectrum of signal y1(t)
Figure 8: Power spectrum of signal y2(t)
Since the optical sources in our experiment were not moving, it was
possible to record and demodulate the signals when each optical
source was present in the modified optical tracker field of view
alone. In that way we obtained reference data against which the
accuracy of the blind separation approach can be verified. The
results obtained with the infomax separator [12], [20] – (21), with
the decorrelator [8], and by minimizing criterion (22) through rule
(23) [11] are reported in table 1. The kurtosis of the source signals
is κ(s1) = −0.487 and κ(s2) = −1.38. Since κ(s1) is relatively close
to zero, the source signal s1 is not far from Gaussian and the
infomax separator with g(y_i) = tanh(y_i) did not diverge. The
amplitudes of the demodulated signals were expressed in dB and
measured after the demodulated signals were transformed into the
frequency domain. The first optical source alone gives an amplitude
of −33.2 dB and the second optical source −65 dB. Direct demodulation
of the observed signals x1(t) and x2(t) discriminates only the second
optical source, giving amplitudes of −64 dB. The cross-filters in the
feedback separation network have order 61.
Table 1. Separation results

Algorithm            First source    Second source
Decorrelation [8]    -36.4 dB        -64.2 dB
p = 3, (22), (23)    -36.4 dB        -64.2 dB
Infomax – (21)       -35 dB          -64.4 dB
No separation        -63.4 dB        -64.6 dB
When infomax separation is employed, a 2 dB error relative to the
reference position is obtained. Compared with direct demodulation of
the observed signals, the infomax separator gives a 29 dB more
accurate estimate of the first optical source position. That clearly
justifies our approach of discriminating optical sources by
performing blind separation of the optical tracker responses into
independent components. Signals obtained by demodulation of the
source signal s1(t) and of the reconstructed signal y1(t) are shown
in figures 9 and 10 respectively.

Figure 9: Demodulation of the source signal s1(t)
Figure 10: Demodulation of the separated signal y1(t)
The statistical independence of the source signals is of essential
importance for the success of the BSS theory. We measured the level
of second- and fourth-order statistical (in)dependence between the
observed signals and between the separated signals. Three
fourth-order cross-cumulants, C31, C22 and C13, were computed in
order to measure the fourth-order statistical (in)dependence. Extreme
values of each statistic are given in table 2.
Table 2. Level of statistical (in)dependence

Statistic    Extreme for x1, x2    Extreme for y1, y2
C11          0.262                 -0.0067
C31          -0.1097               4.669e-5
C22          -0.112                3.885e-4
C13          -0.0533               8.545e-4
Table 2 shows that the level of statistical dependence between the
separated signals is from 39 to as much as 2349 times lower than
between the measured signals. That can be considered experimental
verification of the statistical independence assumption. The
cross-correlations C11 between the observed signals and between the
separated signals are shown in figures 11 and 12 respectively, for
time lags from −500 to 500.
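The statistics of table 2 can be estimated with standard zero-mean cross-cumulant formulas; the estimator below is a sketch under that assumption (the paper does not spell out its exact estimators) and reports, for each statistic, its extreme value over lags from −500 to 500.

```python
import numpy as np

def cross_stats(u, v, max_lag=500):
    """C11 plus fourth-order cross-cumulants C31, C22, C13 of two signals,
    normalized to zero mean and unit variance, over a range of lags."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    out = {"C11": [], "C31": [], "C22": [], "C13": []}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = u[:len(u) - lag], v[lag:]
        else:
            a, b = u[-lag:], v[:len(v) + lag]
        c11 = np.mean(a * b)
        out["C11"].append(c11)
        out["C31"].append(np.mean(a**3 * b) - 3 * np.mean(a**2) * c11)
        out["C22"].append(np.mean(a**2 * b**2) - np.mean(a**2) * np.mean(b**2) - 2 * c11**2)
        out["C13"].append(np.mean(a * b**3) - 3 * np.mean(b**2) * c11)
    # extreme (largest-magnitude) value of each statistic over the lag range
    return {name: max(vals, key=abs) for name, vals in out.items()}
```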
Figure 11: Cross-correlation between x1 and x2
Figure 12: Cross-correlation between y1 and y2
5. CONCLUSION AND FUTURE WORK
A solution for the discrimination of several optical sources by means
of a modified optical tracker and BSS algorithms is proposed. An
experiment performed with the modified optical tracker and two
optical sources confirmed the principal possibility of discriminating
two optical sources. Future work will be directed toward on-line BSS
algorithms working in the frequency domain [15], [17], where more
accurate results are expected: due to the orthogonal basis and the
natural gradient learning rules [1], [3], faster convergence and
consequently better separation quality are expected. The permutation
indeterminacy inherent to all BSS algorithms, which causes especially
serious problems when BSS is performed in the frequency domain,
should be resolved by using the newly proposed spectral kurtosis
criterion [15].
6. REFERENCES
[1] S. Amari et al., "Multichannel Blind Deconvolution and Equalization Using the Natural Gradient", IEEE Signal Processing Workshop on Signal Processing Advances in Wireless Communications, Paris, France, pp. 101-104, April 1997.
[2] S. Amari, T. P. Chen, A. Cichocki, "Stability Analysis of Adaptive Blind Source Separation", Neural Networks, 1997.
[3] S. Amari, S. C. Douglas, "Why Natural Gradient?", Proc. of the ICASSP'98, Seattle, WA, USA, May 1998.
[4] S. Amari, A. Cichocki, J. Cao, "Neural Network Models for Blind Separation of Time Delayed and Convolved Signals", IEICE Trans. on Fundamentals, Vol. E80-A, No. 9, pp. 1595-1603, September 1997.
[5] G. F. Aroyan, "The Technique of Spatial Filtering", Proc. of the IRE, Vol. 47, pp. 1561-1568, September 1959.
[6] S. Choi, R. W. Liu, "A Direct Adaptive Blind Equaliser for Multichannel Transmission", NIPS 1996 Workshop on Blind Signal Processing and Its Applications, Aspen, Colorado, December 1996.
[7] F. Ehlers, H. G. Schuster, "Blind Separation of Convolutive Mixtures and an Application in Automatic Speech Recognition in a Noisy Environment", IEEE Transactions on Signal Processing, Vol. 45, No. 10, pp. 2608-2609, 1997.
[8] S. Van Gerven, D. Van Compernolle, "Signal Separation by Symmetric Adaptive Decorrelation: Stability, Convergence, and Uniqueness", IEEE Transactions on Signal Processing, Vol. 43, No. 7, pp. 1602-1612, July 1995.
[9] R. D. Hudson, Jr., "Infrared System Engineering", John Wiley, pp. 235-263, 1969.
[10] I. Kopriva, "Localisation of Light Sources by Using Blind Signal Separation Theory", Ph.D. Thesis, University of Zagreb (in Croatian), September 1998.
[11] I. Kopriva, "Adaptive Blind Separation of Convolved Sources Based on Minimisation of the Generalised Instantaneous Energy", accepted for EUSIPCO'98, Island of Rhodes, Greece, September 1998.
[12] T. W. Lee, A. J. Bell, R. H. Lambert, "Blind Separation of Delayed and Convolved Signals", Advances in Neural Information Processing Systems, 1996.
[13] S. Makeig et al., "Independent Component Analysis of Electroencephalographic Data", Advances in Neural Information Processing Systems 8, MIT Press, 1996.
[14] A. F. Nicholson, "Error Signals and Discrimination of Optical Trackers that See Several Sources", Proc. of the IEEE, Vol. 53, No. 1, pp. 143-182, January 1965.
[15] Ch. Servière, "Feasibility of Source Separation in Frequency Domain", Proc. of the ICASSP'98, Seattle, WA, USA, May 1998.
[16] J. Singh, "Semiconductor Optoelectronics - Physics and Technology", McGraw-Hill, pp. 336-364, 1995.
[17] P. J. Smaragdis, "Blind Separation of Convolved Mixtures in the Frequency Domain", International Workshop on Independence and Artificial Neural Networks, Tenerife, Spain, February 1998.
[18] H. L. Taub, D. L. Schilling, "Principles of Communication Systems", McGraw-Hill, pp. 143-182, 1997.
[19] H. L. Nguyen Thi et al., "Speech Enhancement: Analysis and Comparison of Methods on Various Real Situations", EUSIPCO'92, pp. 303-306, 1992.
[20] K. Torkkola, "Blind Separation of Convolved Sources Based on Information Maximization", IEEE Workshop on Neural Networks for Signal Processing, Kyoto, Japan, September 1996.
[21] D. Yellin, E. Weinstein, "Criteria for Multichannel Signal Separation", IEEE Transactions on Signal Processing, Vol. 42, No. 8, pp. 2158-2168, August 1994.