Eye-blink artifact detection in the EEG
B. Babušiak¹, J. Mohylová²
¹ Department of Measurement and Control, VSB-Technical University of Ostrava, Ostrava, Czech Republic
² Department of General Electrical Engineering, VSB-Technical University of Ostrava, Ostrava, Czech Republic
Abstract— An electroencephalogram (EEG) is often corrupted by different types of artifacts, and many efforts have been made to enhance its quality by reducing them. The EEG contains technical artifacts (noise from the electric power source, amplitude artifacts, etc.) and biological artifacts (eye artifacts, ECG and EMG artifacts). This paper focuses on the detection of eye-blink artifacts from video recorded simultaneously with the EEG data. Detection of eye artifacts is not a simple process, and many efforts have therefore been made to develop an optimal method for detecting, or better still eliminating, them. This paper describes an unusual detection method based on image processing and analysis.
Fig. 1 Segment of an EEG record with marked eye artifacts in channels Fp1 and Fp2. An EEG record with 19 channels is used.
Keywords— EEG, artifact, image processing, object recognition
I. INTRODUCTION
Electroencephalography is the neurophysiological measurement of the electrical activity of the brain, recorded from electrodes placed on the scalp or, in special cases, subdurally or in the cerebral cortex. The resulting traces are known as an electroencephalogram (EEG) and represent an electrical signal (postsynaptic potentials) from a large number of neurons; they are sometimes called brainwaves. EEGs are frequently used in experimentation because the process is non-invasive for the research subject.
The EEG record is often digitized and stored on an appropriate storage medium (CD, DVD, hard disk …) for additional processing and analysis. The EEG record contains many types of artifacts. An artifact is an event or process that does not originate in the examined organ. One type of artifact is the eye artifact, produced by blinking and eye movement. Although the amplitude of the electrooculographic (EOG) signal is only six times greater than that of the EEG signal, the interference is large because of the short distance between the sources of these signals. The eye artifact is best seen in the first two channels, Fp1 and Fp2 (Fig. 1).
II. EYE-BLINK DETECTION METHOD
The video record is obtained from two cameras. The first one scans the whole person in the bed and the second one is focused on the face. The detail of the face is used for the detection method.
To detect eye blinking (opening and closing of the eyes), the mean value of the intensity in a selected region of interest is measured. The measurement is carried out for each frame, and at the end a curve of mean intensity is obtained. The moments of eye opening or closing are determined from the increasing or decreasing values of this curve.
In the pre-processing phase it is appropriate to reduce the image data in order to accelerate the detection of blinking. This means that only the area containing the face is cropped from all frames, and the color depth is changed from true color (24-bit) to grayscale (8-bit) [1].
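As a rough illustration of this pre-processing step, the following Python sketch crops the face area and converts it to 8-bit grayscale; the use of OpenCV and all function and variable names are our assumptions, not part of the original application.

```python
import cv2

def preprocess_frame(frame, face_box):
    """Crop the face area and convert from 24-bit color to 8-bit grayscale.

    frame:    BGR image as returned by cv2.VideoCapture.read()
    face_box: (top, left, height, width) of the face area, chosen once
              on the reference frame (assumed layout, not from the paper)
    """
    top, left, h, w = face_box
    face = frame[top:top + h, left:left + w]        # keep only the face area
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)   # 24-bit color -> 8-bit grayscale
    return gray
```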
Let us set a region of interest (ROI) in the reference frame. The ROI is set interactively. The ROI has dimensions (k × l) and the coordinates of its upper left corner are (L, T), for both the left and the right eye (Fig. 2). In the application, the user can set the optimal ROI interactively in order to reach an adequate signal-to-noise ratio. The ROI has to be selected so that the whole eye (in both the opened and the closed state) lies inside the selected area.
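For illustration only, an ROI of this kind could be picked interactively with OpenCV's selectROI; the original application uses its own interface, so this is merely a stand-in sketch with a placeholder file name.

```python
import cv2

# Interactive ROI selection on the reference frame (stand-in for the paper's
# own application; "reference_frame.png" is a placeholder file name).
# selectROI returns (x, y, w, h), i.e. the upper left corner (L, T)
# and the ROI dimensions (l, k).
reference_frame = cv2.imread("reference_frame.png", cv2.IMREAD_GRAYSCALE)
L_eye, T_eye, l_eye, k_eye = cv2.selectROI("Select eye ROI", reference_frame)
cv2.destroyAllWindows()
```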
The mean intensity of the N-th frame is then computed as
\[
I_{avg}(N) = \frac{1}{k \cdot l} \sum_{i=1}^{k} \sum_{j=1}^{l} f(i+T,\, j+L)_N \qquad (1)
\]
where f(i, j)_N is the luminance (intensity) level of the pixel at coordinates (i, j) in frame N.
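A minimal NumPy sketch of Eq. (1), with function and parameter names of our own choosing, might look as follows:

```python
import numpy as np

def mean_roi_intensity(gray_frame, top, left, k, l):
    """Mean intensity I_avg of a k x l ROI with upper left corner (top, left),
    following Eq. (1): the average of the pixel luminance values inside the ROI."""
    roi = gray_frame[top:top + k, left:left + l]
    return roi.mean()

def intensity_curve(gray_frames, top, left, k, l):
    """Mean-intensity curve: one value per frame of the video sequence."""
    return np.array([mean_roi_intensity(f, top, left, k, l) for f in gray_frames])
```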
Fig. 2 Region of interest setting

The algorithm computes the mean intensity for each frame over the whole video sequence, and from these values the mean-intensity curve is created. This curve displays the variation of the mean intensity of the selected area over time (Fig. 3, blue).

Fig. 3 Curve of mean intensity (blue). Threshold for transformation (red).

From the curve of mean intensity it is not clearly seen when the eyes are opened or closed. Therefore, it is appropriate to transform the curve into a Boolean curve. Thresholding is used for this purpose: it transforms the curve of mean intensity into a Boolean curve with only two logical levels (Fig. 4).

Fig. 4 Eye-blinking curve. Curve of mean intensity after thresholding.

The computation of the threshold is based on the local extremes of the curve. The second derivative test is a criterion for determining whether a given stationary point of a function is a local maximum or a local minimum. This way of computing the threshold makes the method independent of changes of brightness in the room.

When the vector of all local extremes has been found, the user (usually a doctor) interactively sets the amplitude of one eye blink (circles in Fig. 3). A new boundary point is selected if the following condition is satisfied:

\[
(L_{max} - L_{min}) > \frac{A_{blink}}{2} \qquad (2)
\]

where (L_max − L_min) is the difference between a neighboring local maximum L_max and local minimum L_min, and A_blink is the amplitude of one eye blink.
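The sketch below illustrates this selection rule; it substitutes SciPy's argrelextrema for the second derivative test described above, and the function names and blink-amplitude handling are our own assumptions.

```python
import numpy as np
from scipy.signal import argrelextrema

def blink_boundaries(intensity, a_blink):
    """Find boundary points between open and closed eyes on the mean-intensity curve.

    intensity: mean-intensity curve (one value per frame), see Eq. (1)
    a_blink:   amplitude of one eye blink, set interactively by the user

    A boundary is accepted when neighboring local extremes differ by more
    than A_blink / 2, i.e. condition (2).
    """
    intensity = np.asarray(intensity, dtype=float)
    maxima = argrelextrema(intensity, np.greater)[0]   # indices of local maxima
    minima = argrelextrema(intensity, np.less)[0]      # indices of local minima
    extremes = np.sort(np.concatenate([maxima, minima]))

    boundaries = []
    for a, b in zip(extremes[:-1], extremes[1:]):
        if abs(intensity[a] - intensity[b]) > a_blink / 2:   # condition (2)
            boundaries.append((a, b))
    return boundaries
```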
The final step is marking the artifacts (blinks) in the EEG record. Marking is carried out by adding one additional channel (EYE) to the record (Fig. 5). In contrast to the previous figure, logical 1 stands for opened eyes. Moreover, the state of closed eyes is highlighted with another color (green in this case) throughout the whole record. The change of colors represents blinking. In these segments the influence of eye blinking on the other channels is visible; thanks to this, the person who evaluates the record knows the origin of the corresponding waves in the EEG channels [4].

Fig. 5 Marking of blinks in the EEG record
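As an illustration only of this marking step, the following sketch expands the frame-level Boolean blink curve into an extra channel sampled at the EEG rate; the sampling-rate handling, array shapes and names are our assumptions, not the authors' implementation.

```python
import numpy as np

def make_eye_channel(blink_boolean, video_fps, eeg_fs, n_eeg_samples):
    """Build the additional EYE channel for the EEG record.

    blink_boolean: Boolean curve, one value per video frame (1 = eyes opened)
    video_fps:     frame rate of the video record
    eeg_fs:        sampling frequency of the EEG record
    n_eeg_samples: number of samples in the EEG record
    """
    blink_boolean = np.asarray(blink_boolean)
    # For each EEG sample, take the value of the video frame covering that instant.
    t_eeg = np.arange(n_eeg_samples) / eeg_fs
    frame_idx = np.minimum((t_eeg * video_fps).astype(int), len(blink_boolean) - 1)
    return blink_boolean[frame_idx].astype(np.int8)
```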
This detection method is very reliable only if the patient does not move his or her head. The assumption of a non-moving head is unrealistic in practice, and this fact is therefore a big disadvantage of the detection method described above. The next part of the paper is aimed at a possible way of compensating for head movement.
III. OBJECT TRACKING ALGORITHM
For reliable detection of eye-blinking artifacts, it is necessary to keep the eyes in the regions of interest during the whole video record. For this purpose, two objects of appropriate color, shape and size are placed on the forehead. In this case, the appropriate color is black, because the occurrence of dark levels in the image is much smaller than that of bright levels (Fig. 6). The choice of shape depends on the deformation of the object when projected from 3-D space to 2-D space; therefore, a sphere, which projects to a circle regardless of head orientation, appears most suitable for this purpose.

Fig. 6 Frame from the video record with the corresponding image histogram.

By tracking these objects during the whole video sequence, it is possible to relocate the ROIs proportionally to the centers of the objects (Fig. 7).

Fig. 7 Change of position of the ROIs according to the position of the detected objects (G1 and G2). (A) – reference frame, (B) – tilted and rotated head

The designed algorithm for object detection is rather complex; therefore, it is described here only briefly. It consists of the following basic steps (a minimal sketch of steps 1–3 is given below):

1. Conversion from grayscale to binary image.
2. Labeling of connected components.
3. Detection of the contour and centre of each labeled component.
4. Identification of the components which are similar to the searched object.

The threshold for the image conversion in step 1 is set interactively by the user. A simple application was created for setting the threshold and the other input parameters, so that users (medical doctors, hospital staff) do not need any special knowledge of image processing.

There are many algorithms for labeling connected components. The applied algorithm uses run-length encoding as its first step; this is a very efficient and fast method because it scans each pixel of the input image only once [2].

For finding the contour of a labeled component, one of the standard edge detectors (Prewitt, Roberts, Canny …) could be used. In this case another algorithm, called the backtracking bug follower, was used [3].
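The following sketch illustrates steps 1–3 of the list above; note that it substitutes SciPy's generic connected-component labeling and OpenCV contour extraction for the run-length encoding and backtracking bug follower algorithms actually used, and all names are our own.

```python
import cv2
import numpy as np
from scipy import ndimage

def detect_components(gray_frame, threshold):
    """Steps 1-3 of the object detection: binarize, label connected components,
    and extract the contour and centre of each labeled component."""
    binary = (gray_frame < threshold).astype(np.uint8)   # step 1: dark markers -> 1
    labels, n = ndimage.label(binary)                     # step 2: connected components

    components = []
    for k in range(1, n + 1):
        mask = (labels == k).astype(np.uint8)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            continue
        contour = contours[0][:, 0, :]                    # step 3: contour points (x, y)
        centre = contour.mean(axis=0)                     # centre of the component
        components.append((centre, contour))
    return components
```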
To recognize the shape of a component, the Euclidean distance between the centre and each contour point is measured for every component. If the distances are very similar, the object is considered a circle (the searched object). The real situation is not so simple, because the circle can be noticeably deformed by shadows, etc. Therefore, it is necessary to define a boundary within which a shape may still be considered a circle. The maximal percentage offset from the mean value of the Euclidean distances is defined as

\[
o_d = \frac{\max\bigl(\max(v_d) - \bar{v}_d,\ \bar{v}_d - \min(v_d)\bigr)}{\bar{v}_d} \cdot 100 \qquad (3)
\]

where v_d is the vector of Euclidean distances and

\[
\bar{v}_d = \frac{1}{N} \sum_{i=1}^{N} v_d(i)
\]

is the mean value of the vector v_d (N is the number of contour points). The percentage offset is computed only for the reference frame; for the other frames it serves as the reference value.

Fig. 8 Comparison of a deformed circle (top) and another component (bottom). Euclidean distances on the right.
Figure 8 shows two different objects with their Euclidean distances on the right. An object is considered a circle if 75 % of its Euclidean distances lie in the defined interval, where the interval is determined by the percentage offset (3). For the first object 80.65 % of the values lie in the interval, while for the second object only 34.38 % of the values lie in the defined interval. It follows that only the first object is accepted as a circle.
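Under our assumptions about how the interval is built from the percentage offset, Eq. (3) and the 75 % rule could be sketched as follows (function names are ours):

```python
import numpy as np

def percentage_offset(v_d):
    """Maximal percentage offset o_d of Eq. (3) for a vector of Euclidean distances."""
    v_mean = v_d.mean()
    return max(v_d.max() - v_mean, v_mean - v_d.min()) / v_mean * 100.0

def is_circle(centre, contour, max_offset_pct, min_inside=0.75):
    """Decide whether a component is the searched circular marker.

    centre:         (x, y) centre of the component
    contour:        array of contour points, shape (N, 2)
    max_offset_pct: percentage offset o_d computed on the reference frame
    min_inside:     required fraction of distances inside the interval (75 % here)
    """
    v_d = np.linalg.norm(contour - np.asarray(centre), axis=1)  # Euclidean distances
    v_mean = v_d.mean()
    half_width = v_mean * max_offset_pct / 100.0                # assumed interval half-width
    inside = np.abs(v_d - v_mean) <= half_width
    return inside.mean() >= min_inside
```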
In this section an algorithm for object tracking was briefly described. The algorithm consumes a considerable amount of processing time and capacity, because a large number of computations has to be carried out. The result of the algorithm is shown in Fig. 9.
Fig. 9 Demonstration of the object tracking component for a video record with a complicated background
IV. CONCLUSIONS
In this paper an algorithm for the detection of eye artifacts (blinking) was presented. The algorithm was incorporated into an application with a user-friendly interface. The designed algorithm is also able to detect movements of the eyeball, but this requires better video quality: a higher frame rate, higher resolution and an uncompressed video sequence.
The algorithm for eye-blink detection had a big disadvantage in the case of head movement. This disadvantage was removed by creating an algorithm for tracking objects. Thanks to this algorithm it is possible to relocate the regions of interest so that the eyes remain inside them during the whole video record.
The algorithm for tracking objects has not yet been tested with a real EEG measurement, but the authors do not see any limitation in the functionality of the implementation.
V. ACKNOWLEDGEMENT
This research has been supported by the research program "Information Society" under grant No. 1ET101210512, "Intelligent methods for evaluation of long-term EEG recordings".
REFERENCES
1. Umbaugh S E (1999) Computer Vision and Image Processing. Prentice-Hall, New Jersey, USA
2. Haralick M, Shapiro L (1992) Computer and Robot Vision, Volume I. Addison-Wesley, pp 28-48
3. Pratt W (2007) Digital Image Processing, 4th edn. Wiley, p 614
4. Babušiak B (2008) The eye-blinking artefact detection in the EEG record. In: Proceedings of WOFEX, pp 198-202
Author: Branko Babušiak
Institute: VSB-Technical University of Ostrava
Street: 17. Listopadu 15
City: Ostrava
Country: Czech Republic
Email: branko.babusiak@vsb.cz

Author: Jitka Mohylová
Institute: VSB-Technical University of Ostrava
Street: 17. Listopadu 15
City: Ostrava
Country: Czech Republic
Email: jitka.mohylova@vsb.cz