SSEF 2012 CS008_Poster

THINK AND TYPE: DECODING EEG SIGNALS FOR A BRAIN-COMPUTER INTERFACE VIRTUAL SPELLER
Sherry Liu Jia Ni [1], Joanne Tan Si Ying [1], Yap Lin Hui [1], Zheng Yang Chin [2] and Chuanchu Wang [2]
[1] Nanyang Girls' High School, 2 Linden Drive, Singapore 288683
[2] 1 Fusionopolis Way, #21-01 Connexis (South Tower), Singapore 138632
Abstract
Scalp brain signals, or the electroencephalogram (EEG), exhibit different characteristics during different types of mental activities. These characteristics can be classified by a Mental Activity Brain-Computer Interface (MA-BCI) (Figure 1), which allows the control of external devices using only the EEG as a control input. This technology is potentially useful for patients who are incapable of communication due to total paralysis arising from medical conditions. With the aim of fulfilling the needs of these patients, this project investigates: first, the performance of the BCI, which employs the Filter Bank Common Spatial Pattern (FBCSP) algorithm (Figure 2) to differentiate mental activities from the EEG; second, a proposed virtual speller prototype that allows its user to type words on the computer with the EEG as the input.

Figure 1: BCI subject getting ready for training (computer monitor, Neuroscan Quikcap, subject undergoing the experiment, and EEG acquisition device)
Methodology
1. Designed and developed the Virtual Speller in Adobe Flash ActionScript 3.0 (Figure 3 and Figure 4)
Table 1: Proposed features of the Speller
• Type letter on screen (right-hand, left-hand, foot and mental arithmetic): allows the user to select his desired row or column using mental activities; the steps of typing a letter are illustrated in Figure 4.
• Word predictive function: reduces the need to type the full word and increases the efficiency of the speller.
• Undo function: takes into account the possibility of misclassifications by the BCI as well as human error.

Figure 2: Filter Bank Common Spatial Pattern algorithm (EEG → frequency filtering through a bank of band-pass filters covering 4-8 Hz, 8-12 Hz, ..., 36-40 Hz → CSP spatial filtering per band → feature selection with MIBIF4 → classification with NBPW to infer the subject's task)

2. Conducted experiments to determine the accuracy of the FBCSP algorithm on the 5 mental activities (MA)
• Initial training session: the subject performs the 5 MA to train the FBCSP algorithm to classify the EEG data.
• Training session with BCI visual feedback: the subject performs 400 trials of MA (80 left-hand (L), 80 right-hand (R), 80 foot (F) and 80 tongue (T) motor imageries, and 80 mental arithmetic (AR)) to determine the accuracy of the computational model obtained in the initial training session.
3. Analyzed the experiment results offline to obtain the accuracies of the 5 MA
• 10x10 cross-validation (CV) to estimate the accuracy of the FBCSP algorithm on unseen data.
• Selection of the 4 MA with the highest classification accuracies for the proposed Virtual Speller.

Figure 3: Screenshot of the Virtual Speller GUI (text output of the speller; a cue provided to the user to start/stop performing motor imagery; and a grid of buttons consisting of letters and predicted words, with the currently highlighted row and column)

4. Tested the Virtual Speller
• Testing session with the Virtual Speller: the subject is tasked to type 'hello' with and without the word predictive function to determine the speller's efficiency.
• Characters typed per minute is used as the measure of the Virtual Speller's efficiency.
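The FBCSP pipeline of Figure 2 can be sketched in code. This is a minimal illustration, not the authors' implementation: the sampling rate, channel count and filter order are assumed, the MIBIF4 feature-selection and NBPW classification stages are omitted, and only the filter bank, CSP spatial filtering and log-variance feature extraction are shown.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 250  # assumed sampling rate in Hz (illustrative)
# Band edges follow the poster's filter bank (4-8 Hz up to 36-40 Hz).
BANDS = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24),
         (24, 28), (28, 32), (32, 36), (36, 40)]

def bandpass(trials, lo, hi, fs=FS):
    """Band-pass filter each trial (trials: n_trials x n_channels x n_samples)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, trials, axis=-1)

def csp_filters(class_a, class_b, n_pairs=2):
    """Common Spatial Pattern filters from two classes of band-passed trials."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(class_a), mean_cov(class_b)
    # Whiten the composite covariance, then diagonalise class A in that space.
    evals, evecs = np.linalg.eigh(ca + cb)
    whiten = np.diag(evals ** -0.5) @ evecs.T
    d, v = np.linalg.eigh(whiten @ ca @ whiten.T)
    w = v.T @ whiten  # rows are spatial filters, sorted by eigenvalue
    # Keep the filters at both ends of the spectrum (most discriminative).
    return np.vstack([w[:n_pairs], w[-n_pairs:]])

def log_var_features(trials, w):
    """Project trials through the CSP filters and take normalised log-variance."""
    feats = []
    for t in trials:
        var = (w @ t).var(axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)
```

In the full FBCSP algorithm this feature extraction would be repeated per band, the features pooled, the best selected by mutual information (MIBIF4) and then classified (NBPW).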
Results and Discussion
1. Classification accuracy of 10x10 cross-validation (CV) on the initial training data (denoting L = left hand, R = right hand, T = tongue, F = foot and AR = mental arithmetic)
• Splits the data into n = 10 sets, uses k = 9 sets for constructing the classifier and the remaining n-k = 1 set for validation, and repeats this 10 times with different random partitions into training and validation sets.
• The average classification accuracy of the 10x10 CV is about 66.62±1.8785%, as shown in Table 2.
2. Classification accuracy of 10x10 CV of L,R,T,F and L,R,F,AR using the initial training data
• A comparison of the classification accuracies of L,R,T,F and L,R,F,AR was performed to determine the optimal 4 classes; the top 4 classes are L, R, F and AR.
• The testing accuracies of the combinations L,R,T,F (72.50%) and L,R,F,AR (71.88%) are highly similar and hence not conclusive.
• Under 10x10 CV, L,R,T,F achieved an average accuracy of 73.86±1.8762% and L,R,F,AR achieved 79.41±1.1918%; thus, the latter was selected.

Figure 4: Flow chart illustrating the usage of the Virtual Speller
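The 10x10 CV scheme above can be sketched as follows; `train_and_score` is a hypothetical stand-in for fitting the FBCSP model on the training folds and returning its accuracy on the held-out fold.

```python
import numpy as np

def ten_by_ten_cv(X, y, train_and_score, n_folds=10, n_repeats=10, seed=0):
    """Repeat n_folds-fold cross-validation n_repeats times with fresh random
    partitions, returning the mean and standard deviation of all fold scores."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        order = rng.permutation(len(X))          # fresh random partition
        folds = np.array_split(order, n_folds)
        for k in range(n_folds):
            val = folds[k]                       # held-out validation fold
            train = np.concatenate([f for i, f in enumerate(folds) if i != k])
            scores.append(train_and_score(X[train], y[train], X[val], y[val]))
    # Mean +/- standard deviation, the form the poster reports (e.g. 66.62 +/- 1.8785%).
    return np.mean(scores), np.std(scores)
```

Plugging in any classifier as `train_and_score` yields 100 fold accuracies, summarised exactly as the poster's mean-plus-spread figures.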
Table 2: 10x10 CV confusion matrix for the 5 classes of MA using data from the initial training session

True Class \ Predicted Class | L | R | T | F | AR
L | 77.50% | 13.75% | 1.25% | 5.00% | 2.50%
R | 11.25% | 86.25% | 0.00% | 1.25% | 1.25%
T | 6.33% | 6.33% | 39.24% | 21.52% | 26.58%
F | 2.50% | 1.25% | 16.25% | 62.50% | 7.50%
AR | 11.25% | 1.25% | 15.00% | 3.75% | 68.75%

3. Performance of the Virtual Speller
• The number of single trials taken by the subject to type the word "hello" is summarized in Table 3.

Table 3: Number of trials (theoretical and actual) needed to type "hello"
Text prediction | Theoretical no. of trials | Actual no. of trials | Time taken (s) | Characters per min
With | 13 | 16 | 115 | 2.61
Without | 23 | 40 | 267 | 1.12
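The efficiency figures in Table 3 follow directly from the word length and the measured times (assuming "hello" counts as 5 characters):

```python
def chars_per_min(n_chars: int, seconds: float) -> float:
    """Typing rate in characters per minute."""
    return 60.0 * n_chars / seconds

# "hello" (5 characters) with and without text prediction (Table 3):
rate_with = chars_per_min(5, 115)     # -> 2.61 chars/min (rounded)
rate_without = chars_per_min(5, 267)  # -> 1.12 chars/min (rounded)

# Relative time saving from enabling text prediction:
saving = (267 - 115) / 267            # -> about 0.57, i.e. a 57% shorter typing time
```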
4. Analysis of CSP plots (Figure 5)
These results also tallied with the understanding of the human homunculus. The spatial patterns arising from the 3 motor imageries (left-hand, right-hand and foot) each achieved a distinct, focused point of activation. As AR is not a type of motor imagery, the activation in the spatial pattern for this MA is not well-defined.
Conclusion
• Results show that four types of mental activities, left-hand (L), right-hand (R), foot (F) and mental arithmetic (AR), could be classified with an accuracy of 76.39% and thus employed in the virtual speller.
• L and R are classified more accurately than the other classes; the algorithm used was originally designed for these two types of motor imagery.
• The undo function allowed error correction, while the text prediction function improved the usability of the virtual speller, as it decreased the time taken to type a five-letter word by 56.93%.
• A future extension is an auto-elimination feature, which automatically eliminates the letters that cannot follow the previously chosen letter, shortening the number of trials required and improving the speller's usability for real-world applications.
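The proposed auto-elimination feature could work along these lines; the word list and the function name here are purely illustrative stand-ins:

```python
# Given the text typed so far and a dictionary, only letters that can actually
# continue some word remain selectable, shrinking the grid and the number of
# trials needed per letter.

WORDS = ["hello", "help", "held", "hero"]  # stand-in dictionary

def selectable_letters(prefix, words=WORDS):
    """Letters that may follow `prefix` in at least one dictionary word."""
    return sorted({w[len(prefix)] for w in words
                   if w.startswith(prefix) and len(w) > len(prefix)})

print(selectable_letters("he"))   # ['l', 'r']
print(selectable_letters("hel"))  # ['d', 'l', 'p']
```

With a realistic dictionary the speller grid would shrink after every selection, cutting the row/column trials needed to reach each next letter.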
Figure 5: Common Spatial Pattern (CSP) plots. Panels: L, R, F and AR (top) versus the Rest (bottom).

Key References:
1. J. R. Wolpaw, N. Birbaumer, D. J. McFarland, et al., "Brain-computer interfaces for communication and control," Clin Neurophysiol., vol. 113, 2002.
2. K. K. Ang, Z. Y. Chin, H. Zhang, et al., "Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface," in Proc. IJCNN'08, 2008, pp. 2390-2397.
3. H. Ramoser, J. Muller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Trans Rehabil Eng., vol. 8, pp. 441-446, 2000.
4. L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalogr Clin Neurophysiol., vol. 70, pp. 510-523, 1988.
5. F.-B. Vialatte, M. Maurice, J. Dauwels, et al., "Steady-state visually evoked potentials: Focus on essential paradigms and future perspectives," Progress in Neurobiology, vol. 90, pp. 418-438, 2010.
6. G. R. Müller-Putz, R. Scherer, C. Brauneis, et al., "Steady-state visual evoked potential (SSVEP)-based communication: impact of harmonic frequency components," J Neural Eng, vol. 2, pp. 123-126, 2005.

All photographs were taken at the research institution; plots and images are self-drawn.