Design of an Asynchronous Brain-Computer Interface for Control of a Virtual Avatar

Hye-Soo An, Jeong-Woo Kim, and Seong-Whan Lee
Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
{hs_an, jw_kim, sw.lee}@korea.ac.kr

Abstract—A Brain-Computer Interface (BCI) enables a human to control external devices by measuring brain activity. Among the various BCI paradigms, motor imagery (MI) is a natural one for accomplishing the objectives of BCI, and the asynchronous mode enables the user to perform MI in a self-paced manner. In this study, we propose a design of an asynchronous MI-based BCI for control of a virtual avatar in a BCI game. The filter bank common spatial pattern (FBCSP) algorithm is applied in the proposed system to rapidly detect and correctly discriminate the user's different intentions from real-time EEG analysis. We expect that our system will improve the performance of MI-based asynchronous BCI systems.

Keywords—Asynchronous system; Motor Imagery; Cybathlon; Brain-Computer Interface (BCI); Electroencephalogram (EEG); Filter bank common spatial pattern (FBCSP)

I. INTRODUCTION

A Brain-Computer Interface (BCI) allows a user to control computer applications by measuring brain signals such as the electroencephalogram (EEG) [1]. Several EEG-based BCI systems rely on voluntary modulations of sensorimotor rhythms (SMRs), e.g., through imagination of motor movement [2]. There is also evidence that patients diagnosed with amyotrophic lateral sclerosis (ALS) can accomplish SMR modulations [1]. A distinct effect is that both movement and motor imagery (MI) of a limb are accompanied by a decrease of power in the mu and beta frequency bands, known as event-related desynchronization (ERD), followed by a rebound of power in the beta band, known as event-related synchronization (ERS). The spatial patterns of ERD and ERS are specific to each limb. For these reasons, MI is considered one of the natural paradigms for building a BCI [3].

There are two modes for MI-based BCI systems (MI-BCI): synchronous (cue-paced) and asynchronous (self-paced) [3]. The former is easier to implement but less suited to reflecting the user's intent. In contrast, the latter, the asynchronous MI-BCI, controls the interface continuously without cues or temporal constraints, as in the real world. However, previous systematic approaches have some limitations. First, signal processing was performed continuously over time, but overlapping analysis windows were not considered. Second, the extension to multiclass classification for multiple commands was not well addressed. In addition, features were commonly extracted from a broadly estimated frequency band, which degraded system performance. To address these problems, the Filter Bank Common Spatial Pattern (FBCSP) algorithm [4], which enables the system to autonomously select discriminative subject-specific frequency ranges, has been shown to be effective in the synchronous mode but has yet to be explored in the asynchronous mode.

In this paper, we propose a novel design of a multiclass asynchronous MI-BCI that uses advanced signal processing methods to detect and discriminate different motor imagery classes from real-time EEG analysis, and that is applicable to a virtual running environment.

Figure 1. BrainRunners. (A) Screenshot. (B) Setup.

II. MATERIALS AND METHODS

A. Application

BrainRunners, shown in Fig. 1, is a multiplayer "running" game developed for the BCI race of the Cybathlon [5]. Each avatar runs continuously toward the finish line even when no command is given. While the avatar is running over a specific colored pad, the player controlling it has to send the matching command using the BCI. There are three kinds of colored pads, corresponding to the SPEED, JUMP, and ROLL actions. The standard User Datagram Protocol (UDP) is used to send the commands for the avatar to the game, and the command value assigned to each pad is predefined in the game manual. If the player sends the correct command on the respective action pad, the avatar gains a time advantage (8.0 s) and the boost lasts until the end of the pad. An erroneous command (e.g., ROLL on a SPEED pad), however, incurs a time penalty (2.5 s). The player who reaches the finish line first wins the race.

B. Asynchronous MI-BCI

Our BCI system runs in two phases: the calibration phase and the feedback phase. We designed an asynchronous MI-BCI applicable to the BrainRunners online BCI game. The system architecture is shown in Fig. 2.

Figure 2. System architecture of the asynchronous MI-BCI.

1) Calibration phase: The subjects are instructed to perform a motor imagination task indicated on a gray screen every eight seconds. One trial consists of a 2 s fixation cross, 4 s of motor imagery cued by an arrow, and a 2 s blank screen. The arrow-shaped visual cues point left, right, or down, and each cue corresponds to imagined movement of the left hand (LH), right hand (RH), or foot (F), respectively; the blank screen corresponds to the resting state (REST). In one run, 25 trials of each motor imagination condition are recorded. We first record raw EEG data in one run of executed movements and then in three runs of imagined movements [1]. The recorded EEG signals are used to train the classifier and to assess the generalization error by cross-validation [2].

a) Filter Bank Common Spatial Pattern: In FBCSP, the EEG signals are first band-pass filtered into nine frequency bands, namely, 4-8, 8-12, ..., 36-40 Hz [4]. The signals are segmented into regular 4 s intervals. Spatial filtering using the Common Spatial Pattern (CSP) extended to multiclass MI is then performed for each of the nine bands. Finally, the Mutual Information-based Best Individual Feature (MIBIF) algorithm is used to select the discriminative CSP features from the filter bank [4].

b) Regularized Linear Discriminant Analysis: Shrinkage-regularized linear discriminant analysis (RLDA) is used to classify the selected features. In particular, RLDA classifiers in a one-versus-rest (OVR) scheme are employed for four-class MI classification (LH, RH, F, and REST).
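To make the calibration pipeline above concrete, the following Python sketch outlines FBCSP feature extraction with mutual-information-based feature selection and an OVR shrinkage-LDA classifier. It is a minimal illustration under stated assumptions, not the implementation of our system: the multiclass CSP is reduced to a one-versus-rest formulation, MIBIF is approximated by scikit-learn's mutual-information feature ranking, and the trial array layout, helper names (bandpass, csp_ovr, fbcsp_features, calibrate), and the number of retained features are illustrative choices.

```python
"""Minimal FBCSP calibration sketch (illustrative, not the exact system code).

Assumptions: `trials` has shape (n_trials, n_channels, n_samples) and holds
the 4 s segments from the calibration phase, `labels` holds the class index
(0=LH, 1=RH, 2=F, 3=REST), and the sampling rate is 250 Hz.
"""
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

FS = 250                                        # sampling rate (Hz)
BANDS = [(f, f + 4) for f in range(4, 40, 4)]   # 4-8, 8-12, ..., 36-40 Hz
N_CSP = 2                                       # CSP filter pairs kept per band and class

def bandpass(x, lo, hi, fs=FS):
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x, axis=-1)

def csp_ovr(trials, labels, target, n_pairs=N_CSP):
    """One-versus-rest CSP filters for one class within one frequency band."""
    cov = lambda x: x @ x.T / np.trace(x @ x.T)
    c_target = np.mean([cov(t) for t, y in zip(trials, labels) if y == target], axis=0)
    c_rest = np.mean([cov(t) for t, y in zip(trials, labels) if y != target], axis=0)
    # generalized eigenvalue problem; the extreme eigenvectors discriminate best
    _, w = eigh(c_target, c_target + c_rest)
    return np.concatenate([w[:, :n_pairs], w[:, -n_pairs:]], axis=1).T

def fbcsp_features(trials, filters_per_band):
    """Band-pass filter, spatially filter, and take the log-variance per trial."""
    feats = []
    for (lo, hi), filters in zip(BANDS, filters_per_band):
        filtered = np.stack([bandpass(t, lo, hi) for t in trials])
        proj = np.einsum("kc,ncs->nks", filters, filtered)
        feats.append(np.log(np.var(proj, axis=-1)))
    return np.concatenate(feats, axis=1)

def calibrate(trials, labels, n_classes=4):
    # fit OVR CSP filters for every band of the filter bank
    filters_per_band = []
    for lo, hi in BANDS:
        filtered = np.stack([bandpass(t, lo, hi) for t in trials])
        per_class = [csp_ovr(filtered, labels, c) for c in range(n_classes)]
        filters_per_band.append(np.concatenate(per_class, axis=0))
    X = fbcsp_features(trials, filters_per_band)
    # MI-based feature selection (approximating MIBIF) + shrinkage LDA in an OVR scheme
    clf = make_pipeline(
        SelectKBest(mutual_info_classif, k=16),
        OneVsRestClassifier(LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")),
    )
    clf.fit(X, labels)
    return filters_per_band, clf
```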
2) Feedback phase: After the calibration phase, the subjects are instructed to control the avatar in BrainRunners by performing the motor imagination tasks. In this phase, the EEG signals are recorded at a sampling rate of 250 Hz. For real-time operation, features are calculated every 40 ms from a sliding 3 s window [1]. The classification output is obtained by band-pass filtering the window, applying the CSP filters, calculating the log-variance, and applying the OVR RLDA classifier of 1b). To send the subject's commands to the application, we construct a UDP object associated with the remote host. The classification result of the asynchronous BCI system is not transmitted to the remote application until the object is bound to the local socket.

a) Online interface control: When the BCI system is connected to BrainRunners, the avatar is synchronized to each subject. While the avatars are standing at the starting line, every command is invalid. After the race starts, the subject controls the avatar using the BCI input.
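The feedback phase can be summarized by the sketch below, which reuses fbcsp_features and the classifier from the calibration sketch above: every 40 ms the newest 3 s of EEG is taken from the acquisition buffer, passed through the same band-pass/CSP/log-variance chain, classified by the OVR RLDA model, and the resulting command is sent to BrainRunners over UDP, while a REST decision sends nothing. The acquisition interface (get_latest_window), the host/port, and the command byte values are illustrative assumptions; the actual command values are defined in the game manual.

```python
"""Illustrative online feedback loop (sketch; the acquisition API, host/port,
and command codes are assumptions, not values from the game manual)."""
import socket
import time
import numpy as np

FS = 250                                 # sampling rate (Hz)
WINDOW = 3 * FS                          # sliding window of 3 s
STEP_S = 0.040                           # classify every 40 ms
GAME_ADDR = ("127.0.0.1", 5555)          # hypothetical BrainRunners host/port
COMMANDS = {0: b"\x01", 1: b"\x02", 2: b"\x03"}   # hypothetical LH/RH/F command codes
REST = 3                                 # resting state: send no command (self-paced idling)

def run_feedback(acquisition, filters_per_band, clf):
    """`acquisition.get_latest_window(n)` is assumed to return the newest
    n samples as an (n_channels, n) array from a ring buffer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 0))                   # bind to a local socket before transmitting
    while True:
        t0 = time.monotonic()
        window = acquisition.get_latest_window(WINDOW)
        # same processing chain as calibration: band-pass, CSP, log-variance
        x = fbcsp_features(window[np.newaxis], filters_per_band)
        label = int(clf.predict(x)[0])
        if label != REST:                # only transmit during imagined movement
            sock.sendto(COMMANDS[label], GAME_ADDR)
        # keep the 40 ms update rate of the overlapping sliding window
        time.sleep(max(0.0, STEP_S - (time.monotonic() - t0)))
```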
III. DISCUSSION AND CONCLUSION

In real-world use, the key question is how rapidly a BCI system can detect, and how correctly it can discriminate, the user's different intentions. This paper has described a novel design of a multiclass asynchronous MI-BCI for control of a virtual avatar. In particular, finding the subject-specific parameters in the multiclass extension of FBCSP and applying them in the asynchronous MI-BCI is expected to improve classification performance compared with previous MI-BCI studies. In addition, the introduction of an overlapping sliding window for real-time operation is expected to make the system more efficient and more convenient for the user. Our future work is therefore to implement the designed BCI and to demonstrate its efficiency in experiments.

ACKNOWLEDGMENT

This work was supported by the ICT R&D program of MSIP/IITP [R0126-15-1107, Development of Intelligent Pattern Recognition Softwares for Ambulatory Brain-Computer Interface].

REFERENCES

[1] B. Blankertz, C. Sannelli, S. Halder, E. M. Hammer, A. Kübler, K.-R. Müller, G. Curio, and T. Dickhaus, "Neurophysiological Predictor of SMR-based BCI Performance," NeuroImage, Vol. 51, No. 4, 2010, pp. 1303-1309.
[2] B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K.-R. Müller, "Optimizing Spatial Filters for Robust EEG Single-Trial Analysis," IEEE Signal Processing Magazine, Vol. 25, No. 1, 2008, pp. 41-56.
[3] R. Kus, D. Valbuena, J. Zygierewicz, T. Malechka, A. Graeser, and P. Durka, "Asynchronous BCI Based on Motor Imagery With Automated Calibration and Neurofeedback Training," IEEE Trans. Neural Syst. Rehabil. Eng., Vol. 20, No. 6, 2012, pp. 823-835.
[4] K. K. Ang, Z. Y. Chin, H. Zhang, and C. Guan, "Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface," Proceedings of the IEEE International Joint Conference on Neural Networks, 2008, pp. 2391-2398.
[5] Cybathlon: BCI Race – Races & Rules (Ver. 2015-06-12), http://www.cybathlon.ethz.ch/the-disciplines/bci-race.html