Neural Correlates of Extended Dynamic Face Processing in Neurotypicals

by

Luke Urban

Submitted to the Department of Electrical Engineering and Computer Science
in partial fulfillment of the requirements for the degree of
Master of Engineering in Computer Science and Electrical Engineering

at the

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

September 2010

© Massachusetts Institute of Technology 2010. All rights reserved.

Author: Department of Electrical Engineering and Computer Science, July 30, 2010

Certified by: Pawan Sinha, Associate Professor, Thesis Supervisor

Accepted by: Dr. Christopher J. Terman, Chairman, Department Committee on Graduate Theses
Neural Correlates of Extended Dynamic Face Processing in
Neurotypicals
by
Luke Urban
Submitted to the
Department of Electrical Engineering and Computer Science
on July 30, 2010, in partial fulfillment of the
requirements for the degree of
Master of Engineering in Computer Science and Electrical Engineering
Abstract

This thesis explores the unique brain patterns resulting from prolonged dynamic face stimuli. Brain activity from neurotypical subjects was recorded using electroencephalography (EEG) while they viewed a series of 10 second long video clips. Each clip belonged to one of two categories: face or non-face. Modern signal processing and machine learning techniques were applied to the resulting waveforms to determine the underlying neurological signature of extended face viewing. The occipitotemporal (left hemisphere), occipitotemporal (right hemisphere), and occipital regions proved to have the largest change in activity. Across the 12 recorded subjects, a consistent decrease in the 10 Hz power range and increase in the 20 Hz power range was found. This biomarker will serve later work in the study of autism.
Thesis Supervisor: Pawan Sinha
Title: Associate Professor
Acknowledgments

I would like to thank Pawan Sinha. He has been a wonderful mentor and has gone out of his way to make this work possible. There is absolutely no better place I could have spent my MEng than in his lab.

I would also like to thank my mother and father for their constant support of my studies in 'gidgets and gadgets.'
Contents

1 INTRODUCTION

2 BACKGROUND
2.1 Imaging Modality
2.2 Event Related Potential (ERP)
2.3 Brain Computer Interface (BCI)

3 EXPERIMENTAL SETUP
3.1 Stimuli
3.2 Stimuli Collection
3.3 Lab Setup
3.4 Subject Collection
3.5 Capping Subject
3.6 Connecting Subject to Amplifier
3.7 Running Experiment
3.8 Experiment

4 PREPROCESSING
4.1 Segmentation
4.2 Filter Design
4.3 Down Sampling
4.4 Artifacts

5 ANALYSIS
5.1 Amplitude Time Analysis
5.2 Frequency Analysis
5.3 Short-Time Frequency Analysis

6 CLASSIFICATION
6.1 Machine Learning
6.2 Support Vector Machine (SVM)
6.3 Feature Selection

7 RESULTS
7.1 Spatial
7.2 Frequency
7.3 Temporal
7.4 Energy Content

8 CONTRIBUTIONS

A Spectrograms
B Frequency Plots
C Temporal Plots
D BCS-Subjects Email
List of Figures

2-1 Examples of fMRI (left) and EEG (right)
2-2 N170 response
2-3 The P300 matrix
3-1 Example images of the videos used in the experiment
3-2 Subject wearing EEG electrode cap
3-3 Impedance measurement screen
3-4 Experimental process
4-1 Example EEG trace
4-2 Magnitude plot of filter designed through the Parks-McClellan algorithm
4-3 EEG noise resulting from blinking
5-1 Automatic peak detection using the first and second derivative
5-2 Fourier transform of an EEG signal
5-3 Hann window (image taken from Wikipedia)
5-4 Spectrogram resulting from short-time Fourier transform
5-5 Spectrogram gridded into 1 Hz by 0.5 second blocks
6-1 SVM splitting a data set into two groups
6-2 Example of frequency feature used to create frequency histogram
7-1 Average classification rate of the 9 brain regions across all subjects
7-2 Example of frequency histogram from subject 7
7-3 Example of frequency histogram from subject 10
7-4 Effect of temporal information from subject 7
7-5 Effect of temporal information from subject 1
7-6 Log power content in the 10 Hz frequency band
7-7 Log power content in the 20 Hz frequency band
A-1 Spectrogram for Subject 1
A-2 Spectrogram for Subject 2
A-3 Spectrogram for Subject 3
A-4 Spectrogram for Subject 4
A-5 Spectrogram for Subject 5
A-6 Spectrogram for Subject 6
A-7 Spectrogram for Subject 7
A-8 Spectrogram for Subject 8
A-9 Spectrogram for Subject 9
A-10 Spectrogram for Subject 10
A-11 Spectrogram for Subject 11
A-12 Spectrogram for Subject 12
B-1 Frequency Information for Subject 1
B-2 Frequency Information for Subject 2
B-3 Frequency Information for Subject 3
B-4 Frequency Information for Subject 4
B-5 Frequency Information for Subject 5
B-6 Frequency Information for Subject 6
B-7 Frequency Information for Subject 7
B-8 Frequency Information for Subject 8
B-9 Frequency Information for Subject 9
B-10 Frequency Information for Subject 10
B-11 Frequency Information for Subject 11
B-12 Frequency Information for Subject 12
C-1 Temporal Information for Subject 1
C-2 Temporal Information for Subject 2
C-3 Temporal Information for Subject 3
C-4 Temporal Information for Subject 4
C-5 Temporal Information for Subject 5
C-6 Temporal Information for Subject 6
C-7 Temporal Information for Subject 7
C-8 Temporal Information for Subject 8
C-9 Temporal Information for Subject 9
C-10 Temporal Information for Subject 10
C-11 Temporal Information for Subject 11
C-12 Temporal Information for Subject 12
List of Tables

5.1 Traditional Frequency Bands
7.1 Probability brain region classifications resulted from chance
Chapter 1
INTRODUCTION
Modern neuroscience still lacks a fundamental understanding of how the neural activity of autistic individuals compares to that of neurotypicals. Despite the wealth of research, there is no known unique brain signature found in the ASD population. This is particularly surprising given the notable behavioral differences between these two groups. This experiment lays the groundwork toward finding just such a biomarker. Neurotypical subjects will be studied using extended dynamic face stimuli with the hope of finding a consistent underlying brain pattern. The discoveries made through this experiment will allow for future work on its presence in autism.
While there has been a great deal of work in the area of face processing, typical studies are only interested in breaking down when and where it begins. This is accomplished by evoking the brain's transient response using very brief stimulus presentations (on the order of fractions of a second). This approach provides great insight into when face/object discrimination occurs, but these experiments lose the temporal aspect of face processing. The dynamic nature of faces will be retained in this experiment by using stimuli which are much longer (on the order of 10 seconds).
It is believed that this biomarker will serve well in the study of autism because it retains the temporal component of face processing. Autism's main symptom is seen in abnormal social behaviors. Social activities unfold over time, so by chopping face stimuli into 300 millisecond chunks, as modern studies do, a great deal of information is lost. The reason there is still no known underlying biomarker for autism could be that modern approaches do not evoke the aspect of face processing which is difficult for autistic individuals. It is possible that face recognition functions perfectly but some problem in higher order integration exists. It is because of this integration of faces through time that the neural correlates of extended dynamic face processing should serve as a valid biomarker in the study of autism.
In addition to autism research, another application of this knowledge is in the field of brain computer interfaces (BCI). This new area of research hopes to bridge the gap between man and machine by providing a channel of communication directly from the brain. Pieces of this technology are already knocking at our doors. In 2009, the toy company Uncle Milton introduced a Star Wars 'Force Trainer' containing a simplified EEG device which allows the user to control the position of a ball using only their brain waves. Researchers at the University of Wisconsin also recently designed an EEG system which allows the user to 'tweet' on the social networking site Twitter using only the EEG signal[16]. This technology is in its infancy, but as these devices become more commonplace a better understanding of the human brain will be needed to fully utilize them. Finding the neural correlates of extended and dynamic face viewings will provide a novel method of interacting with machines.
Chapter 2
BACKGROUND
2.1 Imaging Modality
There are two main noninvasive imaging modalities used in neuroscience research: electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). Each tool has a specific set of advantages and disadvantages. EEG measures the electrical activity of the scalp, allowing for millisecond temporal resolution. This resolution comes at a price, as electrical activity dissipates through the skull, costing precise spatial resolution. fMRI detects blood flow and can be used to pinpoint exactly which sections of the brain are being activated, but because blood takes time to flow it fails to have the temporal resolution of EEG. In general, fMRI can only image the brain once every quarter second[14], much slower than EEG. For this experiment temporal resolution was weighted as more important than spatial resolution, so EEG was the imaging modality of choice.

Figure 2-1: Examples of fMRI (left) and EEG (right).
2.2 Event Related Potential (ERP)
One of the main techniques used in neuroscience is event related potentials (ERP). This process involves presenting a subject with a series of very brief stimuli (on the order of hundreds of milliseconds) and analyzing the brain's transient response. Imaging tools like EEG are prone to noise, so a large number of trials are run and the signals are averaged together. This averaging trick increases the signal-to-noise ratio: throughout the trials the brain pattern remains constant while the random noise averages to zero. The rate at which the noise decays is related to the square root of the number of trials, so ERP studies can range anywhere from 50 to 500 repetitions for a given stimulus[24].
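As a minimal sketch of this averaging trick (on hypothetical synthetic data, not the recordings described in this thesis), the residual noise in the average shrinks roughly with the square root of the trial count:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                            # sampling rate in Hz (hypothetical)
t = np.arange(0, 0.5, 1 / fs)        # one 500 ms epoch

# Fixed "brain response": an N170-like negative dip at 170 ms.
signal = -0.2 * np.exp(-((t - 0.170) ** 2) / (2 * 0.02 ** 2))

def average_erp(n_trials):
    # Each trial is the same signal plus independent noise; averaging keeps
    # the signal while the noise shrinks roughly as 1/sqrt(n_trials).
    trials = signal + rng.normal(0, 0.5, size=(n_trials, t.size))
    return trials.mean(axis=0)

for n in (1, 50, 500):
    residual = average_erp(n) - signal
    print(f"{n:4d} trials: residual noise std = {residual.std():.4f}")
```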
ERP studies are frequently used to study face processing. These experiments typically involve flashing a randomized series of face and non-face images to a subject[12]. Exactly what constitutes a non-face image varies from experiment to experiment, but typical non-face images are cars, houses, and other everyday items[23][10]. The major discovery ERP studies have made in face processing is the N170 response[23]. It has been shown that at approximately 170 milliseconds after the initial presentation of a face stimulus the occipital lobe makes a sharp negative spike, as shown in Figure 2-2. This negative dip is unique to faces and is not evoked when viewing other objects, even human body parts like hands[23]. This result shows that within 170 milliseconds of light entering the eye, the human brain can distinguish between faces and non-faces.

Studies on the differences between the N170 response in neurotypicals and people with autism have been relatively inconclusive. One noted difference is a decreased
Figure 2-2: N170 response.
amplitude[1][15] and delayed latency[13] of the N170 response in subjects with ASD. This difference should be regarded cautiously, as factors other than neurological underpinnings can be the cause. Anatomical properties of a subject, such as the thickness of the skull[19], can affect electrical conductivity, distorting the underlying signal. In general, comparison of ERP data between subjects needs to be done carefully, as differences in waveform peaks do not inherently correspond to differences in the underlying brain activity[24].

Another noted difference between neurotypicals and the ASD population is an atypical spatial distribution of face processing. A lack of right hemisphere lateralization has been found[9], implying no cortical specialization. This finding has been questioned by MEG studies showing a right hemisphere preference for faces when compared against other objects[1]. In total, results in this area are at odds and cannot be used to make a strong case either way.

The literature on ERP studies involving ASD populations is relatively sparse and inconclusive[4]. As it stands there exists no concrete biomarker which separates subjects with ASD from neurotypicals. As postulated in the introduction, this may be the result of the ERP framework. Since autism's most striking symptoms come from atypical social behavior, the brief nature of stimulus presentation may not test the appropriate aspect of ASD neurological function. This could be the reason neuroscience has yet to find a firm biomarker.
2.3 Brain Computer Interface (BCI)
The field of Brain Computer Interface (BCI) attempts to connect man and machine through the analysis of brain waves. This technology has a threefold mission:

1. Provide a means of communication for people with severe muscular paralysis.

2. Help neuroscience researchers better understand the brain.

3. Create a novel means of interaction between man and machine.

This new research area takes advantage of the progress in neuroscience and signal processing to provide a real time communication pathway between the brain and a computer. It draws on findings from all fields of neuroscience to make such things a reality.
An example of how ERP studies have developed into BCI methods is the P300 response. This spike is elicited in response to an oddball paradigm[21]. In this setting two possible stimuli are presented, one being much rarer than the other. When the unlikely stimulus is presented, a sharp positive swing is found approximately 300 milliseconds later.

This paradigm is used to communicate using a P300 matrix[21]. This matrix is a 6x6 grid containing both letters and numbers. The subject is asked to fixate on a character while the rows and columns are flashed randomly. Most of the time the character remains in its unflashed state, but every so often its column or row will blink. This flashing serves as the oddball stimulus and results in the P300 response. By combining which row and column evoke the response, a computer is able to determine which character the subject is attending to. It is through this method that paralyzed patients are able to spell words[25][6] and do things like 'tweet'[16] using their brain.

Figure 2-3: The P300 matrix. Subjects are asked to fixate on a particular letter. The columns and rows are flashed at random. When the row or column containing the fixated character is flashed, a P300 spike is recorded. By combining the row and column causing the spike, a computer can determine the fixated character.
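A toy sketch of the row/column decoding step (the grid layout and detector scores here are hypothetical, not taken from any cited system):

```python
import numpy as np

# Hypothetical 6x6 speller grid of letters and numbers.
GRID = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                 list("STUVWX"), list("YZ0123"), list("456789")])

def decode(row_scores, col_scores):
    """Pick the character whose row flash and column flash evoked the
    strongest average P300-like response (scores from some detector)."""
    r = int(np.argmax(row_scores))
    c = int(np.argmax(col_scores))
    return GRID[r, c]

# Example: row 2 and column 4 evoked the largest responses -> 'Q'.
print(decode([0.1, 0.2, 0.9, 0.1, 0.2, 0.1],
             [0.2, 0.1, 0.0, 0.1, 0.8, 0.2]))
```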
In addition to analyzing ERP-style experiments, brain computer interface devices are also used to monitor more continuous brain activity such as alertness. A number of EEG experiments on short term memory tasks have found neurophysiological effects of mental workload. Real-time BCI systems have been built to monitor subjects in tasks like alertness behind the wheel[11]. To quantify alertness, these systems use the power spectral density in specific frequency bands as features for linear discriminant analysis. This classifier is then used to assign a high or low mental workload rating. Techniques used in BCI devices are directly applicable to the type of analysis needed for this experiment on extended dynamic face viewings.
Chapter 3
EXPERIMENTAL SETUP
3.1 Stimuli
The stimuli set for this experiment contains 30 face videos and 30 non-face videos. Face videos are defined as containing a single person speaking in the direction of the camera. It was not required that the speaker face directly toward the camera, as many faces are slightly angled away. Non-face videos are defined as any video which did not contain a human face. These non-face videos are allowed to contain human hands, which have been shown not to evoke face-specific responses like the N170[23].
Non-face stimuli are as follows:

* Moth getting caught in Venus fly trap
* Airplane taking off
* Arthur Ganson's kinetic sculpture of wishbone walking
* Arthur Ganson's kinetic sculpture of chair exploding
* Arthur Ganson's kinetic sculpture device with swinging head
* Bridge wobbling
* Windmill exploding
* Ferrofluids
* Sidewinder snake moving over desert
* Firework
* Time-lapse Popsicle melting
* Whale breaching
* Helicopter taking off
* Tornado
* Big Dog walking through woods
* Lego toy robot
* Industrial robotic arm constructing car
* Folding a paper airplane
* Slow motion water droplet
* Slow motion water droplet hitting another droplet
* Flower blooming
* Wrecking ball crushing building
* Backhoe digging dirt
* Waves crashing over deck
* Car swerving on the highway
* Tactile electronic sensor
* A swarm of birds
* Domino Ouroboros
* Glass blowing
* Magnetic pendulum

Figure 3-1: Example images of the videos used in the experiment. The top two are face videos. The bottom two are non-face videos.
3.2 Stimuli Collection
Source videos were downloaded from YouTube using the free software 'YouTubeDownloader.' The majority of the face videos came from personal YouTube blogs as well as various clips of celebrities. There was an attempt to balance race, gender, age, and notoriety. No video contained computer generated graphics.

Once the source videos were downloaded, all clips were processed using Final Cut Pro. Video segments were cut down to approximately 10 seconds and saved in a 240x320 MPEG format, maintaining aspect ratio. Each clip contains one continuous video sequence; no cuts were allowed. The shortest of all the videos was 7 seconds.
3.3 Lab Setup
The Sinha laboratory is set up with the following equipment:

* NetStation EGI EEG Amplifier
* 4 Electrode caps
* Windows PC
* Mac Laptop
* Subject Monitor
* Video Splitter
EEG experiments are designed on the Windows PC using E-Prime psychology software. This computer is connected to the Mac laptop via an Ethernet cable, allowing programs running on both machines to communicate. The Mac laptop contains the NetStation software and handles data acquisition from the EEG amplifier. The amplifier is connected through a USB port on the laptop. The electrode cap worn by the subject is connected to the amplifier and allows the laptop to record brain activity. The subject is placed in front of a monitor which is connected to a video splitter. This splitter allows the experimenter to display either the laptop or PC screen. The Mac laptop screen is displayed to help measure electrode impedances and to show subjects their brain activity. The PC screen is used to display the experiment. Each time a stimulus is presented on the subject's monitor, the PC notifies the Mac laptop, which in turn marks the EEG signal for later segmentation.
3.4 Subject Collection
Subjects were drawn from the MIT Brain and Cognitive Science subject mailing list (bcs-subjects@mit.edu). This list is available to the public and contains a collection of people in the Boston area who are interested in participating in non-invasive brain experiments. The makeup of this list tends to be college students, MIT faculty, and surrounding residents. A short email was sent to the list asking for volunteers for an EEG experiment lasting approximately 1 hour; it can be found in Appendix D. The pay was $10. Twelve people responded and were individually scheduled for time slots.

Figure 3-2: Subject wearing EEG electrode cap.
3.5 Capping Subject
When the subject entered the lab they were given a consent form. This waiver explained the EEG setup and that no physical or mental harm would come to them. While this waiver was not specifically geared toward this experiment, it provides a blanket approval for EEG experiments in the Sinha laboratory. The waiver explicitly states they could leave at any time, should they want to. After they signed and dated the waiver, the size of the subject's head was measured to fit an electrode cap. The end of a measuring tape was placed at the top of the bridge of the nose and wrapped horizontally around the head. From this measurement the correct electrode cap was selected.

The electrode cap was then placed in a saline solution of potassium chloride and shampoo and allowed to soak for 5 minutes. In the meantime, the experiment was explained to the subject and the center of their skull was measured to help correctly place the cap. This was accomplished by measuring the distance across the top of the skull from ear to ear and from the bridge of the nose to the ridge in the back of the head. The intersection of these two lines was defined as the center of the skull and was marked with an 'X' using a red grease pencil.

Once the cap finished soaking, the subject was instructed to close their eyes and the cap was placed over their head. The reference electrode (electrode 129) was positioned such that it rested over the 'X'. A series of straps beneath the subject's chin helped tighten the electrode cap such that all electrodes made contact with the scalp and were correctly positioned.
3.6 Connecting Subject to Amplifier
After the electrode cap was fitted, the subject was positioned in front of a computer screen. The connector at the end of the electrode cap was plugged into the NetStation amplifier, and a new recording session was created in the NetStation software on the Macintosh laptop. After the gains were measured, the impedance display screen was loaded. This screen displayed a 2-D map of the electrodes. Electrodes with an impedance higher than an acceptable threshold were displayed in red, and electrodes with an impedance below the threshold were displayed in green. This screen was placed on the subject's monitor for the ease of the experimenter, and every red electrode was adjusted until it turned green. These electrodes were repositioned to ensure a good connection to the scalp and pipetted with the saline solution. Once each electrode displayed as green on the impedance screen, the measurements were saved and the window was closed. Next, the dense waveform display was opened and the subject was allowed to look at their own brain activity. The subject was asked to blink and take note of the corresponding wave pattern. It was explained that the process of blinking involves large electrical activity which overwhelms the small signals being tested. The subject was told that the experiment involved watching a series of 10 second long video clips and that there would be a 10 second gap between each video. Subjects were asked to try their best to hold off blinking until the gaps, but that occasional blinking during a video presentation was fine.

Figure 3-3: Impedance measurement screen.
3.7 Running Experiment
The subject's screen was switched to the PC and E-Prime was loaded. The Extended Face Viewing experiment was opened and executed. The door to the EEG room was closed and the lights were shut off to block out any distraction. An information screen explaining the experiment was shown to the subject, and once they felt ready they began. The experiment was initiated by the researcher hitting 'Space Bar' on the PC; the subject had no control. The researcher sat behind the subject and monitored the EEG waveforms to ensure no complications throughout the experiment.

Figure 3-4: Experimental process. Video was displayed for ten seconds, then a grey screen. The process was repeated for all 60 videos.
3.8 Experiment
E-Prime was used to create the experiment for this EEG study. At the start, a description of the experiment was presented to the subject. Once the subject was prepared to begin, a key press initiated the videos. Once the experiment began, a gray screen with a set of cross hairs was presented. This screen lasted for approximately 10 seconds, and the cross hairs provided a fixation point for the subject. After the 10 seconds were up, a video was presented. The videos were presented in random order, with each subject's run being unique. After the video ended, the gray screen and cross hairs were presented again and the cycle repeated. This continued until all 60 videos had been displayed. Audio was not cut from the videos, so the speakers were manually turned off.
Chapter 4
PREPROCESSING
4.1 Segmentation
The EEG system records one long continuous signal from the start of the experiment to the end. As stimuli are presented, the system tags their location in the stream. The first preprocessing step was to segment out only the regions of interest from the EEG signal. This was accomplished by breaking the EEG trace into 60 clips starting at the moment of presentation of each stimulus. Each clip was cut to 7 seconds, the length of the shortest video. This guaranteed each clip would contain the brain signal of the subject while exposed to a stimulus.
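A minimal sketch of this segmentation step in Python, assuming the continuous recording is a sensors-by-samples array and the stimulus tags are sample indices (all names and sizes here are hypothetical placeholders):

```python
import numpy as np

def segment_trials(eeg, event_samples, fs=1000, clip_seconds=7):
    """Cut a continuous (sensors x samples) EEG recording into fixed-length
    clips starting at each stimulus-onset tag."""
    n = int(clip_seconds * fs)
    return np.stack([eeg[:, s:s + n] for s in event_samples])

# e.g. 128 sensors, 60 stimulus tags -> a (60, 128, 7000) array of trials
eeg = np.random.randn(128, 1_000_000)
events = np.linspace(10_000, 980_000, 60, dtype=int)
trials = segment_trials(eeg, events)
print(trials.shape)  # (60, 128, 7000)
```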
4.2 Filter Design
By default the EEG amplifier samples at 1 kHz. The choice of sampling rate defines what types of signals can be recorded. Based on the Nyquist criterion, a signal can be faithfully represented by a discrete set of sample points as long as the sampling rate is more than twice its highest frequency. In the case of the EEG amplifier, sampling at 1 kHz allows for frequencies up to 500 Hz.

This range of signals is much wider than needed for EEG data analysis. Having such a large spectrum can cause a problem by allowing noise to creep in. One such source of noise comes from the fact that in the United States power lines operate at 60 Hz. Since the EEG amplifier is powered by the grid, it is common for the alternating current in the power lines to cause 60 Hz artifacts in the signals. The typical way to cope with this and other sources of noise is to use a filter.

Figure 4-1: Example EEG trace.

There are four main types of filters: low-pass, high-pass, band-pass, and notch filters. A low-pass filter cancels out all frequencies above a certain cutoff point. A high-pass filter does the opposite by canceling out all frequencies below a certain cutoff. The band-pass is a combination of the two: it allows only a specific band (a pass band) of frequencies to pass through, while canceling out all the rest. A notch filter cancels only a specific portion of the frequency content and can be thought of as the inverse of the band-pass. Notch filters are typically used to cancel out the 60 Hz power line noise, as it is a known and specific problem.
It is typical for EEG experiments to low-pass filter EEG data after acquisition; a common cutoff frequency is 50 Hz[7]. It is also common to high-pass filter, since EEG sensors tend to drift over time. High-pass filtering serves to baseline the signal by removing any DC (0 Hz) component. This method of low-pass and high-pass filtering is identical to running the signal through a single band-pass filter. For this experiment a band-pass filter with a pass band starting at 0.1 Hz (to baseline) and ending at 50 Hz (to low-pass filter) was used.
Since the EEG signal is saved as a discrete set of sample points, the signal is defined in the discrete time domain. In this domain filters come in one of two types: infinite impulse response (IIR) or finite impulse response (FIR). This labeling corresponds to how the filter responds to a given input: an IIR filter's response to an input persists indefinitely, while an FIR filter's response dies out after a finite number of samples. Since this filtering needs to happen on a computer in finite time, ideal IIR responses cannot be realized exactly. This leaves the subset of FIR filters.
The ideal filter has a pass band of magnitude exactly one and a stop band of exactly zero. Such a magnitude profile cancels all unwanted frequencies without distorting the others. In practice, ideal filters fall under the IIR domain, so approximations to the ideal filter are made. Four common filters are used as approximations: Butterworth, Chebyshev I, Chebyshev II, and Elliptic. All four of these filters work well and are often used in neuroscience signal processing[20].

Unfortunately these filters are IIR, so for each there is an additional FIR approximation. In this experiment a Chebyshev II filter was used. This filter has a monotonic pass band and equiripple in the stop band. This type of filter typically has a sharper transition band than the others, which makes it well suited for band-pass filtering. The FIR approximation of a Chebyshev II filter is implemented using the Parks-McClellan algorithm[27]. For this experiment the pass band defined above (starting at 0.1 Hz and ending at 50 Hz) was used with an acceptable transition band of 0.1 Hz. To cancel out any phase distortion the filter is run twice over the signal, once forward and once backward.
Figure 4-2: Magnitude plot of filter designed through the Parks-McClellan algorithm.
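The sketch below illustrates the general recipe in Python with scipy, simplifying in two ways: the band-pass is approximated as mean removal (for the baseline) plus a Parks-McClellan (remez) equiripple low-pass at 50 Hz, and a 2 Hz transition band is used instead of the 0.1 Hz above so the filter stays short. The forward-backward pass via filtfilt cancels phase distortion as described:

```python
import numpy as np
from scipy.signal import remez, filtfilt

fs = 1000.0   # amplifier sampling rate in Hz

# Parks-McClellan (remez) equiripple FIR low-pass with a 50 Hz pass band
# edge and a 2 Hz transition band (loosened from the 0.1 Hz in the text
# to keep this sketch short).
taps = remez(numtaps=401, bands=[0, 50, 52, fs / 2],
             desired=[1, 0], fs=fs)

eeg = np.random.randn(7000)        # placeholder 7 s trial at 1 kHz
eeg = eeg - eeg.mean()             # crude baseline: remove the DC component
# Running the filter forward and backward cancels phase distortion.
clean = filtfilt(taps, [1.0], eeg)
```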
4.3 Down Sampling
After the filtering stage the EEG trace contains a 50 Hz signal sampled at 1 kHz. The Nyquist criterion states that a 50 Hz signal only needs to be sampled at 100 Hz for faithful reproduction. This means the EEG signal is oversampled by a factor of ten. To help reduce the size of the data, the EEG signal is decimated by saving only every tenth point. This cuts every EEG trial to one tenth of the size of the original recording without losing any information.
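In code, the decimation step is a single stride (a sketch with placeholder data standing in for a filtered trial):

```python
import numpy as np

factor = 10
clean = np.random.randn(7000)      # stand-in for one filtered 7 s trial
decimated = clean[::factor]        # keep every tenth point: 1 kHz -> 100 Hz
print(decimated.shape)             # (700,); the 0-50 Hz content is preserved
```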
4.4 Artifacts
The EEG signal can often be plagued by biological artifacts. The muscular activity of blinking is strong enough to register as a significant spike in the EEG trace. The spike caused by a blink is comprised of a number of frequencies, many of which blend into the underlying signal, making it hard to filter out.

Figure 4-3: EEG noise resulting from blinking.

Typical ERP studies deal with artifacts by removing any trials containing blinks. Because ERP studies record a large number of trials, losing a few will not greatly affect the overall result. In the case of this experiment, removing tainted trials cannot be afforded. With each trial lasting approximately 10 seconds, the likelihood of some trials containing blinks is high. This, coupled with the small number of stimulus presentations, means eye blinks in the signal are inevitable and must be compensated for in the processing stage.
Chapter 5
ANALYSIS
5.1 Amplitude Time Analysis
Typical analysis in ERP experiments involves studying signals in the amplitude-time domain. In the case of EEG experiments, this means looking at brain signals as a function of electrical activity over time. As ERP studies have shown, this approach can have rewarding results: responses like the N170, P100, and P300 are all derived by analyzing signals in this manner.

As in all methods of analysis, studying signals in the amplitude-time domain involves defining features of interest. Usually, these features are peaks and valleys. These are called extrema and can be defined mathematically as points in the signal where the slope is zero. Computationally, these extrema can be found using the first derivative[8]. Since the first derivative is the instantaneous rate of change of a signal over time, the extrema of a signal will be the points where the first derivative is equal to zero. To determine whether an extremum is a peak or a valley, the second derivative can be computed[8]. If the value of the second derivative at the extremum is positive then the signal has positive curvature, meaning the extremum is a minimum. If the second derivative is negative, then the extremum is a maximum. The values of these extrema and their locations in the signal can be used to compare the effect of different stimuli on neural activity.
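A small sketch of derivative-based extrema detection on a toy signal (np.diff stands in for the derivatives; this is an illustration under those assumptions, not the thesis code):

```python
import numpy as np

def find_extrema(x):
    """Locate peaks and valleys via sign changes of the first difference;
    the second difference gives the curvature (negative -> maximum)."""
    d1 = np.diff(x)
    # indices where the slope changes sign between sample i and i+1
    idx = np.where(np.sign(d1[:-1]) != np.sign(d1[1:]))[0] + 1
    d2 = np.diff(x, 2)
    peaks = [i for i in idx if d2[i - 1] < 0]    # negative curvature
    valleys = [i for i in idx if d2[i - 1] > 0]  # positive curvature
    return peaks, valleys

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 3 * t)        # toy signal with known extrema
peaks, valleys = find_extrema(x)
print(len(peaks), len(valleys))      # 3 peaks and 3 valleys of the 3 Hz sine
```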
Figure 5-1: Automatic peak detection using the first and second derivative.

This style of analysis lends itself well to the ERP framework. Since ERP studies
only look a few hundred milliseconds after the presentation of a stimulus, there are relatively few peaks and valleys. Also, since the timeline is so short, the likelihood of the signal accumulating large amounts of phase lag is low. Therefore comparing peaks from two different stimulus conditions is a viable option, even possible by hand. Unfortunately the time course is much longer for extended dynamic faces. Since signals in this experiment are on the order of seconds, the ability to make one-to-one comparisons between extrema across multiple trials is low. As a result, analyzing brain function in the time domain is not the ideal method for this experiment.
5.2 Frequency Analysis
All is not lost. Another approach to signal analysis is to look at the frequency domain. Fourier showed that any arbitrary function can be represented as an infinite series of sines and cosines. From this rule the Fourier transform is defined.

Figure 5-2: Fourier transform of an EEG signal.

In the case of
the EEG signals, the Fourier transform converts a signal from a function of electrical
activity over time to a function of electrical activity over frequency. Mathematically
this transform is defined as:

F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt

This transformation holds over all frequencies, but because of the filtering stage described in the preprocessing section, the transform of the signals used in this experiment is zero above 50 Hz. Brain imaging studies often use the Fourier transform in their analysis. These studies involve more long-term signals designated by mental states like sleep, stress, and awareness[5][22][18]. The approach Fourier analysis takes in these experiments involves computing the power spectral density in specific frequency regions. Mathematically, the power spectral density is defined as:

S(\omega) = \frac{F(\omega)\, F^*(\omega)}{2\pi}

This equation computes how much activity there is at a specific frequency of a signal.
Table 5.1: Traditional Frequency Bands

Delta   1-4 Hz
Theta   4-8 Hz
Alpha   8-13 Hz
Beta    13-40 Hz
Gamma   40+ Hz

This is a continuous function, so to make analysis easier the power spectral density is integrated into a series of bins. Neuroscientists historically divide the frequency space of brain waves into five categories: delta, theta, alpha, beta, and gamma. These are purely traditional frequency groupings and are not used in this experiment.
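As an illustration, band power can be computed from an estimated power spectral density; the sketch below uses scipy's periodogram on placeholder data and integrates the traditional alpha band:

```python
import numpy as np
from scipy.signal import periodogram

fs = 100                               # rate after down-sampling (Hz)
x = np.random.randn(700)               # placeholder 7 s trial

# Power spectral density estimate, proportional to F(w)F*(w).
f, psd = periodogram(x, fs=fs)

def band_power(f_lo, f_hi):
    """Integrate the PSD over one frequency bin of interest."""
    m = (f >= f_lo) & (f < f_hi)
    return np.trapz(psd[m], f[m])

print(band_power(8, 13))               # e.g. the traditional alpha band
```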
This approach is a step in the right direction, but it is missing one critical fact about EEG signals: brain patterns tend to be very transient spikes. The signals of interest do not typically oscillate at a constant frequency; they change and decay over time. These brief spikes can cause noise in the Fourier transform, as they contain a large amount of high frequency content. Distinct spikes will also have similar frequency content, leading to an overlap in the Fourier domain. This overlap can make it difficult to piece out which aspects of the signal are of interest. In general the Fourier transform loses all time information, making it difficult to determine where activity takes place. As a result it is a poor tool for studying EEG signals.
5.3 Short-Time Frequency Analysis
There exists a solution which straddles both the temporal and frequency worlds. A short-time Fourier transform computes the frequency content of windowed portions of the signal. Mathematically, this is defined as:

F(\omega) = \int_{-\infty}^{\infty} w(t)\, f(t)\, e^{-i\omega t}\, dt
where w(t) is some windowing function. There is no one window inherent to the short-time Fourier transform, but a typical function is the Hann window.

Figure 5-3: Hann window (image taken from Wikipedia).

This windowing function takes the form:

w(t) = \frac{1}{2}\left[1 - \cos\!\left(\frac{2\pi t}{N - 1}\right)\right]
where the width of the window is defined by the value N. The Hann window is well suited for the short-time Fourier transform because it has low aliasing: its Fourier transform has lower frequency content than most other windowing functions, allowing for better temporal resolution. All studies which use Fourier analysis have some form of windowing function. The first step of any EEG study involves cutting out a region of interest in the original signal, which can be thought of as using a simple rectangular window. Beyond that, most studies break the signal into a series of epochs and compute the Fourier transform individually[5][22][18]. This helps compensate for the transient nature of the brain activity. In this experiment it will be additionally helpful in dealing with blinking artifacts. Since blinks happen for very brief periods of time, most windowed portions of the signal will section out blinks. This means individual eye blinks will not destroy entire trials.
Sliding the windowing function down the signal allows frequency content to be
defined at discrete time points in the signal. This is how the short-time Fourier
transform preserves both temporal and frequency information. Unfortunately there
is a trade off between resolution in the time domain and resolution in the frequency
domain. Setting a wide windowing function allows for accurate frequency analysis,
but knowing exactly when those frequencies occur suffers.

Figure 5-4: Spectrogram resulting from the short-time Fourier transform.

Vice versa, narrowing the
windowing function increases the understanding of when things occur, but because the small window function only allows a small chunk of signal to pass through, the frequency information suffers. Typically a balance is made between frequency and temporal resolution, as done in this experiment.

This type of analysis is ideal for this experiment. It is not known where or when the specific neural activities occur which are unique to extended face viewings. Keeping both frequency and temporal data will provide a better understanding of the underlying signal. As stated above, a Hann window function was used with a width of 0.64 seconds and a 93.75% overlap between epochs. Computing the short-time Fourier transform turns a signal into a 2D representation called a spectrogram. In a spectrogram, frequency is on one axis and time is on the other. To define a set of features from the spectrogram, blocks 1 Hz tall and 0.5 seconds wide were integrated, resulting in 700 individual blocks for each EEG trace. Example spectrograms for each subject are listed in Appendix A.
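A sketch of the spectrogram-and-gridding pipeline on placeholder data, using the stated window parameters (a 0.64 s Hann window with 93.75% overlap at the 100 Hz post-decimation rate); note that the window's native frequency resolution is coarser than 1 Hz, which is one source of the uneven block sizes noted in Figure 5-5:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100                                   # post-decimation rate (Hz)
x = np.random.randn(700)                   # placeholder 7 s trial

# 0.64 s Hann window (64 samples) with 93.75% overlap (60 samples).
f, t, S = spectrogram(x, fs=fs, window='hann',
                      nperseg=64, noverlap=60, mode='psd')

# Grid the spectrogram into 1 Hz x 0.5 s blocks and sum within each block,
# giving roughly the 50 x 14 = 700 features per trace described above.
f_edges = np.arange(0, 51)                 # 1 Hz tall bins up to 50 Hz
t_edges = np.linspace(t[0], t[-1], 15)     # fourteen ~0.5 s wide bins
features = np.zeros((50, 14))
for i in range(50):
    fm = (f >= f_edges[i]) & (f < f_edges[i + 1])
    for j in range(14):
        tm = (t >= t_edges[j]) & (t < t_edges[j + 1])
        features[i, j] = S[np.ix_(fm, tm)].sum()
print(features.shape)                      # (50, 14) -> flattened to 700
```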
Figure 5-5: Spectrogram gridded into 1 Hz by 0.5 second blocks. The different sizes are a result of rounding.
Chapter 6
CLASSIFICATION
6.1 Machine Learning
Breaking the spectrogram into 1 Hz and 0.5 second blocks results in 50 blocks for each of the 14 half-second epochs, or 700 total features per trial. More than that, each trial is recorded over 128 sensors, making it 700 features for each of the 128 sensors. Attempting to piece out which of the 89,600 total features are correlated to face processing using only 60 examples is a Herculean task to do by hand. Thankfully the field of machine learning provides a tractable solution. Algorithms from this field are used to automatically learn the difference between two classes of stimuli (for this experiment, face and non-face). By running these algorithms on subsets of features, the important aspects of the EEG signal can be teased out.

Machine learning algorithms learn by training off a subset of data. They assume patterns in the overall data set are represented in this smaller collection, so any difference found in a training set should generalize to the overarching data. The boundary found by these algorithms is normally tested on the remaining data to determine how well it classifies the two types of data. Typical approaches involve breaking the entire data set into a series of groups and training multiple algorithms. Each group serves as the test set for one algorithm, and from the average classification rate the separability of the data can be judged. In this experiment the classification rate will be used to determine the information content of features. If a feature has a high classification rate then it is assumed that it is helpful in determining face processing. If classification is near chance then it is assumed it is playing no role.
In machine learning there are three general types of algorithms: supervised, semi-supervised, and unsupervised. In the case of a supervised learning algorithm, the entire training set is labeled. This tells the algorithm which data points are in the positive class (face) and which are in the negative class (non-face). In semi-supervised learning only a portion of the training set is labeled, and in unsupervised learning none are labeled. Each type of algorithm has different applications, but for this experiment a supervised learning algorithm is ideal. Every trace from the EEG data is automatically labeled as a face or non-face data point, and not including this information would only hinder the algorithm's performance. Semi-supervised and unsupervised learning algorithms are reserved for cases where there are a large number of data points and acquiring labels is too expensive.
6.2 Support Vector Machine (SVM)
One of the biggest revolutions in the field of machine learning has been the support vector machine (SVM). This algorithm maps input vectors onto a high-dimensional space where it computes the hyperplane which optimally separates the data into two classes. Optimality for the SVM is the hyperplane which maximizes the margin between the two classes. The margin is defined as the shortest perpendicular distance from the hyperplane to any data point. This type of learning algorithm is called a support vector machine because the hyperplane is constrained only by the select few training points (support vectors) which lie on the margin. Adding additional points outside of the margin has no effect on the overall classification boundary. In cases where no linear hyperplane can divide the training set, slack variables can be introduced to allow data points to lie within the margin.

By default the SVM looks for a linear hyperplane, but by using the 'kernel trick' it can be extended to nonlinear classification.
Conceptually, this can be thought of as mapping the input vectors onto a nonlinear dimensional space. Solving for the ideal hyperplane in this new mapping results in a nonlinear classification boundary in the original space. This 'trick' allows more complicated data sets to be correctly classified.

Figure 6-1: SVM splitting a data set into two groups.
Data with complex but known relationships lend themselves to nonlinear classification, but the data in this experiment is new. Using such a classifier is unjustified and could result in an overly constrained classification boundary. In cases where the underlying relationship is unknown, linear classifiers are typically used, so linear SVMs were chosen for this experiment. To estimate the performance of the SVM, a five-fold cross-validation setup was used[17]. The 60 trials were broken up into 5 groups of 12. The SVM was trained on four of the groups and tested on the remaining one. This process was repeated such that every group served as the test set once. Cross validation ensured the classification performance was not a fluke of the data and was the result of a more generalized underlying pattern.
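A minimal sketch of the per-sensor classifier with scikit-learn, on placeholder features (60 trials by 700 blocks); with pure-noise features the cross-validated rate hovers around chance:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 700))    # placeholder: 60 trials x 700 blocks
y = np.repeat([0, 1], 30)         # 30 non-face and 30 face labels

# Linear SVM scored by 5-fold cross-validation (5 groups of 12 trials);
# the mean fold accuracy is the sensor's classification rate.
clf = LinearSVC(C=1.0, max_iter=10_000)
rates = cross_val_score(clf, X, y, cv=5)
print(rates.mean())               # near 0.5 here: the features are noise
```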
6.3 Feature Selection
To find which aspects of the EEG signal contain important face processing data, the features created from the spectrogram were broken into subsets. As stated above, there were 700 features for each of the 128 sensors. For this experiment each sensor was treated independently and acted as its own classifier. To find an underlying biomarker, three different types of information were looked for: spatial, frequency, and temporal.

Figure 6-2: Example of frequency feature used to create frequency histogram.
To determine spatial information, the entire set of 700 features was given to the SVM. This defines an overall classification rate for a particular sensor. By looking at where these sensors lie on the skull and how they compare to their neighbors, spatial information about face processing can be determined.

Frequency information can be determined by submitting individual frequency bands. Looking at the spectrogram, this can be thought of as only using one row at a time. This frequency vector contains the power content of a specific frequency over the time course. Analysis in this manner will show the role of specific bands throughout extended face processing.

Finally, basic temporal information will be computed by splitting the spectrogram in two. The exact same frequency analysis described above will be run, except instead of providing the SVM with the entire time course, only half will be given. This process will be run once on the first 3.5 seconds and again on the last 3.5 seconds. Splitting the spectrogram up in this manner provides basic information about when specific frequencies play a role.
Looking at particular sensors individually can be misleading. Sensor values are unlikely to match across subjects because of variations in skull size and how the EEG sensor cap was positioned. Luckily, regions of the brain tend to act together, so looking at group statistics will provide a more reliable insight into how the brain is functioning. For this experiment there are 9 brain regions used to group sensors:

* Frontal
* Parietal
* Temporal-parietal (LH)
* Temporal-parietal (RH)
* Temporal (LH)
* Temporal (RH)
* Occipitotemporal (LH)
* Occipitotemporal (RH)
* Occipital
The classification results from each sensor will be aggregated into these groups and statistical tests of significance will be run. Each brain region will contain a list of results for all of the feature sets. A Student's t-test will be used to compute how likely it is that these results were drawn from a random classification. A t-test computes the probability that a given set of observations was drawn from a distribution centered around some specific mean. For the purposes of this experiment, the list of classification rates (the set of observations) will be compared against a 50% mean. That is to say, the t-test will determine how likely it is that the results for a given feature came from classifiers that perform at chance. Likelihoods below 5% will be considered statistically significant.
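In code, the region-level significance test is a one-sample t-test of the classification rates against a 50% mean (the rates below are hypothetical values, not results from this experiment):

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical classification rates for the sensors in one brain region.
rates = np.array([0.58, 0.61, 0.55, 0.57, 0.60, 0.54, 0.59])

# How likely is it that these rates came from chance (50%) classifiers?
t_stat, p_value = ttest_1samp(rates, popmean=0.5)
print(p_value < 0.05)             # True -> statistically significant region
```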
Chapter 7
RESULTS
7.1 Spatial
After processing, each sensor was grouped into its respective brain region. A list was compiled of the classification rates for each sensor when the entire spectrogram was given to the SVM. The mean classification rate was computed, and the whole process was repeated for each of the 12 subjects. At the end there were 12 observations of the classification rate for each of the 9 brain regions. The average classification rate across subjects for each brain region was above chance, and after running a t-test to see if the 12 observations could have resulted from chance, each brain region proved to be statistically significant. The average classification results across all subjects are plotted in Figure 7-1 and the probabilities they resulted from chance are shown in Table 7.1. The classification rates for the Occipitotemporal (Left Hemisphere), Occipitotemporal (Right Hemisphere), and Occipital regions were significantly higher than the other 6 regions, and these were selected for further analysis.
Figure 7-1: Average classification rate of the 9 brain regions across all subjects.

Table 7.1: Probability brain region classifications resulted from chance

Frontal                  0.29%
Parietal                 0.35%
Temporal-parietal (LH)   1.26%
Temporal-parietal (RH)   1.52%
Temporal (LH)            2.32%
Temporal (RH)            2.92%
Occipitotemporal (LH)    0.04%
Occipitotemporal (RH)    0.12%
Occipital                0.04%

7.2 Frequency

The average frequency histogram (classification rates as a function of frequency) for each of the three brain regions was computed and plotted. A t-test was run for each frequency band across all sensors in each brain region to determine if the frequency
band was statistically significant. If the frequency band performed above chance and passed the t-test, it was displayed in blue; if it failed either test, it was displayed in red. Visual inspection showed a consistent spike around the 10 Hz and 20 Hz frequency ranges across subjects. Figure 7-2 and Figure 7-3 show example frequency histograms displaying the noted frequency spikes. The plots for every subject can be found in Appendix B.

Figure 7-2: Example of frequency histogram from subject 7.

Figure 7-3: Example of frequency histogram from subject 10.
7.3 Temporal

There appeared to be little difference between the frequency histograms for the first half of the stimuli presentation and the second half. This seems to show that the resulting 10 Hz and 20 Hz information spikes are not the result of transient signals evoked by the onset of the stimuli. Figure 7-4 and Figure 7-5 show examples of the
relatively consistent frequency histograms for the first and second half of the time course. The same color scheme described in the Frequency section was used. The plots for every subject can be found in Appendix C.

Figure 7-4: Effect of temporal information from subject 7.

Figure 7-5: Effect of temporal information from subject 1.
7.4 Energy Content
To determine the unique brain signature for extended face viewings, the power content of these two frequency bands was computed. The average power across the 10 Hz band (averaged from 8 Hz to 12 Hz) and the 20 Hz band (averaged from 18 Hz to 22 Hz) over the entire time course was computed during face and non-face viewings. A 2 Hz margin was used to account for subject variability. Comparing the two values for each band showed that the power density decreases in the 10 Hz range during an extended face viewing and increases in the 20 Hz range. Figure 7-6 and Figure 7-7 show the comparison between the face and non-face power conditions.
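A sketch of this band-power comparison on placeholder trials (the spectrogram parameters follow Chapter 5; the trial data and printed values are hypothetical, not the recorded results):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100

def band_log_power(trial, f_lo, f_hi):
    """Mean log power in [f_lo, f_hi] Hz over the whole time course."""
    f, t, S = spectrogram(trial, fs=fs, window='hann',
                          nperseg=64, noverlap=60, mode='psd')
    m = (f >= f_lo) & (f <= f_hi)
    return np.log(S[m].mean())

rng = np.random.default_rng(0)
face = rng.normal(size=(30, 700))        # placeholder face trials
nonface = rng.normal(size=(30, 700))     # placeholder non-face trials

for lo, hi in [(8, 12), (18, 22)]:       # 10 Hz and 20 Hz bands, +/- 2 Hz
    fp = np.mean([band_log_power(x, lo, hi) for x in face])
    nfp = np.mean([band_log_power(x, lo, hi) for x in nonface])
    print(f"{lo}-{hi} Hz: face {fp:.2f} vs non-face {nfp:.2f}")
```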
Figure 7-6: Log power content in the 10 Hz frequency band.

Figure 7-7: Log power content in the 20 Hz frequency band.
Chapter 8
CONTRIBUTIONS
This thesis identified a distinct brain signature for extended dynamic face
processing in neurotypical subjects. With this biomarker, future work can
examine its presence in autism. The brain pattern is defined by the following
three findings:

1. The Occipitotemporal (Left Hemisphere), Occipitotemporal (Right Hemisphere),
and Occipital regions contain the most information during extended face
processing.

2. In those brain regions, the 10 Hz and 20 Hz frequency bands play a
consistent and statistically significant role.

3. During extended face viewing, activity in the 10 Hz range decreases and
activity in the 20 Hz range increases.
Appendix A
Spectrograms
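For reference, trial-averaged spectrograms of the kind shown in this appendix
could be produced along the following lines. This is a minimal sketch with
illustrative window settings and a hypothetical epochs array, not the code
used to generate these plots.

```python
import numpy as np
from scipy import signal

def average_spectrogram(epochs, fs):
    """Trial-averaged log-power spectrogram for one channel group/condition.

    epochs: hypothetical (trials, samples) array; the window length and
    overlap below are illustrative choices.
    """
    freqs, times, sxx = signal.spectrogram(
        epochs, fs=fs, nperseg=int(fs), noverlap=int(fs) // 2, axis=-1)
    # sxx has shape (trials, freqs, times); average power across trials,
    # then convert to dB for display.
    return freqs, times, 10 * np.log10(sxx.mean(axis=0))
```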
Each figure below shows, for one subject, average spectrograms (frequency in
Hz versus time in seconds, log power in dB) for the face and non-face
conditions at the Occipitotemporal (LH), Occipitotemporal (RH), and Occipital
channels.

[Figure A-1: Spectrogram for Subject 1]
[Figure A-2: Spectrogram for Subject 2]
[Figure A-3: Spectrogram for Subject 3]
[Figure A-4: Spectrogram for Subject 4]
[Figure A-5: Spectrogram for Subject 5]
[Figure A-6: Spectrogram for Subject 6]
[Figure A-7: Spectrogram for Subject 7]
[Figure A-8: Spectrogram for Subject 8]
[Figure A-9: Spectrogram for Subject 9]
[Figure A-10: Spectrogram for Subject 10]
[Figure A-11: Spectrogram for Subject 11]
[Figure A-12: Spectrogram for Subject 12]
Appendix B
Frequency Plots
Each figure below shows, for one subject, per-band classification rates
(classification rate versus frequency in Hz, 0-50 Hz) for the Occipitotemporal
(LH), Occipitotemporal (RH), and Occipital channels.

[Figure B-1: Frequency Information for Subject 1]
[Figure B-2: Frequency Information for Subject 2]
[Figure B-3: Frequency Information for Subject 3]
[Figure B-4: Frequency Information for Subject 4]
[Figure B-5: Frequency Information for Subject 5]
[Figure B-6: Frequency Information for Subject 6]
[Figure B-7: Frequency Information for Subject 7]
[Figure B-8: Frequency Information for Subject 8]
[Figure B-9: Frequency Information for Subject 9]
[Figure B-10: Frequency Information for Subject 10]
[Figure B-11: Frequency Information for Subject 11]
[Figure B-12: Frequency Information for Subject 12]
Appendix C
Temporal Plots
Each figure below shows, for one subject, frequency histograms computed
separately for the first and second halves of the stimulus presentation
(0-50 Hz) for the Occipitotemporal (LH), Occipitotemporal (RH), and Occipital
channels.

[Figure C-1: Temporal Information for Subject 1]
[Figure C-2: Temporal Information for Subject 2]
[Figure C-3: Temporal Information for Subject 3]
[Figure C-4: Temporal Information for Subject 4]
[Figure C-5: Temporal Information for Subject 5]
[Figure C-6: Temporal Information for Subject 6]
[Figure C-7: Temporal Information for Subject 7]
[Figure C-8: Temporal Information for Subject 8]
[Figure C-9: Temporal Information for Subject 9]
[Figure C-10: Temporal Information for Subject 10]
[Figure C-11: Temporal Information for Subject 11]
[Figure C-12: Temporal Information for Subject 12]
Appendix D
BCS-Subjects Email
From: Luke Urban <lsurban@mit.edu>
To: bcs-subjects@mollylab-1.mit.edu
Date: Mon, Feb 8, 2010 at 6:07 PM
Subject: EEG subjects needed
Hi,
The Sinha Lab is looking for volunteers for an EEG experiment this
week and next. Sessions will last roughly 1 hour, and you'll
be paid $10 for participating.
Our only requirements are that you have normal or corrected to normal
vision and normal hearing.
Be prepared to have your hair get a little damp, since we have to keep
an electrolyte solution on the EEG net at all times in order for a
recording to be transferred from your head to the net.
If you are interested in volunteering, please reply to
lsurban@mit.edu.
The lab is in 46-4089. If you need directions, they can be provided
when your appointment is confirmed.
Thanks,
Luke Urban
Sinha Lab