
On the Quantization of time-varying phase synchrony patterns into distinct Functional Connectivity Microstates (FCμstates) in a multi-trial visual ERP paradigm
S. I. Dimitriadis, N. A. Laskaris, A. Tzelepi
1. Visualization of Functional Connectivity Graphs related to FCμstates
The detected FCμstates are illustrated in S1 and S2 for left and right presentation of the stimulus, respectively. To represent the mined patterns of the brain's organization, we adopted a novel visualization scheme (S1, S2: "topographies within topography", Nolte et al., 2003), in which the relative position of a single sensor u is used to embed a whole brain topography that represents the PLV(u,v) measurements related to it. Each minute topography depicts the phase coupling from a particular sensor to all other destinations (sensor locations). Each connectivity pattern is shown therein in a topographic manner, along with a GE value characterizing the integration of the underlying network based on phase synchrony. S1 and S2 correspond to Figs. 5 and 6 of the manuscript.
S1. Topographies of FCμstates with the related global efficiency when the checkerboard pattern appeared in the left side of the screen.
S2. Topographies of FCμstates with the related global efficiency when the checkerboard pattern appeared in the right side of the screen.
2. Characterizing FCμstates based on global and local efficiency
Each FCμstate reflects a complex network instantiation (of the interacting cortical areas), and it is therefore important to characterize it with appropriate network metrics so as to point out the prevailing trends of functional organization. To this end, the column vector T_j is transformed back to its (N_sensor × N_sensor) tabular counterpart W_T and two popular metrics are adopted. The first is the global efficiency (GE):
$$GE_{W_T} = \frac{1}{N_{sensor}\,(N_{sensor}-1)} \sum_{i \in N} \sum_{\substack{j \in N \\ j \neq i}} d_{ij}^{-1} \qquad (S.1)$$
where d_ij denotes the absolute (shortest) path length between nodes i and j. GE is therefore the inverse of the harmonic mean of the shortest path lengths between all pairs of nodes and reflects the efficiency of parallel information transfer in the network (Achard and Bullmore, 2007).
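As an illustration, eq. (S.1) could be computed from a weighted PLV matrix along the following lines. This is a minimal Python sketch, assuming that path lengths are obtained from inverse-PLV edge costs via the Floyd-Warshall algorithm; the function names and the choice of edge cost are illustrative, not taken from the authors' pipeline.

```python
import numpy as np

def shortest_path_lengths(w):
    """All-pairs shortest path lengths (Floyd-Warshall) for a weighted graph
    whose edge costs are the inverse of the PLV weights (stronger coupling
    -> shorter path)."""
    n = w.shape[0]
    with np.errstate(divide='ignore'):
        d = np.where(w > 0, 1.0 / w, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def global_efficiency(w):
    """Eq. (S.1): average inverse shortest path length over all node pairs."""
    n = w.shape[0]
    d = shortest_path_lengths(w)
    inv_d = 1.0 / d[~np.eye(n, dtype=bool)]   # off-diagonal entries only (i != j)
    return inv_d.sum() / (n * (n - 1))
```

For a tabular FCμstate W_T, the GE value reported in S1/S2 would then correspond to `global_efficiency(W_T)` under these assumptions.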
The second is the local efficiency (LE):

$$LE_{W_T} = \frac{1}{N_{sensor}} \sum_{i \in N} \frac{\sum_{j,h \in G_i,\; j \neq h} \big(d_{jh}^{\,G_i}\big)^{-1}}{k_i\,(k_i - 1)} \qquad (S.2)$$
where k_i denotes the total number of spatial neighbors (first-level neighbors) of the i-th node and d_jh^{G_i} denotes the shortest absolute path length between every possible pair of nodes in the neighborhood G_i of the current (i-th) node. LE can be understood as a measure of the fault tolerance of the network, indicating how well each subgraph G_i exchanges information when the indexed node is eliminated (Achard and Bullmore, 2007). It captures the local connectivity around the node, which in this study corresponds to 1st-order (immediate) and 2nd-order neighbors (Costa and Silva, 2006).
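Under the same assumptions, eq. (S.2) could be sketched as follows, reusing the `shortest_path_lengths` helper from the previous snippet. The optional `neighborhoods` argument is a hypothetical hook for passing the 1st- or 2nd-order spatial neighborhoods discussed in the text; by default the directly connected nodes are used.

```python
import numpy as np

def local_efficiency(w, neighborhoods=None):
    """Eq. (S.2): efficiency of each node's neighborhood subgraph G_i,
    averaged over all nodes."""
    n = w.shape[0]
    total = 0.0
    for i in range(n):
        if neighborhoods is not None:
            nbrs = np.asarray(neighborhoods[i])
        else:
            nbrs = np.flatnonzero(w[i] > 0)      # 1st-order (immediate) neighbors
        nbrs = nbrs[nbrs != i]
        k = len(nbrs)
        if k < 2:
            continue                             # such a node contributes zero
        sub = w[np.ix_(nbrs, nbrs)]              # subgraph G_i restricted to the neighbors
        d = shortest_path_lengths(sub)           # helper from the GE sketch above
        inv_d = 1.0 / d[~np.eye(k, dtype=bool)]
        total += inv_d.sum() / (k * (k - 1))
    return total / n
```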
To further illustrate and compare the FCμstates related to the two directions of the stimulus, we represented them in a common 2D space where the first dimension corresponds to GE and the second to LE. Based on the adopted pair of network metrics, neither plot can differentiate a large portion of the FCμstates between the two directions (S3). Still, LE based on 1st-order neighbors succeeded in revealing a few FCμstates specific to the left/right direction, marked with 16 for the left direction and 9, 13 for the right direction. In conclusion, the network metric adopted for characterizing the hierarchical organization of FCμstates (LE), even in combination with GE, could not extensively distinguish them in terms of the directionality of the stimulus.
S3. Representation of the FCμstates of both conditions employing two network metrics: global efficiency and local efficiency, with hierarchical definitions based on 1st-order (a) and 2nd-order (b) neighbors (blue/red circles refer to the left/right direction and the numbers to the FCμstates as presented in Figures 5 and 6).
3. Quantifying the deterministicity of symbolic time series
We applied an information-theoretic measure, the entropy reduction rate, to the symbol sequences in order to study their underlying dynamics, i.e. the dynamics of stepwise transitions from one functional microstate to another.
Conditional entropy Hs is defined as
$$H_s = -\sum_{s'=1}^{k_0} P(s'|s)\,\log P(s'|s) \qquad (S.3)$$

where P(s'|s) is the probability of the occurrence of symbol s' immediately after the occurrence of symbol s ($\sum_{s'} P(s'|s) = 1$). This measure represents the uncertainty not resolved by the
occurrence of the preceding symbol. Entropy reduction rate hred is defined as
$$h_{red} = \frac{H - \sum_{s=1}^{k_0} P(s)\,H_s}{H} \qquad (S.4)$$

where H denotes the (unconditional) entropy of the symbol distribution. The entropy reduction rate represents how large a portion of the uncertainty about the next symbol is, on average, resolved by the occurrence of the preceding symbol (Schack, 2004; Ito et al., 2007).
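As a minimal sketch, eqs. (S.3) and (S.4) could be estimated from a sequence of FCμstate labels as follows, assuming that the transition probabilities P(s'|s) are obtained by simple counting and that H is the Shannon entropy of the label distribution; the function name is illustrative.

```python
import numpy as np
from collections import Counter

def entropy_reduction_rate(symbols):
    """Estimate h_red (eq. S.4) from a 1-D sequence of microstate labels."""
    symbols = np.asarray(symbols)
    states = np.unique(symbols)
    # Marginal probabilities P(s) and Shannon entropy H of the label distribution.
    p = np.array([np.mean(symbols == s) for s in states])
    H = -np.sum(p * np.log(p))
    if H == 0.0:
        return 0.0                                  # degenerate case: a single label
    # Conditional entropies H_s from empirical transition counts (eq. S.3).
    H_cond = np.zeros(len(states))
    for idx, s in enumerate(states):
        followers = symbols[1:][symbols[:-1] == s]  # labels occurring right after s
        if followers.size:
            counts = np.array(list(Counter(followers).values()), dtype=float)
            q = counts / counts.sum()               # P(s' | s)
            H_cond[idx] = -np.sum(q * np.log(q))
    # Eq. (S.4): share of the uncertainty resolved by the preceding symbol.
    return (H - np.sum(p * H_cond)) / H
```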
In the present study, we studied short-term dynamics with the entropy reduction rate h_red. This measure takes a value between 0 and 1; larger values indicate higher (stepwise) predictability. In Table 1, averaged h_red values are shown for each subject, for the two directions of the stimulus and for both the baseline period and the period after the onset of the stimulus. To summarize across subjects, the pairs of h_red values for the baseline and post-stimulus periods were compared via a Wilcoxon test. Setting the significance level at P = 0.001, we filtered out the non-significant comparisons. The statistical analysis of the h_red values implies higher predictability after the onset of the stimulus compared to the control condition.
Table 1. h_red values averaged across trials for each subject, for the two comparisons (baseline period vs. after the onset of the stimulus), for left and right stimulus presentation. *p < 0.001

          Left                                              Right
          Baseline period    After stimulus onset           Baseline period    After stimulus onset
Subj1     0.061 ± 0.017      0.2321 ± 0.043 *               0.097 ± 0.031      0.3543 ± 0.078 *
Subj2     0.064 ± 0.021      0.2514 ± 0.063 *               0.101 ± 0.034      0.3213 ± 0.072 *
Subj3     0.074 ± 0.027      0.2871 ± 0.083 *               0.121 ± 0.042      0.3421 ± 0.089 *
Subj4     0.085 ± 0.029      0.2913 ± 0.046 *               0.071 ± 0.051      0.2805 ± 0.091 *
Subj5     0.095 ± 0.031      0.2573 ± 0.074 *               0.083 ± 0.089      0.2931 ± 0.107 *
Subj6     0.081 ± 0.023      0.3127 ± 0.069 *               0.093 ± 0.055      0.2967 ± 0.091 *
Subj7     0.092 ± 0.035      0.3023 ± 0.072 *               0.103 ± 0.065      0.3214 ± 0.098 *
Subj8     0.088 ± 0.034      0.3356 ± 0.065 *               0.121 ± 0.061      0.3151 ± 0.092 *
Subj9     0.101 ± 0.028      0.3289 ± 0.073 *               0.091 ± 0.082      0.3013 ± 0.083 *
Subj10    0.081 ± 0.043      0.3214 ± 0.082 *               0.113 ± 0.091      0.3121 ± 0.102 *
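For reference, the baseline vs. post-stimulus comparison described above could be reproduced along these lines, assuming the paired h_red values (per trial, or per subject) are available as two arrays; `scipy.stats.wilcoxon` is used here as a stand-in for whatever Wilcoxon implementation was actually employed.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_hred(hred_baseline, hred_poststim, alpha=0.001):
    """Paired Wilcoxon signed-rank test between baseline and post-stimulus
    h_red values, with the significance threshold used in the text."""
    stat, p = wilcoxon(hred_baseline, hred_poststim)
    return p, p < alpha

# Illustrative call with random numbers standing in for one subject's trials.
rng = np.random.default_rng(0)
p_value, significant = compare_hred(rng.normal(0.08, 0.02, 40),
                                    rng.normal(0.30, 0.06, 40))
```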
4. Volume conduction effects on functional connectivity
Volume conduction is an important issue for EEG analysis. Electrical currents spread nearly instantaneously throughout any volume. Because of the physics of conservation (the law of conservation of energy) there is a balance between negative and positive potentials at each moment in time, with slight delays at the speed of light (Feynman, 1963). Sudden synchronous synaptic potentials on the dendrites of a cortical pyramidal cell result in a change in the local electrical potential referred to as a dipole. Depending on the solid angle between the source and the sensor (i.e., electrode), the polarity and shape of the electrical potential differ. Volume conduction is an electrical field produced at near the speed of light by an electrical dipole and thus exhibits approximately zero phase lag everywhere in the field (Nunez, 1981). Zero phase delay is one of the important properties of volume conduction: when separated generators exhibit a stable phase difference of, for example, 30 degrees, this cannot be explained by volume conduction. We examined the distribution of instantaneous phase differences for many pairs of electrodes (in particular those with the strongest phase coupling) and confirmed that it was not centered around 0 or ±π. Moreover, checking that the phase distribution does not peak around 0 or ±π is no guarantee that the measure is unaffected by volume conduction, but it does make it less sensitive to it (Stam, 2007b; Daffertshofer and Stam, 2010). Theoretically, large phase differences can be produced by volume conduction when there is a deep and temporally stable tangential dipole that has a positive and a negative pole with an inverse electrical field at opposite ends of the human skull. In this instance, the phase difference is maximal at the spatial extremes and approximates zero halfway between the two ends of the standing dipole. This is a special situation that is sometimes present in evoked potential studies. For volume conduction to explain the results of the present study, there must be a single standing dipole that exhibits a zero phase delay at its midpoint and oscillates and rotates differentially from anterior-to-posterior and posterior-to-anterior. To test this particular volume conduction model, we compared phase difference values, as computed via the PLV estimator, in the anterior-to-posterior and the posterior-to-anterior direction. The EEG phase differed between the two directions even though the inter-electrode distances were the same, thus further disconfirming a standing dipole model (Thatcher et al., 2008).
The observation in standard ERP microstate analyses, and also in multichannel time-frequency EEG analyses, is that there is a huge amount of 0 or 180 degree phase synchronization.
To address the above issues, we plotted in a histogram all the phase difference angles derived from the left attentive task, taking into account the whole group of subjects and the entire set of trials (see S4). From the distribution of phase difference angles, we note that less than 6% of the phase difference angles were centered near 0 and 180 degrees, and almost 4% were centered near a ±90 degree difference.
S4. Distribution of the phase angle between all pairs of sensors derived, collectively, for the entire set of trials and for all subjects (from EEG traces associated with the left attentive task).
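A short sketch of the kind of check reported above, assuming the pooled phase-difference angles (in radians) are gathered in a single array; the ±5 degree tolerance window around 0, ±π and ±π/2 is an arbitrary choice for illustration.

```python
import numpy as np

def fraction_near(phase_diff, target, tol=np.deg2rad(5)):
    """Fraction of phase-difference angles within +/- tol of a target angle,
    with wrap-around handled on the circle."""
    d = np.angle(np.exp(1j * (phase_diff - target)))   # wrapped difference in (-pi, pi]
    return np.mean(np.abs(d) < tol)

# Pooled phase differences across sensor pairs, trials and subjects (placeholder data).
phase_diff = np.random.uniform(-np.pi, np.pi, 100_000)
near_zero_or_pi = fraction_near(phase_diff, 0.0) + fraction_near(phase_diff, np.pi)
near_half_pi = fraction_near(phase_diff, np.pi / 2) + fraction_near(phase_diff, -np.pi / 2)
```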
Appendix 1. Phase Locking Index
PLV computation is based on estimates of instantaneous phase obtained from the convolution of Morlet
wavelets with the EEG signals xi(n) filtered within one of the frequency bands under study. The resulting Dyadic
Wavelet Transform of a discrete sequence x(n) sampled with time spacing δt, consisting of N data points
(n=0,1,..N-1) and with frequency scale s is denoted as:
$$W_X(s,n) = \sqrt{\frac{\delta t}{s}} \sum_{n'=0}^{N-1} x(n')\,\psi_0^{*}\!\left(\frac{(n'-n)\,\delta t}{s}\right) \qquad (A.1)$$
where '*' denotes complex conjugation, the convolution being taken with the consecutive scaled and translated versions of the principal wavelet function, the complex Morlet wavelet ψ0(n):

$$\psi_0(n) = \pi^{-1/4}\, e^{\,i\omega_0 n}\, e^{-n^2/2} \qquad (A.2)$$

where ω0 is the nondimensional frequency, here taken to be 6 (Torrence & Compo, 1998).
A set of different scales s is implied in eq. (A.1). Writing the scales as
fractional powers of two yields the
following:
$$s_j = s_0\, 2^{\,j\,\delta j}, \qquad j = 0, 1, \ldots, J, \qquad J = \delta j^{-1}\,\log_2\!\left(N\,\delta t / s_0\right) \qquad (A.3)$$
where so is the smallest resolvable scale and J the largest scale; our analysis starts by estimating the optimal δj for
each band/condition.
The instantaneous phase φ_{X_i}(s,n) is then calculated as follows:

$$\varphi_{X_i}(s,n) = \arctan\frac{\mathrm{imag}\big(W_{X_i}(s,n)\big)}{\mathrm{real}\big(W_{X_i}(s,n)\big)} \qquad (A.4)$$
The successive phase values (originally ranging in [−π, π]) underwent an unwrapping transform to get rid of discontinuities (in accordance with the phase correction algorithm described in Freeman and Rogers, 2002).
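The following numpy sketch implements eqs. (A.1)-(A.4) by direct convolution, purely for illustration; an FFT-based implementation as in Torrence and Compo (1998) would be preferable in practice, and the variable names are our own.

```python
import numpy as np

OMEGA0 = 6.0    # nondimensional Morlet frequency omega_0 (eq. A.2)

def morlet_phase(x, scale, dt):
    """Instantaneous (unwrapped) phase at one scale: eqs. (A.1), (A.2), (A.4)."""
    n = np.arange(len(x))
    # psi_0((n' - n) * dt / s) evaluated for every pair (n, n'); O(N^2) memory.
    arg = (n[None, :] - n[:, None]) * dt / scale
    psi = np.pi ** -0.25 * np.exp(1j * OMEGA0 * arg) * np.exp(-arg ** 2 / 2)
    W = np.sqrt(dt / scale) * (x[None, :] * np.conj(psi)).sum(axis=1)   # eq. (A.1)
    return np.unwrap(np.angle(W))                                       # eq. (A.4) + unwrapping

def dyadic_scales(s0, dj, N, dt):
    """Eq. (A.3): scales as fractional powers of two."""
    J = int(np.floor(np.log2(N * dt / s0) / dj))
    return s0 * 2.0 ** (dj * np.arange(J + 1))
```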
The single-trial version of the PLV measure for a pair of signals x_k(n) and x_l(n) recorded at different sites utilizes a window of length 2W+1 samples (here W = 10 samples of the original EEG signals) centered at the n-th data point (Mormann et al., 2000):
$$PLV_{trial}\big(x_k(f_n),\, x_l(f_n)\big) = \frac{1}{(2W+1)\,\Delta_s}\left|\sum_{n'=n-W}^{n+W}\;\sum_{s=s_1}^{s_2} \exp\!\Big(i\big(\varphi_{x_k}(s,f_{n'}) - \varphi_{x_l}(s,f_{n'})\big)\Big)\right| \qquad (A.5)$$
where s1/s2 refer to the scale limits, Δ_s denotes the corresponding range of scales and f_n the filtered sample in the frequency band under study (Lachaux et al., 2000).
The multi-trial version of the PLV measure is computed by averaging the instantaneous PLVs across trials:

$$PLV_{average} = \frac{1}{N_{trials}} \sum_{trial\#=1}^{N_{trials}} PLV_{trial\#} \qquad (A.6)$$
By applying the Morlet complex wavelet transform to each single-trial signal separately, the instantaneous phase φ_u^{trial}(t,f) (for all the scales corresponding to the 4-10 Hz frequency range) was estimated for each sensor u (Dimitriadis et al., 2010; Valencia et al., 2008). The PLV attached to any pair (u,v) of sensors is inversely related to the variability of the phase differences across trials, where N_trials is the total number of trials. If the phase difference varies little across trials, its distribution is concentrated around a preferred value and the PLV is close to one. The above PLV measurements were integrated within the frequency range under study (i.e. 4-10 Hz), resulting in a latency-dependent time course PLV(u,v)(t) for every pair of sensors. In order to verify that PLV captures dynamical changes of actual functional dependencies, we examined the distribution of instantaneous phase differences for many pairs of electrodes (in particular those with the strongest phase coupling) and confirmed that it was not centered around 0 or ±π (Nolte et al., 2008; Stam et al., 2007b; Daffertshofer and Stam, 2007).
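A corresponding sketch of eqs. (A.5) and (A.6), assuming the instantaneous phases per scale have already been computed (e.g. with the routine above) and are stacked into arrays of shape (n_scales, n_samples); the names and array layout are illustrative assumptions.

```python
import numpy as np

def plv_single_trial(phase_k, phase_l, W=10):
    """Eq. (A.5): single-trial PLV time course for one pair of sensors.
    phase_k, phase_l: instantaneous phases, shape (n_scales, n_samples)."""
    n_scales, n_samples = phase_k.shape
    z = np.exp(1j * (phase_k - phase_l))              # unit phasors of the phase differences
    plv = np.full(n_samples, np.nan)
    for n in range(W, n_samples - W):
        window = z[:, n - W:n + W + 1]                # (2W+1) samples across all scales
        plv[n] = np.abs(window.sum()) / window.size   # normalized by (2W+1)*Delta_s
    return plv

def plv_multi_trial(plv_trials):
    """Eq. (A.6): average the single-trial PLV time courses across trials."""
    return np.nanmean(np.asarray(plv_trials), axis=0)
```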
Appendix 2. Neural Gas Algorithm
Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten (Martinetz and Schulten, 1991). The neural gas is a simple algorithm for finding optimal data representations based on feature vectors. The algorithm was coined "neural gas" because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied where data compression or vector quantization is an issue, for example in speech recognition (Angelopoulou et al., 2005), image processing (Curatelli and Mayora-Iberra, 2000) or pattern recognition.
Given a probability distribution P(x) of data vectors x and a finite number of feature vectors w_i, i = 1,...,N, at each time step t a data vector randomly chosen from P is presented. Subsequently, the distance order of the feature vectors with respect to the given data vector x is determined: i_0 denotes the index of the closest feature vector, i_1 the index of the second closest feature vector, and so on up to i_{N-1}, the index of the feature vector most distant to x. Each feature vector (k = 0,...,N−1) is then adapted according to
t 1  wt   e k /   ( x  wt )
wik
ik
ik (A.7)
with ε as the adaptation step size and λ as the so-called neighborhood range. ε and λ are reduced with increasing t. After sufficiently many adaptation steps the feature vectors cover the data space with minimum representation error (Martinetz and Schulten, 1991). The adaptation step of the neural gas can be interpreted as gradient descent on a cost function. By adapting not only the closest feature vector but all of them, with a step size decreasing with increasing distance order, a much more robust convergence of the algorithm is achieved compared to k-means clustering. The neural gas model neither deletes nodes nor creates new ones.
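A compact numpy sketch of the adaptation rule of eq. (A.7); the exponentially decaying schedules for ε and λ, as well as the initial values, are illustrative assumptions rather than the settings used in the manuscript.

```python
import numpy as np

def neural_gas(data, n_codebook=10, n_steps=20000,
               eps=(0.5, 0.005), lam=(10.0, 0.01), seed=0):
    """Fit neural-gas codebook vectors to `data` (n_samples x n_features)."""
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_codebook, replace=False)].copy()
    for t in range(n_steps):
        x = data[rng.integers(len(data))]                    # random sample drawn from P(x)
        # Rank every codebook vector by its distance to x (rank 0 = closest).
        k = np.argsort(np.argsort(np.linalg.norm(w - x, axis=1)))
        # Annealed step size epsilon(t) and neighborhood range lambda(t).
        eps_t = eps[0] * (eps[1] / eps[0]) ** (t / n_steps)
        lam_t = lam[0] * (lam[1] / lam[0]) ** (t / n_steps)
        # Eq. (A.7): move every vector towards x, weighted by its distance rank.
        w += eps_t * np.exp(-k / lam_t)[:, None] * (x - w)
    return w
```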
References
1) Achard S, Salvador R, Whitcher B, Suckling J, Bullmore E (2006) A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. J Neurosci 26:63-72.
2) Angelopoulou A, Psarrou A, Garcia RJ, Revett K (2005) Automatic landmarking of 2D medical shapes using the growing neural gas network. In: Liu Y, Jiang T, Zhang C (eds) Computer Vision for Biomedical Image Applications: First International Workshop, CVBIA 2005, Beijing, China, October 21, 2005, Proceedings. Springer, p 210. DOI:10.1007/11569541_22. ISBN 978-3-540-29411-5.
3) Costa LDF, Silva FN (2006) Hierarchical characterization of complex networks. Journal of Statistical Physics 125:841-876.
4) Curatelli F, Mayora-Iberra O (2000) Competitive learning methods for efficient vector quantizations in a speech recognition environment. In: Cairó O, Sucar LE, Cantú-Ortiz FJ (eds) MICAI 2000: Advances in Artificial Intelligence, Mexican International Conference on Artificial Intelligence, Acapulco, Mexico, April 2000, Proceedings. Springer, p 109. ISBN 978-3-540-67354-5.
5) Daffertshofer A, Stam CJ (2007) Influences of volume conduction on phase distributions. Int Congr Ser 1300:209-212.
6) Dimitriadis SI, Laskaris NA, Tsirka V, Vourkas M, Micheloyannis S, Fotopoulos S (2010) Tracking brain dynamics via time-dependent network analysis. Journal of Neuroscience Methods 193(1):145-155.
7) Feynman RP, Leighton RB, Sands M (1963) The Feynman Lectures on Physics, vols. I and II. Addison-Wesley, Reading, MA.
8) Freeman WJ, Rogers LJ (2002) Fine temporal resolution of analytic phase reveals episodic synchronization by state transitions in gamma EEGs. J Neurophysiol 87:937-945.
9) Ito J, Nikolaev AR, van Leeuwen C (2007) Dynamics of spontaneous transitions between global brain states. Human Brain Mapping 25:904-913.
10) Lachaux JP, Rodriguez E, Le Van Quyen M, Lutz A, Martinerie J, Varela FJ (2000) Studying single-trials of phase synchronous activity in the brain. Int J Bifurcat Chaos 10:2429-2439.
11) Martinetz T, Schulten K (1991) A neural-gas network learns topologies. In: Kohonen T et al. (eds) Proceedings of the International Conference on Artificial Neural Networks, pp 397-402. North-Holland, Amsterdam.
12) Martinetz T, Schulten K (1994) Topology representing networks. Neural Netw 7(3):507-522.
13) Mormann F, Lehnertz K, David P, Elger CE (2000) Mean phase coherence as a measure for phase synchronization and its application to the EEG of epileptic patients. Physica D 144:358-369.
14) Nolte G, Ziehe A, Nikulin VV, Schlögl A, Krämer N, Brismar T, Müller KR (2008) Robustly estimating the flow direction of information in complex physical systems. Physical Review Letters 100(23):234101.
15) Nunez P (1981) Electric Fields of the Brain. Oxford University Press, New York.
16) Schack B (2004) How to construct a microstate-based alphabet for evaluating information processing in time. Int J Bifurcat Chaos 14:793-814.
17) Stam CJ, Nolte G, Daffertshofer A (2007b) Phase lag index: Assessment of functional connectivity from multichannel EEG and MEG with diminished bias from common sources. Hum Brain Mapp 28:1178-1193.
18) Thatcher RW, North DM, Biver CJ (2008) Development of cortical connections as measured by EEG coherence and phase delays. Hum Brain Mapp 29(12):1400-1415.
19) Torrence C, Compo GP (1998) A practical guide to wavelet analysis. Bull Am Meteorol Soc 79:61-78.
20) Valencia M, Martinerie J, Dupont S, Chavez M (2008) Dynamic small-world behavior in functional brain networks unveiled by an event-related networks approach. Phys Rev E 77:050905(R).