The methods reported in our recently published paper (1) differ slightly from the methods that were actually applied, and this note clarifies the discrepancy. After re-analysis following the originally reported methods, the main conclusions of the paper remain unaffected. However, some details of the results differ, and we therefore report the updated results here and highlight their differences from the original analysis.
The affected step occurred during adjustment of the extracted time series prior to segmentation and concatenation, functional connectivity estimation, and classification analyses. When extracting time series for DCM or PPI functional connectivity analyses, it is conventional to adjust each time series by subtracting out effects that are modeled by confound regressors in the GLM design matrix (e.g. mean signal, motion parameters, global signal). In SPM, this is accomplished by creating an F-contrast that models all conditions of interest in each subject's design matrix. The modeled effects of regressors whose columns correspond to the null space of this F-contrast matrix (i.e. columns that consist of all zeros in the F-contrast matrix) are then removed from each voxel's time series during extraction. When batching the process, the user must supply the index of each subject's effects-of-interest F-contrast to the VOI extraction code. For more information and details, see (2).
In our case (1), the effects-of-interest contrast was [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], which modeled 4 effects of interest (masked and unmasked, fearful and neutral faces), leaving 9 columns of zeros corresponding to 6 motion regressors, grey- and white-matter signal, and mean signal across the whole run. However, the index that was actually passed to the VOI extraction code corresponded instead to the 1st T-contrast ([1 -1], the contrast of unmasked fearful vs. neutral faces).
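Concretely, for the design described above (4 condition regressors followed by 9 confound regressors), the effects-of-interest F-contrast and its all-zero columns look like this. This is a schematic numpy sketch of the contrast structure, not the actual SPM batch code:

```python
import numpy as np

n_conditions, n_confounds = 4, 9  # 4 face conditions; 6 motion + GM + WM + mean

# effects-of-interest F-contrast: identity over the condition columns,
# all zeros over the confound columns
F = np.hstack([np.eye(n_conditions), np.zeros((n_conditions, n_confounds))])

# the all-zero columns of the contrast identify the confounds whose
# modeled effects are removed during VOI extraction
confound_cols = np.where(~F.any(axis=0))[0]
```

Passing the index of a [1 -1] T-contrast instead changes which columns fall in this null space, and hence which effects are subtracted from the extracted series.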
We reran the analysis, correcting each region's time series using the conventional F-contrast. The extracted time series indeed differed from those adjusted according to the T-contrast (the mean correlation in full time series across 270 nodes and 38 subjects was 0.87; the std across the mean correlations from each subject was 0.04, while the std across all correlations was 0.35). While maximum decoding accuracies were slightly lower under the re-analysis, the main conclusions of the paper remain unaffected. Maximum accuracies of 86-96% (p < 0.0001) were achieved with the top 10-20 features (Figure 1A). The top 16 are shown in neuroanatomical display in Figure 1B-D.
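The comparison statistics quoted above can be computed as follows. This is a sketch with hypothetical array shapes (subjects x nodes x time), not the original analysis code:

```python
import numpy as np

def compare_extractions(ts_a, ts_b):
    """Node-wise correlation between two extractions of the same data.

    ts_a, ts_b : (n_subjects, n_nodes, T) arrays of time series.
    Returns the grand mean correlation, the std across per-subject
    mean correlations, and the std across all node-wise correlations.
    """
    n_sub, n_nodes, _ = ts_a.shape
    r = np.empty((n_sub, n_nodes))
    for s in range(n_sub):
        for n in range(n_nodes):
            r[s, n] = np.corrcoef(ts_a[s, n], ts_b[s, n])[0, 1]
    return r.mean(), r.mean(axis=1).std(), r.std()
```

Applied to the F-contrast- and T-contrast-adjusted series, the three return values correspond to the 0.87, 0.04, and 0.35 figures reported above.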
There were also some differences in the most informative connections (differing connections are shaded grey in Table 1). Of note, the connection between the Right Temporal Occipital Fusiform Cortex and the Right Amygdala, consistent with previous evidence (3), became one of the top 16 informative connections (t-value original/current analysis = 2.6/4.9).
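The feature ranking underlying Table 1 and Figure 1 orders connections by the absolute t-score of the F-minus-N difference across training subjects. A minimal sketch, with hypothetical data shapes and names:

```python
import numpy as np

def top_k_connections(conn_f, conn_n, k=16):
    """Rank connections by the absolute paired t-score of F minus N.

    conn_f, conn_n : (n_subjects, n_connections) per-subject functional
    connectivity values for the fearful (F) and neutral (N) conditions.
    Returns the indices of the k connections with the largest |t|.
    """
    d = conn_f - conn_n
    # one-sample t on the paired differences, connection by connection
    t = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(d.shape[0]))
    return np.argsort(-np.abs(t))[:k]
```

In the paper's cross-validated setting this ranking is recomputed within each training set, which is why a feature can appear in fewer than 38 of the 38 folds (the FSets column of Table 1).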
(1) Pantazatos SP, Talati A, Pavlidis P, Hirsch J (2012) Decoding Unattended Fearful Faces with Whole-Brain Correlations:
An Approach to Identify Condition-Dependent Large-Scale Functional Connectivity. PLoS Comput Biol 8(3): e1002441.
doi:10.1371/journal.pcbi.1002441
(2) https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind03&L=SPM&P=R400564&I=3&d=No+Match%3BMatch%3BMatches
(3) Vuilleumier P, Richardson MP, Armony JL, Driver J, Dolan RJ (2004) Distant influences of amygdala lesion on visual
cortical activation during emotional face processing. Nat Neurosci 7: 1271–1278.
Table 1. F vs. N: Top 16 features when re-adjusting time series prior to functional connectivity calculation and classification analyses. Consensus features are shown in bold, while grey shading depicts connections that differ from the top 25 displayed in Table 1 of the original text (1).

| Edge label | Mean R F | Mean R N | T-value | SVM weight | FSets |
|---|---|---|---|---|---|
| Left_Thalamus_PC2 - Left_Planum_Polare_PC1 | 0.061307 | -0.082794 | 4.3082 | 1.7364 | 38 |
| Right_Lateral_Occipital_Cortex_inferior_division_PC2 - Left_Juxtapositional_Lobule_Cortex_Supp_Motor_cortex_PC1 | 0.09372 | -0.071973 | 4.3893 | 1.3856 | 38 |
| Right_Angular_Gyrus_PC1 - Left_Hippocampus_PC2 | 0.089644 | -0.043317 | 4.7277 | 1.3773 | 38 |
| Vermis_4_5_PC1 - Right_Putamen_PC1 | -0.052167 | 0.068933 | -4.0958 | -1.1339 | 17 |
| Right_Central_Opercular_Cortex_PC1 - Left_Planum_Polare_PC1 | 0.101 | 0.24164 | -4.141 | -1.1261 | 25 |
| Right_Amygdala_PC2 - Left_Putamen_PC1 | 0.018875 | 0.14839 | -4.7533 | -1.116 | 38 |
| Left_Supramarginal_Gyrus_posterior_division_PC2 - Left_Lateral_Occipital_Cortex_inferior_division_PC2 | 0.013263 | 0.15074 | -3.9791 | -1.112 | 11 |
| Right_Inferior_Temporal_Gyrus_posterior_division_PC1 - Cerebelum_6_R_PC2 | -0.02189 | 0.11678 | -4.521 | -1.1031 | 38 |
| Left_Ventral_Lateral_Occipital_Cortex_superior_division_PC2 - Left_Accumbens_PC2 | 0.039594 | -0.10233 | 4.6239 | 1.0477 | 38 |
| Right_Ventral_Lateral_Occipital_Cortex_superior_division_PC2 - Right_Middle_Temporal_Gyrus_posterior_division_PC2 | 0.041073 | -0.063035 | 5.3268 | 1.0433 | 38 |
| Left_Middle_Temporal_Gyrus_anterior_division_PC2 - Left_Lateral_Occipital_Cortex_inferior_division_PC1 | -0.028552 | 0.062753 | -4.1191 | -1.0078 | 23 |
| Left_Temporal_Occipital_Fusiform_Cortex_PC2 - Cerebelum_8_L_PC1 | 0.040634 | -0.098713 | 4.2072 | 1.0032 | 35 |
| Vermis_7_PC2 - Midbrain_PC1 | 0.12648 | -0.001608 | 4.5083 | 0.99178 | 38 |
| Right_Temporal_Occipital_Fusiform_Cortex_PC1 - Right_Amygdala_PC1 | 0.23713 | 0.10776 | 4.9032 | 0.90304 | 38 |
| Left_Temporal_Fusiform_Cortex_anterior_division_PC1 - Left_Paracingulate_Gyrus_PC1 | -0.14323 | -0.016626 | -4.2079 | -0.77203 | 34 |
| Left_Superior_Frontal_Gyrus_PC2 - Left_Cingulate_Gyrus_posterior_division_PC2 | 0.069026 | -0.074108 | 4.4018 | 0.67325 | 38 |
Erratum Figure 1: Large-scale functional connectivity discriminates between unattended, conscious
processing of fearful and neutral faces (re-analysis as per originally described methods): (A) Decoding
accuracy when classifying F vs. N as a function of the number of features (1 to 40) included, ranked in
descending order by their absolute t-score. Maximum accuracy for F vs. N classification (96%, p < 0.002,
corrected) was achieved when learning was based on the top 16 features in each training set. Mean accuracy
scores for shuffled data are plotted along the bottom, with error bars representing standard deviation about the
mean. Posterior (B), ventral (C) and right lateralized (D) anatomical representation of the top 16 features when
classifying supraliminal fearful vs. supraliminal neutral face conditions (F vs. N). The thalamus (large red
sphere in the center of each view) is the largest contributor of connections that differentiate F from N. Red
indicates correlations that are greater in F, and blue represents correlations that are greater in N. For display
purposes, the size of each sphere is scaled according to the sum of the SVM weights of each node’s
connections, while the color of each sphere is set according to the sign of this value; positive sign, red, F > N
and negative sign, blue, N > F. In addition, the thickness of each connection was made proportional to its SVM
weight.