Managing and Optimizing
fMRI Pipelines
Stephen C. Strother, Ph.D.
Rotman Research Institute, Baycrest Centre
http://www.rotman-baycrest.on.ca/rotmansite/home.php
& Medical Biophysics, University of Toronto
© S. C. Strother, 2006
Overview
• Background
− data-driven statistics, pipelines and meta-models
• fMRI File Management: NIfTI and the DFWG
• Why optimize pipeline meta-models?
− the Functional Imaging Analysis Contest (FIAC) experience
• Seven meta-model optimization frameworks
• Results with the 7th framework: NPAIRS
• Data-analysis choices in pipeline meta-models:
− General linear model (GLM)
− Canonical variates analysis (CVA)
• Pipeline-driven, between-subject heterogeneity
• Recap: What have we learnt?
© S. C. Strother, 2006
The Statistician and the Scientist
A statistician and a scientist are going to be
executed, and the executioner asks each for their
last request.
When asked, the statistician says that he’d like to
give one last lecture on his theory of statistics.
When the scientist is asked, he says, “I’d like to be
shot first!”
Rob Tibshirani @ S-Plus Users Conference, Oct. 1999.
http://www-stat.stanford.edu/~tibs
© S. C. Strother, 2006
Why Bother with Data-Driven Statistics?
Philosophy
• “All models (pipelines) are wrong, but some are useful!”
− “All models are wrong.” G.E. Box (1976), quoted by Marks Nester in “An
applied statistician’s creed,” Applied Statistics, 45(4):401-410, 1996.
• A goal is to quantify and optimize utility!
• “I believe in ignorance-based methods because
humans have a lot of ignorance and we should play to
our strong suit.”
− Eric Lander, Whitehead Institute, M.I.T.
• Minimize the number of modeling assumptions and/or
test multiple hypotheses, i.e., strong inference!
© S. C. Strother, 2006
fMRI Pipelines and Meta-models
[Figure: flowchart of a typical fMRI pipeline meta-model]
Pipeline steps: reconstructed fMRI data → B0 correction → slice-timing adjustment → motion correction → non-linear warping → spatial & temporal filtering → data modeling/analysis (driven by the experimental design matrix) → statistical analysis engine → statistical maps → rendering of results on anatomy.
Automated software frameworks: XNAT, LONI, Fiswidgets GUI.
Optimisation metrics: ROCs, p-values, AIC, BIC, replication, prediction, NPAIRS.
Callouts:
− Why are we not using more modern analysis techniques?
− What are the most, 2nd most, etc., important steps?
− Why are we still focused on null-hypothesis testing?
− We need more research across multiple data sets!
− We need better tools and education!
© S. C. Strother, 2006
Overview
• Background
− data-driven statistics, pipelines and meta-models
• fMRI File Management: NIfTI and the DFWG
• Why optimize pipeline meta-models?
− the FIAC experience
• Seven optimization frameworks
• Results with the 7th framework: NPAIRS
• Data-analysis choices in pipeline meta-models:
− General linear model (GLM)
− canonical variates analysis (CVA)
• Pipeline-driven, between-subject heterogeneity
• Recap: What have we learnt?
© S. C. Strother, 2006
The NIfTI Data Format Working Group (DFWG): NIfTI-1.1
Leading candidates for NIfTI-2 are:
MINC-2.0, Multi-frame DICOM
XCEDE XML schema
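To make the file handling concrete, here is a minimal sketch of reading a NIfTI-1 volume using the nibabel Python package (an assumed tool choice; the file name "run1.nii" is hypothetical):

```python
import nibabel as nib

img = nib.load("run1.nii")      # parses the 348-byte NIfTI-1 header
data = img.get_fdata()          # voxel data as a float array, e.g. (x, y, z, t)
print(data.shape)               # e.g. (64, 64, 30, 120)
print(img.affine)               # 4x4 voxel-to-world transform (qform/sform)
print(img.header["descrip"])    # free-text description field
```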
© S. C. Strother, 2006
Overview
• Background
− data-driven statistics, pipelines and meta-models
• fMRI File Management: NIfTI and the DFWG
• Why optimize pipeline meta-models?
− the FIAC experience
• Seven optimization frameworks
• Results with the 7th framework: NPAIRS
• Data-analysis choices in pipeline meta-models:
− General linear model (GLM)
− canonical variates analysis (CVA)
• Pipeline-driven, between-subject heterogeneity
• Recap: What have we learnt?
© S. C. Strother, 2006
Why Optimize Pipeline Meta-models?
Practice
New insights into human brain function may
be obscured by poor and/or limited choices
in the data-processing pipeline!
• We don’t understand the relative importance of meta-model choices.
• “Neuroscientific plausibility” of results is used to justify the meta-model choices made.
• Systematic bias towards prevailing neuroscientific expectations, and against new discoveries and/or testing multiple hypotheses.
© S. C. Strother, 2006
The Functional Imaging Analysis Contest 1
• Examine the perisylvian language network using a repetition-priming design with spoken sentences
• 3T whole-body Bruker scanner
• T2-weighted EPI, TR = 2.5 s, 30 × 4 mm slices
• Epochs of 20 s ON & 9 s OFF
• 2 × 2 design giving 4 conditions:
− same sentence, same speaker;
− same sentence, different speaker;
− different sentence, same speaker;
− different sentence, different speaker.
• 4 epochs/condition × 2 runs
© S. C. Strother, 2006
The Functional Imaging Analysis Contest 2
Poline JB, Strother SC, Dehaene-Lambertz G, Egan GF, Lancaster JL. Motivation and synthesis of
the FIAC experiment: The reproducibility of fMRI results across expert analyses. (in press, special
issue Hum Brain Mapp)
© S. C. Strother, 2006
The Functional Imaging Analysis Contest 3
Abstract: “… the FIAC … helped identify new activation
regions in the test-base data, and …, it illustrates the
significant methods-driven variability that potentially exists
in the literature. Variable results from different methods
reported here should provide a cautionary note, and
motivate the Human Brain Mapping community to explore
more thoroughly the methodologies they use for analysing
fMRI data.”
Poline JB, Strother SC, Dehaene-Lambertz G, Egan GF, Lancaster JL.
Motivation and synthesis of the FIAC experiment: The reproducibility of fMRI
results across expert analyses. (in press, special issue Hum Brain Mapp)
© S. C. Strother, 2006
The Functional Imaging Analysis Contest 4
[Figure: axial slices at z = −12, 2 and 5 mm with numbered activation clusters]
The main effects of sentence repetition (in red) and of speaker repetition (in blue). 1: Meriaux et al., Madic; 2: Goebel et al., BrainVoyager; 3: Beckmann et al., FSL; 4: Dehaene-Lambertz et al., SPM2.
© S. C. Strother, 2006
Overview
• Background
− data-driven statistics, pipelines and meta-models
• fMRI File Management: NIfTI and the DFWG
• Why optimize pipeline meta-models?
− the FIAC experience
• Seven optimization frameworks
• Results with the 7th framework: NPAIRS
• Data-analysis choices in pipeline meta-models:
− General linear model (GLM)
− canonical variates analysis (CVA)
• Pipeline-driven, between-subject heterogeneity
• Recap: What have we learnt?
© S. C. Strother, 2006
Optimization Metric Frameworks
Simulation & ROC curves
− Skudlarski P, et al., Neuroimage 9(3):311-329, 1999.
− Della-Maggiore V, et al., Neuroimage 17:19-28, 2002.
− Lukic AS, et al., IEEE Symp. Biomedical Imaging, 2004.
− Beckmann CF & Smith SM, IEEE Trans. Med. Img. 23:137-152, 2004.
Data-Driven:
1. GLM Diagnostics
− SPMd: Luo W-L, Nichols T, NeuroImage 19:1014-32, 2003.
2. Minimize p-values
− Hopfinger JB, et al., Neuroimage 11:326-333, 2000.
− Tanabe J, et al., Neuroimage 15:902-907, 2002.
3. Model Selection: classical hypothesis testing, maximum likelihood, Akaike’s information criterion (AIC), minimum description length, Bayesian information criterion (BIC) & model evidence, cross-validation.
4. Replication/Reproducibility
a. Empirical ROCs – mixed multinomial model
− Genovese CR, et al., Magnetic Resonance in Medicine 38:497-507, 1997.
− Maitra R, et al., Magnetic Resonance in Medicine 48:62-70, 2002.
− Liou M, et al., J. Cog. Neuroscience 15:935-945, 2003.
b. Empirical ROCs – lower bound on ROC
− Nandy RR & Cordes D, Magnetic Resonance in Medicine 49:1152-1162, 2003.
5. Prediction Error/Accuracy
− Kustra R & Strother SC, IEEE Trans Med Img 20:376-387, 2001.
− Carlson TA, et al., J Cog Neuroscience 15:704-717, 2003.
− Hanson SJ, et al., NeuroImage 23:156-166, 2004.
6. NPAIRS: Prediction + Reproducibility
− Strother SC, et al., Neuroimage 15:747-771, 2002.
− Kjems U, et al., Neuroimage 15:772-786, 2002.
− Shaw ME, et al., Neuroimage 19:988-1001, 2003.
− LaConte S, et al., Neuroimage 18:10-23, 2003.
− Strother SC, et al., Neuroimage 23S1:S196-S207, 2004.
− LaConte S, et al., Neuroimage 26:317-329, 2005.
© S. C. Strother, 2006
Optimization via Simulations
Receiver Operating Characteristic (ROC) Curves
P_A = P(true positive) = P(truly active voxel is classified as active) = sensitivity
P_I = P(false positive) = P(inactive voxel is classified as active) = false-alarm rate
pAUC = partial area under the ROC curve
Skudlarski P, Neuroimage. 9(3):311-329, 1999.
Della-Maggiore V, Neuroimage 17:19–28, 2002.
Lukic AS, IEEE Symp. Biomedical Imaging, 2004.
Beckmann CF, Smith SM. IEEE Trans. Med. Img. 23:137-152, 2004.
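To make the ROC construction concrete, here is a minimal sketch on simulated Gaussian data (the effect size, voxel counts and pAUC range are illustrative assumptions, not values from the studies above):

```python
import numpy as np

rng = np.random.default_rng(0)
inactive = rng.normal(0.0, 1.0, 10000)   # truly inactive voxel statistics
active = rng.normal(1.5, 1.0, 500)       # truly active voxels, effect size 1.5

thresholds = np.linspace(-4, 6, 201)
p_i = np.array([(inactive > t).mean() for t in thresholds])  # false-positive rate
p_a = np.array([(active > t).mean() for t in thresholds])    # sensitivity

# Partial AUC over the low false-positive range (P_I <= 0.1), the region
# that matters for thresholded statistical maps.
p_i, p_a = p_i[::-1], p_a[::-1]          # re-order so P_I is increasing
mask = p_i <= 0.1
print(f"pAUC (P_I <= 0.1): {np.trapz(p_a[mask], p_i[mask]):.3f}")
```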
© S. C. Strother, 2006
Optimization Framework 1 (SPMd)
Massively univariate testing of GLM
assumptions and data exploration:
• Luo W-L, Nichols T. Diagnosis and exploration of
massively univariate neuroimaging models.
NeuroImage 19:1014-32, 2003
• Zhang H, Luo W-L, Nichols TE. Diagnosis of Single
Subject & Group fMRI Data with SPMd. Hum Brain
Mapp (in press, special FIAC issue)
Example: The impact of high-pass filtering in
a phantom.
© S. C. Strother, 2006
Optimization Framework 1 (SPMd)
Lund TE, Madsen KH, Sidaros K, Luo W-L, Nichols TE. Non-white noise in fMRI:
Does modelling have an impact? Neuroimage 29:54 – 66, 2006
© S. C. Strother, 2006
Optimization Framework 2
Minimize p-values or maximize SPM values,
e.g.,
• Hopfinger JB, Buchel C, Holmes AP, Friston KJ, A study of
analysis parameters that influence the sensitivity of event
related fMRI analyses, Neuroimage, 11:326-333, 2000.
• Tanabe J, Miller D, Tregellas J, Freedman R, Meyer FG.
Comparison of detrending methods for optimal fMRI
preprocessing. Neuroimage, 15:902-907, 2002.
Does not imply a stronger likelihood of
getting the same result in another
replication of the same experiment!
© S. C. Strother, 2006
Optimization Framework 3
Model Selection: An attempt to formulate some
traditional problems in the methodology of science
in a rigorous way.
Standard methods (classical hypothesis testing, maximum likelihood, Akaike’s information criterion (AIC), minimum description length, Bayesian information criterion (BIC) & model evidence, cross-validation) compensate for errors in the estimation of model parameters.
• All trade off fit with simplicity (fewest parameters), but give simplicity different weights.
• All favor more complex (less simple) models with more data.
Forster MR. Key concepts in model selection: Performance and Generalizability. J
Math Psych 44:205-231, 2000
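As a concrete illustration of the fit-vs-simplicity tradeoff, here is a minimal sketch scoring polynomial detrending orders for a single voxel time series with AIC and BIC under a Gaussian noise model (the data and candidate orders are assumptions):

```python
import numpy as np

def aic_bic(y, X):
    """AIC/BIC from the Gaussian log-likelihood of a least-squares fit y ~ X."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.sum((y - X @ beta) ** 2) / n          # ML variance estimate
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    p = k + 1                                         # betas + noise variance
    return 2 * p - 2 * loglik, p * np.log(n) - 2 * loglik

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 120)
y = 0.5 * t + rng.normal(0, 1, t.size)                # slow drift + noise

for order in range(4):                                # candidate detrending models
    X = np.vander(t, order + 1, increasing=True)      # columns 1, t, t^2, ...
    aic, bic = aic_bic(y, X)
    print(f"order {order}: AIC = {aic:7.1f}  BIC = {bic:7.1f}")
```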
© S. C. Strother, 2006
Optimization Framework 4
Quantifying replication/reproducibility
because:
• replication is a fundamental criterion for a result to be
considered scientific;
• smaller p values do not necessarily imply a stronger
likelihood of repeating the result;
• for “good scientific practice” it is necessary, but not
sufficient, to build a measure of replication into the
experimental design and data analysis;
• results are data-driven and avoid simulations.
© S. C. Strother, 2006
Optimization Framework 4a
Data-Driven, Empirical ROCs:
• Genovese CR, Noll DC, Eddy WF. Estimating test-retest reliability
in functional MR imaging. I. Statistical methodology. Magnetic
Resonance in Medicine, 38:497–507, 1997.
• Maitra, R., Roys, S. R., & Gullapalli, R. P. Test–retest reliability
estimation of functional MRI data. Magnetic Resonance in
Medicine, 48, 62 –70, 2002.
• Liou M, Su H-R, Lee J-D, Cheng PE, Huang C-C, Tsai C-H.
Bridging Functional MR Images and Scientific Inference:
Reproducibility Maps. J. Cog. Neuroscience, 15:935-945, 2003.
The number of times R_V (out of M replications) that voxel V is declared active follows a two-component binomial mixture:
$$P(R_V) = \lambda \binom{M}{R_V} P_A^{R_V} (1-P_A)^{M-R_V} + (1-\lambda) \binom{M}{R_V} P_I^{R_V} (1-P_I)^{M-R_V}$$
where λ is the proportion of truly active voxels, P_A the sensitivity and P_I the false-positive rate defined above.
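A minimal sketch evaluating this mixture with scipy (the values of λ, P_A and P_I are illustrative assumptions):

```python
import numpy as np
from scipy.stats import binom

M = 8                                   # number of replications
lam, p_a, p_i = 0.05, 0.8, 0.01         # assumed mixture parameters

r = np.arange(M + 1)                    # times a voxel is declared active
p_r = lam * binom.pmf(r, M, p_a) + (1 - lam) * binom.pmf(r, M, p_i)
for rv, p in zip(r, p_r):
    print(f"P(R_V = {rv}) = {p:.4f}")
```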
© S. C. Strother, 2006
Optimization Framework 4b
Data-Driven, Empirical ROCs:
• Nandy RR, Cordes D. Novel ROC-Type Method for Testing the
Efficiency of Multivariate Statistical Methods in fMRI. Magnetic
Resonance in Medicine 49:1152–1162, 2003.
• P(Y) = P(voxel identified as active)
• P(Y|F) = P(inactive voxel identified as active)
• Plotting P(Y) vs. P(Y|F) gives a lower bound for the true ROC
• Two runs:
− a standard experimental run, AND
− a resting-state run to estimate P(Y|F).
• Assumes a common noise structure across runs for an accurate P(Y|F).
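A minimal sketch of the construction, substituting simulated statistic maps for real task and resting runs (all distributions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
task = np.concatenate([rng.normal(0, 1, 9500),    # inactive voxels
                       rng.normal(1.5, 1, 500)])  # active voxels
rest = rng.normal(0, 1, 10000)   # resting run: every detection is false

thresholds = np.linspace(-4, 6, 201)
p_y = np.array([(task > t).mean() for t in thresholds])   # P(Y)
p_yf = np.array([(rest > t).mean() for t in thresholds])  # P(Y|F)
# Plotting p_y against p_yf traces a curve that lower-bounds the true ROC.
print(f"at threshold 1.0: P(Y) = {p_y[100]:.3f}, P(Y|F) = {p_yf[100]:.3f}")
```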
© S. C. Strother, 2006
Is Replication a Sufficient Metric?
A silly data-analysis approach produces the value 1.0 at every voxel, regardless of the input data!
Results are perfectly replicable;
• no variance;
• completely useless because they are severely
biased!
Must consider such bias-variance tradeoffs
when measuring pipeline performance.
© S. C. Strother, 2006
Optimization Framework 5
Prediction / Cross-validation Resampling
Resampling
Stone, M. Cross-validatory choice and
assessment of statistical predictions. J.
R. Stat. Soc. B 36: 111–147. 1974
Hastie T, Tibshirani R, Friedman J. The
Elements of Statistical Learning.
Springer-Verlag, New York, 2001
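A minimal numpy sketch of split-half cross-validated prediction with a Fisher linear discriminant (the simulated data, dimensions and ridge term are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 80, 10                          # scans x (PCA-reduced) features
X = rng.normal(0, 1, (n, p))
y = np.repeat([0, 1], n // 2)          # two brain-state classes
X[y == 1, :3] += 1.0                   # class signal in the first 3 features

def fld_accuracy(Xtr, ytr, Xte, yte):
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    Sw = np.cov(Xtr[ytr == 0].T) + np.cov(Xtr[ytr == 1].T)  # pooled within-class
    w = np.linalg.solve(Sw + 0.1 * np.eye(p), m1 - m0)      # ridge-regularised
    c = w @ (m0 + m1) / 2                                   # midpoint threshold
    return (((Xte @ w) > c).astype(int) == yte).mean()

half = rng.permutation(n)[: n // 2]
rest = np.setdiff1d(np.arange(n), half)
acc = (fld_accuracy(X[half], y[half], X[rest], y[rest]) +
       fld_accuracy(X[rest], y[rest], X[half], y[half])) / 2
print(f"split-half prediction accuracy: {acc:.2f}")
```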
© S. C. Strother, 2006
Optimization Framework 5
Prediction/Cross-validation Resampling Papers
Principal Component Analysis
L. K. Hansen, et al. Neuroimage, vol. 9, no. 5, pp. 534-44, 1999.
Prediction via GLM, Split-Half Reproducibility
J. V. Haxby, et al. Science, vol. 293, no. 5539, pp. 2425-30, 2001.
Linear Discriminant Analysis/Canonical Variates Analysis
R. Kustra and S. Strother, IEEE Trans Med Imaging, vol. 20, no. 5, pp. 376-87, 2001.
T. A. Carlson, et al. J Cogn Neurosci, vol. 15, no. 5, pp. 704-17, 2003.
J. D. Haynes and G. Rees, Nat Neurosci, vol. 8, no. 5, pp. 686-91, 2005.
Y. Kamitani and F. Tong, Nat Neurosci, vol. 8, no. 5, pp. 679-85, 2005.
A.J. O'Toole, et al. J Cogn Neurosci, vol. 17, no. 4, pp. 580-90, 2005.
Support Vector Machines (and LDA)
D. Cox and R. L. Savoy, Neuroimage, vol. 19, no. 2 Pt 1, pp. 261-70, 2003.
S. LaConte, et al. Neuroimage, vol. 26, no. 2, pp. 317-29, 2005.
J. Mourao-Miranda, et al. Neuroimage, vol. 28, no. 4, pp. 980-95, 2005.
Artificial Neural Networks
B. Lautrup, et al. in Proceedings of the Workshop on Supercomputing in Brain Research: From
Tomography to Neural Networks, H. J. Hermann, et al., Eds. Jülich, Germany: World Scientific,
pp. 137-144, 1994.
N. Morch, et al. Lecture Notes in Computer Science 1230, J. Duncan and G. Gindi, Eds. New York:
Springer-Verlag, pp. 259-270, 1997.
S. J. Hanson, et al. Neuroimage, vol. 23, no. 1, pp. 156-66, 2004.
S. M. Polyn, et al. Science, vol. 310, no. 5756, pp. 1963-6, 2005.
© S. C. Strother, 2006
Optimization Framework 6: NPAIRS
NPAIRS uses “split-half” resampling to combine:
• prediction & reproducibility metrics;
• PCA-based reproducibility measures of:
− uncorrelated signal and noise SPMs;
− reproducible SPMs (rSPM) on a Z-score scale;
− multivariate dimensionality;
• combined prediction and reproducibility metrics for:
− data-driven ROC-like curves;
− optimizing bias-variance tradeoffs of pipeline interactions;
• other measures:
− empirical random-effects correction;
− measures of individual observation influence.
© S. C. Strother, 2006
NPAIRS Metrics in Functional
Neuroimaging Studies
PET
− Strother SC, et al., Hum Brain Mapp 5:312-316, 1997.
− Frutiger S, et al., Neuroimage 12:515-527, 2000.
− Muley SA, et al., Neuroimage 13:185-195, 2001.
− Shaw ME, et al., Neuroimage 15:661-674, 2002.
− Strother SC, et al., Neuroimage 15:747-771, 2002.
− Kjems U, et al., Neuroimage 15:772-786, 2002.
fMRI
− Tegeler C, et al., Hum Brain Mapp 7:267-283, 1999.
− Shaw ME, et al., Neuroimage 19:988-1001, 2003.
− LaConte S, et al., Neuroimage 18:10-23, 2003.
− Strother SC, et al., Neuroimage 23S1:S196-S207, 2004.
− LaConte S, et al., Neuroimage 26:317-329, 2005.
− Chen X, et al., Hum Brain Mapp (in press, special FIAC issue).
© S. C. Strother, 2006
NPAIRS: Split-Half Resampling for Activation-Pattern Reproducibility Metrics
For a pair of split-half SPMs with voxel-wise correlation r, the 2 × 2 correlation matrix has a fixed eigendecomposition:
$$\begin{pmatrix} 1 & r \\ r & 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1+r & 0 \\ 0 & 1-r \end{pmatrix}\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$
Projecting the SPM pair onto the first eigenvector (the signal axis along the line of identity, variance 1 + r) yields the reproducible SPM; the second eigenvector (the noise axis, variance 1 − r) provides an uncorrelated noise estimate used to rescale the signal projection onto a Z-score scale, rSPM(z).
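A minimal sketch of this computation for a single split-half SPM pair (the simulated maps are assumptions; the scaling follows the signal/noise-axis decomposition above):

```python
import numpy as np

def rspm_z(spm1, spm2):
    z1 = (spm1 - spm1.mean()) / spm1.std()
    z2 = (spm2 - spm2.mean()) / spm2.std()
    r = np.corrcoef(z1, z2)[0, 1]           # split-half reproducibility
    signal = (z1 + z2) / np.sqrt(2)         # projection onto (1, 1)/sqrt(2)
    noise = (z1 - z2) / np.sqrt(2)          # projection onto (1, -1)/sqrt(2)
    return r, signal / noise.std()          # reproducible SPM on a Z scale

rng = np.random.default_rng(4)
truth = np.zeros(10000)
truth[:300] = 2.0                           # assumed true activation pattern
r, rspm = rspm_z(truth + rng.normal(0, 1, 10000),
                 truth + rng.normal(0, 1, 10000))
print(f"r = {r:.2f}, max |rSPM(z)| = {np.abs(rspm).max():.1f}")
```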
© S. C. Strother, 2006
NPAIRS Split-Half Prediction and
Reproducibility Resampling
© S. C. Strother, 2006
Overview
• Background
− data-driven statistics, pipelines and meta-models
• fMRI File Management: NIfTI and the DFWG
• Why optimize pipeline meta-models?
− the FIAC experience
• Seven optimization frameworks
• Results with the 7th framework: NPAIRS
• Data-analysis choices in pipeline meta-models:
− General linear model (GLM)
− canonical variates analysis (CVA)
• Pipeline-driven, between-subject heterogeneity
• Recap: What have we learnt?
© S. C. Strother, 2006
A Multivariate Model for NPAIRS
PCA of the data matrix:
$$\mathrm{svd}(X_{t \times v}) = E_t\, S\, U_v^T$$
Canonical Variates Analysis (CVA):
$$\mathrm{svd}\!\left[(G^T G)^{-1/2}\, G^T E_t\, (E_t^T E_t)^{-1/2}\right] \;\Rightarrow\; W^{-1}B$$
• Design matrix (G) “brain states” = discriminant classes:
− prediction metric = posterior probability of class membership;
− maximizes a multivariate signal-to-noise ratio: (between-class, B) / (pooled within-class, W) covariance.
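A minimal numpy sketch of CVA as the eigendecomposition of W⁻¹B on PCA-reduced data (the simulated data, class structure and dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, q = 120, 10, 3                     # scans, PCA components, classes
E = rng.normal(0, 1, (n, k))             # PCA time courses (E_t)
labels = np.repeat(np.arange(q), n // q)
for c in range(q):
    E[labels == c, c] += 1.0             # inject class-mean differences

grand = E.mean(0)
B = np.zeros((k, k))                     # between-class scatter
W = np.zeros((k, k))                     # pooled within-class scatter
for c in range(q):
    Ec = E[labels == c]
    d = (Ec.mean(0) - grand)[:, None]
    B += len(Ec) * (d @ d.T)
    W += (Ec - Ec.mean(0)).T @ (Ec - Ec.mean(0))

evals, evecs = np.linalg.eig(np.linalg.solve(W, B))
order = np.argsort(evals.real)[::-1]
cvs = E @ evecs.real[:, order[: q - 1]]  # at most q-1 canonical variates
print("leading eigenvalues:", np.round(evals.real[order[: q - 1]], 2))
```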
© S. C. Strother, 2006
Optimization of Static Force fMRI
• Sixteen subjects with 2 runs/subject
• Acquisition:
− whole-brain, interleaved 1.5T BOLD-EPI;
− 30 slices = 1 whole-brain scan;
− 1 oblique slice = 3.44 × 3.44 × 5 mm³;
− TR/TE = 4000 ms/70 ms.
• Experimental design: [figure not reproduced]
• Analyzed with NPAIRS, GLM and PCA/CVA:
− dropped initial non-equilibrium and state-transition scans;
− 2-class, single-subject analysis;
− 11-class, 16-subject group analysis;
− NPAIRS/CVA vs. GLM-CVA comparison across preprocessing pipelines.
© S. C. Strother, 2006
Preprocessing for Static Force
• All runs/subjects passed initial quality control:
− movement (AIR 5) < 1 voxel;
− no artifacts in functional or structural scans;
− no obvious outliers in PCA of the centered data matrix.
• Alignment (AIR 5):
− within-subject: across runs to the 1st retained scan of run one;
− between-subject: 1st (affine), 3rd, 5th and 7th order polynomials;
− tri-linear and sinc (AIR 5) interpolation.
• Temporal detrending using a GLM cosine basis (SPM), sketched in code below:
− none;
− 0.5, (0.5, 1.0), (0.5-1.5), (0.5-2.0), (0.5-2.5), (0.5-3.0) cosines/run.
− (0.5-1.5) includes three GLM columns with 0.5, 1.0 and 1.5 cosines/run.
• Spatial smoothing with a 2D Gaussian:
− none;
− FWHM = 1, 1.5, 2, 3, 4, 6, 8 pixels (3.44 mm).
− FWHM = 1.5 pixels ≈ 5.2 mm; FWHM = 6 pixels ≈ 21 mm.
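A minimal sketch of the cosine-basis detrending step referenced above (the basis construction is one plausible reading of the cycles/run grid; matrix sizes are assumptions):

```python
import numpy as np

def cosine_basis(n_scans, cycles=(0.5, 1.0, 1.5)):
    """Intercept plus low-frequency cosine regressors at the given cycles/run."""
    t = np.arange(n_scans)
    cols = [np.ones(n_scans)]
    cols += [np.cos(2 * np.pi * c * t / n_scans) for c in cycles]
    return np.column_stack(cols)

def detrend(Y, X):
    """Residualise time-by-voxel data Y against the GLM basis X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Y - X @ beta

rng = np.random.default_rng(6)
Y = rng.normal(0, 1, (180, 1000))           # scans x voxels (assumed sizes)
Y += np.linspace(0, 2, 180)[:, None]        # slow scanner drift
Y_clean = detrend(Y, cosine_basis(180))     # the (0.5-1.5) pipeline choice
```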
© S. C. Strother, 2006
ROC-Like: Prediction vs. Reproducibility
2-Class Static Force, Single Subject
• A bias-variance tradeoff. As model complexity increases (i.e., #PCs 10 → 100), prediction of the design matrix’s class labels improves while reproducibility (i.e., activation SNR) decreases.
• Optimizing performance. As on an ROC plot, there is a single point, (1, 1), on this prediction vs. reproducibility plot with the best performance; at this location the model has perfectly predicted the design matrix while extracting an infinite SNR.
LaConte S, et al. Evaluating preprocessing choices in single-subject BOLD-fMRI studies using data-driven performance metrics. Neuroimage 18:10-23, 2003
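One simple way to combine the two metrics, in the spirit of the NPAIRS literature, is the Euclidean distance from the ideal point (1, 1); a minimal sketch with assumed candidate values:

```python
import numpy as np

# (prediction, reproducibility) pairs for hypothetical pipeline settings
pipelines = {"10 PCs": (0.78, 0.62), "30 PCs": (0.86, 0.55), "100 PCs": (0.92, 0.40)}
for name, (pred, rep) in pipelines.items():
    d = np.hypot(1 - pred, 1 - rep)      # smaller = closer to ideal (1, 1)
    print(f"{name}: distance from (1, 1) = {d:.3f}")
```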
© S. C. Strother, 2006
Prediction, Reproducibility, Dimensionality and
Canonical Variates (1.5 cos)
© S. C. Strother, 2006
Differences in Scanner Smoothness
Courtesy Lee Friedman, UNM & Functional BIRN
© S. C. Strother, 2006
Overview
• Background
− data-driven statistics, pipelines and meta-models
• fMRI File Management: NIfTI and the DFWG
• Why optimize pipeline meta-models?
− the FIAC experience
• Seven optimization frameworks
• Results with the 7th framework: NPAIRS
• Data-analysis choices in pipeline meta-models:
− General linear model (GLM)
− canonical variates analysis (CVA)
• Pipeline-driven, between-subject heterogeneity
• Recap: What have we learnt?
© S. C. Strother, 2006
Pipeline Meta-models: Data Analysis 1
• Bias-variance tradeoffs as a function of finite sample size are a critical issue because:
− traditional, inferential, statistical parameter estimation is only asymptotically unbiased and minimum-variance;
− non-traditional estimation may yield better signal detection: smaller parameter variance in non-asymptotic samples; asymptotically biased, often with no asymptotic inferential framework, leading to resampling techniques!
• Resampling:
− favour parameter estimation with the bootstrap over null-hypothesis testing with permutations!
− the bootstrap’s advantage: it can be combined with cross-validation to simultaneously obtain prediction and parameter estimates (see the sketch below);
− the bootstrap’s disadvantage: it requires i.i.d. samples, which is more restrictive than permutations.
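A minimal sketch of the bootstrap/cross-validation combination on a toy regression problem (all data are simulated assumptions): each bootstrap draw gives an in-bag parameter estimate and an out-of-bag prediction error.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60
x = rng.normal(0, 1, n)
y = 2.0 * x + rng.normal(0, 1, n)          # assumed toy problem, true slope 2

slopes, oob_mse = [], []
for _ in range(500):
    idx = rng.integers(0, n, n)            # i.i.d. resample with replacement
    oob = np.setdiff1d(np.arange(n), idx)  # held-out (out-of-bag) samples
    b = np.polyfit(x[idx], y[idx], 1)[0]   # in-bag parameter estimate
    slopes.append(b)
    oob_mse.append(np.mean((y[oob] - b * x[oob]) ** 2))

print(f"slope = {np.mean(slopes):.2f} +/- {np.std(slopes):.2f}, "
      f"OOB MSE = {np.mean(oob_mse):.2f}")
```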
© S. C. Strother, 2006
Pipeline Meta-models: Data Analysis 2
• Part of science by strong inference:
− for a scientifically interesting observation, enumerate all alternative hypotheses that can account for the observation, based on present knowledge.
− Jewett DL. What’s wrong with a single hypothesis. The Scientist, 19(21):10, 2005
− Platt JR. Strong inference. Science, 146:347-353, 1964
• Comparing univariate GLM versus multivariate CVA data analysis is a simple means of implementing multi-hypothesis tests in neuroimaging:
− test localizationist versus network theories of brain function!
− account for differences in data-analysis sensitivity and specificity!
− test different interactions with preprocessing pipeline choices!
© S. C. Strother, 2006
Simple Motor-Task Replication at 4.0T
[Figure: left/right activation maps comparing a univariate t-test with a Fisher linear discriminant (= 2-class CVA)]
C. Tegeler, S. C. Strother, J. R. Anderson, and S. G. Kim, "Reproducibility of BOLD-based functional
MRI obtained at 4 T," Hum Brain Mapp, vol. 7, no. 4, pp. 267-83, 1999.
© S. C. Strother, 2006
Testing Meta-Model Differences: Static Force
• Apply 9 different pipelines to each run of 16 static-force subjects:
− 4 × NPAIRS.CVA (optimised detrending, smoothing, # PCs);
− 3 × NPAIRS.GLM (same parameters as 3 × NPAIRS.CVA);
− FSL3.2.GLM (high-pass filtering, prewhitening, default HRF);
− SPM2.GLM (high-pass filtering, prewhitening, default HRF).
• 9 statistical parametric images (SPIs) × 2 runs = 18 SPIs/subject.
• Perform NPAIRS, splitting on subjects, for the 18 × 16 SPIs.
© S. C. Strother, 2006
Testing Pipeline Differences: Static Force
© S. C. Strother, 2006
Group-Preprocessing Interactions
© S. C. Strother, 2006
Group-Preprocessing Interactions
© S. C. Strother, 2006
Overview
• Background
− data-driven statistics, pipelines and meta-models
• fMRI File Management: NIfTI and the DFWG
• Why optimize pipeline meta-models?
− the FIAC experience
• Seven optimization frameworks
• Results with the 7th framework: NPAIRS
• Data-analysis choices in pipeline meta-models:
− General linear model (GLM)
− Canonical variates analysis (CVA)
• Pipeline-driven, between-subject heterogeneity
• Recap: What have we learnt?
© S. C. Strother, 2006
Subject-Specific Pipeline Optimization
Shaw ME, et al., Neuroimage 19:988-1001, 2003
© S. C. Strother, 2006
Subject-Specific Pipeline Optimization
Shaw ME, et al., Neuroimage 19:988-1001, 2003
© S. C. Strother, 2006
Recap
• Background
− data-driven statistics, pipelines and meta-models
• fMRI File Management: NIfTI and the DFWG
• Why optimize pipeline meta-models?
− the Functional Imaging Analysis Contest (FIAC) experience
• Seven meta-model optimization frameworks
• Results with the 7th framework: NPAIRS
• Data-analysis choices in pipeline meta-models:
− General linear model (GLM)
− Canonical variates analysis (CVA)
• Pipeline-driven, between-subject heterogeneity
• Recap: What have we learnt?
© S. C. Strother, 2006
Acknowledgements
Rotman Research Institute
Xu Chen, Ph.D.
Anita Oder, B.Sc.
Wayne Lee, B.Eng.
Cheryl Grady, Ph.D.
Randy McIntosh, Ph.D.
Principal Funding Sources: NIH Human Brain Project, P20-EB02013-10 & P20-MH072580-01.
© S. C. Strother, 2006
Acknowledgements
University of Minnesota
International Neuroimaging Consortium & VAMC: http://neurovia.umn.edu/incweb
Jon R. Anderson, M.Sc., Sally Frutiger, Ph.D., Kelly Rehm, Ph.D., David Rottenberg, M.D.,
Kirt Schaper, M.Sc., John Sidtis, Ph.D., Jane Zhang, Ph.D.
Seong-Gi Kim, Ph.D., Essa Yacoub, Ph.D.,
CMRR & Biomed. Eng.
James Ashe, M.D., Ph.D.,
Neurology & VAMC
Suraj A. Muley, M.D.,
Neurology & VAMC
Emory University
Xiaoping Hu, Ph.D.
Stephen LaConte, Ph.D.
University of Toronto
Rafal Kustra, Ph.D.
Technical University of Denmark
Lars Kai Hansen, Ph.D.
Finn Årup Nielsen, Ph.D.
Melbourne University
Gary Egan, Ph.D.
Marnie Shaw, Ph.D.
Principal Funding Sources: NIH Human Brain Project, P20-EB02013-10 & P20-MH072580-01.
© S. C. Strother, 2006
Consensus-Model ROC Results
[Figure: consensus-model ROC curves for a simple signal and a complex signal]
Hansen LK, Nielsen FA, Strother SC, Lange N. Consensus Inference in
Neuroimaging. Neuroimage 13:1212-1218, 2001
© S. C. Strother, 2006
NPAIRS-CVA Static Force Results: f(Preprocessing)
© S. C. Strother, 2006