LING646 FALL 2013
LAB 3 [1]
In this lab, you will explore fMRI data from a single subject, collected during a simple
language localizer scan lasting approximately 5 minutes.
During the scan, three kinds of trials were visually presented: well-formed sentences,
lists of unrelated nouns, and consonant strings (the paradigm was modeled on an
anterior temporal cortex localizer used by Rogalsky & Hickok, 2008 Cerebral Cortex).
Each trial was 4 seconds long: an initial 400ms of fixation cross followed by 9
consecutive words presented using RSVP (rapid serial visual presentation) at a 400ms
SOA (200ms on and 200ms off), i.e. 400ms + 9 × 400ms = 4s per trial. The three
conditions were presented 20 times each in randomized order, intermixed with a total of
80 additional seconds of fixation periods (in which only a fixation cross was presented),
in order to improve the deconvolution of the event-related responses.
Example Stimuli:
(A) Sentence: The gray mouse quickly scurried underneath the dusty rug.
(B) Noun list: pillow dragon tile clay milk ladder truck foil shell
(C) Consonant string list: trkjcr tphqj rnd bspsjsqc kdr bbqtgx cklpd bfqnkr rhvnj
The data were recorded from a 3T scanner. During the functional run, 160 functional
volumes (36 axial slices (AC-PC aligned), 3mm slice thickness, 0.3mm skip, in-plane
resolution of 3.125mm) were acquired with a gradient-echo sequence (repetition
time=2s, echo time=25ms, flip angle=90deg, interleaved acquisition). As a consistency
check, 60 trials × 4s plus 80s of fixation gives 320s, which matches 160 volumes at a
2s repetition time.
To do this lab, you will use the MRICron and Neurolens software packages. MRICron is
a free, platform-independent MRI visualization program. Neurolens is a free fMRI
analysis package for Mac developed by researchers at McGill and MGH.
Neurolens is no longer widely used for fMRI analysis, but it provides much better
visualization and user interaction than most other packages, which makes it a good
choice for learning your way around fMRI datasets. You may do the lab on the
cephalopod analysis machine, or if you own a Mac you can also download MRICron
(http://www.mccauslandcenter.sc.edu/mricro/mricron/) and Neurolens
(www.neurolens.org).
The directory on cephalopod that holds the data for this exercise is
/Users/Shared/Courses/LING646/MRI
I am again offering an alternative assignment for those who may find it challenging to
obtain computer resources for this assignment: write a 3-page (single-spaced) report
discussing important similarities and differences between MEG and fMRI for cognitive
neuroscience.
[1] Parts of this lab are inspired by a lab exercise developed by instructors of HST583 at
MIT/HMS!
Inside the lab directory you should see several .nii files. These are commonly referred
to as 'NIFTI' files. Most MRI data comes off the scanner in the form of 'DICOM' or .dcm
files, which use a standardized medical image format that usually stores one brain
volume per file (so, for example, a functional run that takes 200 images of the brain
across the scan would give you back 200 .dcm files). NIFTI is a more compact and more
convenient file format: all of the volumes from a functional run can be stored in a single
.nii file.
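
If you are curious what these files look like programmatically, here is a minimal sketch
of opening a NIfTI file with the Python nibabel library (nibabel is not part of this lab
and is shown only for illustration):

    # Minimal sketch: inspecting a NIfTI file with nibabel (not part of this lab).
    import nibabel as nib

    img = nib.load('LangLoc.nii')    # the functional run from the lab directory
    print(img.shape)                 # 4-D: x, y, 36 slices, 160 timepoints
    print(img.header.get_zooms())    # voxel sizes in mm (plus the TR for 4-D data)

    data = img.get_fdata()           # the full 4-D array
    print(data[..., 0].mean())       # mean intensity of the first volume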
1. For this lab, the .dcm files have already been converted to .nii files—one for the
functional data (LangLoc.nii) and one for the structural data (MPRAGE.nii). Before we
get there, take a look at the file ‘scanlog.txt’. As this functional localizer scan was
collected as part of a much longer MRI session, the output was a folder full of
thousands of mysteriously numbered .dcm files. Most DICOM conversion routines can
automatically pull information out of these files to detect which should be grouped
together, and can then output something like the ‘scanlog.txt’ file to provide a record of
the scan. Use the scanlog to answer the following questions:
(a) How many separate runs of MRI data were collected in this session?
(b) What are the names of the first two runs? Any guesses about what purpose these
runs might serve?
(c) The functional scan you will be analyzing in this lab is the first functional scan, listed
on row 7. After the ‘ok’ column, there are 4 columns of numbers. From what you were
told above about the parameters of the functional scans, what do you think these 4
numbers refer to?
2. The structural data collected (the high-resolution static image of the subject’s brain) is
called MPRAGE.nii because MPRAGE is the name of the standard structural imaging
sequence (Magnetization Prepared RApid Gradient Echo). Although this lab will mainly
focus on analyzing the lower-resolution functional data, take a few minutes to explore
the structural data by opening the MPRAGE.nii file in MRICron (you can open MRICron
from the Applications menu and simply drag and drop the MPRAGE.nii file onto it).
The X-Y-Z coordinates will appear at the top, but note that these are not MNI or
Talairach coordinates—this brain has not been put into any reference system, so we
just have image coordinates, which are only going to be meaningful for this particular
subject in this particular scan (i.e. if they moved their head before the next scan, their
cerebellum will be in a slightly different part of the image space).
(a) Click around until you find the cerebellum. Take a screenshot. What image
coordinates are you at?
(b) What about the STG (superior temporal gyrus)?
(c) What about the IFG (inferior frontal gyrus)?
(d) The anterior commissure is a little tough to find on this particular brain, but at about
what coordinates do you think it might be? It may help to increase the value of the
bottom right number box on the console before you do this (e.g. to 700).
(e) Is the skull lighter or darker than the scalp in this image? Why might that be?
(f) Move your cursor to the most medial position in the brain (where you have the
sagittal view that includes the corpus callosum). During structural imaging we collect a
full view of the whole head, but since each slice takes time, in functional imaging we
usually want to collect only the slices that cover the cortex. Now, imagine that you are
helping with a functional scan and your advisor tells you to pick the position of the most
inferior functional slice that will still include the whole cortex. First, move your cursor to
the medial sagittal view (e.g. X=88). What Z position appears to clear all of cortex from
this sagittal view? Now, examine the coronal and axial views. What part of cortex would
your functional scan be leaving out if you had selected your slices this way?
3. The final critical data file is the experiment logfile containing information about when
the stimuli were presented relative to the fMRI data collected. For various reasons, we
cannot record timestamps for stimuli directly into the fMRI data the way we can with
EEG and MEG. Therefore, the experiment logfile (the file output by Matlab or E-Prime
or whatever presentation software you are using) takes on paramount importance. In
order to sync the fMRI data and the stimulus presentation perfectly, most stimulus
presentation scripts are actually triggered by a signal sent from the MRI scanner itself
when fMRI data acquisition begins. In this lab the stimulus times are stored in the file
‘LangLoc_stimlog.txt’. The first column is the time the stimulus was presented, the
second column is the trial order, and the column following the last word of the trial
indicates the condition.
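
As an aside, on many systems that trigger arrives at the stimulus computer as a
simulated keypress (often the '5' key). A minimal sketch of how a presentation script
might wait for it, assuming PsychoPy and a keyboard-emulated trigger (both
assumptions; the handout does not specify this subject's setup):

    # Hypothetical sketch: hold the experiment until the scanner trigger arrives.
    # Assumes the trigger is emulated as a '5' keypress; trigger hardware varies by site.
    from psychopy import core, event

    event.waitKeys(keyList=['5'])   # block until the scanner fires its trigger
    t0 = core.getTime()             # treat this moment as time zero for every
                                    # stimulus timestamp written to the logfile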
Remember that in fMRI we will do a regression across the whole timecourse of the
experiment, given information about when each condition was being presented. To
facilitate this process, the critical information from the logfile has been transformed for
input to Neurolens in LangLoc_condtimes.txt. Open up this file in a text editor.
(a) What information is included in the second column of this file?
(b) What about the third column?
(c) Is it surprising to you that the second column does not contain only even numbers?
Why do you think that might be?
B. Functional analysis
4. Open the functional data in Neurolens by double-clicking on the LangLoc.nii file or by
opening Neurolens through the Applications folder. You should see three views of the
functional data pop up on the left side of the screen. Take a moment to explore the data
by clicking in different parts of the image. You can view larger images by clicking on
‘Axial’, ‘Sagittal’, or ‘Coronal’ in the top middle of the window.
(a) What are your first impressions of the data?
(b) Which tissue is brighter in these images, white matter or gray matter?
(c) Do the voxels appear to be relatively isotropic, that is, the same size in all 3
dimensions?
(d) In the middle screen, display the axial view and click on the 'Show all slices' box at
the bottom. Are the slices in the top row more ventral or more dorsal than the slices in
the bottom row?
(e) Click on the Plot view, which displays the mean intensity of the MR signal across
time. Click on the autoscale button at the bottom right. Do you see any trends in the
global signal across the course of the 5 minute run?
(f) Clicking on different points in the timecourse brings up the corresponding images
on the lefthand screen. When you click on different timepoints, do the images shift at
all, indicating movement from one scan to the next?
5. We will begin by analyzing the data from this run without any preprocessing. Go to
the Actions menu and select Linear Modeling. A window should pop up showing the
current hemodynamic response function (HRF) and model. Note several important
things about this window. First, even though we have not told the software anything
about the timing in our experiment, it has already introduced several terms into the
regression model, each illustrated by one of the blue lines. The flat blue line on top is
the constant or baseline term, which will capture the baseline difference from zero
signal. The other three lines are capturing various kinds of simple drift that are likely to
occur during a scan and contribute to the signal (for example, signal may linearly
increase or decrease across the scan for physical reasons unrelated to our paradigm).
Notice also that the lower right corner of this window indicates that there is a different
model for each slice. This is because the slices were acquired at slightly different times.
Although these boring factors are likely to explain variability in the MR signal quite
reliably, we don't actually care about those effects, significant as they may be; we
include them because they explain away the boring variability, making it easier to
determine whether the language stimuli we presented explain a reasonable amount of
the remaining variability. (A code sketch of how such a design matrix can be built by
hand follows the questions below.) Since the information about the stimuli is not
included in the MR datafile itself, we need to load it in now. In the Linear Modeling
window, the Table of Events is currently empty. Click on the ‘Read file’ button and load
the ‘LangLoc_condtimes.txt’ file. The table should now be filled and you should see the
timecourse models for the three conditions appear in the HRF and Model window. If you
click on a row in the Event Table, the timecourse for that condition will appear in bold.
(a) Are the conditions presented at regular intervals?
(b) Why do you think the peaks in the three timecourses appear to overlap with each
other in time, even though the actual stimuli were presented sequentially?
(c) Anything else you find interesting about the timecourse model?
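
Before moving on, here is a generic sketch of how a design matrix like the one
Neurolens displays can be built by hand: a constant, low-order drift terms, and one
HRF-convolved boxcar per condition. The double-gamma HRF and polynomial drift
basis below are common choices but assumptions on my part; Neurolens's exact
functions may differ, and the onset times shown are placeholders rather than the real
ones from LangLoc_condtimes.txt.

    # Generic GLM design-matrix sketch (assumed HRF/drift forms, toy onsets).
    import numpy as np
    from scipy.stats import gamma

    TR, n_vols = 2.0, 160
    t = np.arange(n_vols) * TR                  # acquisition time of each volume (s)

    def hrf(tt):
        # a common double-gamma shape: peak near 5 s, undershoot near 11 s
        return gamma.pdf(tt, 6) - 0.35 * gamma.pdf(tt, 12)

    def condition_regressor(onsets, duration=4.0, dt=0.1):
        # boxcar at high temporal resolution, convolved with the HRF,
        # then sampled at each volume's acquisition time
        hi_t = np.arange(0.0, n_vols * TR, dt)
        box = np.zeros_like(hi_t)
        for onset in onsets:
            box[(hi_t >= onset) & (hi_t < onset + duration)] = 1.0
        conv = np.convolve(box, hrf(np.arange(0.0, 32.0, dt)))[:len(hi_t)]
        return np.interp(t, hi_t, conv)

    # constant + linear + quadratic + cubic drift: the four nuisance columns
    drift = np.vander((t - t.mean()) / np.ptp(t), N=4, increasing=True)

    sentence_onsets = [12.0, 52.0, 92.0]        # placeholders, one condition only
    X = np.column_stack([condition_regressor(sentence_onsets), drift])
    print(X.shape)                              # (160, 5); the real model has 3 + 4 = 7 columns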
6. Now we will create a map illustrating a contrast between the effect of two conditions
of interest on the BOLD signal. Go to the Contrast tab of the Linear Modeling window.
Since we have 7 regressors in our model (3 conditions of interest and 4 nuisance
regressors), to request a contrast between two conditions we need to provide a vector
of 7 numbers in which the conditions we want to compare are 1 and -1, and the rest are
0.
In this case we will be comparing the effect of Sentences against the effect of
Consonant Strings, as we are hoping that most of the mechanisms involved in
processing language will be invoked by the Sentence condition and not the Consonant
String condition. Since we expect that Sentences will invoke more activity in the
language areas than Consonant Strings, it would make sense to assign Sentences to
+1 and Consonant Strings to -1. Therefore, since Sentences are listed as the first
regressor in the model and Consonant Strings as the third regressor, our contrast will
be 1 0 -1 0 0 0 0. Double click on the first row of the Contrast table and enter these 7
numbers separated by spaces. Then, in the 'Name' field, you can enter 'Sentences-ConsStrings'.
Click on the Slice Timing tab. In this experiment, the slices were collected in interleaved
order, so instead of ‘Ascending’ select ‘Interleaved’.
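
To see why this choice matters, consider what 'interleaved' implies about timing: with
36 slices per 2-second volume, an interleaved sequence acquires the odd-numbered
slices before the even-numbered ones, so spatially adjacent slices are sampled about a
second apart. A small sketch, assuming the common odd-first scheme (the exact order
is scanner-dependent):

    # Acquisition times under one common interleaved scheme (an assumption).
    import numpy as np

    n_slices, TR = 36, 2.0
    order = list(range(0, n_slices, 2)) + list(range(1, n_slices, 2))
    acq_time = np.empty(n_slices)
    acq_time[order] = np.arange(n_slices) * (TR / n_slices)
    print(acq_time[:4])   # adjacent slices are sampled ~1 s apart, not together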
Finally, click on the 'Outputs' tab. Notice that we can select between several different
statistical maps. The default is the -log(p) map, which has the nice property that the
numbers translate directly into p values for the contrast (a value of 2 corresponds to
p = .01, a value of 3 to p = .001, etc.). Click OK. Two new things should appear. The
first is a 'green' statistical map for the contrast (LangLoc-logp-c1.mnc) (you can
change the color mapping using the drop-down menu at the top left). The second, in the
original functional window, is a purple line on top of the yellow line that plots signal
across time.
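
Under the hood, the map value at each voxel comes from ordinary-least-squares GLM
math: fit the regression, test the contrast c'beta with a t statistic, and convert the
p value to -log10(p). A generic sketch of that computation (standard OLS formulas, not
necessarily Neurolens's exact implementation):

    # -log10(p) for a contrast at one voxel: y is the voxel timecourse,
    # X the (time x regressors) design matrix, c the contrast vector.
    import numpy as np
    from scipy import stats

    def neglogp(y, X, c):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        dof = X.shape[0] - np.linalg.matrix_rank(X)
        sigma2 = resid @ resid / dof                       # residual variance
        se = np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))
        t_val = (c @ beta) / se
        return -np.log10(2 * stats.t.sf(abs(t_val), dof))  # two-sided p value

    c = np.array([1.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0])     # Sentences - ConsStrings

    # toy demonstration with random data standing in for a real voxel:
    rng = np.random.default_rng(0)
    X = rng.standard_normal((160, 7)); X[:, 3] = 1.0       # include a constant column
    y = X @ rng.standard_normal(7) + rng.standard_normal(160)
    print(neglogp(y, X, c))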
It can be hard to know what you're looking at in the logp map, and unfortunately loading
high-resolution anatomical images in Neurolens can be buggy. However, a good way to
navigate around the logp map is to put the LangLoc.nii and LangLoc-logp-c1.mnc
windows side by side and hold down the Apple 'command' key while you click on areas
of interest; this locks the cursor position in both images, so you can use the
LangLoc.nii image to figure out where you are in the brain.
(a) Click around to different voxels in the LangLoc.nii map and examine how the
timecourse plot changes. Click somewhere in temporal cortex and take a screenshot for
illustration. What do you think the purple line represents? Why do you think the overall
shape of the waveform changes so drastically from voxel to voxel? Do you notice any
patterns across regions?
(b) Click around and try to find voxels with stronger ‘warm’ colors in the logp map,
illustrating a lower p-value for the Sentences-ConsonantString contrast (remember you
can click on the ‘Show all slices’ box for a birds-eye view). Where do you see such
voxels? Do they seem to be clustered close to each other?
(c) Do you see voxels with ‘cold’ colors, suggesting a stronger response in the
ConsonantString condition than the Sentence condition? Where?
7. Now we will see whether correcting for motion and smoothing (blurring) the data in
space improves our estimates. Minimize the logp image and return to the LangLoc.nii
window. Click on Actions->Motion Correction. In this exercise we will align all of the
images in the run with the first image collected, so enter ‘1’ for the Target Frame. In the
Outputs tab, click on both ‘Aligned series’ and ‘Motion parameters’, then click OK.
Three windows should now pop up, two illustrating the motion estimates and one the
corrected images. Focus on the motion estimates first (LangLoc-shift.mnc and
LangLoc-rot.mnc); a sketch of how such parameters can be summarized follows the
questions below.
(a) From the ‘shift’ estimate, in which direction was the most displacement observed?
(hint: by clicking around at different positions in the timecourse, you get different
coordinates at the bottom of the screen)
(b) In which direction was the most rotation observed?
(c) Do you think it’s a good idea to register all images to the first timepoint in the scan?
What might be the advantages and disadvantages?
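
For concreteness, here is a sketch of how such motion parameters could be summarized
numerically, with random stand-in values in place of the real LangLoc-shift and
LangLoc-rot estimates (Neurolens stores these as .mnc images, so the arrays below are
hypothetical):

    # Summarizing rigid-body motion: which axis moved most, and by how much?
    import numpy as np

    rng = np.random.default_rng(1)
    shift = rng.normal(0.0, 0.2, size=(160, 3))   # stand-in x/y/z translations (mm)
    rot = rng.normal(0.0, 0.1, size=(160, 3))     # stand-in rotations about 3 axes

    for name, params in (('shift (mm)', shift), ('rotation', rot)):
        worst = int(np.argmax(np.abs(params).max(axis=0)))
        print(name, '-> largest excursion on axis', worst,
              '=', round(float(np.abs(params).max()), 3))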
8. Minimize the motion estimates and click on the motion corrected image window,
which will be labeled LangLoc-MC.mnc. Now click Actions->Spatial Smoothing. Leave
the defaults, which specify a Gaussian 6mm smoothing kernel, and click OK. A new
window with the smoothed data, LangLoc-MC-Sm.mnc, should now pop up. (A sketch
of the kernel arithmetic follows the questions below.)
(a) What do you notice about the images?
(b) Do you think this is a good thing or a bad thing?
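
The arithmetic behind the default kernel, as a sketch: a Gaussian's FWHM is
2*sqrt(2*ln 2) ≈ 2.355 times its standard deviation, and the sigma must be expressed in
voxels rather than mm (using the voxel sizes from the acquisition parameters above;
the 64x64 in-plane matrix below is a stand-in):

    # 6 mm FWHM Gaussian smoothing with scipy (generic sketch, not Neurolens's code).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    fwhm_mm = 6.0
    sigma_mm = fwhm_mm / (2 * np.sqrt(2 * np.log(2)))   # ~2.55 mm
    voxel_mm = np.array([3.125, 3.125, 3.3])            # in-plane; slice + skip
    sigma_vox = sigma_mm / voxel_mm                     # per-axis sigma in voxels

    volume = np.random.rand(64, 64, 36)                 # stand-in functional volume
    smoothed = gaussian_filter(volume, sigma=sigma_vox)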
9. Now, making sure that the LangLoc-MC-Sm.mnc window is active, go to
Actions->Linear Modeling and run through the same steps as above to run the linear
model, making sure to update the Design, Contrasts, and Slice Timing tabs
appropriately. Pull up the original LangLoc logp map and the new LangLoc-MC-Sm logp
map side by side so that you can compare.
(a) What salient differences do you notice between the original logp map and the
preprocessed logp map? Have the timecourses changed at all?
(b) Which areas show reliably increased BOLD signal for Sentences relative to
Consonant Strings in the preprocessed logp map? Illustrate with a screenshot. Are the
effects bilateral? Are they in areas that you might have predicted?
(c) Which areas show reliably decreased BOLD signal for Sentences relative to
Consonant Strings? Illustrate with a screenshot.
If you are interested, you can re-run the linear model to explore the other possible
contrasts between the three conditions. To contrast the Sentence and NounList
conditions, you could enter 1 -1 0 0 0 0 0. To look at the effect of a single condition
relative to baseline, enter a 1 for that condition and a 0 for all other conditions.
10. Did interacting with this dataset raise any other thoughts or questions for you about
the basic principles of fMRI analysis?