Jody Culham
Brain and Mind Institute, Department of Psychology, University of Western Ontario
http://www.fmri4newbies.com/

Basics of Experimental Design for fMRI: Event-Related Designs
Part III: Choosing an Event-Related Design
Last Update: January 18, 2012
Last Course: Psychology 9223, W2010, University of Western Ontario

Convolution of Single Trials
[Figure: neuronal activity convolved with the haemodynamic response function yields the BOLD signal over time]
Slide from Matt Brown

BOLD Summates
[Figure: neuronal activity and the resulting BOLD signal]
Slide from Matt Brown

BOLD Overlap and Jittering
• Closely spaced haemodynamic impulses summate.
• A constant ITI causes tetanus.
Burock et al., 1998

Design Types
[Schematic legend: one symbol = trial of one type (e.g., face image); another = trial of another type (e.g., place image); another = null trial (nothing happens)]
• Block Design
• Slow ER Design
• Rapid Counterbalanced ER Design
• Rapid Jittered ER Design
• Mixed Design

Block Designs
[Schematic: blocks of trials of one type (e.g., face images) alternating with blocks of another type (e.g., place images)]

Early Assumption: Because the hemodynamic response delays and blurs the response to activation, the temporal resolution of fMRI is limited. WRONG!!!!!
[Figure: BOLD response (% signal change) over time, showing the initial dip, the positive BOLD response (overshoot), and the post-stimulus undershoot following a stimulus]

What Are the Temporal Limits?
• What is the briefest stimulus that fMRI can detect?
– Blamire et al. (1992): 2 sec
– Bandettini (1993): 0.5 sec
– Savoy et al. (1995): 34 msec
[Figures: responses to 2 s stimuli (data: Blamire et al., 1992, PNAS; figure: Huettel, Song & McCarthy, 2004) and to single brief events (data: Robert Savoy & Kathy O'Craven; figure: Rosen et al., 1998, PNAS)]
• Although the HRF is delayed and blurred, its shape is predictable.
• Event-related potentials (ERPs) are based on averaging small responses over many trials. Can we do the same thing with fMRI?

Detection vs. Estimation
• detection: determination of whether the activity of a given voxel (or region) changes in response to the experimental manipulation
• estimation: measurement of the time course within an active voxel in response to the experimental manipulation
[Figure: % signal change vs. time (sec)]
Definitions modified from: Huettel, Song & McCarthy, 2004, Functional Magnetic Resonance Imaging

Block Designs: Poor Estimation
Huettel, Song & McCarthy, 2004, Functional Magnetic Resonance Imaging

Pros & Cons of Block Designs
Pros
• high detection power
• has been the most widely used approach for fMRI studies
• accurate estimation of the hemodynamic response function is not as critical as with event-related designs
Cons
• poor estimation power
• subjects get into a mental set for a block
• very predictable for the subject
• can't look at effects of single events (e.g., correct vs. incorrect trials, remembered vs. forgotten items)
• becomes unmanageable with too many conditions (e.g., more than 4 conditions + baseline)

Slow Event-Related Designs
Slow ER Design

Slow Event-Related Design: Constant ITI
[Figure: block design vs. spaced 2 s stimuli with varying ISI]
• Bandettini et al. (2000): what is the optimal trial spacing (duration + intertrial interval, ITI) for a spaced mixed trial design with constant stimulus duration?
• Sync with trial onset and average
Source: Bandettini et al., 2000

Optimal Constant ITI
• Brief (< 2 sec) stimuli: optimal trial spacing = 12 sec
• For longer stimuli: optimal trial spacing = 8 + 2 × stimulus duration (sec)
• Effective loss in power of the event-related design: ~35%, i.e., for 6 minutes of block design, run ~9 min of ER design
Source: Bandettini et al., 2000
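These rules of thumb are easy to wire into a script when planning run lengths. A minimal Python sketch, assuming only the spacing rule and the ~35% effective power loss quoted above (the function names are illustrative, not from any package):

```python
def optimal_trial_spacing(stim_duration_s):
    """Rule-of-thumb trial spacing (stimulus + ITI) from Bandettini et al. (2000):
    12 s for brief (< 2 s) stimuli, otherwise 8 s + 2 x stimulus duration."""
    if stim_duration_s < 2:
        return 12.0
    return 8.0 + 2.0 * stim_duration_s

def equivalent_er_run_length(block_run_min, power_loss=0.35):
    """Approximate slow ER run length needed to match a block-design run,
    assuming the ~35% effective loss in power quoted above."""
    return block_run_min / (1.0 - power_loss)

print(optimal_trial_spacing(1.0))      # 12.0 s
print(optimal_trial_spacing(4.0))      # 16.0 s
print(equivalent_er_run_length(6.0))   # ~9.2 min, i.e., "run ~9 min"
```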
Trial-to-Trial Variability
Huettel, Song & McCarthy, 2004, Functional Magnetic Resonance Imaging

How Many Trials Do You Need?
• the standard error of the mean varies with the square root of the number of trials
• the number of trials needed will vary with effect size
• the function begins to asymptote around 15 trials
Huettel, Song & McCarthy, 2004, Functional Magnetic Resonance Imaging

Effect of Adding Trials
Huettel, Song & McCarthy, 2004, Functional Magnetic Resonance Imaging

Pros & Cons of Slow ER Designs
Pros
• good estimation power
• allows an accurate estimate of baseline activation and deviations from it
• useful for studies with delay periods
• very useful for designs with motion artifacts (grasping, swallowing, speech) because you can tease out the artifacts
• analysis is straightforward
[Figure: % signal change over time, separating the activation from the hand motion artifact]
Cons
• poor detection power, because you get very few trials per condition by spending most of your sampling power on estimating the baseline
• subjects can get VERY bored and sleepy with long intertrial intervals

"Do You Wanna Go Faster?"
• Yes, but we have to test assumptions regarding the linearity of the BOLD signal first
[Schematic: Rapid Counterbalanced ER Design, Rapid Jittered ER Design, Mixed Design]

Linearity of BOLD Response
• Linearity: "Do things add up?" (red = 2 - 1; green = 3 - 2)
• Sync each trial response to the start of the trial
• Not quite linear, but good enough!
Source: Dale & Buckner, 1997

Optimal Rapid ITI
• For rapid mixed trial designs, short ITIs (~2 sec) are best for detection power (efficiency). Do you know why?
Source: Dale & Buckner, 1997

Design Types: Rapid Counterbalanced ER Design
[Schematic legend: trial of one type (e.g., face image) vs. trial of another type (e.g., place image)]

Detection with Rapid ER Designs
• To detect activation differences between conditions in a rapid ER design, you can create HRF-convolved reference time courses
• You can perform contrasts between beta weights as usual
Figure: Huettel, Song & McCarthy, 2004

Variability Between Subjects/Areas
• greater variability between subjects than between regions
• deviations from the canonical HRF cause false negatives (Type II errors)
• consider including a run to establish subject-specific HRFs from a robust area like M1
Handwerker et al., 2004, NeuroImage

Event-Related Averaging
• In this example an "event" is the start of a block; in single-trial designs, an event may be the start of a single trial
• First, we compute an event-related average for the blue condition:
– Define a time window before (2 volumes) and after (15 volumes) the event
– Extract the time course for every event (here there are four events in one run)
– Average the time courses across all the events
• Second, we compute an event-related average for the gray condition
• Third, we can plot the average ERAs for the blue and gray conditions on the same graph (see the sketch below)
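As an illustration only (not BV's implementation), the averaging procedure described above might look like this in numpy; the function name and inputs are hypothetical:

```python
import numpy as np

def event_related_average(timecourse, event_onsets, pre=2, post=15):
    """Average peri-event windows from a 1-D voxel/ROI time course (in volumes).

    timecourse   : 1-D array of signal values, one per volume
    event_onsets : volume indices at which each event starts
    pre, post    : number of volumes to keep before / after each event
    """
    segments = []
    for onset in event_onsets:
        # keep only events whose full window fits inside the run
        if onset - pre >= 0 and onset + post < len(timecourse):
            segments.append(timecourse[onset - pre : onset + post + 1])
    return np.mean(segments, axis=0)   # one value per peri-event time point

# e.g., average the four "blue" events of one run (onsets are made up here)
# era_blue = event_related_average(roi_timecourse, [10, 50, 90, 130])
```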
Event-Related Averaging in BV
• Define which subjects/runs to include
• Set the time window
• Define which conditions to average (usually exclude baseline)
• Determine how you want to define the y-axis values, including zero; we can tell BV where to put the y = 0 baseline (here it is the average of the two circled data points at x = 0)

But what if the curves don't have the same starting point?
• In the data shown, the curves started at the same level, as we expect they should, because both conditions were always preceded by a resting baseline period
• But what if the data looked like this? …or this?

Epoch-based Averaging
• In the latter two cases, we could simply shift the curves so they all start from the same (zero) baseline
• FILE-BASED AVERAGING: the zero baseline is determined across all conditions (for "0 to 0": the points in the red circles)
• EPOCH-BASED AVERAGING: zero baselines are specific to each epoch

File-based vs. Epoch-based Averaging
• Time courses may start at different points because of different event histories or noise
File-based Averaging
• zero is based on the average starting point of all curves
• works best when low frequencies have been filtered out of your data
• similar to what your GLM stats are testing
Epoch-based Averaging
• each curve starts at zero (e.g., set EACH curve such that at time = 0, %BSC = 0)
• can be risky with noisy data
• only use it if you are fairly certain your pre-stim baselines are valid (e.g., you have a long ITI and/or your trial orders are counterbalanced)
• can yield very different conclusions than GLM stats
• (a code sketch contrasting the two baselines appears after the ERA example below)

What if…?
• This design has the benefit that each condition epoch is preceded by a baseline, which is nice for making event-related averages
• However, we might decide that this design takes too much time because we are spending over half of the time on the baseline
• Perhaps we should use the following paradigm instead…?
• This regular triad sequence has some nice features, but it can make ERAs more complicated to understand

Regular Ordering and ERAs
• We might have a time course that looks like this

Example of ERA Problems
[Figure: file-based ERA (Pre = 2, Post = 10, baseline 0 to 0) with curves for Intact, Scrambled, and Fixation]
• If you make an ERA the usual way, you might get something that looks like this
• One common newbie mistake is to make ERAs for all conditions, including the baseline (Fixation); this situation will illustrate some of the confusion with that
• Initially, some people can be confused about how to interpret this ERA because the pre-event activation looks wonky

Example of ERA Problems
[Figures: file-based ERAs with Pre = 2, Post = 10 vs. Pre = 8, Post = 18 (baseline 0 to 0)]
• If you make the ERA over a longer time window, the situation becomes clearer
• You have three curves that are merely shifted in time with respect to one another

Example of ERA Problems
[Figure: file-based ERA (Pre = 2, Post = 10, baseline 0 to 0) with the end of the preceding Intact, Scrambled, and Fixation epochs marked]
• Now you should realize that the different pre-epoch baselines result from the fact that each condition has different preceding conditions
– Intact is always preceded by Fixation
– Scrambled is always preceded by Intact
– Fixation is always preceded by Scrambled

Example of ERA Problems
[Figure: file-based ERA (Pre = 2, Post = 10, baseline 0 to 0): Intact, Scrambled, Fixation]
• Because of the different histories, changes with respect to baseline are hard to interpret. Nevertheless, ERAs can show you how much the conditions differed once the BOLD response stabilized
– This period shows, rightly so, Intact > Scrambled > Fixation

Example of ERA Problems
[Figure: epoch-based ERA (Pre = 2, Post = 10, baseline -2 to -2)]
• Because the pre-epoch baselines are so different (due to differences in preceding conditions), here it would be really stupid to do epoch-based averaging (e.g., with x = -2 as the y = 0 baseline)
• In fact, it would lead us to conclude (falsely!) that there was more activation for Fixation than for Scrambled
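To make the file-based vs. epoch-based distinction concrete, here is a minimal numpy sketch of the two baseline conventions (names are illustrative; this is not BV code). Applied to the example above, the "epoch" mode is exactly the re-baselining that would falsely rank Fixation above Scrambled:

```python
import numpy as np

def apply_baseline(eras, mode="file", t0_index=0):
    """Re-baseline a set of event-related averages.

    eras     : dict mapping condition name -> 1-D array (one ERA per condition)
    mode     : "file"  -> subtract ONE zero level, the mean of all curves'
                          values at the reference time point (shared baseline)
               "epoch" -> subtract EACH curve's own value at the reference
                          time point, so every curve starts at zero
    t0_index : index of the reference (pre-event) time point
    """
    if mode == "file":
        zero = np.mean([curve[t0_index] for curve in eras.values()])
        return {name: curve - zero for name, curve in eras.items()}
    # "epoch"
    return {name: curve - curve[t0_index] for name, curve in eras.items()}
```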
Example of ERA Problems
• In a situation with a regular sequence like this, instead of making an ERA with a short time window and curves for all conditions, you can make one single time window long enough to show the series of conditions (and here you can also pick a sensible y = 0 based on x = -2)
[Figure: file-based average for the Intact condition only (Pre = 2, Post = 23, baseline -2 to -2), spanning the Intact, Scrambled, and Fixation epochs]

Partial Confounding
• In the case we just considered, the histories for the various conditions were completely confounded
– Intact was always preceded by Fixation
– Scrambled was always preceded by Intact
– Fixation was always preceded by Scrambled
• We can also run into problems (less obvious, but with the same ERA issues) if the histories of the conditions are partially confounded (e.g., quasi-random orders)
– Intact is preceded by Scrambled 3X and by Fixation 3X
– Scrambled is preceded by Intact 4X and by Fixation 1X
– Fixation is preceded by Intact 2X, by Scrambled 2X, and by nothing 1X
– No condition is ever preceded by itself

The Problem of Trial/Block History
• This problem also occurs for single-trial designs.
• This problem also occurs even if the history is only partially confounded (e.g., if Condition A is preceded by Condition X twice as often as Condition B is preceded by Condition X).
• If we knew with certainty what a given subject's HRF looked like, we could model it (but that's rarely the case).
• Thus we have only two solutions:
1) Counterbalance trial history so that each curve should start with the same baseline
2) Jitter the intertrial intervals so that we can estimate the HRF
– more on this in the analysis lectures when we talk about deconvolution

One Approach to Estimation: Counterbalanced Trial Orders
• Each condition must have the same history of preceding trials so that trial history subtracts out in comparisons
• For example, if you have a sequence of Face, Place, and Object trials (e.g., FPFOPPOF…), with 30 trials for each condition, you could make sure that the breakdown of trials (yellow) with respect to the preceding trial (blue) was as follows:
– …Face Face × 10, …Place Face × 10, …Object Face × 10
– …Face Place × 10, …Place Place × 10, …Object Place × 10
– …Face Object × 10, …Place Object × 10, …Object Object × 10
• Most counterbalancing algorithms do not control for trial history beyond the preceding one or two items (see the sketch below for a simple first-order history check)

Analysis of Single Trials with Counterbalanced Orders
Approach used by Kourtzi & Kanwisher (2001, Science) for pre-defined ROIs:
• for each trial type, compute averaged time courses synced to trial onset, then subtract the differences
[Figure: raw data -> event-related average -> event-related average with the control (fixation, F) period factored out, where A signal change = (A - F)/F and B signal change = (B - F)/F]

Pros & Cons of Counterbalanced Rapid ER Designs
Pros
• high detection power with the advantages of ER designs (e.g., you can have many trial types in an unpredictable order)
Cons and Caveats
• reduced detection power compared to block designs
• estimation power is better than in block designs, but not great
• accurate detection requires accurate HRF modelling
• counterbalancing only considers the one or two trials preceding each stimulus; you have to assume that higher-order history is random enough not to matter
• what do you do with the trials at the beginning of the run… just throw them out?
• you can't exclude error trials and keep a counterbalanced trial history
• you can't use this approach when you can't control trial status (e.g., items that are later remembered vs. forgotten)
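Referring back to the counterbalancing slide above, one quick way to audit first-order trial history is to tally (preceding condition, current condition) pairs; in a fully counterbalanced 90-trial Face/Place/Object sequence, every tally would be 10. A minimal Python sketch (the helper name is illustrative):

```python
from collections import Counter

def first_order_history(sequence):
    """Count (previous condition, current condition) pairs in a trial sequence."""
    return Counter(zip(sequence[:-1], sequence[1:]))

# e.g., for a short quasi-random sequence of Face (F), Place (P), Object (O) trials
seq = list("FPFOPPOF")
for (prev, curr), n in sorted(first_order_history(seq).items()):
    print(f"...{prev} {curr} x {n}")
```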
Design Types: Rapid Jittered ER Design
[Schematic legend: trial of one type (e.g., face image) vs. trial of another type (e.g., place image)]

BOLD Overlap with Regular Trial Spacing
[Figure: neuronal activity from TWO event types with a constant ITI, and the resulting BOLD activity showing partial tetanus]
Slide from Matt Brown

BOLD Overlap with Jittering
[Figure: neuronal activity and BOLD activity from closely spaced, jittered events]
Slide from Matt Brown

Fast fMRI Detection
[Figure: A) BOLD signal; B) individual haemodynamic components; C) two predictor curves for use with the GLM (summation of B)]
Slide from Matt Brown

Post Hoc Trial Sorting Example
Wagner et al., 1998, Science

Algorithms for Picking Efficient Designs
• Optseq2
• Genetic algorithms

Design Types: Mixed Design
[Schematic legend: trial of one type (e.g., face image) vs. trial of another type (e.g., place image)]

Example of Mixed Design
Otten, Henson, & Rugg, 2002, Nature Neuroscience
• used short task blocks in which subjects encoded words into memory
• in some areas, the mean level of activity for a block predicted retrieval success

Pros and Cons of Mixed Designs
Pros
• allow researchers to distinguish between state-related and item-related activation
Cons
• sensitive to errors in HRF modelling

EXTRA SLIDES

A Variant of Mixed Designs: Semirandom Designs
• a type of event-related design in which the probability that an event will occur within a given time interval changes systematically over the course of the experiment
– first period: P of event = 25%
– middle period: P of event = 75%
– last period: P of event = 25%
• the probability as a function of time can be sinusoidal rather than a square wave

Pros and Cons of Semirandom Designs
Pros
• good tradeoff between detection and estimation
• simulations by Liu et al. (2001) suggest that semirandom designs have slightly less detection power than block designs but much better estimation power
Cons
• relies on assumptions of linearity
• complex analysis
• "However, if the process of interest differs across ISIs, then the basic assumption of the semirandom design is violated. Known causes of ISI-related differences include hemodynamic refractory effects, especially at very short intervals, and changes in cognitive processes based on rate of presentation (i.e., a task may be simpler at slow rates than at fast rates)." -- Huettel, Song & McCarthy, 2004
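For illustration, a semirandom event train with the square-wave probability profile above could be generated along these lines. This is only a sketch under assumed parameters (one possible event slot per interval, thirds of equal length, arbitrary run length); the function name is not from any package:

```python
import random

def semirandom_event_sequence(n_intervals=90, seed=0):
    """Generate a 0/1 event train whose event probability follows a
    25% / 75% / 25% square-wave profile across thirds of the run."""
    random.seed(seed)
    third = n_intervals // 3
    probs = [0.25] * third + [0.75] * third + [0.25] * (n_intervals - 2 * third)
    return [1 if random.random() < p else 0 for p in probs]

events = semirandom_event_sequence()
print(sum(events), "events across", len(events), "intervals")
```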