Siemens 3T Scanner User Training: Supporting Information and FAQ
LAST MODIFIED: 8th March 2012

Some of the information below is required to be able to pass the 3 T user quiz. The rest is useful stuff that will help you get the best data and run efficient scan sessions. This isn't designed to be a standalone training course, but a reference to explain some of the features and terms you will come across as you learn to drive the scanner. It should be considered supporting information only. Accordingly, I don't recommend trying to read this document until you have run the scanner for yourself a couple of times at least. I assume a reasonable familiarity with the scanner software and control. It is also essential to have some knowledge of the background physics and physiology of fMRI. If you haven't taken a course on fMRI (e.g. Psy214) then you should read chapters 4-8 of the book Functional Magnetic Resonance Imaging by Huettel, Song & McCarthy.

Sections added or modified since the last version are highlighted in yellow on the Contents page. See also the Update Notes on this page for a brief summary of changes.

Further assistance and feedback: binglis@berkeley.edu, 510-388-8321.

Update Notes (8th March, 2012):
- Updated with new operating modes available under software syngo MR version B17.
- General tweaks to improve readability.
- Further recommendations on using the 32-channel coil for fMRI.
- Added a description of the new AutoAlign procedure, AAHScout.
- Added a new section: "I have an existing protocol that uses the old AutoAlign (AAScout). How do I get and use the new AutoAlign (AAHScout)?"
- Added a new section: "I want to add a new acquisition and acquire exactly the same slices as this other EPI acquisition I just acquired. How do I tell the scanner to do that?"
- Extended the discussion on the relative merits of PACE versus using an offline realignment alone, in the section on the ep2d_pace sequence.
- Fixed a typo concerning the slice ordering for descending slices.
- Added a new section: "What is a field map and how does it fix EPI distortion?"
- Added a new section: "I want to try to fix my distortion with a field map. What do I need to acquire?"
- Updated the sections on partial Fourier for EPI, noting that Siemens simply zero fills the omitted portion of k-space rather than doing a conjugate synthesis.
- Extended checklists.

Update Notes (7th July, 2010): VB15 version
- New recommendations on using the 32-channel coil.
- Modified recommendations on re-shimming during a scan.
- Modified recommendations on flip angle selection.
- Added information on slice ordering for EPI.
- Added information on the use of fat saturation for fMRI.
- Modified recommendations on the use of GRAPPA, and interpretation of GRAPPA artifacts.
- Modified explanation of partial Fourier issues.
- Clarified method choice between partial Fourier and GRAPPA.
- Clarified implications of the Siemens research agreement, after interpretation by the UC intellectual property officials.

CONTENTS:

SETTING UP AND ACQUIRING SCANS:
What is the practical difference between the 12-channel and 32-channel head coils? Which one is best for fMRI?
I have a subject who has a lot of dental work. Is this person okay to scan?
Why does the scanner instruct me that the patient bed might move when I start the first scan in my session (usually a localizer)?
I can't hear anything happening? How can I tell what the scanner is doing right now?
Why do I sometimes get a message that the subject might experience peripheral nerve stimulation? Should I tell the subject?
How does the AutoAlign feature work? Should I use it?
NEW: I have an existing protocol that uses the old AutoAlign (AAScout). How do I get and use the new AutoAlign (AAHScout) instead?
I don't want to trust AutoAlign. How should I define my slice positions manually?
NEW: I want to add a new acquisition and acquire exactly the same slices as this other EPI acquisition I just acquired. How do I tell the scanner to do that?
When does shimming happen and what is actually done?
I want to re-shim my subject's brain midway through my session. How do I do it?
How do I know whether I should re-shim or not?
I want to know how long my scan will take. Where is the scan time shown?
What is the difference between the Scan and Apply buttons for starting a scan?
Help! What pulse sequence am I using?

EPI: BASIC PARAMETER AND SEQUENCE ISSUES
I've been told not to use echo spacing between 0.6 and 0.8 ms for EPI. How come?
How many dummy scans happen before the first real (saved) volume of EPI in my time series?
I want 200 volumes in my EPI time series. How do I do that?
On the BOLD card, what is Motion Correction? How do I turn it on or off?
My protocol has TE set at 28 ms for EPI. But I saw somebody else's protocol that uses a TE of 22 ms. How come?
I am using ep2d_bold. What are the specifics of using this sequence?
I am using ep2d_pace. What are the specifics of using this sequence?
I am using ep2d_neuro. What are the specifics of using this sequence?
What flip angle should I use for fMRI?
What TR should I use for fMRI?
Should I use interleaved or sequential slices for fMRI?
In what order does the scanner acquire EPI slices?

EPI: ARTIFACTS
I hear a lot about ghosting when people talk about EPI. What is a ghost and what causes them? How do I get rid of them?
On the Contrast tab I notice that fat suppression is enabled for EPI. What does it do?
What is the origin of signal dropout in EPI? Can it be fixed?
What is the origin of distortion in EPI? Can it be fixed?
NEW: What is a field map and how does it fix EPI distortion?
NEW: I want to try to fix my distortion with a field map. What do I need to acquire?
Whoa! I'm watching my EPIs on the Inline Display window and I'm seeing all sorts of weirdness. What's going wrong?
How much subject movement is too much?

EPI: ADVANCED PARAMETER AND SEQUENCE ISSUES
What the hell is iPAT? Last time I checked, grappa was a strong Italian drink! It makes no sense!
Is GRAPPA a good technique to use? What are the caveats?
What is "partial Fourier" and why might I want to consider it for EPI?
Is partial Fourier a good technique to use? What are the caveats?
It looks like I will need to use either partial Fourier or GRAPPA to get the spatial resolution and coverage that I want. Which method should I use?

FINAL ISSUES:
I want to scan overnight. Is there anything I need to watch out for?
I hear we have a research agreement with Siemens. Why should I care?

APPENDIX 1: CHECKLISTS
Normal operation checklists: Experimenter prep, Lab prep, Subject prep, Subject setup, Start of scan, Experimental protocol, End of scan
Emergency checklists: Unexpected image feature, Panicked subject, Magnetic object accident, Fire, Earthquake

SETTING UP AND ACQUIRING SCANS:

What is the practical difference between the 12-channel and 32-channel head coils? Which one is best for fMRI?
RF coil selection is probably the first decision you will face when you start to develop a new protocol.
Here are the main differences to consider:

- The 32-ch coil gets approximately twice the signal-to-noise ratio (SNR) of the 12-ch coil in the cortex. SNR for deep structures is about 50% better for the 32-ch coil. (See figure, below.)
- The 32-ch coil is fairly heterogeneous in its reception profile, being especially sensitive for frontal brain if the subject is high in the coil (i.e. the bridge of the nose sits between the two loops at the top-front of the coil). This can yield funny-looking images to the uninitiated, but there is nothing inherently wrong with the coil; all RF coils are somewhat heterogeneous.
- The 12-channel coil comprises linear struts arranged in a circular geometry whereas the 32-channel coil has pentagonal loops arranged on the surface of a partial sphere. The 32-ch coil therefore provides the ability to do parallel imaging, such as GRAPPA, with higher acceleration factors than the 12-ch coil. As a rough rule of thumb the 12-ch coil should be limited to acceleration factors of 2 whereas the 32-ch coil can use acceleration factors up to four. See the later section on GRAPPA for more information.
- The 32-ch coil has a smaller internal diameter than the 12-ch coil, and it can be a tight fit for large heads. Children and most female adult subjects have no problem fitting in comfortably, but some large male adult heads may only fit with very minimal padding underneath the occipital pole, which may not provide sufficient comfort for a long duration scan.
- Because of its different geometry, an entire set of dedicated peripherals is available for the 32-ch coil. In particular, different mirror mounts must be used because of the different coil geometries. Also, the Siemens headphones won't fit in the 32-ch coil, so if you cannot simply use the in-magnet speaker to communicate with your subject (e.g. you want to provide an auditory stimulus) then you will need to use one of the insertable headphone variants. (Talk to Rick about the options.) Corrective lenses must be placed on the outside of the 32-ch coil; there is no room for goggles or glasses on a subject once he is inside the coil. (Talk to Rick about the custom mount for corrective lens use with the 32-ch coil.)
- Visual obscuration is quite different for the 12-ch and 32-ch coils. The 12-ch coil has a single strut that runs parallel with the subject's nose, whereas the 32-ch coil has a vee-shaped gap for the nose. In general, the 32-ch coil may provide slightly less obscuration of the subject's visual field, but only if the subject is positioned high in the coil, with the eyes close to the two front-center coil loops.
- Both the 12-ch and 32-ch coils can be used with the bottom half only, e.g. for TMS studies or for retinotopic studies that require an un-obscured visual field.

So, which one is better for fMRI? As a general rule, the 32-channel coil will out-perform the 12-channel coil for most anatomical imaging applications. However, there may be particular subject groups (large men, for instance, or subjects who might not like the idea of being in a tight-fitting coil) or particular applications (e.g. the use of LCD goggles) that cannot be made to fit inside the smaller 32-channel coil. More critically for fMRI, we have found that the 32-channel coil is more motion-sensitive than the 12-channel coil. This issue is under investigation. Note, however, that it's not a Siemens-specific issue. Rather, it is related to the size of the smaller, more numerous coil elements in high dimensional coil arrays.
(Tests on a 3 T GE scanner with three different vendors' 32-channel coils all showed similar motion dependence to what we see on our Siemens 32-channel coil.)

Whether or not this motion sensitivity is a reason to avoid the 32-channel coil altogether is difficult to say at this point. If you need the 32-channel coil for a particular reason, e.g. a high iPAT factor for GRAPPA, very high spatial resolution (< 2 mm voxels), or matching another site's protocol, then the trade-off may well be worth it. But if you don't require the 32-channel coil, why take the risk with it when the 12-channel coil will work without the gamble?

If you opt to use the 32-channel coil then it is a good idea to enable the Prescan Normalize option on the Resolution > Filter tab when acquiring EPI time series for fMRI. This option acquires a brief (20 sec) low-resolution scan that is used to normalize the receive field heterogeneity of the head coil, which should somewhat reduce the higher motion sensitivity of the 32-channel coil relative to the 12-channel coil. In the absence of any normalization, motion of the brain relative to the head coil elements can be misinterpreted by a motion correction algorithm (assuming, as most do, that you will apply a rigid body realignment to the time series) and has the effect of reducing the temporal signal-to-noise (TSNR) of the time series. It's perverse, but there is a very real likelihood that a 3D realignment can actually degrade TSNR when using a large array of small coil elements, which is exactly what the 32-channel head coil is. (A quick way to check TSNR offline is sketched below. Incidentally, Prescan Normalize can also be used with the 12-channel coil, but we're not yet sure whether there's a definite benefit.)

When using the Prescan Normalize option you can save to the database the raw, un-normalized time series as well as the normalized data. I would strongly advise doing this because then you have a risk-free decision with regard to prescan normalization. You get raw data and normalized data. You don't like the normalized data for any reason? Fine, ignore it and use the raw data you would have obtained anyway! To get raw and normalized data go to the Resolution > Filter tab and select the "Unfiltered images" check box underneath the button that enables the Prescan Normalize. There is one caveat: this option won't save the un-normalized data if you have the MoCo option selected. In that case, both time series saved to the database will be prescan normalized, but the second time series will also have been motion-corrected via a realignment algorithm. (See the section on Motion Correction for more details on the MoCo options.)

Since prescan normalization is an advanced option, and because its use is an active area of research both at BIC and elsewhere (there is very sparse literature on the motion sensitivity of the 32-channel coil as yet), I would strongly encourage you to talk to Ben or Daniel before initiating a new experiment on the 32-ch coil. Let us explain the issues in some detail. We should double check that you really need this coil, and make sure that you understand the potential consequences of selecting it over the 12-channel coil.

Another minor consideration before selecting the 32-channel coil might be the availability of backup sites. At present, UCSF only has the 12-channel coil and console, so if the BIC scanner went down for an extended period of time and you wanted to move your scans to UCSF's NIC, you'd have a problem unless you're using the 12-channel coil.
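Once you are off the scanner, a quick way to see whether Prescan Normalize (and your choice of realignment) helped or hurt is to compare voxelwise temporal SNR between the raw and normalized exports of the same run. Below is a minimal offline sketch in Python, assuming the two series have been converted to NIfTI; the file names are hypothetical and the numpy/nibabel packages are assumed to be installed. It is only an illustration of the TSNR = mean/standard-deviation-over-time calculation, not a BIC-supplied tool.

    import numpy as np
    import nibabel as nib  # assumes the DICOM series were converted to NIfTI offline

    def temporal_snr(nifti_path, mask_fraction=0.2):
        """Voxelwise temporal SNR (mean / std over time) for a 4D EPI series."""
        data = nib.load(nifti_path).get_fdata()      # dimensions: x, y, z, time
        mean_img = data.mean(axis=-1)
        std_img = data.std(axis=-1)
        # Crude brain mask: voxels whose mean exceeds a fraction of the robust maximum
        mask = mean_img > mask_fraction * np.percentile(mean_img, 98)
        tsnr = np.zeros_like(mean_img)
        tsnr[mask] = mean_img[mask] / np.maximum(std_img[mask], 1e-6)
        return tsnr, mask

    # Hypothetical file names: the raw and prescan-normalized exports of the same run
    for label, path in [("raw", "run1_raw.nii.gz"), ("normalized", "run1_norm.nii.gz")]:
        tsnr, mask = temporal_snr(path)
        print(label, "median TSNR within mask:", round(float(np.median(tsnr[mask])), 1))

Whichever series gives the higher TSNR after your usual realignment is the one to carry forward into analysis; that is exactly the risk-free comparison the "Unfiltered images" option makes possible.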
The bottom line at this point is for you to know that there is a choice and that the choice presents significant differences. Talk to Ben when you’re ready to set up a new protocol/experiment and you can make a more informed decision then. SNR profiles for the Siemens 12-ch and 32-ch coils. Note especially the high cortical sensitivity of the 32-ch coil, but that the SNR is also higher in the midbrain when compared to the 12-ch coil. I have a subject who has a lot of dental work. Is this person okay to scan? Most modern dental work is MRI safe, meaning that there is minimal risk to your subject of having an MRI scan. This is not to say, however, that dental hardware won’t potentially have a negative effect on an fMRI scan, even if anatomical MRIs can be acquired safely and effectively. As you might suspect, if there is a large amount of metalwork in and around the teeth, there can be problems getting a good shim across the brain, especially its inferior surface (which is already somewhat compromised by the shape of the skull and presence of sinuses). As a general rule we don't worry about retainers in the lower jaw. However, upper jaw retainers containing a significant amount of metal (usually stainless steel) or metal braces in lower or upper jaw can create shim problems. Movement of the metal, e.g. from swallowing, talking or head motion, may increase the amount of ghosting and decrease statistical power. A similar effect can occur in subjects having many metal amalgam fillings. Whether or not to accept a volunteer for a scan can be difficult to assess without simply trying the scan. However, a basic rule of thumb is to accept retainers, accept three or fewer metal amalgam fillings (either jaw) and reject braces unless the subject is especially valuable, in which case try it and see. Note, however, that any subject with a retainer, extensive amalgam fillings or braces is likely to show a signal void for the mouth in anatomical scans such as MP-RAGE. This doesn’t in any way signify how the EPI signal will behave; the EPI signal characteristics will depend on spatial resolution, slice prescription, TE and many other factors. Why does the scanner instruct me that the patient bed might move when I start the first scan in my session (usually a localizer)? Until a reference scan has been acquired, the scanner is using as its frame of reference the magnet isocenter - the center of the magnetic field, which is in the geometric center of the bore tube. This could, in principle, differ from the reference position, called REFERENCE, which we use. The reference we want to use is the center of our subject’s head, which we have just marked with the laser prior to putting the bed into the magnet. As soon as a localizer (or any other image) has been acquired using the REFERENCE positioning mode, the scanner software then ‘knows’ to reference all subsequent images relative to that first image. This allows you to prescribe slices on each subsequent image however you like, and the scanner will track where you are in space. This stays true throughout your scan session provided you don’t move the patient table, or intentionally change the reference mode that’s been preset in all the scans you will use. (To change the mode you would need to access the System tab, and to know what you’re doing when you get there. Thus, it’s not something a routine user need worry about!) I can’t hear anything happening? How can I tell what the scanner is doing right now? Look in the very bottom left-hand corner of the screen. 
It might say, for example: “Waiting for scan instructions,” or “Waiting for slice positioning,” or “Scanning 00:36 (3/20 B).” That last message tells you there are 36 seconds left in the current scan, and that it’s just finished acquiring three of twenty blocks in a time series. The other messages are usually self-explanatory. Don’t ask why some of the most useful information is hidden away in that bottom left-hand corner. It just is! Why do I sometimes get a message that the subject might experience peripheral nerve stimulation? Should I tell the subject? From your safety training you will recall that one of the risks to a subject from MRI is peripheral nerve stimulation. This can arise because of the rapidly switched gradient magnetic fields; the clicks, bangs and pings the scanner emits while it is acquiring data. (The sounds don’t cause the stimulation risk – only auditory damage risk – but the gradients that create the noise also create the stimulation risk.) The scanner will issue a warning whenever it calculates that the stimulus limit will be approached for the scan you are trying to acquire. If the limit were actually exceeded the scanner wouldn’t let you acquire and you’d need to change a parameter. So this condition isn’t necessarily hazardous, it’s just telling you that in subjects who are unusually sensitive to changing magnetic fields, they might feel something. So what to do about it? In general, it’s probably not a good idea to warn the subject that he or she might experience peripheral nerve stimulation because then he/she is going to be on high alert, and it is entirely possible (even likely) the subject will think the normal scanner vibration is the sensation you’ve just warned about! A better approach is to simply remind the subject at the start of the scan to squeeze the squeeze-ball if he/she is uncomfortable at any point during the scan. If a subject does report feeling tingling or twitching, don’t dismiss it! Assume this particular subject is sensitive to the pulsed magnetic fields and discontinue the scan if the subject is unwilling to proceed. Feel free to explain the effect to the subject, and if they are happy to continue, go for it. Err on the side of caution, however. How does the AutoAlign feature work? Should I use it? AutoAlign is a software method that allows, under the right circumstances, slice prescriptions to be set automatically as part of a protocol. It is designed to allow a protocol established on one subject to be duplicated for later subjects. Essentially, here is how it would work in practice: 1. On your first (pilot) subject, you acquire an AA scout scan, check that the AA software has been enabled (yellow slice bar icon), then set all your slice prescriptions by hand on whatever anatomical scans you want to use. (You’d typically use the Localizer plus perhaps an MP-RAGE or another fast 2D anatomical scan to define the slice prescription.) 2. Having completed the entire scan protocol this way, make a new (empty) protocol that will receive duplicates of all of the scans you have just acquired. 3. In the new protocol, drag and drop the AA scout plus all the scans you want to acquire from subsequent subjects. (You’ll learn how to make protocols in user training.) You can drag and drop scans from the exam queue or from the Patient Browser. Save the new protocol. 4. For subsequent subjects, bring up the saved protocol and run the scans in order, beginning with AA scout. 
Once it is active (yellow slice bars), the software will place slice positions in (approximately) the same anatomical locations as was done for the first subject.

So does AutoAlign work? It depends! There are two flavors suitable for head imaging: the old AAScout and the new AAHScout. The old AAScout can be expected to fail (it will show red bars instead of yellow ones) if you have a subject with significant pathology, such as a large stroke. It might work, but then again it might not. Furthermore, if you are scanning children or adolescents 17 or younger, you will need to cheat the software by telling the patient registration that your subject is older; the software won't run if the subject is registered as being under 17. Once that trick is established, many people then find AAScout works quite well on adolescent brains. And finally, AAScout definitely won't run on anything that isn't a human brain, such as a phantom.

The newer AAHScout uses a different algorithm to determine reference features of the brain. In preliminary tests it seems to be more accurate than AAScout. Furthermore, AAHScout can be used to replace the Localizer scan, saving twenty seconds of data acquisition.

Whether or not you should use either AutoAlign procedure depends on the specifics of your experiment. If your EPI slice prescription covers all or nearly all of the brain, has (near) isotropic voxels and you aren't especially worried about getting certain anatomical features captured within specific 2D planes, then the performance of AutoAlign is probably good enough for you. At a minimum, however, you will want to inspect the automated slice prescription and make sure the yellow slice box is roughly where you intend. Don't assume that it will always work! But if you're doing high-resolution retinotopy with a dozen coronal slices placed so as to just capture all of V1-V5 then you should probably stick to prescribing your slices by hand/eye, using detailed localizer/pilot scans and your anatomical knowledge to place your slices.

One problem that AutoAlign can introduce for EPI is a rotation of the image plane, i.e. a rotation of the read and phase encode axes away from the primary gradient axes. In a typical axial scan, for example, the readout dimension uses the X gradient only (subject's left-right) while the phase encoding is performed by the Y gradient (subject's anterior-posterior). Rotating the image plane causes a mixing of these assignments and can increase EPI ghosting if the rotation becomes large (>5 degrees). If you do use AA, always check the parameter Phase enc. dir on the Geometry parameter card. It should be at or near zero. (See figure below.) If AA renders this parameter nonzero, overrule it and set Phase enc. dir to zero manually. On the Routine tab, click the three dots to the right of the Phase enc. dir. field. This opens the Inplane Rotation window, shown above. Assure the Rotation angle is zero. If AutoAlign has set it non-zero, set it back.

I have an existing protocol that uses the old AutoAlign (AAScout). How do I get and use the new AutoAlign (AAHScout) instead?
You can get a copy of the AAHScout sequence in the Exam Explorer, here: SIEMENS > head > library > localizer
Select either AAHScout for the 12-channel coil or AAHScout_32 for the 32-channel coil. AAHScout replaces the combination of a Localizer and the old AAScout; there's no need to acquire a separate Localizer unless you want to.
AAHScout automatically creates a three-plane localizer display and loads it into the three windows to allow slice prescriptions. AAHScout has three modes: Basis, Brain, and Brain Atlas, settable on the Routine task card of the experiment you are about to acquire; that is, on your destination EPI or MP-RAGE scan, for example, not on the AAHScout scan itself. See the figure below for the location of the AutoAlign mode field on the Routine tab (of a destination EPI scan). Note that the default mode is off – indicated by three dashes – so if your slices don’t appear where you think they should, check that you have a mode enabled! According to the Siemens documentation, which isn’t very clear on how AAHScout is supposed to work, Brain Atlas is equivalent to the old AAScout, i.e. it uses a brain atlas to compute slice positions. I tested it, it doesn’t work especially well compared to the Brain or Basis modes, but I didn’t compare directly against the old AAScout. (So, if you want to use the old AAScout, don’t use AAHScout and set the mode to Brain Atlas! Use the old AAScout instead!) Brain is the mode to use for most standard axial and axial oblique prescriptions, and in a quick test it worked as well as Basis for coronal slices, too. Thus, unless I come across a failure mode in the future, my recommendation would be to always use the Brain mode regardless of your slice prescription. I don’t want to trust AutoAlign. How should I define my slice positions manually? In general, the Localizer scan can be used to set slice positions on. However, that scan acquires only three images: one sagittal, one axial and one coronal slice, each acquired at the geometric center of the magnet. Unless your EPI coverage is so large that whole brain coverage is assured no matter how big the subject’s head/brain, the anatomical information in the Localizer is probably not sufficient to assure that your EPI slices will cover both temporal poles, for example. Thus, it is a good idea to use a second anatomical scan to check your prescription on, and to make any subtle adjustments. (And in case you’re now wondering “why bother with the Localizer, then?” the answer is that it gives you, in under twenty seconds, a view of your subject that tells you whether you got the brain in the center of the magnet, the head isn’t grossly rotated, etc.) Some experiments will allow the acquisition of the MP-RAGE before any functional scans. If so, the MP-RAGE gives you about 5 minutes to set up scripts, etc. Also, once it has acquired, the MP-RAGE can be dragged into the graphical user interface (GUI) windows and used as an underlay for prescribing your EPI slices on. The typical MP-RAGE is acquired in the sagittal plane, meaning that in the GUI you’ll see a set of 2D slices acquired in the sagittal view. Leafing through these slices will easily allow you to determine the entire 3D extent of your EPI prescription, including over both temporal lobes. But if you don’t want to set up using a sagittal view, it is fairly straightforward to have the MP-RAGE acquire such that the GUI will display axial or coronal slices. See Ben for more information on how to set that up. If you don’t want to, or can’t, spend the first five minutes of a session acquiring an MPRAGE, there is a fast alternative. You can use the gre_neuro sequence in 2D or 3D mode (2D is probably best) to acquire low-resolution sections of the whole brain. 
Typically you would acquire about 24 slices with a resolution of about 4-5 mm in 15 seconds, but it is possible to spend more or less time and get higher or lower resolution, respectively. You can grab a suitable starting scan for gre_neuro set up for 2D acquisitions from the protocol DanZone/RELEASED, in either gre_neuro_12ch or gre_neuro_32ch. If you are unfamiliar with the sequence or the use of multislice 2D gradient echo images for setting slice prescriptions, drop Ben a line and get some tuition at the scanner. Below are examples of using MP-RAGE and gre_neuro_2DLoc scans to check that an EPI slice prescription covers both temporal poles as well as all of parietal cortex. Either scan can be used, the MP-RAGE being preferred if it can be acquired before any EPIs. To use either the MP-RAGE or a gre_neuro_2DLoc scan as a reference for your slice prescription, use the left mouse to drag and drop the completed image icon from the exam queue to the GUI window, as shown above. I want to add a new acquisition and acquire exactly the same slices as this other EPI acquisition I just acquired. How do I tell the scanner to do that? Once you’ve got a slice prescription you’re happy with (and assuming you’re not using AutoAlign) you may well want to assure that the prescription doesn’t change for all future EPIs, as well as perhaps for field maps or other 2D acquisitions. The specific parameters (and even the acquisition sequence) within the future scans may be different, but you want the slices to match. The slice packet location can be explicitly copied from one acquisition to another in a couple of different ways: 1. When you build the protocol: Having established your protocol in the Exam Explorer, identify the first experiment in the series of scans that is going to have the slices that you want to propagate into one or more later acquisitions. This might be the first EPI acquisition, for example. We are going to consider this acquisition the “source scan” for the purposes of slice packet location. Next, right click on a target acquisition - one that occurs beneath (after) the source acquisition you want to copy from - and scroll down to the bottom option of the menu to select Properties. This will open the Protocol properties window (below). Select the tab labeled Copy References. The window will then look something like this: Check the Copy reference is active box as shown above. This will reveal a list of potential source experiments that you can copy parameters from. There are five potential source scans in the figure above. Find the one on the list that you want to use as the source, select it and ensure that just the Center of slice groups & sat regions option is highlighted on the right. Also ensure that the two boxes at bottom-left are unchecked. (In this example we are assuming that all the spatial parameters have already been set up correctly in the target acquisitions, and all we’re trying to do here is match the center of the target slice packet to the center of a source slice packet.) Click OK to close the window. Now, in your protocol, you will see a little icon adjacent to the target acquisition, it looks like two pages of text with a number next to them. The number will be the acquisition number of the source acquisition. Re-save your protocol. Note that if you change the order of the acquisitions in your protocol, e.g. 
you insert a new acquisition before the source, or between the source and the target, the Exam Explorer will update the copy references icon number appropriately, and ensure that the target stays correctly associated with the source you chose. Likewise, if you start your session by moving your entire protocol into the Exam queue and then find that you have to re-acquire a scan between the source and the target (or you insert a new acquisition that wasn't in the original protocol), the Exam queue will update the copy references parameter to maintain the correct association of the target and source.

2. During a scan: If you would rather copy the slice packet position manually, during your session, e.g. because you bring over one acquisition at a time in the Exam queue and decide on-the-fly when to acquire a co-planar acquisition (such as a field map or a high-res 2D T1 image), then first establish your source acquisition and start or fully acquire the scan. In the following example, scan #2 in the exam queue will be the source. It's already acquiring. Scan #3 is the target and we want to match the slice positions. Ensure the target scan is open, as shown, then right click the source scan to open the following menu: Select the Copy Parameter option, as shown above. This will open a new window, as shown below: Select the Center of slice groups & sat regions option, as shown above, and ensure the two boxes at bottom left are unchecked. (As before, we are assuming all the spatial parameters of the target experiment have been preset correctly, or will be set up correctly once the slice packet has been copied.) Click OK to close the window. As the window closes you will see the yellow bars depicting the slice packet in scan #3 move to the new slice prescription, matching that already being acquired in scan #2.

When does shimming happen and what is actually done?
Shimming is the term given to the optimization of the magnetic field over the subject's brain. In the absence of a subject, the magnetic field is homogeneous to a few parts per million across a 30 cm diameter spherical volume (DSV). But the subject's head degrades the field considerably. In some places, such as the frontal lobe, the field heterogeneity can be many times worse. Unless this degradation is accounted for, echo planar images (or those regions of EPIs where the field is most heterogeneous) may have low signal (i.e. "dropout"), high distortion and high artifact (ghost) levels. To compensate for this degradation of the main magnetic field, the "bad" field regions are opposed (and ideally cancelled) by small magnetic fields generated by resistive (copper) coils that are wound on the gradient set, inside the magnet bore tube. You don't really need to know anything about these coils other than that they exist, and that they are controlled by a shimming algorithm that attempts to optimize the magnetic field homogeneity over the entire head.

Unless otherwise instructed the scanner will perform shimming automatically using a field mapping procedure, over a volume that encompasses your slices/volume of interest. No further shimming will be conducted in the current scan session unless you request a re-shim explicitly (see later). In general you'll find that you'll trigger a shim based on either your first EPI prescription or your MP-RAGE, whichever comes first in your protocol, and that'll be it for the session. The shimming routine involves a magnetic field map acquisition.
This is a 20 second buzzing that happens before the scan you've just initiated. The scanner acquires this field map and computes a correction based on the result. Expect the 20 seconds of buzzing only for the first EPI (or your MP-RAGE) scan in your protocol. After that, the only noise you'll hear before your EPI starts is a couple of quick clicks. See later for an explanation of what those are doing.

An advanced shim mode is available. In this mode, the scan does a first field map as in the standard mode and then acquires a second map to check the validity of the first. A small correction is made, if necessary, and a third field map is acquired to check that result. The total advanced shim takes approximately 90 seconds, whereas the standard shim takes 30 seconds (including computations). To request the Advanced Shim rather than the Standard Shim, go to the System card and select Adjustments. Shim mode is at the top of the left column.

Should you use standard or advanced shimming? Well, based on the appearance of EPI ghosts, it seems that standard shimming is perfectly acceptable. If you have the time in your protocol, however, feel free to try the Advanced Shim. (Come talk to me first.) You probably won't see any visible differences in EPI quality if you compared the two methods by eye, but you might find small improvements in fMRI statistics in hard-to-shim areas like frontal lobe. At this point there is insufficient evidence for me to recommend everybody use advanced shimming. My recommendation is to use standard shimming unless you are interested in partial brain coverage (e.g. occipital-only, or frontal-only scans), at which point there may be some benefit to advanced shimming. But we should talk about it before you try it!

Finally, it is also possible to change the volume over which shimming is performed; you don't have to accept the default, pre-defined shim volume if you don't want it for some reason. The default shim volume is set to cover the entire 3D volume of your slice prescription (either the MP-RAGE or EPI, whichever happens first in the imaging session). But if you want to tinker with a different (usually smaller) user-defined shim volume, drop me a line and I'll show you how to do it. This can be useful if you are trying to do fMRI of a restricted volume such as the amygdala, LGN or occipital pole.

I want to re-shim my subject's brain midway through my session. How do I do it?
Here's what you need to do to instigate a shim at any point during a protocol:
1. The scanner must not already be running or have scans that are queued, ready to acquire automatically.
2. In the exam window (where you start/stop scans) open the next exam (i.e. the scan you're about to run) so that you see the small black tab to the left of the protocol number. (Doing this also shows the slice prescription in yellow on the three image display windows.)
3. Now that the current protocol is open, select the Adjustments pull-down from the Options menu at the top of the screen.
4. On the window that opens, find the tab labeled Show towards the bottom-right. It's the last in a row of five tabs.
5. On the Show tab, click the Invalidate All button and then close the window.
6. Now start your scan as normal, using the Apply button above the protocol window. You should hear the scanner shim (low buzzing for 20 seconds and a message in the bottom-left corner of the screen telling you it's shimming).
Simply repeat this procedure whenever you want to force a new shim.
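For a sense of the scale of what the shim is correcting (and why a re-shim after a change is worth the 30 seconds), here is a bit of back-of-the-envelope arithmetic. The constants are generic physical values, not calibration numbers for this particular system: at a nominal 3 T, one part per million of field inhomogeneity corresponds to roughly 128 Hz of off-resonance, which is the kind of offset that produces the dropout, distortion and ghosting described above.

    # Rough scale of the field offsets the shim is fighting (a sketch; generic
    # physical constants only, not system-specific calibration values).
    GAMMA_BAR = 42.576e6   # proton gyromagnetic ratio / 2*pi, in Hz per tesla
    B0 = 3.0               # nominal field strength in tesla

    f0 = GAMMA_BAR * B0    # Larmor frequency, roughly 128 MHz at a nominal 3 T
    for ppm in (0.1, 1.0, 5.0):
        print(f"{ppm:4.1f} ppm inhomogeneity -> {f0 * ppm * 1e-6:7.1f} Hz off-resonance")

So even sub-ppm changes in the field, whether from head motion or from the gradient heating discussed below, translate into tens of Hz of off-resonance across the brain.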
You will usually want to re-shim whenever you know the subject has moved, or if the ghost level in your EPI suddenly gets a lot worse (often an indication that your subject has moved without you knowing). See below for tips on shimming during a session. How do I know whether I should re-shim or not? The most common reason for re-shimming in the middle of a session, rather than just once at the beginning (see above) is subject movement. You can expect a new shim to improve the quality of the EPI if the subject has moved and is now stationary, e.g. the subject just sneezed or needed to adjust his back position to get comfortable. In these situations we can expect the subject’s head to remain still, albeit in a new position, perhaps, compared to earlier in the session. We should re-shim as a prophylactic measure; assume it will help and don’t waste time trying to diagnose whether the subject actually ended up in a new position or not. You will then most likely want to acquire another quick localizer scan and check the positioning of your EPI slices on the (new) position. Those of you using AutoAlign, you’ll want to acquire another AAScout at this point, too. (Or, if you are using AAHScout, that one single acquisition suffices as both localizer and AutoAlign basis.) What if you have no external clues that a subject might have moved, e.g. because you didn’t hear him sneeze or adjust his position? How can you keep a check on your subject’s behavior? A telltale sign that the subject may have moved but is now motionless is a pronounced increase in the ghost level from earlier in the session, where the ghosts are now more intense but relatively stable over time. Consider re-shimming any time you suspect the ghosts might have got worse. (And don’t waste too much time attempting to diagnose whether the ghosts really are worse or not. It’s often faster to simply re-shim than to determine whether you’re imagining things!) Another common situation is the slow, drift-like motion that arises because the subject’s neck/back muscles relax during the session, or the foam supporting his head compresses slowly over time. (Hard to blame the subject for either of these events!) If you are doing a long run, meaning anything over about 30 minutes, then it won’t hurt to re-shim any time you find yourself with a spare 30 seconds between fMRI runs. In general, whenever you know or suspect that the subject may have moved (and is now still), re-shim. But, if you have reason to believe the subject is continually moving, e.g. because the ghosts are fluctuating wildly from volume to volume and a re-shim didn’t fix the problem, you either need to re-pack his head with more foam, or you need a new, more compliant subject! Another reason to want to re-shim midway through a session is gradient heating. But before we look at the effects of heat, we first need to know why it might be an issue. When the magnet was installed, steel bars called passive shims were inserted into trays positioned between the inner surface of the magnet cryostat (the vessel containing the superconducting wire coil and all the liquid helium) and the gradient coils (the coils that impart the spatial information into the MR signal and which produce all the acoustic noise). The gradient coils double as the ‘fine tuning’ shim coils, too, allowing the magnetic field to be homogenized to a couple of parts per million. Now, let’s suppose that we decide to run an EPI sequence flat out for 30 minutes. 
Driving the gradient coils to do EPI produces heating in the coil as well as the familiar acoustic noise. That heat must be removed as quickly and efficiently as possible or the gradient coil will fail. (Actually, in our case there are temperature sensors that should take the scanner offline before damage can be done.) The gradient cooling is provided by chilled water fed from a unit located out the back of the scanner building. The water is fed in at about 20 C and goes out at between 20 and 30 C, depending on the particular EPI sequence being run; the harder we drive the gradients, the more heat must be removed and the warmer the return water.

Before you start your scan the magnet and its coils are at thermal equilibrium. Typically, this means the gradient coil and the adjacent passive metal shims are at about 20 C, because that's the temperature of the water circulating through the gradient coil. (It's also close to the ambient temperature of the magnet room.) Once we start running a scan, however, the gradient coil will start to heat up and this will also heat the passive shim metal nearby (via simple thermal conduction). After about 5-15 minutes, depending on the duty cycle of the EPI (i.e. how aggressively we are driving the gradients), the gradient coil and passive shim metal will establish a new, dynamic equilibrium somewhere approaching 25-30 C. This has the effect of causing the magnetic field to change slightly from its prior, resting value. And now you should be able to spot the problem: if you shimmed the subject when the magnet was at the cooler temperature, the magnetic field is now not exactly the same as it was; in effect, the gradient heating has slightly 'de-shimmed' the subject. We should consider re-shimming with everything warmed up.

So how much of a problem is gradient heating, and when and how often should you re-shim to mitigate heating effects? It all depends on the duty cycle of your EPI (aggressive, high-resolution scans will generate more heat and be more susceptible to field drift), the duration of your EPI scans (time series acquisitions longer than 5 minutes will be more susceptible to field drift), and the amount of time in between your EPI scans. This latter point, the time between EPI runs, is the really sticky bit. It turns out that the cooling is rather efficient, which is what you want when you are running EPI but not really what you want when you're in between runs! If you have a one-minute break between runs to set up a new script, there's probably little departure from the steady-state (warm) temperature by the time you start the next run. But if you spend five minutes or more between runs, expect the system to have cooled sufficiently that the following run will start from a condition nearer the baseline temperature than the steady-state, warm temperature.

It is very difficult to make recommendations with regard to trying to shim away the effects of heating; we are trying to fix an exponential process with an occasional single point of correction (a toy model of this warm-up and cool-down is sketched below). Some general rules are therefore useful: shim once at the start of the session, then shim again after you have run your first EPI time series (because the scanner will have warmed up a bit). Then don't bother to re-shim unless you happen to leave a large gap (2 minutes or more) between two EPI time series, in which case repeat the prior procedure (i.e. shim now, then shim again after the EPI run, then don't shim again unless you have a large time gap between runs).
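The "exponential process" can be made concrete with a toy model: assume the gradient coil and passive shims relax toward a running-dependent steady-state temperature with some time constant, and cool back toward baseline between runs. The numbers below (time constant, steady-state temperatures) are illustrative assumptions only, not measured values for this scanner; the point is simply to show why short gaps leave you warm while long gaps drop you most of the way back to baseline.

    import math

    # Toy model of gradient/passive-shim warm-up during EPI and cool-down between runs.
    # All constants are illustrative assumptions, NOT measured values for this system.
    T_COLD = 20.0   # baseline temperature, deg C (chilled water / room temperature)
    T_WARM = 28.0   # assumed steady-state temperature during an aggressive EPI run, deg C
    TAU = 8.0       # assumed thermal time constant, minutes

    def temperature(minutes, t_start, t_target):
        """First-order (exponential) approach from t_start toward t_target."""
        return t_target + (t_start - t_target) * math.exp(-minutes / TAU)

    after_run1 = temperature(10.0, T_COLD, T_WARM)   # a 10-minute run from cold
    after_short_gap = temperature(1.0, after_run1, T_COLD)
    after_long_gap = temperature(5.0, after_run1, T_COLD)
    print(f"end of run 1: {after_run1:.1f} C, after 1 min gap: {after_short_gap:.1f} C, "
          f"after 5 min gap: {after_long_gap:.1f} C")
    # The shim set when everything sat at 20 C no longer matches the field once the
    # passive shims are several degrees warmer - hence "shim again after the first run".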
And of course be vigilant for signs of subject motion throughout, since you're not just trying to combat the effects of heat during your experiment!

I want to know how long my scan will take. Where is the scan time shown?
On the Exam display, look approximately halfway down the screen, below the three image display windows and immediately above the parameter card area. In a violet/blue color is a line of information, for example:

TA: 6:46   PM: REF   PAT: 2   Voxel size: 1.6x1.6x3.0 mm   Rel. SNR: 1.00   : epfid

The information above is interpreted as follows:
- TA: 6:46 - the time of acquisition, 6 mins 46 seconds.
- PM: REF - the positioning mode, in this case REFERENCE (see the earlier question about why the patient bed might move).
- PAT: 2 - parallel imaging (iPAT) is enabled with an acceleration factor of two. (More on iPAT later.)
- Voxel size: 1.6x1.6x3.0 mm. To get the voxel size with two decimal places of precision, place the mouse over the Voxel size field; it pops up in a new text box.
- Rel. SNR you can ignore. It will always appear as 1.
- epfid is the label for the pulse sequence being used. Place the cursor over the epfid word and a popup will tell you which pulse sequence is in use. Typically you will use ep2d_neuro, but you could also be using ep2d_bold or ep2d_pace if you have an older protocol.

What is the difference between the Scan and Apply buttons for starting a scan?
Somewhat counter-intuitively, the Apply button initiates the acquisition for the current scan and doesn't alter anything else in the scan queue. The Scan button initiates the current acquisition, too, but it also makes a clone of the protocol and appends (or inserts) it immediately after the scan that has just been initiated. So the Scan button could be used for a series of identical EPI acquisitions, say, without the need to bring over a fresh protocol or use the Append menu item to make a protocol clone. In general I am in the habit of only using the Apply button, and if I need a repeat (cloned) acquisition I first make one using the Append menu item. It's personal preference, but I find it makes keeping track of what's in the protocol queue that much simpler. As far as the acquisitions themselves are concerned, however, there is no difference.

Help! What pulse sequence am I using?
The pulse sequence name is given in the violet/blue line of information on the Exam task card. (See the answer to the question above about scan time for how to read the information you want.) The pulse sequence is the last information field on that line. It might say epfid, for example. This is not actually the sequence name, however! To determine the sequence name, place the cursor over the epfid field. As you do, a window pops up for a few seconds and displays two more fields: Sequence name and Sequence variant. Sequence name could be ep2d_bold, for example. ep2d_neuro is the preferred sequence for all BIC users. There are some differences between the different EPI pulse sequences, a general explanation of which is provided in later sections.

EPI: BASIC PARAMETER AND SEQUENCE ISSUES

I've been told not to use echo spacing between 0.6 and 0.8 ms for EPI. How come?
The gradient set has mechanical resonances that produce disproportionately large vibration, and thus EPI ghosts, when the echo spacing is in the range 0.6-0.8 ms for axial or axial-oblique slices. Therefore, to assure good, clean EPI performance, you should operate outside this range of echo spacing.
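If you are adapting someone else's protocol, the forbidden band is easy to script as a sanity check before you ever sit down at the console. A minimal sketch follows, hard-coding the 0.6-0.8 ms band quoted above for axial and axial-oblique readouts on this gradient set (these numbers are specific to this FAQ, not general-purpose values for other scanners); the very-short-spacing caveat for other slice orientations is covered next.

    def check_echo_spacing(esp_ms, orientation="axial"):
        """Warn if an EPI echo spacing falls inside the mechanically resonant band.

        Band limits are the ones quoted in this FAQ for our gradient set; they are
        not transferable to other systems."""
        if orientation in ("axial", "axial-oblique") and 0.6 <= esp_ms <= 0.8:
            return (f"{esp_ms} ms: inside the 0.6-0.8 ms resonance band "
                    f"(worst case ~0.69 ms) - change the protocol")
        return f"{esp_ms} ms: outside the avoided band"

    for esp in (0.50, 0.69, 0.82):
        print(check_echo_spacing(esp))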
(There are additional mechanical resonances at very short echo spacing, generally below 0.45 ms, but these are at the highest end of gradient performance and aren't as likely to impact fMRI protocols with typical spatial resolution. If you are pushing gradient performance for high spatial resolution, talk to Ben about avoiding problems at very short echo spacing.)

First of all, why should echo spacing be of concern at all? Recall that EPI is a multiple echo, gradient-echo sequence; that is, it is a periodic gradient-recalled echo sequence whose echoes happen at a particular frequency. If the EPI matrix is 64x64, then 64 readout points are acquired for each of 64 echoes, making the echo train length 64. The echo spacing is the time it takes between each of these echoes, i.e. how long it takes to acquire the 64 readout points, plus a little bit of overhead. It just so happens that if the echo spacing is set to certain values, the forces induced in the gradient set can resonate mechanically, just like an old washing machine on the spin cycle.

But, all is not lost! For a start, we know the echo spacing values that generate the mechanical resonance effects, so we can work around them. When using axial or axial-oblique slices the readout image axis uses the X gradient. (X is the gradient oriented left-right as you look at the front of the magnet.) It turns out that the X gradient has the largest mechanical resonance. The mechanical problems (and the concomitant ghost levels) are highest when using echo spacing of between 0.6-0.8 ms. (The worst performance is attained at 0.69 ms.) Outside of these values you won't see unnecessarily large ghosts. There is even better news for coronal and sagittal slices. Here, the X gradient isn't used for readout so the mechanical resonance effects are much reduced. In fact, only the very shortest echo spacings, 0.43-0.5 ms, cause significantly higher ghosts. The ghost level is persistently low above 0.5 ms echo spacing.

As far as the mechanical resonance is concerned, as a general rule it doesn't matter what your nominal matrix size is (say 64x64, or 96x96) or whether you have GRAPPA turned on or not. All that matters is whether the forces being generated by the switching gradients are happening at a frequency corresponding to the mechanically resonant frequency of the gradient set. Instead, slice prescription (which sets the readout gradient direction in addition to the slice axis, of course) and the echo spacing parameter are the primary concerns.

In general, the echo spacing isn't something you should be setting yourself unless you have fairly expert training. Call me for assistance. (The particular echo spacing in your EPI acquisition will usually be determined by the resolution you want, along with consideration of the mechanical resonances.) In any case, once you have a fixed protocol, echo spacing isn't something you will have to worry about. But if you are stealing someone else's protocol (not advised!) and don't want any help from me, you may check for yourself the echo spacing on the Sequence tab of the parameters task card for your EPI acquisition. You'll see Echo spacing in the bottom-right corner of that card. You want to see a value of 0.6 ms or less, or 0.8 ms or more. If you see a number between 0.6 and 0.8 ms it's time to break down and call me.

How many dummy scans happen before the first real (saved) volume of EPI in my time series?
If you are using the ep2d_neuro sequence you can specify the number of dummy scans (above a minimum default). If you are using one of the Siemens EPI variants (described later on) then the number of dummy scans is computed for you. For the Siemens EPI sequences (ep2d_bold, ep2d_pace), here’s the formula for the default number of dummy scans (or the minimum if you are using ep2d_neuro). You always get at least one dummy scan - call it a freebie, or a dummy scan for good luck. Next, divide your TR into a reference time of three seconds. For example, a TR of 1.5 seconds goes twice, a TR of two seconds goes once. Ignore any remainder. So with a TR of 1.5 seconds there would be 1 (freebie) + 2 = 3 dummy scans total. For a TR of 2 seconds there would only be 1 + 1 = 2 dummy scans total. There is no way to control the number of dummy scans independent of TR. It’s always computed for you and fixed (with the exception of ep2d_neuro, when additional dummy scans can be added above the default/minimum). Note that if you are using a parallel imaging method, such as GRAPPA, the auto-calibrating signal (ACS) scans will occur immediately after the dummy scans and before the first real (saved) volume of data in your time series. So if you are using one of the Siemens EPI variants, you’ve asked for 200 volumes with a TR of 2 seconds, and a GRAPPA-factor of 2 then there will be two dummy scans (computed as above) followed by a single ACS scan. After this you acquire the first volume of your two hundred volumes. The overall scan duration, then, is 203 volumes x 2 sec = 406 seconds. If you are using the ep2d_neuro sequence (see description of the ep2d_neuro sequence in this FAQ) then there will be the chosen number of dummy scans followed by two ACS scans and your two hundred volumes. I want 200 volumes in my EPI time series. How do I do that? On the Exam task card (the main environment where you drive the scanner), select the BOLD tab on the parameter window. The number of volumes is specified by the rather cryptic parameter called Measurements. Just enter 200 and hit the return key. You will get 200 volumes of EPI data stored on disk, and you’ll get 200 TTL pulses from the scanner to control your stimulus script. You’ll get 200 TTL pulses no matter how many dummy scans there are, and regardless of any reference acquisition for iPAT. In other words, dummy scans, and reference scans for iPAT, don’t emit TTLs. Ever. Easy, right? On the BOLD card, what is Motion Correction? The answer to this deceptively simple question is sequence-dependent, so pay close attention! But, as a general rule, unless you have a specific requirement in mind you almost certainly don’t want it, whatever it is! Certain versions of EPI use a method called PACE that is invoked when the Motion correction option is enabled. This method can generate weird motion artifacts and it is not advised that you use it without pilot testing to see whether it offers any real benefit. Other versions of EPI don’t use PACE but do instigate a post hoc realignment on the time series. For reasons known only to themselves (or, more likely, because they messed up!) Siemens also uses the Motion Correction nomenclature to refer to this realignment, even when PACE is not available. Here, however, the consequences of having MoCo turned on are considerably less severe; all that happens is that you have one raw time series on disk, plus an additional time series that has had a realignment done to it. Ignore the latter and you are good to go! 
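Tying together the dummy-scan and Measurements arithmetic from the questions above, here is a small Python sketch that reproduces the worked example in the text (200 volumes, TR = 2 s, GRAPPA factor 2, one ACS scan for the Siemens ep2d_bold/ep2d_pace variants). It simply encodes the rules stated in this FAQ for planning purposes; it is not code that runs on the scanner, and for ep2d_neuro you would pass two ACS scans plus whatever extra dummies you requested.

    def dummy_scans(tr_sec):
        """Default/minimum number of dummy scans: one 'freebie' plus floor(3 s / TR)."""
        return 1 + int(3.0 // tr_sec)

    def scan_duration_sec(n_volumes, tr_sec, n_acs=0, extra_dummies=0):
        """Total acquisition time. Only the n_volumes saved volumes emit TTL pulses."""
        total_volumes = dummy_scans(tr_sec) + extra_dummies + n_acs + n_volumes
        return total_volumes * tr_sec

    print(dummy_scans(2.0))                      # -> 2 dummy scans for TR = 2 s
    print(dummy_scans(1.5))                      # -> 3 dummy scans for TR = 1.5 s
    print(scan_duration_sec(200, 2.0, n_acs=1))  # -> 406.0 s (203 volumes x 2 s)

Remember that your stimulus script will still see exactly 200 TTL pulses in this example; the dummy and ACS volumes lengthen the acquisition but never trigger the script.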
See the sections below on the specific EPI sequence variants for more information on the various PACE and motion correction options. My protocol has TE set at 28 ms for EPI. But I saw somebody else’s protocol that uses a TE of 22 ms. How come? In general, for fMRI the TE you select will be the primary determinant of the amount of BOLD contrast you’ll get. The idea is to try to match the TE to the approximate T2* value for gray matter at 3 T, which is a range approximately between 15-40 ms. The T2* is short in brain regions that suffer from gross susceptibility problems, such as the frontal and temporal lobes and the inferior surface. T2* is longer in well-shimmed regions of the brain, such as occipital lobe. Now, it is clear that TE can’t be simultaneously short and long! We have to compromise. The figure below shows how the optimum TE varies with brain region. For most studies, a TE in the range 25-35 ms is a good compromise between speed, contrast and raw signal level. If you need to get more slices per TR you might want to consider a slightly shorter TE. Or, if you are particularly interested in fMRI of frontal or temporal lobes, or hippocampus, or thalamus, you also might want to shorten the TE a bit. But if you’re doing retinotopy and all your slices are in the occipital lobe, and you have plenty of time to get the number of slices you require in your TR, then feel free to put the TE out around 35 or even 40 ms. If you don’t have any specific requirements and you want an all-around TE, use 30 ms, plus or minus a millisecond if it will allow you to get the exact spatial coverage you need. (The effect of TE choice on signal dropout is considered in a later section.) The optimum TE for fMRI varies across the brain. Spatial variations in susceptibility gradients cause T2*, and hence the optimum TE, to vary also. Optimal BOLD sensitivity for OFC occurs at a TE several milliseconds shorter than occipital or parietal cortex. I am using ep2d_bold. What are the specifics of using this sequence? On the BOLD card, setting Motion correction will generate a second time series of images on the disk. The first series will be the original, uncorrected EPIs. The second series will have had a rigid body realignment performed on them. It has been found empirically that this realignment is similar in performance to that available in SPM5 (depending on which option you select). Even so, it’s probably safer not to use that second, corrected series. In other words, unless you specifically want to use Siemens’ rigid body realignment, leave Motion correction off (unchecked) and instead perform your own realignment offline. I am using ep2d_pace. What are the specifics of using this sequence? This is the sequence variant to be especially wary of! In all respects but one, ep2d_pace is the same as ep2d_bold. With ep2d_pace, if you enable Motion correction on the BOLD card you will actually change the way your data are acquired, and in an irreversible fashion! Here, Motion correction invokes a method called PACE that attempts to compare the last EPI volume to the one before and, if there has been movement between them, it attempts to compute a new slice prescription for the next EPI volume such that the anatomical coverage remains constant throughout. In principle, this sounds like a wonderful idea for fMRI. But in practice the PACE method tends to work properly only for motion that is slow relative to the TR. 
For example, if a subject's head slowly drifts by a few millimeters over a five minute run, PACE may do a reasonable job of keeping the anatomical content consistent over the entire run when otherwise some of the regions at the top and bottom of the slices might drift in/out of the full 3D volume being sampled. But PACE does a poor job when the motion is rapid, such as from a cough, a sneeze or some other rapid head movement relative to the TR period. In these cases the PACE method tends to "chase" the motion and can actually introduce artifacts that persist for longer than the motion itself!

Confused? Consider the situation where a subject sneezes at volume 100 of 200 and with a TR of 2 seconds, when PACE is turned off. The movement only lasts for a second, corrupting EPI volume number 100 alone. Volumes 1-99 are okay. From volume 101 onwards the subject goes back to his original head position; the images from 101-200 are also free of motion artifacts. Now consider the same situation but with PACE turned on. Again, volumes 1-99 are okay. Volume 100 is corrupted with motion – PACE can't fix the fact that the subject was moving during the image acquisition, it only attempts to rectify motion between EPIs. Now PACE compares volume 100 with volume 101 and uses the result to set the slice prescription for volume 102. But volume 100 is messed up! Thus, volume 102 carries a sort of "ring down" of the motion that happened two volumes prior. Volume 103 also may still possess a small amount of the history of the motion in volume 100, because its prescription comes from comparing volumes 101 and 102, and each of these has some (decreasing) motion-related artifact. It can take five or so TR periods for the history of the motion to dissipate completely. Clearly that isn't good.

What, though, if the subject doesn't return his head to the starting position after sneezing, but to some new position? Now, PACE might be some help! Volume 100 is corrupted, as before. And volumes 101-104 or thereabouts may also have some contamination. But once the motion artifact has "worked its way out" of the equation and the head is stationary in its new position, PACE will assure that the anatomical content in the slices acquired from 105-200 is the same as that for 1-99.

When Motion correction (or MoCo) is enabled for ep2d_pace, two complete time series are written to disk, as they were for ep2d_bold. Now, however, there is a BIG difference! The first time series is PACE-corrected, as just described. The second time series is that same PACE-corrected data, on which a rigid body realignment has also been performed. Note that the uncorrected, non-PACE data is NOT saved to disk! It doesn't exist!!! Once PACE is enabled, the scanner actually changes the way the EPI data is acquired, and this is done irreversibly. So, unlike a rigid body realignment for the ep2d_bold sequence, if you opt for PACE (i.e. MoCo turned on) with ep2d_pace then you are stuck with it, for better or worse. In summary: the first time series is PACE-corrected, the second time series is PACE-corrected as well as realigned with a rigid body algorithm.

Which all raises the Big Question: should PACE be used? Experience tells us the answer is no, provided you, the experimenter, do a good job of packing your subject's head so that any sudden (often involuntary) motion can't displace the subject's head to a chronic new position. Sufficient padding will normally render it almost impossible for a subject's relaxed head position to be anywhere but where you placed it.
In this way, whenever the subject does move suddenly, only the EPI volume being acquired at the time is affected, and the subject's head should return to its starting point afterwards. Furthermore, if you have (near) isotropic sampling, using voxels of 3x3x3 mm, say, and none of your brain regions of interest is located at the margins of the 3D box being sampled by your stack of EPIs, then it's not entirely clear whether PACE is even needed in principle. Let's suppose that your subject does sneeze at volume 100 of 200, and ends up in a new position a few millimeters away. The shim will have changed slightly – this is true whether you're using PACE or not – but provided some vital region isn't now residing outside of the 3D sampling volume then an offline rigid body realignment and resampling of the time series should permit you to recover useful data from the entire time series. The slice prescription doesn't need to be changed/updated to ensure that we continue to sample all of the vital brain regions for the experiment.

Generally speaking, it's the frequency of motion that is the bigger variable between subjects, and that causes the bigger problems, in fMRI. (Given the choice you'd be better off with one single displacement of 2 mm halfway through a run than dozens of displacements of 0.5 mm plaguing the entire run.) PACE doesn't seem to help in the situation of frequent motion events, and could in fact make the situation worse. The combination of good head restraint, compliant subjects and offline realignment still seems to offer the best data. Issues arising from poor head restraint and/or poorly compliant subjects aren't fixed with PACE, and I remain unconvinced that it offers much of a fix.

I am using ep2d_neuro. What are the specifics of using this sequence?

This is the BIC default EPI sequence. It's a local variant derived from the Siemens sequence, ep2d_bold. We add new features and fix occasional bugs in the ep2d_neuro sequence only. Unless you know for a fact you will want the PACE feature described under ep2d_pace, you should select a protocol with this sequence for new studies. Several starting protocols for both the 12-channel and 32-channel head coils can be found in the Exam Explorer under USER/DanZone/RELEASED.

The ep2d_neuro EPI sequence is a modification of the ep2d_bold sequence. The following list describes the new features of the ep2d_neuro sequence:

Fine time-scale adjustments of the TR period: The ep2d_bold sequence limited you to TR increments of 10 ms when your TR was greater than 1000 ms. With ep2d_neuro you may set the TR in increments of 1 ms when your TR is greater than 1000 ms.

Interleaved ACS (auto-calibration signal) scan: For a GRAPPA acceleration factor of 2 the ep2d_bold sequence uses an ACS scan (i.e. reference scan) sampling trajectory that samples the full k-space in a single shot. This is not the best way to acquire ACS data. The proper way to do this is to use multiple (equal to the GRAPPA factor you select) interleaved sampling trajectories for the ACS scans, i.e. if the iPAT factor is 2 then two ACS interleaves should be acquired, if the iPAT factor is 3 then three ACS interleaves should be acquired, etc. Does the fix matter? This modification can (as observed using a water phantom) result in GRAPPA-reconstructed images with less residual aliasing and less distortion due to field inhomogeneity.

Variable number of dummy scans: Allows you to select a variable number of dummy scans, provided that the selected number is greater than the minimum number set by the TR.
See the Special task card to set the dummy scans above the default. The default number is computed as described elsewhere in this document.

Double allowable PE FOV: Allows you to increase the FOV (field-of-view) in the PE (phase-encoding) direction up to 100% greater than the FOV defined in the FE (frequency-encoding) direction. This feature probably has limited (no) utility for fMRI applications.

Double allowable matrix size: Allows you to increase the base resolution to 256 points. Whether you can actually obtain the 256 maximum will, of course, depend upon your selection of EPI scanning parameters. As for the increased FOV, this feature probably has no utility for routine fMRI applications.

Thinner slices: Allows for slices of nominal 1.0 mm thickness. The previous minimum slice thickness was 1.9 mm. If you select a slice thickness between 1.0 and 1.9 mm the sequence will need to increase the minimum allowable TE (for a given set of sampling parameters) by about 0.25 ms, a delay which will probably be of little consequence to BOLD fMRI.

Physiology logging: On by default. The Siemens physiological sensors will be logged automatically, the data being written to the C:\Medcom\log\PHYSIO directory of the host computer. You will be instructed on how to grab the appropriate files during your user training. However, we have found that the BIOPAC physiological monitoring kit provides more robust data as well as file formats which are more convenient to use.

Note: Because physiological monitoring is enabled by default, a bug arises when you terminate a time series acquisition prematurely, e.g. if you stop the scan after only 120 volumes for an experiment set to run for 200 volumes. In this case, the physio log files will not be closed and will continue to be written ad infinitum (or until the hard disk fills up, whichever comes first!). This means that log files you might want to keep might still be getting bigger (having irrelevant data written to them) when you come to save them. Here, the only practical consequence for you is that you have a file that consists of a lot of irrelevant data appended after the data you want - annoying. Thus, if you do terminate a run prematurely, please be a good citizen and follow it up with a short ep2d_neuro run that goes through to completion. The easiest way to do this: append a new ep2d_neuro experiment and set only, say, five volumes on the BOLD card. Run the experiment. It will complete in under 20 seconds and the physio log files it opens will be properly terminated. Now the hard disk won't fill up with irrelevant crap!

What flip angle should I use for fMRI?

For a single MR experiment in a fully relaxed sample, maximum SNR is obtained following a 90 degree RF excitation pulse. But in a time series of EPIs, T1 effects become apparent such that for most commonly used repetition times (TR) for fMRI there is incomplete relaxation between EPI acquisitions. In this situation, the best SNR per unit time (which is equivalent to saying the best SNR available for an individual EPI in a time series) is obtained at an excitation flip angle of less than 90 degrees. Assuming a gray matter T1 of approximately one second at 3 T, the Ernst angle (as the optimum flip angle is called) will be about 80 degrees for a TR of 2 seconds. There is an additional consideration, however. Whilst BOLD isn't the quantifiable, specific assessment of neural activation we might like, it is also possible to do worse!
With BOLD we are assuming that signal changes are being driven by susceptibility alterations in the post-capillary, or venous, blood pool. The BOLD changes are being driven by a change of cerebral blood flow (CBF) and volume (CBV) that happens upstream, in the capillaries, arterioles and arteries. But these upstream arterial changes don’t directly contribute to the BOLD signal. Rather they drive it once the blood has flowed into the veins. So, if we want pure BOLD contrast we want to restrict all signal changes to being venous ones. How might we not be getting pure BOLD contrast with a gradient echo EPI scan? One consequence of using 90 degree (or large) flip angles can be a sort of “arterial spin labeling” effect of blood that is flowing into the EPI slices. Fresh blood – that is, blood that hasn’t experienced the RF pulses that are exciting your EPI slices – is flowing into the brain via the carotid arteries, where it branches and distributes. This fresh blood is fully relaxed; it has no spin history. Thus, when fresh blood flows into an EPI slice it generates a disproportionately higher signal than it would have had it been stationary and experienced prior RF excitations. Now consider again what is driving the BOLD changes. For positive BOLD changes, it is an increase in CBF, i.e. an increase in the rate of delivery of fresh blood. Thus, when a neural area activates and demands an increased blood supply, if the signal has any sort of flow dependency then it will show a functional contrast. This is, in fact, the basis of the perfusion (or ASL) imaging method! How much of a problem is inflowing blood? It is difficult to quantify. What we can say is that perfusion is a tricky and insensitive method to get working well, so we don’t expect large effects from what is essentially a poor perfusion technique. Furthermore, you may not really care what the spatial origin of your contrast is. You don’t ordinarily try to differentiate between BOLD from small vessels and large vessels; you live with what you get. What’s more, the inflow-based contrast in a BOLD experiment will probably be very closely located to the actual site of neural activity, i.e. the arterioles just upstream from the firing neurons. Contrast that with a draining vein that could be several voxels away from the activation site. Talk about specificity! In general we don’t usually concern ourselves with inflow artifacts when establishing excitation flip angle. We don’t often get too carried away with Ernst angles, either. What we are primarily interested in is the signal stability, i.e. maximizing the temporal SNR (TSNR) and minimizing the contribution of physiologic noise to the time series. When considering the temporal stability of EPIs it turns out that flip angles over a wide range, from around 30 degrees to 90 degrees (for a TR of 2 seconds) perform fairly similarly. Some studies have actually suggested that large flip angles – which would generate the highest SNR in an individual EPI – might actually decrease the TSNR in a time series, because of the tendency to magnify the effects of physiologic noise (which drives the denominator in the TSNR) without concomitant increase in the BOLD effect (which appears in the numerator of the TSNR). But these effects tend to be subtle. So, which number to pick? For a TR of 2 seconds, consider using a flip angle in the range 50-80 degrees. If TR approaches 1 second then use a flip angle in the range 30-60 degrees. 
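For reference, here is the Ernst angle calculation mentioned above as a minimal sketch, assuming the textbook relation cos(theta) = exp(-TR/T1) and a gray matter T1 of about one second at 3 T. Treat the result as an upper bound rather than a target; as just discussed, temporal SNR is fairly flat over a wide range of flip angles:

import math

def ernst_angle_deg(tr_sec, t1_sec=1.0):
    """Flip angle (degrees) giving the maximum steady-state signal for a given TR and T1."""
    return math.degrees(math.acos(math.exp(-tr_sec / t1_sec)))

print(round(ernst_angle_deg(2.0)))   # ~82 degrees, i.e. the "about 80 degrees" quoted above
print(round(ernst_angle_deg(1.0)))   # ~68 degrees, matching the TR discussion that follows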
I will update this section with more specific recommendations as and when they arise in the literature. As much as I trust some of the most recent work on reduced flip angles in fMRI, I don't want to suggest a blanket change until some more verification has occurred. There doesn't seem to be a big risk to sticking with the larger flip angles that most people are using, here and elsewhere. But if you are especially interested in testing a reduced flip angle then we should talk. A short pilot experiment should show whether there is likely to be a substantial benefit to you.

What TR should I use for fMRI?

The short answer to this question is an equivocal "It depends." In brief, the TR should be set to the minimum that is compatible with the number of slices you require to get satisfactory brain coverage (so that you are sampling as often as possible). In other words, the more 3D space you want to cover, the longer the TR is likely to become. That said, however, some processing methods require TR to be within specific ranges. In the first instance, event-related fMRI requires that the volume-to-volume sampling happen not less than once every 2.5 seconds, given a time to peak of the BOLD response of approximately 5 seconds. (This is a Nyquist frequency-sampling requirement.) Whether you then need a TR less than 2.5 seconds will depend on your use of physiologic regressors (e.g. RETROICOR), or requirements for functional connectivity, as well as the ability of your experiment to distinguish between temporally separate events. Some experiments may attempt to use faster sampling, e.g. for causal processing methods, but it is important to consider the vascular delays before sacrificing spatial resolution in order to achieve a short TR. These complex issues are beyond the scope of this document. You should talk to Ben if you are going to try to get a TR much below 2 seconds, or if you think you need a TR longer than 2.5 seconds. Likewise, if you do change TR away from 2 seconds you also need to consider the RF excitation flip angle, as discussed above. At a TR of 1 second the Ernst angle decreases to about 68 degrees, but some empirical testing is prudent to assure adequate (temporal) SNR. At a TR of 2.5+ seconds you should probably increase the flip angle to 80-90 degrees.

Should I use interleaved or sequential slices for fMRI?

EPI, in common with almost all other 2D multi-slice imaging methods, tends to use interleaved slices; that is, the slices are acquired in the order odds then evens: 1,3,5,7,…2,4,6,8… By interleaving, a time of approximately TR/2 is left between the excitation of any one slice and either of its immediately adjacent neighbors, thereby minimizing crosstalk (partial saturation) between them and maximizing SNR. Historically, interleaving was used to overcome the imperfect RF profile of the excitation RF pulse. In an ideal world the frequency profile – and hence the spatial profile of the excitation (or slice selection) pulse – would be a perfect square. In reality, however, excitation RF profiles tend to be more trapezoidal.

The first consequence of trapezoidal slice profiles is one of nomenclature. When we talk about slice thickness and slice-to-slice distances we need to define the point on the profile we're using as our reference. The standard convention is to take the half-height width as the slice width, and define inter-slice distances accordingly.
This is not a universal rule, however, and empirical testing (see later) suggests that Siemens uses something like 5% or 1% above baseline to define its slice thickness. (In other words, when you ask for a 3 mm slice the base of the trapezoid would be 3 mm but the half-height might be only 2.95 mm.) Now let’s look at the inter-slice overlap issue from a practical standpoint, and address the issue of interleaving. Empirical testing revealed that with sequential slices, the slice SNR remained at its maximum (100%) level when using gaps of 5-20%. Only when the gap was reduced to a nominal 0% gap was there a very slight decrease of image SNR, to 99%. (This is how we estimate the Siemens convention of using the base of the trapezoid to define slice width.) These results have two consequences: firstly, it means that you can use gaps of 5-20% without getting appreciable saturation effects, and even zero slice gap has minimal effects, and secondly, the implication is that interleaving isn’t necessary to mitigate slice crosstalk; the slice profile takes care of most of it. Now that we have seen there is no strict reason, other than historical precedent, to use interleaving, what are the differences between interleaved and sequential slicing? Does one provide a definite advantage over the other? In the absence of head motion the answer is ambiguous: there is almost no difference in performance. But whenever the subject moves his head in the slice dimension (through slice movement) the consequences for interleaved slices can be more severe than for sequential slices. In the case of sequential slices, the movement would cause some new anatomical regions to be included at one end of the slice stack, while some other anatomical regions disappear from the opposite end, i.e. the brain moves through the slices. But the same motion would cause a slice-to-slice signal intensity variation when using interleaving. During the movement the signal steady state is altered differently for alternate slices, because alternate slices already differ in their spin history by 0.5*TR. Following the movement the signal steady state is re-established in 2-3 TRs, but again there is a slight difference in the recovery time for alternating slices. The overall result is a striping in the slice direction during and immediately following movement in the slice direction. Just how often does interleaved slicing suffer from a striping artifact from motion? It largely depends on the nature and magnitude of the motion. And, of course, when a subject moves, many more bad things can happen than just perturbing slice order! Changes of the shim can lead to large ghosting, for example. So what is the best approach? The most robust approach seems to be sequential slices acquired rostral to caudal. Sequential slicing will avoid the striping that might happen because of certain types of head motion, while going “top to bottom” with the slices will minimize the inflowing blood (ASL-like) enhancement of functional contrast that was mentioned in the earlier section on RF flip angle choice. It is worth noting, however, that the improvement to data of using sequential, descending slices as compared to interleaved slices will be marginal – provided you are packing your subject’s heads well. If you don’t do a good job at avoiding motion you cannot expect sequential, descending slices to provide much motion robustness. This is a fine tweak, not a bulletproofing step in your protocol. In what order does the scanner acquire EPI slices? 
There are three options for slice ordering for EPI. To understand the ordering you first need to know the Siemens reference frame for the slice axis: the negative direction is [Right, Anterior, Foot] and the positive direction is [Left, Posterior, Head]. The modes are then:

Ascending - slices are acquired from the negative direction to the positive direction.

Descending - slices are acquired from the positive direction to the negative direction.

Interleaved - the order of acquisition depends on the number of slices acquired. If there is an odd number of slices, say 27, the slices will be collected as: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 2 4 6 8 10 12 14 16 18 20 22 24 26. If there is an even number of slices (say 28) the slices will be collected as: 2 4 6 8 10 12 14 16 18 20 22 24 26 28 1 3 5 7 9 11 13 15 17 19 21 23 25 27.

Interleaved always goes in the negative to positive direction, i.e. foot-to-head for transverse slices. So, if you are doing 28 interleaved axial slices the order will be evens then odds in the foot-to-head direction. 27 interleaved axial slices would also be acquired in the foot-to-head direction but would be in the order odds then evens. If you switch to 28 descending axial slices the acquisition order will become 1,2,3,4,5…28 and the direction will swap to being head-to-foot. (A short sketch of these ordering rules appears after the ghosting example below.)

EPI: ARTIFACTS

I hear a lot about ghosting when people talk about EPI. What is a ghost and what causes them? How do I get rid of them?

The EPI pulse sequence is a train of gradient echoes, each echo encoding a piece of the second image dimension, the phase-encoded dimension. But before the spatial images (the images you are used to looking at) can be constructed with a 2D Fourier transform, the even-numbered echoes must first be time-reversed. In effect, time travels forwards for the odd-numbered echoes but backwards for the even-numbered echoes, so one must be made consistent with the other before we can apply the 2D FT. This is a relatively trivial processing step. However, there is a catch. While we might consider the data sampling of the even-numbered echoes to be running backwards in time, the data points are actually collected with time running forwards (of course); the fact that the data points themselves are being collected in reverse is neither here nor there for the physics of the situation.

Imagine there is a simple delay at the very start of the gradient echo train. From the standpoint of the data in the echo train, this looks like a delay at the start of the sampling period for the odd echoes but a delay at the end of the sampling period for the even echoes! This causes the delay to manifest itself in a zigzag manner across the entire set of gradient echoes. The zigzag delay causes a different phase for the odd and even echoes - the phase zigzags in proportion to the delay – and when we then apply the 2D FT that phase zigzag creates an ambiguity in the spatial position of the brain signal. In fact, the ambiguity is at exactly half the field-of-view. For this reason these ghosts are often called N/2 ghosts, where N is the number of voxels spanning the field-of-view in the phase-encoded direction; the ghost appears displaced by half the field-of-view. The bigger the delay, the bigger the phase zigzag, the bigger the ambiguity, and the more signal is deposited at the half field-of-view position instead of the correct spatial position.

Below is an example of three EPI slices, contrasted to show the ghosts. (It was necessary to increase the background intensity to visualize the ghosts; that is typical of a well-shimmed, low-ghost EPI.)
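Returning to the slice-ordering rules above, here is a minimal sketch of them in code, useful when setting up slice-timing correction. The numbering is my own labeling of spatial positions (1 = the most negative position on the slice axis, e.g. the most inferior slice for transverse slices); note that for descending mode the scanner's own slice numbering runs 1 to N along the head-to-foot acquisition direction, as described above:

def temporal_order(n_slices, mode):
    """Return spatial positions (1 = most negative end of the slice axis) in the order excited."""
    positions = list(range(1, n_slices + 1))
    if mode == "ascending":
        return positions                          # negative -> positive (foot-to-head for axials)
    if mode == "descending":
        return positions[::-1]                    # positive -> negative (head-to-foot for axials)
    if mode == "interleaved":                     # always negative -> positive
        if n_slices % 2:                          # odd count: odds first, then evens
            return positions[0::2] + positions[1::2]
        return positions[1::2] + positions[0::2]  # even count: evens first, then odds
    raise ValueError("mode must be 'ascending', 'descending' or 'interleaved'")

print(temporal_order(27, "interleaved"))   # [1, 3, 5, ..., 27, 2, 4, ..., 26]
print(temporal_order(28, "interleaved"))   # [2, 4, 6, ..., 28, 1, 3, ..., 27]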
As a rough rule of thumb – and given that it is difficult to quantify by inspection – the ghost level should be 5% or less of the intensity of the brain signal. (A rough numerical version of this check is sketched below, after the list of causes.)

What are some physical causes of the phase zigzags that lead to N/2 ghosts? In short, any physical effect that causes a temporal mismatch between the data sampling periods (i.e. when the analog-to-digital converter is turned on) and the readout gradient waveform will lead to ghosts. Another way to think of this mismatch is as any effect that causes the data sampling not to happen where it is supposed to, which is centered on the flat portions of the alternating positive and negative lobes of the readout gradient echo train. Here are the big offenders:

(1) Delays in the MRI signal through to the receiver electronics stages. Delays induced by analog filtering will appear at the start of sampling periods for positive read gradient episodes, but, following time reversal, at the end of sampling periods for negative read gradient episodes.

(2) Short-time scale eddy currents. These cause an imbalance in the multiple gradient echo train, such that the eddy currents add to or subtract from the gradient waveform and cause either early or late refocusing in an alternating fashion through the echo train.

(3) Poor center-frequency adjustment (global or regional). This can include frequency drift with gradient heating. Anything that causes the frequency-encoded readout to be slightly off-resonance will be equivalent to an alternating phase shift imposed on alternating echoes in the train, directly causing ghosts. (Recall that in non-EPI imaging, if you acquire off-resonance this is equivalent to a shift in the frequency-encoded axis. If the off-resonance shift is sufficiently large the image will start to alias in the frequency-encode axis (assuming no filtering to clean it up!). Thus you can recognize the ghosting as an aliasing-like artifact.)

(4) In-plane rotation of the field-of-view. Each physical gradient (Gx, Gy or Gz) has a slightly different electrical inductance and thus has a different response rate to being switched on/off. When the readout gradient is pure Gx, Gy or Gz there is no problem: we are switching a single gradient coil on and off, and its response characteristics are constant. But if the readout/phase-encode axes are "mixed" in the magnet reference frame by an in-plane rotation, now there will be, for example, some Gy plus some Gx in the readout gradient vector (Gr in the image reference frame). This leads to a difference in the rate at which one component of Gr comes on compared to the other component.

(5) Mechanical resonances. These manifest in a similar fashion to eddy currents, causing an imbalance in the gradient waveform such that echoes may occur early then late, or late then early, in an alternating fashion throughout the echo train.

Reviewing the list above, it should be clear that you can control (3), (4) and (5) to a considerable extent by shimming/on-resonance adjustment, by avoiding rotated image planes and by avoiding mechanically resonant echo spacings, respectively. A bad shim (e.g. because the subject's head isn't straight in the magnet), rotating your image plane or using a mechanically resonant echo spacing will all lead to higher-than-necessary ghosts. Note, however, that sources (1) and (2) are largely beyond our control and are features of the scanner that have been refined by Siemens for decent EPI performance. Residual ghosts can be corrected to a certain extent by applying phase corrections to the data.
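Here is a rough numerical version of the 5% rule of thumb above, as a minimal sketch: compare the mean intensity in a region of ghost (outside the brain, offset by half the field-of-view along the phase-encode axis) with the mean intensity inside the brain. The synthetic image and crude ROI choices are mine, purely for illustration:

import numpy as np

def ghost_to_signal_ratio(img, brain_mask, ghost_mask):
    """img: 2D magnitude EPI slice; masks: boolean arrays of the same shape."""
    return float(img[ghost_mask].mean() / img[brain_mask].mean())

# Synthetic 64x64 "slice": a bright disc (brain) plus a faint copy shifted by half the
# field-of-view along the phase-encode (row) axis, sitting on a small noise floor.
y, x = np.mgrid[0:64, 0:64]
brain = ((y - 32) ** 2 + (x - 32) ** 2) < 15 ** 2
ghost = np.roll(brain, 32, axis=0) & ~brain
img = 1000.0 * brain + 30.0 * np.roll(brain, 32, axis=0) + 5.0

ratio = ghost_to_signal_ratio(img, brain, ghost)
print(f"ghost level is about {100 * ratio:.1f}% of the brain signal")   # ~3.5%, i.e. acceptable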
On the Siemens scanner this phase correction is achieved by acquiring three gradient echoes immediately after the excitation RF pulse and before the EPI readout echo train starts. These additional "reference" or "navigator" echoes are used to assess the mismatch between the positive and negative gradient data in the absence of phase encoding, and allow a phase correction to be applied to the raw data prior to 2D FT so that the zigzag phase difference is minimized between alternating k-space lines. You don't need to do anything to have this correction step applied; it is done automatically (indeed it can't be turned off!). However, you do want the correction step to have the minimum work to do, so you should always take care not to accidentally introduce new ghost sources, such as (3)-(5) above.

On the Contrast tab I notice that fat suppression is enabled for EPI. What does it do?

While water constitutes some 60% of total human body mass, and is thus the largest source of hydrogen atoms (protons) giving MR signal, fat is the second-largest source of protons. We have several percent of our total mass as fat. And, unfortunately for fMRI experiments, some of it is present around the head, in the scalp. Not only that but the fat (or lipid, if you prefer) in the scalp is, on a molecular level, quite mobile, behaving almost as if it is a jelly or a viscous liquid rather than something solid, like bone. And that means it will have an intense MR signal.

In the previous section you learnt that one of the sources of N/2 ghosts is off-resonant signal. So if we have an abundant source of hydrogen – fat is primarily long chains of CH2 groups – and if, because of a chemical structure that differs considerably from that of water, those hydrogen nuclei happen to have a different resonant frequency than the (dominant) water signal, then we have an immediate problem: we can't have both the water and the fat protons be on-resonance at the same time! One of them – the fat, because it's the smaller of the two – must by necessity be allowed to be off-resonance. Hence, the fat signal will cause ghosts, just like badly shimmed water signal (even though the physical source is very different). Indeed, the frequency shift of fat is usually greater than that of badly shimmed water! At 3 T, water and fat protons resonate some 430 Hz apart. When you consider that your typical EPI bandwidth is something like 30 Hz/pixel in the phase encoding dimension, you can quickly see that fat signals are going to produce big problems: a 430 Hz shift at roughly 30 Hz/pixel corresponds to a displacement of about fourteen pixels in the phase-encoded direction.

What to do? Unlike badly shimmed water signal, there is no way to place the fat on-resonance if the water is already on-resonance. Therefore, we need to get rid of the fat signal. This is fair enough because the signal from scalp fat isn't valuable to us; we aren't expecting to localize activations to it (I hope). In other words, if we can manage to eliminate the fat signal then we will eliminate this particular source of ghosts, too. There are a few different ways to eliminate fat signals. For fMRI, the two most common methods are to simply avoid exciting fat in the first place, via some sort of 'spectral-spatial' RF pulse that excites the water in the slice but fails to excite the fat, or to 'pre-saturate' the fat resonances just before the slice-selective excitation RF pulse is applied. Both of these approaches are designed to produce no fat signal at the time of the EPI readout train, and of course there are pros and cons to both of them. On our scanner, pre-saturation is the preferred method.
This is primarily because the slice profile is generally better (i.e. squarer) and can be made narrower (thinner slices) for fat pre-saturation than for the spectral-spatial RF pulses designed to avoid fat excitation. The penalty for performing the suppression of fat and the excitation of the slice as separate events is a few milliseconds per slice of timing overhead, thereby reducing the spatial coverage in the slice direction a little bit. For all EPI sequences on our scanner, the parameter “Fat suppr.” on the Contrast tab should be set to Fat sat., for fat pre-saturation. You should not disable fat sat (set it to ‘none’) or switch to the spatial-spectral excitation pulse (‘Water excit. normal’ option) unless you fully understand what you’re doing. Disabling fat suppression entirely will lead to very intense ghosts from subcutaneous lipid, while switching to the spatial-spectral pulse will generate slice thicknesses that have not been verified as matching the nominal slice thickness shown on the screen. (Tests of the spatial-spectral option are ongoing. There may be several other problems with Water excit. normal. Bottom line: don’t use it!) So that’s subcutaneous fat dealt with. The astute among you might be wondering something: if subcutaneous lipid is such a problem, why doesn’t the white matter in the brain produce ghosts for the same physical reasons? White matter contains a lot of long-chain fatty compounds such as myelin, after all. Are these not also sources of problem CH2 signals? Luckily for us, whilst these compounds are indeed sources of abundant CH2-containing lipid molecules, these molecules are generally too tightly bound to produce very much MR signal. They are more solid-like than the jelly-like composition of scalp fat. In conventional anatomical as well as EPI scans, almost all the signal that we can get from white matter arises from the cellular water. Only a minuscule amount comes from fatty compounds, and we can safely ignore its effect on EPI. What is the origin of signal dropout in EPI? Can it be fixed? Signal dropout is another problem caused by magnetic susceptibility. Recall that because air, bone, tissue, etc. all interact with an applied magnetic field in different ways, severe and spatially complex magnetic field gradients are established across the boundaries between different substances. The spatial characteristics and magnitude of the gradients will depend on the composition as well as the geometry of the sample, and on the orientation of the sample to the applied magnetic field. The inferior portions of the brain and the frontal and temporal lobes are especially badly affected by susceptibility gradients because of the particular geometry of air-filled cavities and the cranium near these brain regions. So why does the signal disappear in some parts of the brain? The simplest conceptual answer to this question is to consider the phase of the signal across an individual image voxel. The thickness of the voxel in the slice direction is obviously the slice thickness. Let’s say the slice thickness is 4 mm. Now, we know that because of the background susceptibility gradients mentioned above the signal in the voxel is dephased in the time from the RF pulse, which generates the initial magnetization (i.e. the signal), to the time the signal is detected at the echo time, TE. The longer TE the more dephasing happens and the more signal is lost. Why is that? 
Well, if you think about the destructive interference caused by two signals that have the same magnitude but have opposing phase you can see how, as dephasing occurs, more and more parts of the signal across the voxel will cancel each other out. Once the phase dispersion across the voxel is random, the magnetization that was coherent at TE=0 (immediately after the RF excitation pulse) is now totally incoherent, and there is no net signal. We have signal ‘dropout.’ There are also in-plane mechanisms that can lead to signal dropout; essentially, the magnetic susceptibility gradients interfere with the applied (imaging) gradients and cause the local k-space trajectory to differ significantly from that intended by the imaging gradients alone. If the susceptibility gradients cause the local signal to refocus early or late relative to the multi-echo data readout then we will “miss” that signal in our sampling window. So, given the problem, what are the potential remedies? In essence there are four simple tactics available as standard on the Siemens Trio. These involve shimming, slice orientation, voxel resolution and echo time (TE). Of these, shimming is the preferred approach, as far as possible, because it gets at the root cause of the problem: it attempts to reduce the spatially complex susceptibility gradients so that the magnetic field becomes homogeneous across the entire head. If we can reduce the susceptibility gradients that cause dephasing in the first place, we can reduce the signal loss. But there are inherent limitations to shimming, not least of which is that the shim coil currents cannot usually be made large enough, or impart sufficient opposing spatial complexity, to offset the gradients established in the head at all locations simultaneously. We can fix 90% of the brain’s signals at the expense of doing a less-than-perfect job in the frontal and temporal lobes, for instance. Having done as well as we can with the Siemens field map shimming routine, what else can we do? The direction of the slice makes a difference because the susceptibility gradients tend not to be isotropic. It turns out that through-plane dephasing is usually worst for axial slices. Sagittal slices tend to preserve more frontal and temporal lobe signal than axial slices, and coronal slices even more signal than sagittal slices. Of course, it isn’t always possible to use sagittal or coronal slices for a particular experiment, either for spatial coverage reasons or some other sampling consideration. What else? Increasing voxel resolution – making voxels smaller – tends to reduce the dephasing effects, leading to some recovery of signal. An easy way to increase resolution is to use thinner slices. If a 4 mm thick slice is split into two 2 mm slices we can expect to reduce the phase evolution across the thinner slices and recover some signal that would have been lost in a single 4 mm slice. In magnitude mode, therefore, the addition of two 2 mm slices together would net more signal than a single 4 mm slice, all other parameters being equal. But again, there are limits to how many thin slices you can acquire in a particular TR. The final way to reduce dropout is to use a shorter TE. You saw in an earlier section how different brain regions require different TE for optimal functional contrast, because T2* varies across the brain. 
Well, the susceptibility gradients contribute significantly to the local T2*, so it should come as no surprise to learn that those regions of the brain requiring a shorter TE for optimal functional contrast are also the ones that will exhibit most dropout. If we reduce TE we reduce the dephasing and hence reduce the signal loss. Thus, for robust inferior frontal lobe coverage in axial slices, it may be necessary to use a TE as low as 18 ms.

In the figure below you can see the effect of decreasing TE on the degree of signal dropout, as well as on overall image SNR, in a comparison of TE = 20 ms and TE = 36 ms. The frontal and temporal lobe signals are considerably lower in the latter image when compared to parietal and occipital lobes. Also note how much weaker the subcutaneous lipid in the scalp shows up at TE = 36 ms, because the presence of the skull immediately beneath, and air around the head, causes a strong susceptibility gradient across the scalp, leading to a relatively short T2* for the scalp fat. (The scalp fat T2 is actually very much longer than brain tissue T2, so the dark scalp fat signal would be a surprising observation if we didn't understand the effects of susceptibility gradients causing short T2*.)

(Figure: the same EPI slices acquired with TE = 20 ms and TE = 36 ms.)

What is the origin of distortion in EPI? Can it be fixed?

To understand why EPIs are distorted it is instructive to reconsider what makes the pulse sequence useful for fMRI in the first place: its speed. Recall that EPI is a repeated (multi-echo) gradient echo sequence, where a train of gradient echoes recycles magnetization many times, each time acquiring another line of 2D spatial information. (EPI was the original "green" pulse sequence!) For convenience of labeling, let's say we want to acquire an echo planar image that has a spatial matrix of 64x32 voxels in the plane. Here, the first dimension – 64 points – is the read axis, or frequency-encoded axis, and the second dimension – 32 points – is the phase-encoded axis. As the gradient echo train proceeds through the 32 echoes required to fully encode the 2nd image dimension, 64 frequency-encoded data points are read out during each individual echo.

Now let's put some timings on the scenario. Let's say that to acquire 64 frequency-encoded data points takes 0.5 ms. It will thus take 32 times 0.5 ms to complete all the echoes in the train, i.e. the entire 2D plane takes 16 ms to acquire in total. Of course, from the perspective of covering anatomy this is fantastic! We've got an entire 2D image in 16 ms! But there is a penalty. While we are busily switching the gradients back and forth in our multiple gradient echo sequence, encoding the necessary spatial information into the signal, the signal is exposed to contamination from gradients in the sample that we can't control. These are the infamous 'susceptibility gradients' that caused signal dropout in the previous section (and were one source of ghosts in the section before that). Now, however, we are principally concerned with that spatial component of the susceptibility gradients that acts in the same direction as the phase encode dimension of our EPI. In MRI, a gradient is a gradient is a gradient! Magnetization doesn't care whether we turned the gradient on with the scanner, or whether a gradient was present anyway by virtue of the physical properties of the subject's head in the magnet; the dephasing effects are the same!
Which means that while the gradients we control – the scanner gradients - are imparting their spatial effect on the signal, the background susceptibility gradients are “contributing” too! Now, it should come as no surprise – especially after seeing the effects of TE on signal dropout in the previous section – that the longer we take in encoding our spatial information, the more contaminated the signal will become. What is more, different parts of the brain experience different susceptibility gradients, so some parts of the brain will have higher contamination levels than others. Unsurprisingly, the frontal and temporal lobes, especially the inferior aspects, are the worst off. Areas that suffer from dropout are also likely to suffer the most from distortion. How much distortion happens? Well, the conceptual answer is that it depends on the length of the echo train and the magnitude of the susceptibility gradients (in the phase encode dimension). An EPI echo train that lasts for 20 ms might experience a distortion that is less than a millimeter for signal in the occipital lobe, for example. In the frontal lobe the same 20 ms echo train might experience a distortion of several voxels: 6-10 mm or more! As already described for the issue of dropout, we have limited scope to shim the entire brain to the magnetic field homogeneity we might like. That leaves us with two other approaches. The first is to reduce the problem at source by reducing the duration of the echo train. Reducing the spatial resolution in the phase encoding dimension achieves a shorter echo train, as do parallel imaging methods such as GRAPPA. (GRAPPA is described in later sections.) Another approach is to try to fix the distortion with a post-processing step using a map of the susceptibility gradients; a so-called (magnetic) field map. (See below.) A formula that relates the spatial distribution of the magnetic field to the distorted EPI can then be applied on a voxelwise basis and provide an “undistorted” EPI. However, there are several limitations with this field mapping approach, such as: (i) signals that are distorted and end up overlapped in the original EPI cannot be repositioned separately; (ii) regions of space with poor signal coverage in the field map image are not well supported and this can lead to errors with the algorithm; (iii) head motion between the field map and the EPI to be undistorted will lead to a mismatch and erroneous “corrections.” For these reasons, the application of field maps to raw EPIs or to statistical parametric maps after processing isn’t as common as you might think. Indeed, most studies – even group studies – tend to simply accept the 1D spatial distortion and do as well as possible realigning the distorted EPIs to distortion-free 3D anatomical templates. The amount of distortion that can be tolerated, and whether or not to acquire field maps, is generally a matter for an individual study. When you are establishing a new protocol for a new study, go talk to Ben or Daniel about distortion and related issues (dropout, TE, GRAPPA, etc.) and decide on the best compromise for your needs. What is a field map and how does it fix EPI distortion? One way to try to undo the effects of distortion in the phase encoding dimension is to measure the susceptibility gradients that produce it, and compute a fix. It’s a simple idea with practical complications. 
The essential idea is to acquire a map of the magnetic field – a field map – across the brain using a distortion-free pulse sequence - a standard spin warp phase-encoded gradient echo (GRE) sequence is normally used - with the same spatial parameters as the EPI you want to fix. Thus, you establish the same slice prescription, slice thickness, gap and in-plane resolution as used in your EPI, and acquire a pair of GRE images that differ only in their TE. The TE difference is set to allow phase evolution - the same dephasing that causes the distortion – between zero (perfectly homogeneous magnetic field) and 360 degrees. (I won’t get into the issues of phase unwrapping here. Suffice it to say that it’s important not to allow mod(360) ‘bounce’ points, or the entire process gets more complicated. It’s not something you usually have to worry about in practice. The TE difference has been set up suitably for use at 3 T in brain.) The difference of the phases acquired by the pair of GRE images is proportional to the TE difference and the underlying magnetic field. The distortion in the phase encoding dimension of your EPI is proportional to the echo spacing and the underlying magnetic field. Thus, a simple equation can relate the phase at a point in undistorted space to the expected distortion, i.e. the displacement of signal that should be at position y in undistorted space, to its position yd in distorted space, i.e. in the EPI. This introduces the first complexity into the distortion correction process. The equation relating y to yd applies only if there is a unique solution for all pixels. It assumes the distortion process, and hence the undistortion, is linear. But if the susceptibility gradients cause signal from multiple undistorted positions to coalesce in the distorted space (in the EPI) then there is no way to correctly reassign the signal at yd back to the multiple correct y locations. For this reason, the preference is to get as much distortion to be a stretch in the EPI, and as little as possible distortion to be compression (where undistorted pixels have, by definition, coalesced). We do this via selection of the phase encoding gradient sign, e.g. anterior-posterior phase encoding is preferred over posterior-anterior phase encoding for axial slices. Even with stretching being the predominant distortion type, however, there will be regions of brain for which the undistortion algorithm is incorrect, and some pixels will remain misplaced in the “corrected” image. The next experimental complexity arises when you consider the separate acquisition of the field map from the EPIs you want to fix. What if the subject has moved? Clearly, motion will cause some degree of mismatch between the field map and some or all of the EPIs in a time series. The assumption in the correction method is that signals in the GRE images provide mathematical support (signal, in other words) for all regions of space that can be (and have been) distorted in the corresponding EPIs. Motion will very likely challenge that assumption. How much? It depends, of course. But if you are sufficiently bothered by distortion in EPIs to want to try to correct it with a field map, then a good general rule is to acquire one field map per block of EPI, unless your EPI blocks are very short (2-3 minutes each) in which case one field map every two or three blocks probably suffices. But, as indicated, the appropriateness of the field map is dependent on the amount your subject moves within and between EPI blocks. 
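As a minimal sketch of the relation just described (my function names, representative numbers): the local field offset, obtained from the phase difference between the two GRE echoes, divided by the phase-encode bandwidth per pixel gives the displacement, in voxels, of signal along the phase-encode direction. Real correction tools such as FSL's FUGUE or the Fieldmap Toolbox in SPM (see the next answer) take care of phase unwrapping, masking and resampling for you:

import math

def field_offset_hz(phase_diff_rad, delta_te_sec):
    """Convert the phase difference between the two GRE echoes into an off-resonance in Hz."""
    return phase_diff_rad / (2.0 * math.pi * delta_te_sec)

def pe_shift_pixels(off_resonance_hz, echo_spacing_sec, n_pe_lines):
    """Displacement along the phase-encode axis, in voxels, for a given local field offset."""
    pe_bw_per_pixel_hz = 1.0 / (echo_spacing_sec * n_pe_lines)   # e.g. ~30 Hz/pixel
    return off_resonance_hz / pe_bw_per_pixel_hz

# Illustrative numbers only: a quarter-cycle (pi/2 rad) phase difference with a 2.5 ms TE
# difference corresponds to a 100 Hz offset; with 0.5 ms echo spacing and 64 phase-encode
# lines (~31 Hz/pixel) that signal is displaced by roughly 3 voxels.
offset = field_offset_hz(math.pi / 2, 0.0025)   # 100 Hz
print(pe_shift_pixels(offset, 0.0005, 64))      # ~3.2 voxels (a badly shimmed frontal region)
print(pe_shift_pixels(10.0, 0.0005, 64))        # ~0.3 voxels (a well-shimmed region)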
I want to try to fix my distortion with a field map. What do I need to acquire?

You first need to add a suitable acquisition to your protocol. In the Exam Explorer, copy and paste the gre_field_mapping sequence from: SIEMENS > advanced applications libraries > bold imaging into your protocol, then change all the spatial parameters of your copy of the gre_field_mapping sequence to match those of the EPI scan that you intend to undistort. The spatial parameters are the slice prescription, in-plane resolution (i.e. the field-of-view and matrix size), number of slices, slice thickness and slice gap. (If your protocol uses two or more different EPI parameter sets, e.g. one slice thickness for a task-based experiment and a different one for resting state, then you need a separate field map for each.) Note, however, that you don't need to set up the gre_field_mapping sequence to use GRAPPA or partial Fourier, even if your EPI scan uses one or both of these options. Contact Ben if you are unsure of how to match parameters. Unlike for EPI, it actually doesn't matter whether you use interleaved or descending slices, so feel free to leave the multi-slice mode set at Interleaved. All timing parameters, including the TE and TR, can be left at their default values as well, unless the scanner tells you that it must increase TR to fit the number of slices you're requesting, in which case accept the TR the scanner computes. Save the spatially matched field map sequence in with your protocol.

During your experiment, when you are ready to acquire the field map data, ensure that the slice prescription for the field map sequence matches that for your EPI, by using either AutoAlign or by copying the slice prescription parameter directly from the EPI to the target field map sequence. If you're using AAHScout, ensure that the AutoAlign parameter (on the Routine tab) is set to Head > Brain mode on the gre_field_mapping scan, exactly as you use for EPI. Depending on the spatial resolution requested, each field map will take about a minute to acquire. Larger matrix sizes will take longer. Re-acquire the field map whenever your EPI spatial parameters change, whenever you suspect the subject has moved (such that the field map is unlikely to match the EPIs you want to correct), and whenever there's an idle moment in your protocol.

Using the field map data: The gre_field_mapping sequence acquires two GRE images that differ in their TE; this is already set up for you. At the end of the acquisition the Siemens software automatically computes a phase map from the difference of the two TE images. Thus, on the database you will find one raw data set of 2N slices (two TEs for N slices) and one phase map with N slices, where N is the number of slices requested. The phase map data is what you need to take offline with your EPI to compute the distortion correction using, for example, the FSL routine FUGUE, or the Fieldmap Toolbox in SPM. (A minimal sketch of one possible offline route appears at the end of this answer.)

As mentioned in the previous section, exactly when and how you acquire field maps is a matter of experimental preference. You definitely need separate field maps if you acquire two or more different EPI protocols, e.g. you use one set of parameters for a localizer task and a different, higher resolution set, say, for the rest of your experiment. Another common situation is to use axial slices for one part of an experiment and oblique slices for another. You need separate field maps for each. Essentially, any time you can expect a spatial mismatch between the field map and the EPI, you need a new field map.
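By way of illustration only, here is one possible offline route using FSL, sketched via Python's subprocess module. This is an assumption about your analysis environment, not an officially supported recipe here: the file names, the TE difference and the dwell time are placeholders (take the real values from your own protocol), and you should check the exact options against the FUGUE documentation for your FSL version. The Fieldmap Toolbox in SPM is the equivalent route if you analyze in SPM.

import subprocess

delta_te_ms = "2.46"   # placeholder: use the TE difference of your gre_field_mapping protocol
dwell_sec = "0.0005"   # placeholder: the effective echo spacing of your EPI, in seconds

# Convert the Siemens phase-difference image into a field map in rad/s.
# (FSL expects a brain-extracted magnitude image here.)
subprocess.run(["fsl_prepare_fieldmap", "SIEMENS",
                "fieldmap_phase.nii.gz", "fieldmap_mag_brain.nii.gz",
                "fmap_rads.nii.gz", delta_te_ms], check=True)

# Unwarp the EPI time series along the phase-encode direction (the sign of
# --unwarpdir depends on your phase-encoding setting).
subprocess.run(["fugue", "-i", "epi.nii.gz",
                "--dwell=" + dwell_sec,
                "--loadfmap=fmap_rads.nii.gz",
                "--unwarpdir=y-",
                "-u", "epi_unwarped.nii.gz"], check=True)

However you run the correction, the caveat above still applies: a spatial mismatch between the field map and the EPIs it is supposed to fix will corrupt the "correction" rather than improve it.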
This can arise as just mentioned, because of intentional parameter changes, or because of subject motion. Whoa! I’m watching my EPIs on the Inline Display window and I’m seeing all sorts of weirdness. What’s going wrong? Motion is the biggest obstacle between you and a successful fMRI experiment! Whenever EPIs don’t appear as you’d expect, the initial suspicion should be subject motion. But it’s not the only source of artifacts in EPIs, nor is there just a single way in which motion can render your EPIs less than perfect. Here are the major artifact sources and suggested remedial steps to fix them: Nodding motion. A change in the chin-to-chest direction might be caused by your subject fidgeting, craning to see the visual display, coughing, swallowing, talking, reaching to scratch his knee, or a sympathetic head motion in concert with a button push on a response box. How the motion appears in your images will depend significantly on your slice direction. If you are using axial or near-axial slices you may notice significant change in the anatomical content of a particular slice from volume to volume, especially towards the top of the head where there is very little signal in the image and a small shift will add or subtract a relatively large amount of signal. Note also the fluctuations in the intensity of N/2 ghosts across the images, resulting from temporary degradation of the magnetic field homogeneity whenever the head is displaced from the position it was in during the shim. If you are seeing something like this, try running a short test EPI of say thirty volumes (having disabled your script so that it doesn’t run by accident!) and ask the subject some questions, or to cough or swallow, etc. and see what happens in the Inline Display window when you know motion is definitely present. Look like your earlier problem? Then go in and repack the subject’s head, or add some additional restraint. Don’t be tempted to just “coach” your subject into moving less. You want good head restraint AND a cooperative subject so that if and when the subject moves involuntarily the distance he can move is minimal. Side-to-side motion. This sort of motion is rare, unless you have failed to add any padding to the sides of your subject’s head, or your task involves making saccades to extreme left/right targets, or if you have fidgety kids or other subjects who find it difficult to lie still for any length of time, no matter what you tell them. If you are using axial or near-axial slices, this motion appears as an obvious in-plane rotation or translation left-to-right of the anatomical signal. The ghost level may also fluctuate in concert with the rotation or translation. If you suspect side-to-side motion and think you can reduce it, simply add an extra piece or two of foam padding to one or both sides of the subject’s head. Don’t squeeze the subject in so that they are uncomfortable - especially if they are wearing headphones that can dig into the sides of their head if they are too tight - but do make sure they are in snugly. Ask the subject to try shaking his head side-to-side and see how much capacity for movement remains. It should be difficult for the subject to twist more than a few millimeters left or right, and they should always return to a symmetric, centered position when they relax. Another strategy to deal with motion concerns subject comfort. Uncomfortable subjects move more. Start off by ensuring the subject is comfortable, with the knee support, a blanket, etc. 
Then inform the subject that if he needs to stretch his lower back or scratch his nose or move his feet, he should do so only when the scanner is silent. Let the subject know that movement of any part of his body – even his feet or his arms – is likely to move his head via his skeleton. You can't prevent a subject, even a comfortable one, from moving entirely. Instead, try to ensure all movements happen between runs so there's less need for a subject to move during a run. Ask the subject to let you know if/when he needs to stretch or scratch so that you are in a position to decide whether you might need to re-shim or check slice positions. Finally, too much head packing can be uncomfortable, too. Jamming excessive foam between the headphones and the RF coil is liable to leave circular imprints on the sides of your subject's head. You shouldn't be surprised if the subject asks to bail on the scan early because his ears hurt, or he starts trying to relieve the pressure points by moving during the scan.

How much subject movement is too much?

Oh, if only there were a simple answer to this age-old question! At the end of the day, only the results of a full analysis can determine whether your subject moved "too much." As a rough rule of thumb, though, users report that rigid body realignment numbers of less than 2 mm of movement in any one axis over the duration of a time series (say 200 EPI volumes) are normally acceptable for getting activations that make sense, and without too many false positives. The more you scan and the more data you analyze, the more likely you are to be able to tighten this criterion and perhaps add your own empirical assessment that you can use during a scan session (where you have a chance to fix the problem). Most often this means watching the Inline Display closely for glaring examples of subject motion: yawning, nose scratching, head movement coincident with respiration because you didn't pack the head very well, etc. If you can see the head moving the chances are you'll get more than 2 mm overall movement.

EPI: ADVANCED PARAMETER AND SEQUENCE ISSUES

To understand this section you will need to have a basic understanding of the EPI pulse sequence. A basic understanding of k-space is also extremely useful. If you haven't already done so, consider reading chapter 4 of the book Functional Magnetic Resonance Imaging by Huettel, Song & McCarthy, or read the series of blog posts, Physics for Understanding fMRI Artifacts at http://practiCalfMRI.blogspot.com.

What the hell is iPAT? Last time I checked, grappa was a strong Italian drink! It makes no sense!

While you may feel like you need a drink when you have to think about how parallel imaging works, the concepts and the practical consequences are relatively simple to understand. In the first instance, iPAT is just what Siemens calls its parallel imaging implementation. It stands for integrated parallel acquisition techniques and is the general term for the entire family of receiver coil-based data acceleration methods. Essentially, with parallel imaging methods such as GRAPPA ("generalized autocalibrating partially parallel acquisitions") and mSENSE ("modified sensitivity encoding"), spatial information is partly acquired from the receive-field of the RF coil elements, and partly from k-space (i.e. gradient) encoding. With conventional, non-parallel imaging we only use k-space encoding. Using iPAT means that we can acquire fewer gradient episodes and so acquire less data per volume during an EPI time series.
In practice, with GRAPPA enabled and iPAT = 2 we acquire half the number of echoes for each EPI as without iPAT. That means the level of distortion in the phase encode direction is also halved. And if we were using GRAPPA with iPAT=4 we would acquire only one quarter of the gradient-encoded data that would be needed without iPAT, and distortion would be reduced by a factor of four by comparison. Whilst iPAT is available for most pulse sequences, generally you won't care whether iPAT is being used or not for anatomical scans. (It is being used for your standard MP-RAGE, for instance.) But you definitely need to be aware of using iPAT for your EPI scans because it has consequences for image SNR, artifacts, motion sensitivity and the maximum nominal spatial resolution per unit time. So let's focus on iPAT as used for EPI.

There are two flavors of iPAT available for all the EPI sequences. Click the Resolution tab, then select the iPAT card option. PAT mode is either None, GRAPPA or mSENSE. If PAT mode is set to None then parallel imaging is not being used. GRAPPA and mSENSE are both parallel imaging methods, but they are k-space and image space-based methods, respectively. For reasons that you almost certainly don't care about, it turns out that GRAPPA is better than mSENSE for fMRI. So if you want to use parallel imaging, set PAT mode to GRAPPA.

When you select GRAPPA you will find that two more information fields come alive: Accel. factor PE, and Ref. lines PE. The first, Accel. factor (also known as the iPAT factor), is the acceleration amount. A factor of two means that only every second k-space line is acquired in the EPI echo train; a factor of three, every third line, etc. If you are using the standard, 12-channel head coil, set the Accel. factor to 2. Don't use factors of 3 or 4 without talking to me first! If you are using the 32-channel head coil you may use a factor of 2, 3, or 4, your choice. But it is generally a good idea to decide in discussion with Ben or Daniel.

The Ref. lines PE parameter controls the number of phase encoding lines that are acquired during the auto-calibrating signal (ACS) scan (sometimes referred to colloquially as the GRAPPA reference scan). This parameter can be left at the default 24. If it's set to less than 24, come talk to me. If it's higher than 24, feel free to set it to 24, or come talk to me and we'll investigate whether there are any reasons not to use the lower value. In empirical tests I found no performance difference using 24, 36 or 48 reference lines.

So what happens if you have GRAPPA enabled? Well, in exchange for being able to skip k-space lines in each EPI, we need to map spatial information at the start of the acquisition. With iPAT=2, two reference EPI volumes are acquired. These happen immediately after the dummy scans and before the first real (saved) volume of EPI. (Higher iPAT factors require more reference steps, in proportion.) Not only do these reference scans add some time to the total measurement but, more importantly, it is essential that there be no subject motion while they are acquired! If the subject moves during those critical few seconds - for iPAT=2 and TR=2000 ms the reference scans would take 4 seconds to acquire - the spatial reconstruction will be affected, causing all of the EPIs in the subsequent time series to have artifacts in them.
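If you want to see exactly when that vulnerable window falls in your own protocol, here is a minimal timing sketch (ordinary Python; it assumes, per the description above, two dummy scans and one reference volume per unit of the iPAT factor, and the TR is just an example):

    # Sketch: when do the GRAPPA reference (ACS) volumes fall at the start of a run?
    def run_start_timeline(tr_s=2.0, n_dummy=2, ipat_R=2):
        dummy_end = n_dummy * tr_s            # dummy scans finish here
        acs_end = dummy_end + ipat_R * tr_s   # reference (ACS) volumes finish here
        return dummy_end, acs_end

    dummy_end, acs_end = run_start_timeline(tr_s=2.0, n_dummy=2, ipat_R=2)
    print(f"Dummy scans: 0 to {dummy_end:.0f} s")
    print(f"ACS volumes (keep perfectly still!): {dummy_end:.0f} to {acs_end:.0f} s")
    print(f"First saved EPI volume starts at {acs_end:.0f} s")

For TR=2000 ms and iPAT=2 this reproduces the 4-8 second window used in the swallowing trick described a little further on; scale accordingly if your TR, dummy-scan count or iPAT factor differs.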
How do you know if your subject moved during these reference acquisitions? Well, all you can do is open the Inline Display window as soon as you've started the scan and wait to see the EPIs that result. If the subject did move during the reference scans, you'll see artifacts in the images and these will stay fairly constant as the scan progresses, i.e. they don't suddenly go away, leaving lovely EPIs. (See the next section for an example.) Contrast this with a situation where the subject does NOT move during the reference scans, but does move a short time thereafter. In this case, the EPIs will start out looking pretty good, then occasionally go bad with the subject movement, then perhaps go back to looking good again, etc. In summary, then, if the images start bad and stay bad, bet that the subject moved during the GRAPPA reference acquisitions and stop the scan. Remind the subject to lie as still as possible, and start again.

One related trick is to ask the subject to swallow before the scan starts, and to ask him not to swallow again until he has counted ten seconds from the start of the EPI noise. With a TR of 2 seconds and two dummy scans the subject won't then swallow until the first real (saved) volume of EPI has been acquired. (Recall: 4 secs of dummy scans, 4 secs of reference acquisitions for iPAT=2, then the first real EPI volume is acquired.) Many subjects don't consider swallowing (or moving their eyes, come to that!) as 'head' movement. Politely remind them that at the beginning of the scan it is also important to keep everything still, including the eyes, the mouth/throat, arms, legs…

If you want the ultimate in experimental robustness for GRAPPA, consider having several null events at the start of your stimulus script. For example, you might have four fixation crosses in a row, each displayed for 2 seconds (for TR=2000 ms), before the first real stimulus is displayed. This would give you an eight-second time window during which you could evaluate the EPI quality – looking for possible movement during the GRAPPA reference acquisitions, as just described – and, if needed (or even if you're just slightly worried!), you can stop the scan before any real stimuli have been presented to the subject. You could stop and restart your EPI acquisition as many times as necessary to avoid movement during the reference scans. Of course, in doing this you will need some experience to differentiate movement during a GRAPPA reference scan from some other problem (e.g. the effects of a bad shim) but, given the general problem of subject motion, it doesn't hurt to provide yourself a small cushion at the start of each run.

Is GRAPPA a good technique to use? What are the caveats?

In general, the decision whether or not to use parallel imaging (iPAT) - whether GRAPPA, mSENSE or another iPAT method not presently on the scanner - is driven by the spatio-temporal requirements of your experiment. (On occasion, a user might opt to use iPAT with the express purpose of reducing distortion, but in general that is a secondary consideration, after spatio-temporal specifications, sensitivity, etc.) If you can meet your voxel resolution and spatial coverage (slices per TR) requirements without GRAPPA, apply Occam's razor and don't introduce an unnecessary complexity (which will translate into additional motion sensitivity, as you will see) that your neuroscience question doesn't require. You will only want to consider GRAPPA if you need higher spatio-temporal resolution than can be achieved with full k-space EPI.
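One way to estimate what full k-space EPI can achieve, before deciding you need GRAPPA, is a back-of-the-envelope slice-budget calculation. The sketch below is illustrative only: the echo spacing (0.5 ms) and the per-slice overhead (fat saturation, excitation, spoiling, etc., lumped into 30 ms) are assumed round numbers, not measured values from our scanner.

    # Sketch: roughly how many slices fit in one TR, with and without GRAPPA?
    # Echo spacing and per-slice overhead are assumptions chosen purely for illustration.
    def slices_per_tr(tr_ms=2000.0, matrix_pe=64, ipat_R=1,
                      echo_spacing_ms=0.5, per_slice_overhead_ms=30.0):
        echoes = matrix_pe / ipat_R                             # echo train length
        slice_time = echoes * echo_spacing_ms + per_slice_overhead_ms
        return int(tr_ms // slice_time)

    for R in (1, 2):
        print(f"iPAT={R}: about {slices_per_tr(ipat_R=R)} slices in TR=2000 ms")

With these assumed numbers the unaccelerated case lands right on the rule of thumb quoted next (about 32 slices of 64x64 EPI in a 2-second TR); treat the absolute values as rough, but the real point is the trend that a shorter echo train buys more slices per TR.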
As a rough rule of thumb, 64x64 matrix EPI can be acquired without GRAPPA, allowing circa 3.5 mm in-plane resolution and circa 32 slices in TR=2 sec. These parameters are typical for 3.5 mm voxels with whole brain coverage. If you need to push the spatial resolution below 3 mm in plane, or acquire thinner slices and maintain whole brain coverage, or maintain 3.5 mm voxels but use a TR much shorter than 2 secs (e.g. for connectivity), then GRAPPA may be a solution.

Let's first deal with method selection. Why GRAPPA, not mSENSE? We have found that mSENSE is much less stable in the presence of subject motion when used to acquire EPI for fMRI. So at this point the choice is GRAPPA or not. As discussed in the previous section, GRAPPA (as with other parallel imaging methods) takes advantage of the spatial information provided by the RF coil geometry to allow undersampled EPI acquisitions. Here, undersampling means we don't have to acquire every line in k-space. And just how much we can undersample, i.e. the maximum acceleration (or iPAT) factor that is permitted, will depend primarily on the RF coil in use. Generally, the more channels the RF coil has, the more spatial information can be encoded from the coil and the higher the maximum iPAT factor can be. As mentioned in the previous section that introduced the GRAPPA method, you are really limited to maxima of iPAT=2 for the 12-channel head coil and iPAT=4 for the 32-channel head coil.

So GRAPPA allows faster EPI acquisitions. That's good, right? Yup, it can be. If you are using iPAT=2 you need only acquire 32 echoes in the EPI echo train, instead of the full 64 echoes, and you can still get a 64x64 matrix image out of it! Clearly, reducing the length of the echo train means we spend less time acquiring the spatial information for each EPI slice, and that means that we can acquire more slices per unit time (or per TR), meaning that our spatial coverage can be improved. Thus, as a general principle, the higher the iPAT factor, the higher we can make the spatial resolution and/or spatial coverage without altering TR.

What about the caveats of using GRAPPA? First of all, you never get something for nothing! GRAPPA reduces SNR, even in the absence of motion. Sampling a shortened echo train with iPAT=2 reduces the image SNR by a factor of √2, i.e. to about 70% of the full k-space value. Next, there may be artifacts in the reconstruction process caused by the mixture of imperfect receive-field encoding with a k-space encoding process. These reconstruction errors tend to increase with increasing iPAT factor. This is essentially why we can't use higher than iPAT=2 with the 12-channel coil; we need more channels (coil elements) to push up to iPAT=3 or 4.

The next problem is far more insidious and there is no guaranteed way to avoid it ahead of time: head motion. Of course, you have carefully packed your subject's head and he has been instructed not to move, but he is still alive! Some movement is involuntary! Now consider how GRAPPA works again. First, some calibration scans are acquired, then the (undersampled) EPI time series starts up. What if the subject just happens to move – perhaps swallows – during those calibration scans? These critical reference acquisitions will be corrupted in some fashion that depends on the magnitude and nature of the motion. What precisely the resultant EPIs will look like is anybody's guess – there are infinite ways for a subject to move – but one example of a motion-contaminated GRAPPA acquisition is shown below.

[Figure: Motion-free GRAPPA images. Note the relatively homogeneous background noise.]

[Figure: Images reconstructed from a motion-contaminated ACS. Note the structured noise in several slices; this structure persists throughout every volume of the time series.]
Let's continue to focus on selecting a suitable iPAT factor for our experiment. We now recognize that any sort of reference scan that is used for reconstruction will necessarily increase the motion sensitivity of the entire time series. We can state with confidence that the least motion sensitivity is achieved for single-shot, full k-space EPI, i.e. when we aren't using GRAPPA. Use of GRAPPA will always increase motion sensitivity. And the longer we must spend acquiring reference scans before starting the EPI time series, the more motion sensitivity we introduce to the overall experiment. So we only want to move to higher iPAT factors if we can assure minimal subject motion, and/or we can take steps to mitigate any incidental motion (such as including dummy fixation cross events at the start of the task, to allow a window for evaluating the EPIs and making a decision on whether or not to allow the acquisition to proceed prior to the first real stimulus being presented).

We also need to be concerned about motion after the reference acquisitions, however. For EPI volume n, acquired n*TR seconds after the completion of the reference scans, there is an ever-increasing opportunity for the spatial information obtained during the reference scans to be rendered invalid. Slow, drifting motion is quite common, e.g. as subjects get more comfortable in the scanner, their neck muscles relax, the foam padding compresses, etc. And of course subjects may be yawning, scratching their noses, etc. These motions will generate a form of 'mismatch' between the spatial information encoded via gradients in the nth volume acquisition and the prior reference scan information acquired at the start of the time series. As before, precisely how that mismatch manifests in the resultant nth completed EPI depends on the nature of the motion. Whether or not you decide the artifacts are too large to continue the current EPI time series will depend on many things, not least whether the motion was a one-time event and the subject returned his head to the starting position, whether the subject seems to be moving almost continuously, whether the task has novel components that mean it can't be re-run on the current subject, etc. As with many issues in fMRI, what you do will be dictated by your experience, and that means interpreting and differentiating between the various types of artifacts. GRAPPA isn't for the inexperienced!

To finish up this section, let's go back to the initial question: GRAPPA or not? You've now got an appreciation of the trade-offs with GRAPPA: essentially, this means exchanging higher spatio-temporal resolution for lower SNR and more motion sensitivity. Is it a fair trade? It all depends! If your experiment requires 2 mm voxels then you have little choice but to select how you do GRAPPA, not whether you do it. But if you only need 3 mm voxels then you have the choice to do GRAPPA or not. (Probably not.) Are you in between? Then it's probably time to talk protocols with Ben and see if one factor overrides the others for your experiment.

What is "partial Fourier" and why might I want to consider it for EPI?

Partial Fourier (pF) is another approach to reducing the number of k-space lines acquired in order to produce an echo planar image. (It can also be used for non-EPI sequences but here we will focus on its use for EPI.)
Like parallel imaging methods, pF is intended to speed up data acquisition, usually as a way to increase the spatio-temporal resolution. However, unlike parallel imaging techniques such as GRAPPA, pF doesn't require any sort of reference scan. All the information needed to reconstruct a particular EPI slice is contained in that (partial) slice acquisition.

The temporal benefit arising from pF can be understood by considering the k-space matrix below. Rather than acquiring every single echo in the EPI echo train, a contiguous block at the start of the train - anywhere from a small fraction up to just under half of the phase-encoded echoes - is simply omitted. (In the diagram below the first 7/16ths of the echoes have been omitted, so that only just over half of the echoes are acquired.) This allows the TE to be shortened, thereby allowing more slices per unit time. Acquiring partial k-space produces a k-space matrix with two distinct parts: the low spatial frequencies in the central part (dark gray) are sampled symmetrically, whereas the high spatial frequencies have been measured only once, on one side of the k-space matrix (light gray). To reconstruct the final EPI from a 2D FT we need to synthesize the missing k-space (white). This is permissible because the k-space of a real object, such as a brain, exhibits what is known as Hermitian symmetry, provided certain conditions are met. The high spatial frequencies sampled on the right, in light gray, can be converted mathematically into the missing data on the left, albeit with a slight reduction of the SNR for the high spatial frequencies. (By sampling the high frequencies only once, their SNR is reduced by √2.) Then, once a complete k-space matrix has been obtained, the result can be 2D Fourier transformed to yield images.

Now, Siemens simply leaves the white space (the omitted echoes) set to zero, so that they add no signal or noise to the final image. This zero-filling approach to image reconstruction isn't as sophisticated as the conjugate synthesis just outlined, but provided the number of omitted echoes isn't too large it seems to work. (Siemens allows a maximum omission of a quarter of the total echoes, through partial Fourier factors of 7/8ths or 6/8ths only.) A toy demonstration of what zero filling does to an image is sketched a little further below.

In contrast to GRAPPA, skipping a portion of the echoes in a partial Fourier acquisition doesn't alter the inherent distortion in the final image. This is because GRAPPA with iPAT=2 skips alternate lines in k-space, making the sampled (acquired) k-space step size twice what it would be for an unaccelerated, full k-space EPI matrix, thereby doubling the effective bandwidth in the phase encoding dimension and halving the inherent distortion. But with partial Fourier the k-space step size is maintained at the same value as for full k-space. The echoes that are dropped from the acquisition reside in a single block at one side of the k-space matrix. Thus, the bandwidth in the phase encoding dimension is unchanged from a full k-space acquisition, and the distortion in the phase encoding dimension is unchanged as well.

Is partial Fourier a good technique to use? What are the caveats?

In general, partial Fourier should only be considered when you wish to use a TE that is considerably shorter than can be attained by the acquisition of your desired full k-space matrix (e.g. to reduce signal dropout), or to increase by a few slices the spatial coverage in the slice direction (i.e. slices per TR).
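Here is the toy zero-filling sketch promised above: pure numpy on a synthetic disc "phantom" (not real scanner data), zeroing the first quarter of the phase-encode lines to mimic a 6/8ths zero-filled reconstruction. Comparing img_full with img_partial shows the smoothing and ringing along the phase-encode direction that zero filling introduces.

    # Toy demonstration of zero-filled partial Fourier (illustration only; synthetic object).
    import numpy as np

    N = 128
    y, x = np.mgrid[0:N, 0:N]
    obj = (((x - N/2)**2 + (y - N/2)**2) < (N/3)**2).astype(float)   # simple disc "phantom"

    k_full = np.fft.fftshift(np.fft.fft2(obj))   # full, centered k-space
    k_partial = k_full.copy()
    k_partial[: N // 4, :] = 0.0                 # omit first 2/8ths of PE lines, leave them zero-filled

    img_full = np.abs(np.fft.ifft2(np.fft.ifftshift(k_full)))
    img_partial = np.abs(np.fft.ifft2(np.fft.ifftshift(k_partial)))
    # img_partial is blurred/ringed along the phase-encode (row) direction relative to img_full.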
Let's say you want to end up with images that are 128x128 pixels. With full k-space coverage, let's assume the minimum TE to achieve that matrix is 44 ms. But you want to use a TE of only 30 ms because you know that gives robust BOLD signal, and unless you can shave 14 ms off the acquisition time for each slice you won't get sufficient brain coverage in the slice dimension, either. By omitting the first thirty-two of 128 echoes (i.e. using 6/8ths partial Fourier) it is feasible to reduce the minimum allowable TE by something like 16 ms, thus allowing the shorter TE of around 30 ms that you want for your experiment. You will acquire only 96x128 data points, then have the scanner reconstruct the "missing" 32 lines of data in the phase encode dimension to yield images of 128x128 pixels, as you intend.

There are of course experimental caveats to partial Fourier scanning. By acquiring only 6/8ths of the echoes in a full echo train, the per-image SNR is decreased by a factor of sqrt(8/6), i.e. to about 87% of the value obtained with full 8/8ths sampling. Of course, this SNR comparison is valid only at a fixed TE, but since the partial Fourier scheme allows you to shorten the TE compared to full echo train sampling you will likely recover, perhaps even increase, the actual SNR in each EPI!

However, this caveat has a caveat of its own. Not all signal regions in every EPI slice will refocus at exactly the center of k-space. Well-shimmed regions, especially in occipital and parietal cortex, will likely refocus at the center of k-space (kx = ky = 0), as they should, and they should obey the SNR rules just mentioned above. Similarly, brain regions for which the magnetic field causes the signal to refocus late in the echo train (to the right of the k-space center) will be sampled in a partial k-space scheme as for full k-space, and again their SNR should not be drastically affected by the omitted portion of k-space. But regions suffering from strong magnetic field gradients – the usual suspects of inferior and deep brain, frontal cortex and lateral temporal lobes – may refocus earlier than the theoretical center of k-space. Recall that we don't start sampling until 2/8ths of k-space would already have been acquired were we doing full k-space sampling. (This is the blank region of k-space bounded by the dashed line on the left-hand side of the figure in the previous section.) It is entirely possible for these signal regions to refocus before sampling even commences, effectively "falling off the edge" of the sampled k-space and contributing (if anything) only weakly to the final image. In other words, signal dropout for these regions is enhanced. Note also that this dropout effect is unlikely to be sufficiently mitigated by reducing the TE, unless the TE is made very short indeed (which would have its own negative connotations for BOLD sensitivity, as discussed in an earlier section).

Below are three sets of images acquired with full, 7/8ths partial and 6/8ths partial k-space. Note the pronounced dropout in the temporal lobes as the degree of k-space sampling is reduced. In this example the TE was held constant; no attempt was made to compensate for dropout from early refocusing.

[Figure: Full Fourier 64x64 EPI; 7/8ths partial Fourier EPI; 6/8ths partial Fourier EPI.]

It looks like I will need to use either partial Fourier or GRAPPA to get the spatial resolution and coverage that I want. Which method should I use?

An obvious question, given the need to reduce the minimum attainable TE and/or increase spatial coverage (in terms of slices/TR), is whether to use GRAPPA or partial Fourier. The sketch below tallies the main quantitative differences described so far; the rest of this answer walks through the practical considerations.
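As a crude side-by-side, here is a sketch of the first-order bookkeeping for the two options (illustrative numbers only; the simple SNR factors ignore the TE change, the GRAPPA g-factor and any difference in physiological noise, all of which matter in practice):

    # Sketch: first-order comparison of 6/8 partial Fourier vs GRAPPA iPAT=2 for one EPI slice.
    # The matrix size and the simple SNR factors are illustrative approximations only.
    import math

    matrix_pe = 64
    options = {
        "full k-space":        dict(lines=matrix_pe,            snr=1.0,              distortion=1.0, needs_acs=False),
        "6/8 partial Fourier": dict(lines=int(matrix_pe * 6/8), snr=1/math.sqrt(8/6), distortion=1.0, needs_acs=False),
        "GRAPPA iPAT=2":       dict(lines=matrix_pe // 2,       snr=1/math.sqrt(2),   distortion=0.5, needs_acs=True),
    }

    for name, o in options.items():
        print(f"{name:>20}: {o['lines']:3d} PE lines, relative SNR ~{o['snr']:.2f}, "
              f"distortion x{o['distortion']:.1f}, reference (ACS) scan needed: {o['needs_acs']}")

The table deliberately omits the two most important practical differences discussed in the text: partial Fourier lets you shorten the TE (often recovering the SNR it costs), whereas GRAPPA's reliance on an ACS makes the entire run more motion sensitive.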
There is no simple answer to this question, but there are a handful of points to consider. The first is your intended use. If you want to shorten the minimum attainable TE and can achieve the TE you want using partial Fourier, then that is probably a good enough reason to stick with pF; it doesn't require any form of "reference scan" so it has lower motion sensitivity than GRAPPA. In some pilot studies at BIC, users have found that the temporal SNR of partial Fourier is better than that of GRAPPA when all other parameters are held constant. In one test on deep brain regions the TSNR for GRAPPA was 11, whereas it was 16 for partial Fourier. However, unlike GRAPPA, using partial Fourier does not reduce the level of distortion inherent in the phase-encoded dimension of the EPIs. Thus, if one of your intentions is to reduce distortion you might want to use GRAPPA and the highest acceleration factor that your experiment can tolerate, subject to the reduction of SNR, the presence of residual aliasing artifacts, the enhanced motion sensitivity and all the other fun stuff that comes with that method!

But do not despair! By the time you are ready to consider partial Fourier or GRAPPA for your protocol, it is time to talk to Ben or Daniel for an in-depth discussion of your experiment. We would probably suggest doing some simple pilot tests to assess each method's utility for your purposes. Under no circumstances should you be opting for partial Fourier or GRAPPA without fully understanding how your experiment might benefit (or otherwise) from your selection. At this point it suffices that you simply know that these options exist.

FINAL ISSUES:

I want to scan overnight. Is there anything I need to watch out for?

Yes there is. The magnet's stability is maintained in part by a drift compensation coil. As the magnet drifts, e.g. with temperature, this compensation coil has a current induced in it which then makes it appear as if the magnetic field is static. However, the coil can't keep on collecting current ad infinitum. Thus, it is 'quenched' once a day, so that instead of a steady magnet drift over 24 hours there is a single 'step' down in field. This quench step happens at 2 am each day. If you happen to be running a scan during the compensation coil quench there will be a sudden shift in the appropriate on-resonance frequency; a shift that your present acquisition doesn't 'know' about. Your images (whether EPI or anatomical) will therefore likely suffer from artifacts that could be big or small, depending on the size of the frequency step. To avoid these problems, it is suggested that you don't scan between about 1:55 am and 2:05 am, using the Siemens clock at the bottom-right of the screen to determine the time the scanner is using.

I hear we have a research agreement with Siemens. Why should I care?

If you are writing pulse sequences or doing anything that utilizes Siemens software for development then your work is probably covered by the terms of UC's research agreement with them. In short, writing code (processing modules, pulse sequences) for the Siemens scanner – even if Siemens doesn't actually help you do it – gives them "non-exclusive, royalty-free rights" to any intellectual property (patents) that you might submit based on your work. Note, however, that the agreement does not extend to so-called "derivative works," such as using someone else's customized sequence for an experiment, provided that in order to do the experiment you don't make your own modifications to the source code.
Derivative works are interpreted to mean any actual use of a method after it has been developed, the development having already taken place under the terms of the Siemens master research agreement (MRA), whether at UC or elsewhere. As a general rule, then, if all you do is use pulse sequences to acquire data, whether it's with EPI, ASL or whatever, and all you do is neuroscience, you have nothing further to worry about. Your revolutionary test for Alzheimer's disease that utilizes a clever fMRI scan is all yours (and UC's) to patent. (It would be considered a derivative work.) But if you are working on pulse sequence development, you should be doing so having read over the terms of the MRA and possibly having submitted an addendum to Siemens (through the UC office of industrial relations). If you intend to do work that you think might be covered by the MRA, contact Ben for more information.

APPENDIX 1: CHECKLISTS

In aviation, different checklists are used for each distinct phase of flight: pre-flight inspection, pre-takeoff checks, post-takeoff checks, climb checks, cruise checks, pre-landing checks, etc. Using a similar logical separation of the phases of an fMRI exam, I developed the generic checklists below for you to modify into your own systems. They are starting points only. In particular, the emergency checklists in no way replace what you learned during your safety training! I simply extracted some of the most critical action items and made reminder lists, nothing more. You should have your own emergency procedures (based on the safety training) and be prepared to use them.

Using checklists: There are essentially three ways to use these checklists. The fastest, usually, is to try to remember to do everything correctly and then, once you think you're ready to proceed to the next phase of the experiment, pull out the appropriate checklist and double-check that you have, indeed, remembered to do everything appropriately. Do or correct anything that isn't checked off properly. The slowest, usually, is to pull out each checklist in turn and do each item in the order it appears on the list. Often this is the best way to learn new procedures and ensure that you don't mess anything up. You should subsequently find that you begin to use the first method – do, then check – more frequently as you gain experience. That said, there is nothing inherently wrong with using the 'read it, do it' approach forever, with the possible caveat given below for non-written checklists, which may be more appropriate during certain phases of the exam.

The third way is a hybrid of the first two approaches, but it requires a second experimenter: your co-pilot. This is the 'challenge, response' approach and it's the one that airline pilots use. You, the pilot, do as many items as you can either remember to do, or have had time to 'read and do', during the current phase of the exam, until it comes time to ensure that everything is correct and move on to the next phase of the exam, e.g. when you think you have the subject set up and you are ready to retreat to the operator room to commence scanning. At that point your co-pilot challenges you on each item on his written list, and you must respond with an appropriate answer, or the item must be acted upon and/or corrected. "Check!" may be an appropriate response, but you are almost always better off responding with a status, not a simple acknowledgment.
For instance, the checklist challenge could be "Laser alignment of head," to which you could respond with "Check!" A more nuanced response would be "Centered!" In both instances the challenge is acknowledged appropriately, but the additional information in the second response allows both the experimenter and the challenger to do a "sanity check" on the status reported. It helps reduce potential ambiguity. This is especially useful – I would argue essential – the moment the choice is greater than binary.

Non-written checklists: It's not essential to use written checklists for every phase of an exam. It may be sufficient to generate a mnemonic and use a verbal/mental checklist. This can work well if there are just a few items on the checklist and when fishing around for a written checklist might be inconvenient (or embarrassing). For instance, you might use a mnemonic checklist for the subject setup on the patient bed, when pulling out the manual might not engender the most confidence in your already nervous subject!

NORMAL OPERATION CHECKLISTS:

Experimenter Prep:
a. Bathroom break?
b. Coffee, water, snacks.
c. Experiment forms.
d. Lab book.
e. External hard disk for data removal.
f. De-magnetize - empty pockets, remove watch and magnetic items.

Lab Prep (before subject arrives):
a. Turn scanner on if needed. (Allow 15 min warm-up from cold.)
b. Check logbook for problems.
c. Inspect lab for trash, untidiness, and presence of foreign objects.
d. Check for unwanted connections on the filter panel (look for red labels).
e. Once scanner is on, check acquisition is enabled. Check any errors.
f. Assure adequate hard disk space. Delete old data if needed.
g. Check RF coil sockets on the bed for debris; check plugs on RF coil for bent or missing pins.
h. Connect head RF coil.
i. Test response boxes.
j. Check projector screen location, security.
k. Turn projector on.
l. Sanity check: does the lab "look right?"
m. Register patient.

Subject Prep:
a. Screen, consent form.
b. Bathroom break. (Female subjects: pregnancy test.)
c. Second screen to check for metal, watch, wallet, etc.
d. Corrective lenses if required.
e. Earplugs.

Subject Setup:
a. Headphones.
b. Fiducial reference (vitamin E capsule).
c. Place subject in RF coil, use padding to secure head comfortably.
d. Squeeze-ball.
e. Laser alignment (subject's eyes closed).
f. Check RF coil connection.
g. Place and check mirror alignment.
h. Knee support.
i. Blanket.
j. Button boxes.
k. Insert subject into magnet.
l. Arm rest cushions.
m. Check screen view. Adjust mirror & projector focus if needed.

Start of Scan:
a. Close magnet room door. Check seal.
b. Magnet room lights off (ideally).
c. Magnet room window blackout screen up/down (optional).
d. Check communication with subject.
e. Localizer scan.
f. AutoAlign scan (optional).

Experimental Protocol:
a. MP-RAGE anatomical scan.
b. Set up stimulus script.
c. fMRI protocol.

End of Scan:
a. Transfer data to Mac (Osirix).
b. Close patient in the Exam task card.
c. Turn projector off.
d. Return all materials to proper place.
e. See printed checklist on magnet room wall for all other post-scan items.
f. Complete the logbook!!!!!
g. Burn data to external hard drive or DVD.

EMERGENCY CHECKLISTS:

Unexpected image feature:
a. Don't alarm the subject!
b. Re-acquire the scan. Changed?
c. Run any diagnostics you are trained to run, e.g. acquire a different type of scan (such as MP-RAGE).
d. Abandon the exam if the problem cannot be resolved. (Don't alarm the subject!)
e. Notify Ben/Rick/Miguel by email.
f. Notify your PI to report the incident to CPHS.
Panicked subject:
a. Call for assistance if you want it.
b. If threatened or assaulted, call UCPD (using lab phone ideally).
c. Notify Ben/Rick/Miguel by email, text or phone.
d. Notify your PI to report the incident to CPHS.

Magnetic object accident:
a. Life threatening? Quench the magnet!
b. Serious injury or person trapped? Quench the magnet!
c. If magnet quench is activated:
   i. Recover subject from magnet.
   ii. Evacuate the building.
   iii. Seek medical assistance – call UCPD or 911.
d. If magnet quench is not activated:
   i. Consider not using the patient bed controls!
   ii. Don't risk moving the magnetic item!
   iii. Seek assistance from Ben/Rick/Miguel.
   iv. If possible, recover subject from magnet leaving magnetic item in place.
e. Notify Ben/Rick/Miguel by email, text or phone.

Fire:
a. Pull the fire alarm!
b. Retrieve subject from magnet.
c. Evacuate the building.
d. Only if it is a small fire and it is safe to attempt, consider using an extinguisher.
e. Notify Ben/Rick/Miguel by text or phone.
f. Remain near the building for UCPD/Berkeley Fire Department.

Earthquake:
a. Take cover until shaking stops!
b. Open the magnet room door, prop.
c. Open the outer door, prop.
d. Retrieve subject from magnet.
e. Evacuate the building.
f. If time/safety permits, leave a note describing the magnet status. Add your name and phone number to the note.
g. Close outer door if you depart.
h. Notify Ben/Rick/Miguel via email, text or phone of your evacuation.