More fMRI experiments are ruined by subject motion than by any other single cause. At least, that is my anecdotal conclusion from a dozen years of performing post-acquisition autopsies on "bad" data. The reasons for this vulnerability are manifold, starting with the type of subjects you're trying to scan. You may be interested in people for whom remaining still is difficult or impossible without sedation of some kind.
However, I think there is another reason why many (most?) fMRIers end up with more subject motion than they need to: they haven't taken the time to think through the different ways that subjects can thwart your best efforts. In other words, what we are considering is largely experimental technique, or what medical types would call bedside manner.
With the possible (and debatable) exception of bite bars, which aren't popular for myriad reasons, there is no panacea for motion. Why? As we shall see, it's not just movement of the head that's a concern. You need to consider a subject's comfort, arousal level, propensity to want to breathe, and many other things that might be peripheral to your task but are very much on the mind of your (often fMRI-naive) subjects.
Now, before we get any farther I need to outline what this post will cover, and what it won't. The focus of this post is on single-shot, unaccelerated gradient echo EPI - the sort of plain vanilla sequence that the majority of sites use for fMRI. I won't be covering the effects of motion on parallel imaging such as GRAPPA, for example. I will also restrict discussion here to the effects of motion on axial slices. Hopefully you can extrapolate to different slice prescriptions. But, rest assured that this isn't the last word in motion, not by a long chalk. Motion has come up before on this blog, e.g. in relation to GRAPPA for EPI, and the ubiquity of the problem implies that the issue will arise in many subsequent posts, too. Take today's post as an introduction to the general problem.
My final caveat on the utility of today's post. As this blog is focused on practical matters I will restrict the bulk of the discussion to things that you'll see and can control online, in real time. There are many tools that can be used to provide useful diagnostics post hoc, some of which I will mention. But this isn't a post aimed at showing you what went wrong. Rather, the intent of this post is to describe what is going wrong, such that you might be able to intercede and fix the situation. Some sites have useful real-time diagnostics that can tell you when (and perhaps how) a subject is moving, but they aren't widespread. Thus, for today's post we shall keep things simple and restrict the discussion to what can be seen in the EPIs themselves, as they are acquired.
WARNING: If you haven't run an fMRI experiment in a while then you might want to stop reading this post here and go and review the earlier post, Understanding fMRI artifacts: "Good" axial data. That post highlights our target: the low motion case.
Let's start simply. Here is a video of a subject intentionally moving his eyes to a target. Saccading is the technical term, I hear. (See Note 1 for experimental details. Parameters were fixed throughout for this post, unless mentioned to the contrary in any section below.) There are twenty volumes played back at a rate of 5 frames/sec:
Movement of the eyeballs and optic nerves is quite obvious. But is there any effect on brain signals proper? To assess that it's easier to switch to looking at the standard deviation image for the time series:
We can now clearly see the effects of cardiac pulsation - blood vessels stand out, and CSF has higher variance than brain tissue, in agreement with previous good data. We can now also see that the muscles surrounding the eyes have high variance, albeit not quite as high as the eyes themselves. (See this post for further information on standard deviation and TSNR images.)
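Incidentally, the standard deviation and TSNR images discussed here are cheap to compute for yourself offline. Here is a minimal sketch in Python with NumPy; the synthetic 4D series (64x64 matrix, 33 slices, 20 volumes, matching the acquisitions in this post) stands in for real data, which you would normally load with a tool such as nibabel:

```python
import numpy as np

# Synthetic EPI time series, indexed (x, y, slice, volume).
# Mean ~1000 with ~1% temporal noise, purely for illustration.
rng = np.random.default_rng(0)
epi = 1000 + 10 * rng.standard_normal((64, 64, 33, 20))

# Temporal standard deviation: one value per voxel, computed
# across the volume (time) axis.
sd_map = epi.std(axis=-1, ddof=1)

# Temporal SNR = temporal mean / temporal standard deviation.
mean_map = epi.mean(axis=-1)
tsnr_map = mean_map / sd_map
```

On real data you would mask out the background first; TSNR is meaningless in noise-only voxels, where the denominator is all there is.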
I find it quite interesting that there is minimal effect of the eye movements on the variance of brain signals in those slices containing the eyes. Phase encoding is anterior-posterior, so we might expect regions of the slices containing eyes to have higher overall variance. Specifically, for eye-containing slices, we might expect twin parallel columns in the A-P direction to have higher variance where the N/2 ghosts (from eyes) will occur. What does it mean that brain regions parallel to eye signals have similar variance to the rest of the brain? Not very much, since this is a case study. But I know that there are some fMRIers who advocate trying to position slices such that little or no eye signal is encompassed within slices, to avoid the effects of eye movement on variance of parallel brain regions altogether. (There are obviously limits to how far one can go with this tactic.) Perhaps, then, it is movement of the entire head - with those twin globes of delightfully high SNR - that leads to variance issues for eye-containing slices, rather than eye movements per se. Let's take a look.
Placing foam padding down the sides of your subject's head will do a pretty good job of preventing side-to-side head motion. But movement in the chin-to-chest direction is near impossible to prevent without a bite bar or some other form of skull restraint. (Neck muscles are really strong!) So nine times out of ten... okay, ninety nine times out of a hundred, it's this movement axis that is of primary concern.
Here is an example of what happens to EPIs when a subject intentionally moves (rotates) in the chin-to-chest direction. As before, there are twenty volumes played at a rate of 5 frames/sec:
Hmm. What can we say about that? Lots of stuff changes, doesn't it? The anatomical content of each slice clearly changes with the movement, as you would expect for a phenomenon that is essentially through-plane. And if you look carefully you might be able to see the degradation of the shim in high magnetic susceptibility regions - the frontal and temporal lobes, where distortion is highest. Can the standard deviation image reveal anything more subtle?
The standard deviation image for that time series confirms that the eyes, by virtue of being big signal generators, do indeed generate high variance when the whole head is moved:
So even if the eyes are kept still relative to the subject's head, movement of the entire head leads to large instability of the eye signals. Not a massive surprise.
We can also see that the cerebellum and frontal lobes generate high variance, presumably due to degradation of the shim but perhaps also due to the magnitude of displacement of these regions. (An aside: relative to these slices it's difficult to estimate the axis of rotation but it's probably quite close to the center of the brain.) Overall, though, I don't think this image provides any breakthrough insights. Rather, it simply confirms that if the head is rotated then bad stuff happens to the stats, and there are degrees of bad depending on where in the brain one considers. Not the most precise diagnosis.
There is, however, one additional factor to consider here: slice ordering. In the data shown above, slices were acquired in descending order; that is, contiguously in the head-to-foot direction. But what if slices had been acquired interleaved - odds then evens - as is sometimes done to reduce crosstalk? (There's a section on interleaved versus contiguous slicing in my user training guide/FAQ.) Here's a video showing the effects of a similar head nodding motion to the one above but on interleaved slices (all other parameters held constant):
In the case of interleaved slices, then, head movements cause a perturbation of the T1 steady state. For a TR of 2 sec and typical brain tissue and CSF T1s at 3 T in the range 1-3 seconds, this leads to a recovery period of something like two to three TRs following each head movement. There are T1 effects in contiguous slices, too, but the magnitude of the perturbation tends to be smaller. (See Note 2.)
When the head moves in the slice direction with interleaved slices, some slices are displaced into brain regions that were excited only moments earlier, during the current TR period. These slices get darker because the spins have had very little time to recover. Conversely, regions of the brain that get skipped during the current TR will appear brighter than normal in the subsequent TR, because those spins will have had more than one TR to recover. We then need to wait for the dynamic equilibrium to re-establish itself (after the movement has stopped) before the signal intensities return to normal. The net result is banding in the slice direction during and after head movement, most easily seen if the 3D volume is reconstructed and viewed. (There's a little more information and an example available from the CBU wiki.) Banding from T1 effects is also visible in the TSNR image of the 20-volume time series just shown above:
Before considering the next type of motion I'll make a quick statement about interleaved versus contiguous slices. For identical head movement in the slice direction, interleaved slices will experience prolonged image artifacts compared to contiguous slices. Thus, all other things being equal, and assuming negligible slice-to-slice crosstalk, it is generally the case that contiguous slices are preferred for fMRI.
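To put rough numbers on that recovery, here is a back-of-the-envelope simulation of the T1 steady state. It is a sketch of the argument, not a model of the scanner: it assumes perfect spoiling, a single representative T1 of 1.3 s (roughly grey matter at 3 T), and the sequence parameters from Note 1 (TR = 2 s, 78-degree flip).

```python
import numpy as np

TR, T1, flip = 2.0, 1.3, np.deg2rad(78)  # assumed values, see lead-in

def next_mz(mz, t_recover):
    """Longitudinal magnetization one pulse later: tip by `flip`,
    then relax toward M0 = 1 for `t_recover` seconds."""
    mz_after_pulse = mz * np.cos(flip)
    return 1.0 + (mz_after_pulse - 1.0) * np.exp(-t_recover / T1)

# Drive the magnetization to steady state with normal TR spacing.
mz = 1.0
for _ in range(20):
    mz = next_mz(mz, TR)
steady_signal = mz * np.sin(flip)

# Perturbation: the slice is hit by a second excitation almost
# immediately, as if head motion displaced it into a region that
# was excited just 0.1 s earlier.
mz = next_mz(mz, 0.1)

# Watch the signal recover over the following TRs.
signals = []
for _ in range(4):
    signals.append(mz * np.sin(flip))
    mz = next_mz(mz, TR)
```

For these particular numbers the first post-movement signal is dark by more than a factor of three, and the signal is back within a percent or so of steady state about two TRs later - consistent with the two-to-three TR recovery quoted above.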
Unless your task requires verbal responses there shouldn't be any reason for your subjects to talk during a scan. Even if they have a penchant for talking to themselves outside of the scanner, trying to do it inside is likely to be a frustrating experience, given the scanner noise. Still, one never knows... I've actually had subjects sing to themselves to alleviate boredom. Really.
Here is what talking at a normal conversational level, with no attempt by the subject to compensate for head movement, does to an EPI time series (of contiguous slices):
The movement is similar to, though smaller than, that observed with intentional head nodding. This isn't at all surprising given that talking involves movements of the jaw that are primarily in the same direction as nodding, and that head movements tend to accompany speech for emphasis. But, even if the skull were stationary during speech, movements of the jaw itself may be sufficient to change the shim, leading to modulation of the frontal lobes and inferior portions of the brain in particular. Chest movements to support voice production add a further level of complexity. The bottom line: you don't want it. (Using voice responses for your task? See Note 3.)
Talking of talking, if you interact with your subject a lot between EPI blocks there is a chance your subject's head may end up in a new position between runs. Subjects with a penchant for nodding in the affirmative - try not to do it, it's really hard! - are especially likely to end up displaced. Thus, after extensive conversations, and/or after an amount of time during which the scanner is likely to have cooled from its working steady state, and/or if you suspect that the foam supporting the subject's head has compressed from its starting shape, I would suggest re-shimming before starting the next block, to ensure the ghost level is low and the effects of distortion are minimized. It only takes a few tens of seconds to re-shim, and in the case that a subject has moved several millimeters it could save your data. (Siemens users, see Note 3 in this post for the procedure to force a new shim.)
Coughing, swallowing, yawning and sneezing
These four actions are pretty much unavoidable in subjects lying supine in a relatively cool, dark, often dry and soporific environment. What's more, they often aren't exactly subtle, offering the potential for abrupt, large head movements.
In this next video the subject coughed once then swallowed immediately thereafter, resulting in two obvious head displacements:
The movements are again similar in nature to the intentional head nod, except that the magnitude of displacements is larger, if anything. No surprises here.
Coughing and sneezing can result in especially hefty movements of the head. Swallowing and yawning tend to cause smaller, slower head movements, but the caveat is frequency: most people swallow once every minute or two, and yawning seems to be a by-product of being in an MRI scanner, whereas someone might go hours between coughs and days between sneezes. Swallowing and yawning, then, are the actions that you might want to pay the most attention to.
There's really not much you can do except to ask your subjects to own up to coughing or sneezing during a run. You should consider re-shimming if your subject experiences a sneeze or a violent cough. I always suggest to users that if they know or suspect that a subject may have moved his head since the last shim was performed, re-shim. Acquire a new localizer scan and check your slice prescription if you suspect the movement is substantial. If you're studying heavy smokers or other subjects prone to extensive coughing, I have nothing much further to offer you except the very best of luck!
To combat yawning and fatigue in general, ask your subject to take a couple of easy, deep breaths in between EPI blocks. Try to flush as much CO2 as you can from your subject's blood. As a side benefit you may help to keep your subjects more alert, yielding better overall task performance (and higher activation!) in addition to reducing motion. You might also consider turning on the scanner's bore fan periodically, to reduce CO2 build-up. I wouldn't run the fan continuously, however, because it could cause throat irritation leading to more coughing or swallowing, and dust could lead to sneezing. Dry eyes are also likely to produce fidgeting as a subject blinks to generate tears. So, use the bore fan in occasional, short bursts.
As for swallowing, a useful tactic can be to ask your subject to swallow just prior to the start of each block of EPI and to be mindful to wait until the noise stops before swallowing again. However, don't excessively focus your subject's attention on it or you'll find your subject can do nothing but swallow!
And finally, movement of body parts other than the head. There are twin concerns with body and limb movements. The first issue is the direct mechanical effect. It's exceedingly difficult to move an arm or a leg, or even a foot, without that motion being conducted via the spine into the head. Try it. Lie on the floor or a bed and move your ankle in any direction you like. The load on your heel causes your entire body - and your head - to move. It really is that simple.
Here is what ankle motion looks like in an EPI time series of the head:
If you have tried the ankle movement test then you're unsurprised to see movement that looks depressingly similar to the intentional head nods shown previously. The driving muscles may be different but the net result for the brain is the same.
In this test I moved both of my ankles very slightly, as if I was adjusting for comfort. I made the movements slowly and deliberately, trying as hard as I could to keep my head stationary. Yet I could still feel my bum and back moving on the patient bed, and of course I could detect that my head was moving in spite of my best efforts to counteract it.
The secondary effect of limb/body movements in the scanner is a change in magnetic field due to susceptibility. I've covered this topic briefly before, in relation to GRAPPA. I don't think this effect is nearly as problematic as the direct head movement, especially for single-shot (unaccelerated) EPI, but it's something to keep in mind.
So, what to do? If your subjects are using button response boxes, put the boxes in a position where only simple finger movements are required. If you're still worried, run some pilot tests on a volunteer and ensure that your setup doesn't provoke arm or shoulder movements. (Best case, run the tests with yourself as the subject. As recommended by Neuroskeptic on Neuroconscience's blog, testing on yourself is the only true way to understand your experimental setup.) And another simple mistake to avoid: don't just tell your subject not to move his head during the scan! Rather, inform your subjects that all forms of motion (except any required by the experiment, such as responding with button pushes) should be avoided any time the scanner is making a noise. If a subject needs to adjust for comfort, ask her to do so in between EPI blocks. Here are two simple rules to give to your subjects:
- Scanner noise = No moving whatsoever!
- No scanner noise = Movement with permission from the experimenter
Other handy tactics and things to pay attention to during an experiment
Whenever I'm watching EPIs as they are acquired, using the inline display window on my Siemens scanner, I do a few things. Firstly, I "scan" through the entire set of slices, with the window contrasted to show the brain, looking for any major changes. Secondly, I pay close attention to the superior slices (assuming the slices are axial); those that contain just small, nearly circular "patches" of brain. Any movement in the chin-to-chest direction will cause these circular patches to get larger or smaller by a goodly amount. It's quite easy to see; often far easier than looking at ghosts or other signal regions. It's almost as if the brain signal in that end slice is breathing - moving in and out. And finally, I periodically re-contrast the window to show the noise and look for any telltale changes in ghosts. As a general rule, any time your subject moves you will see an increase in the N/2 ghosts. If the ghost increase is short-lived then the movement was short-lived.
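If you want a quantitative handle on the ghost level offline, rather than eyeballing it, one simple check is to compare a background region against the same region shifted by half the matrix along the phase-encode axis - which is where the ghost of the object lands. Here is a hedged sketch on a synthetic slice; the 5% ghost amplitude, the mask logic and the constant noise floor are all illustrative assumptions, and real data would need a proper intensity threshold to build the object mask:

```python
import numpy as np

# Synthetic 64x64 slice: a bright "brain" disc in the middle,
# plus a 5% N/2 ghost displaced along the phase-encode axis (axis 0).
n = 64
y, x = np.mgrid[:n, :n]
obj = 1000.0 * ((x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2)
img = obj + 0.05 * np.roll(obj, n // 2, axis=0)  # the N/2 ghost
img += 2.0                                       # crude noise floor

# Masks: the object, the region where its ghost lands, and pure noise.
object_mask = obj > 0
ghost_mask = np.roll(object_mask, n // 2, axis=0) & ~object_mask
noise_mask = ~object_mask & ~ghost_mask

# Ghost-to-signal ratio, with the noise floor subtracted off.
ghost_to_signal = (img[ghost_mask].mean() - img[noise_mask].mean()) \
                  / img[object_mask].mean()
```

Tracking this ratio volume by volume through a run gives you a motion-sensitive trace for free: a transient bump in the ghost ratio is a transient movement.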
But is it possible to tell from looking at your EPIs precisely how your subject is moving? As the results above suggest: not really. It's very difficult to differentiate between forms of chin-to-chest rotation, whether it arises from swallowing, fidgety feet or straining to see a target image. Instead, you should use knowledge of your task to try to refine your diagnosis. Thus, if you see movement more frequently than once every ten to twenty seconds, it's unlikely to be the subject swallowing (unless the task involves sipping coffee in the magnet). Take a peek through the magnet room window to see if you can see the soles of your subject's feet moving. (Most people move their feet whenever they move their bum, stretch their back, etc.) You might also simply ask the subject to own up to a particular movement. "Hi! So, I don't suppose you were talking to yourself during that last block, were you?" Be kind, remember that being in an fMRI is far more boring than watching paint dry, and remind your subject not to move at all during the scan. Don't just tell your subject not to move her head! As you have seen, hands, arms, bums, backs, feet,... all tend to couple through the skeleton into the old noggin. Bad, bad, bad.
1. Siemens Trio/TIM scanner with 12-channel receive-only RF head coil, 33 axial slices, 3 mm slice thickness, 0.3 mm gap, descending slice order, single-shot gradient echo EPI with 64x64 matrix, FOV=224x224 mm, echo spacing = 0.51 ms, anterior-posterior phase encoding, TR=2000 ms, TE=28 ms, 78 degree RF flip angle. The subject was a male in "early middle age," neurologically normal (to the best of his knowledge), and highly experienced with the fMRI environment. His head was supported with a Siemens-supplied foam pad. Head restraint was achieved courtesy of as many curved foam pieces as could be fit between the intercom headphones and the sides of the RF coil; some additional curved foam pieces were placed between the crown of the subject's head and the rear interior surface of the RF coil.
2. A further complication for both interleaved and contiguous slices, i.e. for all multislice 2D scans, is that the timing of any head movement relative to the slice number in the TR period determines the actual appearance of the set of slices for that TR. It's obvious, but given the common treatment of each block of 2D slices as a single block of 3D data in post-processing, it's worth stating explicitly. Thus, if your subject just happens to sneeze at the very end of a TR period then you might get lucky and have the pre-sneeze set of slices at one position of the head and the post-sneeze set of slices in a new head position. Motion correction algorithms would have a relatively easy time reconciling the post-sneeze position to the pre-sneeze position. Not so, however, if the sneeze happens in the middle of the TR. Now, some of the slices for this TR are in one head position, the rest in a new position. What does the motion correction algorithm do now? And what about slice timing correction? Which slabs of 3D space were sampled when? There is clearly blurring in the slice direction and it isn't at all obvious which position is "correct." For this reason, some people simply discard volumes of data when an acute movement occurs. But this is another one of those truly massive topics that deserves at least one post of its own, so I shall stop here for now.
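As a concrete illustration of the discard-the-volume approach, here is a hedged sketch of a DVARS-like check: flag any volume whose RMS frame-to-frame signal change spikes well above baseline. The synthetic data, the injected "sneeze" and the 3x-median threshold are illustrative assumptions, not recommendations:

```python
import numpy as np

# Synthetic 4D series (x, y, slice, volume) with a gross intensity
# shift injected into volume 10 to mimic an acute movement.
rng = np.random.default_rng(1)
ts = 1000 + 5 * rng.standard_normal((64, 64, 33, 20))
ts[..., 10] += 50

# Voxelwise differences between successive volumes, then the RMS
# change per volume pair (a DVARS-like trace).
diffs = np.diff(ts.reshape(-1, 20), axis=1)
dvars = np.sqrt((diffs ** 2).mean(axis=0))

# Flag volumes whose change exceeds 3x the median (arbitrary cut-off).
# Note both the corrupted volume and its successor get flagged, since
# the signal jumps up and then back down again.
bad = np.where(dvars > 3 * np.median(dvars))[0] + 1
```

Tools such as AFNI, FSL and fMRIPrep provide polished versions of this kind of diagnostic; the point of the sketch is just that the underlying arithmetic is trivial.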
3. There are microphones, e.g. optical microphones, that can be used safely and effectively in the MRI environment. If you are using such a device then I would recommend training your subjects on a mock scanner or, at a minimum, have them practice speech production with minimal head and jaw movements, before you scan them. At some point I'll do a whole post on speech and fMRI.