Introducing the series
The workhorse sequence for fMRI in most labs is single-shot gradient echo echo planar imaging (EPI). As we saw in the final post of the last series, EPI is selected for fMRI because of its imaging speed (and BOLD contrast), not for its ability to produce accurate, detailed facsimiles of brain anatomy. Our need for speed means we are forced to live with several inherent artifacts associated with the sequence.
However, when we're doing fMRI we have to contend with more than the "characteristic three" EPI artifacts of ghosting, distortion and dropout; we are also more concerned with changes over time than with the artifact level of any individual image. So, in this series we need to assess the sources of change between images, even if the individual images themselves appear perfectly acceptable (albeit subject to the "characteristic three").
What's the data supposed to look like?
It would be rather difficult for you to determine when something has gone wrong during your fMRI experiment if you didn't have a solid appreciation of what the images ought to look like when things are going well. Accordingly, I'll begin this series with a review of what EPIs are supposed to look like in a time series. We'll look at typical levels of the undesirable features and assess those parts of an image that vary due to normal physiology. This is what we should expect to see, having taken all reasonable precautions with the subject setup and assuming that the entire suite of hardware (scanner and peripherals) is behaving properly.
Good axial data will be the focus of the first post in the series. (Axial oblique images will exhibit qualitatively similar features to the axial slices I'll show.) In the second post I'll show examples of good sagittal and coronal data. Artifacts may appear quite differently and with dissimilar severity merely by changing the slice prescription, so it's important to keep in mind the anisotropic nature of many EPI defects. Motion sensitivity is also different, of course. Motion that was through-plane for an axial prescription is in-plane for sagittal images, for example.
Ooh, that's bad. Is it...?
With a review of good data under our belts it will be time to look at the appearance of EPI when things go tango uniform. I will group artifacts according to their temporal behavior - either persistent or intermittent - and their origins - either from hardware, from the subject, or from operator error. You should then be able to understand and differentiate the various artifacts and be able to properly diagnose (and fix) them when it counts the most: during the data acquisition. Waiting until the subject has left the building before finding a scanner glitch is a bit like doing a blood test on a corpse. Sure, you might be able to determine that it was the swine flu that finished him off, but either way he's dead. Our aim will be to do our “blood tests” while there is still a chance of administering medicine and perhaps achieving a recovery.
One of the hardest tasks facing someone new to fMRI is recognizing when data is ostensibly ‘good.’ On the face of it such a problem might seem strange, even if we recognize from the outset that ‘good’ and ‘bad’ are subjective assessments. So let’s take a closer look at these definitions.
For the most part what you, as an experimenter, mean when you say that data is good is that the fMRI experiment showed sensitivity to the stimulus-induced hemodynamic changes you expected, i.e. that a particular threshold of statistical significance was attained. In short, your experiment worked. However, what an MRI physicist usually means by ‘good’ data is often subtly and crucially different.
To a physicist, ‘good’ data is obtained whenever the EPI time series yields images that have an appearance and quality that can be reasonably expected on that particular scanner and when using that particular set of parameters. Note that this assessment says nothing about the appropriateness of the time series for your experiment. You might have chosen a bum set of parameters for your intended application. For example, you might have been expecting activity in the inferior prefrontal cortex, but you went and used a TE of 40 ms to ensure lots of functional contrast in the occipital lobe. Oops. Now you have a hole in the image where you wanted signal. Bad data, or bad experiment...?
So, to a physicist, ostensibly ‘good’ data is achieved any time the scanner didn’t introduce any (strong) artifacts such as global signal intensity drifts, the peripheral equipment didn’t introduce any (large) RF noise, the parameter selection avoided any settings not commensurate with maximum scanner performance (such as might be needed to avoid strong mechanical vibrations), and subject motion was in some as yet to be defined ‘normal range.’ In other words, we are interested in the quality and stability of the images, not whether the time series was able to answer your question. See the difference?
There are many, many reasons why your experiment might not have worked as expected even though the images had low artifacts and were temporally stable. Perhaps your voxels were too large for the particular brain regions you are interested in. How is that the fault of the scanner? It's entirely possible to acquire good quality data with a sub-optimal protocol and then blame the data for failing to produce the magical colored blobs. But that is a planning problem, not a data quality problem.
Winning the data lottery
Conversely, you may have used the ideal protocol but been sloppy with your subject's head restraint and obtained a positive result in spite of unnecessarily high motion. (Congratulations! Go buy a lottery ticket!) To a physicist, however, the fact that you managed to dodge lots of bullets doesn’t imply that your data was ‘good.’ When we are discussing the quality of a particular time series acquisition we are saying absolutely nothing about the experiment per se; we are simply going to determine whether that particular time series could, under normal circumstances, have been acquired better.
Post hoc analysis
Now, you might ask why we are even bothering to look at EPIs directly. Why not just run a script on the time series and have it spit out some sort of quantitative assessment? (See Note 1.) That is a great idea! But it is generally only possible after the time series has finished, and often requires the data set to have been ported off the scanner. What we would prefer to do is use our expertise and intuition to spot the moment something isn’t right during an EPI run - during the experiment - then quickly determine whether the problem has righted itself or will persist and necessitate some sort of remedial action on the part of the experimenter. And the fastest way to do that is to observe the data as it is acquired, in real time. Of course, if you are fortunate enough to have a scanner or software that can analyze data in real time, that is an added bonus to using your skills as an observer. Most labs don’t have such facilities yet, however.
Your analysis scripts will be extremely useful for quantifying the data quality after the experiment, perhaps just seconds after the end of one time series acquisition and before the next one. Therefore, we will take a quick look at a couple of simple statistical images that can be produced with most scanners’ software, in the midst of your scan session. Unless you are fortunate enough to have a scanner that produces such statistical images in real time you won’t be able to produce these maps continuously through the session, but you might be able to assess each task block, say, and ensure smooth progress, repeating suspect or definitively bad runs. Remember, the more ways you can assess your data during the scan session, the more likely you are to detect (and be able to remedy) a problem!
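If you do want to roll your own quick check between runs, here is a minimal sketch of the kind of thing I mean. It is an illustration only, assuming the run has been exported as a 4D NIfTI file; the filename, the 10% mask threshold and the drift summary are arbitrary placeholders, not a recommended recipe. It uses numpy and nibabel to compute temporal mean, standard deviation and tSNR maps, plus the global mean signal per volume - a quick way to spot drifts or sudden spikes.

```python
# A minimal sketch, not a production QC tool. Assumes the run has been exported
# as a 4D NIfTI file; the filename and the 10% mask threshold are placeholders.
import nibabel as nib
import numpy as np

img = nib.load("epi_timeseries.nii.gz")   # hypothetical exported EPI run
data = img.get_fdata()                    # 4D array: (x, y, z, time)

mean_map = data.mean(axis=-1)             # temporal mean image
std_map = data.std(axis=-1)               # temporal standard deviation image
tsnr_map = np.divide(mean_map, std_map, out=np.zeros_like(mean_map),
                     where=std_map > 0)   # temporal SNR, avoiding divide-by-zero

# Crude brain mask: voxels brighter than 10% of the robust maximum of the mean image.
mask = mean_map > 0.1 * np.percentile(mean_map, 99)

# Global mean signal per volume - a quick way to spot drifts or sudden spikes.
global_signal = data[mask].mean(axis=0)
drift_pct = 100.0 * (global_signal[-1] - global_signal[0]) / global_signal[0]

print(f"Median tSNR within mask: {np.median(tsnr_map[mask]):.1f}")
print(f"Global signal change, first to last volume: {drift_pct:+.2f}%")

# Save the maps for slice-by-slice inspection in your favorite viewer.
nib.save(nib.Nifti1Image(tsnr_map, img.affine), "tsnr_map.nii.gz")
nib.save(nib.Nifti1Image(std_map, img.affine), "std_map.nii.gz")
```

The point isn't the specific numbers; it's having some objective summary of stability that you can compare run against run, and against what your scanner normally delivers.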
Contrasting your images for artifact recognition
We're very nearly ready to get down to business. In the posts to come I will routinely review the same images with different contrast levels. Let this segment serve as a general explanation of what I'm doing and why.
When you're hunting for artifacts it is a good habit to assess your EPIs with at least two different contrast settings. You want to assess the images with contrast as they might be used in a publication - what I'll term anatomically contrasted images - so that you can determine whether the appearance of the brain is acceptable. But you then want to review the images having brought up the background "noise" region to a visible intensity, to reveal the N/2 ghosts and any problems that may be lurking in the crud. Many artifacts in EPI are furtive and are often best detected at the level of the image background, where there should be only noise plus the ubiquitous N/2 ghosts. Indeed, the ghosts themselves can be used as sensors for problems, as you'll see when we encounter several of the artifacts later on in this series.
As a very rough rule of thumb, then, large errors, such as sizable subject motion or intense gradient spiking, will be highly visible in anatomically contrasted EPIs; that's the first place to look. Small subject motion and subtle scanner hardware issues, such as mechanical resonances, may only be visible in background-contrasted images. You could (perhaps should) even make this algorithmic (there's a simple code sketch after the steps below):
Step 1 - contrast the anatomy and check that the brain appears as you expect it to look. Large holes? Stripes? Signal moving all over the place? If the answer to all of these is "no," proceed to step 2.
Step 2 - contrast the background noise to reveal the N/2 ghosts. Inspect the ghosts. Is their intensity acceptable? Are they stable over a few TRs? If all is well here, proceed to step 3.
Step 3 - with the image still contrasted to reveal the ghosts, inspect regions in the image that should be ghost-free, i.e. noise. Is it noise, or does anything with structure pop in and out of existence?
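For completeness, here is a minimal sketch of that two-contrast inspection performed offline in software, assuming the 4D array loaded in the earlier sketch; the window cut-offs (99th percentile, 5% of the maximum) are arbitrary starting points, not calibrated values. The same slice is simply displayed twice: once windowed for the anatomy (step 1), once windowed far below the brain signal so that the N/2 ghosts and the noise floor become visible (steps 2 and 3).

```python
# A minimal sketch of the two-contrast inspection, assuming `data` is the 4D
# array loaded in the earlier sketch. The window cut-offs are arbitrary
# starting points; adjust them to taste for your scanner and coil.
import numpy as np
import matplotlib.pyplot as plt

def show_two_contrasts(slice_img):
    """Display one 2D EPI slice twice: anatomical window and background window."""
    fig, (ax_anat, ax_bg) = plt.subplots(1, 2, figsize=(8, 4))

    # Step 1: anatomical contrast - window up to (nearly) the brightest brain signal.
    ax_anat.imshow(slice_img.T, cmap="gray", origin="lower",
                   vmin=0, vmax=np.percentile(slice_img, 99))
    ax_anat.set_title("Anatomical contrast")

    # Steps 2 and 3: background contrast - clip the window far below the brain
    # signal so the N/2 ghosts and the noise floor become visible.
    ax_bg.imshow(slice_img.T, cmap="gray", origin="lower",
                 vmin=0, vmax=0.05 * slice_img.max())
    ax_bg.set_title("Background / ghost contrast")

    for ax in (ax_anat, ax_bg):
        ax.axis("off")
    plt.tight_layout()
    plt.show()

# Example: inspect a middle slice of the first volume.
# show_two_contrasts(data[:, :, data.shape[2] // 2, 0])
```

On the scanner itself you would do the equivalent interactively, simply by dragging the window/level (contrast) settings on the image display.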
Okay then, that's how we're going to do it. Let's get to work by assessing the fluctuations that you can expect to see in "good" data. See you at the next post!
_________________________
Notes:
1. I'm sure there are dozens of useful post hoc diagnostic tools out there. One suite that I'm partially familiar with is courtesy of Matthew Brett from his Cambridge days. Another suite that looks useful but that I've not tested myself comes from the Gabrieli Lab. (The parent links to these sites are available in the right sidebar of the blog.)