Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Saturday, January 22, 2011

Comparing fMRI protocols

In a December post I suggested a decision tree that can be used for deciding whether or not to adopt a new (or new to you) method or device for your next fMRI experiment. In essence it was a form of risk analysis. But it isn't only new methods that need to be evaluated carefully before you embark on an experiment. What about the plethora of parameters that characterize even the simplest combination of single-shot EPI with whatever passes for standard hardware on your scanner? RF coil selection, echo spacing, TE, slice thickness, slice gap, TR, RF flip angle... all can have profound effects on your data. In the absence of a compelling paper that strongly implicates a particular protocol for your experiment, how do you make an informed choice before you proceed?

Functional signal and physiologic noise

In an ideal world you would be able to run a pilot experiment that robustly activates all the brain regions you're interested in. This approach can work well if all of your regions of interest lie in primary cortex: responses to stimuli are typically robust, baselines are fairly easily established, and simple stimuli can often be used to assess regional responses. But many contemporary experiments don't lend themselves to extensive piloting; actually doing the entire experiment may be the only way to assess whether regions A, B and C are activated at all, let alone more or less with a particular parameter setting! Instead, we may have to focus our attention on the noise properties of the tissue.

Now, before going any further, there is an important caveat here. In not doing a pilot functional experiment we will be unable to say anything meaningful about functional signal changes; we are about to focus on the background noise that we want our functional signal changes to overcome. Thus, if your comparison involves parameters that change functional sensitivity - TE is the simplest example - then by measuring only the noise properties of the brain we're missing a large part of the story. In other words, what you are about to read is appropriate only for parameter comparisons for which the functional signal change is held approximately constant.

Okay, so how can we emulate a functional comparison, given the caveat just mentioned? Recall that to be statistically significant, the task-correlated activation has to overcome the background (physiologic) noise, i.e. all the signal changes arising from every source except the task. Thus, if we compare the noise properties of our two candidate parameter sets, then in the absence of a systematic difference in task-correlated signal changes we can be confident that the parameter set yielding the lower noise will give the better fMRI performance.

Temporal SNR: a proxy for fMRI performance

A convenient way to make a heuristic comparison is to set up each of your candidate parameter sets on one or two typical subjects, and acquire the number of volumes of EPI you're expecting to use in your fMRI experiment. You now have (at least) two sets of time series data, one time series per parameter set (per subject). Now all you need do is generate the voxelwise mean and voxelwise standard deviation, divide the former by the latter and.... Voila! One temporal SNR image. This simple image captures a lot of essential information about your experiment. For high statistical power, you're interested in the parameter set that produces the lowest fluctuations (noise) that aren't tied directly to your task. In other words, all you've got to do is determine which of the two parameter sets has the highest TSNR in the brain regions you're interested in.
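If you prefer to script the computation rather than use the scanner console (the console route is given in the Appendix below), here is a minimal sketch in Python using nibabel and numpy. The filename is hypothetical; substitute your own exported 4D EPI time series:

import nibabel as nib
import numpy as np

img = nib.load("epi_run1.nii.gz")      # hypothetical 4D NIfTI: x, y, z, time
data = img.get_fdata()

mean_img = data.mean(axis=3)           # voxelwise temporal mean
std_img = data.std(axis=3)             # voxelwise temporal std dev

# Background voxels have near-zero std dev and blow up the ratio,
# producing streaks like those in the image below; a crude intensity
# mask keeps the division inside the head.
mask = mean_img > 0.1 * mean_img.max()
tsnr = np.zeros_like(mean_img)
tsnr[mask] = mean_img[mask] / std_img[mask]

nib.save(nib.Nifti1Image(tsnr, img.affine), "tsnr.nii.gz")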

Here's an example TSNR image produced from 100 volumes of EPI data with TR=2 sec. (The vertical white lines/dots are artifacts of the mean/std dev division and can be ignored.) Note how the gray matter and ventricles have lower TSNR than white matter because of the greater vascularity/pulsatility of the former regions. (Gray matter is more active metabolically than white matter, whereas CSF has greater pulsatility than tissue.)


Visualizing artifacts

The statisticians amongst you will have spotted that in producing a standard deviation image you've also got a view on the data's voxelwise variance (i.e. the square of the std dev). So while you're analyzing the TSNR maps you might as well take a good look at the std dev images, too. (You could, of course, take the voxelwise square of the std dev and look at the variance itself, but in my experience any comparisons can be done perfectly well using the std dev images. Feel free to disagree; try both.)

Here's an example standard deviation image also created from 100 volumes of EPI data with TR=2 sec:



Assuming you don't have a major problem in your data, the first thing you'll notice in the std dev images is that brain edges have higher std dev than central brain regions. This is perfectly normal and is the product of small, typical head movements. (These edge effects are usually dealt with well by rigid body realignment in post-processing.) Furthermore, gray matter has higher std dev than white matter, because it's more active metabolically and so its physiologic "noise" level is higher. (Hence the correspondingly lower TSNR for gray matter.) So far so good. But the std dev images will also reveal severe acquisition imperfections to casual inspection (e.g. the motion-related reconstruction artifacts that can plague GRAPPA). All useful stuff, and from such a simple experiment! I'll revisit the use of TSNR and std dev images for artifact identification at a later date.
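If you'd rather eyeball the images in Python than on the console, a small matplotlib sketch (reusing the std_img and tsnr arrays from the snippet above) puts a central axial slice of each side by side:

import matplotlib.pyplot as plt

mid = std_img.shape[2] // 2            # pick a central axial slice
fig, axes = plt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(std_img[:, :, mid].T, cmap="hot", origin="lower")
axes[0].set_title("Std dev (bright rim = head movement)")
axes[1].imshow(tsnr[:, :, mid].T, cmap="gray", origin="lower")
axes[1].set_title("TSNR")
for ax in axes:
    ax.axis("off")
plt.tight_layout()
plt.show()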

Further reading on TSNR

I'm not as up on the literature as I should be, so apologies for plumping for the first pertinent reference I found on PubMed. This paper by Kevin Murphy et al. provides a nice starting point for someone wanting to set up a new experiment and perhaps do a power analysis: Murphy, Bodurka & Bandettini, "How long to scan? The relationship between fMRI temporal signal to noise ratio and necessary scan duration." NeuroImage 34(2):565-574 (2007).

I'll let you read the paper for yourself. Suffice it to say that the last sentence of the abstract is probably the most important line ever written in an fMRI methods paper:
"TSNR is likely to be critical for determining success or failure of an experiment."

Amen to that.

Peter Bandettini's group has just published another thorough study that uses TSNR to assess flip angle effects: Gonzalez-Castillo et al., "Physiological noise effects on the flip angle selection in BOLD fMRI." NeuroImage 54(4):2764-78 (2011). This paper really emphasizes the need to focus on the temporal variations of signal - what we throw into the catch-all term, physiologic noise - rather than on high (static) signal-to-noise, as is required for good clinical anatomical scanning. As you'll see when you read the paper, having a high (static) SNR doesn't necessarily translate into higher TSNR, a phenomenon that I will come back to in a future post on RF coil selection. But that's for another day.
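In the meantime, here's a toy numerical illustration (my own, not taken from the paper) of why static SNR gains level off. Under the widely used physiologic noise model of Kruger & Glover (MRM 46:631-637, 2001), TSNR = SNR0 / sqrt(1 + (lambda * SNR0)^2), where lambda scales the signal-dependent physiologic noise, so TSNR plateaus at about 1/lambda no matter how high the static SNR climbs. The lambda value below is an assumption for illustration only:

import numpy as np

lam = 0.01                                  # assumed physiologic noise fraction
snr0 = np.array([50, 100, 200, 400, 800])   # static SNR values
tsnr = snr0 / np.sqrt(1 + (lam * snr0) ** 2)
for s, t in zip(snr0, tsnr):
    print(f"SNR0 = {s:4d}  ->  TSNR = {t:5.1f}")   # plateaus near 1/lam = 100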



Appendix: Making TSNR images for yourself

Standard packages such as Matlab, IDL and ImageJ can all be used to produce TSNR images. You may also be able to produce these images directly on your scanner. On a Siemens scanner you can produce mean and std dev images by selecting the time series of interest in the Viewing window (make the border solid blue), then using these menu items:

Evaluation > Dynamic Analysis > Arithmetic Mean…
Evaluation > Dynamic Analysis > Standard Deviation…

Then, once you've got the mean and std dev images in the Viewing window, select (solid blue highlight) the pair while holding the Ctrl key and use the menu item:

Evaluation > Dynamic Analysis > Divide…

to create the TSNR = mean/std dev image. It doesn't matter which order you select these two images; the Divide function has a button to swap numerator and denominator.

Note that all three images (mean, std dev, TSNR) are written to the database along with the raw data. So if you get in the habit of forming these statistical images during your session you will archive these images along with the raw data when you export it from the patient browser.
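However you produce them, the decision step is the same: compare TSNR between your candidate protocols in the regions you care about. A minimal sketch, assuming hypothetical filenames and a binary ROI mask aligned with the EPI data:

import nibabel as nib
import numpy as np

roi = nib.load("roi_mask.nii.gz").get_fdata() > 0   # hypothetical ROI mask

for fname in ("tsnr_protocol_A.nii.gz", "tsnr_protocol_B.nii.gz"):
    tsnr_map = nib.load(fname).get_fdata()
    print(f"{fname}: median TSNR in ROI = {np.median(tsnr_map[roi]):.1f}")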

1 comment:

  1. GE users: I just found a way to make TSNR images using Osirix. (See the sidebar for a link to download Osirix.) First, select your entire time series of images with a single click. Then select the "4D Viewer" option. In this viewer you will see just the first 3D volume, i.e. all slices for the first volume in the time series.

    Now, in the 4D Viewer window, select the menu item:

    Plugins > Image Filters > SNRCalc

    Specify the starting and ending frame numbers, where the latter is the total number of volumes (TRs) in the time series. (I discard the first volume because its T1 contrast is very different; apparently GE doesn't use dummy scans.) Having selected the SNRCalc option you will find three new windows pop up - one with Std Dev, one with Mean and one with TSNR - each containing all the slices in your 3D volume.
