Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Friday, November 12, 2010

Towards an optimal protocol for resting state fMRI – part II

A couple of weeks ago I used the results in a paper by van Dijk et al. to provide guidance towards a possible optimal/general protocol for resting state fMRI using EPI. That review concluded with the following rough criteria: whole brain coverage, spatial resolution around 3 mm and temporal resolution in the 2-3 seconds range. The largest of the open questions pertained to the interplay between these three specifications, in particular the ability to obtain whole brain (cortex and cerebellum) coverage in the time available, whilst minimizing (we hope) the dropout and distortion that are ever-present features of EPI.

Experimental details:

In what must be considered a disposable experiment on a single subject (medical types might call this a case study), I acquired test data sets with the following parameters:

Siemens 3 T Trio/TIM running VB15, 12-channel Head Matrix coil, ep2d_bold pulse sequence, TR=2500 ms, TE=25 ms, slice thickness=3 mm, gap=0.3 mm, 43 interleaved slices, matrix=64x64, FOV=224x224 mm (except for one test with 192x192 mm), bandwidth=2056 Hz/pixel, echo spacing=0.55 ms, number of volumes=144, fatsat=ON, MoCo=ON, no spatial filters. (See note 1.)

The subject (me) rested with eyes open, the light on in the bore (level 1/3), the bore fan turned on (level 1/3) to reduce the potential for refluxed CO2, and eyes attempting to maintain fixation on small bumps in the paint on the bore liner (which was found to be marginally more interesting and more focal than the blue stripe along the bore roof). The subject didn’t fall asleep, though there were a few moments when it was quite a challenge to stay alert. Head movement was most likely on the good side, however. I swallowed once right before the end of the first run (you’ll see it in volume 142/144 for the axial slice tests) but not at all during the other runs. I also didn’t cough, scratch or need to adjust for comfort during any of the runs, though I did swallow and stretch my lower back in between runs.

Four slice prescriptions were tested: (i) axial, (ii) a 10 degree tilt from axial to coronal, (iii) a 20 degree tilt from axial to coronal, and (iv) sagittal. The three axial/axial-oblique prescriptions were designed to allow efficient brain coverage with variable effects of magnetic field inhomogeneity. For my medium-sized melon, these prescriptions allowed full cortex and cerebellum coverage. A steeper angle of 30+ degrees tilt from axial to coronal was also considered – aggressive tilting can recover signal in OFC especially – but was rejected based on the inability to cover the entire brain without adding some more slices and extending TR. (I’ll come back to the issue of slice coverage and TR later.) Sagittal was also tested because it (just) allows full coverage of cortex with forty-three 3.3 mm slices (including gap); it does, moreover, allow complete cerebellar and brainstem coverage. (Sagittal is also the only slice prescription where the throat is imaged directly, allowing perhaps some sort of clever identification and correction of swallowing motion…?)
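As a quick sanity check on the slab coverage implied by those numbers (slice count, thickness and gap are from the protocol above; the script itself is just illustrative arithmetic):

```python
# Slab coverage for 43 interleaved slices of 3 mm with a 0.3 mm gap.
n_slices = 43
slice_mm = 3.0   # slice thickness
gap_mm = 0.3     # inter-slice gap (10% of thickness)
coverage_mm = n_slices * (slice_mm + gap_mm)
print(coverage_mm)  # ~141.9 mm along the slice-select direction
```

Roughly 142 mm is (just) enough for left-right coverage of a medium-sized head, which is why the sagittal prescription only marginally captures both temporal lobes.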

The four slice prescriptions were followed with two further test runs, both using the 10 degree axial oblique prescription. In one, the field-of-view was reduced to 192 mm from 224 mm, to increase in-plane spatial resolution, while in the other the bore fan was turned off.

Temporal SNR:

A simple way to assess the quality of time series data is to generate temporal signal-to-noise (TSNR) images, also called signal-to-fluctuation noise ratio (SFNR) images. (Siemens users, see note 2.) TSNR is simply the pixelwise mean of the time series divided by the pixelwise standard deviation of that series.
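For anyone who prefers to compute TSNR offline rather than on the scanner, the pixelwise definition above is a few lines of NumPy. The simulated data below are purely illustrative (shaped to match the 64x64 matrix, 43 slices and 144 volumes used in these tests); in practice you would load the EPI time series from the DICOM or NIfTI files, e.g. with a tool like nibabel:

```python
import numpy as np

# Simulated 4D EPI time series: (x, y, slice, volume).
# Baseline signal ~1000 with temporal fluctuations of ~10, i.e. TSNR ~100.
rng = np.random.default_rng(0)
data = 1000.0 + 10.0 * rng.standard_normal((64, 64, 43, 144))

mean_img = data.mean(axis=-1)          # pixelwise temporal mean
stdev_img = data.std(axis=-1, ddof=1)  # pixelwise temporal standard deviation
tsnr_img = np.where(stdev_img > 0, mean_img / stdev_img, 0.0)  # TSNR = mean/stdev
```

The `np.where` guard simply avoids division by zero in background regions where the time series may be constant.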

I assessed the raw (uncorrected) time series as well as those corrected by the Siemens online rigid body realignment (enabled via the MoCo option on the BOLD card for the ep2d_bold sequence). Cutting to the chase, TSNR differed little between the four slice orientations, with the slight exception of the axial run, acquired first, which had a TSNR a few percent lower than the rest of the runs. Given that my swallow didn’t happen until almost the end of that run (in frame 142/144), perhaps I was fidgeting more, or perhaps my brain was more metabolically active, having just got into the magnet. In any case, the reduction of TSNR was small and I don’t think this single result suggests that the axial prescription is that much worse than the others. More tests would be needed to conclude that.

It was interesting and encouraging to see that sagittal slices had similar TSNR to axial/axial-oblique slices. On TSNR grounds alone, then, there is no reason to reject the sagittal prescription. I’ll come back to this later.

Reducing the FOV to 192 mm produced the expected reduction of TSNR based on volumetric grounds, i.e. TSNR for the 3 mm in-plane resolution was about 75% of that for the 3.5 mm in-plane resolution (same slice thickness of 3 mm in each case). Importantly, no appreciable loss of TSNR was observed for regions of the brain that might have been impacted by overlapping Nyquist ghosts.
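That expected scaling is easy to verify: SNR is proportional to voxel volume, and with the slice thickness fixed only the in-plane voxel area changes between the two FOVs. (Matrix size and FOVs are from the protocol; the script is just the arithmetic.)

```python
# Expected TSNR ratio when shrinking the FOV from 224 mm to 192 mm
# at a fixed 64x64 matrix and fixed 3 mm slice thickness.
matrix = 64
vox_small = 192.0 / matrix  # 3.0 mm in-plane
vox_large = 224.0 / matrix  # 3.5 mm in-plane
ratio = (vox_small / vox_large) ** 2  # volume ratio; slice thickness cancels
print(round(ratio, 3))  # ~0.735, close to the ~75% observed
```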

The TSNR of the 10 degree axial oblique run acquired with the bore fan turned off was very similar to that with the bore fan turned on. As measured by TSNR, then, there doesn’t appear to be a need to ventilate the bore. However, it may be prudent to maintain ventilation to keep subjects alert; CO2 reflux may make subjects drowsy. Refluxed CO2 could also be one of several mechanisms that need to be accounted for when assessing anti-correlations. This will be the subject of a future post.

Sagittal scans for rs-fMRI?

Sagittal doesn’t seem to get much consideration as a prescription for fMRI, which is a shame given that it retains signal in the temporal and frontal lobes rather well, perhaps as well as coronal slices would. Of course, the signal in these regions is highly distorted, but acquiring any sort of signal is the first step in a distortion correction procedure! (You can’t fix it if it’s not there!)

There is a downside, of course. It was only just possible to acquire both temporal lobes with the sagittal prescription. So, while this orientation does guarantee complete coverage of the cerebellum and may preserve signal in regions of high magnetic field inhomogeneity, it comes with the risk of not being able to cover all of cortex in subjects with large brains. Presumably most fMRIers would want to guarantee cortical coverage, even at a slight risk of missing the inferior portion of the cerebellum. In any case, there are options for those who care more about the cerebellum than the temporal lobes.

Signal dropout considerations:

Reducing dropout could be essential for a general rs-fMRI protocol, especially for deep gray matter structures and the frontal and temporal lobes. I didn’t attempt to quantify the degree of signal dropout for the different slice prescriptions because it wouldn’t be a general result for all heads. Instead, I assessed the dropout qualitatively; of the three axial/axial-oblique prescriptions tested there wasn’t a clear winner.

- Smaller FOV:

Voxel size also has a role to play in minimizing signal dropout. I tested a 192 mm FOV in case higher resolution recovered signal in deep gray matter regions. Any benefit appears to be small. And since the van Dijk paper showed no benefit of higher resolution for detecting resting state networks, there seems little point in taking the SNR hit unless it recovers some signal dropout, which it doesn’t seem to do to any great degree.

- Thinner slices?

As already stated, slice angle is difficult to evaluate on a single subject. However, given that it was possible to cover the entire brain with a 20 degree tilt using forty-three 3 mm slices, extending the TR to 3 seconds would permit a larger number of thinner slices, perhaps recovering some signal in the problem areas. This will form the basis of a final test.


The final step in this quest for an optimal rs-fMRI protocol will be to check whether more signal can be preserved in high susceptibility regions through the use of thinner slices – keeping an eye on brain coverage, of course. Following the van Dijk study, we don’t expect TR to make a big difference to resting connectivity provided it stays in the range 2-4 seconds, so if we need to increase the TR from 2.5 seconds to ensure whole cortical coverage then we can do it. I’m thinking of a protocol with a TR around 3 seconds, with enough thinner slices to cover a large male head. Look for a post on this test some time next week. Then we’ll wrap the topic up for the time being with some guidelines.
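As a rough guide to how many slices a longer TR might buy, assume the per-slice acquisition time is fixed by the readout. The thinner-slice numbers below (2.5 mm with a 10% gap) are purely hypothetical, chosen only to illustrate the trade-off:

```python
# If 43 slices fit in TR = 2.5 s, the per-slice time is set by the readout.
per_slice_s = 2.5 / 43                 # ~58 ms per slice
n_slices_3s = int(3.0 / per_slice_s)   # slices that fit in TR = 3 s
# Hypothetical thinner slices: 2.5 mm thickness with a 0.25 mm gap.
coverage_mm = n_slices_3s * (2.5 + 0.25)
print(n_slices_3s, coverage_mm)  # 51 slices, ~140 mm coverage
```

So a 3 second TR buys roughly eight extra slices, which could instead be spent on thinner slices while keeping approximately the same slab coverage as the forty-three 3.3 mm slices used here.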

Want the data?

You can download a zip file containing all the raw DICOM images as well as DICOM versions of the mean, stdev and TSNR images here:

If you don’t already have a DICOM viewer, check out Osirix for Mac OS X (available via a link in the side bar). ImageJ from NIH also has some nice features for ROI analysis. I’ll post introductions to using these two programs in future posts.


(1) MoCo for the ep2d_bold sequence simply produces a second time series of images following rigid body realignment. This is in contrast to the ep2d_pace sequence, for which the MoCo option invokes a prospective motion correction strategy called PACE that directly affects the raw time series data. In my experience the costs of PACE outweigh the benefits, but that is a subject for another day. Here, all that matters is that we end up with one raw time series data set and one that has been motion-corrected (rigid body realigned) using post-processing software correction only.

(2) On a Siemens scanner (there are probably similar utilities on other vendors’ machines) you can produce stdev and mean images by selecting the time series of interest in the Viewing window (make the border solid blue), then use these menu items:

Evaluation > Dynamic Analysis > Arithmetic Mean…
Evaluation > Dynamic Analysis > Standard Deviation…
Evaluation > Dynamic Analysis > Divide…

To create a TSNR image (TSNR=mean/stdev) having already created mean and stdev images, you need to select (solid blue border) the mean and stdev images in the Viewing window first. It doesn’t matter in which order you select these two images; the Divide function has a button to swap numerator and denominator.


  1. Hi Ben, interesting post, on a topic that we have been thinking about here for a while. I read the previous post and was intrigued with your decision not to try GRAPPA, or another parallel imaging technique. I am aware of the potential reduction in SNR from the use of parallel techniques, but I have also heard from several prominent researchers that while this is true, due to the increase in SNR over birdcage coils that multicoil systems give you, using GRAPPA or SENSE in fMRI is not disadvantageous.
    Given the push by most researchers who want to use fMRI to study the brain with more temporal and spatial resolution with as much brain coverage as they can get, parallel techniques would seem to be the best way to get there. I'm not going to argue here that they are the best way; I, like you, don't really do fMRI experiments, I just have to suggest the best techniques for other people to use (so I often come up against the "But I want to go faster/get more brain/higher resolution" request). A recent paper by Mintzopoulos et al., "fMRI Using GRAPPA EPI with High Spatial Resolution Improves BOLD Signal Detection at 3T", would seem to suggest that at least using GRAPPA doesn't hurt detection of BOLD changes, at least for the paradigm they tested.

    Still, it is an interesting question. Like you, I would always urge researchers to minimize noise and improve signal during the acquisition rather than trying to fix things afterwards. Have you any personal empirical evidence to back up not using GRAPPA in rs-fMRI?

    Paul Mullins.

  2. That is a timely question, Paul. We were advocates of GRAPPA (which has been shown to be better than mSENSE for fMRI on a Siemens 3 T) until we started getting complaints from people not finding activations they were expecting. We've done some investigations and more are to come, but the simple answer to your question is motion sensitivity.

    GRAPPA (and all other parallel imaging methods) has two forms of motion sensitivity: 1. potential motion contamination of the autocalibration scan (ACS) data; 2. potential motion between the time of the ACS and the current (undersampled) k-space volume, leading to a mismatch between the ACS and the current k-space. (Problem 2 is common to any form of fMRI that requires a "reference scan," whether it's a field map, ACS, coil sensitivity map or what have you.)

    There are tricks to circumvent the type 1 problem, such as a visual inspection that the first block of images to roll off the scanner appears reasonable; just stop and restart, having reminded the subject not to move during the first few seconds of EPI noise. But the type 2 motion is tricky. You often don't know you have (had) a problem until you can analyze the entire time series, by which point the experiment is over, any novelty in stimuli has been used up for this subject, etc...

    In sum, then, what we find is that when subjects move very little, GRAPPA offers the advantages as advertised. But in the presence of motion, performance is degraded pretty swiftly, i.e. accelerated EPI doesn't have the innate motion insensitivity that single-shot EPI has, and temporal SNR is degraded by more than root-R when voxel dimensions are matched.

    I'm planning a separate post on GRAPPA vs non-GRAPPA and motion sensitivity at some point. I'd be glad to hear suggestions on this topic! Not being able to apply GRAPPA to every single fMRI protocol we do has been a big disappointment. I'm not saying that it can't work, and work well, I'm just saying that the robustness for routine use is in question, and people need to think carefully and use mitigating strategies to use it.

    One final GRAPPA tip: have your subjects swallow immediately before the start of each GRAPPA run, then tell them not to swallow til x seconds into the EPI noise, so they can't possibly swallow during the ACS. That's an easy fix to one of the most common causes of type 1 motion!

  3. A clarification: when I say that TSNR is reduced by more than root-R, I'm comparing with and without motion in the same subject, i.e. I'm not mistaking motion as the culprit rather than g-factor. Indeed, I generally see just the ideal root-R decrease of SNR and TSNR in the absence of motion (12ch or 32ch head coils, R=2), which suggests that GRAPPA is performing very well indeed without motion. But as soon as there's motion the TSNR drops away, especially in deep brain regions where now the coil sensitivity profiles ARE becoming a consideration. In fact, I wouldn't mind betting that the big problem with motion and GRAPPA for routine fMRI is this regional degradation of TSNR rather than a global effect; some activations appear as expected whereas others vanish.

  4. Very interesting!

    Can you post screenshots to illustrate the points?