Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!
Thursday, December 19, 2013
Using partial Fourier EPI for fMRI
Back in August I did a post on the experimental consequences of using partial Fourier for EPI. (An earlier post, PFUFA Part Fourteen introduces partial Fourier EPI.) The main point of that post was to demonstrate how, with all other parameters fixed, there are two principal effects on an EPI obtained with partial Fourier (pF) compared to using full phase encoding: global image smoothing, and regionally enhanced signal dropout. (See Note 1.)
In this post I want to look a little more closely at how pF-EPI works in practice, on a brain, with fMRI as the intended application, and to consider what other parameter options we have once we select pF over full k-space. I'll do two sets of comparisons. In the first comparison all parameters except the phase encoding k-space fraction will be fixed so that we can again consider the first stage consequences of using pF. In the second comparison each pF-EPI scheme will be optimized in a "maximum performance" test. The former is an apples-to-apples comparison, with essentially one variable changing at a time, whereas the latter is how you would ordinarily want to consider the pF options available to you.
Why might we want to consider partial Fourier EPI for fMRI anyway?
If we assume a typical in-plane matrix of 64 x 64 pixels, an echo spacing (the time for each phase-encoded gradient echo in the train, as explained in PFUFA Part Twelve) of 0.5 ms, and a TE of 30 ms for BOLD contrast, then it takes approximately 61 ms to acquire each EPI slice. (See Note 2 for the details.) The immediate consequence should be obvious: at 61 ms per slice we will be limited to 32 slices in a TR of 2000 ms. If the slice thickness is 3 mm then the total brain coverage in the slice dimension will be ~106 mm, assuming a 10% nominal inter-slice gap (i.e. 32 x 3.3 mm slices). With axial slices we aren't going to be able to cover the entire adult brain. We will have to omit either the top of the parietal lobes or the bottom of the temporal lobes, midbrain, OFC and cerebellum. Judicious tilting might capture all of the regions of primary interest to you, but otherwise we need either to reduce the time taken per slice or to increase the TR to cover the entire brain.
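To make the arithmetic explicit, here's a minimal sketch of the slice-budget calculation. The 61 ms per-slice figure comes from the text above (Note 2 has the derivation) and is treated as an assumed input here, as is the ~140 mm figure for adult brain extent in the slice direction:

```python
# Slice-budget arithmetic for single-shot EPI (values from the text).
time_per_slice_ms = 61.0    # assumed per-slice time, incl. overhead
tr_ms = 2000.0              # repetition time
slice_mm = 3.0              # nominal slice thickness
gap_fraction = 0.10         # 10% nominal inter-slice gap

n_slices = int(tr_ms // time_per_slice_ms)                 # 32
coverage_mm = n_slices * slice_mm * (1.0 + gap_fraction)   # ~106 mm

print(f"{n_slices} slices -> {coverage_mm:.0f} mm coverage")
# An adult brain spans roughly 140 mm superior-inferior, so axial
# slices at these settings cannot cover all of it.
```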
Partial Fourier is one way to reduce the time spent acquiring each EPI slice. There are two basic ways to approach it: eliminate either the early echoes or the late echoes in the echo train, as described at the end of PFUFA: Part Fourteen. Eliminating the early echoes doesn't, by itself, save any time at all. Only if the TE is reduced in concert is there any time saving. But omitting the late echoes will mean that we complete the data acquisition for the current slice earlier than we would for full Fourier sampling, hence there is some intrinsic speed benefit. I'll come back to the time savings and their consequences later on. Let's first look at what happens when we enable partial Fourier without changing anything else.
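To see the asymmetry in concrete numbers, here's a hedged sketch of per-slice timing under the same assumptions as the example above. The 15 ms of fixed per-slice overhead (excitation, fat saturation, spoilers) is an assumed value, chosen only so that the full Fourier case reproduces the ~61 ms figure; the real accounting is in Note 2.

```python
# Hedged per-slice timing sketch: full Fourier vs 6/8 partial Fourier.
esp_ms = 0.5         # echo spacing
n_pe = 64            # phase-encode lines for full Fourier
te_ms = 30.0
overhead_ms = 15.0   # assumed: excitation, fat sat, spoilers

def slice_time_ms(echoes_after_te):
    # Echoes acquired *before* TE fit inside the TE period, so only
    # the echoes after TE extend the acquisition. This is exactly why
    # pF(early) saves no time unless TE is reduced along with it.
    return overhead_ms + te_ms + echoes_after_te * esp_ms

print(slice_time_ms(n_pe // 2))       # full Fourier:           61.0 ms
print(slice_time_ms(n_pe // 2 - 16))  # 6/8 pF, late omitted:   53.0 ms
print(slice_time_ms(n_pe // 2))       # 6/8 pF, early omitted,
                                      # TE unchanged:           61.0 ms
```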
Wednesday, November 27, 2013
CALAMARI: Doing MRI at 130 microtesla with a SQUID
I've been dabbling in some ultralow field (ULF) MRI over the past several years, trying first to get functional brain imaging to work (more on that another day, perhaps) and more recently looking at the contrast properties of normal and diseased brains. We detect MR signals at 130 microtesla - less than three times the earth's magnetic field of approximately 50 microtesla - using an ultra-sensitive superconducting quantum interference device (SQUID). The system is usually referred to as "The Cube" on account of the large aluminum box surrounding the entire apparatus; it provides magnetic shielding for the SQUID. But my own nickname for the system is CALAMARI - the CAL Apparatus for MAgnetic Resonance Imaging. Deep-fried rings or grilled strips, it's all good. Anyway, should you wish to know more about this home-built system and what it might be able to do, there's a new paper (John Clarke's inaugural article after being elected to the NAS) now out in PNAS. At some point I'll put up more blog posts on both anatomical and functional ULFMRI, and go over some of the work that's being done at high fields (1.5+ T) that may be relevant to ULFMRI.
Wednesday, September 18, 2013
i-fMRI: BRAIN scanners of the past, present and future
Have you ever wondered why your fMRI scanner is the way it is? Why, for example, is the magnet typically operated at 1.5 or 3 T, and why is there a body-sized transmission coil for the RF? The prosaic answer to these questions is the same: it's what's for sale. We are fortunate that MRI is a cardinal method for radiology, and this clinical utility means that large medical device companies have invested hundreds of millions of dollars (and other currencies) into its development. The hardware and pulse sequences required to do fMRI research aren't fundamentally different from those required to do radiological MRI so we get to use a medical device as a scientific instrument with relative ease.
But what would our fMRI scanners look like today had they been developed as dedicated scientific instruments, with little or no application to something as lucrative as radiology? Surely the scanner-as-research-device would differ in some major ways from that which is equally at home in the hospital or the laboratory. Or would it? While it's clear that the fMRI revolution of the past twenty years has ridden piggyback on the growing clinical importance of diffusion and other advanced anatomical imaging techniques, what's less obvious is the impact of these external factors on how we conduct functional neuroimaging today. State-of-the-art fMRI might have looked quite different had we been forced to develop scanners explicitly for neuroscience.
"I wouldn't start from here, mate."
This week's interim report from the BRAIN Initiative's working group is an opportunity for all of us involved in fMRI to think seriously about our tools. We've come a long way with BOLD contrast to be sure, even though we don't fully understand its origins or its complexities. Should I be delighted or frustrated at my capacity to operate a push-button clinical machine at 3 T in order to get this stuff to work? It's undoubtedly convenient, but at what cost to science?
I can't help but wonder what my fMRI scanner might look like if it was designed specifically for the task. Would the polarizing magnet be horizontal, or would a subject sit on a chair in a vertical bore? How large would the polarizing magnet be, and what would be its field strength? The gradient set specifications? And finally, if I'm not totally sold on BOLD contrast as my reporting mechanism for neural activity, what sort of signal do I really want? In all cases I am especially interested in why I should prefer one particular answer over the other alternatives.
Note that I'm not suggesting we all dream of voltage-sensitive contrast agents. That's the point of the BRAIN Initiative according to my reading of it. All I'm suggesting is that we spend a few moments considering what we are currently doing, and whether there might be a better way. Unless there has been a remarkable set of coincidences over the last two decades, the chances are good that an fMRI scanner designed specifically for science would have differed in some major ways from the refined medical device that presently occupies my basement lab. There would be more duct tape for a start.
Monday, August 5, 2013
The experimental consequences of using partial Fourier for EPI
PFUFA Part Fourteen introduced the idea of acquiring partial k-space and explained how the method, hereafter referred to as partial Fourier (pF), is typically used for EPI acquisitions. At this point it is useful to look at some example data and to begin to assess the options for using pF-EPI for experiments.
Image smoothing
The first consequence of using pF is image smoothing. It arises because we've acquired all of the low spatial frequency information twice - on both halves of k-space - but only half of some of the high spatial frequency information. We've then zero-filled that part of k-space that was omitted. This has the immediate effect of degrading the signal-to-noise ratio (SNR) for the high spatial frequencies that reside in the omitted portion of k-space. (PFUFA Part Eleven dealt with where different spatial frequencies are to be found in k-space.) Thus, the final image has less detail and is smoother than it would have been had we acquired the full k-space matrix, and because of the smoothing the final image SNR tends to be higher for pF-EPI than for the full k-space variant.
It was surprising to me that pF-EPI has higher SNR - due to smoothing - than full Fourier EPI in spite of the reduced data sampling in the acquisition. Conventional wisdom, which is technically correct, states that acquiring less data will degrade SNR. To understand this conundrum, we can think of pF as being like a square filter applied asymmetrically to the phase encoding dimension of an EPI obtained from a complete k-space acquisition. Indeed, as we start to evaluate the costs and benefits of pF for EPI we should probably be thinking about a minimum of a three-way comparison. Firstly, we obviously want to compare our pF-EPI to the full k-space alternative having the same nominal resolution. But we should also consider whether there is any advantage over a lower resolution EPI with full k-space coverage, too. Why? Because this lower resolution version is, in effect, what you get when partial Fourier is applied symmetrically, i.e. when the high spatial frequencies are omitted from both halves of the phase encoding dimension!
Let's do our first assessment of pF on a phantom. There are four images of interest: the full k-space image, two versions of pF - omitting the early or the late echoes from the echo train - and, for the sake of quantifying the amount of smoothing, a lower resolution full k-space image which is tantamount to omitting both the early and late echoes. (See Note 1.) From this point on I'm going to refer to omission of the early and late echo variants as pF(early)-EPI and pF(late)-EPI, respectively.
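For anyone who wants to see the smoothing mechanism directly, here's a minimal numpy sketch that mimics pF by zero-filling one end of the phase-encode axis of a synthetic image's k-space. This is only the plain zero-filling reconstruction discussed above, not a vendor reconstruction (which would typically use a homodyne or POCS method instead), and the mapping of "late echoes" to the top rows of the matrix is for illustration:

```python
import numpy as np

# Synthetic 64x64 "image": a square with sharp edges
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

k = np.fft.fftshift(np.fft.fft2(img))   # full k-space, centered

def partial_fourier(k, fraction=6/8, late=True):
    """Zero-fill one end of the phase-encode (row) axis."""
    kpf = k.copy()
    n = k.shape[0]
    cut = int(n * fraction)             # keep 48 of 64 rows for 6/8
    if late:                            # pF(late): drop the last rows
        kpf[cut:, :] = 0
    else:                               # pF(early): drop the first rows
        kpf[:n - cut, :] = 0
    return kpf

full  = np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(partial_fourier(k))))

# Compare edge sharpness along the phase-encode axis: the maximum
# intensity gradient is reduced in the zero-filled reconstruction,
# i.e. the pF image is smoother.
print(np.max(np.abs(np.gradient(full, axis=0))),
      np.max(np.abs(np.gradient(recon, axis=0))))
```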
Wednesday, July 31, 2013
Shared MB-EPI data
This is cool: publicly available test-retest pilot data sets using MB-EPI and conventional EPI on the same subjects, courtesy of the Nathan Kline Institute:
What's available:
- R-mfMRI (TR = 645 ms; voxel size = 3 mm isotropic; duration = 10 minutes)
- R-mfMRI (TR = 1400 ms; voxel size = 2 mm isotropic; duration = 10 minutes)
- R-fMRI (TR = 2500 ms; voxel size = 3 mm isotropic; duration = 5 minutes)
- Diffusion Tensor Imaging (137 directions; voxel size = 2 mm isotropic)
The acquisition protocols are available as PDFs via the links given in the release website (and copied here). I like that they restricted the acceleration (MB) factor to four. I also like that the 3 mm isotropic MB-EPI data acquired at TR=645 ms used full Fourier acquisition (no partial Fourier) and an echo spacing of 0.51 ms. The former may help with signal in deep brain regions as well as frontal and temporal lobes, while the latter avoids mechanical resonances in the range 0.6-0.8 ms on a Trio, and also keeps the phase encode distortion reasonable.
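As a rough illustration of why echo spacing matters for distortion: the displacement of off-resonance signal in the phase-encode direction scales with the total effective readout duration. A back-of-envelope sketch, assuming a 64-line acquisition with no in-plane acceleration (the actual NKI protocols may differ):

```python
# Off-resonance displacement in the phase-encode direction of EPI:
#   shift (pixels) = delta_f * N_pe * effective_echo_spacing
esp_s = 0.51e-3     # echo spacing from the NKI protocol
n_pe = 64           # assumed phase-encode lines, no GRAPPA
delta_f_hz = 50.0   # example off-resonance near an air/tissue interface

shift_px = delta_f_hz * n_pe * esp_s
print(f"{shift_px:.1f} pixel shift for {delta_f_hz} Hz off-resonance")
# ~1.6 pixels; halving the effective echo spacing halves the shift.
```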
There are already studies coming out that use these data sets, such as this one by Liao et al (which is how I learned of their existence). I don't yet know which reconstruction version was used for these data sets, but those of you who are tinkering should be aware that the latest version from CMRR, version R009a, has significantly lower artifacts and less smoothing than prior versions:
MB-EPI using CMRR sequence version R008 on a Siemens Trio with 32ch coil. MB=6, 72 slices, TE=38 ms, 2 mm isotropic voxels.
MB-EPI using CMRR sequence version R009a on a Siemens Trio with 32ch coil. MB=6, 72 slices, TE=38 ms, 2 mm isotropic voxels.
The bubbles visible in the bottom image of a gel phantom are real. The other intensity variations are artifacts. In both images one can easily make out the receive field heterogeneity of the 32-channel head coil.
----
Note added post publication
From Dan Lurie (@dantekgeek): We’re also collecting/sharing data from 1000 subjects using the same sequences, plus deep phenotyping http://fcon_1000.projects.nitrc.org/indi/enhanced/
Saturday, July 6, 2013
12-channel versus 32-channel head coils for fMRI
At last month's Human Brain Mapping conference in Seattle, a poster by Harvard scientists Stephanie McMains and Ross Mair (poster 3412) showed yet more evidence that the benefits of a 32-channel coil for fMRI at 3 T aren't immediately obvious. Previous work by Kaza, Klose and Lotze in 2011 (doi: 10.1002/jmri.22614) had suggested that the benefits were regional, with cortical areas benefiting from the additional signal-to-noise ratio (SNR) whereas the standard 12-channel coil was superior for fMRI of deeper structures such as thalamus and cerebellum. The latest work by McMains and Mair confirms an earlier report from Li, Wang and Wang (ISMRM 17th Annual Meeting, 2009. Abstract #1614) showing that spatial resolution also affects the benefit, if any. In a nutshell, if a typical voxel resolution of 3 mm is used then the 32-channel coil provides no benefit over a 12-channel coil. The 32-channel coil was best only when the resolution was pushed to 2 mm, thereby pushing the SNR down towards the thermal noise limit, or when using high in-plane acceleration, e.g. GRAPPA with R > 2.
What's going on? In the first instance we need to think about the regimes that limit fMRI at different spatial resolutions. In the absence of subject motion and physiologic noise, the SNR of an EPI voxel will tend towards a thermal noise-limiting regime as it gets smaller. Let's assume a fairly typical SNR of 60 for a voxel that has dimensions 3.5x3.5x3.5 mm^3, as detected by a 12-channel head coil at 3 T. If we shrink the voxel to 3x3x3 mm^3 the SNR will decrease by a factor of ~27/43, to about 38, while if we shrink to 2x2x2 mm^3 the SNR will decrease to about 11. (Here I am assuming that all factors affecting N are invariant to resolution while S scales with voxel volume, which is sufficient for this discussion.) If we decrease the voxels to 1.5x1.5x1.5 mm^3 the SNR decreases to below five. The SNR is barely above one if we push all the way to 1x1x1 mm^3 resolution, which is why you don't often see fMRI resolution better than 2 mm at 3 T. Thus, if high spatial resolution is the goal then one needs to boost the SNR well beyond what we started off with to achieve a reasonable image. Hence the move to larger phased-array receive coils.
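The scaling is simple enough to tabulate. A minimal sketch, assuming (as above) that S scales with voxel volume while N is resolution-independent, and taking the SNR of 60 at 3.5 mm isotropic as the assumed reference point:

```python
# Thermal-noise-limited SNR scaling with voxel volume.
snr_ref = 60.0          # assumed SNR at 3.5 mm isotropic, 12ch, 3 T
ref_vol = 3.5 ** 3

for d in (3.5, 3.0, 2.0, 1.5, 1.0):
    snr = snr_ref * (d ** 3) / ref_vol   # S ~ volume, N constant
    print(f"{d:.1f} mm isotropic: SNR ~ {snr:.1f}")
# 3.0 mm -> ~38, 2.0 mm -> ~11, 1.5 mm -> ~4.7, 1.0 mm -> ~1.4
```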
So that's the situation when the thermal noise is limiting. This is generally the case for anatomical MRI, but does it apply to fMRI? If something else is limiting - either physiologic noise or subject motion - then increasing the raw SNR may not help as expected. In fMRI we are generally less concerned with true (white) thermal noise than we are with erroneous modulation of our signal. It's not noise so much as it is signal changes of no interest. For this reason, Gonzalez-Castillo et al. (doi: 10.1016/j.neuroimage.2010.11.020) recently proposed using a very low flip angle in order to minimize physiologic noise while leaving functional signal changes unchanged.
From ISMRM e-poster 3352, available as a PDF via this link.
What if we can't even attain the physiologic noise-limiting regime? It's quite possible to be in a subject motion-limiting regime, as anyone who has run an fMRI experiment can attest. In that case, the use of a high dimensional array coil (of 32 channels, say) could actually impose a higher motion sensitivity on the time series than it would have had were it detected by a smaller array coil (of 12 channels, say), due to the greater receive field heterogeneity of the 32-channel coil. This was something a colleague and I considered last year, in an arXiv paper (http://arxiv.org/abs/1210.3633) and accompanying blog post. In an e-poster at this year's ISMRM Annual Meeting (abstract #3352; a PDF of the slides is available via this Dropbox link) we simulated the effects of motion on temporal SNR (tSNR), as well as the potential for spurious correlations in resting-state fMRI, when using a 32-channel coil. In doing these simulations we assumed perfect motion correction yet there were still drastic effects, as the above figure illustrates.
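The core mechanism is easy to demonstrate. Below is a minimal, hypothetical sketch - not the simulation from the e-poster - of how a fixed, heterogeneous receive field modulates a voxel's time series when the head moves, even under perfect realignment: motion correction maps the tissue back to its original grid position, but each sample was acquired weighted by whatever part of the coil profile the tissue happened to occupy at that moment. The profiles and steepness values are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 1D receive profiles: nearly flat (12ch-like) vs steep
# (32ch-like). Real profiles are 3D and coil-specific.
def gain(x, steepness):
    return 1.0 + steepness * x          # linear gain across position

x0 = 0.02                               # voxel position (arbitrary units)
drift = 0.0005 * np.cumsum(rng.standard_normal(500))  # slow head drift

for steepness, label in ((0.5, "12ch-like"), (5.0, "32ch-like")):
    # Assume *perfect* motion correction: tissue returns to x0, but
    # each sample was weighted by the gain at the displaced position.
    ts = gain(x0 + drift, steepness)
    print(f"{label}: apparent tSNR from motion alone ~ "
          f"{ts.mean() / ts.std():.0f}")
# The steeper (32ch-like) profile yields the lower apparent tSNR for
# identical motion, with no thermal or physiologic noise added at all.
```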
Whether the equivocal benefits of a 32-channel coil for routine fMRI (that is, using 3-ish mm voxels) are due to enhanced motion sensitivity, higher physiologic noise or some other factor I'm not in a position to say with any certainty. My colleagues and I, and others, are investigating ways that we might reduce the effects of receive field contrast on motion correction. The use of a prescan normalization is one idea that might help, at least a bit. The process has many assumptions and potential flaws, but it may offer the prospect of getting back some of what might be lost courtesy of the enhanced motion sensitivity. We simply don't know yet. The bigger problem, however, seems to be that a heterogeneous receive field contrast will impart motion sensitivity on a time series even if motion correction were perfect. Strong receive field heterogeneity, of the sort exhibited by a 32-channel head coil, is a killer if the subject moves.
Unless you are attempting to use highly accelerated parallel imaging (in particular the multiband sequences) and/or pushing your voxel size towards 2 mm, you're almost certainly better off sticking with the 12-channel coil as far as fMRI performance is concerned. Other scans, in particular anatomical scans and perhaps some diffusion-weighted scans, may benefit from larger array coils (because these scans may be in the thermal noise-limiting regime), but each application will need to be verified independently.
Wednesday, June 12, 2013
Physics for understanding fMRI artifacts: Part Fourteen
Partial Fourier EPI
(The full contents for the PFUFA series of posts is here.)
In PFUFA Part Twelve you saw how 2D k-space for EPI is achieved in a single shot, i.e. using a repetitive gradient echo series following a single excitation RF pulse. The back and forth gradient echo trajectory permits the acquisition of a 2D plane of k-space in tens of milliseconds. That's fast to be sure, but when one wants a lot of three-dimensional brain coverage, every millisecond counts.
In the EPI method as presented in PFUFA Part Twelve it was (apparently) necessary to cover - that is, to sample - the entire k-space plane in order to then perform a 2D Fourier transform (FT) and recover the desired image. Indeed, this "complete" sampling requirement was developed earlier, in PFUFA Part Nine, when we looked at 2D k-space and its relationship to image space.
One aspect of the FT that I glossed over in previous posts has to do with symmetry. Perhaps the eagle-eyed among you spotted the symmetry in the 2D k-space of the first couple of pictures in PFUFA Part Nine. If you didn't, don't worry about it because I'm about to show it to you in detail. It turns out that there's actually no need to acquire the entire 2D k-space plane; it suffices to acquire some of it - at least half - and then use post-processing methods to fill in the missing part. At that point one can apply the 2D FT and recover the desired image.
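The symmetry in question is Hermitian (conjugate) symmetry: for a purely real object, the k-space value at -k is the complex conjugate of the value at +k, which is what allows half of k-space to predict the other half. Here's a quick numpy check, assuming an ideal real-valued object; a real head in a magnet acquires phase from B0 inhomogeneity and other sources, which is why practical partial Fourier reconstructions include a phase correction step:

```python
import numpy as np

rng = np.random.default_rng(1)
obj = rng.random((8, 8))        # real-valued "object"

k = np.fft.fft2(obj)

# Hermitian symmetry: k[-ky, -kx] == conj(k[ky, kx]), indices mod N
idx = (-np.arange(8)) % 8
mirrored = np.conj(k[idx][:, idx])
print(np.allclose(k, mirrored))  # True for a real object
```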
Now, as you would expect, there's no free lunch on offer. There are practical consequences from not acquiring the full k-space plane. In this post we will look briefly at the physical principles of partial Fourier EPI, then in the next post we'll take a look at some example data that will provide a basis for evaluating partial versus full k-space coverage for fMRI.
Friday, April 19, 2013
Multiband (aka simultaneous multislice) EPI validation in progress!
I am pleased to see a couple of presentations at next week's ISMRM conference in Salt Lake City dealing with some of the important validation steps that should be performed before multiband (MB) EPI (or simultaneous multislice (SMS) EPI if you prefer) is adopted for routine use by the neuroimaging community:
Characterization of Artifactual Correlation in Highly-Accelerated Simultaneous Multi-Slice (SMS) fMRI Acquisitions
Abstract #0410, ISMRM Annual Meeting, 2013.
Kawin Setsompop, Jonathan R. Polimeni, Himanshu Bhat, and Lawrence L. Wald
Simultaneous Multi-Slice (SMS) acquisition with blipped-CAIPI scheme has enabled dramatic reduction in imaging time for fMRI acquisitions, enabling high-resolution whole-brain acquisitions with short repetition times. The characterization of SMS acquisition performance is crucial to wide adoption of the technique. In this work, we examine an important source of artifact: spurious thermal noise correlation between aliased imaging voxels. This artifactual correlation can create undesirable bias in fMRI resting-state functional connectivity analysis. Here we provide a simple method for characterizing this artifactual correlation, which should aid in guiding the selection of appropriate slice- and inplane-acceleration factors for SMS acquisitions during protocol design.
(I also found this link to the full abstract.)
An Assessment of Motion Artefacts in Multi Band EPI for High Spatial and Temporal Resolution Resting State fMRI
Abstract #3275, ISMRM Annual Meeting, 2013.
Michael E. Kelly, Eugene P. Duff, Janine D. Bijsterbosch, Natalie L. Voets, Nicola Filippini, Steen Moeller, Junqian Xu, Essa S. Yacoub, Edward J. Auerbach, Kamil Ugurbil, Stephen M. Smith, and Karla L. Miller
Multiband (MB) EPI is a recent MRI technique that offers increased temporal and/or spatial resolution as well as increased temporal SNR due to increased temporal degrees-of-freedom (DoF). However, MB-EPI may exhibit increased motion sensitivity due to the combination of short TR with parallel imaging. In this study, the performance of MB-EPI with different acceleration factors was compared to that of standard EPI, with respect to subject motion. Although MB-EPI with 4 and 8 times acceleration exhibited some motion sensitivity, retrospective clean-up of the data using independent component analysis was successful at removing artefacts. By increasing temporal DoF, accelerated MB-EPI supports higher spatial resolution, with no loss in statistical significance compared to standard EPI. MB-EPI is therefore an important new technique capable of providing high resolution, temporally rich FMRI datasets for more interpretable mapping of the brain's functional networks.
The natural question to ask next occurs at the interface of these two topics: what about head motion-driven artifactual correlations between simultaneously excited slices? I am also curious to see how retrospective motion correction, e.g. affine registration algorithms, performs with MB-EPI that contains appreciable motion contamination. Is the "pre-processing" pipeline that we use for single-shot EPI appropriate for MB-EPI?
In-plane parallel imaging methods such as GRAPPA and SENSE were adopted for EPI-based fMRI experiments prematurely in my view, i.e. before full validations had been conducted. (Mea culpa. I was one of those beguiled by GRAPPA when I first saw it.) The failure modes - like motion sensitivity - hadn't been fully explored before a lot of us began employing the methods for their purported benefits. It would be nice if the failure modes of MB-EPI get a thorough workout before the neuroimaging community adopts it en masse.
That said, I am still very excited that MB-EPI may offer the most significant performance boost for fMRI acquisition for more than a decade (since the introduction of scanners capable of EPI readout on all three gradient axes). But I continue to seek validation before recommending widespread adoption of MB-EPI (or any other method) and I look forward to seeing more reports such as these in the literature and online, prior to people using them in experiments to solve the brain.
Tuesday, April 9, 2013
Resting state fMRI confounds
(Thanks to Dave J. Hayes for tweeting the publication of these papers.)
Two new papers provide comprehensive reviews of some of the confounds to the acquisition, processing and interpretation of resting state fMRI data. In the paper, "Resting-state fMRI confounds and cleanup," Murphy, Birn and Bandettini consider in some detail many of the noise sources in rs-fMRI, especially those having a physiologic origin.
In "Overview of potential procedural and participant-related confounds for neuroimaging of the resting state," Duncan and Northoff review the effects that other circumstantial factors, such as the scanner's acoustic noise, subject instructions, subjects' emotional state, and caffeine might have on rs-fMRI studies. Without due consideration, some or all of these factors may inadvertently become experimental variables; the implications for inter-individual differences are considerable. (I've reviewed some of the issues concerning what we can permit subjects to do before and during rs-fMRI in this post.)
While we're on the subject of confounds in rs-fMRI - especially those with a motion component - another confound that motion introduces is a sensitivity to the receive field heterogeneity of the head coil. This problem gets worse the more channels the coil has, because the coil elements get smaller as the number of channels goes up. For an introduction to the issue see this arXiv paper; there will also be simulations of the effect for a 32-channel coil at the ISMRM conference in a couple of weeks' time. (See e-poster, abstract #3352.) The upshot is that spurious correlations and anti-correlations can arise, necessitating some sort of clever sorting or de-noising scheme to distinguish them from "true" brain correlations. I mention it here because there is a common misconception in the field that applying a retrospective motion correction step fixes all motion-related artifacts. It doesn't. Nor does including all of the motion parameters as regressors in a model. Motion has some insidious ways in which it can modulate the MRI signal level, and it is high time that we, as a field, reconsider very carefully what we are doing for motion correction, and why.
Finally, I'll note in passing that slice timing correction may not be a good idea for rs-fMRI. It's been known since the correction was first proposed that it interacts with the motion correction step. (The two corrections should be applied simultaneously, as one 4D space-time correction, rather than as a separate 3D spatial correction followed by a temporal one, or vice versa.) I don't have data to share just yet, but if anyone is wondering whether they should include STC in their rs-fMRI analysis, as they would do for event-related fMRI, then my advice is to skip it until someone can prove to you that it has no unintended consequences. (Demonstration of unintended consequences to follow eventually....)
References:
Resting-state fMRI confounds and cleanup. K Murphy, RM Birn and PA Bandettini, NeuroImage 2013, Epub ahead of print.
DOI: 10.1016/j.neuroimage.2013.04.001
Overview of potential procedural and participant-related confounds for neuroimaging of the resting state. NW Duncan and G Northoff, J. Psychiatry Neurosci. 2013, 38(2), 84-96.
PMID: 22964258
DOI: 10.1503/jpn.120059
Saturday, April 6, 2013
Impressively rapid follow-ups to a published fMRI study
Alternative post title: Why blogs can be seriously useful in research.
Last week there was quite a lot of attention to an article published in PNAS by Aharoni et al. In their study they claimed that fMRI could be useful in predicting the likelihood of rearrest in a group of convicts up for parole:
"Identification of factors that predict recurrent antisocial behavior is integral to the social sciences, criminal justice procedures, and the effective treatment of high-risk individuals. Here we show that error-related brain activity elicited during performance of an inhibitory task prospectively predicted subsequent rearrest among adult offenders within 4 y of release (N = 96). The odds that an offender with relatively low anterior cingulate activity would be rearrested were approximately double that of an offender with high activity in this region, holding constant other observed risk factors. These results suggest a potential neurocognitive biomarker for persistent antisocial behavior."
The senior author, Kent Kiehl, was interviewed on National Public Radio on Friday morning. I heard it on my way into work. An NPR interview would suggest the media attention was widespread, although I haven't looked at this aspect specifically.
What I did notice, however, was that The Neurocritic came out with two quick posts (here and here) wherein he brought up a couple of interesting limitations of the study and even ran his own re-analysis of the data, the PNAS authors having been kind enough to make their data available publicly.
This afternoon, Russ Poldrack has followed up with his own analysis and interpretation of the study's data. I'll be honest, all the stats leaves me flat-footed. But I am very seriously impressed by the way the blogosphere, combined with shared data, has been able to poke and prod the original study's conclusions.
Why am I so enthused? Because the mainstream media (still) has the power to dominate the narrative in the public sphere, and it is especially important that specific criticisms can be leveled within the same news cycle, while the public might still be paying attention to the story. So, while I think it's highly unlikely that NPR will interview the senior author of the next study that finds there is no predictive use of fMRI for recidivism - we seem to have a serious positive results bias in science - maybe there's a slim chance that NPR will interview Russ about his follow-up analysis, just to balance the record. And if not, at least those in the field have the benefit of the post-publication peer review that blogs can offer.
Wednesday, March 27, 2013
Quick update for Siemens users
Apologies for the lengthy absence. Many irons in the fire, etc. So until I can provide a more considered post I give you these three random tidbits:
1. Syngo MR version D13 for Verio and Skyra
There is an EPI sequence in VD13 that has a real-time update of the on-resonance frequency, i.e. one that is computed and applied TR by TR, to combat drift caused by gradient heating. There are apparently versions for fMRI and diffusion-weighted imaging. I don't have any detailed information, but if you are working on a Verio or a Skyra it might be time to talk to your physicist and/or local Siemens rep.
2. Phase encode direction for axial and axial-oblique EPI
Siemens uses A-P phase encoding by default whereas GE uses P-A by default. Essentially, for axial (and axial-oblique) EPI the A-P direction compresses the frontal lobe but stretches the occipital lobe, whereas P-A stretches the frontal lobe and compresses the occipital lobe. Pick your poison. (See Note 1.) Test each one out by setting the Phase enc. dir. parameter on the Routine tab. To set P-A from A-P (default), first click the three dots (...) to the right of the parameter field to open the dialog box, then enter 180 <return> instead of 0. You will probably find that the parameter change doesn't "stick" for appended scans, so saving a modified protocol in the Exam Explorer is a way to ensure the default (A-P) doesn't get reinstated without you noticing. More details to come in the next version of my user training/FAQ document.
3. Another way to force a re-shim
In my last user training/FAQ document (and here) I gave a simple way to force the scanner to re-shim at any point, e.g. when you know or strongly suspect the subject may have moved, or between lengthy blocks as a way to maintain high quality data in spite of slow subject motion and scanner drifts. But there is another way to do it and from some basic tests it looks to be superior. Here's a shaky video of the procedure conducted on a Trio running Syngo MR B17 (see Note 2):
(The essential procedure is the same for later software versions, but the layout of the 3D Shim window is slightly different.)
Wednesday, January 30, 2013
A checklist for fMRI acquisition methods reporting in the literature
This post updates the draft checklist that was presented back in October. Thanks to all who provided feedback. The updated checklist, denoted version 1.1, incorporates a lot of the suggestions made previously. The main difference is the reduction from three to two categories. The logic is that we should be encouraging reporting of "All of Essential plus any of Supplemental" parameters in the methods section of any fMRI publication.
Explanatory notes, consolidated from the post on the draft list, and abbreviations appear below.