Thursday, February 27, 2014
There was quite a lot of activity yesterday in response to PLOS ONE's announcement regarding its data policy. Most of the discussion I saw concerned rights of use and credit, completeness of data (e.g. the need for stimulus scripts for task-based fMRI) and ethics (e.g. the need to get subjects' consent to permit further distribution of their fMRI data beyond the original purpose). I am leaving all of these very important issues to others. Instead, I want to pose a couple of questions to the fMRI community specifically, because they concern data quality and data quality is what I spend almost all of my time dealing with, directly or indirectly. Here goes.
1. Under what circumstances would you agree to use someone else's data to test a hypothesis of your own?
Possible concerns: scanner field strength and manufacturer, scan parameters, operator experience, reputation of acquiring lab.
2. What form of quality control would you insist on before relying on someone else's data?
Possible QA measures: independent verification of a simple task such as a button press response encoded in the same data, realignment "motion parameters" below/within some prior limit, temporal SNR above some prior value.
If anyone has other questions related to data quality that I haven't covered with these two, please let me know and I'll update the post. Until then I'll leave you with a couple of loaded comments. I wouldn't trust anyone's data unless I knew the scanner operator personally and knew first-hand that they had excellent standard operating procedures, a.k.a. excellent experimental technique. Furthermore, I wouldn't trust realignment algorithm reports (so-called motion parameters) as a reliable proxy for data quality in the way that, say, a purity value vouches for a chemical. Reducing motion to a single summary value - "My motion is less than 0.5 mm over the entire run!" - is especially nonsensical in my opinion, considering that the typical voxel exceeds 2 mm on a side. Okay, discuss.
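To make the second question concrete, here is a minimal sketch of two of the QA measures mentioned above - temporal SNR, and a summary of the realignment parameters - computed with NumPy on a 4D time series. The array shapes, the assumed head radius, the threshold-free outputs and the function names are my own choices for illustration, not any community standard:

```python
import numpy as np

def temporal_snr(data_4d):
    """Voxelwise temporal SNR: mean over time divided by std over time.

    data_4d has shape (x, y, z, t)."""
    mean_img = data_4d.mean(axis=-1)
    std_img = data_4d.std(axis=-1)
    return np.where(std_img > 0, mean_img / np.maximum(std_img, 1e-12), 0.0)

def framewise_displacement(motion_params, head_radius_mm=50.0):
    """Volume-to-volume displacement from six realignment parameters
    (3 translations in mm, 3 rotations in radians) per volume.

    Rotations are converted to mm of arc at an assumed head radius."""
    deltas = np.abs(np.diff(motion_params, axis=0))
    deltas[:, 3:] *= head_radius_mm  # radians -> mm on the head surface
    return deltas.sum(axis=1)

# Synthetic example: a 4 x 4 x 4 "brain" whose every voxel has mean
# signal 100 and temporal std 1, i.e. expected tSNR of 100.
ts = np.tile(np.array([99.0, 101.0]), 50)              # 100 time points
data = np.broadcast_to(ts, (4, 4, 4, 100)).copy()

tsnr = temporal_snr(data)
print(tsnr.mean())   # -> 100.0 for this synthetic series
```

Whether a tSNR of, say, 40 or a displacement of 0.2 mm should be a pass/fail criterion is exactly the question I'm posing; the code only shows that the numbers themselves are cheap to compute.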
UPDATE 13:35 PST
Someone just alerted me to the issue of data format. Raw? Filtered? And what about custom file types? One might expect to get image domain data, perhaps limited to the magnitude images that 99.9% of folks use. So, a third question is this: What data format(s) would you consider (un)acceptable for sharing, and why?
Tuesday, January 28, 2014
This is the final post in a short series concerning partial Fourier EPI for fMRI. The previous post showed how partial Fourier phase encoding can accelerate the slice acquisition rate for EPI. It is possible, in principle, to omit as much as half the phase encode data, but for practical reasons the omission is generally limited to around 25% before image artifacts - mainly enhanced regional dropout - make the speed gain too costly for fMRI use. Omitting 25% of the phase encode sampling allows a slice rate acceleration of up to about 20%, depending on whether the early or the late echoes are omitted and whether other timing parameters, most notably the TE, are changed in concert.
But what other options do you have for gaining approximately 20% more slices in a fixed TR? A common tactic for reducing the amount of phase-encoded data is to use an in-plane parallel imaging method such as SENSE or GRAPPA. Now, I've written previously about the motion sensitivity of parallel imaging methods for EPI, in particular the motion sensitivity of GRAPPA-EPI, which is the preferred parallel imaging method on a Siemens scanner. (See posts here, here and here.) In short, the requirement to obtain a basis set of spatial information - that is, a map of the receive coil sensitivities for SENSE and a set of so-called auto-calibration scan (ACS) data for GRAPPA - means that any motion that occurs between the basis set and the current volume of (accelerated) EPI data is likely to cause some degree of mismatch that will result in artifacts. Precisely how and where the artifacts will appear, their intensity, etc. will depend on the type of motion that occurs, whether the subject's head returns to the initial location, and so on. Still, it behooves us to check whether parallel imaging might be a better option for accelerating slice coverage than partial Fourier.
Deciding what to compare
Disclaimer: As always with these throwaway comparisons, use what you see here as a starting point for thinking about your options and perhaps determining your own set of pilot experiments. It is not the final word on either partial Fourier or GRAPPA! It is just one worked example.
Okay, so what should we look at? In selecting 6/8ths partial Fourier it appears that we can get about 15-20% more slices for a fixed TR. It turns out that this gain is comparable to using GRAPPA with R=2 acceleration at the same TE. To keep things manageable - a five-way comparison is a sod to illustrate - I am going to drop the low-resolution 64x48 full Fourier EPI that featured in the last post in favor of the R=2 GRAPPA-EPI that we're now interested in. For the sake of this comparison I'm assuming that we have decided to go with either pF-EPI or GRAPPA, but you should note that the 64x48 full Fourier EPI remains an option for you in practice. (Download all the data here to perform your own comparisons!)
I will retain the original 64x64 full Fourier EPI as our "gold standard" for image quality as well as the two pF-EPI variants, yielding a new four-way comparison: 64x64 full Fourier EPI, 6/8pF(early), 6/8pF(late), and GRAPPA with R=2. Partial Fourier nomenclature is as used previously. All parameters except the specific phase encode sampling schemes were held constant. Data was collected on a Siemens TIM/Trio with 12-channel head coil, TR = 2000 ms, TE = 22 ms, FOV = 224 mm x 224 mm, slice thickness = 3 mm, inter-slice gap = 0.3 mm, echo spacing = 0.5 ms, bandwidth = 2232 Hz/pixel, flip angle = 70 deg. Each EPI was reconstructed as a 64x64 matrix regardless of how much actual k-space was acquired. Partial Fourier schemes used zero filling prior to 2D FT. GRAPPA reconstruction was performed on the scanner with the default vendor reconstruction program. (Siemens users, see Note 1.)
Thursday, December 19, 2013
Back in August I did a post on the experimental consequences of using partial Fourier for EPI. (An earlier post, PFUFA Part Fourteen introduces partial Fourier EPI.) The main point of that post was to demonstrate how, with all other parameters fixed, there are two principal effects on an EPI obtained with partial Fourier (pF) compared to using full phase encoding: global image smoothing, and regionally enhanced signal dropout. (See Note 1.)
In this post I want to look a little more closely at how pF-EPI works in practice, on a brain, with fMRI as the intended application, and to consider what other parameter options we have once we select pF over full k-space. I'll do two sets of comparisons. In the first comparison all parameters except the phase encoding k-space fraction will be fixed so that we can again consider the first stage consequences of using pF. In the second comparison each pF-EPI scheme will be optimized in a "maximum performance" test. The former is an apples to apples comparison, with essentially one variable changing at a time, whereas the latter is how you would ordinarily want to consider the pF options available to you.
Why might we want to consider partial Fourier EPI for fMRI anyway?
If we assume a typical in-plane matrix of 64 x 64 pixels, an echo spacing (the time for each phase-encoded gradient echo in the train, as explained in PFUFA Part Twelve) of 0.5 ms and a TE of 30 ms for BOLD contrast, then it takes approximately 61 ms to acquire each EPI slice. (See Note 2 for the details.) The immediate consequence should be obvious: at 61 ms per slice we will be limited to 32 slices in a TR of 2000 ms. If the slice thickness is 3 mm then the total brain coverage in the slice dimension will be ~106 mm, assuming a 10% nominal inter-slice gap (i.e. 32 x 3.3 mm slices). With axial slices we aren't going to be able to cover the entire adult brain. We will have to omit either the top of the parietal lobes or the bottom of the temporal lobes, midbrain, OFC and cerebellum. Judicious tilting might capture all of the regions of primary interest to you, but otherwise we need either to reduce the time taken per slice or to increase the TR to cover the entire brain.
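The arithmetic above can be sketched in a few lines. The 61 ms per-slice figure is taken as given from the text (it includes the excitation and fat saturation overhead covered in Note 2); everything else follows from it:

```python
# Slice coverage for full Fourier EPI, using the numbers in the text.
slice_time_ms = 61            # approx. time to acquire one EPI slice
tr_ms = 2000                  # repetition time
slice_thickness_mm = 3.0
gap_fraction = 0.10           # 10% nominal inter-slice gap

n_slices = tr_ms // slice_time_ms   # -> 32 slices fit in one TR
coverage_mm = n_slices * slice_thickness_mm * (1 + gap_fraction)

print(n_slices)      # -> 32
print(coverage_mm)   # ~106 mm, short of a whole adult brain
```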
Partial Fourier is one way to reduce the time spent acquiring each EPI slice. There are two basic ways to approach it: eliminate either the early echoes or the late echoes in the echo train, as described at the end of PFUFA: Part Fourteen. Eliminating the early echoes doesn't, by itself, save any time at all. Only if the TE is reduced in concert is there any time saving. But omitting the late echoes will mean that we complete the data acquisition for the current slice earlier than we would for full Fourier sampling, hence there is some intrinsic speed benefit. I'll come back to the time savings and their consequences later on. Let's first look at what happens when we enable partial Fourier without changing anything else.
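As a back-of-envelope sketch of the intrinsic saving from pF(late) - my own arithmetic, which ignores any change in the excitation and fat saturation overhead - omitting the late 2/8 of a 64-line echo train at 0.5 ms echo spacing trims 8 ms off each slice:

```python
# Rough per-slice timing for 6/8 partial Fourier, late echoes omitted,
# relative to the ~61 ms full Fourier slice time used in the text.
echo_spacing_ms = 0.5
n_lines_full = 64
omitted_lines = n_lines_full // 4     # 6/8 pF omits 16 of 64 lines

full_slice_ms = 61
pf_late_slice_ms = full_slice_ms - omitted_lines * echo_spacing_ms  # 53 ms

tr_ms = 2000
print(tr_ms // full_slice_ms)          # -> 32 slices per TR, full Fourier
print(int(tr_ms // pf_late_slice_ms))  # -> 37 slices per TR, pF(late)
```

That works out to roughly 16% more slices per TR, consistent with the 15-20% gain quoted elsewhere in this series; omitting the early echoes instead saves time only insofar as the TE is reduced in concert.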
Wednesday, November 27, 2013
I've been dabbling in some ultralow field (ULF) MRI over the past several years, trying first to get functional brain imaging to work (more on that another day, perhaps) and more recently looking at the contrast properties of normal and diseased brains. We detect MR signals at less than three times the earth's magnetic field (of approximately 50 microtesla) using an ultra-sensitive superconducting quantum interference device (SQUID). The system is usually referred to as "The Cube" on account of the large aluminum box surrounding the entire apparatus; it provides magnetic shielding for the SQUID. But my own nickname for the system is CALAMARI - the CAL Apparatus for MAgnetic Resonance Imaging. Deep-fried rings or grilled strips, it's all good. Anyway, should you wish to know more about this home-built system and what it might be able to do, there's a new paper (John Clarke's inaugural article after being elected to the NAS) now out in PNAS. At some point I'll put up more blog posts on both anatomical and functional ULFMRI, and go over some of the work that's being done at high fields (1.5+ T) that may be relevant to ULFMRI.
Wednesday, September 18, 2013
Have you ever wondered why your fMRI scanner is the way it is? Why, for example, is the magnet typically operated at 1.5 or 3 T, and why is there a body-sized transmission coil for the RF? The prosaic answer to these questions is the same: it's what's for sale. We are fortunate that MRI is a cardinal method for radiology, and this clinical utility means that large medical device companies have invested hundreds of millions of dollars (and other currencies) into its development. The hardware and pulse sequences required to do fMRI research aren't fundamentally different from those required to do radiological MRI so we get to use a medical device as a scientific instrument with relative ease.
But what would our fMRI scanners look like today had they been developed as dedicated scientific instruments, with little or no application to something as lucrative as radiology? Surely the scanner-as-research-device would differ in some major ways from that which is equally at home in the hospital or the laboratory. Or would it? While it's clear that the fMRI revolution of the past twenty years has ridden piggyback on the growing clinical importance of diffusion and other advanced anatomical imaging techniques, what's less obvious is the impact of these external factors on how we conduct functional neuroimaging today. State-of-the-art fMRI might have looked quite different had we been forced to develop scanners explicitly for neuroscience.
"I wouldn't start from here, mate."
This week's interim report from the BRAIN Initiative's working group is an opportunity for all of us involved in fMRI to think seriously about our tools. We've come a long way with BOLD contrast to be sure, even though we don't fully understand its origins or its complexities. Should I be delighted or frustrated at my capacity to operate a push-button clinical machine at 3 T in order to get this stuff to work? It's undoubtedly convenient, but at what cost to science?
I can't help but wonder what my fMRI scanner might look like if it were designed specifically for the task. Would the polarizing magnet be horizontal, or would a subject sit on a chair in a vertical bore? How large would the polarizing magnet be, and what would be its field strength? The gradient set specifications? And finally, if I'm not totally sold on BOLD contrast as my reporting mechanism for neural activity, what sort of signal do I really want? In all cases I am especially interested in why I should prefer one particular answer over the alternatives.
Note that I'm not suggesting we all dream of voltage-sensitive contrast agents. That's the point of the BRAIN Initiative according to my reading of it. All I'm suggesting is that we spend a few moments considering what we are currently doing, and whether there might be a better way. Unless there has been a remarkable set of coincidences over the last two decades, the chances are good that an fMRI scanner designed specifically for science would have differed in some major ways from the refined medical device that presently occupies my basement lab. There would be more duct tape for a start.
Monday, August 5, 2013
PFUFA Part Fourteen introduced the idea of acquiring partial k-space and explained how the method, hereafter referred to as partial Fourier (pF), is typically used for EPI acquisitions. At this point it is useful to look at some example data and to begin to assess the options for using pF-EPI for experiments.
The first consequence of using pF is image smoothing. It arises because we've acquired all of the low spatial frequency information twice - on both halves of k-space - but only half of some of the high spatial frequency information. We've then zero-filled that part of k-space that was omitted. This has the immediate effect of degrading the signal-to-noise ratio (SNR) for the high spatial frequencies that reside in the omitted portion of k-space. (PFUFA Part Eleven dealt with where different spatial frequencies are to be found in k-space.) Thus, the final image has less detail and is smoother than it would have been had we acquired the full k-space matrix, and because of the smoothing the final image SNR tends to be higher for pF-EPI than for the full k-space variant.
It was surprising to me that pF-EPI has higher SNR - due to smoothing - than full Fourier EPI in spite of the reduced data sampling in the acquisition. Conventional wisdom, which is technically correct, states that acquiring less data will degrade SNR. To resolve this conundrum, we can think of pF with zero filling as a square filter applied asymmetrically to the phase encoding dimension of an EPI obtained from a complete k-space acquisition: the filter removes noise power at the omitted high spatial frequencies while leaving the bulk of the image signal, which resides at the low spatial frequencies, largely untouched, so the apparent per-voxel SNR rises even as the information content falls. Indeed, as we start to evaluate the costs and benefits of pF for EPI we should probably be thinking about a minimum of a three-way comparison. Firstly, we obviously want to compare our pF-EPI to the full k-space alternative having the same nominal resolution. But we should also consider whether there is any advantage over a lower resolution EPI with full k-space coverage, too. Why? Because this lower resolution version is, in effect, what you get when partial Fourier is applied symmetrically, i.e. when the high spatial frequencies are omitted from both halves of the phase encoding dimension!
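The filtering interpretation is easy to demonstrate numerically. Here is a minimal NumPy sketch - synthetic numbers, not scanner data: I build a 64x64 test image, zero out the highest spatial frequencies on one side of k-space (which is what a zero-filled pF reconstruction amounts to), and show that the result varies more smoothly along the phase encode axis than the full reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 "image": positive baseline, noise, and a sharp bright square.
img = 1.0 + 0.1 * rng.standard_normal((64, 64))
img[24:40, 24:40] += 1.0

# Full k-space, centered so row 32 is the ky = 0 line.
kspace = np.fft.fftshift(np.fft.fft2(img))

# 6/8 partial Fourier with zero filling: discard the last 16 ky lines
# (the "late echoes" side of k-space) and keep everything else.
k_pf = kspace.copy()
k_pf[48:, :] = 0.0

recon_full = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
recon_pf = np.abs(np.fft.ifft2(np.fft.ifftshift(k_pf)))

# High-spatial-frequency content along the phase encode (row) axis:
def roughness(im):
    return np.sum(np.diff(im, axis=0) ** 2)

# The zero-filled pF reconstruction is smoother than the full one.
print(roughness(recon_pf) < roughness(recon_full))
```

Zeroing the matching rows on the other side of k-space as well reproduces the symmetric case - the low-resolution full Fourier comparison image described above.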
Let's do our first assessment of pF on a phantom. There are four images of interest: the full k-space image, two versions of pF - omitting the early or the late echoes from the echo train - and, for the sake of quantifying the amount of smoothing, a lower resolution full k-space image which is tantamount to omitting both the early and late echoes. (See Note 1.) From this point on I'm going to refer to omission of the early and late echo variants as pF(early)-EPI and pF(late)-EPI, respectively.
Wednesday, July 31, 2013
These are cool: publicly available test-retest pilot data sets using MB-EPI and conventional EPI on the same subjects, courtesy of the Nathan Kline Institute:
- R-mfMRI (TR = 645 ms; voxel size = 3 mm isotropic; duration = 10 minutes)
- R-mfMRI (TR = 1400 ms; voxel size = 2 mm isotropic; duration = 10 minutes)
- R-fMRI (TR = 2500 ms; voxel size = 3 mm isotropic; duration = 5 minutes)
- Diffusion Tensor Imaging (137 directions; voxel size = 2 mm isotropic)
The acquisition protocols are available as PDFs via the links given in the release website (and copied here). I like that they restricted the acceleration (MB) factor to four. I also like that the 3 mm isotropic MB-EPI data acquired at TR=645 ms used full Fourier acquisition (no partial Fourier) and an echo spacing of 0.51 ms. The former may help with signal in deep brain regions as well as frontal and temporal lobes, while the latter avoids mechanical resonances in the range 0.6-0.8 ms on a Trio, and also keeps the phase encode distortion reasonable.
There are already studies coming out that use these data sets, such as this one by Liao et al (which is how I learned of their existence). I don't yet know which reconstruction version was used for these data sets, but those of you who are tinkering should be aware that the latest version from CMRR, version R009a, has significantly lower artifacts and less smoothing than prior versions:
MB-EPI using CMRR sequence version R008 on a Siemens Trio with a 32-channel coil. MB = 6, 72 slices, TE = 38 ms, 2 mm isotropic voxels.
MB-EPI using CMRR sequence version R009a on a Siemens Trio with a 32-channel coil. MB = 6, 72 slices, TE = 38 ms, 2 mm isotropic voxels.
Both images are of a gel phantom; the bubbles visible in the bottom image are real. The other intensity variations are artifacts. In both images one can easily make out the receive field heterogeneity of the 32-channel head coil.
Note added post publication
From Dan Lurie (@dantekgeek): We’re also collecting/sharing data from 1000 subjects using the same sequences, plus deep phenotyping http://fcon_1000.projects.nitrc.org/indi/enhanced/