Tuesday, April 1, 2014

i-fMRI: A virtual whiteboard discussion on multi-echo, simultaneous multi-slice EPI

Disclaimer: This isn't an April Fool!

I'd like to use the collective wisdom of the Internet to discuss the pros and cons of a general approach to simultaneous multislice (SMS) EPI that I've been thinking about recently, before anyone wastes time doing any actual programming or data acquisition.


Multi-echo EPI for de-noising fMRI data


These methods rest on one critical feature: they use in-plane parallel imaging (GRAPPA or SENSE, usually depending on the scanner vendor) to render the per-slice acquisition time reasonable. For example, with R=2 acceleration it's possible to get three echo planar images per slice at TEs of around 15, 40 and 60 ms. The multiple echoes can then be used to distinguish BOLD from non-BOLD signal variations, etc.
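To make the TE dependence concrete, here's a minimal sketch of the signal model that underpins these de-noising methods. Per voxel, the signal across echoes should follow S(TE) = S0 * exp(-TE/T2*), so a log-linear fit over the three echoes separates S0-like (non-BOLD-like) fluctuations from T2*-like (BOLD-like) ones. The echo times match the example above, but the code is my own illustration, not taken from any particular de-noising package:

```python
import numpy as np

# Echo times from the example above (ms): R=2 in-plane acceleration
# permitting three echo planar images per slice.
TEs = np.array([15.0, 40.0, 60.0])

def fit_monoexponential(echoes):
    """Log-linear fit of S(TE) = S0 * exp(-TE / T2*) per voxel.

    echoes: array of shape (n_echoes, n_voxels) of magnitude signals.
    Returns per-voxel estimates of S0 and T2* (ms).
    """
    logS = np.log(np.maximum(echoes, 1e-6))          # guard against log(0)
    A = np.column_stack([np.ones_like(TEs), -TEs])   # logS = log(S0) - TE * R2*
    coeffs, *_ = np.linalg.lstsq(A, logS, rcond=None)
    S0 = np.exp(coeffs[0])
    T2star = 1.0 / np.maximum(coeffs[1], 1e-6)       # avoid divide-by-zero
    return S0, T2star

# Quick check with a synthetic voxel: S0 = 1000, T2* = 45 ms.
signal = 1000.0 * np.exp(-TEs / 45.0)
S0, T2star = fit_monoexponential(signal[:, np.newaxis])
print(S0[0], T2star[0])   # ~1000, ~45
```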
The immediate problem with the multi-echo scheme is that the per-slice acquisition time is still a lot longer than for conventional EPI, meaning less brain coverage. The suggestion has been to use MB/SMS to regain speed in the slice dimension. This results in the combination of MB/SMS in the slice dimension and GRAPPA/SENSE in-plane, thereby complicating the reconstruction, possibly (probably) amplifying artifacts, increasing motion sensitivity, etc. If we could eliminate the in-plane parallel imaging and do all the acceleration through MB/SMS then that would possibly reduce some of the artifact amplification and might simplify (slightly) the necessary reference data, etc.


A different approach? 

Thursday, March 13, 2014

WARNING! Stimulation threshold exceeded!


When running fMRI experiments it's not uncommon for the scanner to prohibit what you'd like to do because of a gradient stimulation limit. You may even hit the limit "out of the blue," e.g. when attempting an oblique slice prescription for a scan protocol that has run just fine for you in the past. I'd covered the anisotropy of the gradient stimulation limit as a footnote in an old post on coronal and sagittal fMRI, but it's an issue that causes untold stress and confusion when it happens, so I decided to make a dedicated post.

Some of the following is taken from Siemens manuals, but the principles apply to all scanners. There may be vendor-specific differences in the way the safety checking is computed, however. Check your scanner manuals for details on the particular implementation of stimulus monitoring on your scanner.

According to Siemens, the scanner monitors the physiological effects of the gradients and prohibits initiating any scan that would exceed predefined thresholds. On a Siemens scanner the limits are established according to two models that are evaluated simultaneously.



The scanner computes the expected stimulation that will arise from the gradient waveforms in the sequence you are attempting to run. If one or both models suggest that a limit will be exceeded, you get an error message. I'll note here that the scanner also monitors in real time the actual gradients being played out, in case some sort of fault occurs with the gradient control.
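As a crude illustration of the pre-scan check (this is a toy model of my own, not Siemens' actual algorithm, and every threshold below is an invented number), consider predicting the peak slew rate from a trapezoidal gradient lobe and comparing it against per-axis limits. The per-axis limits are why an oblique prescription, which mixes gradient axes, can fail where a pure axial one passes:

```python
# Toy pre-scan stimulation check: NOT the vendor's model. The real monitor
# uses proprietary physiological models; this sketch only shows the general
# idea of predicting a gradient-related quantity and comparing it against
# an axis-dependent threshold before the scan is allowed to start.

# Assumed hardware/sequence values, for illustration only:
amplitude_mT_per_m = 30.0     # gradient lobe amplitude
ramp_time_us = 200.0          # ramp up/down time

# Peak slew rate in T/m/s:
slew_T_m_s = (amplitude_mT_per_m * 1e-3) / (ramp_time_us * 1e-6)

# Stimulation limits differ per gradient axis (hence the anisotropy) --
# these numbers are invented for the sketch:
slew_limit_T_m_s = {"x": 180.0, "y": 160.0, "z": 200.0}

for axis, limit in slew_limit_T_m_s.items():
    if slew_T_m_s > limit:
        print(f"WARNING! Stimulation threshold exceeded on {axis}!")
    else:
        print(f"{axis}: OK ({slew_T_m_s:.0f} vs limit {limit:.0f} T/m/s)")
```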

Thursday, February 27, 2014

Using someone else's data


There was quite a lot of activity yesterday in response to PLOS ONE's announcement regarding its data policy. Most of the discussion I saw concerned rights of use and credit, completeness of data (e.g. the need for stimulus scripts for task-based fMRI) and ethics (e.g. the need to get subjects' consent to permit further distribution of their fMRI data beyond the original purpose). I am leaving all of these very important issues to others. Instead, I want to pose a couple of questions to the fMRI community specifically, because they concern data quality and data quality is what I spend almost all of my time dealing with, directly or indirectly. Here goes.


1. Under what circumstances would you agree to use someone else's data to test a hypothesis of your own?

Possible concerns: scanner field strength and manufacturer, scan parameters, operator experience, reputation of acquiring lab.

2. What form of quality control would you insist on before relying on someone else's data?

Possible QA measures: independent verification of a simple task such as a button press response encoded in the same data, realignment "motion parameters" below/within some prior limit, temporal SNR above some prior value.
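On the tSNR measure, here's a minimal sketch of the computation I have in mind, using numpy and nibabel. The file name and the 10% masking heuristic are placeholders; a real QC pipeline would at least realign and detrend the time series first:

```python
import numpy as np
import nibabel as nib  # standard neuroimaging I/O library

# Load a 4D EPI time series (placeholder file name).
img = nib.load("epi_timeseries.nii.gz")
data = img.get_fdata()  # shape: (x, y, z, time)

# Temporal SNR per voxel: mean over time divided by std over time.
mean_t = data.mean(axis=-1)
std_t = data.std(axis=-1)
tsnr = np.where(std_t > 0, mean_t / std_t, 0.0)

# Summarize within a crude brain mask (voxels above 10% of peak mean signal).
mask = mean_t > 0.1 * mean_t.max()
print(f"Median in-mask tSNR: {np.median(tsnr[mask]):.1f}")
```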


If anyone has other questions related to data quality that I haven't covered with these two, please let me know and I'll update the post. Until then I'll leave you with a couple of loaded comments. I wouldn't trust anyone's data unless I knew the scanner operator personally and knew first-hand that they had excellent standard operating procedures, a.k.a. excellent experimental technique. Furthermore, I wouldn't trust realignment algorithm reports (so-called motion parameters) as a reliable proxy for data quality, in the way that chemicals have purity values, for instance. Reducing motion to a single summary value - "My motion is less than 0.5 mm over the entire run!" - is especially nonsensical in my opinion, considering that the typical voxel resolution exceeds 2 mm on a side. Okay, discuss.


UPDATE 13:35 PST

Someone just alerted me to the issue of data format. Raw? Filtered? And what about custom file types? One might expect to get image domain data, perhaps limited to the magnitude images that 99.9% of folks use. So, a third question is this: What data format(s) would you consider (un)acceptable for sharing, and why?

Tuesday, January 28, 2014

Partial Fourier versus GRAPPA for increasing EPI slice coverage


This is the final post in a short series concerning partial Fourier EPI for fMRI. The previous post showed how partial Fourier phase encoding can accelerate the slice acquisition rate for EPI. It is possible, in principle, to omit as much as half the phase encode data, but for practical reasons the omission is generally limited to around 25% before image artifacts - mainly enhanced regional dropout - make the speed gain too costly for fMRI use. Omitting 25% of the phase encode sampling allows a slice rate acceleration of up to about 20%, depending on whether the early or the late echoes are omitted and whether other timing parameters, most notably the TE, are changed in concert.

But what other options do you have for gaining approximately 20% more slices in a fixed TR? A common tactic for reducing the amount of phase-encoded data is to use an in-plane parallel imaging method such as SENSE or GRAPPA. Now, I've written previously about the motion sensitivity of parallel imaging methods for EPI, in particular the motion sensitivity of GRAPPA-EPI, which is the preferred parallel imaging method on a Siemens scanner. (See posts here, here and here.) In short, the requirement to obtain a basis set of spatial information - that is, a map of the receive coil sensitivities for SENSE and a set of so-called auto-calibration scan (ACS) data for GRAPPA - means that any motion that occurs between the basis set and the current volume of (accelerated) EPI data is likely to cause some degree of mismatch that will result in artifacts. Precisely how and where the artifacts will appear, their intensity, etc. will depend on the type of motion that occurs, whether the subject's head returns to the initial location, and so on. Still, it behooves us to check whether parallel imaging might be a better option for accelerating slice coverage than partial Fourier.


Deciding what to compare

Disclaimer: As always with these throwaway comparisons, use what you see here as a starting point for thinking about your options and perhaps determining your own set of pilot experiments. It is not the final word on either partial Fourier or GRAPPA! It is just one worked example.

Okay, so what should we look at? In selecting 6/8ths partial Fourier it appears that we can get about 15-20% more slices for a fixed TR. It turns out that this gain is comparable to using GRAPPA with R=2 acceleration at the same TE. To keep things manageable - a five-way comparison is a sod to illustrate - I am going to drop the low-resolution 64x48 full Fourier EPI that featured in the last post in favor of the R=2 GRAPPA-EPI that we're now interested in. For the sake of this comparison I'm assuming that we have decided to go with either pF-EPI or GRAPPA, but you should note that the 64x48 full Fourier EPI remains an option for you in practice. (Download all the data here to perform your own comparisons!)

I will retain the original 64x64 full Fourier EPI as our "gold standard" for image quality, as well as the two pF-EPI variants, yielding a new four-way comparison: 64x64 full Fourier EPI, 6/8pF(early), 6/8pF(late), and GRAPPA with R=2. Partial Fourier nomenclature is as used previously. All parameters except the specific phase encode sampling schemes were held constant. Data were collected on a Siemens TIM/Trio with a 12-channel head coil, TR = 2000 ms, TE = 22 ms, FOV = 224 mm x 224 mm, slice thickness = 3 mm, inter-slice gap = 0.3 mm, echo spacing = 0.5 ms, bandwidth = 2232 Hz/pixel, flip angle = 70 deg. Each EPI was reconstructed as a 64x64 matrix regardless of how much actual k-space was acquired. Partial Fourier schemes used zero filling prior to 2D FT. GRAPPA reconstruction was performed on the scanner with the default vendor reconstruction program. (Siemens users, see Note 1.)
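If you download the data and want to mimic the zero-filled reconstruction offline, here's a minimal numpy sketch of a 6/8 partial Fourier recon. The synthetic k-space is a stand-in for real data, and the scanner's actual pipeline includes steps (regridding, phase correction, etc.) that are omitted here:

```python
import numpy as np

# Synthetic stand-in for a fully sampled 64x64 k-space matrix.
rng = np.random.default_rng(0)
kspace_full = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

# 6/8 partial Fourier along the phase-encode axis (axis 0): keep 48 of 64
# lines and zero-fill the rest prior to the 2D FT, as in the comparison above.
kspace_pf = np.zeros_like(kspace_full)
kspace_pf[:48, :] = kspace_full[:48, :]   # the "late" lines omitted in this variant

def recon(k):
    """Magnitude image via 2D inverse FFT; fftshifts handle k-space centering."""
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k))))

img_full = recon(kspace_full)
img_pf = recon(kspace_pf)
print("RMS difference:", np.sqrt(np.mean((img_full - img_pf) ** 2)))
```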


Thursday, December 19, 2013

Using partial Fourier EPI for fMRI


Back in August I did a post on the experimental consequences of using partial Fourier for EPI. (An earlier post, PFUFA Part Fourteen, introduces partial Fourier EPI.) The main point of that post was to demonstrate how, with all other parameters fixed, there are two principal effects on an EPI obtained with partial Fourier (pF) compared to using full phase encoding: global image smoothing, and regionally enhanced signal dropout. (See Note 1.)

In this post I want to look a little more closely at how pF-EPI works in practice, on a brain, with fMRI as the intended application, and to consider what other parameter options we have once we select pF over full k-space. I'll do two sets of comparisons. In the first comparison all parameters except the phase encoding k-space fraction will be fixed so that we can again consider the first-stage consequences of using pF. In the second comparison each pF-EPI scheme will be optimized in a "maximum performance" test. The former is an apples-to-apples comparison, with essentially one variable changing at a time, whereas the latter is how you would ordinarily want to consider the pF options available to you.


Why might we want to consider partial Fourier EPI for fMRI anyway?

If we assume a typical in-plane matrix of 64 x 64 pixels, an echo spacing (the time for each phase-encoded gradient echo in the train, as explained in PFUFA Part Twelve) of 0.5 ms and a TE of 30 ms for BOLD contrast, then it takes approximately 61 ms to acquire each EPI slice. (See Note 2 for the details.) The immediate consequence should be obvious: at 61 ms per slice we will be limited to 32 slices in a TR of 2000 ms. If the slice thickness is 3 mm then the total brain coverage in the slice dimension will be ~106 mm, assuming a 10% nominal inter-slice gap (i.e. 32 x 3.3 mm slices). With axial slices we aren't going to be able to cover the entire adult brain. We will have to omit either the top of the parietal lobes or the bottom of the temporal lobes, midbrain, OFC and cerebellum. Judicious tilting might be able to capture all of the regions of primary interest to you, but we either need to reduce the time taken per slice or increase the TR to cover the entire brain.
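Here's the back-of-envelope arithmetic as a sketch. The overhead term (excitation, fat saturation, spoiling) is an assumed value chosen to reproduce the ~61 ms per-slice figure; see Note 2 for the proper accounting:

```python
# Back-of-envelope EPI slice budget. The overhead term is an assumption
# chosen to match the ~61 ms per-slice figure quoted above.
matrix = 64          # phase-encode lines
esp_ms = 0.5         # echo spacing (ms)
te_ms = 30.0         # TE for BOLD contrast (ms)
overhead_ms = 15.0   # assumed: excitation, fat sat, spoilers, etc.

# With the TE at the center of k-space, the readout continues for half the
# echo train after the TE:
slice_ms = te_ms + (matrix / 2) * esp_ms + overhead_ms   # = 61 ms
tr_ms = 2000.0
n_slices = int(tr_ms // slice_ms)                        # 32 slices

thickness_mm = 3.0
gap_fraction = 0.10    # 10% nominal inter-slice gap
coverage_mm = n_slices * thickness_mm * (1 + gap_fraction)   # ~106 mm
print(f"{slice_ms:.0f} ms/slice -> {n_slices} slices, {coverage_mm:.0f} mm coverage")
```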

Partial Fourier is one way to reduce the time spent acquiring each EPI slice. There are two basic ways to approach it: eliminate either the early echoes or the late echoes in the echo train, as described at the end of PFUFA: Part Fourteen. Eliminating the early echoes doesn't, by itself, save any time at all. Only if the TE is reduced in concert is there any time saving. But omitting the late echoes will mean that we complete the data acquisition for the current slice earlier than we would for full Fourier sampling, hence there is some intrinsic speed benefit. I'll come back to the time savings and their consequences later on. Let's first look at what happens when we enable partial Fourier without changing anything else.

Wednesday, November 27, 2013

CALAMARI: Doing MRI at 130 microtesla with a SQUID


I've been dabbling in some ultralow field (ULF) MRI over the past several years, trying first to get functional brain imaging to work (more on that another day, perhaps) and more recently looking at the contrast properties of normal and diseased brains. We detect MR signals at less than three times the earth's magnetic field (approximately 50 microtesla) using an ultra-sensitive superconducting quantum interference device (SQUID). The system is usually referred to as "The Cube" on account of the large aluminum box surrounding the entire apparatus; it provides magnetic shielding for the SQUID. But my own nickname for the system is CALAMARI - the CAL Apparatus for MAgnetic Resonance Imaging. Deep-fried rings or grilled strips, it's all good. Anyway, should you wish to know more about this home-built system and what it might be able to do, there's a new paper (John Clarke's inaugural article after being elected to the NAS) now out in PNAS. At some point I'll put up more blog posts on both anatomical and functional ULFMRI, and go over some of the work that's being done at high fields (1.5+ T) that may be relevant to ULFMRI.





Wednesday, September 18, 2013

i-fMRI: BRAIN scanners of the past, present and future


Have you ever wondered why your fMRI scanner is the way it is? Why, for example, is the magnet typically operated at 1.5 or 3 T, and why is there a body-sized transmission coil for the RF? The prosaic answer to these questions is the same: it's what's for sale. We are fortunate that MRI is a cardinal method for radiology, and this clinical utility means that large medical device companies have invested hundreds of millions of dollars (and other currencies) into its development. The hardware and pulse sequences required to do fMRI research aren't fundamentally different from those required to do radiological MRI so we get to use a medical device as a scientific instrument with relative ease.

But what would our fMRI scanners look like today had they been developed as dedicated scientific instruments, with little or no application to something as lucrative as radiology? Surely the scanner-as-research-device would differ in some major ways from that which is equally at home in the hospital or the laboratory. Or would it? While it's clear that the fMRI revolution of the past twenty years has ridden piggyback on the growing clinical importance of diffusion and other advanced anatomical imaging techniques, what's less obvious is the impact of these external factors on how we conduct functional neuroimaging today. State-of-the-art fMRI might have looked quite different had we been forced to develop scanners explicitly for neuroscience.


"I wouldn't start from here, mate."

This week's interim report from the BRAIN Initiative's working group is an opportunity for all of us involved in fMRI to think seriously about our tools. We've come a long way with BOLD contrast to be sure, even though we don't fully understand its origins or its complexities. Should I be delighted or frustrated at my capacity to operate a push-button clinical machine at 3 T in order to get this stuff to work? It's undoubtedly convenient, but at what cost to science?

I can't help but wonder what my fMRI scanner might look like if it was designed specifically for the task. Would the polarizing magnet be horizontal, or would a subject sit on a chair in a vertical bore? How large would the polarizing magnet be, and what would be its field strength? The gradient set specifications? And finally, if I'm not totally sold on BOLD contrast as my reporting mechanism for neural activity, what sort of signal do I really want? In all cases I am especially interested in why I should prefer one particular answer over the other alternatives.

Note that I'm not suggesting we all dream of voltage-sensitive contrast agents. That's the point of the BRAIN Initiative according to my reading of it. All I'm suggesting is that we spend a few moments considering what we are currently doing, and whether there might be a better way. Unless there has been a remarkable set of coincidences over the last two decades, the chances are good that an fMRI scanner designed specifically for science would have differed in some major ways from the refined medical device that presently occupies my basement lab. There would be more duct tape for a start.