Monday, June 2, 2014
For such a short abbreviation, QA sure is a huge, lumbering beast of a topic. Even the definition is complicated! It turns out that many people, myself included, invoke one term when they may mean another. Specifically, quality assurance (QA) is different from quality control (QC). This website has a side-by-side comparison if you want to try to understand the distinction. I read the definitions and I'm still lost. Anyway, I think it means that you, as an fMRIer, are primarily interested in QA, whereas I, as a facility manager, am primarily interested in QC. Whatever. Let's just lump it all into the "QA" bucket and get down to practical matters. And as a practical matter, you want to know that all is well when you scan, whereas I want to know what is breaking or broken so that I can get it fixed before your next scan.
The disparate aims of QA procedures
The first critical step is to know what you're doing and why you're doing it. This implies being aware of what you don't want to do. QA is always a compromise. You simply cannot measure everything at every point during the day, every day. Your bespoke solution(s) will depend on issues such as the types of studies being conducted on your scanner, the sophistication of your scanner operators, how long your scanner has been installed, and your scanner's maintenance history. If you think of your scanner as a car then you can make some simple analogies. Aggressive or cautious drivers? Long or short journeys? Fast or slow traffic? Good or bad roads? New car with routine preventative maintenance by the vendor, or used car taken to a mechanic only when it starts smoking or making a new noise?
Saturday, April 26, 2014
On Tuesday I became involved in a discussion about data sharing with JB Poline and Matthew Brett. Two days later the issue came up again, this time on Twitter. In both discussions I heard a lot of frustration with the status quo, but I also heard aspirations for a data nirvana where everything is shared willingly and no data set is ever more than a couple of clicks away. What was absent from the conversations, it seemed to me, were reasonable, practical ways to improve our lot.* It got me thinking about the present ways we do business, and in particular where the incentives and the impediments can be found.
Now, it is undoubtedly the case that some scientists are more amenable to sharing than others. (Turns out scientists are humans first! Scary, but true.) Some scientists can be downright obdurate when faced with a request to make their data public. In response, a few folks in the pro-sharing camp have suggested that we lean on those who drag their feet, especially where individuals have previously agreed to share data as a condition of publishing in a particular journal: name and shame. It could work, but I'm not keen on this approach for a couple of reasons. Firstly, it makes the task personal, which means it could mutate into outright war that extends far beyond the issue at hand and could have wide-ranging consequences for the combatants. Secondly, the number of targets is large, meaning that the process would be time-consuming.
Where might pressure be applied most productively?
Tuesday, April 1, 2014
Disclaimer: This isn't an April Fool!
I'd like to use the collective wisdom of the Internet to discuss the pros and cons of a general approach to simultaneous multislice (SMS) EPI that I've been thinking about recently, before anyone wastes time doing any actual programming or data acquisition.
Multi-echo EPI for de-noising fMRI data
There has been quite a lot of interest in using multi-echo EPI to characterize and de-noise time series data, e.g.
These methods rest on one critical aspect: they use in-plane parallel imaging (GRAPPA or SENSE, usually depending on the scanner vendor) to render the per-slice acquisition time reasonable. For example, with R=2 acceleration it's possible to get three echo planar images per slice at TEs of around 15, 40 and 60 ms. The multiple echoes can then be used to separate BOLD from non-BOLD signal variations, etc.
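To make the idea concrete, here is a minimal sketch of the kind of per-voxel fit that multi-echo methods build on: with three (or more) magnitude samples of the decay curve you can estimate S0 and T2* by a log-linear fit, and fluctuations can then be attributed to S0-like (non-BOLD) or T2*-like (BOLD) origins. This is an illustration only, not any published pipeline; the TE values match the example above, but the mono-exponential model and the synthetic data are my assumptions.

```python
# A minimal sketch: per-voxel S0 and T2* from multi-echo magnitude data
# via a log-linear fit. Illustrative only - not a published pipeline.
import numpy as np

tes_ms = np.array([15.0, 40.0, 60.0])           # echo times from the example above

def fit_s0_t2star(echoes, tes):
    """echoes: shape (n_echoes, n_voxels), magnitude signals.
    Fit ln(S) = ln(S0) - TE/T2* by least squares at each voxel."""
    logs = np.log(np.maximum(echoes, 1e-6))     # guard against log(0)
    A = np.column_stack([np.ones_like(tes), -tes])  # columns: [ln(S0), 1/T2*]
    coeffs, *_ = np.linalg.lstsq(A, logs, rcond=None)
    s0 = np.exp(coeffs[0])
    t2star = 1.0 / np.maximum(coeffs[1], 1e-6)  # ms; clip non-physical fits
    return s0, t2star

# Synthetic stand-in for one slice: 3 echoes x 1000 voxels
rng = np.random.default_rng(0)
true_t2s = rng.uniform(30.0, 50.0, size=1000)   # ms, grey-matter-ish values
signal = 1000.0 * np.exp(-tes_ms[:, None] / true_t2s[None, :])
s0, t2s = fit_s0_t2star(signal + rng.normal(0.0, 2.0, signal.shape), tes_ms)
print(f"median fitted T2* = {np.median(t2s):.1f} ms")
```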
The immediate problem with this scheme is that the per-slice acquisition time is still a lot longer than for normal EPI, meaning less brain coverage. The suggestion has been to use MB/SMS to regain speed in the slice dimension. This results in the combination of MB/SMS in the slice dimension and GRAPPA/SENSE in-plane, thereby complicating the reconstruction, possibly (probably) amplifying artifacts, enhancing motion sensitivity, etc. If we could eliminate the in-plane parallel imaging and do all the acceleration through MB/SMS then that might reduce some of the artifact amplification, might simplify (slightly) the necessary reference data, etc.
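Some back-of-the-envelope timing shows why the in-plane acceleration is there in the first place. The matrix size and echo spacing below are my assumptions for illustration, not anyone's published protocol:

```python
# Why multi-echo EPI leans on in-plane acceleration: successive TEs can
# be no closer than one full echo-planar readout apart. Matrix size and
# echo spacing are assumed values for illustration.
matrix, esp_ms = 64, 0.5

for r in (1, 2):                        # in-plane acceleration factor
    readout_ms = (matrix / r) * esp_ms  # duration of one EPI readout
    print(f"R={r}: each readout ~{readout_ms:.0f} ms, so successive TEs "
          f"must be >= {readout_ms:.0f} ms apart")
# R=2 gives 16 ms readouts, consistent with TEs near 15, 40 and 60 ms;
# unaccelerated 32 ms readouts can't pack three echoes in that early.
```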
A different approach?
Thursday, March 13, 2014
When running fMRI experiments it's not uncommon for the scanner to prohibit what you'd like to do because of a gradient stimulation limit. You may even hit the limit "out of the blue," e.g. when attempting an oblique slice prescription for a scan protocol that has run just fine for you in the past. I'd covered the anisotropy of the gradient stimulation limit as a footnote in an old post on coronal and sagittal fMRI, but it's an issue that causes untold stress and confusion when it happens so I decided to make a dedicated post.
Some of the following is taken from Siemens manuals but the principles apply to all scanners. There may be vendor-specific differences in the way the safety checking is computed, however. Check your scanner manuals for details on the particular implementation of stimulus monitoring on your scanner.
According to Siemens, then:
The scanner monitors the physiological effects of the gradients and prohibits you from initiating scans that would exceed some predefined thresholds. On a Siemens scanner the limits are established according to two models used simultaneously.
The scanner computes the expected stimulation that will arise from the gradient waveforms in the sequence you are attempting to run. If either model suggests that a limit will be exceeded, you get an error message. I'll note here that the scanner also monitors in real time the actual gradients being played out, in case some sort of fault occurs with the gradient control.
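I don't have access to the vendors' actual algorithms, but conceptually the pre-scan check works something like the sketch below: take the gradient waveform the sequence intends to play, compute its slew rate (which drives dB/dt and hence the induced electric fields), and compare a stimulation metric against a threshold. Everything here is invented for illustration - the threshold numbers, the simple rheobase/chronaxie-style form, the single-axis treatment - and real monitors are more sophisticated and axis-dependent (hence the anisotropy mentioned above).

```python
# Conceptual sketch of a pre-scan stimulation check. NOT any vendor's
# implementation: thresholds and the rheobase/chronaxie-style form are
# invented for illustration, and real monitors treat each gradient axis
# with its own limits.
import numpy as np

def stimulation_ok(grad_mT_per_m, dt_s, rheobase=20.0, chronaxie_s=360e-6):
    """Flag a single-axis gradient waveform if its slew rate exceeds a
    duration-dependent threshold: allowed = rheobase * (1 + chronaxie/dt).
    Crude: treats each sample interval as the stimulus duration."""
    slew = np.abs(np.diff(grad_mT_per_m)) / 1000.0 / dt_s   # T/m/s
    allowed = rheobase * (1.0 + chronaxie_s / dt_s)
    return slew.max() <= allowed, slew.max(), allowed

# Trapezoidal gradient lobe: ramp to 40 mT/m in 0.2 ms, hold, ramp down
dt = 10e-6                                     # 10 us raster time
ramp = np.linspace(0.0, 40.0, 20)
lobe = np.concatenate([ramp, np.full(30, 40.0), ramp[::-1]])

ok, peak, limit = stimulation_ok(lobe, dt)
print(f"peak slew {peak:.0f} T/m/s vs limit {limit:.0f} T/m/s -> "
      f"{'scan allowed' if ok else 'scan prohibited'}")
```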
Thursday, February 27, 2014
There was quite a lot of activity yesterday in response to PLOS ONE's announcement regarding its data policy. Most of the discussion I saw concerned rights of use and credit, completeness of data (e.g. the need for stimulus scripts for task-based fMRI) and ethics (e.g. the need to get subjects' consent to permit further distribution of their fMRI data beyond the original purpose). I am leaving all of these very important issues to others. Instead, I want to pose a couple of questions to the fMRI community specifically, because they concern data quality and data quality is what I spend almost all of my time dealing with, directly or indirectly. Here goes.
1. Under what circumstances would you agree to use someone else's data to test a hypothesis of your own?
Possible concerns: scanner field strength and manufacturer, scan parameters, operator experience, reputation of acquiring lab.
2. What form of quality control would you insist on before relying on someone else's data?
Possible QA measures: independent verification of a simple task such as a button press response encoded in the same data, realignment "motion parameters" below/within some prior limit, temporal SNR above some prior value.
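To put numbers on the last of those measures, here is a minimal sketch of a temporal SNR map: the voxelwise temporal mean divided by the temporal standard deviation. The file name is hypothetical and it assumes motion-corrected 4D magnitude data in NIfTI format, loaded with nibabel; the crude intensity-based mask and the acceptance test are likewise my assumptions.

```python
# Minimal temporal SNR (tSNR) map: voxelwise mean over time divided by
# standard deviation over time. File name is hypothetical; assumes a
# motion-corrected 4D magnitude NIfTI.
import nibabel as nib
import numpy as np

img = nib.load("epi_run1_moco.nii.gz")         # hypothetical file
data = img.get_fdata()                         # shape (x, y, z, t)

mean_t = data.mean(axis=-1)
std_t = data.std(axis=-1)
tsnr = mean_t / np.maximum(std_t, 1e-6)

# One possible acceptance test: median tSNR within a crude brain mask
mask = mean_t > 0.2 * mean_t.max()
print(f"median in-mask tSNR: {np.median(tsnr[mask]):.1f}")

nib.save(nib.Nifti1Image(tsnr.astype(np.float32), img.affine), "tsnr.nii.gz")
```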
If anyone has other questions related to data quality that I haven't covered with these two, please let me know and I'll update the post. Until then I'll leave you with a couple of loaded comments. I wouldn't trust anyone's data unless I knew the scanner operator personally and knew first-hand that they had excellent standard operating procedures, a.k.a. excellent experimental technique. Furthermore, I wouldn't trust realignment algorithm reports (so-called motion parameters) as a reliable proxy for data quality in the same way that chemicals have purity values, for instance. Reducing motion to a single summary value - "My motion is less than 0.5 mm over the entire run!" - is especially nonsensical in my opinion, considering that typical voxels are more than 2 mm on a side. Okay, discuss.
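For what it's worth, even staying within the realignment-parameter world, a volume-by-volume trace is more informative than one summary number. Here's a sketch in the spirit of framewise displacement: the six-column file layout (three translations in mm, then three rotations in radians) and the file name are assumptions, so check your own software's conventions.

```python
# Volume-by-volume framewise displacement (FD) from realignment
# parameters. Column order (3 translations in mm, then 3 rotations in
# radians) and the file name are assumptions - check your software.
import numpy as np

params = np.loadtxt("rp_epi_run1.txt")   # hypothetical file, shape (t, 6)
trans, rot = params[:, :3], params[:, 3:]
rot_mm = rot * 50.0                      # radians -> mm on a 50 mm sphere

diffs = np.abs(np.diff(np.hstack([trans, rot_mm]), axis=0))
fd = diffs.sum(axis=1)                   # one value per volume transition

print(f"max FD {fd.max():.2f} mm, mean FD {fd.mean():.2f} mm; "
      f"{(fd > 0.5).sum()} transitions exceed 0.5 mm")
```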
UPDATE 13:35 PST
Someone just alerted me to the issue of data format. Raw? Filtered? And what about custom file types? One might expect to get image domain data, perhaps limited to the magnitude images that 99.9% of folks use. So, a third question is this: What data format(s) would you consider (un)acceptable for sharing, and why?
Tuesday, January 28, 2014
This is the final post in a short series concerning partial Fourier EPI for fMRI. The previous post showed how partial Fourier phase encoding can accelerate the slice acquisition rate for EPI. It is possible, in principle, to omit as much as half the phase encode data, but for practical reasons the omission is generally limited to around 25% before image artifacts - mainly enhanced regional dropout - make the speed gain too costly for fMRI use. Omitting 25% of the phase encode sampling allows a slice rate acceleration of up to about 20%, depending on whether the early or the late echoes are omitted and whether other timing parameters, most notably the TE, are changed in concert.
But what other options do you have for gaining approximately 20% more slices in a fixed TR? A common tactic for reducing the amount of phase-encoded data is to use an in-plane parallel imaging method such as SENSE or GRAPPA. Now, I've written previously about the motion sensitivity of parallel imaging methods for EPI, in particular the motion sensitivity of GRAPPA-EPI, which is the preferred parallel imaging method on a Siemens scanner. (See posts here, here and here.) In short, the requirement to obtain a basis set of spatial information - that is, a map of the receive coil sensitivities for SENSE and a set of so-called auto-calibration scan (ACS) data for GRAPPA - means that any motion that occurs between the basis set and the current volume of (accelerated) EPI data is likely to cause some degree of mismatch that will result in artifacts. Precisely how and where the artifacts will appear, their intensity, etc. will depend on the type of motion that occurs, whether the subject's head returns to the initial location, and so on. Still, it behooves us to check whether parallel imaging might be a better option for accelerating slice coverage than partial Fourier.
Deciding what to compare
Disclaimer: As always with these throwaway comparisons, use what you see here as a starting point for thinking about your options and perhaps determining your own set of pilot experiments. It is not the final word on either partial Fourier or GRAPPA! It is just one worked example.
Okay, so what should we look at? In selecting 6/8ths partial Fourier it appears that we can get about 15-20% more slices for a fixed TR. It turns out that this gain is comparable to using GRAPPA with R=2 acceleration at the same TE. To keep things manageable - a five-way comparison is a sod to illustrate - I am going to drop the low-resolution 64x48 full Fourier EPI that featured in the last post in favor of the R=2 GRAPPA-EPI that we're now interested in. For the sake of this comparison I'm assuming that we have decided to go with either pF-EPI or GRAPPA, but you should note that the 64x48 full Fourier EPI remains an option for you in practice. (Download all the data here to perform your own comparisons!)
I will retain the original 64x64 full Fourier EPI as our "gold standard" for image quality as well as the two pF-EPI variants, yielding a new four-way comparison: 64x64 full Fourier EPI, 6/8pF(early), 6/8pF(late), and GRAPPA with R=2. Partial Fourier nomenclature is as used previously. All parameters except the specific phase encode sampling schemes were held constant. Data were collected on a Siemens TIM/Trio with a 12-channel head coil: TR = 2000 ms, TE = 22 ms, FOV = 224 mm x 224 mm, slice thickness = 3 mm, inter-slice gap = 0.3 mm, echo spacing = 0.5 ms, bandwidth = 2232 Hz/pixel, flip angle = 70 deg. Each EPI was reconstructed as a 64x64 matrix regardless of how much k-space was actually acquired. Partial Fourier schemes used zero filling prior to 2D FT. GRAPPA reconstruction was performed on the scanner with the default vendor reconstruction program. (Siemens users, see Note 1.)
Thursday, December 19, 2013
Back in August I did a post on the experimental consequences of using partial Fourier for EPI. (An earlier post, PFUFA Part Fourteen, introduces partial Fourier EPI.) The main point of that post was to demonstrate how, with all other parameters fixed, there are two principal effects on an EPI obtained with partial Fourier (pF) compared to using full phase encoding: global image smoothing, and regionally enhanced signal dropout. (See Note 1.)
In this post I want to look a little more closely at how pF-EPI works in practice, on a brain, with fMRI as the intended application, and to consider what other parameter options we have once we select pF over full k-space. I'll do two sets of comparisons. In the first comparison all parameters except the phase encoding k-space fraction will be fixed so that we can again consider the first stage consequences of using pF. In the second comparison each pF-EPI scheme will be optimized in a "maximum performance" test. The former is an apples to apples comparison, with essentially one variable changing at a time, whereas the latter is how you would ordinarily want to consider the pF options available to you.
Why might we want to consider partial Fourier EPI for fMRI anyway?
If we assume a typical in-plane matrix of 64 x 64 pixels, an echo spacing (the time for each phase-encoded gradient echo in the train, as explained in PFUFA Part Twelve) of 0.5 ms and a TE of 30 ms for BOLD contrast, then it takes approximately 61 ms to acquire each EPI slice. (See Note 2 for the details.) The immediate consequence should be obvious: at 61 ms per slice we will be limited to 32 slices in a TR of 2000 ms. If the slice thickness is 3 mm then the total brain coverage in the slice dimension will be ~106 mm, assuming a 10% nominal inter-slice gap (i.e. 32 x 3.3 mm slices). With axial slices we aren't going to be able to cover the entire adult brain. We will have to omit either the top of the parietal lobes or the bottom of the temporal lobes, midbrain, OFC and cerebellum. Judicious tilting might be able to capture all of the regions of primary interest to you, but we either need to reduce the time taken per slice or increase the TR to cover the entire brain.
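The arithmetic is simple enough to script. In this sketch the per-slice overhead (excitation, fat saturation, spoiling) is my assumption, chosen so the total lands near the 61 ms just quoted; see Note 2 for the real breakdown.

```python
# Slice-coverage arithmetic for 64x64 EPI with 0.5 ms echo spacing and
# TE = 30 ms. The 15 ms per-slice overhead (excitation, fat saturation,
# spoiling) is an assumed value chosen to match the ~61 ms in the text.
matrix, esp_ms, te_ms = 64, 0.5, 30.0
overhead_ms = 15.0

readout_ms = matrix * esp_ms                       # 32 ms echo train
slice_ms = te_ms + readout_ms / 2 + overhead_ms    # ~61 ms per slice

tr_ms, thick_mm, gap_frac = 2000.0, 3.0, 0.10
n_slices = int(tr_ms // slice_ms)                  # 32 slices in TR = 2 s
coverage_mm = n_slices * thick_mm * (1 + gap_frac) # ~106 mm

print(f"{slice_ms:.0f} ms/slice -> {n_slices} slices, "
      f"~{coverage_mm:.0f} mm coverage")
```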
Partial Fourier is one way to reduce the time spent acquiring each EPI slice. There are two basic ways to approach it: eliminate either the early echoes or the late echoes in the echo train, as described at the end of PFUFA: Part Fourteen. Eliminating the early echoes doesn't, by itself, save any time at all. Only if the TE is reduced in concert is there any time saving. But omitting the late echoes will mean that we complete the data acquisition for the current slice earlier than we would for full Fourier sampling, hence there is some intrinsic speed benefit. I'll come back to the time savings and their consequences later on. Let's first look at what happens when we enable partial Fourier without changing anything else.
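Extending the little calculator above, here's a sketch of the timing logic for the two omission schemes, using 6/8 partial Fourier as the example and the same assumed 15 ms overhead. It reproduces the reasoning just given: early omission saves nothing unless the TE comes down with it, while late omission shortens the readout tail directly.

```python
# Per-slice timing for full Fourier vs 6/8 partial Fourier with early-
# or late-echo omission. The 15 ms overhead is the same assumed value
# as in the sketch above.
esp_ms, overhead_ms = 0.5, 15.0

def slice_time(te_ms, lines_before_center, lines_after_center):
    # Time from excitation to end of readout, plus overhead. The TE
    # must be long enough to accommodate the echoes before the center.
    assert te_ms >= lines_before_center * esp_ms
    return te_ms + lines_after_center * esp_ms + overhead_ms

cases = {
    "full 64, TE=30":          slice_time(30.0, 32, 32),  # ~61 ms
    "6/8 late, TE=30":         slice_time(30.0, 32, 16),  # readout ends early
    "6/8 early, TE=30":        slice_time(30.0, 16, 32),  # no saving at all
    "6/8 early, TE cut to 22": slice_time(22.0, 16, 32),  # saving via lower TE
}
for name, t in cases.items():
    print(f"{name:>24}: {t:.0f} ms/slice")
```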