Wednesday, October 1, 2014

i-fMRI: My initial thoughts on the BRAIN Initiative proposals


So we finally have some grant awards on which to judge the BRAIN Initiative. What was previously a rather vague outline of some distant, utopian future can now be scrutinized for novelty, practicality, capability, etc. Let's begin!

The complete list of awards across six different sections is here. The Next Generation Human Imaging section has selected nine diverse projects to lead us into the future. Here are my thoughts (see Note 1), based mostly on the abstracts of these successful proposals.

Friday, August 15, 2014

QA for fMRI, Part 3: Facility QA - what to measure, when, and why


As I mentioned in the introductory post to this series, Facility QA is likely what most people think of whenever QA is mentioned in an fMRI context. In short, it's the tests that you expect your facility technical staff to be doing to ensure that the scanner is working properly. Other tests may verify performance - I'll cover some examples in future posts on Study QA - but the idea with Facility QA is to catch and then diagnose any problems.

We can't just focus on stress tests, however. We will often need more than MRI-derived measures if we want to diagnose problems efficiently. We may need information that might seem tangential to the actual QA testing, but these ancillary measures provide context for interpreting the test data. A simple example? The weather outside your facility. Why should you care? We'll get to that.


An outline of the process

Let's outline the steps in a comprehensive Facility QA routine and then we can get into the details:

  • Select an RF coil to use for the measurements. 
  • Select an appropriate phantom.
  • Decide what to measure from the phantom.
  • Determine what other data to record at the time of the QA testing.
  • Establish a baseline.
  • Make periodic QA measurements.
  • Look for deviations from the baseline, and decide what sort of deviations warrant investigation.
  • Establish procedures for whenever deviations from "normal" occur.
  • Review the QA procedure's performance whenever events (failures, environment changes, upgrades) occur, and at least annually.

In this post I'll deal with the first six items on the list - setting up and measuring - and I'll cover analysis of the test results in subsequent posts.
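
To make the baseline and deviation steps concrete, here's a minimal Python sketch of the kind of check I have in mind, assuming each QA run has been reduced to a single summary number (image SNR from a fixed phantom ROI, say). The values, the five-run baseline and the 3 SD threshold are all illustrative assumptions, not a prescription:

```python
import numpy as np

# Hypothetical QA history: one summary value per run, e.g. image SNR
# computed from a fixed ROI in the phantom.
snr_history = np.array([312.0, 309.5, 314.2, 311.8, 308.9, 310.4, 295.1])

# Establish the baseline from an initial block of "known good" runs.
baseline = snr_history[:5]
mean, sd = baseline.mean(), baseline.std(ddof=1)

# Flag any later measurement more than 3 SD from the baseline mean.
for i, snr in enumerate(snr_history[5:], start=5):
    if abs(snr - mean) > 3 * sd:
        print(f"Run {i}: SNR {snr:.1f} deviates from baseline "
              f"({mean:.1f} +/- {sd:.1f}) - investigate!")
```

In practice you'd track several such metrics per run (SNR, ghosting, drift, and so on) and log the ancillary data - room temperature, humidity, even the weather - alongside them, so that any flagged deviation already has its context attached.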

Tuesday, July 29, 2014

Free online fMRI education!


UCLA has their excellent summer Neuroimaging Training Program (NITP) going on as I type. Most talks are streamed live, or you can watch the videos at your leisure. Slides may also be available. Check out the schedule here.

I am grateful to Lauren Atlas for tweeting about the NIH's summer fMRI course. It's put together by Peter Bandettini's FMRI Core Facility (FMRIF). It started in early June and runs to early September, with 3-4 lectures a week. The schedule is here. Videos and slides are available a few days after each talk.

Know of others? Feel free to share by commenting!

Saturday, July 26, 2014

QA for fMRI, Part 2: User QA


Motivation

The majority of "scanner issues" are created by routine operation, most likely through error or omission. In a busy center with harried scientists who are invariably running late, there is a tendency to rush procedures and cut corners. This is where a simple QA routine - something that can be run quickly by anyone - can pay huge dividends, perhaps allowing rapid diagnosis of a problem and permitting a scan to proceed after just a few minutes' extra effort.

Here are a few examples to get you thinking about the sorts of common problems that might be caught by a simple test of the scanner's configuration - what I call User QA. Did the scanner boot properly, or have you introduced an error by doing something before the boot process completed? You've plugged in a head coil, but have you done it properly? And what about the magnetic particles that get tracked into the bore - might they have become lodged in a critical location, such as at the back of the head coil or inside one of the coil sockets? Most, if not all, of these issues should be caught with a quick test that any trained operator should be able to interpret.

User QA is, therefore, one component of a checklist that can be employed to eliminate (or permit rapid diagnosis of) some of the mistakes caused by rushing, inexperience or carelessness. At my center the User QA should be run when the scanner is first started up, prior to shut down, and whenever there is a reason to suspect the scanner might not perform as intended. It may also be used proactively by a user who wishes to demonstrate to the next user (or the facility manager!) that the scanner was left in a usable state.
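
To illustrate, here is a minimal sketch of what an automated pass/fail test might look like once baseline numbers are in hand. The coil names, SNR limits and tolerance below are invented for illustration; real limits would come from your own scanner's history:

```python
# Hypothetical per-coil SNR minima established from prior baseline scans.
EXPECTED_SNR = {"32ch_head": 280.0, "12ch_head": 190.0}

def user_qa_check(coil_name: str, measured_snr: float,
                  tolerance: float = 0.9) -> bool:
    """Return True if a quick phantom scan looks normal for this coil."""
    floor = EXPECTED_SNR[coil_name] * tolerance
    if measured_snr < floor:
        print(f"FAIL: SNR {measured_snr:.0f} below limit {floor:.0f} for "
              f"{coil_name}. Check coil seating, sockets and the bore.")
        return False
    print(f"PASS: SNR {measured_snr:.0f} within normal range for {coil_name}.")
    return True
```

The point is not the specific metric but the binary verdict: any trained operator can act on PASS/FAIL without having to interpret raw numbers.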

Monday, June 2, 2014

QA for fMRI, Part 1: An outline of the goals


For such a short abbreviation QA sure is a huge, lumbering beast of a topic. Even the definition is complicated! It turns out that many people, myself included, invoke one term when they may mean another. Specifically, quality assurance (QA) is different from quality control (QC). This website has a side-by-side comparison if you want to try to understand the distinction. I read the definitions and I'm still lost. Anyway, I think it means that you, as an fMRIer, are primarily interested in QA whereas I, as a facility manager, am primarily interested in QC. Whatever. Let's just lump it all into the "QA" bucket and get down to practical matters. And as a practical matter you want to know that all is well when you scan, whereas I want to know what is breaking/broken and then I can get it fixed before your next scan.


The disparate aims of QA procedures

The first critical step is to know what you're doing and why you're doing it. This implies being aware of what you don't want to do. QA is always a compromise. You simply cannot measure everything at every point during the day, every day. Your bespoke solution(s) will depend on such issues as: the types of studies being conducted on your scanner, the sophistication of your scanner operators, how long your scanner has been installed, and your scanner's maintenance history. If you think of your scanner like a car then you can make some simple analogies. Aggressive or cautious drivers? Long or short journeys? Fast or slow traffic? Good or bad roads? New car with routine preventative maintenance by the vendor or used car taken to a mechanic only when it starts smoking or making a new noise?

Saturday, April 26, 2014

Sharing data: a better way to go?


On Tuesday I became involved in a discussion about data sharing with JB Poline and Matthew Brett. Two days later the issue came up again, this time on Twitter. In both discussions I heard a lot of frustration with the status quo, but I also heard aspirations for a data nirvana where everything is shared willingly and no data set is ever more than a couple of clicks away. What was absent from the conversations, it seemed to me, were reasonable, practical ways to improve our lot.* It got me thinking about the present ways we do business, and in particular where the incentives and the impediments can be found.

Now, it is undoubtedly the case that some scientists are more amenable to sharing than others. (Turns out scientists are humans first! Scary, but true.) Some scientists can be downright obdurate when faced with a request to make their data public. In response, a few folks in the pro-sharing camp have suggested that we lean on those who drag their feet, especially where individuals have previously agreed to share data as a condition of publishing in a particular journal: name and shame. It could work, but I'm not keen on this approach for a couple of reasons. Firstly, it makes the task personal, which means it could mutate into outright war that extends far beyond the issue at hand and could have wide-ranging consequences for the combatants. Secondly, the number of targets is large, meaning that the process would be time-consuming.


Where might pressure be applied most productively?

Tuesday, April 1, 2014

i-fMRI: A virtual whiteboard discussion on multi-echo, simultaneous multi-slice EPI

Disclaimer: This isn't an April Fool!

I'd like to use the collective wisdom of the Internet to discuss the pros and cons of a general approach to simultaneous multislice (SMS) EPI that I've been thinking about recently, before anyone wastes time doing any actual programming or data acquisition.


Multi-echo EPI for de-noising fMRI data


These methods rest on one critical aspect: they use in-plane parallel imaging (GRAPPA or SENSE, usually depending on the scanner vendor) to render the per slice acquisition time reasonable. For example, with R=2 acceleration it's possible to get three echo planar images per slice at TEs of around 15, 40 and 60 ms. The multiple echoes can then be used to distinguish BOLD from non-BOLD signal variations, etc.

The immediate problem with this scheme is that the per slice acquisition time is still a lot longer than for normal EPI, meaning less brain coverage. The suggestion has been to use MB/SMS to regain speed in the slice dimension. This results in the combination of MB/SMS in the slice dimension and GRAPPA/SENSE in-plane, thereby complicating the reconstruction, possibly (probably) amplifying artifacts, enhancing motion sensitivity, etc. If we could eliminate the in-plane parallel imaging and do all the acceleration through MB/SMS then that would possibly reduce some of the artifact amplification and might simplify (slightly) the necessary reference data, etc.
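
For anyone unfamiliar with the multi-echo trick, a quick sketch may help. Per voxel the signal decays approximately as S(TE) = S0 * exp(-TE * R2*), so three echoes suffice to fit S0 and R2*; BOLD fluctuations are TE-dependent (they show up in R2*) while many artifacts scale the whole signal (they show up in S0). A toy log-linear fit in Python, with invented numbers:

```python
import numpy as np

# Three echo times (ms) as in the example above, and a made-up voxel
# signal at each echo.
tes = np.array([15.0, 40.0, 60.0])
signal = np.array([900.0, 420.0, 230.0])

# Log-linear least-squares fit: ln S = ln S0 - TE * R2star
slope, intercept = np.polyfit(tes, np.log(signal), 1)
s0, r2star = np.exp(intercept), -slope
print(f"S0 ~ {s0:.0f}, R2* ~ {r2star:.4f} /ms (T2* ~ {1/r2star:.1f} ms)")
```

Repeating this fit across voxels and time points is the basis for separating BOLD-like from non-BOLD signal components.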


A different approach?