Monday, June 2, 2014
QA for fMRI, Part 1: An outline of the goals
For such a short abbreviation QA sure is a huge, lumbering beast of a topic. Even the definition is complicated! It turns out that many people, myself included, invoke one term when they may mean another. Specifically, quality assurance (QA) is different from quality control (QC). This website has a side-by-side comparison if you want to try to understand the distinction. I read the definitions and I'm still lost. Anyway, I think it means that you, as an fMRIer, are primarily interested in QA whereas I, as a facility manager, am primarily interested in QC. Whatever. Let's just lump it all into the "QA" bucket and get down to practical matters. And as a practical matter you want to know that all is well when you scan, whereas I want to know what is breaking or broken so that I can get it fixed before your next scan.
The disparate aims of QA procedures
The first critical step is to know what you're doing and why you're doing it. This implies being aware of what you don't want to do. QA is always a compromise. You simply cannot measure everything at every point during the day, every day. Your bespoke solution(s) will depend on such issues as: the types of studies being conducted on your scanner, the sophistication of your scanner operators, how long your scanner has been installed, and your scanner's maintenance history. If you think of your scanner like a car then you can make some simple analogies. Aggressive or cautious drivers? Long or short journeys? Fast or slow traffic? Good or bad roads? New car with routine preventative maintenance by the vendor or used car taken to a mechanic only when it starts smoking or making a new noise?
There are three general categories of QA that are performed routinely at my center: User QA, Facility QA and Study QA. Each has a distinct goal in mind so this is how I'm going to organize the rest of the posts in this series. Here are the three broad goals:
User QA

This is a way for a user to quickly determine that the scanner is in a specific, known state. It's a bit like a pilot's pre-flight inspection. The pilot assumes the mechanics have checked the guts of the operation, but we would still like to ensure there are no obvious problems before we strap in and go.
User QA can be done after the scanner is turned on, immediately before running an experimental scan, just prior to shutting the scanner down, or at any time there is a question about the scanner's performance (or state) and the user wants to be proactive and not call for help right away.
We need to be able to run User QA frequently during the day, so the data must be collected and analyzed quickly - five minutes or less - which means it's going to have limited scope. We want to detect major issues but we're not expecting to detect subtle problems. Indeed, at my facility the main goal of User QA is to determine whether a previous user has left the scanner in a state unusable by you. And by running User QA at the end of a session a user can verify that he is leaving the scanner in a known state, so that if a problem is detected subsequently he can point to the User QA as part of his defense.
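To make the "five minutes or less" idea concrete, here is a minimal sketch of the sort of metrics a quick User QA analysis might compute from a short phantom EPI run: ROI temporal SNR and percent signal drift. This is only an illustration, not any particular facility's procedure; it assumes the time series has already been loaded as a 4D NumPy array (e.g. via nibabel), and the ROI placement and thresholds are arbitrary choices.

```python
import numpy as np

def quick_qa_metrics(ts, roi_frac=0.25):
    """Quick phantom QA sketch: ROI temporal SNR and percent drift.

    ts: 4D array (x, y, z, time) from a short phantom EPI run.
    roi_frac: fraction of the in-plane matrix used for a central ROI.
    (Illustrative only - real QA routines differ in ROI and detrending.)
    """
    nx, ny, nz, nt = ts.shape
    # Central in-plane ROI on the middle slice, well inside the phantom
    hx, hy = int(nx * roi_frac / 2), int(ny * roi_frac / 2)
    cx, cy, cz = nx // 2, ny // 2, nz // 2
    roi = ts[cx - hx:cx + hx, cy - hy:cy + hy, cz, :]

    signal = roi.mean(axis=(0, 1))  # mean ROI signal per volume
    # Remove slow drift with a 2nd-order polynomial before estimating noise
    t = np.arange(nt)
    fit = np.polyval(np.polyfit(t, signal, 2), t)
    residual = signal - fit

    tsnr = signal.mean() / residual.std()          # temporal SNR of ROI mean
    drift_pct = 100 * (fit[-1] - fit[0]) / signal.mean()
    return tsnr, drift_pct

# Demo on a synthetic "stable scanner" time series
rng = np.random.default_rng(0)
ts = 1000 + rng.normal(0, 5, size=(64, 64, 20, 100))
tsnr, drift = quick_qa_metrics(ts)
print(f"tSNR = {tsnr:.0f}, drift = {drift:.2f}%")
```

A pass/fail decision then becomes a comparison against the scanner's own historical range, e.g. flag the session if tSNR falls well below its running baseline or if drift exceeds some fraction of a percent.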
Facility QA

This is the stuff that you, as a user, expect me, as the facility physicist, to be doing in the background to ensure all is well with the scanner. It's probably what springs to mind when someone mentions "doing QA." Now, what your physicist chooses to measure (or to omit) is very much dependent on what is expected to fail, how quickly it is expected to fail, and how critical that failure might be. For example, a slow degradation of the RF amplifier is an issue that does need to be addressed, but it's unlikely to be critical for the very next fMRI session. Nine times out of ten the subject's performance will dominate subtle scanner issues. But, if there is gradient spiking then it's important to catch the issue, and address it, as soon as possible. FMRI data might escape serious damage, or it might not. Ahead of time there's probably no way to know.
Another aspect of Facility QA is being able to diagnose faults so that they can be rectified quickly. The data needed to diagnose a fault might be different from the data needed to detect it. Deriving measures from images alone may be insufficient. We may want to characterize our scanner's environment, e.g. the ambient temperature and humidity, and also know that the power delivered to the scanner isn't at the mercy of other equipment in the vicinity.
In the Facility QA, then, we are going to be recording a ton of data and we're going to tailor the measurements to our scanner's circumstances. We also need to allow some time to assess the data and perhaps repeat one or two measures if anything is questionable. Robust Facility QA will likely require an hour of scanner time. It may be feasible to run the tests daily, or you may have to accept less frequent measurements in favor of doing science experiments. You may even need to develop two or more versions of your Facility QA: one that you can do daily and a comprehensive version that happens whenever you have a big block of time. We'll discuss the "when and why" issues in the Facility QA post.
Study QA

This is the sort of QA that you may have performed for your own purposes. Brain imaging data destined for analysis - your experimental data, in other words - may be subjected to testing for the presence or absence of certain features, e.g. using summary statistical images or time series diagnostics to detect artifacts or to ensure signal above some threshold. The quality of your fMRI data is checked before deciding whether to process it further, or discard it.
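As one hedged example of what a time-series diagnostic might look like, the sketch below flags suspect volumes by computing the scaled variance of successive volume differences, in the spirit of tools like tsdiffana or AFNI's 3dToutcount. The threshold and the synthetic data are illustrative assumptions, not a recommended recipe.

```python
import numpy as np

def timeseries_diagnostics(ts, spike_z=4.0):
    """Flag suspect volumes in an fMRI run (illustrative sketch).

    ts: 4D array (x, y, z, time).
    Returns the per-transition scaled variance of successive volume
    differences, and indices of volumes involved in flagged transitions.
    """
    data = ts.reshape(-1, ts.shape[-1]).astype(float)
    # Mean squared difference between consecutive volumes, scaled by
    # the squared global mean so the measure is unitless
    diffs = np.diff(data, axis=1)
    svd = (diffs ** 2).mean(axis=0) / data.mean() ** 2
    # A transition whose z-score exceeds spike_z implicates the
    # later of the two volumes it connects
    z = (svd - svd.mean()) / svd.std()
    spikes = np.where(z > spike_z)[0] + 1
    return svd, spikes

# Demo: inject an artifact into one volume of a synthetic run
rng = np.random.default_rng(1)
ts = 1000 + rng.normal(0, 10, size=(32, 32, 10, 60))
ts[..., 30] += 50  # artificial signal jump at volume 30
svd, spikes = timeseries_diagnostics(ts)
print("flagged volumes:", spikes)
```

Note that a single corrupted volume elevates two consecutive transitions (into it and out of it), so both neighbors of the jump get flagged; a real pipeline would resolve that ambiguity before deciding which volumes to scrub.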
Study QA may also be performed using phantoms. If you run a multi-center or longitudinal clinical study then it is common to have dedicated phantoms and QA routines in order to track scanner performance. ADNI and FBIRN are two well-known research programs utilizing dedicated, custom phantoms. And much like the way you might run checks on your own fMRI data, these phantom measurements are a component of the research plan, e.g. establishing whether certain data should be included, or providing a way to merge data from scanners with systematic differences in performance.
A way forward, not the way forward
Hopefully you can already anticipate the experiments and measurements that might fit into each of my three categories. There are many ways to do useful QA, of course, and each facility and research group seems to do something slightly different. In the posts to come I shall try to include references and links to other sites wherever I can. But feel free to submit your own now! I will gladly expand any particular area based on interest. Otherwise, what you'll read in the next few posts is what I've found to be most useful to me and the users at my facility, plus a bit of a literature review.