The majority of "scanner issues" arise during routine operation, most likely through error or omission. In a busy center with harried scientists who are invariably running late, there is a tendency to rush procedures and cut corners. This is where a simple QA routine - something that can be run quickly by anyone - can pay huge dividends, perhaps allowing rapid diagnosis of a problem and permitting a scan to proceed after just a few minutes' extra effort.
Here are a few examples to get you thinking about the sorts of common problems that might be caught by a simple test of the scanner's configuration - what I call User QA. Did the scanner boot properly, or have you introduced an error by doing something before the boot process completed? You've plugged in a head coil, but have you done it properly? And what about the magnetic particles that get tracked into the bore: might they have become lodged in a critical location, such as at the back of the head coil or inside one of the coil sockets? Most, if not all, of these issues should be caught with a quick test that any trained operator should be able to interpret.
User QA is, therefore, one component of a checklist that can be employed to eliminate (or permit rapid diagnosis of) some of the mistakes caused by rushing, inexperience or carelessness. At my center the User QA should be run when the scanner is first started up, prior to shut down, and whenever there is a reason to suspect the scanner might not perform as intended. It may also be used proactively by a user who wishes to demonstrate to the next user (or the facility manager!) that the scanner was left in a usable state.
Decide on the test configuration
Although different users may want to test the scanner in different configurations - a major variable would be the head RF coil, if your scanner has more than one - there is benefit in maintaining a standard test configuration for all User QA. It makes real scanner problems easier to distinguish from user-specific ones. Besides, there's no reason why you can't add further bespoke tests for an individual user. The intent of the common procedure, then, is to test the scanner in a way that is as close to routine operation and default configuration as possible. We should select the RF coil and a phantom accordingly.
If you only have one head RF coil then you have no choice to make in that regard. But if you have more than one coil then logic suggests you should select the most commonly used coil for User QA. Next, decide on a phantom. A dedicated phantom used only for User QA is ideal, but any stable phantom should suffice. You could use your Facility QA phantom or an FBIRN phantom, for example, but think carefully about the other uses of the phantom before committing. (See Note 1.)
Set up a standard operating procedure (SOP) for the phantom and RF coil so that every user attempts to perform identical operations every time the User QA is conducted. The current User QA protocol for my scanner is here.
Decide what to test
In my User QA I really want the user to determine two key points quickly: in its present configuration, will the scanner (1) acquire images and (2) acquire reasonable EPI for fMRI? The first question can be answered by a simple localizer scan. On Siemens scanners this is most often a three-plane gradient echo scan requiring less than fifteen seconds. For the second question - EPI for fMRI - I use an attenuated version of one of the EPI protocols used in the comprehensive Facility QA routines that will be the subject of future posts. The User QA version is only 3.5 minutes, so that the total time to set up the phantom, insert it into the bore, acquire the localizer and EPI, and evaluate the data is about five minutes. (I timed it.) If that is too long for you, a shorter EPI run would probably suffice.
Evaluate the performance
If the user is unable to get the User QA routine to complete successfully then we are already in trouble-shooting mode. So, successful completion is our first goal; it establishes that the scanner is going through the correct operations and seems to be working normally. After that, a modicum of experience (or dedicated training, if you prefer) should permit any qualified operator to conduct a reasonable assessment of the data. I am in the habit of adjusting the contrast of the EPIs so that I can see the background noise, then initiating a cine loop during which I watch for anything to change. I might then repeat the cine loop with the contrast set for the phantom signal itself, in case there is a subtle problem with the RF transmission, say. I don't do anything more fancy than this. Furthermore, I don't actually require my users to evaluate their User QA data quality, but it's clearly prudent (and simple enough) to learn how to do it.
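If you prefer numbers to eyeballs, the same two visual checks - watching the phantom signal and the background noise for anything that changes between volumes - can be approximated in a few lines of code. The sketch below is not part of my actual procedure; the function name, the intensity cutoff and the deviation thresholds are all illustrative assumptions, and a 4D NumPy array stands in for whatever format your scanner exports.

```python
import numpy as np

def quick_epi_check(ts, signal_thresh=2.0, noise_thresh=3.0):
    """Crude stability check on a 4D EPI time series (x, y, z, t).

    Mimics the visual inspection: track the mean phantom signal and
    the mean background noise per volume, and flag volumes that
    deviate from the run average. Thresholds (in standard
    deviations) are illustrative guesses, not validated values.
    """
    # Split voxels into "signal" and "background" with a simple
    # intensity cutoff on the temporal mean image.
    mean_img = ts.mean(axis=3)
    signal_mask = mean_img > 0.1 * mean_img.max()
    noise_mask = ~signal_mask

    # Per-volume summary statistics.
    n_vols = ts.shape[3]
    sig = np.array([ts[..., t][signal_mask].mean() for t in range(n_vols)])
    bkg = np.array([ts[..., t][noise_mask].mean() for t in range(n_vols)])

    # Flag volumes whose signal or background deviates from the run mean.
    sig_bad = np.abs(sig - sig.mean()) > signal_thresh * sig.std()
    bkg_bad = np.abs(bkg - bkg.mean()) > noise_thresh * bkg.std()
    return np.where(sig_bad | bkg_bad)[0]

# Synthetic example: a stable "phantom" with one corrupted volume.
rng = np.random.default_rng(42)
ts = rng.normal(10.0, 1.0, size=(16, 16, 8, 50))
ts[6:10, 6:10, 2:6, :] += 1000.0   # bright phantom region
ts[..., 30] += 50.0                # simulate a spike in volume 30
print(quick_epi_check(ts))         # volume 30 should be flagged
```

A global offset applied to one volume, as simulated here, shows up in both the signal and the background traces; a subtle RF transmit problem would tend to show up in the signal trace only, which is why both are tracked.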
What to do if all is well
From the user's perspective, if everything runs smoothly and the data appear as expected there's nothing further to do for User QA. Carry on with the experiment.
At my facility I request that all User QA data be transferred to our offline data storage host, where the data will reside for 30 days, just in case I want to review it for any reason. I will probably review the most recent User QA data as a first step if I'm called to look at a problem with the scanner.
I don't archive the data but if you had the time and the resources you could do so. I tend to review the last 30 days of User QA results if a real scanner problem is detected, e.g. gradient spiking, in case the User QA history can give me a better indication of when the issue first began. So far it hasn't helped me, but I live in hope!
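A 30-day retention window like this is easy to automate. Below is a minimal Python sketch of the pruning side only; the function name and directory layout are made up for illustration, and the transfer to the offline storage host is assumed to happen by whatever mechanism your facility already uses.

```python
import time
from pathlib import Path

def prune_old_qa_data(qa_dir, max_age_days=30):
    """Delete User QA files older than max_age_days.

    The directory path and the 30-day window are illustrative;
    adapt both to your facility's storage layout and policy.
    Returns the list of deleted paths for logging.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for f in Path(qa_dir).rglob("*"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f)
    return removed
```

Run daily from a cron job (or a scheduled task), this keeps the rolling 30-day history without any manual housekeeping.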
What to do if something isn't right
Since one of the main aims of User QA is to catch pilot error, the first action on the user's part should be to determine whether the procedure was followed correctly. If it was, and if time is of the essence, it may be time to call for assistance. Alternatively, a quick bit of sleuthing can pay dividends. How much and what sort of sleuthing? That will depend heavily on the user's experience level. Common causes of User QA failures include:
- Software glitch arising out of an interrupted boot procedure.
- Failure to insert the sample properly.
- Failure to connect the RF coil properly.
- Conductive debris in the coil sockets, or in/on the phantom.
- Bent or broken pin(s) on the RF coil plugs.
- The last user left the on-resonance frequency outside the 600 Hz range used by the automated adjustment, e.g. because that person was testing a development pulse sequence and forgot to return the scanner frequency to its starting point.
- Other custom configuration changes - RF amplifier in standby, one or more gradient amplifiers in standby, something unplugged inside a cabinet - as implemented by physicists and engineers who supposedly know what they're doing, but forget to undo what they did prior to departing.
- Real scanner issues, such as gradient spiking.
If the results of User QA suggest that the scanner might have a real problem, it doesn't hurt to re-run the User QA from scratch to verify every step in the chain before moving on to more involved testing. Intermittent problems, as may be caused by conductive debris in a socket, say, can be difficult to diagnose. I will usually want to assess the reproducibility of a problem before I do anything else. And who knows, if you do manage to find and remove a small iron filing from a plug and return the scanner to normal operation, chances are you'll get to scan as you'd intended!
Coming soon, for technicians, facility managers and highly motivated routine users!
QA for fMRI, Part 3: Facility QA - what to measure, when, and why
1. I have a dedicated Facility QA phantom that is used only by staff. The stability of that phantom is critical to my being able to detect subtle scanner problems. I don't want to risk it getting damaged by frequent use! Similarly, although the FBIRN gel phantom is a lovely piece of kit, it's not cheap. If it gets broken being used three times a day then the replacement cost is significant. I thus chose to use the standard doped water phantoms provided by Siemens. They're cheap and easy to replace if/when they leak. And if I can't maintain precisely the same phantom over time for User QA it doesn't matter all that much.
(Download the User QA procedure used at UC Berkeley, for a Siemens TIM/Trio scanner, here.)