
Saturday, December 4, 2010

Beware of physicists bearing gifts!

A decision tree to evaluate new methods for fMRI.

Sooner or later, someone – probably a physicist – is going to suggest you adopt some revolutionary new method for your fMRI experiment. (See note [1].) The method will promise to make your fMRI simultaneously faster and better! Or perhaps you’ll see a new publication from a famous group – probably a bunch of physicists – that shows stunning images with their latest development, and you’ll rush off to your own facility’s physicist with the challenge, “Why aren’t we doing this here?”

On the other hand, inertia can become close to insurmountable in fMRI, too. Many studies proceed with particular scan protocols for no better reason than that it worked last time, or that it worked for so-and-so and he got his study published in the Journal of Excitable Findings and then got tenure at the University of Big Scanners. Historical precedent can be helpful, no doubt, but there ought to be a more principled basis for selecting a particular setup for your next experiment. At a minimum, the selection of your equipment, pulse sequence and parameters should be a conscious decision!

But what to do if your understanding of k-space can be reasonably described as ‘fleeting’ and you couldn’t tell a Nyquist ghost from the Phantom of the Opera? Do you blindly trust the paper? Trust your physicist? Succumb to your innate fear of the unknown and resist all attempts at change…? You need a mechanism whereby you can remain the central part of the decision-making process, even if you don’t have the background physics knowledge to critically evaluate the new method. It is, after all, your experiment that will benefit or suffer as a result.

Skepticism is healthy

Let’s begin by considering the psychology surrounding what you see in papers and at conferences. It’s human nature to put one’s best foot forward, so start by recognizing that whatever you see in the public domain is almost certainly not what you can expect to get on an average day. Consider published results to be the best-case scenario, and don’t expect to be able to match them without effort and experience. There may be all kinds of practical tricks needed to get consistently good data.

Next, recognize that the difficulty of a method as described in a paper’s experimental section usually bears little relation to the actual amount of time and energy it took to get the results. For all you know, it may have taken six research assistants working sixty hours a week for six months to get the analysis done. That said, do spend a few moments reviewing the experimental description, looking for clues as to the amount of legwork involved. If the method used standard software packages, for example, that’s usually a sign that you could implement it yourself without hiring a full-time programmer. Custom code? That’s a flag that advanced computing skills and resources may be required.

Okay, at this point we’re in a fit mental state to apply some cold, hard logic to the decision-making process. We’re ready to ask some questions of the new method, and to make a direct comparison against the standard alternatives we have available (where ‘standard’ means something that has been well tested and used extensively by your own and other labs).