Sooner or later, someone – probably a physicist – is going to suggest you adopt some revolutionary new method for your fMRI experiment. (See note [1].) The method will promise to make your fMRI simultaneously faster and better! Or perhaps you’ll see a new publication from a famous group – probably a bunch of physicists – that shows stunning images with their latest development, and you’ll rush off to your own facility’s physicist with the challenge, “Why aren’t we doing this here?”
On the other hand, inertia can become close to insurmountable in fMRI, too. Many studies proceed with particular scan protocols for no better reason than it worked last time, or it worked for so-and-so and he got his study published in the Journal of Excitable Findings and then got tenure at the University of Big Scanners. Historical precedent can be helpful, no doubt, but there ought to be a more principled basis for selecting a particular setup for your next experiment. At a minimum, selection of your equipment, pulse sequence and parameters should be conscious decisions!
But what to do if your understanding of k-space can be reasonably described as ‘fleeting’ and you couldn’t tell a Nyquist ghost from the Phantom of the Opera? Do you blindly trust the paper? Trust your physicist? Succumb to your innate fear of the unknown and resist all attempts at change…? You need a mechanism whereby you can remain the central part of the decision-making process, even if you don’t have the background physics knowledge to critically evaluate the new method. It is, after all, your experiment that will benefit or suffer as a result.
Skepticism is healthy
Let’s begin by considering the psychology surrounding what you see in papers and at conferences. Human nature is to put one’s best foot forward. So start by recognizing that whatever you see in the public domain is almost certainly not what you can expect to get on an average day. Consider published results to be the best-case scenario, and don’t expect to be able to match them without effort and experience. There may be all kinds of practical tricks needed to get consistently good data.
Next, recognize that there is usually little correlation between how simple a method looks in a paper’s experimental section and the actual amount of time and energy it took to get the results. For all you know it may have taken six research assistants working sixty hours a week for six months to get the analysis done. That said, do spend a few moments reviewing the experimental description and look for clues as to the amount of legwork involved. If the method used standard software packages, for example, that’s usually a sign that you could implement the method yourself without hiring a full-time programmer. Custom code? A flag that advanced computing skills and resources may be required.
Okay, at this point we are now in a fit mental state to pour cold, hard logic into this decision-making process. We’re ready to ask some questions of the new method, and to make a direct comparison to the standard alternatives we have available (where ‘standard’ means something that has been well tested and used extensively by your own and other labs).
The decision tree
You are about to try to climb the decision tree, and as you reach each branch you will ask yourself a go/no-go question. Only if you get a ‘go’ do you try to ascend to the next higher branch. A single ‘no-go’ at any point sees you climb back down the tree; no new method for you, you will stick with the familiar instead.
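For the programmatically minded, the climb amounts to nothing more than a chain of go/no-go gates evaluated in order. Here is a minimal Python sketch, purely illustrative: the question strings paraphrase the four branches below, and the evaluate() callback stands in for your own judgment at each branch (it isn’t part of any real package).

# Minimal sketch of the go/no-go climb; the questions paraphrase the branches below.
QUESTIONS = [
    "Do you need it?",
    "Is it robust?",
    "Is it validated?",
    "Can you afford the cost in time and money?",
]

def climb_decision_tree(evaluate):
    """evaluate(question) returns True for 'go', False for 'no-go'."""
    for question in QUESTIONS:
        if not evaluate(question):
            return False  # a single 'no-go' sends you back down the tree
    return True           # reached the top: adopt the new method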
1. Do you need it?
Does the method offer (or claim to offer) some performance feature that will allow you, in principle, to answer a question you couldn’t answer with an old or existing method?
Take spatial resolution as an example. Let us suppose that the new method offers a gain in spatial resolution at no (apparent) time expense. Indeed, the new method might offer a higher spatial resolution that simply can’t be achieved with earlier methods. If so, that might be a compelling reason to pursue it. For retinotopy we know that spatial resolution is really useful, and as a general rule the more spatial resolution you can get, the better. A vision scientist might thus want to pursue the new method.
But what if you are doing cognitive neuroscience? If you routinely smooth with a 6 mm Gaussian kernel and you expect functional nodes extending over several voxels anyway, will 2 mm acquired resolution actually help you? And if it did, would you want to alter your hypothesis, e.g. could there be regions where the higher spatial resolution might tease apart two adjacent nodes? If not, why bother? Compare the (purported) features of the new method with those of the old method – in particular the TR, spatial coverage, spatial resolution, dropout, and distortion characteristics – and make a fair determination of the likely benefits to your experiment.
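A quick back-of-the-envelope calculation illustrates why. If you treat the acquired voxel size as a rough stand-in for intrinsic smoothness and combine it with the applied Gaussian kernel in quadrature – a common approximation, not an exact result – then after 6 mm smoothing the difference between a 2 mm and a 3 mm acquisition nearly vanishes:

import math

def effective_fwhm(voxel_mm, kernel_mm):
    # Rough approximation: combine voxel size (as a proxy for intrinsic
    # smoothness) with the smoothing kernel FWHM in quadrature.
    return math.sqrt(voxel_mm ** 2 + kernel_mm ** 2)

print(round(effective_fwhm(2.0, 6.0), 1))  # ~6.3 mm
print(round(effective_fwhm(3.0, 6.0), 1))  # ~6.7 mm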
If the old method provides perfectly satisfactory data for your needs, stick with the tried and true and get down off the tree now! Adding the risk of a new method will only increase the likelihood of failure, whereas the best-case scenario will probably produce only marginal benefit. Alternatively, if you think the new method might have some benefit – even if you can’t exactly predict how at this point – and you’re willing to take a bit of a gamble for this potential benefit, then stay on the tree and attempt to reach a higher branch.
2. Is it robust?
Is the new method likely to work well under a range of typical circumstances, on your subject population, in your hands, and what is likely to happen if the ideal conditions can’t be guaranteed?
Determining the manifold circumstances under which any method might fail is extremely difficult, for obvious reasons. So, instead of trying to obtain absolute answers to the question of robustness, let’s keep it simple: look for a relative measure and pay attention to any potential trouble spots. I’ll use a few specific examples; your job will be to determine the comparison(s) appropriate for the new method (or device) you’re considering.
Let’s start with image acquisition schemes. If you are used to using single-shot EPI for fMRI, for example, then you have a fairly good idea of the effects of motion on your data: a single EPI acquisition is relatively insensitive to motion, whereas motion from TR to TR, or a gross departure of the head from the position in which it was shimmed, will cause increased ghosting and other problems. Okay, now take a look at the publication that describes the new method. Does it acquire all of its spatial information in a single shot? If not, how much time is there between the different parts of spatial encoding? You already know about the propensity for movement in your subjects, so knowing this temporal aspect allows you to make a judgment on the new method relative to what you’re used to with single-shot EPI, i.e. whether the new method is likely to be more or less sensitive to motion, or about the same. (As a general rule, any time a reference scan is used to acquire some spatial information, or is used to provide a correction for ghosts or distortions or other artifacts, the motion sensitivity of the image formation process goes up.)
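To make that temporal comparison concrete, here is an illustrative bit of arithmetic with made-up but plausible numbers – they are not from this post or from any particular sequence. A single-shot EPI image is encoded in tens of milliseconds, whereas an interleaved multi-shot acquisition that collects one segment per TR spreads a single image’s k-space over several seconds, a far longer window in which motion can corrupt that image:

# Illustrative, assumed numbers only.
single_shot_readout_s = 0.05          # one EPI image encoded in ~50 ms
n_shots, tr_s = 4, 2.0                # hypothetical 4-shot acquisition, one segment per 2 s TR
segment_readout_s = single_shot_readout_s / n_shots
segmented_window_s = (n_shots - 1) * tr_s + segment_readout_s

print(round(segmented_window_s, 2))                        # ~6 s to encode one image
print(round(segmented_window_s / single_shot_readout_s))   # encoding window >100x longer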
Difference methods, where one image is subtracted from another, either in a running fashion or via more complex schemes, are also prone to increased motion sensitivity relative to a single image acquisition. This isn’t to say that methods such as arterial spin labeling (which uses tag and control images to form the final perfusion-weighted image) should be avoided! What it does mean is that you have to look at how such a method might perform in your subject population. Scanning kids and elderly subjects is far more challenging than scanning undergrad students. Bear in mind that additional steps, such as practice sessions in a mock scanner, may become necessary to assure reasonable performance for your application.
You should also exercise the same healthy skepticism toward any method that is new to you, even if it is offered as standard on your scanner. Being a standard product feature doesn’t imply that it is robust! We’ve just seen that ASL is inherently more motion-sensitive than single-shot EPI, for example. Furthermore, the method could have been implemented sub-optimally, perhaps because the vendor didn’t, or couldn’t, know better. Finally, there can be hardware performance differences across vendors, which can mean that a method with the same name works quite differently on two different platforms. (Sometimes the same method applied on two different scanners from the same vendor doesn’t work consistently, e.g. because of gradient performance differences!)
Robustness has another face, too. No offense, but it’s entirely possible that you aren’t as gifted an experimentalist as the people who devised the new method. Even the most robust methods can fail when placed in the hands of the clumsy or the inexperienced. Are your own skills up to the task of using the new method, or will you introduce a level of instability that’s likely to cause problems? Try to be honest in your self-evaluation, and consider further training if you don’t think you’ll be up to the task. If the new method needs manual tweaking of some kind that demands extensive background knowledge of the pulse sequence physics, it’s no embarrassment to admit that you may require some help.
If you’re drawing a blank on the question of robustness, don’t sweat it. Or worse, perhaps I’ve now made you paranoid that the method will be too fragile for your application. Let’s not reject it just yet! There is a handy surrogate that may be able to answer the question heuristically, and it is waiting for you on the next higher branch.
3. Is it validated?
What sort of history does this new method have? Who has used it, and for what? Do you trust the results, and are they directly or only indirectly relevant to what you would like to do?
This is where history counts. It is time for a thorough literature review. Who has validated this new method, and how did they do it? Was this validation suitably relevant to your intended application? If the method you’re considering has already formed the basis of a dozen well-cited publications then you have a basis for confidence. Other people, some physicists, some not, will presumably have done a lot of these assessments on your behalf.
But what if it’s a brand new method, just out in the literature? Chances are, it’s been tested on thirty undergraduates in near-perfect health in a single experiment that used either a blocked visual or motor paradigm. (These are the classic “real world” fMRI tests we physicists do when presenting a new method. In 1994 they were highly pertinent. Now – not so much.) Determining how well the method might fare when you try to use it on a seventy-year-old who has had a stroke and is mildly claustrophobic is a different proposition altogether. In other words, how different is your experimental regime compared to that in the paper?
In the absence of a solid, published validation you may have to do the next best thing and solicit the opinions of your colleagues. Talk to lots of physicists, but poll them; don’t take the opinion of any single physicist as anything more than just that – an opinion. We all have our biases. And try to get explanatory answers rather than categorical ones. “It works great!” isn’t very informative. It works great when? How? On what scanner?
4. How much will it cost?
Will you and/or somebody at your facility have to invest significant amounts of time and/or money to obtain the new method and get it working properly?
Congratulations, you’re nearly at the top of the tree! There’s only one snag. The new method requires a license from the scanner vendor and the license costs $22,000. Doh! You now have to go talk to colleagues to perhaps share the cost, or you may even have to write a grant before you can proceed.
Often it’s not money that’s the major cost, but time. Many research fMRI facilities have agreements in place with the scanner vendor to enable the sharing of pulse sequences and other code. Getting and installing an arterial spin labeling sequence, say, is quite straightforward. Setting up the acquisition is a little more involved but can often be done by simply cloning what someone else has done and verifying similar performance. Processing the data, however, is a whole different ball game. You are going to need to read a lot of papers, get some new software and invest some time to understand the numbers you get out. Is this something you can do yourself? Do you have a student or a research assistant who can do this? Or is it going to take a PhD physicist or collaboration with another lab to get the data processed? Try to be realistic as you estimate the amount of time and expense involved, and pay particular attention to any steps in the process that may not be within your direct control. (Telling a student to start reading up on ASL is radically different than trying to convince an academic colleague to begin a new collaboration.) If you need the assistance of your facility physicist, find out how much time she thinks it will take, how busy she is, when she thinks it can be tested and made available, etc.
A frequent impediment to progress is the vendor research agreement, the legal framework that allows your university/hospital to use particular code on your scanner. Someone – almost certainly not you – will need to review and ultimately sign a lot of legal paperwork before you can even get source code and begin work. It can be a staggeringly lengthy process. Bear in mind that your ability to influence the timescale of this process will probably be limited. Plan ahead.
A final word about expense considerations: consider what else you might use the time or money for. An investment is only as good as any alternative investment you could use the same time or money for. If you’re considering the purchase of a new head RF coil, say, would you rather have 20% more SNR or another post-doc for two years? (I’ve yet to see equipment have ideas or write papers.) Similarly, balance the amount of effort the new method might require against the myriad other things you (or your students or RAs) could be doing instead.
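One way to put an SNR gain and a scan-time (or money) cost on the same footing is the rule of thumb that SNR from signal averaging grows roughly as the square root of scan time – a relation that holds while thermal noise dominates and degrades once physiological noise takes over, so treat the numbers below as an optimistic bound rather than a guarantee:

def extra_scan_time_to_match(snr_gain):
    # If SNR ~ sqrt(scan time), matching a fractional SNR gain by scanning
    # longer requires (1 + gain)**2 as much time.
    return (1.0 + snr_gain) ** 2

print(extra_scan_time_to_match(0.20))  # ~1.44: ~44% more scan time to match a 20% SNR gain

Whether 44% more scanner time is worth more or less to you than two years of a post-doc’s effort is exactly the sort of comparison this branch asks you to make.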
Decisions, decisions!
A decision tree isn’t the only way to evaluate a new method, of course. In reality it shouldn’t matter what specific process you use to arrive at your decision provided that you make it a conscious, systematic process.
We scientists could learn a thing or two from the world of business when it comes to making decisions. There are many approaches, but a common one is the SWOT analysis: strengths, weaknesses, opportunities and threats. You could probably use SWOT to evaluate a new fMRI method, but it doesn’t map neatly onto the natural language of neuroimaging; for example, faster acquisition could be a strength, a weakness, an opportunity and a threat, all at the same time! In any case, the point is that other realms outside of science make decisions systematically, so why shouldn’t we?
Sometimes new really is improved. My advice is that you maintain a healthy skepticism of all new fMRI methods, and ask the questions I’ve mentioned above before you leap into the unknown. Good luck!
[1] Here, new means “new to you”; it doesn’t have to be a method that’s just come out in the literature. Indeed, users most frequently run into ‘new’ methods when they try to use an installed feature that came with the scanner, often on the recommendation of a physicist or another user. New methods also include new devices, such as a different type of head RF coil, or a gradient insert coil.