The story so far...
Finally, here is the third part of a three-part series of posts that have sought to determine a general protocol for resting-state fMRI (rs-fMRI). In the first post I reviewed a paper by van Dijk et al. showing that spatial and temporal resolution made little difference to the detection of resting-state networks with current methods (i.e. seed-based cross-correlation or ICA).
In the second post I presented the results of some simple tests that aimed to determine what sort of spatial coverage could be attained with parameters in accordance with the conclusions of the van Dijk paper. Temporal SNR (TSNR) was used as a simple proxy for data quality. It was found that TSNR for 3.5 mm in-plane resolution was fairly consistent across a range of axial and axial-oblique slice orientations, as well as for sagittal slices.
One question remained, however: given the tolerance to a longish TR (compared to event-related fMRI) when detecting resting-state networks, would it be better to acquire many thinner slices in a longer TR, or fewer thicker slices in a shorter TR? Following van Dijk et al., we wouldn't expect a large penalty from extending the TR a little, but thinner slices might recover signal in regions suffering extensive dropout, which suggests they could be worth the trade-off.
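As an aside, the TSNR metric used as a data-quality proxy above is straightforward to compute yourself: it is just the temporal mean of each voxel's time series divided by its temporal standard deviation. Here is a minimal sketch in Python/NumPy using simulated data (the function name and the simulated signal level and noise are my own illustrative choices, not from any particular package):

```python
import numpy as np

def temporal_snr(data, axis=-1):
    """Voxelwise temporal SNR of a 4D array shaped (x, y, z, time):
    temporal mean divided by temporal standard deviation."""
    mean = data.mean(axis=axis)
    sd = data.std(axis=axis, ddof=1)
    # Guard against division by zero in empty (zero-variance) voxels.
    safe_sd = np.where(sd > 0, sd, 1.0)
    return np.where(sd > 0, mean / safe_sd, 0.0)

# Simulated resting-state-like data: baseline signal of 1000 with
# Gaussian noise of SD 20, so the expected TSNR is roughly 50.
rng = np.random.default_rng(0)
data = 1000 + 20 * rng.standard_normal((4, 4, 4, 200))

tsnr = temporal_snr(data)
print(tsnr.shape)                  # one TSNR value per voxel: (4, 4, 4)
print(round(float(tsnr.mean())))   # roughly 50 for this simulation
```

For real data you would load the EPI time series (e.g. with a NIfTI reader) and typically restrict the calculation to a brain mask, since TSNR is meaningless outside the head.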
Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!
Wednesday, December 22, 2010
Saturday, December 4, 2010
Beware of physicists bearing gifts!
A decision tree to evaluate new methods for fMRI.
Sooner or later, someone – probably a physicist – is going to suggest you adopt some revolutionary new method for your fMRI experiment. (See note [1].) The method will promise to make your fMRI simultaneously faster and better! Or, perhaps you’ll see a new publication from a famous group – probably a bunch of physicists – that shows stunning images with their latest development and you’ll rush off to your own facility’s physicist with the challenge, “Why aren’t we doing this here?”
On the other hand, inertia can become close to insurmountable in fMRI, too. Many studies proceed with particular scan protocols for no better reason than that it worked last time, or that it worked for so-and-so and he got his study published in the Journal of Excitable Findings and then got tenure at the University of Big Scanners. Historical precedent can be helpful, no doubt, but there ought to be a more principled basis for selecting a particular setup for your next experiment. At a minimum, the selection of your equipment, pulse sequence and parameters should be a conscious decision!
But what to do if your understanding of k-space can be reasonably described as ‘fleeting’ and you couldn’t tell a Nyquist ghost from the Phantom of the Opera? Do you blindly trust the paper? Trust your physicist? Succumb to your innate fear of the unknown and resist all attempts at change…? You need a mechanism whereby you can remain the central part of the decision-making process, even if you don’t have the background physics knowledge to critically evaluate the new method. It is, after all, your experiment that will benefit or suffer as a result.
Skepticism is healthy
Let’s begin by considering the psychology surrounding what you see in papers and at conferences. Human nature is to put one’s best foot forward. So start by recognizing that whatever you see in the public domain is almost certainly not what you can expect to get on an average day. Consider published results to be the best-case scenario, and don’t expect to be able to match them without effort and experience. There may be all kinds of practical tricks needed to get consistently good data.
Next, recognize that there is usually no correlation between the difficulty of implementing a method as described in a paper’s experimental section and the actual amount of time and energy it took to get the results. For all you know it may have taken six research assistants working sixty hours a week for six months to get the analysis done. That said, do spend a few moments reviewing the experimental description and look for clues as to the amount of legwork involved. If the method used standard software packages, for example, that’s usually a sign that you could implement the method yourself without hiring a full-time programmer. Custom code? A flag that advanced computing skills and resources may be required.
Okay, at this point we are now in a fit mental state to pour cold, hard logic into this decision-making process. We’re ready to ask some questions of the new method, and to make a direct comparison to the standard alternatives we have available (where ‘standard’ means something that has been well tested and used extensively by your own and other labs).