Wednesday, December 22, 2010
Towards an optimal protocol for resting state fMRI – part III
The story so far...
Finally, here is the third part of a three-part series of posts that have sought to determine a general protocol for resting state fMRI (rs-fMRI). In the first post I reviewed a paper by van Dijk et al. that showed that spatial and temporal resolution didn't make a huge difference to the way resting state networks could be detected using current methods (i.e. seed cross correlation or ICA).
In the second post I presented the results of some simple tests that aimed to determine what sort of spatial coverage could be attained with parameters in accordance with the conclusions of the van Dijk paper. Temporal SNR (TSNR) was used as a simple proxy for data quality. It was found that TSNR for 3.5 mm in-plane resolution was fairly consistent across a range of axial and axial-oblique slice orientations, as well as for sagittal slices.
One question remained, however: given the tolerance to a longish TR (compared to event-related fMRI) for detecting resting networks, would it be beneficial to acquire many thinner slices in a longer TR, or fewer thicker slices in a shorter TR? Following van Dijk et al. we wouldn't expect any huge penalty from extending the TR a bit, but there might be a gain of signal in regions suffering extensive dropout, which would suggest that thinner slices might be useful.
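Since TSNR is the proxy for data quality throughout this series, here is a minimal sketch of one common way to compute it, assuming a motion-corrected 4D NIfTI time series; the file names are hypothetical and any detrending is left out for brevity:

import numpy as np
import nibabel as nib

# Load a (hypothetical) motion-corrected resting state EPI series.
img = nib.load("rest_epi_mcf.nii.gz")
data = img.get_fdata()                # shape (x, y, z, t)

# Voxelwise TSNR: temporal mean divided by temporal standard deviation.
mean_img = data.mean(axis=-1)
std_img = data.std(axis=-1)
tsnr = np.where(std_img > 0, mean_img / std_img, 0.0)

nib.save(nib.Nifti1Image(tsnr.astype(np.float32), img.affine), "tsnr.nii.gz")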
Saturday, December 4, 2010
Beware of physicists bearing gifts!
A decision tree to evaluate new methods for fMRI.
Sooner or later, someone – probably a physicist – is going to suggest you adopt some revolutionary new method for your fMRI experiment. (See note [1].) The method will promise to make your fMRI simultaneously faster and better! Or, perhaps you’ll see a new publication from a famous group – probably a bunch of physicists – that shows stunning images with their latest development and you’ll rush off to your own facility’s physicist with the challenge, “Why aren’t we doing this here?”
On the other hand, inertia can become close to insurmountable in fMRI, too. Many studies proceed with particular scan protocols for no better reason than it worked last time, or it worked for so-and-so and he got his study published in the Journal of Excitable Findings and then got tenure at the University of Big Scanners. Historical precedent can be helpful, no doubt, but there ought to be a more principled basis for selecting a particular setup for your next experiment. At a minimum, selection of your equipment, pulse sequence and parameters should be conscious decisions!
But what to do if your understanding of k-space can be reasonably described as ‘fleeting’ and you couldn’t tell a Nyquist ghost from the Phantom of the Opera? Do you blindly trust the paper? Trust your physicist? Succumb to your innate fear of the unknown and resist all attempts at change…? You need a mechanism whereby you can remain the central part of the decision-making process, even if you don’t have the background physics knowledge to critically evaluate the new method. It is, after all, your experiment that will benefit or suffer as a result.
Skepticism is healthy
Let’s begin by considering the psychology surrounding what you see in papers and at conferences. Human nature is to put one’s best foot forward. So start by recognizing that whatever you see in the public domain is almost certainly not what you can expect to get on an average day. Consider published results to be the best-case scenario, and don’t expect to be able to match them without effort and experience. There may be all kinds of practical tricks needed to get consistently good data.
Next, recognize that there is usually no correlation between the difficulty of implementing a method as described in a paper’s experimental section and the actual amount of time and energy it took to get the results. For all you know it may have taken six research assistants working sixty hours a week for six months to get the analysis done. That said, do spend a few moments reviewing the experimental description and look for clues as to the amount of legwork involved. If the method used standard software packages, for example, that’s usually a sign that you could implement the method yourself without hiring a full-time programmer. Custom code? A flag that advanced computing skills and resources may be required.
Okay, at this point we are now in a fit mental state to pour cold, hard logic into this decision-making process. We’re ready to ask some questions of the new method, and to make a direct comparison to the standard alternatives we have available (where ‘standard’ means something that has been well tested and used extensively by your own and other labs).
Sunday, November 21, 2010
A call to publish negative results?
The Journal of Cerebral Blood Flow and Metabolism has taken the brave and, I would argue, constructive step of actively soliciting manuscripts that present negative results. In their words:
"In addition to original research articles, authors are welcomed to submit Negative Results. The Negative Results article intends to provide a forum for data that did not substantiate the alternative hypothesis and/or did not reproduce published findings."
Good for them! With one of the biggest physiology journals getting its act together, what field might be next? How about neuroimaging?
If any field could use a forum for negative results it is neuroimaging, and fMRI in particular. We seem to have a bias towards positive results that is second only to pharmaceutical research. Of course, it is perfectly natural for scientists to want to find something rather than not find it. Not finding an earth-like planet orbiting another star isn't nearly as exciting as finding one. And one wonders whether Ponce de Leon would have got tenure at a modern university if his most cited work was entitled "On not finding the Fountain of Youth."
One of the challenges facing fMRI is that it demands extensive statistics or modeling to coax meaning out of tiny signals in an ocean of noise. The convoluted analyses provide skeptics with plenty of ammunition that we are basically making it all up by choosing the test that yields the answer we were looking for. (I have a former colleague – a 'real' MR scientist – who claims dismissively that fMRI stands for fictional MRI.) Without rigorous stats and models, though, it is easy to fall into the trap of false positives, a problem that has led to accusations of all sorts of voodoo of its own. How, then, should we treat negative fMRI results? Are there any caveats to encouraging their publication?
A negative result or a bad experiment: what's the difference?
Friday, November 12, 2010
Towards an optimal protocol for resting state fMRI – part II
A couple of weeks ago I used the results in a paper by van Dijk et al. to provide guidance towards a possible optimal/general protocol for resting state fMRI using EPI. That review concluded with the following rough criteria: whole brain coverage, spatial resolution around 3 mm and temporal resolution in the 2-3 seconds range. The largest of the open questions pertained to the interplay between these three specifications, in particular the ability to obtain whole brain (cortex and cerebellum) coverage in the time available, whilst minimizing (we hope) the dropout and distortion that are ever-present features of EPI.
Experimental details:
In what must be considered a disposable experiment on a single subject (medical types might call this a case study), I acquired test data sets with the following parameters:
Siemens 3 T Trio/TIM running VB15, 12-channel Head Matrix coil, ep2d_bold pulse sequence, TR=2500 ms, TE=25 ms, slice thickness=3 mm, gap=0.3 mm, 43 interleaved slices, matrix=64x64, FOV=224x224 mm (except for one test with 192x192 mm), bandwidth=2056 Hz/pixel, echo spacing=0.55 ms, number of volumes=144, fatsat=ON, MoCo=ON, no spatial filters. (See note 1.)
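To sanity check the slab coverage implied by these parameters, the arithmetic is trivial; the sketch below simply restates the protocol numbers, assuming the slices are acquired back-to-back with no dead time in the TR:

# Back-of-the-envelope coverage check for the protocol above.
n_slices = 43
thickness_mm = 3.0
gap_mm = 0.3                    # 10% gap
tr_s = 2.5

coverage_mm = n_slices * (thickness_mm + gap_mm)
time_per_slice_ms = 1000.0 * tr_s / n_slices

print("Slab coverage: %.1f mm" % coverage_mm)         # 141.9 mm
print("Time per slice: %.1f ms" % time_per_slice_ms)  # ~58 ms

Roughly 142 mm of coverage is close to the limit for spanning both cortex and cerebellum in many adult heads, which is precisely the interplay at issue here.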
Saturday, November 6, 2010
FOD for thought.
As my scanner is down at the moment, with service engineers tearing things apart to identify some sources of spikes, there’s a bit of a delay in getting the resting state data I promised a couple of weeks ago. Please stand by, normal service will be resumed shortly. For the time being I thought I’d continue the topic of ‘foreign objects and debris.’ Changing tack a little, away from the insidious, small stuff, I thought it might be edifying to take a look at the big stuff – the stuff with major safety implications.
If you’re a regular fMRIer then you will already have been treated to safety videos demonstrating the sorts of bad things that can happen to a watermelon or a brick wall when magnetic objects are allowed to impact an MRI magnet. If you’re an fMRI newbie, welcome. May I suggest you spend a few minutes checking out YouTube videos for your enlightenment? Here are links to some goodies:
Oxygen cylinder 1 – 0 Watermelon
The rear wall of an MRI suite gets a good bashing from an oxygen cylinder
One wonders whether someone had been sitting on this chair when it started to move…
More watermelon abuse
And here is a video of some tests we did with an old 4 T magnet that was about to be decommissioned. We did a chair, too, but ours was deliberate.
Fun, eh? Sure, this stuff is exhilarating when it’s intentional and controlled. But I bet you don’t fancy being the person responsible for stabbing your subject repeatedly with the pair of scissors that you accidentally carried into the magnet room.
Tuesday, November 2, 2010
FOD happens!
Pieces of metal, especially magnetic ones, will find their way into all sorts of strange and potentially detrimental locations inside an MRI. During your safety training you will have learned a lot about the hazards of chairs, keys, rotary mops, oxygen cylinders and other objects that have, at one time or another, found their way into or onto an MRI – often with disastrous consequences.
Yet there is another category of foreign objects or debris – known as FOD to aviation types – that doesn’t get as much attention during safety training, largely because there are fewer safety issues. There are, however, serious implications for the quality of your data.
Finding FOD
Take yesterday, for example. There we were, a service engineer and I, rooting around in the back of the magnet checking for carbonization, testing locking nut security and the like, in a quest to identify sources of spikes that had shown up in the morning’s QA data. (I’ll do a separate post on spikes another day.) We (meaning the engineer) had already found, cleaned and replaced “standoff” spacers for the gradient power cables. These spacers – especially the one for the X coil, which gets the most use as the read axis gradient for EPI – are prone to micro-arcing, a phenomenon that can be discerned by the telltale black carbon deposits on one or both ends of the metal tube.
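More on spikes in that promised post, but they usually announce themselves in QA data as individual volumes (or slices) with anomalous intensity. As a rough illustration of the kind of check that flags them – a sketch only, with a hypothetical file name and an arbitrary threshold – one can look for outlier volume means:

import numpy as np
import nibabel as nib

# Load a (hypothetical) daily QA EPI series.
data = nib.load("qa_epi.nii.gz").get_fdata()      # (x, y, z, t)
vol_means = data.reshape(-1, data.shape[-1]).mean(axis=0)

# Robust z-scores via the median absolute deviation, so one big
# spike can't inflate the estimate of normal variability.
med = np.median(vol_means)
mad = np.median(np.abs(vol_means - med))
robust_z = 0.6745 * (vol_means - med) / max(mad, 1e-9)

print("Suspect volumes:", np.where(np.abs(robust_z) > 3.5)[0].tolist())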
Tuesday, October 26, 2010
Resting state fMRI: is there an optimal protocol?
With resting state fMRI (rs-fMRI) and functional connectivity booming, and an increasing number of fMRIers adding a resting state scan to their otherwise task-based protocols (even if they don't know what they'll do with the data), the question of whether there is an optimal protocol, perhaps even a standard that could be established across multiple centers, seems timely. From my limited investigation it appears that many fMRIers are doing the logical thing: they are using a version of their task-based fMRI experiment for their resting state acquisition. Is that a good, bad or indifferent thing to do?
A recent review from van Dijk et al. (J. Neurophysiol. 103, 297-321, 2010) set out to determine whether parameters such as run duration, temporal resolution (i.e. TR), spatial resolution (voxel size) and a series of processing steps made any appreciable difference to the detection of default mode and attention networks, against a reference network (a set of nodes not expected to have functional connectivity). Their principal finding, reviewed in the later posts above, was that these acquisition choices make surprisingly little difference to network detection; the rough criteria that emerged were whole brain coverage, spatial resolution around 3 mm and a TR in the 2-3 second range.
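Since seed cross correlation is the workhorse detection method in this literature, a minimal sketch may help fix ideas; it assumes a preprocessed 4D numpy array already in memory, and the seed coordinates are hypothetical:

import numpy as np

def seed_correlation_map(data, seed_xyz):
    # Correlate the seed voxel's time course with every voxel's time course.
    t = data.shape[-1]
    seed_ts = data[seed_xyz]                       # (t,)
    vox = data.reshape(-1, t)                      # (n_voxels, t)
    vox_c = vox - vox.mean(axis=1, keepdims=True)
    seed_c = seed_ts - seed_ts.mean()
    denom = np.sqrt((vox_c**2).sum(axis=1)) * np.sqrt((seed_c**2).sum())
    with np.errstate(invalid="ignore", divide="ignore"):
        r = (vox_c @ seed_c) / denom
    return np.nan_to_num(r).reshape(data.shape[:-1])

# e.g. r_map = seed_correlation_map(data, (32, 20, 24))  # hypothetical PCC seed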