Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Thursday, July 26, 2012

Methods reporting in the fMRI literature


(Thanks to Micah Allen for the original Tweet and to Craig Bennett for the Retweet.)


If you do fMRI you should read this paper by Joshua Carp asap:

"The secret lives of experiments: Methods reporting in the fMRI literature."

It's a fascinating and sometimes troubling view of fMRI as a scientific method. Doubtless there will be many reviews of this paper and hopefully a few suggestions of ways to improve our lot. I'm hoping other bloggers take a stab at it, especially on the post-processing and stats/modeling issues.

At the end the author suggests adopting some sort of new reporting structure. I concur. We have many variables for sure, but not an infinite number. With a little thought we could devise a simple, logical reporting structure that a reader could decode just as a file header can be interpreted. (DICOM and numerous other file types manage it, so you'd think we could do it too!)

To get things started I propose a shorthand notation for the acquisition side of the methods; this is the only part I'm intimately involved with. All we need to do is make an exhaustive list of the parameters and sequence options that can be used for fMRI, then sort them into a logical order and decide how to encode each one. Thus, if I am doing single-shot EPI on a 3 T Siemens TIM/Trio with a 12-channel receive-only head coil, 150 volumes, two dummy scans, a TR of 2000 ms, a TE of 30 ms, 30 descending 3 mm slices with 10% gap, an echo spacing of 0.50 ms, a 22-degree axial-coronal prescription, a FOV of 22.4x22.4 cm, a 64x64 matrix, etc., then I might have a reporting string that looks something like this:

3T/SIEM/TRIO/VB17/12CH/TR2000/TE30/150V/2DS/30SLD/THK3/GAP0.3/ESP050/22AXCOR/FOV224x224/MAT64x64

Interleaved or ascending slices? Well, SLI or SLA, of course! 
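To make this concrete, here is a toy sketch (in Python) of how such a string might be assembled from a parameter list. The field names, order and abbreviations below are placeholders I just made up, not a settled standard:

```python
# Toy sketch only: field order, abbreviations and separators are
# placeholders, not a settled standard.
params = [
    ("field_strength", "3T"),
    ("vendor",         "SIEM"),
    ("model",          "TRIO"),
    ("software",       "VB17"),
    ("coil",           "12CH"),
    ("tr",             "TR2000"),
    ("te",             "TE30"),
    ("volumes",        "150V"),
    ("dummy_scans",    "2DS"),
    ("slices",         "30SLD"),
    ("thickness",      "THK3"),
    ("gap",            "GAP0.3"),
    ("echo_spacing",   "ESP050"),
    ("prescription",   "22AXCOR"),
    ("fov",            "FOV224x224"),
    ("matrix",         "MAT64x64"),
]

report_string = "/".join(value for _, value in params)
print(report_string)
# 3T/SIEM/TRIO/VB17/12CH/TR2000/TE30/150V/2DS/30SLD/THK3/GAP0.3/ESP050/22AXCOR/FOV224x224/MAT64x64
```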

Next we add in options for parallel imaging, then options for inline motion correction such as PACE, and extend the string until we have exhausted all the options that Siemens has to offer for EPI. All the information is available from the scanner; much of it is included in the data header.

But that's just the first pass. Now we consider a GE running spiral, then a GE running SENSE-EPI, then a Philips running SENSE-EPI, etc. Sure, it's a teeny bit involved, but I'm pretty sure it wouldn't take a whole lot of work to collect all the information used in 99% of the fMRI studies out there. Some of the stuff that could be included is rarely if ever reported, so we could actually be doing a whole lot better than even the most thorough methods sections today. Did you notice how I included the software version in my Siemens string example above? VB17? I could even include the specific shimming routine used, and even the number and type of shim coils!

If an option is unused then it is simply included as a blank entry: /-/. And if we include a few well-positioned blanks in the sequence for future development, then we can insert some options later and append those we can't envisage today. With sufficient thought we could encapsulate everything that is required to replicate a study in a few lines of text, in a process that should see us through the next several years. (We just review and revise the reporting structure periodically, and we naturally include a version number at the very start so that we immediately know what we're looking at!)
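Decoding is equally straightforward. Here's another toy sketch, this time assuming a made-up version tag ("V1") at the front and "-" for unused options; again, the field list is invented purely for illustration:

```python
# Toy sketch of a decoder: the version tag and field list are made up.
FIELDS = {
    "V1": ["field_strength", "vendor", "model", "software", "coil",
           "tr", "te", "volumes", "dummy_scans", "slices", "thickness",
           "gap", "echo_spacing", "prescription", "fov", "matrix"],
}

def decode(report_string):
    version, *values = report_string.split("/")
    # Unused options are reported as "-" and decoded as None.
    return {name: (None if value == "-" else value)
            for name, value in zip(FIELDS[version], values)}

print(decode("V1/3T/SIEM/TRIO/VB17/12CH/TR2000/TE30/150V/2DS/"
             "30SLD/THK3/GAP0.3/ESP050/22AXCOR/FOV224x224/MAT64x64"))
```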

There, that's my contribution to the common good for today. I just made up a silly syntax by way of example; the precise separators, use of decimal points, etc. would have to be thrashed out. But if this effort has legs then count me in as a willing participant in capturing the acquisition side of this business. We clearly need to do better for a litany of reasons. One of them is purely selfish: I find it hard or impossible to evaluate many fMRI papers for want of just a few key parameters. I think we can fix that. We really don't have an excuse not to.


13 comments:

  1. Just to cause trouble, how would you represent software upgrades during a study in your shorthand? Yes, those upgrades may be relevant to report, but it's just one example of how a simple shorthand code starts getting a bit less simple. For that matter, many studies keep most things constant, but have scans with different numbers of volumes, which can also make this more complex.

    I like the idea of standardizing acquisition reporting in general, but I'm not sure there's much of a benefit in comparing your original three lines to the shorthand.

    One limit I see with the Carp article is that they sample multiple journals, but don't get a big enough N to show a journal effect. I strongly suspect specialist journals like Neuroimage and Human Brain Mapping rank much higher on reporting methodology. That would be good for those journals, but it might also point to how disastrously bad things are in journals with broader audiences. Is the really bad under-reporting concentrated in the 28/241 articles from PLoS ONE, or scattered among the clinical journals?

    Also, I can't tell from skimming the paper, but if a manuscript reports FOV, then voxel dimensions and image matrix + slice thickness are interchangeable, and it's not clear they give papers credit for both. Given that the reporting rates for FOV & matrix are much higher than for voxel dimensions, I suspect they didn't account for issues like this. I wouldn't be surprised if similar issues surround other parameters they tabulated.

  2. Hi Dan,

    I think the task is to capture more (and more accurate) information about the acquisition, not to make a shorthand per se. I see a shorthand as a sort of checklist. I have no problem having everything in descriptive terms and in English, as was implied for stats/processing by Russ Poldrack & co. in their 2008 paper. But a shorthand or a checklist should encourage the submission of exhaustive information.

    Now, I would prefer to fix 100% of the reporting problem but that's unlikely. What if we could capture better information on 90% of fMRI studies? Or 80%? Or even a lousy 50%? We'd be ahead of where we seem to be today. And it sets the bar higher for those writing fMRI papers even if they don't comply explicitly with a formal reporting system. I would first try to capture information pertinent to the vast majority of studies, which are at 1.5 or 3 T, use EPI of some kind, and try to keep the acquisition constant across the study. Upgrades happen, parameter changes happen, etc., but in my experience fMRIers try (correctly) to hold as many acquisition parameters constant as possible. I don't know what percentage of studies use fixed methods but I would wager it's a high percentage. We should aim at those studies first, with an eye towards including those with more awkward reporting later. It's all doable, but we don't have to do it all at once. Actually, we just have to start!

    Perhaps it is as easy as providing a checklist for people to consider when writing a methods section, and perhaps suggesting some simple shorthand notation where otherwise things might get verbose. I'll see where this discussion goes, and in the meantime I've started putting together a categorized format of parameters and options that could form the basis of a simple checklist. I could have a pretty thorough version up on this blog by the end of August, I think. And that could include talking to GE (you? :-) and Philips users first, to ensure there is a reasonable chance of including 90% of the options in the first pass.

  3. Good post. I love the idea of a shorthand description of acquisition parameters. I'd suggest there should also be one for data analysis - and ideally the latter should be machine readable so that you could actually copy and paste the relevant bit of the paper into (a future version of) your analysis package and immediately reproduce it...

  4. Neuroskeptic, where do you get off being all logical and sensible, huh? Your suggestion of having a machine-readable code, rather than just a reporting structure, is far too useful!

    Hmmm, I don't see why this can't be done (except for the usual reason - it takes effort). I'm not involved directly in the post-proc side of things but I know a couple of blokes who are. I'll go talk to them. They are already involved in handling cross-platform interaction so they should be able to spot immediately any major hurdles.

    I'm going to continue to listen to the various opinions on all of this reporting stuff for a while longer. In the meantime I'll ponder how I can approach the acquisition part; my forte, such as it is. And assuming that I don't find a solid reason not to, I may just have a go at it (for the acquisition side) and try to motivate others to take a stab at the processing side, which I do realize is FAR more complicated. Still, Rome wasn't built in a day.

  5. Great post, and thanks for mentioning my paper! I completely agree that standardized reporting of acquisition parameters (in natural language, a checklist, a specialized syntax, or whatever) would be useful. One tool that might be helpful here would be a script to read in data headers or scanner log files and automatically output to some standard reporting format. As for data processing, I think the NiPype team has hit on a really nice way to store and visualize methods pipelines: [ http://miykael.github.com/nipype-beginner-s-guide/howToVisualizeAPipeline.html ]. I don't use NiPype yet, but this is the single feature I'm most looking forward to when I do finally switch.

    @Dan: Absolutely right that my paper doesn't have enough articles from each journal for cross-journal comparisons. This is a limitation of the work, and I would like to see someone put together an appropriate sample for that kind of comparison. On the other hand, journals might feel antagonized if they were called out for having lousy methods reporting. On the other other hand, maybe some journals deserve to be called out. You're also right that some parameters can be inferred from other details even if they aren't explicitly reported. I just coded the articles according to Russ's checklist, without trying to impute anything that wasn't clearly stated.

    @Neuroskeptic: Machine readable methods reports would be awesome, and I think the NiPype markup and visualizations are a good step in that direction. Natural language is ambiguous and really annoying to parse! It could also be useful for labs to upload their entire analysis stream to github, for people who want to replicate their protocols.

    I agree 100% that the methods sections of fMRI papers are lacking - very. I doubt that I have read one yet that gives enough information for the well-informed reader to make an adequate assessment of possible systematic errors.

    The analysis and post processing is another story entirely. I just do not trust many of the steps in the processing and giving me a list of those steps won't change that. What I want to see is error analysis - science. An instrument is a device and the algorithms it uses. The input to the processing algorithms has error that can be estimated and propagated through the associated algorithms yielding an overall estimated error for the instrument. Until I start seeing that estimated error a list of applied processing steps will mean little to me, scientifically speaking.
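    To be explicit about what I mean (the symbols here are generic, not tied to any particular package): for a single processing step y = f(x_1, ..., x_n) with independent input errors, the textbook first-order propagation is

    \sigma_y^2 \approx \sum_i \left( \frac{\partial f}{\partial x_i} \right)^2 \sigma_{x_i}^2

    applied step by step through the pipeline. That is the kind of estimated error I want to see reported.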

  7. Thanks Joshua. Thanks also for pushing the momentum on the issues.

    "One tool that might be helpful here would be a script to read in data headers or scanner log files and automatically output to some standard reporting format."

    That was my first thought, too, but when I perused the information in a Siemens DICOM I didn't find all the information I was looking for, in particular the RF coil type. Of course, a script might do 90% of the work, but it would also make the effort primarily a computer science/scripting one, and that's not a skill I possess these days! So for a first pass I'll focus on a shorthand for reporting generic parameters, and perhaps if it proves useful then others will want to produce a script that converts their scanner's header info into the generic shorthand.
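    By way of a sketch of the easy part, assuming something like pydicom and only the standard public DICOM attributes (vendor-specific details such as the receive coil tend to live in private fields and would need extra handling):

```python
# Sketch only: uses pydicom and standard public DICOM attributes.
# Vendor-specific details (e.g. the receive coil) usually sit in
# private tags and are not handled here.
import pydicom

ds = pydicom.dcmread("example_epi.dcm")  # hypothetical file name

report = {
    "Manufacturer":      getattr(ds, "Manufacturer", None),
    "FieldStrength_T":   getattr(ds, "MagneticFieldStrength", None),
    "SoftwareVersions":  getattr(ds, "SoftwareVersions", None),
    "TR_ms":             getattr(ds, "RepetitionTime", None),
    "TE_ms":             getattr(ds, "EchoTime", None),
    "SliceThickness_mm": getattr(ds, "SliceThickness", None),
    "SliceSpacing_mm":   getattr(ds, "SpacingBetweenSlices", None),
    "Matrix":            (getattr(ds, "Rows", None), getattr(ds, "Columns", None)),
}
print(report)
```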

    Re. NiPy, two of the big proponents/developers are the two I want to talk to about starting a parallel effort on the post-processing side!

    Overall, I think the best thing to do is start. Do something. Anything! And then as people don't like what's suggested we can tweak it, extend it, etc. until in 2014 we'll all wonder what the fuss was about. I'll propose a first pass on the acquisition side (but don't let that stop anyone reading this from having a go yourself!) and we can tweak it via this blog.

    Anyone reading from OHBM, perhaps methods reporting should be a topic for discussion at next year's meeting? (If we have some draft proposals in play before May then we will have something tangible to discuss, rather than an open-ended session.)

  8. @practiCal I think GE scanners already give most or all of this information in a fairly easily readable text file.

    As for the reporting issue, I think having machine-readable meta-data in a pdf or as a supplemental file would be much more useful than any standardization in the manuscript text. A rigid format that would standardize everything from acquisition to results would not be human readable anyway.
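    Even a simple JSON sidecar uploaded as a supplemental file would do the job, e.g. (field names invented purely for illustration):

```python
# Illustration only: the field names are invented, not any standard.
import json

acquisition = {
    "FieldStrength_T": 3.0,
    "Manufacturer": "Siemens",
    "Model": "TrioTim",
    "SoftwareVersion": "VB17",
    "TR_ms": 2000,
    "TE_ms": 30,
    "Volumes": 150,
    "DummyScans": 2,
    "SliceThickness_mm": 3.0,
    "SliceGap_percent": 10,
}

with open("acquisition_metadata.json", "w") as f:
    json.dump(acquisition, f, indent=2)
```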

    @Joshua, you don't have enough data to go journal-specific, but I think you have enough to examine journal type. You could divide journals into "imaging specialized journals," "general neuroscience journals," and "clinical journals." If the under-reporting is almost all in the clinical journals, with the imaging journals doing OK, that points to a very different type of solution. It means the specialized journals simply need to explain what they're doing better, so that editors of clinical journals can better police their submissions. I don't think any editor would feel antagonized if the end goal is to give them resources to make sure the articles they publish are more relevant (and cited).

    I also think that just working off the checklist and not trying to understand what the parameters mean is a non-trivial flaw in the paper, one that overstates the results. When several reported parameters precisely define an unstated parameter, that unstated parameter is effectively included in the paper. The FOV/voxel size issue is the first example that came to mind, but I'm sure there are others too.

  9. Hi Dan, I'm thinking of taking as my inspiration a coding system used by NOAA and others for weather reporting, especially for aviation/marine services. These codes can be read with a bit of training, but more importantly the syntax is fixed in such a way that a decoder can be easily constructed.

    One of the problems with scanner vendors' information is that they use many proprietary descriptors and parameter names, so insight is definitely required to consolidate across all the common platforms. Then the question is how a non-expert user can translate vendor-specific information into a generic system. I'm curious to see how this might work. It's marginally useful for me to consider all these issues and also look across vendors' products, so I'll give it some thought and shove the draft results out here.

    At a minimum we should have a checklist for acquisition parameters, perhaps broken into essential, useful and optional. Written in English and with a list of proprietary options, this alone should make a big difference to the reporting issue. (Some people - most? - will work mindlessly from a checklist to be sure, just like they run the scanner. Sigh. At least we who read their manuscripts would get what we need out of the process, even if they don't.)

  10. Hi Dan

    You wrote:

    "A rigid format that would standardize everything from acquisition to results, would not be human readable anyway."

    Why not?

    And if not, then there is a big problem, because it is humans who need to judge whether the hardware and associated algorithms could have introduced systematic error.

    Anon, the trouble is that the information that matters changes, and it changes relatively fast. Parallel imaging & the related parameters went from rarely to frequently used by non-physicists in just a few years. The same will probably happen with multi-slice acquisitions, and those are just two examples from acquisition, which is probably the easiest part to standardize. Taking practiCal's NOAA weather code example, how much has that code needed to change in the past 50 years?

    A rigid format for fMRI studies that makes sense today is going to be gibberish to a grad student 10 years from now.

    I like the idea of advisory checklists of what should be included in a paper. My own reviewing standard is that, if I can't mostly replicate a paper's methods, I demand more details. Still, manuscripts should include this information in full sentences in a way that makes sense to readers without the added jargon of a rigid format. If we want something that's machine readable, that's great, but that can't replace a well-written manuscript text.

  12. Dan H

    Which is why the human-readable list must be updated regularly.
