"In addition to original research articles, authors are welcomed to submit Negative Results. The Negative Results article intends to provide a forum for data that did not substantiate the alternative hypothesis and/or did not reproduce published findings."Good for them! With one of the biggest physiology journals getting its act together, what field might be next? How about neuroimaging?
If any field could use a forum for negative results it is neuroimaging, and fMRI in particular. We seem to have a bias towards positive results that is second only to pharmaceutical research. Of course, it is perfectly natural for scientists to want to find something rather than not find it. Not finding an earth-like planet orbiting another star isn't nearly as exciting as finding one. And one wonders whether Ponce de Leon would have got tenure at a modern university if his most cited work was entitled "On not finding the Fountain of Youth."
One of the challenges facing fMRI is that it demands extensive statistics or modeling to coax meaning out of tiny signals in an ocean of noise. The convoluted analyses give skeptics plenty of ammunition for the charge that we are basically making it all up by choosing whichever test yields the answer we were looking for. (I have a former colleague - a 'real' MR scientist - who claims dismissively that fMRI stands for fictional MRI.) Without rigorous stats and models, though, it is easy to fall into the trap of false positives, a problem that has led to accusations of all sorts of voodoo of its own. How, then, should we treat negative fMRI results? Are there any caveats to encouraging their publication?
A negative result or a bad experiment: what's the difference?
In neuroimaging, the problem with negative results is - of course - that they are *so* easy to get. It is ridiculously easy to botch the acquisition, and almost as easy to botch the processing. Now, I may be going out on a limb here, but I am quite confident that most other fields don't have this problem to the extent that we do. In a 'conventional' experiment, such as flow cytometry in physiology, or titration in chemistry, or Doppler shifts in cosmology, an experimenter can expect to get certain clues that something is either broken or isn't optimal for detection of the particular observable. The experimenter can then do a quick control experiment to verify that the apparatus is working as desired. In other words, in a conventional experiment negative results due to methodological problems can often be detected via the experiment itself, or via a control. And, if you don't see what you expect to see, the very next step is to figure out how you screwed up! (If the telescope isn't measuring very much light from that star, try taking the lens cap off.) Only after you've checked that you haven't been a total klutz are you likely to conclude that your negative result arose for solid, scientific reasons and not because you're an idiot.
How do we determine whether we've screwed up our experiment in fMRI? We've got a negative result, sure enough, but what does it mean and why? If you don't get an activation in the ABC area, for example, was it because the subject couldn't read or didn't know the alphabet, or was he simply asleep? You might even draw the conclusion that he didn't have any hemoglobin. All are valid explanations in the absence of further information. What do we use for our experimental control, then? How do we determine the equivalent of ensuring the telescope is working (lens cap off) and is pointed in the right direction?
The need for meaningful controls.
For some experiments a control can be as simple as having a subject push a button in response to certain stimuli, thereby providing evidence that he is awake and giving us robust fMRI signals in pre-motor and motor areas. (Hemoglobin: check!) Still, though, subtle problems can plague us. Have we selected appropriate stimuli for a contrast, for example? If you're trying to map the ABC area you might need a stronger contrast than DEF stimuli in the alternate condition. (That's a guess, by the way. For all I know there's a Brodmann Area for each three letters of the alphabet.)
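For readers who like to see the logic spelled out, here is a toy sanity check along those lines. It is purely my own sketch, not anything from a particular analysis pipeline: the ROI time series, the button-press onsets, the TR and the simplified single-gamma HRF are all illustrative stand-ins. The point is only that a control condition gives you something concrete to test before you interpret a null result elsewhere.

```python
# A minimal sketch of the "Hemoglobin: check!" idea: correlate a motor-ROI
# time series with the expected button-press response. Everything below is a
# stand-in (synthetic data, assumed TR, simplified HRF); only the logic matters.
import numpy as np

rng = np.random.default_rng(42)

TR = 2.0          # assumed repetition time (s)
n_vols = 200      # assumed run length (volumes)

# Hypothetical button-press onsets, in seconds.
press_times = np.arange(20.0, 380.0, 30.0)

# Crude expected response: a stick function at each press, convolved with a
# simplified single-gamma HRF (peak near 5 s). Real analyses use better models.
t = np.arange(0.0, 30.0, TR)
hrf = t ** 8.6 * np.exp(-t / 0.547)
hrf /= hrf.max()

stick = np.zeros(n_vols)
stick[(press_times / TR).astype(int)] = 1.0
regressor = np.convolve(stick, hrf)[:n_vols]

# Stand-in for a mean time series extracted from a motor-cortex ROI; simulated
# here as response plus noise so the example runs end to end.
roi_ts = 0.8 * regressor + rng.standard_normal(n_vols)

# If this correlation isn't convincingly positive for the control condition,
# sort out the subject, paradigm or acquisition before reading anything into
# a 'negative' result in the region you actually care about.
r = np.corrcoef(roi_ts, regressor)[0, 1]
print(f"Motor ROI vs. button-press regressor: r = {r:.2f}")
```

If even this crude check fails for the control condition, a null result in the region of interest tells you precisely nothing.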
Next up, my area: the acquisition. We have a spectacular array of parameters we can mess up. We could, for example, choose a crappy slice thickness and orientation, or a sub-optimal TE, and render the experiment such that signal from the ABC area tends towards zero no matter how stellar the subject's language abilities. Yet the pre-motor and motor areas might still produce robust signals whenever a vowel appears and the subject pushes a button. Cripes. Does that mean there is no ABC area? Difficult to say! At the very least we might want to run some sort of localizer task to check that the particular region can be expected to activate (or deactivate) relative to the chosen baseline. We don't want to have the metaphorical equivalent of the lens cap on the telescope.
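To make the TE point a little more concrete, here is a back-of-the-envelope sketch of my own (not part of the original argument). It assumes the usual mono-exponential signal model, in which BOLD sensitivity scales roughly as TE·exp(-TE/T2*) and peaks when TE equals the local T2*; the T2* values and echo times are illustrative guesses, not measurements.

```python
# Rough sketch of how one TE trades off sensitivity across regions with
# different T2*. Under a mono-exponential model S(TE) = S0 * exp(-TE / T2*),
# BOLD sensitivity to a small change in R2* scales as TE * exp(-TE / T2*),
# which peaks at TE = T2*. The numbers below are illustrative assumptions, and
# the model ignores the macroscopic dephasing that makes real dropout regions
# even worse.
import numpy as np

def bold_sensitivity(te_ms, t2star_ms):
    """Relative BOLD sensitivity ~ TE * exp(-TE / T2*)."""
    return te_ms * np.exp(-te_ms / t2star_ms)

regions = {"cortex with long T2* (assumed 45 ms)": 45.0,
           "dropout-prone region with short T2* (assumed 20 ms)": 20.0}

for te in (30.0, 50.0):                      # two candidate echo times
    print(f"TE = {te:.0f} ms")
    for name, t2star in regions.items():
        frac = bold_sensitivity(te, t2star) / bold_sensitivity(t2star, t2star)
        print(f"  {name}: {100 * frac:.0f}% of that region's best-case sensitivity")
```

With these assumed numbers, a TE that captures nearly all of the available sensitivity in healthy cortex gives up almost half of it where T2* is short, and that's before the extra dephasing losses the toy model ignores.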
Finally, even when the stimuli, contrast, and acquisition are sound, you can still trip up in the analysis through the use of an inappropriate statistical test, or setting a statistical threshold incorrectly, or masking 'noise' at too aggressive a level.... Makes you wonder why we bother, doesn't it?
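To put a number on just one of those traps, here's a toy simulation, entirely my own and not drawn from any published study: run a one-sample t-test on tens of thousands of noise-only "voxels" and see how many "activate" at a typical uncorrected threshold. The voxel and subject counts are arbitrary but plausible orders of magnitude.

```python
# Toy demonstration of the multiple-comparisons trap: test noise-only 'voxels'
# and count how many pass an uncorrected threshold versus a corrected one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_voxels, n_subjects = 50_000, 20
data = rng.standard_normal((n_subjects, n_voxels))   # pure noise, no true effect

result = stats.ttest_1samp(data, 0.0, axis=0)
p_vals = result.pvalue

alpha_uncorr = 0.001   # a common uncorrected voxel-wise threshold
alpha_fwe = 0.05       # family-wise error rate for the Bonferroni comparison

n_uncorr = int(np.sum(p_vals < alpha_uncorr))
n_bonf = int(np.sum(p_vals < alpha_fwe / n_voxels))

print(f"'Active' voxels in pure noise at uncorrected p < {alpha_uncorr}: {n_uncorr}")
print(f"Expected by chance alone: ~{alpha_uncorr * n_voxels:.0f}")
print(f"Voxels surviving Bonferroni correction at {alpha_fwe} FWE: {n_bonf}")
```

The uncorrected map turns up dozens of spurious voxels right on schedule, while the Bonferroni-corrected map is almost certainly empty; cluster-based and false-discovery-rate corrections sit between those extremes.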
"You win some, lose some, it's all the same to me."
Motorhead - "Ace of Spades"
So does the fact that fMRI is easy to screw up imply that there's no scope for increasing the number of negative results published in the neuroimaging journals? Not necessarily. Some unpublished studies with negative results have presumably used appropriate stimuli, contrasts, acquisition and processing techniques for the particular hypothesis being tested. In these instances the negative results may be wholly valid, and the only reason they weren't rushed to print was a conflict with a prior publication that gave the researchers pause. ("Surely, we must have screwed up! The other study is already published, so it must be right, right?") It can take Herculean courage to contradict a published result, but bear in mind that the earlier study may have had methodological flaws that produced a spurious positive result!
The way forward.
What definitely isn't needed in neuroimaging is a forum for every flake, hack and charlatan to get his experimentally flawed study into print. So, if the neuroimaging journals do opt to join this brave new world where negative findings are actively solicited, we will need to ensure a robust answer to one question: were the methods performed correctly? It would be interesting to see how reviews of these negative-findings publications get done. A critical review of the methods of both the prior and the contemporary studies would seem to be warranted, lest one or the other (or both!) have a fatal flaw.
If you do have a negative result you're thinking about publishing, I would encourage you to do so. If nothing else, reports of negative findings might save a few people from exploring research cul-de-sacs of their own. Additionally, negative findings might stimulate ideas for new experiments in a way that positive results don't. I see negative results as somewhat analogous to the mistakes we make when learning a new task: they tend to be bloody annoying, but they can be disproportionately instructive. Just bear in mind that one massive caveat: make sure your methods are correct!