IN the last week Ben Goldacre’s ire has been widely felt, and rightly so, because what the Ire of Goldacre has been pointing at is a systematic bias in the publication of scientific and medical information. Ben’s focus is the way in which big pharmaceutical companies maintain an overwhelmingly positive academic publication record; he accuses them of selectively burying negative trial data and publishing only the positive trial data. This serves the interests of pharmaceutical companies, but not those of patients or doctors. You can see a video of Ben discussing this here.
The problem of publication bias in the scientific and medical literature is that positive results get published, and negative results – or those from studies attempting to replicate previous studies – by and large, don’t. There are several problems with this, ones I’ve had practical experience of:
1. Positive results can be the results of flukes, or sheer luck; a consonance of local circumstances that result in a publishable observation. Of course, there will be the necessary internal controls and repetitions that validate a result (one hopes), and these will form the minimal requirements to not be laughed out of the peer review process. However, repeats made within a single laboratory – by the same researchers – can be insufficient, or test too small a component to justify the universality of the conclusion(s) made. Peer reviewers take it ‘on trust’ that the materials and controls being tested are as described, and without contamination.
There is no suggestion that such results are knowingly contrived or maliciously fabricated; it is more that researchers naturally enjoy good results, and so perhaps don’t entertain the necessary rigour once their controls support their conclusion. I don’t necessarily fault this attitude; it is born of the knowledge that biology is highly variable, with subtle degrees of variation across populations of cells, proteins and DNA. Despite working with the exact same components, it is possible to test something until a new behaviour appears that casts doubt on your previously solid hypothesis. I defy any biochemist or microbiologist to claim they have never encountered a protein that behaves slightly differently between preparations, or cells that, on occasion, do something a little different from the last three times. Why take the risk? You can always address this in a later paper!
2. Negative results are still results; they can still tell us something new. Almost as important as knowing that X causes Y within a given context is knowing that X doesn’t cause Y within this context. Admittedly, if X causing Y was never claimed in the first place, publishing the fact that X doesn’t cause Y is going to be a hard sell. However, if X causing Y is already within the literature, the only circumstances in which it will be independently verified are when someone not only demonstrates that X causes Y, but also builds upon it; a mere replication would be unlikely to be published. A study that attempted to, but couldn’t, demonstrate that X causes Y would similarly be unable to publish, unless it managed to show that something else causes Y.
So within the current bias of publishing, the self-correcting nature of science is only possible on results that can be refined and built upon, or debunked whilst also presenting a new positive finding. This leaves a lot of ‘orphan’ results that remain apparently unchallenged within the literature, despite being incorrect.
3. The upshot of this is that the true labourers within science research, grad students, may base aspects of their projects on data that cannot be replicated. This wastes a considerable amount of time because most grad students, being the dutiful and self-doubting scientists they are, are more likely to assume that they did something wrong than that the authors of a peer-reviewed paper did. More painfully, they face a choice: essentially adopt the study, demonstrate where it went wrong and show what the true result is, therein having something to publish; or continue with their own project, accepting both the loss of time and the fact that they will not be able to publish a cautionary tale.
ON a personal note, I can’t think of a time when I attended a conference and wasn’t either the recipient – or the deliverer – of knowledge that, “So-and-so technique won’t work for that…”, or, “Yeah, we looked at that, we didn’t see anything either…”, to much groaning. It would have been nice had this information been shared in the literature.
4. Perhaps more worrying is when other studies draw conclusions based upon the finding that X causes Y without ever actually verifying this fact; there is no expectation that we should repeat all the work that has ever gone before, thus the fact is we work ‘on trust’ with regard to the literature, up to a point. So a lab may have found that our old friend Y causes Z, and if Z is an undesirable thing, they may ‘reasonably’ conclude that a viable way to address the problem of Z is to target X or Y. Thus the literature branches out, and new investigations become founded upon unrealistic science.
This is of course a gross over-simplification, merely to illustrate a point; granted, scientific research is a little more nuanced than this, but I have my reasons for illustrating these points as I have (see below). It may also be the case that some scientists would choose not to publish negative results, even if they could, in competitive fields where it might be seen as giving competitors a leg-up at your expense. I would add that there have indeed been efforts to catalogue ‘negative’ results, which include databases such as the Negatome, an ambitious effort to list proteins and protein fragments that are known to not interact with each other – useful to remove false positives when looking for proteins that do interact. There have also been various attempts at establishing Journals of Negative Results, with varying degrees of success, though one seemingly active such journal is the Journal of Negative Results in Biomedicine.
My reason for posting this is simply as a preamble to my next post, which will be about my experiences with taking on a project that was based on published data that we found to be incorrect, wasting a lot of time on it, trying to publish it, and having to jump over A LOT of hurdles to actually do so. I will also take the opportunity to address the darker side of peer review that researchers can face. I’ve been meaning to write this story for some time, not least because a Twitter-based conversation made the final part of the project possible, which is as good a reason as any to write about it.
Read my next post, ‘My published negative result’, on my experiences of getting negative results published, problematic peer review and the value of Twitter to researchers.
[UPDATE 5th Oct: Think I’ll compile a list of all negative journals here; if anyone knows of any, let me know @jacaryl]
Whilst I appreciate the ethos of these journals, without further investigation I cannot testify as to their quality.
– Journal of Negative Results in Biomedicine
– All Results Journals – Chem, Nano, Biol and Phys [via @CEbikeme]
– Journal of Pharmaceutical Negative Results (!)
– Journal of Negative Results – Ecology and Evolutionary Biology
– Journal of Articles in Support of the Null Hypothesis (via PsychFileDrawer.org)
– The Journal of Spurious Correlations (via PsychFileDrawer.org)
[UPDATE 6th Oct: On the subject of validating the reproducibility of experiments in biomedicine, an editorial in Nature asks some critical questions about the efforts of the Reproducibility Initiative, started by a for-profit start-up based in California. RI aims “to tackle this problem by engaging independent laboratories to repeat experiments and validate results”. Worth a read]
[UPDATE 10th Oct: More discussion: ‘No result is worthless: the value of negative results in science‘, BioMed Central blog]
[This post was restored from a WayBackWhen archive. It was originally posted to a blog called ‘The Gene Gym’ that began life on the Nature Network in 2010, and then moved to Spektrum’s SciLogs platform.]