IN science and medical publishing, everything is positive. Fewer than 4% of articles deal with negative results. There is a perception that negative results are non-results; only positive results are worth publishing. Why is showing that something does something so much more important than showing that something doesn’t do something?
Obviously, I expect some common sense in this; I don’t expect that a paper should be published just because you have demonstrated that drinking water doesn’t cause sunburn, since that would be a deeply unsurprising discovery. But what if a study demonstrates that a particular drug doesn’t do what people expected it to do? What if a biotechnology doesn’t work for a whole swathe of biological research?
Online science forums (or fora) are replete with anecdotal evidence describing how, time and time again, research scientists make the same mistakes, or encounter the same limitations, in particular techniques. This is because no-one ever publishes such limitations, or at least not more than 4% of the time.
So what is the problem? Well, science is expensive. Very expensive. It is expensive in material cost, and it is expensive in research hours. To discover that you’ve wasted a year doing work that someone elsewhere in the world wasted a similar amount of time doing only three years ago is deeply frustrating.
In coffee breaks around the world, many scientists have discussed the idea of a Journal of Negative Results, a compendium that could be consulted at the outset of a research project to determine whether a technique or approach has already been tried on a research problem and found not to work. Sometimes such negative results are mentioned, but only in passing, and only after an alternative technique produced the positive results that led to the eventual publication. They are rarely keyword-searchable and thus inordinately difficult to find.
As I mentioned, science costs a lot of money, far more than is necessary. This is largely because the money isn’t real: there is poor ownership of it; it is monopoly money. If it were coming out of our own pockets, we simply wouldn’t pay the prices we do; we’d demand more competitive ones. Consumables companies are free to charge extortionate prices for items that they produce by the million. I have tubes in my lab that cost £3.75 each; they can only be used once, and invariably one or two are wasted due to one problem or another. Kits are all the rage in research: pre-fabricated methodologies with all the reagents and instructions one needs to perform a particular experiment. The reagents themselves cost practically nothing in most cases, yet the kits can cost anywhere between £300 and £1,500 and, in many circumstances, afford you between 5 and 20 experiments.
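As a back-of-envelope illustration of what those kit figures imply per experiment (using only the ranges quoted above, not any specific product):

```python
# Rough cost-per-experiment range implied by the kit figures above.
kit_price_low, kit_price_high = 300, 1500   # £ per kit
runs_high, runs_low = 20, 5                 # experiments per kit

best_case = kit_price_low / runs_high       # cheapest kit, most experiments
worst_case = kit_price_high / runs_low      # dearest kit, fewest experiments

print(f"Cost per experiment: £{best_case:.0f} to £{worst_case:.0f}")
```

So even before wasted runs, each experiment costs somewhere between £15 and £300 in kit reagents alone.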
This expense is part of what makes negative results unwanted. There’s no real money in debunking an idea; it must come alongside a positive result if it is to come at all. In the pharmaceutical industry, it is part of the reason why any new drug is simply too large an investment to be allowed to fail, so the pressure is on to ensure, by hook or by crook, that the drug is licensed. Ben Goldacre writes at length about this in his recent book, and blog of the same name, Bad Science; it is most definitely worth a read!
Expensive research also deters investment in rarer diseases, or in any medication that runs the risk of having a short shelf-life. One class of drugs that has fallen foul of this economic equation is antibiotics, and this is a rather long preamble into what I wanted to say in this blog essay (or ‘blessay’, as Stephen Fry attests to horribly calling it).
Antibiotics are currently the topic of many conversations in the microbiological world (and a popular source of scare-mongering by tabloids like The Sun and the Daily Mail). Obviously antibiotic-resistant superbugs* are a serious problem in hospitals, so you might expect an army of researchers to be addressing the problem. You’d be correct, but unfortunately much of this work has minimal backing from the big pharmaceutical companies. Why is this, you wonder?
Well, let’s go through it point by point:
It costs ~$1 billion and 10-15 years to get a drug to market.
Companies apply for a patent on a prospective new drug, which gives them the legal right to exploit their “invention” for 20 years, in return for disclosing all the details of how the invention works. However, drugs can take 10–15 years to get to market, and at huge cost; consequently, this leaves only 5–10 years in which to recoup the costs. Furthermore, if they wanted to extend the patent, they would need to apply within 2 years of the initial filing, so unless the drug looks promising within two years, they may consider shelving it.
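The exclusivity arithmetic above is stark when written out. A minimal sketch, using only the figures already quoted (20-year patent term, 10–15 years of development, ~$1 billion cost):

```python
# How much patent-protected selling time is left after development?
patent_term = 20                  # years of exclusivity from filing
development_cost = 1_000_000_000  # ~$1bn to recoup, per the figure above

for dev_years in (10, 15):
    market_years = patent_term - dev_years
    # Average revenue needed per protected year just to break even
    per_year = development_cost / market_years
    print(f"{dev_years}y development -> {market_years}y on market, "
          f"~${per_year / 1e6:.0f}m/year to break even")
```

In the worst case, that is roughly $200 million of sales a year needed from a single drug before a penny of profit is made.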
They therefore want long-term treatment of chronic conditions.
If they are going to go to the bother of developing a drug, they would prefer to treat long-term illnesses, or to produce lifestyle drugs, which people take for many months or years. A 10–14 day course of antibiotics is small business, despite its obvious importance. Of course, the major risk in developing antibiotics is that they have a limited shelf-life: if prescription practices continue the way they are, i.e. you get a single antibiotic in your ‘scrip’, then the rise of resistance erodes the value of the drug. Antibiotics should be used in cocktails, wherein the bacteria would have to develop resistance to several antibiotics simultaneously, rather than just one, making resistance highly unlikely.
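The logic of cocktails is simple probability: if resistance to each drug arises independently, the chances multiply. A sketch of the idea, assuming an illustrative per-cell resistance probability of one in 10^8 per drug (my assumed figure for demonstration, not a measured value):

```python
# Illustration: independent resistance events multiply, so cocktails
# make simultaneous resistance astronomically rare.
p_single = 1e-8  # ASSUMED probability a given cell resists one drug

for n_drugs in (1, 2, 3):
    p_cocktail = p_single ** n_drugs  # independence: probabilities multiply
    print(f"{n_drugs} drug(s): ~1 resistant cell per {1 / p_cocktail:.0e} cells")
```

Under these toy numbers, resistance to a three-drug cocktail requires one cell in 10^24, which is more bacteria than a patient will ever carry.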
They want broad-spectrum antibiotics – largest patient base.
Broad-spectrum antibiotics are antibiotics that kill a broad range of different bacteria, rather than just the specific groups you’d normally target. The problem is that in doing so you put a selective pressure on many different types of bacteria, which raises the background level of resistance to the drug. Bacteria that are normally of no concern to us can be selected for resistance and, in worst-case scenarios, can pass this resistance on to disease-causing bacteria. We should not aim to create a reservoir of resistance in numerous bacteria; ideally we’d have numerous highly selective antibiotics targeted at the different groups of important pathogens. Of course, for the above reasons, this turns out to be impractical.
The FDA mandated that new antibiotics must be better than the old ones, even if there is no resistance to the new ones. Very few antibiotics will be “better” than penicillin; its efficacy (capacity to produce the desired effect) was very good. However, there is broad-scale resistance to penicillin and to the various generations of penicillin derivatives. A new drug might be identified that does not have the same efficacy as penicillin, but it still works, and as a bonus there are no resistant bacteria. Ah, but the efficacy isn’t good enough, sorry. This is a general rule, and perhaps for heart medications it usefully challenges the pharmaceutical industry, but it cannot accommodate the issue of bacteria developing resistance. We’re fighting a war and we need weapons, any weapons; we can’t wait for the next phase-II laser-firing super-shoulder-locking bazooka.
Reduced demand for antibiotics, less able to recoup costs.
One of the medical responses to antibiotic abuse, i.e. the bad practice of prescribing antibiotics where they have no use (viral infections), is to reduce the number of prescriptions given. The upshot is less profit for the pharmaceutical industry, and so less impetus to sell them.
To recoup costs, pharma makes big sales to agriculture.
Though if the healthcare system won’t buy antibiotics, they can be flogged off to the agricultural industry, which uses them by the boat-load as prophylactics (“against illness”) and growth enhancers, to increase livestock productivity. Prophylactic use of antibiotics should be reserved for serious operations in humans, not for getting a few more pounds of meat on your cattle. Massive and constant exposure to antibiotics breeds resistance in the natural bacteria of animals, and such resistance can find its way back into the clinic; suddenly a once-useful new drug in humans becomes useless, because a related drug has been given to cattle for several years.
I am no fan of the pharmaceutical industry; in fact, I would go so far as to say that I actively dislike it. The economics speak for themselves, but it would be nice if the industry were a little more altruistic in its outlook. Such is its power and control over medications that it can dictate the course of research in this and other countries, and essentially gets to decide which diseases get treated and which don’t. Pharmaceutical companies are not transparent in their research and have been found guilty time and again of being morally dubious in their approach to licensing drugs that they know may kill people; furthermore, they have dumped tons of expired and almost-expired (i.e. dangerous) medications on the third world, thinking that they’re doing it a favour, when in fact they get a great tax write-off and avoid the disposal costs of the medications. The third-world recipients meanwhile must fork out money they don’t have in order to dispose of these useless drugs safely.
It’s a systemically difficult world in which to get idealistic ideas realised.