Time for a new Wordle: see at a glance what I’ve been blessaying (blog essaying) about recently.
A dichotomy of research…..
I AM a juggler by profession, but it is not balls that I juggle; instead, I juggle variables.
I currently work in basic technologies research. This is not to say that my research is basic; rather, the technologies we are trying to develop are at a fundamental, or grass-roots, level. As such, some might call such research ground-breaking, though I hasten to add that the term is best considered in its metaphorical sense. If any of you have ever had to dig in a new garden, or have led the way in deep snow on a mountain, then you’ll have some idea of what breaking new ground is all about. It is hard and unrelenting, and there is no lateral movement: you either move forward into new ground, or you back-step until you reach base camp and set off in a new direction.
I am working on a biochemical reaction (a reaction involving biomolecules such as proteins and DNA) that has been shown to work in solution, in a tube. Other people have shown this, and I have shown it. However, interesting as this reaction is, we want to go further with it and have it work on a solid gold surface. Why? Well, ultimately we would like to be able to control the reaction by using an electric field, to turn it on or off, or better still control the precise level of its activity.
Having a biological system that works at the flick of a switch seems like a pretty cool idea, right? You’re sitting in a small, dusty village in central Africa or central South America. You’ve taken a load of blood and saliva samples that you intend to test for certain antibodies, or for the presence of bacterial or viral infections. It’s hot, there is no power, and it took you a week of travel to get there. You are worried that your samples will degrade before you are in a position to test them, and if you are especially well funded, you will have brought a whole portable lab with you, at great cost, and at great loss if it breaks en route.
What if, instead of all that hassle, you take out the small box you brought with you, in which there are several foil-wrapped packets containing plastic cassettes about the size of your thumb, but considerably flatter? What if you could inject the blood or saliva sample at one end of the cassette, wait a minute, and at the other end one of several LEDs lights up? The combination of LEDs tells you what the cassette has detected. Inside the cassette are small flow-channels, each containing different biomolecules that have been painstakingly developed to function in this capacity. Simple, potentially very cheap: a laboratory on a chip.
Furthermore, you flick a switch on the cassette, which diverts the juice from the small battery from merely powering the LEDs to putting an electric field across the flow-channels; you can now conduct a second set of reactions using biomolecules that had been inactive until that point. TWO labs on a chip! Lab-on-a-chip technology already exists, but there is much further for it to go, and this requires basic technologies research.
Ideally we would have a chip that could perform logic functions. Rather than it just detecting molecule A and saying “hey man, you’ve got some molecule A here” (or likewise with molecule B), perhaps the presence of both molecule A and molecule B together indicates something more serious, which is signalled by a third molecule, C. Rather than you wasting time, and another chip, going back and testing the blood sample again for molecule C, the chip can detect both A AND B in combination, and in doing so will have activated another internal component that is able to then detect C. However, if the chip detects only B, NOT A, then this could indicate something else, so it triggers a component that can detect this something else, perhaps molecule D. See? A logical chip; a sketch of its decision logic follows below.
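For the curious, here is a minimal sketch of that decision logic in Python. The function name, the boolean detection flags and the follow-up components are all invented stand-ins for illustration, not the interface of any real device:

```python
# Toy model of the 'logical chip' described above. Detection events are
# modelled as booleans; a real cassette would derive them from its flow-cells.

def run_cassette(detects_a: bool, detects_b: bool) -> list[str]:
    """Return which LEDs and follow-up detectors the chip would activate."""
    actions = []
    if detects_a:
        actions.append("LED: molecule A present")
    if detects_b:
        actions.append("LED: molecule B present")
    if detects_a and detects_b:
        # A AND B together hint at something more serious, so the chip
        # activates the internal component that can detect molecule C.
        actions.append("activate detector for molecule C")
    elif detects_b:
        # B without A points elsewhere: bring the molecule D detector online.
        actions.append("activate detector for molecule D")
    return actions

print(run_cassette(detects_a=True, detects_b=True))   # A and B -> also test for C
print(run_cassette(detects_a=False, detects_b=True))  # B only  -> also test for D
```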
Well, returning from the realms of near-future science fiction: in order to achieve any of the above in my basic technologies research, there are several variables we (meaning I) need to juggle, and this is where the fun (read: pain) starts:
Continue reading “A dichotomy of research…..”
Writing for all…
Who are the best people to communicate science? Is it the actual scientists producing the science? Is it other scientists who are not directly involved in the science? Is it journalists or science writers? I would argue that it is largely irrelevant; the best people to communicate science are those who are interested in it.
If you had asked me the same question 10 years ago, I might have answered differently. I might have suggested that it is best that scientists communicate science and that journalists leave well alone, but those would have been the words of a recent graduate, cock-sure and arrogantly entering their chosen field with the kind of bravado that I still see in every newly minted graduate. In any case, 10 years ago we were dealing with the height of the MMR-autism fallacy, which demonstrated precisely the wrong way to go about reporting science. As we get older, though, we mellow as we start to see the bigger picture; amusingly, it is probably this attitude that fuels so many teenage tirades against parents: the teenager believes the parents don’t take anything seriously, while the parents, having seen it all before, have the benefit of perspective.
A question of balance…
Giving equal attention to “all sides” can misrepresent the prevailing scientific consensus.
One of the major issues often debated in science journalism is that of balance. It was raised to public awareness by an article by Chris Mooney entitled ‘Blinded By Science: How “Balanced” Coverage Lets the Scientific Fringe Hijack Reality’ (Columbia Journalism Review, November 2004). In it he asserted:
…the journalistic norm of “balance” has no parallel in the scientific world and, when artificially grafted onto that world, can lead reporters to distort or misrepresent what’s known, to create controversies where none actually exist, or to fall prey to the ploys of interest groups who demand equal treatment for their “scientific” claims.
A journalist may try to find a compromise or an objective ‘truth’ by combining numerous sources, affording them equal opportunity to give their opinions and allowing the reader to make up their own mind. The question is: how well does this journalistic system of ‘objectivity’ serve a science journalist when reporting on science topics?
Hox box…
WHAT do hedgehog, merlin and okra have in common with jelly belly, pimples and Genghis Khan? What if I said that hedgehog is not just a cute, spiny mammal? That Merlin is not just the name of a bird and a wizard? That okra is not just a vegetable? Would you be interested to learn that “jelly belly” is crucial for gut muscle development? “Genghis Khan”, far from being a Mongol lord who conquered Asia, is involved in the stimulation of structural components in cells. Oh, and we can look to “Merlin” to restrain cell proliferation.
Confused? Don’t be. They are all names of genes – sequences of DNA that exert influence on a creature by encoding and regulating the production of a protein. These particular genes are found in Drosophila melanogaster, otherwise known as the fruit fly.
Scientists are interested in a short stretch of DNA sequence called the “homeobox” (“homeo”, from the Greek for “similar”, and “box” because the sequence comes as a defined package). Genes containing it are known as “Hox genes”. First identified in fruit flies in the early 1980s, they control different aspects of body development: head, legs, wings or other structures. Interestingly, many other creatures, including humans, possess these genes, where they carry out similar functions. We are all basically running on the same genetic software.
Research in this area is helping us to understand why our head is where it is, why we have two arms joined to our upper body and not to our hips, and why we have feet, rather than hands, at the ends of our legs. More significantly, such studies are helping to identify the genetic basis of certain human diseases by revealing the mechanisms of this genetic control; errors in embryonic development account for a large number of spontaneous abortions in humans.
Hox genes produce simple proteins that govern the activities of other “target” genes, resulting in the development of specific body parts at specific locations. It is those target genes that contain the specific information about what each appendage looks like and how it is built; the hox genes control the degree to which those target genes are switched on or off (a toy sketch of this hierarchy follows below). The arrangement of the hox genes mirrors the arrangement of the body parts they control, starting with the head at one end, followed by the mid-sections, and so on. It’s a logical blueprint that works because it represents economy of information.
In the above figure, the hox gene clusters of the fruit fly are colour-coded for the respective sections of head-to-bottom development they control, and below them are the homologous (same-function) genes in a mouse. Whilst we mammals have four clusters of these gene groups, some of which have become redundant or been lost thanks to compensation by one of the other clusters, the startling similarity with the fruit fly hox cluster is unmistakable.
So, over millions of years of animal evolution, despite the changes to body form and function (that is, changes to the specific genes that the above clusters control), the controlling elements themselves have been conserved. This says something important about the blueprint of animals on Earth: if it works, keep it.
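If it helps to see that economy of information spelled out, here is a toy model of the hox-to-target-gene hierarchy in Python; every gene name and activity level in it is made up purely for illustration, and the real regulation is vastly more subtle:

```python
# Toy illustration of the hox -> target-gene control described above.
# Names and numbers are invented; this sketches the logic, not the biology.

# Each hox gene sets how strongly its target genes run (0.0 = off, 1.0 = on)
# in the body segment it governs.
HOX_CONTROL = {
    "hox_head":   {"antenna_builder": 1.0, "leg_builder": 0.0},
    "hox_thorax": {"antenna_builder": 0.0, "leg_builder": 1.0},
}

def segment_expression(hox_gene: str) -> dict[str, float]:
    """Return the activity level of each target gene under a given hox gene."""
    return HOX_CONTROL[hox_gene]

# The target genes hold the 'what does an appendage look like' information;
# the hox layer only decides where, and how strongly, that information is used.
print(segment_expression("hox_thorax"))  # legs on the thorax, not antennae
```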
Continue reading “Hox box…”
Premature conclusions
Something the media is very good at, and alas some scientists too, is drawing a conclusion about a scientific investigation before actually performing the investigation.
This is not how science works!
A recent example of this appeared in today’s Daily Mail, the popular gutter-rag that leads the way in pseudo-scientific sensationalism:
Women who drink coffee or tea during pregnancy may increase their baby’s odds of developing cancer, doctors believe.
Experts say caffeine may damage the DNA of babies in the womb, making them more susceptible to leukaemia, the most common cancer in children.
To establish the link, scientists at Leicester University will scrutinise the caffeine intake of hundreds of pregnant women and compare the results with blood samples from their babies after birth.
Researcher Dr Marcus Cooke said there was a ‘good likelihood’ the study would make a connection. Previous research has shown that caffeine damages DNA, cutting cells’ ability to fight off cancer triggers such as radiation.
Changes of this kind have been seen in the blood cells of children with leukaemia. Scientists know they occur in the womb, but do not know why.
‘Although there’s no evidence at all of a link between caffeine and cancer, we’re putting two and two together and saying: caffeine can induce these changes and it has been shown that these changes are elevated in leukaemia patients,’ added Dr Cooke.
So, they’re planning to investigate this link, yet Dr Cooke is quoted as (apparently) saying there is a ‘good likelihood’ of making the connection, despite, as he is later quoted, there being ‘no evidence at all of a link between caffeine and cancer’.
Dr Cooke is also quoted as saying that ‘previous research has shown that caffeine damages DNA, cutting cells’ ability to fight off cancer triggers such as radiation’. Now, I am not going to judge Dr Cooke on the basis of such quotes, because I know full well how much gutter-rags like to quote out of context, but I can’t help wondering whether this prior research involved caffeine being introduced to cells in a dish, rather than to an actual living, breathing mammal. Any number of chemicals can cause physiological disturbance in cell cultures, but this does not necessarily translate to their being harmful to us generally.
So what’s my problem?
The same, but different…
Epigenetics is a term that is being bandied around quite a bit in the biological literature these days. It is not a new term, but in its current usage it describes heritable changes in gene expression that occur without changes to the underlying DNA sequence.
So what does this actually mean? Well, most people will be familiar with the fact that DNA provides the blueprint for how an organism is put together, and that over time mutations in the DNA can change certain properties of what that organism looks like; or they may result in a genetic disease, such as cystic fibrosis or sickle cell anaemia.
However, how do we explain the phenomenon whereby, in a pair of identical human twins who share identical DNA, one twin can develop schizophrenia, pancreatic cancer or diabetes, whilst the other remains unaffected? If we were interpreting their development on the basis of their DNA sequence alone, we would have a conundrum.
The answer is that DNA is involved in a dynamic, interpretative process. For example, you may buy a new computer, and in this computer there is a graphics chip that is controlled by a piece of software called “firmware”. This software tries to get the most out of the hardware. Every so often, a new piece of firmware is released, and sometimes it can revolutionise the function of that graphics chip. The chip hasn’t changed, but the software has. This is not a perfect analogy, but what I want to convey is that sometimes the hardware doesn’t need to change; sometimes you can just change the way it’s used.
Thus, the sequence of bases in DNA does not necessarily have to be altered for a new effect to be seen in the resulting organism; some changes can occur by epigenetic processes. There are several different types of epigenetic process, and these differ depending on whether we are speaking about higher organisms, such as humans, or single-celled organisms, such as bacteria.
At the simplest level, one such epigenetic change might be a process called methylation, in which a chemical group is literally tacked onto the DNA at a certain sequence, which can result in a change of gene expression. If a gene is seen as a piece of DNA that results in a functional product, then we can start to see how changing the level at which this product is produced can have an effect.
One of the recent and interesting findings about such epigenetic changes is that they too can be inherited, leading to questions about the nature of “genetic memory”: the idea that the lifestyle led by your grandparents can have had a direct effect on the way your DNA expresses its instructions. In the example of the twins, once the identical twin embryos have separated, each cell division can add to an accumulating number of these epigenetic changes, amounting to quite a difference over a lifetime. Thus even things that are the same can be different.
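To make the “adding up over a lifetime” point concrete, here is a minimal simulation of that drift in Python; every number in it is invented for illustration, and it is in no way a biological model:

```python
import random

# Toy simulation of epigenetic drift between identical twins: each cell
# division gives each tracked gene a tiny chance of acquiring a new
# methylation mark. All rates and counts below are invented.

GENES = 1000        # hypothetical number of genes tracked
P_MARK = 0.0005     # invented per-gene, per-division chance of a new mark
DIVISIONS = 500     # notional number of divisions over a lifetime

def marked_genes() -> int:
    """Count genes that acquire at least one mark over a lifetime."""
    return sum(
        1 for _ in range(GENES)
        if any(random.random() < P_MARK for _ in range(DIVISIONS))
    )

# Same DNA, same starting point, yet each 'twin' accumulates a different
# set of marks, and hence a different pattern of gene expression.
print(f"Twin A: {marked_genes()} marked genes")
print(f"Twin B: {marked_genes()} marked genes")
```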
The state of art…
If you take a photograph of art, is that art too? Or is it just a photo of some art? When does it stop being art? Do you have to manipulate the photo before it becomes art; a change in the original intended emphasis of message?
What if you then take a photo of the photo of the art: does that then make it art? I think it might, but then why does taking more photos of the photos of the art make it more art than just the original photo? Surely the latter is “closer” to the original artwork than the 2nd+ generation photo?
But then, is it the degradation and graininess in the 2nd+ generation photos that makes it artsy? Why, if the original artwork were degraded, would this be more artistic? What would that mean for the original artwork? Was it not degraded enough, or are we now talking about something different? If so, where was the disconnect? Was it capturing the light of the art on a film or CCD? How is this different from capturing it on your retina? Is it the depth of field that makes the experience better? You can achieve that with a photograph. What if it is the smell?
Would it mean that the original artwork needed to be more degraded to be art?
I’m just a scientist trying to make matter do what it doesn’t want to do, but clearly it seems that art is doing what matters.
The perils of positivity….
IN science and medical publishing, everything is positive. Less than 4% of articles deal with negative results. There is a perception that negative results are non-results; only positive results are worth publishing. Why is it that showing that something does something is so much more important than showing that something doesn’t do something?
Obviously, I expect some common sense in this; I don’t very well expect that a paper should be published just because you have demonstrated that drinking water doesn’t cause sunburn; that would be a deeply unsurprising discovery. But what if it is a study that demonstrates that a particular drug doesn’t do what people expected it to do? What if it is a biotechnology that doesn’t work for a whole swathe of biological research?
Online science forums (or fora) are replete with anecdotal evidence describing how, time and time again, research scientists make the same mistakes, or encounter the same limitations, in particular techniques. This is because no-one ever publishes such limitations; or at least, not more than 4% of the time.
So what is the problem? Well, science is expensive. Very expensive. It is expensive in material cost, and it is expensive in research hours. To discover that you’ve wasted a year doing work that someone elsewhere in the world wasted a similar amount of time on three years ago is deeply frustrating.
In coffee breaks around the world, many scientists have discussed the idea of a Journal of Negative Results: a compendium that could be consulted at the outset of a research project to determine whether a technique or approach has already been tried on a research problem and found not to work. Sometimes such negative results are mentioned, but only in passing, and only after an alternative technique produced the positive results that led to the eventual publication. They are rarely keyword-searchable and thus inordinately difficult to find.
As I mentioned, science costs a lot of money, far more money than is necessary. This is largely because the money isn’t real: there is poor ownership of it; it is monopoly money. If it were coming out of our own pockets, we simply wouldn’t pay the prices we do; we’d demand more competitive ones. Consumables companies are free to charge extortionate prices for items that they produce by the million. I have tubes in my lab that cost £3.75 each; they can only be used once, and invariably one or two are wasted due to one problem or another. Kits are all the rage in research: pre-fabricated methodologies with all the reagents and instructions one needs to perform a particular experiment. The reagents themselves cost practically nothing in most cases, yet the kits can cost anywhere between £300 and £1500, and in many circumstances afford you between 5 and 20 experiments.
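Run the arithmetic on those figures: at the generous end, £300 spread across 20 experiments is £15 an experiment; at the stingy end, £1500 across 5 is £300 every time you open the box, and that is before a single failed run is accounted for.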
Now, this expense is part of what makes negative results unwanted. There’s no real money in debunking an idea; it must come alongside a positive result if it is to come at all. In the pharmaceutical industry, it is part of the reason why any new drug in production is simply too big an investment to be allowed to fail, so the pressure is on to ensure, by hook or by crook, that the drug is licensed. Ben Goldacre writes at length about this in his recent book, and blog of the same name, Bad Science; it is most definitely worth a read!
Expensive research also deters investment in rarer diseases, or in any medication that runs the risk of having a short shelf-life. One class of drugs that has fallen foul of this economic equation is antibiotics, and this is a rather long preamble into what I wanted to say in this blog essay (or blessay, as Stephen Fry horribly calls it).
Too busy to gripe….
Oh man I’m so busy, where does all the time go?
Well, Professor Brian Cox has some idea, or no idea, depending on which physicist you ask. Time was the subject of a recent Horizon episode, you see.
Did you see what I did there? It’s called a link; subtle, wasn’t it?
Actually, I’m unbelievably busy at the moment, trying to get material together for a paper, prepare presentations, plan holidays and book flights; it’s all go over here!
But speaking of flights and going (you see that? Another link, woo!), one concept that particularly interested me in Brian Cox’s Horizon episode was this: if time is thought of as a dimension, in the same manner that the physical space around you represents dimensions, then the speed at which we are moving through this time dimension is staggering; in fact, we are moving through it at the speed of light!
Ah, but we all remember our lessons in relativity from school physics, don’t we? Erm… don’t we? Anyway, wasn’t one of those iron-clad laws of physics something about not being able to move at the speed of light? This is true, says Brian Cox, except that it is only true that you cannot move through space at the speed of light.
There is apparently no problem with moving through time at the speed of light. Furthermore, time passes at different rates at different places in the universe: time runs marginally slower for us on Earth, as the mass of the Earth warps time (remember Einstein: space and time are linked as space-time); elsewhere, near massive stars, time moves at a rather more slovenly rate, and it slows to a standstill in the vicinity of black holes.
The other thing that Brian managed to “make bitesize” is the idea of time slowing down as you speed up, such that it would appear to stop were you moving through space (albeit impossibly) at the speed of light. You see, when you’re stood still, well, as stood still as you can be anywhere in the universe*, you move through time at the speed of light. However, as we are talking space-time, in trying to move through space, such as by riding a bike, you slow your passage through time; the cost of moving through space is paid for out of the time component, thus time slows. The faster you move, the more time slows, albeit imperceptibly; unless you are late for work and rush around, in which case time seems to move really quickly.
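For anyone who wants the bookkeeping behind that picture, here is my own sketch of the standard special-relativity identity (my addition, not from the programme). Writing t for coordinate time, τ for the traveller’s own “wristwatch” (proper) time, v for speed through space and c for the speed of light, the speed through space and the speed through time always combine, Pythagoras-style, to give exactly c:

\[
\left(\frac{dx}{dt}\right)^{2} + \left(c\,\frac{d\tau}{dt}\right)^{2} = v^{2} + c^{2}\left(1 - \frac{v^{2}}{c^{2}}\right) = c^{2}
\]

Set v = 0 and the whole budget of c is spent moving through time; push v towards c and the time term shrinks towards zero, which is precisely the “time appears to stop” limit above.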
Which links me onto another point (yes, good, isn’t it): perception. Time is as much perception as it is a universal inconstant. What time is it? Universal time? Earth time? Greenwich Mean Time? My time? Your time? They’re all different. In fact, Einstein once said that the only real way you can share the same perception of time is by sitting next to each other. This was almost certainly one of his outlandish ruses to chat up women.
For which we forgive him.
* Earth rotates on its own axis and orbits the Sun; the solar system orbits within the galaxy; and the galaxy orbits who knows what. We’re moving in lots of directions, and at great speed.