Visualising large numbers

From distances between stars to the mass of the cells in the body, science is full of both very large and very small numbers.

To help deal with such large and small numbers, scientists often use scientific notation. So, for example, the mass of a typical human cell (which is around 0.000000001 grams) would be written as 1 × 10⁻⁹ g. This notation helps make such numbers more readable. Unfortunately, I think it also makes it harder to appreciate just how big or small these numbers really are.
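Incidentally, programming languages use the same idea. Here is a minimal, purely illustrative Python sketch showing scientific notation in code:

```python
# The mass of a typical human cell, written in scientific notation.
cell_mass_g = 1e-9  # 1 × 10⁻⁹ grams

# Formatting a plain decimal in scientific notation...
print(f"{0.000000001:.0e}")  # prints 1e-09
# ...and a scientific-notation value as a plain decimal.
print(f"{cell_mass_g:.9f}")  # prints 0.000000001
```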

To regain an appreciation of the size of these numbers, I like to try to visualise them. To help me do this, I made a poster containing one million dots (click on the link to open up a PDF version of it).

[Image: the one million dots poster]

You will have to zoom in a bit to be able to make out each individual dot!

If we use the example of the human cell (which weighs 0.000000001 grams), one million human cells (or one poster worth of human cells) would still only weigh 0.001 grams. Indeed, it would take 1000 million cells (or 1000 posters) to equal 1 gram. Just imagine that for a second… if each dot on the poster represented 1 human cell, we would need 1000 posters to equal the weight of a single paper clip. That, I think, really puts into perspective just how small a human cell is.
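If you want to check the arithmetic for yourself, here is a minimal Python sketch (assuming, as above, that a paper clip weighs roughly 1 gram):

```python
cell_mass_g = 1e-9           # mass of one human cell, in grams
dots_per_poster = 1_000_000  # dots (cells) per poster
paperclip_mass_g = 1.0       # assumed mass of a paper clip, in grams

# One poster's worth of cells.
poster_mass_g = cell_mass_g * dots_per_poster
print(poster_mass_g)  # ~0.001 grams

# Posters needed to match the paper clip.
print(paperclip_mass_g / poster_mass_g)  # ~1000
```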

So, when you next see a very large or small number, try to put that number into perspective by remembering just how many dots are on this poster.

Feel free to use this poster for your own ends. If you want to edit the poster at all, you can open a PowerPoint version of it here.

Note: Just for fun, I made one of the dots on this poster red! See if you can find it.

Accuracy and precision

Last week, I wrote about two terms that often come up when describing the effectiveness of a test: sensitivity and specificity. Today, I thought I would talk about two related terms: accuracy and precision. Both accuracy and precision are words that, unlike sensitivity and specificity, are part of most people’s everyday vocabulary. However, the actual meanings of these two words are perhaps less clear, and the two are often used synonymously. Therefore, I will start by briefly explaining accuracy and precision, as well as the difference between them.

Accuracy is the ability of a measurement to reflect what is true. Therefore, an accurate blood pressure measurement would be one that is close to the person’s actual blood pressure. Conversely, an inaccurate blood pressure measurement would be one that is far away from the person’s actual blood pressure. This can be represented by the figure below, showing the results of an inaccurate test on the left (where the marks are on average to the left of the target centre) and of an accurate test on the right (where the marks surround the target centre). The important point to understand here is that it is not required that all measurements are near the true value for the test to be accurate, only that they are on average near the true value.

[Figure: inaccurate measurements (left) and accurate measurements (right)]

Precision, on the other hand, is a measure of how close the measurements are to each other. Therefore, a precise blood pressure test would be one that gave similar measurements each time the test was repeated (given that the conditions remain unchanged between tests). An imprecise blood pressure test would be one that gave very different measurements each time the test was repeated. This is represented by the figure below, showing imprecise measurements on the left and precise ones on the right.

[Figure: imprecise measurements (left) and precise measurements (right)]

It should now be clear that accuracy and precision have separate meanings. You may have noticed that the left-hand image in the two figures is the same. This represents measurements that are both inaccurate and imprecise. The figure below shows measurements that are both accurate and precise.

[Figure: measurements that are both accurate and precise]

In terms of the types of test that I used as an example in my previous post (those that have just two possible outcomes: positive or negative), accuracy and precision have very specific meanings. You may remember that for a binary classification test used to test a person for a disease, there are four potential outcomes. These are:

  • correctly predicting that the person has the disease – true positive
  • correctly predicting that the person does not have the disease – true negative
  • incorrectly predicting that the person has the disease (when they do not) – false positive
  • incorrectly predicting that the person does not have the disease (when they do) – false negative

Accuracy is a measure of the proportion of correct results given by the test. It is calculated by dividing the number of true positives and true negatives by the total number of tests. Precision is a measure of how often a positive diagnosis is correct. It is calculated by dividing the number of true positives by the total number of positive results.

So, if we have the following breakdown of test results:

                  Has disease   Does not have disease
  Test positive        3                  12
  Test negative        1                  84

Accuracy will be calculated as (3 + 84) / (3 + 1 + 12 + 84) = 0.87 = 87%

Precision will be calculated as 3 / (3 + 12) = 0.2 = 20%

In other words, the test gives the correct result 87% of the time. If the test says that a person has the disease, we can be only 20% certain that they do indeed have the disease.
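For readers who prefer code to formulae, here is a minimal Python sketch of the same two calculations (the function names are my own; the counts are those from the table above):

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all test results that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Fraction of positive results that are correct."""
    return tp / (tp + fp)

# Counts from the table: 3 true positives, 1 false negative,
# 12 false positives and 84 true negatives.
print(accuracy(tp=3, tn=84, fp=12, fn=1))  # 0.87
print(precision(tp=3, fp=12))              # 0.2
```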

Sensitivity and specificity

One of the things that I notice about science is that people often think that it is difficult and that only a privileged few have the ability to grasp it. That is not the case at all. If it were, I would never have got very far in the subject. A big reason I started this blog is that I want to help people see that anyone can understand science. If you don’t understand it, that is probably because it is not being explained to you very well.

With this in mind, I hope to write some short articles on topics related to science. I think that when people read and understand them, they will realise that behind all the technical jargon they associate with science is something interesting… something worth exploring further. The first topic I will discuss is something I often come across in the health profession: sensitivity and specificity.

The most common use for these terms is for describing the effectiveness of medical tests. To help explain sensitivity and specificity, consider for a moment a medical test for a disease that is performed on 100 people. Each of these 100 people either has or does not have the disease. Also, the result of each of the 100 tests will either be positive (predicting that the person tested has the disease) or negative (predicting that they do not). Therefore, for each test, there are four potential scenarios:

  • correctly predicting that the person has the disease – true positive
  • correctly predicting that the person does not have the disease – true negative
  • incorrectly predicting that the person has the disease (when they do not) – false positive
  • incorrectly predicting that the person does not have the disease (when they do) – false negative

If we know how many of the 100 tests fall into each of these categories, we can determine very useful information about the test. The ability of the test to correctly predict that a person has the disease can be calculated by dividing the number of true positives by the total number of people who have the disease. This is known as the sensitivity of the test. The ability of the test to correctly predict that a person does not have the disease can be calculated by dividing the number of true negatives by the total number of people who do not have the disease. This is known as the specificity of the test.

So, if we have the following breakdown of test results:

                  Has disease   Does not have disease
  Test positive        3                  12
  Test negative        1                  84

Sensitivity will be calculated as 3 / (3 + 1) = 0.75 = 75%

Specificity will be calculated as 84 / (12 + 84) = 0.875 = 87.5%

In other words, if the person has the disease, the test will diagnose them with the disease 75% of the time. If the person does not have the disease, the test will return a negative diagnosis 87.5% of the time. While both sensitivity and specificity are important indicators of a test’s effectiveness, you can probably see by now that there is a distinct difference between the two.
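As a minimal sketch (in Python, with the counts from the table above and function names of my own choosing), the same calculations look like this:

```python
def sensitivity(tp, fn):
    """Fraction of people with the disease whom the test correctly identifies."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of people without the disease whom the test correctly clears."""
    return tn / (tn + fp)

print(sensitivity(tp=3, fn=1))    # 0.75
print(specificity(tn=84, fp=12))  # 0.875
```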

I will finish with a quick note on the clinical relevance of sensitivity and specificity. If the disease that you are trying to detect is life-threatening, it is of course vital that you identify every person who has the disease (even if the test mistakenly says someone has the disease when they do not). You therefore want your test to have a high sensitivity. If the way in which you treat the disease you are trying to identify has a high cost associated with it (e.g. it involves surgery or amputation), you do not want to put healthy people through it. You therefore want your test to have a high specificity.

You can see a post I wrote about two related terms, accuracy and precision, here.

Is GM food safe?

The short answer is yes.

The slightly longer answer…

A paper has just been published in the Critical Reviews in Biotechnology journal cataloguing and summarising 1783 scientific studies that looked into the safety and environmental impact of GM food. Its conclusion was that

The scientific research conducted so far has not detected any significant hazards directly connected with the use of GE crops

In addition, there is a consensus conclusion from science organisations worldwide that GM foods are safe. So why, then, is there such apparent controversy about the topic?

A search of just one online newspaper, the Daily Mail, reveals hundreds of articles about GM food. Many of these question its safety or environmental impact. To get a flavour of the concerns raised by journalists about this issue, I will analyse the first article I clicked on.

The title of this article, ‘‘Frankenstein food’ a good thing? It’s all great GM lies‘, is unashamedly provocative and immediately tells us the stance the journalist is going to take. The thrust of the article relates to comments made by Environment Secretary Owen Paterson about whether meat being sold in the UK has been raised on a GM diet. I will, however, jump forward to focus on the claims the article makes about GM food safety.

The biggest piece of evidence against the safety of GM foods used by the article comes in the form of a scientific study by Dr Spiroux de Vendômois et al. in the International Journal of Biological Sciences. Whilst the study in question concludes that it identified new side effects related to the consumption of GM food, it also has many problems. Chief among these is its small sample size. As admitted in the paper:

Only 10 rats were measured per group for blood and urine parameters and served as the basis for the major statistical analyses conducted.

When a statistical analysis is based upon such a small sample size, it is very difficult to determine what is causing the effects that you are seeing in the data.
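To see why this matters, here is a hypothetical Python simulation (my own illustration, not the study’s data or method): it repeatedly computes the average of groups of 10 and groups of 100 measurements drawn from the same underlying distribution, and shows how much more the small-group averages vary by chance alone.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def group_mean(n):
    """Average of n measurements drawn from the same distribution."""
    return sum(random.gauss(100, 15) for _ in range(n)) / n

# Simulate many repeated 'experiments' for each group size.
means_of_10 = [group_mean(10) for _ in range(1000)]
means_of_100 = [group_mean(100) for _ in range(1000)]

# The spread of the group averages shrinks as the group size grows,
# so apparent 'effects' in small groups are often just noise.
print(max(means_of_10) - min(means_of_10))    # wide spread
print(max(means_of_100) - min(means_of_100))  # much narrower spread
```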

The Daily Mail article references a number of other pieces of evidence to further support its claim. It talks about “large-scale GM crop trials conducted in this country”; without further details or references, however, I was unable to investigate this. Other unreferenced claims about studies are also made. The journalist also uses a second-hand anecdote to further make her point.

That might explain some anecdotal evidence I came across recently concerning a Danish farmer whose pig herd had mysteriously fallen ill.

Finally, the article cites public opinion in its argument against the proliferation of GM foods in the UK.

a British Science Association survey showed public support for GM crops declining from 46 per cent in 2002 to just 27 per cent now

Of course, it is very easy to cherry-pick numbers from two particular years of a survey (and from one of the many questions that were asked) to make your point. I might equally say (using that very same survey) that 25% of Britons are now unconcerned by GM food, compared with 17% in 2003. The caution with which we should approach these figures is emphasised by Dr Tom MacMillan, director of innovation at the Soil Association.

The share saying they agree that GM food “should be encouraged” actually drops from 46% in 2002 to 27% in 2012. Not only does that directly call into question the notion that there is greater public appetite for GM, but the fact that the figures are 35% in 2005 yet 44% in 2010 suggest it is absolute nonsense to suggest a clear trend here.

In summary, there is a significant amount of evidence saying that GM foods are both safe to eat and safe for the environment. There remains, however, particularly in the media, the perception that GM food is unsafe or that the evidence of its safety is not sufficient. Articles claiming such, like the Daily Mail article presented here, point towards single specific studies (which in this case had significant methodological flaws) for support, whilst ignoring the large body of reputable evidence available. In addition, anecdotal evidence and specially selected survey results are used to support a particular claim.

When trying to sell a certain story or viewpoint, it is of course easy to find a particular study, anecdote or statistic that appears to back you up. Science however does not work like that. Science requires continuous questioning, testing and experimentation in order to come to (and maintain) a consensus conclusion. That is exactly why we can now say that GM food is safe.

Sense About Science media workshop

I have often found myself frustrated about the way in which science is represented in the media. I don’t think this is something restricted just to the usual suspects of the Daily Mail and Fox News, but something more pervasive in the way in which science is perceived by people in general. This was brought to my attention just a few weeks ago when searching for science books within Waterstones in Piccadilly, London. Despite it being the flagship store in London and possessing eight and a half miles of shelving, I was surprised to find a science section comprising just a few hundred books, many of which were actually about DIY.

With a newfound resolve to somehow help right this perceived wrong, I recalled a presentation I had attended, given by a member of an organisation dedicated to achieving a better representation and understanding of science. This organisation (as I later found out) was Sense About Science and, after applying online, I was lucky enough to make it onto one of their quarterly media workshops.

The day started at a leisurely 10.30am (for those of us living in London) with registration, a wide assortment of chocolate biscuits and a ‘goodie bag’ containing a rather futuristic-looking mouse mat. The first discussion of the day began half an hour later, with the three invited panellists speaking about their experiences with the media. Robert Dorey talked about what he called “the good, the bad and the off-the-wall” of media interaction. In the ‘bad’ part of his talk, he explained that one of the biggest challenges associated with live radio interviews is that everything you say is broadcast. It is therefore important to be clear about what you are saying and also to be aware of the situation that you are entering into (since you may well be having to contend with sound effects and the like). Conversely, a pre-recorded interview can mean that much of what you say (often the very thing that you actually wanted to say) is mercilessly edited out of the final programme.

Following a brief talk by each of the panellists, there was the opportunity for questions and discussion.  Questions asked included “How do you deal with questions from fundamentalists?” and “How do you get your core message across?”. The advice given was that you don’t always have to answer a question and, if you do, your answer should be well thought out beforehand. The discussion part of this first session involved dividing into four groups based upon the shapes printed on our name badges (which somewhat unfortunately included squares).  Within our groups we were asked to think about the good and bad aspects of media reporting of science. Perhaps the most pertinent point made by my group was that the agenda of the journalist is different to that of the scientist. Whilst this can help to make science stories more entertaining to the public, it can lead to misrepresentation and selective bias.

The next part of the session was a panel discussion on what journalists are looking for. One of the panellists was Claire Coleman, a freelance journalist who often reports on science stories for the Daily Mail. She said that her non-scientific background was of benefit to her, since it made communication with readers (who generally also do not have a scientific background) easier. I contested this point later on, during the questions portion of the session, by arguing that a lack of training in the field in which a journalist works would not be seen as a benefit in other disciplines, such as finance or sport. Midway through the talks from the panellists, the session was halted by an ominous-sounding “sewage emergency”. Thankfully, following a timely relocation of everyone to the first floor of a nearby pub, the session was able to continue.

The final session of the day involved representatives from a university media office, Voice of Young Science and Sense About Science. In particular, I found the advice on the various ways in which we can communicate our research useful. One of the key points the panellists made was that it is vital to think about what it is that the journalist wants. By thinking from their perspective, we can frame our research in terms of a message that means something to the public. Indeed, we also have to realise that sometimes our work is simply not suitable for media reporting.

In summary, the workshop, my first foray into the other side of science reporting (the journalists’ side), was a great experience and one that I feel will better prepare me for any future encounters with the media. It has also (as workshops have a habit of doing) inspired me to be a bit more proactive in getting a better representation of science into the wider world.