Last week, I wrote about two terms that often come up when describing the effectiveness of a test: sensitivity and specificity. Today, I thought I would talk about two related terms: accuracy and precision. Both accuracy and precision are words that, unlike sensitivity and specificity, are part of most people’s everyday vocabulary. However, the precise meanings of these two words are perhaps less well understood, and the two are often used interchangeably. Therefore, I will start by briefly explaining accuracy and precision as well as the difference between them.

Accuracy is the ability of a measurement to reflect what is true. Therefore, an accurate blood pressure measurement would be one that is close to the person’s actual blood pressure. Conversely, an inaccurate blood pressure measurement would be one that is far away from the person’s actual blood pressure. This can be represented by the figure below showing the results of an inaccurate test on the left (where the marks are on average to the left of the target centre) and of an accurate test on the right (where the marks surround the target centre). The important point to understand here is that it is not required that all measurements are near the true value for the test to be accurate, only that they are on average near the true value.

Precision, on the other hand, is a measure of how close the measurements are to each other. Therefore, a precise blood pressure test would be one that gave similar measurements each time the test was repeated (given the conditions remain unchanged between tests). An imprecise blood pressure test would be one that gave very different measurements each time the test was repeated. This is represented by the figure below showing imprecise measurements on the left and precise ones on the right.

It should now be clear that accuracy and precision have distinct meanings. You may have noticed that the left-hand image in the two figures is the same. This represents measurements that are both inaccurate and imprecise. The figure below shows measurements that are both accurate and precise.
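The distinction can also be made concrete with a small sketch. The readings below are invented for illustration (they are not real blood pressure data): one set is centred on the true value but widely spread (accurate but imprecise), the other is tightly clustered but consistently too high (precise but inaccurate).

```python
# Invented blood pressure readings measured against a true value of 120 mmHg.
true_value = 120

# Set A: centred near the true value but widely spread (accurate, imprecise).
set_a = [110, 132, 115, 128, 118, 127]

# Set B: tightly clustered but consistently too high (precise, inaccurate).
set_b = [131, 132, 130, 131, 132, 130]

def mean(readings):
    return sum(readings) / len(readings)

def spread(readings):
    # Average absolute distance of each reading from the set's own mean,
    # used here as a simple stand-in for (im)precision.
    m = mean(readings)
    return sum(abs(x - m) for x in readings) / len(readings)

for name, readings in [("A", set_a), ("B", set_b)]:
    error = mean(readings) - true_value  # closeness to truth, i.e. accuracy
    print(f"Set {name}: mean error = {error:+.1f} mmHg, spread = {spread(readings):.1f} mmHg")
```

Set A has a small mean error but a large spread; set B is the reverse, which is exactly the accurate/imprecise versus precise/inaccurate contrast in the figures.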

In terms of the types of test that I used as an example in my previous post (those that have just two possible outcomes: positive or negative), accuracy and precision have very specific meanings. You may remember that for a binary classification test used to test a person for a disease, there are four potential outcomes. These are

- correctly predicting that the person has the disease – true positive
- correctly predicting that the person does not have the disease – true negative
- incorrectly predicting that the person has the disease (when they do not) – false positive
- incorrectly predicting that the person does not have the disease (when they do) – false negative

Accuracy is a measure of the proportion of correct results given by the test. It is calculated by dividing the number of true positives and true negatives by the total number of tests. Precision is a measure of how often a positive result is correct. It is calculated by dividing the number of true positives by the total number of positive results.
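These two definitions translate directly into code. As a minimal sketch (the function names and arguments are my own, not from any particular library):

```python
def accuracy(tp, tn, fp, fn):
    # Proportion of all test results that are correct.
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    # Proportion of positive results that are correct.
    return tp / (tp + fp)
```

Note that precision only looks at the positive results; the true and false negatives play no part in it.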

So, if we have the following breakdown of test results

| | Has disease | Does not have disease |
|---|---|---|
| Test positive | 3 (true positive) | 12 (false positive) |
| Test negative | 1 (false negative) | 84 (true negative) |

Accuracy will be calculated as (3 + 84) / (3 + 1 + 12 + 84) = 0.87 = 87%

Precision will be calculated as 3 / (3 + 12) = 0.2 = 20%

In other words, the test gives the correct result 87% of the time. If the test says that a person has the disease, we can be 20% certain that they do indeed have the disease.
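The arithmetic above can be checked with a few lines of Python, using the same counts:

```python
# Counts from the worked example above.
tp, fn, fp, tn = 3, 1, 12, 84

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)

print(f"Accuracy:  {accuracy:.0%}")   # 87%
print(f"Precision: {precision:.0%}")  # 20%
```

The large gap between the two figures shows why a test can look impressive on accuracy alone while most of its positive results are still wrong.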
