# Reducing Measurement Error

Weisberg^{1} (*Willful Ignorance: The Mismeasure of Uncertainty*; Wiley, 2014)
reminds us of how recent the idea of statistics is. It’s hard for
me to imagine doing things without it.

For example, imagine it’s the 1600s and we are trying – for the first time – to
make scientific measurements of our neighbouring planet Mars. We have pointed
our telescopes at Mars at eight different times this year and tried to estimate its
radius. Our tools are not perfect, but we have arrived at the following
numbers.^{2} These are in Megametres – thousands of kilometres – but that
doesn’t really matter.

2.92 | 2.57 | 3.35 | 3.81 | 3.18 | 2.66 | 4.16 | 2.69

The smallest value is 2.57 and the largest is 4.16 – quite the difference!

What should be our best guess for the radius of Mars? As modern (post-1700s) humans, we want to take the average of all eight values, because averaging cancels out errors. We’d guess 3.17, which is only 7% off the true value.
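The arithmetic is easy to check. Here’s a quick sketch in Python; the true radius of 3.39 Mm is an approximate figure I’m supplying for the comparison, not one from the text.

```python
# The eight telescope estimates, in megametres.
observations = [2.92, 2.57, 3.35, 3.81, 3.18, 2.66, 4.16, 2.69]
TRUE_RADIUS = 3.39  # approximate modern value for Mars, in Mm (my assumption)

# Average all observations and compare against the true value.
guess = sum(observations) / len(observations)
error = abs(guess - TRUE_RADIUS) / TRUE_RADIUS

print(round(guess, 2))     # 3.17
print(round(error * 100))  # 7 (per cent)
```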

What did they do in the 1600s? They wrote down everything about the circumstances in which the observations were made – atmospheric conditions, the state of the telescope and other equipment, times, climate, weather; I wouldn’t be surprised if they recorded what the astronomer had eaten and how much they had slept. Then they asked an expert to judge, based on the circumstances, which observation was best, and they used that single observation.

It makes complete sense – they knew what the problem of measurement error was,
and they tried to reduce it by picking the least erroneous observation.
Absolutely the right intention, but they just didn’t have the technology we do
now: an understanding of statistics and how numbers behave in aggregate.^{3} The
mean of even quite erroneous observations is far more accurate than the “best”
single observation.
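That claim is easy to demonstrate with a quick simulation. This is a sketch under assumed conditions: Gaussian measurement noise with a spread I chose to resemble the numbers above, and the “expert pick” modelled as an arbitrary single observation, since circumstantial judgement gives no reliable way to identify the truly best one.

```python
import random
import statistics

random.seed(0)
TRUE_RADIUS = 3.39  # approximate radius of Mars in Mm (my assumption)
SIGMA = 0.5         # assumed noise level, roughly matching the spread above
TRIALS = 10_000

mean_errors, single_errors = [], []
for _ in range(TRIALS):
    obs = [random.gauss(TRUE_RADIUS, SIGMA) for _ in range(8)]
    # Strategy 1: average all eight observations.
    mean_errors.append(abs(statistics.mean(obs) - TRUE_RADIUS))
    # Strategy 2: trust one observation, as the 1600s expert effectively did.
    single_errors.append(abs(obs[0] - TRUE_RADIUS))

# The mean's typical error shrinks by a factor of sqrt(8) relative to
# a single observation's.
print(statistics.mean(mean_errors))
print(statistics.mean(single_errors))
```

In runs like this, the average error of the mean comes out roughly √8 ≈ 2.8 times smaller than the average error of a lone observation, which is exactly the “technology” the 1600s lacked.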

It’s easy to forget how recent that invention is, and how much it has changed the world.