Bayesian Statistics
I’m going to play very fast and loose with terminology in this article. More precisely, I’m going to talk about how additional evidence affects probabilities, and how easy it is to forget to take that into account when estimating probabilities. I’m going to call this “Bayesian statistics”, even though that’s not proper terminology.
That may sound like a very mathy introduction, but the truth is that this is very useful in everyday life. In fact, the two examples I’ll give you are from my own life.
Terrorist Cooperation
On the radio last night, they talked about terrorist methodology and how terrorists coordinate the things they do. They were talking about a specific terrorist, whom we will call Kim. One news anchor reported that
Kim has stated multiple times in interrogations that he worked completely on his own.
The co-host of the show had done some research, and could clue us listeners in.
This is interesting, because terrorist cooperation has been studied, and we know that the majority of the terrorists who work with others are proud of this fact and like to talk about it. Only about one in five of the terrorists who cooperate lie about it and say they work alone.
The news anchor said,
Oh, so only 20% of terrorists lie about working alone? Then the most likely situation is that this terrorist actually worked alone!
Co-host said,
Sure, that’s what I’ve gathered, anyway.
The expert they had on the line had to break in at this point.
Actually, only about one in ten terrorists overall work alone; the other roughly 90% cooperate with someone else. So now we face two alternatives:
It could be the case that Kim actually worked alone, and is telling the truth about it. This has about a 9% probability.
The other alternative is that Kim has cooperated with someone else, but is now lying about it. This has a probability of 90% times 20%, which comes out as 18%.
As you see, the alternative that Kim cooperated and is lying about it is twice as likely as the alternative that he actually worked alone, simply because it is so rare for terrorists to work alone.
There was a short silence on the air as both hosts tried to process what the expert had just told them.
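If you want to check the expert’s arithmetic, here it is as a minimal Python sketch. The base rates are the ones quoted on air; the only assumption I’ve added is that terrorists who truly work alone always say so.

```python
# Base rates quoted on air: about one in ten terrorists work alone;
# of the roughly 90% who cooperate, about one in five lie about it.
p_alone = 0.09            # P(worked alone), "about one in ten"
p_coop = 1 - p_alone      # P(cooperated), the remaining ~91%
p_lie_given_coop = 0.20   # P(claims "alone" | cooperated)

# Assumption of mine, not from the broadcast: terrorists who truly
# work alone always say so.
p_alone_and_claims = p_alone * 1.0
p_coop_and_claims = p_coop * p_lie_given_coop  # 0.91 * 0.20, about 18%

# Condition on what we actually observed: Kim claims he worked alone.
total = p_alone_and_claims + p_coop_and_claims
print(f"P(alone | claims alone)      = {p_alone_and_claims / total:.0%}")
print(f"P(cooperated | claims alone) = {p_coop_and_claims / total:.0%}")
```

Note that the two joint probabilities, 9% and 18%, don’t sum to 100%: the remaining 73% or so covers terrorists who cooperate and are happy to say so, which Kim’s statement rules out. Once we condition on the claim, cooperation comes out at roughly 67% versus 33% – about twice as likely, which is exactly the expert’s point.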
What happened? The expert had additional evidence, took it into account to refine the probabilities, and suddenly the result was completely flipped. I hope this example was fairly intuitive, because the next one is not at all obvious.
Medical Tests
There’s an encounter with a doctor I remember very vividly. I had met with him before because I was experiencing some diffuse symptoms, but he couldn’t figure out what was wrong, so he asked me to come back a few days later for a follow-up to see whether any of my lab values had worsened.
In the intervening time, I had done some research and found out that there’s a tick-transmitted illness that often presents with these diffuse symptoms, so I asked the doctor if they could test for this tick-transmitted illness.
The doctor replied, “I could, but it wouldn’t make any of us any wiser.”
What did that mean?
After all, the test for this tick-transmitted illness has a high sensitivity: if you have the illness, the test is pretty much guaranteed to come back positive. Its specificity is less stellar: it gives false positives relatively often. (For the people who like counting, let’s pretend the test gives false positives 15% of the time.) But does that really matter?
The naïve interpretation of this is that if the test shows positive, there’s a scary 85% risk I have the illness.
Not so fast.
The illness is also very rare. (If you like to count, say it exists in 0.4% of the patients who take the test.) So even if the test shows positive, it’s still highly likely that I don’t have the illness! For the counting people: 99.6% times 15% is much larger than 0.4%, so a false positive is about 37 times more probable than a true one.
So even without performing the test, we can say with 99.6% certainty that I don’t have the tick-borne illness. If we did the test and the result was negative, we’d increase that 99.6% to almost 100%, but it was already almost 100%, so it would not actually accomplish a whole lot.
Similarly, if the test result was positive (which wasn’t entirely unlikely), there would still be an overwhelming chance that I don’t have the illness. All the test would accomplish is making me worry more about something that is still very unlikely!
I didn’t get this explanation from the doctor, but I had done similar counting exercises before, so I figured it out more or less as soon as he said it.
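That counting exercise looks something like this in Python, using the invented figures from above (0.4% prevalence, 15% false positives, and a sensitivity I’m rounding up to 100%):

```python
# Invented figures from the text: 0.4% of tested patients have the
# illness, the test catches essentially every real case, and it
# cries wolf on 15% of the healthy patients.
prevalence = 0.004       # P(ill)
sensitivity = 1.0        # P(positive | ill), "pretty much 100%"
false_positive = 0.15    # P(positive | healthy), the so-so specificity

# The two ways a test can come back positive.
p_true_positive = prevalence * sensitivity             # 0.004
p_false_positive = (1 - prevalence) * false_positive   # 0.996 * 0.15

p_positive = p_true_positive + p_false_positive
p_ill_given_positive = p_true_positive / p_positive

print(f"P(positive)       = {p_positive:.1%}")            # about 15.3%
print(f"P(ill | positive) = {p_ill_given_positive:.1%}")  # about 2.6%
print(f"False positives are {p_false_positive / p_true_positive:.0f}x "
      f"more likely than true ones")
```

A positive result would raise the probability of illness from 0.4% to a mere 2.6% or so, which is why the doctor said the test wouldn’t make anyone wiser.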
To Remember
This sort of reasoning applies any time you have a set of potential outcomes and an indicator of the actual outcome that isn’t absolutely perfect. In that case, surprisingly often, the indicator pointing at the wrong outcome is more likely than the rare outcome itself.
Do you need to rewrite your code base? A genuine need to rewrite has a pretty low probability of occurring. But you have some indicators telling you a rewrite is necessary. Are those indicators 100% reliable? If not, then perhaps it’s more likely that your indicators are lying than that a rewrite is truly necessary.
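To make that concrete, here’s the same Bayes calculation wrapped in a tiny function, with numbers I’ve made up purely for illustration: suppose a rewrite is genuinely warranted 5% of the time, the warning signs always show when it is, and they also show 20% of the time when it isn’t.

```python
def p_real_given_indicator(base_rate, hit_rate, false_alarm_rate):
    """Probability the rare outcome is real, given the indicator fired."""
    real = base_rate * hit_rate
    false_alarm = (1 - base_rate) * false_alarm_rate
    return real / (real + false_alarm)

# Hypothetical numbers, purely for illustration: a rewrite is truly
# needed 5% of the time, the warning signs always show when it is,
# and they also show 20% of the time when it is not.
print(f"{p_real_given_indicator(0.05, 1.0, 0.20):.0%}")  # about 21%
```

Under those made-up numbers, the warning signs are wrong four times out of five. Whenever the outcome is rare, interrogate the indicator before you trust it.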