Confusing Uncertainty For Information
A while ago I came across a paper on how people extract information in uncertain situations: A preference for the unpredictable over the informative during self-directed learning; Markant and Gureckis; 2014. Available online.
I care about this because effective hypothesis testing is one of two critical skills for good situational awareness (the other is time and interruption management), and situational awareness is critical for high performance in many areas. Experts performing difficult tasks are really good at poking at the situation in a way that efficiently reveals what knowledge they are missing. It’s a short paper, and I recommend reading it!
I took away two points from it as particularly interesting. We’ll start with the more interesting one.
Maximising Uncertainty
Decision theory concerns situations in which we have a fixed, limited amount of information, and we need to make the best decision with the information given. In the real world, we often have the ability to choose actions that reveal further information to us. This makes the decision problem more complex: we now need to choose not only the decision with the best consequences given the information we have, but also account for what additional information the decision reveals. Sometimes it’s better to pick a decision with not-so-good consequences that reveals a lot of information, which opens doors to decisions with even better consequences.
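To make that concrete, here is a minimal sketch of the trade-off. Everything in it is invented for illustration: action A has the better immediate payoff but reveals nothing, while action B pays less up front but reveals the state of the world before a follow-up decision.

```python
# Two-stage decision sketch. Every payoff and probability here is
# invented purely for illustration.

P_GOOD = 0.5  # prior probability that the world is in the "good" state

# Stage-2 payoffs by (true state, follow-up action).
followup_payoff = {
    ("good", "aggressive"): 10,
    ("good", "cautious"):    4,
    ("bad",  "aggressive"): -8,
    ("bad",  "cautious"):    4,
}

def best_known(state):
    """Best follow-up payoff if stage 1 revealed the true state."""
    return max(followup_payoff[(state, f)] for f in ("aggressive", "cautious"))

def best_unknown():
    """Best expected follow-up payoff if stage 1 revealed nothing."""
    def expected(f):
        return (P_GOOD * followup_payoff[("good", f)]
                + (1 - P_GOOD) * followup_payoff[("bad", f)])
    return max(expected(f) for f in ("aggressive", "cautious"))

# Action A: immediate payoff 3, reveals nothing.
value_a = 3 + best_unknown()
# Action B: immediate payoff 1, but reveals the state before stage 2.
value_b = 1 + P_GOOD * best_known("good") + (1 - P_GOOD) * best_known("bad")

print(f"A (better payoff, no information): {value_a}")  # 3 + 4 = 7
print(f"B (worse payoff, reveals state):   {value_b}")  # 1 + 7 = 8
```

Action B wins overall despite the worse immediate payoff, because the information it reveals lets us pick the aggressive follow-up only when it is safe to do so.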
For example, say we’re selling a software service, and we are curious whether customers of version a or b are happier with the service. Our chief statistician gives us two options: we can either
- run a statistical analysis on the users of a and b, to find out if there are any differences in subscription history, activated premium features, etc. – or we can
- send out a survey to customers of a and b with some well-calibrated questions that reveal how happy they are with the service.
Here we have a situation where the first option is probably more informative, in that it will do a more efficient job of reducing our uncertainty about the true difference between a and b. The second option is likely noisier, since customer surveys may not accurately represent the customers’ actual business choices, and are also affected by irrelevant things (e.g. cognitive biases). To be clear, I’m not saying the second option is not informative at all; it’s just that whatever information it contains is to a greater degree clouded by measurement noise.
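Here is a minimal sketch of why the noise matters, assuming a Gaussian prior on the true a-vs-b happiness gap and Gaussian measurement noise (all variances invented): the noisier channel shrinks our uncertainty much less.

```python
# Sketch: how measurement noise limits what we learn. Assumes a Gaussian
# prior on the true a-vs-b happiness gap and Gaussian measurement noise;
# all variances are invented.

prior_var = 1.0  # our uncertainty about the true gap before measuring

# For Gaussians: posterior_var = 1 / (1/prior_var + 1/noise_var)
for name, noise_var in [("behavioural analysis (low noise)", 0.2),
                        ("customer survey (high noise)",     2.0)]:
    posterior_var = 1 / (1 / prior_var + 1 / noise_var)
    print(f"{name}: uncertainty {prior_var} -> {posterior_var:.2f}")
```

With these numbers the low-noise analysis cuts our uncertainty to 0.17, while the noisy survey only gets us to 0.67.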
The study indicates that humans have a tendency to choose the second (less efficient) option specifically because it is noisy. When the outcome of an action feels like it could be a little all over the place, we are apparently attracted to that action, because we are wired to interpret uncertainty as information. In some cases, uncertainty is a good proxy for information; in the presence of noise, it is not. In those cases, we need to account for the noise separately before we choose an action based on the uncertainty that remains.
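Extending the same made-up Gaussian numbers shows the trap directly: the noisy survey produces the more unpredictable outcome, yet yields fewer bits of information about the true difference.

```python
# Sketch: felt unpredictability vs. actual information gain, with the
# same invented Gaussian numbers as above. The predicted outcome of a
# measurement has variance prior_var + noise_var, so the noisy survey
# *feels* more uncertain while teaching us less.

import math

prior_var = 1.0

for name, noise_var in [("analysis", 0.2), ("survey", 2.0)]:
    outcome_var = prior_var + noise_var                # how unpredictable it feels
    posterior_var = 1 / (1 / prior_var + 1 / noise_var)
    bits = 0.5 * math.log2(prior_var / posterior_var)  # mutual information, in bits
    print(f"{name}: outcome variance {outcome_var:.1f}, "
          f"information gained {bits:.2f} bits")
```

The survey’s outcome variance (3.0) dwarfs the analysis’s (1.2), but it delivers only 0.29 bits against the analysis’s 1.29.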
As the example I chose indicates, this has practical consequences for product development, and any other situation in which we are acting on imperfect information but have the ability to retrieve more. Choose informative, not noisy.
Positive Testing
Research on expertise shows consistent differences in how experts and non-experts approach difficult situations:
- Experts to a greater degree interact with their environment with the express purpose of gathering information. Non-experts passively consume information from the environment.
- Experts specifically seek out the type of information that would reveal in which ways they are wrong. In contrast, non-experts take new information and try to fit it into their preconceived model of what things are like.
I’m a big fan of invalidating hypotheses. I also sort of have the feeling that most people instead try to confirm their theories. Yet in this study, positive testing (trying to confirm a theory) was a rare method of extracting information, contrary to what I would have believed. This could be because of the simplified test environment, or because of the participants, but either way, it’s interesting to discover that positive testing is not as common as I had thought.
The study also points out that my enthusiasm for hypothesis invalidation might be excessive. Positive testing isn’t always confirmation bias, as it’s often presented. In some cases there is rare but definitive positive proof of a hypothesis, and in those cases positive testing can be the most efficient way of uncovering information.
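As a toy illustration of that point (all probabilities invented): a positive test that rarely fires but proves the hypothesis outright can beat an equally sensitive negative test in expected information gain.

```python
# Toy comparison of positive vs. negative testing; all numbers invented.
# A positive test fires only when hypothesis H is true; a negative test
# fires only when H is false. Firing is definitive either way.

import math

def entropy(p):
    """Entropy in bits of a binary belief with P(H) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

p = 0.3            # prior belief in H
sensitivity = 0.5  # chance the test fires when it can

# Positive test: fires (proving H) with probability p * sensitivity.
fire_pos = p * sensitivity
silent_pos = (p * (1 - sensitivity)) / (1 - fire_pos)  # P(H | no fire)
gain_pos = entropy(p) - (1 - fire_pos) * entropy(silent_pos)

# Negative test: fires (disproving H) with probability (1-p) * sensitivity.
fire_neg = (1 - p) * sensitivity
silent_neg = p / (1 - fire_neg)                        # P(H | no fire)
gain_neg = entropy(p) - (1 - fire_neg) * entropy(silent_neg)

print(f"positive test, expected gain: {gain_pos:.3f} bits")  # ~0.31
print(f"negative test, expected gain: {gain_neg:.3f} bits")  # ~0.23
```

With these numbers the positive test fires less often (15% vs. 35% of the time), but because confirming the less likely hypothesis moves our belief further, it gains more information in expectation.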