Reading Notes: Accelerated Expertise

A while ago I read Accelerated Expertise (Accelerated Expertise: Training for High Proficiency in a Complex World; Hoffman, Ward, Feltovich, et al.; Psychology Press; 2013), which was interesting. If you’re unfamiliar with it, it concisely summarises the available research on training people to do arbitrary difficult things well.

Specifically, there are some things which we are good at teaching people to do, like calculus or playing the piano. We have well-tested syllabi for these types of things. Then there are some things we just don’t know how to teach people, like software engineering, solving crossword puzzles, flying helicopters, and noticing improvised explosive devices in urban environments. Some people get really good at them, and others don’t. If you ask an expert how they do it so well, they will shrug and go, “I don’t know, but it felt right at the time.” These are the types of skills Accelerated Expertise deals with.

Anyway. These are a few of my notes from the book. These points are, to the best of my recollection, paraphrasing what the authors wrote. Most of it was backed by at least somewhat solid research. My personal experiences don’t always agree with this, but it’s still worth keeping in mind.

Notes

  • Just doing your job does not improve your skill at it. You need challenges, the right type of feedback, etc.
  • Feedback needs to go beyond the superficial. It needs to address states of mind: what the assumptions are, what the evidence is, which ideas were considered but discarded, and so on.
  • There are some general characteristics of more difficult problems. These are especially important to call out explicitly, because when problems are on the more difficult end of these dimensions, non-experts tend to make the simplifying assumption that they are not (!):
    • Dynamic is more difficult than static.
    • Continuous is more difficult than discrete.
    • Interactive is more difficult than separable.
    • Simultaneous is more difficult than sequential.
    • Heterogeneous is more difficult than homogeneous.
    • Multiple representations are more difficult than a single representation.
    • Organicism is more difficult than mechanism.
    • Non-linear is more difficult than linear.
  • To help one become an expert, instructional content should present concepts using multiple representations, avoid oversimplifying, tie into context, emphasise knowledge construction from cases, and interconnect with other instructional content.
  • It is important that training also covers conceptual models and abstractions. Giving the learner the right language, so to speak, helps them communicate with the instructor and gives them tools to reflect on their own.
  • Interleaving practice of different subjects delays acquisition but improves retention.
  • Varying the parameters of practice improves outcomes, even compared to constant practice identical to the test situation. (Example: a group that practises throwing bean bags exactly 5 metres onto a target performs worse than the group that practises varying-distance throws – even when the test is to throw bean bags exactly 5 metres onto a target!)
  • Incentives during practice have a negative effect. People focus on getting the rewards or avoiding the punishments instead of challenging themselves and learning.
  • Lies to children are harmful. Simplifying assumptions persist for longer than intended. They get used as knowledge shields to reject evidence of additional knowledge requirements.
  • Teaching procedures instead of general principles is the same lies-to-children problem.
  • Simulations don’t need to be realistic or have any sort of physical fidelity. You don’t need to be fooled into thinking you’re actually performing the task. It is sufficient that your body goes through the motions of performing the task. The same goes for cognitive tasks. It is sufficient that you think the thoughts you would think when performing the task.
  • Simulations should be targeted against specific skills, but presented in-context and run from start to end. It’s fine if people fail at the primary objective, as long as they learn things. In fact, if people don’t fail often, it’s a sign the simulation is too easy.
  • Non-experts seek to confirm their theories. Experts seek to invalidate their hypotheses.
  • People with good situational awareness do two things well:
    • Managing their task load and interruptions; and
    • Actively creating situations to disprove their hypotheses.
  • When training teams, it might actually be beneficial to introduce obstacles to communication into training. This helps them improvise and handle unexpected (non-communication related) obstacles during testing.
  • Team training can improve individual outcomes, as long as the team shares mental models, communicates openly, and its members give each other feedback.

Techniques

The notes above don’t say anything about how to actually train for expertise in complex, difficult tasks. I didn’t make detailed notes about that because I planned on going back to the topic from more angles.

But, to avoid leaving you hanging, here’s the basic idea:

  1. Get access to an existing expert. Ask them to recollect a tricky situation.
  2. Perform cognitive task analysis with the expert, to squeeze out as much information and detail as possible about the situation they are remembering.
  3. Create a simulation based on the information extracted above.
  4. Subject trainees to the simulation. Have the expert provide appropriate feedback – both during performance and in a thorough retrospective afterwards.
  5. Repeat for a large variety of situations.

The best way we know of to teach complex tasks is to subject people to simulations that mimic the real task we want them to get good at. Creating those simulations is the difficult part, because the expert won’t be able to tell you what makes a situation difficult or how to interpret it. That’s where techniques like cognitive task analysis come into play.
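To make the shape of that loop concrete, here is a minimal sketch in Python. Everything in it – the Scenario fields, the toy deploy example, the console-based debrief – is my own hypothetical illustration of the five steps above, not anything prescribed by the book.

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        # One tricky situation extracted from an expert through cognitive
        # task analysis (steps 1-3). All fields here are hypothetical.
        description: str      # the situation, as presented to the trainee
        cues: list[str]       # what the expert noticed at the time
        expert_response: str  # what the expert did, and why

    def train(scenarios: list[Scenario]) -> None:
        # Steps 4-5: run each simulation, let the trainee commit to a
        # decision (failure is fine), then debrief against the expert's
        # account of the same situation.
        for scenario in scenarios:
            print(scenario.description)
            trainee_response = input("What do you do? ")
            print("You said:", trainee_response)
            print("Cues the expert picked up on:", "; ".join(scenario.cues))
            print("What the expert did:", scenario.expert_response)

    train([
        Scenario(
            description="Latency doubled right after a routine deploy.",
            cues=["timing coincides with the deploy", "only one service affected"],
            expert_response="Roll back first, diagnose second.",
        ),
    ])

The point is only the loop: simulation, attempted response, expert feedback, then repeat across a large variety of scenarios.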
